LLMs, huh, what are they good for?

Absolutely nothing? Or some things.

Over the last year or so, I have been expressing the view that LLMs are unsuited for lots of things. And that remains true. But I have also been working through their improvements and uses, and I have found some things that LLMs are useful for.

Use cases

The technology of large language models is often mistaken for some form of intelligence. But the term "AI" is misleading here: the technology is not a substitute for intelligence. The reality behind the hype of AI has always been automation. Just as automating factory processes can be a great improvement, especially for repetitive tasks, so can the many forms of AI developed over the years. Here are some examples:

Putting these things together

Just like any other programming approach, you build up programs from an assembly of parts. For example, you can provide a list of facts, tell the LLM to parse human statements into a list of the claims made in those statements, compare the claims to the facts, and produce a table listing which claims are true, which are false, and which cannot be determined from the facts, along with a factual rebuttal to each false claim. It will do this pretty well, quickly, inexpensively, and reliably.
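As a concrete illustration, here is a minimal sketch of that pipeline in Python. The call_llm() wrapper, the prompts, the fact list, and the JSON formats are hypothetical placeholders rather than any particular provider's API; the point is the assembly of parts.

    import json

    FACTS = [
        "The product launched in 2023.",
        "The product supports two languages.",
    ]

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around whatever LLM API you use."""
        raise NotImplementedError("plug in your provider here")

    def extract_claims(statement: str) -> list[str]:
        # Step 1: parse a human statement into a list of discrete claims.
        prompt = ("List each factual claim in the following statement "
                  "as a JSON array of strings:\n" + statement)
        return json.loads(call_llm(prompt))

    def check_claim(claim: str) -> dict:
        # Step 2: compare one claim against the fact list.
        prompt = ("Facts:\n" + "\n".join(FACTS) + "\n\n"
                  "Claim: " + claim + "\n"
                  'Reply with JSON: {"verdict": "true"|"false"|"undetermined", '
                  '"rebuttal": "..."} citing the relevant fact for false claims.')
        return {"claim": claim, **json.loads(call_llm(prompt))}

    def fact_check(statement: str) -> list[dict]:
        # Step 3: assemble the table of verdicts and rebuttals.
        return [check_claim(c) for c in extract_claims(statement)]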

As in programming, where you can express things many different ways to produce similar results, how you phrase a request to an LLM leads to different, even if similar, results. In traditional programming, errors accumulate: 1/3 expressed in binary, for example, is always inexact, rounded high or low in the last bit. As you do more and more with it, the errors accumulate unless controlled, until the answers come out completely wrong: errors of kind rather than small errors of amount.
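A short Python demonstration of that accumulation, comparing ordinary floating-point addition of 1/3 against exact rational arithmetic (the specific counts are illustrative):

    from fractions import Fraction

    third = 1.0 / 3.0                      # 1/3 cannot be represented exactly in binary
    total = 0.0
    for i in range(1, 10_000_001):
        total += third                     # small rounding errors accumulate
        if i in (10, 10_000, 10_000_000):
            exact = Fraction(i, 3)         # exact value for comparison
            print(f"{i:>10,} additions: error = {total - float(exact):+.3e}")

The error grows with the number of operations; unless that growth is controlled, the result eventually differs by enough to matter.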

Unlike traditional programs, the same input and program often do not produce the same outputs in LLMs. Part of the problem of building up more complex use cases is constraining the expansion of outcomes to desired subsets. Left uncontrolled, the expansion of outputs leads to unpredictable results that go far astray, as the overall 'program by prompt' method expands minor errors into huge ones in a less predictable manner. And because the pivot points where errors of amount become errors of kind are unclear, some programmatic process is most effective at constraining results: reforming them, rejecting a step and trying again, and other similar methods.
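Here is a minimal sketch of that kind of programmatic constraint, again assuming a hypothetical call_llm() wrapper and an illustrative JSON verdict format: traditional code validates each LLM result, and a result that falls outside the desired subset is rejected and the step retried.

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around whatever LLM API you use."""
        raise NotImplementedError("plug in your provider here")

    def validate(raw: str) -> dict | None:
        # Traditional code between LLM steps: accept only well-formed results.
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            return None
        if result.get("verdict") not in {"true", "false", "undetermined"}:
            return None                    # outside the desired subset: reject
        return result

    def constrained_step(prompt: str, max_retries: int = 3) -> dict:
        # Reject-and-retry until the output lands in the desired subset.
        for attempt in range(max_retries):
            candidate = validate(call_llm(prompt))
            if candidate is not None:
                return candidate
        raise RuntimeError("LLM step failed validation after retries")

The particular schema does not matter; the structure does: the LLM proposes a result, deterministic code checks it, and retries keep small errors from becoming errors of kind.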

Thus today, the most successful attempts I have seen use LLMs for some steps and traditional programming between those steps. This mix, and how to implement it, is fundamental to success in this space for business purposes.

Rules of thumb

Here are a few rules to thrive by in the LLM space:

Conclusions

LLMs are good for a lot of things, if properly architected within a trust architecture suited to the context of their use.

More information?

If you want more details, join us on our monthly free advisory call, usually at 0900 Pacific time on the 1st Thursday of the month:

Advisory Session

and we will be happy to answer any of your questions.

In summary

LLMs are good for lots of things, if you know how to use them well for those purposes.

Copyright(c) Fred Cohen, 2025 - All Rights Reserved