Last year, millions of Americans discovered something startling: a machine could write an essay, draft a sermon, summarize a classified-style briefing, or compose a college paper in seconds. It felt like science fiction. But what most people do not realize is this: the system doing that work is only one piece of a much larger technological shift. And the next step may be far more consequential.
Today’s most visible AI systems are large language models, often called LLMs. These are the engines behind tools developed by companies such as OpenAI. At their core, LLMs are extraordinary pattern predictors. They are trained on vast amounts of text and learn to predict the next word in a sequence. When you ask a question, the system calculates probabilities and produces the most statistically likely continuation.
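The mechanism described above can be made concrete with a toy sketch. This is not a real language model — the words and probabilities below are invented for illustration — but it shows the core move: given a context, pick the statistically most likely next word.

```python
# Toy illustration (NOT a real LLM): hand-assigned probabilities for the
# next word after the context "the cat sat on the".
next_word_probs = {
    "mat": 0.55,
    "floor": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

def predict_next(probs):
    """Return the most statistically likely continuation."""
    return max(probs, key=probs.get)

print(predict_next(next_word_probs))  # → mat
```

A real model learns billions of such probabilities from text rather than having them written by hand, but the selection step at the end is the same idea.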
That simple mechanism—predicting the next word—turns out to be remarkably powerful. It allows AI to write smoothly, translate languages, summarize reports, and even generate computer code. It feels intelligent because it mirrors how humans express thought.
But here is the key: LLMs are optimized for plausibility, not reality. They generate what sounds right. They do not inherently test whether a plan works in the physical world or whether constraints make an idea impossible. They are masters of language, not necessarily masters of logistics, engineering, or strategy.
That is where a different kind of AI enters the picture: energy-based models.
The future of AI: language and energy
Unlike language models, energy-based systems do not predict words one at a time. Instead, they evaluate entire possibilities and assign them a score. You can imagine a landscape of hills and valleys. Each possible solution sits somewhere on that terrain. The system searches for the lowest valley—the most stable or optimal solution given the rules.
These models are well suited for problems involving constraints: routing military supplies, managing power grids, coordinating robotics, modeling physics, or allocating limited resources. They ask not “What comes next?” but “Which overall solution best fits the requirements?”
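The hills-and-valleys picture can also be sketched in a few lines. The routes, numbers, and penalty terms below are hypothetical, but they show the shape of the idea: every candidate solution gets an energy score, constraint violations push the score up sharply, and the system picks the lowest valley.

```python
# Toy energy-based evaluation (illustrative only): each candidate supply
# route gets an energy score; lower energy = a better fit to the rules.
FUEL_LIMIT = 100  # hypothetical hard constraint

def energy(route):
    # Energy = distance, plus a steep penalty for exceeding the fuel limit,
    # so infeasible routes sit high on the landscape.
    over_limit = max(0, route["fuel"] - FUEL_LIMIT)
    return route["distance"] + 1000 * over_limit

candidates = [
    {"name": "coastal",  "distance": 120, "fuel": 90},
    {"name": "mountain", "distance": 80,  "fuel": 130},  # shortest, but breaks the fuel limit
    {"name": "inland",   "distance": 100, "fuel": 95},
]

best = min(candidates, key=energy)
print(best["name"])  # → inland
```

Note what happens: the mountain route is the shortest, but the constraint penalty lifts it out of contention, and the system settles on the lowest feasible valley instead.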
Individually, each approach has strengths and weaknesses. Language models are flexible and conversational but can drift from hard constraints. Energy-based systems are rigorous and structured but not naturally conversational or adaptable in open-ended dialogue.
Now consider what happens if the two are combined.
An LLM could interpret a complex problem in natural language, explore options, and communicate with human users. An energy-based system could evaluate those options under real-world constraints—time, fuel, cost, risk, physics—and refine the outcome toward something that actually works.
One system generates possibilities. The other filters and optimizes them.
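That division of labor can be sketched as a simple loop. Everything here is a stand-in — the candidate plans, costs, and deadline are invented — but the structure mirrors the pairing described above: one component proposes, the other scores against hard constraints, and the lowest-energy option survives.

```python
def generate_candidates():
    # Stand-in for the LLM side: proposing plans described in language.
    # These plans and numbers are hypothetical.
    return [
        {"plan": "ship by air",  "cost": 90, "hours": 10},
        {"plan": "ship by sea",  "cost": 30, "hours": 200},
        {"plan": "ship by rail", "cost": 50, "hours": 60},
    ]

def energy(plan, deadline_hours=72):
    # Stand-in for the energy-based side: cost, plus a heavy penalty
    # for every hour past the deadline.
    hours_late = max(0, plan["hours"] - deadline_hours)
    return plan["cost"] + 10 * hours_late

best = min(generate_candidates(), key=energy)
print(best["plan"])  # → ship by rail
```

Sea freight is cheapest on paper, but it blows the deadline; air meets the deadline at a premium; rail balances both and wins. Generation alone would not catch that; scoring alone would have nothing to score.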
That pairing begins to look different from what we have today. Add memory, visual perception, long-term planning modules, and reinforcement learning from experience, and you move closer to something that appears broadly capable across domains. It could draft a strategic plan, test its feasibility, revise it under constraints, and explain the reasoning in plain English.
This is where artificial general intelligence—AGI—enters the conversation.
AGI is often described as AI that can perform a wide range of intellectual tasks at or above human level. We are not there. But hybrid systems that combine language prediction with structured optimization may represent the next major step in that direction. They would not merely talk intelligently. They would integrate communication with constraint-based reasoning.
Would that mean machines are conscious? No. These systems would still rely on mathematics and statistical learning. They would not possess moral awareness, self-reflection, or human experience. But they could simulate broad competence across fields in ways that increasingly resemble general intelligence.
For policymakers, military planners, business leaders, and educators, this distinction matters. The current generation of AI writes impressively. The next generation may also optimize and execute complex, real-world decisions at scale.
The combination of language models and energy-based systems may not be the final destination. But it could mark the next major inflection point—moving AI from fluent assistant to structured decision partner.
And that is when the conversation must shift from novelty to governance, from fascination to responsibility.
***
Notice: This column is printed with permission. Opinion pieces published by AFN.net are the sole responsibility of the article's author(s), or of the person(s) or organization(s) quoted therein, and do not necessarily represent those of the staff or management of, or advertisers who support the American Family News Network, AFN.net, our parent organization or its other affiliates.