A Yann LeCun–Linked Startup Charts a New Path to AGI

If you ask Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the AI researcher and luminary has challenged the orthodox view that large language models (LLMs) will take us to artificial general intelligence (AGI), the threshold at which computers match or exceed human intelligence. Nearly everyone, he said in a recent interview, has been swept up by LLMs.
On January 21, San Francisco-based startup Logical Intelligence named LeCun to its board of directors. Building on a theory LeCun conceived two decades ago, the startup claims to have developed a different form of AI, one better equipped to learn, reason, and self-correct.
Logical Intelligence has developed what it calls an energy-based reasoning model (EBM). While LLMs predict the most likely next word in a sequence, EBMs take in a set of constraints (the rules of sudoku, for example) and perform a task within those limits. This approach is meant to eliminate errors and to require far less computation, since there is less trial and error.
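The details of Kona's architecture are not public, but the core idea behind energy-based models can be illustrated with a toy example: define an "energy" function that scores how badly a candidate answer violates the task's constraints, then search for the answer with the lowest energy. The sketch below (an illustration, not Logical Intelligence's actual method) scores a sudoku grid by counting constraint violations; a valid solution has energy zero.

```python
# Illustrative sketch only: a toy "energy" for sudoku in the spirit of
# energy-based models. Lower energy = fewer constraint violations;
# a correct solution scores exactly 0.

def energy(grid):
    """Count constraint violations in a 9x9 sudoku grid (0 means valid)."""
    violations = 0
    for i in range(9):
        row = grid[i]
        col = [grid[r][i] for r in range(9)]
        violations += 9 - len(set(row))  # duplicates within a row
        violations += 9 - len(set(col))  # duplicates within a column
    for br in range(0, 9, 3):            # top-left corner of each 3x3 box
        for bc in range(0, 9, 3):
            box = [grid[r][c]
                   for r in range(br, br + 3)
                   for c in range(bc, bc + 3)]
            violations += 9 - len(set(box))  # duplicates within a box
    return violations
```

An EBM-style solver would treat solving the puzzle as minimizing this energy over candidate grids, rather than generating digits one at a time the way an LLM generates tokens.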
The startup’s first model, Kona 1.0, can solve sudoku puzzles several times faster than the world’s leading LLMs, despite running on a single Nvidia H100 GPU, according to founder and CEO Eve Bodnia in an interview with WIRED. (In this test, the LLMs were not allowed to write code that would let them brute-force the puzzle.)
Logical Intelligence claims to be the first company to build a working EBM, an idea that until now existed only in academic research. The goal is for Kona to tackle thorny problems like optimizing energy grids or automating sophisticated manufacturing processes, contexts with no tolerance for error. “None of these tasks are associated with language. It’s anything but language,” says Bodnia.
Bodnia expects Logical Intelligence to work closely with AMI Labs, a Paris-based startup recently launched by LeCun, which is developing yet another form of AI, a so-called world model, intended to understand the physical world, maintain persistent memory, and anticipate the consequences of its actions. According to Bodnia, the path to AGI runs through layering these different types of AI: LLMs will interface with humans in natural language, EBMs will take on reasoning tasks, and world models will help robots act in 3D space.
Bodnia spoke to WIRED via video conference from her San Francisco office this week. The following interview has been edited for length and clarity.
WIRED: I have to ask about Yann. Tell me how you met, whether he will lead research at Logical Intelligence, and what his role on the board will entail.
Bodnia: Yann has deep academic experience as a professor at New York University, but he has also been exposed to industry, through Meta and other collaborators, for many, many years. He has seen both worlds.
For us, he is the one true expert on energy-based models and the different architectures associated with them. When we started working on this EBM, he was the only person I could talk to. He helps our technical team navigate in certain directions. He has been very, very involved. Without Yann, I can’t imagine us moving this quickly.
Yann speaks openly about the potential limitations of LLMs and about which model architectures are most likely to advance AI research. Where do you stand?
LLMs are a big guessing game. That’s why you need so much computation. You take a neural network, feed it pretty much all the garbage on the internet, and try to teach it how people communicate with each other.
When you speak, your language sounds intelligent to me, but not because of the language itself. Language is a manifestation of everything in your brain. My reasoning happens in a kind of abstract space that I then decode into language. I feel like people are trying to reverse engineer intelligence by imitating intelligence.