AI world models need to understand cause and effect

They should be able to map how reality works, not just how it looks


Human intellect rests on three pillars: seeing (observing the world), doing (intervening in it) and imagining (simulating what might happen under different choices). Right now, artificial intelligence inhabits only the first of these pillars.

Expanding existing frontier AI models will not address this problem. The breakthrough that set off today’s frenzy was the transformer architecture, developed at Google and scaled up into large language models trained on much of the public internet and used to write text and code. Then came agents that stitch these models together into automated workflows. Now the focus is on “world models”, which try to capture the physical environment from vast streams of video and other inputs. 

World models are an important evolution from LLMs. The so-called spatial intelligence they provide is being used to develop technology for driverless cars and robotic factory workers. The trouble is that systems built in this way do not really understand the world they record. Instead, they mimic it one 3D object at a time. They conflate coincidence with cause. They can act without being able to explain why, optimise without grasping what happens if conditions change and hallucinate with great confidence. In domains such as healthcare, energy grids or, worse, autonomous weapons, the repercussions may be not just embarrassing but lethal.

Decades ago, Alan Turing argued that a truly intelligent machine should “learn from experience”. It should not passively observe but act. It should learn from the consequences of its actions and ask “what if?” Training a machine to do this will require something new — a “causal world model” that acts as an internal map of how a slice of reality works, not just how it looks.

Over the past 20 years, a small but determined group of scientists has been building a mathematical language of cause and effect, and with it a solid theoretical foundation for such models. The work, popularised in Judea Pearl’s The Book of Why, explains how to distinguish correlation from causation, formalise interventions and generate counterfactuals — in other words, the worlds that might have been. 
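A rough sketch in Python (the variables and numbers below are purely illustrative, not drawn from this article or any real system) shows why the distinction matters: when a hidden common cause drives both a lever X and an outcome Y, observing that X happens to be high tells a very different story from forcing X to be high.

```python
# A toy structural causal model: a hidden confounder Z drives both X and Y,
# and X also has a small direct causal effect on Y. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(do_x=None):
    """Sample from the model; if do_x is given, force X to that value (an intervention)."""
    z = rng.normal(size=n)                                  # hidden common cause
    if do_x is None:
        x = z + rng.normal(scale=0.5, size=n)               # X "listens to" Z
    else:
        x = np.full(n, do_x)                                 # do(X = do_x): the Z -> X link is cut
    y = 0.3 * x + 2.0 * z + rng.normal(scale=0.5, size=n)   # Y depends on both X and Z
    return x, y

# "Seeing": among samples where X happens to sit near 1, Y looks strongly elevated,
# mostly because those samples also tend to have high Z.
x_obs, y_obs = simulate()
print("E[Y | X near 1] (observed):", round(y_obs[np.abs(x_obs - 1) < 0.1].mean(), 2))

# "Doing": forcing X to 1 leaves only the direct 0.3 effect of X on Y.
_, y_do = simulate(do_x=1.0)
print("E[Y | do(X = 1)]:          ", round(y_do.mean(), 2))
```

In this toy setting the observed average comes out several times larger than the interventional one, purely because of the confounder; a model trained only on correlations would read that gap as a causal effect.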

Current AI models focus on correlations between variables. This works well in predictive situations where pattern recognition can be used. But causal world models are needed if we are to address the problems that matter most this century. Planning climate adaptation scenarios in megacities like São Paulo, where I live, requires asking “what if” questions about extreme events that have not yet occurred (and may not).

Genuine scientific discovery is not possible without models that can generalise, follow the causal rules of a system, generate realistic scenarios and go beyond simply extrapolating and automating existing processes. Take complex biological networks, for example. How can we discover the novel bioproducts needed to accelerate the energy transition or tackle complex diseases? Designing drought-resilient crops is not a matter of finding patterns in past yields; it requires an understanding of the ways in which the soil microbiome, plant genetics, water, nutrients, pests, diseases and weather interact — and an understanding of what drives what, when and where.

Emerging markets, which are both vulnerable and rich in challenges that provide useful experimental data, should be at the forefront of this. They are ideally suited as innovation test beds, partners and co-developers.

The world faces a choice. It can continue racing to build hyperscale infrastructure to support existing AI models, or it can direct some of that attention towards developing models that grasp how the world really works and how it can be deliberately changed and controlled for the better.

Developing causal models may have other unexpected upsides. The brute-force approach of testing trillions of possible correlations and weighting them by trial and error consumes data, energy and money, and generates emissions. But causal models should be parsimonious by design. Training and inference can be orders of magnitude more efficient because the machine would not be blindly searching; it would be probing along meaningful lines of causality, constrained by the laws of physics that govern the real world.

From São Paulo to Nairobi to Mumbai, the costs of delay are counted in failed harvests and avoidable emissions. Without a revolution in how machines reason about cause and effect, the current AI boom risks ending in disappointment.


Article published on FT.com, March 16th, 2026

Written by Juan Carlos Castilla-Rubio, chair of SpaceTime Labs

