AI, Entropy, and the Order of Knowledge
Artificial intelligence will greatly enhance efficiency. It operates with speed and scale beyond human capacity, applying computational brute force to problems that would otherwise take eons to resolve. By running trillions of simulations and mapping vast outcome spaces, AI can assign probabilities and correlations with an objectivity and endurance that human cognition cannot match. From this capacity will emerge new materials, processes, business models, and solutions, not through inspiration, but through exhaustive exploration.
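A minimal sketch of what this brute-force mapping looks like in code, assuming an invented toy "design space" with made-up parameters and thresholds: run many cheap simulations and read probabilities straight off the outcome counts.

```python
import random
from collections import Counter

def simulate_design(rng: random.Random) -> str:
    """One hypothetical experiment: draw two made-up design parameters
    and classify the outcome. Stands in for any expensive simulation
    an AI system might run at scale."""
    strength = rng.gauss(1.0, 0.3)   # assumed material property
    cost = rng.uniform(0.5, 2.0)     # assumed unit cost
    if strength > 1.2 and cost < 1.0:
        return "breakthrough"
    if strength > 0.8:
        return "viable"
    return "failure"

def map_outcome_space(n_trials: int, seed: int = 0) -> dict[str, float]:
    """Brute-force exploration: run many trials and return the
    empirical probability of each outcome class."""
    rng = random.Random(seed)
    counts = Counter(simulate_design(rng) for _ in range(n_trials))
    return {k: v / n_trials for k, v in counts.items()}

print(map_outcome_space(1_000_000))
# roughly {'viable': 0.66, 'failure': 0.25, 'breakthrough': 0.08}
```

No single trial is insightful; the value lies entirely in the volume of trials and the shape of the resulting distribution, which is the point of the paragraph above.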
Yet brute force has a cost. Though AI may accelerate discovery, it may also diminish diversity. By organizing and filtering information, it reduces informational entropy. In the short term, this produces clarity and coherence; over time, it risks narrowing the range of perspectives upon which collective intelligence depends.
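The entropy claim can be made concrete with Shannon entropy, H = -Σ p·log2(p). A sketch with invented distributions: ten equally weighted perspectives carry the maximum 3.32 bits; once filtering concentrates attention on one consensus view, the same field carries far less.

```python
import math

def shannon_entropy(probs: list[float]) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Ten perspectives, equally weighted: maximum diversity.
diverse = [0.1] * 10
# After filtering/ranking, one consensus view dominates (invented numbers).
filtered = [0.82] + [0.02] * 9

print(f"diverse field:  H = {shannon_entropy(diverse):.2f} bits")   # 3.32 bits
print(f"filtered field: H = {shannon_entropy(filtered):.2f} bits")  # ~1.25 bits
```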
History shows that concentration of information and resources often precedes systemic failure. The Soviet experiment in central planning collapsed because it could not allocate resources efficiently enough to meet basic needs. Contemporary capitalism skews in the opposite direction: inequality channels capital toward luxuries and passion projects while neglecting widely needed goods such as education, healthcare, eldercare, and environmental stability. Even when the bias is unintentional, inequality degrades informational efficiency: the economy's ability to use distributed knowledge to allocate resources optimally.
Before the internet, information transmission was slow and search costs were high. The internet improved access and speed; search engines improved precision. But each advance introduced a trade-off: as search efficiency increased, systems favored convenience and speed over diversity and depth. Artificial intelligence represents a further acceleration of this substitution.
A large language model, such as a GPT, is a statistical system trained on immense corpora of human language to generate plausible, coherent responses. It excels at synthesis, but its very strength lies in averaging, producing the most likely continuation of thought. As AI systems become embedded in decision-making, their outputs become inputs for other agents, reinforcing consensus and compressing variance. The informational field becomes more ordered, but less exploratory.
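A toy simulation of that feedback loop, under assumed numbers: each round, agents resample with a bias toward the currently most likely answer (the averaging step), and the sharpened distribution becomes the next round's input. Entropy falls monotonically; the sharpening rule is an illustration, not a model of any real system.

```python
import math

def entropy_bits(p: list[float]) -> float:
    return -sum(x * math.log2(x) for x in p if x > 0)

def sharpen(p: list[float], beta: float = 0.3) -> list[float]:
    """One feedback round: agents favor the currently likely answers,
    modeled here as raising the distribution to a power > 1 and
    renormalizing (a low-temperature resampling step)."""
    w = [x ** (1 + beta) for x in p]
    z = sum(w)
    return [x / z for x in w]

# Five candidate answers with a mild initial preference (invented numbers).
p = [0.30, 0.25, 0.20, 0.15, 0.10]
for round_ in range(8):
    print(f"round {round_}: H = {entropy_bits(p):.3f} bits, "
          f"top answer p = {p[0]:.2f}")
    p = sharpen(p)
```

Each pass makes the field more ordered and less exploratory, which is the compression of variance described above.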
This leads to an intriguing question: what is the entropy of an AI system?
Entropy analysis depends on how one defines the system's boundaries. If we consider only the informational domain, excluding the physical infrastructure, AI appears to reduce entropy. It organizes data, filters noise, and imposes order. Yet when viewed as a complete system, including data centers, networks, and power grids, the Second Law of Thermodynamics still applies: the total entropy of an isolated system cannot decrease.
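In symbols, with the boundary drawn around the full system (the split into an informational and an environmental term is a bookkeeping choice, not a named law):

```latex
\Delta S_{\text{total}}
  = \underbrace{\Delta S_{\text{info}}}_{<\,0 \text{ (data ordered)}}
  + \underbrace{\Delta S_{\text{env}}}_{>\,0 \text{ (heat, materials)}}
  \;\ge\; 0
\quad\Longrightarrow\quad
\Delta S_{\text{env}} \;\ge\; -\,\Delta S_{\text{info}}.
```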
If information is inseparable from its physical substrate, if computation is physical, then reductions in informational entropy must be offset by increases in physical entropy. In other words, every increment of informational order produced by a GPT requires a corresponding increase in heat, energy consumption, and material disorder elsewhere. More order in information implies more chaos in energy.
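The quantitative floor here is Landauer's principle: erasing one bit of information dissipates at least k_B·T·ln 2 of heat. A worked number, taking 1 GB at room temperature as an arbitrary example:

```latex
Q_{\min} = N \, k_B T \ln 2,
\qquad
N = 8\times 10^{9}\ \text{bits (1 GB)},\; T = 300\ \text{K}:
\quad
Q_{\min} \approx 8\times 10^{9} \times 1.38\times 10^{-23} \times 300 \times 0.693
\approx 2.3\times 10^{-11}\ \text{J}.
```

The bound itself is minuscule; practical hardware dissipates many orders of magnitude more, and that gap is exactly the headroom that better design can claim.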
This realization reframes the energy problem of AI as not merely technical, but thermodynamic. Solutions lie not only in renewable energy or cooling systems, but also in informational design — in how models compress, store, and recall data within their context windows. Efficient representation of knowledge directly reduces the physical entropy generated in maintaining it.
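A sketch of that last point, with a redundant stand-in corpus: compression reduces the number of bits that must be stored, moved, and eventually erased, and with them the Landauer floor computed above.

```python
import math
import zlib

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed operating temperature, K

def landauer_min_joules(n_bits: int, temp_k: float = T) -> float:
    """Minimum heat to erase n_bits, per Landauer's principle."""
    return n_bits * K_B * temp_k * math.log(2)

# Highly redundant stand-in for a verbose knowledge representation.
raw = ("The model restates the same consensus view. " * 2000).encode()
compressed = zlib.compress(raw, level=9)

for label, data in [("raw", raw), ("compressed", compressed)]:
    bits = len(data) * 8
    print(f"{label:>10}: {bits:>8} bits, "
          f"Landauer floor ~ {landauer_min_joules(bits):.2e} J")
```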
When quantum computing integrates with AI, these relationships will deepen. Concepts such as physical–informational equivalence and quantum thermodynamic entropy will shift from theory to engineering. Yet even quantum systems cannot transcend the fundamental constraints of energy, entropy, and information. The same principles that govern stars will govern algorithms.
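For reference, the quantum counterpart of the Shannon measure used above is the von Neumann entropy of a density matrix ρ, which reduces to the Shannon form when ρ is diagonal:

```latex
S(\rho) = -\,\mathrm{Tr}\!\left(\rho \ln \rho\right)
        = -\sum_i \lambda_i \ln \lambda_i,
\qquad \lambda_i \ \text{the eigenvalues of } \rho .
```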
There will be limits to what quantum-accelerated AI can achieve, limits that investors and engineers alike should acknowledge before the next wave of over-optimism.