
Chemistry, Quantum Computing and Artificial General Intelligence (AGI)

AGI is the dream of many researchers working in the field of AI. AGI is defined as the ability of a machine to think and act as well as, or better than, a human. Today's AI, by contrast, is essentially mathematical calculation with no direct relationship to human thinking. AI can certainly calculate faster than humans and spot data patterns that humans might miss, but it does not think. This article considers the inter-relationship of AGI, chemistry, and quantum computing.

The Mathematical argument against AGI

The mathematical argument against AGI (or "Strong AI") was presented powerfully by the renowned mathematician Sir Roger Penrose in his 1994 book "Shadows of the Mind". He used the power of Gödel's Incompleteness Theorem to demonstrate conclusively, from a purely mathematical perspective, the impossibility of AGI. The argument is long and detailed and will not be replicated here; those interested should read the book.

The Chemical argument against AGI

The primary research tool we have for understanding life on Earth (in any form) is chemistry. Evolution suggests that life began as "a bunch of chemical reactions within ocean scum washed up on a beach" (Richard Dawkins, The Selfish Gene, 1976). Today we understand life through physiology and basic chemistry, e.g., the movement of calcium ions in a brain cell.

We have dozens of different chemical process models for many different aspects of life (cell growth, genetics, digestion, etc.) and, in theory, we might simulate the behaviour of a whole human organism. In practice the problem is far too complex, and we simply do not have the computer processing power to do it. We cannot even fully simulate the life of a single-celled amoeba. Modelling chemical reaction processes requires accurate calculations of molecular energies, and the most accurate models (Full Configuration Interaction) demand computational effort of the order of N^N, where N is the number of atoms. If a methane calculation (5 atoms) takes 1 day on a (very) fast supercomputer, then ethane (8 atoms) takes about 15 years and propane (11 atoms) over 250,000 years on the same machine. For a complex, folded protein with hundreds of atoms, the task is impossible on today's computers. If quantum computers ever achieve real advantage, it is possible that we could speed up such calculations, but it is HIGHLY unlikely that we could ever simulate the chemistry of a complete human organism, its brain, or even a single cell.
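To make the scaling concrete, the short Python sketch below reproduces this back-of-the-envelope arithmetic, taking the one-day methane figure as an assumed baseline and scaling it by N^N. The numbers are illustrative only, not a chemistry benchmark.

```python
# A back-of-the-envelope check of the N^N scaling quoted above.
# Assumptions (taken from the text, purely illustrative): a 5-atom methane
# calculation takes 1 day on a very fast supercomputer, and Full
# Configuration Interaction cost grows roughly as N^N in the atom count N.

METHANE_ATOMS = 5
METHANE_DAYS = 1.0  # assumed baseline

def estimated_days(n_atoms: int) -> float:
    """Scale the methane baseline by the ratio of N^N costs."""
    return METHANE_DAYS * (n_atoms ** n_atoms) / (METHANE_ATOMS ** METHANE_ATOMS)

for name, atoms in [("methane", 5), ("ethane", 8), ("propane", 11)]:
    days = estimated_days(atoms)
    print(f"{name:8s} ({atoms:2d} atoms): ~{days:,.0f} days  (~{days / 365:,.0f} years)")

# Approximate output:
#   methane ( 5 atoms): ~1 days  (~0 years)
#   ethane  ( 8 atoms): ~5,369 days  (~15 years)
#   propane (11 atoms): ~91,299,735 days  (~250,136 years)
```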

Whatever future AGI might be, it will NOT be a complete chemical/mathematical simulation of a human brain and nervous system, but something much simpler. On this chemical argument, AGI will not be human thinking.

Could AGI grow through accelerated evolution?

The September 2019 publication by OpenAI on multi-agent competition in the game "Hide & Seek" raised an interesting question about AI evolution. In this experiment, OpenAI set two teams of AI Agents, "Hiders" and "Seekers", against each other. They played the traditional game of Hide & Seek and were provided with virtual physical objects (rooms, boxes, ramps, walls, etc.) offering hiding places and ways of seeking. The Agents were trained using Reinforcement Learning.
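To give a flavour of what "Agents competing against each other" means in practice, here is a deliberately tiny, purely illustrative Python sketch. It is an assumed toy setup, not OpenAI's implementation (which trained policies at scale in a physics-based 3D world): a single Hider and a single Seeker repeatedly play a stripped-down version of the game over a few hiding spots, each updating its strategy in response to the other.

```python
# Toy illustration of two agents learning by competing (NOT OpenAI's code).
import random

N_SPOTS = 4                  # hypothetical number of hiding places
EPISODES = 50_000
ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

hider_value = [0.0] * N_SPOTS    # Hider's estimate of each spot's worth
seeker_value = [0.0] * N_SPOTS   # Seeker's estimate of searching each spot

def choose(values):
    """Epsilon-greedy choice: mostly the best-looking spot, sometimes random."""
    if random.random() < EPSILON:
        return random.randrange(N_SPOTS)
    return max(range(N_SPOTS), key=lambda spot: values[spot])

for _ in range(EPISODES):
    hide_at = choose(hider_value)
    seek_at = choose(seeker_value)
    found = hide_at == seek_at
    # Zero-sum rewards: the Seeker wins if it searches the right spot.
    hider_reward, seeker_reward = (-1.0, 1.0) if found else (1.0, -1.0)
    hider_value[hide_at] += ALPHA * (hider_reward - hider_value[hide_at])
    seeker_value[seek_at] += ALPHA * (seeker_reward - seeker_value[seek_at])

# Neither side can settle on a fixed strategy, because the opponent keeps
# adapting - the same competitive pressure behind the emergent behaviour in
# the OpenAI experiment, just on a vastly smaller scale.
print("Hider spot values :", [round(v, 2) for v in hider_value])
print("Seeker spot values:", [round(v, 2) for v in seeker_value])
```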

After several hundred million games, the OpenAI researchers were astounded to find that the Agents had developed physically realistic game strategies and techniques totally unexpected by the human researchers. For example, the Seekers learned to jump onto boxes, "surf" them along the floor, and leap over a wall behind which the Hiders were hiding (see video). A comparison was drawn with the process of lifeform evolution and natural selection. Could AI Agents competing with each other evolve at a much faster pace than earthly life-forms? Should we encourage such experiments?

How will we know AGI when we see it?

This is a much tougher question than those posed by the Turing Test or Searle's Chinese Room. How should we judge the capabilities of an AGI against humans?

  • Against one human or against all humans, against children or octogenarians?

  • In terms of mathematical, logical, and linguistic capabilities? (an easy win for AGI)

  • Against the full range of human emotions of love, hate, desire, anger etc.? (NOT so easy for AGI)

  • In terms of having a “zest for life” and the desire to procreate?

An infinite number of tests could be conceived, each a composite of many different factors, and it will be tough to choose. Most likely, AGI will creep up on us like old age or a terminal disease. It will be a continuous evolution of today's simplistic AI, and one day someone will declare, "Wow, we've got AGI". By then it will be too late to do anything about it.

The AGI Myth

The mathematical and chemical arguments against AGI are substantial, and it is highly unlikely that computers (of any type) will ever be able to fully simulate all aspects of human thinking. From this perspective, AGI is a myth. But why would you want to simulate human thinking in order to generate a superior intelligence?

The AGI Reality

The reality today is that AI can undertake numerous specific tasks faster and more reliably than humans. AI will continue to evolve, and humanity will benefit from specific AI capabilities that are better than equivalent human capabilities, for example in cancer tumour analysis, traffic flow management and agricultural productivity management. In these specific cases we will have “AI supremacy” but it will not be widespread AGI. In this regard, AI will be hugely beneficial for society.

In parallel with these specific cases, AI could evolve itself, just as humans evolved. The Hide & Seek example above is a very early illustration of this potential. But this will not be human intelligence; it will be an "Alien Intelligence", and it could evolve much faster than human intelligence did. This is the danger that humanity faces from AGI.
