

Where is the AGI Roadmap?

{{"estimatedReadingTime" | translate:({minutes: qctrl.question.estimateReadingTime()})}}
AI Progress Essay Contest

This essay was submitted to the AI Progress Essay Contest, an initiative that focused on the timing and impact of transformative artificial intelligence. You can read the results of the contest and the winning essays here.


This essay addresses, directly or indirectly, four of the Metaculus forecasting questions.


If you were to scan the pages of, say, Scientific American or The Economist, you would undoubtedly come across references to new applications of AI and machine learning. By my reckoning, expect around five articles per week. You might come away with the impression that AI is one of the biggest things in technology right now, and you'd be right. You might also think that this is the prelude to even smarter AI – AGI. I believe you would also be correct in this prediction. But not necessarily for the right reasons.

First, let's take the forecast for IT services as a percentage of GDP in 2030.


The community prediction is 5%. This might well be an underestimate. What we are seeing in all those AI / machine learning headlines is a massive expansion of the capabilities of software. Take this example from The Economist (May 30th 2020):

"The DoD have, for the first time, deployed artificial intelligence to determine when a thorough check up of a Black Hawk helicopter is in order. The algorithm, trained on maintenance records and sensor data, calculates how long the aircraft can fly safely in, say, a desert, before its engines should be cleaned to prevent sand melting into glass that could cause them to fail."

Here are software algorithms that can take in potentially hundreds of relevant data inputs and make predictions based on all that complexity. This is a new paradigm [1] in software; call it Software 2.0. In a world that is complex and noisy, AI that uses machine learning opens up huge new areas for digital transformation. What can be automated will be automated. This is also why I believe that the first AGI will extensively use machine learning [2].
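To make the Software 2.0 idea concrete, here is a minimal sketch of the kind of predictive-maintenance model described above. Everything in it is hypothetical: the feature names, the synthetic data, and the choice of a gradient-boosted regressor from scikit-learn are my assumptions for illustration, not details of the actual DoD system.

```python
# Illustrative predictive-maintenance sketch in the "Software 2.0" style:
# rather than hand-coding rules, a model learns the mapping from sensor
# readings to remaining safe flight hours. All features and data are
# synthetic placeholders, not the real DoD system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical inputs (a real system could have hundreds of these).
X = np.column_stack([
    rng.uniform(0, 500, n),   # engine hours since last cleaning
    rng.uniform(0, 50, n),    # average airborne dust density
    rng.uniform(-10, 45, n),  # mean operating temperature (deg C)
    rng.integers(0, 20, n),   # prior maintenance events
])

# Synthetic target: remaining safe flight hours plus noise. In practice
# this label would come from maintenance records, not a formula.
y = 300 - 0.4 * X[:, 0] - 2.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")

# Predict for one aircraft operating in a dusty desert environment.
print("Predicted safe flight hours:", model.predict([[350.0, 40.0, 38.0, 5]])[0])
```

The point is not the particular model but the workflow: the 'program' that maps noisy, high-dimensional inputs to a decision is learned from data rather than written by hand.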


It is also worth considering the 'no code' and 'low code' initiatives that make using cloud-based machine learning little more than drag and drop. As this happens, the percentage of office workers who 'code' will likely increase rapidly, further suggesting that the community view of the value of IT services in 2030 could be an underestimate.
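The same compression is visible in code itself: high-level libraries already reduce a full train-and-predict workflow to a handful of lines, which is essentially what the drag-and-drop cloud tools wrap in a GUI. A hedged sketch, in which the file and column names are hypothetical placeholders:

```python
# Illustrative only: a complete train-and-predict workflow in a few
# lines, the kind of thing "low code" tools put behind a GUI.
# "sales.csv" and its "revenue" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("sales.csv")
X, y = df.drop(columns=["revenue"]), df["revenue"]
model = RandomForestRegressor(random_state=0).fit(X, y)
df["predicted_revenue"] = model.predict(X)
```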

So machine learning AI will be very prevalent in the 2030 timeframe, and with this progress will come substantial revenues to the tech firms that win out and super-sized R&D budgets to keep revenues pouring in. But it will be painfully obvious to many that this AI is very narrow in how it can be used: hypersmart but fragile algorithms that are, essentially, dumb. How can AI become smarter, more logical, more human in its way of thinking? How can you add 'System 2'-type thinking to AI? [3] This is the problem that will occupy us in the 2030s, as there is currently no roadmap from narrow to general AI.

So does this mean that predictions for AGI and human-machine intelligence parity are off the mark?


I believe the community prediction for AGI is correct, but I have reservations about the predictions around human-machine intelligence. Whilst we do not have a roadmap to AGI, we do at least understand the basics of the problem.

We have, as an example of a general intelligence, our own brains, and I anticipate significant strides in understanding how the brain works over the coming decades (for some up-to-date reading on the subject, see the recommended reading below). We are unlocking how our brains work and figuring out more and more about exactly what intelligence is. A personal prediction is that causality will become a hot topic very soon.

But let's take a deeper look at the prediction relating to the first AGI. Once we start to unlock the secrets of logical, 'System 2'-type intelligence, narrow AI will become smarter – it will become AGI. However, I expect this will be gradual rather than a step change from current AI to AGI. Consider the case of human workers. Humans all possess general intelligence, but we all have different skills (through training, education, etc.) as well as more innate abilities. It will prove the same for AI. Specialisms will develop to optimize for different AI tasks. Some might use NLP to assist the creative industries, such as in music creation or the creation of (partial) TV or film scripts. Others might be optimized to discover and assess new materials and drugs. Other types might crunch through data to help manage the complexity of a smart city. Others will simply act as tools to improve the productivity of knowledge workers – think of 'Clippy', Microsoft's intelligent assistant, only this time the assistant actually does aid productivity. I am confident this will happen in the timeframes given in the community predictions.

By my definition above, AGI will happen ahead of human-machine intelligence parity, and I believe by some way. Until we better understand the 'hard problem' of consciousness and the role emotions play in our intelligence, we will only have AGI that is smart in the sense that it is not dumb. But given that this AGI will drive the development of ever more powerful technology, it will be essential to reach a thorough understanding of human-level intelligence to ensure we can safely manage any emergent super-AGI.

 

Recommended Reading

Some books that I found informative but didn’t reference directly in the above essay:

Thomas W. Malone, Superminds

Mark Humphries, The Spike

Matthew Cobb, The Idea of the Brain

Pedro Domingos, The Master Algorithm

 

And some informative articles:

Will Cappelli, Why computational neuroscience and AI will converge (link)

DeepMind (various), Reinforcement Learning (link)

Javier Ideami, Journey to the centre of the neuron (link)

Towards the end of deep learning (link)

 

[1] I usually hate hyperbole, but in this case I think it's justified.

[2] Note this is distinct from the Metaculus question about AGI being based on machine learning.

[3] ‘Thinking, Fast and Slow’, (2011) by Daniel Kahneman

