“People underestimate how complex intelligence is.”
How close are we to Artificial General Intelligence (AGI)? Every breakthrough seems to take us a step closer to that reality. And yet, it still feels a million miles away.
Why are we still so distant from AGI despite the unabated rise in computational hardware? What’s holding us back from programming machines that generalize to multiple domains?
We invited Professor Melanie Mitchell to answer these pressing questions in episode #19 of the DataHack Radio podcast. She is Professor of Computer Science at Portland State University and the author of multiple books on artificial intelligence.
Professor Melanie brings over three decades of experience in teaching and academia to this DataHack Radio episode. It was delightful listening to her thoughts on topics like:
I have summarized the key points discussed in this episode here. Make sure you tune in and listen to the full podcast!
Listen and subscribe to this, and all previous DataHack Radio podcast episodes, on any of the below platforms:
Where did it all start for Professor Melanie Mitchell? How did she become enthralled by the field of computer science?
As she tells us in this episode, her interest was kindled during her undergraduate days by Douglas Hofstadter’s book “Gödel, Escher, Bach: an Eternal Golden Braid”. It’s essentially a book about artificial intelligence, and it inspired Professor Melanie to pursue research in this field.
She contacted Hofstadter himself to pick his brains about certain topics in AI. These conversations carried over into Professor Melanie’s Ph.D. in Computer Science, with Douglas Hofstadter as her thesis advisor. It’s a great example of how persistence and belief in your passion can fuel you to achieve your dreams.
AI was a fairly well-known field of research back in the mid-1980s and early 1990s. Neural Networks were just starting to become popular.
Most of us now think of them as dense networks of layers and neurons, but it took a good while for them to acquire the “deep” moniker. Back then, these neural networks were fairly shallow. There simply wasn’t enough computational power to train any sort of deep neural network!
Professor Melanie holds a Ph.D. in Computer Science from the University of Michigan. Her dissertation centered on the development of Copycat, a program that makes analogies. It’s considered one of the earliest approaches to analogy-making.
You can read more about Copycat and its functionality here. Her research was about attempting to get machines to generalize to new domains. Yes, that means artificial general intelligence – an area of research we are still trying to make headway with in 2019.
Most of the breakthroughs we’ve seen in artificial intelligence have been thanks to improvements in computing and the availability of large datasets, rather than any mind-blowing insights. For example, deep convolutional neural networks (CNNs), a raging trend these days, were invented back in the 1980s!
These early CNNs were used on problems like handwritten digit recognition, but only to a very limited degree. The computational resources just weren’t there. Now? Anyone with a half-decent machine can build an accurate digit recognition model!
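To illustrate how accessible digit recognition has become, here’s a minimal sketch using scikit-learn’s bundled 8x8 digits dataset and a small neural network. The library and model choice are illustrative assumptions on my part – the episode doesn’t prescribe any particular tooling – but it trains in seconds on an ordinary laptop:

```python
# Minimal digit-recognition sketch (illustrative; not from the episode).
# Uses scikit-learn's bundled 8x8 digits dataset, so no download is needed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labelled 8x8 grayscale digit images

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small multi-layer perceptron - shallow by modern standards,
# yet it reaches high accuracy on this task in seconds
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A modern CNN on the full MNIST dataset does even better, but even this tiny network makes the point: the algorithms existed decades ago – it’s the compute and data that caught up.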
“There is a lot of data in the world. It’s just not labelled, though.”
The biggest takeaway from the last year or two? The rise of unsupervised learning. Machines can learn the important features of a dataset just by looking at the data itself. Unlike supervised learning, there’s no need to label the training data – which means a significant reduction in model training cost.
You can see why it’s an area most researchers would pursue! This line of thought could potentially be our way to making AGI a reality in the coming decades.
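The idea above – learning structure from data without labels – can be sketched with a simple clustering example. The dataset and algorithm here are my own illustrative assumptions, not something discussed in the episode: k-means groups the digit images without ever seeing a label, and we only peek at the labels afterwards to check how well the discovered clusters line up with the real digits:

```python
# Unsupervised-learning sketch (illustrative; not from the episode).
# KMeans discovers digit-shaped clusters without ever seeing a label.
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

digits = load_digits()

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(digits.data)  # labels never used here

# Labels are used ONLY for evaluation, to see how well the
# unsupervised clusters agree with the true digit classes
score = adjusted_rand_score(digits.target, cluster_ids)
print(f"adjusted Rand index: {score:.2f}")
```

The clusters agree with the true digit classes far better than chance – structure recovered from raw, unlabelled pixels. That’s the appeal: the world is full of data like this, and almost none of it comes labelled.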
Professor Melanie recently penned a really thought-provoking article in the New York Times titled “Artificial Intelligence Hits the Barrier of Meaning”. It talks about how machine learning algorithms don’t (yet) understand things the way humans do. These algorithms still can’t understand context – a vital aspect of our own thinking and behavior.
Take the example of autonomous cars. They have been on the verge of becoming mainstream for a number of years – but we’re still not sure when they’ll truly be ready.
“If there’s a paper bag on the road, we don’t have to worry about driving over it. Autonomous cars, on the other hand, have a lot of trouble figuring out which obstacles they should avoid and which ones they don’t need to.”
Professor Melanie has written multiple books over the years and has one coming out later this year, titled “Artificial Intelligence: A Guide for Thinking Humans”. The book is aimed at people from all backgrounds, but it goes a little deeper than most general AI books out there.
The theme of the book is how much AI actually needs to understand the data it’s dealing with in order to be reliable.
Topics covered include how artificial intelligence algorithms work, their applications, and their limitations.
We read and come across industry perspectives on artificial intelligence all the time. It’s refreshing to hear from people working in academia about their thoughts on artificial intelligence and where it’s headed in the near future. That’s especially enriching when it comes from someone as experienced and well-versed as Professor Melanie Mitchell.
AGI is a much debated topic and there’s no real consensus on when it might come about. It feels like we are getting closer with each breakthrough in the field and yet are a million miles away from the end goal. What are your thoughts on AGI? Do you feel we are any closer to it now than, perhaps, 10 years ago?