Ever since the release of ChatGPT last year, the world has been obsessed with Artificial Intelligence, describing it as a one-size-fits-all solution to humanity’s greatest challenges. But how does this ‘artificial intelligence’ that everyone is so excited about actually work? What limitations and dangers might it have? What is in store for us in the future of AI?
What Is ‘AI’? How Does It Work?
When asked to imagine an artificial intelligence, many would picture a sentient, often all-powerful being like J.A.R.V.I.S. from Iron Man or SkyNet from The Terminator. In reality, we still have a long way to go before we create “true” artificial intelligence. Today, AI refers to the field of developing software and hardware that use the information they receive to make complex decisions autonomously, without human assistance. The primary way to do this is through a category of computer algorithms called Machine Learning (ML) algorithms.
Whether it’s telling the difference between a picture of a cat and one of a dog, or discovering which types of pollution have the biggest impact on global warming, an ML algorithm’s primary job is to find patterns in the data you give it.
One way this is done is by giving the ML algorithm data with a ‘right’ or ‘wrong’ answer defined by a human. Think of this data as a textbook of practice questions from which the algorithm can learn. By working through these practice problems, the ML algorithm can optimize itself to get as many of them right as possible, recognising the patterns behind the solutions. We can then use the ‘trained’ algorithm to provide us with the correct answer for new problems similar to the practice questions. We call this method ‘supervised learning.’
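To make this concrete, here is a minimal supervised-learning sketch. The choice of Python with scikit-learn and its built-in handwritten-digits dataset is purely illustrative; any labelled dataset and learning algorithm would do:

```python
# A minimal supervised-learning sketch: the labels act as the 'answer key'
# the algorithm learns from. scikit-learn and the digits dataset are
# illustrative choices, not the only way to do this.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # images of handwritten digits, each labelled 0-9

# Split into 'practice questions' (training set) and an unseen 'exam' (test set).
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # optimize against the labelled examples
print("accuracy on new problems:", model.score(X_test, y_test))
```

The accuracy printed at the end measures how often the trained algorithm gets ‘exam’ questions it has never seen before right.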
Sometimes, though, we don’t know what patterns a set of data contains, and we simply want the ML algorithm to discover whatever patterns exist. In this case, we feed the algorithm as much data as possible and ask it to make sense of it by identifying trends or grouping similar items together. Here the ML algorithm can once again optimize itself, this time to group the data so that the items within each group are as similar as possible and anomalies are kept to a minimum. We call this type of learning ‘unsupervised learning.’
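Here is a matching unsupervised-learning sketch, again assuming scikit-learn. k-means is just one of many grouping algorithms, and the synthetic data is purely illustrative; the point is that no answer key is ever provided:

```python
# A minimal unsupervised-learning sketch: no labels are given; k-means
# simply groups similar points together. The synthetic 'blobs' and the
# choice of k-means are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Generate unlabelled 2-D points that happen to fall into three clumps.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)  # assign each point to a discovered group
print("first ten group assignments:", labels[:10])
print("group centres:\n", kmeans.cluster_centers_)
```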
The Limitations and Dangers of AI
Although it may seem that, with these two methods of learning, AI can solve any problem we throw at it, this is not the case. At its core, the process of learning is the same for both humans and machine learning algorithms: we take in information from the world around us and try to make sense of it by finding patterns. The main advantage of ML algorithms over humans is their ability to quickly process large amounts of data. Given only a limited amount of data, however, humans are still much better at spotting potential patterns. For problems where only a limited amount of data can be acquired, AI will therefore struggle to outperform its human counterparts.
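A rough way to see this limitation, reusing the digits sketch from earlier (the exact numbers will vary from run to run, but accuracy generally climbs as the ‘textbook’ grows):

```python
# An illustration of the small-data limitation: the same model is trained
# on progressively larger slices of the digits dataset. The slice sizes
# are arbitrary illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0
)

for n in (20, 100, 500):  # number of training examples the model sees
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:4d} examples -> accuracy {model.score(X_test, y_test):.2f}")
```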
In addition, although AI can provide solutions to difficult problems, it is practically impossible to understand exactly how the AI reached its conclusion. The most advanced ML models today compute results using an enormous number of variables that would take humans years to decipher. This means that crucial miscalculations in the AI’s decision-making process can go unnoticed. Even when these errors are noticed, it is very difficult for data scientists to determine what caused them or how to correct them.
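A back-of-the-envelope calculation shows why. Even a small fully-connected neural network, with layer sizes chosen here purely for illustration, contains hundreds of thousands of learned numbers, each of which influences every answer:

```python
# Why modern models are hard to inspect: counting the learned numbers in
# a modest fully-connected network. The layer sizes are arbitrary
# illustrative choices.
layer_sizes = [784, 256, 128, 10]  # e.g. a small image classifier

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out + n_out  # weights plus biases for each layer

print("learned parameters in this small network:", total)
# -> 235,146 numbers, for a network tiny by today's standards;
#    state-of-the-art models have billions.
```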
The Future of AI
Although the AI we currently use still harbours a number of serious issues, AI is one of the fastest-advancing fields in the world. Researchers across the world are hard at work finding ways for AI to learn more from small datasets, and for ML algorithms to operate in ways that humans can easily understand. From self-driving cars to highly efficient rocket engine designs, AI has already helped many industries make strides never before possible. For better or worse, AI is here to stay. What we must do is make AI stay for good.
If you want to explore this topic further, take a look at MIT’s Introduction to Machine Learning or any of Andrew Ng’s Coursera courses.