Learning: A day with Artificial Intelligence

Have you ever wondered how Google comes up with bespoke suggestions for what you are searching for? How Amazon identifies items you might buy? What techniques driverless cars use to navigate traffic? Or what really goes on inside Alexa?

We are all surrounded by AI in our daily lives, even if we are not always aware of it. With AI predicted to take over a huge number of existing jobs in the future, it would be strange not to have at least a tiny bit of curiosity. I am highly curious but have limited knowledge of AI, so I decided to spend a Sunday in the classroom to learn more.

The group of people who joined me was a mixed bunch. Some were there because they could see AI sneaking into their workplace and felt they needed to know more. Some were considering AI as a future career. And some were just curious, like me.

The sessions were highly interactive with lots of discussion. Topics included the definition of AI, AI history, applications of machine learning in modern life and the future of AI. A short Python programming session was included as well. Below are some reflections on the discussions we had.

Although AI can outperform humans in narrow, well-defined tasks, we are far from replicating our adaptive human intelligence and our ability to understand abstract concepts. For example, AI can compose excellent film music, producing specific moods in predefined time windows, but it struggles to put together a coherent movie. An expert survey from 2017 suggests that machines outperforming humans in general may not be as far in the future as one might think: out of 253 AI researchers, 50% thought that machines would outperform humans within 45 years. The experts may be wrong, but during the next century we may start seeing machines creating actual movies, not just the soundtracks. Interestingly, the surveyed researchers concluded that the job least likely to be replaced by machines was: AI researcher!

In 2017 DeepMind launched AlphaZero, which taught itself to play chess, Go and shogi. AlphaZero outperformed earlier game engines – including IBM’s Deep Blue, which famously beat chess world champion Garry Kasparov back in 1997 – with style and creativity. The fact that machines that teach themselves new skills1) – or machines that teach other machines – can outperform machine learning guided by humans is fascinating and slightly disturbing. It means that progress in AI could at some point accelerate at breakneck speed and without sufficient human control. OK, I have probably watched too many science fiction movies. But are the doomsday scenarios from The Matrix or Terminator completely unrealistic?

Could machines take over like in The Matrix? Or perhaps they already did and we are just not aware…

Nick Bostrom, professor at Oxford University, does not seem to think so. He elaborates on this in Superintelligence: Paths, Dangers, Strategies, a book recommended in the course. Some people might find it unnecessary to spend time mulling over what could potentially happen in the future when it all appears so far-fetched. However, if we look back at the developments in the financial sector and, more recently, the tech sector, I think he has a point. The hurtling speed with which new types of products were launched, the insane creativity and the boundary-pushing uses of knowledge and data were so overwhelming that it would have been difficult to formulate, in advance, a governance structure that could have kept matters under control. But I guess nobody really tried, and the regulators hired the necessary expertise only after the damage was done. Seeing that the consequences of an out-of-control AI programme could be far more devastating than the issues in investment banks and tech companies, I think it makes sense to work through potential future scenarios and use them as input to the regulatory framework going forward.

Aside from the whole machines-taking-over-the-world aspect, there are other issues relating to AI. With all the data that can be collected and processed by AI, a true Big Brother state can be built, in which the government is aware of our every move and action. As The New York Times reports, China is already moving in that direction. Data can be useful and help protect us from crime and terrorism, but where do we set the limit? Speaking of terrorism: in a future AI-based society, where all functions are linked to a central system and the systems talk to each other, the scope for terror attacks is immense. An entire society could literally be brought to a halt if someone manages to hack into the right system.

Humans are amazing creatures, but we do have our faults and weaknesses. How do we prevent AI programmes from inheriting them? Actions such as blackmail, taking hostages or getting rid of certain people may in some cases appear to be the simplest route to a well-meaning objective. Could “save the planet” come to mean getting rid of humans, for example? Furthermore, humans can be biased. When AI is used for legal cases or job-application screening, any bias humans have shown in the past will be baked into the historical data the AI learns from, and thus reflected in its recommendations.

An ethical dilemma relates to the AI programmes themselves. If AI reaches human-level intelligence, should we treat it like a human? Can we tell whether it has developed consciousness? What rights should it have?

And one of the more pleasant questions, perhaps: what should humans do with all their time if AI takes over all the jobs?

China launched the first AI news reader in November 2018

Of course, there are no easy answers to these questions. Nevertheless, I found it fascinating to try to get my head around the far-reaching consequences AI may have for our society.

Afterwards, I thought about what concrete skills I had learned from the course, and I guess these were limited. The level was set so that people with no technical background could join in, and the Python session was short. The most concrete takeaway was probably that Python seemed very easy to use (famous last words…) and that, for people used to C++ or similar programming languages, Python should be child’s play.
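To give a flavour of that claim – this snippet is my own illustration, not an exercise from the course, and the sentence is made up – here is how little Python it takes to count word frequencies:

```python
# A minimal illustration (not from the course) of Python's readability:
# counting word frequencies takes only a few lines.
from collections import Counter

sentence = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(sentence.split())

print(counts.most_common(3))  # [('the', 3), ('quick', 1), ('brown', 1)]
```

The equivalent C++ would need a map, an explicit loop and a fair amount of boilerplate, which is roughly what “child’s play” means here.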

If you are looking for guidance on when to use multi-layer perceptrons as opposed to recurrent neural networks, or want to investigate the deeper mysteries of the TensorFlow™ library, this is not the course for you. If you enjoy an interesting and brain-stimulating day filled with discussion of a highly relevant topic, you should give it a go. I can honestly say it has been a while since I had this much fun. Oh, and did I mention the South Korean robot slalom championship?

The course I joined was Introduction to Artificial Intelligence at City Lit in London. 

1) ’Machines teaching themselves’ means that the input data on which the learning is based come in a very raw, unprocessed format, and the machine must build its model from scratch. This typically requires a deep neural network and extensive processing power. In the chess example, AlphaZero taught itself chess using just the game rules as input, whereas previous chess engines were fed input from humans about what should be considered when making a specific move.
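To make the self-play idea concrete, here is a toy sketch of my own – emphatically not AlphaZero’s actual method, which combines deep neural networks with Monte Carlo tree search – in which a tabular Q-learning agent learns tic-tac-toe with nothing but the legal moves and the win condition as input, playing against itself:

```python
# Toy self-play learning: tabular Q-learning on tic-tac-toe.
# The only "knowledge" given to the agent is the rules (legal moves, win lines);
# everything else it discovers by playing against itself.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = {}          # (board, move) -> estimated value for the player to move
EPSILON = 0.1   # exploration rate: how often to try a random move
ALPHA = 0.5     # learning rate

def choose(board, moves):
    """Mostly pick the move with the highest learned value; sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((board, m), 0.0))

def play_one_game():
    board, player = " " * 9, "X"
    history = []                        # (board, move) pairs, players alternating
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or " " not in board:
            break
        player = "O" if player == "X" else "X"
    # Propagate the final outcome back through the game: +1 for the winner's
    # moves, -1 for the loser's, 0 for a draw (a crude Monte Carlo update).
    reward = 0.0 if winner(board) is None else 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + ALPHA * (reward - old)
        reward = -reward                # the other player sees the opposite result

for _ in range(50_000):                 # self-play: the agent is its own opponent
    play_one_game()
print(f"Learned values for {len(Q)} state-action pairs")
```

After enough games the agent works out for itself which openings and replies are strong – the same principle, at a vastly smaller scale, as AlphaZero learning chess from the rules alone.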