
AI Summers and Winters and What They Teach Us About the Future

Andreas Merentitis, Director of Data Science (Global), OLX Group

During the last decade, Data Science emerged and matured as a field, transforming several industries and promising to do the same for many more. However, this is not the first time that “AI” has been at the forefront of attention: we are now experiencing the third (some would argue fourth) major AI wave, and each of the previous ones was followed by a respective winter. In this article, I will summarize what, in my opinion, is different this time, as well as certain trends that are either already happening or just down the road. The second part of the article will focus on practical aspects, such as how to identify potential applications for Data Science. Finally, the article will conclude with a forward look at the dangers related to AI.

While the promise of Data Science and Machine Learning has created a lot of traction, there have been two previous AI summers: in the 1950s and 1960s, the “age of reasoning” produced prototype AI, followed by the AI winter of the 1970s. Similarly, in the 1980s and early 1990s, a second boom in the “age of knowledge representation” produced some expert systems. But again, a second AI winter came in the mid-1990s and lasted until the mid-2000s. What is different this time? The first significant difference is that we now have a critical mass of applications that already benefit from Machine Learning at our current level of technical capability. For many use cases, the benefits are not a future promise but are already here, and any further technical improvements will only increase them. The second difference is the virtuous loop in which more (and better) data leads to better algorithms and improved applications; this has generated a multitude of trends that reinforce each other, which was not the case in the two previous AI summers. Will this be enough to avoid a third AI winter? It is, of course, not possible to say, but even in the pessimistic scenario we will see more incremental applications built on what has already proven to work. This is perhaps the key takeaway: although there is certainly hype, this time is arguably different from the previous two AI summers, as we already have a range of use cases with strong practical impact and a few trends that synergize to further expand that range.

What are these reinforcing trends that I am referring to? Some of them are technical, from improved hardware and software libraries for training Machine Learning models at scale, to better fundamental algorithms that can take advantage of the larger volumes of data now available. Perhaps the most interesting ones, though, are multidisciplinary: for example, the improvements in ML coupled with advances in design promise to change the way we interact with computers, including our mobile phones. The move towards channels like voice is already happening, while more futuristic means of human-computer interaction might become practical in the future. Moreover, trends like interpretable AI and fairness in Machine Learning promise to make our predictions easier to understand and explain, while also ensuring integrity in the decision process.

More data has generated a multitude of trends that are reinforcing each other, which was not the case in the two previous AI summers

Given that Data Science is already making a difference across fields and many companies are expressing their commitment to adopting these methods, a question that often arises is: what are the use cases where we can apply Data Science, and where should we start? The first thing I would check is whether there is historical data in the domain you are considering, or whether it would be possible to collect such data. Then I would check whether there is an easy solution at hand; often a simple statistical rule can work wonders. If you have the data, and either there is no immediate and straightforward solution or the potential impact of improving the existing solution is enormous (in terms of customer experience, revenue, etc.), the use case is worth a closer look. Andrew Ng, in his seminal talk, gives the rule of thumb “if humans can do it in less than a second,” which is not always accurate but gives a rough idea. Take face recognition or object localization; both can be done very quickly by humans, and these are examples we can now also train machines to do very efficiently. Most importantly, however, the rule of thumb does not address whether we should prioritize solving that use case with Data Science. Hence a joint effort with business and product people is typically necessary to prioritize the use cases that make the most sense for a given company at a given point in time.
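To make the “check a simple baseline first” advice concrete, here is a minimal sketch in Python. The data, column semantics, and model choices are entirely hypothetical and mine, not from the article; the point is only the workflow of comparing a trivial statistical rule against a trained model on the same historical data before committing to a larger Data Science effort.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical historical data: 5 numeric features and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple statistical rule: always predict the most frequent class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
baseline_acc = accuracy_score(y_test, baseline.predict(X_test))

# Candidate ML model trained on the same historical data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
model_acc = accuracy_score(y_test, model.predict(X_test))

# Only invest further if the model clearly beats the simple rule,
# and if the business impact of that lift is worth the effort.
print(f"baseline accuracy: {baseline_acc:.2f}, model accuracy: {model_acc:.2f}")
```

If the gap between the two numbers is small, the simple rule may already be “good enough,” and prioritization should shift to use cases where the lift (and its business value) is larger.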

Another more futuristic but also critical question is whether we are in danger from AI, in the sense of it posing an existential threat to us. We cannot rule out that this might happen, but it will not be any time soon. To answer this question, I would decompose it into two parts: how far are we from Artificial General Intelligence, and do we need to start planning and preparing for it now?

For the first part of the question, the answer is simple: very far. We are not even sure whether it is possible to achieve the “singularity,” let alone whether any of the paths we are now exploring will at some point lead to it. Artificial General Intelligence is in the mission statement of DeepMind and similar institutions. However, even with the progress in Meta-learning and Reinforcement Learning, we have little to show in terms of out-of-sample generalization (predicting on data different from the training distribution), let alone exact rationalization. We humans do both in our daily lives, drawing on broad previous experience from other domains. For example, when a human learns to drive, they already know how gravity works, what acceleration is, and have a good intuition for how physics plays out in everyday life. They have a “model of the world” they can use to acquire driving skills with the help of that knowledge; what remains is to learn what the different signs mean and how hard the brake can be pressed in different conditions. Machines, in contrast, still have to explicitly learn almost every new concept from scratch, a way of learning that is neither easy nor efficient.
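The generalization gap mentioned above can be illustrated with a small sketch. The synthetic data and the choice of a random forest are my own assumptions, not the author’s; the sketch simply shows a model that fits the training distribution well yet degrades sharply when asked to predict outside the range it was trained on.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Train on inputs in [0, 5]; the true relationship is a simple linear trend.
X_train = rng.uniform(0, 5, size=(1000, 1))
y_train = 2.0 * X_train[:, 0] + rng.normal(scale=0.1, size=1000)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# In-distribution test points vs. points outside the training range.
X_in = rng.uniform(0, 5, size=(200, 1))
X_out = rng.uniform(8, 12, size=(200, 1))

print("in-range MAE:     ", mean_absolute_error(2.0 * X_in[:, 0], model.predict(X_in)))
print("out-of-range MAE: ", mean_absolute_error(2.0 * X_out[:, 0], model.predict(X_out)))
```

A human who understands “the output grows linearly with the input” would extrapolate effortlessly; the model, having never seen inputs beyond 5, cannot, which is the essence of the out-of-sample generalization problem described above.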

That being said, even if we are far from it, that does not mean we should not think about it and set policies already. Technology progresses non-linearly; there were only a couple of decades between small rockets and the Apollo program. Besides, events of low probability but high impact are not to be trifled with. In fact, two of my favourite books, “The Black Swan” and “The Art of the Long View,” come from different perspectives but both advise against trying to make predictions for precisely these types of events (the latter proposes scenario planning instead). In practical terms, however, I would say the topics of interpretable AI and fairness in Machine Learning are more urgent concerns.

What do you think: are we going to face a third AI winter, and what can we do to prevent it or make it milder? If we continue on the growth trend, which topics related to ethics in AI should we prioritize solving, and how?
