Could deep learning come to an end?


Deep learning is one of the most exciting research fields in technology and the basis of so much AI: could its days really be numbered?

In 2000, Igor Aizenberg and colleagues introduced the term “deep learning” in connection with artificial neural networks (ANNs), using it to describe networks of Boolean threshold neurons. It was a revelation. To many, it’s still the most exciting thing in artificial intelligence.

Deep learning was born in the embers of Y2K and has gone on to shape the 21st Century. Automatic translations, autonomous vehicles and customer experience are all indebted to this concept: the idea that if tech can teach itself, we as a species can simply step back and let the machines do the hard work.

Some believe that deep learning is the last true invention of the human race. Others believe it’s a matter of time before robots rise up and destroy us. We assume that AI will outlive us: what if deep learning has a lifespan, though?

MIT Technology Review looked into the history of AI, analysing 16,625 papers to chart trends and mentions of various terms to track exactly what’s risen in popularity and when. Their conclusion was intriguing: deep learning could well be coming to an end.

The emergence of the deep learning era

The terms “artificial intelligence”, “machine learning” and “deep learning” are often used as interchangeable buzzwords for any kind of computing project that involves algorithms.

This is, of course, misleading. A common visual explanation shows deep learning as merely a subset of machine learning, and machine learning as a subset of AI.

Deep learning is but an era of artificial intelligence. MIT used arXiv, one of the largest open-access databases of scientific papers, and tracked mentions of key terms to discover how AI has evolved.

The analysis revealed three major trends. Firstly, there was a gradual shift towards machine learning that began on the cusp of the 21st Century. Secondly, neural networks began to pick up speed around a decade later, just as the likes of Amazon and Apple were incorporating AI into their products. Thirdly, reinforcement learning has been the big wave of the last few years.


MIT found a transition away from knowledge-based systems (KBS) – computer programs that reason over a knowledge base to solve complex problems – by the 21st Century. They were replaced by machine learning. A KBS arrives at a conclusion by applying the facts and “if-then” rules it has been fed; a machine learning system instead builds a model from the available training data and uses that model to infer conclusions from new observations.
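
To make the contrast concrete, here is a minimal sketch in Python (the spam-filtering task, the function names and the exclamation-mark feature are all invented for illustration, not taken from the MIT study): the first function encodes hand-written “if-then” rules in the spirit of a KBS, while the second fits a simple model – a single threshold – to labelled training data and applies it to new observations.

```python
# Knowledge-based approach: conclusions follow from hand-fed "if-then" rules.
def kbs_is_spam(message: str) -> bool:
    if "win a prize" in message.lower():  # a rule supplied by a human expert
        return True
    if message.isupper():                 # another hand-written rule
        return True
    return False

# Machine-learning approach: fit a model (here, a single threshold) to data.
def exclamation_rate(message: str) -> float:
    return message.count("!") / max(len(message), 1)

def fit_threshold(examples):
    """Learn the exclamation-mark rate that separates spam from non-spam."""
    spam = [rate for rate, is_spam in examples if is_spam]
    ham = [rate for rate, is_spam in examples if not is_spam]
    # Place the threshold midway between the two class averages.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

# Invented training data: (exclamation-mark rate, is it spam?) pairs.
training = [(0.20, True), (0.15, True), (0.01, False), (0.00, False)]
threshold = fit_threshold(training)

print(kbs_is_spam("WIN A PRIZE NOW"))                            # True, by rule
print(exclamation_rate("Free money!!! Act now!!!") > threshold)  # True, by model
print(exclamation_rate("Meeting moved to 3pm.") > threshold)     # False, by model
```

The rule-based version only ever knows what its author wrote down; the learned version can be retrained on fresh data without anyone rewriting its logic.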

What comes next?

There is more than one way to train a machine.

Supervised learning is the most popular form of machine learning. Decisions the model makes don’t affect what it sees in the future. This is the principle behind image recognition: all you need are labelled examples of what a cat looks like to recognise a cat.
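
As a toy illustration of that idea (the “whisker” and “ear” feature values below are invented for the example), here is a nearest-neighbour classifier sketched in Python. It is fitted once on labelled examples, and every prediction is independent: classifying one image never changes what the model sees next.

```python
import math

# Labelled training examples: (whisker_score, ear_pointiness) -> label.
# Both features and their values are invented purely for illustration.
training = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "dog"),
    ((0.2, 0.1), "dog"),
]

def predict(features):
    """1-nearest-neighbour: return the label of the closest training example."""
    closest = min(training, key=lambda example: math.dist(example[0], features))
    return closest[1]

# Each prediction is independent: the order of queries changes nothing.
print(predict((0.85, 0.75)))  # cat
print(predict((0.15, 0.25)))  # dog
```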

Reinforcement learning, though, mimics how we learn: it is sequential, meaning that the AI’s next input depends on a decision made with its current input. Think of it more like a board game: you can play chess by learning the rules, but you truly progress as a player by gaining experience.
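
Here is a minimal sketch of that sequential loop, using tabular Q-learning on an invented five-state corridor (the environment, reward values and hyperparameters are all illustrative assumptions, not anything from the article). The action the agent picks determines the next state it observes, and its value estimates improve with experience.

```python
import random

# A toy five-state corridor: the agent starts at 0 and a reward sits at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    """The next observation depends on the action taken - the sequential part."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(300):
    state = 0
    for _ in range(500):  # cap episode length
        # Explore occasionally; otherwise exploit experience gathered so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Q-learning update: adjust the value estimate using the observed outcome.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if state == GOAL:
            break

# After enough experience the greedy policy steps right (+1) at every state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

Unlike the supervised sketch above, the training data here isn’t fixed in advance: the agent generates it by acting, which is exactly the board-game intuition.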

In March 2016, DeepMind’s AlphaGo, trained with reinforcement learning, defeated world champion Lee Sedol at the ancient game of Go by learning from experience. This had a huge impact on reinforcement learning, which has been picking up traction ever since – just as deep learning boomed after Geoffrey Hinton and his collaborators made image-recognition breakthroughs in the early 2010s.


AI has genre shifts like music. Just as synth-pop dominated the 80s, replaced by the grunge and Britpop of the 90s, artificial intelligence experiences the same waves of popularity. The 1980s saw knowledge-based systems dominate, replaced by Bayesian networks the following decade; support vector machines were in favour in the 2000s, with neural networks becoming more popular this decade.

Neural networks weren’t always this popular. They peaked in the 1960s and dipped below the surface, returning briefly in the 80s and then again around 20 years later. There’s no reason that the 2020s won’t bring about new changes to the way we use AI. Several competing ideas are vying to be the next revolution; whichever takes hold could see deep learning leave the spotlight for a while.

Luke Conrad

Technology & Marketing Enthusiast
