Deep learning: new neural nets could model continuous processes

In deep learning, neural nets use specific hidden layers to deliver defined results. AI researcher David Duvenaud is questioning all of that with ODE nets.

Deep learning is incredible: truly, it is. Getting a computer to learn patterns from data, in a way loosely inspired by the brain, should never be taken for granted. It is one of the most striking scientific breakthroughs of recent decades. However, deep learning is not beyond improvement.

At the heart of a deep learning model lies a neural net. This is the brain, if you like: stacked layers of simple nodes that work together to find patterns in data. The net assigns weights to the data it processes, filtering it through successive layers to reach a final conclusion.

Now, scientists are questioning how the values are assigned to data and whether there’s a more efficient way to run deep learning algorithms.

David Duvenaud, an AI researcher at the University of Toronto, set out to build a medical deep learning model that would predict a patient's health over a period of time. Traditional neural networks thrive when they learn from data observed at fixed, discrete stages, mirroring the fixed stack of hidden layers within a deep learning model. Healthcare data is difficult to fit into that mould: patient records arrive at irregular intervals, not at neatly defined steps.

Health is continuous: it changes smoothly over time and depends on many interacting variables, rather than on a fixed set of binary questions. So how can a neural net pick up on continuous data?

Can neural nets be improved?

Think of a deep learning model as being similar to the classic board game Guess Who?. In the game, each player has a selection of characters in front of them, all with a different appearance: some have facial hair, some wear glasses, some have blue eyes and some brown, and each of them is unique.

One player asks the other binary questions to eliminate characters from their investigation, until only the chosen character remains through this process of elimination: that final answer is the output layer.

This is similar to how a neural network works. It processes its data through successive stages, narrowing down the possibilities at each layer until it is left with an answer. This is the kind of technology used in face recognition software, for example.
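The layer-by-layer filtering described above can be sketched in a few lines. This is a toy illustration, not any production network: the layer sizes, random weights and ReLU activation are all assumptions chosen for clarity. Each hidden layer transforms the data, and the index of the largest final score plays the role of the "chosen character".

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied after each layer.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass the input through each layer in turn, returning class scores."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

# Toy network: 4 input features -> two hidden layers -> 3 output scores.
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),
    (rng.standard_normal((8, 3)), np.zeros(3)),
]

scores = forward(rng.standard_normal(4), layers)
prediction = int(np.argmax(scores))  # the "final chosen character"
```

Note that the number of layers here is fixed in advance, which is exactly the design choice the rest of the article questions.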

[Figure: basic neural network model, from "Software 2.0: How neural networks work"]

David Duvenaud saw an opportunity. He sought to break from the binary for a more fluid form of deep learning.

Traditionally, the answer is simply to add more layers to a neural net to reach a more accurate endpoint. This is not always sensible, though. Why, for example, should you have to define the number of layers in a neural network, train the model and only then find out how accurate it is? Duvenaud's neural net lets you specify the accuracy first; it then finds the most efficient way to train itself within that margin of error.
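The "accuracy first" idea can be demonstrated with an off-the-shelf adaptive ODE solver. This is not Duvenaud's code; it is a sketch using SciPy's `solve_ivp` on a stand-in dynamics function (`dz/dt = -z`, an assumption chosen because its exact solution is known). You set the error tolerances up front, and the solver decides for itself how many internal evaluation steps it needs, playing the role that the layer count plays in an ordinary net.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A simple dynamics function standing in for a learned network:
# dz/dt = -z, so z(t) = z0 * exp(-t), which we can check exactly.
def dynamics(t, z):
    return -z

z0 = np.array([1.0])

# Specify the accuracy first; the adaptive solver (RK45) then chooses
# however many internal steps it needs to stay within that tolerance.
loose = solve_ivp(dynamics, (0.0, 5.0), z0, rtol=1e-2, atol=1e-4)
tight = solve_ivp(dynamics, (0.0, 5.0), z0, rtol=1e-8, atol=1e-10)

# Tighter tolerance -> more function evaluations ("deeper" computation).
print(loose.nfev, tight.nfev)
```

Demanding higher accuracy costs more compute, but the trade-off is now explicit and chosen up front rather than baked into a fixed layer count.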

This is what researchers describe as an "ODE net": a neural network built around ordinary differential equations (ODEs).

How can an ODE be solved?

An ODE is solved numerically by integrating it step by step. This is a computationally intensive task, and methods have been suggested in the past to reduce the number of hidden stages a deep learning model needs.
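The simplest such numerical integration is Euler's method, sketched below on an assumed test equation (`dz/dt = -z`). It is worth seeing because each Euler update `z = z + h*f(z)` has the same shape as a residual-network layer `z_{k+1} = z_k + f(z_k)`, which is the observation that links residual nets to ODEs in the first place.

```python
import numpy as np

def euler_integrate(f, z0, t0, t1, steps):
    """Fixed-step Euler method: each update z += h*f(t, z) mirrors a
    residual-network layer z_{k+1} = z_k + f(z_k)."""
    z, t = np.asarray(z0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        z = z + h * f(t, z)
        t += h
    return z

# dz/dt = -z with z(0) = 1 has the exact solution exp(-t).
approx = euler_integrate(lambda t, z: -z, [1.0], 0.0, 1.0, 1000)
exact = np.exp(-1.0)
```

With 1,000 fixed steps the approximation is close to the exact answer; adaptive solvers refine this by taking more steps only where the dynamics demand it.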

Duvenaud worked with a number of researchers on a paper that proposed a memory-efficient way to train such a model. Rather than storing every intermediate step of the forward solve, the method computes gradients by solving a second, augmented ODE backwards in time, so it doesn't take up too much memory. The gradient computation algorithm works by treating the solver itself as a differentiable "ODESolve" operation.


This operator takes the initial state, the dynamics function, the start time, the end time and the parameters being learned. The paper provided Python code to easily compute derivatives through the ODE solver.
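The backward pass can be sketched on a tiny example. This is not the paper's implementation: the scalar dynamics `dz/dt = theta*z` and the use of SciPy's `solve_ivp` are assumptions chosen so the gradient can be checked analytically. The augmented state bundles the solution `z`, the adjoint `a = dL/dz`, and a gradient accumulator `g`, and is integrated backwards from the end time to the start.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta = 0.5                 # the parameter being learned
z0, t0, t1 = 2.0, 0.0, 1.0  # initial state, start time, end time

def f(t, z, theta):
    # Stand-in dynamics dz/dt = theta * z playing the role of a learned network.
    return theta * z

# Forward solve: ODESolve(z0, f, t0, t1, theta).
fwd = solve_ivp(f, (t0, t1), [z0], args=(theta,), rtol=1e-10, atol=1e-12)
z1 = fwd.y[0, -1]

# Backward solve of the augmented ODE [z, adjoint a, gradient accumulator g],
# run from t1 back to t0. For the loss L = z(t1), a(t1) = dL/dz(t1) = 1.
def augmented(t, s, theta):
    z, a, g = s
    return [theta * z,   # dz/dt, re-solved backwards alongside the adjoint
            -a * theta,  # da/dt = -a * df/dz
            -a * z]      # dg/dt = -a * df/dtheta
bwd = solve_ivp(augmented, (t1, t0), [z1, 1.0, 0.0],
                args=(theta,), rtol=1e-10, atol=1e-12)
grad_theta = bwd.y[2, -1]

# Analytic check: z(t1) = z0*exp(theta*t1), so dL/dtheta = z0*t1*exp(theta*t1).
```

Because only the end state is needed to start the backward solve, the memory cost stays constant in the number of solver steps, which is the method's main selling point.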

The paper suggested that supervised learning – particularly MNIST handwritten digit classification – was one application in which the ODESolve method can match the performance of a residual network while using far fewer parameters.

Will ODEs revolutionise deep learning?

ODE nets are not the only way to run a deep learning model. There could be any number of reasons why a scientist would want to fix the number of stages in advance for the AI they run. Either way, "it's not ready for prime time yet," Duvenaud claims.

However, ODE nets pose interesting questions for deep learning moving forward, about how we build neural nets and what the most efficient methods of deep learning truly are. The idea is not particularly new, but this is a breakthrough of sorts. Whether the approach works for a wide range of models remains to be seen.

Luke Conrad

Technology & Marketing Enthusiast
