One of the world’s foremost experts in deep learning, who recently taught advanced techniques to Winton employees, shares his thoughts on the field.
In the blog post below, Jeremy explores the breadth of the potential applications of deep learning, a subset of machine learning and artificial intelligence.
By Jeremy Howard
Deep learning is a computer technique to extract and transform data – with use cases ranging from human speech recognition to animal imagery classification – by using multiple layers of so-called neural networks. Each of these layers takes the inputs from previous layers and progressively refines them. Moreover, the algorithms involved can train themselves by learning to minimise errors and improve their own accuracy.
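To make that idea a little more concrete, here is a minimal sketch (purely illustrative, using the PyTorch library; the layer sizes and learning rate are arbitrary) of a few layers stacked on top of one another, plus a single training step that nudges the weights to reduce the error:

```python
# A minimal sketch of the idea: several layers, each refining the previous
# layer's output, trained by minimising an error measure.
import torch
from torch import nn

model = nn.Sequential(          # layers applied one after another
    nn.Linear(784, 128),        # first layer: raw inputs -> 128 features
    nn.ReLU(),
    nn.Linear(128, 64),         # second layer: refines those features
    nn.ReLU(),
    nn.Linear(64, 10),          # final layer: scores for 10 classes
)

loss_fn = nn.CrossEntropyLoss()                        # the error to minimise
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def training_step(inputs, targets):
    """One round of learning from mistakes: measure the error, then
    nudge every layer's weights in the direction that reduces it."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()      # work out how each weight contributed to the error
    optimizer.step()     # adjust the weights to shrink the error
    return loss.item()
```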
Deep learning has power, flexibility, and simplicity. What’s more, anyone with as little as a year of coding experience can use deep learning to improve their work.
That’s why I believe it should be taught and applied across many disciplines. These include the social and physical sciences, the arts, medicine, and, appropriately for Winton, financial market investing.
To give a personal example, despite having no background in medicine, I started Enlitic, a company that uses deep learning algorithms to diagnose illness and disease. And Enlitic now does better than doctors in certain cases.
The hardest part of deep learning is artisanal: how do you know if you have enough data, whether it is in the right format, if your model is training properly, and, if it isn’t, what you should do about it?
That is why I believe in learning by doing. As with basic data science skills, with deep learning you only get better through practical experience. Trying to spend too much time on the theory can be counterproductive. The key is to just code and try to solve problems: the theory can come later, when you have motivation and context.
David Perkins, who wrote Making Learning Whole, has much to say about this. The basic idea is to teach “the whole game”. That means that if you’re teaching baseball, you first take people to a baseball game or get them to play it. You don’t teach them how to wind thread into a ball, the physics of a parabola, or the coefficient of friction of a ball on a bat.
Or take music. You don’t start out by studying the theory of harmonics and how strings vibrate and the circle of fifths. Instead, you would take someone to a concert or get them to start playing an instrument.
In deep learning, it really helps if you have the motivation to fix your model to get it to do better. That’s when you start learning the relevant theory. But you need to have the model in the first place. We teach almost everything through a Kaggle competition. The first job is to replicate everything you’ve seen.
Ultimately, good deep learning practitioners can switch between different coding tasks fairly effortlessly – whether that’s whipping out numpy matrix multiplications or broadcasting operations, or throwing together a quick list comprehension to convert all the Booleans into 1s and 0s. But if those concepts sound like Greek to you, that’s fine! The point is that they will continue to do so until you roll up your sleeves and start applying deep learning techniques.
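To give a flavour of what those small coding chores look like in practice, here is a tiny, purely illustrative sketch of a numpy matrix multiplication, a broadcast, and a list comprehension that turns Booleans into 1s and 0s:

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5., 6.], [7., 8.]])

product = a @ b                      # numpy matrix multiplication
scaled  = a * np.array([10., 100.])  # broadcasting: the 1-row array is applied to every row of 'a'

flags = [True, False, True, True]
as_ints = [1 if f else 0 for f in flags]   # list comprehension: Booleans -> 1s and 0s
```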
So what sort of tasks make for good test cases? You could train your model to distinguish between Picasso and Monet paintings or to pick out pictures of your daughter instead of pictures of your son. It helps to focus on your hobbies – setting yourself four or five little projects rather than striving to solve a big, grand problem tends to work better. Since it is easy to get stuck, trying to be too ambitious can often backfire.
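As a sketch of what one of those little projects might look like, here is one possible version of the Picasso-versus-Monet classifier using the fastai library; the folder layout and settings are assumptions for the sake of illustration, not a prescription:

```python
# Assumes (hypothetically) a folder 'paintings/' with one sub-folder per class,
# e.g. paintings/picasso/ and paintings/monet/.
from fastai.vision.all import *

path = Path('paintings')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, item_tfms=Resize(224)   # hold out 20% of images for validation
)
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)   # a few epochs is usually plenty for a toy two-class problem
```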
Common character traits in the people who do well at deep learning include playfulness and curiosity. The late physicist Richard Feynman is an example of someone who I’d expect to be great at deep learning: his understanding of the movement of subatomic particles grew out of his amusement at how plates wobble when they spin in the air – he simply went about analysing it.
Deep learning can be set to work on almost any problem. My first startup was a company called FastMail, which provided enhanced email services when it launched in 1999. We used a primitive form of deep learning – single-layer neural networks – to help to categorise customers and stop them receiving spam.
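The idea behind a single-layer network is simple enough to sketch in a few lines. The following is purely illustrative (not FastMail’s actual code): a weighted sum of word counts squashed through a sigmoid, with the weights adjusted step by step to reduce the prediction error.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_layer(X, y, lr=0.1, epochs=100):
    """X: (n_messages, n_words) word-count matrix; y: 1 for spam, 0 for not spam."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = sigmoid(X @ w + b)          # the single layer's output
        error = pred - y                   # how wrong we are on each message
        w -= lr * (X.T @ error) / len(y)   # adjust the weights to reduce the error
        b -= lr * error.mean()
    return w, b
```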
FastMail was nothing more than software – which, simply put, is computers doing stuff for you. I have tried to translate as much of the work I’ve found around me into algorithms, and I’ve used models for the entirety of my working life. I strongly believe that other people can use models and deep learning to improve what they do, too.
Jeremy Howard is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible, and also a Distinguished Research Scientist at the University of San Francisco, a faculty member at Singularity University, and a Young Global Leader with the World Economic Forum.
Jeremy's most recent startup, Enlitic, was the first company to apply deep learning to medicine, and has been selected as one of the world's top 50 smartest companies by MIT Tech Review two years running. He was previously the President and Chief Scientist of the data science platform Kaggle and before that set up a clutch of other companies, after starting his career in management consulting.