Deep Learning

Deep learning is a branch of Artificial Intelligence that sits within the broader category of machine learning, in which machines learn from experience and acquire skills without explicit human involvement.

In deep learning, neural networks (algorithms loosely inspired by the human brain) learn from large amounts of data using networks with many layers. In traditional machine learning, the algorithm is given a set of relevant features to analyse. In deep learning, by contrast, the algorithm is given raw data (typically multiple related datasets covering specific aspects of the world) and decides for itself which features are relevant. Deep learning networks often improve as the amount of data used to train them increases.

Deep learning essentially tries to mimic how the human brain works. Just as humans learn from experience, a deep learning algorithm performs a task repeatedly, each time tweaking its behaviour slightly to improve the outcome. The term 'deep' refers to the many layers of the neural network, which change or evolve as the network tackles related problems over and over again, and this is what enables learning.

Problem Definition

For the last several decades, in order to get computers to respond to our requests for information, or to define the problem we would like them to solve, we had to learn to speak to them in a way they would understand.

This meant humans having to learn things like Boolean query languages, or how to write complex rules, encoded as programs, that carefully instructed the computer what actions to take. With deep learning, however, we simply provide a definition of the desired outcome together with a set of example inputs, and the algorithm works backwards to discover how to produce the answer, as sketched below.
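As an illustration of that shift (a deliberately simple one, using an ordinary linear fit rather than a deep network, with invented temperature data), compare a hand-written rule with a mapping learned purely from example inputs and desired outputs:

```python
# A minimal sketch of the shift described above: instead of hand-coding a
# rule, we supply example inputs and desired outputs and let the algorithm
# work the mapping out for itself.
import numpy as np

# Hand-written rule: the programmer encodes the logic explicitly.
def fahrenheit_by_rule(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# Learning from examples: only (input, desired output) pairs are given.
celsius = np.array([[-10.0], [0.0], [8.0], [15.0], [22.0], [38.0]])
fahrenheit = np.array([14.0, 32.0, 46.4, 59.0, 71.6, 100.4])

# Fit a simple linear model y = w * x + b from the examples alone.
X = np.hstack([celsius, np.ones_like(celsius)])        # add a bias column
w, b = np.linalg.lstsq(X, fahrenheit, rcond=None)[0]   # least-squares fit

print(fahrenheit_by_rule(100.0))   # 212.0 (rule written by a human)
print(w * 100.0 + b)               # ~212.0 (mapping learned from examples)
```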

How Deep Learning Works

The basic building block of a Deep Learning neural network is a layer of computational nodes called "neurons". In a fully connected network, every neuron connects to all of the neurons in the following layer. There are three types of layers:

  • The input layer of nodes, which receives the information and passes it on to the hidden layers
  • The hidden layers of nodes, where the computations take place
  • The output layer of nodes, where the results of the computations appear

A network is considered "deep" when it uses at least two hidden layers. Adding more hidden layers enables more in-depth calculations, but deploying such networks demands immense amounts of computational power, both in processing terms and often in memory and storage as well.
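As a rough illustration of this structure, the following sketch (plain NumPy, with made-up layer sizes and random weights) passes one input through an input layer, two hidden layers, and an output layer:

```python
# A minimal sketch of the layer structure described above: an input layer,
# two hidden layers (making the network "deep"), and an output layer.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)          # a common hidden-layer activation

# Randomly initialised weights and biases for a 4 -> 8 -> 8 -> 3 network.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h1 = relu(x @ W1 + b1)             # first hidden layer
    h2 = relu(h1 @ W2 + b2)            # second hidden layer
    return h2 @ W3 + b3                # output layer (raw scores)

x = rng.normal(size=(1, 4))            # one input sample with 4 features
print(forward(x))                      # three output values
```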

How does such an algorithm work? Typically we begin by providing volumes of training data to the deep learning algorithm, at which point the network enters a training phase, a procedure that requires many iterations, or learning cycles. The aim of each cycle is to learn a little more about how to achieve the primary task. The training data consists of many samples that have been pre-analysed, and correctly labelled, outside of the process.

For example, a collection of photographs of animals may be provided, in which the images that contain cats have already been identified by humans. The goal of the training process is then for the algorithm to learn, over many analysis cycles, which images contain cats and which do not, as in the sketch below.
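A hypothetical sketch of what such pre-labelled training data might look like (the file names and labels here are invented):

```python
# Each sample has been identified ("cat" or "not cat") by a human
# before training begins. File names are purely illustrative.
labelled_examples = [
    ("photos/img_0001.jpg", 1),   # 1 = contains a cat
    ("photos/img_0002.jpg", 0),   # 0 = does not contain a cat
    ("photos/img_0003.jpg", 1),
    ("photos/img_0004.jpg", 0),
]

for path, label in labelled_examples:
    print(path, "cat" if label == 1 else "not cat")
```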

An automatic training process then commences. The algorithm runs repeatedly, and in each iteration every connection in the neural network has a weight, or importance, assigned to it. The initial weight values are assigned randomly (or according to their perceived importance, as judged by the model's human creator).

Every neuron also has an activation function that determines whether, and how strongly, a signal is passed on, loosely analogous to the firing of neurons in the human brain. If, after the data set is analysed in a training cycle, the results differ from what is expected, the weights of specific connections are adjusted and another iteration is run. Each iteration yields new results, which usually still differ slightly from the expected ones. The difference between the expected outcomes and the model's predicted outcomes is measured by the loss function. We want this function to be as close to zero as possible, meaning the algorithm has produced the correct outcome.
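The sketch below illustrates these two ideas with one common choice for each: a sigmoid activation function and a mean-squared-error loss. The numbers are invented, and real networks may use other activation and loss functions.

```python
# An activation function applied inside each neuron, and a loss function
# measuring how far the network's predictions are from the expected answers.
import numpy as np

def sigmoid(z):
    # One common activation function: squashes a signal into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def mean_squared_error(predicted, expected):
    # One common loss function: zero when predictions match exactly.
    return np.mean((predicted - expected) ** 2)

predicted = sigmoid(np.array([2.0, -1.0, 0.5]))   # network outputs
expected  = np.array([1.0, 0.0, 1.0])             # human-provided labels

print(mean_squared_error(predicted, expected))    # we want this near zero
```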

Each time we adjust the weights of the connections between the nodes (or neurons) and run the algorithm against the data set, we can see whether the results have become more accurate or less precise. By evaluating the results after each iteration, the algorithm can adjust the weights in small increments and work out which direction leads towards the minimum of the loss function.
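The following toy sketch shows that adjustment loop in miniature: a single weight, a tiny invented dataset, and repeated small steps in the direction that reduces the loss. It is a simplified stand-in for training a full network, not a complete training procedure.

```python
# After each pass over the data, the weight is nudged a small step in the
# direction that reduces the loss (basic gradient descent).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x                             # "correct" answers: the true weight is 3

w = np.random.default_rng(0).normal()   # start from a random weight
learning_rate = 0.01

for step in range(200):
    predictions = w * x
    loss = np.mean((predictions - y) ** 2)          # how wrong are we?
    gradient = np.mean(2 * (predictions - y) * x)   # direction of steepest increase
    w -= learning_rate * gradient                   # small step that lowers the loss

print(w, loss)   # w approaches 3.0 and the loss approaches zero
```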

In summary, after many iterations (typically thousands, or even millions), the trained Deep Learning model produces acceptably accurate results and is ready to be used in production (a real-world setting).

Deep learning and bias: A danger zone

Like other branches of AI, or indeed any technological solution designed by humans, deep learning has important drawbacks and application considerations, such as the possibility of bias.

It's easy to blindly trust the results of a deep learning algorithm, but like any machine learning algorithm, the results are only as good as the data it is trained on. If the data contains an unconscious bias introduced by its human creators, or suffers from problems of fairness or relevancy, machine learning cannot provide the most accurate results.

Applications of Deep Learning

1. Self-driving cars

Companies building these types of driver-assistance services, as well as full-blown self-driving cars like Google’s, need to teach a computer how to take over key parts (or all) of driving using digital sensor systems instead of a human’s senses.

2. Deep Learning in Healthcare

Innovations in Deep Learning are advancing the future of precision medicine and population health management in unbelievable ways. Computer-aided detection, quantitative imaging, decision support tools and computer-aided diagnosis will play a big role in years to come.

3. Voice Search & Voice-Activated Assistants

One of the most popular application areas of deep learning is voice search and voice-activated intelligent assistants. With the big tech giants having already made significant investments in this area, voice-activated assistants can now be found on nearly every smartphone. Apple's Siri has been on the market since October 2011.

4. Automatically Adding Sounds To Silent Movies

In this task, the system must synthesize sounds to match a silent video. The system is trained using 1,000 examples of video with the sound of a drumstick striking different surfaces and creating different sounds. A deep learning model associates the video frames with a database of pre-recorded sounds in order to select the sound that best matches what is happening in the scene.

5. Automatic Machine Translation

This is the task of automatically translating a given word, phrase, or sentence from one language into another.

6. Automatic Text Generation

This is an interesting task in which a model is learned from a corpus of text and then used to generate new text, word-by-word or character-by-character.

The model is capable of learning how to spell, punctuate, form sentences and even capture the style of the text in the corpus.
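To illustrate the generate-one-piece-at-a-time loop, the sketch below uses a simple character-following table built from a tiny invented corpus; a real system would use a trained neural language model, so treat this only as a sketch of the sampling loop.

```python
# "Learn" which characters tend to follow each character in a tiny corpus,
# then generate new text one character at a time.
import random
from collections import defaultdict

corpus = "the cat sat on the mat. the cat ate the rat."

followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

random.seed(0)
text = "t"                                     # seed character
for _ in range(40):
    nxt = random.choice(followers[text[-1]])   # pick a plausible next character
    text += nxt

print(text)
```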

7. Automatic Handwriting Generation

Given a corpus of handwriting examples, the task is to generate new handwriting for a given word or phrase.

8. Image Recognition

Another popular area for deep learning is image recognition. It aims to recognize and identify people and objects in images, as well as to understand their content and context.

9. Automatic Image Caption Generation

Automatic image captioning is the task in which, given an image, the system must generate a caption describing its contents.

10. Automatic Colorization

Image colorization is the problem of adding color to black-and-white photographs. Deep learning can use the objects and their context within the photograph to color the image, much as a human operator might approach the problem.

11. Advertising

Advertising is another key area that has been transformed by deep learning. It has been used by both publishers and advertisers to increase the relevancy of their ads and boost the return on investment of their advertising campaigns.

12. Predicting Earthquakes

Scientists have used deep learning to teach a computer to perform viscoelastic computations, the computations used in earthquake prediction.

13. Neural Networks for Brain Cancer Detection

A team of French researchers noted that spotting invasive brain cancer cells during surgery is difficult, in part because of the effects of lighting in operating rooms. They found that using neural networks in conjunction with Raman spectroscopy during operations makes it easier to detect the cancerous cells and reduces residual cancer post-operation.

14. Neural Networks in Finance

Futures markets have seen phenomenal success since their inception, in both developed and developing countries, over the last four decades. In one such analysis, historical daily prices of twenty stocks from each of ten markets (five developed and five emerging) were used as input data.

15. Energy Market Price Forecasting

Researchers in Spain and Portugal have applied artificial neural networks to the energy grid in an effort to predict price and usage fluctuations.

Deep Learning Failures / Problems

Deep learning's advances are the product of pattern recognition: neural networks memorize classes of things and more or less reliably know when they encounter them again. A deep learning algorithm works by comparing classes of patterns, and it can spot when a new pattern is similar to, or shares features with, another pattern. In short, deep learning is an AI technique that is very good at solving pattern classification problems.
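As a simple illustration of pattern classification in this sense (a nearest-centroid rule with invented feature vectors standing in for a trained deep network), a new pattern is labelled with whichever learned class it most resembles:

```python
# The system holds a learned summary of each class and labels a new input
# by whichever class it most resembles.
import numpy as np

# "Learned" class prototypes (invented 2-D feature vectors).
prototypes = {
    "cat": np.array([0.9, 0.1]),
    "dog": np.array([0.2, 0.8]),
}

def classify(features):
    # Pick the class whose prototype is closest to the new pattern.
    return min(prototypes, key=lambda name: np.linalg.norm(features - prototypes[name]))

print(classify(np.array([0.85, 0.2])))   # resembles the "cat" pattern
print(classify(np.array([0.1, 0.7])))    # resembles the "dog" pattern
```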

But many of the interesting problems in cognition research, or in the simulation of human thinking, aren't classification problems at all. “People naively believe that if you take deep learning and scale it 100 times more layers, and add 1000 times more data, a neural net will be able to do anything a human being can do,” says François Chollet, a researcher at Google. “But that’s just not true.”

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber’s AI lab, recently published a trilogy of essays, offering a critical appraisal of deep learning. Marcus believes that deep learning is not “a universal solvent, but one tool among many.” And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve. 

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow. 

The systems are greedy because they demand huge sets of training data. Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. 

Finally, Deep Learning systems are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.

Conclusion

In recent times, Deep Learning systems have been responsible for some of the most remarkable technological success stories, providing everyday solutions to previously unsolved problems such as voice recognition and automated image analysis. Previously, such problems of advanced pattern recognition could only be tackled by employing a human expert.

But it seems sensible to resist the temptation of extending these victories to all the other areas of human cognition. The author believes that Deep Learning and related technologies will continue to make advances and become important tools that enable humanity to progress; but we should not assume that machines have thereby gained the ability to think in a human-like manner. Machines wholly lack the capability to understand matters in wider contexts, to decide for themselves, or to program themselves.

To enable such abilities, we would need a completely new type of machine brain: one able to perceive and understand the environment as an embodied thing inserted into the wider world, and thus able to understand and predict the many different potential results of its decisions and actions. Put simply, the machine mind must know who and what it is, what the world is, and who and what we humans are, and be able to create new ideas and concepts that are relevant to the broader situation at hand. Evidently, we still have a long, long way to go before we can manufacture a mind.