The Main Limitations of Deep Learning Approaches

This essay identifies key limitations of deep learning approaches based on relevant recent literature, together with the motivation as to why they are key drawbacks:

Large Data Requirements

Deep learning, even for problems that are not especially complex, needs to optimize millions of parameters during the training phase. Optimizing all of these parameters properly requires a large amount of data. Moreover, for supervised learning, the data must be labelled, and obtaining labelled training data in such huge quantities often proves to be a big challenge [1]. Nowadays, plenty of data is available for many applications, thanks to social networks. But for complex applications such as medical ones (say, disease prediction), data must be combined from multiple sources such as CT, MRI, and ECG. It is not easy to collect such a large amount of data for every modality for every patient. Additionally, this kind of complex input increases the complexity of the network, which further increases the amount of data required for optimal training.

Black-Box Problem

Despite their enormous success rate, neural networks are still treated as black boxes, and a comprehensive theoretical understanding of their learning process has not been much explored. In other words, why a neural network gives a certain output for a certain input, and the process that leads from input to output, is not well understood. This problem not only prevents developers from improving the model with certainty, but also creates scepticism among end users [2].

For each limitation, the following sections briefly discuss how it could potentially be tackled and what the expected impact would be.

Large Data Requirements

To overcome this problem, a Generative Adversarial Network (GAN) can be used [3]. A GAN is a type of neural network that can generate new artificial data which looks very similar to the original. For example, if random noise and some real images of bedrooms are fed into a GAN, it can generate new bedroom images that look like the supplied images but are not the same.
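As a minimal sketch of how the two networks compete, the following PyTorch snippet trains a toy generator against a toy discriminator; the "real" data here is just random tensors standing in for real images, and all layer sizes are illustrative assumptions rather than a prescribed architecture.

```python
# Minimal GAN sketch (illustrative only; real training would use an
# image dataset such as LSUN bedrooms, not random tensors).
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 784  # e.g. flattened 28x28 images (assumed sizes)

# Generator: maps random noise to a synthetic "image" vector.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, IMG_DIM)   # placeholder for a batch of real images
    fake = G(torch.randn(32, NOISE_DIM))

    # Discriminator step: push real samples toward 1, fakes toward 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into predicting 1 on fakes.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```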

Black-Box Problem

Using approaches like information plane visualization, a better understanding of the learning mechanism can be achieved, which can help in the development of better and more deterministic network architectures. Shwartz-Ziv and Tishby [4] have proposed an Information Bottleneck based method to address this problem. They model a layered neural network as a Markov chain of successive representations of the input and then study the information plane: a plot of the mutual information each layer retains about the input against the mutual information it retains about the desired output.
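As a rough sketch of how such information-plane coordinates can be estimated, the snippet below follows the simple binning idea from [4]: hidden activations are discretized and treated as discrete symbols, so mutual information reduces to entropy calculations. The random data and the bin count are illustrative assumptions, not values from the paper.

```python
# Binning-based mutual-information estimate for information-plane plots
# (in the spirit of Shwartz-Ziv & Tishby, 2017; toy data only).
import numpy as np

def discretize(acts, bins=30):
    """Map each activation vector to one discrete symbol by binning
    every coordinate and hashing the resulting tuple of bin indices."""
    edges = np.linspace(acts.min(), acts.max(), bins)
    idx = np.digitize(acts, edges)
    return np.array([hash(tuple(row)) for row in idx])

def entropy(symbols):
    """Shannon entropy (bits) of an array of discrete symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B) over discrete symbol arrays."""
    joint = np.array([hash((x, y)) for x, y in zip(a, b)])
    return entropy(a) + entropy(b) - entropy(joint)

# Toy usage: random "hidden activations" T for 1000 samples, binary labels Y.
T = discretize(np.random.randn(1000, 10))
Y = np.random.randint(0, 2, 1000)
X = np.arange(1000)            # each sample treated as a distinct input symbol
print("I(X;T) =", mutual_information(X, T))
print("I(T;Y) =", mutual_information(T, Y))
```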

The following sections consider how knowledge representation and semantics could play a role in addressing each of the limitations identified above.

Large Data Requirements

Small datasets can be represented in structured forms such as knowledge graphs and fact tables to make the relations between various attributes explicit and to provide much more structured data to the deep learning algorithm. A less complex input means a less complex network, which reduces the amount of data needed for optimal training [5].
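As a hypothetical illustration of this idea, the snippet below encodes a few hand-made medical facts as knowledge-graph triples and learns TransE-style embeddings for them. The entity and relation names are invented, and the margin loss and dimensions are arbitrary choices, not a recipe from [5].

```python
# Toy knowledge graph as (head, relation, tail) triples with TransE-style
# embeddings; all names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

triples = [
    ("patient_1", "has_scan", "ct_1"),
    ("ct_1", "shows", "lesion"),
    ("patient_1", "diagnosed_with", "disease_x"),
]

entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

DIM = 16
ent_emb = nn.Embedding(len(entities), DIM)
rel_emb = nn.Embedding(len(relations), DIM)
opt = torch.optim.Adam(
    list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=0.01)

def score(h, r, t):
    # TransE score ||h + r - t||: small for plausible triples.
    return torch.norm(ent_emb(h) + rel_emb(r) - ent_emb(t), dim=-1)

heads = torch.tensor([e_idx[h] for h, _, _ in triples])
rels = torch.tensor([r_idx[r] for _, r, _ in triples])
tails = torch.tensor([e_idx[t] for _, _, t in triples])

for _ in range(200):
    # Corrupt tails at random to create negative examples.
    neg_tails = torch.randint(0, len(entities), tails.shape)
    loss = torch.relu(1.0 + score(heads, rels, tails)
                      - score(heads, rels, neg_tails)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```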

Black-Box Problem

The output of each hidden node can be stored in a suitable knowledge representation, such as a semantic web structure or a graph; information about the processing at each step (from input through the hidden layers to the output) can then be extracted, and why a certain output was generated can be understood with more certainty. Recently, a new branch of deep learning has emerged, called geometric deep learning, which can work with data in non-Euclidean domains such as graphs and manifolds [6][7]. Using this approach, working with graph-based data becomes much easier and more effective.
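To make the graph-based idea concrete, here is a minimal sketch of a normalized graph-convolution layer in the spirit of the methods surveyed in [6]; the toy adjacency matrix and feature sizes are made up for illustration.

```python
# Minimal graph-convolution layer (Kipf & Welling style normalized
# message passing); graph and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops, symmetrically normalize, then aggregate neighbors.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(a_norm @ x))

# Toy graph: 4 nodes, 3 input features per node.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
x = torch.randn(4, 3)
layer = GraphConv(3, 8)
print(layer(x, adj).shape)  # torch.Size([4, 8])
```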

References:

  1. Camilleri, Daniel (2017): Analysing the Limitations of Deep Learning for Developmental Robotics. In: Mangan, Michael; Cutkosky, Mark; Mura, Anna; Verschure, Paul F.M.J.; Prescott, Tony; Lepora, Nathan (Eds.): Biomimetic and Biohybrid Systems. Cham: Springer International Publishing.
  2. Tishby, Naftali; Zaslavsky, Noga et al.: A Deeper Theory of Deep Learning: Deep Learning and the Phase Transitions in the Information Bottleneck Framework. Research project. Available online at https://www.researchgate.net/project/A-Deeper-Theory-of-Deep-Learning-Deep-Learning-and-the-Phase-transitions-in-the-Information-Bottleneck-framework.
  3. Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil et al. (2014): Generative Adversarial Networks. In: Advances in Neural Information Processing Systems 27 (NIPS 2014).
  4. Shwartz-Ziv, Ravid; Tishby, Naftali (2017): Opening the Black Box of Deep Neural Networks via Information. arXiv:1703.00810.
  5. Vieira, Armando (2016): Knowledge Representation in Graphs using Convolutional Neural Networks. Available online at http://arxiv.org/pdf/1612.02255v1.
  6. Bronstein, Michael M.; Bruna, Joan; LeCun, Yann; Szlam, Arthur; Vandergheynst, Pierre (2017): Geometric Deep Learning: Going beyond Euclidean Data. In: IEEE Signal Processing Magazine 34 (4), pp. 18–42.
  7. Monti, Federico; Boscaini, Davide; Masci, Jonathan; Rodolà, Emanuele; Svoboda, Jan; Bronstein, Michael M. (2017): Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).