Let’s compare them and see whether TF stays in the lead.
PyTorch

The primary software tool for deep learning after TensorFlow is PyTorch.
The PyTorch framework was developed for Facebook’s own services, but it is already used by companies such as Twitter and Salesforce for their own tasks.
Key Things to Know:

Unlike TensorFlow, the PyTorch library operates with a dynamically updated graph.
This means you can change the network architecture on the fly, during execution.
In PyTorch, you can use standard Python debuggers, for example pdb or the one built into PyCharm.
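The dynamic-graph point can be sketched in a few lines. This is an illustrative example (not from the article): `DynamicNet` and its parameters are made up, but they show how `forward()` is plain Python, so its control flow can differ between calls and a debugger can step through it.

```python
# A minimal sketch of PyTorch's dynamically updated graph: the graph
# is rebuilt on every forward() call, so ordinary Python control flow
# (and breakpoints) work inside it.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x, n_repeats):
        # import pdb; pdb.set_trace()  # a breakpoint would work right here
        for _ in range(n_repeats):     # graph depth decided at call time
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
x = torch.randn(2, 4)
out_shallow = net(x, n_repeats=1)  # one-layer graph
out_deep = net(x, n_repeats=5)     # five-layer graph, same module
print(out_shallow.shape, out_deep.shape)
```

The same module builds a different graph on each call, which is exactly what a static-graph framework cannot do without special control-flow operators.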
What Is It Good For?

The process of training a neural network is simple and clear.
At the same time, PyTorch supports data parallelism and distributed training, and it ships with many pre-trained models.
PyTorch is much better suited for small projects and prototyping.
When it comes to cross-platform solutions, TensorFlow looks like a more suitable choice.
However, it is worth noting that the Caffe2 mobile framework, introduced in 2017, can be used for the same tasks.
Sonnet

Sonnet is a deep learning framework built on top of TensorFlow.
It was designed by the world-famous company DeepMind for creating neural networks with complex architectures.
Key Things to Know:

Sonnet provides high-level, object-oriented libraries that bring abstraction to the development of neural networks (NN) and other machine learning (ML) algorithms.
The idea of Sonnet is to construct the primary Python objects corresponding to a specific part of the neural network.
Further, these objects are independently connected to the computational TensorFlow graph.
Separating the process of creating objects and associating them with a graph simplifies the design of high-level architectures.
More information about these principles can be found in the framework documentation.
What Is It Good For?

The main advantage of Sonnet is that you can use it to reproduce the research demonstrated in DeepMind’s papers more easily than with Keras, since DeepMind uses Sonnet itself.
So, all in all, it’s a flexible tool for functional abstractions and absolutely a worthy opponent to TF and PyTorch.
Keras

Keras is a machine learning framework that might be your new best friend if you have a lot of data and/or you’re after the state of the art in AI: deep learning.
Plus, the high-level Keras shell is the most minimalist approach to using TensorFlow, Theano, or CNTK.
Key Things to Know:

Keras is usable as a high-level API on top of other popular lower-level libraries such as Theano and CNTK, in addition to TensorFlow.
Prototyping is made as easy as possible.
Creating massive deep learning models in Keras is reduced to single-line functions.
But this strategy makes Keras a less configurable environment than low-level frameworks.
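To show what “single-line functions” means in practice, here is a hedged sketch assuming the Keras API bundled with TensorFlow (tf.keras); the layer sizes are illustrative, not from the article.

```python
# A minimal tf.keras sketch: each layer is one line, compiling is one call.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.count_params())  # (8*32 + 32) + (32*1 + 1) = 321
```

Training would then be a single `model.fit(...)` call; the trade-off is that the knobs a low-level framework exposes are hidden behind these one-liners.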
What Is It Good For?

Keras is the best deep learning framework for those who are just starting out.
It’s ideal for learning and prototyping simple concepts, and for understanding the essence of the various models and how they are trained.
Keras is a beautifully written API.
The functional nature of the API supports you completely, yet gets out of your way for more exotic applications.
Keras does not block access to lower level frameworks.
Keras results in much more readable and succinct code.
Keras’s model serialization/deserialization APIs, callbacks, and data streaming using Python generators are very mature.
By the way, you cannot really compare Keras and TensorFlow, because they sit at different levels of abstraction.
P.S.: TensorFlow is at the lower level: this is where frameworks like MXNet, Theano, and PyTorch sit.
This is the level where mathematical operations like generalized matrix-matrix multiplication and neural network primitives like convolution operations are implemented.
Keras is at the higher level.
At this level, the lower-level primitives are used to implement neural network abstractions like layers and models.
Generally, other helpful APIs, such as model saving and model training, are also implemented at this level.
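The two levels can be illustrated without any real framework. In this plain-NumPy sketch (names like `gemm` and `Dense` are mine, chosen to mirror the text), the low-level primitive is a matrix multiplication and the high-level object is a Keras-style layer built on top of it.

```python
# An illustrative sketch of the two abstraction levels, in plain NumPy.
import numpy as np

def gemm(a, b):
    """Lower level: generalized matrix-matrix multiplication."""
    return a @ b

class Dense:
    """Higher level: a layer abstraction implemented via the primitive."""
    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_features, out_features))
        self.b = np.zeros(out_features)

    def __call__(self, x):
        # The layer is just GEMM + bias + ReLU, wrapped in an object.
        return np.maximum(gemm(x, self.w) + self.b, 0.0)

layer = Dense(4, 3)
out = layer(np.ones((2, 4)))
print(out.shape)
```

TensorFlow, Theano, and friends supply (highly optimized) versions of `gemm`; Keras supplies objects like `Dense` plus saving/training conveniences on top.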
MXNet

MXNet is a highly scalable deep learning tool that can be used on a wide variety of devices.
Although it does not yet appear to be as widely used as TensorFlow, MXNet’s growth will likely be boosted by its becoming an Apache project.
The main emphasis is that the framework parallelizes very effectively across multiple GPUs and many machines.
This, in particular, has been demonstrated by its use on Amazon Web Services.
What Is It Good For?

Support for multiple GPUs (with optimized computations and fast context switching)
Clean and easily maintainable code (Python, R, Scala, and other APIs)
Fast problem-solving ability (vital for newbies in deep learning, like me)

Although it is not as popular as TF, MXNet has detailed documentation and is easy to use, with the ability to choose between imperative and symbolic programming styles, making it a great candidate for both beginners and experienced engineers.
Gluon

Gluon is one more great deep learning framework that can be used to create simple as well as sophisticated models.
Key Things to Know:

The specificity of the Gluon project is a flexible interface that simplifies prototyping, building, and training deep learning models without sacrificing training speed.
Gluon is based on MXNet and offers a simple API that simplifies the creation of deep learning models.
Similar to PyTorch, the Gluon framework supports work with a dynamic graph, combining this with high-performance MXNet.
From this perspective, Gluon looks like an extremely interesting alternative to Keras for distributed computing.
What Is It Good For?

In Gluon, you can define neural networks using simple, clear, and concise code.
It brings together the training algorithm and the neural network model, thus providing flexibility in the development process without sacrificing performance.
Gluon lets you define neural network models that are dynamic, meaning they can be built on the fly, with any structure, using any of Python’s native control flow.
Swift

If you are into programming, when you hear Swift you probably think about app development for iOS or macOS.
If you’re into deep learning, then you must have heard about Swift for TensorFlow (abbreviated as S4TF).
By integrating directly with a general purpose programming language, Swift for TensorFlow enables more powerful algorithms to be expressed like never before.
Key Things to Know:

First-class autodiff.
Differentiable programming gets first-class support in a general-purpose programming language.
Take derivatives of any function, or make custom data structures differentiable, at your fingertips.
New APIs informed by the best practices of today, and the research directions of tomorrow, are both easier to use and more powerful.
Building on TensorFlow, the Swift APIs give you transparent access to all low-level TensorFlow operators.
Building upon Jupyter and LLDB, Swift in Colab improves your productivity with helpful tooling such as context-aware autocomplete.
What Is It Good For?

A great choice if dynamic languages are not a good fit for your tasks.
Suppose your training has been running for hours when your program encounters a type error and it all comes crashing down; enter Swift, a statically typed language.
You will know, before any line of code runs, that the types are correct.
Chainer

Until the advent of DyNet at CMU and PyTorch at Facebook, Chainer was the leading neural network framework for dynamic computation graphs, or nets that allow input of varying length, a popular feature for NLP tasks.
Key Things to Know:

The code is written in pure Python on top of the NumPy and CuPy libraries.
Chainer was the first framework to use a dynamic architecture model (as in PyTorch).
Chainer has several times beaten records for scaling efficiency when training neural networks.
What Is It Good For?

By its own benchmarks, Chainer is notably faster than other Python-oriented frameworks, with TensorFlow the slowest of a test group that includes MXNet and CNTK.
Better GPU and GPU data-center performance than TensorFlow (TensorFlow is optimized for the TPU architecture); recently, Chainer became the world champion for GPU data-center performance.
Good Japanese support.
An OOP-like programming style.
DL4J

Those who are on friendly terms with Java or Scala should pay attention to DL4J (short for Deep Learning for Java).
Key Things to Know:

Training of neural networks in DL4J is carried out in parallel, through iterations across clusters.
The process is supported by Hadoop and Spark architectures.
Using Java allows you to use the library in the development cycle of programs for Android devices.
What Is It Good For?

A very good platform if you are looking for a deep learning framework in Java.
ONNX

The ONNX project was born from a collaboration between Microsoft and Facebook in search of an open format for representing deep learning models.
ONNX simplifies the process of transferring models between different tools for working with artificial intelligence.
Thus, ONNX lets you combine the benefits of various deep learning frameworks.
Key Things to Know:

ONNX enables models to be trained in one framework and transferred to another for inference.
ONNX models are currently supported in Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch, and there are connectors for many other common frameworks and libraries.
What Is It Good For?

ONNX is a piece of good news for PyTorch developers.
But those who prefer to work with TensorFlow, Keras, etc. might have to wait a little.
So, Which Deep Learning Framework Should You Use?

I tried to give a comprehensive analysis of the best tools that I would undoubtedly recommend.
So what is my final advice? If you are just starting out and want to figure out what’s what, the best choice is Keras.
For research purposes, choose PyTorch.
For production, you need to focus on the environment.
So, for Google Cloud, the best choice is TensorFlow, for AWS — MXNet and Gluon.
Android developers should pay attention to DL4J; for iOS, a similar range of tasks is covered by Core ML.
Finally, ONNX will help with questions of interaction between different frameworks.
Hope you liked this post.
Feel free to share your ideas, thoughts, and suggestions below.
Check out my Instagram blog with daily posts on AI, ML & Data Science topics.