ONNX.js: Universal Deep Learning Models in The Browser
An Introduction to The Universal Open Standard Deep Learning Format and Using It In The Browser (ONNX/ONNX.js)
Will Badr · Jan 8
Photo by Franck V. on Unsplash
Running deep learning models on the client-end browser is not something new.
In early 2018, Google released TensorFlow.js.
In December 2017, Amazon, Facebook and Microsoft announced a collaborative initiative to make the Open Neural Network Exchange (ONNX) format production-ready.
ONNX now supports many top frameworks and runtimes, including Caffe2, MATLAB, Microsoft’s Cognitive Toolkit, Apache MXNet, PyTorch and NVIDIA’s TensorRT.
There are also early-stage converters from TensorFlow and CoreML to ONNX that can be used today.
In November 2018, ONNX.js was released. ONNX.js (something like TensorFlow.js, but for ONNX models) lets you score pre-trained ONNX models directly in the browser. Great news, right?

What is ONNX and Why is it So Cool?
When you develop a machine learning or deep learning model, you usually produce trained model artifacts that will then be used for prediction.
That file contains the neural network parameters and weights that you spent hours training.
Every platform usually has a different output format.
For example, MXNet CNN models are saved in (*.params and *.json) files. This format only works with the MXNet runtime for inference.
ONNX solves that problem: it makes deep learning models portable, so you can develop a model using MXNet, Caffe, or PyTorch and then use it on a different platform.
For example, you can develop an image classification model using PyTorch, export it to the ONNX format, convert it to CoreML, and deploy it on iPhone devices.
Why Deep Learning on The Browser?
There are some important benefits to running a DL model on the client-side device (browser):
1- Privacy: If the ML/DL model deals with sensitive and private data and you do not want to send that data to a server for inference, this is a good solution for you.
2- Low Latency: Having the model directly on the client side reduces the client-server communication overhead.
3- Cross-Platform: It doesn’t matter what operating system you are using, because the model runs in the browser. There is also no need to install any libraries.
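Since there is nothing to install, pulling ONNX.js into a page can be as small as one script tag. A minimal sketch; the exact CDN URL is an assumption, so check the onnxjs project README for the current one:

```html
<!-- Minimal page that loads ONNX.js from a CDN; no build step or install needed. -->
<html>
  <head>
    <!-- The CDN path below is an assumption; verify it against the onnxjs README. -->
    <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script>
  </head>
  <body>
    <script>
      // The library is now available as the global `onnx` object.
      const session = new onnx.InferenceSession();
    </script>
  </body>
</html>
```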
Benchmark Results:
Microsoft ran some benchmark tests against the TensorFlow.js and Keras.js libraries, and the results were stunning: ONNX.js can be more than 8 times faster (https://github.com/Microsoft/onnxjs).

Now, Let’s See Some Code:
Now, let’s build a simple image classifier using the pre-built Inception_v2 model in three simple steps.
You can download the pre-trained model from the ONNX model zoo.
The first step is to create an ONNX inference session with a WebGL or WebAssembly backend. In this example, I will use the WebGL backend, then load the model that I just downloaded using session.loadModel().
The second step is to process and resize the input image, then create a tensor out of it using the onnx.Tensor constructor.
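The resize-and-normalize step is plain JavaScript. A minimal sketch of turning RGBA pixel data (as returned by a canvas `getImageData` call) into the channel-planar float array that ONNX vision models expect; scaling to [0, 1] is an illustrative assumption, since each model documents its own preprocessing:

```javascript
// Convert RGBA pixel data into a Float32Array laid out as [3, H, W] (planar
// RGB, NCHW style), the layout most ONNX image models expect.
// Scaling each channel to [0, 1] is an assumption; check the model's docs.
function imageDataToTensorData(rgba, width, height) {
  const nPixels = width * height;
  const out = new Float32Array(3 * nPixels);
  for (let i = 0; i < nPixels; i++) {
    const r = rgba[i * 4] / 255;
    const g = rgba[i * 4 + 1] / 255;
    const b = rgba[i * 4 + 2] / 255; // the alpha channel (i * 4 + 3) is dropped
    out[i] = r;                      // red plane
    out[nPixels + i] = g;            // green plane
    out[2 * nPixels + i] = b;        // blue plane
  }
  return out;
}
```

The resulting array can be handed straight to the onnx.Tensor constructor along with the shape [1, 3, height, width].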
The third step is to get the predictions using the session.run() function and translate the output into the relevant class.
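Putting the three steps together, the whole flow might look like the sketch below. It assumes ONNX.js is loaded as the global `onnx` object and that the preprocessed pixels are already in a Float32Array; the model path, the 224x224 input size, and the softmax post-processing are illustrative assumptions rather than the article's exact snippet:

```javascript
// Sketch of the three steps, assuming ONNX.js provides the global `onnx`
// and `floatData` holds preprocessed pixels in [1, 3, 224, 224] layout.
async function classify(floatData) {
  // Step 1: create an inference session with the WebGL backend and load the model.
  const session = new onnx.InferenceSession({ backendHint: 'webgl' });
  await session.loadModel('./inception_v2.onnx'); // hypothetical local path

  // Step 2: wrap the preprocessed pixels in a tensor.
  const input = new onnx.Tensor(floatData, 'float32', [1, 3, 224, 224]);

  // Step 3: run the model; session.run resolves to a Map of output tensors.
  const outputMap = await session.run([input]);
  const scores = outputMap.values().next().value.data;
  return softmax(Array.from(scores));
}

// Numerically stable softmax: turns raw output scores into class probabilities.
function softmax(scores) {
  const max = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}
```

The index of the largest probability is then looked up in the model's class-label list (ImageNet labels for Inception_v2) to get a human-readable class name.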
Code Snippet for using ONNX.js
You can get the full example code from here.
There is also a website deployed with all of the demos, which you can try here.
Conclusion:
ONNX is a very powerful open standard format that makes model artifacts portable between platforms. You can still use your favourite framework for coding, then distribute your results so they work on any platform using the ONNX format. This is a great way to democratize machine and deep learning development and inference. There is definitely a lot more to it, but hopefully the information in this article has served as a brief and useful introduction to ONNX and ONNX.js.