Samples catalog
This page catalogs code samples for ONNX Runtime, covering scenarios that run locally and on Azure, in the cloud and on the edge.
Contents
- Python
- C/C++
- C#
- Java
- Node.js
- Azure Machine Learning
- Hugging Face
- Azure IoT Edge
- Azure Media Services
- Azure SQL
- Windows Machine Learning
- ML.NET
Python
- Basic inference (a minimal sketch follows this list)
- ResNet50 inference
- Inference samples with ONNX-Ecosystem Docker image
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- NUPHAR Execution Provider samples
- scikit-learn (SKL) tutorials
- Keras - Basic
- SSD MobileNet (TensorFlow)
- BERT-SQuAD (PyTorch) on CPU
- BERT-SQuAD (PyTorch) on GPU
- BERT-SQuAD (Keras)
- BERT-SQuAD (TensorFlow)
- GPT-2 (PyTorch)
- EfficientDet (TensorFlow)
- EfficientNet-Edge (TensorFlow)
- EfficientNet-Lite (TensorFlow)
- EfficientNet (Keras)
- MNIST (Keras)
- BERT Quantization on CPU
- Get started with training
- Train NVIDIA BERT transformer model
- Train Hugging Face GPT-2 model
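The basic inference pattern shared by most of the Python samples above looks roughly like the sketch below. The model path, input shape, and provider list here are illustrative assumptions (an ImageNet-style model such as ResNet50), not taken from any specific sample:

```python
import numpy as np
import onnxruntime as ort

# Load the model; "resnet50.onnx" is a placeholder path.
session = ort.InferenceSession("resnet50.onnx", providers=["CPUExecutionProvider"])

# Query the input name from the model instead of hard-coding it.
input_name = session.get_inputs()[0].name

# Dummy batch shaped for an ImageNet-style model (N, C, H, W);
# replace with real preprocessed data.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None as the output list returns every model output.
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```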
C/C++
- C: SqueezeNet
- C++: model-explorer - single and batch processing
- C++: SqueezeNet
C#
Java
Node.js
Azure Machine Learning
Inference and deployment through AzureML
For additional information on training in AzureML, see the AzureML Training Notebooks
- Inferencing on CPU using ONNX Model Zoo models
- Inferencing on CPU with PyTorch model training
- Inferencing on CPU with model conversion for an existing (CoreML) model
- Inferencing on GPU with TensorRT Execution Provider (AKS); see the sketch after this list
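As a rough sketch of how the TensorRT sample above selects its execution provider: ONNX Runtime takes an ordered provider list and falls back down it per operator. The model path is a placeholder, and this assumes an onnxruntime-gpu build with TensorRT support:

```python
import onnxruntime as ort

# Providers are tried in priority order; operators TensorRT cannot handle
# fall back to CUDA and then CPU. "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

# Confirm which providers were actually enabled for this session.
print(session.get_providers())
```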
Hugging Face
Azure IoT Edge
Azure Media Services
Azure SQL
Deploy an ONNX model in Azure SQL Edge
Windows Machine Learning
Examples of inferencing with ONNX Runtime through Windows Machine Learning