Machine Learning Community Standup - Deep Learning with PyTorch & ONNX
The "Machine Learning Community Standup - Deep Learning with PyTorch & ONNX" video covers various topics related to machine learning, PyTorch, and ONNX. One section covers overfitting and how to prevent it in neural networks by using dropout and cross-validation. The hosts also highlight various community-based machine learning projects and their upcoming events on using .NET with machine learning. The video also introduces PyTorch, a popular machine learning library used for computer vision and natural language processing, with various built-in modules such as torch vision and transforms. The speakers explain the ONNX format for representing machine learning models and its runtime for running inference and training in multiple languages. The tutorial also discusses how to use pre-built models in PyTorch's model zoo and covers debugging and managing Python packages and environments using Jupyter Notebooks and Anaconda. Additionally, the tutorial covers the details of training and exporting a PyTorch model using ONNX, which can be used with ONNX runtime to improve models' performance.
The video also touches on several related topics. The speakers discuss using SkiaSharp for image processing in Xamarin, and the size limitations of on-device models while noting the benefits of running models on-device. They suggest resources for learning machine learning theory, such as Andrew Ng's Coursera class, and an applied machine learning class that gives high-level guidance on using tools and libraries to build models. They also stress the importance of having a concrete goal when learning machine learning and of incorporating that learning into one's job. Lastly, the speakers hint at upcoming content that may interest the audience.
Object-Detection Yolov7, ML.NET onnx model
https://github.com/ptiszai/Object-Detection-yolov7-ML.NET-onnx
Implement Yolo3 Real-time using C#
https://github.com/duonghb53/YoloOnCSharpGPU
Face Detection Using C# and OpenCVSharp - Practical ML.NET User Group 01/19/2022
In this video tutorial on face detection with C#, the speaker introduces OpenCVSharp, a .NET wrapper around the open-source OpenCV computer vision library. The video discusses using different classifiers for detection, including eye detection, and the importance of experimenting with classifier selection. The tutorial walks the viewer through building a program for face and eye detection from a webcam feed, with the aid of code snippets, Visual Studio, and .NET Interactive notebooks. It also elaborates on details such as overlaying transparent images and handling Mat objects properly. The speaker praises OpenCVSharp's ease of use, speed, and compatibility with .NET, but notes the scarcity of examples and uncertain long-term support.
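The overlay step mentioned above (compositing a transparent image onto a frame) follows the standard alpha-blending formula. The video does this in C# with OpenCVSharp Mat objects; here is a minimal NumPy sketch of the same idea, with the function name and image shapes invented for illustration:

```python
import numpy as np

def overlay_rgba(background, foreground_rgba):
    """Alpha-composite an RGBA image over an RGB background of the same size."""
    fg = foreground_rgba[..., :3].astype(np.float32)
    alpha = foreground_rgba[..., 3:4].astype(np.float32) / 255.0
    bg = background.astype(np.float32)
    out = alpha * fg + (1.0 - alpha) * bg   # per-pixel linear blend
    return out.astype(np.uint8)

# 1x2 example: an opaque red pixel replaces the background,
# while a fully transparent pixel leaves it unchanged.
bg = np.zeros((1, 2, 3), dtype=np.uint8)
fg = np.array([[[255, 0, 0, 255], [255, 0, 0, 0]]], dtype=np.uint8)
out = overlay_rgba(bg, fg)
print(out[0, 0], out[0, 1])  # → [255 0 0] [0 0 0]
```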
Predicting on a Custom Vision ONNX Model with ML.NET
In this YouTube video, the presenter discusses using ML.NET to make predictions with a custom vision ONNX model. This involves exporting the model from the Custom Vision service and importing it into the ML.NET project. The implementation includes resizing images, extracting image pixels, creating a data context and an empty data list to load the image data, using the ML.NET framework to run predictions against the model, and outputting the results. The video also demonstrates how to find a model's output name using the Netron model viewer, and how to obtain bounding-box information from the model for a given test image. The presenter then draws a rectangle around the bounding box and displays the predicted labels using the Graphics API. Image resizing and wiring the ONNX model into the ML.NET API are emphasized as the most significant parts of the implementation.
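The bounding-box step above involves mapping boxes from the model's input coordinates back onto the original image before drawing. The video performs this in C# with the Graphics API; the following small Python helper illustrates the same rescaling idea, with the function name and sizes chosen purely for illustration:

```python
def scale_box(box, model_size, image_size):
    """Map a bounding box from model input coordinates to original image pixels.

    box:        (x, y, width, height) in model coordinates
    model_size: (w, h) of the network input, e.g. (416, 416)
    image_size: (w, h) of the original image
    """
    sx = image_size[0] / model_size[0]
    sy = image_size[1] / model_size[1]
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)

# Mapping a 416x416 model box onto an 832x416 photo doubles the x-axis values.
print(scale_box((100, 50, 40, 40), (416, 416), (832, 416)))
# → (200.0, 50.0, 80.0, 40.0)
```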
Making neural networks portable with ONNX
In this YouTube video, Ron Dagdag explains how to make neural networks portable with ONNX, focusing on the inferencing side of machine learning. ONNX is an open format that allows machine learning models to be moved across processing units and devices. The speaker discusses converting models to ONNX, deploying and integrating them with applications, and using them for cloud and edge deployment. He also demonstrates loading an ONNX model in Node.js and integrating image classification models into web and mobile applications using ONNX Runtime. Because ONNX is an open standard, models created in many different frameworks can be deployed efficiently on the target platform.
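Whatever the host language (Node.js, web, or mobile), integrating a classification model usually includes a postprocessing step that turns the model's raw outputs into class probabilities. A minimal Python sketch of that softmax step, with illustrative values:

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities that sum to 1."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Example: the class with the highest logit gets the highest probability.
probs = softmax([2.0, 1.0, 0.1])
print(max(range(len(probs)), key=probs.__getitem__))  # → 0 (index of top class)
```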
On .NET Live - AI Everywhere: Azure ML and ONNX Runtime
The video "On .NET Live - AI Everywhere: Azure ML and ONNX Runtime" focuses on using Azure ML and ONNX Runtime for machine learning with C#. The speakers discuss the benefits of using ONNX format for exporting models across programming languages, the ONNX runtime's optimization for hardware acceleration and inference, and its compatibility with specific versions of the framework. They also showcase how to use ONNX Runtime with Azure ML in Python and .NET, create and train a neural network model, and explain inferencing and its final step in machine learning. The video concludes with the introduction of a new provider for the ONNX runtime that allows for the use of OpenVINO for the ARM CPU, providing debugging capabilities.
In this section of the video, the hosts discuss the flexibility and configurability of ONNX Runtime and its ability to run on a variety of hardware and software platforms. They see it as a great abstraction over different platforms: customers can use it in the cloud or on Android, iOS, or Snapdragon CPUs, and it allows for faster inferencing.
Berlin Buzzwords 2019: Lester Solbakken – Scaling ONNX and TensorFlow model evaluation in search
Lester Solbakken discusses the challenges of scaling machine learning for search applications and proposes an alternative solution to using external model servers. He suggests evaluating machine learning models on content nodes, rather than sending data to external model servers, to improve scalability and control latency and throughput. Solbakken highlights Vespa's use of its own ranking language and tensor API extension to make it easy to create a declarative package of state for an application, and the ongoing effort to support machine learning models in Vespa. He emphasizes the importance of understanding the correlation between different phases of ranking to avoid system-level retrieval issues and encourages people to contribute to the open-source project.
Assimilate ONNX
In this video, the presenter introduces ONNX as an open standard for machine learning interoperability that works across platforms. They go through creating an ONNX project from scratch, tweaking an example from the Microsoft repo, troubleshooting issues, and exploring other ONNX-related GitHub projects. They then test an ONNX binding using GPT-2 and CUDA, expressing interest in exploring the ONNX Runtime Rust bindings further in the future. The presenter notes the versatility and portability of ONNX and sees it as a good tool for experimentation and for building more substantial projects.
HITNET Vs. ACVNet Neural Stereo Depth Estimation Comparison (ONNX)
A comparison of the HITNET and ACVNet stereo depth estimation models on the Driving Stereo dataset.
Model Inference details (NVIDIA 1660 SUPER):
HITNET (640x480): 220 ms
ACVNet (640x384): 480 ms
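For context, the reported per-frame latencies translate into approximate throughput as follows (simple arithmetic on the figures above, not additional benchmark data):

```python
# Convert the reported per-frame latencies into approximate frames per second.
timings_ms = {"HITNET (640x480)": 220, "ACVNet (640x384)": 480}

for name, ms in timings_ms.items():
    fps = 1000.0 / ms
    print(f"{name}: {fps:.1f} FPS")  # → roughly 4.5 FPS and 2.1 FPS
```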
References:
[HITNET Inference] https://github.com/ibaiGorordo/ONNX-HITNET-Stereo-Depth-estimation
[ACVNet Inference] https://github.com/ibaiGorordo/ONNX-ACVNet-Stereo-Depth-Estimation
[Driving Stereo dataset] https://drivingstereo-dataset.github.io/