What is ONNX Runtime (ORT)?
ONNX Runtime (ORT) is a library that optimizes and accelerates machine learning inferencing, allowing users to train a model in any supported machine learning framework, export it to the ONNX format, and run inference in their preferred language. The speaker walks through an example of exporting a PyTorch model and running it with ONNX Runtime, and points out that users can visit ONNXRuntime.ai to explore the APIs and tools available for their preferred setup.
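As a concrete illustration of that workflow, here is a minimal sketch (assuming PyTorch, NumPy, and the onnxruntime Python package are installed; the tiny two-layer model and file name are hypothetical) of exporting a PyTorch model to ONNX and running it with ONNX Runtime:

```python
# Minimal sketch: define/train a model in PyTorch, export it to ONNX,
# then run inference with ONNX Runtime. The toy network is illustrative only.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the ONNX format with named inputs/outputs.
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported model with ONNX Runtime.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(1, 4).astype(np.float32)
outputs = sess.run(None, {"input": x})
print(outputs[0].shape)  # (1, 2)
```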
2020 ONNX Roadmap Discussion #1 20200903
The ONNX roadmap document, which has been open to public contributions, is the key topic of this video. The discussion covers extending ONNX along the machine learning pipeline, including evolving data, pre-processing, and extending ONNX into horizontal pipelines such as Kubeflow. Suggestions from contributors include supporting data frames and adding new operators for pre-processing. The speakers also discuss adopting the Python data API standard to broaden ONNX's support and guarantee interoperability with other libraries, as well as integrating ONNX into Kubernetes and Kubeflow to streamline ML development for users. The group plans to keep assessing the impact of the proposals and welcomes feedback through the roadmap document or the Steering Committee.
2020 ONNX Roadmap Discussion #2 20200909
In the "ONNX Roadmap Discussion" video, the speakers discuss various topics related to ONNX's roadmap, including shape inference, operator definitions, reference implementations, and the ONNX spec. The speakers suggest building a generic shape inference infrastructure to improve shape inference optimization, reducing the number of primitive operators, adding reference implementations for every operator, and better-defined test cases to ensure proper implementation and testing of ONNX. The group plans to continue discussions within the operator SIG and on the GitHub discussions board for adding a new operator.
2020 ONNX Roadmap Discussion #3 20200916
The discussion in this video centers on improving error handling, adding a predefined metadata schema field to record how a model was created, the need for quantization and physical optimization, and the possibility of updating the ONNX Model Zoo models to the most recent versions. The team plans to prioritize these topics by impact and cost and to work on them after the 1.8 release. The group also considers creating additional language bindings for the ONNX tooling, with particular interest in Java in order to support platforms such as Spark, and discusses the possibility of a Java wrapper around ONNX Runtime.
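The metadata idea can be pictured with the metadata_props field that ONNX models already carry; the sketch below attaches creation information to a model and reads it back (the file names and keys are illustrative, not a standardized schema):

```python
# Minimal sketch of attaching free-form metadata to an existing ONNX model
# via metadata_props; keys and file names are made up for illustration.
import onnx

model = onnx.load("model.onnx")

meta = model.metadata_props.add()
meta.key, meta.value = "creation_date", "2020-09-16"
meta = model.metadata_props.add()
meta.key, meta.value = "created_by", "example-training-pipeline"

onnx.save(model, "model_with_metadata.onnx")

# Consumers (including runtimes and model hubs) can read the keys back later.
for prop in onnx.load("model_with_metadata.onnx").metadata_props:
    print(prop.key, "=", prop.value)
```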
2020 ONNX Roadmap Discussion #4 20200923
The fourth part of the ONNX roadmap discussion covers data-frame support, pre-processing, standardization, the end-to-end machine learning pipeline, and tool recommendations. Data-frame support is judged valuable for classical machine learning models and could remove the need for separate pre-processing code. Capturing pre-processing inside the ONNX model is highlighted as a way to improve performance, with a focus on standardizing high-level categories such as image processing. The end-to-end pipeline is rated a low priority, though gradually adding components to the pipeline is suggested. The discussion concludes with a recommendation to use a tool to aid further discussion and analysis of the agenda items.
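One way to picture pre-processing captured within the ONNX model is through an existing converter; the sketch below uses scikit-learn and skl2onnx purely as an illustration (neither tool is prescribed by the discussion) to export a scaler-plus-classifier pipeline as a single ONNX graph:

```python
# Minimal sketch: capture pre-processing (scaling) and a classifier in one
# ONNX model, so no separate pre-processing code is needed at inference time.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Toy training data, for illustration only.
X = np.random.randn(100, 4).astype(np.float32)
y = (X[:, 0] > 0).astype(np.int64)

pipeline = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression())])
pipeline.fit(X, y)

# The exported graph contains both the scaling step and the classifier.
onnx_model = convert_sklearn(
    pipeline, initial_types=[("input", FloatTensorType([None, 4]))])
with open("pipeline.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```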
2020 ONNX Roadmap Discussion #5 20201001
During this roadmap discussion, the ONNX team went through features suggested by community members and scored by different reviewers, including the Steering Committee. Some features had unanimous support, while others split the community. The team discussed moving from a single ONNX IR to multiple IRs, centralizing optimization libraries within ONNX, and requiring ops to implement a standard interface and coding style. They also debated providing a simple runtime for ONNX models and using custom Python ops for cases where an ONNX runtime is not available. Finally, they explored the relationship between pre-processing operations and data frames, planning to turn these ideas into actionable proposals for future work.
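For the "simple runtime" idea, a present-day point of reference is the pure-Python reference evaluator that ships with the onnx package (onnx 1.13+); the sketch below is only an example of such a minimal runtime, not an outcome of this discussion, and assumes a model.onnx produced elsewhere with an input named "input":

```python
# Minimal sketch of running a model with the pure-Python reference evaluator
# bundled with the onnx package; file name and input name are assumptions.
import numpy as np
from onnx.reference import ReferenceEvaluator

sess = ReferenceEvaluator("model.onnx")
outputs = sess.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0])
```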
2021 ONNX Roadmap Discussion #1 20210908
During this session, IBM Research presented their proposal for a new machine learning pipeline framework that converts typical data pre-processing patterns on a pandas DataFrame into ONNX. The framework, called Data Frame Pipeline, is open-sourced on GitHub and is defined through its Python API at training time. The speakers also discussed making ONNX usable from languages other than Python, such as Java, C#, and C++, and exporting and emitting ONNX models from those languages. In addition, they reviewed the current functionality of the ONNX Python and C++ converters and the need for scoping, naming, and patching facilities when writing ONNX models.
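The "typical data pre-processing patterns on a pandas DataFrame" can be pictured with a small example; the snippet below is an illustrative pandas transformation of the kind such a framework would trace and convert, not the Data Frame Pipeline API itself (column names and transforms are made up):

```python
# Illustrative pandas pre-processing of the sort a dataframe-to-ONNX pipeline
# framework would need to capture at training time.
import pandas as pd

df = pd.DataFrame({"age": [25, 40, 31],
                   "income": [30_000, 85_000, 52_000],
                   "city": ["NYC", "SF", "NYC"]})

# Typical training-time transformations: normalization and one-hot encoding.
df["income_scaled"] = (df["income"] - df["income"].mean()) / df["income"].std()
df = pd.get_dummies(df, columns=["city"])
print(df.head())
```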
2021 ONNX Roadmap Discussion #2 20210917
In ONNX Roadmap Discussion #2 (20210917), the speakers covered several areas where ONNX needs improvement, including quantization and fusion friendliness, optimizing kernels for specific hardware platforms, and adding model-local functions to ONNX. Other topics included feedback on end-to-end pipeline support, challenges clients face on different platforms, and issues converting GRU and LSTM graphs. Suggested solutions included providing backends with more information to execute pre-quantized graphs, improving interoperability between frameworks, and including a namespace tied to the original graph so that both a general and an optimized solution remain possible. The speakers also discussed better packaging for wider adoption and the potential for more converters to support multi-modal models.
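Model-local functions can be sketched with the onnx helper API (available since ONNX 1.10); in the sketch below the "MulAdd" function and "custom.local" domain are made up for illustration:

```python
# Minimal sketch of a model-local function: the main graph calls "MulAdd"
# as if it were a single operator, and the function body travels with the model.
import onnx
from onnx import helper, TensorProto

# Function body: y = x * a + b
fn = helper.make_function(
    domain="custom.local",
    fname="MulAdd",
    inputs=["x", "a", "b"],
    outputs=["y"],
    nodes=[
        helper.make_node("Mul", ["x", "a"], ["t"]),
        helper.make_node("Add", ["t", "b"], ["y"]),
    ],
    opset_imports=[helper.make_opsetid("", 15)],
)

# Main graph with one call to the local function.
tensors = [helper.make_tensor_value_info(n, TensorProto.FLOAT, [None, 4])
           for n in ("X", "A", "B", "Y")]
call = helper.make_node("MulAdd", ["X", "A", "B"], ["Y"], domain="custom.local")
graph = helper.make_graph([call], "main", tensors[:3], tensors[3:])

model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 15),
                   helper.make_opsetid("custom.local", 1)],
)
model.functions.extend([fn])
onnx.checker.check_model(model)
```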
2021 ONNX Roadmap Discussion #3 20210922
During this session, the speakers addressed the need to fix issues with ONNX's opset conversion tool to improve adoption of ONNX with the latest optimized stacks for certain use cases. They proposed better model coverage for testing opset conversion and resolving intermediate steps that are currently missing from operator and layer tests. They also discussed metadata and federated learning infrastructure, including adding metadata to the ONNX spec for transfer-learning annotations and supporting federated learning for privacy, efficiency, and better use of computational resources. The speakers encouraged community collaboration and requested feedback to refine and implement these ideas. The next session is scheduled for October 1st.
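The opset conversion path being discussed is exposed in the onnx package as the version converter; the sketch below (the file name and target opset 13 are illustrative) converts an existing model and re-validates it:

```python
# Minimal sketch of converting an existing model to a different opset with the
# built-in version converter, then re-checking the result.
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")
print("original opset:", model.opset_import[0].version)

converted = version_converter.convert_version(model, 13)
onnx.checker.check_model(converted)
onnx.save(converted, "model_opset13.onnx")
```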
ONNX Community Virtual Meetup – March 2021
ONNX 20211021 ONNX SC Welcome Progress Roadmap Release
The ONNX workshop started with an introduction, where the organizers emphasized the importance of community participation in the growth of the ONNX ecosystem. They also provided an overview of the agenda, which included updates on ONNX statistics, community presentations, and the roadmap discussions of ONNX's Steering Committee. The roadmap proposals are aimed at improving the support, robustness, and usability of the ONNX framework, and include pre-processing operators, C APIs, federated learning, and better integration of data processing and inference. The recent release of version 1.10 of the ONNX specs was also discussed, and attendees were encouraged to ask questions and participate in the ONNX Slack channel to continue the conversation.