Repositories list (103 repositories)
- This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
- Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset-level metadata, resource file descriptions, data structure, and default ML semantics.
- MLPerf™ Storage Benchmark Suite
- MLCFlow: Simplifying MLPerf Automations
- A generalizable application framework for segmentation, regression, and classification using PyTorch
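The four Croissant layers mentioned above can be sketched as a single JSON-LD file. This is a minimal, hedged illustration: the dataset name, URL, and field values are hypothetical, and the property names follow the published Croissant vocabulary only approximately.

```json
{
  "@context": {
    "@vocab": "https://schema.org/",
    "cr": "http://mlcommons.org/croissant/"
  },
  "@type": "Dataset",
  "name": "example-dataset",
  "description": "Hypothetical dataset used to illustrate the four layers.",

  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "data.csv",
      "contentUrl": "https://example.com/data.csv",
      "encodingFormat": "text/csv"
    }
  ],

  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "name": "records",
      "field": [
        {
          "@type": "cr:Field",
          "name": "label",
          "dataType": "Text"
        }
      ]
    }
  ]
}
```

Here the top-level keys carry the dataset metadata, `distribution` describes the resource files, `recordSet` describes the data structure, and field-level annotations are where ML semantics (splits, labels) would attach.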
submissions_algorithms
policies
- Collective Knowledge (CK), Collective Mind (CM/CMX), and MLPerf automations: community-driven projects that facilitate collaborative, reproducible research and show how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf methodology and benchmarks
GaNDLF-Synth
- CM interface and automation recipes to analyze MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results, and to add derived metrics such as Performance/Watt or Performance/$.
- Legacy CM repository with a collection of portable, reusable, cross-platform CM automations for MLOps and MLPerf that simplify building, benchmarking, and optimizing AI systems across diverse models, datasets, software, and hardware
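As an illustration of the derived metrics mentioned above, Performance/Watt is simply measured throughput divided by average power draw. The sketch below is ours, not code from the repository; the function name and signature are hypothetical.

```python
def performance_per_watt(samples_per_second: float, avg_power_watts: float) -> float:
    """Derived efficiency metric: benchmark throughput per watt of average power.

    Both inputs are assumed to come from the same measured benchmark run,
    e.g. samples/s from the MLPerf result and watts from a power log.
    """
    if avg_power_watts <= 0:
        raise ValueError("average power must be positive")
    return samples_per_second / avg_power_watts


# A system sustaining 10,000 samples/s at 400 W averages 25 samples/s per watt.
print(performance_per_watt(10_000, 400.0))  # → 25.0
```

Performance/$ follows the same pattern with system cost in the denominator, which is why such metrics can be derived after the fact from published results plus one extra measured or reported quantity.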
hpc