GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. (C++, updated Feb 21, 2025)
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and production deployment of LLM applications).
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Run any open-source LLMs, such as DeepSeek and Llama, as OpenAI compatible API endpoint in the cloud.
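An "OpenAI-compatible API endpoint" means the server accepts the same request shape as OpenAI's Chat Completions API, so existing clients work unchanged against a local model. A minimal sketch of building such a request body, assuming a hypothetical server at `localhost:8000` (the URL and model name are illustrative, not from any specific project above):

```python
import json

# Assumption: a local OpenAI-compatible server would listen here and
# expose the standard POST /v1/chat/completions route.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> str:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    payload = {
        "model": model,                                      # served model name
        "messages": [{"role": "user", "content": prompt}],   # chat-format turns
        "temperature": temperature,                          # sampling temperature
    }
    return json.dumps(payload)

body = build_chat_request("deepseek-r1", "Hello!")
print(body)
```

Because the request schema is shared, swapping between a cloud provider and a locally served open-source model is just a matter of changing `BASE_URL` and `model`.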
Official inference library for Mistral models
High-speed Large Language Model Serving for Local Deployment
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Superduper: Build end-to-end AI applications and agent workflows on your existing data infrastructure and preferred tools - without migrating your data.
Standardized Serverless ML Inference Platform on Kubernetes
📖A curated list of Awesome LLM/VLM Inference Papers with codes: WINT8/4, Flash-Attention, Paged-Attention, Parallelism, etc. 🎉🎉
Sparsity-aware deep learning inference runtime for CPUs
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
Eko (Eko Keeps Operating) - Build Production-ready Agentic Workflow with Natural Language - eko.fellou.ai
Code examples and resources for DBRX, a large language model developed by Databricks
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
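The core idea behind Medusa-style decoding is that extra decoding heads cheaply draft several future tokens in one step, and the base model then verifies them, accepting the longest prefix that matches what it would have generated anyway. A toy sketch of that acceptance rule (illustrative only, not the Medusa repo's actual API):

```python
def longest_accepted_prefix(drafted: list[int], verified: list[int]) -> list[int]:
    """Accept drafted tokens one by one while they match the base model's
    greedy choices; stop at the first mismatch. Every accepted token is one
    decoding step saved."""
    accepted = []
    for d, v in zip(drafted, verified):
        if d != v:
            break  # first disagreement: fall back to normal decoding from here
        accepted.append(d)
    return accepted

# Tokens drafted by the extra heads vs. the base model's own greedy picks:
print(longest_accepted_prefix([5, 9, 2, 7], [5, 9, 4, 7]))  # -> [5, 9]
```

Output quality is preserved because only tokens the base model agrees with are kept; the speedup comes from verifying several drafted tokens in a single forward pass.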
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
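Multi-LoRA serving scales because every fine-tune shares one base model and only a small low-rank adapter is swapped in per request. A deliberately reduced sketch with scalar stand-ins for the weight matrices (the adapter names and `forward` helper are hypothetical, not the server's real API):

```python
# One shared base weight serves all tenants; adapters are tiny per-tenant deltas.
BASE_W = 2.0  # scalar stand-in for the base model's weight matrix W

ADAPTERS = {  # adapter_id -> (A, B, scale): low-rank factors, scalars here
    "customer-a": (0.5, 0.2, 1.0),
    "customer-b": (0.1, 0.3, 2.0),
}

def forward(x: float, adapter_id: str) -> float:
    """LoRA-style output: W*x plus the selected adapter's update scale*B*A*x."""
    a, b, scale = ADAPTERS[adapter_id]
    return BASE_W * x + scale * b * (a * x)

print(forward(10.0, "customer-a"))  # 2.0*10 + 1.0*0.2*0.5*10 = 21.0
```

In a real server the base weights stay resident on the GPU while thousands of adapters, each a tiny fraction of the model's size, are paged in per request, which is what makes serving "1000s of fine-tuned LLMs" feasible on shared hardware.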
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
FlashInfer: Kernel Library for LLM Serving