Run Local LLMs on Any Device. Open-source and available for commercial use
A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list)
Ready-to-use OCR with 80+ supported languages
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
A library for accelerating Transformer models on NVIDIA GPUs
Operating LLMs in production
The official Python client for the Hugging Face Hub (see the download sketch after this list)
Uncover insights, surface problems, monitor, and fine-tune your LLM
Everything you need to build state-of-the-art foundation models
MII makes low-latency and high-throughput inference possible
Multilingual Automatic Speech Recognition with word-level timestamps
Library for OCR-related tasks powered by Deep Learning
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
Bring the notion of Model-as-a-Service to life
Large Language Model Text Generation Inference
The easiest and laziest way to build multi-agent LLM applications
Replace OpenAI GPT with another LLM in your app by changing a single line of code
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Uplift modeling and causal inference with machine learning algorithms
State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list)
Simplifies the local serving of AI models from any source
Optimizing inference proxy for LLMs
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Official inference library for Mistral models
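
To make the serving-engine entry above concrete, here is a minimal offline-inference sketch using vLLM's Python API. The model name `facebook/opt-125m` and the sampling settings are illustrative placeholders, not recommendations.

```python
from vllm import LLM, SamplingParams

# Load a (placeholder) model into the vLLM engine.
llm = LLM(model="facebook/opt-125m")

# Sampling settings are illustrative, not tuned.
params = SamplingParams(temperature=0.8, max_tokens=64)

# Batch generation: vLLM schedules prompts for high throughput.
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```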
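The Hugging Face Hub client entry can likewise be illustrated with its two most common download calls; the repository id below is just an example.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single file from a model repository (cached locally).
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")

# Or mirror an entire repository into the local cache.
repo_dir = snapshot_download(repo_id="bert-base-uncased")
print(config_path, repo_dir)
```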
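Finally, a sketch of what parameter-efficient fine-tuning looks like with PEFT: a base model is wrapped with a LoRA adapter so that only a small set of injected weights is trained. The base model and target modules are assumptions chosen for illustration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM from the Hub works the same way.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoRA config: rank-8 adapters on the attention projections (illustrative choice).
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Typically reports well under 1% of parameters as trainable.
model.print_trainable_parameters()
```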