Training and deploying machine learning models on Amazon SageMaker
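For example, a minimal managed-training sketch with the SageMaker Python SDK; the entry script, IAM role ARN, and S3 path below are placeholders:

```python
from sagemaker.pytorch import PyTorch

# Hypothetical entry script, IAM role ARN, and S3 data path.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.1",
    py_version="py310",
)

# Launches a managed training job; the "training" channel is exposed
# inside the container via the SM_CHANNEL_TRAINING environment variable.
estimator.fit({"training": "s3://my-bucket/train-data"})
```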
A high-throughput and memory-efficient inference and serving engine for LLMs
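A minimal offline-batching sketch with vLLM's Python API; the model ID is just a small placeholder:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF-compatible model ID works
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched offline generation; vLLM schedules prompts for high throughput.
outputs = llm.generate(["The capital of France is", "To bake bread, you"], params)
for out in outputs:
    print(out.outputs[0].text)
```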
The official Python client for the Hugging Face Hub
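A quick sketch of two common calls, downloading a file from a repo and searching models:

```python
from huggingface_hub import hf_hub_download, list_models

# Download one file from a model repo (cached locally on disk).
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)

# Search the Hub for models matching a term.
for m in list_models(search="sentiment", limit=3):
    print(m.id)
```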
Run Local LLMs on Any Device. Open-source and available for commercial use.
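A minimal sketch with the GPT4All Python bindings; the model filename is one example, and the weights download on first use:

```python
from gpt4all import GPT4All

# The model file downloads on first use; this filename is one example.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("Name three primary colors.", max_tokens=64))
```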
Single-cell analysis in Python
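A sketch of a standard single-cell workflow in Scanpy on its bundled PBMC 3k demo data (Leiden clustering assumes `leidenalg` is installed):

```python
import scanpy as sc

# Bundled PBMC 3k demo dataset (downloads on first use).
adata = sc.datasets.pbmc3k()

# Standard preprocessing: normalize, log-transform, pick variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# PCA, neighborhood graph, clustering, and a 2-D embedding.
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```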
Ready-to-use OCR with 80+ supported languages
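A minimal sketch; `sample.png` is a placeholder image path:

```python
import easyocr

# Detection/recognition models for English download on first use.
reader = easyocr.Reader(["en"])

# readtext returns (bounding box, text, confidence) triples.
for bbox, text, confidence in reader.readtext("sample.png"):
    print(f"{text!r} (confidence {confidence:.2f})")
```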
A unified framework for scaling AI and Python applications
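A minimal Ray tasks sketch, running a function in parallel on a local cluster:

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # Executed as a task on a Ray worker process.
    return x * x

# Launch tasks in parallel, then gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```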
Gaussian processes in TensorFlow
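A minimal GP-regression sketch on toy 1-D data:

```python
import numpy as np
import gpflow

# Toy 1-D regression data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 1))
Y = np.sin(10 * X) + 0.1 * rng.standard_normal((20, 1))

# Exact GP regression with a squared-exponential kernel.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Posterior mean and variance at a test input.
mean, var = model.predict_f(np.array([[0.5]]))
print(mean.numpy(), var.numpy())
```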
Everything you need to build state-of-the-art foundation models
A Pythonic framework to simplify AI service building
FlashInfer: Kernel Library for LLM Serving
Uplift modeling and causal inference with machine learning algorithms
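A sketch of average-treatment-effect estimation with an S-learner on CausalML's synthetic data, following its quick-start API:

```python
from causalml.dataset import synthetic_data
from causalml.inference.meta import LRSRegressor

# Synthetic data with a known treatment effect.
y, X, treatment, tau, b, e = synthetic_data(mode=1, n=1000, p=5, sigma=1.0)

# S-learner with linear regression: average treatment effect plus bounds.
ate, lower, upper = LRSRegressor().estimate_ate(X, treatment, y)
print(ate, (lower, upper))
```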
Phi-3.5 for Mac: Locally-run Vision and Language Models
Libraries for applying sparsification recipes to neural networks
Adversarial Robustness Toolbox (ART) - Python library for machine learning security
Operating LLMs in production
DoWhy is a Python library for causal inference
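A sketch of DoWhy's model-identify-estimate-refute workflow on its simulated linear dataset:

```python
import dowhy.datasets
from dowhy import CausalModel

# Simulated data with a known causal graph.
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
)

# Model -> identify -> estimate -> refute.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)

# Sanity check: the effect should vanish under a placebo treatment.
print(model.refute_estimate(estimand, estimate, method_name="placebo_treatment_refuter"))
```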
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models
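Such servers can be called with the stock `openai` client by overriding the base URL; the port and model name below are placeholders:

```python
from openai import OpenAI

# Point the standard client at a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```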
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
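A client-side sketch, assuming a text-generation-inference server already running on localhost:8080; it uses `InferenceClient` from huggingface_hub:

```python
from huggingface_hub import InferenceClient

# Assumes `text-generation-inference` is already serving on this URL.
client = InferenceClient("http://localhost:8080")

# One-shot generation.
print(client.text_generation("What is deep learning?", max_new_tokens=64))

# Token-by-token streaming.
for token in client.text_generation("Explain KV caching:", max_new_tokens=64, stream=True):
    print(token, end="", flush=True)
```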
State-of-the-art Parameter-Efficient Fine-Tuning
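A minimal LoRA sketch with PEFT; the base model and target module names are placeholders appropriate for OPT:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Wrap a base model with LoRA adapters; only adapter weights are trained.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # trainable vs. total parameter counts
```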
The Triton Inference Server provides an optimized cloud and edge inferencing solution
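A client-side sketch with `tritonclient`, assuming a server on localhost:8000 and a hypothetical model `my_model` with one FP32 input `INPUT0` and one output `OUTPUT0`:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical model "my_model" with one FP32 input and one output.
data = np.ones((1, 16), dtype=np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```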
Efficient few-shot learning with Sentence Transformers
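A few-shot classification sketch with SetFit (trainer class names vary slightly across versions):

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# A tiny labeled set: few-shot binary sentiment classification.
train_ds = Dataset.from_dict({
    "text": ["great product", "terrible support",
             "works as advertised", "broke in a day"],
    "label": [1, 0, 1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

print(model.predict(["fast shipping, works great", "refund took weeks"]))
```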
Library for OCR-related tasks powered by Deep Learning
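A minimal docTR sketch; `sample.jpg` is a placeholder image path:

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

# Pretrained detection + recognition pipeline.
model = ocr_predictor(pretrained=True)

doc = DocumentFile.from_images("sample.jpg")  # placeholder image path
result = model(doc)
print(result.render())  # plain-text export of the recognized document
```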