Qwen2.5-VL is a multimodal large language model series
Qwen3-ASR is an open-source series of ASR models
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
DeepSeek Coder: Let the Code Write Itself
Chinese and English multimodal conversational language model
State-of-the-art image & video CLIP and multimodal large language models
NVIDIA Isaac GR00T N1.5 is the world's first open foundation model
Open-source large language model family from Tencent Hunyuan
A state-of-the-art open visual language model
Large language model & vision-language model based on linear attention
A pragmatic vision-language-action (VLA) foundation model
Ling is an MoE LLM open-sourced by InclusionAI
A series of math-specific large language models built on Qwen2
Official inference repo for FLUX.2 models
Contexts Optical Compression
Repository for Qwen2-Audio, a chat & pretrained large audio language model
Ring is a reasoning MoE LLM open-sourced by InclusionAI
Research code artifacts for Code World Model (CWM)
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Tool for exploring and debugging transformer model behaviors
CLIP: predict the most relevant text snippet given an image
GLM-4-Voice | End-to-End Chinese-English Conversational Model
Chat & pretrained large vision-language model
A Family of Open-Source Music Foundation Models
GPT4V-level open-source multi-modal model based on Llama3-8B