LFM2.5
Liquid AI’s LFM2.5 is the next generation of on-device AI foundation models, built for efficient, high-performance inference on edge devices such as phones, laptops, vehicles, IoT systems, and embedded hardware, without relying on cloud compute. It extends the earlier LFM2 architecture with significantly larger pretraining and additional reinforcement learning stages, yielding a family of hybrid models of roughly 1.2 billion parameters that balance instruction following, reasoning, and multimodal capability for real-world agentic use cases. The family includes Base (for fine-tuning and customization), Instruct (general-purpose, instruction-tuned), Japanese-optimized, Vision-Language, and Audio-Language variants, all tuned for fast on-device inference under tight memory budgets and released as open-weight models deployable through frameworks such as llama.cpp, MLX, vLLM, and ONNX.
Learn more
Solar Mini
Solar Mini is a pre‑trained large language model that delivers GPT‑3.5‑comparable response quality with 2.5× faster inference, while staying under 30 billion parameters. It took first place on the Hugging Face Open LLM Leaderboard in December 2023 by combining a 32‑layer Llama 2 architecture, initialized with high‑quality Mistral 7B weights, with an innovative “depth up‑scaling” (DUS) approach that deepens the model efficiently without adding complex modules. After DUS, continued pretraining restores and then improves performance. Instruction tuning in a QA format, with particular attention to Korean, refines the model’s ability to follow user prompts, and alignment tuning brings its outputs in line with human or advanced‑AI preferences. Solar Mini outperforms competitors such as Llama 2, Mistral 7B, Ko‑Alpaca, and KULLM across a range of benchmarks, showing that compact size need not sacrifice capability.
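The depth up-scaling idea can be sketched in a few lines of plain Python. This is a minimal illustration, assuming the layer counts reported for the related SOLAR 10.7B model (two copies of a 32-layer stack with 8 layers trimmed at the junction, giving 48 layers); the exact counts used for Solar Mini are an assumption here, and `depth_up_scale` is a hypothetical helper, not Upstage's implementation.

```python
def depth_up_scale(layers, m):
    """Depth up-scaling (DUS) sketch: stack two copies of a model's layer
    list, dropping the last m layers of the first copy and the first m
    layers of the second, so the result has 2*n - 2*m layers."""
    n = len(layers)
    return layers[: n - m] + layers[m:]

# 32 transformer layers, indexed 0..31 (as in the Mistral 7B initialization)
base = list(range(32))

# Trimming m = 8 layers at the junction yields a 48-layer model
scaled = depth_up_scale(base, m=8)
print(len(scaled))            # 48
print(scaled[23], scaled[24]) # 23 8 (layer 23 of copy 1 meets layer 8 of copy 2)
```

Because the two copies overlap only in their middle layers, the seam between them is relatively mild, which is why a round of continued pretraining is enough to restore and then surpass the original model's performance.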
Learn more
Claude 3 Opus
Opus, the most intelligent model in the Claude 3 family, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence.
All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages such as Spanish, Japanese, and French.
Learn more
Upstage AI
Upstage AI builds powerful large language models and document processing engines designed to transform workflows across industries such as insurance, healthcare, and finance. Their enterprise-grade AI technology delivers high accuracy and performance, enabling businesses to automate complex tasks such as claims processing, underwriting, and clinical document analysis. Key products include Solar Pro 2, a fast and grounded enterprise language model; Document Parse, which converts PDFs and scans into machine-readable text; and Information Extract, for precise data extraction from contracts and invoices. Upstage’s AI solutions help companies save time and reduce manual work by providing instant, accurate answers from large document sets. The platform supports flexible deployment options, including cloud, on-premises, and hybrid, to meet strict compliance requirements. Trusted by global clients, Upstage continues to advance AI innovation with top conference publications and industry awards.
Learn more