ID-based RAG FastAPI: Integration with LangChain and PostgreSQL
Package and deploy machine learning models using Docker containers
Open source annotation tool for machine learning practitioners
Operating LLMs in production
Replace OpenAI GPT with another LLM in your app
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Containerized automation engine for programmable CI/CD workflows
Build AI-powered semantic search applications
Serving LangChain LLM apps automagically with FastAPI
Open-source framework for building AI agents
State-of-the-art Multilingual Question Answering research
A Deep-Learning-Based Chinese Speech Recognition System
Aseryla code repositories
A multi-modeling and simulation environment to study complex systems
Aims to enable researchers to tap into mobile computing capabilities