Alpaca

Stanford Center for Research on Foundation Models (CRFM)

About

Instruction-following models such as GPT-3.5 (text-davinci-003), ChatGPT, Claude, and Bing Chat have become increasingly powerful. Many users now interact with these models regularly and even use them for work. Despite their widespread deployment, however, instruction-following models still have many deficiencies: they can generate false information, propagate social stereotypes, and produce toxic language. Making progress on these pressing problems requires engagement from the academic community. Unfortunately, doing research on instruction-following models in academia has been difficult, as no easily accessible model comes close in capability to closed-source models such as OpenAI's text-davinci-003. We are releasing our findings about Alpaca, an instruction-following language model fine-tuned from Meta's LLaMA 7B model.
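Alpaca's instruction-following behavior comes from fine-tuning LLaMA on demonstrations formatted with a fixed prompt template. A minimal sketch of that formatting step is below; the template wording follows the public Stanford Alpaca repository, but treat the exact text as an assumption rather than a verified constant.

```python
# Alpaca-style prompt construction. Two variants exist: one for examples
# that carry additional input context, one for bare instructions.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str, context: str = "") -> str:
    """Format one training or inference example Alpaca-style."""
    if context:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=context)
    return PROMPT_NO_INPUT.format(instruction=instruction)

print(build_prompt("Summarize the text.", "LLaMA is a family of LLMs."))
```

At inference time the model's completion is read off after the final `### Response:` marker.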

About

This repository contains the research preview of LongLLaMA, a large language model capable of handling long contexts of 256k tokens or more. LongLLaMA is built on the foundation of OpenLLaMA and fine-tuned using the Focused Transformer (FoT) method; the LongLLaMA code builds on Code Llama. We release a smaller 3B base variant (not instruction-tuned) of the LongLLaMA model under a permissive license (Apache 2.0), along with inference code supporting longer contexts, on Hugging Face. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations (for short contexts up to 2048 tokens). Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models.
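Because the weights are published on Hugging Face and act as a drop-in LLaMA replacement, loading them looks like any other transformers checkpoint. A sketch follows; the checkpoint name and settings follow the repository's README, so treat them as assumptions rather than a verified API.

```python
# Sketch: using LongLLaMA as a drop-in LLaMA replacement via Hugging Face
# transformers. Checkpoint name and generation settings are assumptions
# taken from the LongLLaMA README, not a verified interface.

LONG_LLAMA_CHECKPOINT = "syzymon/long_llama_3b"  # 3B base variant (Apache 2.0)
MAX_CONTEXT_TOKENS = 256 * 1024                  # long-context capability claimed via FoT
LLAMA_SHORT_CONTEXT = 2048                       # drop-in compatible with LLaMA up to here

def generate(prompt: str, checkpoint: str = LONG_LLAMA_CHECKPOINT,
             max_new_tokens: int = 64) -> str:
    """Load the model (downloads weights on first use) and complete a prompt."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    # trust_remote_code pulls the custom FoT modeling code from the hub.
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, torch_dtype=torch.float32, trust_remote_code=True
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Usage (downloads ~3B weights, so not executed here):
#   print(generate("My favourite animal is"))
```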

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Organizations and researchers interested in a powerful Large Language Model

Audience

Users interested in a powerful Large Language Model solution

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

No information available.
Free Version
Free Trial

Pricing

Free
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Stanford Center for Research on Foundation Models (CRFM)
United States
crfm.stanford.edu/2023/03/13/alpaca.html

Company Information

LongLLaMA
github.com/CStanKonrad/long_llama

Alternatives

  • Falcon-40B (Technology Innovation Institute, TII)
Alternatives

  • Llama 2 (Meta)
  • LTM-1 (Magic AI)
  • MPT-7B (MosaicML)
  • Dolly (Databricks)
  • Kimi K2 (Moonshot AI)
  • Olmo 3 (Ai2)

Integrations

BERT
ChatGPT
Dolly
GPT-4
Llama
Ludwig
Stable LM

Integrations

BERT
ChatGPT
Dolly
GPT-4
Llama
Ludwig
Stable LM