RedPajama Completes First Step to Open-Source ChatGPT Alternative

Yana Khare | Last Updated: 26 Apr, 2023

The ambitious RedPajama project has completed its first stage: reproducing the LLaMA training dataset, which contains more than 1.2 trillion tokens. The project’s broader goal is to create entirely open-source language models. Most powerful foundation AI models today are at best partially open-source and accessible only through commercial APIs like ChatGPT; RedPajama seeks to change the game by developing fully open-source models that facilitate research and customization.

Open-Source Models Gaining Traction

Open-source models have advanced significantly in recent months, and a parallel movement centered on large language models is growing. Alongside fully open models like Pythia, OpenChatKit, Open Assistant, and Dolly, several semi-open models have become available, including LLaMA, Alpaca, Vicuna, and Koala. Stable Diffusion has already demonstrated that open-source models can compete with commercial products and stimulate creativity through community involvement.


RedPajama’s Three-Pronged Approach

The developers behind RedPajama are working to create a fully reproducible, top-tier language model with three essential components:

  1. Comprehensive, high-quality pre-training data.
  2. Base models trained at scale using this data.
  3. Instruction tuning data and models that enhance the base model, making it more usable and safe.

Starting with LLaMA


RedPajama’s starting point is LLaMA, the leading suite of open base models, chosen for three primary reasons:

  1. LLaMA’s large dataset of over 1.2 trillion tokens, meticulously filtered for quality.
  2. The 7-billion-parameter LLaMA model, which has undergone extensive training far beyond the Chinchilla-optimal point, delivers excellent quality for its model size.
  3. A 7-billion-parameter model can run on a wide range of GPUs, including consumer-grade GPUs, making it particularly beneficial for the open community (see the sketch after this list).
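
To make the consumer-hardware point concrete, here is a minimal sketch of loading a 7-billion-parameter model in half precision with Hugging Face Transformers. The checkpoint name is a hypothetical placeholder, since RedPajama’s own base models had not been released at the time of writing; fp16 weights for 7B parameters take roughly 14 GB of GPU memory, within reach of high-end consumer cards.

```python
# Minimal sketch: running a 7B causal LM in half precision on one GPU.
# The checkpoint name is a placeholder, not an official RedPajama release.
# Requires: torch, transformers, accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/open-7b-base"  # hypothetical 7B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 weights: roughly 14 GB for 7B parameters
    device_map="auto",          # lets accelerate place layers on GPU/CPU as needed
)

prompt = "Fully open-source language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```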

Reproducing the LLaMA Dataset

The creators want to produce a LLaMA reproduction that is entirely open-source and suited to business applications, with a more transparent research pipeline. Although they could not use the original LLaMA dataset, they had access to a suitable recipe: the LLaMA paper describes its data sources in enough detail to reconstruct them. The dataset comprises seven data slices, including data from Common Crawl, Wikipedia, arXiv, GitHub, and a corpus of open books.

Accessing RedPajama Dataset

You can get the RedPajama 1.2-trillion-token dataset from Hugging Face, along with a condensed, easier-to-manage random sample. The full dataset takes up around 5 TB of disk space when unzipped and about 3 TB when downloaded in compressed form. RedPajama-Data-1T contains seven data slices, all filtered for licensing and quality: CommonCrawl, C4, GitHub, arXiv, Books, Wikipedia, and StackExchange.
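
For a quick look at the data, the condensed sample can be streamed with the Hugging Face `datasets` library. This is a sketch, not an official recipe: the repository id matches the Hub listing at the time of writing, and depending on your `datasets` version you may need to pass `trust_remote_code=True`, since the dataset ships a loading script.

```python
# Sketch: streaming the condensed RedPajama sample from the Hugging Face Hub
# without downloading the full multi-terabyte dataset. Requires: datasets.
from datasets import load_dataset

sample = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",  # Hub id at time of writing
    split="train",
    streaming=True,  # iterate lazily instead of materializing files on disk
)

# Field names ("text", "meta") follow the dataset card.
for record in sample.take(3):
    print(record["meta"])
    print(record["text"][:200], "\n")
```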

The Debate on Open-Source AI Models

The debate over open-source AI models is divisive. Ilya Sutskever, co-founder and chief scientist of OpenAI, has argued that disclosing such information so publicly is “wrong,” citing worries about safety and competition. Joelle Pineau, vice president of AI research at Meta, holds that openness and accountability in AI models are essential, but that access should depend on a model’s potential for harm. In an interview with VentureBeat, she said that some degrees of openness can go too far, which is why LLaMA was given a restricted release rather than being made entirely open.

Also Read: OpenAI Co-Founder & Chief Data Scientist On the Potential of AGI

Our Say


By completing the first stage of its push toward fully open-source language models, RedPajama marks a significant advancement in artificial intelligence. The development encourages research and customization, opening the door to improved AI models tailored to particular use cases, while reigniting the debate over the proper degree of openness in AI models.

Also Read: Stability AI’s StableLM to Rival ChatGPT in Text and Code Generation

A 23-year-old, pursuing her Master's in English, an avid reader, and a melophile. My all-time favorite quote is by Albus Dumbledore - "Happiness can be found even in the darkest of times if one remembers to turn on the light."
