The first stage of the ambitious RedPajama project was to reproduce the LLaMA training dataset, which contains more than 1.2 trillion tokens. Beyond that, the project aims to create entirely open-source language models. Most powerful foundation AI models today are only partially open-source and are accessible mainly through commercial APIs like ChatGPT; RedPajama seeks to change the game by developing completely open-source models that facilitate research and customization.
Open-source models have advanced significantly in recent months, and a parallel movement centered on large language models is growing. Alongside entirely open models like Pythia, OpenChatKit, Open Assistant, and Dolly, several semi-open models have become available, including LLaMA, Alpaca, Vicuna, and Koala. Stable Diffusion has already demonstrated that open-source models can compete with commercial products and stimulate creativity through community involvement.
The developers behind RedPajama are working to create a fully reproducible, top-tier language model built from three essential components:

1. Pre-training data of high quality and broad coverage
2. Base models trained at scale on that data
3. Instruction-tuning data and models that make the base models usable and safe
RedPajama’s starting point is LLaMA, the leading suite of open base models, chosen for two primary reasons:

1. LLaMA was trained on a very large dataset (1.2 trillion tokens) that was carefully filtered for quality.
2. The 7-billion-parameter LLaMA model is trained well beyond the Chinchilla-optimal point, making it especially well suited for inference at that model size.
The creators want to produce an entirely open-source reproduction of LLaMA that is suited for business applications, with a more transparent research pipeline. Although they could not use the original LLaMA dataset, they had a suitable recipe: the LLaMA paper describes its training data in enough detail to recreate it. The dataset comprises seven data slices, including data from Common Crawl, Wikipedia, GitHub, arXiv, and a corpus of open books.
The full RedPajama 1.2-trillion-token dataset is available from Hugging Face, along with a condensed, easier-to-manage random sample. Downloaded in compressed form, the full dataset takes up about 3TB of disk space, and around 5TB when unzipped. RedPajama-Data-1T contains seven data slices, each filtered for licensing and quality: Common Crawl, C4, GitHub, arXiv, Books, Wikipedia, and StackExchange.
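For readers who want to inspect the data, here is a minimal sketch of one way to load it with Hugging Face’s datasets library. It assumes the data is published on the Hub as togethercomputer/RedPajama-Data-1T (with per-slice config names such as “arxiv”) and as the smaller togethercomputer/RedPajama-Data-1T-Sample, and that each record carries “text” and “meta” fields, as the dataset cards describe; streaming avoids the multi-terabyte download.

```python
from datasets import load_dataset

# Load the small random sample -- easy to inspect on a laptop.
sample = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train")
print(sample[0]["text"][:200])  # each record has "text" and "meta" fields

# For the full 1.2T-token dataset, stream one slice at a time rather than
# downloading all ~3TB; the second argument selects the slice (assumed
# config name "arxiv" here).
arxiv = load_dataset(
    "togethercomputer/RedPajama-Data-1T",
    "arxiv",
    split="train",
    streaming=True,
)
for record in arxiv:
    print(record["meta"])  # provenance metadata for this document
    break
```

Because streaming yields records lazily, a single slice can be sampled or filtered without committing the disk space the full download requires.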
The debate over open-source AI models is divisive. Ilya Sutskever, the co-founder and chief scientist of OpenAI, has argued that sharing such models so openly is “wrong,” citing worries about safety and competition. Joelle Pineau, vice president of AI research at Meta, holds that openness and accountability in AI models are essential but that access should depend on a model’s potential for harm. In an interview with VentureBeat, she said that some degrees of openness can be excessive, which is why LLaMA had a restricted release rather than being made entirely open.
RedPajama has successfully completed the first stage of building fully open-source language models, a significant advancement in artificial intelligence. This development encourages research and customization, opening the door for improved AI models tailored to particular use cases while fueling the debate over the proper degree of openness in AI models.