Building and Training Large Language Models for Code: A Deep Dive into StarCoder

Guest Blog | Last Updated: 20 May, 2024
7 min read

Introduction

Hey there, fellow tech enthusiasts! Today, I’m excited to take you on a journey through the fascinating world of building and training large language models (LLMs) for code. We will be diving deep into the intricacies of a remarkable model known as StarCoder, which is part of the BigCode project—an open initiative at the intersection of AI and code development.

Before we begin, I would like to thank Hugging Face’s machine learning engineer, Loubna Ben Allal, for her Data Hour session on ‘Building Large Language Models for Code’, on which this article is based. Now, buckle up, and let’s explore the magic behind this cutting-edge technology!

Learning Objectives

  • Grasp open and responsible practices in coding AI through the BigCode collaboration, emphasizing transparency and ethical development.
  • Comprehend LLM training essentials: data selection, architecture choices, and efficient parallelism, utilizing frameworks like Megatron-LM.
  • Explore LLM evaluation via benchmarks like HumanEval, facilitated by the BigCode evaluation harness, enabling effective model comparison.
  • Discover practical integration of LLMs into development environments using tools like VS Code extensions, aligning with ethical AI utilization.

Unleashing the Power of Large Language Models for Code

So, what’s the buzz about these large language models? Well, they’re like virtual coding wizards that can complete code snippets, generate entire functions, and even provide insights into fixing bugs—all based on natural language descriptions. Our star of the show, StarCoder, boasts a whopping 15.5 billion parameters and showcases outstanding code completion prowess and responsible AI practices.

Data Curation and Preparation: The Backbone of Success

Alright, let’s talk about the secret sauce—data curation. Our journey starts with The Stack dataset, a massive compilation of GitHub code that spans over 300 programming languages. However, quantity doesn’t always trump quality. We meticulously selected 86 relevant languages, prioritizing popularity and inclusivity while removing outdated languages.

But here’s the catch: after extensive cleaning, we ended up with only about 800 gigabytes of code across the selected languages. We removed auto-generated files and eliminated duplicates through a process called deduplication, ensuring the model doesn’t simply memorize repeated patterns. Prioritizing quality over quantity in this way paved the way for effective training.
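
To make the idea concrete, here is a minimal sketch of exact deduplication by hashing file contents. It is purely illustrative: the helper name is made up, and the real pipeline also performs near-deduplication (catching files that are almost, but not exactly, identical), which this toy version does not attempt.

    import hashlib

    def deduplicate(files):
        """Keep one copy of each exact-duplicate file by hashing its contents.
        `files` maps a path to the file's source code as a string."""
        seen, kept = set(), {}
        for path, code in files.items():
            digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
            if digest not in seen:      # first time we see this exact content
                seen.add(digest)
                kept[path] = code
        return kept

    corpus = {"a/utils.py": "def add(x, y):\n    return x + y\n",
              "b/helpers.py": "def add(x, y):\n    return x + y\n"}
    print(len(deduplicate(corpus)))     # 1 — the copy-pasted file is dropped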

Tokenization and Metadata for Training: Cracking the Code

Next up, tokenization! We converted our clean text data into numerical inputs that the model can understand. To preserve metadata such as the repository and file names, we added special tokens at the start of each code snippet. This metadata acts as a roadmap for the model, guiding it, for example, on how to generate code snippets in different programming languages.
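
Here is a rough sketch of how such a metadata prefix can be attached to a file before tokenization. The sentinel token names below follow the scheme described here, but treat them as illustrative and check the released tokenizer for the authoritative list.

    # Illustrative only: prepend repository metadata as sentinel tokens,
    # then close the sample with an end-of-text marker.
    def format_sample(repo, path, stars, code):
        return f"<reponame>{repo}<filename>{path}<gh_stars>{stars}\n{code}<|endoftext|>"

    sample = format_sample("octocat/hello-world", "hello.py", 42, "print('hello')\n")
    # The tokenizer then maps this string, sentinels included, to the token IDs
    # the model consumes during training.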

We also got crafty with things like GitHub issues, git commits, and Jupyter notebooks. All these elements were structured with special tokens to give the model context. This metadata and formatting would later play a crucial role in the model’s performance and fine-tuning.

Architecture Choices for StarCoder: Scaling New Heights

For the architecture, we aimed for speed and cost-effectiveness, which led us to opt for roughly 15 billion parameters—a balance between power and practicality. We also embraced multi-query attention (MQA), which shares a single key and value head across all query heads; this shrinks the memory needed at inference time, so the model can process larger batches and generate tokens faster without sacrificing quality.
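
If you are curious what that looks like in code, here is a toy PyTorch sketch of multi-query attention (not StarCoder’s actual implementation): every query head gets its own projection, but all heads share one key/value head, which is what keeps the inference-time cache small.

    import torch
    import torch.nn as nn

    class MultiQueryAttention(nn.Module):
        """Toy multi-query attention: many query heads, one shared key/value head."""

        def __init__(self, d_model, n_heads):
            super().__init__()
            assert d_model % n_heads == 0
            self.n_heads = n_heads
            self.head_dim = d_model // n_heads
            self.q_proj = nn.Linear(d_model, d_model)             # per-head queries
            self.kv_proj = nn.Linear(d_model, 2 * self.head_dim)  # single shared K and V
            self.out_proj = nn.Linear(d_model, d_model)

        def forward(self, x):
            B, T, _ = x.shape
            q = self.q_proj(x).view(B, T, self.n_heads, self.head_dim).transpose(1, 2)  # (B, H, T, d)
            k, v = self.kv_proj(x).split(self.head_dim, dim=-1)                         # (B, T, d) each
            k, v = k.unsqueeze(1), v.unsqueeze(1)                 # (B, 1, T, d) broadcasts over heads
            scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5                   # (B, H, T, T)
            causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
            scores = scores.masked_fill(causal, float("-inf"))    # block attention to future tokens
            out = scores.softmax(dim=-1) @ v                      # (B, H, T, d)
            return self.out_proj(out.transpose(1, 2).reshape(B, T, -1))

    x = torch.randn(2, 16, 256)                                   # (batch, sequence, d_model)
    print(MultiQueryAttention(d_model=256, n_heads=8)(x).shape)   # torch.Size([2, 16, 256])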

We also introduced a large context length, thanks to FlashAttention. This allowed us to scale up to 8,000 tokens while maintaining efficiency and speed. And if you’re wondering about bidirectional context, we used the Fill-in-the-Middle (FIM) approach, which lets StarCoder condition on code from both the left and the right of the insertion point.
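
As a quick illustration, here is what an FIM prompt can look like at inference time. The prefix/suffix/middle sentinels below are the ones used in StarCoder’s vocabulary as I understand it, but double-check the token strings against the released tokenizer.

    # A minimal Fill-in-the-Middle prompt: the model sees code on both sides
    # of the gap and generates the missing middle.
    prefix = "def average(numbers):\n    total = "
    suffix = "\n    return total / len(numbers)\n"
    fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    # Feeding `fim_prompt` to the model should produce something like "sum(numbers)".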

Training and Evaluation: Putting StarCoder to the Test

Now, let’s talk about training. We harnessed the power of 512 GPUs and used Tensor Parallelism (TP) and Pipeline Parallelism (PP) to train StarCoder efficiently. We trained for 24 days using the Megatron-LM framework, and the results were impressive. But training is only half the journey—evaluation is where the rubber meets the road.
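
Before we get to the numbers, here is a tiny sketch of the arithmetic behind combining tensor, pipeline, and data parallelism across those 512 GPUs. The TP and PP degrees in the example are hypothetical, not the values actually used for StarCoder.

    def parallel_layout(world_size, tp, pp):
        """Split a GPU cluster into tensor-, pipeline-, and data-parallel groups.
        Each model replica occupies tp * pp GPUs; the remaining factor becomes
        data parallelism (more replicas, each seeing different batches)."""
        assert world_size % (tp * pp) == 0, "tp * pp must divide the GPU count"
        return {"tensor": tp, "pipeline": pp, "data": world_size // (tp * pp)}

    # Hypothetical degrees for illustration: 512 GPUs with TP=4 and PP=4
    # leave 512 / (4 * 4) = 32-way data parallelism.
    print(parallel_layout(512, tp=4, pp=4))   # {'tensor': 4, 'pipeline': 4, 'data': 32}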

We evaluated StarCoder on the HumanEval benchmark, where models complete code snippets and their solutions are tested against various scenarios. StarCoder performed admirably, achieving a 33.6% pass@1 score and strong multilingual performance. Instruction-tuned versions of the model, such as WizardCoder-15B and OctoCoder, showcase enhanced performance.
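
For context, pass@1 comes from the standard pass@k metric used with HumanEval: generate n samples per problem, count how many (c) pass the unit tests, and compute an unbiased estimate of the probability that at least one of k samples passes. A minimal version looks like this (the sample counts in the example are illustrative, not StarCoder’s actual evaluation settings).

    import math

    def pass_at_k(n, c, k):
        """Unbiased pass@k estimator: probability that at least one of k samples,
        drawn from n generations of which c are correct, passes the unit tests."""
        if n - c < k:          # every possible draw of k samples contains a correct one
            return 1.0
        return 1.0 - math.comb(n - c, k) / math.comb(n, k)

    # Illustrative numbers only: 200 samples with 67 passing gives pass@1 = 0.335.
    print(pass_at_k(n=200, c=67, k=1))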

Tools and Ecosystem: Beyond StarCoder

Our journey wouldn’t be complete without highlighting the tools and ecosystem built around StarCoder. We released a VS Code extension that offers code suggestions, completion, and even code attribution. You can also find plugins for Jupyter, Vim, and Emacs, catering to developers’ diverse preferences.
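
If you’d rather call the model directly than go through an editor plugin, a minimal completion loop with the transformers library looks roughly like this. Note that the bigcode/starcoder checkpoint on the Hugging Face Hub is gated, so you may need to accept its license and authenticate first, and device_map="auto" assumes accelerate is installed.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigcode/starcoder"          # gated checkpoint on the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=48)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))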

To simplify the evaluation process, we created the BigCode Evaluation Harness—a framework that streamlines benchmark evaluation and unit testing and ensures reproducibility. We also introduced the BigCode Leaderboard, providing transparency and allowing the community to gauge performance across various models and languages.

Towards the Future: A Community-Driven Endeavor

By now, it’s clear that the world of large language models for code is ever-evolving. The BigCode ecosystem continues to thrive, with models like OctoCoder, WizardCoder, and more, each building on the foundation laid by StarCoder. These models aren’t just tools; they’re a testament to collaborative innovation and the power of open-source development.

So there you have it—the story of how StarCoder and the BigCode community are pushing the boundaries of what’s possible in the realm of code generation. From meticulous data curation to advanced architecture choices and cutting-edge tools, it’s a journey fueled by passion and a commitment to shaping the future of AI in code development. As we venture into the future, who knows what incredible innovations the community will unveil next?

Advantages of StarCoder Model

StarCoder, a large language model designed specifically for code, offers several advantages to developers:

  • Increased Productivity: StarCoder automates repetitive tasks like code completion, generation, and refactoring. This frees up developers’ time to focus on the more complex and creative problem-solving aspects of coding.
  • Improved Code Quality: StarCoder can identify and suggest improvements to code, helping developers write cleaner, more maintainable code.
  • Reduced Development Time: By automating tasks and improving code quality, StarCoder can significantly reduce the overall development time for software projects.
  • Learning Assistant: StarCoder can be a valuable tool for new developers, helping them learn proper coding techniques and best practices.
  • Open-Source Accessibility: Being open source, StarCoder’s functionalities are freely available, promoting innovation and wider adoption of AI in coding.

Today’s Skills for Tomorrow’s LLMs

Here’s what we’ll be carrying forward into the journey of building and training large language models in the future:

  • Training Setup and Frameworks: Training such massive models requires parallelism to accelerate the process. We utilized 3D parallelism, a combination of data, tensor, and pipeline parallelism. This approach allowed us to train StarCoder on 512 GPUs for 24 days. While we primarily used the Megatron-LM framework, we also highlighted alternatives like the Hugging Face Trainer with DeepSpeed integration for more accessible and shorter fine-tuning runs.
  • Evaluating the Performance: Evaluating code models is no simple task. We discussed benchmarks like HumanEval and MultiPL-E, which measure a model’s ability to generate code solutions that pass specific tests. These benchmarks help us understand performance in various programming languages and contexts. We also introduced the BigCode Evaluation Harness, a framework that streamlines the evaluation process by providing consistent environments and reproducible results.
  • Tools and Ecosystem: We explored the tools and extensions that the BigCode ecosystem offers. From the VS Code extension to support for Jupyter notebooks, Vim, Emacs, and more, we’re making it easier for developers to integrate StarCoder and its descendants into their workflow. The release of StarCoderPlus and StarChat further extends the capabilities of our models, making them even more versatile and useful.
  • Responsible AI and Licensing: In line with responsible AI practices, we emphasize ethical guidelines in our models’ use. Our models are released under the CodeML OpenRAIL license, which permits royalty-free usage and downstream distribution of derivatives while encouraging ethical use. We are committed to ensuring that our models are powerful tools that benefit society while being used responsibly.

Conclusion

In this article, we’ve delved into the realm of building Large Language Models (LLMs) for code, exploring their impressive code completion abilities. The collaborative BigCode Project by Hugging Face and ServiceNow was highlighted as a beacon of open and responsible code models, addressing challenges like data privacy and reproducibility.

Our technical journey encompassed data curation, architecture decisions for models like StarCoder, and training methodologies using parallelism techniques. Model evaluation, marked by benchmarks like HumanEval and MultiPL-E, showcased performance comparisons across languages, with StarCoder versions leading the way.

Key Takeaways:

  • The BigCode collaboration by Hugging Face and ServiceNow promotes responsible code model development.
  • Using StarCoder as an example, we have covered various training aspects, including data preparation, architecture, and efficient parallelism.
  • We discussed AI model evaluation using the HumanEval and MultiPL-E benchmarks.

Frequently Asked Questions

Q1. What is the BigCode Project’s main objective?

Ans. The BigCode Project aims to foster open development and responsible practices in building large language models for code. It emphasizes open data, model weights availability, opt-out tools, and reproducibility to address issues seen in closed models, ensuring transparency and ethical usage.

Q2. How did data curation contribute to model training?

Ans. Data curation involved selecting relevant programming languages, cleaning data, and deduplication to improve data quality. It focused on retaining meaningful content while removing redundancy and irrelevant data, resulting in a curated dataset for training.

Q3. What strategies were employed for training large language models efficiently?

Ans. For efficient training of large models, the 3D parallelism approach was used, which combines data parallelism, tensor parallelism, and pipeline parallelism. Tools like Megatron-LM and the Hugging Face trainer with DeepSpeed integration were employed to distribute computations across multiple GPUs, allowing for faster training and optimized memory usage.
