Code Like a Pro and Write SQL in Seconds with Snowflake Arctic

NISHANT TIWARI · Last Updated: 21 May, 2024
7 min read

Introduction

Snowflake Arctic is an enterprise-focused large language model (LLM) that emphasizes efficiency, openness, and enterprise intelligence. The model is designed to push the boundaries of cost-effective training and transparency, making it a significant advancement in the field. Let’s explore coding with Snowflake Arctic in this post.

What is Snowflake Arctic?

Snowflake AI Research set out to address the traditional struggles of building top-tier enterprise-grade intelligence with LLMs. High cost and resource requirements have been a significant barrier for enterprises, with training runs costing tens to hundreds of millions of dollars. Snowflake Arctic aims to change that landscape by offering efficiency, transparency, and an enterprise focus, providing a model that is both cost-effective and accessible to the community.

Architecture Insights of Snowflake Arctic
[Figure: Enterprise intelligence (average of coding on HumanEval+ and MBPP+, SQL generation on Spider, and instruction following on IFEval) vs. training cost]

The traditional approach to building enterprise-grade intelligence using LLMs has been cost-prohibitive and resource-intensive. Snowflake Arctic aims to address these challenges by offering a more efficient and transparent solution that is accessible to the community.

Also Read: Mixtral 8x22B – New Model Crushes Benchmarks in 4+ Languages

What is Snowflake Copilot?

Snowflake Copilot is an AI-powered assistant built on Snowflake Cortex, designed to help users with data analysis tasks using natural language. It can be thought of as a coding assistant specifically for SQL.
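For a sense of what that looks like in practice, here is a hypothetical exchange; the request, tables, and columns below are invented for illustration rather than taken from Snowflake's documentation. You describe the result you want in plain English, and Copilot proposes a SQL statement you can review, edit, and run:

    -- Request to Copilot: "Show me the top 5 customers by total order value last month"
    -- A query Copilot might propose (CUSTOMERS and ORDERS are hypothetical tables):
    SELECT c.customer_name,
           SUM(o.order_total) AS total_order_value
    FROM customers AS c
    JOIN orders AS o
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE_TRUNC('month', DATEADD('month', -1, CURRENT_DATE))
      AND o.order_date <  DATE_TRUNC('month', CURRENT_DATE)
    GROUP BY c.customer_name
    ORDER BY total_order_value DESC
    LIMIT 5;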

How to use Snowflake Arctic?

Snowflake Arctic is a large language model (LLM) designed for enterprise use within the Snowflake data cloud. There are a few ways to get started with Snowflake Arctic:

  • Snowflake Cortex: This serverless environment lets you try Arctic for free (with daily limits) until June 3rd, 2024. You can also access Arctic through model gardens such as Hugging Face (https://huggingface.co/).
  • SQL Integration: Snowflake Arctic can be called from your SQL queries using the SNOWFLAKE.CORTEX.COMPLETE function, which sends a prompt (and, optionally, your data) to the model and returns the result inside your SQL workflow, as sketched below. A full walkthrough is available in the quickstart: https://quickstarts.snowflake.com/guide/getting_started_with_snowflake_arctic/
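Here is a minimal sketch of calling Arctic from SQL via Cortex. The SNOWFLAKE.CORTEX.COMPLETE function and the 'snowflake-arctic' model identifier follow the quickstart linked above; the prompts and the PRODUCT_REVIEWS table are invented for illustration:

    -- One-off question, answered directly in SQL:
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'snowflake-arctic',
        'Write a SQL query that returns the 10 most recent rows from an ORDERS table.'
    ) AS arctic_response;

    -- Row-by-row use, e.g. summarizing text stored in a (hypothetical) table:
    SELECT review_id,
           SNOWFLAKE.CORTEX.COMPLETE(
               'snowflake-arctic',
               CONCAT('Summarize this customer review in one sentence: ', review_text)
           ) AS summary
    FROM product_reviews
    LIMIT 10;

Because COMPLETE is an ordinary SQL function, its output composes with the rest of your query: you can filter it, join on it, or insert it into another table like any other column value.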

What LLM does Snowflake use?

Snowflake Copilot utilizes a large language model (LLM) that has been fine-tuned by Snowflake to work within the Snowflake environment. This LLM enables Copilot to understand your natural language requests and translate them into SQL queries, or even help refine existing ones.

While the specifics of the underlying LLM are not public, Snowflake says it leverages a mixed-model approach and constantly explores improvements, including entirely new approaches. Its focus is on building a powerful LLM specifically suited to real-world enterprise use cases involving SQL.

Arctic’s Power: Architecture and Training 

Snowflake AI Research developed Arctic as a top-tier enterprise-focused large language model (LLM) designed to excel at enterprise tasks such as SQL generation, coding, and instruction-following benchmarks. The model builds on the collective experience of the diverse team at Snowflake AI Research and on major insights and learnings from the community. Arctic's architecture and training are the key components behind its power and efficiency.

Architecture Insights

Arctic uses a unique Dense-MoE hybrid transformer architecture that combines a 10B dense transformer model with a residual 128×3.66B MoE MLP, resulting in 480B total and 17B active parameters chosen using top-2 gating. This architecture enables the training system to achieve good training efficiency via communication-computation overlap, hiding a significant portion of the communication overhead.

[Figure: Standard MoE architecture vs. Arctic's Dense-MoE hybrid]

The model spreads its 480B parameters across 128 fine-grained experts and uses top-2 gating to select the 17B active parameters. The large total parameter count and many experts enlarge the model's capacity for top-tier intelligence, while engaging only a moderate number of active parameters keeps training and inference resource-efficient.
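Those headline numbers follow directly from the architecture; a quick back-of-the-envelope check (with figures rounded as in Snowflake's announcement):

    total parameters  ≈ 10B (dense trunk) + 128 × 3.66B (all experts)   ≈ 478B ≈ 480B
    active parameters ≈ 10B (dense trunk) +   2 × 3.66B (top-2 experts) ≈ 17.3B ≈ 17B

Every token always passes through the 10B dense trunk, but only 2 of the 128 experts fire per token, which is why a forward pass touches roughly 17B rather than 480B parameters.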

Training Innovations

The training of Arctic is based on three key insights and innovations.

Firstly, the model leverages many fine-grained experts with more expert choices, improving model quality without increasing compute cost.

Secondly, the combination of a dense transformer with a residual MoE component enables the training system to achieve good training efficiency via communication-computation overlap, hiding a large portion of the communication overhead.

Lastly, the enterprise-focused data curriculum for training Arctic involves a three-stage curriculum, each with a different data composition focusing on generic skills in the first phase and enterprise-focused skills in the latter two phases. This curriculum is designed to effectively train the model for enterprise metrics like Code Generation and SQL.

Inference Efficiency and Openness of Snowflake Arctic

To achieve efficient inference, Arctic relies on the same Dense-MoE hybrid transformer architecture described above: a 10B dense transformer model combined with a residual 128×3.66B MoE MLP, for a total of 480B parameters and 17B active parameters chosen using top-2 gating. Three key insights and innovations behind this design and training approach have enabled Arctic to achieve remarkable inference efficiency.

Inference Efficiency Insights

The first insight is related to the architecture and system co-design. Training a vanilla MoE architecture with a large number of experts can be inefficient due to high all-to-all communication overhead among experts. However, Arctic overcomes this inefficiency by combining a dense transformer with a residual MoE component, enabling the training system to achieve good efficiency through communication-computation overlap.

The second insight involves an enterprise-focused data curriculum. Arctic was trained with the three-stage curriculum described earlier, focusing on generic skills in the first phase and enterprise-focused skills in the latter two. This approach allowed the model to excel at enterprise metrics like code generation and SQL while also learning generic skills effectively.

The third insight pertains to the number of experts and total parameters in the MoE model. Arctic spreads its 480B parameters across 128 fine-grained experts and uses top-2 gating to choose the 17B active parameters. This strategic use of a large total parameter count and many experts enhances the model’s capacity for top-tier intelligence while ensuring resource-efficient training and inference.

Openness and Collaboration

In addition to focusing on inference efficiency, Snowflake AI Research emphasizes openness and collaboration. The construction of Arctic unfolded along two distinct trajectories: the open path, navigated swiftly thanks to the wealth of community insights, and the hard path, which required intensive debugging and numerous ablations.

To contribute to an open community where collective learning and advancement are the norm, Snowflake AI Research is sharing its research insights through a comprehensive ‘cookbook’ that opens up its findings from the hard path. The cookbook is designed to expedite the learning process for anyone looking to build world-class MoE models, offering a blend of high-level insights and granular technical details for crafting an LLM akin to Arctic.

Furthermore, Snowflake AI Research is releasing model checkpoints for both the base and instruct-tuned versions of Arctic under an Apache 2.0 license, providing ungated access to weights and code. This open-source approach allows researchers and developers to use the model freely in their research, prototypes, and products.

Collaboration and Acknowledgments

Snowflake AI Research acknowledges the collaborative efforts of AWS and NVIDIA in building Arctic’s training cluster and infrastructure, as well as enabling Arctic support on NVIDIA NIM with TensorRT-LLM. The open-source community’s contributions in producing models, datasets, and dataset recipe insights have also been instrumental in making the release of Arctic possible.

Also Read: How Snowflake’s Text Embedding Models Are Disrupting the Industry

Collaboration and Availability

The Arctic ecosystem is a result of collaborative efforts and open availability, as demonstrated by Snowflake AI Research’s development and release of the Arctic model. The collaborative nature of the ecosystem is evident in the open-source serving code and the commitment to an open ecosystem. Snowflake AI Research has made model checkpoints for both the base and instruct-tuned versions of Arctic available under an Apache 2.0 license, allowing for free use in research, prototypes, and products. Additionally, the LoRA-based fine-tuning pipeline and recipe enable efficient model tuning on a single node, fostering collaboration and knowledge sharing within the AI community.

Open Research Insights

The availability of open research insights further emphasizes the collaborative nature of the Arctic ecosystem. Snowflake AI Research has shared these insights through the ‘cookbook’ mentioned above, which opens up findings from the hard path of model construction and blends high-level insights with granular technical details. Snowflake also plans to release corresponding Medium.com blog posts daily over the next month, demonstrating a commitment to knowledge sharing and collaboration within the AI research community.

Access and Collaboration

Here’s how we can collaborate on Arctic starting today:

  • Go to Hugging Face to directly download Arctic, and use the GitHub repo for inference and fine-tuning recipes.
  • For a serverless experience in Snowflake Cortex, Snowflake customers with a payment method on file will be able to access Snowflake Arctic for free until June 3. Daily limits apply.
  • Access Arctic via your model garden or catalog of choice including Amazon Web Services (AWS), Lamini, Microsoft Azure, NVIDIA API catalog, Perplexity, Replicate and Together AI over the coming days.
  • Chat with Arctic! Try a live demo now on Streamlit Community Cloud or on Hugging Face Streamlit Spaces, with an API powered by our friends at Replicate.
  • Get mentorship and credits to help you build your own Arctic-powered applications during our Arctic-themed Community Hackathon.

Collaboration Initiatives

In addition to open availability, Snowflake AI Research is actively engaging the community through collaboration initiatives: live demos on Streamlit Community Cloud and Hugging Face Streamlit Spaces, mentorship opportunities, and a themed Community Hackathon focused on building Arctic-powered applications. These initiatives encourage collaboration, knowledge sharing, and the development of innovative applications using the Arctic model.

Conclusion

Snowflake Arctic represents a significant milestone in the field of large language models, addressing the challenges of cost and resource requirements with a more efficient and transparent solution accessible to the wider community. The model’s unique architecture, training approach, and focus on enterprise tasks make it a valuable asset for businesses leveraging AI.

Arctic’s open-source nature and the collaborative efforts behind its development enhance its potential for innovation and continuous improvement. By combining cutting-edge technology with a commitment to open research and community engagement, Arctic exemplifies the power of large language models to revolutionize industries while underscoring the importance of accessibility, transparency, and collaboration in shaping the future of enterprise AI.


