DeepSeek #OpenSourceWeek Day 1: Release of FlashMLA

Pankaj Singh Last Updated : 24 Feb, 2025
7 min read

Big news from DeepSeek! The company has launched the first open-source repository of its #OpenSourceWeek, built around custom CUDA kernels that boost the speed and efficiency of LLM inference. At the heart of this release is FlashMLA, an optimized Multi-head Latent Attention (MLA) decoding kernel designed specifically for Hopper GPUs. It handles variable-length sequences efficiently, making AI model hosting smoother and faster.

Key Highlights of the Release:

  • BF16 Support
  • Paged KV Cache with a block size of 64

These optimizations deliver up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound configurations on H800 SXM5 GPUs with CUDA 12.6.

With this level of performance, AI inference just got a major upgrade! Sounds intriguing, right?

Note: MLA was already used in DeepSeek's models; FlashMLA now provides optimized CUDA kernels that make hosting DeepSeek AI's R1 and V3 models faster.

What is FlashMLA?

FlashMLA is an optimized MLA decoding kernel built specifically for NVIDIA's Hopper GPU architecture. Designed with performance in mind, it reflects DeepSeek's commitment to accelerating AI models at scale, delivering faster, more efficient processing where every millisecond counts.

Hardware Requirements

FlashMLA is designed to run on high-performance GPUs, specifically Hopper architecture GPUs such as the H800 SXM5. It requires CUDA 12.3+ and PyTorch 2.0+ for optimal performance.

Precision and Optimization

  • Currently supports BF16 precision, ensuring efficient computation while maintaining numerical stability.
  • Implements a paged KV cache with a block size of 64, enhancing memory efficiency and reducing latency in large-scale models (see the usage sketch below).
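
To make the BF16 + paged-KV combination concrete, here is a hypothetical sketch of what a decoding call could look like. The function names (get_mla_metadata, flash_mla_with_kvcache), argument order, and tensor shapes below are assumptions for illustration, not documented API; consult the FlashMLA repository for the actual interface.

```python
import torch
# Assumption: the package installs as `flash_mla` and exposes these two helpers.
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# Illustrative (hypothetical) decode setup: 4 requests, 1 query token each,
# 128 query heads sharing 1 latent KV head, paged cache with 64-token blocks.
batch, s_q, h_q, h_kv, d, dv, block_size = 4, 1, 128, 1, 576, 512, 64

cache_seqlens = torch.randint(1, 4096, (batch,), dtype=torch.int32, device="cuda")
max_blocks = (int(cache_seqlens.max()) + block_size - 1) // block_size
block_table = torch.arange(batch * max_blocks, dtype=torch.int32, device="cuda").view(batch, max_blocks)
kv_cache = torch.randn(batch * max_blocks, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")
q = torch.randn(batch, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")

# Plan how the variable-length work is split across SMs once per decoding step...
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

# ...then run the paged-KV attention kernel (in a real model, once per layer).
out, lse = flash_mla_with_kvcache(
    q, kv_cache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
print(out.shape)  # expected: (batch, s_q, h_q, dv)
```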

Performance Benchmarks

Based on results from its official GitHub repository, FlashMLA delivers impressive performance:

  • Memory Efficiency: Achieves up to 3000 GB/s of memory bandwidth, approaching the theoretical peak of 3350 GB/s for the H800 SXM5.
  • Compute Power: Reaches up to 580 TFLOPS for BF16 matrix multiplication in compute-bound configurations, indicating highly effective use of the H800's tensor cores.

This combination of high memory bandwidth, efficient caching, and exceptional computational throughput makes FlashMLA a powerful choice for AI workloads requiring extreme performance.
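
As a quick sanity check on what these figures mean, the reported bandwidth can be compared against the hardware ceiling quoted above; the snippet below simply restates the numbers from this section.

```python
# Benchmark figures quoted above (H800 SXM5, CUDA 12.6).
achieved_bw_gbs = 3000   # memory-bound configuration
peak_bw_gbs = 3350       # theoretical H800 SXM5 memory bandwidth

print(f"Memory-bandwidth utilization: {achieved_bw_gbs / peak_bw_gbs:.1%}")  # ~89.6%
```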

If this all sounds like gibberish, don't worry: I will explain it in depth. Let's start with Multi-head Latent Attention (MLA).

A Brief Look at Multi-head Latent Attention (MLA)

Source: DeepSeek V3

Multi-head Latent Attention (MLA) was introduced with the release of DeepSeek-V2 as a variant of multi-head attention (MHA). It belongs to a family of techniques designed to address a key challenge in scaling large models: reducing the KV cache size, which can become a major memory bottleneck. Other methods in this category include Grouped-Query Attention and Multi-Query Attention. While these approaches lower memory usage, they often come with a trade-off, sacrificing some model quality in exchange for greater scalability.

MLA takes a different approach by using a low-rank factorized projection, which works somewhat like multi-query attention. However, instead of repeating a single KV head across all queries, it decompresses a cached latent vector into a distinct, appropriate K and V head for each Q head. According to DeepSeek, this method not only reduces memory overhead but actually enhances the model's performance rather than compromising it.

Standard Multi-Head Attention and Its Limitations

Multi-head attention (MHA) enhances a model’s ability to capture diverse relationships in data by processing queries, keys, and values independently across multiple attention heads. However, this flexibility comes at a cost, especially during inference. The KV cache, which stores keys and values from previous tokens, expands linearly with sequence length. This quickly becomes a bottleneck, consuming significant GPU memory for long sequences.

For a model with n_h attention heads, head dimension d_h, and sequence length seq_len, the per-layer KV cache size (in elements) is:

KV cache size = seq_len × 2 × n_h × d_h

The factor of 2 accounts for storing both keys and values. For large sequence lengths, this can exceed memory limits, restricting model scalability and efficiency.
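
To see how quickly this grows, here is a back-of-the-envelope calculation in code; the layer count, head count, and dimensions below are hypothetical, not any specific DeepSeek configuration.

```python
def kv_cache_bytes(seq_len, n_layers, n_heads, d_head, bytes_per_elem=2):
    """Standard MHA KV cache for one sequence: keys + values for every
    layer, head, and token (bytes_per_elem=2 for BF16/FP16)."""
    return seq_len * n_layers * 2 * n_heads * d_head * bytes_per_elem

# Hypothetical model: 32 layers, 32 heads of dimension 128, 32k-token context.
size = kv_cache_bytes(seq_len=32_768, n_layers=32, n_heads=32, d_head=128)
print(f"{size / 2**30:.1f} GiB per sequence")  # 16.0 GiB
```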

How Does MLA Optimize Memory Usage?

Source: DeepSeek

Multi-head Latent Attention (MLA) addresses this challenge by introducing a more compact way to store KV information. Instead of directly caching keys and values, MLA compresses them into a latent vector c_t for each token t, significantly reducing storage requirements. The process works as follows:

  • The hidden state h_t is projected into a latent vector c_t using a learned transformation matrix W^{KV}, where c_t has a much smaller dimension d_c (compared to n_h * d_h).
  • Keys (k_t) and values (v_t) are reconstructed on the fly as k_t = W^{UK} c_t and v_t = W^{UV} c_t.

Here, W^{UK} and W^{UV} are up-projection matrices mapping d_c back to n_h * d_h.

  • Instead of storing k_t and v_t directly, MLA caches only c_t, reducing the KV cache size to seq_len × d_c.

This approach drastically cuts memory usage: DeepSeek-V2 reports up to a 93.3% reduction in KV cache size, allowing for longer context handling and more efficient processing. The key benefits:

  • Memory Optimization – Enables processing of extended sequences without exceeding GPU memory limits.
  • Performance Retention – Maintains or enhances model performance, as observed in DeepSeek-V2.
  • Cost Efficiency – Reduces computational costs for training and inference, making large-scale models more practical.

By leveraging MLA, models can achieve longer context understanding while keeping hardware requirements manageable, unlocking new possibilities for efficient large-scale AI applications.
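
To ground the mechanism described above, here is a minimal PyTorch sketch of MLA-style compression: project the hidden state down to a small latent, cache only the latent, and reconstruct per-head keys and values when attention is computed. The dimensions are illustrative, and several details of DeepSeek's actual design (such as how positional encodings are handled) are omitted.

```python
import torch
import torch.nn as nn

class LatentKVCompression(nn.Module):
    """Toy MLA-style KV compression: cache a small latent c_t per token and
    reconstruct the full K/V heads from it only when attention runs."""

    def __init__(self, d_model=4096, n_heads=32, d_head=128, d_latent=512):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.w_kv = nn.Linear(d_model, d_latent, bias=False)           # h_t -> c_t (down-projection)
        self.w_uk = nn.Linear(d_latent, n_heads * d_head, bias=False)  # c_t -> k_t for all heads
        self.w_uv = nn.Linear(d_latent, n_heads * d_head, bias=False)  # c_t -> v_t for all heads

    def compress(self, h):            # h: (batch, seq, d_model)
        return self.w_kv(h)           # c: (batch, seq, d_latent) -- the only thing cached

    def reconstruct(self, c):         # c: (batch, seq, d_latent)
        b, s, _ = c.shape
        k = self.w_uk(c).view(b, s, self.n_heads, self.d_head)
        v = self.w_uv(c).view(b, s, self.n_heads, self.d_head)
        return k, v

mla = LatentKVCompression()
h = torch.randn(1, 16, 4096)
c = mla.compress(h)        # cache 512 values per token instead of 2 * 32 * 128 = 8192
k, v = mla.reconstruct(c)  # full per-head K/V, materialized only when needed
print(c.shape, k.shape, v.shape)
```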


Key-Value Caching: Enhancing Autoregressive Decoding

Key-value (KV) caching is a powerful optimization technique that accelerates the autoregressive decoding process by storing and reusing previously computed key-value pairs, rather than recalculating them at each step.

This method primarily serves during inference, as training still requires processing the entire input sequence simultaneously. By leveraging KV caching, we avoid redundant computations, significantly improving efficiency.

How Does KV Caching Work?

KV caching typically operates as a rolling buffer. During each decoding step:

  • Only the new query (Q) is computed.
  • Previously cached key-value pairs (K, V) are reused.
  • The attention mechanism then processes the new Q alongside the stored K and V.
  • The new token's K and V are appended to the cache for future steps.

This approach reduces computational overhead, making autoregressive decoding far more efficient. However, it comes with a trade-off: increased memory usage. Because the KV cache scales with batch size, sequence length, hidden size, and the number of attention heads, it can quickly become a memory bottleneck, especially for large batches or long sequences.
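
Here is a minimal single-head sketch of this rolling pattern, with masking, positional encodings, and batching omitted for brevity (all names are illustrative):

```python
import torch
import torch.nn.functional as F

d_head = 64
w_q, w_k, w_v = (torch.randn(d_head, d_head) for _ in range(3))

k_cache, v_cache = [], []   # grows by one entry per decoded token

def decode_step(x_t):
    """One autoregressive step: compute Q for the new token only, reuse the
    cached K/V of all previous tokens, and extend the cache."""
    q = x_t @ w_q                                    # (1, d_head) -- only the new query
    k_cache.append(x_t @ w_k)                        # store the new token's K ...
    v_cache.append(x_t @ w_v)                        # ... and V for future steps
    K = torch.cat(k_cache, dim=0)                    # (t, d_head)
    V = torch.cat(v_cache, dim=0)                    # (t, d_head)
    attn = F.softmax(q @ K.T / d_head**0.5, dim=-1)  # attend over everything cached so far
    return attn @ V                                  # (1, d_head)

for _ in range(5):                      # decode 5 tokens
    x_t = torch.randn(1, d_head)        # stand-in for the new token's hidden state
    out = decode_step(x_t)
print(len(k_cache), out.shape)          # 5 torch.Size([1, 64])
```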

Overcoming the Memory Challenge

Source: DeepSeek V2

To tackle these memory constraints, two key strategies have emerged:

  • Multi-Query Attention (MQA): Reduces memory consumption by sharing K and V across multiple queries.
  • Grouped-Query Attention (GQA): Strikes a balance between standard multi-head attention and MQA by clustering queries into smaller groups, reducing memory load while maintaining efficiency.

By integrating these techniques, KV caching enables faster and more scalable inference, making it an essential component in modern transformer-based architectures.
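
The sketch below illustrates the head-sharing idea behind MQA and GQA: the K/V projections produce fewer heads than the Q projection, and each K/V head is broadcast to a group of query heads (a single shared K/V head corresponds to MQA; a few groups correspond to GQA). Shapes are illustrative and the causal mask is omitted.

```python
import torch

# 32 query heads share 4 KV heads (GQA); n_kv_heads = 1 would be MQA.
n_q_heads, n_kv_heads, d_head, seq = 32, 4, 128, 16
group = n_q_heads // n_kv_heads        # 8 query heads per KV head

q = torch.randn(1, n_q_heads,  seq, d_head)
k = torch.randn(1, n_kv_heads, seq, d_head)   # KV cache is 8x smaller than full MHA
v = torch.randn(1, n_kv_heads, seq, d_head)

# Broadcast each KV head to its group of query heads, then run standard attention.
k_exp = k.repeat_interleave(group, dim=1)     # (1, 32, seq, d_head)
v_exp = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k_exp.transpose(-2, -1) / d_head**0.5, dim=-1)
out = attn @ v_exp
print(out.shape)   # torch.Size([1, 32, 16, 128])
```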

FlashMLA: Powering DeepSeek’s Cutting-Edge Models

DeepSeek leverages FlashMLA to achieve remarkable efficiency and scalability in the following models:

  • DeepSeek-R1
  • DeepSeek-V3

By integrating FlashMLA, DeepSeek is pushing the boundaries of AI efficiency and economic feasibility.

Now, let's talk about NVIDIA Hopper.

What is NVIDIA Hopper?

NVIDIA Hopper is a revolutionary GPU architecture designed to supercharge artificial intelligence (AI) and high-performance computing (HPC) workloads. Named after the pioneering computer scientist Grace Hopper, this cutting-edge technology is built to handle large-scale parallel processing with exceptional memory efficiency. It empowers researchers, developers, and enterprises to achieve breakthrough speeds in AI, machine learning, and deep learning applications.

Inside the NVIDIA Hopper Architecture

The NVIDIA Hopper architecture is packed with over 80 billion transistors, built on TSMC’s advanced 4N process. It incorporates key innovations such as NVLink Switch, Confidential Computing, the Transformer Engine, and Second-Generation MIG (Multi-Instance GPU). These technologies fuel the power of NVIDIA’s H100 and H200 GPUs, making them the ultimate choice for AI workloads—from training and inference to generative AI and deep learning.

Source: NVIDIA

Whether you’re tackling massive datasets, training sophisticated AI models, or running complex simulations, NVIDIA Hopper delivers the speed, scalability, and efficiency needed to push the boundaries of AI and computing.

The Performance

The optimized CUDA kernels in DeepSeek AI's implementation achieve a measured 580 TFLOPS (trillion floating-point operations per second) for BF16 (bfloat16) matrix multiplication on the H800 GPU in compute-bound configurations, a throughput that indicates very high utilization of the GPU's tensor cores.

What Does This Imply?

  1. Theoretical Peak vs. Actual Performance
    • Theoretical peak TFLOPS is a rough upper limit of what a GPU can achieve under ideal conditions.
    • In real-world scenarios, actual performance is often lower due to inefficiencies like memory bottlenecks and suboptimal kernel execution.
  2. Breaking the Limits with Optimization
    • DeepSeek’s CUDA Kernels (like FlashMLA) optimize how computations are scheduled and executed on the GPU.
    • They make better use of GPU cores, memory bandwidth, and instruction execution to get much closer to the hardware's practical limits than conventional implementations.
  3. How Is This Possible?
    • The optimizations likely include techniques like tensor core fusion, efficient memory access patterns, and reduced computational overhead.
    • Instead of simply relying on raw TFLOPS, DeepSeek maximizes actual hardware utilization.

The fact that DeepSeek's kernels sustain this level of throughput on real decoding workloads suggests an extremely efficient use of the GPU's computational power, making AI workloads much faster than conventional implementations.
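
If you want a rough sense of achieved BF16 matmul throughput on your own GPU (the 580 TFLOPS figure above is specific to DeepSeek's kernels on H800 hardware), a simple measurement looks like the following; the matrix sizes are arbitrary and results will vary with hardware and library versions.

```python
import torch

assert torch.cuda.is_available(), "requires a CUDA GPU"
m = n = k = 8192
a = torch.randn(m, k, device="cuda", dtype=torch.bfloat16)
b = torch.randn(k, n, device="cuda", dtype=torch.bfloat16)

for _ in range(5):      # warm-up
    a @ b

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 50
start.record()
for _ in range(iters):
    a @ b
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1e3          # elapsed_time returns milliseconds
tflops = 2 * m * n * k * iters / seconds / 1e12  # each matmul costs ~2*M*N*K FLOPs
print(f"Achieved BF16 matmul throughput: {tflops:.0f} TFLOPS")
```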

Conclusion

DeepSeek's release of FlashMLA marks a significant step forward in AI inference efficiency, particularly on Hopper GPUs. By building on Multi-head Latent Attention (MLA), DeepSeek optimizes memory usage while maintaining or even enhancing model performance. The paged KV cache and BF16 support allow for high-speed processing, with memory bandwidth reaching 3000 GB/s and computational performance up to 580 TFLOPS on H800 SXM5 GPUs.

MLA drastically reduces KV cache size—by up to 93.3%—making large-scale AI models more efficient and cost-effective. This innovation is central to DeepSeek-V2 and V3, enabling longer context handling, faster inference, and lower training costs. With FlashMLA, DeepSeek is pushing the limits of AI scalability, making large-scale AI more accessible and practical while setting new standards in model efficiency and economic viability.

Stay tuned to Analytics Vidhya Blog for our detailed analysis on DeepSeek’s Day 2 release!

Hi, I am Pankaj Singh Negi - Senior Content Editor | Passionate about storytelling and crafting compelling narratives that transform ideas into impactful content. I love reading about technology revolutionizing our lifestyle.
