Big news from DeepSeek! The company has officially launched its first open-source repository, leveraging CUDA kernels to enhance the speed and efficiency of LLMs. At the heart of this update is FlashMLA, an advanced Multi-head Latent Attention (MLA) decoding kernel, specifically optimized for Hopper GPUs. This technology handles variable-length sequences more efficiently, making AI model hosting smoother and faster.
Key Highlights of the Release:
FlashMLA brings BF16 support and a paged KV cache for efficient variable-length decoding. Together, these optimizations deliver up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound scenarios when running on H800 SXM5 GPUs with CUDA 12.6.
With this level of performance, AI inference just got a major upgrade! Sounds intriguing, right?
Note: MLA was already used in DeepSeek's earlier models; FlashMLA now provides optimized CUDA kernels that make hosting DeepSeek AI's R1 and V3 models faster!
FlashMLA is an optimized MLA decoding kernel designed specifically for NVIDIA's Hopper GPU architecture. Built with performance in mind, it embodies DeepSeek's commitment to accelerating AI models at scale, ensuring faster, more efficient processing where every millisecond counts.
FlashMLA is designed to run on high-performance GPUs, specifically Hopper architecture GPUs such as the H800 SXM5. It requires CUDA 12.3+ and PyTorch 2.0+ for optimal performance.
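Before trying the kernel, it helps to confirm your environment matches these requirements. The short PyTorch snippet below is a minimal check written for this article (it is not part of the FlashMLA repository):

```python
# Minimal environment check for FlashMLA's stated requirements (sketch, not from the repo).
import torch

def check_flashmla_requirements() -> None:
    assert torch.cuda.is_available(), "No CUDA device visible"

    major, minor = torch.cuda.get_device_capability(0)
    # Hopper GPUs (H100/H800) report compute capability 9.x.
    print(f"GPU: {torch.cuda.get_device_name(0)} (compute capability {major}.{minor})")
    assert major >= 9, "FlashMLA targets Hopper-class GPUs (compute capability 9.0+)"

    # CUDA toolkit version this PyTorch build was compiled against; FlashMLA asks for 12.3+.
    print(f"CUDA (PyTorch build): {torch.version.cuda}")
    # PyTorch 2.0+ is required.
    print(f"PyTorch: {torch.__version__}")

if __name__ == "__main__":
    check_flashmla_requirements()
```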
Based on benchmarks in its official GitHub repository, FlashMLA delivers impressive performance: up to 3000 GB/s of memory bandwidth in memory-bound settings and up to 580 TFLOPS in compute-bound settings on H800 SXM5 GPUs. This combination of high memory bandwidth, efficient caching, and exceptional computational throughput makes FlashMLA a powerful choice for AI workloads requiring extreme performance.
If all of this sounds like jargon, don't worry: I will explain it in depth. Let's start with Multi-head Latent Attention (MLA).
Multi-head Latent Attention (MLA) was introduced with DeepSeek-V2 as a variant of multi-head attention (MHA). It belongs to a family of techniques designed to address a key challenge in scaling large models: reducing the KV-cache size, which can become a major memory bottleneck. Other methods in this category include Grouped-Query Attention (GQA) and Multi-Query Attention (MQA). While these approaches help lower memory usage, they often come with a tradeoff: sacrificing some performance in exchange for greater scalability.
MLA takes a different approach by using a low-rank factorized projection matrix, which works somewhat like multi-query attention. However, instead of simply repeating a single head multiple times, it decompresses a latent vector to generate a unique and appropriate K and V head for each Q head. According to DeepSeek, this method not only reduces memory overhead but actually enhances the model’s performance rather than compromising it.
Multi-head attention (MHA) enhances a model’s ability to capture diverse relationships in data by processing queries, keys, and values independently across multiple attention heads. However, this flexibility comes at a cost, especially during inference. The KV cache, which stores keys and values from previous tokens, expands linearly with sequence length. This quickly becomes a bottleneck, consuming significant GPU memory for long sequences.
For a model with n_h attention heads and a head dimension of d_h, the KV cache size is calculated as:
KV cache size = 2 * seq_len * n_h * d_h elements per layer (one key vector and one value vector of size d_h for each head and each token).
For large sequence lengths, this can exceed memory limits, restricting model scalability and efficiency.
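To make the scaling concrete, here is a small back-of-the-envelope calculator; the batch size, layer count, head count, and dimensions are hypothetical values chosen only to illustrate the formula, not DeepSeek's actual configuration:

```python
# Back-of-the-envelope KV-cache size for standard multi-head attention.
# KV bytes = 2 (K and V) * batch * seq_len * n_layers * n_h * d_h * bytes_per_element.
# All model dimensions below are hypothetical and only illustrate the formula.

def kv_cache_bytes(batch: int, seq_len: int, n_layers: int,
                   n_h: int, d_h: int, bytes_per_element: int = 2) -> int:
    return 2 * batch * seq_len * n_layers * n_h * d_h * bytes_per_element

if __name__ == "__main__":
    # Example: a 32-layer model with 32 heads of dimension 128, BF16 cache (2 bytes/element).
    gib = kv_cache_bytes(batch=8, seq_len=32_768, n_layers=32, n_h=32, d_h=128) / 2**30
    print(f"KV cache: {gib:.1f} GiB")  # 128.0 GiB -- easily exceeds a single GPU's memory
```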
Multi-head Latent Attention (MLA) addresses this challenge by introducing a more compact way to store KV information. Instead of directly caching keys and values, MLA compresses them into a latent vector c_t of dimension d_c for each token t, significantly reducing storage requirements. The process works as follows: the token's hidden state h_t is first down-projected into the latent vector c_t, and only c_t is cached; at attention time, the keys and values are reconstructed as k_t = W^{UK} c_t and v_t = W^{UV} c_t. Here, W^{UK} and W^{UV} are transformation matrices mapping d_c back to n_h * d_h.
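A minimal PyTorch sketch of this compress-then-expand idea is shown below; the dimensions and the down-projection layer (named W_DKV here) are illustrative assumptions, and DeepSeek's real implementation additionally handles rotary position embeddings separately:

```python
# Minimal sketch of MLA-style KV compression (illustrative only; DeepSeek's actual
# implementation also splits out RoPE components and uses different shapes).
import torch
import torch.nn as nn

d_model, d_c, n_h, d_h = 1024, 64, 16, 64          # hypothetical sizes

W_DKV = nn.Linear(d_model, d_c, bias=False)        # compress hidden state -> latent c_t
W_UK  = nn.Linear(d_c, n_h * d_h, bias=False)      # expand latent -> per-head keys
W_UV  = nn.Linear(d_c, n_h * d_h, bias=False)      # expand latent -> per-head values

h_t = torch.randn(1, d_model)                      # hidden state of one token

c_t = W_DKV(h_t)                                   # only this (1, d_c) vector is cached
k_t = W_UK(c_t).view(1, n_h, d_h)                  # reconstructed keys, one per head
v_t = W_UV(c_t).view(1, n_h, d_h)                  # reconstructed values, one per head

# Cache cost per token: d_c values instead of 2 * n_h * d_h for standard MHA.
print(c_t.numel(), "cached values vs", 2 * n_h * d_h, "for MHA")
```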
This approach drastically cuts memory usage—DeepSeek-V2 demonstrates up to 93.3% reduction, allowing for longer context handling and more efficient processing.
By leveraging MLA, models can achieve longer context understanding while keeping hardware requirements manageable, unlocking new possibilities for efficient large-scale AI applications.
Key-value (KV) caching is a powerful optimization technique that accelerates the autoregressive decoding process by storing and reusing previously computed key-value pairs, rather than recalculating them at each step.
KV caching is used primarily during inference, since training still processes the entire input sequence at once. By reusing cached keys and values, we avoid redundant computations and significantly improve efficiency.
KV caching typically operates as a rolling buffer: during each decoding step, only the new token's query, key, and value are computed; the new key and value are appended to the cache; and attention for the new token is computed against all cached keys and values instead of recomputing them.
This approach reduces computational overhead, making autoregressive models more efficient. However, it comes with a trade-off: increased memory usage. Since the KV cache scales proportionally with batch size, sequence length, hidden size, and the number of attention heads, it can quickly become a memory bottleneck, especially for large batches or long sequences.
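As a toy illustration of that loop, the sketch below appends one token's keys and values to plain PyTorch tensors at every step; it is deliberately simplified and not an optimized kernel:

```python
# Toy KV-cache loop for greedy decoding (illustrative sketch, not an optimized kernel).
import torch

n_h, d_h = 8, 64                         # hypothetical head count and head dimension
k_cache = torch.empty(0, n_h, d_h)       # grows by one row per generated token
v_cache = torch.empty(0, n_h, d_h)

def fake_projections(step: int):
    """Stand-in for the model's Q/K/V projections at one decoding step."""
    torch.manual_seed(step)
    q = torch.randn(1, n_h, d_h)
    k = torch.randn(1, n_h, d_h)
    v = torch.randn(1, n_h, d_h)
    return q, k, v

for step in range(4):
    q_new, k_new, v_new = fake_projections(step)

    # Append only the new token's K/V; earlier entries are reused, not recomputed.
    k_cache = torch.cat([k_cache, k_new], dim=0)
    v_cache = torch.cat([v_cache, v_new], dim=0)

    # Attention for the new token against every cached position.
    scores = torch.einsum("qhd,khd->hqk", q_new, k_cache) / d_h ** 0.5
    out = torch.einsum("hqk,khd->qhd", scores.softmax(dim=-1), v_cache)
    print(f"step {step}: cache length = {k_cache.shape[0]}, output shape = {tuple(out.shape)}")
```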
To tackle these memory constraints, two key strategies have emerged: compressing the cached keys and values into a smaller representation (the approach MLA takes, and the motivation behind MQA and GQA), and managing the cache in fixed-size blocks via a paged KV cache so memory is allocated only as needed. By integrating these techniques, KV caching enables faster and more scalable inference, making it an essential component in modern transformer-based architectures.
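The second strategy, a paged KV cache, can be pictured with the tiny block-table sketch below; the block size and data layout here are hypothetical, and real implementations manage blocks directly in GPU memory:

```python
# Tiny sketch of a paged KV cache: logical token positions are mapped to
# fixed-size physical blocks through a block table (illustrative only).
BLOCK_SIZE = 64                          # tokens per physical block (hypothetical value)

class PagedKVCache:
    def __init__(self):
        self.blocks: list[list] = []     # physical block storage
        self.block_table: list[int] = [] # logical block index -> physical block id

    def append(self, kv):
        """Append one token's (possibly compressed) KV entry, allocating blocks on demand."""
        if not self.blocks or len(self.blocks[self.block_table[-1]]) == BLOCK_SIZE:
            self.blocks.append([])                     # allocate a fresh block
            self.block_table.append(len(self.blocks) - 1)
        self.blocks[self.block_table[-1]].append(kv)

    def get(self, pos: int):
        """Look up the KV entry for logical token position `pos`."""
        block_id = self.block_table[pos // BLOCK_SIZE]
        return self.blocks[block_id][pos % BLOCK_SIZE]

cache = PagedKVCache()
for t in range(150):                     # 150 tokens -> 3 blocks of up to 64 tokens
    cache.append(("k", "v", t))
print(len(cache.blocks), cache.get(130)) # -> 3 ('k', 'v', 130)
```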
DeepSeek leverages MLA, and now the FlashMLA kernels, to achieve remarkable efficiency and scalability across its models, including DeepSeek-V2, DeepSeek-V3, and DeepSeek-R1.
By integrating FlashMLA, DeepSeek is pushing the boundaries of AI efficiency and economic feasibility.
Now, let's talk about the NVIDIA Hopper architecture.
NVIDIA Hopper is a revolutionary GPU architecture designed to supercharge artificial intelligence (AI) and high-performance computing (HPC) workloads. Named after the pioneering computer scientist Grace Hopper, this cutting-edge technology is built to handle large-scale parallel processing with exceptional memory efficiency. It empowers researchers, developers, and enterprises to achieve breakthrough speeds in AI, machine learning, and deep learning applications.
The NVIDIA Hopper architecture is packed with over 80 billion transistors, built on TSMC’s advanced 4N process. It incorporates key innovations such as NVLink Switch, Confidential Computing, the Transformer Engine, and Second-Generation MIG (Multi-Instance GPU). These technologies fuel the power of NVIDIA’s H100 and H200 GPUs, making them the ultimate choice for AI workloads—from training and inference to generative AI and deep learning.
Whether you’re tackling massive datasets, training sophisticated AI models, or running complex simulations, NVIDIA Hopper delivers the speed, scalability, and efficiency needed to push the boundaries of AI and computing.
The optimized CUDA kernels in DeepSeek AI's implementation reach a measured 580 TFLOPS (trillion floating-point operations per second) for BF16 (bfloat16) matrix multiplication on the H800 SXM5, a substantial share of what the GPU can realistically sustain at this precision.
Sustaining that level of throughput in a real decoding workload points to extremely efficient use of the GPU's tensor cores and memory hierarchy, making AI workloads much faster than conventional implementations.
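Throughput figures like this are usually obtained by timing large BF16 matrix multiplications. The rough measurement sketch below shows the general approach; it is not DeepSeek's benchmark script, and the matrix size is an arbitrary choice:

```python
# Rough BF16 matmul throughput measurement (sketch; not DeepSeek's benchmark code).
import torch

def bf16_matmul_tflops(n: int = 8192, iters: int = 20) -> float:
    a = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)
    _ = a @ b                                         # warm-up run
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        _ = a @ b
    end.record()
    torch.cuda.synchronize()

    seconds = start.elapsed_time(end) / 1000          # elapsed_time returns milliseconds
    flops = 2 * n**3 * iters                          # multiply-adds in an n x n matmul
    return flops / seconds / 1e12                     # TFLOPS

if __name__ == "__main__":
    print(f"{bf16_matmul_tflops():.0f} TFLOPS (BF16 matmul)")
```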
DeepSeek's release of FlashMLA marks a significant step forward in AI inference efficiency, particularly on Hopper GPUs. By pairing Multi-head Latent Attention (MLA) with heavily optimized decoding kernels, DeepSeek cuts memory usage while maintaining or even enhancing model performance. The paged KV cache and BF16 support allow for high-speed processing, with memory bandwidth reaching 3000 GB/s and computational performance up to 580 TFLOPS on H800 SXM5 GPUs.
MLA drastically reduces KV cache size—by up to 93.3%—making large-scale AI models more efficient and cost-effective. This innovation is central to DeepSeek-V2 and V3, enabling longer context handling, faster inference, and lower training costs. With FlashMLA, DeepSeek is pushing the limits of AI scalability, making large-scale AI more accessible and practical while setting new standards in model efficiency and economic viability.
Stay tuned to Analytics Vidhya Blog for our detailed analysis on DeepSeek’s Day 2 release!