Published on March 25, 2026 | Views: 96 | 7 min read
Stop using torch.cat for your KV cache implementations
Tags: llm, kv-cache, pytorch, inference, optimization, transformers
tl;dr: `torch.cat` is not in-place; use pre-allocated buffers instead.
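A minimal sketch of the pre-allocated-buffer pattern the tl;dr points at (shapes and variable names here are illustrative, not taken from the post): growing the cache with `torch.cat` allocates a fresh tensor and copies the old cache every decode step, whereas a buffer sized for the maximum sequence length is written into in-place.

```python
import torch

# Hypothetical sizes, for illustration only.
batch, n_heads, head_dim, max_seq_len = 1, 8, 64, 2048

# Growing the cache with torch.cat: each step allocates a new tensor
# and copies the existing cache into it (not in-place).
k_cache = torch.empty(batch, n_heads, 0, head_dim)
new_k = torch.randn(batch, n_heads, 1, head_dim)
k_cache = torch.cat([k_cache, new_k], dim=2)  # new allocation + full copy

# Pre-allocated buffer: allocate once for max_seq_len, then write each
# step's keys into the next free slot in-place.
k_buffer = torch.empty(batch, n_heads, max_seq_len, head_dim)
cur_len = 0
k_buffer[:, :, cur_len : cur_len + 1] = new_k  # in-place write into the slot
cur_len += 1
k_view = k_buffer[:, :, :cur_len]  # view over the filled prefix, no copy
```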
Published on February 9, 2026 | Views: 2739 | 40 min read
Understanding DeepSeek's Multi-Head Latent Attention (MLA)
Tags: llm, attention, transformers, deepseek, mla, kv-cache, inference
On bottlenecks in attention, KV caching, long-context decoding, attention variants, and how DeepSeek MLA came to be. Part 1 of the FlashMLA blog series.