
    DeepSeek’s new Engram technique could slash AI memory costs while boosting reasoning power and easing global DRAM pressure

    By Awais · January 17, 2026 · 3 min read


    • DeepSeek’s Engram separates static memory from computation, increasing efficiency in large AI models
    • The method cuts high-speed memory demand by letting models fetch static knowledge through table lookups instead of computation
    • Engram supports asynchronous prefetching across multiple GPUs with minimal performance overhead

    DeepSeek, in collaboration with Peking University, introduced a new training method called Engram, designed to decouple memory storage from computational processes.

    Traditional large language models rely on high-bandwidth memory (HBM) for both knowledge retrieval and basic computation, creating a bottleneck in performance and cost.

    This HBM bottleneck is widely recognized as a key reason DRAM prices rose by 5X in just 10 weeks, as hardware demand spiked to support large AI models.


    Validation and technical approach

    The researchers said existing models waste sequential depth on trivial operations, depth that could otherwise be spent on higher-level reasoning.

    Engram allows models to efficiently “look up” essential information without overloading GPU memory, freeing capacity for more complex reasoning tasks.

    The system was tested on a 27-billion-parameter model and showed measurable improvements across standard industry benchmarks.

    By performing knowledge retrieval through hashed N-grams, Engram provides static memory access independent of the current context.
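
    DeepSeek has not published reference code for Engram, so the snippet below is only a minimal sketch of the retrieval step: hash the trailing N tokens at each position into a fixed-size table and read back the stored embedding. The table size, hash function, and dimensions are all assumptions for illustration, not values from the paper.

```python
import torch

# Illustrative sizes only -- not DeepSeek's actual configuration.
NUM_SLOTS = 1 << 20   # number of memory slots in the static table
EMBED_DIM = 1024      # width of each stored embedding
N = 3                 # n-gram order

# The static table; in Engram this can live in cheap system memory
# instead of scarce GPU HBM, because access is a plain indexed read.
memory = torch.randn(NUM_SLOTS, EMBED_DIM)

def ngram_lookup(token_ids: torch.Tensor) -> torch.Tensor:
    """Map each position's trailing n-gram to a memory slot and read it.

    token_ids: (seq_len,) int64 tensor of token IDs.
    Returns: (seq_len, EMBED_DIM) retrieved embeddings.
    """
    keys = torch.zeros_like(token_ids)
    for offset in range(N):
        shifted = torch.roll(token_ids, offset)
        shifted[:offset] = 0             # pad before the sequence start
        keys = keys * 1000003 + shifted  # polynomial hash; int64 wrap is fine
    slots = keys % NUM_SLOTS             # deterministic slot index
    return memory[slots]                 # a lookup, not a computation
```

    Because the slot index depends only on the token IDs, never on the hidden state, the reads can be planned before the forward pass reaches the layer that consumes them.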


    The retrieved information is then adjusted using a context-aware gating mechanism to align with the model’s hidden state.
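
    A minimal version of such a gate, written against the lookup sketch above, could look like the following; the layer shapes and the sigmoid-residual form are assumptions, as the paper's exact mechanism may differ.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Blend retrieved memory into the hidden state via a learned gate.

    A sketch of context-aware gating; Engram's actual design may differ.
    """
    def __init__(self, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(embed_dim, hidden_dim)       # match widths
        self.gate = nn.Linear(hidden_dim * 2, hidden_dim)  # gate from both

    def forward(self, hidden: torch.Tensor, retrieved: torch.Tensor):
        r = self.proj(retrieved)                           # (seq, hidden)
        g = torch.sigmoid(self.gate(torch.cat([hidden, r], dim=-1)))
        return hidden + g * r   # gate decides how much memory to admit
```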

    This design allows models to handle long context inputs more efficiently and supports system-level prefetching with minimal performance overhead.

    The Engram method complements other hardware-efficiency approaches, such as Phison’s SSD-based AI inference accelerators.



    Engram minimizes the amount of high-speed memory required by serving static information through lookups rather than recomputation.

    Phison offers a cost-effective way to expand total memory using SSDs, supporting large AI workloads such as Engram-augmented models or Mixture-of-Experts systems.

    Combined, these approaches allow AI systems to optimize fast-memory usage while affordably increasing overall memory capacity.

    Engram also works alongside emerging Compute Express Link (CXL) standards, which aim to overcome GPU memory bottlenecks in large-scale AI workloads.

    The method separates static pattern storage from dynamic computation, enhancing the Transformer backbone without increasing FLOPs or parameter counts.

    DeepSeek formalized a U-shaped expansion rule to optimize the allocation of parameters between the MoE conditional computation module and the Engram memory module.

    Tests show that reallocating around 20–25% of the sparse parameter budget to Engram yields better performance than pure MoE models, maintaining stable gains across different scales.
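
    To make that reallocation concrete, here is the arithmetic for a hypothetical budget; the 20–25% range comes from the reported tests, while the budget size and slot width below are assumed.

```python
sparse_budget = 20e9          # hypothetical sparse parameter budget
engram_share = 0.25           # top of the 20-25% range reported
embed_dim = 1024              # assumed width of each memory slot

engram_params = sparse_budget * engram_share   # 5.0e9 parameters
moe_params = sparse_budget - engram_params     # 15.0e9 for MoE experts
num_slots = int(engram_params // embed_dim)    # ~4.9M memory slots

print(f"MoE: {moe_params:.2e} params, Engram: {engram_params:.2e} "
      f"params ({num_slots:,} slots of width {embed_dim})")
```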

    Memory slot expansion provides predictable improvements without additional computational cost.

    This confirms the scalability of conditional memory as an independent axis for sparse models.

    Engram’s deterministic retrieval mechanism allows memory capacity to scale linearly across multiple GPUs while supporting asynchronous prefetching during inference.
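
    That linear scaling follows from the deterministic slot index: every rank can compute which shard owns a slot without any communication, so fetches can be issued early. The sketch below models this with CPU tensors and a thread pool standing in for GPUs and copy streams; the modulo sharding scheme is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor
import torch

NUM_SHARDS = 8                # one table shard per GPU (illustrative)
SLOTS_PER_SHARD = 1 << 17
EMBED_DIM = 1024

# Each shard would live on its own device; CPU tensors stand in here.
shards = [torch.randn(SLOTS_PER_SHARD, EMBED_DIM)
          for _ in range(NUM_SHARDS)]
pool = ThreadPoolExecutor(max_workers=NUM_SHARDS)

def prefetch(slots: torch.Tensor) -> list:
    """Start per-shard gathers before the consuming layer runs.

    Ownership is slot % NUM_SHARDS, computable from token IDs alone,
    so the fetches can overlap with earlier layers' computation.
    """
    futures = []
    for rank in range(NUM_SHARDS):
        local = slots[slots % NUM_SHARDS == rank] // NUM_SHARDS
        futures.append(pool.submit(lambda idx=local, r=rank: shards[r][idx]))
    return futures  # call future.result() when the layer needs the rows
```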

    It offloads static knowledge reconstruction from lower layers, freeing attention mechanisms to focus on global context.

    Hierarchical caching of frequently used embeddings enhances efficiency, and the module works with existing GPU and system memory architectures, potentially avoiding costly HBM upgrades.
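
    The caching tier could be as simple as a small LRU front for hot slots. The toy class below is one way to express the idea; the capacity and eviction policy are assumptions, not details from the paper.

```python
from collections import OrderedDict
import torch

class EmbeddingCache:
    """LRU tier in front of the full table; a toy sketch only.

    Hot rows stay in fast memory; misses fall through to the full table,
    which could reside in system RAM or on SSD.
    """
    def __init__(self, table: torch.Tensor, capacity: int = 4096):
        self.table = table
        self.capacity = capacity
        self.hot: OrderedDict[int, torch.Tensor] = OrderedDict()

    def get(self, slot: int) -> torch.Tensor:
        if slot in self.hot:                  # hit: refresh recency
            self.hot.move_to_end(slot)
            return self.hot[slot]
        row = self.table[slot]                # miss: slow-path read
        self.hot[slot] = row
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)      # evict least recently used
        return row
```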

    This technique may relieve pressure on expensive memory hardware, particularly in regions such as China, where domestic chipmakers lag behind HBM leaders Samsung, SK Hynix, and Micron.

    Early validation of Engram suggests models can expand parameter scale and reasoning capacity while managing memory demands more efficiently.

    This approach may help ease memory constraints across AI infrastructure, potentially reducing sharp DDR5 DRAM price swings.

    Via SCMP

