
Does the "lobster" (OpenClaw) significantly extend the memory cycle?
Morgan Stanley's report argues that agentic AI, represented by OpenClaw, will shift demand from "generating answers" to "completing tasks." Frequent tool calls and multi-step orchestration in these workflows drive a surge in CPU computation, which now contributes significantly to overall latency. At the same time, because agents must frequently share context and offload KV caches, DRAM has replaced HBM as the hard-constraint bottleneck. Morgan Stanley expects this to trigger a surge in memory prices in Q2 2026, and has raised its earnings forecasts for SK Hynix and Samsung accordingly.
Agentic AI tools represented by OpenClaw are pushing the memory market's demand logic into a new paradigm. According to the Fengchao trading desk, a Morgan Stanley report released on March 18 argues that AI's transition from "thinking" to "executing" will make DRAM replace HBM as the most binding chip bottleneck in AI infrastructure, producing a memory cycle that lasts longer than expected.
Channel surveys indicate that server DRAM DDR5 prices are expected to rise by over 50% quarter-on-quarter in the second quarter of 2026, with some Chinese hyperscale cloud providers bidding even higher; DDR4 contract prices are expected to increase 40%-50%, and enterprise-grade NAND SSD prices are expected to rise by at least 40%-50%. Morgan Stanley believes the current memory cycle is in its mid-upturn phase and that supply is tightening more than previously judged: "Wall Street's profit forecasts will have to catch up with reality."
This judgment has been directly reflected in target price adjustments: SK Hynix's EPS forecasts for 2026-2027 have been raised by 24% and 32%, respectively, with the target price increasing from 1.1 million won to 1.3 million won, implying a 43% upside from the current price; Samsung Electronics' common stock target price has risen to 251,000 won, with both stocks maintaining an "overweight" rating.
Morgan Stanley's core judgment is: the market is accustomed to linear thinking, while the capability expansion of the AI intelligence layer is advancing at an exponential rate—when AI transitions from "generating answers" to "completing tasks," the scale of memory demand will leap accordingly, and this transformation has just begun to accelerate.
"Doing" Requires More Memory Than "Thinking"
The logical starting point of Morgan Stanley's report is a seemingly simple yet profound judgment: "Doing requires more DRAM than thinking."
The working mode of traditional large language models (LLMs) is a GPU-dominated linear pipeline: receive a question, batch-process all input tokens (the prefill phase), then generate the response token by token (the decode phase), with the CPU converting the results into text output. In this process, GPU compute is the decisive bottleneck, and DRAM plays only a supporting role, handling cache reads and writes.
The emergence of Agentic AI has completely changed this logic. Taking OpenClaw as an example, this open-source self-hosted AI assistant can connect to over 50 messaging platforms simultaneously, including WhatsApp, Telegram, Slack, and Signal, and has system-level permissions for browser automation, file operations, command line execution, and API calls. It does not "answer questions," but rather "completes tasks"—searching the web, reading documents, calling external tools, executing code, and ultimately outputting a set of action results generated through multi-step collaboration.
The core technical implication of this paradigm shift is that the workflow expands from single-pass GPU inference to multi-step coordination, tool invocation, and orchestration, where CPU computation often contributes more to overall latency than the GPU does. Meanwhile, multiple agents must continuously share context, offload the KV cache (key-value cache), and store and retrieve the results of each intermediate step, so memory moves from the back end of the compute chain to the position of core bottleneck.
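The shape of this workflow can be made concrete with a minimal sketch of an agentic loop. All function names below are hypothetical placeholders, not OpenClaw's actual API; the point is that every intermediate result is held in memory and the final step operates over all of them, so peak DRAM use grows with the number of steps rather than with a single prompt/response pair.

```python
# Minimal agentic-loop sketch. Each step calls a tool and appends its result
# to a shared context kept in DRAM; the final step consumes the whole context.
# All tool functions are hypothetical stand-ins for real integrations.

def search_web(query):        # placeholder for a web-search tool
    return f"results for: {query}"

def read_document(ref):       # placeholder for a document reader
    return f"contents of {ref}"

def summarize(texts):         # placeholder for an LLM summarization call
    return " | ".join(texts)

def run_agent(task):
    context = []                            # shared context held in DRAM
    context.append(search_web(task))        # step 1: retrieve
    context.append(read_document("doc-1"))  # step 2: read
    return summarize(context)               # step 3: synthesize over all steps

print(run_agent("DRAM demand"))
```

Each added step lengthens `context`, which is why multi-step orchestration shifts pressure from GPU compute to system memory and CPU-managed I/O.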

OpenClaw: An Extreme Magnifying Glass for Memory Requirements
Morgan Stanley conducted a detailed quantitative analysis of OpenClaw's memory requirements, concluding that in such intelligent agent tools, DRAM is paramount, while other hardware constraints take a back seat.
This tool has two distinctly different operating modes:
Lightweight gateway mode (remotely calling external APIs such as Claude or GPT-4): even in this mode, the bottleneck is no longer the GPU or CPU but the Node.js runtime's consumption of DRAM. The practical minimum is 2GB of DRAM, with 4GB recommended for stable production-grade operation.
Local model mode (loading and running AI models directly on the device): system DRAM and GPU memory become dual constraints. Morgan Stanley recommends 32GB of system DRAM; running models with 7-8 billion parameters requires an additional 8GB of GPU memory, models with 13-70 billion parameters require 16-24GB, and very large models such as Llama 3 70B and Qwen 72B require over 80GB.
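These figures line up with simple back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter. The sketch below is an illustration of that rule of thumb, not a calculation from the report, and it covers weights only (KV cache and activations add more on top).

```python
# Rough estimate of model-weight memory: parameters x bytes per parameter.
# 2 bytes/param corresponds to FP16/BF16; quantized weights use 1 byte (8-bit)
# or 0.5 bytes (4-bit). Excludes KV cache and activation memory.
def weight_memory_gb(params_billion, bytes_per_param=2):
    return params_billion * 1e9 * bytes_per_param / 1e9  # = params_billion * bytes

for params in (7, 13, 70):
    print(f"{params}B params @ FP16 ~ {weight_memory_gb(params):.0f} GB")
```

At FP16 a 7B model needs about 14GB for weights alone, so the report's 8GB figure for 7-8B models implies quantized weights; similarly, 70B-class models land near the "over 80GB" mark only at reduced precision plus runtime overhead.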
The report specifically points out that the consequence of insufficient memory is not degraded performance but an outright crash: the Node.js runtime throws a "heap out of memory" error, causing installation failures and runtime interruptions. This detail reveals the hard-constraint nature of memory in agentic scenarios: insufficient memory does not mean slow; it means dead.
Migration of Computing Bottlenecks: From HBM to System Memory
The memory requirement characteristics of OpenClaw are a microcosm of a broader structural transformation.
Morgan Stanley notes that the AI computing bottleneck is undergoing a systematic migration: from computing power itself to data movement, from HBM to system memory (DRAM), with the entire memory hierarchy architecture evolving from being HBM-centric to a multi-layer structure combining HBM, DRAM, and NVMe NAND SSD.
One of the technological drivers of this transition is the rapid expansion of long context demands. KV caching grows linearly with the number of tokens, and in distributed inference scenarios, it must be transmitted over the network, significantly increasing the CPU's I/O management burden. Core operations of intelligent agents, such as RAG retrieval and context management, all involve intensive memory I/O.
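The linear growth of the KV cache can be illustrated with the standard sizing formula. The parameter values below are assumptions for a generic 7B-class dense model (32 layers, 32 KV heads, head dimension 128, FP16), not figures from the report.

```python
# KV cache size grows linearly with context length:
# size = 2 (K and V) x layers x kv_heads x head_dim x bytes_per_value x tokens.
def kv_cache_gb(tokens, layers=32, kv_heads=32, head_dim=128, bytes_per_value=2):
    per_token_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value
    return tokens * per_token_bytes / 1e9

# Illustrative 7B-class dense model at FP16 (~0.5 MB of KV cache per token):
for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_gb(ctx):.1f} GB")
```

At roughly half a megabyte per token, a 128K-token context alone consumes tens of gigabytes, which is why long-context agent workloads push KV cache out of HBM into system DRAM and, in distributed inference, across the network.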
Market-level evidence is equally clear. According to Morgan Stanley, both Intel and AMD have recently confirmed a substantial supply-demand imbalance for high-core-count server processors; AMD EPYC CPU revenue has for the first time exceeded 40% of total server CPU revenue, and cloud instance deployments featuring EPYC have grown over 50% year-on-year. NVIDIA has launched the separately sold Vera CPU and signed a multi-year agreement with Meta to deploy standalone CPUs at scale, for the first time, for personal-agent workloads.

Price Acceleration: Mid-Cycle, Space Still Exists
The aforementioned structural changes have already manifested significantly at the price level.
On the DRAM side, server DDR5 has already traded in limited spot volumes for the second quarter of 2026 at a 50% quarter-on-quarter increase; major cloud service providers have accepted this price, and some Chinese cloud companies are bidding even higher. By the end of February, the contract price for 64GB RDIMM had risen to $910-$920, roughly 14% above the first-quarter average of $800. Second-quarter price increases for LPDDR and consumer-electronics DRAM are expected to be at least 40%-50%, and DDR4 contract prices are expected to rise 40%-50%. HBM3E, previously expected to fall 20%-25% in price, has instead turned to a single-digit percentage increase in renewed ASIC customer contracts.
On the NAND side, enterprise SSD pricing in the second quarter is expected to rise 40%-50% quarter-on-quarter, while consumer products are expected to rise no less than 60%; in some scenarios, eSSD prices may double again in the second quarter.
Morgan Stanley believes the year-on-year price acceleration is continuing and the cycle remains in its mid-upturn phase. Once the market adjusts profit forecasts to reflect the unprecedented capacity constraints, the related stocks have significant room to re-rate; potential increases in capital returns could further support outperformance.

