"Father of HBM": The commercialization process of High Bandwidth Flash (HBF) exceeds expectations and may be integrated into GPUs within 2-3 years, with a market size surpassing HBM

Wallstreetcn
2026.01.17 03:02

The latest prediction from Kim Joungho, known as the "father of HBM," indicates that the commercialization of High Bandwidth Flash (HBF) has accelerated well beyond expectations. The technology is expected to be integrated into GPU products from NVIDIA, AMD, and others within the next 2-3 years. With the roughly tenfold capacity advantage of NAND stacking, HBF is positioned to fill HBM's capacity gap in AI inference scenarios. The market expects large-scale adoption in the HBM6 phase, and by 2038 HBF's market size may even surpass HBM's, making it a key force in next-generation high-bandwidth memory.

The commercialization process of High Bandwidth Flash (HBF) is accelerating, and this new storage technology, regarded as the "NAND version of HBM," is expected to land earlier than anticipated. Kim Joungho, a professor at the Korea Advanced Institute of Science and Technology and known as the "father of HBM," recently revealed that Samsung Electronics and SanDisk plan to integrate HBF into products from NVIDIA, AMD, and Google by the end of 2027 to early 2028.

Kim Joungho pointed out that, thanks to the process and design experience accumulated with HBM, commercialization of HBF will proceed much faster than HBM's original development cycle did. He predicts that HBF will reach widespread application during the launch phase of HBM6, and estimates that by around 2038 its market size may surpass that of HBM.

The continuous growth of AI workloads is a key driver for the development of HBF. Compared to traditional DRAM-based HBM, HBF provides about 10 times the capacity while maintaining high bandwidth through vertically stacked NAND flash memory, making it particularly suitable for large-capacity scenarios such as AI inference. Currently, Samsung Electronics and SK Hynix have signed a memorandum of understanding with SanDisk to jointly promote the standardization of HBF, aiming to bring products to market by 2027.

HBF Technical Advantages: Balancing Capacity and Bandwidth

HBF adopts a vertical stacking architecture similar to HBM, but stacks NAND flash chips instead of DRAM dies, a key difference that yields a large capacity improvement. According to industry analysis, HBF bandwidth can exceed 1,638 GB/s, far surpassing the roughly 7 GB/s of NVMe PCIe 4.0 SSDs, while its capacity is expected to reach 512 GB per stack, well beyond the 64 GB limit of an HBM4 stack.

Kim Joungho further explained HBF's positioning in AI workflows: Currently, GPUs need to read variable data from HBM for AI inference, but in the future, this task can be handled by HBF. Although HBM is faster, HBF can provide about 10 times the capacity of HBM, making it more suitable for large-capacity data processing scenarios.

In terms of technical limitations, Kim Joungho noted that HBF supports effectively unlimited read operations, but write cycles are limited (to roughly 100,000), which requires companies like OpenAI and Google to build read-centric architectures into their software designs. He offered a vivid analogy:

“If HBM is likened to a family bookshelf, HBF is like studying at a library—slower in speed, but with a much larger knowledge base available for reference.”
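The read-centric software design the article describes can be illustrated with a minimal sketch. This is a hypothetical placement policy, not any vendor's actual API: write-hot data stays in a small DRAM-backed (HBM) tier to spare HBF's finite write cycles, while read-mostly data such as inference model weights lands in the large HBF tier.

```python
# Hypothetical sketch of a read-centric tiering policy for an HBM + HBF
# system. Assumption: the only figure taken from the article is HBF's
# ~100,000 write-cycle endurance; all names and thresholds are invented.

HBF_WRITE_ENDURANCE = 100_000  # cited per-block write-cycle limit

class TieredStore:
    """Route frequently rewritten keys to a fast DRAM tier (HBM) and
    read-mostly keys (e.g. model weights) to the large HBF tier."""

    def __init__(self, write_hot_threshold=100):
        self.write_counts = {}           # key -> observed write count
        self.hbm = {}                    # small, fast, write-friendly tier
        self.hbf = {}                    # large, read-optimized tier
        self.threshold = write_hot_threshold

    def write(self, key, value):
        n = self.write_counts.get(key, 0) + 1
        self.write_counts[key] = n
        if n > self.threshold:
            # Write-hot key: keep it in HBM to avoid wearing out HBF.
            self.hbf.pop(key, None)
            self.hbm[key] = value
        else:
            # Cold or write-once key: store it in the big HBF tier.
            self.hbf[key] = value

    def read(self, key):
        # Reads are effectively unlimited on both tiers.
        return self.hbm.get(key, self.hbf.get(key))
```

In this sketch, weights written once end up in HBF and are read freely thereafter, while a key that keeps being rewritten migrates to HBM — the "bookshelf vs. library" split from the analogy above.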

Industry Layout: Storage Giants Accelerate Progress

SK Hynix is expected to launch a trial version of HBF and conduct technical demonstrations later this month. Previously, Samsung Electronics and SK Hynix signed a memorandum of understanding with SanDisk to jointly establish a consortium to promote the standardization process of HBF. Currently, both companies are actively developing related products.

According to TrendForce, SanDisk was the first to release an HBF prototype, in February 2025, and has established a technical advisory committee. In August of the same year, the company signed a memorandum of understanding with SK Hynix aimed at promoting specification standardization, with plans to deliver engineering samples in the second half of 2026 and achieve commercialization in early 2027. Samsung Electronics has also initiated the conceptual design phase for its own HBF products.

The technical implementation of HBF mainly relies on Through-Silicon Via (TSV) technology to achieve vertical stacking of multi-layer NAND chips, utilizing advanced 3D stacking architecture and chip-to-wafer bonding processes. Each package can stack up to 16 NAND chips, supporting multi-array parallel access, with bandwidth reaching 1.6TB/s to 3.2TB/s, comparable to HBM3 performance. The maximum single stack capacity is 512GB, and if an 8-stack configuration is used, the total capacity can reach 4TB, equivalent to 8 to 16 times the capacity of HBM.
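The capacity figures above can be checked with simple arithmetic. A minimal sketch, using only the numbers quoted in this article (512 GB per HBF stack, a 64 GB HBM4 stack, and an 8-stack configuration):

```python
# Back-of-the-envelope check of the stacking figures quoted above.
# All inputs are numbers cited in the article, not independent data.

HBF_STACK_GB = 512    # maximum single HBF stack capacity
HBM4_STACK_GB = 64    # cited HBM4 per-stack limit
STACKS = 8            # 8-stack package configuration

hbf_total_gb = HBF_STACK_GB * STACKS    # 4096 GB = 4 TB total
hbm_total_gb = HBM4_STACK_GB * STACKS   # 512 GB with the same stack count
ratio = hbf_total_gb / hbm_total_gb     # 8x: the low end of "8 to 16 times"

print(hbf_total_gb, hbm_total_gb, ratio)
```

The 8x ratio matches the lower bound of the article's "8 to 16 times" claim; the upper bound would correspond to comparing against a smaller HBM configuration.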

Future Architecture: From HBM6 to "Memory Factory"

Kim Joungho predicts that HBF will achieve widespread application during the rollout phase of HBM6. He pointed out that after entering the HBM6 era, systems will no longer rely on a single stack but will form "storage clusters" through interconnections, similar to the construction logic of modern residential complexes. DRAM-based HBM is significantly limited in capacity, while NAND-stacked HBF will effectively fill this gap.

In terms of system architecture evolution, Kim Joungho proposed a more streamlined data path concept. Currently, GPUs obtain data through a complex transmission process involving storage networks, data processors, and GPU pipelines, while in the future, direct processing of data is expected to be achieved close to HBM. This architecture, referred to as the "Memory Factory," is anticipated to emerge in the HBM7 phase, greatly enhancing data processing efficiency.

In the future, HBF will sit alongside HBM, deployed around GPUs and other AI accelerators. Kim Joungho stated, "I believe that within 2 to 3 years, HBF will become a familiar term." He further noted that HBF will enter a rapid development phase and gradually take on a core role in backend data storage.

Looking at the long-term market, Kim Joungho predicts that by around 2038, the market size of HBF is expected to surpass that of HBM. This judgment is based on the continuous demand for high-capacity storage in AI inference scenarios and the inherent advantages of NAND flash in storage density compared to DRAM. However, due to the physical characteristics of NAND, HBF has higher latency than DRAM, making it more suitable for read-intensive AI inference tasks rather than applications that are extremely sensitive to latency.