In This Article:
- The company plans to supply the highest-performing, highest-capacity 12-layer HBM3E to customers by the end of the year
- DRAM chips made 40% thinner to increase capacity by 50% at the same thickness as the previous 8-layer product
- The company to continue HBM's success with outstanding product performance and competitiveness
SEOUL, South Korea, Sept. 25, 2024 /PRNewswire/ -- SK hynix Inc. (or 'the company', www.skhynix.com) announced today that it has begun mass production of the world's first 12-layer HBM3E product with 36GB[1], the largest capacity of any HBM[2] to date.
[1] Previously, the maximum capacity of HBM3E was 24GB from eight vertically stacked 3GB DRAM chips.
[2] HBM (High Bandwidth Memory): This high-value, high-performance memory vertically interconnects multiple DRAM chips and dramatically increases data processing speed in comparison to traditional DRAM products. HBM3E is the extended version of HBM3, the fourth-generation product that succeeds the previous generations of HBM, HBM2 and HBM2E.
The company plans to supply the mass-produced product to customers within the year, demonstrating its technological leadership once again just six months after becoming the first in the industry to deliver the 8-layer HBM3E product to customers in March this year.
SK hynix is the only company in the world that has developed and supplied the entire HBM lineup from the first generation (HBM1) to the fifth generation (HBM3E), since releasing the world's first HBM in 2013. The company plans to continue its leadership in the AI memory market, addressing the growing needs of AI companies by being the first in the industry to mass-produce the 12-layer HBM3E.
According to the company, the 12-layer HBM3E product meets the world's highest standards in all areas essential for AI memory, including speed, capacity and stability. SK hynix has increased the speed of memory operations to 9.6 Gbps, the highest memory speed available today. If 'Llama 3 70B'[3], a large language model (LLM), runs on a single GPU equipped with four of these HBM3E products, the memory can read the model's full 70 billion parameters 35 times per second.
[3] Llama 3: Open-source LLM released by Meta in April 2024, with 3 sizes in total: 8B (Billion), 70B, and 400B.
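As a back-of-the-envelope check (ours, not SK hynix's), the 35-reads-per-second figure is consistent with the stated 9.6 Gbps pin speed if one assumes the standard 1,024-pin HBM interface per stack and 2-byte (FP16) parameters:

```python
# Hedged sanity check of the "35 times per second" claim.
# Assumptions (not stated in the press release): 1,024 I/O pins per HBM
# stack, and Llama 3 70B stored as 2-byte (FP16) parameters.

PIN_SPEED_GBPS = 9.6      # per-pin transfer rate, from the announcement
PINS_PER_STACK = 1024     # standard HBM interface width (assumption)
STACKS_PER_GPU = 4        # four HBM3E products per GPU, per the example
PARAMS = 70e9             # Llama 3 70B parameter count
BYTES_PER_PARAM = 2       # FP16 precision (assumption)

# Bandwidth per stack in GB/s: 9.6 Gbps x 1,024 pins / 8 bits per byte
stack_bandwidth_gbs = PIN_SPEED_GBPS * PINS_PER_STACK / 8   # ~1228.8 GB/s
total_bandwidth_gbs = stack_bandwidth_gbs * STACKS_PER_GPU  # ~4915.2 GB/s

model_size_gb = PARAMS * BYTES_PER_PARAM / 1e9              # 140 GB
reads_per_second = total_bandwidth_gbs / model_size_gb

print(f"{reads_per_second:.1f} full-model reads per second")  # ~35.1
```

Under these assumptions the four stacks deliver roughly 4.9 TB/s of aggregate bandwidth against a 140GB model, which reproduces the quoted figure almost exactly.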
SK hynix has increased the capacity by 50% by stacking 12 layers of 3GB DRAM chips at the same thickness as the previous eight-layer product. To achieve this, the company made each DRAM chip 40% thinner than before and stacked them vertically using TSV[4] technology.
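For illustration, a minimal sketch (ours, not from the release) showing that the stated capacity and thickness figures are self-consistent, with chip thickness normalized to the previous generation:

```python
# Quick check of the stated capacity and thickness figures. Chip thickness
# is normalized to 1.0 for the previous 8-layer product (assumption; the
# release gives no absolute dimensions).

chip_capacity_gb = 3                    # 3GB DRAM chips, per the release

old_capacity = 8 * chip_capacity_gb     # 24 GB (8-layer HBM3E)
new_capacity = 12 * chip_capacity_gb    # 36 GB (12-layer HBM3E)
print(new_capacity / old_capacity - 1)  # 0.5 -> the stated 50% increase

old_chip_stack = 8 * 1.0                # 8 chips at full thickness
new_chip_stack = 12 * (1.0 - 0.40)      # 12 chips, each 40% thinner
print(new_chip_stack, old_chip_stack)   # 7.2 vs 8.0 normalized units
```

The thinner 12-chip stack (7.2 units vs. 8.0) actually leaves a small margin, which plausibly accommodates the additional bonding layers while keeping the overall package at the same thickness.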
The company also solved the structural issues that arise from stacking thinner chips higher by applying its core technology, the Advanced MR-MUF[5] process. This delivers 10% higher heat dissipation performance compared to the previous generation and secures the stability and reliability of the product through enhanced warpage control.