In This Article:
- Development of PCB01, the fifth generation of 8-channel PCIe, to be followed by mass production within this year
- PCB01, industry's best SSD for PCs, optimized for on-device AI
- Advancement in NAND solution to extend success story of HBM, solidify leadership in AI memory space
SEOUL, South Korea, June 27, 2024 /PRNewswire/ -- SK hynix Inc. (or "the company", www.skhynix.com) announced today that it developed PCB01, an SSD product with the industry's best specifications, for on-device* AI PCs.
*On-device AI: a technology that implements AI functions on the device itself, rather than through computation on a physically separate server. By collecting and processing information directly, a device such as a smartphone can respond faster and deliver more personalized AI services.
The product marks the industry's first adoption of the fifth generation of 8-channel** PCIe*** technology, bringing innovations in performance, including data processing speed.
**Channel: a route for the input/output of data between a NAND flash and a controller on an SSD. Increasing the number of channels advances the PCIe interface to the next generation and improves data processing speed. A 4-channel SSD is typically adopted for conventional PCs, while an 8-channel SSD is used for high-performance PCs.
***Peripheral Component Interconnect Express (PCIe): a serial-structured, high-speed input/output interface used on the motherboards of digital devices
The company expects the latest advancement in the NAND solution space to add to its success stories in the high-performance DRAM area led by HBM, enhancing its leadership in the overall AI memory space.
With validation by a global PC customer underway, SK hynix plans to mass-produce the product and start shipping it to both corporate customers and general consumers within this year.
PCB01 delivers sequential read and write speeds of 14GB and 12GB per second, respectively, bringing SSD performance to a previously unseen level. These speeds allow a large language model****, or LLM, used for AI training and inference, to be loaded within a second.
****Large language model (LLM): a language model trained on vast amounts of data, which is essential for performance of generative AI tasks such as creating, summarizing, and translating texts
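The claim that these read speeds allow an LLM to load within a second can be checked with simple arithmetic. The sketch below is an illustration only: the model size is a hypothetical assumption (roughly a 7B-parameter model stored in 16-bit precision), not a figure from SK hynix.

```python
# Back-of-envelope check: time to load an LLM's weights from SSD into memory
# at PCB01's stated sequential read speed of 14GB per second.
# MODEL_SIZE_GB is an assumed value for illustration, not from the announcement.

SEQ_READ_GB_PER_S = 14.0   # stated sequential read speed (GB/s)
MODEL_SIZE_GB = 13.0       # assumed: ~7B parameters at 2 bytes each

load_time_s = MODEL_SIZE_GB / SEQ_READ_GB_PER_S
print(f"Approximate load time: {load_time_s:.2f} s")  # about 0.93 s
```

Under this assumption the transfer completes in well under a second, consistent with the performance described above.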
The product also improves power efficiency by more than 30% compared with the previous generation, enhancing the stability of large-scale AI computing tasks.