Cryptopolitan
2025-09-12 09:15:25

SK Hynix achieves critical milestone in next-gen HBM4 chips

On Friday, SK Hynix completed the development of HBM4, its next-generation memory product for ultra-high-performance AI, and established a mass production system to supply the high-bandwidth memory chips to customers. The South Korean firm said the chip vertically interconnects multiple DRAM dies and increases data processing speed compared with conventional DRAM products. The company believes that mass production of the HBM4 chips will allow it to lead the AI industry.

SK Hynix prepares mass production of HBM4

SK Hynix developed the product in response to the recent dramatic increase in AI demand and data processing, which requires high-bandwidth memory to deliver faster system speeds. The company also believes that memory power efficiency has become a key requirement for customers, as power consumption for data center operation has surged. The semiconductor supplier hopes that HBM4's increased bandwidth and power efficiency will be the optimal solution for customers' needs.

Production of the next-generation HBM4 involves stacking chips vertically to save space and reduce power consumption, which helps process the large volumes of data generated by complex AI applications.

"Completion of HBM4 development will be a new milestone for the industry. By supplying the product that meets customer needs in performance, power efficiency and reliability in a timely manner, the company will fulfill time to market and maintain a competitive position." – Joohwan Cho, Head of HBM Development at SK Hynix.

The South Korean company said its new product offers the industry's best data processing speed and power efficiency. According to the report, the chip's bandwidth has doubled from the previous generation through the adoption of 2,048 I/O terminals, and its power efficiency has improved by over 40%. SK Hynix also maintained that HBM4 will improve AI service performance by up to 69% when the product is applied. The initiative aims to solve data bottlenecks and reduce data center power costs.

The firm revealed that HBM4 exceeds the standard operating speed of 8 Gbps set by the Joint Electron Device Engineering Council (JEDEC) by running at over 10 Gbps. JEDEC is the global standardization body that develops open standards and publications for the microelectronics industry.
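To put those figures in context, the sketch below works out what the reported interface width and pin speeds imply for per-stack bandwidth. The 2,048 I/O terminals and the 8 Gbps and 10 Gbps per-pin rates come from the announcement; the 1,024-bit interface assumed for the previous generation and the simple I/O-count-times-pin-speed formula are illustrative assumptions rather than SK Hynix figures.

```python
# Back-of-the-envelope per-stack bandwidth from the figures reported above.
# Assumption (not from the article): the previous generation uses a 1,024-bit
# interface, and peak bandwidth is simply I/O count x per-pin data rate.

def stack_bandwidth_gbps(io_terminals: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in gigabytes per second."""
    return io_terminals * pin_speed_gbps / 8  # divide by 8 to convert bits to bytes

# HBM4 at the JEDEC baseline of 8 Gbps per pin
print(stack_bandwidth_gbps(2048, 8.0))    # 2048.0 GB/s (~2 TB/s)

# HBM4 at the 10+ Gbps SK Hynix says it has achieved
print(stack_bandwidth_gbps(2048, 10.0))   # 2560.0 GB/s (~2.5 TB/s)

# Previous generation (assumed 1,024 I/O) at a comparable pin speed, showing
# how doubling the I/O count roughly doubles bandwidth
print(stack_bandwidth_gbps(1024, 10.0))   # 1280.0 GB/s
```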
SK Hynix also incorporated the Advanced MR-MUF (Mass Reflow Molded Underfill) process in HBM4, which involves stacking the chips and injecting a liquid protective material between them to protect the circuits and then hardening it. The company stated that the process has proved more reliable and more efficient at heat dissipation than the method of laying film-type materials between each chip in the stack. SK Hynix believes its advanced MR-MUF technology helps secure stable HBM mass production by providing good warpage control and reducing the pressure on the stacked chips.

The firm also adopted the 1bnm process, the fifth generation of its 10-nanometer technology, in HBM4, which it said helps minimize risks in mass production.

Justin Kim, head of AI Infra at the company, said SK Hynix plans to grow into a full-stack AI memory provider by supplying memory products with the best quality and the diverse performance required in the AI industry.

SK Hynix shares surge

Following the release of HBM4, SK Hynix's share price hit a record high on Friday, surging by as much as 6.60% to 327,500 KRW ($235.59). The stock has also gained roughly 17.5% over the past five days and nearly 22% over the past month.

Kim Sunwoo, senior analyst at Meritz Securities, forecast that the company's HBM market share will remain in the low-60% range in 2026, supported by early HBM4 supply to key customers and the resulting first-mover advantage. SK Hynix supplies the most HBM semiconductor chips to Nvidia, followed by Samsung Electronics and Micron, which supply smaller volumes.

Choi Joon-yong, the company's head of HBM business planning, projected that the AI memory chip market will grow by 30% a year until 2030. Choi said that end-user demand for AI is very strong, and he expects the billions of dollars in AI capital spending by cloud computing companies to be revised upwards in the future.
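As a rough guide to what a 30%-a-year projection compounds to, here is a minimal sketch; the starting index of 1.0 and the assumption that the rate applies uniformly each year through 2030 are illustrative, not figures from the report.

```python
# Illustrative compounding of the projected 30% annual growth in the AI memory
# chip market. The starting index of 1.0 is an assumption for illustration only.
growth_rate = 0.30
market_index = 1.0  # notional market size in the starting year

for year in range(2026, 2031):
    market_index *= 1 + growth_rate
    print(f"{year}: {market_index:.2f}x the starting market size")

# At 30% a year, the market grows roughly 3.7x over five years (1.3**5 ≈ 3.71).
```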
