[Herald Interview] Why Korean Chip Engineers Should Watch the Next-Gen Standard

Richard Solomon, PCI-SIG Vice President (PCI-SIG)

More and more servers require faster chip speeds for high-performance computing, to meet the growing demand for big data analysis and autonomous mobility.

But a disparity in transfer speed between processors and memory chips has long been an obstacle to improving computer system performance, often creating a data bottleneck within a chipset.

That’s why memory chip engineers should familiarize themselves with plans for the next-generation interconnect standard, even though commercial products based on it are still years away.

“Member companies know that the path I’m on will allow me to grow, and then we try to come up with the specs ahead of time,” said Richard Solomon, vice president of the Peripheral Component Interconnect Special Interest Group, the chip industry consortium that maintains the PCI Express standard, in an interview with The Korea Herald.

South Korean powerhouses such as Samsung Electronics and SK hynix – both of which are PCI-SIG members – are no exception, he added.

PCI Express is one of the global semiconductor standards under which different types of chips such as graphics cards and solid-state drive storage can be connected to each other to provide seamless inter-chip data transmission. Its technology plan is “three to four years ahead of where the industry needs so much bandwidth,” according to Solomon.

Storage products such as SSDs have had less room than CPUs to scale up bandwidth.

For solid-state drives, the current level of PCI Express cannot meet peak bandwidth requirements because SSD form factors cap the number of available lanes at four.

This is in contrast to graphics cards, which can increase transfer speed simply by adding more lanes – usually eight, 16 or more – to the information highway, so that bandwidth grows while still using the current level of PCI Express.

Under the existing PCI Express standard, for example, a chip component can reach a data rate of 256 gigabytes per second in a data center by using 16 lanes of 16-gigabyte-per-second highways. But an SSD capped at four lanes would need at least 64 gigabytes per second per lane to hit the same figure – a speed that so far exists only in theory – pushing memory chip engineers to speed up work to bring it to market.
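As a rough illustration of the lane arithmetic Solomon describes, the sketch below simply multiplies lane count by per-lane throughput; the figures are taken from the example above, not from any specific PCI Express generation.

```python
# A minimal sketch of the lane arithmetic described above (illustrative
# figures only; not official PCI Express specification values).

def link_bandwidth_gb_per_s(lanes: int, per_lane_gb_per_s: float) -> float:
    """Total link bandwidth: number of lanes times per-lane throughput (GB/s)."""
    return lanes * per_lane_gb_per_s

# A graphics card with 16 lanes at 16 GB/s per lane reaches the 256 GB/s target.
print(link_bandwidth_gb_per_s(16, 16.0))  # 256.0

# An SSD form factor capped at 4 lanes would need 64 GB/s per lane for the same total.
target_gb_per_s = 256.0
ssd_lanes = 4
print(target_gb_per_s / ssd_lanes)  # 64.0
```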

“As capacity increases, they naturally have more chips, which means they can naturally get more bandwidth. But (memory and storage products) are stuck with this standard form factor defined as 4-way,” Solomon said.

The memory chip bottleneck has been a concern for Samsung Electronics and SK hynix, which together hold about 75 percent of the global market for SSDs used in internet servers.

For the past few years, these companies have been working on an emerging interconnect standard called Compute Express Link, or CXL, designed to expand memory capacity and bandwidth. In particular, Samsung unveiled a memory prototype with 512 gigabytes of capacity based on the CXL interconnect standard in May.

Solomon said that the two chip interconnect standards, PCI Express and CXL, can complement each other on the same physical layer, because CXL focuses more on cache coherency while PCI Express aims to optimize the signaling mechanism.

Solomon visited Korea as one of the speakers at the PCI-SIG Developers Conference Asia-Pacific 2022, held in Seoul on Monday. It was the first time PCI-SIG had held such a conference in Korea; previous Asia-Pacific conferences were held in Taiwan and Japan.

Korea has 17 PCI-SIG members out of more than 900 worldwide as of September.

By Son Ji-hyoung (consnow@heraldcorp.com)
