Intel wants CPUs, GPUs and FPGAs to speak "the same language"
Anyone who has built a PC is probably familiar with the PCI-e bus, which can be thought of as the bridge, or common language, that lets most components inside a computer communicate with one another. In today's AI era, deep neural network training and machine learning workloads often rely on devices with different computing architectures, such as GPUs and FPGAs (field-programmable gate arrays, a class of customizable computing devices). If these devices continue to use PCI-e as their language of communication, performance suffers considerably.
At Intel's Interconnect Day technology event, Stephen Van Doren, director of Intel's processor interconnect architecture, pointed out that PCI-e has become the bottleneck in interconnect technology: the industry is experiencing explosive data growth, and use cases such as AI have pushed people toward dedicated hardware like GPUs and FPGAs. As an aging interconnect technology, PCI-e can no longer meet ever-growing requirements for memory-usage efficiency, latency, and data throughput.
For example, PCI-e creates too many isolated memory pools among processors of different architectures, resulting in inefficient memory usage. Moreover, the latest trend in the computing industry is memory disaggregation, in which servers are no longer provisioned with excessive, low-utilization memory; PCI-e cannot support this trend either. In other words, in the era of AI computing, PCI-e is not the best language for communication among CPUs, GPUs, FPGAs, and other AI computing devices such as edge AI accelerator cards.
To break through PCI-e's bottleneck without radically redesigning the PCI-e hardware interfaces of existing processors, Intel announced CXL, a new open interconnect technology, in March this year. CXL, short for Compute Express Link, is a new "language" designed by Intel for high-speed, low-latency interconnects between CPUs and workload accelerators such as GPUs and FPGAs.
Its first benefit is memory coherency across processors, which allows resources to be shared for higher performance, reduces the complexity of the software stack, and lowers total system cost. Second, because CXL is built on top of PCI-e's logical and physical layers, it is easier to adopt on existing processors that support PCI-e ports, which includes most common CPUs, GPUs, and FPGAs.
CXL's data exchange layer includes three sub-protocols: CXL.io, responsible for device discovery, link establishment, and the like; CXL.cache, which lets non-CPU processors directly read CPU memory; and CXL.memory, which lets the CPU directly read memory attached to non-CPU processors. Together, this logic eliminates the need for data center servers to be provisioned with excessive, low-utilization memory.
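The division of labor among the three sub-protocols can be sketched as a simple routing rule. This is an illustrative Python model only, not any real CXL software API; the function `route` and its parameters `initiator` and `target_memory` are invented for this example:

```python
from enum import Enum

class SubProtocol(Enum):
    IO = "CXL.io"          # device discovery, configuration, link setup
    CACHE = "CXL.cache"    # accelerator coherently reads/caches host (CPU) memory
    MEMORY = "CXL.memory"  # host (CPU) directly reads accelerator-attached memory

def route(initiator: str, target_memory: str) -> SubProtocol:
    """Pick the sub-protocol that carries a given access (conceptual sketch).

    initiator:     'cpu' or 'device' (an accelerator such as a GPU or FPGA)
    target_memory: 'host' or 'device'
    """
    if initiator == "device" and target_memory == "host":
        return SubProtocol.CACHE    # accelerator touching CPU memory
    if initiator == "cpu" and target_memory == "device":
        return SubProtocol.MEMORY   # CPU touching device-attached memory
    # Everything else (enumeration, configuration, setup) rides on CXL.io.
    return SubProtocol.IO

# A device reading host memory uses CXL.cache:
print(route("device", "host").value)  # CXL.cache
# The CPU reading device memory uses CXL.memory:
print(route("cpu", "device").value)   # CXL.memory
```

The point of the asymmetry in this sketch is that each direction of cross-architecture memory access gets its own dedicated path, rather than everything funneling through one generic I/O protocol.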
Unlike Intel's UPI, a symmetric protocol common in homogeneous computing products built on Intel's own architecture, CXL is an asymmetric protocol, which makes memory calls between heterogeneous processors less bloated and data exchange faster.
Because CXL runs over the PCI-e physical layer, Intel positions it as an optional protocol, meaning PCI-e's own interconnect protocol has not been abandoned entirely. However, as a board member of the PCI-e standards organization, Intel plans to promote the adoption of CXL in the sixth generation of the PCI-e standard.
Data center operators will be the most direct beneficiaries of this technology, which is also why data center and cloud computing giants such as Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, and Microsoft have joined the alliance.
Today, the rapid advancement of heterogeneous computing has turned AI computing from the far-fetched fantasy of decades ago into reality, giving rise to numerous new use cases and business opportunities. By advancing CXL, Intel hopes to give heterogeneous computing a new common language, allowing participants to "converse" about data more effectively.