For the 2019 Supercomputing conference, Intel presented its big hardware plans for 2021. There are new details to report both for processors and for general-purpose GPUs. But since we are talking about 2021, it will be more than twelve months before we see concrete hardware, and for end customers many questions remain open.
On the Xeon processor side, Intel reiterated the familiar roadmap. In 2020, Cooper Lake-based Xeon processors are expected to stand out primarily through their DL Boost support, and especially through Bfloat16. Cooper Lake will be the only architecture to support Bfloat16 in this form; otherwise the format is reserved for certain AI accelerators. Cooper Lake-based Xeon processors will still be manufactured at 14 nm. Beyond that, Intel has so far only confirmed that these processors will offer up to 56 cores.
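What makes Bfloat16 attractive for DL Boost is its layout: it keeps float32's eight exponent bits (and thus its full value range) but shortens the mantissa to seven bits. A minimal Python sketch of the conversion, using simple truncation of the upper 16 bits (real hardware typically rounds to nearest even, so this is only an illustration):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # Pack as IEEE 754 binary32 and keep only the top 16 bits:
    # 1 sign bit, 8 exponent bits, 7 mantissa bits survive.
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    # Re-expand by padding the lost mantissa bits with zeros.
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

value = 3.14159
restored = bfloat16_bits_to_float32(float32_to_bfloat16_bits(value))
# The round-trip is lossy: Bfloat16 retains only about three
# decimal digits of mantissa precision, but never overflows
# where float32 would not.
```

The trade-off is exactly what deep-learning workloads want: the same dynamic range as float32 at half the memory and bandwidth cost, with precision the training process tolerates.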
At the same time, starting in the second half of 2020, Intel will offer Ice Lake-based Xeon processors manufactured at 10 nm. Here too, DL Boost plays a role, but Bfloat16 is not among the supported instruction-set extensions. Whether and how well Intel can deploy the Sunny Cove cores in the server environment remains a big question.
For 2021, Sapphire Rapids is planned. Which cores Intel will use here and at what process node they will be manufactured are just the most important of the open questions. At the Memory Technology Day, at least support for the next generation of Optane memory solutions was announced.
Intel ventures an Xe outlook for HPC and AI
The big topic of the coming years is certainly the development in the field of GPUs, or rather GPGPUs, deployed in the data center for HPC and AI applications.
Intel wants to work its way to the top here through various measures. The Ponte Vecchio GPU is said to combine several technological milestones. For one, it will be Intel's first chip manufactured at 7 nm – something already revealed a few months ago. In addition, Intel's Foveros packaging technology, or a further iteration of it, will also be used here. Via Xe-Link, the interconnect based on Compute Express Link, multiple GPUs can be connected to one another.
For now, Intel is only willing to talk about three points:
- a vector engine is to perform as many vector calculations as possible simultaneously
- the computing power for double-precision calculations (FP64) is to be high
- both the caches and the other memory are to offer extremely high bandwidth
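The emphasis on FP64 is easy to motivate: in long accumulations, single precision drifts visibly, which is why HPC codes insist on double precision. A minimal Python sketch, emulating float32 via struct round-trips since plain Python floats are already binary64 (the iteration count is an arbitrary choice for illustration):

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack("f", struct.pack("f", x))[0]

tenth32 = to_f32(0.1)  # 0.1 is not exactly representable in binary
acc32 = 0.0
acc64 = 0.0
for _ in range(100_000):
    acc32 = to_f32(acc32 + tenth32)  # emulated single-precision add
    acc64 += 0.1                     # native double-precision add
# acc64 stays extremely close to 10000.0, while acc32 drifts
# noticeably: binary32 carries only about 7 decimal digits, and
# the rounding error of each addition accumulates.
```

The same effect, scaled up to billions of grid points and time steps, is why simulation workloads treat FP64 throughput as a headline metric for a compute GPU.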
However, Intel still owes us some hard facts – probably because the official launch is still months away.
One of the first deployments of the new Sapphire Rapids-based Xeon processors and Ponte Vecchio-based Xe GPUs will be the Aurora supercomputer, which will be available to the Argonne National Laboratory in Chicago from 2021.
Intel plans to use two Xeon processors per node, plus six of the new Xe GPUs. All major components are connected via the CXL-based interconnect. Intel speaks of eight mesh endpoints – the two Xeon processors and the six Xe GPUs. All components should be able to share memory with one another.
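The endpoint count is simple arithmetic: two Xeons plus six Xe GPUs give the eight mesh endpoints Intel mentions. A small illustrative sketch – the all-to-all link count is our own assumption for the sake of the example, since the article does not state the actual node topology:

```python
# Per-node component counts as stated for Aurora.
cpus = 2   # Sapphire Rapids Xeon processors
gpus = 6   # Ponte Vecchio Xe GPUs

endpoints = cpus + gpus  # the eight mesh endpoints Intel mentions

# Hypothetical: if every endpoint were directly linked to every
# other one, the node would need n*(n-1)/2 point-to-point links.
links_all_to_all = endpoints * (endpoints - 1) // 2
```

Whatever the real wiring looks like, the shared-memory claim implies that every endpoint must be reachable from every other one over the CXL fabric, which is what makes the interconnect design interesting.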
So that these hardware features can actually be used, Intel has been planning with OneAPI for some time. We have already highlighted the goals of this initiative in a separate article.
OneAPI is meant to be the one solution for everything and everyone – but this is not only about developers. Ultimately, end users – whether gamers, desktop users, or data-center operators – should benefit as well. Through OneAPI, Intel provides optimized applications as well as middleware. Ideally, this should happen on an open-source basis. However, this approach certainly has its obstacles; after all, very different hardware must be addressed. Intel offers not only processors for various markets, but also integrated and soon dedicated graphics units. On top of that, there are AI accelerators (matrix computations) and FPGAs (spatial computing).
So Intel is preparing for the next step. Alongside the Xeon processors, the Xe GPUs form a further hardware tier that opens up opportunities – but these must first manifest themselves in actual hardware.