Today is Intel's big enterprise announcement day for 2019, combining some products that are available today with a few others set to arrive in the coming months. Instead of a tiered rollout, everything comes at once: processors, accelerators, networking, and edge computing. Here's a quick summary of what's happening today, plus links to all of our deeper dive articles, our reviews, and the official announcements.
Cascade Lake: Intel's new server and enterprise processor
The headliner of today's event is Intel's second-generation Xeon Scalable processor, Cascade Lake. This is the processor that Intel will promote heavily across its enterprise portfolio, especially as OEMs such as Dell, HP, Lenovo, Supermicro, QCT, and others refresh their product lines with the new hardware. (You can read some of their announcements here: Dell, Supermicro)
While these new CPUs do not use a new microarchitecture compared to the first generation of Skylake-based Xeon Scalable processors, Intel surprised most of the press at its Tech Day with the sheer number of improvements in other areas of Cascade Lake. Not only are there more hardware mitigations against Spectre and Meltdown than we expected, but we also get Optane DC Persistent Memory support. High-volume processors get a performance boost with up to 25% extra cores, and each processor supports twice as much memory (and faster memory as well). Tweaks to the manufacturing process allow for frequency improvements that, when combined with the new AVX-512 modes, show some dramatic increases in machine learning performance for software that can use them.
| Intel Xeon Scalable | 2nd Gen (Cascade Lake) | 1st Gen (Skylake-SP) |
|---|---|---|
| Released | April 2019 | July 2017 |
| Cores | Up to 28 (up to 56 in Xeon Platinum 9200) | Up to 28 |
| Cache | 1 MB L2 per core, up to 38.5 MB shared L3 | 1 MB L2 per core, up to 38.5 MB shared L3 |
| PCIe 3.0 | Up to 48 lanes | Up to 48 lanes |
| DRAM Support | Six channels, up to DDR4-2933, 1.5 TB standard | Six channels, up to DDR4-2666, 768 GB standard |
| Optane Support | Up to 4.5 TB per processor | – |
| Vector Compute | AVX-512 VNNI with INT8 | AVX-512 |
| Spectre / Meltdown Fixes | Variants 2, 3, 3a, 4 | – |
| TDP | Up to 205 W (up to 400 W in Xeon Platinum 9200) | Up to 205 W |
A novelty in the Xeon Scalable family is the AP line of processors. Intel hinted at this late last year, but we finally have some details. This new family of Xeon Platinum 9200 parts combines two 28-core dies in a single package, offering up to 56 cores and 112 threads with 12 memory channels in a thermal envelope of up to 400 W. This is essentially a 2P configuration in a single chip, designed for high-density deployments. These BGA-only processors will be sold only with an underlying Intel-designed platform directly through OEMs, and will not have a direct price – customers will pay for the "solution" rather than the product.
For this generation, Intel will not be producing "F" models with Omni-Path fabric on board. Instead, users will get "M" models with 2 TB memory support and "L" models with 4.5 TB memory support, targeting the Optane markets. There will also be other letter designations, some of them new:
- M = Medium memory support (2.0 TB)
- L = Large memory support (4.5 TB)
- Y = Speed Select models (see below)
- N = Networking / NFV specialized
- V = VM density value optimized
- T = Long life cycle / thermal friendly
- S = Search optimized
Beyond all of that, the Speed Select "Y" models are the most interesting. They have additional power monitoring tools that allow workloads to be pinned to specific cores that can turbo higher than the others – distributing the available power budget across the cores based on what needs to be prioritized. These parts also allow for three OEM-specified base and turbo frequency configurations, so that one system can be tuned for three different types of workload.
We are currently in the process of writing our main review and plan to address the topic from several different angles across various stories. Stay tuned for that. The SKU lists and our launch-day coverage can be found here:
The other key element for the processors is the Optane support, discussed below.
Optane DCPMM: Data Center Persistent Memory Modules
If you're confused about Optane, you're not the only one.
Overall, Intel has two different types of Optane: Optane storage and Optane DIMMs. The storage products have been on the market for some time, in both consumer and enterprise forms, showing exceptional random access latency above and beyond what any NAND can offer, albeit at a price. For users who can absorb the cost, it is a great product for that market.
Optane in the memory module form factor actually runs over the standard DDR4 interface using Intel's DDR-T protocol. The product is focused on the enterprise market, and although Intel has talked about Optane DIMMs for a while, today marks the official release. Some customers are already testing and deploying it, while general availability is expected in the coming months.
A 128 GB Optane module. Image courtesy of Patrick Kennedy
Optane DC Persistent Memory comes in a DDR4 form factor and works with Cascade Lake processors to allow large amounts of memory in a single system – up to 6 TB in a dual-socket platform. Optane DCPMM is a bit slower than traditional DRAM, but it allows for a much higher memory density per socket. Intel is set to offer modules in three sizes: 128 GB, 256 GB, or 512 GB. Optane does not completely replace DDR4 – the system needs at least one standard DDR4 module to work (it acts as a buffer) – but it means that customers can pair 256 GB of DDR4 with 512 GB of Optane for 768 GB in total, instead of looking at a pure 256 GB DDR4 setup backed by NVMe.
With Optane DCPMM in a system, it can be used in two modes: Memory Mode and App Direct.
The first mode is the simplest way to think about it: like DRAM. The system sees one large DRAM allocation, but in reality it uses the Optane DCPMM as the main memory store and the DDR4 as a buffer in front of it. If the buffer holds the required data, the system sees standard DRAM read/write performance; if the data is out in Optane, access is a bit slower. How this is negotiated is handled between the DDR4 controller and the Optane DCPMM controller on the module, and it works great for workloads that want large DRAM installations rather than keeping everything on slower NVMe storage.
The second mode is App Direct. In this case, the Optane acts as a big storage drive that is almost as fast as a RAM disk. This drive, although not bootable, keeps its data between reboots (the advantage of the memory being persistent), allowing very fast restarts and avoiding serious downtime. App Direct mode is a bit more esoteric than "just a lot of DRAM", because developers may have to re-architect their software stack to take advantage of the DRAM-like speeds the drive allows. It is essentially a large RAM disk that keeps your data. (ed: I'll take two)
One of the questions when Optane was first announced was whether it would support enough read/write cycles to act as DRAM, given that the same technology was also being used for storage. To ease those fears, Intel will guarantee all Optane modules for 3 years, even if a module runs at peak write load for the entire warranty period. Not only does this mean Intel is staking its reputation on its own product, it even convinced the famously skeptical Charlie of SemiAccurate, a long-time critic of the technology (mainly due to the lack of pre-launch detail, but he seems satisfied for now).
Pricing for Intel's Optane DCPMM was not released at this time. The official line is that there is no specific MSRP for the different module sizes – it will likely depend on which platform customers end up buying, how much of it, what level of support, and how Intel can engage with them to optimize the configuration. Cloud providers are likely to offer instances backed by Optane DCPMM, and OEMs such as Dell report that they have systems planned for general availability in June. Dell said it expects users who can take advantage of the large Memory Mode to jump in first, with those who can accelerate a workflow with App Direct mode taking some time to rewrite their software.
Intel has given us remote access to some systems with Optane DCPMM installed. We are still working through the best way to evaluate the hardware, so keep that in mind.
Intel Agilex: A New Generation of Intel FPGAs
The acquisition of Altera a few years ago was a major move for Intel. The idea was to bring FPGAs into Intel's family of products and eventually realize a series of synergies between the two, integrating the portfolios while leveraging Intel's manufacturing facilities and sales channels. Although the deal closed in 2015, every product since then began development before the acquisition, before the two companies were integrated – until today. The new Agilex family of FPGAs is the first developed and produced entirely under the Intel name.
The Agilex announcement is today; the first 10nm samples will be available in the third quarter. The role of the FPGA has evolved in recent times, from providing general-purpose spatial compute hardware to providing targeted accelerators and enabling new technologies. With Agilex, Intel intends to offer this mix of acceleration and configurability not only with the base FPGA fabric, but also through additional chiplet extensions enabled by Intel's Embedded Multi-Die Interconnect Bridge (EMIB) technology. These chiplets can be custom third-party IP, transceivers, PCIe 5.0, HBM, 112G SerDes, or even the cache-coherent architecture of Intel's Compute Express Link. Intel is promoting up to 40 TFLOPs of DSP performance, and is pitching Agilex for mixed-precision machine learning with enhanced support for bfloat16 and for INT2 through INT8.
Intel will launch Agilex in three product families – F, I, and M – in that order of both arrival and complexity. Intel's Quartus Prime software for programming these devices will be updated with support during April, and the first F models will be available in the third quarter.
Columbiaville: Going to 100 GbE with Intel 800 Series Controllers
Currently, Intel ships a large number of 10 gigabit and 25 gigabit Ethernet controllers into the data center. The company launched 100G Omni-Path a few years ago as an alternative interconnect, and is looking at a second-generation Omni-Path to double that speed. Meanwhile, Intel has developed and will launch Columbiaville, its controller offering for the 100G Ethernet market, branded as the Intel 800-Series.
The introduction of faster networking into data center infrastructure is certainly positive, but Intel is also keen to promote some new technology with the product. Application Device Queues (ADQ) help steer high-priority application traffic onto dedicated hardware queues to ensure consistent performance, while Dynamic Device Personalization (DDP) enables additional programmability in the packet-processing pipeline, so that unique network configurations can enable extra functionality and/or security.
The dual-port 100G card will be called the E810-CQDA2, and we are still waiting on details about the chip: die size, cost, process node, etc. Intel reports that its 100 GbE offerings will be available in the third quarter.
Xeon D-1600: A Generational Efficiency Improvement for Edge Acceleration
One of Intel's key product areas is the edge, both in terms of compute and networking. One of the products Intel has focused on in this area is Xeon D, which pairs compute with fast networking and crypto acceleration in a lower-power flavor (D-1500) and a higher-performance flavor (D-2100), the former based on Broadwell and the latter on Skylake. The new Intel Xeon D-1600 is a direct successor to the D-1500: a true single-chip solution, taking advantage of additional frequency and improved efficiency in the manufacturing process. It is still built on the same process as the D-1500, allowing Intel's partners to drop in the new version without many functional changes.