Famous Graphics Chips: Intel’s GPU History

By Dr. Jon Peddie
Published 11/26/2020

Intel tried five times since 1983, and once more in 2020

Intel has a long history in PC graphics chips, and in late 2020 it announced a new discrete GPU (dGPU), the Xe Max. The company has taken several runs at building a discrete graphics chip to take on the market leader but has had a challenging time. It never seemed to approach building a dGPU with the same seriousness and resources as its CPUs.

That attitude has changed. Intel has carefully plotted the development and introduction of its Xe dGPU line, communicating its confidence while keeping a tight grip on its messaging. Intel announced its discrete GPU for thin-and-light notebooks, the Iris Xe Max, on Halloween, and it may have been scary news for some of the incumbents.

The basic specifications of the 2020 mobile dGPU are in the following table.

Technical Specifications
Product Name: Intel Iris Xe Max Graphics
EUs: 96
Frequency: 1.65 GHz
Lithography: 10 nm SuperFin
Graphics Memory Type: LPDDR4X
Graphics Memory Capacity: 4 GB
Graphics Memory Bus Width: 128-bit
Graphics Memory Bandwidth: 68 GB/s
PCI Express: Gen 4
AI Support: Intel DL Boost (DP4A; see the note below Table 1)
Media: 2 Multi-Format Codec (MFX) Engines
Intel Deep Link Technology: Yes
Display Outputs: eDP 1.4b, DP 1.4, HDMI 2.0b
Max Resolution (HDMI/eDP): 4096 × 2304 @ 60 Hz
Max Resolution (DP): 7680 × 4320 @ 60 Hz
Pixel Depth: 12-bit HDR
Graphics Features: Variable Rate Shading, Adaptive Sync, Async Compute
DirectX Support: 12.1
OpenGL Support: 4.6
OpenCL Support: (Beta)
Power: 25 W

Table 1: Intel’s 2020 discrete mobile GPU
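A note on the AI entry: DL Boost on this GPU comes down to DP4A, a four-element dot-product-accumulate operation that multiplies four pairs of 8-bit integers and folds the products into a 32-bit accumulator, the inner loop of quantized inferencing. Here is a scalar model of those semantics, an illustrative sketch rather than Intel's hardware definition:

```python
def dp4a(acc: int, a4, b4) -> int:
    """Scalar model of one DP4A step: four int8 x int8 products
    accumulated into an int32. Illustrative only."""
    for a, b in zip(a4, b4):  # a and b each hold 8-bit values (-128..127)
        acc += a * b
    return acc

# One 4-element slice of a quantized dot product:
print(dp4a(0, [1, 2, 3, 4], [5, 6, 7, 8]))  # prints 70
```

The memory bandwidth figure is also easy to sanity-check. Assuming LPDDR4X at its top transfer rate of 4,266 MT/s (Intel quotes only the result), a 128-bit bus gives

\[
\frac{128\ \text{bits}}{8\ \text{bits/byte}} \times 4.266\ \text{GT/s} \approx 68\ \text{GB/s},
\]

which matches the table.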

Intel has paired the Iris Xe Max dGPU with its new 11th-Gen Intel Core mobile processors. Intel claims the new dGPU delivers "Additive AI," meaning both GPUs (the new dGPU and the CPU's iGPU) can work together on inferencing and rendering. And that, says Intel, can speed up content-creation workloads by as much as 7 times.
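Intel hasn't published the code behind that claim, but the idea maps naturally onto OpenVINO's MULTI device plugin, which load-balances inference requests across several accelerators. A minimal sketch, assuming a 2020-era OpenVINO install where the iGPU and the Iris Xe Max enumerate as GPU.0 and GPU.1; the model files and input shape are placeholders:

```python
import numpy as np
from openvino.inference_engine import IECore  # 2020-era Inference Engine API

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder model

# "MULTI" spreads inference requests across the listed devices; here,
# GPU.0 (the iGPU) and GPU.1 (the Iris Xe Max) serve the same network.
exec_net = ie.load_network(network=net,
                           device_name="MULTI:GPU.0,GPU.1",
                           num_requests=4)

input_name = next(iter(net.input_info))
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy input tensor
result = exec_net.infer(inputs={input_name: frame})
```

Whether a given workload actually sees a 7x gain depends on how cleanly it splits across the two GPUs.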

Intel is comparing this first product against a 10th-Gen Intel Core i7-1065G7 with an Nvidia GeForce MX350.

The 2020 dGPU offers Hyper Encode for up to 1.78 times faster encoding than a high-end desktop graphics AIB. For that test, Intel used a 10th-Gen Intel Core i9-10980HK with Nvidia GeForce RTX 2080 Super.

Also, Iris Xe Max works with Intel's Deep Link. Deep Link enables dynamic power sharing: the CPU can have all the power and thermal resources dedicated to it when the discrete graphics is idle, resulting, says Intel, in up to 20% better CPU performance.
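Conceptually, Deep Link treats the CPU and dGPU as sharing one power budget. A toy model of that reallocation logic, with hypothetical numbers and not Intel's actual firmware behavior:

```python
SHARED_BUDGET_W = 50.0  # hypothetical combined CPU + dGPU package budget

def allocate_power(dgpu_busy: bool, dgpu_demand_w: float = 25.0):
    """Return (cpu_watts, dgpu_watts) under one shared budget."""
    dgpu_w = min(dgpu_demand_w, SHARED_BUDGET_W) if dgpu_busy else 0.0
    cpu_w = SHARED_BUDGET_W - dgpu_w  # an idle dGPU frees its share for the CPU
    return cpu_w, dgpu_w

print(allocate_power(dgpu_busy=False))  # (50.0, 0.0): CPU gets the whole budget
print(allocate_power(dgpu_busy=True))   # (25.0, 25.0): budget is split
```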

Intel is no stranger to graphics, no newbie, and definitely no amateur. Intel has some of the finest graphics engineers in the business. And yet, with the best fabs, a bank account that others can only fantasize about, and a brand that could sell used 8-track players, the company was not able to launch a successful discrete graphics processor product line in the past.

Nevertheless, Intel is the largest seller of graphics processors and ships more GPUs than all its competitors combined. Maybe another company would have been happy with that accomplishment, but not Intel.

Figure 1: Market share of PC graphics suppliers[1]

Try, try again

The following is a brief review of some of those efforts, followed by what we think about the current offering.

1982   82720

In 1982, along with the introduction of the PC, NEC changed the emerging computer graphics market landscape, significantly altering the heretofore specialized and expensive computer graphics industry. NEC Information Systems, the US division of Nippon Electric Company (now NEC), introduced the µPD7220 Graphics Display Controller (GDC). NEC had started the project in 1979 and presented a paper on the chip at the IEEE International Solid-State Circuits Conference in February 1981.

Figure 2: The SBX275 Video Graphics Controller with 82720 chip. (Source: Multibus International)

Intel licensed NEC's graphics design, and in June 1983 it brought out the 82720, a clone of the µPD7220; it rolled out its iSBX 275 Multibus-based add-in graphics board (AIB) with the chip later that year. Intel continued to offer the product until 1986. You can read more about the venerable µPD7220.[2]

1986   82786

Intel saw the rise of discrete graphics controllers such as NEC's µPD7220, Hitachi's HD63484, and the several clones of IBM's EGA, and concluded it ought to be the one filling that socket. Intel's intention always was, and still is, to provide every bit of silicon in a PC, and a graphics controller would be no exception.

In 1986, the company introduced the 82786 as an intelligent graphics coprocessor that would replace subsystems and boards that traditionally used discrete components and/or software for graphics functions. It was designed to work with any microprocessor but targeted Intel's 16-bit 80186 and 80286 and 32-bit 80386.

The 82786 was a VLSI graphics coprocessor. “One of the key hardware extensions that support the speed needed to do graphics and text is a graphics coprocessor,” said Bill Gates at the time. It used VRAM, and Intel said the 82786 could provide virtually unlimited color support and resolution.

Figure 3: Intel 82786 die shot. (Source: Commons.wikimedia.org)

Intel's 82786 was available in an 88-pin grid array or leaded chip carrier. It contained a display processor with a CRT controller, and a bus interface unit with a DRAM/VRAM controller supporting 4 MB of memory. Intel was in the game.[3]

Intel sold the chip as a merchant part, and independent AIB suppliers built boards with it. In 1987, two companies were offering three AIBs using the 82786, and by 1988, ten companies were offering 15 AIBs using the chip. The chip wasn't as powerful as others in the market, most notably Texas Instruments' TMS34010, nor as popular as the IBM VGA and its many clones. Intel withdrew the chip with the introduction of the 80486 microprocessor in 1989.

1989   i860

The i860 (codenamed N10) had several distinctive elements. It used a very-long-instruction-word (VLIW) architecture with high-speed floating-point operations. It had a 32-bit ALU core and a 64-bit FPU. The FPU was different in that it had not only an adder and a multiplier but also a graphics processor.

For its time, the graphics section of the i860 was distinctive. It used a 64-bit integer processor and tapped into the floating-point registers to save transistors; the FPU registers were obviously not needed during integer operations. As a result, Intel was able to offer SIMD capabilities as well as 64-bit integer operations. Intel learned from the design and used several of its features in the subsequent MMX unit in the Pentium processor.

Figure 4: Intel i860 microprocessor. (Source: Wikipedia)

A couple of leading-edge companies from that period tried to use the i860. Steve Jobs' NeXT Computer had one in the NeXTdimension[4] to run a PostScript stack, but they never got it running. They did use it to color and move pixels.

Truevision was more successful and built a pixel accelerator board with an i860. They planned to use it with their Targa and Vista framebuffer cards; it wasn't a big success. At the time, Pixar was experimenting with graphics accelerators and made a RenderMan accelerator that was much faster than the 386 they used as a host. Probably the biggest flop was SGI's attempt to use several of them as RealityEngine accelerators for its geometry engine.

Such attempts at graphics acceleration disappeared due to Moore's law and the improvements in x86 CPUs. Intel lost interest too and focused on its Pentium processors. Intel terminated the i860 project in the mid-1990s and followed with the i960. The company merged it with the FPU to become the i960KB. Several graphics terminals used the chip.

1998   i740

One of the more Byzantine product developments, however, was Intel's i740 (codenamed Auburn). The graphics engine was a spin-off of a simulator developed at Martin Marietta in 1995, just as the company merged with Lockheed to form Lockheed Martin Corporation.

In January 1995, Lockheed Martin was looking for some return on the investment it had made in simulator graphics. To do that, the company established a new division, Real3D. Real3D took the simulator technology and created the R3D/100. One of its first customers was Sega, the leader in arcade machines. The Sega Model 2 and Model 3 were big hits, and Lockheed found itself with a successful product, used in over 200,000 arcade game machines.

In 1997, Intel purchased notebook graphics chipmaker Chips and Technologies for $430 million. However, no products taking advantage of technology acquired in the merger ever emerged.

But one system does not make a product, or a company, and in May 1996 Real3D was looking for customers. Intel was looking for graphics, and a partnership was set up among Intel, Real3D, and Chips and Technologies. The plan was to launch an AIB for PCs, a project called Auburn. That project created the AGP-based Intel i740 graphics processor, which Intel released in 1998. Intel also purchased a 20% minority interest in Real3D.

Figure 5: Intel i740 prototype AIB with an AGP connector. (Source: www.SSSTjy.com)

By late 1999, Intel did two things: it shut down the i740 project and acquired the assets of Real3D from Lockheed Martin. As Real3D crumbled, a scandal erupted when ATI hired the employees who didn't go with Intel and opened ATI's Orlando office (which is still in operation).[5]

On another front, before the sale of its assets to Nvidia, 3dfx had sued Real3D over patent infringements. Intel cleverly resolved the issue by selling all the Real3D intellectual property to 3dfx, and that ultimately ended up in Nvidia's hands. Nvidia had SGI's graphics development resources, which included a 10% share in Real3D. Then there was a cascade of lawsuits, including ATI. The companies fought over Real3D's patents until finally, in 2001, a cross-licensing agreement was worked out.

Intel exited the discrete graphics chip market for PCs, a market it had entered less than 18 months earlier to fanfare, and with dismal sales. The company continued to produce integrated graphics chipsets, which combine a standard PC chipset with a graphics processor. Those products sold in computers costing $1,000 or less.

The experience caused bad feelings at Intel, and many in the company said Intel would never venture into discrete graphics again. Then in 2007, Intel tried once more with the Larrabee project. That too ended in failure, and management said never again (again). Most of those people are gone, and today, in 2020, the company is producing a new generation of discrete graphics chips.

1999   i810

The industry expected the i810 IGC (the 82810, codenamed Whitney) would be the integrated version of the i740. That belief came from Intel's hints that the i810 would have the core of the company's upcoming low/mid-range graphics chip, the i752. The i752, launched in April 1999, was the successor to the i740; Intel built it on a 150-nm process, and its graphics processor was codenamed Portola.[6]

Figure 6: Intel i810 chipset. (Source: Wikipedia)

The i810 was one of Intel's most successful iGPUs and is considered by many a breakout product. Intel manufactured it, and tweaked versions of it, for three years. It was a shared-memory architecture with direct access to the system's main memory via the memory bus.

2001   Extreme Graphics

Intel began its Extreme Graphics family with the i830 (codenamed Amador) chipset. Designed for the Pentium III-M, the systems used old SDRAM memory, limiting them to 1,066 MB/s of bandwidth, like earlier GPUs. The clock rate dropped from the i815's 230 MHz to 166 MHz on the Amador chipsets to conserve power and reduce heat output.

In 2002, Intel introduced the i845 chipset (codenamed Brookdale). It marked the start of Intel's push to establish its iGPUs as serious contenders in the gaming market. The i845 had a new 32-bpp graphics hardware engine. It employed Intel's Dynamic Video Memory Technology (DVMT) and Intel Zone Rendering.

Figure 7: Intel’s i845 northbridge chipset was surprisingly small. (Source: Wikipedia)

The iGPU had two texture units and could apply four textures in a single pass. Its fill rate was 200 to 266 Mpixels/s, and it was DirectX 8.1 compatible. It didn't have any vertex shaders (leaving that to the CPU) but did support bump mapping, environment mapping, and anisotropic filtering.
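Those fill rates line up with a single pixel pipeline running at the core clock, assuming the desktop Brookdale parts ran at 200 to 266 MHz:

\[
\text{fill rate} = \text{pixel pipelines} \times \text{core clock} = 1 \times 266\ \text{MHz} = 266\ \text{Mpixels/s}.
\]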

2006   Larrabee to Phi

Intel launched the Larrabee project in 2006. The company hinted at the project during Intel CEO Paul Otellini's IDF keynote in 2007. At the time, he said it would be a 2010 release and would compete against AMD and Nvidia in the realm of high-end graphics.

Intel officially introduced the Larrabee project in 2008 at the Hot Chips conference. The company said it would have dozens of small in-order x86 cores capable of running as many as 64 threads. The chip would be a coprocessor for scientific computing or for graphics processing. At the time, Intel said programmers could determine how they would use those cores at any given time.

Ray tracing was one of the showcase applications for Larrabee. At the 2008 IDF, Intel showed Quake IV running real-time ray tracing on 16 processors.

Figure 8: Intel Larrabee. (Source: VGA Museum)

Although Larrabee was scheduled for launch in the 2009–2010 timeframe, Intel canceled it in December 2009. Rumors circulated in late 2009 that Larrabee didn't perform as well as expected. In 2010, Intel acknowledged that the power density of x86 cores didn't scale as well as that of a GPU.

Intel salvaged some of the work and introduced a compute coprocessor branded Xeon Phi (the Knights series of chips).

2009   Westmere

Westmere was Intel's first CPU with an integrated GPU, and Intel was the first to introduce a CPU with a built-in GPU. Its Clarkdale and Arrandale processors included Ironlake graphics, branded as Celeron, Pentium, or Core with HD Graphics. The GPU had 12 execution units (EUs, or shaders) and could deliver up to 43.2 GFLOPS at 900 MHz. The iGPU could decode H.264 1080p video at up to 40 fps.
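That 43.2-GFLOPS figure works out exactly if each EU retires four floating-point operations per clock:

\[
12\ \text{EUs} \times 4\ \text{FLOPS/clock} \times 0.9\ \text{GHz} = 43.2\ \text{GFLOPS}.
\]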

Intel built the first implementation, Westmere, as a multi-chip device in a single package. The CPU used Intel's 32-nm process, and the GPU used 45 nm.

The Sandy Bridge processor followed Westmere, with a monolithic die that integrated the GPU.

2010 to 2018 iGPUs

Intel continued to improve the iGPU in its CPUs. In 2012, Intel introduced the Ivy Bridge CPU with the HD 4000 iGPU and its 16 execution units (EUs, or shaders). The company expanded the count to 20 EUs when it introduced the Haswell CPU in 2013. The Broadwell CPU, brought out in 2014, had a GT1 iGPU with 12 EUs, and other versions had HD Graphics 5300, 5500, 5600, and P5700, which used the GT2 configuration with 24 EUs. In 2015, Intel brought out the popular i7-6700K (codenamed Skylake) with the HD Graphics 530 iGPU and its 24 EUs. 2017 saw the 7th-gen i7-7700K (codenamed Kaby Lake) with the new HD 630 iGPU, which had 24 EUs. In 2018, Intel released the 9th-gen i9-9900K (codenamed Coffee Lake) with 8 CPU cores. The chip has the UHD 630 iGPU (GT2) with 24 unified EUs. Intel also brought out a special version, the i9-9900KF, that did not have an iGPU.

2018   Kaby Lake G

Intel surprised the industry by announcing an Intel CPU with an embedded AMD GPU: the i7-8809G, -8709G, -8706G, -8705G, and i5-8305G. It was Intel's 8th-generation Intel Core processor with Radeon RX Vega M graphics. The company offered two versions: the i7-8809G and -8709G had a GPU with 24 compute units and 96 texture units, and all the others had a 20-compute-unit, 80-texture-unit version. They ran at 1.06 GHz (boost to 1.19 GHz) and 0.93 GHz (boost to 1.01 GHz), respectively.

Figure 9: Intel multi-chip Kaby Lake G. The chip on the left is 4 GB of HBM2, the middle chip is the Radeon RX Vega, and the chip on the right is the 8th-gen core. (Source: Intel)

Intel discontinued the product line in January 2020.

2019   Ice Lake

The 10th-gen i7-1065G7 (with Sunny Cove cores) had Intel's Gen 11 iGPU with up to 64 EUs. The GPU ran at 1.1 GHz, and the smallest version had 32 EUs.

2020   Xe

In late 2017, Intel hired Radeon Technologies Group leader Raja Koduri away from AMD. The implications were clear. In 2018, Intel hired several other people from AMD and elsewhere. And then, big surprise: in June 2018, Intel said it would build a dGPU, later codenamed Arctic Sound. The chip would span from datacenters to entry-level PC gaming applications. Intel also announced Ponte Vecchio, a GPU-compute device aimed at supercomputers.

During 2020, the company introduced its 11th-gen CPU (codenamed Tiger Lake) with the Xe iGPU. Intel said the architecture of the 11th-gen iGPU, with 96 EUs, was the same as would be in all future Xe dGPUs. In October 2019, Intel reported (via Twitter) that it had tested its first dGPU, codenamed DG1. And in late October 2020, it announced the first dGPU product, the Intel Iris Xe Max.

Figure 10: Intel's Iris Xe Max dGPU for thin-and-light notebooks. (Source: Intel)

The entry-level dGPU would be employed in thin-and-light notebooks. Acer (with the Swift 3X), Asus, and Dell (with the Inspiron 15 7000 2-in-1) were the first OEMs to announce products. The company promised other parts based on Xe for 2021.

What do we think?

In most of Intel's previous adventures with graphics, specifically the discrete graphics parts, scaling was an issue. The products that made it to market were spot products with average-to-mediocre performance. They had no headroom and no way to scale. Intel's last three generations of iGPUs have been different: they have demonstrated scaling and process exploitation very well.

Intel says it will offer Xe dGPUs from entry level (like the current Iris Xe Max) up to supercomputer accelerators like Ponte Vecchio. That looks good on paper, but it is almost impossible to do. The only thing the current Iris Xe Max and Ponte Vecchio might share is the basic ALU in the shader. Caches, memory controllers, bus managers, video outputs, clock gating, and a myriad of other parts will be different as one moves from one product segment to the next.

However, the Iris Xe Max (DG1) has its roots in Intel's iGPUs in the 10th- and 11th-gen CPUs. That evidence comes from the type of memory the Iris Xe Max uses: LPDDR4X instead of GDDR. So right there, we can see a scaling wall.

Intel promises a more robust dGPU, DG2, in early 2021. The company has implied it will be a desktop part and will use GDDR6. But remember, there are four discrete segments in the desktop dGPU market: low-end, mid-range, high-end, and workstation. Meeting the demands of each of those segments is what reveals a design's ability to scale. Scalability was one of the things that killed the Larrabee project.

Intel acknowledges the issue. “No single transistor is optimal across all design points,” said chief architect Raja Koduri. “The transistor we need for a performance desktop CPU, to hit super-high frequencies, is very different from the transistor we need for high-performance integrated GPUs.”

Figure 11: Advanced packaging is about mixing and matching the right transistors for each application to speed time to market while maximizing performance. (Source: Intel)

This time, however, things may be different. Since Larrabee, Intel has developed its Embedded Multi-die Interconnect Bridge (EMIB).

Before EMIB, designers who wanted multiple dies for maximum performance or feature set put heterogeneous dies onto a single package using an interposer, which carried the wires for die-to-die communication. Through-silicon vias (TSVs) passed through the interposer into a substrate that formed the package's base, an approach often referred to as 2.5D packaging.

EMIB abandons the interposer in favor of tiny silicon bridges embedded in the substrate layer. The bridges contain micro-bumps that enable die-to-die connections. Intel demonstrated it with an FPGA implementation, the Stratix.

Figure 12: EMIB creates a high-density connection between the Stratix 10 FPGA and two transceiver dies. (Source: Intel)

Silicon bridges are less expensive than interposers. One of Intel's first products with embedded bridges was Kaby Lake G (see the '2018 Kaby Lake G' section above). Laptops based on Kaby Lake G were considered expensive. However, they demonstrated that Intel's EMIB would work with heterogeneous dies in one package. For one thing, it conserves valuable board space. It can also improve performance and reduce cost compared to discrete components. Kaby Lake G used dies from three different foundries. This was the foundation work Intel did for chiplet designs, and chiplet design is how Intel will scale Xe dGPU processors (tiles) for the various segments. Also, since Intel is building DG2 in 7 nm at TSMC, multi-die, multi-vendor interoperability is critical.

Intel refers to this as the Advanced Interface Bus (AIB) between its core fabric and each tile.

Intel's other advanced packaging technology, Foveros, allows it to pick the best process technology for each layer in a stack. The Lakefield processor had the first implementation of Foveros. It incorporated processing cores, memory control, and graphics on a 10-nm die. That chiplet sits on top of the base die, which includes the functions usually found in a platform controller hub (e.g., audio, storage, PCIe). Intel uses a low-power 14-nm process for those functions. Micro-bumps connect power and communications through TSVs in the base die. Intel then puts LPDDR4X memory from one of its partners on top of the stack.

Intel's Xe may become what the company promised in 2019: a scalable architecture. One that can satisfy everything from high-end GPU compute to low-end thin and lights.[7],[8] A common architecture that can share one driver and live atop Intel's oneAPI concept.[9]

 

Figure 13: Intel plans to span the entire dGPU market. (Source: Intel)

Intel has demonstrated in the past, with its spot products, that it can get the industry's attention, but it has also demonstrated an unwillingness to pay the price to be a winner (even though it spent a hell of a lot of money). Intel still has the same culture, and volume in the fabs is always paramount. A few million high-performance dGPUs aren't going to get manufacturing's attention, and certainly not the CFO's. Offloading the dGPU to an external fab is an expediency that Intel will correct as soon as it gets its fabs up to speed at 5 nm. And then the company will have to come to grips with cost of goods and Intel's enormous overhead, all for a few million parts. The prospects of Intel's management and dyed-in-the-wool culture being able to endure such a situation are not particularly good, even if the TSMC-built dGPU is a stellar performer.

[1] Peddie, Jon, Q2 2020 Market Watch, September 24, 2020.

[2] Peddie, Jon, Famous Graphics Chips: NEC µPD7220 Graphics Display Controller, https://www.computer.org/publications/tech-news/chasing-pixels/famous-graphics-chips

[3] Peddie, Jon, Famous Graphics Chips: Intel’s 82786 Intel’s First Discrete Graphics Coprocessor, https://www.computer.org/publications/tech-news/chasing-pixels/Famous-Graphics-Chips-Intels-82786-Intels-First-Discrete-Graphics-Coprocessor

[4] https://en.wikipedia.org/wiki/NeXTdimension

[5] Peddie, Jon, Famous Graphics chips: Intel740, https://www.computer.org/publications/tech-news/chasing-pixels/famous-graphics-chips-Intel740

[6] Peddie, Jon, Famous Graphics Chips: The Integrated Graphics Controller, https://www.computer.org/publications/tech-news/chasing-pixels/the-integrated-graphics-controller.

[7] Peddie, Jon, Intel unveils Xe-architecture-based discrete GPU for HPC, https://www.jonpeddie.com/report/intel-unveils-xe-architecture-based-discrete-gpu-for-hpc/

[8] Peddie, Jon, Intel launches hybrid notebook processor: 3D stacking and very low power hallmarks of new, https://www.jonpeddie.com/report/intel-launches-hybrid-notebook-processor/

[9] Peddie, Jon, Intel’s stacked chip is sexy, https://www.jonpeddie.com/report/intels-stacked-chip-is-sexy/

Jon Peddie is a recognized pioneer in the graphics industry, president of Jon Peddie Research, and named one of the most influential analysts in the world. He lectures at numerous conferences and universities on topics pertaining to graphics technology and the emerging trends in digital media technology. Former president of the Siggraph Pioneers, he serves on advisory boards of several conferences, organizations, and companies and contributes articles to numerous publications. In 2015, he was given the Lifetime Achievement award from the CAAD society. Peddie has published hundreds of papers to date and has authored and contributed to 11 books, his most recent being Ray Tracing: A Tool for All.