
Dual graphics cards were abandoned by NVIDIA and AMD, but Intel has chosen to enter

At the end of the last century, computers' graphics processing power was so weak that 3D game visuals looked terrible: even the top 3D accelerator cards could not handle high-resolution games, and "high resolution" meant 800×600. In response, 3dfx introduced what was then a cutting-edge technology: the dual-graphics-card solution.

The famous "Legend of Sword and Fairy" ran at 320×240, a resolution almost incomprehensible to today's players.

If one 3D accelerator card is not enough, why not run two in parallel? 3dfx's second-generation product, the Voodoo 2, supported dual-card operation and became a favorite of computer gamers worldwide around 1998, with popularity rivaling the iPhone's. Provided, of course, that you had money to spare.

3dfx was later acquired by NVIDIA, and its dual-card parallel (SLI) technology went with it. Over the following decade, dual-card platforms remained the favorite of top-tier players, and when two cards weren't enough, some moved on to three or four.

AMD, which acquired ATi, became NVIDIA's only competitor in the graphics card field. Not to be outdone, it launched its own dual-card parallel solution, called CrossFire.

Today the situation has changed dramatically. Whether it is two discrete graphics cards, an integrated GPU paired with a discrete one (an awkward solution with disappointing real-world results), or a dual-GPU card (astonishingly expensive), both AMD and NVIDIA seem to have lost interest, and have gradually withdrawn support in both hardware and software.

Why is this so?

Ever since the Voodoo 2, born in 1997, the point of running cards in parallel has been to cover situations where even the top graphics card is not enough. More than twenty years of development have not shaken that basic premise. By that logic, if you are not already running a flagship such as the RTX 3090, there is little reason to pair up lesser cards.

Some might object that two sub-flagship cards combined can be cheaper than one flagship, while each delivers more than half the flagship's performance. That is true: in the RTX 30 series, the RTX 3090 costs twice as much as the RTX 3080 but offers nowhere near twice the performance.

However, don't forget that SLI scaling is lossy: the actual gain falls far short of doubling, and can be as low as 30%. Building SLI for cost-effectiveness, in order to avoid paying for an expensive flagship, is therefore a rather naive idea.
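The cost-effectiveness argument above can be made concrete with a rough back-of-the-envelope calculation. All prices and performance figures below are illustrative assumptions for the sake of the arithmetic, not measured benchmarks:

```python
# Rough cost-effectiveness check: two cheaper cards in SLI vs. one flagship.
# All figures are illustrative assumptions, not measured benchmarks.

def perf_per_dollar(single_card_perf, price, num_cards=1, scaling=1.0):
    """Relative performance per dollar of a multi-card setup.

    scaling: fraction of each extra card's throughput actually realized.
             SLI is lossy, so in practice this is well below 1.0.
    """
    total_perf = single_card_perf * (1 + (num_cards - 1) * scaling)
    return total_perf / (price * num_cards)

# Hypothetical numbers: the flagship is 1.5x the performance of the
# sub-flagship at 2x the price (roughly the 3090-vs-3080 situation).
flagship = perf_per_dollar(1.5, 2.0)          # one top card -> 0.75
sli_good = perf_per_dollar(1.0, 1.0, 2, 0.9)  # optimistic 90% scaling -> 0.95
sli_bad  = perf_per_dollar(1.0, 1.0, 2, 0.3)  # pessimistic 30% scaling -> 0.65

print(f"flagship: {flagship:.2f}, SLI best: {sli_good:.2f}, SLI worst: {sli_bad:.2f}")
```

With near-ideal scaling the dual-card setup wins on value, but at the 30% scaling seen in many real games it actually loses to the single flagship, which is the point the paragraph above is making.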

Furthermore, a multi-card system makes a DIY build considerably harder, demands more case space and power, and hurts compatibility.

Top graphics cards do not sell in large volumes, this year's top dual-card platform will be outdated next year, and the cost of upgrading doubles. The whole approach is thankless for players, and both NVIDIA and AMD see that clearly, so their decision to fade out of this field is entirely reasonable.

Intel, however, having re-entered the discrete graphics market, seems very interested in this technology, and the reason is simple and brutal: when performance lags, make it up in quantity.

Recently, two new Intel Xe graphics cards appeared in SiSoftware's database. One contains 128 execution units and 1024 stream processors, the highest specification of any Intel Xe card seen so far.

For comparison, DG1, the discrete derivative of Tiger Lake's integrated graphics, has 96 execution units and 768 stream processors. Notice anything? For a discrete card, the new part shows no obvious advantage. How is that supposed to compete?

The new card's core clock reaches a new high of 1.4GHz, and it carries 1MB of L2 cache and 3GB of video memory.

There is no doubt that such a card is progress for Intel itself, but expecting it to go to market head-to-head against AMD or NVIDIA is unrealistic.

The other card in SiSoftware's database is more powerful, with 192 execution units and 1536 stream processors. Unexpectedly, though, the detection shows it is a dual-GPU configuration, most likely a combination of a 96-unit integrated GPU and a 96-unit discrete GPU.

Further down the listing, the 2MB of L2 cache and 6GB of video memory clearly belong to two GPUs. Whether this is a single card with two GPUs or two cards in parallel remains unknown.
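All the specifications quoted above follow one simple pattern: each Xe execution unit corresponds to 8 stream processors, which a quick check of the reported numbers confirms:

```python
# Intel Xe: each execution unit (EU) exposes 8 ALUs ("stream processors"),
# so SP count = EU count * 8 for every configuration named in the article.
SP_PER_EU = 8

configs = {
    "DG1 / Tiger Lake graphics": 96,   # 768 stream processors
    "new single Xe card":        128,  # 1024 stream processors
    "dual-GPU entry":            192,  # 1536 SPs, possibly 96 iGPU + 96 dGPU
}

for name, eus in configs.items():
    print(f"{name}: {eus} EUs -> {eus * SP_PER_EU} stream processors")
```

The same ratio explains why the dual-GPU entry's 2MB cache and 6GB memory are exactly double the single card's figures: every resource is simply duplicated across the two 96-EU GPUs.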

In all fairness, veteran gamers are already a little weary of Intel announcing new discrete graphics cards.

Back in the last century, the i740 barely held a place in the low-end market. After that, Intel produced only integrated or on-die graphics, and a twenty-year gap is hard to close. More than a decade ago Intel made a great fanfare over Larrabee, but it turned out to be all thunder and no rain, and the product failed.

In recent years Intel has never given up its ambition to enter discrete graphics, but it has never made a breakthrough. To make matters worse, ARM continues to erode Intel's foundations, and NVIDIA's acquisition of ARM raises the threat further. AMD, meanwhile, has held the lead in CPUs for more than two years, something never before seen in Intel's history of competition with AMD. Intel now faces its most severe situation since the Grove era, and fighting on every front at once means something has to give.