According to foreign media reports, AMD's acquisition of Xilinx is progressing steadily through China's regulatory approval process and has cleared another hurdle in the latest round of negotiations. Sources say that because major domestic partners have no intention of blocking the transaction, everything has proceeded smoothly.
According to earlier reports, AMD's $35 billion acquisition of Xilinx entered the second phase of approval in July this year. AMD has confirmed that it submitted the required documents to the relevant authorities and expects the transaction to close before the end of the year.
Analysts, pointing to Intel's 2015 acquisition of fellow FPGA giant Altera, are optimistic that Chinese approval of this merger will not be a significant obstacle.
The United States, the European Union and the United Kingdom have all approved
In May of this year, according to Korean media reports, the US Federal Trade Commission (FTC) approved the merger of AMD and Xilinx.
The FTC concluded that the merger of AMD and Xilinx is a combination of two American companies whose main businesses differ and whose semiconductor products do not directly compete, so there is no significant competition concern. AMD expects to complete the transaction in 2021, officially sealing the merger.
However, the UK Competition and Markets Authority stated in early May that, before launching a formal investigation into AMD's proposed acquisition of Xilinx, it was seeking opinions from interested parties on whether the transaction would lead to "a substantial lessening of competition", with a deadline of July 6. In addition, according to a document on the European Commission's website, AMD has submitted its Xilinx acquisition plan to the European Union for review, with a provisional deadline of June 30.
By July, however, the transaction had reportedly been approved in both the UK and the EU.
Viewing AMD's acquisition of Xilinx through FPGA history: not necessarily a good business
As one of the major acquisitions of last year, AMD's purchase of Xilinx will have a significant impact on consolidation in the electronics industry. Reading AMD's own introduction to the merger, the company appears convinced that acquiring Xilinx will significantly improve its business model and market position.
Both AMD and Xilinx are technology leaders in their respective fields. For me, the question is whether the value of the combined company exceeds the sum of its parts, or whether this is merely a very expensive way for AMD to add a third leg to its CPU and GPU business. AMD frames the acquisition positively in its proposal, but based on a review of the technologies involved and the outcomes of similar mergers, the answer may be more complicated than it appears on the surface.
A good starting point for understanding this question is to understand something of the FPGA business and technology, and then the benefits Xilinx brings to AMD.
Next, without going into technical detail, I will review FPGA history, which is enough for us to see how Xilinx may affect AMD's future business and strategy.
From the perspective of technological history, the Xilinx acquisition marks a new stage in the FPGA era that began in the late 1980s. At the time, the FPGA concept captured the imagination of digital designers (and venture capitalists). Programmable logic had already existed for a decade in the form of PLDs (Programmable Logic Devices), but those were simpler devices, limited to Boolean operations, lookup tables, and basic state machines.
With the advent of FPGAs, the size, complexity, and capability of programmable devices grew to the point where high-level hardware design languages were used for design entry. The ability to describe hardware in a way comparable to familiar software languages (such as Basic and C) attracted developers.
By the mid-1990s, no fewer than a dozen FPGA vendors, from startups to established semiconductor companies (including Intel and AMD), were competing in the field. It was a heyday for FPGAs: new architectures were being developed, FPGAs became a major area of university research, and entire conferences were dedicated to FPGA topics. Within ten years, however, the field had essentially consolidated into five companies: Altera, Xilinx, and Lattice (all using RAM-programmed lookup-table architectures), plus Actel and QuickLogic, with antifuse one-time-programmable architectures.
Altera and Xilinx led in size and functionality and dominated market share. A decade later the pure-play FPGA field consolidated further: Microsemi acquired Actel in 2010 (and itself became part of Microchip (MCHP) in 2018), and Intel (INTC) acquired Altera in 2015. With Xilinx now exiting, Lattice (whose proposed 2016 acquisition was blocked) and QuickLogic are the only remaining independent FPGA leaders.
However, the field has remained dynamic, with new FPGA startups continuing to emerge.
History therefore offers some insight, and one might predict that Lattice will one day become an acquisition target itself. To understand why FPGA companies have become such attractive acquisition targets, we need to peel back another layer.
The fundamental business problem with FPGAs is that they have never been the best solution for any high-volume application. The PC is a typical high-volume product: with massive annual sales, selling a key PC component can make a lot of money. Without a high-volume product, an FPGA business is much harder to grow than a semiconductor business built on high-volume parts, because you must find many more low-volume customers to sell the same number of parts.
When it comes to mass production, what are the disadvantages of FPGAs? FPGAs come in different sizes with different features, but as a benchmark, a mature high-end device sells for around US$300. Comparing custom foundry ICs (that is, ASICs) with FPGAs, a typical rule of thumb, given how the two kinds of device work, is that 1 ASIC gate = 4 FPGA gates (this is the price you pay for field programmability). All else being equal, a $300 FPGA is therefore roughly equivalent to a $75 ASIC. The equivalent ASIC will also have better speed and power (again due to the limitations and overhead of the FPGA architecture), but we will ignore that for now.
All else is not equal, however, because the one-time costs of an ASIC are effectively amortized into the FPGA's price. Assume the engineering effort for a given design is the same for an FPGA and its equivalent ASIC (in reality, the front-end design is very similar). The main cost difference is that ASIC masks and wafer-fab setup run US$10-20 million, and the infrastructure needed to support a new chip (licenses, tools, support, and so on) can add another US$10-20 million.
Whether the cost advantage holds, therefore, depends on volume.
For 10,000 devices, the FPGA solution costs US$3 million and the ASIC costs US$41 million. So for a 10K production run, if the other technical requirements can be met, the FPGA is the undisputed winner. Raise the volume a notch and we see a similar picture.
For 100,000 devices, the FPGA solution costs US$30 million, while the ASIC costs US$48 million. As volume grows, however, the impact of those mask and tooling costs shrinks, and the story starts to change.
For 500,000 devices, the FPGA solution costs US$150 million against US$78 million for the ASIC. For one million devices, it is US$300 million for the FPGA versus US$115 million for the ASIC. The ASIC wins handily, and its cost advantage grows with volume (the crossover point is around 180,000 units). I have used generalized, rounded numbers, but you get the idea.
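Working backward from the totals above ($300 per FPGA unit, $75 per ASIC unit), the quoted figures imply a one-time ASIC cost of roughly US$40.5 million. A minimal sketch of the break-even arithmetic, under those assumed numbers:

```python
# FPGA vs. ASIC total-cost sketch. The unit prices come from the text;
# the one-time ASIC cost (masks, fab setup, tools, licenses) is an
# assumption back-solved from the quoted totals, roughly $40.5M.

FPGA_UNIT = 300.0        # dollars per FPGA device
ASIC_UNIT = 75.0         # dollars per equivalent ASIC device
ASIC_NRE = 40_500_000.0  # assumed one-time ASIC cost, dollars

def fpga_cost(volume: int) -> float:
    """Total cost of the FPGA solution: purely per-unit."""
    return FPGA_UNIT * volume

def asic_cost(volume: int) -> float:
    """Total cost of the ASIC solution: one-time NRE plus per-unit."""
    return ASIC_NRE + ASIC_UNIT * volume

# Crossover volume: 300*v = 40.5e6 + 75*v  ->  v = 40.5e6 / 225 = 180,000
breakeven = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)

for v in (10_000, 100_000, 500_000, 1_000_000):
    print(f"{v:>9,} units: FPGA ${fpga_cost(v)/1e6:6.1f}M "
          f"vs ASIC ${asic_cost(v)/1e6:6.1f}M")
print(f"Break-even volume: {breakeven:,.0f} units")
```

At these assumed figures the model reproduces the totals quoted in the text (within rounding), with the two cost lines crossing at 180,000 units.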
Factoring in power and performance makes the picture even more complicated, but the key point is that FPGAs dominate mainly in low-volume applications such as communications and industrial systems. In PCs, phones, and other high-volume electronics, you will rarely find an FPGA.
This leads to my first long-term concern about the acquisition: how does becoming the dominant FPGA manufacturer really help AMD? I have to question how an FPGA business synergizes with a company focused on the consumer market.
AMD offers some optimistic projections in the merger proposal about how the combination will drive growth, saying it allows them to diversify their product line. But the question becomes whether, locked in a battle with larger and very capable competitors, AMD can really afford to take its eye off the ball in order to develop the smaller-volume (and therefore less profitable) parts that make up the FPGA business.
They also say it lets them enter new markets such as automotive, 5G, and IoT networking.
Diversification always sounds appealing, but it is important to understand that these are high-volume markets, and once a design is proven it tends to migrate to an ASIC. Xilinx may win AMD orders for prototyping and low-volume production, but FPGAs remain too expensive compared with high-volume alternatives. This pattern has repeated several times in past network build-outs: FPGAs served the first-generation product, then were displaced by cheaper, higher-performance NPU (network processor) ASICs, or by a function Intel simply added to its next server part.
I expect the value-added acceleration features AMD discusses to follow the same trajectory.
Some will say that Intel's Altera acquisition has worked out well. Without making a detailed Intel-versus-AMD comparison, I see three problems here:
In both absolute and relative terms, AMD is paying twice what Intel paid for Altera. In 2015, US$16 billion was roughly 15% of Intel's market value; the US$35 billion AMD must pay is about one-third of AMD's market value. That is a much bigger bet.
Intel entered the Altera deal with Altera already a foundry customer, and Intel needed to fill its fabs to improve utilization. Regardless of other factors, that made the deal a win. AMD lacks the foundry synergies that drove Intel toward Altera.
Intel has always been more diversified than AMD. Yes, PC chips dominate both companies, but since 2012 Intel has regularly acquired three or four companies a year, building experience in managing a variety of technologies. AMD has made no major acquisitions in the past decade. With a merger this large, they are entering uncharted territory.
A fourth issue is more nebulous. Intel has owned Altera for five years and has yet to deliver the kind of adaptive SoC platform that AMD sees as the key to Xilinx's value argument. At least part of the problem comes back to FPGA power and performance, which undermines the viability of such platforms.
Over the past 20 years, many FPGA vendors have tried platforms like this, with varying traction. So one has to question when such adaptive platforms will become mainstream and important, and, given Intel's five-year head start and foundry experience, whether AMD can compete on them.
The last, but not least, question about the acquisition is financial. AMD asserts that the deal will be accretive overall, but some simple back-of-the-envelope earnings-per-share calculations suggest otherwise.
Where will Xilinx go?
Just over three years ago, Victor Peng took over as CEO of Xilinx following Moshe Gavrielov's retirement. The change of management marked the beginning of a new era, as Peng set out to transform the world's largest programmable-logic company into one with broader market coverage and growth potential.
In 2018, a few weeks after taking over, Peng laid out a new strategy for Xilinx. The company would put "data center first", hoping to capture the rapid growth expected in data-center and cloud acceleration amid the AI revolution. It also introduced what it claimed was a new class of device called the ACAP (Adaptive Compute Acceleration Platform). Peng further stated that Xilinx would "accelerate growth in the core markets" (that is, not abandon its traditional FPGA customers) and "drive adaptive computing" (essentially, find applications and markets for the planned new ACAP devices).
Most of the strategic elements in Peng's 2018 address were "forward-looking statements", in standard safe-harbor phrasing (because that is what a strategy is). But, as Yogi Berra said, "It's tough to make predictions, especially about the future."
So now, three years on, how have Peng's 2018 promises held up, and where does Xilinx go next?
Xilinx first addressed the open questions, and has now shipped multiple ACAP device variants in its "Versal" family. These devices are built on TSMC's 7nm CMOS process and combine FPGA fabric, an Arm-based multi-core processing system, dedicated AI processing engines, ultra-high-performance I/O, piles of memory, and a network-on-chip (NoC) that lets large amounts of data move around the die without causing the traffic congestion of traditional FPGA routing.
Our 2018 view of the ACAP was lukewarm. We felt the devices would be so complicated that they would scare away everyone but elite development teams. We also believed that, by trying to do everything at once, the ACAP would be mediocre at everything. And we saw nothing to prove the ACAP was a new device category rather than just the next-generation FPGA.
So what is the reality of ACAP?
Well, Xilinx did several practical things to address the complexity problem. First, it made great strides in the development-tool environment, including the introduction of the Vitis software development platform, which lets software developers target complex heterogeneous compute platforms (such as ACAPs) without diving into the LUTs and latches of programmable hardware, or mastering hardware description languages, synthesis, place-and-route, and timing closure. On this point, score a win for Xilinx: teams seem able to develop and deploy systems on ACAPs without being overwhelmed by complexity and without hiring a large team of FPGA wizards to bail them out.
Second, they launched Versal not as a single device family but as multiple series aimed at different application areas. Not every variant includes every "ACAP" feature, and the result is a more sensible, practical lineup that does not leave large amounts of expensive, margin-rich silicon sitting idle in designs that do not need particular features. Score another win for Xilinx: with a range of ACAPs on the price list, teams can choose a device suited to their needs rather than a one-size-fits-all platform.
Third, does the claim of the ACAP as a new device category hold up? Hmm. We are skeptical of any "category" that contains only one member. No one else has introduced a competing "ACAP". Other devices on the market (from Intel and Achronix, for example) contain most or all of the same features, yet are still called "FPGAs". On this point we score Xilinx a failure: ACAP is a brand, not a category. (These devices are just fancy FPGAs.)
Our analogy for many of these points is the old Swiss Army knife. When they added the corkscrew, nobody shouted, "Hey, this is no longer a knife!" People seemed comfortable as other elements were introduced: toothpicks, saws, scissors, all cutting, poking, or splitting things, which is absolutely knife-like. But the fork? Nothing to say for it. Nevertheless, the world kept the name "knife". No new feature has ever pushed us to rename it the "Swiss Adaptive Multi-tool Pocket Device" (SAMPD). And that is a good thing.
Xilinx said at the time that the NoC was the key: the feature that pulled the ACAP out of the "FPGA" category and into a class of its own. Not the DSP/MAC blocks, not the block RAM, not the SerDes transceivers; all of those can still be found on FPGAs. Then again, Xilinx gave up the FPGA name years ago with Zynq, taking the position that an Arm-based processing subsystem made the device an "SoC" rather than an "FPGA", another useless semantic argument. Not surprisingly, its competitors have refused to go along.
Let us boil the whole ACAP naming controversy down to this: it is meaningless to the engineers designing systems, but it lets reporters fill a few paragraphs with nonsense.
Moving on to the "data center first" strategy: our 2018 concern was that putting the data center first would shift Xilinx's attention away from its core market, de-emphasizing FPGAs, the single defining technology of which the company had been the dominant supplier for decades, in favor of a small share of a larger market ruled by its main competitor. That seemed unwise at best. If you are the world tennis champion, you don't wake up one day and say, "Hey, I think I'll be an Olympic gymnast; I have the muscles."
There was also the worry of scaring Xilinx's loyal FPGA customer base. The "accelerate core market growth" part of the 2018 strategy shouted to them: "Hey, we have not given up on FPGAs. In fact, we plan to accelerate growth there. We've got your back." Over the past three years, the company appears to have kept that promise, expanding its traditional FPGA products, maintaining its market share, and achieving high satisfaction among the FPGA customers we have talked to.
But what about “data center first”?
In that arena, we have to say Xilinx has so far done "pretty well". And when you are gearing up for an epic battle against the long-dominant giant, doing pretty well is a genuine achievement. Intel will spend several times Xilinx's market value defending its data-center dominance rather than sit idly by while a third party breaches its fortress. (Well, Intel did exactly that with NVIDIA and GPU-based AI acceleration, but bear with us.)
But back in 2018, we suspected Xilinx's strategy had something else going on: a subtext. When you dominate a market like FPGAs, your prospects for explosive growth are limited. You cannot grow simply by expanding your share of your main market, because you have already won that battle. Your growth prospects therefore depend on the market itself growing rapidly.
And FPGAs do not seem poised for explosive growth.
At the time, there were also rumors that Xilinx was positioning itself to be acquired. The unfortunate American notion of "fiduciary duty" essentially means that the boards and management teams of public companies are legally obliged to serve shareholders' interests above everyone else's: above employees, customers, technological progress, even the environment or the well-being of the planet. In the harshest terms, if you can legally raise your stock price 10% by sabotaging your own technology, abandoning customers, firing employees, and trashing the environment, you are obliged to do so.
Of course, Xilinx didn't need to do any of those things. It only needed to convince potential suitors that it had explosive growth potential, unconstrained by the single-digit to low-double-digit growth of FPGAs. The 2018 strategy addressed this in several ways. "Data center first" says the company's target market is ten times the size of the FPGA sandbox, with explosive growth potential. Creating a "new category" of chip that is not an FPGA supports the idea of transcending the bounds of programmable logic. It all makes sense: Xilinx was running a personals ad.
Thinking about Xilinx's long march into the data center brings us to AMD. The rivalry between AMD and Intel exceeds even the decades-long duel between Xilinx and Altera (which, we remind you, is now also part of Intel). The AMD-Xilinx marriage lays the groundwork for an even fiercer battle to come.
Xilinx's achievements over the past three years go well beyond this, however. It has won impressive 5G deployments, bringing to market some truly unique products in a space that should keep delivering for at least a decade. It is building a dominant position in the automotive market and hopes to expand it across a broader set of sockets covering nearly every ODM, and it is expanding its support capabilities to become a "solutions" provider in addition to its traditional strength as a "components" supplier.
We are keen to see the impact of the AMD acquisition. In the Intel/Altera case (and, frankly, most acquisitions), the merger was followed by a period of reduced productivity, staff turnover, and general chaos. The same will surely affect Xilinx to some degree, but how deeply will depend on how the transition is managed. It may be mild, or it may be fatal; we shall have to wait and see.
In the Intel/Altera case, the merger also seemed to pull the company away from its traditional markets and customers in favor of the larger agenda of winning the data center. We hope that does not happen with AMD/Xilinx, because Xilinx has a very important presence in markets outside the data center, and those markets will suffer if the company neglects them.
All in all, in the three years since 2018, Xilinx has undergone a positive transformation and been acquired by AMD. It has brought many impressive new technologies to market, with enthusiastic support from the design community. New markets have been conquered; old markets have enjoyed stable support and even impressive growth. The future will be very interesting.