AMD Planning on Fusing FPGA AI Engines Onto Its Upcoming EPYC Processors

Nivedita Bangari

May 8, 2022

AMD revealed during its earnings call that it will integrate Xilinx's FPGA-powered AI inference engine into its CPU range, with the first devices expected in 2023. The news indicates that AMD is moving quickly to fold the benefits of its $54 billion Xilinx acquisition into its processor lineup. That isn't entirely surprising: the company's recent patents show it is already well on its way to enabling multiple methods of connecting AI accelerators to its processors, including sophisticated 3D chip-stacking technology.

AMD's desire to bundle its CPUs with built-in FPGAs isn't entirely new; Intel tried something similar with the FPGA portfolio it acquired from Altera in late 2015 for $16.7 billion. However, even though Intel announced a combined CPU+FPGA chip in 2014 and even demonstrated a test chip, the silicon didn't arrive until 2018, and then only in a limited experimental run that appears to have come to a halt.

[Image: AMD]

AMD hasn't published any details on its FPGA-enabled devices yet, but the company's strategy for connecting Xilinx FPGA silicon to its chips will almost certainly be more sophisticated. While Intel used standard PCIe lanes and its QPI interconnect to link its FPGA die to the CPU, AMD's latest patents suggest it is working on an accelerator port that could support a variety of packaging alternatives.
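Today, a discrete FPGA attached over PCIe appears to software as an ordinary PCIe device; an accelerator port would give the FPGA a much tighter, on-package link. As a point of reference, here is a minimal Python sketch (assuming a Linux host; 0x10ee is Xilinx's PCI vendor ID) of how such a discrete device is discovered through sysfs today:

    # Minimal sketch: enumerating PCIe devices on Linux via sysfs and
    # picking out Xilinx parts by vendor ID (0x10ee). This is how a
    # discrete, PCIe-attached FPGA is visible to software today; an
    # on-package accelerator port would sidestep this external link.
    from pathlib import Path

    XILINX_VENDOR_ID = "0x10ee"

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        if vendor == XILINX_VENDOR_ID:
            device_id = (dev / "device").read_text().strip()
            print(f"Xilinx FPGA candidate at {dev.name} (device {device_id})")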

Three-dimensional chip-stacking technology, similar to what AMD employs to fuse SRAM chiplets onto its Milan-X processors, might be used to mount an FPGA chiplet on top of the processor's I/O die (IOD). This stacking strategy would improve performance, power, and memory throughput, but as we've seen with AMD's existing 3D-stacked chips, it can also cause heat issues that stymie performance if the chiplet sits too close to the compute dies. AMD's decision to put an accelerator on top of the I/O die makes sense because it would help alleviate those heat issues, allowing AMD to squeeze more performance out of the CPU chiplets (CCDs) around it.

[Image: AMD]

AMD has other options as well. By establishing an accelerator port, the company can stack chiplets on top of other dies or simply arrange them in standard 2.5D implementations that use a discrete accelerator chiplet instead of a CPU chiplet (see the diagrams above). AMD could also include other types of accelerators, such as GPUs, ASICs, or DSPs. This opens up a slew of alternatives for AMD's own future products, as well as the possibility of customers mixing and matching these chiplets into unique AMD semi-custom processors.

As the wave of customization in the data centre continues, this type of foundational technology will undoubtedly come in handy, as shown by AMD's recently announced 128-core EPYC Bergamo CPUs, which use a new 'Zen 4c' core suited to cloud-native applications.

AMD already addresses AI workloads with its data centre GPUs and CPUs, with the former typically handling the compute-intensive work of training an AI model. AMD will largely employ the Xilinx FPGA AI engines for inference, which uses a pre-trained AI model to perform a specific task.
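To make the training/inference split concrete, here is a hedged Python sketch of the inference side using ONNX Runtime. The model file name is a placeholder, and the CPU execution provider stands in for whatever accelerator-backed path Xilinx hardware would actually use (typically via the Vitis AI stack):

    # Hedged sketch: inference with a pre-trained model via ONNX Runtime.
    # "resnet50.onnx" is a placeholder for any pre-trained image model;
    # on FPGA hardware, an accelerator-backed execution provider would
    # replace the CPU provider below.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("resnet50.onnx",
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name

    # A dummy 224x224 RGB image batch; real use would feed camera frames.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    logits = session.run(None, {input_name: batch})[0]
    print("predicted class:", int(np.argmax(logits)))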

[Image: AMD]

Xilinx already utilises the AI engine for image recognition and "all types" of inference applications in embedded systems and edge devices such as vehicles, according to Victor Peng, president of AMD's Adaptive and Embedded Computing group, speaking on the company's earnings call. The design is scalable, Peng said, making it a natural fit for the company's CPUs.

Inference workloads require fewer processing resources than training and are significantly more common in data centre installations. As a result, inference workloads are widely deployed across large server farms, with Nvidia developing low-power inference GPUs such as the T4 and Intel leaning on hardware-assisted AI acceleration in its Xeon CPUs to handle them.
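Part of why inference is cheaper is that it runs without gradient bookkeeping and tolerates reduced numeric precision, which is exactly what the T4's INT8 tensor cores and Intel's DL Boost (VNNI) instructions accelerate. A small PyTorch sketch with an illustrative toy model shows the software side of that trade:

    # Hedged sketch: int8 dynamic quantization of a toy model in PyTorch.
    # The model is illustrative; the point is that inference can run at
    # reduced precision and without gradient state, which is what
    # low-power inference hardware exploits.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                          nn.Linear(256, 10)).eval()

    # Convert Linear layers to int8 weights with dynamic activation scaling.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)

    with torch.no_grad():  # inference only: no gradients, no optimizer state
        out = quantized(torch.randn(1, 512))
    print(out.shape)  # torch.Size([1, 10])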

Also read: AMD confirms Phoenix and powerful Dragon Range APUs in its new Zen4 Roadmap

Source: https://technosports.co.in/2022/05/08/amd-epyc-processors/

