The high energy demands of GenAI and other LLMs are accelerating the need for more power-efficient systems. AMD CEO Lisa Su is confident that the company is on the right path to increase data center power efficiency by 100x in the next three years.
Everywhere you look, there is a new AI service to improve your personal and work life. Google Search now incorporates its Gemini AI to summarize search results, but this comes at the cost of a tenfold energy increase (with poor results) compared to non-AI search. The global popularity of generative AI has accelerated the need for rapid expansion of data centers and their power demands.
Goldman Sachs estimates that data center power requirements will grow by 160% by 2030. This is a huge problem for regions like the US and Europe, where the average age of regional power grids is 50 years and 40 years, respectively. In 2022, data centers consumed 3% of US power, and projections suggest this will increase to 8% by 2030. "There's no way to get there without a breakthrough," says OpenAI co-founder Sam Altman.
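As a rough sanity check on those projections, the implied yearly growth rates can be computed. The compounding assumption and the time spans (2023–2030 for the 160% figure) are assumptions for illustration, not stated in the source:

```python
# Back-of-the-envelope arithmetic on the projections quoted above.
# Assumption (not from the source): growth compounds annually, and the
# 160% demand growth plays out over the 7 years from 2023 to 2030.

def implied_annual_rate(total_multiple: float, years: int) -> float:
    """Annual growth rate implied by an overall multiple over `years` years."""
    return total_multiple ** (1 / years) - 1

# Growing "by 160%" means reaching 2.6x the starting level.
power_rate = implied_annual_rate(2.6, 7)

# US share of power: 3% in 2022 rising to 8% by 2030 (8 years).
share_rate = implied_annual_rate(8 / 3, 8)

print(f"Data center power demand: ~{power_rate:.1%} per year")
print(f"US grid share of demand:  ~{share_rate:.1%} per year")
```

Both work out to roughly 13–15% compounded annual growth, which makes the strain on 40- and 50-year-old grids easy to see.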
AMD CEO Lisa Su discussed past successes and future plans to improve compute node efficiency at the ITF World 2024 conference. Back in 2014, AMD committed to making its mobile CPUs 25x more efficient by 2020 (25x20). The company exceeded that goal, achieving a 31.7x improvement.
In 2021, AMD saw the writing on the wall regarding the exponential growth of AI workloads and the power required to operate these complex systems. To help mitigate the power demand, AMD set a 30×25 goal, a 30x improvement in compute node efficiency by 2025, by focusing on several key areas.
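Put in annual terms, those goals imply a steady year-over-year pace. This is a sketch only; the baseline years (2014 for 25x20, 2020 for 30×25) are assumptions inferred from the goal names:

```python
# Illustrative arithmetic on AMD's efficiency goals.
# Assumed spans: 25x20 runs 2014-2020 (6 years), 30x25 runs 2020-2025 (5 years).

def yearly_multiplier(total_gain: float, years: int) -> float:
    """Constant year-over-year multiplier needed to reach `total_gain` in `years`."""
    return total_gain ** (1 / years)

# 25x20: targeted 25x; 31.7x was delivered.
print(f"25x20 target pace:   {yearly_multiplier(25, 6):.2f}x per year")
print(f"25x20 achieved pace: {yearly_multiplier(31.7, 6):.2f}x per year")

# 30x25: 30x over the assumed 2020 baseline.
print(f"30x25 required pace: {yearly_multiplier(30, 5):.2f}x per year")
```

The comparison makes the ambition clear: 30×25 demands roughly doubling efficiency every year, versus the ~1.7x annual pace of the 25x20 era.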
It starts with improvements in process node and packaging, which are the fundamental building blocks of CPU/GPU manufacturing. By utilizing 3nm Gate-All-Around (GAA) transistors, an evolution of FinFET 3D transistors, power efficiency and performance-per-watt will be improved. Additionally, the continual refinement of packaging techniques (e.g., chiplets, 3D stacking) gives AMD the flexibility to swap various components into a single package.
The next area of focus is AI-optimized accelerated hardware architectures, namely Neural Processing Units (NPUs), which have been in mobile SoCs like the Snapdragon 8 Gen series for years now. Earlier this year, AMD released the Ryzen 8700G, the first desktop processor with a built-in AI engine. This dedicated hardware allows the CPU to offload compute-intensive AI tasks to the NPU, improving efficiency and lowering power consumption.
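The offload idea can be sketched abstractly. Everything here is a hypothetical stand-in for illustration; none of these names correspond to AMD's actual NPU driver or API:

```python
# Toy sketch of CPU-to-NPU task routing, illustrative only.
# NPU_AVAILABLE, run_on_npu, and run_on_cpu are invented stand-ins,
# not a real platform API.

NPU_AVAILABLE = True  # in practice this would be detected from the hardware

def run_on_npu(task: str) -> str:
    # Pretend the NPU executes the task at a much lower energy cost.
    return f"NPU:{task}"

def run_on_cpu(task: str) -> str:
    return f"CPU:{task}"

def dispatch(task: str, ai_intensive: bool) -> str:
    """Route AI-heavy work to the NPU when one is present, freeing the CPU."""
    if ai_intensive and NPU_AVAILABLE:
        return run_on_npu(task)
    return run_on_cpu(task)

print(dispatch("matmul", ai_intensive=True))    # NPU:matmul
print(dispatch("file_io", ai_intensive=False))  # CPU:file_io
```

The efficiency win comes from matching workloads to the silicon built for them, rather than burning general-purpose CPU cycles on inference.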

The final pillars of the 30×25 goal are system-level tuning and software/hardware co-design. System-level tuning is another branch of the advanced packaging initiative, focused on reducing the energy needed to physically move data within these computer clusters. Software/hardware co-design aims to improve AI algorithms to work more effectively with next-generation NPUs.
Lisa Su is confident that AMD is on track to meet the 30×25 goal, but she also sees a pathway to a 100x improvement by 2027. AMD and other industry leaders are all contributing to address the power needs of our AI-enhanced lives in this new era of computing.



