Dolphin Research
2026.05.06 03:29

AMD (Trans): CPU/GPU mix converging to 1:1; server CPU TAM raised to $120 bn

Below is Dolphin Research's transcript-style summary of AMD's FY26 Q1 earnings call. For our earnings take, please see the article 'CPU momentum builds, Helios on the horizon — is AMD finally turning the corner?'

I. AMD Print — Key Highlights

1. Shareholder returns: repurchased 1.1 mn shares, returning $221 mn to shareholders. At quarter-end, $9.2 bn remained under the buyback authorization.

2. Guidance: Q2 FY26 revenue of ~$11.2 bn (±$0.3 bn), implying +46% YoY and +9% QoQ at the midpoint; non-GAAP GPM ~56%; non-GAAP OpEx ~$3.3 bn; non-GAAP other income ~$60 mn; non-GAAP effective tax rate of 13%; diluted share count ~1.66 bn. By segment: server CPU revenue is guided to grow 70%+ YoY in Q2 and remain strong into H2; client PC units should soften in H2 on higher memory/component costs, though the client biz. is still expected to grow YoY and outpace the market; gaming revenue is expected to decline 20%+ in H2 vs. H1.
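The guidance arithmetic above can be checked directly. A minimal sketch — the ~$7.7 bn year-ago base below is derived from the guided +46% YoY, not stated on the call:

```python
# Sanity check on the Q2 FY26 guidance midpoint.
q1_rev = 10.3   # Q1 FY26 revenue, $bn (reported)
q2_mid = 11.2   # Q2 FY26 guidance midpoint, $bn

qoq = q2_mid / q1_rev - 1
print(f"Implied QoQ growth: {qoq:.1%}")   # ~+8.7%, rounds to the guided +9%

# Back out the year-ago quarter implied by +46% YoY (derived, not reported)
implied_base = q2_mid / 1.46
print(f"Implied Q2 FY25 revenue: ${implied_base:.1f} bn")
```

The midpoint is internally consistent: $11.2 bn over $10.3 bn is ~+8.7% QoQ, in line with the guided +9%.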

3. KPIs: Q1 revenue of $10.3 bn (+38% YoY, roughly flat QoQ), above the high end of guidance. GPM 55% (+170 bps YoY); OP $2.5 bn (OPM 25%), with OP growing faster than revenue; diluted EPS $1.37 (+43% YoY). Record FCF of $2.6 bn (25% of revenue), more than tripling YoY.

4. Capital & liquidity: CFO of $3.0 bn in Q1. Cash, cash equivalents, and short-term investments were $12.3 bn at quarter-end; inventory was ~$8.0 bn, roughly flat QoQ.

5. Long-term targets: maintain long-term GPM range of 55%–58%; strategic EPS target to surpass $20. Data center AI revenue (MI-series AI GPUs) expected to reach 'tens of billions' in 2027, exceeding the prior 80%+ CAGR long-term target.

II. Earnings Call Details

2.1 Management Commentary

1) Data Center (overall)

a. Data center revenue hit a record $5.8 bn, +57% YoY and +7% QoQ, driven by strong demand for EPYC CPUs and Instinct GPUs. OP was $1.6 bn with OPM of 28% (vs. 25% a year ago).

b. Data center is now the primary engine for revenue and profit growth, marking an inflection in the biz. mix. AI at scale is expanding accelerator demand and materially lifting demand for high-performance CPUs that orchestrate and run AI workloads.

2) Server CPUs (EPYC)

a. Fourth consecutive record quarter for server CPU revenue, up 50%+ YoY, with both cloud and enterprise up 50%+ YoY. 5th-gen EPYC Turin is ramping while 4th-gen EPYC remains strong across many workloads; Turin accounted for 50%+ of server revenue in Q1.

b. Cloud: every major hyperscaler is expanding EPYC deployments across AI workloads (general compute, data processing, accelerator head nodes, and emerging agentic apps). EPYC-powered cloud instances rose ~50% YoY to 1,600+, now covering virtually all enterprise-grade workloads.

c. Enterprise: Q1 set records for both revenue and sell-through, adding customers in financials, healthcare, industrials, and digital infra. AMD continues to break into the mid-market and SMBs.

d. 6th-gen EPYC Venice (Zen 6 on 2nm) is on track to launch this year, spanning throughput-, perf/W-, and perf/$-optimized variants, plus Verano, the first EPYC optimized for AI infrastructure. It aims to lead x86 peers on perf/socket and perf/W, and offers >2x throughput per socket vs. leading ARM AI alternatives; customer validation and ramp platforms exceed prior EPYC generations at the same stage.

e. Server CPU TAM sharply raised: from Analyst Day's 18% CAGR and ~$60 bn by 2030 to 35%+ CAGR and $120 bn+ by 2030, driven by the structural pull from agentic AI. Q2 server CPU revenue is guided to grow 70%+ YoY and maintain strong growth through H2 and into 2027.

3) Data Center AI (Instinct)

a. Data center AI revenue grew double digits YoY but ticked down QoQ on a shift in the China biz. (strong Q4, softer Q1). Customers are moving from pilots to large-scale production, with a notable edge in inference given leading memory capacity and bandwidth.

b. Expanded strategic partnership with Meta to deploy up to 6 GW of AMD Instinct GPUs across multiple generations. This includes a custom accelerator based on the MI450 architecture, shipping in H2, using the Helios rack-level architecture (Instinct GPUs paired with EPYC Venice CPUs).

c. Collaboration with OpenAI is progressing well; AMD has become a core partner to the world's largest AI infra builders, with multi-year deployment visibility and deep co-design capability.

d. ROCm continues to advance: in the latest MLPerf, MI355X led across several categories; Day-0 support for open-source models including Google Gemma 4, Qwen, and Kimi. Software investment is increasing, and agent-based coding workflows are accelerating R&D velocity.

e. MI450 has sampled to top customers; Helios enters mass production in H2—initial shipments in Q3 and a meaningful ramp in Q4, continuing to climb in Q1 2027. Pipeline forecasts exceed initial 2027 plans, with multiple new customers discussing GW-scale deployments; confidence is increasing in reaching 'tens of billions' of data center AI revenue by 2027 and surpassing the 80%+ CAGR target. More details will be shared at the Advancing AI event in July.

4) Client & Gaming

a. Segment revenue was $3.6 bn (+23% YoY, -9% QoQ on seasonality). OP was $575 mn with OPM of 16% (vs. 17% a year ago).

b. Client revenue was $2.9 bn (+26% YoY, -7% QoQ); the latest Ryzen lineup (incl. X3D) gained share. AMD launched Ryzen AI 400 and Ryzen AI Pro 400 desktop CPUs, and the mobile Ryzen 400 ramp and commercial penetration accelerated. Commercial Ryzen Pro PC sell-through rose 50%+ YoY as Dell, HP, and Lenovo expanded their AMD portfolios, adding large customers across tech, financials, healthcare, and aerospace.

c. Gaming revenue was $720 mn (+11% YoY, -15% QoQ, in line with expectations). Radeon 9000-series GPU demand lifted graphics revenue YoY; semi-custom declined YoY in line with the console cycle; next-gen console partnerships remain close; FSR software upgrades improved performance.

d. H2 PC shipments to dip on higher memory/component costs, but client biz. still expected to grow YoY and outpace the market; gaming revenue to fall 20%+ in H2 vs. H1.

5) Embedded

a. Revenue was $873 mn (+6% YoY, -8% QoQ on seasonality). OP was $338 mn with OPM of 39% (vs. 40% a year ago); growth was driven by test & measurement and simulation, aerospace & defense, communications, and x86 embedded products.

b. Design wins grew by double digits YoY with multi-billion-dollar wins added. Semi-custom continues to expand in data center and comms, broadening from FPGA-centric to a wider mix including adaptive embedded, x86, and semi-custom, significantly enlarging TAM.

6) Supply chain & capacity

a. AMD is working closely with foundry and OSAT partners, meaningfully expanding wafer and back-end capacity to support incremental demand. Execution remains a focus.

b. Deep collaborations with memory vendors have secured sufficient supply to meet and exceed targets, while acknowledging tightness and some cost headwinds. AMD is sharing part of the cost pressure with customers, but pricing still prioritizes unit growth.

2.2 Q&A

Q: What is the logic behind doubling the server CPU TAM forecast from ~$60 bn to $120 bn so soon after Analyst Day? Can AMD reach a 50%+ share?

A: CPUs have always been foundational to data center infra, and last year we started to see early AI-driven CPU demand, prompting a lift to an 18% CAGR and ~$60 bn TAM. In recent months, deeper customer engagements show agentic AI and inference are pulling CPU demand much faster and stronger than expected, with virtually all major cloud and enterprise customers materially increasing CPU needs. As inference scales, more agents mean more CPUs for orchestration, data processing, and parallel execution.

Our bottom-up, long-range customer forecasts and workload analysis point to a 35%+ CAGR and a TAM of $120 bn+ by 2030. Specifically, CPU demand spans three buckets: general compute, accelerator head nodes, and CPUs dedicated to large-scale agentic AI tasks. This requires a full CPU matrix covering throughput-, power-, cost-, and AI infra-optimized variants, which is exactly what the Venice family targets; Turin is ramping fast with clear share gains, Venice is well positioned, and next-gen co-development is underway, giving us high confidence in 50%+ share.
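As a rough cross-check of the old and new forecasts — this sketch assumes both CAGRs compound from the same 2025 base, which the call does not state explicitly:

```python
# Back out the 2025 market size implied by each 2030 TAM forecast.
years = 2030 - 2025

old_base = 60 / 1.18 ** years    # prior view: 18% CAGR to ~$60 bn by 2030
new_base = 120 / 1.35 ** years   # raised view: 35% CAGR to $120 bn+ by 2030

print(f"Implied 2025 base, old forecast: ${old_base:.0f} bn")   # ~$26 bn
print(f"Implied 2025 base, new forecast: ${new_base:.0f} bn")   # ~$27 bn
```

Both forecasts back out to a similar mid-$20s bn starting market, suggesting the TAM raise reflects a steeper growth assumption rather than a restated base.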

Q: MI450/Helios forecasts are above plan — is that due to upsized OpenAI/Meta contracts, new customers, schedule pull-ins, or MI500?

A: We are very excited by MI450 and Helios progress, as customer interest has stepped up meaningfully. The two large programs with OpenAI and Meta are on track, with deep co-design work ongoing. Based on current visibility, aggregate customer forecasts already exceed our initial 2027 plan.

Encouragingly, new customers are preparing for large-scale MI450 deployments across both training and inference (with the largest leaning toward inference). Taken together with the scale of these new customers, we see a path to exceeding the prior 80% CAGR target; at the same time, MI355 engagements are healthy, while MI450/Helios is for large-scale rollouts and multiple customers are already in deep discussions with us on MI500. Overall, both customer and workload coverage are broadening.

Q: Has the quarterly cadence in data center (server CPUs up 50%+ YoY, data center AI flat-to-down QoQ) shifted vs. the prior H2/Q4-weighted outlook?

A: In Q1, server CPUs grew 50%+ YoY, while data center AI dipped slightly QoQ due to China. Into Q2, we expect double-digit QoQ growth for the data center, with both servers and data center AI up double digits QoQ. Server CPUs should grow 70%+ YoY in Q2 and remain strong into H2; for data center AI, Helios ramps in H2 with initial Q3 shipments and a significant Q4 ramp, continuing into Q1 2027.

Q: Will 2027 growth be constrained by supply chain or power availability given tight supply and data center build limits?

A: We have strong visibility into 2027 deployments — down to which GPUs will be installed in which data centers. Supply is tight and there are some build bottlenecks, but we remain confident in meeting and exceeding growth targets, working closely with customers and partners to secure power availability, with more capacity coming online in 2027. It is a complex ramp, but execution is tracking well.

Q: With improved x86 competitor supply and rising ARM custom CPUs, how will AMD differentiate and where will share trend?

A: We engage closely with all major hyperscalers to understand their CPU needs and identified the AI attributes of CPUs early, aligning our roadmap with customers. The market needs a broad CPU portfolio across general compute, head nodes, and agentic AI — it is not a 'one CPU fits all' world. AMD holds a favorable position competitively with deep customer collaborations and a widening roadmap.

Do not view this as a binary x86 vs. ARM choice — most large hyperscalers will run both, and even in-house CPU customers will continue to buy at scale from the merchant market. Different workloads require different CPUs, and current demand is exceptionally robust.

Q: Will Helios/Instinct at scale dilute GPM structurally? How should margins trend?

A: Q1 GPM was strong, and Q2 is guided to 56%. In H2, we see several tailwinds: server CPUs up 70%+ YoY in H2 should lift margins; client mix moving upmarket (gaming down, premium client up) also helps; and Embedded maintains high margins.

On the other hand, MI450 starts in Q3 and ramps meaningfully in Q4 with margins below the corporate average, creating some Q4 dilution. Overall, FY26 GPM is well supported, trending toward the 55%–58% long-term range from Analyst Day, with good first-year progress and tailwinds extending into next year.

Q: Is server CPU growth more unit-driven or ASP-driven?

A: In Q1, server growth was more unit-driven, with ASP rising alongside volume, but units led — not only high-end Turin, but also generalist Zen 4 products shipped at scale. For Q2 and H2, pricing will reflect some inflation as the industry is in a tight-supply cycle and AMD shares some cost pressure with customers.

That said, our focus is on long-term unit growth, so most growth remains unit-driven, with ASP increases largely covering supply chain cost inflation. Jean added: higher core counts each generation improve mix and lift ASPs.

Q: How do you view markets for new low-latency CPU architectures?

A: As AI scales and TAM expands, we expect more differentiated compute architectures in pursuit of better TCO. GPUs will still represent the vast majority of the data center accelerator TAM, but inference, low latency, decode/prefill, and other stages will see targeted optimizations, which is a natural evolution.

AMD has a full-stack compute portfolio spanning CPUs, GPUs, interconnect, and semi-custom, enabling coverage of large segments including low latency. The share of these sub-segments will depend on technology cadence, and AMD is prepared for multiple variants.

Q: Is agentic CPU demand incremental, or does it displace some GPUs? Does the CPU TAM raise imply a broader AI TAM lift?

A: It is largely incremental. Foundational models still need accelerators; as agents execute tasks, they generate more CPU work. A key change is the CPU-to-GPU ratio at deployment — from historical 1:4/1:8 host-node ratios toward 1:1 or higher CPU shares, and in agent-dense scenarios, CPUs may outnumber GPUs.

Exact ratios are hard to predict, but customers are now co-designing CPU and accelerator footprints, and overall this represents TAM expansion. We see CPUs and accelerators growing in tandem.
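The ratio shift can be made concrete with a hypothetical deployment — the 100k-GPU cluster size below is ours for illustration; only the 1:8 and 1:1 ratios come from the answer above:

```python
# Host-CPU unit lift implied by moving from a 1:8 to a 1:1 CPU:GPU ratio.
gpus = 100_000   # hypothetical cluster size, not from the call

cpus_old = gpus // 8   # historical 1:8 host-node ratio
cpus_new = gpus // 1   # agentic-era 1:1 ratio

print(f"Host CPUs at 1:8: {cpus_old:,}")            # 12,500
print(f"Host CPUs at 1:1: {cpus_new:,}")            # 100,000
print(f"Unit lift: {cpus_new / cpus_old:.0f}x")     # 8x more CPUs per cluster
```

Under these assumptions, the same accelerator footprint pulls 8x the CPU units — and in the agent-dense scenarios management describes, CPUs could exceed GPUs entirely.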

Q: How do memory price increases affect AMD and customers? Is your memory supply sufficient?

A: On supply, AMD has deep partnerships with memory vendors and has secured sufficient supply to meet and exceed targets, though overall supply remains tight. On costs, industry-wide pressure is evident and we are working with all parties to manage it.

In data centers, given AI demand, customers prioritize supply assurance; in consumer (PC, gaming), rising memory prices could weigh on H2 demand, which is reflected in our outlook. We remain coordinated with memory vendors and customers to ensure every CPU/GPU ships with matched memory so compute is not stranded.

Q: Is there a structural ASP gap between AI-optimized CPUs and general-purpose CPUs?

A: CPU TAM splits into three: general-purpose CPUs growing at low-double digits, a smaller but growing head-node segment, and agentic AI as the largest incremental driver. ASPs depend on the workload, so there is no single comparable figure.

As core counts keep rising, ASPs should trend up — that is our direction. Importantly, TAM expansion is primarily from agentic AI CPU demand.

Q: How do you view ARM competition in server CPUs?

A: The fact that all customers are talking CPUs underscores their criticality to AI infra — a positive signal. ARM is an excellent architecture with a place in the data center, often as 'point products'; AMD's advantage is a comprehensive CPU portfolio across workloads.

Venice introduces the AI-optimized Verano, alongside throughput- and cost-optimized variants, offering strong competitiveness. We continue to innovate in architecture and advanced packaging; ultimately, the CPU TAM is bigger than most expect, leaving room for multiple product forms to grow.

Q: How will client trend in FY26, H2 seasonality, and the impact on client ASPs as some tiles shift from client to data center?

A: Client Q1 beat our expectations, with notebooks (especially premium) performing well, commercial PCs (notably AI PCs) progressing strongly, while desktops — being more consumer-exposed — were softer amid memory/parts inflation. For the full year, H2 will feel memory cost headwinds on demand, but we will focus on commercial and premium, and still expect client to grow YoY.

On ASPs, there is a structural push-pull between notebooks and desktops. Overall, we remain confident client will outgrow the market.

Q: With compute undersupplied, can Instinct margins converge toward the corporate average over the next few years?

A: At this stage, Instinct's core goal is to drive revenue growth. We take a strategic approach with customers, and margin levels differ by account.

As revenue scales, we will improve margins through both ASP and cost, with scale efficiency particularly important on the cost side. That is the path to convergence over time.

Q: Excluding China noise, did data center AI grow QoQ in Q1? And will GPUs and servers both grow double digits QoQ in Q2?

A: Data center AI was down modestly QoQ mainly due to China, and China was not a material mix for AMD in Q1. In Q2, both data center AI and server CPUs are expected to deliver double-digit QoQ growth.

Q: Why does OpEx keep running above guidance? How should we think about OpEx vs. revenue?

A: We are leaning into a very large opportunity and have been investing for several quarters. AI investments are directly driving revenue — +38% YoY in Q1 and +46% guided for Q2.

Some OpEx flexes with revenue outperformance; we are also deploying resources to support intensive co-development with data center AI customers. That explains the temporary variance.

Q: With a competitor restarting 7nm, will older CPUs linger longer and pressure margins?

A: We are not seeing older generations linger. Turin is very strong, already 50%+ of server revenue in Q1; Genoa (generalist) demand remains but is declining in mix, and customers prefer newer products for better performance, cost, and power.

Beyond cloud, enterprise uptake of new products is also robust. The supply chain is tight, but this is an AMD strength — deep engagements with wafer and OSAT partners scale supply as customers expand demand; we are already planning CPU needs for 2027/2028, which helps us stage capacity. Post-Venice ramp, Turin and generalists will keep shipping, but the preference for new nodes persists.

Q: SG&A is growing faster than R&D — is this a startup-phase investment, and how will R&D vs. SG&A trend for the year?

A: For the full year, R&D growth will clearly outpace SG&A. In recent quarters, we have been building go-to-market capacity and stepping up sales and marketing, so SG&A has run higher temporarily.

Looking ahead, R&D growth will exceed SG&A. Lisa added: sales and marketing are focused on enterprise servers, commercial PCs, and SMB — areas historically underinvested by AMD, but now offering higher ROI given a stronger server CPU and commercial PC lineup.


Risk Disclosure & Statement: Dolphin Research Disclaimer and General Disclosure