If you’ve tried to buy a GPU, SSD, or RAM this year, you’ve felt the pain: prices are up significantly from 2024. And unlike the crypto-mining surge of 2021-2022, this time there’s no sign of a sudden crash.
I’ve been tracking hardware markets for years, and what we’re seeing in 2025 is different. This isn’t speculation-driven volatility—it’s a fundamental restructuring of the entire semiconductor supply chain around one thing: Artificial Intelligence.
Let me break down what’s happening, why, and what you should do about it.
The Bottom Line: Hardware prices have increased 30-70% across categories since early 2024. Prices are expected to remain elevated through mid-2026, with gradual normalization in late 2026 to 2027. If you need hardware now, buy now—waiting will likely cost you more.
The Current State: What’s Happening to Prices (November 2025)
The Numbers (November 2025)
| Component | Q4 2024 Price | Q4 2025 Price | Increase |
|---|---|---|---|
| NVIDIA RTX 5090 | $1,999 (launch) | $2,499-2,999 | +25-50% |
| NVIDIA RTX 5080 | $999 (launch) | $1,199-1,399 | +20-40% |
| AMD RX 9070 XT | $649 (launch) | $749-849 | +15-30% |
| 2TB NVMe SSD (Gen5) | $180-220 | $250-300 | +35-40% |
| 2TB NVMe SSD (Gen4) | $100-120 | $140-170 | +40-45% |
| 32GB DDR5-6400 Kit | $100-120 | $135-160 | +30-35% |
| NVIDIA B200 (192GB HBM) | $40,000+ (early access) | $55,000-70,000 | +40-75% |
| NVIDIA H100 (80GB) | $30,000-35,000 | $45,000+ | +30-50% |
| Enterprise NVMe (15.36TB) | $1,800-2,200 | $2,800-3,500 | +55-60% |
Why Is This Happening? The AI Effect Intensified
The price surge that began in 2024 has intensified through 2025. The primary driver remains the same: the insatiable demand for AI infrastructure. But several factors have made 2025 even more challenging.
1. The NVIDIA Blackwell Supercycle
The situation: NVIDIA’s Blackwell architecture (B200, B100, GB200) launched in 2025 and demand has been unprecedented. Every major tech company is scrambling to secure allocation. NVIDIA reported a $150 billion backlog in their Q3 2025 earnings.
Why it affects you: TSMC’s advanced nodes (3nm, 2nm) are essentially 100% allocated to AI chips. Consumer GPU production runs on whatever capacity remains. The RTX 50 series uses similar packaging and memory to professional cards—and data centers always pay more.
GPU Manufacturing Priority (TSMC November 2025):
1. AI Data Center (B200, B100, GB200) - $50K-200K per chip
2. AI Accelerators (Google TPU v6, Amazon Trainium 3) - Strategic contracts
3. AMD Instinct MI350/MI400 - Growing share
4. Apple Silicon (M5 series) - Long-term contract
5. Consumer GPUs (RTX 50, RX 9000) - Best effort basis
2. The HBM Memory Emergency
The situation: High Bandwidth Memory (HBM3e) is now the most constrained component in tech. Each B200 GPU requires 192GB of HBM. SK Hynix, Samsung, and Micron have all shifted massive DRAM capacity to HBM production—but it’s still not enough.
The numbers are staggering:
- SK Hynix: 40% of DRAM capacity now HBM (up from 30% in 2024)
- Samsung: 35% of DRAM capacity now HBM
- HBM prices up 70% year-over-year
- Lead times: 12-18 months for new HBM orders
The Math: A single NVIDIA B200 carries 192GB of HBM, as much memory capacity as roughly six high-end gaming PCs running 32GB of DDR5 each. Microsoft alone has reportedly ordered 500,000+ Blackwell GPUs for 2025-2026. That's the DRAM capacity of roughly 3 million gaming PCs, from just one customer.
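A quick back-of-the-envelope in Python makes the scale concrete. This compares raw capacity only (HBM and DDR5 are different products that compete for the same DRAM wafer capacity), and it assumes 192GB of HBM per B200 and 32GB of DDR5 per high-end gaming build, the same figures used above:
# Back-of-the-envelope: DRAM capacity tied up by one customer's Blackwell order.
# Assumptions: 192GB HBM per B200, 32GB DDR5 per high-end gaming PC, 500k GPUs ordered.
hbm_per_b200_gb = 192
ddr5_per_pc_gb = 32
gpus_ordered = 500_000
pcs_per_gpu = hbm_per_b200_gb / ddr5_per_pc_gb
pc_equivalent = gpus_ordered * pcs_per_gpu
print(f"One B200 holds the DRAM of ~{pcs_per_gpu:.0f} gaming PCs")
print(f"500k B200s hold the DRAM of ~{pc_equivalent / 1e6:.0f} million gaming PCs")
# Output: One B200 holds the DRAM of ~6 gaming PCs
# Output: 500k B200s hold the DRAM of ~3 million gaming PCs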
3. NAND Flash Tightening
The situation: After a brief oversupply in mid-2024, NAND prices reversed hard. AI data centers need massive storage—we’re talking petabytes per rack for training data, checkpoints, and inference caching.
2025 developments:
- Enterprise SSD demand up 80% year-over-year
- 60TB and 120TB enterprise SSDs now common—consuming huge NAND volumes
- QLC NAND production couldn’t scale fast enough
- Consumer SSDs now compete directly with enterprise for flash allocation
4. The 2025 AI Capex Explosion
Here’s what major companies spent on AI infrastructure in 2025:
| Company | 2024 AI Capex | 2025 AI Capex | Growth | Focus |
|---|---|---|---|---|
| Microsoft | $55B | $95B | +73% | Azure AI, OpenAI, Copilot |
| Google | $38B | $62B | +63% | TPU v6, Gemini 2.x infrastructure |
| Meta | $38B | $58B | +53% | Llama 4/5, AI data centers |
| Amazon | $65B | $90B | +38% | AWS AI, Trainium 3, Bedrock |
| xAI | $10B | $25B | +150% | Colossus supercomputer expansion |
| Oracle | $8B | $18B | +125% | OCI GPU clusters |
| ByteDance | $12B | $22B | +83% | AI infrastructure (domestic) |
That's roughly $370 billion from these companies alone, all chasing the same limited supply of GPUs, memory, and storage in 2025.
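Summing the 2025 column of the table gives that total:
# Total 2025 AI capex across the companies listed in the table above (in $B).
capex_2025_billions = {
    "Microsoft": 95, "Google": 62, "Meta": 58, "Amazon": 90,
    "xAI": 25, "Oracle": 18, "ByteDance": 22,
}
print(f"Total: ${sum(capex_2025_billions.values())}B")
# Output: Total: $370B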
When Will Prices Normalize? The Updated Timeline
What’s Coming That Could Help
- TSMC Arizona (2026): First advanced node fab on US soil. Will add ~20,000 wafers/month capacity—but mostly committed to AI chips initially.
- HBM Production Expansion (2026): SK Hynix and Samsung both bringing new HBM3e/HBM4 capacity online. First real relief expected H2 2026.
- Competition Scaling: AMD MI350/MI400, Intel Gaudi 3, and custom silicon (Google TPU v6, Amazon Trainium 3) are all taking share. This is the biggest hope for price normalization.
- NAND Expansion: New 200+ layer NAND fabs coming online in 2026 should ease SSD pricing.
Why 2023 Prices Aren’t Coming Back
Here’s the uncomfortable truth: the “old normal” is gone. AI isn’t a bubble—it’s a structural shift in how computing resources are allocated. Even when supply eventually catches up, the “new normal” will be 20-40% higher than 2023 levels.
What To Do (And Not Do) Right Now
Specific Recommendations by Use Case (November 2025)
For Gamers
- GPU: AMD RX 9070 XT offers the best value. RTX 5080 if you need DLSS 4/ray tracing. Avoid scalped 5090s; a 4090 is nearly as capable at 1440p/4K.
- RAM: 32GB DDR5-6400 is the sweet spot. DDR5-7200+ premiums aren’t worth it.
- Storage: Gen4 2TB SSDs are still the best value (see the quick $/TB check below). Gen5 only if your motherboard supports it and you do heavy file work.
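Using the Q4 2025 street prices from the table at the top of the post (midpoint of each quoted range), the Gen5 premium works out to roughly 75-80% per terabyte:
# Price per TB for 2TB drives, using the midpoints of the Q4 2025 ranges quoted earlier.
gen4_2tb = (140 + 170) / 2   # $155 for a 2TB Gen4 drive
gen5_2tb = (250 + 300) / 2   # $275 for a 2TB Gen5 drive
print(f"Gen4: ${gen4_2tb / 2:.1f}/TB, Gen5: ${gen5_2tb / 2:.1f}/TB")
print(f"Gen5 premium: {gen5_2tb / gen4_2tb - 1:.0%}")
# Output: Gen4: $77.5/TB, Gen5: $137.5/TB
# Output: Gen5 premium: 77%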
For Content Creators / AI Enthusiasts
- GPU: RTX 5090 if available at MSRP. Otherwise, hunt for RTX 4090 remaining stock or consider used/refurbished options.
- RAM: 64GB DDR5 minimum, 128GB if budget allows. Prices only going up.
- Storage: 4TB Gen5 NVMe for working drive. Enterprise SSDs on secondary market can be good value—verify health first.
For Enterprises / Data Centers
- GPUs: Lock in Blackwell contracts NOW if you haven’t. B200 lead times are 12-18 months. Seriously evaluate AMD MI350X—it’s competitive and available sooner.
- Cloud vs On-Prem: On-prem ROI has improved. With cloud GPU prices up 50%+, break-even comes faster, but only with high utilization (see the TCO sketch further down).
- Storage: QLC enterprise SSDs for capacity tier. Multi-tier architecture is essential at these prices.
Is It Worth Investing in Hardware Today?
For Personal Use: Yes, If You Need It
Hardware is a tool, not a speculative asset. If a better GPU enables work you couldn’t do before, the ROI is real. Waiting 18 months for a 10-15% price drop while losing productivity now is bad math.
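Here's a minimal sketch of that math. The price drop uses the optimistic end of the forecast above; the $50/month of extra value is a purely hypothetical placeholder, so plug in your own number:
# Buy now vs wait: purely illustrative, all inputs are assumptions.
gpu_price_today = 2800            # current street price (from the table above)
expected_drop = 0.15              # optimistic case: 15% cheaper after waiting
months_waiting = 18
extra_value_per_month = 50        # hypothetical value the GPU generates for you monthly
savings_from_waiting = gpu_price_today * expected_drop
value_lost_by_waiting = extra_value_per_month * months_waiting
print(f"Saved by waiting: ${savings_from_waiting:,.0f}")
print(f"Lost by waiting: ${value_lost_by_waiting:,.0f}")
# Output: Saved by waiting: $420
# Output: Lost by waiting: $900
Even at a modest $50/month of extra value, waiting comes out behind. The break-even point shifts only if the hardware would mostly sit idle.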
For Resale/Speculation: Absolutely Not
The arbitrage opportunities of 2021-2022 are gone. Supply chains have matured, manufacturers manage allocation better, and the market is more rational (if expensive). Don’t buy hardware hoping to flip it.
For Business Infrastructure: Do The TCO Math
# TCO comparison: Cloud vs On-Prem GPU (November 2025 prices)
# For AI/ML workloads
# On-Prem (NVIDIA RTX 5090 - inflated pricing)
on_prem_gpu_cost = 2800 # Current street price
on_prem_system_cost = 3500 # Rest of system
power_cost_per_month = 65 # ~600W full-system draw, 24/7, at $0.15/kWh (~$65/month)
maintenance_per_year = 250
lifespan_years = 4
total_on_prem = (on_prem_gpu_cost + on_prem_system_cost +
                 (power_cost_per_month * 12 * lifespan_years) +
                 (maintenance_per_year * lifespan_years))
print(f"On-prem 4-year TCO: ${total_on_prem:,}")
# Output: On-prem 4-year TCO: $10,420
# Cloud (AWS p5.48xlarge - H100 - November 2025 pricing)
cloud_hourly = 98.32 # Current price (up from $65 in 2024)
hours_per_month = 160 # 8hr/day, 20 days
cloud_monthly = cloud_hourly * hours_per_month
cloud_4_year = cloud_monthly * 12 * 4
print(f"Cloud 4-year TCO: ${cloud_4_year:,.0f}")
# Output: Cloud 4-year TCO: $755,098
# Break-even (at 8hr/day utilization)
monthly_savings = cloud_monthly - power_cost_per_month
break_even_months = (on_prem_gpu_cost + on_prem_system_cost) / monthly_savings
print(f"On-prem break-even: {break_even_months:.1f} months")
# Output: On-prem break-even: 0.4 months
# Reality check: At even 2 hours/day utilization
low_util_cloud_monthly = cloud_hourly * 40 # 2hr/day, 20 days
low_util_break_even = (on_prem_gpu_cost + on_prem_system_cost) / low_util_cloud_monthly
print(f"Break-even at 2hr/day: {low_util_break_even:.1f} months")
# Output: Break-even at 2hr/day: 1.6 months
The verdict: On-premises hardware pays off faster than ever because cloud GPU prices have risen so sharply. Even at modest utilization, break-even is measured in weeks, not years. Keep in mind that the sketch compares a consumer RTX 5090 against a cloud H100, so treat it as directional rather than exact, and it only applies if you have the capital and the technical capability to run your own infrastructure.
The Bigger Picture
What we’re witnessing is the semiconductor industry’s most significant restructuring since the mobile revolution. AI has become the primary driver of chip development, manufacturing investment, and supply chain decisions.
The implications:
- Consumer hardware is now the “secondary market” for semiconductor manufacturing
- Prices won’t return to 2023 levels—ever
- Competition (AMD, Intel, custom silicon) is the best hope for relief
- This is the new baseline—plan your budgets accordingly
Silver linings:
- AMD is genuinely competitive now—MI350X is excellent for inference
- Intel Arc has improved dramatically—a real budget option
- TSMC/Samsung capacity expansions will help by 2027
- Used enterprise market is thriving as data centers refresh
Key Takeaways
- Prices are up 30-70% from 2024 across GPUs, SSDs, and RAM
- This isn’t temporary: Structural AI demand means higher baseline prices
- Timeline: Peak H1 2026, gradual relief H2 2026, stabilization 2027
- Buy now if you need it: Waiting likely means paying more, not less
- Explore alternatives: AMD, Intel Arc, previous-gen, used enterprise
- On-prem math has changed: Break-even vs cloud is now weeks, not years
References & Sources
- TrendForce Q4 2025 Report – DRAM/NAND market analysis
- NVIDIA Q3 FY2026 Earnings – Backlog and demand commentary
- SK Hynix Investor Day 2025 – HBM production allocation data
- Tom’s Hardware – GPU and component pricing
- SemiAnalysis – AI chip market analysis
- Company Earnings Reports – Microsoft, Google, Meta, Amazon AI capex (Q3 2025)
What’s your hardware strategy? Share your thoughts on GitHub or LinkedIn.