TL;DR: Prices swing 50-100% regularly. Rent on weekday mornings. Avoid evenings. Big AI releases = price spikes. Conference weeks = GPU droughts.
Why I Started Tracking This
It started with frustration. I needed an H100 for a training job on a Tuesday afternoon. Lambda Labs wanted $0.99/hour. Fine, reasonable. Same job, same requirements, Thursday at 6 PM: $1.35/hour.
That's a 36% increase in two days. For the exact same hardware.
I thought it was a glitch. So I started logging prices. Every hour. From every major provider. For three months straight. The data tells a story—and that story can save you serious money.
The Price Swing Reality
Here's what nobody tells you: cloud GPU prices aren't static. They're dynamic, supply-and-demand driven, and way more volatile than you'd expect.
These swings aren't glitches. I watched one of the biggest unfold in real time: Anthropic announced a new model, everyone rushed to replicate it, and GPU supply evaporated.
Pattern #1: The Daily Rhythm
Prices follow a predictable daily pattern once you know what to look for. Here's what I found:
| Time (UTC) | Typical Price | Why |
|---|---|---|
| 6-10 AM | Lowest | US sleeping, Europe starting |
| 12-3 PM | Moderate | Europe at full speed |
| 6-10 PM | Highest | US West Coast online |
| 12-4 AM | Low | Minimal demand globally |
The evening US spike is brutal. I'm talking 20-40% price increases just because San Francisco woke up. If you can run your jobs during off-hours, do it.
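If you keep your own price log, finding your cheap hours is a few lines of Python. The log values below are illustrative stand-ins shaped like the table above, not my actual dataset:

```python
from collections import defaultdict
from statistics import mean

# Illustrative hourly log: (utc_hour, usd_per_hour) pairs, shaped like
# the daily rhythm in the table above -- not my actual dataset.
log = [
    (6, 0.72), (8, 0.75), (9, 0.78),     # early-morning lull
    (13, 0.85), (14, 0.88),              # Europe at full speed
    (18, 1.15), (20, 1.22), (22, 1.28),  # US evening spike
    (2, 0.79), (3, 0.76),                # overnight trough
]

def cheapest_window(log, width=4):
    """Average price per UTC hour, then return the cheapest hours first."""
    by_hour = defaultdict(list)
    for hour, price in log:
        by_hour[hour].append(price)
    averages = {h: mean(p) for h, p in by_hour.items()}
    return sorted(averages, key=averages.get)[:width]

print(cheapest_window(log))  # cheapest UTC hours, best first
```

Feed it a real log and the early-morning trough shows up on its own.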
Real Example: Monday Price Curve
Here's actual H100 pricing on Vast.ai from a random Monday in January:
- 6 AM UTC: $0.72/hour — I grabbed this
- 12 PM UTC: $0.85/hour — still reasonable
- 6 PM UTC: $1.15/hour — ouch
- 10 PM UTC: $1.28/hour — ouch harder
- 2 AM UTC (next day): $0.79/hour — back to normal
Same provider. Same GPU model. 77% price swing in 20 hours.
Pattern #2: The Weekly Cycle
Weekends are cheaper. Period. Not by a little—by a lot.
I averaged prices across 12 weeks of data:
- Monday-Thursday: Baseline (100%)
- Friday: 105% of baseline (people trying to finish weekly jobs)
- Saturday: 82% of baseline (cheapest day)
- Sunday: 85% of baseline (still cheap)
Saturday nights are the sweet spot. I've consistently found the week's lowest prices between 10 PM Saturday and 4 AM Sunday UTC. Something about weekends just kills demand.
"I saved $340 in one month just by shifting my training jobs from Friday evenings to Saturday mornings. Same compute, same results, 30% cheaper."
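The weekday-baseline arithmetic above is simple to reproduce. These per-day averages are made up, shaped like the weekly pattern described:

```python
from statistics import mean

# Hypothetical per-day average prices (USD/hr), shaped like the
# weekly pattern described above -- not real logged values.
daily_avg = {
    "Mon": 1.02, "Tue": 1.05, "Wed": 1.03, "Thu": 1.06,
    "Fri": 1.09, "Sat": 0.85, "Sun": 0.88,
}

# Baseline = the Monday-Thursday average; every day is expressed
# as a percentage of it.
baseline = mean(daily_avg[d] for d in ("Mon", "Tue", "Wed", "Thu"))
relative = {day: round(100 * price / baseline) for day, price in daily_avg.items()}
print(relative)  # each day as a percentage of the Mon-Thu baseline
```

Run this over your own data and you'll see whether your market has the same Saturday dip.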
The A100 40GB Spot Price Opportunity
While everyone is fighting for H100s, I found a massive arbitrage opportunity with the older A100 40GB cards.
Because they have "only" 40GB of VRAM (vs the 80GB standard for LLM training), demand has plummeted. But for inference or smaller models? They are gold.
- A100 80GB Spot Price: ~$1.10 - $1.40/hr
- A100 40GB Spot Price: ~$0.45 - $0.65/hr
That's less than half the price for nearly identical compute performance (Tensor Core count is the same). If your model fits in 40GB, stop overpaying for the 80GB version.
Pattern #3: The Event Shock
This is where it gets wild. External events cause massive price spikes—and you can predict them.
AI Model Releases
When Meta dropped Llama 3 in December, GPU prices went nuts. Here's what I logged:
- Day before announcement: H100 average $0.89/hour
- Day of announcement: H100 average $1.12/hour (+26%)
- Day after: H100 average $1.38/hour (+55%)
- 3 days later: Prices still 40% above normal
Everyone wanted to fine-tune Llama 3 immediately. Supply couldn't keep up. Prices reflected that desperation.
Conference Weeks
NeurIPS week was brutal. CVPR wasn't much better. During major AI conferences:
- Prices spike 30-50%
- Availability drops (good luck finding an H100)
- Spot instances become nearly unusable
Researchers submit last-minute experiments. Demo videos get rendered. Everyone needs compute at once.
Crypto Pumps
When Bitcoin broke $100k in late 2024, GPU prices followed within 48 hours. Not as dramatically as the AI model spikes, but still noticeable—15-20% increases across the board.
The correlation isn't perfect, but it's there. Crypto miners don't typically use cloud GPUs, but the speculative demand ripples through the entire GPU supply chain.
Pattern #4: The Provider Differences
Not all providers move together. Some are way more volatile than others:
| Provider | Price Volatility | Notes |
|---|---|---|
| Vast.ai | Extreme | Market-driven, wild swings |
| RunPod | High | Spot prices especially volatile |
| Lambda Labs | Low | Most stable pricing |
| CoreWeave | Moderate | Enterprise-focused, less swing |
| Salad | Chaotic | Community-driven, unpredictable |
If you need predictability, Lambda Labs is your friend. If you want to play the market and catch dips, Vast.ai offers the biggest savings potential—but you need patience and flexibility.
How to Exploit These Patterns
Enough theory. Here's how I actually save money:
1. The Early Bird Strategy
Set up your jobs to start at 6 AM UTC. Use cron or scheduled jobs. The price difference between 6 AM and 6 PM can be 30-40%.
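A sketch of how I'd wire up the early start, assuming a server clock set to UTC. The crontab line and launcher path are illustrative, not a specific provider's tooling:

```python
from datetime import datetime, timezone

CHEAP_HOURS = range(6, 10)  # the 6-10 AM UTC trough from the daily pattern

def in_cheap_window(hour_utc):
    """True when the given UTC hour falls inside the cheap window."""
    return hour_utc in CHEAP_HOURS

# Guard at the top of a launcher script, so a mis-scheduled cron run
# (say, a server not set to UTC) never starts a job at peak prices:
if in_cheap_window(datetime.now(timezone.utc).hour):
    print("inside cheap window -- safe to launch")

# Example crontab entry that runs such a launcher each weekday at 06:00 UTC:
#   0 6 * * 1-5  /usr/bin/python3 /home/me/launch_job.py
```

The guard is cheap insurance: if cron fires at the wrong local time, the job simply doesn't start.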
2. The Weekend Warrior
Batch your non-urgent jobs for weekends. Saturday night into early Sunday (10 PM-4 AM UTC) is consistently the cheapest window I found. If you can wait 24 hours, the savings are real.
3. The Event Avoidance
Keep an eye on AI news. Major model releases = price spikes for 2-3 days. Either:
- Pre-provision before the announcement (if you know it's coming)
- Wait 4-5 days after (prices normalize)
- Use providers with fixed pricing (Lambda) during spike periods
4. The Multi-Provider Dance
Don't get locked into one provider. When Vast.ai prices spike, Lambda might still be reasonable. When everyone's expensive, check smaller providers like Nebius or FluidStack.
I keep accounts at 5 providers. It took an hour to set up. It saves me hundreds per month.
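The multi-provider dance boils down to "quote everyone, take the minimum." The numbers below are hypothetical; in practice each would come from that provider's API or pricing page:

```python
# Hypothetical spot quotes (USD/hr for an H100). In practice each value
# would come from that provider's API or pricing page.
quotes = {
    "vast.ai": 1.15,
    "runpod": 1.05,
    "lambda": 0.99,
    "coreweave": 1.20,
}

# Pick whichever provider is cheapest right now.
cheapest = min(quotes, key=quotes.get)
print(cheapest, quotes[cheapest])  # the lowest quote of the moment
```

Having accounts (and payment details) set up everywhere in advance is what makes this a one-line decision instead of an hour of signup friction.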
The Data: 90 Days of Price History
Here's what I collected. This isn't cherry-picked—it's every H100 price I logged:
- Lowest recorded: $0.68/hour (Vast.ai, December 3, 2 AM UTC)
- Highest recorded: $1.89/hour (RunPod spot, during Llama 3 launch)
- Average: $1.04/hour
- Standard deviation: $0.28 (27% of the average!)
Think about that. A standard deviation that's 27% of the mean means prices are all over the place. Timing matters enormously.
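These summary stats are trivial to reproduce. The prices below are stand-in samples with a similar spread to my log, not the full 90-day dataset:

```python
from statistics import mean, pstdev

# Stand-in sample of logged H100 prices (USD/hr); the real log had
# hourly points over ~90 days, these just have a similar spread.
prices = [0.68, 0.72, 0.85, 0.95, 1.04, 1.10, 1.15, 1.28, 1.40, 1.89]

avg = mean(prices)
sd = pstdev(prices)  # population standard deviation
cv = sd / avg        # coefficient of variation: spread relative to the mean
print(f"mean=${avg:.2f}/hr  std=${sd:.2f}  cv={cv:.0%}")
```

The coefficient of variation is the number to watch: it tells you how much timing can move your bill, independent of the absolute price level.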
My Personal Savings
I applied these patterns to my actual work in January 2026:
- Baseline cost: $1,240 (renting whenever I needed)
- Optimized cost: $847 (using patterns above)
- Savings: $393 (31.7%)
That's real money. For the same compute. Just better timing.
Tools I Use
I'm not checking prices manually every hour. That would be insane. Here's my stack:
- CloudGPUTracker (obviously) — I built this to track prices automatically
- Simple cron script — checks prices and texts me when my target GPU drops below threshold
- Price alert history — I review weekly to adjust my targets
The cron script took 20 minutes to write. It's saved me hours of manual checking and probably $500+ in overpayments.
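A stripped-down sketch of what that cron script does. The fetch function and alert delivery are placeholders you'd swap for a real provider API and an SMS/email hook:

```python
THRESHOLD = 0.80  # USD/hr target price for an H100

def fetch_current_price():
    """Placeholder: swap in a real provider API call here."""
    return 0.74  # pretend the market just dipped

def check_and_alert(price, threshold=THRESHOLD):
    """Return an alert message when the price is at or below the target."""
    if price <= threshold:
        return f"H100 at ${price:.2f}/hr -- below your ${threshold:.2f} target"
    return None

alert = check_and_alert(fetch_current_price())
if alert:
    print(alert)  # the real script sends this as a text instead of printing
```

Run it from cron every 15-30 minutes and you'll catch dips without staring at dashboards.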
What About Long-Term Trends?
Three months isn't enough to call long-term trends, but here's what I suspect:
- H100 prices are slowly declining — more supply coming online, competition heating up
- A100 prices are stable — mature market, predictable demand
- Consumer GPUs (4090) are getting cheaper — oversupply from crypto decline
- Next-gen (B100) will spike everything — when those drop, expect chaos
I'll keep tracking and report back in 6 months with a bigger dataset.
Bottom Line
Cloud GPU pricing isn't random—it's predictable chaos. The daily rhythms, weekly cycles, and event shocks create patterns you can exploit.
My advice:
- Rent on weekday mornings or weekend nights
- Avoid evenings (US West Coast hours)
- Watch for AI model releases and conference weeks
- Keep multiple provider accounts
- Automate price monitoring
The savings aren't marginal—they're substantial. 30-40% off your compute bill just by being smart about timing.
Want to track prices automatically?
Set Up Price Alerts →
FAQ
When is the best time to rent cloud GPUs?
Weekday mornings (6-10 AM UTC) consistently show lower prices. I've tracked H100 prices dropping by 15-25% during these windows compared to peak hours. Weekend nights are also good. Avoid weekday evenings when US West Coast developers come online—that's when prices spike.
Do GPU prices really change that much?
Absolutely. In December 2025, I saw H100 prices swing from $0.68/hour to $1.40/hour on the same provider within 48 hours. That's a 106% difference. A100s are more stable but still vary 30-40% week to week. RTX 4090s are the wildest—I've seen $0.35/hour on a Tuesday and $0.85/hour by Friday.
What causes GPU price spikes?
Three things: big AI model releases (Llama 3 launch crashed availability for days), major conferences (NeurIPS week was brutal), and crypto price surges (when Bitcoin pumps, GPU demand follows). The worst is when all three hit at once.
Last updated: February 12, 2026. Price patterns change as the market matures. I'll update this quarterly with fresh data. If you spot patterns I missed, email me.