When you need an H100 GPU for training a large language model (LLM) or fine-tuning Llama 3, you don't want to hop on a call with an enterprise sales rep. You want to spin up a node, SSH in, and start training.

And more importantly, you want to know what it's actually going to cost.

We tracked real-time pricing across the major specialized cloud providers to answer a simple question: Who has the cheapest H100 rental right now?

The H100 Pricing Landscape

NVIDIA's H100 is the gold standard for AI workloads, but prices vary wildly. We found a spread of over 40% between the cheapest and most expensive providers for the exact same compute power.

Here is the current snapshot for on-demand pricing (per GPU/hour):

| Provider | Price (On-Demand) | Availability | Notes |
| --- | --- | --- | --- |
| Vast.ai | ~$1.80 - $2.20 | Variable | Community cloud; reliability varies. |
| RunPod | $2.69 | High | Great DX, "Secure Cloud" options. |
| Lambda Labs | $2.49 | Low | Often sold out; best value for pure stability. |
| CoreWeave | Contact Sales | Low | Enterprise focused. |
| Paperspace | $3.09 | Medium | Easy to use, but pricier. |
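To see how that hourly spread compounds over a real job, here is a quick sketch that estimates the total cost of a hypothetical 72-hour, 8-GPU fine-tuning run at each on-demand rate from the table. The run length and GPU count are illustrative assumptions, Vast.ai uses the low end of its observed range, and CoreWeave is omitted since it has no public price:

```python
# Rough cost comparison for a hypothetical fine-tuning run.
# Assumptions (not from any provider's docs): 72-hour run on 8 GPUs.
HOURS = 72
NUM_GPUS = 8

# On-demand $/GPU/hour taken from the table above.
on_demand_rates = {
    "Vast.ai": 1.80,       # low end of the observed range
    "RunPod": 2.69,
    "Lambda Labs": 2.49,
    "Paperspace": 3.09,
}

def total_cost(rate_per_gpu_hour: float, hours: int = HOURS, gpus: int = NUM_GPUS) -> float:
    """Total job cost in dollars for a fixed-length multi-GPU run."""
    return rate_per_gpu_hour * hours * gpus

# Cheapest to priciest for this particular run.
for provider, rate in sorted(on_demand_rates.items(), key=lambda kv: kv[1]):
    print(f"{provider:12s} ${total_cost(rate):>9,.2f}")
```

At these rates the same run spans roughly $1,037 (Vast.ai, low end) to $1,780 (Paperspace), which is the 40%+ spread from above expressed in dollars rather than cents per hour.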

The "Hidden" Cost of Cheap GPUs

If you look at the table, Vast.ai seems like the winner. And if you are cost-constrained and tolerant of interruptions, it is. But there's a catch.

Vast.ai aggregates consumer and spare enterprise hardware. "Cheapest" often means you are renting a machine in a Tier 3 datacenter (or someone's basement) with consumer-grade internet bandwidth. For multi-node training, this latency kills performance, negating the cost savings.

RunPod and Lambda offer the sweet spot: Tier 1 datacenters with guaranteed uptime and fast interconnects (InfiniBand or high-bandwidth Ethernet) at a price that doesn't make your CFO cry.

Spot Instances: The Real Savings

If you can handle interruptions (i.e., you have solid checkpointing code), spot instances are where the real value lies.

We've seen H100 spot instances pop up on RunPod for as low as $1.99/hr. That is significantly cheaper than the $3.50+ you might pay on AWS or Azure (if you can even get quota there).
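Solid checkpointing is what makes spot pricing viable: when the node is reclaimed, you resume from the last saved step instead of restarting the run. Here's a minimal, framework-agnostic sketch; the state dict, step counter, and loss are placeholders for your real model and optimizer state:

```python
import os
import pickle
import tempfile
from typing import Optional

def save_checkpoint(state: dict, path: str) -> None:
    """Write atomically: dump to a temp file, then rename, so a spot
    interruption mid-write never leaves a corrupt checkpoint behind."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_checkpoint(path: str) -> Optional[dict]:
    """Return the last saved state, or None on a fresh start."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

def train(total_steps: int, ckpt_path: str, save_every: int = 100) -> dict:
    """Resume from the checkpoint if one exists, then keep training."""
    state = load_checkpoint(ckpt_path) or {"step": 0, "loss": float("inf")}
    for step in range(state["step"], total_steps):
        state["step"] = step + 1              # stand-in for a real training step
        state["loss"] = 1.0 / state["step"]   # placeholder metric
        if state["step"] % save_every == 0:
            save_checkpoint(state, ckpt_path)
    save_checkpoint(state, ckpt_path)
    return state
```

With PyTorch you would swap `pickle` for `torch.save`/`torch.load` and put `model.state_dict()` and `optimizer.state_dict()` in the dict; the atomic-rename pattern carries over unchanged.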

Recommendation

Our Picks

  • For Experimentation & Debugging: Go with RunPod Secure Cloud. It's instant, reliable, and the Docker experience is seamless.
  • For Long Training Runs: Wait for Lambda Labs availability or commit to a reserved instance contract if you need >8 GPUs.
  • For Maximum Thriftiness: Vast.ai, but monitor your connectivity closely.
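That last bullet is worth automating. A cheap TCP round-trip probe, run periodically against a host you care about (your data bucket, your other nodes), will flag the kind of flaky consumer links you sometimes get on community clouds. A minimal sketch, where the sample count and any alert threshold you layer on top are illustrative assumptions:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Measure one TCP connect round-trip to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

def probe(host: str, port: int, samples: int = 5) -> dict:
    """Take a few samples and summarize them for logging/alerting."""
    rtts = [tcp_rtt_ms(host, port) for _ in range(samples)]
    return {"median_ms": statistics.median(rtts), "max_ms": max(rtts)}
```

Run it from cron every few minutes and alert when the median creeps up; a sustained jump is a good early signal to checkpoint and migrate before the node degrades further.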

Conclusion

The "cheapest" H100 isn't always the one with the lowest hourly price tag. It's the one that lets you finish your training run without crashing, stalling, or losing data.

We update our main tracking table every hour. Check the homepage for the latest availability.

FAQ