Current GPU Prices in the USA (Updated Daily)
These are actual prices I've verified in the last 24 hours. The "lowest"
prices are spot instances or marketplace rates—great if your workload
can handle interruptions. The typical range is what you'll pay for
reliable on-demand instances.
| GPU Model | Lowest Price | Provider | Typical On-Demand Range |
|---|---|---|---|
| NVIDIA H100 | $2.29/hr | Vast.ai | $2.99 - $4.50/hr |
| NVIDIA A100 | $1.25/hr | Vast.ai | $1.99 - $3.20/hr |
| RTX 4090 | $0.52/hr | Vast.ai | $0.70 - $1.20/hr |
| RTX 3090 | $0.35/hr | Vast.ai | $0.50 - $0.90/hr |
| A10G | $0.75/hr | AWS Spot | $1.00 - $1.50/hr |
💡 Pro tip: Vast.ai has the lowest prices but variable reliability.
For production workloads, I recommend Lambda Labs or CoreWeave despite the
higher cost.
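To turn the hourly rates above into project-level numbers, here's a minimal sketch of a cost estimate. The rates are copied from the table; the run length (`gpu_hours`) is a made-up example, not a benchmark:

```python
# Estimate total cost of a training run at the hourly rates above.
# gpu_hours is a hypothetical workload, not a measured figure.

RATES_PER_HOUR = {
    "H100 (Vast.ai lowest)": 2.29,
    "H100 (on-demand low end)": 2.99,
    "A100 (Vast.ai lowest)": 1.25,
    "A100 (on-demand low end)": 1.99,
}

def run_cost(rate_per_hour: float, gpu_hours: float) -> float:
    """Total cost in USD for a run consuming gpu_hours of GPU time."""
    return rate_per_hour * gpu_hours

gpu_hours = 500  # e.g. 8 GPUs x ~62 hours of training
for name, rate in RATES_PER_HOUR.items():
    print(f"{name}: ${run_cost(rate, gpu_hours):,.2f}")
```

At 500 GPU-hours, the gap between the cheapest marketplace rate and the top of the on-demand range is already several hundred dollars per GPU model, which is why the spot-vs-reliability tradeoff matters.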
Top 5 Cloud GPU Providers in the USA
I've personally tested each of these providers with real
workloads—training LLMs, running Stable Diffusion inference, and
rendering 3D scenes. Here's my honest take on which ones are worth your
money.
#1 Lambda Labs
A100 from $1.99/hr, H100 from $2.99/hr
📍 California, Texas
I've personally trained multiple models on Lambda's infrastructure. Their A100 instances are rock-solid stable, and you actually get the full GPU—no virtualization overhead. The customer support is surprisingly responsive for a smaller provider.
View Lambda Labs Prices →
#2 Vast.ai
RTX 4090 from $0.52/hr
📍 Distributed (peer-to-peer)
Vast.ai is basically the Airbnb of GPU rental. Prices can be absurdly low because you're renting from individuals with spare hardware. I've gotten RTX 3090s for $0.35/hour. The catch? Reliability varies. Some hosts are amazing; others disappear mid-training.
View Vast.ai Prices →
#3 CoreWeave
Dedicated H100 clusters
📍 New Jersey, Nevada, Illinois
CoreWeave is built differently—they own their infrastructure. When you need hundreds of GPUs for a big training run, they're the provider that actually has them available. Prices aren't the cheapest, but you're paying for guaranteed availability and no noisy neighbors.
View CoreWeave Prices →
#4 RunPod
Cold start in 2-3 seconds
📍 Georgia, Oregon, Ohio
RunPod's serverless offering is genuinely impressive for inference workloads. You pay only for actual compute time, and the cold start is faster than anything I've seen from the big clouds. I use it for a production LLM API and the cost savings are massive.
View RunPod Prices →
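"Pay only for actual compute time" has a simple break-even against an always-on instance: serverless wins whenever your utilization is below the ratio of the two hourly rates. A quick sketch (the rates here are illustrative assumptions, not quoted prices from any provider):

```python
# Break-even utilization between a serverless GPU (billed only for
# active compute) and an always-on on-demand instance.
# Rates below are illustrative, not real quotes.

def breakeven_utilization(serverless_rate: float, ondemand_rate: float) -> float:
    """Fraction of each hour you must keep the GPU busy before an
    always-on instance becomes cheaper than per-use serverless billing."""
    return ondemand_rate / serverless_rate

# Example: serverless at $1.50 per hour of active compute,
# on-demand at $0.90 per wall-clock hour.
util = breakeven_utilization(1.50, 0.90)
print(f"On-demand wins above {util:.0%} utilization")
```

For a bursty inference API that sits idle most of the day, utilization rarely gets near that threshold, which is where the serverless savings come from.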
#5 AWS
Full ecosystem integration
📍 10+ US regions
Look, AWS GPU instances are expensive. But if your company is already on AWS, the integration benefits often outweigh the cost premium. Plus, their spot instances can actually be competitive if you're willing to handle interruptions.
View AWS Prices →
US Data Center Regions Explained
Location matters. Pick the wrong region and you'll pay more for egress
bandwidth, suffer higher latency, or struggle with GPU availability.
Here's where the major providers have presence and which regions make
sense for different use cases.
- N. Virginia (us-east-1): AWS, Azure, Google Cloud, CoreWeave. Best for: overall availability.
- Oregon (us-west-2): AWS, Azure, Google Cloud, Lambda. Best for: West Coast users, renewable energy.
- California: Lambda, CoreWeave, Google Cloud. Best for: low latency to Asia-Pacific.
- Texas: Lambda, Crusoe, CoreWeave. Best for: energy costs, spot availability.
- Illinois: CoreWeave. Best for: central US coverage.
- Nevada: CoreWeave. Best for: tax benefits, cheap power.
My Region Recommendations
- For most AI training: us-east-1 (N. Virginia) or us-west-2
(Oregon). Best availability, lowest spot instance interruption rates.
- For inference serving US users: Pick the region closest
to your users. If they're nationwide, deploy to both coasts and use a
load balancer.
- For cost optimization: Texas regions often have lower
energy costs, which providers pass on as lower compute prices. Nevada
for tax advantages.
- For data sovereignty: All US regions are subject to the
CLOUD Act. If you need non-US jurisdiction, check our Europe page.
Compliance & Enterprise Considerations
🏥 HIPAA Compliance
Training medical AI? You'll need a BAA. AWS, Google Cloud, and Azure
all offer HIPAA-compliant GPU instances. Among specialized
providers, CoreWeave and Lambda Labs can sign BAAs for enterprise
customers. Expect 20-40% price premium.
🔒 SOC 2 & Security
CoreWeave, Lambda Labs, and the major clouds (AWS/GCP/Azure) all
have SOC 2 Type II. Vast.ai and other marketplace providers
generally don't—fine for research, risky for production with
sensitive data.
⚖️ Export Controls
H100 and A100 GPUs are export-controlled under US regulations. All
legitimate US providers handle compliance, but if you're doing
international inference, make sure your provider allows traffic from
your target countries.
Frequently Asked Questions
What is the cheapest cloud GPU provider in the USA?
Based on our daily price tracking, Vast.ai consistently offers the cheapest cloud GPU rentals in the USA, with RTX 4090 instances starting around $0.52/hour. However, Lambda Labs offers the best value for H100 and A100 instances with prices starting at $1.99/hour for A100 and $2.99/hour for H100, backed by enterprise-grade reliability.
Which US data center regions are best for GPU rental?
For most AI/ML workloads, we recommend us-east-1 (N. Virginia) or us-west-2 (Oregon) as they offer the best availability and lowest latency for the majority of users. California regions (us-west-1) are great if you're training on West Coast data. Texas regions often have better spot instance availability.
Do US cloud GPU providers offer HIPAA compliance?
Yes, several providers offer HIPAA-compliant GPU instances including AWS, Google Cloud, and Azure. Among specialized GPU providers, Lambda Labs and CoreWeave offer BAA (Business Associate Agreement) options for healthcare AI workloads. Expect to pay 20-40% premium for compliant instances.
Can I get spot instances for GPU rental in the USA?
Absolutely. AWS, Google Cloud, and Azure all offer GPU spot/preemptible instances with discounts of 60-90%. Vast.ai specializes in spot-like marketplace pricing. Just be aware that these instances can be interrupted with little notice—great for fault-tolerant training, risky for long inference jobs.
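The standard way to make training fault-tolerant on interruptible instances is periodic checkpointing: save state every N steps, and on restart resume from the last checkpoint instead of from scratch. A minimal sketch of the pattern (`save_state`, `load_state`, and `train_one_step` are placeholders for your framework's own checkpoint and step functions):

```python
# Minimal checkpoint/resume loop for spot-friendly training.
# All function names here are illustrative placeholders.
import os
import pickle

CKPT = "checkpoint.pkl"

def save_state(state: dict) -> None:
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)  # atomic rename: never leaves a half-written file

def load_state() -> dict:
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0}  # fresh run

def train_one_step(state: dict) -> None:
    state["step"] += 1  # stand-in for a real optimizer step

state = load_state()  # resumes automatically after an interruption
while state["step"] < 1000:
    train_one_step(state)
    if state["step"] % 100 == 0:
        save_state(state)  # at most 100 steps lost if the host vanishes
```

If the instance is reclaimed, relaunching the same script picks up from the last saved step, so the worst case is losing one checkpoint interval of work rather than the whole run. Write checkpoints to durable storage (object storage, a persistent volume), not the instance's local disk.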
Compare All GPU Prices in Real-Time
Don't just take my word for it—browse live prices from 40+ US
providers. Filter by GPU model, region, and price range to find
exactly what you need.
Browse All GPU Prices