Go from idea to GPU-powered container in under 10 seconds. No wait times, no complex configs, no bundles - just the power you need, when you need it. Choose a single card or scale up instantly. Pay only for what you use!
1. Select the GPU tier that fits your workload — high-performance, balanced, or entry-level — with pay-as-you-go pricing and no hidden fees.
2. Give your container a memorable name and select your base image (TensorFlow, PyTorch, CUDA, etc.) so your environments stay organized.
3. Copy the one-line SSH command we generate for you and paste it into your terminal — no manual key setup or VPNs needed.
4. Your container launches fully configured. Start training, fine-tuning, or running inference on demand with transparent, pay-as-you-go billing (a quick GPU sanity check is sketched below).
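Once you are connected over SSH, it is worth confirming that the GPU is actually visible to your framework before starting a long run. A minimal sketch, assuming you chose the PyTorch base image (any CUDA-enabled PyTorch install will behave the same way):

```python
# gpu_check.py: quick sanity check after SSH-ing into the container.
# Assumes a PyTorch base image; adjust if you chose TensorFlow or plain CUDA.
import torch

def main() -> None:
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device visible; check the container's GPU allocation.")

    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        # total_memory is reported in bytes; convert to GiB for readability.
        print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.1f} GiB")

    # Tiny matmul on the first GPU to confirm compute actually works.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()
    print("Matmul OK:", tuple(y.shape))

if __name__ == "__main__":
    main()
```

If the device count or memory figure does not match the tier you selected, re-check the container configuration before launching a long job.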
| Specification | NVIDIA B200 |
|---|---|
| Architecture | Blackwell |
| Form Factor | SXM6 |
| FP4 Tensor Core | 18 petaFLOPS |
| FP8/FP6 Tensor Core | 9 petaFLOPS |
| INT8 Tensor Core | 9 petaOPS |
| FP16/BF16 Tensor Core | 4.5 petaFLOPS |
| TF32 Tensor Core | 2.2 petaFLOPS |
| FP32 | 75 teraFLOPS |
| FP64/FP64 Tensor Core | 37 teraFLOPS |
| GPU Memory | 180 GB HBM3e |
| GPU Memory Bandwidth | 7.7 TB/s |
| Multi-Instance GPU (MIG) | Up to 7 MIGs @ 23 GB |
| Decompression Engine | Yes |
| Decoders | 7 NVDEC, 7 NVJPG |
| Max Thermal Design Power (TDP) | Up to 1,000 W |
| Interconnect | 5th-generation NVLink: 1.8 TB/s; PCIe Gen5: 128 GB/s |
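As a rough guide to what fits on a single card, the 180 GB of HBM3e bounds the size of the model weights you can hold at a given precision. A back-of-the-envelope sketch; the parameter counts and the 1.2x overhead factor for activations and buffers below are illustrative assumptions, not measured values:

```python
# Rough weight-memory estimate vs. a single B200's 180 GB of HBM3e.
# Model sizes and the overhead factor are illustrative assumptions.
HBM_GB = 180

BYTES_PER_PARAM = {"fp16/bf16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_gb(n_params_billion: float, precision: str, overhead: float = 1.2) -> float:
    """Approximate GB needed for weights, padded by a fixed overhead factor
    for activations, KV cache, and framework buffers."""
    return n_params_billion * 1e9 * BYTES_PER_PARAM[precision] * overhead / 1e9

for model_b in (8, 70, 120):  # hypothetical model sizes, in billions of parameters
    for prec in BYTES_PER_PARAM:
        need = weight_gb(model_b, prec)
        verdict = "fits on one GPU" if need <= HBM_GB else "needs more than one GPU"
        print(f"{model_b:>4}B @ {prec:<9}: ~{need:6.1f} GB -> {verdict}")
```

Lower-precision Tensor Core formats (FP8, FP4) trade accuracy for roughly 2x and 4x more parameters per gigabyte, which is why the spec table lists throughput at each precision separately.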
Access high-power GPUs when you need them. Pay by the minute with no egress fees.
| GPUs | Total VRAM | vCPUs | RAM | Disk | Price |
|---|---|---|---|---|---|
| 1x NVIDIA B200 | 180 GB | 16 cores | 250 GB | 2 TB SSD | $2.49/hour |
| 2x NVIDIA B200 | 360 GB | 32 cores | 500 GB | 4 TB SSD | $4.98/hour |
| 4x NVIDIA B200 | 720 GB | 64 cores | 1,000 GB | 8 TB SSD | $9.96/hour |
| 8x NVIDIA B200 | 1,440 GB | 128 cores | 1,500 GB | 15 TB SSD | $19.92/hour |
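Since billing is by the minute, the cost of a short job is simply the hourly rate prorated to the minutes used. A small sketch using the rates from the table above; the job durations are made-up examples:

```python
# Per-minute cost estimate from the hourly rates in the pricing table above.
# The durations below are made-up examples.
HOURLY_RATE = {1: 2.49, 2: 4.98, 4: 9.96, 8: 19.92}  # USD per hour, keyed by GPU count

def cost_usd(gpus: int, minutes: float) -> float:
    """Pay-by-the-minute cost: the hourly rate prorated to the minutes actually used."""
    return HOURLY_RATE[gpus] / 60 * minutes

print(f"1x B200 for 45 min:  ${cost_usd(1, 45):.2f}")   # ~$1.87
print(f"8x B200 for 3 hours: ${cost_usd(8, 180):.2f}")  # $59.76
```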