Select your GPU type

Customize your cluster for optimal performance and scalability

GPU             VRAM     Socket
B200            180 GB   SXM6
H200            141 GB   SXM5
H100            80 GB    PCIe, SXM5
A100            80 GB    PCIe, SXM4
GH200           96 GB    SXM5
H200            96 GB    SXM5
A100            40 GB    PCIe, SXM4
RTX Pro 6000    96 GB    PCIe
RTX 5090        32 GB    PCIe
RTX 4090        24 GB    PCIe
RTX 6000 Ada    48 GB    PCIe
A6000           48 GB    PCIe
RTX 3090        24 GB    PCIe
RTX 3090 Ti     24 GB    PCIe
RTX 5000 Ada    32 GB    PCIe
V100            32 GB    SXM2
V100            16 GB    PCIe, SXM2
A10             24 GB    PCIe
A40             48 GB    PCIe
L40             48 GB    PCIe
L40S            48 GB    PCIe
A30             24 GB    PCIe
A5000           24 GB    PCIe
L4              24 GB    PCIe
RTX 4000        8 GB     PCIe
RTX 5000        16 GB    PCIe
RTX 6000        24 GB    PCIe
RTX 8000        48 GB    PCIe
RTX 4000 Ada    20 GB    PCIe
A4500           20 GB    PCIe
RTX 4080        16 GB    PCIe
RTX 4080 Ti     16 GB    PCIe
A4000           16 GB    PCIe
RTX 3080 Ti     12 GB    PCIe
RTX 4070 Ti     12 GB    PCIe
RTX 3080        10 GB    PCIe
RTX 3070        8 GB     PCIe
A2000           6 GB     PCIe
P100            16 GB    PCIe
CPU Node        0 GB     PCIe
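The VRAM figures above can be verified from inside a provisioned instance. A minimal sketch, assuming the NVIDIA driver (and therefore `nvidia-smi`) is present on the node; the parser is split out so it also works on captured output:

```python
import subprocess

def parse_gpu_info(csv_text):
    """Parse 'name, memory.total' rows as emitted by
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem = (field.strip() for field in line.split(","))
        gpus.append({"name": name, "vram": mem})
    return gpus

def query_gpus():
    """Query the GPUs visible on this node (requires nvidia-smi)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_info(out)

# Sample row in the format nvidia-smi prints for an H100 SXM5 card:
sample = "NVIDIA H100 80GB HBM3, 81559 MiB"
print(parse_gpu_info(sample))
```

Note that `nvidia-smi` reports usable memory in MiB, so the number comes out slightly below the nominal figure in the table.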

Cluster base image

Select a pre-configured cluster image tailored to your needs: no extra configuration required, ready to integrate with your codebase immediately.

UBUNTU 22, CUDA 12

Base image running Ubuntu 22 and CUDA 12. Ideal for developers who prefer to customize their environment. Fastest spin-up times.

base image

CUDA 12.1, PyTorch 2.2

Docker image with PyTorch 2.2.2 and CUDA 12.1, ready for PyTorch model development.

pytorch/pytorch:2.2.2-cuda12.1-cudnn8-runtime

CUDA 12.4, PyTorch 2.5.1

Docker image with PyTorch 2.5.1 and CUDA 12.4.1, ready for PyTorch model development.

pytorch/pytorch:2.5.1-cuda12.4.1-cudnn8-runtime

CUDA 12.4, PyTorch 2.6.0

Docker image with PyTorch 2.6.0 and CUDA 12.4.1, ready for PyTorch model development.

pytorch/pytorch:2.6.0-cuda12.4.1-cudnn8-runtime

CUDA 12.6, PyTorch 2.7.0

Docker image with PyTorch 2.7.0 and CUDA 12.6.3, ready for PyTorch model development.

pytorch/pytorch:2.7.0-cuda12.6.3-cudnn8-runtime
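The four PyTorch images above differ only in their PyTorch/CUDA pairing. If you select an image programmatically (for example when scripting cluster creation), a small lookup keeps the tags in one place. The `PYTORCH_IMAGES` mapping and `image_for` helper below are illustrative, not part of the platform; the tags are copied verbatim from the listing above:

```python
# Tags copied from the base-image listing above (illustrative helper, not a platform API).
PYTORCH_IMAGES = {
    "2.2.2": "pytorch/pytorch:2.2.2-cuda12.1-cudnn8-runtime",
    "2.5.1": "pytorch/pytorch:2.5.1-cuda12.4.1-cudnn8-runtime",
    "2.6.0": "pytorch/pytorch:2.6.0-cuda12.4.1-cudnn8-runtime",
    "2.7.0": "pytorch/pytorch:2.7.0-cuda12.6.3-cudnn8-runtime",
}

def image_for(torch_version: str) -> str:
    """Return the pre-built image tag for a PyTorch version, or fail loudly."""
    try:
        return PYTORCH_IMAGES[torch_version]
    except KeyError:
        known = ", ".join(sorted(PYTORCH_IMAGES))
        raise ValueError(
            f"No pre-built image for PyTorch {torch_version} (known: {known}); "
            "pick the Ubuntu base image and install PyTorch yourself."
        )

print(image_for("2.6.0"))  # pytorch/pytorch:2.6.0-cuda12.4.1-cudnn8-runtime
```

Failing loudly on an unknown version beats silently falling back to a mismatched CUDA toolchain.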

Stable Diffusion Web UI

Docker image running the Stable Diffusion Web UI so you can start generating AI images immediately.

primeintellect/stable-diffusion

Axolotl

Docker image running Axolotl, a library for fine-tuning a wide range of AI models.

axolotlai/axolotl-cloud

Prime RL (RFT)

Image pre-installed with prime-rl and verifiers to enable RL and RFT training at scale.

primeintellect/prime-rl-devel

Bittensor

Docker image running the Bittensor CLI so you can start mining or validating on the Bittensor network immediately.

opentensorfdn/bittensor

Create Custom Template

Create your own custom template with your specific requirements and configurations.

Summary

Review your GPU selection and configuration details.

Filters

Compute types: Show Spot Instances
Disks: Filter by your existing disks
CPU: -
RAM: -