Select your GPU type

Customize your cluster for optimal performance and scalability

GPU            VRAM     Socket
H100           80 GB    PCIe, SXM5
A100           80 GB    PCIe, SXM4
H200           96 GB    SXM5
H200           141 GB   SXM5
A100           40 GB    PCIe, SXM4
RTX 4090       24 GB    PCIe
RTX 6000 Ada   48 GB    PCIe
A6000          48 GB    PCIe
RTX 3090       24 GB    PCIe
RTX 3090 Ti    24 GB    PCIe
RTX 5000 Ada   32 GB    PCIe
V100           32 GB    SXM2
V100           16 GB    PCIe, SXM2
A10            24 GB    PCIe
A40            48 GB    PCIe
L40            48 GB    PCIe
L40S           48 GB    PCIe
A30            24 GB    PCIe
A5000          24 GB    PCIe
L4             24 GB    PCIe
RTX 4000       8 GB     PCIe
RTX 5000       16 GB    PCIe
RTX 6000       24 GB    PCIe
RTX 8000       48 GB    PCIe
RTX 4000 Ada   20 GB    PCIe
A4500          20 GB    PCIe
RTX 4080       16 GB    PCIe
RTX 4080 Ti    16 GB    PCIe
A4000          16 GB    PCIe
RTX 3080 Ti    12 GB    PCIe
RTX 4070 Ti    12 GB    PCIe
RTX 3080       10 GB    PCIe
RTX 3070       8 GB     PCIe
A2000          6 GB     PCIe
P100           16 GB    PCIe
CPU Node       0 GB     PCIe
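When choosing a GPU, a useful rule of thumb is that model weights alone need roughly parameter count times bytes per parameter of VRAM, before activations, optimizer state, or KV cache. A minimal sketch of that arithmetic (the model sizes below are illustrative assumptions, not platform data):

```python
def weight_vram_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM floor for model weights alone, in GiB.

    bytes_per_param: 2 for fp16/bf16, 1 for int8, 4 for fp32.
    Ignores activations, optimizer state, and KV cache, which add
    significant overhead on top of this floor.
    """
    return num_params * bytes_per_param / 1024**3

# Illustrative model sizes (assumptions, not platform listings).
for name, params in [("8B model", 8e9), ("70B model", 70e9)]:
    print(f"{name}: ~{weight_vram_gib(params):.0f} GiB in fp16")
```

By this floor, an 8B model in fp16 (about 15 GiB) fits on a single 24 GB card, while a 70B model (about 130 GiB) needs either a 141 GB H200 or a multi-GPU cluster.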

Cluster base image

Select a pre-configured cluster setup tailored to your needs. These images require no extra configuration and are ready to integrate with your codebase immediately.

UBUNTU 22, CUDA 12

Base image running Ubuntu 22 and CUDA 12. Ideal for developers who prefer to customize their environment. Fastest spin-up times.

Image: base image
CUDA 12.1, PyTorch 2.2

Docker image with PyTorch 2.2.2 and CUDA 12.1, ready for PyTorch model development.

Image: pytorch/pytorch:2.2.2-cuda12.1-cudnn8-runtime
CUDA 12.4, PyTorch 2.4

Docker image with PyTorch 2.4.0 and CUDA 12.4.1, ready for PyTorch model development.

Image: pytorch/pytorch:2.4.0-cuda12.4.1-cudnn8-runtime
Stable Diffusion Web UI

Docker image running the Stable Diffusion Web UI to immediately start generating stunning generative AI images.

Image: primeintellect/stable-diffusion
Flux ComfyUI (coming soon)

Docker image running Flux ComfyUI to immediately start generating state-of-the-art AI images.

Image: primeintellect/flux
Axolotl

Docker image running Axolotl, a leading library for fine-tuning a wide range of AI models.

Image: winglian/axolotl
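Axolotl runs are driven by a single YAML config file. A minimal sketch of what such a config can look like (the model id, dataset path, and hyperparameters here are illustrative assumptions, not defaults shipped with the image):

```yaml
# Illustrative Axolotl config sketch; swap in your own model and data.
base_model: NousResearch/Meta-Llama-3.1-8B   # assumption: any HF model id
load_in_4bit: true
adapter: qlora          # parameter-efficient fine-tuning

datasets:
  - path: ./data/train.jsonl   # assumption: local instruction data
    type: alpaca

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs/my-run
```

Depending on the Axolotl version, a run is then typically launched with something like `accelerate launch -m axolotl.cli.train config.yaml`.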
Bittensor

Docker image running the Bittensor CLI to immediately start mining or validating on the Bittensor network.

Image: opentensorfdn/bittensor
Prime Intellect - OpenDiLoCo (coming soon)

Join Prime Intellect's distributed low-communication (DiLoCo) training runs with this Docker image.

Image: primeintellect/open_diloco
vLLM Inference of Llama-3.1-8B-Instruct

Deploy your personal API instance of Llama 3.1 8B Instruct via the vLLM inference library.

Image: primeintellect/vllm-llama-3-1-8b-instruct
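vLLM serves an OpenAI-compatible HTTP API, so once the instance is up you can query it with a standard chat-completions request. A hedged sketch using only the Python standard library (the host, port, and model name are assumptions about your deployment; check `GET /v1/models` on your instance for the exact model id):

```python
import json
import urllib.request

def build_chat_request(base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a vLLM server."""
    payload = {
        # Model name is an assumption; your instance may register another id.
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "Say hello in one sentence.")
# To actually send it (requires a running instance):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request works against any OpenAI-compatible endpoint, so client code written this way is portable across vLLM deployments.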
Create Custom Template

Create your own custom template with your specific requirements and configurations.

Summary

Review your GPU selection and configuration details.

Compute types
Show Spot Instances