Blackwell, A100 & V100 NVLink
Trooper.AI delivers enterprise-grade GPU servers in the EU, from NVIDIA RTX Pro 6000 Blackwell 96 GB and A100 40 GB to V100 32 GB NVLink clusters. Train, fine-tune and deploy large AI models on real bare-metal GPUs, hosted securely in ISO/IEC 27001 certified European data centers.
Deploy Enterprise GPU Server Now
Pay €100, get €150 (+50%), up to €1500!
Use your bonus credits on GPU servers across Blackwell, A100 and NVLink V100 clusters.
Promotion valid this month.
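As a minimal sketch of the bonus arithmetic (assuming the €1500 figure caps the credited total, which is our reading of the offer and not an official statement):

```python
def promo_credits(payment_eur: float) -> float:
    # +50% bonus on the amount paid; the bonus is assumed to be capped so
    # that a EUR 1000 payment yields the advertised EUR 1500 maximum,
    # i.e. at most EUR 500 of bonus credit (assumption, not confirmed).
    bonus = min(payment_eur * 0.5, 500.0)
    return payment_eur + bonus

# Examples from the promotion: pay 100 -> 150, pay 1000 -> 1500 (cap reached).
print(promo_credits(100))   # 150.0
print(promo_credits(1000))  # 1500.0
```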
RTX Pro 6000 Blackwell: the new reference standard for enterprise AI.
A100: your proven enterprise workhorse.
V100: multi-GPU NVLink for cost-efficient scaling.
All Trooper.AI enterprise GPUs deliver:
From Germany for the EU:
See how easy it is to deploy an Enterprise GPU Server in the EU!
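For illustration only (assuming a standard Linux image with CUDA drivers and PyTorch installed, which this page does not specify), a quick sanity check you could run over SSH once the server is up:

```python
# Post-deployment sanity check: list visible GPUs and run a tiny matmul.
# Assumes PyTorch with CUDA support is present; adapt to your own stack.
import torch

assert torch.cuda.is_available(), "No CUDA device visible - check drivers"

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU {i}: {props.name}, {vram_gb:.0f} GB VRAM")

# Small matmul on GPU 0 to confirm the card actually does work.
x = torch.randn(4096, 4096, device="cuda")
print((x @ x).sum().item())
```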
| GPU | VRAM | TAIFlops | flux schnell | Qwen3 8B |
|---|---|---|---|---|
| RTX 5090 | 32 GB | 207 | 13.9 s/image | 668 tokens/s |
| A100 | 40 GB | 152 | 18.2 s/image | 550 tokens/s |
| RTX 4080 Super Pro | 32 GB | 139 | 19.9 s/image | 330 tokens/s |
| RTX 4090 | 24 GB | 133 | 12.6 s/image | 424 tokens/s |
| RTX 3090 | 24 GB | 100 | 19.3 s/image | 365 tokens/s |
| GPU | VRAM | TAIFlops | flux schnell | Llama 3.1 8B FP8 |
|---|---|---|---|---|
| RTX Pro 6000 Blackwell | 96 GB | 377 | 1.37 s/image | 1999 tokens/s |
| RTX 4090 Pro | 48 GB | 238 | 2.63 s/image | 1221 tokens/s |
| RTX 5090 | 32 GB | 207 | 13.9 s/image | 522 tokens/s |
| A100 | 40 GB | 152 | 18.2 s/image | n/a |
| GPU | VRAM | TAIFlops | SDXL | Qwen3 4B |
|---|---|---|---|---|
| RTX 3090 | 24 GB | 100 | 5.40 s/image | 583 tokens/s |
| V100 | 32 GB | 84 | 5.81 s/image | 401 tokens/s |
| V100 | 16 GB | 62 | 6.09 s/image | 230 tokens/s |
| RTX 4070 Ti Super | 16 GB | 56 | 4.43 s/image | 242 tokens/s |
| RTX A4000 | 16 GB | 51 | 7.93 s/image | 163 tokens/s |
Blackwell dominates ultra-large models, A100 remains ultra-efficient, and NVLink V100 delivers scalable cost-performance.
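The tokens/s columns above are generation-throughput figures. As a rough illustration of how such a number can be measured (not the benchmark methodology behind the table, which is not documented here; model ID and prompt are examples), a single-request timing with Hugging Face transformers:

```python
# Rough tokens/s measurement for a causal LM (requires a recent
# transformers release for Qwen3; this is an illustrative sketch only).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # example model matching the table above
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

inputs = tok("Explain NVLink in one paragraph.", return_tensors="pt").to("cuda")

start = time.time()
out = model.generate(**inputs, max_new_tokens=512)
elapsed = time.time() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.0f} tokens/s")
```

Batched serving stacks reach much higher aggregate throughput than this single-request loop, which is one reason published tokens/s figures vary widely between setups.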
| Blib Name | GPU | CPU RAM | Cores | NVMe | Price | +50% Promo Price** |
|---|---|---|---|---|---|---|
| hyperionai.s1 | 1× RTX Pro 6000 Blackwell 96 GB | 64 GB | 6 | 180 GB | €1.00/h or €573/m | |
| hyperionai.m1 | 1× RTX Pro 6000 Blackwell 96 GB | 100 GB | 12 | 600 GB | €1.05/h or €600/m | |
| hyperionai.l1 | 1× RTX Pro 6000 Blackwell 96 GB | 128 GB | 20 | 900 GB | €1.09/h or €627/m | |
| hyperionai.l2 | 2× RTX Pro 6000 Blackwell 96 GB | 196 GB | 40 | 900 GB | €2.04/h or €1173/m | |
| hyperionai.v2 | 2× RTX Pro 6000 Blackwell 96 GB | 208 GB | 40 | 1600 GB | €2.08/h or €1197/m | |
| hyperionai.l3 | 3× RTX Pro 6000 Blackwell 96 GB | 256 GB | 60 | 900 GB | €2.99/h or €1720/m | |
| hyperionai.xl4 | 4× RTX Pro 6000 Blackwell 96 GB | 386 GB | 80 | 1600 GB | €3.99/h or €2293/m | |
| Blib Name | GPU | CPU RAM | Cores | NVMe | Price | +50% Promo Price** |
|---|---|---|---|---|---|---|
| infinityai.m1 | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.71/h or €410/m | |
| infinityai.m1.de | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.76/h or €437/m | |
| infinityai.l1.de | 1× A100 40 GB | 92 GB | 20 | 1200 GB | €0.81/h or €467/m | |
| infinityai.m2 | 2× A100 40 GB | 140 GB | 10 | 600 GB | €1.30/h or €747/m | |
| infinityai.m2.de | 2× A100 40 GB | 140 GB | 10 | 1200 GB | €1.40/h or €803/m | |
| infinityai.m4 | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.63/h or €1510/m | |
| infinityai.m4.de | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.77/h or €1593/m | |
| infinityai.l6.de | 6× A100 40 GB | 512 GB | 120 | 3900 GB | €4.36/h or €2510/m | |
| infinityai.l8 | 8× A100 40 GB | 768 GB | 90 | 3900 GB | €5.29/h or €3043/m | |
| infinityai.l8.de | 8× A100 40 GB | 768 GB | 160 | 5350 GB | €5.83/h or €3353/m | |
| Blib Name | GPU | CPU RAM | Cores | NVMe | Price | +50% Promo Price** |
|---|---|---|---|---|---|---|
| novatesla.s1 | 1× Tesla V100 32 GB | 42 GB | 4 | 180 GB | €0.25/h or €143/m | |
| novatesla.m1 | 1× Tesla V100 32 GB | 58 GB | 8 | 600 GB | €0.29/h or €163/m | |
| novatesla.l1 | 1× Tesla V100 32 GB | 64 GB | 12 | 900 GB | €0.31/h or €180/m | |
| novatesla.s2 | 2× Tesla V100 32 GB | 84 GB | 6 | 180 GB | €0.47/h or €267/m | |
| novatesla.m2 | 2× Tesla V100 32 GB | 116 GB | 10 | 600 GB | €0.51/h or €290/m | |
| novatesla.l2 | 2× Tesla V100 32 GB | 128 GB | 24 | 1200 GB | €0.57/h or €330/m | |
| novatesla.m3 | 3× Tesla V100 32 GB | 174 GB | 24 | 900 GB | €0.77/h or €440/m | |
| novatesla.m4 | 4× Tesla V100 32 GB | 232 GB | 48 | 900 GB | €1.05/h or €600/m | |
| novatesla.xl4 | 4× Tesla V100 32 GB | 232 GB | 64 | 1600 GB | €1.12/h or €643/m | |
| novatesla.m4.v | 4× Tesla V100 32 GB | 160 GB | 40 | 3600 GB | €1.13/h or €650/m | |
| novatesla.l8.nvlink | 8× Tesla V100 32 GB | 396 GB | 88 | 3600 GB | €2.11/h or €1217/m | |
* Weekly and monthly subscriptions offer reduced rates.
** Promotional pricing includes a 50% extra credits bonus.
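To decide between hourly and monthly billing, a simple break-even calculation using the hyperionai.s1 figures from the table above (flat monthly price and a 720-hour month assumed):

```python
# Break-even point between hourly and monthly billing for hyperionai.s1
# (EUR 1.00/h vs EUR 573/m, taken from the pricing table above).
hourly_rate = 1.00    # EUR per hour
monthly_rate = 573.0  # EUR per month

break_even_hours = monthly_rate / hourly_rate
print(f"Monthly billing pays off above {break_even_hours:.0f} hours "
      f"({break_even_hours / 720:.0%} of a 720-hour month).")
# -> Monthly billing pays off above 573 hours (80% of a 720-hour month).
```

The same calculation applies to every plan in the tables; only the two rates change.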
We maintain Blackwell, A100 and V100 NVLink clusters with obsessive precision, so your AI never waits for hardware.
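If you want to confirm that GPU-to-GPU peer access (NVLink on configurations such as novatesla.l8.nvlink) is active on a multi-GPU instance, a hedged sketch using PyTorch's CUDA utilities, assuming PyTorch is installed on the box:

```python
# Check GPU-to-GPU peer access (NVLink or PCIe P2P) on a multi-GPU server.
# `nvidia-smi topo -m` in the shell shows the interconnect topology in
# more detail; this only reports whether peer access is possible.
import torch

n = torch.cuda.device_count()
for a in range(n):
    for b in range(n):
        if a != b:
            ok = torch.cuda.can_device_access_peer(a, b)
            print(f"GPU {a} -> GPU {b}: peer access {'yes' if ok else 'no'}")
```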
Order Console | Get €5 Credit | sales@trooper.ai | +49 6126 928999-1
Build real enterprise AI on European GPU infrastructure: Blackwell, A100 & NVLink, powered by Trooper.AI.