Trooper.AI provides dedicated GPU servers featuring NVIDIA A100 40 GB, Tesla V100 32 GB NVLINK, and our new HyperionAI (RTX Pro 6000 Blackwell 96 GB) configurations, delivering uncompromised performance for enterprise AI workloads. Our solutions are designed to empower your IT teams with the resources needed for large-scale model training, complex AI inference, and efficient LLM fine-tuning. Built for production deployments and hosted securely within the EU, Trooper.AI ensures reliable and scalable GPU infrastructure for your organization. Enterprise contracts run in 12-month periods.
Our A100 40 GB instances are the core of our lineup, and we love using AI to serve you the best AI experience possible.

Our V100 32 GB NVLINK instances are mostly comparable in setup and feature set; the main differences lie in performance and price, as the benchmarks and pricing below show.
Our HyperionAI servers, powered by the RTX Pro 6000 Blackwell 96 GB, are built for next-generation AI workloads. HyperionAI closes the gap between classic data center GPUs and modern high-VRAM workstation-class accelerators, optimized for real-world enterprise AI production.
The A100 40 GB delivers a strong balance across compute, image generation, and LLM inference. While the RTX Pro 6000 Blackwell clearly leads in raw performance, the A100 remains competitive and well-rounded for mixed AI workloads.
| GPU | VRAM | TAIFlops | FLUX Schnell | Qwen3 8B |
|---|---|---|---|---|
| RTX Pro 6000 Blackwell | 96 GB | 377 | 1.37 s/image | 1531 tokens/s |
| RTX 5090 | 32 GB | 207 | 13.9 s/image | 668 tokens/s |
| A100 | 40 GB | 152 | 18.2 s/image | 550 tokens/s |
| RTX Pro 4500 Blackwell | 32 GB | 146 | 16.7 s/image | 378 tokens/s |
| RTX 4080 Super Pro | 32 GB | 139 | 19.9 s/image | 330 tokens/s |
| RTX 4090 | 24 GB | 133 | 12.6 s/image | 424 tokens/s |
| V100 | 32 GB | 84 | 75.1 s/image | 251 tokens/s |
- **RTX Pro 6000 Blackwell**: leads in every metric, with the highest TAIFlops (377), the fastest image generation (1.37 s/image), and the strongest token throughput (1531 tokens/s).
- **A100 40 GB**: balanced performance with 152 TAIFlops, 18.2 s/image, and 550 tokens/s. Clearly ahead of the RTX 4080 Super Pro, RTX Pro 4500 Blackwell, and V100 in LLM throughput, while remaining competitive in image generation.
- **V100 32 GB NVLINK**: lowest overall performance, with the slowest image generation (75.1 s/image) and the lowest token throughput (251 tokens/s).
The A100 40 GB does not top the charts, but it remains a reliable, well-balanced GPU for mixed AI workloads, especially when combining solid inference throughput with stable image generation performance and 40 GB VRAM capacity.
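As a quick sanity check on the table above, the relative gaps can be computed directly from the published numbers. The following Python sketch is purely illustrative (the `speedup_vs` helper is ours, not part of any Trooper.AI tooling); the figures are copied from the benchmark table:

```python
# Benchmark figures from the table above.
# Tuple values: (TAIFlops, FLUX Schnell s/image, Qwen3 8B tokens/s).
BENCHMARKS = {
    "RTX Pro 6000 Blackwell": (377, 1.37, 1531),
    "RTX 5090":               (207, 13.9, 668),
    "A100":                   (152, 18.2, 550),
    "RTX Pro 4500 Blackwell": (146, 16.7, 378),
    "RTX 4080 Super Pro":     (139, 19.9, 330),
    "RTX 4090":               (133, 12.6, 424),
    "V100":                   (84, 75.1, 251),
}

def speedup_vs(baseline: str, gpu: str) -> dict:
    """Relative performance of `gpu` against `baseline` for each metric."""
    b_flops, b_img, b_tok = BENCHMARKS[baseline]
    g_flops, g_img, g_tok = BENCHMARKS[gpu]
    return {
        "taiflops": g_flops / b_flops,
        "image_gen": b_img / g_img,   # lower s/image is better, so invert
        "llm_tokens": g_tok / b_tok,
    }

ratios = speedup_vs("A100", "RTX Pro 6000 Blackwell")
print({k: round(v, 2) for k, v in ratios.items()})
```

For example, against the A100 baseline the RTX Pro 6000 Blackwell comes out roughly 2.5× faster in TAIFlops and about 13× faster in image generation, which matches the narrative above.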
Explore our NVIDIA A100 server options and pricing. Plans with a ".de" suffix are hosted in Germany within ISO/IEC 27001 certified data centers.
| Blib Name | GPU | CPU RAM | Cores | NVMe | Monthly Price |
|---|---|---|---|---|---|
| infinityai.s1 | 1× A100 40 GB | 64 GB | 8 | 180 GB | €480/m |
| infinityai.m1 | 1× A100 40 GB | 80 GB | 10 | 600 GB | €505/m |
| infinityai.s1.de | 1× A100 40 GB | 64 GB | 12 | 180 GB | €520/m |
| infinityai.l1 | 1× A100 40 GB | 128 GB | 16 | 900 GB | €545/m |
| infinityai.m1.de | 1× A100 40 GB | 80 GB | 16 | 600 GB | €555/m |
| infinityai.l1.de | 1× A100 40 GB | 128 GB | 26 | 900 GB | €600/m |
| infinityai.m2 | 2× A100 40 GB | 160 GB | 22 | 600 GB | €960/m |
| infinityai.l2 | 2× A100 40 GB | 256 GB | 34 | 900 GB | €1015/m |
| infinityai.m2.de | 2× A100 40 GB | 160 GB | 32 | 600 GB | €1035/m |
| infinityai.xl2 | 2× A100 40 GB | 260 GB | 36 | 1200 GB | €1040/m |
| infinityai.l2.de | 2× A100 40 GB | 256 GB | 52 | 900 GB | €1120/m |
| infinityai.xl2.de | 2× A100 40 GB | 260 GB | 54 | 1200 GB | €1140/m |
| infinityai.l4.de | 4× A100 40 GB | 528 GB | 104 | 900 GB | €2165/m |
| infinityai.xl4.de | 4× A100 40 GB | 532 GB | 106 | 1200 GB | €2180/m |
| infinityai.l5 | 5× A100 40 GB | 656 GB | 86 | 900 GB | €2445/m |
| infinityai.xl5 | 5× A100 40 GB | 660 GB | 88 | 1200 GB | €2465/m |
| infinityai.l5.de | 5× A100 40 GB | 656 GB | 130 | 900 GB | €2680/m |
| infinityai.xl5.de | 5× A100 40 GB | 660 GB | 132 | 1200 GB | €2700/m |
| infinityai.l6 | 6× A100 40 GB | 800 GB | 102 | 900 GB | €2920/m |
| infinityai.xl6 | 6× A100 40 GB | 804 GB | 104 | 1200 GB | €2940/m |
| infinityai.l6.de | 6× A100 40 GB | 800 GB | 156 | 900 GB | €3205/m |
| infinityai.xl6.de | 6× A100 40 GB | 804 GB | 158 | 1200 GB | €3225/m |
| infinityai.l8 | 8× A100 40 GB | 960 GB | 124 | 900 GB | €3815/m |
| infinityai.xl8 | 8× A100 40 GB | 960 GB | 124 | 1200 GB | €3830/m |
| infinityai.l8.de | 8× A100 40 GB | 960 GB | 188 | 900 GB | €4170/m |
| infinityai.xl8.de | 8× A100 40 GB | 960 GB | 188 | 1200 GB | €4185/m |
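One way to read the table is per-GPU cost: the multi-GPU plans price each additional A100 at roughly the single-GPU rate. A minimal sketch, with plan data copied from the table above (the `price_per_gpu` helper is illustrative only):

```python
# Per-GPU monthly cost for a few representative A100 plans.
# Tuple values: (GPU count, monthly price in EUR), from the pricing table.
A100_PLANS = {
    "infinityai.s1": (1, 480),
    "infinityai.m2": (2, 960),
    "infinityai.l8": (8, 3815),
}

def price_per_gpu(plan: str) -> float:
    """Monthly price divided by GPU count -- a rough comparison metric."""
    gpus, eur_month = A100_PLANS[plan]
    return eur_month / gpus

for name in A100_PLANS:
    print(f"{name}: EUR {price_per_gpu(name):.2f} per GPU per month")
```

The single- and dual-GPU plans both work out to €480 per GPU, while the eight-GPU plan dips slightly below that, so scaling up does not carry a per-GPU premium.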
In addition, explore our NVIDIA V100 server options and pricing. These servers are hosted in a Tier 3 data center in the Netherlands.
| Blib Name | GPU | CPU RAM | Cores | NVMe | Monthly Price |
|---|---|---|---|---|---|
| novatesla.s1 | 1× Tesla V100 32 GB | 42 GB | 4 | 180 GB | €220/m |
| novatesla.m1 | 1× Tesla V100 32 GB | 58 GB | 8 | 600 GB | €250/m |
| novatesla.l1 | 1× Tesla V100 32 GB | 64 GB | 12 | 900 GB | €275/m |
| novatesla.s2 | 2× Tesla V100 32 GB | 84 GB | 6 | 180 GB | €420/m |
| novatesla.m2 | 2× Tesla V100 32 GB | 116 GB | 10 | 600 GB | €455/m |
| novatesla.l2 | 2× Tesla V100 32 GB | 128 GB | 24 | 1200 GB | €510/m |
| novatesla.m3 | 3× Tesla V100 32 GB | 174 GB | 24 | 900 GB | €690/m |
| novatesla.m4 | 4× Tesla V100 32 GB | 232 GB | 48 | 900 GB | €935/m |
| novatesla.xl4 | 4× Tesla V100 32 GB | 232 GB | 64 | 1600 GB | €1000/m |
| novatesla.l8.nvlink | 8× Tesla V100 32 GB | 396 GB | 88 | 3600 GB | €1895/m |
Discover our new HyperionAI servers powered by RTX Pro 6000 Blackwell 96 GB.
| Blib Name | GPU | CPU RAM | Cores | NVMe | Monthly Price |
|---|---|---|---|---|---|
| hyperionai.s1 | 1× RTX Pro 6000 Blackwell 96 GB | 60 GB | 16 | 180 GB | €895/m |
| hyperionai.m1 | 1× RTX Pro 6000 Blackwell 96 GB | 80 GB | 22 | 600 GB | €930/m |
| hyperionai.l1 | 1× RTX Pro 6000 Blackwell 96 GB | 120 GB | 34 | 900 GB | €980/m |
| hyperionai.m2 | 2× RTX Pro 6000 Blackwell 96 GB | 160 GB | 44 | 600 GB | €1765/m |
| hyperionai.l2 | 2× RTX Pro 6000 Blackwell 96 GB | 240 GB | 68 | 900 GB | €1850/m |
| hyperionai.xl2 | 2× RTX Pro 6000 Blackwell 96 GB | 244 GB | 70 | 1200 GB | €1870/m |
| hyperionai.m3 | 3× RTX Pro 6000 Blackwell 96 GB | 240 GB | 66 | 600 GB | €2605/m |
| hyperionai.l3 | 3× RTX Pro 6000 Blackwell 96 GB | 368 GB | 102 | 900 GB | €2730/m |
| hyperionai.xl3 | 3× RTX Pro 6000 Blackwell 96 GB | 372 GB | 104 | 1200 GB | €2745/m |
| hyperionai.l4 | 4× RTX Pro 6000 Blackwell 96 GB | 439 GB | 124 | 900 GB | €3565/m |
| hyperionai.xl4 | 4× RTX Pro 6000 Blackwell 96 GB | 439 GB | 124 | 1200 GB | €3575/m |
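To compare the three lineups on price-performance, one rough back-of-envelope metric is monthly price per benchmark TAIFlop for the entry single-GPU plan of each series. The sketch below uses prices and TAIFlops figures copied from the tables above; the `eur_per_taiflop` metric itself is our own illustrative comparison, not an official Trooper.AI figure:

```python
# Entry single-GPU plan of each lineup: benchmark TAIFlops and EUR/month,
# both taken from the tables above.
ENTRY_PLANS = {
    "novatesla.s1":  {"gpu": "V100", "taiflops": 84, "eur_month": 220},
    "infinityai.s1": {"gpu": "A100", "taiflops": 152, "eur_month": 480},
    "hyperionai.s1": {"gpu": "RTX Pro 6000 Blackwell", "taiflops": 377, "eur_month": 895},
}

def eur_per_taiflop(plan: str) -> float:
    """Monthly price divided by benchmark TAIFlops -- lower is better."""
    p = ENTRY_PLANS[plan]
    return p["eur_month"] / p["taiflops"]

# Cheapest compute per euro first.
for name in sorted(ENTRY_PLANS, key=eur_per_taiflop):
    p = ENTRY_PLANS[name]
    print(f"{name} ({p['gpu']}): EUR {eur_per_taiflop(name):.2f} per TAIFlop/month")
```

On this metric the HyperionAI entry plan delivers the cheapest compute per euro, ahead of the V100 and A100 entry plans, despite its higher absolute price.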
* Minimum duration for enterprise service is 12 months.
Schedule a personalized tour of our GPU servers and discuss your specific enterprise requirements with our team. We offer tailored solutions to optimize your AI workflows and provide dedicated support every step of the way. Contact us today at sales@trooper.ai to learn more about our offerings and receive a custom quote. We're happy to answer any questions and help you determine the best configuration for your needs. And we keep it technical: we don't do sales or pitch meetings.