Trooper.AI delivers NVIDIA A100 40 GB GPU servers with full power and zero overhead. Whether you're training large models, running complex AI inference, or fine-tuning LLMs, our Ampere-powered NVIDIA A100 instances are ready for production use, hosted securely in the EU.
This gives you 50% more hours on the A100 40 GB for the same budget. Use it to launch high-performance GPU jobs right away!
Promotion valid only this December.
While not the latest generation, the A100 40 GB remains a highly capable and efficient solution for a wide range of AI training and inference workloads. We specialize in getting the most out of the A100 by optimizing CPU speed, using the latest NVMe drives with up to 4,500 MB/s, and providing generous RAM capacity.
Our A100 40 GB instances reflect exactly that: we love AI, and we want to serve you with the best AI infrastructure we can offer.
🎄 X-Mas Upgrade: Get 50% Extra Credits, €100 → €150, up to €1500 🎄. Since €100 buys €150 in credits, each billed hour effectively costs only two-thirds of the list price!
| GPU | VRAM | SDXL (16 images)* | LLM 8B (tokens/s)** |
|---|---|---|---|
| A100 | 40 GB | 1:18 min | 104 |
| 4090 | 24/48 GB | 1:19 min | 87 |
| V100 | 16/32 GB | 2:36 min | 62 |
| 3090 | 24 GB | 2:24 min | 69 |
* SDXL speed benchmark conducted using Automatic1111 v1.6.0 with the following settings:
Model: sd_xl_base_1.0,
Prompt: "cute Maltese puppy on green grass" (no negative prompt),
Sampler: DPM++ 2M Karras,
No Refiner, No Hires Fix,
CFG Scale: 7, Steps: 30,
Resolution: 1024×1024,
Batch Count: 4, Batch Size: 4 (16 images total),
Random Seed, PNG Images.
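If you want to run a comparable timing on your own instance, the sketch below calls the Automatic1111 HTTP API with the settings listed above. It is a minimal illustration, not our exact benchmark harness: it assumes the WebUI was started with the `--api` flag on the default port 7860 and that the `sd_xl_base_1.0` checkpoint is installed under that name.

```python
import time

import requests

# Minimal sketch against a local Automatic1111 instance started with --api.
# Host, port, and checkpoint name are assumptions; adjust to your setup.
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "cute Maltese puppy on green grass",
    "negative_prompt": "",
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "cfg_scale": 7,
    "width": 1024,
    "height": 1024,
    "batch_size": 4,  # 4 images per batch ...
    "n_iter": 4,      # ... times 4 batches = 16 images
    "seed": -1,       # random seed
    # Select the SDXL base checkpoint (must match the filename on the server).
    "override_settings": {"sd_model_checkpoint": "sd_xl_base_1.0"},
}

start = time.time()
images = requests.post(URL, json=payload, timeout=900).json()["images"]
print(f"{len(images)} images in {time.time() - start:.1f} s")
```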
** LLM speed benchmark conducted using a default OpenWebUI installation without any modifications.
Context length: 2048 (default).
No system prompt.
Query prompt: "Name the 5 biggest similarities between a wild tiger and a domestic cat."
Model (small): llama3.1:8b-instruct-q8_0
Best of 3 runs, measured in response tokens per second; higher values indicate better performance.
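For a rough reproduction, the snippet below queries an Ollama backend directly (OpenWebUI's default backend) and reads the token statistics Ollama reports. The host, port, and the assumption that the model is already pulled are illustrative; since OpenWebUI adds a thin layer on top, your numbers may differ slightly from ours.

```python
import requests

# Minimal sketch, assuming an Ollama backend is reachable on localhost:11434
# and llama3.1:8b-instruct-q8_0 has already been pulled.
URL = "http://localhost:11434/api/generate"
PROMPT = "Name the 5 biggest similarities between a wild tiger and a domestic cat."

best = 0.0
for _ in range(3):  # best of 3 runs
    r = requests.post(URL, json={
        "model": "llama3.1:8b-instruct-q8_0",
        "prompt": PROMPT,              # no system prompt
        "stream": False,
        "options": {"num_ctx": 2048},  # default context length
    }, timeout=600).json()
    # Ollama reports eval_count (response tokens) and eval_duration (nanoseconds).
    best = max(best, r["eval_count"] / r["eval_duration"] * 1e9)

print(f"Best of 3: {best:.1f} response tokens/s")
```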
Explore our A100 server options and pricing. Servers hosted under a ".de" name are located in Germany within ISO/IEC 27001 certified data centers.
| Blib Name | GPU | CPU RAM | Cores | NVMe | Price* | +50% Promo Price** |
|---|---|---|---|---|---|---|
| infinityai.m1 | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.76/h or €437/m | |
| infinityai.m1.de | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.81/h or €463/m | |
| infinityai.l1.de | 1× A100 40 GB | 92 GB | 20 | 1200 GB | €0.86/h or €493/m | |
| infinityai.m2 | 2× A100 40 GB | 140 GB | 10 | 1200 GB | €1.41/h or €813/m | |
| infinityai.m2.de | 2× A100 40 GB | 140 GB | 10 | 1200 GB | €1.49/h or €860/m | |
| infinityai.m4 | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.80/h or €1610/m | |
| infinityai.m4.de | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.95/h or €1697/m | |
| infinityai.l6.de | 6× A100 40 GB | 512 GB | 120 | 3900 GB | €4.63/h or €2667/m | |
| infinityai.l8.de | 8× A100 40 GB | 768 GB | 160 | 5350 GB | €6.19/h or €3563/m | |
* Weekly and monthly subscriptions offer reduced rates.
** Promotional pricing includes a 50% extra credits bonus, valid for this month.
We meticulously maintain our servers, exemplified by builds like the A100 40 GB "InfinityAI." This allows you to focus on your AI projects, whether in development or production, without hardware concerns. As a boutique hyperscaler in Europe, we provide dedicated attention to detail and performance.
🖥️ Order Console
🎟️ Get €5 Credit
📧 sales@trooper.ai
📞 +49 6126 928999-1
Build real AI on the GPU that already powers services like OpenAI's ChatGPT and Meta's Llama training: the NVIDIA A100.
We breathe new life into powerful components and pass the savings on to you.