Trooper.AI delivers NVIDIA A100 40 GB GPU servers with full power and zero overhead. Whether you're training large models, running complex AI inference, or fine-tuning LLMs, our Ampere-powered NVIDIA A100 instances are ready for production use and hosted securely in the EU.
Prepay €200 and get €400 in GPU credits, or go big: prepay €500 and receive €1000.
This gives you more hours on the A100 40 GB at half the price. Use it to launch high-performance GPU jobs right away!
Promotion valid only while credit batches last.
Our A100 40 GB instances give you the full power of the Ampere architecture with zero overhead. We love AI, and we want to serve you with the best AI infrastructure we can offer.
🔥 "Double credits" promotion example:
Deposit €200 → get €400
Deposit €500 → get €1000
→ This effectively halves the hourly price!
| GPU | VRAM | SDXL (16 images)* | LLM 8B (response tokens/s)** |
|---|---|---|---|
| A100 | 40 GB | 1:18 min | 104 |
| 4090 | 24/48 GB | 1:19 min | 87 |
| V100 | 16/32 GB | 2:36 min | 62 |
| 3090 | 24 GB | 2:24 min | 69 |
* SDXL speed benchmark conducted using Automatic1111 v1.6.0 with the following settings:
Model: sd_xl_base_1.0,
Prompt: "cute Maltese puppy on green grass" (no negative prompt),
Sampler: DPM++ 2M Karras,
No Refiner, No Hires Fix,
CFG Scale: 7, Steps: 30,
Resolution: 1024×1024,
Batch Count: 4, Batch Size: 4,
Random Seed, PNG images.
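If you want to reproduce a comparable run on your own instance, here is a minimal sketch (our own, not the benchmark script itself) that sends the same settings to Automatic1111's built-in HTTP API. It assumes the webui was started with the --api flag and listens on localhost:7860; the exact sampler name can differ between versions.

```python
import base64
import time
import requests

# Same generation settings as the SDXL benchmark above, sent to /sdapi/v1/txt2img.
payload = {
    "prompt": "cute Maltese puppy on green grass",
    "negative_prompt": "",
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "cfg_scale": 7,
    "width": 1024,
    "height": 1024,
    "batch_size": 4,   # 4 images per batch ...
    "n_iter": 4,       # ... times 4 batches = 16 images total
    "seed": -1,        # random seed
}

start = time.time()
resp = requests.post("http://localhost:7860/sdapi/v1/txt2img", json=payload, timeout=600)
resp.raise_for_status()
images = resp.json()["images"]  # list of base64-encoded PNGs
print(f"{len(images)} images in {time.time() - start:.1f} s")

for i, img in enumerate(images):
    with open(f"sdxl_{i:02d}.png", "wb") as f:
        f.write(base64.b64decode(img))
```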
** LLM speed benchmark conducted using a default OpenWebUI installation without any modifications.
Context length: 2048 (default).
No system prompt.
Query prompt: "Name the 5 biggest similarities between a wild tiger and a domestic cat."
Model (small): llama3.1:8b-instruct-q8_0
Best of 3 runs, measured in response tokens per second; higher values indicate better performance.
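As a rough guide to how such a tokens-per-second figure can be measured, the sketch below (ours, not the benchmark harness) queries the Ollama backend that a default OpenWebUI installation typically uses. It assumes Ollama is listening on localhost:11434 and that the model has already been pulled.

```python
import requests

# Non-streaming Ollama request; the response includes eval_count (generated tokens)
# and eval_duration (generation time in nanoseconds), which give tokens per second.
payload = {
    "model": "llama3.1:8b-instruct-q8_0",
    "prompt": "Name the 5 biggest similarities between a wild tiger and a domestic cat.",
    "stream": False,
    "options": {"num_ctx": 2048},  # default context length used for the table above
}

best = 0.0
for _ in range(3):  # best of 3 runs, as in the benchmark
    r = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
    r.raise_for_status()
    data = r.json()
    best = max(best, data["eval_count"] / (data["eval_duration"] / 1e9))

print(f"best of 3: {best:.1f} response tokens/s")
```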
Explore our A100 server options and pricing below. Plans with a ".de" suffix are hosted in Germany within ISO/IEC 27001 certified data centers.
| Blib Name | GPU | CPU RAM | Cores | NVMe | Price | Promo Price* |
|---|---|---|---|---|---|---|
| infinityai.m1 | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.69/h or €395/mo | |
| infinityai.m1.de | 1× A100 40 GB | 78 GB | 8 | 900 GB | €0.74/h or €423/mo | |
| infinityai.l1.de | 1× A100 40 GB | 92 GB | 20 | 1200 GB | €0.77/h or €443/mo | |
| infinityai.m2 | 2× A100 40 GB | 140 GB | 10 | 1200 GB | €1.29/h or €745/mo | |
| infinityai.m2.de | 2× A100 40 GB | 140 GB | 10 | 1200 GB | €1.37/h or €785/mo | |
| infinityai.m4 | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.56/h or €1470/mo | |
| infinityai.m4.de | 4× A100 40 GB | 356 GB | 30 | 2000 GB | €2.69/h or €1548/mo | |
| infinityai.l6.de | 6× A100 40 GB | 512 GB | 120 | 3900 GB | €4.19/h or €2413/mo | |
| infinityai.l8.de | 8× A100 40 GB | 768 GB | 160 | 5350 GB | €5.59/h or €3215/mo |
* Weekly and monthly subscriptions offer reduced rates. Promotional pricing includes a 100% budget bonus, valid for this month.
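As a worked example of what the 100% budget bonus means in practice, here is a short sketch of our own (the function name is ours) using the infinityai.m1 list price from the table above:

```python
def effective_hourly_rate(list_rate_eur: float, prepaid_eur: float, credit_eur: float) -> float:
    """Euros actually paid per GPU hour when `prepaid_eur` buys `credit_eur` of credit."""
    gpu_hours = credit_eur / list_rate_eur   # hours the credit covers at the list rate
    return prepaid_eur / gpu_hours           # what you really pay per hour

# "Double credits": prepay 200 EUR, receive 400 EUR of credit, spend it on infinityai.m1 at 0.69 EUR/h.
print(round(effective_hourly_rate(0.69, 200.0, 400.0), 3))  # 0.345 -> effectively half the hourly price
```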
We meticulously maintain our servers, exemplified by builds like the A100 40 GB "InfinityAI." This allows you to focus on your AI projects, development or production, without hardware concerns. As a boutique hyperscaler in Europe, we provide dedicated attention to detail and performance.
🖥️ Order Console
🎟️ Get €5 Credit
📧 sales@trooper.ai
📞 +49 6126 928999-1
Build real AI on the GPU that already powers services like OpenAI's ChatGPT and Meta's Llama training: the NVIDIA A100.
We breathe new life into powerful components and pass the savings on to you.