Power Up Inference & Training with NVIDIA A100 40 GB 🚀

Trooper.AI delivers NVIDIA A100 40 GB GPU servers with full power and zero overhead. Whether you're training large models, running complex AI inference, or fine-tuning LLMs – our Ampere-powered NVIDIA A100 instances are ready for production use, hosted securely in the EU.

Deploy A100 Server Now


🎄 X-Mas Upgrade: Get 50% Extra Credits, €100 → €150 up to €1500 🚀

This gives you 50% more hours on the A100 40 GB for the same budget. Use it to launch high-performance GPU jobs right away!

Promotion valid only this December.


💪 The A100 40 GB: Your Ultimate AI Workhorse

While not the latest generation, the A100 with 40 GB remains a highly capable and efficient solution for a wide range of AI training and inference workloads. We specialize in getting the most out of the A100 by pairing it with fast CPUs, the latest NVMe drives (up to 4,500 MB/s), and generous RAM.

Our A100 40 GB instances offer:

  • ✅ 100% raw GPU performance
  • ✅ 40 GB HBM2 memory – perfect for LLMs and video generation
  • ✅ Full CUDA stack with FP16, BF16, and INT8 support
  • ✅ Great for LoRA, QLoRA, Flux, ControlNet, Llama & more
  • ✅ Run ComfyUI, Ubuntu Desktop, Jupyter, OpenWebUI, and any Docker image
  • ✅ High-speed NVMe drives
  • ✅ Powered by upcycled, optimized systems in top EU data centers
  • ✅ Automated backups included
  • ✅ Upgrade any time without losing any data
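As a quick sanity check on the 40 GB figure: the weights of an 8B-parameter LLM fit comfortably at the supported precisions. This is a back-of-the-envelope sketch only – real jobs also need headroom for activations, KV cache, and (when training) optimizer state.

```python
# Back-of-the-envelope VRAM needed just for the weights of an
# 8B-parameter model at the precisions the A100 supports.
# Activations, KV cache, and optimizer state come on top.
PARAMS = 8e9
BYTES_PER_PARAM = {"FP16/BF16": 2, "INT8": 1}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gb:.0f} GB of weights")  # 16 GB and 8 GB, well under 40 GB
```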

We love AI – and we put that passion into serving you the best GPU infrastructure possible:

Start A100 Server Now


Introduction to A100 by Markus

🎄 X-Mas Upgrade: Get 50% Extra Credits, €100 → €150 up to €1500 🚀 → This cuts the effective hourly price by a third!

Claim Extra Credits


📊 Why the A100 40 GB Rocks

GPU   VRAM      SDXL (16 imgs)*  LLM 8b Token/s**
A100  40 GB     1:18 min         104 r_t/s
4090  24/48 GB  1:19 min         87 r_t/s
V100  16/32 GB  2:36 min         62 r_t/s
3090  24 GB     2:24 min         69 r_t/s

* SDXL speed benchmark conducted using Automatic1111 v1.6.0 with the following settings: Model: sd_xl_base_1.0, Prompt: "cute Maltese puppy on green grass" (no negative prompt), Sampler: DPM++ 2M Karras, No Refiner, No Hires Fix, CFG Scale: 7, Steps: 30, Resolution: 1024×1024, Batch Count: 4, Batch Size: 4, Random Seed, PNG Images.

** LLM speed benchmark conducted using a default OpenWebUI installation without any modifications. Context length: 2048 (default). No system prompt. Query prompt: "Name the 5 biggest similarities between a wild tiger and a domestic cat." Model (small): llama3.1:8b-instruct-q8_0. Best of 3 runs, measured in response tokens per second – higher values indicate better performance.
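Put another way, the A100's lead in the table works out to simple throughput ratios (computed directly from the r_t/s figures listed above):

```python
# Relative LLM throughput of the A100 40 GB versus the other cards,
# using the response-tokens-per-second figures from the table above.
a100 = 104
others = {"4090": 87, "3090": 69, "V100": 62}

for card, tps in others.items():
    print(f"A100 vs {card}: {a100 / tps:.2f}x")  # 1.20x, 1.51x, 1.68x
```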


🌱 Better for You, Better for the Planet

  • We upcycle high-end GPUs and systems – eco-friendly and performant
  • We host and maintain all servers ourselves on 100% eco-power
  • German ISO/IEC 27001-certified locations (Frankfurt)

Activate A100 Server Now


💡 What's Included with A100 Blibs?

  • ⚙️ Pre-installed stacks: A1111, ComfyUI, Jupyter, Ubuntu, OpenWebUI
  • 🔐 Encrypted volumes & private storage
  • 🧠 CUDA and driver freedom
  • 🧰 Root access, SSH, terminal, dashboard
  • 🌍 EU-based hosting (Germany, Netherlands, France)
  • 🔄 Stop/freeze servers to save money
  • 🌐 10 open ports, NAT + firewall
  • ♻️ Clean, silent, energy-optimized builds
  • 📦 Free traffic, no hidden fees
  • 💸 Voucher, top-up bonus, monthly savings

Our A100 Blibs

Explore our A100 server options and pricing. Servers hosted with a ".de" domain are located in Germany within ISO/IEC 27001 certified data centers.

Blib Name         GPU            RAM     CPU Cores  NVMe     Price*   +50% Promo Price**
infinityai.m1     1× A100 40 GB  78 GB   8          900 GB   €1.14/h  €0.76/h or €437/m
infinityai.m1.de  1× A100 40 GB  78 GB   8          900 GB   €1.21/h  €0.81/h or €463/m
infinityai.l1.de  1× A100 40 GB  92 GB   20         1200 GB  €1.29/h  €0.86/h or €493/m
infinityai.m2     2× A100 40 GB  140 GB  10         1200 GB  €2.12/h  €1.41/h or €813/m
infinityai.m2.de  2× A100 40 GB  140 GB  10         1200 GB  €2.24/h  €1.49/h or €860/m
infinityai.m4     4× A100 40 GB  356 GB  30         2000 GB  €4.20/h  €2.80/h or €1610/m
infinityai.m4.de  4× A100 40 GB  356 GB  30         2000 GB  €4.42/h  €2.95/h or €1697/m
infinityai.l6.de  6× A100 40 GB  512 GB  120        3900 GB  €6.95/h  €4.63/h or €2667/m
infinityai.l8.de  8× A100 40 GB  768 GB  160        5350 GB  €9.28/h  €6.19/h or €3563/m

* Weekly and monthly subscriptions offer reduced rates.
** Promotional pricing includes a 50% extra credits bonus, valid for this month.
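The promo column follows directly from the credit bonus: €100 buys €150 of credit, so each listed hourly rate is effectively divided by 1.5. A quick check against a few rows of the table above (rounded to cents):

```python
# 50% extra credits means every euro buys 1.5x credit, so the effective
# hourly rate is the list price divided by 1.5. List prices from the table.
list_prices = {"infinityai.m1": 1.14, "infinityai.m2": 2.12, "infinityai.l8.de": 9.28}

for blib, price in list_prices.items():
    promo = round(price / 1.5, 2)
    print(f"{blib}: €{price}/h -> €{promo}/h")  # €0.76, €1.41, €6.19 – matching the table
```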


🧠 Use Cases for A100 Blibs

  • 🧬 Fine-tune large models with 40 GB VRAM
  • 🧠 Train and deploy LLMs (Llama 3, Mistral, Falcon)
  • 🎨 High-res SDXL generations with ControlNet and LoRA
  • 📦 Run containerized workloads via Docker
  • 💻 Interactive Jupyter-based model testing
  • 🧰 Full dev environments, custom scripts, CI/CD workflows

πŸ—£οΈ Why Trooper.AI?

Trooper AI up-cycling hardware

  • Experts in GPU system building & upcycling
  • No middlemen – direct hosting, fast provisioning
  • Full control for developers, data scientists & researchers
  • Native German support team
  • GDPR-compliant & ISO/IEC 27001 certified

We Take Care

Deploying an upcycled A100 40 GB server into the data center

We meticulously maintain our servers, exemplified by builds like the A100 40 GB "InfinityAI." This allows you to focus on your AI projects – development or production – without hardware concerns. As a boutique hyperscaler in Europe, we provide dedicated attention to detail and performance.


🌍 EU-Based, Developer-Friendly

  • 🇩🇪 🇫🇷 🇳🇱 Hosting in Germany, France & Netherlands
  • GDPR-compliant infrastructure
  • Fast SSD storage & high-core CPUs
  • Pay via PayPal, SEPA, credit card
  • Request DPA or NDA on demand

🎁 Claim Your €5 Voucher Now

🖥️ Order Console
🎟️ Get €5 Credit
📧 sales@trooper.ai
📞 +49 6126 928999-1

Build real AI on the GPU that already powered services like OpenAI's ChatGPT and Meta's Llama training – the NVIDIA A100.


💚 We Love Upcycling

We breathe new life into powerful components – and pass the savings on to you.

Claim Your €5 Voucher