Enterprise Solutions powered by NVIDIA A100 and V100

Trooper.AI provides dedicated GPU servers featuring NVIDIA A100 40 GB and Tesla V100 32 GB configurations, delivering uncompromised performance for enterprise AI workloads 🚀. Our solutions are designed to empower your IT teams with the resources needed for large-scale model training, complex AI inference, and efficient LLM fine-tuning. Built for production deployments and hosted securely within the EU, Trooper.AI ensures reliable and scalable GPU infrastructure for your organization.

Talk to Sales


💪 The NVIDIA A100 40 GB: Your Ultimate AI Workhorse

Deploying an upcycled A100 40 GB Server into Datacenter

Our A100 40 GB instances offer:

  • ✅ 100 % raw GPU performance
  • ✅ ISO 27001 Compliant Data Center in Germany
  • ✅ 40 GB HBM2 memory – perfect for LLMs and SDXL
  • ✅ 1x to 8x GPU configurations
  • ✅ up to 1 TB ECC RAM, 8 TB NVMe (RAID 1), and 256 AMD EPYC™ Milan cores
  • ✅ Full CUDA stack, FP16, BF16, INT8 support
  • ✅ Free NVIDIA driver choice
  • ✅ Run anything on Ubuntu 22 thanks to full root access
  • ✅ Dedicated support from experienced Linux AI developers
  • ✅ Payment by invoice via wire transfer, with Purchase Order (PO) numbers supported

We love AI, and we want to serve you with the best infrastructure for it.

Our V100 32 GB instances are largely comparable; the differences are:

  • ✅ NVLink (SXM2) available
  • ✅ Tier 3 data center (NL)
  • ✅ up to 512 GB ECC RAM and 128 Xeon Platinum Scalable cores

sales@trooper.ai


📊 Why the A100 40 GB Rocks

| GPU  | VRAM     | SDXL (16 imgs)* | LLM 8b Token/s** |
|------|----------|-----------------|------------------|
| A100 | 40 GB    | 1:18 min        | 104 r_t/s        |
| 4090 | 24/48 GB | 1:19 min        | 87 r_t/s         |
| V100 | 16/32 GB | 2:36 min        | 62 r_t/s         |
| 3090 | 24 GB    | 2:24 min        | 69 r_t/s         |

* SDXL speed benchmark conducted using Automatic1111 v1.6.0 with the following settings: Model: sd_xl_base_1.0, Prompt: “cute Maltese puppy on green grass” (no negative prompt), Sampler: DPM++ 2M Karras, No Refiner, No Hires Fix, CFG Scale: 7, Steps: 30, Resolution: 1024×1024, Batch Count: 4, Batch Size: 4, Random Seed, PNG Images.
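For reproducibility, the SDXL settings above map onto an Automatic1111 txt2img request roughly as follows (a sketch only; the field names follow the public A1111 web API, and actually sending the request to your own instance is left out):

```python
# Sketch of the SDXL benchmark settings as an Automatic1111
# /sdapi/v1/txt2img payload. Values are taken from the benchmark
# description above; seed -1 means "random seed".
payload = {
    "prompt": "cute Maltese puppy on green grass",
    "negative_prompt": "",
    "sampler_name": "DPM++ 2M Karras",
    "cfg_scale": 7,
    "steps": 30,
    "width": 1024,
    "height": 1024,
    "batch_size": 4,  # images per batch
    "n_iter": 4,      # batch count: 4 batches of 4 images
    "seed": -1,
}

total_images = payload["batch_size"] * payload["n_iter"]
print(total_images)  # 16, matching the "16 imgs" column
```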

** LLM speed benchmark conducted using a default OpenWebUI installation without any modifications. Context length: 2048 (default). No system prompt. Query prompt: “Name the 5 biggest similarities between a wild tiger and a domestic cat.” Model (small): llama3.1:8b-instruct-q8_0. Best of 3 runs, measured in response tokens per second; higher values indicate better performance.
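The r_t/s figures are simply response tokens divided by generation time. A minimal sketch of the calculation (the numbers here are illustrative, not measured values):

```python
def tokens_per_second(response_tokens: int, seconds: float) -> float:
    """Response tokens generated per second (r_t/s)."""
    return response_tokens / seconds

# e.g. 1040 response tokens in 10 s gives 104.0 r_t/s,
# the A100 figure from the table above.
print(tokens_per_second(1040, 10.0))
```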


🌱 Better for You, Better for the Planet

  • We upcycle high-end GPUs and systems – eco-friendly and performant
  • Small footprint: we host and maintain all servers ourselves
  • 100 % green energy in the data center

sales@trooper.ai


💡 What’s Included with Enterprise Blibs?

  • ⚙️ Pre-installed, professionally managed stacks: A1111, ComfyUI, Jupyter, Ubuntu, OpenWebUI, and more
  • 🔐 Encrypted volumes & private storage
  • 🧠 CUDA and driver freedom
  • 🧰 Root access, SSH, terminal, dashboard
  • 🛡️ Backups, Snapshots and SLA
  • 🌍 EU-based hosting (Germany, Netherlands, France)
  • 🌐 up to 80 open ports, NAT + firewall
  • ♻️ Clean, silent, energy-optimized builds
  • 📦 Free traffic, no hidden fees

Our Enterprise Blibs

Explore our NVIDIA A100 server options and pricing. Servers hosted with a “.de” domain are located in Germany within ISO/IEC 27001 certified data centers.

| Blib Name        | GPU           | RAM    | CPU Cores | NVMe    | Monthly Price |
|------------------|---------------|--------|-----------|---------|---------------|
| infinityai.m1    | 1× A100 40 GB | 78 GB  | 8         | 900 GB  | €790/m        |
| infinityai.m1.de | 1× A100 40 GB | 78 GB  | 8         | 900 GB  | €845/m        |
| infinityai.l1.de | 1× A100 40 GB | 92 GB  | 20        | 1200 GB | €885/m        |
| infinityai.m2    | 2× A100 40 GB | 140 GB | 10        | 1200 GB | €1490/m       |
| infinityai.m2.de | 2× A100 40 GB | 140 GB | 10        | 1200 GB | €1570/m       |
| infinityai.m4    | 4× A100 40 GB | 356 GB | 30        | 2000 GB | €2940/m       |
| infinityai.m4.de | 4× A100 40 GB | 356 GB | 30        | 2000 GB | €3095/m       |
| infinityai.l6.de | 6× A100 40 GB | 512 GB | 120       | 3900 GB | €4825/m       |
| infinityai.l8.de | 8× A100 40 GB | 768 GB | 160       | 5350 GB | €6430/m       |
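A quick way to compare the A100 tiers is the effective per-GPU monthly cost, computed here from the list prices in the table (a back-of-the-envelope sketch, not an official rate card):

```python
# (GPU count, list price in EUR/month) for a few A100 Blibs,
# taken from the pricing table above.
blibs = {
    "infinityai.m1": (1, 790),
    "infinityai.m2": (2, 1490),
    "infinityai.m4": (4, 2940),
}

for name, (gpus, price) in blibs.items():
    print(f"{name}: €{price / gpus:.2f} per GPU/month")
# Multi-GPU configurations are cheaper per GPU:
# €790.00 (m1) vs €745.00 (m2) vs €735.00 (m4).
```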

You can also explore our NVIDIA V100 server options and pricing. These servers are hosted in a Tier 3 data center in the Netherlands.

| Blib Name          | GPU                 | RAM    | CPU Cores | NVMe    | Monthly Price |
|--------------------|---------------------|--------|-----------|---------|---------------|
| novatesla.s1       | 1× Tesla V100 32 GB | 42 GB  | 4         | 180 GB  | €240/m        |
| novatesla.m1       | 1× Tesla V100 32 GB | 58 GB  | 8         | 600 GB  | €270/m        |
| novatesla.l1       | 1× Tesla V100 32 GB | 64 GB  | 12        | 900 GB  | €290/m        |
| novatesla.s2       | 2× Tesla V100 32 GB | 84 GB  | 6         | 180 GB  | €440/m        |
| novatesla.m2       | 2× Tesla V100 32 GB | 116 GB | 10        | 600 GB  | €475/m        |
| novatesla.l2       | 2× Tesla V100 32 GB | 128 GB | 24        | 1200 GB | €540/m        |
| novatesla.m3       | 3× Tesla V100 32 GB | 174 GB | 24        | 900 GB  | €730/m        |
| novatesla.m4       | 4× Tesla V100 32 GB | 232 GB | 48        | 900 GB  | €990/m        |
| novatesla.l4       | 4× Tesla V100 32 GB | 232 GB | 64        | 1600 GB | €1050/m       |
| novatesla.l8.nvlink| 8× Tesla V100 32 GB | 464 GB | 96        | 3800 GB | €2035/m       |

* Minimum duration for enterprise service is 12 months.


🧠 Use Cases for Enterprise Blibs

  • 🧬 Fine-tune large models with 40 GB VRAM
  • 🧠 Train and deploy LLMs (Llama 3, Mistral, Falcon)
  • 🎨 High-res SDXL generations with ControlNet and LoRA
  • 📦 Run containerized workloads via Docker
  • 💻 Interactive Jupyter-based model testing
  • 🧰 Full dev environments, custom scripts, CI/CD workflows

🗣️ Why Trooper.AI?

Trooper.AI at an AI conference in Germany

  • Experts in GPU system building & upcycling
  • No middlemen – direct hosting, fast provisioning
  • Full control for developers, data scientists & researchers
  • Full data safety incl. RAID 1 storage, automated backups, and more
  • Managed server handling (just write an email and it's done – whatever you need)
  • Native German support team
  • GDPR-compliant & ISO/IEC 27001 certified

💚 Request a Free Enterprise Demonstration

Schedule a personalized tour of our GPU servers and discuss your specific enterprise requirements with our sales team. We offer tailored solutions to optimize your AI workflows and provide dedicated support every step of the way. Contact sales today at sales@trooper.ai to learn more about our offerings and receive a custom quote. We’re happy to answer any questions and help you determine the best configuration for your needs.

Contact Sales