L40S GPU Instance

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than L4 and cheaper than H100 PCIe.

Universal usage

The L40S GPU Instance offers unparalleled performance across a spectrum of tasks, from generative AI, LLM inference, and small-model training and fine-tuning to 3D graphics, rendering, and video applications.

Cost-effective scalability

Starting at €1.4/hour for 1 GPU with 48GB of GPU memory, and available in four formats (1, 2, 4, or 8 GPUs), the L40S GPU Instance enables cost-efficient scaling according to workload demands, ensuring optimal resource utilization on top of high-performance capability.
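As a back-of-envelope illustration, the advertised starting price translates into hourly and monthly figures like these. This is a minimal sketch: the assumption that multi-GPU formats scale linearly from the €1.4/hour single-GPU price is ours, not a published price list.

```python
# Rough cost estimator for L40S GPU Instances.
# Assumes linear per-GPU pricing from the advertised 1.4 EUR/hour
# starting price; actual multi-GPU pricing may differ.
PRICE_PER_GPU_HOUR_EUR = 1.4

def hourly_cost(gpus: int) -> float:
    """Hourly cost in EUR for an instance with `gpus` L40S GPUs."""
    return gpus * PRICE_PER_GPU_HOUR_EUR

def monthly_cost(gpus: int, hours: float = 730.0) -> float:
    """Approximate monthly cost (~730 hours in an average month)."""
    return hourly_cost(gpus) * hours

for fmt in (1, 2, 4, 8):
    print(f"{fmt} GPU(s): ~{monthly_cost(fmt):.0f} EUR/month")
```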

K8s compatibility

Seamlessly integrate the L40S GPU Instance into your existing infrastructure with Kubernetes support, streamlining deployment and management of AI workloads while maintaining scalability and flexibility.
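For example, a workload can request a GPU from a cluster node through the standard NVIDIA device plugin resource. This is a minimal sketch: the pod name and image are illustrative placeholders, and it assumes the NVIDIA device plugin is installed on the cluster.

```yaml
# Minimal pod requesting one NVIDIA GPU via the device plugin.
# Name and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: l40s-inference-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # one L40S GPU
```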

Available zones:
Paris: PAR 2

L40S GPU technical specifications

  • GPU: NVIDIA L40S GPU

  • GPU memory: 48GB GDDR6 (864GB/s)

  • Processor: 8 vCPUs, AMD EPYC 7413

  • Processor frequency: 2.65 GHz

  • Memory: 92GB of RAM

  • Memory type: DDR4

  • Network bandwidth: 2.5 Gbps

  • Storage: 1.6TB of scratch storage, plus additional Block Storage

  • Cores: 4th-generation Tensor Cores, 3rd-generation RT Cores

  Ideal use cases with the L40S GPU Instance

    LLM fine-tuning & training

    Use H100 PCIe GPU Instances for medium- to large-scale foundation model training, but harness the L40S's capabilities to fine-tune small LLMs in hours and train them in days.

    • An infrastructure powered by L40S GPUs can train models in days
      Training Llama 2-7B (100B tokens) would require 64 L40S GPUs and take 2.9 days (versus 1 day with H100 NVLink GPUs, as on Nabu2023)
    • Fine-tune models in hours
      Fine-tuning Llama 2-70B SFT (1T tokens) would require 64 L40S GPUs and take 8.2 hours (versus 2.5 hours with H100 NVLink GPUs, as on Nabu2023)

    Source: NVIDIA L40S Product Deck, October 2023
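The quoted figures can be rescaled to other instance counts with simple arithmetic. This is a back-of-envelope sketch anchored to the 64-GPU baselines above; perfect linear scaling is an assumption, and real distributed jobs scale sub-linearly.

```python
# Back-of-envelope wall-clock estimates from the quoted baselines:
#   Llama 2-7B training (100B tokens): 64 L40S GPUs, 2.9 days
#   Llama 2-70B SFT fine-tune:         64 L40S GPUs, 8.2 hours
# Assumes perfect linear scaling, which real jobs only approximate.

def scaled_time(baseline_time: float, baseline_gpus: int, gpus: int) -> float:
    """Scale a quoted wall-clock time to a different GPU count."""
    return baseline_time * baseline_gpus / gpus

# Training Llama 2-7B on 128 GPUs instead of 64:
print(scaled_time(2.9, 64, 128), "days")   # roughly half the time

# Fine-tuning on 32 GPUs instead of 64:
print(scaled_time(8.2, 64, 32), "hours")   # roughly double the time
```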

    Build and monitor a flexible and secure cloud infrastructure powered by GPUs

    Benefit from a complete cloud ecosystem

    Kubernetes Kapsule

    Match any growth in resource needs effortlessly with an easy-to-use managed Kubernetes, backed by a dedicated control plane, for high-performance container management.

    Learn more

    Load Balancer

    Distribute workloads across multiple servers with Load Balancer to ensure continued availability and prevent any single server from being overloaded.

    Learn more

    Virtual Private Cloud

    Secure your cloud resources with ease on a resilient regional network.

    Learn more