Infrastructure

Modal Labs

A serverless cloud platform for AI/ML workloads — run GPU functions, batch jobs, and model APIs with Python-first infrastructure, sub-second cold starts, and pay-per-second billing.

Cold start: 1–3 seconds
GPU types: T4 to H100
Billing: per-second

SECTION 01

What Is Modal?

Modal is a serverless compute platform designed for ML/AI workloads. You write Python functions decorated with @app.function(); Modal handles containerisation, scheduling, autoscaling, and GPU provisioning. Unlike AWS Lambda, Modal supports GPU instances, long-running jobs, and arbitrary Python dependencies. Cold starts are sub-second for CPU functions and 1–3 seconds for GPU instances with pre-cached container images.

SECTION 02

Functions & Containers

Define a Modal app with a container image and annotated functions.

import modal

app = modal.App("llm-inference")

image = modal.Image.debian_slim().pip_install(
    "torch", "transformers", "accelerate"
)

@app.function(gpu="T4", image=image, timeout=300)
def generate_text(prompt: str) -> str:
    from transformers import pipeline
    pipe = pipeline("text-generation", model="gpt2", device=0)
    result = pipe(prompt, max_new_tokens=100)[0]["generated_text"]
    return result

# Run locally via an ephemeral Modal app context
with app.run():
    result = generate_text.remote("The future of AI is")
    print(result)

SECTION 03

Running LLM Inference

For production LLM serving, deploy the app with modal deploy so warm instances stay available (modal serve is handy during development, with hot reload). vLLM on Modal is a common pattern for cheap, scalable LLM APIs.

@app.cls(gpu="A10G", image=image, concurrency_limit=10)
class LLMServer:
    @modal.enter()
    def load_model(self):
        # Runs once per container, before any method calls
        from vllm import LLM
        self.llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")

    @modal.method()
    def generate(self, prompt: str) -> str:
        from vllm import SamplingParams
        params = SamplingParams(temperature=0.7, max_tokens=512)
        outputs = self.llm.generate([prompt], params)
        return outputs[0].outputs[0].text

@app.function()
@modal.web_endpoint(method="POST")
def inference_endpoint(request: dict) -> dict:
    server = LLMServer()
    return {"response": server.generate.remote(request["prompt"])}

SECTION 04

Scheduled Jobs

Run batch jobs on a schedule with @app.function(schedule=...). modal.Period(hours=24) runs every 24 hours; modal.Cron('0 2 * * *') runs daily at 2 AM UTC. Ideal for: nightly data processing, scheduled eval runs, dataset refresh pipelines, and periodic report generation.
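A minimal sketch of both schedule styles; the app name and job bodies are placeholders, and the functions only run once the app is deployed:

```python
import modal

app = modal.App("scheduled-jobs")

# Runs once every 24 hours, starting from deploy time
@app.function(schedule=modal.Period(hours=24))
def refresh_dataset():
    print("refreshing dataset...")

# Runs at 02:00 UTC daily (standard cron syntax)
@app.function(schedule=modal.Cron("0 2 * * *"))
def nightly_eval():
    print("running nightly evals...")
```

After modal deploy, Modal's scheduler invokes these functions on time without any extra infrastructure; there is no server to keep running between invocations.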

SECTION 05

Secrets & Storage

Store API keys as Modal Secrets: modal.Secret.from_name('openai-secret'). Secrets are injected as environment variables into function containers. For persistent storage, use modal.Volume (NFS-like) or modal.CloudBucketMount (S3/GCS). modal.Volume persists across function calls and is mounted at a specified path.
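A sketch combining both features; it assumes a secret named 'openai-secret' (containing OPENAI_API_KEY) and a volume named 'model-cache' exist, created via the Modal CLI or dashboard:

```python
import os
import modal

app = modal.App("secrets-and-storage")
volume = modal.Volume.from_name("model-cache", create_if_missing=True)

@app.function(
    secrets=[modal.Secret.from_name("openai-secret")],
    volumes={"/cache": volume},
)
def cached_call(prompt: str) -> str:
    # The secret's keys appear as ordinary environment variables
    api_key = os.environ["OPENAI_API_KEY"]
    # Files written under the mount path persist across function calls
    with open("/cache/last_prompt.txt", "w") as f:
        f.write(prompt)
    volume.commit()  # make writes visible to other containers
    return f"called with key ending ...{api_key[-4:]}"
```

The mount path ("/cache" here) is arbitrary; choose any absolute path and reference it consistently inside the function.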

SECTION 06

Comparison to Lambda/GCP Functions

AWS Lambda: CPU only, 15-min timeout, 10GB memory. Good for lightweight APIs. Modal: GPU support, 24-hour timeout, arbitrary memory, Python-first. Good for AI workloads. GCP Cloud Run: container-based, GPU support (beta), scales to zero. Modal's key advantage: zero-config GPU provisioning and Python-native infrastructure definition that feels like local code.

SECTION 07

Advanced Implementation

The patterns below show a complete minimal app with a local entrypoint, followed by a function with explicit resource configuration (custom image, timeout, and memory) of the kind you'd use for heavier production workloads.

import modal

app = modal.App("my-app")

@app.function()
def process_data(x):
    return x * 2

@app.local_entrypoint()
def main():
    result = process_data.remote(21)
    print(result)  # 42

@app.function(
    image=modal.Image.debian_slim().pip_install("pandas"),
    timeout=300,
    memory=2000
)
def train_model(data_path: str) -> str:
    import pandas as pd
    df = pd.read_csv(data_path)
    # Training logic
    return "model_trained"
SECTION 08

Comparison & Evaluation

| Criteria | Description | Consideration |
| --- | --- | --- |
| Performance | Latency and throughput metrics | Measure against baselines |
| Scalability | Horizontal and vertical scaling | Plan for growth |
| Integration | Compatibility with ecosystem | Reduce friction |
| Cost | Operational and infrastructure costs | Total cost of ownership |

Understanding the fundamentals enables practitioners to make informed decisions about tool selection and implementation strategy. These foundational concepts shape how systems are architected and operated in production environments. Key considerations include performance characteristics, resource utilization patterns, and integration requirements that vary significantly based on specific use cases and organizational constraints.

Production deployments require careful consideration of operational characteristics including resource consumption, latency profiles, failure modes, and recovery mechanisms. Comprehensive testing against real-world scenarios helps validate assumptions, identify edge cases, and stress-test systems under realistic conditions. Automation of testing pipelines ensures consistent quality and reduces manual effort during deployment cycles.

Community adoption and ecosystem maturity directly impact long-term viability and maintenance burden. Active development communities, thorough documentation, responsive support channels, and regular updates significantly reduce implementation friction. The availability of third-party integrations, plugins, and extensions extends functionality and accelerates time-to-value for organizations adopting these technologies.

Cost considerations extend beyond initial implementation to include ongoing operational expenses, training requirements, infrastructure costs, and opportunity costs of technology choices. A holistic cost analysis accounts for both direct expenses and indirect costs spanning acquisition, deployment, operational overhead, and eventual maintenance or replacement. Return on investment calculations must consider these multifaceted dimensions.

Integration patterns and interoperability with existing infrastructure determine deployment success and organizational impact. Compatibility layers, standardized interfaces, clear migration paths, and backward compatibility mechanisms smooth adoption for teams managing legacy systems. Understanding integration points and potential bottlenecks helps avoid common pitfalls and ensures smooth operational transitions.

Monitoring and observability are critical aspects of modern production systems and operational excellence. Establishing comprehensive metrics, structured logging, distributed tracing, and alerting mechanisms enables rapid detection and resolution of issues before they impact end users. Instrumentation at multiple layers provides visibility into system behavior and helps drive continuous improvements.

Security considerations span multiple dimensions including authentication, authorization, encryption, data protection, and compliance with regulatory frameworks. Implementing defense-in-depth strategies with multiple layers of security controls reduces risk exposure. Regular security audits, penetration testing, and vulnerability assessments help identify and remediate weaknesses proactively before they become exploitable.

Scalability architecture decisions influence system behavior under load and determine capacity for future growth. Horizontal and vertical scaling approaches present different tradeoffs in terms of complexity, cost, and operational overhead. Designing systems with scalability in mind from inception prevents costly refactoring and ensures smooth expansion as demand increases.