Q&A 8 How do you containerize your model API using Docker for reproducible deployment?

8.1 Explanation

Docker allows you to package your model API and its dependencies into an image that runs the same way on any machine. This makes your deployment:

  • Portable across teams and clouds
  • Reproducible and isolated from system conflicts
  • Easy to scale or integrate with CI/CD pipelines

We’ll build a Docker container for your FastAPI model API that loads saved .joblib models and runs with Uvicorn.

8.2 Project Structure

cdi-model-deployment/
├── script/
│   └── model_api.py
├── models/
│   └── [saved models here]
├── requirements.txt
└── Dockerfile
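The models/ directory holds estimators serialized with joblib before the image is built. As a hedged sketch (the Iris data and the model.joblib filename are illustrative; match whatever your API loads), producing such a file looks like this:

```python
# Train a toy classifier and serialize it into models/ with joblib.
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

Path("models").mkdir(exist_ok=True)
joblib.dump(clf, "models/model.joblib")

# Round-trip check: the reloaded model predicts like the original.
restored = joblib.load("models/model.joblib")
print((restored.predict(X) == clf.predict(X)).all())  # True
```

Because the Dockerfile copies models/ into the image, anything saved here ships with the container.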

8.3 Dockerfile

# Base image
FROM python:3.12-slim

# Set working directory
WORKDIR /app

# Install dependencies first so this layer is cached
# when only the code or models change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy code and model directory
COPY script/model_api.py ./script/model_api.py
COPY models/ ./models/

# Expose port for Uvicorn
EXPOSE 8000

# Run the API with Uvicorn
CMD ["uvicorn", "script.model_api:app", "--host", "0.0.0.0", "--port", "8000"]

8.4 Libraries in requirements.txt

fastapi==0.115.4
uvicorn==0.35.0
joblib==1.4.2
scikit-learn==1.6.0
pandas==2.2.3
gradio==5.9.1
streamlit==1.39.0

Note: gradio and streamlit are only needed if the same image also serves those interfaces. For an API-only container, remove them to keep the image smaller.

8.5 Build the Docker Image

From the project root (the directory containing the Dockerfile), run:

docker build -t model-api .

8.6 Run the Docker Container

docker run -p 8000:8000 model-api

Then test it in the interactive Swagger UI at:

http://127.0.0.1:8000/docs
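The docker run invocation above can equivalently be captured in a Docker Compose file, which is handy once the API grows extra services. This is a sketch; the service and image names are illustrative.

```yaml
# docker-compose.yml -- equivalent to: docker run -p 8000:8000 model-api
services:
  model-api:
    build: .
    image: model-api
    ports:
      - "8000:8000"
```

Then start it with `docker compose up --build`.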

8.7 Optional: .dockerignore

To avoid copying your local venv or large files:

__pycache__/
venv/
*.csv
*.ipynb

✅ Takeaway: Docker lets you containerize your model API so it runs identically on any machine, a must-have for production deployment.