From Jupyter to Production: ML Model Deployment That Actually Works

A practical guide to deploying ML models in production. Learn the infrastructure patterns, monitoring strategies, and deployment approaches that prevent model failures.

Ignacio Van Droogenbroeck

"It works on my laptop" is the most expensive phrase in machine learning. After helping dozens of teams move models from notebooks to production, I've seen every possible failure mode. Here's how to deploy ML models that actually work in the real world.

The Reality of ML in Production

Most ML projects fail not because of bad algorithms, but because of infrastructure problems:

  • Model drift goes undetected for months
  • Feature pipelines break silently
  • Predictions become stale or inconsistent
  • Scaling causes memory issues or timeouts
  • Rollbacks take hours instead of minutes

Let's fix this systematically.

Architecture: The Foundation

Before deploying any model, establish the right architecture. Here's what works:

The Modern ML Stack

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Data Sources  │───▶│  Feature Store   │───▶│  Model Registry │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Monitoring    │◀───│  Model Serving   │◀───│  Training Jobs  │
└─────────────────┘    └──────────────────┘    └─────────────────┘

Container-First Approach

Every model should be containerized from day one:

FROM python:3.9-slim

# Install system dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies  
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy model artifacts
COPY model/ /app/model/
COPY src/ /app/src/

WORKDIR /app

# Health check endpoint
HEALTHCHECK --interval=30s --timeout=30s --start-period=60s \
  CMD curl -f http://localhost:8000/health || exit 1

# Run the model server
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"]

Feature Engineering in Production

The biggest gap between training and production? Feature engineering.

Pattern 1: Feature Store

from feast import FeatureStore

def get_features_for_prediction(user_ids, timestamp):
    """Get features from the feature store for online prediction.

    Note: get_online_features always returns the latest stored values;
    the timestamp argument is kept only for logging and auditing.
    """

    store = FeatureStore(repo_path="/app/feature_repo")

    # Feature references in "feature_view:feature_name" form
    feature_views = [
        "user_demographics:age",
        "user_demographics:location", 
        "user_behavior:click_rate_7d",
        "user_behavior:purchase_amount_30d"
    ]
    
    # Fetch features
    features = store.get_online_features(
        features=feature_views,
        entity_rows=[{"user_id": uid} for uid in user_ids],
        full_feature_names=True
    )
    
    return features.to_dict()
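
Example call (the user IDs are placeholders, and the entities must already be materialized into the online store):

from datetime import datetime

online_features = get_features_for_prediction(["user_001", "user_002"], datetime.utcnow())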

Pattern 2: Real-time Feature Pipeline

import asyncio
import redis
from typing import Dict, List

class FeaturePipeline:
    def __init__(self):
        self.redis_client = redis.Redis(host='redis', port=6379, db=0)
        
    async def compute_realtime_features(self, user_id: str) -> Dict:
        """Compute features in real-time for prediction"""
        
        # Get base features from cache
        base_features = self.redis_client.hgetall(f"user:{user_id}")
        
        # Compute derived features
        features = {
            'user_id': user_id,
            'age': int(base_features.get(b'age', 0)),
            'location': base_features.get(b'location', b'').decode(),
            'account_age_days': self._calculate_account_age(user_id),
            'recent_activity_score': await self._calculate_activity_score(user_id)
        }
        
        return features
        
    def _calculate_account_age(self, user_id: str) -> int:
        # Implementation here
        pass
        
    async def _calculate_activity_score(self, user_id: str) -> float:
        # Implementation here  
        pass
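
Calling the pipeline from synchronous code is a one-liner; note that the Redis host name 'redis' above assumes a Docker Compose or Kubernetes service of that name, and "user_123" is a placeholder ID:

pipeline = FeaturePipeline()
features = asyncio.run(pipeline.compute_realtime_features("user_123"))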

Model Serving Patterns

Pattern 1: Synchronous REST API

Best for: Low-latency, real-time predictions

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import joblib
import numpy as np
from typing import List

app = FastAPI()

# Load model at startup
model = joblib.load('/app/model/model.pkl')
scaler = joblib.load('/app/model/scaler.pkl')

class PredictionRequest(BaseModel):
    features: List[float]

class PredictionResponse(BaseModel):
    prediction: float
    confidence: float
    model_version: str

@app.post("/predict", response_model=PredictionResponse)
async def predict(request: PredictionRequest):
    try:
        # Preprocess features
        features_scaled = scaler.transform([request.features])
        
        # Make prediction
        prediction = model.predict(features_scaled)[0]
        confidence = model.predict_proba(features_scaled).max()
        
        return PredictionResponse(
            prediction=float(prediction),
            confidence=float(confidence),
            model_version="v1.2.3"
        )
        
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.get("/health")
async def health_check():
    return {"status": "healthy", "model_loaded": model is not None}

Pattern 2: Batch Processing

Best for: High-throughput, non-real-time predictions

import pandas as pd
import joblib
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta

# `connection` is assumed to be a SQLAlchemy engine / DBAPI connection
# created elsewhere (e.g. from an Airflow Connection).

def batch_predict(**context):
    """Run batch predictions on new data"""
    
    # Load new data
    df = pd.read_sql("""
        SELECT user_id, feature1, feature2, feature3
        FROM user_features  
        WHERE created_at >= %(start_date)s
        AND created_at < %(end_date)s
    """, connection, params={
        'start_date': context['ds'],
        'end_date': context['next_ds']
    })
    
    # Load model
    model = joblib.load('/models/current/model.pkl')
    
    # Make predictions
    features = df[['feature1', 'feature2', 'feature3']].values
    predictions = model.predict(features)
    
    # Save results
    results_df = pd.DataFrame({
        'user_id': df['user_id'],
        'prediction': predictions,
        'prediction_date': context['ds'],
        'model_version': 'v1.2.3'
    })
    
    results_df.to_sql('predictions', connection, if_exists='append', index=False)

dag = DAG(
    'batch_ml_predictions',
    default_args={
        'start_date': datetime(2024, 1, 1),
        'retries': 2,
    },
    schedule_interval='@daily',
    catchup=False
)

predict_task = PythonOperator(
    task_id='batch_predict',
    python_callable=batch_predict,
    dag=dag
)

Monitoring: The Critical Missing Piece

Most ML monitoring focuses on infrastructure metrics. That's not enough.

Model Performance Monitoring

import logging
import numpy as np
from datetime import datetime
from typing import Any, Dict

class ModelMonitor:
    def __init__(self, model_name: str, version: str):
        self.model_name = model_name
        self.version = version
        
    def log_prediction(self, features: np.ndarray, prediction: float, 
                      confidence: float, metadata: Dict[str, Any] = None):
        """Log individual predictions for monitoring"""
        
        log_data = {
            'model_name': self.model_name,
            'model_version': self.version,
            'prediction': prediction,
            'confidence': confidence,
            'feature_hash': hash(features.tobytes()),
            'timestamp': datetime.utcnow().isoformat(),
            'metadata': metadata or {}
        }
        
        # Send to monitoring system (Prometheus, DataDog, etc.)
        logging.info(f"ML_PREDICTION: {log_data}")
        
    def check_drift(self, current_features: np.ndarray,
                    baseline_features: np.ndarray) -> Dict[str, Dict[str, Any]]:
        """Detect feature drift using statistical tests"""
        
        from scipy import stats
        
        drift_scores = {}
        
        for i in range(current_features.shape[1]):
            current_col = current_features[:, i]
            baseline_col = baseline_features[:, i]
            
            # KS test for distribution drift
            ks_stat, p_value = stats.ks_2samp(baseline_col, current_col)
            
            drift_scores[f'feature_{i}_drift'] = {
                'ks_statistic': ks_stat,
                'p_value': p_value,
                'drift_detected': p_value < 0.05
            }
            
        return drift_scores
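
A quick usage sketch with synthetic data (the model name is hypothetical; in production the baseline would come from your training set and the current window from recent prediction logs):

monitor = ModelMonitor(model_name="churn_model", version="v1.2.3")

# Synthetic example: baseline vs. a slightly shifted current window
baseline = np.random.normal(loc=0.0, scale=1.0, size=(1000, 4))
current = np.random.normal(loc=0.3, scale=1.2, size=(1000, 4))

drift_report = monitor.check_drift(current, baseline)
for feature, scores in drift_report.items():
    if scores["drift_detected"]:
        logging.warning(f"{feature}: KS={scores['ks_statistic']:.3f}, p={scores['p_value']:.4f}")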

Data Quality Checks

def validate_input_data(df: pd.DataFrame) -> Dict[str, bool]:
    """Validate input data quality before making predictions"""
    
    checks = {
        'no_missing_values': not df.isnull().any().any(),
        'correct_schema': list(df.columns) == EXPECTED_COLUMNS,
        'value_ranges_valid': all(
            df[col].between(FEATURE_RANGES[col][0], FEATURE_RANGES[col][1]).all()
            for col in NUMERIC_COLUMNS
        ),
        'no_extreme_outliers': all(
            (abs(df[col] - df[col].mean()) <= 4 * df[col].std()).all()
            for col in NUMERIC_COLUMNS
        )
    }
    
    # Alert if any check fails
    for check_name, passed in checks.items():
        if not passed:
            send_alert(f"Data quality check failed: {check_name}")
            
    return checks
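
The checks above lean on a few module-level names that belong in your project's config. A hypothetical sketch of what they might look like (the column names, ranges, and alerting hook are placeholders):

import logging

EXPECTED_COLUMNS = ["user_id", "age", "income", "click_rate_7d"]   # full input schema, in order
NUMERIC_COLUMNS = ["age", "income", "click_rate_7d"]               # columns with numeric range checks
FEATURE_RANGES = {
    "age": (18, 100),
    "income": (0, 1_000_000),
    "click_rate_7d": (0.0, 1.0),
}

def send_alert(message: str) -> None:
    """Route data-quality failures to whatever alerting channel the team uses."""
    logging.error(f"DATA_QUALITY_ALERT: {message}")   # swap for PagerDuty/Slack/etc.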

Deployment Strategies

Blue-Green Deployment

# kubernetes-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ml-model
      version: blue
  template:
    metadata:
      labels:
        app: ml-model
        version: blue
    spec:
      containers:
      - name: model-server
        image: your-registry/ml-model:v1.2.3
        ports:
        - containerPort: 8000
        env:
        - name: MODEL_VERSION
          value: "v1.2.3"
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
---
apiVersion: v1
kind: Service  
metadata:
  name: ml-model-service
spec:
  selector:
    app: ml-model
    version: blue  # Switch to green when ready
  ports:
  - port: 80
    targetPort: 8000
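
Cutting over from blue to green is just a change to the Service selector. A minimal sketch using the official Kubernetes Python client, assuming the green Deployment is already rolled out and healthy and that the Service lives in the default namespace:

from kubernetes import client, config

def switch_traffic_to(version: str, namespace: str = "default"):
    """Point the ml-model-service selector at the given deployment version."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
    core_v1 = client.CoreV1Api()
    core_v1.patch_namespaced_service(
        name="ml-model-service",
        namespace=namespace,
        body={"spec": {"selector": {"app": "ml-model", "version": version}}},
    )

switch_traffic_to("green")   # instant cutover; switch_traffic_to("blue") rolls back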

Canary Deployment

def canary_deployment_check(new_version_metrics: Dict, 
                           current_version_metrics: Dict) -> bool:
    """Determine if canary deployment should proceed"""
    
    # Define success criteria
    criteria = {
        'error_rate_threshold': 0.01,  # 1% error rate max
        'latency_p99_threshold': 500,  # 500ms P99 latency max
        'accuracy_drop_threshold': 0.02  # 2% accuracy drop max
    }
    
    # Check error rate
    if new_version_metrics['error_rate'] > criteria['error_rate_threshold']:
        return False
        
    # Check latency
    if new_version_metrics['latency_p99'] > criteria['latency_p99_threshold']:
        return False
        
    # Check accuracy degradation
    accuracy_drop = current_version_metrics['accuracy'] - new_version_metrics['accuracy']
    if accuracy_drop > criteria['accuracy_drop_threshold']:
        return False
        
    return True
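
Here is how that check might drive a progressive rollout. This is a sketch: get_version_metrics, set_canary_traffic, and rollback_canary are hypothetical helpers standing in for your metrics store and traffic-routing layer (service mesh, load balancer, etc.).

import time

def progressive_rollout(stages=(5, 25, 50, 100), soak_seconds=600) -> bool:
    """Gradually shift traffic to the canary, bailing out on the first failed check."""
    for percent in stages:
        set_canary_traffic(percent)          # route `percent`% of traffic to the new version
        time.sleep(soak_seconds)             # let metrics accumulate at this stage

        new_metrics = get_version_metrics("canary")
        current_metrics = get_version_metrics("stable")

        if not canary_deployment_check(new_metrics, current_metrics):
            rollback_canary()                # send all traffic back to the stable version
            return False

    return True  # canary promoted to 100% of traffic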

Common Gotchas and Solutions

1. Memory Leaks in Long-Running Services

import gc
import logging
import psutil
import threading

class ModelServer:
    def __init__(self, model):
        self.model = model  # model object loaded elsewhere (e.g. via joblib.load) and passed in
        self.prediction_count = 0
        # Check memory every 5 minutes on a background (daemon) timer
        self.memory_monitor = threading.Timer(300, self._check_memory)
        self.memory_monitor.daemon = True
        self.memory_monitor.start()
        
    def predict(self, features):
        prediction = self.model.predict(features)
        self.prediction_count += 1
        
        # Periodic garbage collection
        if self.prediction_count % 1000 == 0:
            gc.collect()
            
        return prediction
        
    def _check_memory(self):
        """Monitor memory usage and alert if too high"""
        memory_percent = psutil.virtual_memory().percent
        
        if memory_percent > 85:
            logging.warning(f"High memory usage: {memory_percent}%")
            # Consider restarting service or scaling up
            
        # Schedule the next check
        self.memory_monitor = threading.Timer(300, self._check_memory)
        self.memory_monitor.daemon = True
        self.memory_monitor.start()

2. Handling Model Version Mismatches

import json
import os
from datetime import datetime

import joblib

# CURRENT_FEATURE_SCHEMA is assumed to be defined elsewhere: it describes the
# feature schema (names, order, types) the serving code currently expects.

class ModelRegistry:
    def __init__(self):
        self.models = {}
        
    def load_model(self, version: str):
        """Load specific model version with compatibility checks"""
        
        model_path = f"/models/{version}/model.pkl"
        metadata_path = f"/models/{version}/metadata.json"
        
        # Check if model exists
        if not os.path.exists(model_path):
            raise ValueError(f"Model version {version} not found")
            
        # Load metadata
        with open(metadata_path, 'r') as f:
            metadata = json.load(f)
            
        # Check feature compatibility
        if metadata['feature_schema'] != CURRENT_FEATURE_SCHEMA:
            raise ValueError(f"Feature schema mismatch for version {version}")
            
        # Load model
        model = joblib.load(model_path)
        self.models[version] = {
            'model': model,
            'metadata': metadata,
            'loaded_at': datetime.utcnow()
        }
        
        return model

Production Checklist

Before deploying any ML model, ensure:

  • Model artifacts are version-controlled
  • Feature pipeline handles missing data gracefully
  • Input validation prevents bad data from reaching the model
  • Monitoring tracks predictions, drift, and performance
  • Rollback plan is tested and documented
  • Load testing confirms performance under expected traffic
  • Security includes authentication and input sanitization
  • Documentation covers API specs and troubleshooting

The Bottom Line

ML model deployment is 20% machine learning and 80% software engineering. Focus on:

  1. Reproducible environments (Docker + version control)
  2. Robust feature pipelines (handle real-world data messiness)
  3. Comprehensive monitoring (track everything that matters)
  4. Safe deployment practices (blue-green, canary, rollbacks)
  5. Operational excellence (logging, alerting, documentation)

Get these fundamentals right, and your models will actually work in production. Skip them, and you'll be firefighting instead of innovating.


Ready to move your models from notebooks to production? We help teams build reliable ML deployment pipelines that scale. Let's discuss your ML infrastructure needs.

About Ignacio Van Droogenbroeck

DevOps Engineer and Infrastructure Consultant with 20+ years of experience building scalable systems. Founder of Basekick.