Enabling SSH in Azure App Service for OCaml Containers: A Production Journey

From local development to production deployment: implementing SSH access, managing database migrations, and navigating Azure PostgreSQL restrictions in an OCaml web application.

Project: Chaufr – Personal drivers, on demand, in your own vehicle
Tech Stack: OCaml 5.3.0, Dream Framework, Azure App Service (Linux), PostgreSQL Flexible Server, Docker
Status: βœ… Production - First Driver Created Successfully

This post chronicles the complete journey of deploying an OCaml custom container to Azure App Service, implementing SSH access for operational control, managing database migrations in production, and resolving Azure-specific PostgreSQL challenges. If you're deploying OCaml (or any custom container) to Azure and need SSH access for migrations or debugging, this guide will save you hours of trial and error.


The Challenge: Production Operations Without SSH

When deploying custom containers to Azure App Service, you lose the comfort of SSH access by default. This becomes critical when you need to:

  1. Run database migrations against the production database
  2. Debug issues inside the running container in real time
  3. Inspect environment variables, file permissions, and logs

Unlike Azure's "blessed" built-in images (Node.js, Python, etc.) that include SSH by default, custom Docker images require explicit SSH configuration following Azure's specific requirements.


Architecture Overview

Here's how SSH access flows through the Azure deployment:

GitHub Actions (CI/CD)
        ↓
Azure Container Registry (ACR)
        ↓
Azure App Service (Linux)
        ↓
Custom OCaml Container
  β”œβ”€ OpenSSH Server (Port 2222)
  β”œβ”€ Environment Variable Export
  β”œβ”€ Migration Binary (migrate.exe)
  └─ Web Application (chaufr)
        ↓
Azure PostgreSQL Flexible Server

Key Accomplishments

1. πŸ‹ Multi-Stage Docker Build with SSH Support

Challenge: Build an OCaml container that supports both the application and SSH access while keeping the final image small.

Solution: Implemented a multi-stage Dockerfile with SSH integration and proper file ownership.

Complete Dockerfile Structure

# --- Build stage ---
FROM ocaml/opam:debian-12-ocaml-5.3 AS builder
WORKDIR /app

# System dependencies - Build tools for Azure Container Registry
RUN sudo apt-get update && sudo apt-get install -y --no-install-recommends \
    libpq-dev \
    pkg-config \
    ca-certificates \
    git \
    libcurl4-openssl-dev \
    libffi-dev \
    libgmp-dev \
    libev-dev && \
    sudo rm -rf /var/lib/apt/lists/*

# Copy opam metadata early for caching
COPY dune-project chaufr.opam ./

# Install OCaml dependencies
RUN eval $(opam env) && \
    opam update && \
    opam pin add -y simple_dotenv.1.0.0 git+https://github.com/Lomig/simple_dotenv.git#05ef4a35eff29784abc3f454ee36163f3ae48747

RUN eval $(opam env) && opam install -y curl ocurl.transition

RUN sudo apt-get update && \
    sudo apt-get install -y --no-install-recommends libargon2-dev && \
    sudo rm -rf /var/lib/apt/lists/* && \
    eval $(opam env) && opam install -y --deps-only .

# Copy source and build
COPY . .
RUN eval $(opam env) && dune build --profile=release ./bin/main.exe ./bin/migrate.exe


# --- Runtime stage ---
FROM debian:bookworm-slim

# Runtime dependencies - Includes OpenSSH for Azure Portal SSH
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    ca-certificates \
    libcurl4 \
    libgmp10 \
    libev4 \
    libargon2-1 \
    openssh-server \
    dialog \
    binutils && \
    rm -rf /var/lib/apt/lists/*

# Create SSH directory and set root password for Azure Portal SSH
# Password must be exactly "Docker!" per Azure requirements
RUN mkdir -p /run/sshd && \
    echo "root:Docker!" | chpasswd

# Create profile.d directory for environment variable scripts
RUN mkdir -p /etc/profile.d && chmod 755 /etc/profile.d

# Non-root user for security
RUN useradd -m -d /home/appuser appuser
WORKDIR /home/appuser

# Copy binaries from builder stage with proper ownership
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/main.exe ./chaufr
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/migrate.exe ./migrate

# Strip debug symbols and set executable permissions
RUN strip ./chaufr ./migrate && \
    chmod +x ./chaufr ./migrate

# Copy SSH configuration and entrypoint script
COPY sshd_config /etc/ssh/
COPY --chown=appuser:appuser entrypoint.sh ./
RUN chmod +x ./entrypoint.sh

# Azure App Service configuration
ENV PORT=8080
EXPOSE 8080 2222

# Use entrypoint script to start SSH and application
ENTRYPOINT ["./entrypoint.sh"]

Key Design Decisions:

  1. Multi-stage build - Separates build tools (~800MB) from runtime (~150MB)
  2. Proper ownership - --chown=appuser:appuser ensures files are accessible
  3. Strip in runtime stage - Reduces binary size by 30-50% where we have permissions
  4. Both binaries - chaufr (web app) and migrate (database migrations)
  5. SSH requirements - Port 2222, specific password, OpenSSH server

2. πŸ” Azure-Compliant SSH Configuration

Challenge: Azure App Service requires specific SSH configuration that differs from standard OpenSSH setups.

Solution: Created sshd_config following Azure's strict requirements.

SSH Configuration File

Port                   2222
ListenAddress          0.0.0.0
LoginGraceTime                 180
X11Forwarding          yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes            yes
SyslogFacility                 DAEMON
PasswordAuthentication         yes
PermitEmptyPasswords   no
PermitRootLogin        yes
Subsystem sftp internal-sftp

Azure Requirements:

  1. sshd must listen on port 2222, the port Azure's internal SSH relay connects to
  2. Root login is enabled with the password Docker! (set via chpasswd in the Dockerfile)
  3. PasswordAuthentication yes, since Azure authenticates as root with that fixed password
  4. The Ciphers and MACs lines mirror Azure's documented sshd_config example

Security Note: The password Docker! is only accessible within Azure's private virtual network. External attackers cannot reach port 2222.

3. Entrypoint Script with Environment Variable Export

Challenge: Azure App Service environment variables are available to the main process but not to SSH sessions, breaking manual migration commands.

Solution: Created an entrypoint script that exports environment variables system-wide.

Entrypoint Script (entrypoint.sh)

#!/bin/sh
set -e

# Export Azure App Service environment variables to make them available in SSH sessions
# This creates a system-wide profile script that loads env vars for all shells
echo "Exporting environment variables for SSH access..."
printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//="/' | sed 's/$/"/' > /etc/profile.d/azure-env.sh
chmod +x /etc/profile.d/azure-env.sh

# Also export to /etc/profile for login shells (backward compatibility)
eval $(cat /etc/profile.d/azure-env.sh)

echo "Starting SSH ..."
service ssh start

# Run migrations if MIGRATE_ON_STARTUP is set
if [ "$MIGRATE_ON_STARTUP" = "true" ]; then
    echo "Running database migrations..."
    ./migrate up || echo "Migration failed, continuing anyway..."
fi

# Start the OCaml web application
echo "Starting Chaufr application..."
exec ./chaufr

Environment Variable Export Strategy:

  1. System-wide availability - Export to /etc/profile.d/azure-env.sh
  2. SSH session access - All shells automatically source this file
  3. Main process access - Eval'd into current shell for app startup
  4. Optional auto-migrations - Set MIGRATE_ON_STARTUP=true in Azure if desired

Why This Matters: Without this, DATABASE_URL and other Azure environment variables are invisible in SSH sessions, causing migration commands to fail.
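The transformation in entrypoint.sh can be reproduced in isolation. This sketch feeds the same sed pipeline a single hypothetical NAME=value pair (GREETING is an invented example; the real script pipes printenv) to show how embedded quotes get escaped:

```shell
# Hypothetical sample pair standing in for one line of `printenv` output.
sample='GREETING=say "hi"'

exported=$(printf '%s\n' "$sample" \
  | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" \
  | sed 's/"/\\\"/g' \
  | sed '/=/s//="/' \
  | sed 's/$/"/')

echo "$exported"
# Prints: export GREETING="say \"hi\""
```

Eval'ing that line in another shell restores GREETING to its original value, quotes included, which is exactly what an SSH login shell does when it sources /etc/profile.d/azure-env.sh.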


Technical Deep Dive

Data Flow: From Build to Production User

1. GitHub Push
   ↓ Triggers GitHub Actions
2. Docker Build (Multi-stage)
   ↓ Builder: Compile OCaml β†’ migrate.exe, chaufr
   ↓ Runtime: Install SSH, copy binaries, configure
3. Azure Container Registry
   ↓ Image stored: chaufr:latest
4. Azure App Service
   ↓ Pull image, start container
5. Container Startup (entrypoint.sh)
   ↓ Export env vars β†’ Start SSH β†’ Start app
6. SSH Session (Azure Portal)
   ↓ Source /etc/profile.d/azure-env.sh
   ↓ Run: /home/appuser/migrate up
7. Database Migrations
   ↓ Connect to Azure PostgreSQL
   ↓ Apply schema changes
8. Production User Creation
   βœ“ Driver signup successful

File Ownership Architecture

Challenge Encountered: Initial attempts showed ls returning nothing despite find locating the binaries.

Root Cause: Files copied from the builder stage kept their original ownership and restrictive permissions, so they did not appear in a normal ls listing for the session user.

Solution:

# Before (files invisible to appuser):
COPY --from=builder /app/_build/default/bin/main.exe ./chaufr

# After (proper ownership):
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/main.exe ./chaufr

Lesson Learned: Always use --chown flags when copying files to containers with non-root users.
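A quick way to confirm ownership from inside a running container is stat. This sketch uses an illustrative file in /tmp; against the real container you would point it at /home/appuser/migrate:

```shell
# Create a stand-in file and inspect owner, group, and mode --
# the same check you would run against the copied binaries.
touch /tmp/demo-binary
chmod 755 /tmp/demo-binary
stat -c '%U:%G %a %n' /tmp/demo-binary
```

If the owner column shows root instead of appuser, the COPY instruction is missing its --chown flag.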


Azure PostgreSQL Extension Challenges

The pgcrypto Extension Blocker

Error Encountered:

ERROR: extension "pgcrypto" is not allow-listed for users in Azure Database for PostgreSQL
HINT: to learn how to allow an extension or see the list of allowed extensions

Root Cause: Azure PostgreSQL Flexible Server requires:

  1. Extensions to be allow-listed at the server level
  2. Admin privileges to create extensions in a database

Solution (Two-Step Process):

Step 1: Allow-list the Extension

# Enable pgcrypto in Azure PostgreSQL server parameters
az postgres flexible-server parameter set \
  --resource-group ocaml-chaufr \
  --server-name chaufr-pg \
  --name azure.extensions \
  --value pgcrypto,uuid-ossp

Output:

{
  "value": "pgcrypto,uuid-ossp",
  "source": "user-override",
  "allowedValues": "...,pgcrypto,...,uuid-ossp,...",
  "isConfigPendingRestart": false
}
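Since azure.extensions is a flat comma-separated list, a small shell helper (hypothetical, for scripts that want to verify the allow-list before running migrations) only needs string matching:

```shell
# `allowed` stands in for the "value" field returned by the CLI.
allowed="pgcrypto,uuid-ossp"

has_ext() {
  # Wrap both sides in commas so `pgcrypto` cannot match `pgcrypto2`.
  case ",$allowed," in
    *",$1,"*) echo yes ;;
    *)        echo no  ;;
  esac
}

has_ext pgcrypto   # prints: yes
has_ext postgis    # prints: no
```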

Step 2: Create Extension as Admin

# Connect to database as admin
az postgres flexible-server connect \
  --name chaufr-pg \
  --admin-user postgres \
  --database-name chaufr \
  --interactive

# Inside psql session (run ONE command at a time):
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
\dx  # List installed extensions
\q   # Quit

Verification:

                                  List of installed extensions
    Name    | Version |   Schema   |                         Description
------------+---------+------------+--------------------------------------------------------------
 pgcrypto   | 1.3     | public     | cryptographic functions
 uuid-ossp  | 1.1     | public     | generate universally unique identifiers (UUIDs)

Why This Restriction Exists

Azure PostgreSQL implements extension restrictions for security and stability: on a managed, multi-tenant platform, a misbehaving or malicious extension can compromise data or destabilize the server, so every extension must be explicitly allow-listed before CREATE EXTENSION will succeed.


Running Migrations in Production

SSH Access via Azure Portal

Step 1: Navigate to SSH

Azure Portal β†’ Your App Service β†’ Development Tools β†’ SSH β†’ Click "Go"

Step 2: Load Environment Variables

# Source Azure environment variables (if not auto-loaded)
source /etc/profile.d/azure-env.sh

# Verify DATABASE_URL is available
echo $DATABASE_URL
# Output: postgresql://postgres:***@chaufr-pg.postgres.database.azure.com:5432/chaufr

Step 3: Initialize Migration System

/home/appuser/migrate init

Output:

No .env file to parse
Migration system initialized successfully

Step 4: Check Migration Status

/home/appuser/migrate status

Output:

No .env file to parse
Database Version: None
Applied Migrations: 0
Pending Migrations: 4

Step 5: Apply Migrations

/home/appuser/migrate up

Success Output:

No .env file to parse
All migrations completed successfully

Step 6: Verify Current Version

/home/appuser/migrate version

Output:

No .env file to parse
Current database version: 4_add_missing_columns

Migration Schema Applied

The successful migration created these tables:

  1. users - User accounts (car owners)
  2. drivers - Driver accounts
  3. rides - Ride booking records
  4. passwords - Password authentication (Argon2)
  5. passkeys - WebAuthn/Passkey credentials
  6. schema_migrations - Migration tracking

Production Validation: First User Created

The Moment of Truth:

After completing the migration setup, I tested user creation via the production API:

# Driver signup request to production Azure endpoint
POST https://chaufr-app.azurewebsites.net/auth/signup
{
  "name": "Test Driver",
  "email": "driver@example.com",
  "password": "securePassword123",
  "role": "driver"
}

Response:

{
  "success": true,
  "message": "User created and logged in",
  "user_id": "aea6ec28-24dd-42bd-809d-a5ca213ad9de"
}

Validation Confirmed:

  1. The production API accepted the signup and returned a server-generated UUID
  2. Password hashing (Argon2) ran inside the container without error
  3. The new row landed in the users table on Azure PostgreSQL

Database Verification:

SELECT id, name, email, created_at FROM users 
WHERE id = 'aea6ec28-24dd-42bd-809d-a5ca213ad9de';

-- Result:
--   id: aea6ec28-24dd-42bd-809d-a5ca213ad9de
--   name: Test Driver
--   email: driver@example.com
--   created_at: 2025-11-21 22:15:43

Houston, We Have Liftoff! πŸš€


Troubleshooting Guide

Issue 1: Binaries Not Visible in SSH

Symptom:

root@container:/home/appuser# ls
# (empty output)

root@container:/home/appuser# ./migrate status
-bash: ./migrate: No such file or directory

But:

root@container:/home/appuser# find / -name migrate 2>/dev/null
/home/appuser/migrate

Diagnosis: File ownership mismatch - files owned by wrong user or have restrictive permissions.

Solution:

# Add --chown flags in Dockerfile
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/migrate.exe ./migrate

Verification:

ls -la /home/appuser/
# Should show:
# -rwxr-xr-x 1 appuser appuser 4927960 Nov 20 21:57 migrate

Issue 2: Environment Variables Not Available in SSH

Symptom:

/home/appuser/migrate status
# Error: Request to <postgresql://postgres:_@...> failed
# (password masked/empty)

Diagnosis: DATABASE_URL not accessible in SSH session.

Solution:

# Source the environment variables
source /etc/profile.d/azure-env.sh

# Verify
echo $DATABASE_URL
# Should show full connection string

# Try migration again
/home/appuser/migrate status

Permanent Fix: Ensure entrypoint.sh exports to /etc/profile.d/azure-env.sh as shown earlier.
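The fix can be sanity-checked without Azure: write a profile script the way entrypoint.sh does, source it, and confirm the variable survives. The paths and connection string below are made-up placeholders:

```shell
# Simulate the generated profile script in a temp location.
mkdir -p /tmp/profile.d
cat > /tmp/profile.d/azure-env.sh <<'EOF'
export DATABASE_URL="postgresql://postgres:secret@example.internal:5432/chaufr"
EOF
chmod +x /tmp/profile.d/azure-env.sh

# What an SSH login shell does via /etc/profile.d:
. /tmp/profile.d/azure-env.sh
echo "$DATABASE_URL"
```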

Issue 3: pgcrypto Extension Error

Symptom:

ERROR: extension "pgcrypto" is not allow-listed for users

Solution: Follow the two-step process in "Azure PostgreSQL Extension Challenges" section above.

Issue 4: psql Meta-Command Syntax Error

Symptom:

postgres@chaufr-pg:chaufr> \dx
syntax error at or near "\"

Diagnosis: Running psql meta-commands in a multi-line SQL block.

Solution: Run each command separately:

-- Run this:
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Press Enter, wait for confirmation

-- Then run this:
\dx
-- Press Enter to see extension list

Key Lessons Learned

1. File Ownership Matters in Multi-Stage Builds

When copying files between Docker stages, always use --chown flags:

# ❌ Wrong (permission issues)
COPY --from=builder /app/binary ./binary

# βœ… Right (proper ownership)
COPY --from=builder --chown=appuser:appuser /app/binary ./binary

2. Azure SSH Requirements Are Non-Negotiable

Azure App Service SSH has specific requirements that must be followed exactly:

  1. sshd must listen on port 2222
  2. The root password must be exactly Docker!
  3. PermitRootLogin yes and PasswordAuthentication yes
  4. The container itself must start sshd; Azure will not start it for you

Deviation from any of these breaks SSH access.

3. Environment Variables Need Explicit Export

Azure environment variables are available to the main process but NOT to SSH sessions unless explicitly exported to /etc/profile.d/.

4. Azure PostgreSQL Extension Management

Extensions like pgcrypto require:

  1. Server-level allow-listing
  2. Admin-level creation
  3. One-time setup per database

Plan for this in your deployment documentation.

5. Multi-Stage Builds Save Space

Final image size comparison:

  1. Single-stage image with the full build toolchain: ~800MB
  2. Multi-stage runtime image with stripped binaries: ~150MB

6. Stripping Binaries in Runtime Stage

OCaml's opam user in the builder stage lacks write permissions to _build/, so stripping has to happen in the runtime stage, after COPY --chown hands the binaries to a user that owns them. The payoff is a 30-50% reduction in binary size.

7. Manual Migrations Are Fine for MVP

For early-stage products, manual migrations over SSH are a feature, not a gap: you control exactly when schema changes land, you skip building migration automation before the schema stabilizes, and the MIGRATE_ON_STARTUP flag keeps a path open to automating later.


Performance Considerations

Docker Image Optimization

Before Optimization: a single-stage image carrying the full OCaml toolchain weighed roughly 800MB.

After Optimization: the multi-stage build with stripped binaries produces a runtime image of roughly 150MB.

Impact: faster pushes to ACR, faster pulls on App Service, and quicker cold starts after restarts and scale events.

SSH Performance

Overhead: sshd is one small daemon, a few megabytes of memory, and it sits entirely off the request path.

Benefit: on-demand shell access for migrations and debugging without redeploying the container.


Conclusion

Deploying OCaml containers to Azure App Service with SSH access requires careful attention to Azure's specific requirements, but the result is a production-ready system with full operational control. The ability to SSH into running containers, manage database migrations manually, and debug issues in real-time provides invaluable flexibility for early-stage products.

Key Takeaways:

  1. Multi-stage Docker builds keep production images lean while maintaining full build capability
  2. Azure SSH requirements are strict but well-documented; follow them exactly
  3. Environment variable export to /etc/profile.d/ enables SSH session access
  4. Azure PostgreSQL extensions require server-level allow-listing and admin creation
  5. Manual migrations via SSH are perfectly acceptable for MVP stages
  6. File ownership in Docker is critical for proper operation

The journey from local development to production deployment involved multiple hurdlesβ€”file ownership issues, environment variable visibility, Azure extension restrictionsβ€”but each challenge reinforced the importance of understanding the platform you're deploying to.

Current Status: Chaufr is live on Azure App Service, accepting driver signups, and ready for the next phase of development.


Resources

Happy deploying! πŸš€

Shipped code beats perfect code, every time.
