Enabling SSH in Azure App Service for OCaml Containers: A Production Journey
From local development to production deployment: implementing SSH access, managing database migrations, and navigating Azure PostgreSQL restrictions in an OCaml web application.
Project: Chaufr – Personal drivers, on demand, in your own vehicle
Tech Stack: OCaml 5.3.0, Dream Framework, Azure App Service (Linux), PostgreSQL Flexible Server, Docker
Status: ✅ Production – First Driver Created Successfully
This post chronicles the complete journey of deploying an OCaml custom container to Azure App Service, implementing SSH access for operational control, managing database migrations in production, and resolving Azure-specific PostgreSQL challenges. If you're deploying OCaml (or any custom container) to Azure and need SSH access for migrations or debugging, this guide will save you hours of trial and error.
The Challenge: Production Operations Without SSH
When deploying custom containers to Azure App Service, you lose the comfort of SSH access by default. This becomes critical when you need to:
- Run database migrations manually in production
- Debug container issues in real-time
- Inspect running processes and file systems
- Execute one-off commands without redeployment
- Verify environment variables in the runtime environment
Unlike Azure's "blessed" built-in images (Node.js, Python, etc.) that include SSH by default, custom Docker images require explicit SSH configuration following Azure's specific requirements.
Architecture Overview
Here's how SSH access flows through the Azure deployment:
GitHub Actions (CI/CD)
        ↓
Azure Container Registry (ACR)
        ↓
Azure App Service (Linux)
        ↓
Custom OCaml Container
 ├── OpenSSH Server (Port 2222)
 ├── Environment Variable Export
 ├── Migration Binary (migrate.exe)
 └── Web Application (chaufr)
        ↓
Azure PostgreSQL Flexible Server
Key Accomplishments
1. Multi-Stage Docker Build with SSH Support
Challenge: Build an OCaml container that supports both the application and SSH access while keeping the final image small.
Solution: Implemented a multi-stage Dockerfile with SSH integration and proper file ownership.
Complete Dockerfile Structure
# --- Build stage ---
FROM ocaml/opam:debian-12-ocaml-5.3 AS builder
WORKDIR /app
# System dependencies - Build tools for Azure Container Registry
RUN sudo apt-get update && sudo apt-get install -y --no-install-recommends \
        libpq-dev \
        pkg-config \
        ca-certificates \
        git \
        libcurl4-openssl-dev \
        libffi-dev \
        libgmp-dev \
        libev-dev && \
    sudo rm -rf /var/lib/apt/lists/*
# Copy opam metadata early for caching
COPY dune-project chaufr.opam ./
# Install OCaml dependencies
RUN eval $(opam env) && \
    opam update && \
    opam pin add -y simple_dotenv.1.0.0 git+https://github.com/Lomig/simple_dotenv.git#05ef4a35eff29784abc3f454ee36163f3ae48747
RUN eval $(opam env) && opam install -y curl ocurl.transition
RUN sudo apt-get update && \
    sudo apt-get install -y --no-install-recommends libargon2-dev && \
    sudo rm -rf /var/lib/apt/lists/* && \
    eval $(opam env) && opam install -y --deps-only .
# Copy source and build
COPY . .
RUN eval $(opam env) && dune build --profile=release ./bin/main.exe ./bin/migrate.exe
# --- Runtime stage ---
FROM debian:bookworm-slim
# Runtime dependencies - Includes OpenSSH for Azure Portal SSH
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpq5 \
        ca-certificates \
        libcurl4 \
        libgmp10 \
        libev4 \
        libargon2-1 \
        openssh-server \
        dialog \
        binutils && \
    rm -rf /var/lib/apt/lists/*
# Create SSH directory and set root password for Azure Portal SSH
# Password must be exactly "Docker!" per Azure requirements
RUN mkdir -p /run/sshd && \
    echo "root:Docker!" | chpasswd
# Create profile.d directory for environment variable scripts
RUN mkdir -p /etc/profile.d && chmod 755 /etc/profile.d
# Non-root user for security
RUN useradd -m -d /home/appuser appuser
WORKDIR /home/appuser
# Copy binaries from builder stage with proper ownership
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/main.exe ./chaufr
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/migrate.exe ./migrate
# Strip debug symbols and set executable permissions
RUN strip ./chaufr ./migrate && \
    chmod +x ./chaufr ./migrate
# Copy SSH configuration and entrypoint script
COPY sshd_config /etc/ssh/
COPY --chown=appuser:appuser entrypoint.sh ./
RUN chmod +x ./entrypoint.sh
# Azure App Service configuration
ENV PORT=8080
EXPOSE 8080 2222
# Use entrypoint script to start SSH and application
ENTRYPOINT ["./entrypoint.sh"]
Key Design Decisions:
- Multi-stage build – Separates build tools (~800MB) from runtime (~150MB)
- Proper ownership – --chown=appuser:appuser ensures files are accessible
- Strip in runtime stage – Reduces binary size by 30-50% where we have permissions
- Both binaries – chaufr (web app) and migrate (database migrations)
- SSH requirements – Port 2222, specific password, OpenSSH server
2. Azure-Compliant SSH Configuration
Challenge: Azure App Service requires specific SSH configuration that differs from standard OpenSSH setups.
Solution: Created sshd_config following Azure's strict requirements.
SSH Configuration File
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
Azure Requirements:
- ✅ Port must be 2222 (not standard 22)
- ✅ Ciphers must include at least one of: aes128-cbc, 3des-cbc, or aes256-cbc
- ✅ MACs must include at least one of: hmac-sha1 or hmac-sha1-96
- ✅ Root login must be permitted
- ✅ Password authentication must be enabled
Security Note: The password Docker! is only accessible within Azure's private virtual network. External attackers cannot reach port 2222.
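Because a missing cipher or a commented-out PermitRootLogin silently breaks Portal SSH, it can be worth gating the image build on a quick grep check. This is a hypothetical pre-build sketch, not part of the actual pipeline: it writes a sample config to a temp file, but in practice `cfg` would point at the repo's real sshd_config.

```shell
# Check an sshd_config for Azure App Service's mandatory SSH settings.
# The sample below mirrors the config from this post; point `cfg` at
# your repository's sshd_config in a real pre-build hook.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Port 2222
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
PasswordAuthentication yes
PermitRootLogin yes
EOF

fail=0
for req in '^Port 2222$' '^PermitRootLogin yes$' '^PasswordAuthentication yes$' \
           'aes128-cbc' 'hmac-sha1'; do
    grep -q "$req" "$cfg" || { echo "MISSING: $req"; fail=1; }
done
[ "$fail" -eq 0 ] && echo "sshd_config meets Azure requirements"
rm -f "$cfg"
```

Running this in CI (and failing the build when `fail` is nonzero) catches the misconfiguration before the image ever reaches Azure.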
3. Entrypoint Script with Environment Variable Export
Challenge: Azure App Service environment variables are available to the main process but not to SSH sessions, breaking manual migration commands.
Solution: Created an entrypoint script that exports environment variables system-wide.
Entrypoint Script (entrypoint.sh)
#!/bin/sh
set -e
# Export Azure App Service environment variables to make them available in SSH sessions
# This creates a system-wide profile script that loads env vars for all shells
echo "Exporting environment variables for SSH access..."
printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//="/' | sed 's/$/"/' > /etc/profile.d/azure-env.sh
chmod +x /etc/profile.d/azure-env.sh
# Also export to /etc/profile for login shells (backward compatibility)
eval $(cat /etc/profile.d/azure-env.sh)
echo "Starting SSH ..."
service ssh start
# Run migrations if MIGRATE_ON_STARTUP is set
if [ "$MIGRATE_ON_STARTUP" = "true" ]; then
    echo "Running database migrations..."
    ./migrate up || echo "Migration failed, continuing anyway..."
fi
# Start the OCaml web application
echo "Starting Chaufr application..."
exec ./chaufr
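The final `exec ./chaufr` is load-bearing: `exec` replaces the shell with the application instead of forking a child, so the app keeps the entrypoint's PID and receives Azure's shutdown signals directly. A tiny standalone demonstration of that behavior (plain POSIX shell, nothing Azure-specific):

```shell
# `exec` replaces the current process rather than spawning a child,
# so the PID printed before exec and after exec is identical.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
before=$(printf '%s\n' "$pids" | sed -n 1p)
after=$(printf '%s\n' "$pids" | sed -n 2p)
if [ "$before" = "$after" ]; then
    echo "exec kept the PID: signals sent to it reach the final program"
fi
```

Without `exec`, the shell would linger as the parent and SIGTERM on shutdown would hit the shell, not the OCaml app.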
Environment Variable Export Strategy:
- System-wide availability – Export to /etc/profile.d/azure-env.sh
- SSH session access – Login shells source this file automatically
- Main process access – Eval'd into the current shell for app startup
- Optional auto-migrations – Set MIGRATE_ON_STARTUP=true in Azure if desired
Why This Matters: Without this, DATABASE_URL and other Azure environment variables are invisible in SSH sessions, causing migration commands to fail.
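The four-stage sed pipeline in the entrypoint is easy to misread, so here is a standalone rerun of the same stages on one sample line, using a hypothetical GREETING variable rather than a real Azure one. It shows why each stage exists: values with spaces and embedded quotes must survive being re-sourced.

```shell
# Re-run the entrypoint's four sed stages on a single NAME=value line:
# 1. turn NAME=value into `export NAME=value`
# 2. backslash-escape any embedded double quotes
# 3. insert an opening quote after the first `=`
# 4. append the closing quote
line='GREETING=hello "azure" world'
exported=$(printf '%s\n' "$line" \
  | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" \
  | sed 's/"/\\\"/g' \
  | sed '/=/s//="/' \
  | sed 's/$/"/')
echo "$exported"
# Prints: export GREETING="hello \"azure\" world"

eval "$exported"
echo "$GREETING"
# Prints: hello "azure" world
```

The round trip through `eval` reproduces the original value exactly, which is what an SSH login shell does when it sources /etc/profile.d/azure-env.sh.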
Technical Deep Dive
Data Flow: From Build to Production User
1. GitHub Push
   → Triggers GitHub Actions
2. Docker Build (Multi-stage)
   → Builder: Compile OCaml → migrate.exe, chaufr
   → Runtime: Install SSH, copy binaries, configure
3. Azure Container Registry
   → Image stored: chaufr:latest
4. Azure App Service
   → Pull image, start container
5. Container Startup (entrypoint.sh)
   → Export env vars → Start SSH → Start app
6. SSH Session (Azure Portal)
   → Source /etc/profile.d/azure-env.sh
   → Run: /home/appuser/migrate up
7. Database Migrations
   → Connect to Azure PostgreSQL
   → Apply schema changes
8. Production User Creation
   → Driver signup successful
File Ownership Architecture
Challenge Encountered: Initial attempts showed ls returning nothing despite find locating the binaries.
Root Cause: Files copied from the builder stage had incorrect ownership, making them invisible to ls for the current user.
Solution:
# Before (files invisible to appuser):
COPY --from=builder /app/_build/default/bin/main.exe ./chaufr
# After (proper ownership):
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/main.exe ./chaufr
Lesson Learned: Always use --chown flags when copying files into containers that run as a non-root user.
Azure PostgreSQL Extension Challenges
The pgcrypto Extension Blocker
Error Encountered:
ERROR: extension "pgcrypto" is not allow-listed for users in Azure Database for PostgreSQL
HINT: to learn how to allow an extension or see the list of allowed extensions
Root Cause: Azure PostgreSQL Flexible Server requires:
- Extensions to be allow-listed at the server level
- Admin privileges to create extensions in a database
Solution (Two-Step Process):
Step 1: Allow-list the Extension
# Enable pgcrypto in Azure PostgreSQL server parameters
az postgres flexible-server parameter set \
  --resource-group ocaml-chaufr \
  --server-name chaufr-pg \
  --name azure.extensions \
  --value pgcrypto,uuid-ossp
Output:
{
  "value": "pgcrypto,uuid-ossp",
  "source": "user-override",
  "allowedValues": "...,pgcrypto,...,uuid-ossp,...",
  "isConfigPendingRestart": false
}
Step 2: Create Extension as Admin
# Connect to database as admin
az postgres flexible-server connect \
  --name chaufr-pg \
  --admin-user postgres \
  --database-name chaufr \
  --interactive
# Inside psql session (run ONE command at a time):
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
\dx # List installed extensions
\q # Quit
Verification:
                        List of installed extensions
   Name    | Version | Schema |                  Description
-----------+---------+--------+-------------------------------------------------
 pgcrypto  | 1.3     | public | cryptographic functions
 uuid-ossp | 1.1     | public | generate universally unique identifiers (UUIDs)
Why This Restriction Exists
Azure PostgreSQL implements extension restrictions for security and stability:
- Prevents malicious or unstable extensions from affecting the database
- Ensures only vetted extensions run in shared infrastructure
- Requires explicit admin approval for security-sensitive extensions like pgcrypto
Running Migrations in Production
SSH Access via Azure Portal
Step 1: Navigate to SSH
Azure Portal β Your App Service β Development Tools β SSH β Click "Go"
Step 2: Load Environment Variables
# Source Azure environment variables (if not auto-loaded)
source /etc/profile.d/azure-env.sh
# Verify DATABASE_URL is available
echo $DATABASE_URL
# Output: postgresql://postgres:***@chaufr-pg.postgres.database.azure.com:5432/chaufr
Step 3: Initialize Migration System
/home/appuser/migrate init
Output:
No .env file to parse
Migration system initialized successfully
Step 4: Check Migration Status
/home/appuser/migrate status
Output:
No .env file to parse
Database Version: None
Applied Migrations: 0
Pending Migrations: 4
Step 5: Apply Migrations
/home/appuser/migrate up
Success Output:
No .env file to parse
All migrations completed successfully
Step 6: Verify Current Version
/home/appuser/migrate version
Output:
No .env file to parse
Current database version: 4_add_missing_columns
Migration Schema Applied
The successful migration created these tables:
- users - User accounts (car owners)
- drivers - Driver accounts
- rides - Ride booking records
- passwords - Password authentication (Argon2)
- passkeys - WebAuthn/Passkey credentials
- schema_migrations - Migration tracking
Production Validation: First User Created
The Moment of Truth:
After completing the migration setup, I tested user creation via the production API:
# Driver signup request to production Azure endpoint
POST https://chaufr-app.azurewebsites.net/auth/signup
{
  "name": "Test Driver",
  "email": "driver@example.com",
  "password": "securePassword123",
  "role": "driver"
}
Response:
{
  "success": true,
  "message": "User created and logged in",
  "user_id": "aea6ec28-24dd-42bd-809d-a5ca213ad9de"
}
Validation Confirmed:
- ✅ Database schema fully operational
- ✅ UUID generation working (pgcrypto)
- ✅ Password hashing functional (Argon2)
- ✅ Session creation successful
- ✅ Authentication flow complete
Database Verification:
SELECT id, name, email, created_at FROM users
WHERE id = 'aea6ec28-24dd-42bd-809d-a5ca213ad9de';
-- Result:
-- id: aea6ec28-24dd-42bd-809d-a5ca213ad9de
-- name: Test Driver
-- email: driver@example.com
-- created_at: 2025-11-21 22:15:43
Houston, We Have Liftoff! 🚀
Troubleshooting Guide
Issue 1: Binaries Not Visible in SSH
Symptom:
root@container:/home/appuser# ls
# (empty output)
root@container:/home/appuser# ./migrate status
-bash: ./migrate: No such file or directory
But:
root@container:/home/appuser# find / -name migrate 2>/dev/null
/home/appuser/migrate
Diagnosis: File ownership mismatch - files owned by wrong user or have restrictive permissions.
Solution:
# Add --chown flags in Dockerfile
COPY --from=builder --chown=appuser:appuser /app/_build/default/bin/migrate.exe ./migrate
Verification:
ls -la /home/appuser/
# Should show:
# -rwxr-xr-x 1 appuser appuser 4927960 Nov 20 21:57 migrate
Issue 2: Environment Variables Not Available in SSH
Symptom:
/home/appuser/migrate status
# Error: Request to <postgresql://postgres:_@...> failed
# (password masked/empty)
Diagnosis: DATABASE_URL is not accessible in the SSH session.
Solution:
# Source the environment variables
source /etc/profile.d/azure-env.sh
# Verify
echo $DATABASE_URL
# Should show full connection string
# Try migration again
/home/appuser/migrate status
Permanent Fix: Ensure entrypoint.sh exports variables to /etc/profile.d/azure-env.sh as shown earlier.
Issue 3: pgcrypto Extension Error
Symptom:
ERROR: extension "pgcrypto" is not allow-listed for users
Solution: Follow the two-step process in "Azure PostgreSQL Extension Challenges" section above.
Issue 4: psql Meta-Command Syntax Error
Symptom:
postgres@chaufr-pg:chaufr> \dx
syntax error at or near "\"
Diagnosis: Running psql meta-commands in a multi-line SQL block.
Solution: Run each command separately:
-- Run this:
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Press Enter, wait for confirmation
-- Then run this:
\dx
-- Press Enter to see extension list
Key Lessons Learned
1. File Ownership Matters in Multi-Stage Builds
When copying files between Docker stages, always use --chown flags:
# ❌ Wrong (permission issues)
COPY --from=builder /app/binary ./binary
# ✅ Right (proper ownership)
COPY --from=builder --chown=appuser:appuser /app/binary ./binary
2. Azure SSH Requirements Are Non-Negotiable
Azure App Service SSH has specific requirements that must be followed exactly:
- Port 2222 (not 22)
- Password Docker! (not configurable)
- Specific cipher and MAC lists
- PermitRootLogin yes
Any deviation from these breaks SSH access.
3. Environment Variables Need Explicit Export
Azure environment variables are available to the main process but NOT to SSH sessions unless they are explicitly exported to /etc/profile.d/.
4. Azure PostgreSQL Extension Management
Extensions like pgcrypto require:
- Server-level allow-listing
- Admin-level creation
- One-time setup per database
Plan for this in your deployment documentation.
5. Multi-Stage Builds Save Space
Final image size comparison:
- Without multi-stage: ~800MB (includes compiler, build tools)
- With multi-stage: ~150MB (runtime only)
- Result: 81% size reduction
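The 81% figure follows directly from the two image sizes quoted above:

```shell
# Size reduction from dropping the build stage: (800 - 150) / 800
reduction=$(awk 'BEGIN { printf "%.0f", (800 - 150) / 800 * 100 }')
echo "${reduction}% smaller"
# Prints: 81% smaller
```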
6. Stripping Binaries in Runtime Stage
OCaml's opam user in the builder stage lacks write permissions to _build/:
- ❌ Strip in builder stage → Permission denied
- ✅ Strip in runtime stage → Works perfectly
7. Manual Migrations Are Fine for MVP
For early-stage products:
- Manual migrations via SSH are acceptable
- Provides full control and visibility
- Easy to verify before proceeding
- Automate when deployment frequency increases
Performance Considerations
Docker Image Optimization
Before Optimization:
- Single-stage build: 800MB
- Debug symbols included
- All build tools present
After Optimization:
- Multi-stage build: 150MB
- Debug symbols stripped (30% smaller binaries)
- Only runtime dependencies
Impact:
- Faster deployments to Azure
- Lower Azure Container Registry storage costs
- Quicker container startup times
SSH Performance
Overhead:
- SSH daemon: ~2MB memory
- Port 2222 listener: negligible CPU
- Environment export: one-time startup cost
Benefit:
- Zero downtime debugging
- Manual migration control
- Production incident response capability
Conclusion
Deploying OCaml containers to Azure App Service with SSH access requires careful attention to Azure's specific requirements, but the result is a production-ready system with full operational control. The ability to SSH into running containers, manage database migrations manually, and debug issues in real-time provides invaluable flexibility for early-stage products.
Key Takeaways:
- Multi-stage Docker builds keep production images lean while maintaining full build capability
- Azure SSH requirements are strict but well-documented; follow them exactly
- Environment variable export to /etc/profile.d/ enables SSH session access
- Azure PostgreSQL extensions require server-level allow-listing and admin creation
- Manual migrations via SSH are perfectly acceptable for MVP stages
- File ownership in Docker is critical for proper operation
The journey from local development to production deployment involved multiple hurdlesβfile ownership issues, environment variable visibility, Azure extension restrictionsβbut each challenge reinforced the importance of understanding the platform you're deploying to.
Current Status: Chaufr is live on Azure App Service, accepting driver signups, and ready for the next phase of development.
Happy deploying! 🚀
Shipped code beats perfect code, every time.