Securing the Monolith: CSRF, Redis, and the Driver Lifecycle
Building a secure authentication flow, optimizing for Azure, and the journey of reclaiming my mental model from AI agents.
Project: Chaufr – Personal drivers, on demand, in your own vehicle
Tech Stack: OCaml 5.3.0, Dream Framework, Azure Redis, PostgreSQL, GitHub Actions
Status: ✅ Production - Driver Flow & Security Hardening
This weekend was about moving from "it works" to "it is right." As I transition Chaufr from a prototype to a robust monolithic application, I tackled three critical pillars: security (fixing CSRF), architecture (the driver signup lifecycle), and infrastructure (Azure Redis and deployment resilience).
Here is how I secured the monolith and structured the data.
1. The CSRF Trap: Moving beyond ~csrf:false
In the early stages of development, specifically when testing with Postman or cURL, it is tempting to disable Cross-Site Request Forgery (CSRF) protection to reduce friction. I had this flag sitting in my handlers:
match%lwt Dream.form ~csrf:false request with
(* ... *)
While this allowed for rapid API testing, leaving it in a browser-based application is a critical vulnerability. It allows attackers to trick users into submitting forms (like "Create Driver") without their consent.
The Fix: Passing the Request
Fixing this wasn't just about changing false to true. It forced a better architectural flow. I realized that to generate a secure token, the View layer needed access to the Request object, something I hadn't been passing down.
The Pattern:
- Controller: Pass the request object to the render function.
- View: Use Dream.csrf_token request to generate the token.
- HTML: Inject it into a hidden input field.
(* In the View (driver_profile.ml) *)
(* Dream.csrf_token mints a per-session token; the form must echo it
   back under the field name "dream.csrf" for verification. *)
input
  [
    type_ "hidden";
    name "dream.csrf";
    value (Dream.csrf_token request);
  ];
This simple change rippled through auth.ml, login.ml, and signup.ml, locking down the entire application. Every write operation is now intentional and cryptographically verified.
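Concretely, the threading looks something like this. This is a sketch with illustrative module and route names, not Chaufr's actual handlers; note that Dream also ships a Dream.csrf_tag helper that renders the whole hidden input for you.

```ocaml
(* Sketch: the controller now hands [request] down to the view, so the
   view can mint the CSRF token. Names are illustrative. *)
let driver_profile_handler request =
  Dream.html (Driver_profile.render ~request)

let () =
  Dream.run
  @@ Dream.logger
  @@ Dream.memory_sessions  (* CSRF tokens are bound to the session *)
  @@ Dream.router [ Dream.get "/driver/profile" driver_profile_handler ]
```

The design point is simply that `request` becomes an explicit parameter all the way down, rather than something the view conjures from globals.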
2. Architecting the Driver Flow
With security locked in, I moved to the core business logic: The distinction between a User (Car Owner) and a Driver.
I didn't want a fragmented database, so I opted for a linked approach using Foreign Keys.
The "Fork in the Road" UX
The signup process now handles two distinct paths:
- Car Owner: Completes the standard signup and provides a vehicle_description.
- Driver: Skips the vehicle description. The system detects this intention and redirects them to a secondary Driver Profile form.
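In handler form, the fork can be sketched like this. Field names and redirect targets are illustrative, and it assumes lwt_ppx for match%lwt, as in the earlier snippet:

```ocaml
(* Sketch of the signup fork; routes and field names are illustrative. *)
let handle_signup request =
  match%lwt Dream.form request with  (* CSRF is verified by default *)
  | `Ok fields ->
    (match List.assoc_opt "vehicle_description" fields with
     | Some v when String.trim v <> "" ->
       (* Car owner: vehicle supplied, finish the standard signup. *)
       Dream.redirect request "/dashboard"
     | _ ->
       (* Driver: no vehicle given, continue to the driver profile form. *)
       Dream.redirect request "/driver/profile")
  | _ -> Dream.empty `Bad_Request
```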
Database Implications
To support this, I modified the drivers table to enforce strict ownership:
ALTER TABLE drivers
ADD CONSTRAINT fk_user
FOREIGN KEY (user_id) REFERENCES users (id);
This keeps a single user identity while allowing specific accounts to "upgrade" to driver status. I also added performance indexes on email (for login speed) and location (for the upcoming geospatial search feature).
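The index statements themselves are one-liners. The index names here are my own; the real migration may differ:

```sql
-- Login path: speeds up WHERE email = $1 lookups on users
CREATE INDEX idx_users_email ON users (email);

-- Upcoming geospatial search: filter drivers by location
CREATE INDEX idx_drivers_location ON drivers (location);
```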
3. Infrastructure: Azure Redis & Deployment Resilience
As traffic grows, relying on in-memory session storage isn't enough. I provisioned an Azure Redis resource to handle:
- Rate Limiting: Preventing abuse on login/signup endpoints.
- Session Storage: Decoupling state from the application container.
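The rate limiter is still on the roadmap (see Next Steps), but the plan follows the classic Redis fixed-window pattern: INCR a per-client counter, EXPIRE it for the window length, and reject once it passes the limit. Here is a minimal local illustration of just the counting logic, with a shell variable standing in for the Redis counter:

```shell
#!/bin/sh
# Fixed-window rate limiting, illustrated locally.
# In Redis this counter would be: INCR login:<ip>:<window>; EXPIRE ... 60
LIMIT=5
count=0

allow_request() {
  count=$((count + 1))            # stand-in for Redis INCR
  if [ "$count" -le "$LIMIT" ]; then
    echo "allowed"
  else
    echo "rate_limited"
  fi
}

# Six requests in one window: the sixth is rejected.
for i in 1 2 3 4 5 6; do
  allow_request
done
```

Redis makes the same logic safe across many app instances, because INCR is atomic and EXPIRE resets the window automatically.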
The "Heisenbug" in Deployment
Deploying this updated stack revealed a flaw in my GitHub Actions pipeline. My health check script was failing with a cryptic cat: response.txt: No such file or directory.
The issue? curl was failing to connect (because the app hadn't fully started or the DB was cold), exiting immediately, and never creating the output file.
The Robust Fix: I rewrote the deployment script to guarantee file existence and handle connection refusals gracefully:
# 1. Ensure file exists before writing
rm -f response.txt
touch response.txt
# 2. Run curl with a fallback
HTTP_STATUS=$(curl -s --max-time 10 -o response.txt -w "%{http_code}" "$APP_URL/health" || echo "000")
# 3. Read safely
cat response.txt
This small change ensures the CI/CD pipeline records a clear "000" status while the application (and the database) are still waking up, instead of crashing on a missing file before the health check can even report a result.
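To actually wait for a cold start rather than fail on the first "000", a retry loop around the check works well. This is a sketch of that pattern, not my actual pipeline; fake_health stands in for the real curl call and is scripted to fail twice before returning 200:

```shell
#!/bin/sh
# Retry wrapper around a health check.
# fake_health simulates the curl probe: fails twice, then reports 200.
attempts=0
STATUS="000"

fake_health() {
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then
    STATUS="000"   # connection refused / app still starting
  else
    STATUS="200"   # app is up
  fi
}

for i in 1 2 3 4 5; do
  fake_health
  [ "$STATUS" = "200" ] && break
  sleep 0          # the real pipeline would sleep a few seconds here
done

echo "$STATUS"
```

In CI, the loop body would be the curl line from above, with a real sleep between attempts and a final non-zero exit if the status never reaches 200.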
Conclusion: Reclaiming the Mental Model
I recently uninstalled my "Agentic AI" tools. While they wrote code fast, I realized I was losing the mental model of my own codebase.
Debugging the CSRF token flow today proved why that was the right decision. By manually threading the request object through the controllers and views, I now understand exactly how data flows through Chaufr. I understand the Why, not just the How.
We are now live with a secure, indexed, and architecturally sound foundation.
Next Steps:
- Implement the Redis-backed Rate Limiter in Azure.
- Build the "Request a Ride" frontend and backend.
Slow is smooth, and smooth is fast.