The problem we were asked to solve
A bank has hundreds of borrowers in default. Each defaulted loan eventually becomes a legal case: a notice goes out, a panel lawyer takes it on, the case moves through stages — pre-litigation, suit filed, decree, execution — and somewhere along the way, the bank has to know exactly where every case stands, what's owed, who's handling it, and what the next hearing date is.
When we first started talking to the legal departments at our financial-sector clients, this was the workflow: a physical register, an Excel sheet that someone updates on Mondays, and a folder per borrower in a steel cabinet. Reports for the board were prepared by walking to the cabinet, opening the folders, and typing the numbers into a Word document. Approvals for new cases were paper-based, signed in person, and stored — also in the cabinet.
We built LDMS — our Litigation & Documentation Management System — to replace that workflow. This post is about the design decisions that mattered, the ones we got right on the first pass, and the ones we changed after the first deployment.
The constraint that shaped everything: maker–checker
You cannot build software for a bank's legal department without internalizing one rule: no single user can both create and activate a record that affects money or legal status. This is the maker–checker pattern, and in banking it's not a nice-to-have. It's how the institution proves to its regulator that no individual employee can act unilaterally on a borrower's case.
In LDMS, every meaningful action — creating a case, updating a stage, marking an outstanding amount, assigning a lawyer — goes through the same shape:
- Maker drafts the action. Nothing is live yet.
- Checker sees the pending action in their queue, reviews it, and either approves or rejects with a comment.
- On approval, the action becomes part of the case record and triggers downstream effects (notifications, audit log entry, dashboard update).
- On rejection, the action is recorded as rejected — including the reason — and the maker is notified.
Every action — whether approved or rejected — leaves an entry in the audit log. The maker and checker can never be the same user.
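The four steps above can be sketched as a tiny state machine. This is an illustrative model, not the real LDMS code — names like `PendingAction`, `draft`, and `review` are assumptions — but it captures the shape: a draft creates a pending record and an audit entry, and a review either approves or rejects, always logging who did what.

```typescript
type ActionState = "pending" | "approved" | "rejected";

interface PendingAction {
  id: number;
  caseId: string;
  makerId: string;
  payload: Record<string, unknown>; // the drafted change; nothing is live yet
  state: ActionState;
  checkerId?: string;
  comment?: string; // required on rejection
}

interface AuditEntry {
  actionId: number;
  event: ActionState;
  userId: string;
  comment?: string;
}

const auditLog: AuditEntry[] = [];
let nextId = 1;

// Maker drafts the action: a pending record plus an audit entry.
function draft(makerId: string, caseId: string, payload: Record<string, unknown>): PendingAction {
  const action: PendingAction = { id: nextId++, caseId, makerId, payload, state: "pending" };
  auditLog.push({ actionId: action.id, event: "pending", userId: makerId });
  return action;
}

// Checker approves or rejects; every outcome lands in the audit log.
function review(action: PendingAction, checkerId: string, approved: boolean, comment?: string): PendingAction {
  if (checkerId === action.makerId) throw new Error("403: maker and checker must be different users");
  if (action.state !== "pending") throw new Error("409: action already reviewed");
  if (!approved && !comment) throw new Error("400: rejection requires a reason");
  const state: ActionState = approved ? "approved" : "rejected";
  auditLog.push({ actionId: action.id, event: state, userId: checkerId, comment });
  return { ...action, state, checkerId, comment };
}
```

Note that the guards run before anything is written: a blocked self-review leaves no trace on the action and the rejection reason travels into the audit log, not just the action row.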
This sounds simple. It is not. A few things we learned the hard way:
- The pending state is a first-class entity. A pending update is not just "the next version of the case." It's its own row, with its own permissions, its own history, and its own lifecycle. If you model it as "the case, but draft," you'll write inconsistent queries forever.
- Maker and checker can never be the same user, and the system has to enforce this in code. We had a bug for two weeks where it was almost enforced — the UI hid the approve button if you were the maker, but the API didn't check. A determined user with a REST client could approve their own work. Now the API rejects it explicitly with a 403, and we have a test that tries to do exactly that and asserts the rejection.
- Rejected actions need to be visible. The first version of the system silently dropped rejections from the UI. Auditors flagged it. They want to see every rejected action, who rejected it, and why — that's part of the audit trail too.
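The self-approval lesson is worth a concrete sketch: the check has to live in the API handler, not the UI, and the regression test does exactly what the determined REST-client user would. The handler shape here is illustrative, not the real LDMS endpoint.

```typescript
// Minimal server-side guard: the API rejects self-approval regardless
// of what the UI hides. Request/response shapes are hypothetical.
interface ApproveRequest {
  actionId: string;
  makerId: string; // who drafted the action
  userId: string;  // who is calling the approve endpoint
}

interface ApiResponse {
  status: number;
  body: string;
}

function handleApprove(req: ApproveRequest): ApiResponse {
  if (req.userId === req.makerId) {
    // Enforced in code, not just hidden in the UI.
    return { status: 403, body: "maker cannot approve their own action" };
  }
  return { status: 200, body: "approved" };
}

// The regression test: try to approve your own work, assert the rejection.
const selfApproval = handleApprove({ actionId: "a1", makerId: "u7", userId: "u7" });
if (selfApproval.status !== 403) throw new Error("self-approval must return 403");
```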
The dynamic form engine, and why it was the right call
When we scoped the project, the client gave us a list of fields for each borrower type — Individual, Proprietor, Partnership, Company. About sixty fields total. We coded them. We deployed.
Two weeks later, the bank's legal head came back with three new fields they wanted to capture for partnerships. Two weeks after that, regulatory guidance changed and a new mandatory field appeared for company borrowers. Two weeks after that, internal policy added a separate validation rule for one of the existing fields.
We were heading toward a future where every form change required a code deploy, and every code deploy required a UAT cycle and a maintenance window. That's not a tenable model for legal software, where the rules genuinely change.
So we built a dynamic form engine. The fields on each form (borrower creation, case creation, case update) are defined in a database table, not in code. A System Admin can add a field, set its type (text, number, date, dropdown), mark it required, and it appears in the form on the next page load. The data lands in a structured JSON column on the relevant entity, with the field schema versioned alongside the data.
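To make the idea concrete, here is a minimal sketch of what a row in that field-definition table might carry and how per-field validation falls out of it. Field names, types, and the `FieldDef` shape are assumptions for illustration, not the actual LDMS schema.

```typescript
type FieldType = "text" | "number" | "date" | "dropdown";

// One row of the field-definition table, versioned alongside the data.
interface FieldDef {
  name: string;
  type: FieldType;
  required: boolean;
  options?: string[];    // dropdown choices
  schemaVersion: number;
}

// Declarative checks: required-ness, type, and basic format.
function validateValue(def: FieldDef, value: unknown): string | null {
  if (value == null || value === "") {
    return def.required ? `${def.name} is required` : null;
  }
  switch (def.type) {
    case "number":
      return typeof value === "number" ? null : `${def.name} must be a number`;
    case "date":
      return isNaN(Date.parse(String(value))) ? `${def.name} must be a date` : null;
    case "dropdown":
      return def.options?.includes(String(value)) ? null : `${def.name} must be one of: ${def.options?.join(", ")}`;
    default:
      return null; // text accepts any non-empty value
  }
}

function validateForm(defs: FieldDef[], data: Record<string, unknown>): string[] {
  return defs.map((d) => validateValue(d, data[d.name])).filter((e): e is string => e !== null);
}
```

An admin adding a field means inserting one more `FieldDef` row; the form and its validation pick it up on the next load with no deploy.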
The honest tradeoffs:
- Querying gets harder. "Find all partnership borrowers with outstanding > 10M" used to be a clean SQL query. Now half the answer lives in a JSON column. We solved this for the common cases by promoting frequently-queried fields to first-class columns and indexing them. The long tail still goes through JSON queries, which are fine on modern PostgreSQL but require some care.
- Validation is more complex. Required-ness, type, and basic format checks are easy to express declaratively. Cross-field rules ("if borrower type is Company, then GST number is required") needed a small rules engine. We kept it minimal on purpose — anything more sophisticated and you're shipping a programming language to your admins, which is not a good thing.
- The admin UI is the hardest part. Building the form was easy. Building the form-that-builds-forms was the actual project.
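The "small rules engine" from the validation bullet can be sketched in a few lines: a rule is data, not code, so an admin can add one without a deploy, and the vocabulary is deliberately tiny — a condition on one field implies a requirement on another. All names here are illustrative.

```typescript
// A cross-field rule stored as data, e.g. in the same table family
// as the field definitions.
interface CrossFieldRule {
  whenField: string;    // field the condition looks at
  equals: string;       // value that triggers the rule
  thenRequired: string; // field that becomes mandatory
}

function applyRules(rules: CrossFieldRule[], data: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const rule of rules) {
    const conditionMet = data[rule.whenField] === rule.equals;
    const value = data[rule.thenRequired];
    if (conditionMet && (value == null || value === "")) {
      errors.push(`${rule.thenRequired} is required when ${rule.whenField} is ${rule.equals}`);
    }
  }
  return errors;
}

// "If borrower type is Company, then GST number is required."
const companyGstRule: CrossFieldRule[] = [
  { whenField: "borrowerType", equals: "Company", thenRequired: "gstNumber" },
];
```

Keeping the rule shape this small is the point: anything richer (nested conditions, expressions) is where you start shipping a programming language to your admins.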
If the client had been okay with quarterly form changes, we'd have skipped this and saved a month. They weren't, and we're glad we built it.
The reporting requirement nobody scopes correctly
Every RFP for litigation software has a line that says "the system shall generate reports for management." Nobody scopes what that means. Then deployment week happens and you discover the bank needs:
- A monthly board report in a specific format with a specific cover page and a specific signature block.
- A regulator filing in a different format with different fields aggregated differently.
- A weekly stage-distribution report that goes to the head of legal in their inbox at 9am every Monday.
- An ad-hoc report any officer can build by picking columns from a list.
These are four completely different problems. We solved them as four separate features, not one report engine, and that was the right call.
For the structured reports (board, regulator), we built fixed templates with the institution's letterhead, exported as both PDF and CSV. For the stage-distribution one, we built a scheduled job that runs every Monday and emails the result. For the ad-hoc one, we built a column picker over the case dataset that exports CSV.
Trying to unify these would have produced one system that does all four poorly. Building them separately took a week longer and produced four things that each do their job.
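The ad-hoc path is the simplest of the four, and its core fits in a sketch: pick columns, escape values, emit CSV. The `CaseRow` shape and column names are illustrative, not the real case dataset.

```typescript
// Illustrative slice of the case dataset the column picker exposes.
interface CaseRow {
  caseNo: string;
  borrower: string;
  stage: string;
  outstanding: number;
  nextHearing: string;
}

// Quote a value if it contains a comma, quote, or newline (CSV convention).
function csvEscape(value: unknown): string {
  const s = String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// The officer's picked columns become the header and the projection.
function exportCsv(rows: CaseRow[], columns: (keyof CaseRow)[]): string {
  const header = columns.join(",");
  const body = rows.map((r) => columns.map((c) => csvEscape(r[c])).join(","));
  return [header, ...body].join("\n");
}
```

The escaping function is the part people skip and regret: borrower names with commas in them show up on day one.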
Notifications: email is required, SMS is the one that matters
Banking workflows in our market live on SMS. People check their email a few times a day; they look at their phone every few minutes. So while LDMS sends both, the SMS channel is the one that drives action.
Two patterns that made notifications actually useful instead of noise:
- Notify on the transition, not the state. "Case moved from suit-filed to decree" is a notification. "Case is in decree stage" is not — it should already be on your dashboard. We send notifications on transitions only, which keeps volume sane.
- Per-user notification preferences, with sensible defaults. Recovery officers want to know about cases they own. The head of legal wants to know about high-value cases regardless of owner. The CFO wants the weekly digest, not individual events. We built preferences early because retrofitting them was going to be painful.
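Both patterns can be shown in one small function: nothing fires unless the stage actually changed, and the fan-out is filtered by each user's preferences. The snapshot and preference shapes are illustrative assumptions, not the real LDMS types.

```typescript
type Stage = "pre-litigation" | "suit-filed" | "decree" | "execution";

interface CaseSnapshot {
  caseId: string;
  ownerId: string;
  stage: Stage;
  outstanding: number;
}

// Per-user preference: own cases only, and/or a value threshold.
interface Preference {
  userId: string;
  ownedCasesOnly: boolean;
  minOutstanding: number; // 0 = notify on everything
}

function notifyOnTransition(before: CaseSnapshot, after: CaseSnapshot, prefs: Preference[]): string[] {
  if (before.stage === after.stage) return []; // state, not transition: no notification
  return prefs
    .filter((p) => (!p.ownedCasesOnly || p.userId === after.ownerId) && after.outstanding >= p.minOutstanding)
    .map((p) => `${p.userId}: case ${after.caseId} moved ${before.stage} -> ${after.stage}`);
}
```

The first line is the volume control; the filter is the routing. Everything else (SMS vs email, digests) layers on top of the same transition events.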
A note on the stack: .NET + React, not Django
Most of what Fionetix ships runs on Django. LDMS is on ASP.NET Core with a React frontend. Two reasons:
- The client's IT team standardized on .NET for new internal applications. Handover and long-term maintenance considerations dominated the technology choice.
- ASP.NET Core's built-in support for things banks care about — Windows authentication if needed, structured logging, strong typing all the way through — is genuinely good. We didn't have to fight the framework on any of the regulatory-control work.
The React frontend was straightforward — component-driven, TypeScript throughout, talking to a REST API. The interesting frontend work was on the dynamic form renderer, which renders forms from the backend's field schema, and on the maker–checker queues, which need to update reactively when other users approve or reject.
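The renderer's core is a mapping from the backend's field schema to the input each field becomes. Shown here as a pure function returning descriptors rather than JSX so it stands alone; the shapes and names are illustrative, not the actual components.

```typescript
// The schema the backend serves for each form.
interface FieldSchema {
  name: string;
  type: "text" | "number" | "date" | "dropdown";
  required: boolean;
  options?: string[];
}

// What the React layer would turn into an <input> or <select>.
interface InputDescriptor {
  element: "input" | "select";
  props: Record<string, unknown>;
}

function renderField(field: FieldSchema): InputDescriptor {
  if (field.type === "dropdown") {
    return {
      element: "select",
      props: { name: field.name, required: field.required, options: field.options ?? [] },
    };
  }
  // text, number, and date map straight onto HTML input types.
  return {
    element: "input",
    props: { name: field.name, type: field.type, required: field.required },
  };
}
```

In the real component tree this mapping sits inside a form component that also wires up state and validation, but the schema-to-element dispatch is the piece that makes admin-added fields appear with no frontend change.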
What the time savings actually looked like
Here's the honest version of the impact, six months in:
| Workflow | Before | After | Notes |
|---|---|---|---|
| Finding a case file | 10–20 minutes | Under 30 seconds | This is the win users notice every single day. |
| Preparing the board report | 2–3 hours, monthly | 5–10 minutes | Fixed template + live data did most of the work. |
| Registering a new case | 45–60 minutes | 15–20 minutes | Faster, and it captures more structured data than before. |
| Approval routing | 1–2 days | Hours, sometimes minutes | Limited by the checker's availability now. |
The number we don't put in the table is reduced legal exposure from missed deadlines. We can't quantify it precisely, but the bank has not missed a hearing date since the system went live — which had been happening once or twice a quarter before. That's the result that matters most, and it's the hardest one to put in a slide.
What I'd do differently
If we were building LDMS again from scratch:
- Model the audit log first. We added it incrementally, and ended up backfilling. Modeling it from day one would have saved retrofitting work and made some early decisions cleaner.
- Build the regulator report before the board report. The regulator's format is non-negotiable; the board's format will change as soon as a new chairperson takes over. We built them in the wrong order.
- Don't promise "configurable everything." Configurability has a cost. We promised more of it than we should have, and ended up building admin UIs for fields that have not been edited once since launch. A list of which kinds of changes are admin-configurable and which require a deploy would have saved us a lot of work.
- Invest in the search experience early. Once a litigation database has a few thousand cases, search is the entire UX. We treated it as a feature; it should have been a foundation.
What's next
The next post in this series will be about the document management half of LDMS — how we handled court filings, sanction letters, and the question of whether to store originals in the database, on disk, or in object storage. (Spoiler: not all three, and not the one you'd guess.)
If you're building legal or case management software for a regulated environment and any of this resonates, we'd love to compare notes — and if you're a bank or NBFI thinking about replacing your litigation tracking, we'd love to talk.
