Building a Self-Service App for a Nationwide ISP: What Shaped the Product

How we shipped an ISP subscriber self-service app for a nationwide operator in Bangladesh — the bets we made on bilingual UI, mock-mode parallel tracks, and treating support-call deflection as the only metric that mattered.


The wall every ISP eventually hits

Every ISP in Bangladesh eventually hits the same wall: the support desk becomes the product. Customers call to check their balance, call again to confirm a payment went through, call a third time to raise a complaint they already raised last week. For one of the country's prominent nationwide ISPs, the cost of that pattern was measured in two things we cared about — support headcount that scaled linearly with subscribers, and a customer experience that felt stuck in 2012 while the rest of the consumer internet moved on.

This post is the product-and-delivery view of what it took to ship a subscriber self-service app for that operator. What we bet on, what we got wrong, and the decisions we'd make the same way again.

The problem we were actually solving

It is tempting to frame a project like this as "build a mobile app." That framing is a trap. The real problem statement, in plain language, was:

A nationwide subscriber base that pays monthly, uses the product every day, and currently depends on a human being picking up a phone to transact with us.

Once the problem is stated that way, the scope decisions become easier. Anything the app does not do, a human on a phone line still has to do. So the question for every feature debate became: does this remove a reason to call support? If yes, it belongs in v1. If no, it can wait.

[Diagram: candidate features — promotional banners, referral program, gamification, chatbot, dashboard, pay bill / history / usage, complaints / connection — pass through the "removes a reason to call?" filter; the v1 ship list that emerges is Dashboard, Pay Bill, Payment History, Usage History, Complaints, New Connection, and Profile / Deletion.]

One filter, applied ruthlessly. Promotional banners, referral programs, gamification, and a chatbot were all proposed and all cut.

The features that survived that filter in the first release:

  • Dashboard — balance, active package, expiry date, connection status, usage. The single "did I pay my bill, am I about to be cut off" screen.
  • Pay Bill — the self-service moment of truth. A customer should never have to speak to a human to renew a subscription they already have.
  • Payment History — because the first thing a customer asks after paying is "did it actually go through."
  • Usage History — the second most common call after billing.
  • Complaints — structured ticket creation with categories, so the support team stops transcribing phone calls into a CRM.
  • New Connection — lead capture, so the sales funnel lives in the same app as the retention funnel.
  • Profile / password change / account deletion — table stakes for any modern app, and in the case of account deletion, a non-negotiable store requirement.

Nothing else made v1. No promotional banners, no referral program, no gamification, no chatbot. The discipline of refusing features in v1 was the single most important product decision on this project.

Bet #1: bilingual from day zero, not bolted on later

English-only would have been faster. It would also have been wrong. For a subscriber base that spans the country — government, institutional, residential customers who speak Bangla at home and at work — an English-only UI is a friction tax paid every single session.

We decided the app would ship with Bangla and English parity from the first build. Every label, every toast message, every error string would exist in both languages, loaded through a single translation map with a persistent user preference. Screens would be designed and QA'd in Bangla, not translated from English as an afterthought.

The cost of that choice was roughly 10–15% more UI work per screen and a hard rule that no new string could be hard-coded. The payoff was that the app reads naturally to the customer who actually uses it, and the Bangla experience is never "v2 someday."
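The "single translation map with a persistent user preference" can be sketched as below. This is a minimal illustration, not the app's actual code: the keys, the Bangla strings, and the `setLang`/`t` names are all placeholders, and real persistence would go through the platform's storage layer rather than an in-memory variable.

```typescript
// Sketch of one translation map + persisted language preference.
// All keys and strings here are illustrative examples.
type Lang = "bn" | "en";

const STRINGS: Record<string, Record<Lang, string>> = {
  pay_bill: { en: "Pay Bill", bn: "বিল পরিশোধ" },
  usage_history: { en: "Usage History", bn: "ব্যবহারের ইতিহাস" },
};

let currentLang: Lang = "bn"; // default to Bangla

function setLang(lang: Lang): void {
  currentLang = lang;
  // In a real app this would also write to persistent storage
  // (AsyncStorage, SharedPreferences, etc.) so the choice survives restarts.
}

// Every label goes through t(); the "no hard-coded strings" rule means
// a screen never contains a literal user-facing string.
function t(key: string): string {
  const entry = STRINGS[key];
  if (!entry) return key; // fail soft: show the key rather than crash
  return entry[currentLang];
}
```

The fail-soft fallback matters in practice: a missing translation should degrade to a visible key that QA can spot, not a runtime error.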

Bet #2: mobile-first, not mobile-also

A competing proposal was "web portal first, mobile app when we have time." We killed that. Our customers are already on their phones. The web portal would have been a concession to our own convenience — it's faster for a web team to ship than for a mobile team — and a disservice to the actual user.

The decision let us invest in the things that matter on mobile and only on mobile: offline-aware UI, native payment handoff, biometric-class session persistence, push-ready analytics. A web-first strategy would have forced a lowest-common-denominator product.

[Image: completing a mobile payment in-app. Caption: Pay Bill is the self-service moment of truth. If a customer cannot complete a renewal in the app, the rest of the product does not matter.]

Bet #3: decouple the mobile team from the backend team

This was the single biggest delivery accelerator.

The backend for a national ISP is not a greenfield Rails app — it is a real operational system with billing reconciliation, a live network operations centre, and integrations with a payment aggregator. Any plan that said "the mobile team waits for the backend team to finish endpoint X before starting screen X" would have added weeks of idle time for every endpoint.

Instead, we decided up front that the mobile app would ship with a complete mock data layer behind a feature-flag switch. The mobile team defined the API contract, wrote it up as a Postman collection and a markdown spec, and handed it to the backend team. Mobile development then proceeded against mocks. When a backend endpoint went live, flipping a single environment variable pointed the mobile app at real data for that feature.

[Diagram: a shared API contract (Postman collection + markdown spec) feeds two parallel tracks — mobile (build screen → mock data → QA + ship) and backend (implement endpoint → reconcile billing → deploy) — joined at the end by a feature-flag flip: USE_MOCKS=false → real data.]

Two tracks running in parallel against a shared contract. A single env var collapses the gap when each endpoint is ready.

Practically, this meant the mobile team was never blocked on backend, and the backend team was never guessing at a schema. The two tracks ran in parallel and met in the middle.
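The mechanics of the switch are simple enough to sketch. Everything here is a hypothetical shape, not the real codebase: the `USE_MOCKS` variable name, the endpoint URL, and the `Balance` type are illustrative, but the structural point is real — screens depend on one `api` object and never know which implementation they got.

```typescript
// Sketch of the mock/real data layer behind a single flag.
interface Balance {
  amount: number;
  currency: string;
}

// Mocks on by default; flipping USE_MOCKS=false points at real data.
const USE_MOCKS = process.env.USE_MOCKS !== "false";

const mockApi = {
  // Deterministic fake data, defined by the shared API contract.
  getBalance: async (): Promise<Balance> => ({ amount: 1200, currency: "BDT" }),
};

const realApi = {
  getBalance: async (): Promise<Balance> => {
    const res = await fetch("https://api.example.com/v1/balance"); // placeholder URL
    return res.json();
  },
};

// Screens consume `api`; the flag decides the implementation once, at startup.
const api = USE_MOCKS ? mockApi : realApi;
```

Because both objects satisfy the same contract, the flip is a one-line config change per environment rather than a code change per feature.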

If I had one piece of advice for any leader shipping a mobile app against a legacy backend: force the mock-mode architecture in week one. Do not let "we'll add it later" win that argument.

Bet #4: treat app store compliance as a product requirement, not a release blocker

Account deletion is the obvious example. Google Play now requires an in-app path for users to request deletion of their account and associated data. If you treat that as a last-minute compliance item, it becomes a panicked scramble right before launch that touches auth, analytics, and backend all at once.

We scoped it in as a first-class feature from the start: its own screen, its own confirmation flow, its own backend endpoint to capture the request, and an explicit hook into the analytics layer to reset the on-device analytics ID so we don't continue attributing events to a user who asked to leave. A privacy policy screen shipped alongside it.

Treating compliance as a feature — with a spec, a design, and a QA pass — is slower on paper and dramatically faster in practice, because nothing blocks the submission.

What we got wrong

We under-scoped the payment redirect flow. The payment aggregator redirects through multiple hosts depending on the channel (card, mobile wallet, bank). The first cut of the app only knew about one host, and customers ended up stranded on a confirmation page that the app failed to recognize as "payment complete." Fixing this meant treating the redirect host list as a config value, not a constant, so we could add hosts without shipping a new build. Lesson: anything that talks to a third party is configuration, not code.
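The fix amounts to a host allowlist the app can update without a release. A hedged sketch, with placeholder host names standing in for the aggregator's actual channels:

```typescript
// Sketch: the "payment complete" redirect hosts as updatable config,
// not a compiled-in constant. Hostnames below are placeholders.
let completionHosts = ["pay.example-aggregator.com"];

// Called after fetching remote config at app start; adding a new
// card/wallet/bank host becomes a config push, not an app release.
function updateCompletionHosts(hosts: string[]): void {
  completionHosts = hosts;
}

// The WebView navigation handler asks one question:
// did we land on a known completion host?
function isPaymentComplete(url: string): boolean {
  try {
    return completionHosts.includes(new URL(url).hostname);
  } catch {
    return false; // malformed URL: not a completion page
  }
}
```

The same pattern generalizes to the lesson stated above: any value a third party controls — hosts, deep-link schemes, callback paths — belongs in config.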

We deferred offline UX too long. "The app is useless without internet" sounds true until you realize half of Bangladesh's mobile users drop to 2G in specific buildings or neighbourhoods. An offline banner and graceful loading skeletons should have been in v1, not v1.1. Customers are forgiving of slowness; they are unforgiving of a white screen.

We let the first login flow get too clever. The first version tried to auto-detect connection status, prefetch packages, and pre-calculate the next bill before showing the dashboard. It looked great when it worked. It looked like a broken app when any of those three calls was slow. The fix was to show the dashboard immediately with skeleton placeholders and let each card resolve independently.

[Diagram: before — login runs connection status, packages, and next bill in series, and the dashboard only paints after all three, so any one slow call delays everything and the app looks broken; after — login paints the dashboard immediately with skeleton placeholders, and the connection status, packages, and next bill cards each resolve independently.]

Progressive disclosure beats perfect disclosure. Skeleton placeholders buy you the patience the network sometimes can't.
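The "each card resolves independently" pattern is small enough to show in full. This is an illustrative sketch, not the app's real code — `CardState`, `loadCard`, and the fetcher names are all hypothetical:

```typescript
// Sketch of progressive disclosure: render a skeleton immediately,
// swap in data when that card's own fetch resolves.
type CardState<T> = { loading: true } | { loading: false; data: T };

async function loadCard<T>(
  fetcher: () => Promise<T>,
  render: (state: CardState<T>) => void,
): Promise<void> {
  render({ loading: true }); // skeleton paints before any network wait
  const data = await fetcher(); // one slow call no longer blocks the others
  render({ loading: false, data });
}

// The dashboard fires one loadCard per card; none of them wait on each other:
// loadCard(fetchConnectionStatus, renderStatusCard);
// loadCard(fetchPackages, renderPackagesCard);
// loadCard(fetchNextBill, renderBillCard);
```

The contrast with the original flow is that there is no `await` chaining the three fetches together — the dashboard's paint time is decoupled from the slowest endpoint.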

What we'd do the same way again

  • Ruthless v1 scope. Every feature that didn't directly remove a support call got cut.
  • Bilingual from commit one. Never to be retrofitted.
  • Mock-mode parallel tracks. Mobile and backend delivered independently.
  • Compliance as product. Deletion, privacy policy, analytics hygiene treated as features.
  • Cross-platform from one codebase. One team, one language, two app stores.

The honest scorecard

We did not set out to build a beautiful app. We set out to remove a reason to call support. The feature set is narrow on purpose, the UI is conservative on purpose, and the roadmap is deliberately short. The success metric we care about is simple: every month, a greater share of routine subscriber interactions should complete inside the app and never touch the call center.

That's it. That's the whole project, stated in one sentence, and it is the sentence we kept coming back to every time someone proposed adding something that would have been nice to have but did not move that number.

Build that discipline into the room before you write a single line of code, and the rest of the decisions get much easier.

Written by Fatimatuj Johora, Chief Operating Officer at Fionetix Solutions, overseeing operations, delivery, and customer success across all product lines.
