# Case Study: Building a Multi-Tier FX Fallback System Before Launch
We designed our exchange-rate pipeline for graceful degradation from day one. The goal was deterministic behavior under provider failures, rate limits, and partial outages, so users never see a broken conversion.
## Tier-by-Tier Resolution Pipeline
Every request follows the same ordered chain. We never skip ahead. Fixed order makes incidents easier to trace and reproduce.
| Tier | Entry Condition | Data Source | Cache/TTL Policy | Failure Behavior |
|---|---|---|---|---|
| Tier 0 | Request enters API route | Edge cache key by pair + timestamp bucket | 5 min edge TTL | Cache miss proceeds to Tier 1 |
| Tier 1 | Primary provider path | ECB/Frankfurter normalized feed | 5 min in-memory snapshot reuse | Provider/network failure goes to Tier 2 |
| Tier 2 | Primary path unavailable | Alternative Frankfurter/ECB path | Same normalization + schema guards | Invalid payload or timeout goes to Tier 3 |
| Tier 3 | Tiers 1 and 2 exhausted | TwelveData (quota-constrained) | Guarded by request budget + cooldown | Quota/rate-limit event goes to Tier 4 |
| Tier 4 | Live providers unavailable | Static fallback table | Versioned constants + stale marker | Response still returned with provenance |
| Post-Resolve | Rate selected | Unified response envelope | Includes source tier + timestamp | Client can show trust/freshness state |
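The ordered chain above can be sketched as a list of resolvers tried strictly in sequence. This is a minimal illustration, not the production code: the `Resolver` signature, tier names, and `resolveRate` helper are all assumptions for the sketch.

```typescript
// Illustrative sketch of the fixed-order resolver chain.
type Rate = { pair: string; rate: number; tier: string; asOf: string };
// A resolver returns a rate, returns null (e.g. cache miss), or throws.
type Resolver = (pair: string) => Rate | null;

// Tiers are tried strictly in this order; we never skip ahead.
function resolveRate(pair: string, tiers: [string, Resolver][]): Rate {
  for (const [name, tryTier] of tiers) {
    try {
      const hit = tryTier(pair);
      if (hit) return { ...hit, tier: name }; // tag with the resolving tier
    } catch {
      // Any tier failure (timeout, bad payload) falls through to the next tier.
    }
  }
  // With a static terminal tier in place, this should be unreachable.
  throw new Error(`no tier resolved ${pair}`);
}
```

Because the order is fixed, two identical requests under the same conditions always land on the same tier, which is what makes incidents reproducible.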
## Failure Modes and Technical Controls
Before launch, we focused on known fault classes and explicit controls rather than uptime percentages. Each failure mode maps to a specific mitigation in code.
| Failure Mode | Signal | Detection Logic | Mitigation | Residual Risk |
|---|---|---|---|---|
| Provider timeout | Latency spike / 5xx | Per-tier timeout budget + retry cap | Advance to next tier | Possible stale result |
| Schema drift | Missing/renamed fields | Runtime parse guards and numeric validation | Reject payload and fail over | Lower freshness under outage |
| Rate-limit pressure | 429 or quota depletion | Budget counters + cooldown window | Bypass constrained tier | Dependency on static fallback |
| Partial currency coverage | Pair unavailable upstream | Coverage map by provider | Cross-rate transform or next tier | Wider approximation window |
| Cold-start pressure | Burst traffic after idle | Warm cache check + deduped fetch | Single fetch fan-out | First request pays latency |
| Total upstream outage | All live tiers unavailable | Exhausted failover chain | Serve static with stale provenance | Accuracy degrades with outage duration |
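The budget-counter and cooldown guard for the quota-constrained tier can be sketched as below. The class name, limits, and the omission of a window reset are all simplifications for illustration; the real TwelveData quota handling is more involved.

```typescript
// Sketch of the request-budget + cooldown guard for a quota-constrained tier.
class TierBudget {
  private used = 0;
  private cooldownUntil = 0;

  // maxPerWindow / cooldownMs values are illustrative.
  constructor(private maxPerWindow: number, private cooldownMs: number) {}

  // Returns true if the tier may be called. A depleted budget starts the
  // cooldown so the resolver bypasses this tier until it elapses.
  // (Window reset is omitted for brevity.)
  tryAcquire(now: number): boolean {
    if (now < this.cooldownUntil) return false;
    if (this.used >= this.maxPerWindow) {
      this.cooldownUntil = now + this.cooldownMs;
      return false;
    }
    this.used++;
    return true;
  }

  // Called on an upstream 429: back off immediately.
  reportRateLimit(now: number): void {
    this.cooldownUntil = now + this.cooldownMs;
  }
}
```

While the guard reports false, the resolver simply treats Tier 3 as unavailable and advances to the static fallback.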
## Engineering Takeaways

### Determinism beats ad-hoc fallback
A fixed tier order gives repeatable outcomes and faster incident debugging. Two identical requests under the same conditions should resolve to the same source tier.
### Normalization is a reliability layer
Provider diversity is useful only if outputs are normalized into one strict schema. Parse guards and value sanity checks prevent silent data corruption.
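A parse guard of this kind might look like the sketch below. The field names mirror a Frankfurter-style payload (`base`, `date`, `rates`), but the function name and the exact checks are illustrative assumptions, not the production schema.

```typescript
type NormalizedRates = { base: string; asOf: string; rates: Record<string, number> };

// Runtime guard: reject any payload that drifts from the expected shape
// or carries non-finite / non-positive values, rather than passing it on.
function parseRates(payload: unknown): NormalizedRates | null {
  if (typeof payload !== "object" || payload === null) return null;
  const p = payload as Record<string, unknown>;
  if (typeof p.base !== "string" || typeof p.date !== "string") return null;
  if (typeof p.rates !== "object" || p.rates === null) return null;

  const rates: Record<string, number> = {};
  for (const [code, value] of Object.entries(p.rates as Record<string, unknown>)) {
    // Numeric sanity: finite and strictly positive, or the whole payload is rejected.
    if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) return null;
    rates[code] = value;
  }
  return { base: p.base, asOf: p.date, rates };
}
```

A `null` here triggers failover to the next tier, so schema drift degrades freshness instead of corrupting results.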
### Fallback without provenance is dangerous
Every resolved rate is tagged with source tier and timestamp. This lets the client communicate freshness and avoids showing stale values as if they're current.
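A provenance-tagged envelope can be sketched as follows. The tier names, the 15-minute staleness threshold, and the `makeEnvelope` helper are illustrative choices, not the real response contract.

```typescript
type RateEnvelope = {
  pair: string;
  rate: number;
  sourceTier: "edge-cache" | "primary" | "secondary" | "quota" | "static";
  asOf: string;    // ISO timestamp of the rate itself
  stale: boolean;  // derived, so the client can show a freshness state
};

function makeEnvelope(
  pair: string,
  rate: number,
  sourceTier: RateEnvelope["sourceTier"],
  asOf: string,
  now: number = Date.now()
): RateEnvelope {
  const ageMs = now - Date.parse(asOf);
  // Static-tier responses are always marked stale; live tiers after 15 minutes.
  const stale = sourceTier === "static" || ageMs > 15 * 60 * 1000;
  return { pair, rate, sourceTier, asOf, stale };
}
```

The client reads `sourceTier` and `stale` rather than guessing, so a rate from the static table is never rendered as a live quote.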
### Pre-launch reliability is testable without traffic
Even before public launch, synthetic fault injection (timeouts, 429s, malformed payloads) validates the chain and finds gaps in cache and provider logic.
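Fault injection against an in-process chain might look like this self-contained sketch: each simulated fault (timeout, 429, malformed payload) is a stand-in function, and the assertion is that the request still lands on the terminal static tier. All names here are illustrative.

```typescript
type Outcome = { tier: string; rate: number };

// Tiny in-process stand-in for the resolver chain, for fault-injection tests.
function runChain(tiers: Array<[string, () => number]>): Outcome {
  for (const [tier, fetchRate] of tiers) {
    try {
      const rate = fetchRate();
      if (Number.isFinite(rate) && rate > 0) return { tier, rate };
    } catch {
      // Injected fault: fall through to the next tier.
    }
  }
  throw new Error("chain exhausted");
}

// Injected faults: Tier 1 times out, Tier 2 returns a malformed value,
// Tier 3 is rate-limited, so the request should land on the static tier.
const timeout = () => { throw new Error("ETIMEDOUT"); };
const malformed = () => NaN; // schema-drift stand-in
const rateLimited = () => { throw new Error("429"); };
const staticTable = () => 1.08;

const outcome = runChain([
  ["tier1", timeout],
  ["tier2", malformed],
  ["tier3", rateLimited],
  ["tier4-static", staticTable],
]);
```

Because the faults are synthetic, this exercises the full failover path deterministically, with no real traffic or provider outage required.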
## Try the Converter Pipeline
Run conversions and inspect behavior under our multi-tier resolver design, built to keep working when a provider goes down.
Open Money Visualiser

The architecture described here matches the current website implementation: edge + in-memory caching, ECB/Frankfurter primary sourcing, TwelveData constrained fallback, and a static terminal fallback for continuity.
