

How We Calculate the Trade Finance Difficulty Index

The Problem with Existing Approaches

Most trade finance risk assessments rely on analyst judgment: an expert reads the news, weighs up several factors, and assigns a rating. This works until you need to explain why Nigeria scored 61.9 in March 2024 but 41.3 in January 2026 - or why it scored worse than Kenya in one month and better the next. Judgment-based models produce numbers that cannot be audited, reproduced, or challenged on their own terms. Two analysts reviewing the same country in the same week will often produce different scores, and neither can point to a mechanical reason why.

We take a different approach. Every score published on this platform is deterministic: given the same observable inputs, the model will always produce the same output. There is no LLM in the loop, no sentiment weighting, no editorial discretion at the scoring stage. The role of industry experts is upstream - in designing the rubric, calibrating the bands, and setting the weights - not in producing the number itself.

Architecture: Three Files, Zero Judgment at Runtime

The scoring system consists of three components, each with a distinct role.

The Rubric

Defines what we measure and how each measurement maps to a score. It specifies five layers, twenty-five sub-indicators, and fixed scoring bands for each. The rubric is a model specification: it encodes the collective judgment of trade finance practitioners about what makes a corridor difficult, but it does so once, as a reusable structure, not on a case-by-case basis.

The Country Data File

Contains raw, observable facts for each scoring period. These are verifiable numbers and categorical statuses - not opinions. For example: “Nigeria had 3 international banks with direct subsidiaries in January 2026” or “FATF removed Nigeria from the grey list in October 2025.” Every data point is sourced and timestamped. Anyone can check whether the input is correct.

The Scoring Engine

Reads the rubric and the data file, matches each input to its corresponding band, applies the specified weights, and outputs the composite score with a full audit trail. The engine contains no conditional logic about any specific country. It is a mechanical translator: data in, scores out.

This separation means that if you disagree with a score, you can identify exactly where your disagreement lies. Is the rubric wrong - should LC confirmation fees above 600 basis points score 95 instead of the current band? Is the data wrong - were there actually 4 international banks, not 3? Or is the weight wrong - should compliance matter more than 25%? Each of these is a distinct, testable claim.
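The three-component separation can be sketched in a few lines of Python. Everything below is illustrative: the indicator names, bands, and weights are placeholders standing in for the real rubric, not the platform's actual schema.

```python
# Minimal sketch of the rubric / data / engine separation. All names,
# bands, and weights are illustrative placeholders, not the real rubric.

RUBRIC = {
    "lc_confirmation_fee_bps": {
        "weight": 0.5,  # hypothetical weight within the layer
        "bands": [(0, 50, 10), (51, 100, 25), (101, 200, 40),
                  (201, 400, 60), (401, 600, 80), (601, float("inf"), 95)],
    },
    "intl_banks_direct_presence": {
        "weight": 0.5,  # hypothetical weight within the layer
        "bands": [(0, 0, 95), (1, 2, 75), (3, 4, 50),
                  (5, 7, 30), (8, float("inf"), 10)],
    },
}

COUNTRY_DATA = {  # raw, observable inputs for one scoring period
    "lc_confirmation_fee_bps": 250,
    "intl_banks_direct_presence": 3,
}

def score(rubric, data):
    """Mechanical translator: match each input to a band, weight, and sum."""
    total, audit = 0.0, []
    for indicator, spec in rubric.items():
        value = data[indicator]
        band_score = next(s for lo, hi, s in spec["bands"] if lo <= value <= hi)
        total += band_score * spec["weight"]
        audit.append((indicator, value, band_score, spec["weight"]))
    return total, audit

layer_score, audit_trail = score(RUBRIC, COUNTRY_DATA)
```

The engine function contains no country-specific logic: a disagreement about the output must be a disagreement about a band, a weight, or a data point, each of which is visible in the audit list it returns.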

The Five Layers

The index measures trade finance difficulty across five dimensions, each weighted by its impact on whether an international transaction can actually be executed and at what cost.

Layer 1: Confirming Bank Availability (30% weight)

This is the heaviest layer because it answers the most fundamental question: can you find a bank willing to confirm a letter of credit from this country? If the answer is no, the other layers are academic.

We measure five sub-indicators: the number of major international banks with direct presence, the number of local banks covered by DFI trade guarantee programmes (such as IFC GTFP or the AfDB Trade Finance Programme), the total value of DFI facility commitments, the percentage of commercial banks meeting minimum capital requirements, and the trend in correspondent banking relationships.

Example

In June 2022, Nigeria scored 24.5 on this layer - its best ever. Eight local banks were covered by IFC GTFP, IFC had committed $1.5 billion to Nigeria through that programme alone, all banks met capital requirements, and correspondent relationships were stable. By March 2024, the same layer scored 51.8. What changed? The CBN's March 2024 recapitalisation order (raising minimum capital from ₦25 billion to ₦500 billion) meant that effectively 0% of banks met the new threshold. Meanwhile, FATF grey-listing triggered a correspondent banking pullback so severe it registered as “severe de-risking” in our rubric.

Layer 2: Compliance and KYC Overhead (25% weight)

International banks operate under compliance frameworks that treat certain country-level designations as binary gates. A FATF grey listing does not merely increase paperwork - it can trigger automatic restrictions in correspondent banking systems, force enhanced due diligence that adds weeks to transaction processing, and cause some institutions to suspend country limits entirely.

We track four sub-indicators: FATF listing status (with granular bands distinguishing “grey-listed early in action plan” from “grey-listed but substantially completed” from “removed within 12 months”), EU AML high-risk third-country list status, US sanctions or designations (including CPC status under the International Religious Freedom Act), and the maturity of domestic AML infrastructure.

Example

Nigeria's compliance score was 9.0 in June 2022 - the lowest (best) in our entire dataset. The country was clean on every international list, had just enacted comprehensive new AML legislation (the Money Laundering Prevention and Prohibition Act 2022), and had a fully independent, Egmont-member FIU. Twelve months later, in June 2023, the same layer scored 56.8. The FATF had grey-listed Nigeria in February 2023, and the EU adopted a delegated regulation adding Nigeria to its high-risk list in May 2023 (effective July). This 47.8-point swing in a single layer was the fastest deterioration we have recorded for any country in any layer.

Layer 3: Instrument-Specific Barriers (20% weight)

Even when banks are available and compliance is manageable, operational frictions can make specific instruments impractical. This layer captures the procedural reality of executing trade finance in a given market.

Sub-indicators include: the complexity of import documentation (e.g., Nigeria's Form M regime), the breadth of import prohibitions or FX restrictions, the existence of a legal framework for receivables finance and factoring, LC payment performance trends, average days sales outstanding for corporate receivables, and the enforceability of demand guarantees under URDG 758.

Example

Layer 3 is the structural floor in Nigeria's score - it has never dropped below 51.0 (June 2014) and has been essentially flat at 57.0 since August 2025. The reason is that several sub-indicators are stuck: Nigeria still has no standalone factoring law, the Form M import documentation regime remains multi-step with bank validation, URDG 758 adoption is voluntary with mixed judicial enforcement, and the CBN has reimposed all 43 FX-restricted items despite lifting the list in October 2023. These are legislative and institutional barriers that do not respond to macroeconomic improvement or compliance reform.

Layer 4: Cost and Pricing Signals (15% weight)

Market pricing is the most honest signal of difficulty. When confirming banks charge 800 basis points for an LC from a country that used to price at 125, the market is telling you something that no qualitative assessment can capture as precisely.

We track LC confirmation fees (in basis points), sovereign credit ratings (mapped to a composite across Fitch, Moody's, and S&P), Eurobond spreads over US Treasuries, and the availability and cost of trade credit insurance.

Example

In June 2014, a top-tier Nigerian bank's LC could be confirmed at roughly 125 basis points. Nigeria was rated BB−/Ba3, its Eurobond spread was around 310 basis points, and trade credit insurance was available at moderate premiums. By June 2024, confirmation fees had reached 800 basis points - an increase of more than 500% - the sovereign rating had fallen to B−/Caa1 (seven notches below investment grade), and several major trade credit insurers had withdrawn cover entirely. The layer score moved from 32.5 to 78.8, driven entirely by observable market prices.

Layer 5: Macro and Corridor Disruption (10% weight)

The lightest layer, but it captures conditions that amplify every other difficulty. A currency in freefall makes LC tenors riskier; thin reserves signal potential FX rationing; high inflation erodes the real value of receivables; congested ports delay document presentation.

Sub-indicators: the three-month FX trend, the official-to-parallel exchange rate spread, reserve import cover in months, headline inflation, preferential trade access status (e.g., AGOA eligibility), and average port cargo dwell time.

Example

In June 2018, Nigeria's macro layer scored 31.5. Reserves had peaked at $47.5 billion (over 21 months of import cover), the I&E window had essentially eliminated the parallel market premium, and inflation was declining toward 11%. By June 2016 - just two years earlier - the same layer had scored 68.0: reserves were at $26 billion, the naira peg was collapsing, the parallel spread had blown out to 80%, and inflation was running at 16.5%. Same country, same rubric, radically different observable conditions.

How the Bands Work

Each sub-indicator maps its raw input to a score between 5 and 100 through predefined bands. Bands are either numeric (the input is a number that falls within a range) or categorical (the input matches a defined status).

Numeric example - LC confirmation fees:

Fee Range (bps) | Score | Label
0-50    | 10 | Investment-grade norm
51-100  | 25 | Low premium
101-200 | 40 | Moderate
201-400 | 60 | Elevated
401-600 | 80 | High
601+    | 95 | Distressed

If Nigeria's LC confirmation fee is 250 bps, the engine matches it to the 201-400 band and assigns a score of 60. If it rises to 800 bps, the match shifts to the 601+ band and the score becomes 95. No judgment is applied - the fee is a number, the band is a range, and the mapping is fixed.
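That fixed mapping is nothing more than a range lookup. A minimal sketch, using the thresholds from the fee table above (the function name is illustrative):

```python
# LC confirmation fee bands: (low, high, score, label), mirroring the
# fee table above. Matching is a pure lookup - no judgment involved.

LC_FEE_BANDS = [
    (0, 50, 10, "Investment-grade norm"),
    (51, 100, 25, "Low premium"),
    (101, 200, 40, "Moderate"),
    (201, 400, 60, "Elevated"),
    (401, 600, 80, "High"),
    (601, float("inf"), 95, "Distressed"),
]

def score_lc_fee(fee_bps):
    for lo, hi, band_score, label in LC_FEE_BANDS:
        if lo <= fee_bps <= hi:
            return band_score, label
    raise ValueError(f"no band matches {fee_bps} bps")
```

Here `score_lc_fee(250)` returns `(60, "Elevated")` and `score_lc_fee(800)` returns `(95, "Distressed")`, exactly the two cases described above.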

Categorical example - FATF status:

Status | Score | Label
Not listed (or removed >24 months ago)           | 10  | Clean
Removed 12-24 months ago                         | 20  | Observation period
Removed within last 12 months                    | 35  | Recent removal
Grey-listed, substantially completed action plan | 60  | Improving
Grey-listed, action plan in progress             | 75  | Active grey list
Recently grey-listed, early in action plan       | 85  | Early grey list
Black-listed                                     | 100 | Maximum restriction

Nigeria was grey-listed in February 2023 and removed in October 2025. At any point between those dates, the engine selects the appropriate categorical band based on where the country sits in its action plan. In January 2026 - three months after removal - the status is “removed within last 12 months,” which maps to a score of 35. This will automatically improve to 20 once twelve months have passed, reflecting the empirical reality that compliance teams and correspondent banks take time to update their internal risk assessments after a delisting.
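The post-removal decay is itself deterministic: the score depends only on how many months have elapsed. A sketch of that transition, using the thresholds from the status table above (the function name and the month arithmetic - calendar months, day-of-month ignored - are illustrative assumptions):

```python
# Time-dependent categorical band: the FATF score steps down from 35 to
# 20 to 10 as months pass after removal. Thresholds follow the status
# table; whole-calendar-month counting is an illustrative assumption.

from datetime import date

def fatf_post_removal_score(removed, as_of):
    months = (as_of.year - removed.year) * 12 + (as_of.month - removed.month)
    if months < 12:
        return 35   # Recent removal
    if months <= 24:
        return 20   # Observation period
    return 10       # Clean
```

For Nigeria, removed in October 2025, a January 2026 run is three months out and scores 35; a January 2027 run would score 20 with no change to the rubric or the data file.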

From Sub-Indicators to Composite

The composite score is a weighted average of weighted averages. Within each layer, sub-indicator scores are combined using their specified weights. The resulting layer score is then weighted by the layer's share of the total index.

Worked example - Nigeria, January 2026:

Layer | Layer Score | × Weight | = Contribution
L1: Bank Availability | 31.8 | × 30% | 9.53
L2: Compliance        | 33.2 | × 25% | 8.31
L3: Instruments       | 57.0 | × 20% | 11.40
L4: Pricing           | 52.5 | × 15% | 7.88
L5: Macro             | 42.0 | × 10% | 4.20
Composite | | | 41.3

The composite is the sum of the weighted contributions: 9.53 + 8.31 + 11.40 + 7.88 + 4.20 = 41.3. This places Nigeria in the Moderate difficulty zone (30-45), improved from Severe (60-75) just twenty months earlier.
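The arithmetic can be checked in two lines. Recomputing from the rounded layer scores shown in the table gives per-layer contributions within a hundredth of the published ones (which presumably use unrounded layer scores), and the composite rounds to the same 41.3:

```python
# Nigeria, January 2026: composite = weighted sum of the five layer scores.
# Layer scores and weights are taken from the worked example above.

layers = [
    ("L1: Bank Availability", 31.8, 0.30),
    ("L2: Compliance",        33.2, 0.25),
    ("L3: Instruments",       57.0, 0.20),
    ("L4: Pricing",           52.5, 0.15),
    ("L5: Macro",             42.0, 0.10),
]

assert abs(sum(w for _, _, w in layers) - 1.0) < 1e-9  # weights cover 100%
composite = sum(score * weight for _, score, weight in layers)  # ≈ 41.3
```

Because the weights sum to exactly 100%, the composite stays on the same 0-100 scale as the layers.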

What the Scale Means

Range | Zone | Practical Meaning
0-15   | Effortless | Multiple confirming banks compete for business; standard documentation; investment-grade pricing
15-30  | Low        | Trade finance readily available; minor friction in documentation or compliance
30-45  | Moderate   | Workable but requires effort; limited confirming bank options; elevated fees; some procedural delays
45-60  | Elevated   | Significant barriers; specialist banks or DFI support often required; high pricing
60-75  | Severe     | Major structural obstacles; many banks decline exposure; LC rejections common
75-100 | Critical   | Near-impossible to execute standard trade finance; sanctions, comprehensive restrictions, or system breakdown

The Role of Experts

Industry practitioners contribute at three stages, all of which occur before the scoring engine runs.

Rubric design

The choice of layers, sub-indicators, and weights reflects practitioner consensus about what drives trade finance difficulty. The decision to weight Layer 1 (Bank Availability) at 30% - heavier than any other - reflects the professional reality that without a willing confirming bank, no other factor matters. The decision to include URDG 758 enforceability as a sub-indicator reflects the experience of guarantee practitioners who have seen demand guarantees treated as sureties by local courts.

Band calibration

The thresholds within each band are set by reference to market experience. The LC confirmation fee bands, for instance, are calibrated so that 50 bps (the investment-grade norm for major trading economies) scores 10, while 600+ bps (the level at which many banks simply decline to quote) scores 95. These thresholds are not arbitrary - they correspond to observable decision points in trade finance origination.

Data validation

Raw country data is reviewed against published sources: central bank circulars, FATF plenary outcomes, rating agency actions, NBS releases, DMO Eurobond pricing, and IFC programme disclosures. Where precise data is unavailable (for instance, average LC confirmation fees are not published by any central authority), we use practitioner-sourced ranges anchored to observable transactions.

At no point does an expert override the engine's output. If the model produces a score that seems wrong, the correct response is to examine the inputs and the rubric - not to manually adjust the result.

Reproducibility

Every score we publish can be independently verified. The rubric specification, raw country data, and scoring engine are available for inspection. If you take our rubric, plug in our data, and run the engine, you will get exactly the same number we published. If you disagree with the number, you can identify the specific sub-indicator, data point, or band calibration that you would change - and calculate what the score would be under your assumptions.

This is what we mean by deterministic scoring: not that the model is perfect, but that it is transparent, auditable, and mechanically reproducible. The debate shifts from “I think Nigeria is moderate risk” versus “I think Nigeria is elevated risk” to “I think the LC confirmation fee band should break at 500 bps instead of 600” or “I think the FATF removal observation period should last 18 months instead of 12.” Those are arguments that can be resolved with evidence.

Full Audit Trail

Every score includes a downloadable audit trail showing, for each sub-indicator: the raw input value, the band it matched, the sub-indicator score, and its weight within the layer. The audit trail makes it possible to trace any composite score back to the twenty-five individual measurements that produced it.

For Nigeria in January 2026, the audit trail shows that the composite of 41.3 was driven by, among other things: 3 international banks with direct presence (score: 50), 8 DFI-covered banks (score: 10), FATF status “removed within 12 months” (score: 35), 43 items on the FX restriction list (score: 70), LC confirmation fees of 250 bps (score: 60), a B− sovereign rating (score: 60), and inflation of 15.15% (score: 60). Each of these is an observable fact mapped to a predetermined band. The composite is simply the arithmetic consequence of those mappings.
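One plausible shape for such an audit-trail row is a simple named record: the indicator, its raw observed input, and the band score it mapped to. The field and indicator names below are illustrative; the values echo the Nigeria January 2026 examples in the text (layer weights omitted):

```python
# Illustrative audit-trail rows for Nigeria, January 2026. Field and
# indicator names are hypothetical; scores follow the examples in the text.

from typing import NamedTuple, Union

class AuditRow(NamedTuple):
    indicator: str
    raw_input: Union[int, float, str]
    score: int

trail = [
    AuditRow("intl_banks_direct_presence", 3, 50),
    AuditRow("dfi_covered_banks", 8, 10),
    AuditRow("fatf_status", "removed within last 12 months", 35),
    AuditRow("fx_restricted_items", 43, 70),
    AuditRow("lc_confirmation_fee_bps", 250, 60),
    AuditRow("sovereign_rating", "B-", 60),
    AuditRow("inflation_pct", 15.15, 60),
]
```

Each row pairs a checkable fact with a fixed band, so any reader can dispute either the input or the mapping, but not the arithmetic that combines them.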