Imagine a World Without Credit Scores
What FICO Did for Lending, and Why AI Governance Is One Standard Away from the Same Transformation
The Measured View
AIQA Global’s series on the ideas, standards, and market forces shaping enterprise AI governance. In a field defined by speed and speculation, these articles offer something different: informed perspective grounded in the discipline of measurement. Because the organizations that will lead in AI are the ones that can prove it.
The Thought Experiment
A woman walks into a bank with three years of tax returns, a binder of customer references, a stack of contracts, and a pitch she has rehearsed countless times in the parking lot. She is asking for a $250,000 line of credit. The loan officer pours her coffee and starts asking questions.
In today’s world, the conversation is mostly theatre. Before she sat down, the bank pulled her FICO score, ran it through an underwriting model, and arrived at the answer the loan officer is in the room to deliver. In the imagined world—the one without credit scores—none of that has happened. The decision will turn entirely on the next forty minutes: the questions, her answers, his read on her character, whatever he privately decides her business is worth.
That afternoon she has another appointment, in Dallas, at a different bank. She tells the same story. A different officer asks different questions, weighs them differently, and reaches a different conclusion. If she is denied, no one will explain why. If she is approved, the rate she pays will reflect not the underlying risk of her business but the personal confidence of whoever happened to be in the chair across from her.
This is, in essential terms, how American small-business and consumer lending worked until the late 1980s. There was no FICO score. Decisions were made by individual loan officers reading individual files, and the answer you got depended a great deal on which officer happened to be on duty. The first widely available FICO score was introduced in 1989. Six years later, Fannie Mae and Freddie Mac adopted it for conforming mortgage underwriting.1 Within a single generation, three letters and three digits had become the bedrock of an entire financial system.
Today, the global mortgage market does not function without standardized consumer credit ratings. Cross-border bond markets do not function without their corporate equivalents. Trillions of dollars move every day on the strength of numbers that did not exist when most of today's CFOs began their careers.
Enterprise AI governance is in the position consumer lending occupied in 1988. The work is being done—frantically, expensively, and repeatedly, in every Fortune 500 risk department in the world—but no two organizations are doing it the same way, and no two outputs can be compared. The standard has not yet arrived. It is, however, coming. The only open questions are what shape it takes, who establishes it, and which companies have the foresight to build their governance posture around it before they are required to.
The Pre-Standard Era
The pre-history of credit measurement is longer than most people realize. It begins in 1841 with Lewis Tappan, a New York merchant and abolitionist who had spent the better part of a decade watching customers in distant cities default on shipments to buyers he had no way to vet. His response was to build what he called the Mercantile Agency: a network of correspondents—lawyers, postmasters, the kind of small-town businessmen who knew everyone in their county—paid to write up the local merchants. The reports came back in longhand. Who paid late, who had been sued, whose store had burned down and been rebuilt, who could be trusted with credit and who could not.2 The Mercantile Agency would eventually become Dun & Bradstreet, and its early ledgers are one of the more remarkable surviving records of nineteenth-century American commercial life.
For seventy years, this was the state of the art: someone wrote a report, someone else read it, and a decision was made. The first move toward something more systematic came in April 1909, when John Moody published Moody’s Analyses of Railroad Investments and assigned each of roughly 1,300 railroad bonds a letter grade from Aaa to C.3 Moody had no regulatory authority over the bonds he was rating. He was a publisher selling a book. But the format took. A pension manager evaluating forty-seven railroads could now read forty-seven letters instead of forty-seven essays. Poor’s Publishing followed in 1916, Standard Statistics in 1922, and Fitch in 1924. Standard Statistics merged with Poor’s in 1941.4 By the middle of the twentieth century, the corporate bond market had a working rating infrastructure.
Consumer credit lagged by another generation. In 1956 a Stanford Research Institute engineer named Bill Fair and a mathematician named Earl Isaac launched Fair, Isaac and Company on the unfashionable proposition that consumer creditworthiness could be modeled. For thirty years they were a small consultancy selling custom scoring models to whichever individual lender would pay. The breakthrough came in 1989 with the launch of the first general-purpose FICO score: a single number on a 300-to-850 scale, calculated from credit-bureau data using a five-factor formula.5 By 1995 it had been adopted by Fannie and Freddie. A market that had operated for centuries on personal relationships reorganized itself, in a few years, around an algorithm.
The transformation that followed is now nearly invisible because it is so complete. Mortgage decisions that used to take weeks moved to days. Loans that had previously sat on lender balance sheets—because no buyer could responsibly price the risk of paper they had not personally underwritten—became packageable, sellable, and tradable across the global financial system. Whole asset classes became possible. Whole industries that had not previously existed came into being.
The system was not perfect. It still isn’t. The 2008 financial crisis was, among other things, a failure of credit measurement applied at scale to mortgage-backed instruments the rating agencies did not fully understand. The reforms that followed—Dodd-Frank’s overhaul of the rating-agency regime, the SEC’s new oversight authority, methodological revisions across the Big Three—were arguments about how to do measurement better.6 None of them were arguments for going back. By 2008 the standardization itself had become infrastructure, and infrastructure does not get unbuilt.
Five Factors, Weighted
FICO’s actual formula is worth pausing on, because it has become a template that quantitative governance frameworks have been copying, often unconsciously, for thirty years.
A FICO score weighs five things: payment history (35 percent), amounts owed (30 percent), length of credit history (15 percent), credit mix (10 percent), and new credit (10 percent).7 The weights are public. The data is auditable. Two consumers with the same score are presumed comparable; if they aren’t, that’s a flaw in the model and the model can be revised. FICO has rewritten the formula several times since 1989, and each revision has been an argument over which factors should carry more weight, not whether the framework should exist.
The framework itself is the thing. Five dimensions, public weights, evidence underneath, a common scale on top. The genius isn’t that any of those weights is correct in some deep sense—it almost certainly isn’t—it’s that the structure makes correctness an empirical question. Without a defined frame, two underwriters looking at the same borrower can disagree forever, and there is no procedure for adjudicating who is right. With one, the disagreement becomes specific: which factor is over-weighted, which input is missing, what the historical default data shows. The argument moves from temperament to method.
The AIQ™ Score is built on the same architecture, applied to a different problem. It evaluates enterprise AI governance across five weighted dimensions—Governance & Accountability at 30 percent, Technical Robustness at 25, Strategic Alignment at 20, Socio-Economic Impact at 15, Adaptability & Education at 10—and resolves to a single figure on a 0-to-200 scale. Underneath each dimension sits a body of more than 250 proprietary data points, organized by tier of materiality and scored against documented evidence by trained third-party assessors. The methodology is patent-pending. Two companies—or two portfolios within an acquirer’s diligence file—can be placed on the same axis and meaningfully compared.
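The mechanics of a five-dimension weighted composite can be made concrete in a few lines. The sketch below uses the dimension weights and the 0-to-200 scale stated above, but everything else—the per-dimension inputs on a 0-to-1 scale, the simple linear aggregation—is a hypothetical simplification for illustration; the actual AIQ™ methodology is patent-pending and rests on more than 250 underlying data points.

```python
# Illustrative sketch only. Weights and the 0-200 scale come from the
# article; the linear aggregation below is an assumed simplification,
# not AIQA's actual (proprietary) formula.

# Dimension weights as described in the article (sum to 1.0).
WEIGHTS = {
    "governance_accountability": 0.30,
    "technical_robustness": 0.25,
    "strategic_alignment": 0.20,
    "socio_economic_impact": 0.15,
    "adaptability_education": 0.10,
}

SCALE_MAX = 200  # the AIQ Score resolves to a 0-to-200 scale


def composite_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0.0-1.0) into one 0-200 figure."""
    if set(dimension_scores) != set(WEIGHTS):
        raise ValueError("every weighted dimension needs a score")
    weighted = sum(WEIGHTS[d] * s for d, s in dimension_scores.items())
    return round(weighted * SCALE_MAX, 1)


# Two companies become directly comparable on the same axis:
acme = composite_score({
    "governance_accountability": 0.80,
    "technical_robustness": 0.70,
    "strategic_alignment": 0.60,
    "socio_economic_impact": 0.50,
    "adaptability_education": 0.40,
})
# 0.30*0.8 + 0.25*0.7 + 0.20*0.6 + 0.15*0.5 + 0.10*0.4 = 0.65 → 130.0
```

The structural point survives the simplification: once the weights are fixed and public, any disagreement about a score becomes an argument about a specific input or a specific weight, not about temperament.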
This is not a coincidence of design. AIQA’s founders spent the prior chapter of their careers at Ocean Tomo, now part of J.S. Held, building the first patented patent-quality scoring system and the index that became the Ocean Tomo 300®. The intellectual lineage is direct. What the credit-rating revolution demonstrated, and what the patent-quality work confirmed, is that a previously subjective domain can be made comparable—and comparability, once it exists, is the precondition for every market that prices risk in that domain.
Where AI Governance Is Today
Enterprise AI governance, in 2026, looks remarkably like consumer lending in 1988. The work is being done. Procurement teams send security questionnaires. Legal departments bolt on data-protection riders. Compliance requests evidence of alignment with the NIST AI Risk Management Framework. Risk runs its own review. The audit committee asks for assurance that the management team is not in a position to provide. Each function has its own framework, its own scoring rubric, its own threshold for acceptability—and the vendor on the receiving end answers each one separately. Months go by. At the end, no one in either organization can summarize what they learned in a form that can be compared to the next vendor in the queue.
The cost of all this is largely invisible because it has never been measured against a counterfactual. The counterfactual is worth describing. AI insurance does not have governance-tied pricing in any systematic form, because no underwriter has a comparable governance score to calibrate premiums against. There is no AI-collateralized lending market, because creditors have no measurable view of governance quality on which to price the risk of an AI-dependent revenue stream. No institutional investor has built a meaningful AI governance index, because no one has produced an audited score across a population of issuers large enough to track. No accounting firm issues an opinion on AI governance because no quantitative framework exists against which an opinion would mean anything. These products and markets are not waiting on demand. They are waiting on a number.
The demand is plainly there. The IAPP’s 2025 governance report found that 77 percent of organizations have AI governance work underway.8 IBM’s 2025 breach report found that 63 percent have no AI governance policies in place at all.9 McKinsey, surveying 238 C-suite executives last year, found that 92 percent plan to increase AI investment over the next three years and 1 percent describe their organizations as mature.10 The gap between activity and demonstrable maturity is the same gap that existed in consumer credit in 1988: vast amounts of underwriting being done, no comparable way to evaluate the quality of any of it.
The regulatory clock makes the gap urgent. The EU AI Act’s high-risk system obligations take effect in August 2026, requiring providers and deployers to demonstrate compliance across risk management, data governance, documentation, transparency, and oversight.11 The NIST AI Risk Management Framework, voluntary on paper, is becoming the de facto U.S. reference. Insurers are starting to differentiate. Investors are starting to ask. Sooner rather than later, each of those constituencies will want a number—and the organizations that have built a measurable governance posture in advance will be the ones holding the answer.
What Standardization Will Unlock
What FICO actually did, more than rationalize underwriting, was create markets that hadn’t previously existed. The same expansion is on the table for AI governance. A third-party governance score doesn’t merely make existing processes faster or fairer. It creates products, contracts, and asset classes that the current information environment cannot support.
Insurance pricing. AI-specific insurance exists today, but only in its early commercial form. Munich Re has introduced its aiSure performance guarantee. AXA XL has added a generative-AI endorsement. Beazley and several Lloyd’s syndicates are writing AI-specific cover.12 None of these products is yet priced against a comparable view of the insured’s governance—they cannot be, because no comparable view exists. Pricing remains qualitative and idiosyncratic, an underwriter-by-underwriter matter of judgment. Once a third-party governance score is in the market, the same maturation that overtook cyber insurance in the 2010s becomes available to AI insurance: actuarially priced premiums, defined exclusions, calibrated limits. This is the role the AIQ™ Score is designed to play. It gives an underwriter a signal that can be plugged directly into a pricing model without rebuilding the assessment from scratch.
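To show what "plugged directly into a pricing model" means mechanically, here is a deliberately invented example: no insurer publishes a governance-score rate table today, so the breakpoints and multipliers below are assumptions illustrating the mechanism, analogous to how credit tiers map to loan rates.

```python
# Hypothetical illustration. The tier breakpoints and multipliers are
# invented for this sketch; they do not reflect any actual underwriting
# practice or any published AIQA or insurer rate table.

def premium_modifier(aiq_score: float) -> float:
    """Map a 0-200 governance score to a multiplier on a base premium."""
    if not 0 <= aiq_score <= 200:
        raise ValueError("score must be on the 0-200 scale")
    if aiq_score >= 160:
        return 0.85   # strong governance: discounted premium
    if aiq_score >= 120:
        return 1.00   # baseline pricing
    if aiq_score >= 80:
        return 1.25   # surcharge
    return 1.60       # weak governance: steep surcharge, or declined


base_premium = 100_000
quote = base_premium * premium_modifier(130)  # → 100000.0 (baseline tier)
```

The substance of such a table—where the breakpoints sit, how steep the discounts run—would be an actuarial question, which is exactly the point: a comparable score turns pricing from underwriter-by-underwriter judgment into a calibratable model.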
Capital allocation. Institutional investors increasingly want to screen for AI risk and AI exposure, and increasingly find that they can’t—because no comparable signal exists across issuers. A standardized governance score makes possible the same product set that ESG ratings made possible in their domain: index funds, ETFs, screened portfolios that allocate capital systematically by governance quality. The closest precedent is the Ocean Tomo 300® Patent Index, which AIQA’s founders helped construct two decades ago. The same logic underwrites the AIQ™ 100 Governance Index, an index of AIQ™ Scores covering the largest publicly traded enterprises by AI exposure.
Procurement and supply chain. Every large enterprise is now a buyer of AI from somewhere, and every large enterprise has had the experience of trying to evaluate what it has bought. A standardized score makes it possible to set procurement floors, contractual minima, and tiered diligence processes against a common metric—the same way credit scores made it possible to set financial covenants in commercial agreements. An AIQ™ Score floor in a vendor master service agreement is the AI-governance analog to a minimum credit rating in a debt indenture.
Board reporting. Boards have a fiduciary duty to oversee AI risk. They do not have, and cannot reasonably be expected to develop, the technical fluency to evaluate machine-learning systems on the merits. What they need is what credit metrics provide in lending: a comparable, trackable indicator that supports informed oversight without requiring expertise in the underlying technology. The AIQ™ Score is engineered for exactly that audience—granular enough underneath to support an audit committee inquiry, summarized enough on top to anchor a quarterly board update.
Regulatory evidence. The EU AI Act, the NIST AI RMF, and ISO/IEC 42001 all require demonstrated governance quality. None of them prescribes a single comparable metric for demonstrating it.13 A third-party score does not replace regulatory compliance—it complements it, by providing an evidence base that regulators, customers, and counterparties can rely on to assess the credibility of a company’s claims. AIQ™ assessments are mapped directly to those frameworks, so the same evidentiary record supports both the score and the disclosure.
None of these markets exist yet in their developed form. All of them require the same precondition: a standardized, independent, quantitative measure of AI governance quality.
Lessons from the Credit Revolution
The credit-rating story has four lessons that travel directly to the AI governance moment.
Standardization wins. Before FICO, dozens of bespoke scoring models were in use, one or two per major lender. After FICO, the industry collapsed onto a small number of standards, because the network effects of standardization are overwhelming. Comparable scales mean faster decisions, more efficient pricing, and secondary markets that fragmented scoring cannot support. The same dynamic will play out in AI governance. Whichever metric achieves comparability across a meaningful population of organizations will become the metric everyone uses. The alternatives won’t lose because they are wrong. They will lose because they cannot be compared.
Independence is essential. The most important property of a useful rating is that it isn’t issued by the rated party. The credit system has had its share of structural problems—the issuer-pays model that contributed to 2008 being the most familiar—but the underlying principle that ratings come from independent third parties has held throughout. AI governance ratings face the same imperative. A score a company issues about itself is marketing. A score issued by an independent assessor against a documented methodology is evidence. AIQ™ assessments are conducted by trained third-party assessors on exactly that principle.
Methodology must evolve. No quantitative framework is permanently correct. FICO has been revised through several major versions; Moody’s, S&P, and Fitch have all materially changed their criteria over the decades. The 2008 crisis exposed real weaknesses in structured-product ratings and produced real reforms. The lesson is not that measurement is futile but that measurement is iterative—and that the frameworks designed to evolve in response to evidence are the ones that endure. The AIQ™ methodology is built that way: weighted dimensions, tiered data points, and a structured update process that incorporates regulatory change, technological change, and observed governance failures. A public methodology changelog records every revision.
Adoption is gradual, then sudden. Standardized credit measurement existed for years before it became universal infrastructure. Sophisticated lenders used scores while less sophisticated ones did not. Regulators referenced ratings without mandating them. The secondary market priced rated and unrated debt differently. Then, within a span of a few years, the standard became unavoidable. The same pattern is visible in cybersecurity maturity assessments, in ESG ratings, in patent-quality scoring. The companies that adopted these standards in the gradual phase benefited disproportionately when the sudden phase arrived. The same will be true of AI governance scoring. The gradual phase is now.
From Imagination to Implementation
Return for a moment to the thought experiment. What makes a world without credit scores hard to imagine is not that it would be unrecognizable—it wouldn't, particularly—but that credit scores have become invisible. They are infrastructure. The financial system is built on them so thoroughly that their absence would constitute a different financial system entirely.
The same will be true of AI governance measurement within the decade. A board that cannot articulate its company’s AI governance score, a vendor that cannot present one to its enterprise customers, an insurer that cannot price by one, an investor that cannot screen by one—each of those, in 2035, will look as out of place as a mortgage applicant without a credit score looks today.
The infrastructure is being built now. The standards exist: the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 42001, the OECD AI Principles. The data points exist. The constituencies that need the metric—boards, insurers, investors, regulators, customers—are already asking the questions. What has been missing, as FICO was in 1988, is a standardized, independent, third-party score that makes everything else possible.
The AIQ™ Score is built on the architecture of the rating systems that came before it: weighted dimensions, defined methodology, transparent scoring, third-party assessment, evidentiary base. Like those systems, it will not be perfect on day one. It will be revised. It will absorb evidence, regulatory change, observed failure modes. What it provides—the structural prerequisite for every market it enables—is the discipline of measurement against a common scale.
In 1988, the companies that recognized the standard was coming were not the ones that scrambled to comply with it once it arrived. They were the ones that organized themselves around it before they had to. They were, by some distance, the ones that benefited when the standard became infrastructure. AI governance is at the same inflection point. The question isn’t whether the measurement standard will arrive. It’s whether your organization will be ready when it does.
© 2026 AIQA Global, LLC. All rights reserved.