
March 18, 2026
8 Min
Somewhere in your organisation right now, a consequential decision is being made on bad data.
Not corrupted data. Not fraudulent data. Just old data - information that was accurate when it was collected, verified when it was stored, and has been quietly degrading ever since. An employee record that reflects a role someone left eight months ago. A vendor profile built on financials from a pre-pandemic filing. A customer's income band that hasn't been updated since onboarding. A risk signal frozen at the moment of underwriting, now two years stale.
Nobody flagged it. Nobody lied. The data simply aged - and the systems treating it as current didn't notice.
This is data decay. And it is, without exaggeration, one of the most pervasive and least discussed sources of enterprise risk in operation today.
The Data Collection Obsession
Modern enterprises have developed an extraordinary capacity to collect data. CRMs that capture every customer touchpoint. HRMS platforms that log every employment event. ERP systems that record every vendor transaction. Data lakes that accumulate everything, tagged and timestamped, indexed and searchable.
The infrastructure investment is real and substantial. Organisations have spent the better part of two decades building the pipes - the APIs, the integrations, the warehouses - to get data in.
What they haven't built, with anywhere near the same rigour, is the infrastructure to ask a different question: is this data still true?
Collection and currency are treated as the same problem. They aren't. Collecting data well tells you what was true at a moment in time. Maintaining data currency tells you what's true right now. The first is a solved problem, more or less. The second is barely being addressed at most organisations.
The assumption that sits underneath this neglect is subtle but consequential: that data, once verified, retains its validity until something explicitly changes it. That a customer address confirmed eighteen months ago is still a reliable address. That an employee's department and seniority level on file reflect their actual current role. That a vendor's compliance certifications from last year's onboarding are still in force.
These assumptions are wrong - not occasionally, but systematically, at scale, across every function that runs on people and organisation data. And the decisions being made on top of them are only as good as the assumptions underneath.
The Half-Life Problem
Every category of enterprise data has a half-life - a period after which a meaningful portion of it can no longer be trusted as accurate.
The concept is borrowed from physics, but the mechanics are straightforward. Data doesn't decay uniformly. Some data types are highly stable: a person's date of birth, a company's founding year, a property's GPS coordinates. Others are volatile: contact details, employment status, organisational role, income, address, financial health signals. The volatility isn't random - it tracks the pace of change in the underlying reality the data describes.
Research on data decay rates produces numbers that should alarm anyone running enterprise operations on static records. Phone numbers become invalid at a meaningful rate every year. B2B contact data - job titles, email addresses, direct lines - turns over rapidly as people change roles and companies. Address data in high-mobility demographics can go significantly stale within eighteen months. Financial data ages even faster in volatile market conditions.
The half-life of employment data varies by sector and seniority, but the direction is consistent: faster churn, more frequent role changes, and the rise of portfolio careers and gig work have compressed the period over which employment records can be trusted.
What this means practically is that for any dataset more than twelve months old, you are likely operating on a meaningful proportion of inaccurate records - not because the data was wrong when collected, but because the world moved and the data didn't.
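To make that arithmetic concrete, here's a minimal sketch in Python. The half-lives are illustrative assumptions, not measured industry rates, and simple exponential decay is itself a modelling choice - but the shape of the result holds for any plausible figures.

```python
import math

# Illustrative half-lives in months. Placeholder assumptions, not
# measured rates: a field's half-life is the age at which half its
# records can no longer be trusted to reflect current reality.
HALF_LIVES_MONTHS = {
    "date_of_birth": math.inf,  # stable: effectively never decays
    "address": 48,
    "phone_number": 36,
    "employer": 30,
    "job_title": 24,
    "income_band": 18,
}

def reliable_fraction(age_months: float, half_life_months: float) -> float:
    """Expected share of records still accurate at a given age,
    assuming simple exponential decay."""
    return 0.5 ** (age_months / half_life_months)

# A dataset last verified twelve months ago:
for field, half_life in HALF_LIVES_MONTHS.items():
    stale = 1 - reliable_fraction(12, half_life)
    print(f"{field:>14}: ~{stale:.0%} of records likely stale")
```

Under these assumed half-lives, every volatile field has shed a double-digit share of its reliability within a year of verification. Only the genuinely stable fields escape.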
Where Decay Hurts: Across the Enterprise
Data decay isn't a problem that lives in one function. It runs across the enterprise wherever decisions depend on information about people, organisations, and their current circumstances.
Credit and lending. Credit decisions are underwritten on a snapshot of a borrower's financial situation. Income, employment, existing obligations, address stability - all of these are verified at origination and then, in most cases, treated as durable facts for the life of the credit relationship. As the lending industry has explored at length, this creates a specific risk: the borrower's actual risk profile changes, but the lender's data doesn't. Employment loss, income decline, relocation - these are material credit risk signals that arrive silently, invisible in a static record, until they announce themselves as missed payments.
Insurance underwriting. Insurers price risk based on declared and verified health, lifestyle, occupational, and financial signals collected at proposal. Across a multi-year policy, the customer's actual risk profile evolves continuously. Occupation changes affect risk class. Income changes affect sum assured adequacy. Health developments change the actuarial picture. An underwriting record that captures none of this isn't just operationally stale - it's an increasingly inaccurate representation of the risk the insurer is actually carrying.
Fraud detection. Fraud models are trained on signals. When the signals in your customer or vendor data are stale, the model is working with a distorted picture. Address mismatches, contact anomalies, employment inconsistencies - these are fraud indicators only if you have current data to compare against. Stale data doesn't just fail to catch fraud; it creates false confidence, because the record looks internally consistent even when the underlying reality has drifted significantly from what the record describes.
Enterprise risk and vendor management. Organisations maintain vendor databases, counterparty records, and supplier risk profiles built on due diligence conducted at onboarding. Compliance certifications, financial health assessments, directorship structures, regulatory standing - all of this ages. A vendor that was financially healthy and fully compliant at onboarding may be in very different shape eighteen months later. If the risk team is re-reviewing that vendor on a two-year cycle, they may be approving continued relationships with counterparties whose risk profile has materially changed between reviews.
HR and workforce intelligence. Employee records in large organisations are often shockingly out of date. Role changes processed in one system never reach another. Reporting structures updated on paper aren't captured in the HRMS. Skills certifications expire without the record being touched. Workforce planning models built on inaccurate headcount and capability data produce strategies misaligned with reality. Performance management, succession planning, and talent development all depend on knowing who your workforce actually is - and for many enterprises, the answer in the system is meaningfully different from the answer on the ground.
"Verified Once" Is Not a Risk Framework
The phrase that captures the underlying failure most cleanly is this: verified once does not mean reliable today.
Verification, as most organisations practice it, is a front-door function. You onboard a customer and verify their identity. You onboard a vendor and verify their compliance documentation. You hire an employee and verify their credentials. The verification event happens, the box is ticked, and the record enters a system where it will be treated as authoritative indefinitely.
This is not a risk framework. It's a risk assumption - one that treats a single verification event as a permanent warrant for the data's accuracy, regardless of how much time has passed or how much the underlying reality may have changed.
The disconnect becomes clearest when something goes wrong. A fraud case surfaces where the perpetrator's profile was internally consistent but had drifted significantly from any real-world identity. A credit default arrives from a borrower whose employment situation changed months earlier. A vendor relationship creates a compliance exposure traced to a certification that lapsed quietly while the risk record stayed green.
In each case, the post-mortem typically surfaces the same finding: the data was verified - just not recently. The verification event happened. The currency of what it verified was never re-examined.
Why Organisations Keep Getting This Wrong
The persistence of the problem is worth examining. It doesn't stem from a lack of awareness. Most risk, compliance, and data professionals understand, in the abstract, that data ages. The issue is structural.
Incentives are aligned to collection, not maintenance. Data teams are resourced and measured on ingestion: pipelines built, records captured, coverage achieved. The ongoing work of keeping data current is unglamorous, resource-intensive, and hard to measure in a dashboard. It gets deprioritised in every budget cycle.
Systems aren't designed for currency. Most enterprise data systems are optimised for storage and retrieval - not for flagging records that have aged past a reliability threshold. There is no standard practice of attaching a confidence score to a data point and letting it decay over time. Data sits in a system and looks equally authoritative whether it was verified yesterday or three years ago.
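Here's what that missing practice could look like - a minimal sketch, reusing the exponential decay model from earlier with an invented 24-month half-life and a threshold that is a policy assumption rather than any standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VerifiedField:
    value: str
    verified_at: datetime   # when the value was last confirmed
    half_life_days: float   # assumed decay rate for this field type

    def confidence(self, now: datetime) -> float:
        """Confidence halves for every half-life elapsed since verification."""
        age_days = (now - self.verified_at).total_seconds() / 86400
        return 0.5 ** (age_days / self.half_life_days)

# A job title verified two years ago, with an assumed 24-month half-life:
job_title = VerifiedField(
    value="Senior Analyst",
    verified_at=datetime(2024, 3, 1, tzinfo=timezone.utc),
    half_life_days=730,
)

REVERIFY_BELOW = 0.7  # policy threshold - an assumption to tune per field
score = job_title.confidence(datetime(2026, 3, 18, tzinfo=timezone.utc))
if score < REVERIFY_BELOW:
    print(f"confidence {score:.2f} - flag for re-verification")
```

The specific curve and threshold are tunable assumptions. The point is that a record verified yesterday and one verified three years ago stop looking equally authoritative.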
The cost is invisible until it isn't. Data decay accumulates silently. There's no alarm. No obvious moment when the record tips from reliable to misleading. The decisions made on stale data look fine right up until the point where they don't - and tracing the loss back to data currency requires a forensic effort most organisations never undertake.
Re-verification feels expensive. Reaching back to authoritative sources to confirm the currency of existing records requires infrastructure - APIs, data partnerships, integration work. Without a clear framework for which data to re-verify, how often, and against what sources, the task feels overwhelming. So it doesn't happen.
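The overwhelm is partly a framing problem. Accept a decay model like the one sketched earlier and the cadence question largely answers itself: re-verify a field roughly when its confidence would cross your floor. A toy calculation, using the same assumed half-lives and threshold:

```python
import math

def reverify_every(half_life_months: float, confidence_floor: float) -> float:
    """Months until confidence decays to the floor - i.e. how often
    this field needs re-checking under the decay model above."""
    return half_life_months * math.log2(1 / confidence_floor)

# Same illustrative half-lives and 0.7 floor as the earlier sketches:
for field, half_life in [("income_band", 18), ("job_title", 24), ("address", 48)]:
    months = reverify_every(half_life, 0.7)
    print(f"{field}: re-verify roughly every {months:.0f} months")
```

Volatile fields earn short re-check cycles; stable fields earn long ones. That prioritisation is exactly the framework the task seemed to lack.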
The Reframe That Changes Everything
The organisations that will build genuine data advantages in the next five years are not the ones that collect the most. They're the ones that can trust what they've collected - because their data is maintained, not just accumulated.
This requires a fundamental reframe. Data is not an asset you acquire once and hold. It's a signal whose value decays continuously and must be actively maintained to remain useful. Every record in your system should carry an implicit question: is this still true? And the answer should come from a process, not an assumption.
Data decay is silent. It doesn't announce itself. It doesn't trigger an error message. It just accumulates in the background of every decision your organisation makes - quietly degrading the quality of the intelligence you think you're working from.
The question isn't whether your data is decaying. It is. The question is whether you're doing anything about it.
The most dangerous data in your organisation isn't the data you don't have. It's the data you think you have - that stopped being accurate a long time ago.
Tartan helps teams integrate, enrich, and validate critical customer data across workflows, not as a one-off step but as an infrastructure layer.