Audit Evidence Half-Life: When Compliance Evidence Goes Stale
Audit evidence does not expire on a single date stamp; it ages across two axes at once. The calendar axis asks whether the artefact falls inside the framework observation period. The change axis asks whether the underlying asset, control, or remediation state has shifted enough that the artefact no longer describes the current programme. Internal security teams and GRC owners who watch only the calendar axis end up with evidence that is current on paper but stale in substance. Teams that watch only changes miss the cadence rule and arrive at audit with gaps the framework will read as control failures.1,2,3,4
This research lays out how audit evidence half-life actually behaves across SOC 2, ISO 27001, PCI DSS, NIST CSF 2.0, HIPAA, and Cyber Essentials Plus. It covers the cadence rules that drive currency, the change triggers that invalidate evidence inside the calendar window, the difference between currency and completeness, the role of reproducible evidence, and the operational discipline that keeps a compliance programme audit-ready between assessments. The argument is not that twelve-month evidence is right or wrong. The argument is that evidence currency has to be paired to a control cadence, reproducible from a live record, and resistant to drift between audits.3,5,7,8
Half-life is not one number
When a security leader asks how long audit evidence is valid, the question collapses three separate sub-questions into one sentence. The first is the cadence question: how often does the framework expect the control to operate. The second is the observation question: what window does the audit cover and is the evidence inside it. The third is the change question: has the underlying control or asset shifted since capture in a way that breaks the evidence claim. Programmes that answer one of these three at the cost of the others end up with confident-looking artefacts that fail the audit read.
The cadence question is set by the framework and varies by control. PCI DSS Requirement 11.3 sets a quarterly cadence for vulnerability scans. SOC 2 CC7.1 expects ongoing detection rather than a fixed cadence. ISO 27001 Annex A 8.8 expects a cadence justified by the risk assessment. NIST CSF 2.0 frames cybersecurity as continuous risk management rather than a periodic checklist.1,2,3,5
The observation question is the contract between the entity and the auditor. SOC 2 Type 2 explicitly names the observation period in the engagement letter; evidence outside the period is not partial evidence, it is structurally invalid. ISO 27001 reads the surveillance interval as the implicit window. PCI DSS attaches a window to each requirement individually rather than to the assessment as a whole. The change question is the part most disputes are actually about, and it is the part most programmes have no written policy for.
Evidence cadence by framework
Cadence rules below are the de facto operating norms for the major regulated frameworks. They are not invariant. The risk assessment that sits behind each programme can justify a tighter cadence; the same risk assessment is rarely a defensible argument for a looser one.1,2,3,4,11
| Framework | Observation period | Cadence pattern |
|---|---|---|
| SOC 2 Type 2 | Named in the engagement letter, commonly 6 to 12 months. Evidence outside the window is invalid. | Trust Services Criteria expect ongoing operation; evidence has to populate the period rather than cluster at audit week. |
| ISO 27001:2022 | Surveillance interval (annual) with three-year recertification cycle. | Annex A controls each carry an expected cadence; documented in the ISMS and verified against artefacts produced inside the interval. |
| PCI DSS v4.0 | Annual assessment; per-requirement cadence rules layered inside the year. | Daily (10.4 log review), weekly (5.3 anti-malware), quarterly (11.3 scans), annual (11.4 pentest, 12.3 risk assessment). |
| NIST CSF 2.0 / SP 800-53 | Continuous; control assessments at organisation-defined frequency. | SP 800-53 control AU-11 sets retention; control-specific cadence set by the system security plan. |
| HIPAA Security Rule | Continuous; risk analysis and risk management activities documented over time. | Periodic technical and non-technical evaluation under 164.308(a)(8); cadence set by environment and operations. |
| Cyber Essentials Plus | Annual recertification with technical verification. | Five technical control areas each verified against current configuration; certificate is annual. |
The shared pattern is that cadence is per-control rather than per-programme. The shared failure mode is programmes that produce a single annual evidence pack against requirements with sub-annual cadence. The artefact is fresh by date stamp and complete by count, but the cadence never operated, so the evidence does not survive a careful read.3,4
Currency versus completeness
Two distinct properties shape audit acceptance, and they are routinely conflated. Currency asks whether an individual evidence artefact is recent enough to be valid against the requirement cadence. Completeness asks whether the set of artefacts covers every iteration of the cadence inside the observation period.
A weekly evidence requirement over a 52-week observation period needs 52 artefacts. Forty-eight artefacts covering forty-eight different weeks are incomplete by four. Forty-eight artefacts all dated inside the last fortnight are current on every artefact yet show the cadence never operated across the window. Both fail the audit read, but they fail differently and the remediation differs.
Programmes that optimise for date freshness produce evidence that is current but not complete. Programmes that backfill at audit week produce evidence that is complete by count but stale on currency. The durable answer is to operate the cadence in real time and produce evidence as a side effect of operation, not to assemble evidence as a separate audit project.
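The two properties can be checked mechanically by separating the date question from the coverage question. A minimal sketch, assuming each artefact carries a capture date and the cadence is a fixed interval; the field names and window arithmetic are illustrative:

```python
from datetime import date, timedelta

def cadence_report(artefact_dates, window_start, window_end,
                   cadence=timedelta(weeks=1)):
    """Separate currency (is each artefact inside the window?) from
    completeness (did every cadence iteration produce an artefact?)."""
    in_window = [d for d in artefact_dates if window_start <= d <= window_end]
    # Bucket each artefact into its cadence iteration (week index here).
    covered = {(d - window_start) // cadence for d in in_window}
    expected = (window_end - window_start) // cadence + 1
    return {
        "current": len(in_window),           # artefacts valid by date
        "expected_iterations": expected,     # cadence slots in the window
        "covered_iterations": len(covered),  # slots with at least one artefact
        "complete": len(covered) == expected,
    }
```

Forty-eight artefacts all dated in the final fortnight score 48 on currency but cover only two iterations out of fifty-two, which is exactly the distinction the prose above draws.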
Five triggers that invalidate evidence inside the window
Five trigger classes invalidate evidence before the calendar cadence fires. Programmes that watch only the calendar miss the invalidation events; programmes that watch only the change axis miss the cadence-based recertification. Both axes have to be wired in.
1. Material asset change
A system the evidence covers is replaced, re-architected, decommissioned, or migrated to a new platform. The original cadence still operated, but on an asset that no longer represents the in-scope estate. Configuration screenshots from the prior platform are not evidence of the current control state.
2. Material scope change
The boundary of the in-scope estate moves: a new business unit, new geography, new tenant, or new product line is added. Evidence that covers the prior boundary is partial against the new boundary. Auditors read the system description first and the evidence second; evidence that does not cover the stated boundary fails before the date check runs.
3. Material control change
The control the evidence demonstrates is modified, retired, or replaced. A WAF rule baseline captured under the prior policy is not evidence of the current policy. A change to the access review cadence from quarterly to monthly resets the cadence clock; quarterly evidence captured before the change is not evidence of the current cadence even if it is recent.
4. Material remediation gap
Findings the evidence references remain open past their SLA windows. The artefact still exists, the date stamp is still inside the observation period, but the artefact no longer evidences a working remediation control; it evidences a historical record of risk that has aged into current risk. The aging pentest findings research covers the long-tail accounting that the remediation-gap axis sits on top of.20
5. Material people change
The named owner, approver, or executor of the control is no longer in role and the new role-holder has not signed off the evidence chain. Approvals that name a former employee, access reviews executed by an offboarded reviewer, and policies acknowledged by departed staff fail the audit read even if the date is current. The evidence chain has to track ownership through role changes, not only through time.
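The five axes can be wired into one invalidation check that runs independently of the date check. A sketch under illustrative field names; the record shape is an assumption for this example, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    # Illustrative fields; names are assumptions for this sketch.
    asset_id: str
    scope_boundary: str
    control_version: str
    open_findings_past_sla: int
    owner: str

def invalidation_triggers(evidence: EvidenceRecord,
                          live: EvidenceRecord) -> list[str]:
    """Compare captured evidence against the live record on the five change
    axes; any hit invalidates the artefact regardless of its date stamp."""
    triggers = []
    if evidence.asset_id != live.asset_id:
        triggers.append("asset")        # platform replaced or migrated
    if evidence.scope_boundary != live.scope_boundary:
        triggers.append("scope")        # in-scope boundary moved
    if evidence.control_version != live.control_version:
        triggers.append("control")      # control modified or replaced
    if live.open_findings_past_sla > 0:
        triggers.append("remediation")  # findings aged past SLA
    if evidence.owner != live.owner:
        triggers.append("people")       # role-holder changed, chain not re-signed
    return triggers
```

A non-empty trigger list is the change-axis analogue of a failed date check: the artefact exists, but it no longer describes the current programme.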
Reproducible versus snapshot evidence
Evidence comes in two structural forms and they age differently. Snapshot evidence is captured at a point in time as a static artefact: a screenshot, a CSV export, a Word document, a PDF report. Snapshot evidence is fixed at creation and ages from that date forward. Reproducible evidence is generated from a live system of record at evidence-collection time and can be regenerated by the auditor at audit time. Reproducible evidence carries higher half-life because the currency question collapses into a query against the live record rather than a date check on a static file.5,8
| Evidence form | Half-life behaviour | Audit read |
|---|---|---|
| Static snapshot (screenshot, PDF) | Ages from capture date; cannot be refreshed without recapture; vulnerable to underlying-state drift. | Accepted if inside the window and the underlying control has not changed; flagged on any change axis trigger. |
| Time-stamped log export | Captures a precise period; complete only if the underlying log retention covers the observation window. | Strong evidence of operation if retention is intact; weak if gaps in the export are not explained. |
| Reproducible query against system of record | Half-life is bounded by the system of record itself; auditor regenerates at audit time. | Strongest evidence form; the currency question collapses into a query rather than a date check. |
| Narrative document | Cannot be regenerated; ages quickly and does not survive change events. | Accepted only as supporting context, not as primary evidence; auditors look for an underlying record. |
The operational discipline that follows from this distinction is to push evidence collection into the live system of record rather than into a separate evidence repository. Findings, remediation, and compliance status that live on the engagement record are reproducible by definition; the same data captured into a separate evidence pack is a static snapshot from the moment it is exported.
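One way to make the reproducibility property concrete is to define an evidence artefact as a deterministic query plus a digest of its result, so the auditor can re-run the query against the live record and compare digests. A minimal sketch; the dict-backed record store is a stand-in for a real system of record, not an actual API:

```python
import hashlib
import json

def generate_evidence(record_store: dict, query_keys: list[str]) -> dict:
    """Reproducible evidence = the query definition plus a digest of its
    result. Re-running the same query at audit time either reproduces the
    digest (state unchanged) or surfaces drift."""
    result = {k: record_store[k] for k in sorted(query_keys)}
    payload = json.dumps(result, sort_keys=True).encode()
    return {
        "query_keys": sorted(query_keys),
        "result": result,
        "digest": hashlib.sha256(payload).hexdigest(),
    }

def reproduces(artefact: dict, live_store: dict) -> bool:
    """Auditor-side check: regenerate from the live record and compare."""
    regenerated = generate_evidence(live_store, artefact["query_keys"])
    return regenerated["digest"] == artefact["digest"]
```

A digest mismatch is not a failure of the evidence pipeline; it is the change axis surfacing, which a static snapshot would have hidden.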
Common decay patterns and what they look like
Programmes that fail the evidence currency read tend to fail in recognisable patterns. The list below is the durable shape of the failure modes drawn from SOC 2 Type 2 examinations, ISO 27001 surveillance audits, PCI DSS QSA reviews, and HIPAA Security Rule evaluations.
The audit-week binge
Evidence is collected in a sprint immediately before audit. Date stamps cluster in the last two weeks of the observation period. The cadence the framework expects never operated; the artefacts exist only because the audit forced them. SOC 2 Type 2 reads this pattern as a control deficiency because the criteria expect ongoing operation, not periodic operation aimed at audit week.
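The binge pattern is detectable from the date stamps alone, before any cadence accounting runs. A sketch under an illustrative heuristic: if most artefacts land in the final fortnight of the window, the cadence likely never operated. The tail length and threshold are assumptions, not audit standards:

```python
from datetime import date, timedelta

def looks_like_binge(artefact_dates, window_end,
                     tail=timedelta(days=14), threshold=0.5):
    """Flag evidence sets whose date stamps cluster at the end of the
    observation period. Tail length and threshold are illustrative."""
    if not artefact_dates:
        return False
    in_tail = sum(1 for d in artefact_dates if d > window_end - tail)
    return in_tail / len(artefact_dates) >= threshold
```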
The orphaned artefact
An artefact is current and complete but cannot be tied to the system of record it claims to document. The CSV export does not match the live data. The screenshot is from a system the auditor cannot access. The narrative references a control owner who left the organisation. Orphaned evidence fails the reproducibility test even when the date and scope are intact.
The stale-finding queue
Vulnerability scan reports are produced on cadence, but the findings they list age past their SLA windows. The remediation control the evidence claims to demonstrate is not operating. PCI DSS 6.3.3 and ISO 27001 Annex A 8.8 both read this pattern as a remediation failure rather than as a scanning success. The scan evidence is undermined by the open-finding evidence that lives next to it on the same engagement record.
The frozen policy document
Policy documents are dated three years prior, name controls that have since been replaced, and reference owners who no longer hold the role. Policy evidence has long half-life only when the policy still describes the operating control; a frozen policy document has zero half-life if the control has changed, no matter how recent the document review date.
The narrative-only attestation
Compensating control rationales, risk acceptances, and exception decisions exist only as narrative text in shared documents. The eight-field acceptance pattern (linked finding, severity, compensating controls, residual likelihood, residual impact, business rationale, expiry, review cadence) is missing. Auditors cannot reconstruct the decision chain from a narrative; the exception evidence fails the audit read even when the underlying technical control is sound.
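The eight-field pattern is mechanically checkable before the auditor ever reads the narrative. A minimal sketch; the field names follow the pattern listed above and the record is assumed to be a flat mapping:

```python
# The eight fields of the acceptance pattern named in the text.
ACCEPTANCE_FIELDS = (
    "linked_finding", "severity", "compensating_controls",
    "residual_likelihood", "residual_impact", "business_rationale",
    "expiry", "review_cadence",
)

def missing_acceptance_fields(record: dict) -> list[str]:
    """Return the fields an exception record lacks; an empty list means
    the decision chain is reconstructable without the narrative."""
    return [f for f in ACCEPTANCE_FIELDS if not record.get(f)]
```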
Operational checklist for evidence currency
The programmes that handle evidence half-life cleanly converge on a small set of disciplines. The list below is the durable shape of that discipline, drawn from SOC 2 Trust Services Criteria, ISO/IEC 27007 audit guidance, PCI DSS QSA expectations, NIST SP 800-53 control requirements, and the HIPAA Security Rule periodic evaluation obligation.1,3,4,8,11
At programme design
- Each control has a documented cadence, owner, evidence type, and reproducibility source.
- The observation period is named per assessment and tied to each control mapping.
- The change-trigger policy names the asset, scope, control, remediation, and people axes.
- Evidence retention is documented per framework obligation and contractual requirement.
During the observation period
- Cadence operates in real time as a side effect of normal operation, not as audit-week capture.
- Reproducible evidence is generated from the live system of record rather than copied into a separate pack.
- Change triggers are monitored continuously against the change-trigger policy.
- Open findings and exceptions are tracked against SLA and review cadence so the evidence stays evidence.
At evidence collection
- Each artefact is paired to the control identifier and the framework mapping it serves.
- The system of record is named so the auditor can verify reproducibility.
- Date stamps are explicit (start, end) rather than relative phrases.
- Ownership through role changes is captured so the chain does not orphan on personnel transitions.
At audit
- The auditor regenerates a sample of reproducible evidence from the live system rather than only reading the artefact.
- Cadence completeness is verified per requirement, not only artefact currency.
- Open findings, exceptions, and risk acceptances are read alongside the supporting evidence.
- Change events inside the observation period are reconciled against the change-trigger policy.
How the engagement record carries evidence currency
Evidence half-life gets cleaner when the audit trail lives on the same engagement record the operational work lives on, rather than on a static evidence pack that diverges from operational reality after collection. The platform does not certify the evidence on the auditor side, but it makes the currency question reproducible and the audit trail self-documenting.
SecPortal pairs every finding, remediation action, retest, and exception to a versioned engagement record through findings management. CVSS vector, severity, owner, evidence, and remediation status are captured on the finding rather than in a separate spreadsheet, so currency is bounded by the live record rather than the export date.15 The engagement management layer keeps assessments, findings, reports, and remediation paired to one record so the audit narrative and the operational record do not diverge.17
The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks, with CSV export for auditors. Mapping happens on the live record, so the framework view of a control tracks the operational view rather than going stale between audits.14
The AI report generation workflow produces executive summaries, technical reports, remediation roadmaps, and compliance summaries from the same engagement data. Reports are regenerated from the live record rather than copy-pasted from stale drafts, so the narrative carries the same currency as the underlying data.16
The remediation tracking workflow and the vulnerability acceptance and exception management workflow keep the open-finding queue and the exception register on the same engagement record, so the remediation-gap axis of evidence currency is observable rather than hidden in a separate spreadsheet.18,19
For internal security and GRC teams
Internal security teams and GRC owners carry the evidence currency question between audits. The pattern that survives audit cycle after audit cycle is to operate cadence in real time, capture evidence as a side effect of operation rather than a separate audit project, and treat reproducibility as the primary evidence quality rather than as a nice-to-have.
- Document cadence per control rather than per programme so requirement-specific cadence rules survive contact with the audit read.
- Pair each artefact to the system of record it was generated from so reproducibility is not a question.
- Watch the change-trigger axes (asset, scope, control, remediation, people) continuously rather than at audit week.
- Treat aging open findings as an evidence currency signal, not only as a remediation backlog signal.
- Surface evidence on the same record the operational work lives on, not in a separate evidence repository.
For internal security teams, GRC and compliance teams, vulnerability management teams, AppSec teams, product security teams, security engineering teams, and cloud security teams, the operating commitment is to keep the evidence question reproducible at any moment between audits rather than only at audit week. The vulnerability remediation throughput research covers the closure-side discipline (cycle-time stages, in-SLA closure rate per severity band, exception-to-remediation ratio) whose audit evidence has to stay reproducible alongside it. The security control drift research covers the upstream side: how controls erode between audits along the asset, scope, ownership, configuration, and compensating-control axes that drive evidence half-life. The continuous control monitoring cadence research covers the operating cadence that drives evidence currency from the rhythm side: the per-control reconciliation frequency and the change-trigger policy that fires reads outside the calendar boundary. The security workflow orchestration research covers the wider operating model that evidence currency plugs into.
The operational artefact that turns the half-life discipline into a live ledger is the audit evidence tracker template: twelve sections that catalogue every control artefact with a source system, a cadence, a currency state, a named owner, and a retention class so the audit narrative regenerates from a live record rather than from a multi-team evidence-collection sprint.
The operating workflow that closes the gaps surfaced by stale or missing evidence is the control gap remediation workflow: each gap opens with control context, a named owner, a closure plan, an evidence requirement, and a verified-closure rule so a stale-evidence finding does not silently age into a control failure between audits.
The lifecycle that governs how long each artefact is retained, when legal holds suspend disposition, and how destruction is documented is the audit evidence retention and disposal workflow: each artefact carries a retention class at capture, the legal-hold register sits on the engagement record, quarterly disposition reviews run on the live record, and destruction certificates land on the activity log at the moment of disposal so the evidence half-life ends with a defensible closure rather than silent deletion.
For the scan-output side of the same discipline (scan executions, raw module output, findings, and activity logs), the scan evidence retention and governance guide covers retention per artefact class, framework floors (PCI DSS Requirement 10.5.1, ISO 27001 Annex A 5.33 and 8.10, SOC 2 CC7.1, NIST AU-11 and SI-12), privacy layering, and disposal as a controlled activity recorded on the activity log alongside the scan cadence.
For security leadership and audit committees
Security leaders and audit committees read evidence currency through a different lens than operational teams. The leadership read is whether the programme is durably audit-ready between assessments, not only ready at audit week. A programme that passes one audit and rebuilds evidence from scratch for the next one carries higher residual risk than the date stamps suggest, because the cadence never operated and the evidence chain is not reproducible after the audit team rotates.
- Track evidence cadence operation in real time alongside finding closure rate as a programme health metric.
- Read change-event reconciliation as a separate metric from cadence operation; both have to operate.
- Ask for reproducible evidence demonstrations between audits rather than waiting for the next assessment to surface drift.
- Tie evidence currency to remediation SLA performance because aging findings invalidate the underlying scan evidence regardless of capture date.
- Surface compensating controls, exceptions, and risk acceptances on the same dashboard as evidence currency so the residual risk view is one record rather than three.
The leadership question that drives this discipline is straightforward: if a regulator, customer, or auditor asked for current control evidence today, would the answer come from one query against the live record, or from a multi-team evidence-collection sprint? Programmes whose answer is the live record are durably audit-ready. Programmes whose answer is the sprint are accidentally audit-ready, and the accidental quality is the residual risk.
The leadership-side platform discipline that supports this is covered on SecPortal for CISOs and security leaders, which describes how findings, remediation, exceptions, retests, and reporting hold the audit-ready posture between assessments rather than at audit week.
Conclusion
Audit evidence half-life is two questions, not one, and the cadence question and the change question interact rather than operate independently. The cadence question has fairly tight bands of answers across regulated frameworks and varies per control rather than per programme. The change question has the most disagreement and the least documented policy across most programmes, and it is the part most evidence disputes are actually about. Currency, completeness, and reproducibility collapse into a single audit-ready answer when each artefact is paired to a control cadence, generated from a live system of record, and resistant to drift from the five change-trigger axes.1,2,3,4,5
Treating evidence half-life as a property of the live engagement record rather than as a date stamp on a static artefact is the highest-leverage discipline in compliance operations between audits. It keeps the audit trail current, it survives auditor and reviewer rotation, and it produces evidence that survives the second and third review cycle rather than expiring quietly in a shared drive. The platform you use does not have to write the evidence policy for you. It does have to make the policy reproducible and the audit trail self-documenting.
Sources
- AICPA, SOC 2 Trust Services Criteria (TSC) 2017 with 2022 Revisions
- ISO/IEC, ISO 27001:2022 Information Security Management
- PCI Security Standards Council, PCI DSS v4.0
- NIST, SP 800-53 Revision 5: Security and Privacy Controls for Information Systems and Organizations
- NIST, Cybersecurity Framework (CSF) 2.0
- NIST, SP 800-40 Rev. 4: Guide to Enterprise Patch Management Planning
- AICPA, SOC 2 Type 2 Reporting on an Examination of Controls Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy
- ISO/IEC, ISO/IEC 27007:2020 Guidelines for Information Security Management Systems Auditing
- CISA, Known Exploited Vulnerabilities Catalog
- CISA, Binding Operational Directive 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities
- HHS, HIPAA Security Rule (45 CFR Part 164 Subpart C)
- NCSC, Cyber Essentials Plus Technical Verification Requirements
- NIST, SP 800-115: Technical Guide to Information Security Testing and Assessment
- SecPortal, Compliance Tracking
- SecPortal, Findings & Vulnerability Management
- SecPortal, AI-Powered Security Reports
- SecPortal, Engagement Management
- SecPortal, Remediation Tracking Use Case
- SecPortal, Vulnerability Acceptance and Exception Management Use Case
- SecPortal Research, Aging Pentest Findings