Audit Evidence Tracker Template: one ledger for every control artefact
A free, copy-ready audit evidence tracker template. Twelve structured sections covering tracker scope and review cadence, evidence identification with linked control reference, evidence type and source system, evidence period and currency status, storage location and access controls with retention class, completeness and adequacy check, change triggers and re-capture conditions, lifecycle audit trail, gap and not-applicable handling, cross-references to operating records, closure or archive record, and tracker-level summary metrics. Aligned with ISO/IEC 27001 Clause 7.5 and Clause 9.2, NIST SP 800-53 AU-2 and CA-7, PCI DSS Requirement 12, SOC 2 CC4.1 and CC4.2, and the standard expectations across HIPAA, NIS2, DORA, FedRAMP, and HITRUST.
Generate evidence as a side effect of the work, not as an audit project
SecPortal captures findings, scans, retests, and exceptions on the same record auditors read, so the tracker entry points to the live system rather than to a screenshot.
No credit card required. Free plan available forever.
Twelve sections that turn shared-drive folders into a defensible audit evidence ledger
An audit evidence tracker is the cross-control ledger that catalogues every artefact a security or GRC programme produces in support of a control. The twelve sections below cover the durable shape of the tracker across ISO/IEC 27001 Clause 7.5 and Clause 9.2, NIST SP 800-53 AU-2 and CA-7, PCI DSS Requirement 12, SOC 2 CC4.1 and CC4.2, and the standard expectations across HIPAA, NIS2, DORA, FedRAMP, and HITRUST. Copy the section that fits your stage and paste the rest as you go.
The tracker is not the same artefact as a compliance control register. The control register lists the controls in scope and their status. The tracker lists the evidence that proves the control was operating, with a date, a source system, and an owner. The tracker is also not the same artefact as the policy that publishes the retention rule every entry inherits. Pair the tracker with the audit evidence retention policy template for the upstream document that names the per-class retention windows, the legal-hold rules, and the disposition workflow each tracker entry has to operate against. Pair the tracker with the security exception register template for the org-wide ledger of approved acceptances, the risk acceptance form template for per-decision rationale, the vulnerability remediation worksheet for per-finding work, the remediation SLA calculator for the policy targets that define when a finding needs an exception in the first place, and the vulnerability management programme scorecard for the programme-level maturity read that the per-control evidence pack supports.
Copy the full template (all twelve sections) as one block.
1. Tracker identification and scope
Capture the boundary of the tracker at the top so any reviewer reading any entry knows which programme, which estate, and which framework expectations it supports. ISO/IEC 27001 Clause 7.5.1, NIST SP 800-53 AU-2, and PCI DSS Requirement 12 all expect documented evidence to operate within a defined scope so an artefact that drifts outside scope is visible.
Tracker name: {{TRACKER_NAME}}
Owner function (Security, GRC, Audit Readiness): {{OWNER_FUNCTION}}
Tracker administrator (named individual): {{TRACKER_ADMINISTRATOR}}
Scope: {{IN_SCOPE_BUSINESS_UNITS_AND_CONTROL_DOMAINS}}
Frameworks the tracker supports (ISO 27001, SOC 2, PCI DSS, NIST SP 800-53, HIPAA, NIS2, DORA, FedRAMP, HITRUST, internal policy, other): {{FRAMEWORK_LIST}}
Audit periods covered: {{AUDIT_PERIODS}}
Last tracker-wide review date: {{TRACKER_REVIEW_DATE}}
Next tracker-wide review date: {{NEXT_TRACKER_REVIEW_DATE}}
Tracker-wide review cadence (quarterly default): {{TRACKER_REVIEW_CADENCE}}
2. Evidence identification and linked control
Every entry pairs to a persistent control reference, not to a report ID. Recording against the control lets the entry survive report regeneration, retest cycles, and engagement boundaries without the lineage breaking. A successor reviewer should be able to read which control the entry supports and which framework section asks for it without speaking to anyone who collected it.
Evidence identifier: {{EVIDENCE_REFERENCE}}
Linked control reference (Annex A control, NIST SP 800-53 control, PCI DSS requirement, internal policy clause): {{LINKED_CONTROL_REFERENCE}}
Control name (plain language): {{CONTROL_NAME}}
Framework reference (ISO 27001, SOC 2, PCI DSS, NIST, HIPAA, NIS2, DORA, FedRAMP, HITRUST, internal): {{FRAMEWORK_REFERENCE}}
Control owner (named individual responsible for operating the control): {{CONTROL_OWNER}}
Evidence owner (named individual responsible for capturing and maintaining the evidence): {{EVIDENCE_OWNER}}
Date raised in tracker: {{DATE_RAISED}}
Status: Pending capture / Captured / Current / Due / Expired / Reviewed / Archived
3. Evidence type and source system
Evidence categorises into a small number of durable types. Each entry names one type and one source system. Source-system attribution is what makes the evidence reproducible; without it the artefact is a static screenshot that cannot be re-validated when an auditor questions it. ISO 27001 Clause 7.5.3 and NIST SP 800-53 AU-9 both treat the protection and integrity of evidence records as a control in their own right.
Evidence type (select one):
[ ] Scan output (external, authenticated, code, network, cloud) with scan identifier and target
[ ] Configuration export (firewall, identity, cloud baseline, scanner schedule, retention policy)
[ ] Screenshot, console capture, dashboard export
[ ] Attestation or signed form (risk acceptance, exception approval, access review, change approval, control owner attestation)
[ ] Training record or awareness completion report
[ ] Change ticket, deployment record, remediation closure record
[ ] Retest report, retest finding closure record, retest evidence pack
[ ] Meeting record, committee minutes (security steering, risk committee, change advisory board)
[ ] Other: {{OTHER_EVIDENCE_TYPE}}
Source system (the live system the evidence was generated from): {{SOURCE_SYSTEM}}
Source system reference (scan ID, ticket ID, dashboard query, document ID): {{SOURCE_SYSTEM_REFERENCE}}
Reproducibility note (one line: how a successor reviewer regenerates this evidence): {{REPRODUCIBILITY_NOTE}}
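If you maintain the tracker as structured data rather than a document, one row per entry keeps Sections 2 to 4 queryable. The sketch below is illustrative only: the field and class names are hypothetical, chosen to mirror the template placeholders above, and do not describe a SecPortal schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch: field names mirror the template placeholders
# above; this is not a fixed or vendor-defined schema.
@dataclass
class EvidenceEntry:
    evidence_id: str            # {{EVIDENCE_REFERENCE}}
    control_ref: str            # linked control, e.g. an Annex A control
    evidence_type: str          # one of the Section 3 types
    source_system: str          # live system the artefact came from
    source_ref: str             # scan ID, ticket ID, dashboard query
    reproducibility_note: str   # how a successor regenerates it
    generation_date: date       # when this evidence was captured
    next_due: date              # next regeneration date
    status: str = "Pending capture"

entry = EvidenceEntry(
    evidence_id="EV-2024-017",
    control_ref="ISO 27001 A.8.8",
    evidence_type="Scan output",
    source_system="Vulnerability scanner",
    source_ref="scan-4481",
    reproducibility_note="Re-run the weekly authenticated scan profile",
    generation_date=date(2024, 4, 1),
    next_due=date(2024, 7, 1),
)
```

A row like this answers the successor-reviewer test in one read: which control, which system, and how to regenerate the artefact.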
4. Evidence period and currency status
Evidence currency is set per control rather than per programme. Capture the calendar window the evidence covers, the next-due date for the next generation, and the current state of the cadence. SOC 2 CC4.1 and CC4.2, ISO 27001 Annex A 8.8, PCI DSS 11.3, and NIST SP 800-53 CA-7 all expect evidence to operate on a documented cadence so an entry whose cadence has slipped is visible.
Generation date (when this evidence was captured): {{GENERATION_DATE}}
Evidence period (calendar window this evidence covers): {{EVIDENCE_PERIOD_START}} to {{EVIDENCE_PERIOD_END}}
Cadence for this control (daily, weekly, monthly, quarterly, semi-annual, annual, trigger-based, ad hoc): {{CADENCE}}
Next-due date (next time this evidence has to be regenerated): {{NEXT_DUE_DATE}}
Currency status (select one):
[ ] Current: within cadence window, generation complete
[ ] Due: cadence window open, generation not yet complete
[ ] Expired: cadence window closed without fresh generation; control operating without supporting evidence
[ ] Trigger-due: a change-trigger event opened a new cadence window
[ ] Not applicable: control was out of operation during this period (record reason in Section 9)
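The first three statuses above can be derived from dates alone, which makes currency drift detectable by a script rather than a reviewer. A minimal sketch, assuming the cadence window is treated as opening one cadence-length before the next-due date (that window rule is an assumption, not a framework requirement):

```python
from datetime import date, timedelta

def currency_status(generation_date, next_due, cadence_days, today=None):
    """Derive the Section 4 currency status from dates alone.

    Illustrative rule: the cadence window runs from
    (next_due - cadence_days) to next_due.
    """
    today = today or date.today()
    window_open = next_due - timedelta(days=cadence_days)
    if generation_date >= window_open:
        return "Current"   # fresh capture inside the open window
    if today <= next_due:
        return "Due"       # window open, capture still outstanding
    return "Expired"       # window closed without a fresh capture
```

Trigger-due and not-applicable states still need the human-recorded events in Sections 7 and 9; only the calendar states are derivable.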
5. Storage location and access controls
Where the evidence lives matters as much as that it exists. Capture the storage location, the access controls that gate it, and the retention class so a reviewer can confirm the artefact is still recoverable and still protected. ISO 27001 Annex A 5.33 and NIST SP 800-53 AU-9 expect evidence storage to be controlled and durable.
Primary storage location (workspace, document repository, ticket system, scan record): {{PRIMARY_STORAGE}}
Storage path or reference: {{STORAGE_REFERENCE}}
Access control (who can read, who can edit, who can export): {{ACCESS_CONTROL}}
Encryption at rest: Yes / No / Not applicable - rationale: {{ENCRYPTION_RATIONALE}}
Backup or secondary copy location: {{BACKUP_LOCATION}}
Retention class (set by strictest applicable framework):
[ ] One year minimum (PCI DSS 10.5, default short-cycle)
[ ] Three years minimum (NIST SP 800-53 AU-11, default federal)
[ ] Three to seven years (ISO 27001 Annex A 5.33, common ISMS retention)
[ ] Six years minimum (HIPAA 45 CFR 164.316)
[ ] Audit period plus customer issuance window (SOC 2 typical practice)
[ ] Other: {{OTHER_RETENTION_RATIONALE}}
Retention end date: {{RETENTION_END_DATE}}
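The "strictest applicable framework" rule means the retention end date is the maximum over every class the entry falls under. A sketch, with a hypothetical class-to-years mapping that you should confirm against the cited framework text before relying on it:

```python
from datetime import date

# Hypothetical mapping from Section 5 retention classes to minimum
# retention in years; verify each floor against the framework text.
RETENTION_YEARS = {
    "pci_short_cycle": 1,   # PCI DSS one-year minimum
    "federal_default": 3,   # NIST SP 800-53 AU-11 common practice
    "hipaa": 6,             # 45 CFR 164.316
}

def retention_end(capture_date: date, classes: list[str]) -> date:
    """Apply the strictest (longest) applicable class, per Section 5."""
    years = max(RETENTION_YEARS[c] for c in classes)
    return capture_date.replace(year=capture_date.year + years)
```

So evidence captured on 2024-03-31 that is in both PCI DSS and HIPAA scope inherits the six-year HIPAA floor, not the one-year PCI floor.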
6. Completeness and adequacy check
Existence of an artefact is not the same as adequacy for audit. Check whether the evidence covers the calendar window completely, whether the artefact captures the underlying control activity rather than only its output, and whether the evidence answers the framework question the auditor will ask. SOC 2 CC4.1 and ISO 27001 Clause 9.2 both treat evidence adequacy as a separate question from evidence existence.
Calendar coverage check:
[ ] Evidence covers the full calendar window (no gaps)
[ ] Gap detected (record start, end, and reason in lifecycle log)
Activity coverage check:
[ ] Evidence captures the control activity itself (e.g. scan ran, review meeting occurred, change approved)
[ ] Evidence captures the control output only (e.g. dashboard summary, scoreboard) - rationale for adequacy: {{OUTPUT_ONLY_RATIONALE}}
Audit question this evidence is intended to answer:
{{AUDIT_QUESTION}}
Cross-reference to other evidence supporting the same control (where applicable): {{CROSS_REFERENCE}}
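The calendar coverage check above is mechanical when the tracker records the period each artefact covers. A sketch that walks the sorted evidence periods across the audit window and returns any uncovered sub-windows (function and parameter names are illustrative):

```python
from datetime import date, timedelta

def coverage_gaps(window_start, window_end, periods):
    """Find the calendar gaps the Section 6 check looks for.

    `periods` is a list of (start, end) date tuples the captured
    evidence covers; returns the uncovered sub-windows, if any.
    """
    gaps = []
    cursor = window_start
    for start, end in sorted(periods):
        if start > cursor:
            # uncovered stretch before this evidence period begins
            gaps.append((cursor, start - timedelta(days=1)))
        cursor = max(cursor, end + timedelta(days=1))
    if cursor <= window_end:
        # uncovered tail after the last evidence period
        gaps.append((cursor, window_end))
    return gaps
```

Any tuple this returns becomes a gap record in Section 9 rather than a silent hole in the evidence trail.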
7. Change triggers and re-capture conditions
Calendar cadence catches the time-driven evidence need. Change triggers catch the event-driven evidence need. List the trigger conditions that would invalidate the entry inside the calendar window so the next-due date is corrected when a trigger fires. ISO 27001 Clauses 8.1 and 8.3, PCI DSS Requirement 11.3.1, and NIST SP 800-53 RA-5 all expect change-driven evidence on top of cadence-driven evidence.
Trigger conditions that invalidate this evidence inside the calendar window (select all that apply):
[ ] Material asset change (system replaced, re-architected, decommissioned, migrated)
[ ] Material scope change (boundary moves to add a business unit, geography, tenant, or product line)
[ ] Material control change (control modified, retired, replaced)
[ ] Material configuration change (firewall rule, identity policy, scanner schedule, retention policy edited)
[ ] Material people change (control owner or evidence owner no longer in role)
[ ] Material exploit change (CISA KEV listing, public exploit, active campaign on the underlying issue)
[ ] Vendor advisory or third-party assessment change (vendor patch released, assessor finding raised)
[ ] Incident or near-miss involving the underlying control
[ ] Other: {{OTHER_TRIGGER_CONDITION}}
Trigger response plan: {{TRIGGER_RESPONSE_PLAN}}
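When a trigger fires, the operational rule is simple: the entry's next-due date moves to now and the event lands in the lifecycle log. A minimal sketch of that rule, with illustrative field names rather than a fixed schema:

```python
from datetime import date

def apply_trigger(entry: dict, trigger: str, fired_on: date) -> dict:
    """Sketch of the Section 7 rule: a fired trigger opens a new
    cadence window immediately, overriding the calendar next-due date.
    """
    entry["status"] = "Trigger-due"
    entry["next_due"] = fired_on  # regeneration due now, not at the old date
    entry.setdefault("lifecycle_log", []).append(
        {"date": fired_on, "event": f"trigger fired: {trigger}"}
    )
    return entry
```

The lifecycle log line is what lets a successor reviewer see why the cadence window moved inside the calendar period.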
8. Lifecycle events and audit trail
Every status transition on the entry carries a date, an actor, and a reason. The lifecycle log is what makes the entry defensible at audit time. ISO 27001 Clause 9.3, NIST SP 800-53 AU-2 and AU-3, SOC 2 CC4.1, and PCI DSS Requirement 10 all expect a documented record of evidence lifecycle events.
Lifecycle log (one line per transition: date, actor, from-status, to-status, reason):
{{LIFECYCLE_LOG}}
9. Gap and not-applicable handling
A tracker that pretends every entry is current is the tracker that fails the audit read. When evidence is missing, expired, or genuinely not applicable, record the gap explicitly so the auditor can see a deliberate decision rather than read the silence as a control failure. ISO 27001 Clause 9.2, SOC 2 CC4.2, and PCI DSS 12.10 all expect documented handling of evidence gaps.
Gap or not-applicable record (complete only if evidence is missing or out of scope):
- Gap type:
[ ] Capture missed inside cadence window
[ ] Source system unavailable during capture window
[ ] Control not in operation during this period (decommissioned, paused, in transition)
[ ] Genuinely not applicable to this scope (record rationale)
[ ] Other: {{OTHER_GAP_TYPE}}
- Gap window (start, end): {{GAP_WINDOW}}
- Detection date and detector: {{GAP_DETECTION}}
- Compensating evidence relied on during the gap (if any): {{COMPENSATING_EVIDENCE}}
- Linked exception register entry (if a formal exception was raised): {{LINKED_EXCEPTION}}
- Plan to close the gap or document permanent not-applicability: {{GAP_CLOSURE_PLAN}}
- Closure or permanent not-applicable approval (named individual, date): {{GAP_CLOSURE_APPROVAL}}
10. Cross-references to operating records
Audit evidence sits next to the operating records it derives from. Linking each entry to the live finding, the live engagement, the live exception, and the live retest keeps the audit narrative one record rather than three. The auditor question "show me the underlying activity that produced this evidence" can be answered with one query rather than a multi-team evidence-collection sprint.
Linked finding identifiers (canonical record): {{LINKED_FINDINGS}}
Linked engagement reference (assessment, audit, internal review): {{LINKED_ENGAGEMENT}}
Linked retest report or retest pack: {{LINKED_RETEST}}
Linked exception register entry (if evidence covers an active exception): {{LINKED_EXCEPTION_ENTRY}}
Linked remediation worksheet (if evidence is the fix evidence for a finding): {{LINKED_REMEDIATION_WORKSHEET}}
Linked policy or standard clause: {{LINKED_POLICY_CLAUSE}}
Linked vendor advisory or third-party assessment: {{LINKED_VENDOR_REFERENCE}}
11. Closure, archive, or supersede record
Closure is one of: archived inside retention because the calendar window has closed, superseded by a fresh capture, retired because the underlying control is decommissioned, or destroyed because retention has expired. Silent archival is the most common failure pattern and is not a closure; record the closure type so the lifecycle is visible to a successor reviewer.
Closure type (select one):
[ ] Archived inside retention: cadence window closed, evidence retained for the retention class period.
[ ] Superseded: new evidence covers the next cadence window. Reference to superseding entry: {{SUPERSEDING_REFERENCE}}
[ ] Retired: underlying control decommissioned or out of scope. Decommission reference: {{DECOMMISSION_REFERENCE}}
[ ] Destroyed: retention window expired and evidence destroyed in line with policy. Destruction reference: {{DESTRUCTION_REFERENCE}}
Closure date: {{CLOSURE_DATE}}
Closure approver (named individual): {{CLOSURE_APPROVER}}
Closure evidence references:
- Retention policy clause applied: {{RETENTION_POLICY_CLAUSE}}
- Destruction certificate or archive log reference: {{ARCHIVE_LOG_REFERENCE}}
12. Tracker-level summary metrics
A tracker without aggregate metrics cannot answer the leadership question. Capture the cumulative position so programme review reads the audit-readiness picture rather than only the per-entry detail. Review these metrics at every tracker-wide review and report them to the risk or audit committee.
Reporting period: {{REPORTING_PERIOD_START}} to {{REPORTING_PERIOD_END}}
Counts at end of reporting period:
- Entries by status: Pending capture {{COUNT_PENDING}} / Captured {{COUNT_CAPTURED}} / Current {{COUNT_CURRENT}} / Due {{COUNT_DUE}} / Expired {{COUNT_EXPIRED}} / Archived {{COUNT_ARCHIVED}}
- Entries by framework: ISO 27001 {{COUNT_ISO}} / SOC 2 {{COUNT_SOC2}} / PCI DSS {{COUNT_PCI}} / NIST {{COUNT_NIST}} / HIPAA {{COUNT_HIPAA}} / Other {{COUNT_OTHER}}
- Entries by evidence type (scan output, configuration export, screenshot, attestation, training record, change ticket, retest, meeting record, other): {{COUNTS_BY_TYPE}}
- Entries approaching cadence window close within 30 days: {{COUNT_NEAR_DUE}}
- Entries past their next-due date: {{COUNT_PAST_DUE}}
- Entries with active gap records: {{COUNT_WITH_GAP}}
- Entries with linked exception register entries: {{COUNT_WITH_EXCEPTION}}
Trend versus prior period:
- Net change in current-status entries: {{NET_CHANGE_CURRENT}}
- Net change in past-due entries: {{NET_CHANGE_PAST_DUE}}
- Net change in gap records: {{NET_CHANGE_GAPS}}
- Outstanding actions from prior tracker-wide review: {{OUTSTANDING_PRIOR_ACTIONS}}
Report distribution (named recipients, channels, dates):
{{REPORT_DISTRIBUTION_LIST}}
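With entries held as structured rows, the counts above fall out of one aggregation pass. A sketch, assuming each entry dict carries 'status', 'framework', and 'next_due' keys mirroring the template fields (names are illustrative):

```python
from collections import Counter
from datetime import date, timedelta

def summary_metrics(entries, today=None):
    """Aggregate the Section 12 counts from a list of entry dicts.

    Sketch only: assumes 'status', 'framework', and 'next_due' keys.
    """
    today = today or date.today()
    return {
        "by_status": Counter(e["status"] for e in entries),
        "by_framework": Counter(e["framework"] for e in entries),
        # cadence window closing within 30 days
        "near_due_30d": sum(
            1 for e in entries
            if today <= e["next_due"] <= today + timedelta(days=30)
        ),
        # past the next-due date with no fresh capture recorded
        "past_due": sum(1 for e in entries if e["next_due"] < today),
    }
```

Run the same function against the prior period's snapshot to produce the trend deltas the committee report asks for.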
Six failure modes the tracker has to design against
The tracker fails the audit read in recognisable patterns. Each failure has a structural fix that the template above is designed to enforce. Read this list before you customise the template so the customisation does not weaken the discipline that makes the tracker defensible.
The tracker lives in shared-drive folders nobody owns
Evidence sits across mail attachments, shared-drive folders, slide decks, and per-engagement Dropbox links nobody can search at audit time. The first audit question (show me every piece of evidence supporting Annex A 8.8 across the period) cannot be answered in a session. The fix is one tracker, one row per evidence artefact, one pointer to the source system.
No source-system attribution
Entries record the artefact but not the live system the artefact came from. When an auditor questions whether the screenshot is from the right scan or the export from the right configuration, the team cannot regenerate the evidence and the artefact becomes a defensive liability. Every entry needs a source system, a reference into the source system, and a reproducibility note.
Currency drift goes silent
Evidence is captured once, the cadence slips, and the next-due date passes without a fresh capture. The control is operating, but the evidence trail is now expired. By audit week, three quarters of cadence-due evidence is missing and the team scrambles to backfill, which itself fails the reproducibility read because backfilled evidence rarely ties to the calendar window it was supposed to cover. Mark every entry due, current, or expired and treat expired as a hard control event.
Evidence captures output, not activity
Many tracker entries record the dashboard summary or the scoreboard rather than the underlying control activity. A "vulnerability scan completed" tile is not the same as a scan record with target, schedule, and authentication state. The audit reads the absence of the underlying activity record as the absence of the control. Capture the activity at the source where possible; if only output is available, record the rationale on the entry.
No handling for gaps and not-applicable controls
When a control was not in operation for a period (decommissioning, transition, scope change), the tracker either pretends the evidence is current or leaves the entry blank. Both fail the audit read. Record the gap explicitly with start, end, reason, and compensating evidence; record permanent not-applicability with a rationale and an approver. The audit reads a documented gap as a managed control event; it reads silence as a failure.
No retention class on the entry
Entries are captured without a retention class, then either destroyed too early because nobody set the retention end date, or hoarded indefinitely because nobody set the retention policy. Both produce audit risk: early destruction breaches the framework retention floor; indefinite retention creates legal-discovery exposure. Set the strictest applicable retention class on every entry at capture and record the retention end date so the closure path is automatic.
Ten questions a quarterly tracker review has to answer
Per-entry review keeps each piece of evidence current. Tracker-wide review answers the cumulative question: is the evidence trail durably audit-ready, and is the programme on top of regenerations, gaps, and retentions? Run these ten questions at every quarterly review and capture the answers in the tracker-level summary section.
1. How many entries are in current status at the end of the period, broken out by framework and evidence type?
2. How many entries are approaching their next-due date within thirty days, and is the regeneration pipeline ready for them?
3. How many entries are past their next-due date, and what is the resolution path for each (regeneration, gap record, supersession, retirement)?
4. Which controls have only output-based evidence, and is activity-based evidence available from the source system instead?
5. Which controls have evidence with no source-system attribution, and what is the plan to add reproducibility?
6. Which entries are linked to exception register entries, and is the residual risk position consistent with the cumulative exception view?
7. Which entries lost their owner during the period because the named individual moved roles, and who is the new owner?
8. How does the cumulative current-status entry count compare to the prior period, and what is the trend across the last four periods?
9. Which entries have not been reviewed in the period the tracker policy expects, even if they are still inside their cadence window?
10. Which entries are visible on the client portal (where applicable) and which are not, and is the visibility position still appropriate?
How the tracker pairs with SecPortal
The template above is copy-ready as a standalone artefact. If your team already runs finding tracking, scan execution, and compliance evidence on a workspace, the tracker becomes the byproduct of the work rather than a separate document. SecPortal pairs every piece of evidence to the live record it derives from through findings management (CVSS 3.1 scoring, finding lifecycle, retest evidence) and engagement management (engagement scope, attached documents, deliverables), so the lineage from the underlying activity to the evidence entry is one record rather than a folder glued to a report.
The compliance tracking feature maps findings and controls to ISO 27001, SOC 2, Cyber Essentials, PCI DSS, and NIST frameworks with CSV export, so the tracker can be sliced by framework when an auditor asks for the evidence pack against a specific control set. The activity log captures status transitions on every entity (finding, engagement, scan, document, comment, invoice, team) by user and timestamp, so the lifecycle log in Section 8 is recorded automatically as a side effect of the work and the reproducibility note in Section 3 points to a live audit record rather than a static file.
The team management feature carries the role-based access control that decides who can capture evidence, who can approve a gap record, and who can run the tracker-wide review. The multi-factor authentication control gates workspace access at AAL2 so the audit reads the access record as a documented control rather than a hope. The AI report generation workflow can produce a summary view of current evidence, near-due entries, and gap records for risk-committee reporting from the same engagement data.