Second-Order SQL Injection
detect, understand, remediate
Second-order SQL injection (also called stored or persistent SQL injection) stores a tainted value through one safe-looking endpoint, then triggers the injection on a later query that reads the stored value into an unsafe SQL string. The vulnerable sink is rarely the endpoint that took the input. Standard reflected SQLi tests miss it because the payload does not fire until a different code path runs.
What is second-order SQL injection?
Second-order SQL injection (also called stored or persistent SQL injection) is the SQL injection class where the tainted value is stored by one request and triggered by a later request. The endpoint that accepted the input often parameterises its own query correctly, so the input reaches the database safely. The vulnerable sink lives in a different handler, a scheduled job, an admin tool, or a reporting query, where the stored value is read back and concatenated into an SQL string with no further sanitisation. The bug surfaces when the second query runs, sometimes minutes or hours after the input was supplied.
The class sits inside CWE-89 alongside the standard first-order SQL injection most testers know, but the detection and remediation conversation is materially different. A first-order injection is reproducible from a single request and shows up in any DAST tool that mutates parameters and watches the response. A second-order injection is reproducible only when the tester replays the stored value through the application surface that consumes it. Standard dynamic scanning misses second-order paths because the trigger does not fire on the same request as the input.
Real-world examples span profile fields read by admin dashboards, search history rows replayed into recommendation queries, audit log entries reused in compliance reports, and message bodies indexed by full-text search jobs. The shared pattern is an input layer that treats the field as data and a reader layer that treats the field as identifier or fragment. The persistence step lets the payload survive the input-validation barrier; the reader step is where the injection actually executes.
How a second-order finding plays out on a pentest
Plant the payload
The tester submits a value that contains SQL syntax through a write endpoint that the application stores. The write succeeds because the insert was parameterised; the value lands in the column without being executed.
Trigger the read path
The tester walks the application until a feature reads the stored value back. Profile pages, admin search, audit reports, scheduled jobs, and notification renderers are common consumers. The trigger is whatever request makes that reader run.
Observe the second query
If the reader concatenates the stored value into an SQL string, the planted payload executes against the database in the context of the reader. Errors, timing differences, or out-of-band callbacks confirm execution.
Walk the impact
The tester documents the storage step, the trigger step, the reader query, and the data the injection reached. The finding records both endpoints, the stored value, and the request and response evidence at each stage.
Where second-order SQL injection tends to live
The patterns below cover most second-order findings on web, API, and back-office engagements. Each one shares the same shape: a write path that handles input correctly and a read path elsewhere in the stack that does not.
Profile and account fields read by admin tools
A user-controlled field (display name, bio, address, custom attribute) is stored safely through the profile API but rendered through an internal admin search that builds an SQL query from the stored value. The admin sees the dashboard fail; the database sees the injection execute.
Stored values consumed by scheduled jobs
A nightly billing run, an analytics rollup, or a cleanup job reads stored values into SQL fragments. The trigger is the cron schedule, not the originating request, so detection requires waiting for the job or invoking it manually.
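The cron-triggered shape can be sketched at the string level; the table, column, and function names below are illustrative, not taken from a specific codebase:

```python
# Hypothetical sketch: a nightly billing job reads a stored discount code
# and interpolates it into its rollup query. The write that stored the
# code happened weeks earlier, through a correctly parameterised insert.
def nightly_rollup_query(stored_code: str) -> str:
    # Vulnerable: the stored value is concatenated, not bound
    return (
        "SELECT SUM(amount) FROM invoices "
        f"WHERE discount_code = '{stored_code}'"
    )

# A value planted through the discount-code form rewrites the rollup:
planted = "X' OR '1'='1"
query = nightly_rollup_query(planted)
```

The planting request returns a benign response; the injection only materialises in the query string the job composes at its next scheduled run.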
Search history and saved queries
Saved searches, recent search history, and recommendation features pull stored search strings back into LIKE clauses or ORDER BY fragments. The reader concatenates the stored value where parameterisation no longer applies cleanly.
Audit log entries reused in compliance queries
Audit tables collect free-text reason fields and identifiers from many writers. A compliance report or an export job reads them back into composite queries that mix log rows with current state.
Message bodies replayed by indexers
Comments, support tickets, and notes captured through validated forms get re-indexed by full-text or external search jobs. The indexer rewrites stored content into SQL or stored-procedure fragments that lose the input-time protections.
Configuration values written by users and read by services
Tenant settings, webhook destinations, custom field definitions, and saved filters live in the database as user-controlled values. Services that read them back into queries (especially for filter SQL or dynamic WHERE clauses) become the trigger surface.
Identifier fields used as column or table names
A stored value that should be data (a tag, a label, a category code) ends up referenced as an identifier in a later query. Dynamic SQL that interpolates identifiers from stored values has the strongest second-order exposure because parameterisation does not apply to identifiers at all.
ORDER BY and pagination fields persisted with user preferences
A "remember sort order" preference saves a column name into a settings row, then the listing page concatenates the stored column into an ORDER BY clause. Parameterisation does not protect ORDER BY targets, so the read path is a textbook injection sink.
A worked example: stored display name in an admin search
A common shape on B2B engagements is a profile API that lets the user set a display name, paired with an admin support tool that searches across users by typing into a free-text box. The profile API parameterises its insert correctly. The admin search builds its query by string concatenation because the developer wanted to compose multiple LIKE clauses dynamically.
# Safe: profile update parameterises the insert
@app.post("/api/profile")
def update_profile(req: ProfileRequest, user = Depends(current_user)):
    db.execute(
        "UPDATE users SET display_name = %s WHERE id = %s",
        (req.display_name, user.id),
    )

The same value gets read back by the internal admin search, which composes a query string from a list of LIKE clauses across multiple columns. The display_name value is interpolated into the SQL string rather than passed as a parameter:
# Vulnerable: admin search interpolates a stored value
@app.get("/admin/users")
def admin_search(q: str):
    rows = db.execute(
        f"SELECT id, email, display_name FROM users "
        f"WHERE display_name LIKE '%{q}%' "
        f" OR email LIKE '%{q}%' "
        f" OR id::text = '{q}'"
    )
    return rows

The injection does not happen on the admin search field. The pentester sets their display name to a value that closes the LIKE clause and appends a UNION, then asks an admin to run a search that hits the row. The display_name read back into the SQL string makes the planted payload execute. The proof is the request that planted the value, the admin request that triggered the read, and the database response that returned the unioned row set.
# Tester action 1: store a payload through the safe write endpoint
POST /api/profile body: {"display_name": "alice%' UNION SELECT id, password_hash, email FROM users -- "}
# Tester action 2: trigger the unsafe read with a search that lists the row
GET /admin/users?q=alice
# The admin search composes:
SELECT id, email, display_name FROM users
WHERE display_name LIKE '%alice%' UNION SELECT id, password_hash, email FROM users -- %'
  OR email LIKE '%alice%' UNION ...
  OR id::text = 'alice%' UNION ...
# Result: the union returns password hashes through the admin search response.

The fix is in two places, not one. The reader query has to use parameterisation for the LIKE pattern, and the application has to commit to the rule that any column read back into an SQL string is treated as untrusted regardless of who originally wrote it. Recoding the reader to use parameter binding closes this finding; auditing every other reader for the same pattern closes the class.
# Fixed: the admin search parameterises every input
@app.get("/admin/users")
def admin_search(q: str):
    pattern = f"%{q}%"
    rows = db.execute(
        "SELECT id, email, display_name FROM users "
        "WHERE display_name LIKE %s "
        "  OR email LIKE %s "
        "  OR (CASE WHEN %s ~ '^[0-9]+$' THEN id = %s::int ELSE FALSE END)",
        (pattern, pattern, q, q),
    )
    return rows

On a SecPortal engagement, the finding is recorded against both the storage endpoint (as the path that allowed the payload to land) and the reader endpoint (as the path that executed it). The CVSS vector reflects the full impact of the reader query: confidentiality is high because the union returned credential material, integrity depends on whether the reader runs INSERT or UPDATE, and availability depends on whether the planted value can break the reader for other users. The retest verifies that the reader rejects the same stored payload and that the auditing rule (treat every column read into SQL as untrusted) is documented in the codebase.
How to detect second-order SQL injection
Automated detection
- SecPortal's authenticated scanner stores tainted candidates through write endpoints (profile fields, settings, comments, custom labels) and replays them across read paths to detect when a stored value reaches an unsafe SQL string downstream.
- The code scanner flags handlers that read database columns and pass the value into string-formatted SQL, dynamic SQL builders, or ORDER BY/identifier slots, so the read-path sinks surface even when the input layer is safe.
- SAST taint tracking treats every read from a user-controlled column as a tainted source. Tools that only follow request-scoped taint miss second-order paths; tools that include database columns in the taint model surface the cross-request flow.
- Out-of-band detection (DNS or HTTP callbacks from inside the database) confirms execution when the trigger is a scheduled job rather than an interactive request. The callback fires when the cron runs, even though the planting request returned a benign response hours earlier.
Manual testing
- Inventory every write endpoint that persists user-controlled strings, then map every read path that consumes those columns. The inventory is the test surface; an injection sink that nobody reads is not a finding, but a sink that any feature reads is high priority.
- Plant payloads that survive the input-validation barrier without breaking obvious application behaviour. A leading or trailing payload that the renderer truncates is acceptable on storage but invisible at trigger; a payload wrapped in syntax that the writer trusts is the high-yield candidate.
- Use timing-based payloads (SLEEP, pg_sleep, WAITFOR DELAY) when error responses are suppressed. The stored value triggers a measurable delay on the reader path even when the response body looks identical.
- Use out-of-band channels (DNS lookups, HTTP requests) when interactive responses are unavailable. A stored payload that triggers a callback when a nightly job runs is reproducible at the next run and survives narrow scanning windows.
- Walk admin and back-office tooling explicitly. Internal-only readers are the most common second-order sink because their query patterns predate the public API discipline. Any feature that lists, exports, or aggregates user-controlled data is a candidate.
How to fix second-order SQL injection
Treat every database read as a tainted source
A column populated by user input remains user-controlled forever. Any code that reads such a column and uses the value in an SQL string is the same risk as a controller reading the value from an HTTP request. Build the codebase around this rule rather than around the assumption that storage launders trust.
Parameterise every reader, not just the writers
The first-order discipline (parameterise inserts and updates) is well understood. The second-order discipline is the same rule applied to selects, deletes, and dynamic queries that include stored values in the WHERE, LIKE, ORDER BY, or identifier slots. Audit existing readers for string concatenation against any column the application allows users to write.
Stop interpolating identifiers from stored values
Column names, table names, sort directions, and SQL fragments cannot be parameterised. If a feature requires those values to be configurable, use an allow-list lookup that maps the stored token to a known identifier rather than interpolating the stored value directly. The lookup turns a SQL injection sink into a controlled mapping.
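One way to implement that lookup, sketched with hypothetical token and column names:

```python
# Sketch of an allow-list lookup for a stored sort preference. The stored
# token is only ever used as a dictionary key; only identifiers the
# application already knows about ever reach the SQL string.
SORT_COLUMNS = {
    "created": "created_at",
    "total": "total_amount",
    "name": "display_name",
}

def order_by_clause(stored_token: str) -> str:
    column = SORT_COLUMNS.get(stored_token)
    if column is None:
        # A planted value falls through here instead of into the query
        raise ValueError(f"unknown sort column: {stored_token!r}")
    return f"ORDER BY {column}"
```

The mapping also documents exactly which identifiers are legitimately configurable, which makes the read path auditable in a way raw interpolation never is.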
Apply least privilege to read-path database accounts
A reporting query that runs as a high-privilege account amplifies the impact of a second-order injection. Separate read-path accounts from write-path accounts, and grant read-only or scoped permissions to the readers. The blast radius of a stored payload is bounded by what the reader's account is allowed to do.
Add second-order test cases to the regression suite
Most code reviews catch first-order patterns reflexively because the input handler is right there in the diff. Second-order patterns slip through because the reader and the writer change in different commits. A test that plants a payload in a write call and then asserts the reader rejects it makes the cross-request invariant explicit and durable.
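A self-contained sketch of such a test, using an in-memory sqlite3 database so the whole plant-then-read flow runs in one process (sqlite uses ? placeholders where the examples above use %s; the schema is illustrative):

```python
import sqlite3

# The payload from the worked example, stored as inert data by the writer
PLANTED = "alice%' UNION SELECT id, password_hash, email FROM users -- "

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, display_name TEXT)"
    )
    # Writer: parameterised insert, the payload lands in the column safely
    db.execute(
        "INSERT INTO users (email, display_name) VALUES (?, ?)",
        ("alice@example.com", PLANTED),
    )
    return db

def admin_search(db, q):
    # Reader under test: must keep the stored value as data on the way out
    pattern = f"%{q}%"
    return db.execute(
        "SELECT id, email, display_name FROM users "
        "WHERE display_name LIKE ? OR email LIKE ?",
        (pattern, pattern),
    ).fetchall()

def test_reader_keeps_stored_payload_inert():
    rows = admin_search(make_db(), "alice")
    assert len(rows) == 1          # only the matching row comes back
    assert rows[0][2] == PLANTED   # the payload is still literal data
```

The test encodes the cross-request invariant directly: if a later refactor switches the reader back to string concatenation, the planted row breaks the assertion instead of shipping.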
Prefer ORM query builders over raw SQL for dynamic conditions
A query builder that composes WHERE clauses by binding parameters resists the second-order pattern by construction; a raw SQL string composed with f-strings or sprintf invites it. Where dynamic SQL is unavoidable, isolate it behind a wrapper that exposes parameter binding as the only public API for the dynamic parts.
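A minimal sketch of such a wrapper, assuming ?-style placeholders and illustrative column names:

```python
# Sketch of a dynamic WHERE builder that exposes parameter binding as the
# only public API. Identifiers come from an allow-list; values only ever
# travel in the params list, never in the SQL string.
ALLOWED_FILTER_COLUMNS = {"email", "display_name", "status"}

def build_where(filters: dict) -> tuple[str, list]:
    clauses, params = [], []
    for column, value in filters.items():
        if column not in ALLOWED_FILTER_COLUMNS:
            raise ValueError(f"unknown filter column: {column!r}")
        clauses.append(f"{column} LIKE ?")   # identifier from the allow-list
        params.append(f"%{value}%")          # value goes through binding
    return " AND ".join(clauses) or "1=1", params
```

Callers can only supply values and allow-listed column names, so a stored value read back into a filter has no path into the SQL grammar.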
Validate at the read boundary as well as the write boundary
Even with parameterised readers, validating that a stored value matches an expected shape (a UUID, an enum, a positive integer) at the moment it is read into business logic catches data corruption and second-order risk together. The reader trusts the type, not the column.
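Read-boundary checks can be as small as this (the status set and tenant-ID shape are assumptions for illustration):

```python
import uuid

# Sketch of read-boundary validation: the reader asserts the stored value
# matches the expected shape before it enters business logic.
ALLOWED_STATUSES = {"active", "suspended", "deleted"}

def read_status(raw: str) -> str:
    if raw not in ALLOWED_STATUSES:
        raise ValueError(f"unexpected stored status: {raw!r}")
    return raw

def read_tenant_id(raw: str) -> uuid.UUID:
    # uuid.UUID raises ValueError if the stored column was tampered with
    return uuid.UUID(raw)
```

The reader trusts the validated type from that point on, which catches both injected payloads and ordinary data corruption at the same boundary.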
Reporting a second-order SQL injection finding
Second-order findings get pushed back more than first-order findings because the writer and the reader sit in different parts of the codebase and different parts of the team. Engineering looks at the writing endpoint, sees a parameterised insert, and disputes the finding. The strongest reports name both endpoints, the database column that links them, the reader query that interpolated the stored value, and the response or out-of-band evidence that proves execution. The CWE mapping is CWE-89 like first-order SQL injection, but the report should call out the second-order shape explicitly so the remediation team understands that fixing the writer alone does not close the bug.
On a SecPortal engagement, the finding is recorded against the consuming endpoint (the reader) with cross-references to the storage endpoint and the column path. The CVSS 3.1 vector reflects the reader query's actual privileges and data access, the request and response captures sit on the finding, and the planted-then-triggered evidence chain shows both stages. Pentest engagement records keep the proof linked to the original commit and the retest verification, so the remediation conversation references the actual reader-side fix rather than treating it as a generic SQL injection. The finding triage workflow covers how to separate scanner-derived candidates (a reader that concatenates a stored value) from manually validated findings (a planted payload that returned credential material) so the report differentiates suspected exposure from confirmed exploit. Pentest report writing guidance covers how to express the two-stage chain in a way that survives engineering review.
Compliance impact
- OWASP Top 10: A03:2021 Injection
- OWASP ASVS: V5 Validation, Sanitisation and Encoding
- PCI DSS: Req. 6.2.4 Common Coding Vulnerabilities
- ISO 27001: A.8.28 Secure Coding
- SOC 2: CC6.1 Logical and Physical Access Controls
- NIST 800-53: SI-10 Information Input Validation
- CIS Controls: Control 16 Application Software Security
- PTES: Vulnerability Analysis and Exploitation
A pentester checklist for second-order SQL injection
The checklist below is the minimum coverage a tester should walk before declaring a target's storage-and-read surface acceptable for SQL injection. Each item maps to a recognisable shape with a CVSS profile that depends on the reader query's privileges and the data the planted payload reaches.
- Inventory every column the application allows users to write (profile fields, settings, comments, audit reasons, custom labels, saved filters, tag names) and trace which read paths consume them.
- Plant SQL syntax payloads through every write endpoint and walk the application surface. Profile pages, admin search, audit reports, scheduled job triggers, and analytics dashboards are the highest-yield reader targets.
- Use timing-based payloads where error output is suppressed. A stored payload that introduces a measurable delay on a reader path is reproducible without relying on exception leaks.
- Use out-of-band callbacks (DNS or HTTP) when readers run in non-interactive contexts. The callback fires when the cron, the indexer, or the export job runs, regardless of whether the planting request stayed in the tester's session.
- Test ORDER BY, identifier, and dynamic-SQL slots specifically. Parameter binding does not protect those positions, so a stored value that lands in a sort column or a table name is a high-confidence sink.
- Confirm the reader's privileges. A second-order injection in a low-privilege reader is a different finding from one in an admin reporting query that runs as the database owner.
- Record both endpoints in the finding, the column path that links them, the planted payload, the trigger request, and the proof. CVSS calibrates against the reader's actual scope, not the writer's.
Catch stored SQL injection before it lands in a backend job
SecPortal stores tainted candidates from authenticated scans, replays them across the application surface, and flags handlers that read back stored fields into unsafe SQL paths. Connect a target and start scanning for free.
No credit card required. Free plan available forever.