Built for you

For AI and ML
security consultancies

Run LLM red-team engagements, prompt injection assessments, and ML model security reviews as structured records, not as note files and screenshots. Tag findings against OWASP LLM Top 10, MITRE ATLAS, and the NIST AI Risk Management Framework, deliver through a branded portal scoped per AI-using client, and keep the evidence chain durable through the next model deployment cycle.

No credit card required. Free plan available forever.

A platform built for the firms that test AI and ML systems

Security consultancies that focus on artificial intelligence and machine learning systems carry a different operating burden than firms that test general SaaS or enterprise IT. Every engagement crosses the chat surface, the retrieval pipeline, the tool layer, the training data, and the underlying application, the deliverables sit alongside model risk committee material, governance reviews, and emerging regulator submissions, the report has to map findings back to the OWASP LLM Top 10, MITRE ATLAS, and the NIST AI Risk Management Framework, and the evidence chain has to survive a model version change, a guardrail revision, or a switch from a closed-weight provider to a self-hosted model without a missing link. Most adversarial-testing shops still run this delivery on a notes app, a screenshot folder, a shared chat transcript, and a ticket queue that loses context the moment the engagement closes and the model version increments.

SecPortal gives AI and ML security consultancies one workspace for engagements, findings, evidence, retests, branded delivery, and invoicing. Findings carry CVSS scores from the moment they are opened, OWASP LLM Top 10 and MITRE ATLAS tagging is part of the workflow, the client portal scopes adversarial prompts and exploit evidence behind authenticated access, and the AI-assisted reporting drafts the framework-aligned writeup the buyer is expecting. Whether the firm services an AI-native startup, an enterprise that has retrofitted assistants into existing applications, a model provider running adversarial review on its own deployments, or a panel of AI features inside a regulated platform, the platform scales without adding administrative overhead.

Capabilities AI and ML security consultancies actually use

Engagement records that carry the AI scope

Each AI security engagement opens with the in-scope models, the retrieval sources, the connected tools, the data classes the assistant can access, and the agreed rules for adversarial testing attached to the record. The record persists after the engagement closes, so the next test against the same client starts from the documented prior surface rather than a re-onboarded blank page.

Findings tagged to OWASP LLM Top 10, MITRE ATLAS, and NIST AI RMF

Log findings with CVSS 3.1 vectors, severity, and evidence, and tag them against the OWASP LLM Top 10 category, the MITRE ATLAS technique, or the NIST AI Risk Management Framework function the issue impacts. The exported report carries the reference, so the AI-using client can attach the finding to their model risk programme without re-keying every line.

Branded portal scoped per AI-using client

Each AI-using client receives a branded portal on a tenant subdomain. Reports, findings, retest evidence, and remediation status sit behind authenticated access scoped to the assessed entity. Adversarial prompts, leaked system prompts, and exploit evidence stay off the generic file-share links that most adversarial-testing shops still default to.

Evidence fields tuned to LLM and ML findings

Custom finding templates record the system prompt, the adversarial input, the model response, the tool the response triggered, the trust boundary it crossed, and the model and version that produced it. The reproducibility evidence sits in the record so a senior tester can verify the finding against a live deployment without rebuilding the prompt from a chat history.

Retests paired to the original finding across model versions

When the client ships a new model version, a new system prompt, or a tightened guardrail, the retest pairs to the same finding rather than opening a new record. Closure evidence sits with the original capture date, so the audit trail shows when the issue was found, when remediation took effect, and which model version cleared it, all on one record.

AI-assisted reporting tuned to AI-using buyers

Generate executive summaries, technical writeups, and remediation roadmaps from the live findings record. AI-using buyers expect a deliverable that ties technical detail to the model risk control their governance team already tracks. The AI generates a draft against the tagged record, so the senior tester edits a draft instead of typing from a blank page on the day the engagement closes.

How an AI security practice runs inside SecPortal

AI security delivery is most defensible when one operating picture covers scope, evidence, finding-to-framework mapping, retest verification across model versions, and the report. SecPortal supports the full delivery rather than a single phase of it.

  • Open the engagement against the right client record so the in-scope models, retrieval sources, connected tools, data classes the assistant can access, and the agreed adversarial-testing rules are documented before any prompt is sent.
  • Run prompt injection probes, jailbreak chains, RAG poisoning paths, agent hijack scenarios, training-data leak tests, and model output safety reviews under one engagement, with the findings consolidated to a single record rather than scattered across separate notebooks and chat transcripts.
  • Track every finding through open, in-progress, fix-pending, retest-pending, and verified-closed states with a date and actor on each transition, so the audit trail covers what model risk teams, governance leads, and external assessors expect to see.
  • Generate executive, technical, and remediation views from the same source data, so the same finding base produces the right artefact for the AI product owner, the model engineer, the security lead, and the governance contact at the client.
  • Map findings to the OWASP LLM Top 10 category, the MITRE ATLAS technique, or the NIST AI Risk Management Framework function on the same record they live on, with ISO 42001 and ISO 27001 tagging where the engagement scope demands them.
  • Invoice the engagement against the same record the work was tracked against, so billing closes on the same source of truth the deliverable closed on.

From engagement kickoff to verified close, on one record

The leverage in AI security delivery is the durability of the audit chain across model version changes. SecPortal runs a single delivery flow that the next deployment, the next retest, and the next governance review can build against without reconstructing context from chat history.

  1. 1Open the AI security engagement with assessed entity, in-scope models and providers, retrieval sources, connected tools, data classes the assistant can read, scope statement, rules of engagement for adversarial testing, testers, and dates stamped against the record. The rules-of-engagement template populates the standard sections; the engagement record holds the bespoke AI context.
  2. 2Run the adversarial testing programme inside the engagement record. Prompt injection probes, jailbreak chains, RAG poisoning paths, agent hijack scenarios, training-data leak tests, and the surrounding application coverage (authenticated DAST, SAST, SCA) all consolidate to the same findings database, with raw outputs attached to the finding they support.
  3. 3Tag each finding against the OWASP LLM Top 10 category, the MITRE ATLAS technique, or the NIST AI RMF function it impacts as it is logged. Add ISO 42001 and ISO 27001 tags where the client scope demands them. The tagging is part of the testing workflow, not a post-engagement reconciliation step.
  4. 4Generate the technical report, executive summary, and remediation roadmap with AI assistance from the live record. The deliverable lands in the client portal alongside the underlying finding-level evidence (system prompt, adversarial input, model response, tool triggered), so the report and the source-of-truth point at the same data.
  5. 5Run retests after the client tightens the guardrail, ships a new model version, or restructures the agent topology. Attach verification evidence to the same finding, and either close the issue with a status change actor recorded automatically or revert to open with regression notes captured in place. The audit chain stays intact for governance review and external assessor activities.

AI-specific finding classes the platform is built to handle

AI engagements surface a class of findings that do not fit the traditional web application finding template. The encyclopedia entries below cover the most common adversarial cases, with reproducibility evidence and remediation guidance the client team can act on. Each one can be tagged into a SecPortal engagement and tracked through the same workflow as a traditional pentest finding.

  • The headline LLM finding class is covered in the prompt injection encyclopedia entry, including direct jailbreaks, indirect injections through retrieved documents, and payload patterns that lead to data exfiltration or tool abuse.
  • Adversarial inputs that trigger pathological model behaviour fall under the regex denial-of-service entry when the surrounding application uses regex on model output, and the broader denial-of-service entry when the failure mode is resource exhaustion against the model serving layer.
  • AI agents that ingest model output as a tool invocation surface the same risks covered in the server-side request forgery entry and the command injection entry, because an injected prompt can drive an agent to call internal endpoints or shell out through a tool the model controls.
  • Sensitive context that leaks through model output is captured in the sensitive data exposure entry, and the hardcoded secrets entry covers cases where API keys end up inside system prompts, retrieval sources, or training data the model can be coaxed into reciting.
  • Authentication and authorisation failures around AI features that bypass per-user controls are tracked under the broken access control entry and the broken object level authorization entry, because an AI assistant that calls internal APIs with elevated privileges is the same authorisation failure with a different front door.

Where AI and ML security consultancies typically start

Most AI-focused firms adopt the platform in three phases: bring the active client list and engagement records under one workspace, layer in finding-to-framework tagging and branded portal delivery, then consolidate retests, AI-assisted reporting, and invoicing onto the same record. The relevant capability and workflow pages explain each phase in detail.

SecPortal is built for AI and ML security consultancies that want one platform for the whole adversarial-review delivery: live engagements, framework-tagged findings, evidence, retests, branded portals, AI-assisted reporting, and invoicing. AI-using clients get a deliverable that ties to the controls their governance team already tracks, and the firm gets back the hours that used to disappear into post-engagement document production and prompt-by-prompt reconciliation across chat transcripts.

If your firm is structured as a smaller partner-led practice between two and ten testers, the SecPortal for boutique security firms page covers the operating model that fits a specialist consultancy. If your firm runs a broader multi-vertical book of business, the SecPortal for cybersecurity firms page covers the multi-client delivery model. AppSec teams that run AI security review in-house can read the SecPortal for AppSec teams page for the integrated programme model, and cloud-heavy AI estates often pair this work with the model documented in the SecPortal for cloud security consultancies page.

For broader context on how AI security findings hold up after the engagement closes, the aging pentest findings research and the severity calibration research cover what happens after the report ships and the client starts working through the model and guardrail changes the engagement surfaced.

The problems you face

And how SecPortal solves each one.

LLM red-team findings live in chat transcripts, screenshot folders, and the lead consultant’s prompt notebook

Each engagement opens a structured record. Prompt injection cases, jailbreak chains, RAG poisoning paths, agent hijack scenarios, and model output failures all become findings with CVSS 3.1 vectors, severity, evidence attachments, and remediation guidance, instead of a folder of unsorted markdown files.

Reports do not map findings to OWASP LLM Top 10, MITRE ATLAS, or the NIST AI RMF the client is starting to track

Tag each finding against the relevant OWASP LLM category (LLM01 prompt injection, LLM02 sensitive data exposure, LLM06 excessive agency, LLM10 model theft), the matching MITRE ATLAS technique, and the NIST AI RMF function the issue impacts. The exported report carries the reference so the client can attach the finding to their model risk programme without re-keying every line.

AI engagement scope spans the chat surface, the RAG pipeline, the tool layer, the training data, and the underlying application, and it gets lost between testers

One engagement record holds the in-scope models, the retrieval sources, the connected tools, the data classes the assistant can read, and the rules of engagement for adversarial testing. The next tester picking up the engagement reads the same scope the first one wrote rather than reconstructing it from a chat thread.

Generic pentest finding templates do not capture the evidence an LLM finding actually needs

Custom finding templates record the system prompt, the adversarial input, the model response, the tool the response triggered, the trust boundary it crossed, and the model and version that produced it. The reproducibility evidence is in the record, not in a separate notebook the consultant has to re-share.

Clients expect an AI security report that ties findings to live model deployments, not a static PDF that ages out the moment the model is updated

A branded portal on a tenant subdomain shows open findings, severity by category, retest status, and remediation evidence per model and per surface. When the client ships a new model version, the existing engagement record carries forward and the retest pairs to the original finding rather than opening a new one.

Multi-cloud, multi-model AI estates need findings consolidated across closed-weight APIs, open-weight self-hosted models, and traditional ML pipelines

External scans, authenticated DAST against AI-facing endpoints, SAST and SCA via the Git provider connection on the surrounding application, and manually logged adversarial findings all consolidate on the same engagement record. Deduplication runs across the consolidated set so the engagement closes with one defensible findings list instead of four overlapping exports per provider.

Run AI and ML security delivery on one platform

LLM red-team, prompt injection, and model security engagements with branded delivery and audit-ready evidence. Free plan to start.

No credit card required. Free plan available forever.