Workspace AI assistant
that runs platform actions for you
Talk to your workspace in natural language. The assistant reads clients, engagements, and findings as context, proposes structured actions like creating findings or scaffolding engagements, and only writes to the workspace after you approve. Every action lands on the activity log with the actor and the inputs.
No credit card required. Free plan available forever.
An agentic AI assistant that respects RBAC and the audit trail
Internal security teams, AppSec teams, vulnerability management teams, and GRC owners all carry the same complaint about chat assistants in security platforms: they either suggest text and stop, or they write to the workspace silently with no audit trail and no review step. Workspace chat in SecPortal is built differently. The assistant reads workspace context (clients, engagements, findings, team members) as live data, proposes structured actions back to the chat as a reviewable list, and only writes to the workspace after the user explicitly applies the action. The same RBAC gates and activity log entries that govern manual platform use govern every applied action.
The assistant is powered by Claude with a tool schema the platform actually implements. Proposed actions are not natural-language hopes; they are structured tool calls bound to verified APIs the dashboard already uses, so an applied action lands in the same place it would land if you had clicked through the UI. The difference is that you stop click-walking the same workflows every week.
Five tools the assistant can propose
The assistant runs on a fixed tool schema. It cannot invent new platform actions, and it cannot bypass the schema by writing free-text instructions. The five tool calls below are the only ways the assistant can change workspace state, and each runs through the same RBAC and validation the manual API uses.
create_finding
Create a new finding on a specific engagement, including severity, status, description, affected asset, remediation, category, control reference, CVSS score, CVSS vector, and assigned owner. The assistant fills in the structured fields the platform expects rather than free text.
update_finding
Update an existing finding by ID. Useful for assignment changes, status transitions through the FindingStatus lifecycle, severity recalibration, title corrections, and remediation rewrites without leaving the chat.
create_client
Create a new client (or business unit in internal mode) with company name, contact name, and contact email. Useful when scaffolding a new engagement intake from a kickoff conversation.
create_engagement
Create a new engagement under a client with a typed engagement type (pentest, vulnerability_assessment, ce_assessment, ce_plus_assessment, iso27001, soc2, bug_bounty, security_review, incident_response), scope, and date range. The assistant proposes the structured engagement type the platform recognises rather than a free-text label.
update_engagement_status
Move an engagement through draft, in_progress, completed, or cancelled. Useful for batch status updates across multiple engagements after a programme review.
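To make the "structured tool calls, not free text" claim concrete, a tool input like create_finding can be sketched as a typed shape with server-side validation. The field names and validation rules below are illustrative assumptions, not SecPortal's actual schema:

```typescript
// Illustrative sketch of a create_finding tool input; names are assumptions.
type Severity = "critical" | "high" | "medium" | "low" | "info";

interface CreateFindingInput {
  engagementId: string;
  title: string;
  severity: Severity;
  status: string;          // constrained to the FindingStatus lifecycle server-side
  description: string;
  affectedAsset?: string;
  remediation?: string;
  category?: string;
  controlReference?: string;
  cvssScore?: number;      // 0.0-10.0
  cvssVector?: string;     // e.g. "CVSS:3.1/AV:N/..."
  assignedTo?: string;     // must resolve to a verified team-member email
}

// The route validates every proposed call before it can be applied.
function validateCreateFinding(input: CreateFindingInput): string[] {
  const errors: string[] = [];
  const severities: string[] = ["critical", "high", "medium", "low", "info"];
  if (!severities.includes(input.severity)) errors.push("invalid severity");
  if (input.cvssScore !== undefined && (input.cvssScore < 0 || input.cvssScore > 10)) {
    errors.push("cvssScore out of range");
  }
  if (!input.title.trim()) errors.push("title required");
  return errors;
}
```

Because the schema is fixed, a proposed action either passes validation and can be applied, or it surfaces errors in the review step; it never lands as malformed free text.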
Workspace context the assistant reads on every turn
Every chat turn rebuilds a system prompt from the live workspace state. That keeps the assistant honest: a finding the team closed five minutes ago is closed in the next prompt, an engagement someone re-scoped is re-scoped, and a new client is visible without a refresh. The character budgets keep the prompt within the model input window without truncating the most recent state.
Clients and business units
The assistant reads the company name, contact details, and engagement count for each client (or business unit if the workspace is in internal mode), so it can answer scoping questions without you re-typing the catalogue.
Engagements with severity and status counts
Engagement title, type, status, scope, dates, and per-engagement findings counts including severity and status breakdowns are all part of the conversation context, so the assistant can summarise programme state at a glance.
Up to 200 findings with full structured fields
Title, severity, status, category, affected asset, assigned owner, and a description excerpt for the most recent findings give the assistant enough context to triage, summarise, and propose next actions without hallucinating.
Engagement type configuration
The assistant reads the engagement type configuration (item label, default status, finding categories) so it adapts language for pentest engagements, compliance audits, and incident response cases without needing to be told.
Assignable team-member email list
When the assistant proposes an assignment, it picks from a verified list of workspace team members rather than guessing an email. Assignments resolve to a real owner the platform can route notifications to.
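The per-section character budgeting described above can be sketched as follows. Section names, budgets, and the truncation marker are illustrative, not the platform's real values:

```typescript
// Sketch: assemble the system prompt from live workspace sections,
// trimming each section to its character budget. Values are illustrative.
interface Section {
  name: string;
  content: string;
  budget: number; // max characters for this section
}

function buildSystemPrompt(sections: Section[]): string {
  return sections
    .map((s) => {
      const body =
        s.content.length > s.budget
          ? s.content.slice(0, s.budget) + "\n[truncated]"
          : s.content;
      return `## ${s.name}\n${body}`;
    })
    .join("\n\n");
}
```

Rebuilding this on every turn, rather than caching it, is what keeps a just-closed finding closed in the very next prompt.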
How the platform behaves around proposed actions
Behaviour you can trust matters more than capability you cannot audit. The rules below are enforced in the workspace-chat route and the proposed-action UI, not described in documentation that someone has to remember.
The assistant proposes actions, you apply them
Tool calls do not run automatically. The assistant streams proposed actions back to the chat as a structured action list. You read each one, see what it would do, and apply it explicitly. The same RBAC checks that gate manual finding creation, finding updates, client creation, engagement creation, and engagement status changes apply to assistant-proposed actions.
Action ordering is dependency-aware
When the assistant proposes a chain like create client, then create engagement under that client, then create five findings on that engagement, the UI blocks downstream actions until the parent action is applied. You never end up with orphaned findings pointing at non-existent engagements.
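The dependency gate can be sketched as a simple applicability check. The `dependsOn` linkage is an assumption about how the UI might model parent-child actions:

```typescript
// Sketch: a downstream action is only applicable once its parent is applied.
// The dependsOn field is illustrative, not the platform's actual model.
interface ProposedAction {
  id: string;
  tool: string;
  dependsOn?: string; // id of the parent action, if any
}

function isApplicable(action: ProposedAction, applied: Set<string>): boolean {
  return action.dependsOn === undefined || applied.has(action.dependsOn);
}
```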
Every applied action writes to the activity log
Because the assistant uses the same finding, client, engagement, and engagement-status APIs that the dashboard uses, every applied action lands on the activity log with the actor user, the entity, and the inputs. The audit trail does not skip rows because the change came from chat.
Multimodal attachments are first-class
You can attach JPEG, PNG, GIF, WebP images and PDF documents directly to the conversation. Images are downscaled and base64-encoded before sending. PDFs are passed through with a per-document page limit. The assistant reads the artefact alongside the workspace context.
Token budgeting trims old turns, never current input
The route counts tokens against a 100K-token input budget and progressively drops the oldest messages until the conversation fits. The current message is never silently truncated; if a single attachment is too large, the platform rejects the message with a clear size error rather than producing a partial response.
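The oldest-first trimming behaviour can be sketched like this. The token counter is a rough stand-in; the real route would use the provider's tokenizer:

```typescript
// Sketch of oldest-first history trimming against a fixed input budget.
// countTokens is a crude heuristic, illustrative only.
interface Message {
  role: "user" | "assistant";
  content: string;
}

const INPUT_BUDGET = 100_000; // tokens

function countTokens(m: Message): number {
  return Math.ceil(m.content.length / 4);
}

function fitToBudget(history: Message[], current: Message, budget = INPUT_BUDGET): Message[] {
  const kept = [...history];
  const total = () => [...kept, current].reduce((n, m) => n + countTokens(m), 0);
  while (kept.length > 0 && total() > budget) {
    kept.shift(); // drop the oldest turn first
  }
  if (total() > budget) {
    // The current message alone exceeds the budget: reject, never truncate it.
    throw new Error("message too large for input budget");
  }
  return [...kept, current];
}
```

The invariant is the important part: history degrades gracefully, but the message you just typed is either sent whole or rejected with an explicit error.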
Plan-aware credits, atomic accounting
The assistant runs on a monthly credit pool tied to the workspace plan. Credits are deducted only after the stream completes, so a failed run never consumes the pool. Extra credits can be purchased and consumed when the monthly pool is exhausted, with the platform always trying the monthly pool before extras.
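The deduction order (monthly pool first, purchased extras second) can be sketched as a pure state transition. Field names are illustrative:

```typescript
// Sketch: deduct one credit, always trying the monthly pool before extras.
// Field names are assumptions, not the platform's actual schema.
interface CreditState {
  monthlyRemaining: number;
  extraRemaining: number;
}

function deductCredit(state: CreditState): CreditState {
  if (state.monthlyRemaining > 0) {
    return { ...state, monthlyRemaining: state.monthlyRemaining - 1 };
  }
  if (state.extraRemaining > 0) {
    return { ...state, extraRemaining: state.extraRemaining - 1 };
  }
  throw new Error("credit pool exhausted");
}
```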
Starter
Lifetime trial credits
Try the assistant on a small engagement before committing to a plan. Credits are lifetime, not monthly.
Pro
25 credits per month
Suited to a single security engineer or AppSec analyst running the assistant across active engagements. Each chat message that completes successfully consumes one credit.
Team
75 credits per month
Shared workspace pool for security teams running multiple concurrent engagements. Pool resets monthly and extra credits can be purchased and consumed when the monthly pool is exhausted.
Six guardrails the assistant runs through before any token is spent
The most expensive AI security feature is the one that surprises a security team. Every workspace-chat request walks through these guardrails before the platform opens a stream to the model.
RBAC gate on the chat itself
The assistant is only available to workspace users (consultants in technical role naming). Each user must hold the ai_chat permission on their team role. Viewers are blocked. Members, admins, and owners with the permission can run the assistant.
Plan check before any tokens are spent
The assistant requires the AI feature flag on the workspace plan. Starter is gated by the lifetime trial credit count. Pro and Team have monthly limits. The route enforces both before opening a stream so a budget-exhausted workspace cannot accidentally accumulate provider charges.
Per-user rate limit
Each user is rate-limited to 40 chat messages per 15-minute window. The limit prevents accidental loops and runaway scripted clients without throttling normal interactive use. Exceeding the window returns a 429 with a Retry-After header.
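A per-user window limiter with 429/Retry-After semantics can be sketched like this. The in-memory store and key shape are assumptions; a production route would back this with shared storage:

```typescript
// Sketch: 40 requests per user per 15-minute sliding window.
// In-memory storage is illustrative only.
const WINDOW_MS = 15 * 60 * 1000;
const LIMIT = 40;

const hits = new Map<string, number[]>(); // userId -> request timestamps

function checkRateLimit(
  userId: string,
  now: number
): { allowed: boolean; retryAfterMs?: number } {
  const recent = (hits.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    // Oldest hit in the window determines when capacity frees up.
    const retryAfterMs = WINDOW_MS - (now - recent[0]);
    hits.set(userId, recent);
    return { allowed: false, retryAfterMs };
  }
  recent.push(now);
  hits.set(userId, recent);
  return { allowed: true };
}
```

The `retryAfterMs` value maps directly onto the Retry-After header the route returns alongside the 429.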
Image and document format gates
Only JPEG, PNG, GIF, and WebP images and PDF documents are accepted as attachments. Other formats are rejected with a clear unsupported-format response so unexpected payloads cannot reach the model.
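The format gate reduces to an allow-list check on the MIME type. The accepted set matches the formats named above; the error shape is an assumption:

```typescript
// Sketch of the attachment format gate. Error shape is illustrative.
const ACCEPTED = new Set([
  "image/jpeg",
  "image/png",
  "image/gif",
  "image/webp",
  "application/pdf",
]);

function gateAttachment(mime: string): { ok: boolean; error?: string } {
  if (!ACCEPTED.has(mime)) {
    return { ok: false, error: `unsupported format: ${mime}` };
  }
  return { ok: true };
}
```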
Atomic credit accounting
Monthly usage is reset and read in a single atomic call so two concurrent messages cannot both squeak past the limit. Credits are deducted only after the stream completes successfully, so a failed run does not consume the user pool.
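The reset-and-read semantics can be modelled as one logical operation: if the billing period rolled over, the counter resets before it is read. In production this would be a single atomic database statement rather than application code; the shape below is illustrative:

```typescript
// Sketch: reset-if-stale and read as one logical step, so two concurrent
// requests see a consistent counter. Field names are assumptions.
interface Usage {
  periodStart: number; // e.g. a month index
  used: number;
}

function resetAndRead(usage: Usage, currentPeriod: number): Usage {
  if (usage.periodStart < currentPeriod) {
    return { periodStart: currentPeriod, used: 0 };
  }
  return usage;
}
```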
Workspace isolation by design
The assistant reads only clients, engagements, and findings scoped to the user workspace. The system prompt never mixes data across workspaces. Multi-tenant isolation is enforced at the data-fetch layer, not at the prompt layer.
How different security teams use the assistant
The assistant is not a single workflow; it is a productivity layer over the platform. The teams below use it differently, but they all rely on the same guarantees: structured actions, RBAC, audit trail, plan-aware credits.
AppSec and product security teams
Triage scanner output by asking the assistant to log the next ten Critical findings against the right engagement with calibrated severity, owners, and remediation guidance. The assistant reads engagement context and proposes structured findings that match the team's existing taxonomy.
Vulnerability management teams
Run weekly remediation sweeps by asking the assistant to summarise unresolved Critical and High findings older than the SLA window, propose ownership changes, and bulk-update statuses after the team confirms the remediation evidence.
GRC and compliance teams
Scaffold compliance audit engagements (SOC 2, ISO 27001, Cyber Essentials, PCI DSS) for new clients in one prompt: client created, engagement typed correctly, scope captured, dates set. The assistant uses the typed engagement values the platform expects so reporting downstream still works.
Security leadership
Ask for a cross-engagement summary of programme health, including severity distributions, oldest open findings, and engagements that have been in_progress beyond the planned end date. The assistant reads the workspace as a whole, not one engagement at a time.
Penetration testing firms and security consultants
Speed up engagement intake from kickoff calls. Describe the new client and the agreed scope in one paragraph and let the assistant propose the client, the engagement, and the initial finding scaffolding. Apply only the parts the team approves.
Audit evidence the assistant preserves automatically
Auditors and second-line risk readers reading against ISO 27001 Annex A 8.8 and 5.10, SOC 2 CC4.1 and CC7.1, NIST SP 800-53 RA-5 and AU-2, and PCI DSS 6.3.3 and 10.2 expect the audit trail to survive every channel a change can come through, including AI assistants. The assistant produces audit evidence by design.
- Each applied action writes an activity log entry with the actor user, entity, action type, and structured payload
- Activity log exports to CSV so the assistant-driven actions surface in any audit window without UI screen-scraping
- Token usage is logged per request with input_tokens and output_tokens for cost attribution and capacity reviews
- Plan-aware credit usage is tracked per workspace with a monthly reset cycle the platform enforces atomically
- Per-user rate limiting prevents a single account from monopolising assistant capacity at workspace level
- Workspace mode (consulting or internal) is captured in the assistant context so prompts adapt to the buyer audience
Five failure modes the design prevents
Assistant writes to the workspace without review
A chat assistant that silently creates findings, engagements, or clients turns the workspace into a noisy log of model guesses. SecPortal's assistant proposes structured actions and waits for explicit approval. The workspace stays clean.
Hallucinated client and engagement names
A chat assistant that operates without workspace context invents client names that do not exist and points findings at engagements that were never created. SecPortal's assistant reads the workspace catalogue and only references real entities.
AI actions skip the audit trail
A chat assistant that bypasses RBAC and bypasses the activity log breaks audit. Because SecPortal's assistant uses the same finding, client, and engagement APIs the dashboard uses, every applied action lands on the activity log with the actor user.
Free-text fields where structured fields are needed
A chat assistant that drops free text into severity, status, or engagement type fields breaks reporting. SecPortal's tool schemas constrain severity to critical, high, medium, low, info; status transitions to the FindingStatus lifecycle; engagement type to the typed enum the platform recognises.
Concurrent runs racing on the credit pool
Two simultaneous chat sessions on the same workspace can exhaust a fragile credit counter. SecPortal's monthly reset and usage read happen in one atomic database call, so a race condition cannot push the workspace past its plan limit.
How workspace chat fits the rest of the platform
AI reports as the deliverable surface
Workspace chat handles operational work: the assistant proposes findings and engagement updates that the AI reports feature then summarises into PDFs and executive summaries. The two surfaces share workspace context.
Findings management as the action target
Every create_finding and update_finding tool call lands on the same findings management record the dashboard reads. CVSS 3.1 vector, severity, status, and 300+ templates all apply.
Engagement management as the scope anchor
The assistant scaffolds new engagements with the typed engagement values the platform expects, so downstream reporting, AI report generation, and client portal rendering all work without manual cleanup.
Team management as the RBAC source
The ai_chat permission lives on the same RBAC system as every other platform permission. See team management for the role hierarchy that gates assistant access.
Activity log as audit trail
Every applied tool call writes to the activity log with the actor and the inputs. The CSV export surfaces assistant-driven actions in any audit window.
Bulk finding import for migrations
Migrations from spreadsheets and other tools are usually faster through bulk finding import. The assistant complements that path for ad-hoc creation, scaffolding, and triage rather than batch ingestion.
Where to read next
For the platform-wide programme view that the assistant accelerates, see the security testing programme management workflow.
For the operational closure cadence that benefits most from assistant-driven triage, see the remediation tracking workflow and the scanner result triage workflow.
For internal security teams evaluating the platform end to end, the internal security teams page covers how the assistant fits the broader workflow.
For AppSec teams working through a high-volume scanner backlog, the AppSec teams page covers the assistant alongside code scanning, repository connections, and retesting workflows.
For an architectural read on AI in security reporting, see the AI in security reporting article.
Stop click-walking the same workflows every week
Describe the work in plain English. The assistant proposes the actions. You review and apply. The audit trail stays intact.
No credit card required. Free plan available forever.