
Digital employees, with the governance attached

Why most digital employee deployments fail their first audit, and what a governance-first build looks like — identity, data access, supervision, and the accountability question almost no-one is answering well.

Mathew Sayed · 5 min read

A digital employee is what most vendors are now calling an autonomous or semi-autonomous AI agent that performs work in your environment under its own credentials, with its own scope, completing tasks that previously required a human worker. Some vendors are pricing them per-employee; some are pricing them per-action. The economics are interesting. The governance is, almost universally, not yet good.

Across the deployments we have reviewed in the last twelve months, the same patterns recur: digital employees provisioned through procurement rather than HR or IAM, with shared service accounts, broad data access, no defined supervisor, no acceptable use boundary, and no audit trail that distinguishes their actions from a human's.

This post is the short version of the case for governance-first design. The long version — the full runbook with templates, configuration baselines, and the audit-ready control framework — is available as a paid PDF.

The four governance failures we see most often

1. Identity treated as a procurement decision

The digital employee is given a service account, often shared, often with credentials stored in the vendor’s environment, often without a clear owner once the implementation team rolls off. From a CPS 234, ISO 27001, or 42001 perspective this is the equivalent of hiring someone without onboarding them through HR — there is no defined role, no manager, no review cycle, no offboarding trigger.

The fix is to treat the digital employee as an identity, with a named human owner (the supervisor), provisioned through the same IAM workflow as a human, with attribute-based access scoped to the work it performs. The hard part is not technical; it is procedural. Most procurement teams do not yet route AI agent purchases through identity governance.
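As a sketch of what "provisioned through the same IAM workflow as a human" can mean in practice, the record below treats the agent as a first-class identity with a named supervisor, role-scoped attributes, and a review date. The field names and the `provision` check are illustrative assumptions, not a vendor schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical identity record for a digital employee: unique (never
# shared) identifier, a named human owner, and access attributes scoped
# to the role. All names here are illustrative.
@dataclass
class DigitalEmployeeIdentity:
    agent_id: str                 # unique, never a shared service account
    role: str                     # the job it was hired to do
    supervisor: str               # named human owner
    access_attributes: list[str]  # attribute-based access, scoped to the role
    review_due: date              # same review cycle as a human hire
    offboarding_trigger: str      # what event removes the identity

def provision(identity: DigitalEmployeeIdentity) -> dict:
    """Route the agent through the IAM workflow, not procurement."""
    if not identity.supervisor:
        # No named owner means no provisioning, mirroring an HR onboarding gate.
        raise ValueError("no named supervisor: provisioning blocked")
    return {
        "id": identity.agent_id,
        "owner": identity.supervisor,
        "attributes": identity.access_attributes,
        "next_review": identity.review_due.isoformat(),
    }

agent = DigitalEmployeeIdentity(
    agent_id="de-claims-001",
    role="claims-drafting",
    supervisor="jane.doe",
    access_attributes=["claims:read", "claims:draft"],
    review_due=date(2026, 1, 1),
    offboarding_trigger="contract-end",
)
record = provision(agent)
```

The useful property is the gate: an agent with no named supervisor simply cannot be provisioned, which is the procedural fix the paragraph above describes.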

2. Access scoped to the work the digital employee might do, not the work it does

Vendors will, by default, ask for substantial data access — give it the SharePoint site, give it the customer records, give it the email archive — on the basis that broad access produces better task completion. This is the same argument made for staff access in the late 1990s, and the answer is the same: scope to the role, not to the convenience.

A digital employee that drafts customer correspondence does not need read access to internal financial records. A digital employee that processes claims does not need access to staff personal information. The principle of least privilege applies to AI agents exactly as it applies to humans. The corollary applies too: under-scoping is a productivity drag, and the right scope is the result of an actual task analysis, not a checkbox exercise.
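The least-privilege rule above can be stated as a few lines of policy: a request is allowed only if the resource falls inside the scope produced by the task analysis for that role. The role names and resource labels below are illustrative assumptions:

```python
# Hypothetical role-to-scope mapping derived from an actual task
# analysis, not from vendor convenience. Labels are illustrative.
ROLE_SCOPES = {
    "correspondence-drafting": {"customer_correspondence"},
    "claims-processing": {"claims_records"},
}

def is_allowed(role: str, resource: str) -> bool:
    # An unknown role has an empty scope: deny by default.
    return resource in ROLE_SCOPES.get(role, set())

# The correspondence agent can read correspondence, not financials:
assert is_allowed("correspondence-drafting", "customer_correspondence")
assert not is_allowed("correspondence-drafting", "internal_financial_records")
```

Deny-by-default is the design choice doing the work here: a role nobody scoped gets nothing, rather than everything.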

3. No defined supervisor, no defined supervision

The CPS 230 and 42001 requirement is implicit but operationally direct: someone must be accountable for the work. For a human employee, this is the line manager. For a digital employee, it is — almost always — undefined. We routinely encounter deployments where the agent has been operating for months and the answer to who reviews its work is a shrug.

Supervision design has three components: a supervisor (a named human, with the time and authority), a supervision protocol (what is reviewed, on what cadence, with what escalation), and a supervision record (what was reviewed, what was found, what was changed). A digital employee without these is functionally an unsupervised employee, which is a position no governance framework supports.
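The three components can be written down as data, which is what makes them auditable. The structures below are a sketch under assumed names, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date

# Sketch of the three supervision components: a protocol (what is
# reviewed, on what cadence, with what escalation) and an append-only
# record (what was reviewed, found, and changed). Field names are
# illustrative.
@dataclass
class SupervisionProtocol:
    supervisor: str        # a named human with time and authority
    reviewed_outputs: str  # what is reviewed, e.g. a sample of drafts
    cadence_days: int      # on what cadence
    escalation_path: str   # with what escalation

@dataclass
class SupervisionEntry:
    review_date: date
    reviewed: str          # what was reviewed
    findings: str          # what was found
    changes: str           # what was changed

supervision_log: list[SupervisionEntry] = []

def record_review(entry: SupervisionEntry) -> int:
    """Append to the supervision record; return the entry count."""
    supervision_log.append(entry)
    return len(supervision_log)
```

An empty `supervision_log` after months of operation is, in this framing, the "shrug" made visible.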

4. Audit trails that conflate the agent and the human

When the digital employee acts via a service account, its actions appear in the system logs as that service account. When it acts via impersonation (acting as the requesting user, common in chat-based interfaces), its actions appear as the user. Either way, the audit trail does not distinguish "the digital employee did this" from "a human did this". For a regulator asking "who made this decision?", this is a reconstructible-but-painful position; for an internal investigation, it is worse.

The fix is to require — at the integration layer — that every action attributable to a digital employee is logged with both the agent identity and the human supervisor or requesting user. This is an integration discipline, not a feature most vendors provide out of the box.
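A minimal sketch of that integration-layer discipline: every logged action carries both the agent identity and the responsible human, rather than one masquerading as the other. The schema is an assumption for illustration, not a vendor feature:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit entry written at the integration layer. The key
# property is that BOTH identities appear on every action; field names
# are illustrative.
def audit_entry(action: str, agent_id: str, human: str) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "actor_agent": agent_id,  # which digital employee acted
        "actor_human": human,     # supervisor or requesting user
    }
    return json.dumps(entry)

line = audit_entry("claim.approve", "de-claims-001", "jane.doe")
```

With this shape, "who made this decision?" is a query, not a reconstruction exercise.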

The accountability question

The harder governance question, and the one most boards have not yet asked, is: when the digital employee makes a wrong decision, who is accountable?

The vendor-supplied answer is generally that the human supervisor is accountable, by virtue of having deployed the agent. This is correct as a matter of organisational accountability and inadequate as a matter of governance design, because the supervisor is rarely structured to discharge that accountability — they do not see the decisions, do not control the model, do not approve the upgrades, and often did not specify the agent’s scope.

A governance-first deployment defines the accountability stack explicitly:

  • The vendor, accountable for the model’s behaviour within stated capabilities and constraints.
  • The platform owner, accountable for the integration, the data scope, and the technical guardrails.
  • The supervisor, accountable for the work the agent produces and for raising material issues.
  • The risk function, accountable for the policy, the assessment, and the framework alignment.
  • The board, accountable for the existence of an effective governance framework over the use of digital employees.

When this stack is undocumented, accountability tends to consolidate at whichever individual is unlucky enough to be present when the failure surfaces. When it is documented, every party knows what they own.
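One way to document the stack is literally as data that can be queried when a failure surfaces. The domain labels below are illustrative assumptions mapping onto the five layers above:

```python
# The five-layer accountability stack, written down rather than assumed.
# Domain names are illustrative; the point is that the mapping exists.
ACCOUNTABILITY_STACK = {
    "model_behaviour": "vendor",
    "integration_and_data_scope": "platform_owner",
    "work_product": "supervisor",
    "policy_and_framework": "risk_function",
    "governance_framework_exists": "board",
}

def who_owns(failure_domain: str) -> str:
    # An undocumented domain is itself a governance gap, not an answer.
    return ACCOUNTABILITY_STACK.get(failure_domain, "UNDOCUMENTED: governance gap")
```

The default return value encodes the post's point: when the stack is silent, the gap itself is the finding, rather than whoever happened to be present.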

Where to start

If you have one or more digital employees in production, the productive first move is a documented review of the four governance failures above, against your current configuration, with named gap owners. The depth of the review tracks the criticality of the work the digital employee performs.

If you are about to deploy your first digital employee, the productive first move is to design the governance before you sign the contract. Vendors will negotiate scope, identity model, audit trail design, and supervisor requirements pre-deployment. They will negotiate them post-deployment too, but at considerably greater cost.

We have written this up at length — including the configuration baselines, the supervision protocol templates, the audit log design, the contract language, and a phased twelve-month implementation plan — in the Digital Employees Runbook. It is the working document we use on enablement engagements, made available as a standalone PDF for organisations that prefer to do the work in-house with a reliable reference.

Get started

Bring AI risk under board oversight in two weeks.

A thirty-minute discovery call costs nothing. We confirm fit, scope, and timing, then issue a fixed-fee statement of work within two business days.