Detection engineering for organisations without a 24/7 SOC
Most mid-market organisations don't have a 24/7 SOC and won't justify the cost of one for years. That's not a reason to give up on detection — it's a reason to be specific about what you actually need to detect, and how.
The marketing version of detection engineering is "build a 24/7 SOC". Most mid-market organisations cannot justify the cost: covering a single seat around the clock takes four to five full-time analysts once rosters and leave are accounted for, plus a SIEM with high-volume licensing, the threat intelligence subscriptions, and the playbook engineering. The total annual cost runs into seven figures for a meaningful operation, and the marginal benefit over a well-designed alternative is often modest.
The honest mid-market answer isn’t no detection — it’s a detection program designed for the constraint. It optimises for fewer high-quality alerts, reasonable response times, and a cost envelope that’s a fraction of a 24/7 SOC.
This post sets out the structure we use when a mid-market client asks: "how do we do detection without spending two million dollars a year?"
Constraints to design around
Mid-market detection engineering is constrained on:
- Headcount: typically zero to two full-time security engineers. Detection cannot require constant attention.
- Tooling budget: typically AUD 50–250k/year for SIEM and detection tooling combined. SaaS pricing is the natural fit.
- After-hours coverage: typically none in-house. Out-of-hours means MDR, on-call, or accepting delayed response.
- Tolerance for noise: low. The noise budget for a small team is small, and a false-positive-heavy detection program is worse than no program because it consumes the limited attention.
The architecture follows from these constraints.
The architecture
A workable mid-market detection architecture has four layers:
1. Logs into a SIEM that you can actually afford
Microsoft Sentinel for organisations on M365 (the integration is good, the pricing is workable for mid-market log volumes). Splunk or Chronicle for organisations with more bespoke environments. Elastic for technical teams that want to operate the platform themselves.
The SIEM ingests:
- Microsoft Defender XDR (or equivalent endpoint+identity+email telemetry)
- Cloud platform audit logs (CloudTrail, Azure Activity Log, Google Cloud Audit Logs)
- IdP sign-in and audit logs (Entra, Okta)
- Critical SaaS application audit logs (Salesforce, ServiceNow, GitHub, etc.)
- VPN and remote access logs
- Network telemetry where it’s economically feasible (firewall logs from edge devices)
It does not ingest everything-with-a-log. The cost discipline matters. Logs that don’t drive detection or investigation aren’t ingested.
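That discipline is easier to hold when every source carries an explicit keep-or-drop decision. A minimal sketch of such a review in Python; the source names, volumes, and flags below are placeholders, not recommendations:

```python
# Illustrative log-source register: every candidate source records whether it
# drives detection or investigation, so "why do we pay to ingest this?" always
# has an answer. All figures are placeholders.
LOG_SOURCES = [
    {"source": "Defender XDR",        "drives_detection": True,  "gb_per_day": 8, "ingest": True},
    {"source": "Entra sign-in logs",  "drives_detection": True,  "gb_per_day": 3, "ingest": True},
    {"source": "CloudTrail",          "drives_detection": True,  "gb_per_day": 5, "ingest": True},
    {"source": "Office printer logs", "drives_detection": False, "gb_per_day": 2, "ingest": False},
]

def review_ingestion(sources: list[dict]) -> None:
    """Flag sources whose ingestion decision contradicts their detection value."""
    for s in sources:
        if s["ingest"] and not s["drives_detection"]:
            print(f"Review: {s['source']} is ingested but drives nothing "
                  f"({s['gb_per_day']} GB/day of spend).")
        elif not s["ingest"] and s["drives_detection"]:
            print(f"Gap: {s['source']} drives detections but is not ingested.")

review_ingestion(LOG_SOURCES)
```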
2. A small set of high-quality detections
Detection content for a mid-market environment is roughly 30–80 rules, not 500. Each rule is:
- Documented (what it detects, what action to take, what the false-positive profile looks like)
- Tuned for your environment (not the vendor default)
- Tested at least quarterly
- Owned by a named person
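One way to make those four requirements stick is to treat them as mandatory fields on every rule, so an undocumented or unowned detection cannot exist. A minimal sketch; the field names are ours, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DetectionRule:
    """Metadata every rule must carry. Field names are illustrative."""
    rule_id: str
    detects: str                 # what the rule detects
    response_action: str         # what the analyst does when it fires
    false_positive_profile: str  # known benign triggers in this environment
    owner: str                   # a named person, not a team alias
    last_tested: date
    mitre_techniques: list[str] = field(default_factory=list)

    def is_stale(self, today: date, max_days: int = 90) -> bool:
        """Quarterly testing: flag rules untested for roughly a quarter."""
        return (today - self.last_tested).days > max_days
```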
The detections cover the high-impact attack patterns: identity (impossible travel, credential stuffing, persistence creation), endpoint (suspicious processes, EDR alerts of significance), email (mass phishing, BEC indicators), cloud (privilege escalation, suspicious resource creation, data exfiltration patterns), and SaaS (anomalous data export, app consent abuse).
The MITRE ATT&CK matrix is useful as a coverage map: which tactics and techniques does the detection content cover? Coverage gaps become a deliberate decision — fix or accept — rather than a surprise.
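The coverage map needs no special tooling: a mapping from technique ID to the rules that claim it, compared against the techniques you have judged relevant, surfaces the gaps. The technique IDs below are a placeholder selection, not a sector recommendation:

```python
# Techniques judged relevant to this organisation (placeholder selection).
RELEVANT_TECHNIQUES = {"T1078", "T1098", "T1110", "T1566", "T1567"}

# Which rules claim coverage of which techniques (illustrative).
COVERAGE = {
    "T1078": ["RULE-012", "RULE-031"],  # Valid Accounts
    "T1110": ["RULE-019"],              # Brute Force
    "T1566": ["RULE-004"],              # Phishing
}

covered = {t for t, rules in COVERAGE.items() if rules}
for technique in sorted(RELEVANT_TECHNIQUES - covered):
    # Each gap becomes a deliberate decision: write a rule, or accept and document.
    print(f"Coverage gap: {technique} (fix or accept?)")
```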
3. Triage process designed for the team size
The triage flow:
- Detection fires → goes to a queue (Sentinel incidents, Splunk SOAR, etc.).
- Queue is reviewed at scheduled intervals — for a 1–2 person team, every 4 hours during business hours is realistic, with on-call for high-severity after hours.
- Each alert is acknowledged in <30 minutes during business hours, with the disposition (true positive, false positive, in progress, escalated) recorded.
- High-severity alerts trigger immediate investigation. Medium and low-severity alerts are reviewed in batches.
The discipline is queue zero by end of shift. An accumulating queue is a leading indicator of either tooling problems (too noisy) or capacity problems (insufficient staff).
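Both the acknowledgment SLA and the queue-zero discipline are mechanically checkable. A sketch, assuming each alert record carries fired/acknowledged timestamps and a disposition (the field names are ours):

```python
from datetime import datetime, timedelta

ACK_SLA = timedelta(minutes=30)  # business-hours acknowledgment target

def sla_breaches(alerts: list[dict], now: datetime) -> list[dict]:
    """Alerts still unacknowledged past the SLA window."""
    return [a for a in alerts
            if a["acknowledged_at"] is None and now - a["fired_at"] > ACK_SLA]

def queue_zero(alerts: list[dict]) -> bool:
    """End-of-shift check: every alert has a closed disposition recorded."""
    return all(a["disposition"] not in (None, "in_progress") for a in alerts)
```

Run the first check on a timer during business hours and the second at end of shift; a queue_zero failure two days running is the capacity signal described above.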
4. After-hours coverage via MDR
The most expensive shift to staff in-house is the night shift. The pragmatic mid-market answer is a Managed Detection and Response (MDR) provider for after-hours coverage. The MDR:
- Receives alerts from your SIEM.
- Triages and responds within their SLA.
- Escalates to your on-call when something requires customer-side action.
- Provides a daily handover at the start of your business day.
MDR pricing varies. For mid-market environments, AUD 80–200k/year is typical for credible providers. Adding MDR to the architecture costs a fraction of staffing even one after-hours seat in-house and provides 24/7 coverage.
The MDR doesn’t replace your in-house capability. They handle alerting and basic response; your team owns detection content, environment context, and complex investigation.
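Integration mechanics vary by provider; many accept alerts over a webhook. A minimal sketch of the hand-off, where the endpoint URL and payload shape are hypothetical rather than any real provider's API:

```python
import requests

MDR_WEBHOOK = "https://mdr.example.com/ingest"  # hypothetical endpoint

def forward_to_mdr(incident: dict) -> None:
    """Push a SIEM incident to the MDR intake. Payload shape is illustrative."""
    payload = {
        "id": incident["id"],
        "severity": incident["severity"],
        "title": incident["title"],
        "fired_at": incident["fired_at"],
        # Environment context travels with the alert so the MDR can triage
        # without paging your on-call for routine questions.
        "context": incident.get("context", ""),
    }
    requests.post(MDR_WEBHOOK, json=payload, timeout=10).raise_for_status()
```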
What you do not need
A mid-market detection program does not need:
- Full-stack SOAR for everything. A few specific automations (auto-disable after credential leak, auto-isolate on EDR alert) are valuable; a sketch of one follows this list. A platform-level SOAR investment is overkill.
- Threat intelligence platform. Most mid-market organisations don’t have the analyst capacity to operationalise raw threat intel. The vendor-provided intel built into your SIEM and endpoint tools is sufficient.
- Custom-built detection platform. The major SIEMs are good enough. Building your own is a multi-FTE effort that doesn’t pay off.
- 24/7 in-house staffing. As above.
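As an example of the few automations worth having, here is the shape of an auto-disable action against Entra ID. The PATCH on /users/{id} setting accountEnabled to false is the documented Microsoft Graph operation; token acquisition and the triggering logic are elided, so treat the surrounding structure as a sketch:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def disable_user(user_id: str, token: str) -> None:
    """Disable an Entra ID account after a confirmed credential leak.

    Assumes an app token with User.ReadWrite.All; acquiring and
    safeguarding that token is out of scope for this sketch.
    """
    resp = requests.patch(
        f"{GRAPH}/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"accountEnabled": False},
        timeout=10,
    )
    resp.raise_for_status()
```

Wire this to a high-confidence trigger only; an auto-disable that fires on a noisy detection is a self-inflicted outage.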
Metrics that matter
The metrics for a mid-market detection program:
- Alerts per day — the noise level. If trending up without environment change, tuning is needed.
- False-positive rate — percentage of alerts that don’t reflect real activity of concern. Target: under 30% for high-severity, under 60% overall.
- Mean time to acknowledge — for high-severity, under 30 minutes business hours, under 2 hours after-hours via MDR.
- Mean time to triage — initial assessment within 1 hour business hours.
- Coverage of MITRE ATT&CK techniques relevant to your sector — not all of them, but a documented coverage statement.
- Quarterly detection-content review completion — actually reviewed, not theoretically reviewed.
These are reportable to the audit committee.
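All of them fall out of the disposition data the triage process already records. A sketch of the first three, assuming the same alert fields as in the triage example:

```python
from statistics import mean

def detection_metrics(alerts: list[dict]) -> dict:
    """Noise level, false-positive rate, and mean time to acknowledge."""
    resolved = [a for a in alerts
                if a["disposition"] in ("true_positive", "false_positive")]
    fp_rate = (sum(a["disposition"] == "false_positive" for a in resolved)
               / len(resolved)) if resolved else 0.0
    acked = [a for a in alerts if a["acknowledged_at"] is not None]
    mtta = (mean((a["acknowledged_at"] - a["fired_at"]).total_seconds() / 60
                 for a in acked) if acked else None)
    fired = [a["fired_at"] for a in alerts]
    days = max((max(fired) - min(fired)).days, 1) if fired else 1
    return {"alerts_per_day": len(alerts) / days,
            "false_positive_rate": fp_rate,
            "mtta_minutes": mtta}
```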
A practical first move
If your detection capability is currently "we get alerts from our EDR and look at them sometimes", the productive first move is to set up SIEM ingestion for the four high-leverage log sources: identity, endpoint, email, and cloud platform. Then write or import 10 high-quality detections covering the most common attack patterns. Then build the triage cadence and stick to it.
The rest scales from there. The mid-market detection capability is built incrementally, not all at once. The investment is real but smaller than the marketing suggests.
For organisations doing this work alongside CPS 234 alignment or as part of a broader security uplift, the Security Posture Assessment reviews the existing detection capability against the size and risk profile of the organisation.