
If I were to start validating my cloud detection...

Thoughts on starting cloud-centric detection assessment


Prologue

I had an interesting 45-minute conversation with a security engineer from a mostly-cloud company. We talked about how one would start a detection assessment program, mainly brainstorming ideas about what to test.

It was an impromptu chat (kind of), so I yapped for a good couple of minutes using the "spaghetti at the wall" strategy. I have had some time since to rethink what I said, and I guess this is where I would start.

The conversation was mainly about finding ideas on what to test first, and less about the logistics of starting such a program.

Anatomy

The diagram above, from Practical Detection Engineering, describes the detection engineering life cycle. A phase similar to validation, called testing, is part of the development phase. To make it clear for readers who haven't read the book, here are the differences between testing and validation:

Primary Goal
  • Testing: ensure a specific detection definition (rule/code) is implemented correctly and accurately reflects its intent in the production environment.
  • Validation: examine how the detection environment behaves in response to threat actor techniques.

Focus
  • Testing: the individual detection rule itself, ensuring it returns the expected data, minimizes false positives, and is performant.
  • Validation: the adversarial tactics and techniques executed, often relying on the output of one or a combination of implemented detections.

Process Alignment
  • Testing: a sub-process of the detection engineering life cycle, concerned with preparing the rule for production.
  • Validation: does not need to be executed in-band with the detection engineering life cycle; it can be executed independently.

Output
  • Testing: a detection rule ready for implementation in production.
  • Validation: a tactic- and technique-oriented report on how the subset of the detection environment responded to the executed techniques.

If we are talking detection validation (cloud or otherwise), it falls within the validation phase, which in broad terms is executed in three stages:

  • Planning: this is the phase where objectives, scopes, timelines and stakeholders are defined. The specific defensive capabilities targeted for validation and the criteria for determining their effectiveness are rigidly defined during this phase.

  • Execution and data collection: this is where TTPs are executed against the target and data is collected.

  • Analysis and reporting: analysis and reporting of the testing output. This phase identifies gaps.
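The output of the planning stage can be captured as a small, structured record that the later stages consume. A minimal sketch, assuming made-up field names and example values (the technique IDs are real ATT&CK identifiers, but the rest is illustrative, not from any standard):

```python
from dataclasses import dataclass, field

@dataclass
class ValidationPlan:
    """Planning-stage artifact for one detection validation exercise (illustrative)."""
    objective: str                      # what we want to learn
    scope: list[str]                    # environments / accounts in scope
    ttps: list[str]                     # ATT&CK technique IDs to execute
    success_criteria: str               # how effectiveness is judged
    stakeholders: list[str] = field(default_factory=list)

plan = ValidationPlan(
    objective="Validate detections for credential access in the prod AWS account",
    scope=["aws:prod"],
    ttps=["T1552.005", "T1078.004"],    # Cloud Instance Metadata API, Cloud Accounts
    success_criteria="Alert fires within 15 minutes of technique execution",
    stakeholders=["detection-eng", "cti", "cloud-platform"],
)
print(plan.ttps)  # the techniques handed off to the execution stage
```

Keeping the plan as data rather than a document makes it trivial to diff across iterations and feed directly into the execution tooling.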

The validation phase can cover the whole set of TTPs used by a known threat actor, simulating an attack by that actor, or it can be made granular to test specific techniques from a threat actor.

With that in mind, the rest of this blog post sits within the planning stage of the validation phase mentioned above.

Identify

Armed with an understanding of validation, and of what we are trying to accomplish in the planning stage, we start by identifying the critical assets to protect. This step answers the question: what is the most critical thing to protect?

  1. Identify Critical Assets: List and rank the systems, data, and applications that are mission-critical (e.g., Active Directory, financial databases, key intellectual property).

  2. Model Your Threat: Identify the most likely Threat Actors targeting your organization/industry.

  3. Map Adversary Techniques (MITRE ATT&CK): Determine the specific Tactics, Techniques, and Procedures (TTPs) those threat actors use against your critical assets.
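The three identification steps above can be sketched as a simple mapping from critical assets, through likely actors, to the techniques worth validating. The asset names, actor names, and actor-to-technique mapping below are made-up placeholders (the technique IDs are real ATT&CK identifiers); in practice the mapping comes from your CTI team:

```python
# Illustrative only: critical assets -> likely actors -> ATT&CK techniques.
critical_assets = ["identity-provider", "billing-database"]

# Which (hypothetical) actors target which assets.
actor_targets = {
    "ACTOR-A": ["identity-provider"],
    "ACTOR-B": ["identity-provider", "billing-database"],
}

# Which techniques each actor is known to use (per your CTI).
actor_ttps = {
    "ACTOR-A": ["T1078.004", "T1098.001"],   # Cloud Accounts, Additional Cloud Credentials
    "ACTOR-B": ["T1530", "T1078.004"],       # Data from Cloud Storage, Cloud Accounts
}

# Techniques used by any actor targeting a critical asset become validation candidates.
candidates = sorted({
    ttp
    for actor, assets in actor_targets.items()
    if any(a in critical_assets for a in assets)
    for ttp in actor_ttps[actor]
})
print(candidates)  # -> ['T1078.004', 'T1098.001', 'T1530']
```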

Prioritize

This is where you choose the detection to validate. There are several considerations to help you prioritize:

  1. High ROI: Focus on the few techniques that, if detected, stop an attacker from causing maximum damage. These are the detections with the highest ROI. Focus on critical phases: P1, Initial Access and Persistence (e.g., Valid Accounts, External Remote Services), and P2, Credential Access and Privilege Escalation (e.g., OS Credential Dumping, Process Injection).

  2. Emerging Techniques (Gap Analysis): Work with your Threat Intelligence (CTI) team to identify new MITRE ATT&CK techniques that have been observed in the wild targeting your industry or region since your last validation. Any new, high-prevalence technique that you don't have coverage for becomes an immediate high-priority detection rule to create, test, and validate.

  3. Adversary Playbooks: Prioritize testing the detection rules that cover the full attack chains of the specific threat actors you are most concerned about. This means validating the combination of rules that, together, should catch the attacker from initial access to data exfiltration.

  4. Maintenance: Rules decay over time due to system changes, software updates, and data source shifts. This pillar focuses on hygiene. These changes may manifest as high false positive rate (FPR) rules, low-confidence rules, critical data source changes, etc.
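The four considerations above can be combined into a rough scoring pass over candidate detections to produce a first validation queue. The detection names, fields, and weights below are assumptions for illustration, not a standard formula:

```python
# Rough prioritization sketch: score each candidate detection on the four pillars.
detections = [
    {"name": "Valid Accounts login anomaly", "roi": 3, "new_in_wild": 0,
     "in_actor_playbook": 1, "decayed": 0},
    {"name": "CloudTrail logging disabled",  "roi": 2, "new_in_wild": 1,
     "in_actor_playbook": 1, "decayed": 1},
    {"name": "Legacy VPN alert",             "roi": 1, "new_in_wild": 0,
     "in_actor_playbook": 0, "decayed": 1},
]

# Assumed weights: ROI matters most, then CTI signal and playbook coverage, then hygiene.
WEIGHTS = {"roi": 3, "new_in_wild": 2, "in_actor_playbook": 2, "decayed": 1}

def score(d):
    return sum(WEIGHTS[k] * d[k] for k in WEIGHTS)

# Highest score goes to the front of the validation queue.
for d in sorted(detections, key=score, reverse=True):
    print(f"{score(d):>2}  {d['name']}")
```

The point is not the exact numbers but forcing an explicit, arguable ranking instead of an ad-hoc one.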

Ideas for Prioritization

Reports or Research

There are plenty of reports and research pieces published by vendors, consultants, and others. These publications draw on surveys, lessons learned from public incidents, etc. They can be a good place to start a discussion about where to begin validating detections; of course, the more specialized your organization is (in terms of its cloud architecture), the less relevant these reports will be. You can also filter these publications by industry to make them more relevant to your needs.

The latest Google Threat Horizons report (at the time this blog was posted) highlights a couple of common themes found throughout 2025. One of them is the importance of foundational security, which is also highlighted in their dashboard shown below.

One of the more prominent reports is by the CSA, aptly named Top Threats to Cloud Computing. For the latest edition, the CSA gathered information using two stages of surveys and interviews. The first stage collects an initial list of top threats through surveys and discussions that aim to be in-depth, while the second stage uses a broad audience of 500 security professionals to rank the results from the first stage.

The result is a list of top threats with a similar gist to Google's Threat Horizons: "Misconfiguration and inadequate change control", "Identity and Access Management", and "Insecure interfaces and APIs", all pretty basic, are highlighted as the top three threats.

The CSA's publication goes into even more detail, covering common variations of these threats, their impact (technical, operational, and business), anecdotes (case examples), and even related controls. All of this is very useful in a discussion of which detections we should test.

One more point for the CSA: the amazing artwork used throughout the publication.

Security Benchmark and Guidelines

Security benchmarks and guidelines that are specifically crafted for cloud environments can be a very valuable inventory from which to start your discussion of detection assessment. Examples include the CSA Cloud Controls Matrix, the CIS cloud benchmarks, and vendor-specific ones, e.g. the AWS Well-Architected Framework or the Microsoft Azure Security Benchmark.

The heavy lifting in using these benchmarks is "processing" them for use in detection assessment. Since these publications are meant as benchmarks/guidelines, most of them are prescriptive, so the process will involve discussing at least these questions:

  • do we need a detection for that specific prescriptive guideline?

  • do we have a detection for that specific prescriptive guideline?

  • and does our detection for that prescriptive guideline work?

Nevertheless, these benchmarks, as with all the other sources I mention here, are invaluable for starting a discussion on detection assessment.
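One way to "process" a benchmark is to track the three questions per control in a small coverage table, which then yields two backlogs: detections to build and detections to validate. The control IDs below are made-up placeholders, not real CIS or CCM identifiers:

```python
# Sketch: track each benchmark control against the three questions
# (need a detection? have one? does it work?).
controls = [
    {"id": "BENCH-1.1", "need_detection": True,  "have_detection": True,  "validated": True},
    {"id": "BENCH-1.2", "need_detection": True,  "have_detection": True,  "validated": False},
    {"id": "BENCH-2.3", "need_detection": True,  "have_detection": False, "validated": False},
    {"id": "BENCH-3.4", "need_detection": False, "have_detection": False, "validated": False},
]

# Controls that need a detection we don't have -> build backlog.
build = [c["id"] for c in controls
         if c["need_detection"] and not c["have_detection"]]

# Controls with a detection that hasn't been validated -> validation backlog.
validate = [c["id"] for c in controls
            if c["have_detection"] and not c["validated"]]

print("build:", build)        # ['BENCH-2.3']
print("validate:", validate)  # ['BENCH-1.2']
```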

Threat Models or Threat Model Framework

A threat model framework that comes to mind is the UCTM. Its authors state that their goal is "to highlight the top undifferentiated attack sequences — not every possible undifferentiated or differentiated sequence", claiming that their list covers the majority of attacks the majority of organizations will experience. It tries to be the Pareto principle of cloud attacks, providing a list of the top cloud attack sequences.

Organizations will (commonly) base their defenses (i.e. detections) on their threat model. Starting from the threat models currently available in your organization is also a good place to begin the discussion. A common problem is that threat models tend to be scoped to specific projects or technologies, so translating them into broader detections may present a bit of a challenge.

Based on that, threat models and frameworks are also an amazing place to start the discussion on detection assessment.

Threat Databases

Examples of threat databases are Wiz's Cloud Threat Landscape and Datadog's Cloud Security Atlas. These are collections of threats and vulnerabilities, specifically built for cloud environments. Datadog's is a bit easier to filter and includes detection suggestions plus instructions for reproducing each threat with Stratus Red Team, while Wiz's has more related incident anecdotes.

These databases might not be the best place to start, but they will surely contribute to your validation vocabulary, especially once you have run your detection assessment for several iterations. They will also be very useful for keeping your validation up to date with the newest vectors.

Epilogue

This post briefly describes some ideas for getting started with detection validation, focusing on the most important part of the validation phase: planning. Ideas on how to prioritize and choose which detections to test are laid out above. The hard part is actually operationalizing the planning stage.

References

Practical Detection Engineering

https://www.upwind.io/glossary/mitre-attck-evaluations