About onboarding logs to a SIEM

Lessons learned, common pitfalls, and how to avoid them

I was working on a task where a bunch of application logs needed to be onboarded and monitored for alerts. I've been thinking about the best way to share the experience from this job, and a full workflow would probably be more appropriate and useful (let me know if you are aware of such a framework/workflow). Unfortunately, while the spirit is willing, the flesh is weak, so maybe in the future.

The next best thing is a list of lessons learned from my observations of the log onboarding process:

A clear and concise scope should be provided at the beginning. Without this, there will inevitably be scope creep.

A standardized alert format, description, and set of metadata for each use case should be established, along with a quality assurance process for the use cases produced during onboarding.
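To make this concrete, here is a minimal sketch of what a standardized use case record and a basic QA check could look like. The field names and the `is_complete` check are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

# Hypothetical record for one detection use case.
# Field names are illustrative, not an established standard.
@dataclass
class UseCase:
    name: str
    description: str
    mitre_technique: str            # e.g. "T1078"
    severity: str                   # e.g. "high"
    data_sources: list = field(default_factory=list)
    owner: str = "unassigned"

    def is_complete(self) -> bool:
        # A minimal QA gate: every mandatory field must be filled in
        # before the use case is accepted into the inventory.
        return all([self.name, self.description,
                    self.mitre_technique, self.severity, self.data_sources])

uc = UseCase(
    name="Impossible travel login",
    description="Alert on logins from distant locations in a short window.",
    mitre_technique="T1078",
    severity="high",
    data_sources=["vpn_logs", "auth_logs"],
)
print(uc.is_complete())  # True
```

A real QA process would go further (naming conventions, tested queries, documented triage steps), but even a mandatory-field gate like this catches the half-filled use cases that otherwise pile up during onboarding.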

A clear workflow should be known to all stakeholders. For application owners, this provides knowledge and clarity on their responsibilities. For engineers, it offers clear direction. Additionally, a clear workflow helps identify potential blockers.

A threat modeling process should have detection in mind. Many threat modeling exercises result in threat models that are difficult or inefficient to translate into detections. (e.g. Detection-Oriented Modelling Framework - DOMF, 2023.idsecconf.org)

If your SIEM/SOAR is not effective at maintaining an inventory of these use cases, it's important to have a good process for creating and maintaining this inventory yourself. When designing that process, keep in mind that in the future you might need to categorize use cases by their MITRE techniques or by the tables they reference. This inventory will also be useful outside of onboarding, especially for measuring SOC metrics.
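Even a flat list becomes useful once you can pivot on it. A small sketch of the two groupings mentioned above, with hypothetical use case names and table names:

```python
from collections import defaultdict

# Hypothetical inventory rows: (use case name, MITRE technique, source tables).
inventory = [
    ("Impossible travel login", "T1078", ["auth_logs"]),
    ("Suspicious service install", "T1543", ["wineventlog"]),
    ("Dormant account login", "T1078", ["auth_logs", "hr_feed"]),
]

by_technique = defaultdict(list)
by_table = defaultdict(list)
for name, technique, tables in inventory:
    by_technique[technique].append(name)
    for table in tables:
        by_table[table].append(name)

# Example pivot: which detections break if auth_logs stops flowing?
print(by_table["auth_logs"])
# Example pivot: how much coverage do we have for a given technique?
print(by_technique["T1078"])
```

The second pivot is exactly the kind of question that comes up mid-onboarding ("does this new log source actually add coverage, or duplicate it?"), and answering it from a spreadsheet by hand gets old fast.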

Tracking visibility and alert capabilities (current and desired) is another thing a SOC should have. This can be done from a couple of different views: you can use MITRE, asset types, or applications as the basis for visibility. Without tracking capabilities, it will be hard to measure how far you are from your desired goal, and worse, whether you are moving toward your goal at all. Regularly reviewing your current capabilities against your desired goals is also important.
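As a sketch of the MITRE-based view, tracking can be as simple as comparing current against desired detection counts per tactic. The tactics and numbers below are made up for illustration:

```python
# Hypothetical desired vs. current detection counts per MITRE tactic.
# The numbers are illustrative, not a recommendation.
desired = {"Initial Access": 5, "Persistence": 4, "Exfiltration": 3}
current = {"Initial Access": 3, "Persistence": 4, "Exfiltration": 0}

for tactic, goal in desired.items():
    have = current.get(tactic, 0)
    pct = 100 * have / goal
    print(f"{tactic}: {have}/{goal} ({pct:.0f}% of goal, gap {goal - have})")
```

Rerunning the same comparison every quarter gives you the trend the paragraph above asks for: not just the distance to the goal, but whether the distance is shrinking.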

Maintaining an inventory and tracking visibility and alerts helps combat one of the pitfalls of detection engineering mentioned in Chuvakin's blog: that it often starts from available data, not from relevant threats. Prioritization is still very much a gut-feeling affair based on assumptions, individual perspective, and analysis bias.

And since I quoted Chuvakin previously, let me close this post with another word of wisdom from his blog:

Inscrutable and unmaintainable detection content — if the detection was not developed in a structured and meaningful way, then both alert triage and further refinement of detection code will ..ahem … suffer (this wins the Understatement of the Year award)