Security Workflow: On-Demand Threat Hunting
This walkthrough shows you how to build a manual workflow for interactive threat hunting. By the end, you’ll have a reusable workflow that accepts a hunt hypothesis, queries one or more data sources, pivots on suspicious indicators, and produces a structured hunt report.
Overview
What this workflow does:
- Accepts a hunt hypothesis and search scope as workflow inputs
- Queries a primary data source such as Splunk, Datadog, or another log system
- Uses an LLM to identify suspicious patterns, IOCs, and follow-up pivots
- Runs a second query to validate or expand on those findings
- Produces a structured hunt report with timeline, affected systems, and recommended actions
Who it’s for: Threat hunters, senior SOC analysts, incident responders
What it produces: A repeatable hunt report with findings, IOCs, confidence level, and next actions
Manual inputs -> Query logs -> Correlate with LLM -> Pivot on IOCs -> Summarize findings
What You’ll Need
- Kindo account with at least one logging or SIEM integration configured
- Access to a dataset worth hunting in (authentication logs, endpoint telemetry, DNS, proxy logs, cloud audit logs, etc.)
- A clear hypothesis to test
- ~45 minutes
The Workflow
Step 1: Create a Manual Workflow Agent
- Navigate to Agents — Click Agents in the left sidebar.
- Click Create an Agent — Select Workflow as the agent type.
- Name your agent — For example: “Threat Hunting Workflow”.
- Add a description — Example: “Queries security data sources, correlates indicators, and summarizes suspicious activity.”
- Select a model — Choose a reasoning-capable model that can synthesize noisy evidence well.
Step 2: Define Workflow Inputs
Workflow inputs make the hunt reusable. Each run can use a different hypothesis and scope without editing the agent.
- Open the Agent Configuration panel — Click the configuration icon if needed.
- Add a workflow input — Under Workflow Inputs, click Add Input.
- Create these inputs:
  - Name: `hunt_hypothesis`
    - Type: Text
    - Label: “Hunt Hypothesis”
    - Description: “What are you trying to prove or disprove?”
  - Name: `time_window`
    - Type: Text
    - Label: “Time Window”
    - Description: “Example: last 4 hours, last 24 hours, last 7 days”
  - Name: `scope`
    - Type: Text
    - Label: “Scope”
    - Description: “Hosts, users, networks, applications, or environment to search”
- Save the inputs — These values become template variables such as `{{hunt_hypothesis}}`, `{{time_window}}`, and `{{scope}}`.
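Conceptually, each run substitutes the input values into any step text that references them. A minimal sketch of that substitution in Python (the `render` helper is illustrative, not Kindo’s implementation; only the input names come from the steps above):

```python
import re

def render(template: str, inputs: dict) -> str:
    """Replace {{name}} placeholders with this run's input values."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: inputs.get(m.group(1), m.group(0)),  # leave unknown names as-is
        template,
    )

run_inputs = {
    "hunt_hypothesis": "Credential abuse from a newly observed IP",
    "time_window": "last 24 hours",
    "scope": "production VPN authentication logs",
}

query = render("Search {{scope}} over {{time_window}} for: {{hunt_hypothesis}}", run_inputs)
print(query)
```

Each run supplies a fresh `run_inputs`, which is what makes the workflow reusable without editing the agent.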
Step 3: Add an API Action Step — Query the Primary Data Source
Start with the system most likely to contain the signal you need.
- Add a step — Click the + button under Agent Steps.
- Select API Action Step — Choose API Action Step from the dropdown.
- Configure a search query for your primary data source. Examples:

  For Splunk:
  - Query recent authentication, endpoint, or proxy logs
  - Use `{{time_window}}` to bound the search
  - Use `{{scope}}` to constrain hosts, users, or networks

  For Datadog or another logging platform:
  - Search log indexes for the systems named in `{{scope}}`
  - Filter the time range using `{{time_window}}`
  - Use the hypothesis to guide the search expression or tags
- Reference the workflow inputs in the query or request body:

  ```
  Hunt hypothesis: {{hunt_hypothesis}}
  Time window: {{time_window}}
  Scope: {{scope}}
  ```
- Save the step — The returned data becomes available to later steps, and large responses can be inspected in the sandbox.
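As one illustration of what the primary query might look like, here is a Splunk-style search bounded by the workflow inputs (the index, field names, and threshold are invented for this example; adapt them to your environment):

```
index=authentication earliest=-24h
| search src_ip IN ({{scope}})
| stats count, values(dest) AS targets BY user, src_ip
| where count > 20
```

The key pattern is the same regardless of platform: the time bound comes from `{{time_window}}` and the asset filter comes from `{{scope}}`, so the query never needs hand-editing between runs.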
Step 4: Add an LLM Step — Correlate and Extract Leads
Now turn raw log output into a set of concrete leads to investigate.
- Add a step — Click the + button and select LLM Step.
- Write the prompt — Use a structure like this:

  ```
  You are a threat hunter analyzing security telemetry.

  HUNT HYPOTHESIS:
  {{hunt_hypothesis}}

  SEARCH SCOPE:
  {{scope}}

  TIME WINDOW:
  {{time_window}}

  Review the results from the previous API step and do the following:
  1. Identify suspicious patterns, anomalies, or repeated behaviors
  2. Extract potential IOCs (IPs, domains, usernames, hosts, hashes, process names)
  3. Call out the most important follow-up pivots to run next
  4. Assign a confidence level: low, medium, or high

  Output as structured markdown with sections:
  - Key observations
  - Candidate IOCs
  - Suggested pivot queries
  - Confidence
  ```
- Select a model — Use a model that handles synthesis and pattern recognition well.
- Save the step — This step turns an initial search into a shortlist of useful pivots.
Step 5: Add an API Action Step — Pivot on the Most Suspicious Indicators
A good hunt rarely ends with one query. Use the extracted leads to search another source or another log slice.
- Add another API Action Step — Click the + button again.
- Choose a pivot target — Common options:
  - Query DNS or proxy logs for a suspicious domain
  - Query endpoint telemetry for a suspicious process or hash
  - Query identity logs for a suspicious user or IP
  - Search the same system with a narrower query built from the LLM findings
- Reference earlier outputs — Use the previous step’s findings to build the second query. For example, use the top IOC or suspicious hostname identified by the LLM step.
- Keep the query scoped — Threat hunts get expensive fast. Limit the pivot by time window, environment, or asset class.
- Save the step — You now have correlated evidence from at least two perspectives.
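The “build the pivot from earlier outputs” idea can be sketched as a small script, for instance one run in the sandbox to turn the correlation step’s markdown into a concrete second query. The sample LLM output, the regex, and the query syntax below are all illustrative, not actual Kindo output:

```python
import re

# Stand-in for the markdown produced by the correlation LLM step
llm_findings = """\
Key observations
- Repeated failed logins followed by a success from one address

Candidate IOCs
- 203.0.113.45
- badcdn.example.net

Suggested pivot queries
- Search DNS logs for badcdn.example.net
"""

# Keep only the "Candidate IOCs" section, then pull out domains and IPv4 addresses
ioc_section = llm_findings.split("Candidate IOCs")[1].split("Suggested pivot queries")[0]
iocs = re.findall(r"[\w.-]+\.[a-z]{2,}|\d{1,3}(?:\.\d{1,3}){3}", ioc_section)

# Build a narrow pivot query from the top indicator, still bounded in time
pivot_query = f'search src_ip="{iocs[0]}" OR query="{iocs[0]}" earliest=-24h'
print(pivot_query)
```

Asking the LLM step for a fixed section layout (as the Step 4 prompt does) is what makes this kind of mechanical extraction reliable.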
Step 6: Add an LLM Step — Produce the Hunt Report
Use a second LLM step to synthesize the full investigation.
- Add a final LLM Step — Click the + button and choose LLM Step.
- Write the prompt — Ask the model to synthesize all evidence into a final report:

  ```
  You are finalizing a threat hunting report.

  Use the evidence from the prior steps to produce a structured report with:
  1. Executive summary
  2. Hunt hypothesis tested
  3. Timeline of notable activity
  4. Confirmed or likely IOCs
  5. Affected users, systems, or environments
  6. Confidence level and reasoning
  7. Recommended next actions

  If the evidence does not support the hypothesis, say so clearly.
  Be explicit about what is confirmed versus inferred.
  ```
- Save the step — This output becomes the artifact your team can review, share, or turn into a follow-up investigation.
Running a Hunt
- Click Run — Start the workflow manually.
- Enter a realistic hypothesis — For example:

  ```
  hunt_hypothesis: Determine whether a newly observed IP is associated with credential abuse or lateral movement
  time_window: last 24 hours
  scope: production AWS accounts and VPN authentication logs
  ```
- Review the intermediate outputs — Check the API responses and first LLM analysis before accepting the final summary.
- Iterate — Adjust the hypothesis, time window, or scope and run again if you need to widen or narrow the search.
Working with Large Hunt Data
Threat hunting often produces more data than an LLM should ingest directly.
Useful Sandbox Patterns
- Use `rg`, `grep`, or lightweight scripts to isolate only the relevant records
- Build small timelines from raw logs before sending them to an LLM step
- Deduplicate repeated events so the model focuses on unique signals
- Split analysis by source (identity, endpoint, network) before writing the final summary
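As a concrete sketch of the deduplication pattern, standard shell tools collapse repeated events and rank them by frequency before anything reaches an LLM step. The sample log lines are invented for illustration:

```shell
# Small sample file standing in for exported hunt data
cat > events.log <<'EOF'
failed login user=alice src=203.0.113.45
failed login user=alice src=203.0.113.45
failed login user=bob src=198.51.100.7
failed login user=alice src=203.0.113.45
EOF

# Collapse repeats and rank by count so the model sees unique signals first
sort events.log | uniq -c | sort -rn
```

The same pipeline scales to much larger exports, and the counts themselves are useful evidence (a line repeated thousands of times is a different signal than one seen once).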
For more techniques, see Working with Large Context.
Adapting This Workflow
Add More Data Sources
Start with one log source, then expand:
| First source | Possible pivot source |
|---|---|
| Identity logs | Endpoint telemetry |
| SIEM alerts | DNS or proxy logs |
| Cloud audit logs | IAM change history |
| EDR telemetry | Network flow logs |
Save Reusable Hunt Templates
Keep a library of hypotheses such as:
- Suspicious OAuth consent grants
- Impossible travel or unusual geo-authentication
- Lateral movement after VPN login
- New admin role assignments in cloud environments
- Beaconing to suspicious external infrastructure
You can store these in internal documentation or another system your team already uses.
Chain into Follow-On Workflows
After the hunt completes, you can:
- Trigger a triage or remediation workflow
- Create a ticket with the findings
- Send the final report to a security operations channel
- Hand off to another agent for containment or evidence collection
See Multi-Agent Coordination for patterns that connect investigations to downstream automation.
Troubleshooting
Query Returns Too Much Data
Symptoms: The first API step succeeds, but the output is too large or noisy to use.
Fix:
- Narrow `time_window`
- Limit `scope` to a smaller set of users, hosts, or services
- Filter for event types relevant to the hypothesis
- Use sandbox tools to reduce the dataset before LLM analysis
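The time-window and event-type filters above can be applied in the sandbox before any LLM step sees the data. A minimal Python sketch (the field names, timestamps, and fixed “now” are invented so the example is deterministic):

```python
from datetime import datetime, timedelta

# Invented sample standing in for a large, noisy API response
events = [
    {"ts": "2024-05-01T08:00:00", "user": "alice", "action": "login_failed"},
    {"ts": "2024-05-01T09:30:00", "user": "alice", "action": "login_success"},
    {"ts": "2024-04-28T12:00:00", "user": "bob", "action": "login_failed"},
]

now = datetime(2024, 5, 1, 10, 0, 0)  # fixed reference time for the example
window = timedelta(hours=4)           # corresponds to a "last 4 hours" time_window

recent = [
    e for e in events
    if now - datetime.fromisoformat(e["ts"]) <= window
    and e["action"].startswith("login")  # keep only event types tied to the hypothesis
]
print(len(recent), "events survive the filter")
```

Only the filtered subset is handed to the correlation step, which keeps both cost and noise down.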
LLM Suggests Weak or Obvious Pivots
Symptoms: The correlation step produces generic recommendations instead of useful next queries.
Fix:
- Make the hypothesis more specific
- Include clearer instructions about the types of signals you care about
- Ask for prioritized pivots, not a long brainstorm list
- Provide a small example of the output shape you want
Final Report Sounds More Certain Than the Evidence Supports
Symptoms: The report states conclusions as facts even though the data is ambiguous.
Fix:
- Instruct the LLM to separate confirmed findings from inference
- Require an explicit confidence level with reasoning
- Ask it to list missing evidence or validation steps
- Review the intermediate outputs before sharing the final report
Next Steps
- Security Workflow: Automated Alert Triage — Convert recurring alerts into structured triage reports
- Security Workflow: Scheduled Vulnerability Report — Build a recurring reporting workflow for vulnerability data
- Working with Large Context — Learn how to handle large log volumes safely
- Multi-Agent Coordination — Chain hunts into response and remediation workflows