How AI Can Help Humans Operate More Effectively in Cyber Crises


By Hanah Darley, Head of Threat Research, Darktrace

Crises are routine in cyber security. Whether driven by evolving threat actor tactics or newly discovered vulnerabilities, security teams face enormous stress and are frequently placed in what psychologists describe as “crisis states.”1

A crisis state is an internal experience of confusion and anxiety severe enough that formerly successful coping mechanisms fail, and ineffective decisions and behaviours take their place.2

Because crisis states are so common in cyber security, practitioners are more likely to make illogical choices under pressure. They also face rapidly changing information, demands for immediate decisions, and significant consequences for errors in judgement, with hundreds of shifting and uncertain data points and factors to evaluate.

The volume of crisis states is poised to grow as generative AI enables attackers to increase the speed, scale and sophistication of attacks.

Why is operating effectively and efficiently in a crisis state so challenging? There are several factors.

First, people are more likely to rely on instinct, making them susceptible to cognitive biases. This makes it harder to take in new information, process it appropriately, and make logical choices. Because crises occur unexpectedly and reach new thresholds suddenly, the stress faced by responders escalates quickly, and the instability creates doubt and insecurity.

These cognitive biases take many forms: for example, confirmation bias, in which people seek out information that confirms their pre-existing views, or hindsight bias, in which past events that were difficult to analyse at the time seem predictable given present context and information.

Crises also distort decision-making. People simplify new information and often anchor on the first information they receive rather than the most relevant.

For example, if an organisation has successfully defended against a ransomware attack before, a defender might assume the same remediations will succeed a second time. But ransomware is constantly evolving, and a second attack could use different tactics that are harder to defend against. In a crisis state, the defender may revert to the familiar strategy instead of acting on the latest information.

In an age of continually novel, AI-generated attacks, security teams need a different approach: AI.

AI can help augment human decision-making from detection through to incident response and incident mitigation. That’s why Darktrace introduced HEAL, which uses self-learning AI to help teams build cyber resilience and more confidently address live incidents, reducing the cognitive load they face.

Darktrace HEAL™ learns from your environment, the attack, and previous simulations to determine the most effective path to remediate and restore operations. This reduces information overload and enables more effective decision-making at critical moments.

HEAL also provides security teams with the ability to safely simulate real attacks within their own environment, allowing them to see how attacks would play out so they can be more prepared and psychologically ready if and when a real attack occurs.

AI can help reduce human bias by providing evidence-based remediation suggestions and making proportional responses based on data rather than interpretations or instincts. It can help cut through the fog of war to support human security teams.

Learn more about Darktrace HEAL and the full Cyber AI Loop at SecTor Booth #817.
