
Why cybersecurity tools fail when it comes to ambiguity

Artificial intelligence will likely help with cybersecurity, though figuring out how to handle ambiguous situations is critical.



There is a great deal of excitement about artificial intelligence (AI) and how, in the not-too-distant future, AI could have cognitive capabilities, allowing it to mimic how humans learn and make decisions. This ability is especially important in the realm of cybersecurity, where cyberattacks seldom present the same indicators twice.

If we look at what's available right now, though, the odds favor humans making better decisions than AI, at least under certain circumstances.

SEE: Artificial intelligence ethics policy (TechRepublic Premium)

Ambiguity is a challenge for cybersecurity tools


“Cybersecurity is very good at identifying activities that are black or white–either obviously bad and dangerous or clearly good and safe,” writes Margaret Cunningham, PhD, psychologist and principal research scientist at Forcepoint’s Innovation Lab, in her research paper Exploring the Gray Space of Cybersecurity with Insights from Cognitive Science. “But, traditional cybersecurity tools struggle with ambiguity–our algorithms are not always able to analyze all salient variables and make a confident decision whether to allow or block risky actions.” 

For example, an employee accessing sensitive files after company business hours might not be a security issue–the person could be traveling and in a different time zone. “We don’t want to stop the person from doing work because the access is flagged as an unapproved intrusion due to the time,” says Cunningham. “Building the capability to reason across multiple factors, or multiple categories, will help prevent the kinds of concrete reasoning mistakes that result in false positives and false negatives in traditional cyber toolsets.”
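Cunningham's point about reasoning across multiple factors can be illustrated with a toy example. The sketch below is purely hypothetical, assuming invented signal names and weights, and is not Forcepoint's or any vendor's actual detection logic; it shows how a mitigating signal (approved travel) can offset an otherwise suspicious one (after-hours access), rather than blocking on a single hard rule.

```python
# Illustrative sketch only: a toy multi-factor risk score.
# Signal names and weights are invented for this example.

def risk_score(signals: dict) -> float:
    """Combine several weak signals into one score instead of
    alerting on any single factor (e.g., time of access alone)."""
    weights = {
        "after_hours": 0.3,        # unusual time, but weak on its own
        "new_device": 0.4,         # unrecognized endpoint
        "sensitive_file": 0.2,     # target matters, but isn't conclusive
        "travel_scheduled": -0.5,  # mitigating context: approved travel
    }
    score = sum(weights[name] for name, active in signals.items() if active)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# Traveling employee working after hours: context keeps the score low.
traveler = risk_score({"after_hours": True, "sensitive_file": True,
                       "travel_scheduled": True})

# Same file access, no travel context, unknown device: score is high.
suspect = risk_score({"after_hours": True, "sensitive_file": True,
                      "new_device": True})
```

The design choice here is that no single factor is decisive; a real system would learn such weights from data rather than hard-code them, but the principle of weighing context against anomaly is the same.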

Cybercriminals morph their tools quickly

The success of cybercriminals, admits Cunningham, is in large part due to their ability to quickly morph attack tools, and cybersecurity tech cannot keep pace. “It’s not enough to build technology based on yesterday’s understanding of danger,” writes Cunningham. “We must create technology that is able to adjust to changing conditions and make meaning out of a wide range of ambiguous behaviors and activities.”

SEE: 3 ways criminals use artificial intelligence in cybersecurity attacks (TechRepublic)

Survival skills can help

Anthropologists suggest one important reason why humans are still around is our ability to create workable approaches to deal with whatever is occurring in the environment at any given moment. Cunningham adds, “Adults are able to engage in hypothetical and abstract problem solving, as well as dynamically shift positions on topics, decisions, or opinions on the fly.” 

If you are like me, ambiguity and not knowing cause great discomfort. Still, Cunningham believes we are better at coping with ambiguity than machines, especially when presented with something never before experienced. “We have exceedingly advanced sensory integration capabilities, allowing us to automatically and immediately transform information from our five senses into rich symbolic meaning,” explains Cunningham. “Humans also automatically give attention to, and ignore, specific environmental triggers that our brains effortlessly judge to be more or less important. Finally, humans can use memory, past experiences, and mental models of future goals to contextualize the world around us.”

SEE: Hiring Kit: Cybersecurity Engineer (TechRepublic Premium)

Where cybersecurity tech shines

Right now, humans can make better decisions than machines when ambiguity is in play; however, our ability to process a lot of information at once is severely lacking, and we suffer from limitations in attention span, fatigue, memory, and speed. This is where computing technology shines. Cunningham adds this about technology, “Its strengths are processing vast amounts of information extremely quickly, making what’s invisible to humans visible, and perceiving what people are unable to.”

Are security experts or tech tools the answer?

Cunningham feels the answer lies with AI, but current thinking needs a change of direction. “We can focus on building architectures that support robust analytics and strategically move our technology towards mirroring human capabilities in reasoning,” suggests Cunningham. “By using expert knowledge of the cybersecurity landscape of threats, along with expert knowledge in engineering analytic systems that can quickly and accurately reason about and make judgments with ambiguous information, cybersecurity will become better at both making sense of challenging signals and allowing people to continue to work within connected systems uninterrupted.”

SEE: Why we must strike a balance with AI to solve the cybersecurity skills gap (TechRepublic)

As Cunningham sees it, we'll never stop facing the unforeseen, the unknown, and the unexpected from cybercriminals. Now is the time to create tools that can effectively combat ambiguous attack vectors.

Cunningham concludes, “This approach can usher in a powerful new tool for cybersecurity professionals. Instead of simply dealing with activities and threats it already understands, technology will be able to “see” behavior that falls within the gray space, analyze it, fit it within an existing understanding, expand its current “mental model,” or build a new one to deal with the unforeseen, unknown, and unexpected.”
