AI Cybersecurity Is Not a Silver Bullet
Artificial intelligence can bolster human efforts in cybersecurity by analyzing and interpreting massive amounts of data, identifying patterns, and responding quickly to threats. AI security tools from Darktrace, for example, monitor network activity for deviations from typical behavior to detect and counter cyberattacks in real time.
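The general approach behind this kind of behavioral detection is anomaly detection on traffic features. The sketch below shows one minimal way to do it with an unsupervised model; the feature set, values, and thresholds are illustrative assumptions, not any vendor's actual implementation.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow features: bytes sent, bytes received, duration (s), distinct ports.
rng = np.random.default_rng(0)
baseline_flows = np.column_stack([
    rng.normal(5_000, 500, 200),     # bytes sent
    rng.normal(20_000, 2_000, 200),  # bytes received
    rng.normal(12, 2, 200),          # connection duration in seconds
    rng.integers(1, 4, 200),         # distinct destination ports
])

# Train an unsupervised detector on traffic assumed to be normal.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline_flows)

new_flows = np.array([
    [5_300, 21_000, 13.1, 2],     # close to the learned baseline
    [900_000, 150, 300.0, 450],   # exfiltration-style outlier
])

for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(flow, "->", status)

In practice such a model would be retrained continuously and combined with many other signals, but the core idea is the same: learn what "usual" looks like, then flag what does not fit.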
These technologies have become indispensable in defending against a rapidly evolving threat landscape. They can be used for risk prioritization, malware detection, and incident response guidance, tasks that would otherwise take hours or even weeks with traditional security processes.
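Risk prioritization, for instance, usually comes down to ranking findings by a combination of severity, asset importance, and exploitability. The snippet below is an illustrative scoring helper; the weights and fields are assumptions for the sketch, not a standard formula.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float               # base severity, 0-10
    asset_criticality: float  # 0-1, how important the affected asset is
    exploit_available: bool   # public exploit code observed

def risk_score(f: Finding) -> float:
    # Scale severity by asset importance, then boost if an exploit exists.
    score = f.cvss * (0.5 + 0.5 * f.asset_criticality)
    if f.exploit_available:
        score *= 1.5
    return min(score, 10.0)

findings = [
    Finding("Outdated TLS on intranet wiki", cvss=5.3, asset_criticality=0.2, exploit_available=False),
    Finding("RCE in internet-facing API", cvss=9.8, asset_criticality=0.9, exploit_available=True),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.2f}  {f.name}")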
Unfortunately, these technologies have also been adopted by criminal gangs, who use generative AI to create customized ransomware and phishing attacks designed to evade current security tools. They can exploit ML capabilities to trawl large troves of data and generate insights accurate enough to help them devise video deepfakes and spear-phishing attacks at unprecedented scale.
As these sophisticated attacks continue to emerge, it’s important to recognize that AI cybersecurity is not a silver bullet, and that humans must remain responsible for identifying potential vulnerabilities. Additionally, organizations should have a clearly defined incident response plan to guide them in the event of an attack.
That responsibility includes detailed knowledge of how the technology works and a full understanding of the risks associated with it. By following best practices for software maintenance, organizations can reduce the chances of an attack or compromise by minimizing their exposure. To this end, the NCSC recommends countermeasures such as limiting access to AI-enabled tools to those who need them for work, and adopting a zero trust model where possible.
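One way to think about that combination of least-privilege access and zero trust is a per-request gate that checks identity, role, and device posture rather than network location. The sketch below illustrates the idea; the roles, fields, and function names are hypothetical, not drawn from NCSC guidance or any specific product.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    device_compliant: bool  # e.g. disk encryption and patched OS
    mfa_verified: bool

# Assumed "work need" roles for the internal AI tool.
ALLOWED_ROLES = {"security-analyst", "ml-engineer"}

def authorize_ai_tool(req: Request) -> bool:
    """Allow access only when role, MFA, and device posture all check out."""
    return (
        req.role in ALLOWED_ROLES
        and req.mfa_verified
        and req.device_compliant
    )

print(authorize_ai_tool(Request("alice", "security-analyst", True, True)))  # True
print(authorize_ai_tool(Request("bob", "marketing", True, True)))           # False: no work need
print(authorize_ai_tool(Request("carol", "ml-engineer", False, True)))      # False: non-compliant device

The point of evaluating every request this way is that compromising the network perimeter, or a single account, is no longer enough on its own to reach the tool.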