BotNet News

Your source for Online Security News

AI cybersecurity

Modern AI cybersecurity solutions employ a combination of structured and unstructured data to learn from their encounters with cyber threats. They improve over time, uncovering and neutralizing phishing, spam, and opportunistic malware on endpoints or in networks. They also provide behavioral protection at the edge, spotting anomalies that human security analysts might miss.
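As a minimal sketch of the behavioral anomaly detection described above, the toy function below flags values that sit far from an endpoint's baseline. It is an illustration only: the function name, the z-score rule, and the sample data are all assumptions for this example, and real products use far richer models and features.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return values more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for behavioral anomaly detection;
    production systems model many features, not one count.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hourly outbound connections from one endpoint; the final burst stands out.
hourly_connections = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9, 10, 11, 480]
print(flag_anomalies(hourly_connections))  # → [480]
```

The point is the shape of the approach, not the statistics: the system learns what "normal" looks like for an entity and surfaces deviations, rather than matching known signatures.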

The scalability of AI cybersecurity enables organizations to deploy it on thousands of endpoints or across the network. This speeds and scales threat detection, shrinking the window in which attackers can exploit vulnerabilities. It can also help protect against sophisticated attacks that traditional signature-based methods may miss, such as ransomware delivered through an email message or anomalies in an employee's search behavior.

AI for cybersecurity can also support human security teams through risk-driven alert prioritization and automated incident response, enabling analysts to focus on true threats amid the noise. It can also speed up investigations by analyzing data from other systems and surfacing insights that may not be accessible to human analysts.
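Risk-driven alert prioritization can be sketched as scoring each alert on a few signals and sorting by the result. The `Alert` fields, weights, and sample alerts below are hypothetical, not any vendor's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int      # 1 (low) .. 5 (critical)
    asset_value: int   # 1 .. 5, importance of the affected host
    confidence: float  # 0.0 .. 1.0, model confidence it is a true positive

def risk_score(alert: Alert) -> float:
    """Fold severity, asset value, and confidence into one comparable number."""
    return alert.severity * alert.asset_value * alert.confidence

alerts = [
    Alert("port scan", severity=2, asset_value=2, confidence=0.9),
    Alert("ransomware beacon", severity=5, asset_value=5, confidence=0.7),
    Alert("failed login burst", severity=3, asset_value=4, confidence=0.4),
]

# Analysts work the queue from the top down.
for a in sorted(alerts, key=risk_score, reverse=True):
    print(f"{risk_score(a):5.1f}  {a.name}")
```

In practice the scoring model is learned rather than hand-weighted, but the workflow is the same: the machine ranks, the human investigates from the top.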

But as with any other technology, malicious hackers see an opportunity to leverage AI in their attack strategies. This year at Black Hat and Defcon, researchers from the cybersecurity startup HiddenLayer demonstrated how they could spoof a bank's ChatGPT-powered customer service bot into approving a fraudulent loan application, using only open-source tools and a small amount of audio or video featuring a person's voice or face. And just last month, DarkTrace, a cybersecurity firm launched on the premise that deep learning would revolutionize how we protect against cyberattacks, demonstrated a self-training offensive AI that can spoof any face and voice by blending them into its own synthetic image.