Kaspersky’s experts have examined the significant impact of AI on the cybersecurity landscape of 2023 in their latest report. The analysis takes a comprehensive approach, delving into the implications of AI for defenders and regulators while also evaluating its potential misuse by cybercriminals. This study is part of the annual Kaspersky Security Bulletin (KSB), which offers predictions and in-depth reports highlighting the major shifts in the ever-changing cybersecurity domain.
In an era marked by swift technological advancements and societal transformations, “AI” has become a central topic in global discourse. The proliferation of large language models (LLMs) has heightened security and privacy concerns, firmly linking AI to the realm of cybersecurity. Kaspersky’s researchers demonstrate how AI tools have aided cybercriminals in their nefarious activities in 2023 and also present the defensive capabilities of this technology. Additionally, the experts discuss the future trajectory of AI-related threats, which may include:
- The integration of instruction-following LLMs into consumer products is likely to introduce new, complex vulnerabilities at the intersection of probabilistic generative AI and traditional deterministic technologies. This will broaden the attack surface that cybersecurity professionals must safeguard, necessitating the development of novel security measures, such as requiring user consent for actions initiated by LLM agents.
- Red teamers and researchers are harnessing generative AI to create cutting-edge cybersecurity tools, potentially leading to an LLM- or machine learning (ML)-powered assistant. Such a tool could streamline red-teaming tasks by providing automated guidance based on the commands executed in a penetration testing scenario.
- In the upcoming year, scammers might enhance their schemes by using neural networks and generative AI tools to produce more authentic-looking fraudulent content. The ease of creating convincing images and videos raises the risk of intensified cyber threats associated with fraud and scams.
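The user-consent safeguard mentioned in the first trend above can be illustrated with a minimal sketch. Everything here (the `Action` class, the `sensitive` flag, the callback) is a hypothetical illustration of the idea, not part of any real agent framework:

```python
# Toy sketch of consent-gated tool execution for an LLM agent.
# All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str                 # e.g. "read_calendar", "send_email"
    args: dict = field(default_factory=dict)
    sensitive: bool = False   # actions with side effects require approval

def execute(action: Action, ask_user) -> str:
    """Run an agent-proposed action, pausing for user consent when it is sensitive."""
    if action.sensitive and not ask_user(action):
        return f"{action.name}: blocked (user declined)"
    return f"{action.name}: executed"

# Example policy: auto-approve only read-only (non-sensitive) actions.
approve_reads_only = lambda a: not a.sensitive

print(execute(Action("read_calendar"), approve_reads_only))                 # executed
print(execute(Action("send_email", sensitive=True), approve_reads_only))    # blocked
```

The point of the sketch is the boundary it draws: the probabilistic model proposes actions, but a deterministic gate decides whether side-effecting ones may run without explicit human approval.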
Despite these developments, Kaspersky’s experts maintain a cautious stance regarding AI’s immediate impact on the threat landscape. Although cybercriminals are adopting generative AI, cyber defenders are also utilizing similar or even more sophisticated tools to test and bolster the security of software and networks. This balance makes a drastic shift in the attack landscape unlikely in the near term.
As this rapidly evolving technology progresses, it increasingly becomes a subject of policymaking and regulation. The number of AI-related regulatory initiatives is expected to grow. Non-state entities, such as tech companies with their deep expertise in AI development and application, are poised to offer valuable contributions to the discourse on AI regulation at both global and national levels.
Further regulations and service provider policies will be necessary to identify or flag synthetic content, with continued investment in detection technologies. Developers and researchers will also play a role in devising watermarking methods for synthetic media, facilitating easier identification and traceability.
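As a toy illustration of the watermark-and-detect idea, the sketch below tags generated text with an invisible zero-width character sequence. This is purely a hypothetical example of the concept; real synthetic-media watermarking (e.g. in images or in LLM token distributions) is far more robust than this easily-stripped marker:

```python
# Toy text watermark using zero-width Unicode characters.
# The marker sequence is an arbitrary assumption for illustration only.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner / space

def watermark(text: str) -> str:
    """Append an invisible marker flagging the text as synthetic."""
    return text + ZW_MARK

def is_synthetic(text: str) -> bool:
    """Detect the marker; trivial to remove, hence the need for sturdier schemes."""
    return text.endswith(ZW_MARK)

print(is_synthetic(watermark("Generated paragraph.")))  # True
print(is_synthetic("Human-written paragraph."))         # False
```

The fragility of this scheme is exactly why the report anticipates continued investment in detection technologies and more durable watermarking methods.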
“AI in cybersecurity is akin to a double-edged sword. Its adaptive nature strengthens our defenses, providing a proactive barrier against emerging threats. Yet, this very adaptability introduces risks, as adversaries exploit AI to devise more intricate attacks. Finding an optimal balance, ensuring responsible usage without compromising sensitive data, is crucial for protecting our digital frontiers,” remarks Vladislav Tushkanov, a security expert at Kaspersky.
On 11 December, Kaspersky’s experts, alongside Dennis Kenji Kipker from the Cyber Intelligence Institute, engaged in an in-depth exploration of AI’s current multifaceted influence on cyber threats and the privacy landscape.