The role of AI in enhancing cyber intrusion operations is expected to continue evolving, leading to a rise in the frequency and severity of cyber threats.
Cyber threat actors are likely already leveraging AI to improve their existing tactics, techniques, and procedures (TTPs) when it comes to victim reconnaissance, vulnerability research, exploit development, social engineering for system access, basic malware creation, and data exfiltration. By 2027, it is anticipated that the volume and impact of cyber intrusions will increase primarily through the refinement of existing TTPs rather than the emergence of entirely new threat vectors.
In the foreseeable future, only highly capable state actors with adequate investment, high-quality training data, and the necessary expertise will be able to fully utilize AI within advanced cyber operations. Most other cyber threat groups are expected to primarily rely on the use and modification of commercially available and open-source AI models to boost their capabilities. The availability of robust open-source models is likely to lower the barriers for creating similar models and narrow AI-enabled tools that enhance capabilities across both cyber defense and attack.
AI-driven cyber capabilities will probably make large-scale cyber security increasingly vital leading up to 2027 and further.
The most significant advancements in AI related to cyber security will likely stem from AI-assisted vulnerability research and exploit development (VRED), which facilitates access to systems by uncovering and exploiting flaws in their underlying code or configurations.
By 2027, AI-powered tools are expected to significantly augment threat actors’ abilities to exploit known vulnerabilities, resulting in an uptick in attacks on systems that have yet to implement vital security updates. System owners are already engaged in a race to identify and remedy disclosed vulnerabilities before they can be exploited. The timeframe between vulnerability disclosure and exploitation has now reduced to just days, a trend that AI is likely to accelerate. This increase will likely pose a greater threat to Critical National Infrastructure (CNI) and its supply chains, particularly operational technologies that have lower security standards.
Nonetheless, AI will also support system owners and software developers in securing their environments. With a reasonable chance of universal access to AI for cyber defense by 2027, there will almost certainly be a digital divide between systems that can adapt to AI-enabled threats and those that remain vulnerable.
Remaining informed about cutting-edge AI developments in cybersecurity will be essential for ongoing cyber resilience for the decade ahead.
For adept cyber actors skilled in fine-tuning AI models or creating independent AI systems for vulnerability exploitation, AI will likely enhance techniques for discovering and exploiting zero-day vulnerabilities by 2027. Zero-day vulnerabilities are unpatched and, often, unknown weaknesses in systems that threat actors can take advantage of, with the knowledge that their targets will likely remain exposed. If there is a delay or stagnation in cyber security mitigations, critical systems may become increasingly susceptible to advanced threat actors by 2027.
By 2027, skilled cyber operatives will probably make use of AI-enabled automation to facilitate evasion and scalability.
The development of fully automated, end-to-end advanced cyber attacks is unlikely to be realized by 2027. Expertise will still be required to manage the intricacies of the process. However, adept cyber actors will almost certainly continue experimenting with automating parts of the attack chain, including vulnerability identification and exploitation, rapid malware adjustments, and evasion strategies. This human-machine collaboration will likely complicate the detection, tracking, and neutralization of threat activity unless effective AI-driven defense mechanisms are developed.
The proliferation of AI-enabled cyber tools will likely broaden access to AI-driven intrusion capabilities among a wider array of both state and non-state actors.
The commercial cyber intrusion industry is expected to increasingly integrate AI into its offerings. Moreover, the criminal application of AI is likely to rise by 2027 as AI becomes more embedded in society. Skilled cybercriminals will probably focus on circumventing restrictions on existing AI models and commercial AI penetration testing tools to deliver AI-enabled cyber tools as a service. This could empower novice cybercriminals, hireable hackers, and hacktivists to engage in opportunistic information gathering and disruptive operations.
The continued integration of AI technologies across the UK’s technological landscape, particularly in Critical National Infrastructure, is likely to expand the attack surface available for adversaries.
AI systems comprise data, methodologies for training and evaluating AI, and the requisite technology for implementation. As AI technology becomes increasingly connected to organizational systems, data, and operational technology, it will likely present additional targets for threat actors. Tactics such as direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are already facilitating the exploitation of AI systems to gain access to broader systems.
Inadequate cyber security measures are likely to enhance opportunities for capable state-linked actors and cybercriminals to misuse AI systems for nefarious objectives.
In a rush to deliver leading AI models or more advanced applications, developers may prioritize the speed of release over the importance of security, heightening the cyber threat posed by compromised or insecure systems.
This threat will also be exacerbated by inadequate data handling practices and configurations, which include:
- Transmission of data with weak encryption, rendering it susceptible to interception and manipulation
- Poor identity management and storage practices, increasing the risk of credential theft, especially for privileged accounts or those reused across multiple systems
- Extensive user data collection, raising the risk of de-anonymization and enabling targeted attacks
Implementing fundamental cyber security practices during the integration and configuration of AI and connected systems will be crucial for mitigating threats. Organizations deploying AI technologies will need to ensure that their cyber security measures are current and effective across both their AI systems and their interconnected dependencies.
Based on an article from ncsc.gov.uk: https://www.ncsc.gov.uk/report/impact-ai-cyber-threat-now-2027