Secured Research

AI regulation and national security

The Centre for Emerging Technology and Security (CETaS), part of the Alan Turing Institute, published a brief in August 2024 on the national security implications of the EU AI Act. 

The Act, the EU’s first comprehensive regulatory framework for artificial intelligence, entered into force on 1 August 2024. 

The AI Act adopts a risk-based approach to the regulation of AI. It introduces reporting, risk mitigation, oversight, and transparency requirements for high-risk systems. AI systems that pose unacceptable risks - social credit scoring systems, for example - are banned outright. General-purpose or foundational AI models, such as those underpinning ChatGPT, are subject to additional reporting and testing requirements.
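
To make the tiered structure concrete, the short Python sketch below models the Act's broad categories as a lookup from use case to obligations. The tier names follow the categories summarised above, but the example use cases and the mapping between them are illustrative assumptions of ours, not a legal classification.

from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative assumption, not legal advice)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "reporting, risk mitigation, oversight and transparency requirements"
    GPAI = "additional reporting and testing requirements"
    MINIMAL = "no significant obligations"

# Hypothetical mapping from use case to tier; real classification depends
# on the Act's annexes and on legal interpretation of each use.
EXAMPLE_TIERS = {
    "social_credit_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "general_purpose_chat_model": RiskTier.GPAI,
    "spam_filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the illustrative obligations attached to a use case."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(obligations(case))

The point of the sketch is that obligations attach to the risk tier of a use rather than to the underlying technology: the same model can sit in different tiers depending on how it is deployed.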

The regulation has drawn criticism from observers in the tech industry who see its controls as stifling AI innovation, threatening to undermine the technology's potential economic and social benefits. The US, with its more laissez-faire approach to AI, is often held up as a contrast to the EU.

Implications for national security

The CETaS paper moves beyond the broad debate over the pros and cons of regulation to consider the implications of the Act for the use of AI systems by organisations with defence and national security missions.

These uses of AI are explicitly excluded from the AI Act, reflecting the EU's recognition that they are matters of national sovereignty. The concern raised by the paper, however, is about 'dual-use' AI systems - those not used exclusively for national security - which do fall within the scope of the regulation.

This is an issue because, in practice, governments and militaries rely heavily on the private sector for the development of AI systems, raising the question of how far any system can genuinely be exclusive to national security applications.

The AI Act also provides only partial exemptions for law enforcement and public safety activities, which, as the CETaS paper notes, raises further challenges given the close working arrangements in many countries between national security and law enforcement agencies.

Impact on geopolitical competition over technological innovation

Beyond the specifics, the paper reflects broader underlying concerns about geopolitical competition over technological innovation. As the paper notes, if the EU regulation stifles AI innovation, this would in itself have an impact on national security and could even prompt “a shift in the geopolitical dynamics of AI innovation.” 

Differences in the approach adopted by the US and the EU may also create risks. As a recent RUSI brief noted, Western states are responding to the Russia-Ukraine War and the rise of China by attempting to form “cohesive economic frameworks that can [...] maintain their competitive edge.” AI regulation is one of the issues that tests the cohesion of that framework and the relationship between those allies and partners. 

Assessment

We assess with a high degree of confidence that companies working in the national security, defence, and law enforcement space will increasingly be required to demonstrate that they have effective and assured processes in place for managing their use of AI systems. 

This will include awareness both of the AI systems that the organisation is using and of how those systems are being used within it. General-purpose or foundational AI models present a particular risk, given the potential for emergent applications of those models to be classified as high or unacceptable risk under the AI Act.
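
As an illustration of what that awareness might look like in practice, the Python sketch below shows a hypothetical internal AI register that flags dual-use systems whose observed uses could bring them within scope of the Act. The record fields, the list of sensitive uses, and the flagging rule are all our own assumptions for illustration; a real assessment would need legal review of each system and use.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system register."""
    name: str
    supplier: str
    uses: List[str] = field(default_factory=list)  # observed uses in the organisation
    exclusively_national_security: bool = False    # if True, assumed outside the Act's scope

# Illustrative use categories we assume would attract scrutiny under the Act.
SENSITIVE_USES = {
    "biometric_identification",
    "predictive_policing",
    "critical_infrastructure",
}

def flag_exposure(register: List[AISystemRecord]) -> List[str]:
    """Flag dual-use systems whose observed uses may fall within scope."""
    flags = []
    for rec in register:
        in_scope = not rec.exclusively_national_security
        sensitive = SENSITIVE_USES.intersection(rec.uses)
        if in_scope and sensitive:
            flags.append(f"{rec.name} ({rec.supplier}): review {sorted(sensitive)}")
    return flags

register = [
    AISystemRecord("analytics-platform", "VendorX",
                   uses=["predictive_policing", "report_drafting"]),
    AISystemRecord("targeting-suite", "VendorY",
                   uses=["critical_infrastructure"],
                   exclusively_national_security=True),
]
for warning in flag_exposure(register):
    print(warning)

Run as-is, the sketch flags only the first system: the second records an exclusively national security use and so illustrates the exemption discussed above - and the exclusivity question itself is exactly where the CETaS paper sees the difficulty.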

Secured’s experience in organisational security transformation enables us to support organisations as they seek to understand their exposure across the physical, cyber, and insider domains. This understanding is the first step towards the establishment of a more secure environment, including in the use of AI. Get in contact at hello@secured-research.com.