How will the advancement of artificial intelligence (AI) impact cyber counterintelligence (CCI)?
Part I
AI is evolving rapidly, and its advancement will have far-reaching consequences in our daily lives. It is already reshaping security and defense: robotic systems; intelligence, surveillance, and reconnaissance (ISR); predictive and decision-support tools; AI-augmented offensive cyber weapons; and enhanced intelligence-analysis and counterintelligence-detection tools are just a few of the capabilities growing with the help of AI.
How will this AI push affect private-sector security postures in the coming months and years? In the short run, probably very little: AI technologies rely on big data to make quantitative decisions, and that data is difficult to replicate in closed or classified environments. But the more open-source intelligence and big data these technologies consume, the better they will become at predictive analytics, and THAT is certainly going to be a game changer for the private sector.
Cyber counterintelligence will need to adapt quickly to any level of AI development that an 'insider' could use as a tool for predicting internal processes or, worse, as a weapon for sabotage or impersonation, to name only a few possibilities.
My prediction is that AI-powered cyber security automation techniques and tools will close (before 2025) the current asymmetry between the rate of cyber attacks and the rate of successful cyber defenses, but that we will still fail to reduce the rate of data breaches.
Why? Because the end of the corporate perimeter-centric security mentality (pre-COVID) has created new opportunities for insider threats in the ever more common remote work environments, which are currently impossible to monitor or protect.
On the flip side, advances in predictive analytics will also benefit cyber counterintelligence (CCI). Can we predict the likelihood of a specific type of attack? We certainly can. Can we predict which characteristics or behaviors make an employee more likely to become an insider risk? We can. But how can we spot the use of AI in a cyberattack, a 'whaling' scheme, or a deception campaign aimed at a specific target? The answer is straightforward: we can't.
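To make the insider-risk prediction idea concrete, here is a minimal sketch of how behavioral indicators might be combined into a risk score. The indicator names and weights are purely illustrative assumptions, not a vetted model; any real program would derive them from an organization's own baseline data.

```python
# Hypothetical sketch: a naive weighted-indicator score for insider risk.
# All indicator names and weights below are illustrative assumptions.

RISK_WEIGHTS = {
    "off_hours_access": 0.25,
    "bulk_downloads": 0.35,
    "policy_violations": 0.20,
    "privilege_escalation_attempts": 0.20,
}

def insider_risk_score(indicators: dict) -> float:
    """Return a 0.0-1.0 risk score from boolean behavioral indicators."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if indicators.get(name))
    return round(min(score, 1.0), 2)

if __name__ == "__main__":
    employee = {"off_hours_access": True, "bulk_downloads": True}
    print(insider_risk_score(employee))  # 0.6
```

A production system would replace the static weights with a trained model, but even this toy version shows why such scoring is feasible for insider risk yet useless against the AI-driven deception campaigns described above: it only sees behaviors it was told to look for.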
This dichotomy will force cyber counterintelligence analysts to adopt a more offensive posture in the near future. CCI operations to misdirect and deceive external threat actors will become more common as a means of wasting adversaries' resources and time. In this regard, AI will aid by providing probability analysis on potential targets, just as AI-driven assessments of 'flagged' employees will tell us how likely it is that any of them will become a security concern.
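One classic building block of such deception operations is the honeytoken: a decoy credential planted where only an intruder would find it, so any use of it is a high-confidence alarm. The sketch below is a hypothetical illustration of the idea using Python's standard library; the token format and key handling are assumptions, not any specific product's scheme.

```python
# Hypothetical sketch of a CCI deception primitive: a honeytoken credential
# that no legitimate process should ever use, so any use signals an intruder.
import hashlib
import hmac
import secrets

SECRET_KEY = b"rotate-me"  # assumption: kept in the defender's secrets vault

def make_honeytoken(label: str) -> str:
    """Mint a decoy API key whose suffix is an HMAC tag we can verify later."""
    nonce = secrets.token_hex(4)
    tag = hmac.new(SECRET_KEY, f"{label}:{nonce}".encode(), hashlib.sha256).hexdigest()[:8]
    return f"AKIA-{label}-{nonce}-{tag}"

def is_our_honeytoken(token: str) -> bool:
    """True if a credential seen in logs is one of our planted decoys."""
    try:
        _, label, nonce, tag = token.split("-")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{label}:{nonce}".encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(tag, expected)
```

Planting tokens like this in file shares or configuration repositories costs the defender almost nothing, while forcing an adversary to burn time validating credentials that only reveal their presence, exactly the resource-wasting effect described above.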
AI will greatly impact how every organization on the planet manages and protects its information. But beware: the more data (open or restricted) available for AI predictive-analytics intake, the more opportunities there are to hack and corrupt that data, resulting in disinformation and inconsistent results, to name just two possible outcomes.
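A tiny numerical example illustrates the corruption risk. The figures below are invented for illustration: an attacker who can inject fabricated "normal" observations into a model's training intake can drag a naive anomaly threshold upward until a real attack no longer looks anomalous.

```python
# Illustrative sketch (assumed numbers): how poisoned training data can shift
# a naive anomaly threshold so that a real attack no longer looks anomalous.
from statistics import mean, stdev

def threshold(samples):
    """Flag anything above mean + 3 standard deviations as anomalous."""
    return mean(samples) + 3 * stdev(samples)

clean = [100, 102, 98, 101, 99, 100, 97, 103]   # baseline daily login counts
poisoned = clean + [400, 420, 410]              # attacker-injected 'normal' data

attack_volume = 300
print(attack_volume > threshold(clean))     # True  -> attack detected
print(attack_volume > threshold(poisoned))  # False -> attack slips through
```

Real anomaly detectors are more robust than a three-sigma rule, but the underlying lesson holds: the integrity of the intake data bounds the integrity of every prediction built on it.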
The future of CCI in an AI-dominated world will be determined by the mission, the perceived threat landscape, and the ability of industry and government to share threat actor knowledge.
But make no mistake: there will be a distinction between countries willing to sacrifice levels of privacy and civil liberties for greater security and those that are not. Fighting insiders, state-sponsored cyber espionage, and the use of AI-augmented cyber weapons while adhering to ethical frameworks will be difficult.
This topic will be expanded upon in subsequent articles.