Artificial intelligence poses immense challenges for cybersecurity – most of which we are only beginning to understand.
At a minimum, AI has the potential to cause enormous upheavals in the cybersecurity strategies of corporations and governments. Fundamental concepts like encryption, malware detection, and multi-factor authentication will all be put to the test. The sheer speed and computational power of AI also threatens to outmatch human defenders, potentially requiring entirely new modes of defense. But AI will also pose even more complex challenges for society at large, by undermining the veracity of data, our faith in reliable sources and trusted institutions, and by unleashing the most advanced psychological manipulation ever seen in human history.
Due to AI’s constantly evolving nature, it is hard to fathom the vast potential that “bad AI” could offer cybercriminals, foreign adversaries and other malicious actors. But by using current models as our guide, we can predict several critical areas where AI will tip the scales – and unleash dangerous new attacks that could undermine businesses, governments, the economy, and society more broadly.
Here are the top four threats the security industry is most concerned about:
1) Hacked or infected AI systems
When it comes to AI, one of the biggest threats of all is the possibility that these systems may be hacked or corrupted by malicious actors.
This is an incredibly important issue, because companies, government agencies, critical services like healthcare, and even entire industries, will soon come to rely on AI to make critical decisions that will have widespread implications for essential services, patient care, business deals, regulation, surveillance, you name it.
The most significant of these threats is data poisoning.
AI systems have to be trained with enormous data sets, in order to develop the right algorithms and capabilities before they are deployed into the real world. For example, image recognition software (such as facial recognition) must be trained to distinguish between different objects and people by first studying millions of labeled images. If a malicious actor can seed this data set with “poisoned” images (i.e., fake, deliberately misleading, or otherwise malicious images), they can jeopardize the AI system’s effectiveness. Even a small number of fake images can undermine an entire algorithm.
Another tactic is “prompt injection” which can be used to manipulate or corrupt LLMs (large language models) that utilize prompt-based learning. A low-tech example of this is the debacle that occurred with Microsoft’s Twitter chatbot, Tay.ai, back in 2016. Shortly after it launched, Tay quickly unraveled as it spewed racist, misogynistic and homophobic comments after being manipulated by malicious user inputs.
With more significant LLMs, a similar process could be used to skew the system toward biased behaviors, wrong (or even dangerous) interpretations of data and even malicious behavior – like soliciting sensitive information from users.
2) Skynet-style botnets
Hackers will also be able to use AI to facilitate enormous botnets, the likes of which have never been seen before.
The largest botnets ever recorded are Zeus and Mariposa, which reached 13 and 12 million infected devices, respectively. However, AI-based exploitation tools could easily surpass that number by multiple times. It’s not unrealistic that we could see AI botnets surpass the 100 million mark, especially when considering that there are already over 14 billion IoT devices in the world today, and the market continues to grow rapidly.
At that level, the force of these attacks is difficult to imagine. Botnets are frequently used to carry out DDoS (distributed denial-of-service) attacks, to disrupt services or to overwhelm security defenses. The most powerful attack witnessed so far was the 2020 attack on Amazon Web Services, which registered at 2.3 terabits per second (Tbps). An AI botnet could far exceed that, at a size/scale previously unimaginable, which will have dire consequences for the integrity of online networks.
But in addition to power, the other problem is intelligence. AI botnets will essentially be able to think for themselves, allowing them to move fast and target their victims more intelligently. As such, they will be extremely hard to disrupt. Since AI will enable constant, machine-speed attacks on other devices, these botnets may exist for many years, if not indefinitely – posing an ongoing menace to companies, governments, critical services, and daily life.
3) AI malware
As bad as malware is today, it pales in comparison to what will happen once it becomes fused with AI.
Hackers will use AI to make malware more adept at finding and exploiting zero-days and other vulnerabilities, propagating across entire networks in record time, and achieving autonomous functioning that will enable lightning-fast cyber attacks to outmaneuver companies and governments. Of particular concern here is the potential for autonomous ransomware, which could have a devastating effect – causing widespread physical disruptions (such as energy or food production) and crippling key economic sectors.
AI malware will also be extremely hard to detect or contain, due to its likely use of polymorphism and clever tactics like “living off the land” (LOTL). This malware will also be harder for government and law enforcement agencies to disrupt, since an AI system could constantly change – and replicate – its command-and-control (C2) infrastructure to avoid discovery, or to rapidly reconstitute itself when a law enforcement agency tries to shut it down.
Future versions of AI malware could also be used to launch targeted strikes on specific people or companies – sort of like a digital equivalent of the smart bomb. Cybercriminals are already actively developing AI tools for writing or disseminating malware, such as WormGPT and FraudGPT. As these tools advance, it is only a matter of time before more capable tools and platforms emerge that offer point-and-click functionality – i.e., simply choose your target, and the tool will do the rest. There are several precedents for this already, from the professional “phishing kits” sold in the darknet to the old-school DDoS tool known as LoIC.
4) Social manipulation on a grand scale
AI has enormous potential for learning how to influence and manipulate people (or entire groups and populations) and spread highly convincing misinformation. This is one of our most daunting challenges from a security standpoint.
We are already seeing early forms of this manipulation occur in “innocent” generative AI models like ChatGPT, which – without any malicious design or intent – can convince users that wrong answers are actually correct. Its capacity for manipulation goes much further, as OpenAI recently tested a scenario in which GPT-4 was able to beat a CAPTCHA test by tricking the site’s online agent into thinking it was blind.
Future AI systems will be able to go much further in manipulating individuals or groups – and they’ll do so through a complex combination of “amygdala hijacking,” identity cloning, emotional mimicry, deepfakes and more. At a minimum, this sophisticated combination of advanced capabilities will lead to significantly larger and more successful criminal scams already in use today – such as financial fraud, identity theft, extortion, catfishing, pump-and-dump stock and cryptocurrency schemes, and more.
However, it could also lead to major societal threats – including episodes of mass panic, online radicalization, law and order breakdowns, election interference, you name it. Take elections, for example. While foreign interference was widespread in the 2016 and 2020 election cycles, U.S. adversaries could utilize AI systems much more effectively in the future to coordinate robust, multi-dimensional attacks.
Imagine a scenario in which AI is tasked with depressing voter turnout in a swing state. The AI platform could unleash an enormous PSYOPS campaign, similar to earlier efforts but at a vastly larger scale and far more sophisticated. However, it could also deduce that a more effective way for depressing turnout is to simply shut down the polling sites – by issuing a coordinated series of swatting-style bomb or active shooter threats (imagine dozens or even hundreds of these happening at the same time), shutting off power or Internet services to the venue, etc.
There is no question that AI will pose unprecedented cyber threats to our society over the coming years. As industries become more reliant on these technologies, and as AI systems themselves become more interconnected and interdependent, there is enormous potential for misuse.
In order to effectively manage these risks, we have to build strong safeguards into these technologies, establish effective standards and regulations for where and how they can be used, and develop better defensive capabilities to support companies and people. AI’s capacity for manipulation is one of the greatest threats we face – the only way to solve that is to work together, to restore and maintain trust in our institutions, and have contingency planning in place for the worst possibilities.
Yahoo.com |By
|Karim Hijazi is the managing director of SCP&CO, a private investment and fund management firm focused on emerging technology platforms. Karim is a 25-year veteran of the cybersecurity industry where he has specialized in cyber threat intelligence. He is a former director of intelligence for Mandiant and a former contractor for the U.S. intelligence community.