Introduction

Artificial Intelligence (AI) has revolutionized the way we live and work, but with great power comes great responsibility. Cybersecurity expert Mikko Hyppönen, Chief Research Officer at WithSecure, warns us about the potential threats AI poses in 2024. Hyppönen, renowned for his expertise in combating malware and cyber threats, believes that the AI revolution will be even bigger than the internet revolution. In this article, we will explore Hyppönen’s five most fearsome AI threats for 2024 and what steps can be taken to mitigate them.

Deepfakes: A Growing Concern

One of the most alarming uses of AI for crime is the creation of deepfakes. Deepfakes are synthetic media that manipulate images or videos to create convincing but fake content. While deepfakes have not yet reached their full potential, recent incidents suggest an imminent threat. According to Onfido, a London-based ID verification unicorn, deepfake fraud attempts have increased by a staggering 3,000% in 2023.

The potential impact of deepfakes goes beyond fraud. In the realm of information warfare, sophisticated deepfakes can be used to manipulate public opinion or spread disinformation. For instance, during Russia’s invasion of Ukraine, deepfakes were used to fabricate videos of Ukrainian President Volodymyr Zelenskyy. Even simple cons are now leveraging deepfakes, as seen in a TikTok video where MrBeast appeared to offer iPhones at an unbelievably low price.

To combat the rising threat of deepfakes, Hyppönen suggests implementing an old-fashioned defense: safe words. By establishing safe words during video calls, individuals can verify the authenticity of requests for sensitive information. While this may seem trivial now, it’s a cost-effective measure that can prevent large-scale deepfake scams in the future.
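As a loose illustration of the idea, not a prescription from Hyppönen, here is a minimal Python sketch of the verifier's side of a safe-word check, assuming a phrase agreed in person beforehand; the helper names and example phrase are hypothetical:

```python
import hashlib
import hmac
import os

# Hypothetical enrollment helper: store only a salted hash of the safe
# phrase agreed offline, so a stolen device does not reveal the phrase.
def hash_safe_phrase(phrase: str, salt: bytes) -> bytes:
    normalized = phrase.strip().lower().encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 200_000)

def verify_caller(spoken_phrase: str, salt: bytes, stored_hash: bytes) -> bool:
    candidate = hash_safe_phrase(spoken_phrase, salt)
    # Constant-time comparison, so timing leaks nothing about the phrase.
    return hmac.compare_digest(candidate, stored_hash)

# Enroll once per contact...
salt = os.urandom(16)
stored = hash_safe_phrase("paper crane at midnight", salt)

# ...then challenge during any call that asks for money or credentials.
print(verify_caller("Paper Crane at Midnight", salt, stored))  # True
print(verify_caller("send the wire now", salt, stored))        # False
```

Storing only a salted hash means a compromised device does not leak the phrase itself, and normalizing the input tolerates small variations in how the phrase is repeated back.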

Deep Scams: Automated Fraud at Scale

Deep scams, unlike deepfakes, don’t rely on manipulated media. Instead, they exploit automation to perpetrate large-scale fraud. By leveraging AI-powered tools, scammers can automate various types of scams, such as investment scams, phishing scams, romance scams, and ticket scams. The potential victim pool becomes significantly larger when scammers can target thousands of individuals simultaneously.

For example, the infamous Tinder Swindler, who stole millions from women he met online, could have amplified his fraudulent activities with AI. Language models and image generators could have been used to disseminate lies, create apparent evidence, and translate messages to target victims from different countries.

Another domain where deep scams pose a threat is the vacation rental market. Airbnb scammers currently rely on stolen images from legitimate listings, which can be detected through reverse image searches. However, with AI-powered tools like Stable Diffusion, DALL-E, and Midjourney, scammers can generate an unlimited number of plausible yet completely fake Airbnb listings, making it difficult to identify and prevent fraudulent bookings.
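To make the current line of defense concrete, here is a minimal sketch of image-reuse detection via perceptual hashing, a programmatic stand-in for a reverse image search, assuming the third-party Pillow and imagehash packages are installed; the file paths and distance threshold are hypothetical:

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

def is_likely_reused(candidate_photo: str, known_photos: list[str],
                     max_distance: int = 8) -> bool:
    """Flag a listing photo that is perceptually close to a known image."""
    candidate_hash = imagehash.phash(Image.open(candidate_photo))
    for path in known_photos:
        # Subtracting two perceptual hashes yields their Hamming distance;
        # small values survive re-compression, resizing, and minor edits.
        if candidate_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False

print(is_likely_reused("suspect_listing.jpg", ["legit_a.jpg", "legit_b.jpg"]))
```

The catch, and the reason generative tools erode this defense, is that a freshly generated image has no source photo to match against, so distance-based checks like this one simply never fire.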

LLM-Enabled Malware: AI-Written Threats

AI is not only being used to combat cyber threats but also to create them. Hyppönen’s team has discovered three worms that leverage Large Language Models (LLMs) to rewrite their own code. These worms use OpenAI’s GPT (Generative Pre-trained Transformer) to generate different code for each target, making them highly elusive and difficult to detect.

Although this LLM-enabled malware has not yet been found in real networks, it has been published on platforms like GitHub, demonstrating its feasibility. Counterintuitively, the closed-source nature of these AI systems is what makes defense workable: because the malware must call out to OpenAI's service to rewrite itself, the provider can detect and blacklist that behavior at the source. If the entire LLM could be downloaded and run locally, blacklisting would no longer work. This is why Hyppönen argues that powerful generative AI systems should remain closed-source, denying malicious actors unfettered access to them.

Similarly, image generator algorithms present another challenge. Open access to these algorithms would undermine restrictions on violence, pornography, and deception, leading to the proliferation of harmful and malicious content. To maintain control and prevent abuse, AI developers must strike a balance between openness and security.

Discovery of Zero-Days: A Double-Edged Sword

Zero-day exploits target vulnerabilities that attackers discover before developers even know they exist, leaving "zero days" to prepare a fix. AI cuts both ways here: the same techniques that help defenders detect and patch these flaws can be turned around to find and weaponize them, letting attackers exploit software before anyone has a chance to respond.

Hyppönen predicts that this reality is not far off. In a thesis assignment at WithSecure, a student demonstrated the threat by fully automating the process of scanning for vulnerabilities and exploiting them. The results were alarming enough that WithSecure decided not to publish the research, underscoring the gravity of the situation.

As AI evolves, we must stay vigilant and ensure that AI-powered tools are used for positive purposes. The responsible disclosure of vulnerabilities and continuous monitoring are essential to stay one step ahead of attackers.

Automated Malware: AI’s Dark Side

WithSecure has been at the forefront of using automation to defend against cyber threats. However, this advantage could soon be nullified when criminals adopt fully automated malware campaigns. This would pit good AI against bad AI, resulting in a dangerous escalation of cyber threats.

Fully automated malware campaigns could have devastating consequences. Attackers could launch large-scale attacks with minimal human involvement, causing widespread damage and disruption. Hyppönen ranks fully automated malware as the number one security threat for 2024 and stresses the urgent need to develop robust defenses against this emerging menace.

The Perilous Path to AGI

While the previous threats are concerning, Hyppönen believes the most significant challenge lies in the future development of Artificial General Intelligence (AGI): highly autonomous systems that outperform humans at most economically valuable work. If his own maxim, known as the Hyppönen Law (whatever is described as "smart" is vulnerable), holds true for AGI, we could face unprecedented risks.

Hyppönen predicts that during his lifetime, humans will become the second most intelligent beings on the planet, with AGI taking the lead. To ensure human control over AGI, it is crucial to align its values and goals with our own. Hyppönen emphasizes the need for AGI to understand humanity and share its long-term interests, as the consequences of misaligned AGI could be catastrophic.
