It’s not just the good guys using AI – hackers are constantly seeking and finding ways to leverage AI to launch highly effective cyber-attacks. In this blog our Cyber Security Assurance Technical Director Andy Swift reviews some of the current top uses of AI as a weapon and provides guidance on how you should respond.
Discussion around artificial intelligence (AI) was ubiquitous throughout our personal and professional lives in 2023. 2024 is unlikely to be any different, as people and organisations develop and improve AI use policies and identify countless use cases that should make our lives easier.
But it’s not just the good guys who are using AI – hackers see vast potential in it, too, and have already been actively leveraging it to launch cyber-attacks. In late 2023 the UK government warned that AI could increase the risk of cyber-attacks, and at Six Degrees our Cyber Security Assurance teams are already encountering instances of its use in cyber-attacks in the wild.
In this blog we’ll take you through some of the top weaponised uses of AI we observed throughout 2023 that will likely continue and expand in 2024, and provide guidance on how you should respond.
What AI is Good At – and Not So Good At
Before we explore how hackers are using AI, it’s worth setting the scene by explaining some of the things AI is good at today – as well as some key things it isn’t.
- AI is not great at originality. You may have asked ChatGPT to write some poetry or created some AI art with Midjourney, but the truth is that AI today is only the sum of its inputs and the data available to it. Ask it to think creatively or outside the box and it’s no match for humans. Human thinking is highly complex and is a product of the rich soup that is our consciousness, emotions and experiences; while AI can generate interesting outputs by combining existing knowledge in somewhat unique ways, it doesn’t quite qualify as ‘original’ thinking – although that is its own topic of debate!
- It’s also not great at long-form content. One day a bestselling novel may be written in part or entirely by AI. Today is not that day. AI currently struggles to add much value when drafting any kind of long-form content without a degree of repetition. In part this is down to its way of thinking and its limited ability to create rich, original content.
- But it’s very good at debugging and analysing large datasets. AI is excellent, however, at tasks such as debugging. Give it a bunch of code and ask it to identify errors and optimise it, and it’ll likely outperform your average human coder by orders of magnitude (I make an exception for the higher end, as I’ve noticed the code it comes up with is not always the most elegant). AI shines when it’s given structured, repetitive or predictable tasks – reviewing logs for patterns, analysing and writing code, summarising existing data sets, and so on.
- It’s also great at learning and education. Ever feel like the older you get, the harder it gets to take on and maintain new information? Not a problem for AI. It’s great at taking on, maintaining, and applying new information – even in vast quantities.
- And it’s great at pattern spotting and formatting, too. AI is great at scanning code, spotting patterns, and tidying up (formatting). It’s not so much that AI can spot patterns that a human couldn’t – it’s the speed at which AI can do it relative to humans that’s a real game changer.
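To make the pattern-spotting point concrete, here is a toy (non-AI) sketch of the kind of structured, repetitive log-review task that AI now performs at speed: counting failed login attempts per source IP. The log lines, IPs and usernames are illustrative only, not from a real system.

```python
import re
from collections import Counter

# Toy log excerpt (illustrative data, not from a real system)
LOG_LINES = [
    "2024-01-10 09:14:02 sshd[311]: Failed password for root from 203.0.113.7",
    "2024-01-10 09:14:05 sshd[311]: Failed password for root from 203.0.113.7",
    "2024-01-10 09:14:09 sshd[311]: Failed password for admin from 203.0.113.7",
    "2024-01-10 09:15:00 sshd[312]: Accepted password for alice from 198.51.100.4",
    "2024-01-10 09:15:31 sshd[313]: Failed password for root from 203.0.113.7",
]

def failed_login_sources(lines):
    """Count failed-login attempts per source IP address."""
    pattern = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")
    hits = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits

print(failed_login_sources(LOG_LINES))  # Counter({'203.0.113.7': 4})
```

A human can write and run a script like this; the difference is that an AI assistant can produce, adapt and apply this kind of triage across huge, messy datasets in seconds.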
Why AI is Good for Hackers
Now we’ve explained where AI’s strengths lie, we can explore why AI is good for hackers. Here are some of AI’s strengths that hackers are exploiting today:
- Deepfakes. As anyone who saw the Tom Hanks deepfake in 2023 can attest, AI is good at making hyper-realistic, very convincing videos. However, the quality of the fake for both audio and video is totally dependent on the amount of available data – hence why most deepfakes to date have focused on celebrities or those with huge online presences.
- Writing better malware. We’ve seen examples in the wild of AI being used to create polymorphic malware that constantly mutates its source code to avoid detection; this will undoubtedly cause issues for simple signature-based antivirus and has paved the way for such malware to become more commonplace. We are, however, yet to see many examples of ‘true’ AI malware that can execute a kill chain from end to end with awareness of its environment, making its own judgements and mutating to evade controls it has not seen before.
- CAPTCHA-cracking. Everyone’s favourite thing about using the internet, CAPTCHA forms have been used for years now to determine whether a user is human. Unfortunately, AI is now statistically better than humans at determining whether that blurry shadow in the corner of an image is in fact a bike wheel.
- Human impersonation. As with deepfakes, AI’s ability to impersonate humans requires a large amount of time, data and tuning. It’s getting there, though, as a recent Guardian Today in Focus podcast covered.
- Machine learning-enabled penetration testing tools. Even if a fully automated penetration testing panacea is some way off, machine learning-enabled penetration testing tools are useful for human testers seeking to weed out false positives and pinpoint data.
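To illustrate why the polymorphic malware mentioned above troubles simple signature-based antivirus, here is a benign sketch: two functionally identical snippets whose ‘signatures’ (naively modelled here as SHA-256 hashes) differ completely after a trivial mutation. The snippets are harmless illustrative code, not malware.

```python
import hashlib

# Two functionally identical snippets: the second just renames a variable
# and adds a no-op line - the kind of trivial mutation a polymorphic
# engine applies automatically on each generation.
variant_a = "total = 0\nfor n in data:\n    total += n\n"
variant_b = "acc = 0\nfor n in data:\n    acc += n\npass\n"

def signature(code: str) -> str:
    """A naive 'signature': the SHA-256 hash of the code's bytes."""
    return hashlib.sha256(code.encode()).hexdigest()

# Same behaviour, completely different signatures - so a detector that
# matches exact hashes or fixed byte patterns misses every new variant.
print(signature(variant_a) == signature(variant_b))  # False
```

This is why defences increasingly rely on behavioural and heuristic detection rather than exact signature matching alone.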
How Hackers are Using AI
So, how are hackers actually using AI today? Our Cyber Security Assurance teams have noted three key activity areas:
- Writing exploits and malware. Creating new malware is hard! As we’ve discussed, AI tools can significantly speed up the process of coding in general, and this applies to writing exploit code as well. Given a few carefully crafted prompts to evade common built-in language model safety guards, it is now helping to lower the technical barrier to entry, attracting more people to the world of cybercrime. Right now, AI isn’t being used to create anything new – it’s being trained on data that already exists. Using AI, hackers can more easily weaponise a vulnerability announcement or a proof of concept in a matter of days.
- Finding vulnerabilities. When hackers hunt for vulnerabilities, they carry out a range of activities: anything from diffing published versions of software to see what’s been patched or changed, or iterating over large volumes of open-source code, through to reviewing the output of fuzzing tools or taking a program crash from a known overflow to a working exploit. These activities can take humans a long time, but AI can once again come to a hacker’s aid in speeding up the creation of a working exploit.
- Generative AI and phishing. AI is great at creating super realistic phishing emails, impersonating people through convincing wording, concepts, and images. It can stitch together ‘profiles’ of people to use to build spear phishing emails that share the same opinions, use the same sentence structures, and generally mimic legitimate emails that can entrap even the most diligent recipients. The generative aspect of AI is by far the most popular we have seen in 2023/2024, and its usage is frankly mind boggling.
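As a harmless illustration of the patch-diffing workflow described above, here is a sketch using Python’s standard difflib to surface the lines a patch added – exactly where an attacker, with or without AI assistance, would start looking for what was fixed. The function and its ‘vulnerability’ are entirely hypothetical.

```python
import difflib

# Toy 'before patch' and 'after patch' versions of a function
# (illustrative code, not a real vulnerability).
unpatched = """\
def read_record(buf, length):
    data = buf[:length]
    return data
""".splitlines()

patched = """\
def read_record(buf, length):
    if length > len(buf):
        raise ValueError("length exceeds buffer")
    data = buf[:length]
    return data
""".splitlines()

# Surface only the added lines - the patch's bounds check reveals
# what the unpatched version failed to validate.
added = [line[2:] for line in difflib.ndiff(unpatched, patched)
         if line.startswith("+ ")]
for line in added:
    print(line)
```

A human analyst does this comparison file by file; an AI assistant can triage thousands of changed files and flag the security-relevant ones in minutes, which is what compresses the patch-to-exploit window.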
How You Should Respond to AI Cyber Security Risks
As with all things cyber security, there’s no one single solution that will protect your organisation from the risks that AI presents. However, here are four ways you should think about responding to AI cyber security risks:
- Embrace AI. The Pandora’s box that is AI has been opened. It is fast becoming integrated into just about everything – and it’s not something we can view as ‘optional’ in the way we could with NFTs and the metaverse. Engaging with it, whether we know it or not, is almost inevitable. So, embrace it – both in terms of leveraging the potential it holds and protecting yourself from the risks it presents. Now is the time to be thinking about the ‘how’; you can speak to one of our experts about how your organisation can take steps towards building AI into its go-forward strategy.
- Understand how AI is being used. Know your enemy. Right now, AI is largely being used to accelerate existing attack methods, and understanding this should influence your defensive approach. There’s not much point preparing your office for an infiltration by a T-1000 when the likelihood right now is that it’s your patching schedule that will catch you out, given hackers’ enhanced ability to exploit vulnerabilities rapidly.
- Invest in phishing training. AI is making phishing and spear phishing attacks more and more accurate – scarily so. This is the main focus of its malicious use right here and now, which means phishing training also needs to evolve and go further to ensure your users can identify this next generation of attacks.
- Continually revisit and update. As we explored in a recent blog post, AI is evolving and we’re really at the start of harnessing its full potential. This means that hackers are, too. Make sure you continually revisit and update your cyber security approach to AI to ensure you’re in the best possible position to stay secure and maintain your operational resilience.
Partner with Six Degrees to Secure Your Organisation
There’s never been more pressure on organisations to defend themselves against the damage that can result from downtime and data breach. In a highly specialised field like cyber security, working with a specialist partner can unlock complexities, bolster in-house capabilities, and enhance your organisation’s cyber security posture – enabling you to be proactive in assuring your ongoing stability and success. Speak to us today to discuss how we can secure and enable your organisation.