As threat actors become more sophisticated and leverage new technology to achieve their goals, these are the cyber trends that businesses should be aware of and prepare for in the year ahead.
1. AI-powered cyber attacks
Threat actors will increasingly leverage artificial intelligence (AI) in 2023 and beyond to automate and enhance their attacks. AI-powered threats allow attackers to launch sophisticated attacks that can evade traditional security measures.
Whilst AI can be used in a number of ways, including to avoid detection in a compromised network, we predict that AI will be employed by threat actors to overcome improved cyber awareness among the general public. AI-powered cyber security threats include the following.
Deepfake technology
Deepfake technology uses AI to fabricate synthetic media, such as video, images or audio, to impersonate a real person and carry out fraud. Historically, deepfakes have most frequently been used to spread disinformation, particularly during geopolitical conflicts and political campaigns.
However, deepfakes are increasingly being employed to trick users into making unauthorised payments or volunteering sensitive information. For example, an employee may know that their CEO must give oral approval before certain large payments can be made. If confronted with a payment redirection scam conducted through a business email compromise attack, a cyber-savvy employee would call the CEO, the CEO would confirm that the payment was never requested, and the attack would be thwarted.
But what if the employee called the CEO and the purported "CEO" gave oral approval and directed the payment? Surely it is not possible to impersonate the CEO's voice?
This was in fact possible with AI-based software in 2019, when cyber criminals used audio spoofing to impersonate a CEO's voice and trick a UK-based energy firm into transferring US$243,000 to the threat actor (posing as a supplier).1
This new spin on impersonation tactics allows attackers to call victims in real time, sounding exactly like someone they speak to every day, creating a much more convincing scam. These tactics also bypass the traditionally effective authentication method of oral verification.
Applying machine-learning technology to spoof voices and replicate people's likenesses makes cyber crime easier, and AI technology is only going to evolve and become more sophisticated.
Phishing
Businesses have been grappling with phishing attacks for a number of years. More recently there has been a dramatic shift from bulk spam emails to targeted phishing campaigns, which use information specific to the recipient to convince them that the communication is legitimate. Whilst phishing attacks continue to be successful, businesses have responded. Education around phishing has increased significantly and there are many resources available on how to spot a fake email. Common techniques include checking a link's true destination before clicking on it, reviewing the domain name of the sender, and looking out for spelling and grammatical errors.
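By way of illustration, the short sketch below shows how two of these manual checks, verifying the sender's domain and comparing a link's visible text with its actual destination, might be automated. It is a minimal, assumption-laden example (the domain names are invented and the link check is deliberately simplified), not a substitute for a proper email security gateway.

```python
# Minimal illustrative sketch of two phishing checks described above.
# EXPECTED_DOMAINS and the regex-based link check are simplifying assumptions.

import re

EXPECTED_DOMAINS = {"example.com", "mail.example.com"}  # assumed trusted sender domains

def sender_domain_is_trusted(from_address: str) -> bool:
    """Check whether the sender's domain matches an expected, trusted domain."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    return domain in EXPECTED_DOMAINS

def find_mismatched_links(html_body: str) -> list[tuple[str, str]]:
    """Return (visible_text, href) pairs where the link text looks like a URL
    but differs from the address the link actually points to."""
    suspicious = []
    for href, text in re.findall(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', html_body, re.S):
        text = text.strip()
        if text.startswith("http") and text != href:
            suspicious.append((text, href))
    return suspicious
```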
Always looking for new ways to attack, threat actors have turned to AI to improve their spear phishing campaigns. GPT-3 language models such as ChatGPT have been a hot topic in the media recently for their ability to write university essays; however, these models can do far more. Natural language processing (NLP) and machine learning allow threat actors to compose more convincing phishing emails, free from grammatical and spelling errors. By combining AI with information leaked on the dark web, these programs can also tailor a phishing email to its recipient's interests and traits, in the same way that we receive targeted ads based on our social media and browsing history.
Using AI does require some level of expertise, and it can be very expensive to train a really good model, particularly one focussed on personality analysis that predicts a person's proclivities and mentality from behavioural inputs. However, AI-as-a-service may play a critical role in phishing and spear phishing campaigns by lowering the barriers to entry.
A team from Singapore's Government Technology Agency recently presented an experiment in which it crafted and sent targeted phishing emails generated by an AI-as-a-service platform to 200 of its colleagues.2 Surprisingly, the team found that the AI-generated messages were "weirdly human" and more recipients clicked the links in the AI-generated messages than the human-written ones. The platform also automatically supplied surprising specifics, including mentioning a Singaporean law when instructed to generate a phishing email for people living in Singapore.3
As recently as December 2022 and January 2023, it was still possible, and easy, to use the ChatGPT web user interface to generate malware and phishing emails.4 Since then, ChatGPT's anti-abuse mechanisms have been significantly improved, and it is no longer possible to generate malware and phishing emails through the interface, with the service replying instead that such content is "illegal, unethical, and harmful".5 That said, this has not stopped cyber criminals. Hackers have found a simple way to bypass those restrictions by using the application programming interface (API) for one of OpenAI's GPT-3 models, known as text-davinci-003, instead of ChatGPT (a variant of the GPT-3 models specifically designed for chatbot applications). It appears that the API versions do not enforce restrictions on malicious content and have very few, if any, anti-abuse measures in place.6 There is now even a user in a forum selling a service that combines the API and the Telegram messaging app to allow the creation of malicious content such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on its user interface (the first 20 queries are free, and from then on users are charged $5.50 for every 100 queries).7
Clearly, the potential for abuse of AI and misuse of language models raises serious concerns. It is therefore important that AI governance frameworks are put in place as soon as practicable.
To this end, Singapore released the first edition of its Model AI Governance Framework on 23 January 2019 and the second edition on 21 January 2020. On 25 May 2022, A.I. Verify, the world's first AI governance testing framework and toolkit for companies that wish to demonstrate responsible AI in an objective and verifiable manner, was launched in Singapore.8 There is also a European proposal for a legal framework on AI.9
2. Cloud-native attacks
The shift towards cloud-based programs and software has been attractive to many organisations and was accelerated by the COVID-19 pandemic and the adoption of hybrid working. Gartner predicts that by 2025, over half of IT spending within the application software, infrastructure software, business process services and system infrastructure markets will have shifted from traditional solutions to the cloud.10
Moving to a cloud-based system, however, poses various challenges and creates numerous opportunities for threat actors to exploit. These include the following.
Misconfiguration
Misconfiguration refers to any gaps, errors or glitches that can expose an environment to risk during cloud adoption. The sheer complexity of cloud-native platforms, as well as the rush to move to the cloud, has made misconfigurations more common in recent years. Examples include overly permissive access, default credentials, unrestricted ports and unsecured backups. These misconfigurations can provide unauthorised access to a system and its data, leaving a business vulnerable to various attack methods such as ransomware and major data breaches.
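By way of example, the short sketch below (an illustration under stated assumptions, not an audit tool) shows how an organisation using AWS might programmatically flag one common misconfiguration: storage buckets that do not have all public access blocks enabled.

```python
# Minimal sketch of a misconfiguration check: flag S3 buckets whose public access
# is not fully blocked. Assumes the boto3 library and read-only AWS credentials.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        # No public access block configured at all - treat as a finding.
        fully_blocked = False
    if not fully_blocked:
        print(f"Review bucket '{name}': public access is not fully blocked")
```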
Observability
Observing the performance of cloud-based systems is vital when a network utilises cloud platforms such as AWS, Microsoft Azure or Google Cloud Platform. Observability not only measures performance but can also provide real-time monitoring and alerting for potential security breaches and other threats. However, a recent study found that only 27% of organisations have full-stack observability. Whilst the study reported that observability is high on the priority list of many organisations, we expect that threat actors will exploit this gap before organisations have been able to invest in and implement observability tools across their whole cloud network.11
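As a simplified illustration of the kind of real-time alerting that observability tooling provides, the sketch below counts recent failed login events from an assumed log feed and raises an alert when an arbitrarily chosen threshold is exceeded; real observability platforms apply this idea across metrics, logs and traces at scale.

```python
# Illustrative sketch only: the event format, threshold and window are assumptions.

import datetime

FAILED_LOGIN_THRESHOLD = 20                       # assumed alerting threshold
WINDOW = datetime.timedelta(minutes=5)            # assumed rolling window

def check_failed_logins(events):
    """events: iterable of (timestamp, event_type) tuples from an assumed log pipeline."""
    now = datetime.datetime.utcnow()
    recent_failures = [
        ts for ts, event_type in events
        if event_type == "login_failed" and now - ts <= WINDOW
    ]
    if len(recent_failures) > FAILED_LOGIN_THRESHOLD:
        # In a real deployment this would page an on-call engineer or raise a SIEM alert.
        print(f"ALERT: {len(recent_failures)} failed logins in the last {WINDOW}")
```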
Insecure APIs
APIs are intended to streamline cloud computing processes by allowing programs to "talk to each other". However, when left unsecured, they can open lines of communication that allow threat actors to gain unauthorised access to data. APIs can become insecure for various reasons; however, we expect to see an increase in the exploitation of APIs with inadequate authentication and vulnerabilities due to outdated components.
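The contrast below is a minimal sketch, using the Flask framework and an invented API key check purely for illustration, of the difference between an unauthenticated API endpoint and one that requires a valid key before returning data. Real deployments would typically rely on OAuth, signed tokens or the cloud provider's identity services rather than a hard-coded key.

```python
# Illustrative sketch: endpoint paths, key handling and data are assumptions.

from flask import Flask, request, abort, jsonify

app = Flask(__name__)

VALID_API_KEYS = {"example-key-rotate-me"}  # in practice, keys would come from a secrets store

def check_api_key() -> None:
    """Reject any request that does not present a valid API key."""
    key = request.headers.get("X-API-Key")
    if key not in VALID_API_KEYS:
        abort(401)

# Insecure: anyone who discovers this endpoint can read the data.
@app.route("/v1/customers-open")
def customers_open():
    return jsonify({"customers": ["..."]})

# More secure: every request must present a valid API key.
@app.route("/v1/customers")
def customers():
    check_api_key()
    return jsonify({"customers": ["..."]})
```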
Organisations in 2023 will need to pay close attention to the visibility of their cloud environments, reliance on third parties and gaps in their cloud security.
3. A rethink of multi-factor authentication (MFA)
When breaching corporate networks, threat actors commonly use stolen credentials obtained through phishing attacks, malware and data leaks. To combat this, organisations have relied heavily on MFA as a security measure in recent years. Whilst this has provided a useful layer of cyber security, many organisations wrongly assume that it is foolproof and overlook the many ways in which threat actors can bypass MFA.
In fact, threat actors have responded to the widespread adoption of MFA by exploiting "MFA fatigue" among users. MFA is commonly configured to use "push" notifications that ask a user to verify a login attempt. An MFA fatigue attack occurs when a threat actor runs a script that repeatedly attempts to log in with stolen credentials, sending a stream of MFA requests to the user's device. The threat actor continues this for an extended period, often combining it with malicious messages impersonating IT support in an attempt to convince the user to accept the prompt.
Ultimately, the user becomes so overwhelmed or frustrated by the notifications that they either accept the request to stop the notifications, or accidentally click approve when reviewing or attempting to deny the request. They may also suspect a bug within the MFA application and change their configuration to disable MFA. Successful attacks on large organisations including Cisco, Uber and Microsoft demonstrated the effectiveness of this tactic in 2022.
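One mitigation is to limit how many push prompts can be generated for an account within a short window, so that a scripted flood of login attempts cannot bombard the user with notifications. The sketch below is an illustrative, assumption-based example of such throttling; commercial identity providers offer equivalent controls, such as number matching and push-prompt rate limits.

```python
# Illustrative sketch only: the limits, window and storage are assumptions,
# not a vendor implementation.

import time
from collections import defaultdict, deque

MAX_PROMPTS = 3                 # assumed limit on push prompts per window
WINDOW_SECONDS = 600            # assumed 10-minute window

_prompt_history = defaultdict(deque)  # username -> timestamps of recent prompts

def may_send_push_prompt(username: str) -> bool:
    """Return True if another MFA push prompt may be sent for this user."""
    now = time.time()
    history = _prompt_history[username]
    # Drop prompts that fall outside the rolling window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_PROMPTS:
        # Too many prompts already: require a different factor (e.g. number matching)
        # and flag the account for review instead of sending another push.
        return False
    history.append(now)
    return True
```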
This threat intelligence report prepared by Lander & Rogers with Forensic IT provides valuable insights and recommendations for mitigating a newly identified method of MFA bypass.
Attacks of this nature are expected to increase in frequency in 2023. As a result, organisations will need to educate their employees on the ways in which MFA can potentially be bypassed, and the malicious tactics to look out for.
Looking forward
Recent patterns and new developments in cyber attacks suggest that threat actors and cyber security threats will not remain static. They will continue to evolve, improve, and leverage new technology to achieve their goals. It is therefore imperative that governments, organisations and individuals stay vigilant, adapt and respond quickly to existing and emerging cyber security threats and risks.
This article is part of CyberSight 360 2022/23.
Our team of legal experts in Australian privacy regulation has developed Lander & Rogers PrivacyComply—an innovative, automated privacy impact assessment tool. Designed to help organisations efficiently navigate privacy obligations, Lander & Rogers PrivacyComply offers a smart, fast, and cyber-safe way to manage privacy risk.
1 Stupp, Catherine. Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case. The Wall Street Journal. 30 August 2019.
2 Newman, Lily Hay. AI Wrote Better Phishing Emails Than Humans in a Recent Test. Wired. 7 August 2021.
3 Ibid.
4 Check Point Research demonstrated in December 2022 how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, which can accept commands in English.
5 Goodin, Dan. Hackers are selling a service that bypasses ChatGPT restrictions on malware. Ars Technica. 9 February 2023.
6 Barreiro Jr, Victor. Cybercriminals bypass ChatGPT restrictions to make malware worse, phishing emails better. Rappler. 11 February 2023.
7 Goodin, Dan (n5).
8 Personal Data Protection Commission Singapore: Singapore's Approach to AI Governance.
9 European Commission: A European approach to artificial intelligence.
10 Gartner: Press release, 9 February 2022.
11 VB Staff. Report: Only 27% of orgs have observability over their full stack. VentureBeat. 14 September 2022.