By 2027, more than 40 per cent of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders, according to Gartner, Inc.
The swift adoption of GenAI technologies by end users has outpaced the development of data governance and security measures, raising concerns about data localisation given the centralised computing power required to support these technologies.

“Unintended cross-border data transfers often occur because of insufficient oversight, particularly when GenAI is integrated into existing products without clear descriptions or announcements,” said Joerg Fritsch, VP analyst at Gartner. “Organisations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
Global AI standardisation gaps drive operational inefficiency
The lack of consistent global best practices and standards for AI and data governance exacerbates challenges by causing market fragmentation and forcing enterprises to develop region-specific strategies. This can limit their ability to scale operations globally and benefit from AI products and services.
“The complexity of managing data flows and maintaining quality due to localised AI policies can lead to operational inefficiencies,” said Fritsch. “Organisations must invest in advanced AI governance and security to protect sensitive data and ensure compliance. This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes.”
Organisations must act before AI governance becomes a global mandate
Gartner predicts that by 2027, AI governance will become a requirement of all sovereign AI laws and regulations worldwide.
“Organisations that cannot integrate required governance models and controls may find themselves at a competitive disadvantage, especially those lacking the resources to quickly extend existing data governance frameworks,” said Fritsch.
To mitigate the risks of AI data breaches, particularly from cross-border GenAI misuse, and to ensure compliance, Gartner recommends several strategic actions for enterprises:
- Enhance data governance: Organisations must ensure compliance with international regulations and monitor unintended cross-border data transfers by extending data governance frameworks to include guidelines for AI-processed data. This involves incorporating data lineage and data transfer impact assessments within regular privacy impact assessments.
- Establish governance committees: Form committees to enhance AI oversight and ensure transparent communication about AI deployments and data handling. These committees should be responsible for technical oversight, risk and compliance management, and communication and decision reporting.
- Strengthen data security: Use advanced technologies, encryption, and anonymisation to protect sensitive data. For instance, verify that Trusted Execution Environments are used in specific geographic regions, and apply advanced anonymisation techniques, such as differential privacy, when data must leave those regions.
- Invest in TRiSM products: Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50 per cent less inaccurate or illegitimate information, reducing faulty decision-making.
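To make the anonymisation recommendation concrete, the sketch below shows the classic Laplace mechanism for releasing a count query with differential privacy, a minimal illustration rather than a production implementation; the function names (`laplace_noise`, `private_count`) and the counting-query example are assumptions introduced here for illustration, not anything Gartner prescribes.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float, rng=None) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: how many of 100 records satisfy a condition, released privately.
records = list(range(100))
noisy = private_count(records, lambda r: r < 30, epsilon=1.0)
```

Smaller `epsilon` values add more noise and give stronger privacy; the right setting depends on the organisation's risk appetite and the regulator's expectations.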
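Prompt filtering and redaction, one of the TRiSM capabilities listed above, can be as simple as scrubbing known sensitive patterns before a prompt leaves the organisation's boundary. The sketch below is a deliberately minimal, assumption-laden illustration: the regex patterns and the `redact_prompt` helper are hypothetical, and a production filter would rely on a dedicated DLP tool or trained classifiers rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration only; real deployments need far
# broader coverage (names, addresses, internal identifiers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders so the
    redacted prompt can be sent to an externally hosted GenAI API."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com about SSN 123-45-6789"))
```

Typed placeholders (rather than blanket deletion) preserve enough context for the model to produce a useful answer while keeping the sensitive values inside the organisation.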