A striking 87% of IT professionals in Europe, the Middle East and Africa (EMEA) would welcome stronger government regulation of artificial intelligence (AI) — according to a new survey from SolarWinds.
The survey of nearly 700 IT professionals, 297 of whom were from the EMEA region, reveals that security tops the list of AI concerns, with over two-thirds (67%) emphasising the need for government measures to address it. Privacy is another major worry, with 60% of the region’s IT professionals calling for stronger rules to safeguard sensitive information. An equal share (60%) of respondents believe government intervention is crucial to curb the spread of misinformation through AI, while nearly half (47%) support regulations focused on ensuring transparency and ethical practices in AI development.
These findings come at a time when governments in the region have begun announcing landmark initiatives to establish frameworks for the secure and ethical implementation of AI. Notable among these are the EU’s AI Act, Dubai’s unveiling of its Universal Blueprint for Artificial Intelligence, which prescribes the appointment of a Chief Artificial Intelligence Officer in every government entity in the Emirate, and Saudi Arabia signalling its plans to create a US$40 billion fund to invest in artificial intelligence.
The survey further reveals a troubling lack of trust in data quality — which is essential for successful AI implementation. Only a third (33%) of EMEA respondents consider themselves ‘very trusting’ of the data quality and training used in AI systems. Additionally, 41% of the region’s IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.
As a result, data quality is identified as the third most significant barrier to AI adoption, following security and cost challenges.
Concerns about database readiness are also widespread. Just a third (33%) of EMEA IT professionals are very confident in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is compounded by the fact that 43% of respondents believe their companies are not moving quickly enough to implement AI, partly due to ongoing data quality challenges.
Commenting on these findings, Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, said: “It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation. Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts.”
“This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI. High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes. Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies,” Johnson added.