
Two-thirds of business leaders in the UK are worried about potential data security and compliance risks stemming from employees’ unregulated use of artificial intelligence (AI) tools, according to new research from Studio Graphene.
The digital product studio commissioned Censuswide to survey 500 managers, directors and C-suite executives within UK businesses. It found that almost half (48%) know or suspect that employees in their organisation are using AI tools that have not been officially approved – this rises to 54% for larger companies (over 250 employees).
Shadow AI refers to the use of unauthorised AI tools and services. Almost half (48%) of the leaders surveyed admitted that managers in their organisation have limited visibility of how staff use AI in their day-to-day work, yet just under two-thirds (64%) are concerned that unregulated AI use could lead to data security or compliance risks.
Despite these concerns, Studio Graphene’s research also revealed just how many UK businesses have not formally created and communicated AI policies or guidelines. More than a third (34%) of organisations said they do not have formal policies or guidelines governing AI usage, while even more (37%) have failed to communicate to staff their expectations for how AI should be used.
Elsewhere, the study showed that while three-fifths (59%) of UK business leaders are worried that an over-reliance on AI could lead to employees making mistakes, 61% admitted that frontline staff are more comfortable using AI in their day-to-day work than the organisation's senior leadership team is.
Ritam Gandhi, director and founder of Studio Graphene, said:
“Shadow AI isn’t the result of malice or even carelessness. It’s often the result of a disconnect between senior leadership and their teams – if the organisation is sanctioning or investing in AI tools that are not working well or delivering value, employees will turn to unsanctioned alternatives that will enable them to do their jobs better.
“It all comes down to precise strategy and effective integration. Businesses need a clear picture of where AI can make a meaningful impact and then, crucially, they have to embed it effectively into workflows so the AI can inform decisions or improve processes. Without that, AI projects are doomed to fail, meaning employees will continue to source their own AI tools – and that undoubtedly creates risks where data privacy, security and regulatory compliance are concerned.”
