Cybersecurity experts say “unregulated AI use” is creating dangerous blind spots, as research reveals that 84% of popular AI web tools have suffered at least one data breach.
Earlier in August it was reported that some ChatGPT “shared conversations” were showing up in Google search results. The links, originally created through ChatGPT’s sharing feature, were publicly visible and indexed by search engines, making them accessible to anyone with the right search terms.
OpenAI reacted by retiring the sharing feature altogether and working with search engines to remove the indexed content.
While these actions closed the immediate vulnerability, security experts warn that focusing solely on this incident risks overlooking wider, systemic security weaknesses across the AI sector.
A recent cybersecurity analysis by the Business Digital Index (BDI) team examined 10 leading large language model providers. Although half earned an “A” cybersecurity rating, the other half performed significantly worse. OpenAI received a D, while Inflection AI scored an F.
The study found:
- Half of the leading AI providers had experienced documented breaches.
- All providers had SSL/TLS configuration weaknesses.
- Most had hosting infrastructure vulnerabilities — only AI21 Labs and Anthropic avoided major issues.
- Credential reuse was widespread, with 35% of Perplexity AI employees and 33% at EleutherAI using previously breached passwords.
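
Credential reuse of this kind is one of the easier findings to screen for internally. As a minimal sketch (not the BDI team’s methodology, which isn’t published in this detail), the Python snippet below queries Have I Been Pwned’s Pwned Passwords range API to see whether a password appears in known breach corpora; only the first five characters of the SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Uses the k-anonymity range API: only the first five hex characters of the
    SHA-1 hash are sent, never the password itself.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():          # each line: "<hash suffix>:<count>"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ["Password123", "correct horse battery staple"]:
        hits = pwned_count(pw)
        print(f"{pw!r}: {'REUSE RISK' if hits else 'not in known breaches'} ({hits} hits)")
```

Run at onboarding or password reset, a check like this flags exactly the kind of previously breached credentials the study found in use at Perplexity AI and EleutherAI.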
A separate Business Digital Index investigation into 52 popular AI web tools also revealed “concerning trends”.
Key findings:
- 84% of the analysed AI web tools had experienced at least one data breach.
- 51% had corporate credentials stolen.
- 93% had SSL/TLS misconfigurations.
- 91% had hosting vulnerabilities linked to weak cloud security or outdated servers.
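
The SSL/TLS and hosting figures are externally observable, too. As a hedged illustration (the researchers’ actual scanner isn’t described), the sketch below uses Python’s standard ssl module to report the protocol version, cipher suite, and certificate expiry a server negotiates; a connection that only completes on an outdated protocol, or a certificate near expiry, is the sort of weakness counted above.

```python
import socket
import ssl
from datetime import datetime, timezone

def tls_report(host: str, port: int = 443) -> dict:
    """Connect to host:port and report the negotiated TLS parameters."""
    context = ssl.create_default_context()  # modern defaults, verifies the certificate chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            not_after = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            return {
                "host": host,
                "protocol": tls.version(),      # e.g. 'TLSv1.3'
                "cipher": tls.cipher()[0],      # negotiated cipher suite name
                "cert_expires": not_after.date().isoformat(),
                "days_left": (not_after - datetime.now(timezone.utc)).days,
            }

if __name__ == "__main__":
    # Replace with the AI tool domains your organisation actually uses.
    for host in ["example.com"]:
        print(tls_report(host))
```

A dedicated scanner gives far deeper coverage, but the point stands: none of these checks require privileged access to the vendor.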
Žilvinas Girėnas, head of product at nexos.ai, says the real danger lies in how fast AI tools are being deployed without governance:
“This isn’t just about one tool slipping through. Adoption is outpacing governance, and that’s creating a freeway for breaches to escalate. Without enterprise-wide visibility, your security team can’t lock down access, trace prompt histories, or enforce guardrails. It’s like handing the keys to the kingdom to every team, freelancer, and experiment. A tool might seem harmless until you discover it’s leaking customer PII or confidential strategy flows. We’re not just talking theory — studies show 96% of organisations see AI agents as security threats, while barely half can say they have full visibility into agent behaviours.”
Around 75% of employees use AI for work tasks, yet only 14% of organisations have formal AI policies.
Nearly half of sensitive prompts are entered via personal accounts, bypassing company oversight entirely, and a significant portion of users actively conceal their AI use from management.
Within the broader sample of 52 AI web tools, productivity-focused platforms were the least secure. These include note-taking, scheduling, and content generation tools widely integrated into daily workflows. Every single productivity AI tool showed hosting and encryption flaws, the researchers said.
Cybernews’ cybersecurity researcher Aras Nazarovas cautions:
“A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. The ChatGPT leak is a reminder of how quickly these weaknesses can become public.”
Cybersecurity experts at Cybernews recommend taking these steps to reduce risk:
- Establish and enforce AI usage policies across all departments.
- Audit all AI vendors and tools for enterprise-grade security compliance.
- Prohibit personal accounts for work-related AI interactions.
- Monitor and revoke all shared or public AI content (a minimal log-scanning sketch follows this list).
- Educate employees on the risks of unsecured AI tools and credential reuse.
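
On the monitoring point, the first hurdle is usually just knowing which share links exist. The hypothetical sketch below scans an outbound proxy log (the path and format are assumptions, not a standard) for ChatGPT share URLs so a security team can review and revoke them; in practice this would hook into whatever proxy, CASB, or DNS logging is already in place.

```python
import re
from collections import Counter
from pathlib import Path

# Assumption: a web-proxy log with one request per line that includes the full URL.
LOG_PATH = Path("/var/log/proxy/outbound.log")

# Matches current chatgpt.com/share/<id> links and the older chat.openai.com form.
SHARE_LINK = re.compile(r"https://(?:chatgpt\.com|chat\.openai\.com)/share/[0-9a-f-]+")

def find_share_links(log_path: Path) -> Counter:
    """Count every distinct ChatGPT share link that appears in the proxy log."""
    links = Counter()
    with log_path.open(encoding="utf-8", errors="replace") as log:
        for line in log:
            links.update(SHARE_LINK.findall(line))
    return links

if __name__ == "__main__":
    for url, hits in find_share_links(LOG_PATH).most_common():
        print(f"{hits:4d}  {url}")  # hand the list to whoever owns review and revocation
```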