- Many organizations blocked access to public AI tools, but this approach has proven ineffective at preventing data leaks.
- Zscaler ThreatLabz reported a 36-fold increase in enterprise AI and machine learning traffic in 2024, tracking more than 800 unique AI apps in use.
- Employees often find unofficial ways to use AI tools, creating “Shadow AI” that escapes security monitoring.
- Companies need real-time visibility into AI usage and risk, not just blocking capabilities, to build smarter policies using zero-trust principles.
- Approaches like data loss prevention, browser isolation, and steering users to secure, approved AI tools can enable productivity while protecting sensitive information.
After public generative AI applications saw widespread adoption in late 2022, organizations worldwide began blocking them over concerns about sensitive data exposure. However, blocking access has not stopped employees from using these tools, according to research from Zscaler ThreatLabz.
ThreatLabz reports that it analyzed 36 times more AI and machine learning traffic in 2024 than in the previous year, identifying more than 800 distinct AI applications in use within enterprise environments. Employees often work around restrictions with personal email accounts, mobile devices, or screenshots, producing “Shadow AI”: generative AI use that escapes security monitoring.
The report emphasized that blocking AI apps creates a blind spot rather than actual security. “Blocking unapproved AI apps may make usage appear to drop to zero… but in reality, your organization isn’t protected; it’s just blind to what’s actually happening,” the company noted. Data loss can also be more severe with AI than with traditional file sharing: once sensitive information is incorporated into a public AI model, there is no way to remove it.
Zscaler recommends establishing visibility first, then implementing policies aligned with zero-trust — a security model that requires verification for every user and device. Their tools can identify which apps are being accessed and by whom in real time. This allows organizations to develop context-based policies, such as using browser isolation or redirecting users to approved AI solutions managed on-premises.
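What a context-based policy of this kind looks like is easier to see in code. The sketch below is purely illustrative and assumes nothing about Zscaler's actual product APIs: every name, category, and rule in it is hypothetical. It shows the general shape of a zero-trust decision, where the app's approval status, a data-sensitivity verdict, and the user's group jointly select the least-risky action rather than a blanket block.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"        # sanctioned app, normal access
    ISOLATE = "isolate"    # render in a remote browser; uploads and paste disabled
    REDIRECT = "redirect"  # steer the user to an approved, internally hosted AI tool
    BLOCK = "block"        # deny outright and log the attempt

@dataclass
class AIRequest:
    app_approved: bool             # is the destination app on the sanctioned list?
    contains_sensitive_data: bool  # verdict from DLP inspection of the outbound content
    user_group: str                # e.g. "engineering", "finance" (hypothetical labels)

# Hypothetical: groups that already have an approved internal AI alternative
STEERED_GROUPS = {"engineering", "finance"}

def evaluate(req: AIRequest) -> Action:
    """Pick the least-risky action that still lets the user work."""
    if req.app_approved:
        return Action.ALLOW
    if req.contains_sensitive_data:
        return Action.BLOCK      # unsanctioned app plus sensitive data: hard stop
    if req.user_group in STEERED_GROUPS:
        return Action.REDIRECT   # steer these teams to the sanctioned tool
    return Action.ISOLATE        # keep visibility instead of going blind

# Example: sensitive data headed to an unapproved app is stopped outright.
print(evaluate(AIRequest(app_approved=False, contains_sensitive_data=True,
                         user_group="sales")))  # -> Action.BLOCK
```

The point of the sketch is the ordering: visibility and data classification feed the decision, so policy can be tightened or loosened per app and per group instead of flipping a single block/allow switch for everyone.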
The company’s data loss prevention technology detected and blocked more than 4 million attempts by users to send sensitive information, including financial records, personal data, source code, and medical information, to AI applications.
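Zscaler does not publish its detection internals, but the basic mechanics of content-aware DLP are well understood: outbound text is inspected against classifiers for regulated data types before it leaves the network. The toy sketch below shows only the pattern-matching layer; the data formats are invented for illustration, and production engines add exact-data matching, document fingerprinting, checksum validation, and ML classifiers on top.

```python
import re

# Toy classifiers: crude regex approximations of the data types named in
# the report. The MRN format and source-code heuristic are hypothetical.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "source_code": re.compile(r"(?:^|\s)(?:def |class |#include\b|import )"),
    "medical_record": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b"),
}

def classify(text: str) -> set[str]:
    """Return the sensitive-data categories matched in outbound text."""
    return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

prompt = "Summarize patient MRN: 00482913, SSN 123-45-6789, for my report."
hits = classify(prompt)
if hits:
    # An inline proxy would block or redact the request at this point,
    # before it ever reaches the AI application.
    print(f"Blocked: detected {sorted(hits)}")
```

Running the snippet flags the prompt for both the medical-record and SSN patterns, which is the kind of incident the 4 million figure above is counting.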
Experts at Zscaler suggest that a balanced approach—empowering employees with safe AI access while maintaining strong data protection—will allow organizations to adopt AI responsibly. More details about their security solutions can be found at zscaler.com/security.