- Most employees are using Artificial Intelligence (AI) tools at work, often bypassing company controls.
- Research shows more than 90% of employees use AI tools, while nearly half of sensitive AI interactions come from personal accounts.
- Blocking access to AI tools is proving ineffective as staff turn to new applications and personal devices.
- Governing the use of unsanctioned or “Shadow AI” tools is now required for compliance with new regulations, such as the EU AI Act.
- Continuous monitoring and smarter policies are needed to track all AI use—both official and unofficial—in order to protect company and customer data.
The use of artificial intelligence (AI) in the workplace has grown quickly, with employee-driven adoption outpacing corporate oversight. Many companies now find that staff are using a range of AI tools daily, sometimes outside approved channels and without following set security guidelines.
A recent report shows that while 40% of organizations have purchased enterprise AI subscriptions, over 90% of workers use AI tools in their daily routines. Research by Harmonic Security found that 45.4% of sensitive AI interactions originate from personal email accounts, which bypass company security controls. This trend has fueled concerns over a growing “Shadow AI Economy,” in which the use of unsanctioned AI tools creates security risks for businesses.
The common strategy of blocking well-known AI platforms often fails, according to Harmonic Security. Employees resort to other applications or their personal devices, making it difficult for IT teams to monitor these activities. Productivity apps, such as Canva and Grammarly, frequently embed AI features, making it nearly impossible to fully restrict access.
Regulatory frameworks now require companies to maintain a complete inventory of all AI systems in use, including those not formally approved. The EU AI Act is one example, mandating that organizations maintain visibility into every AI system, as noted in MIT’s “State of AI in Business” report.
Harmonic Security provides solutions for continuous monitoring of Shadow AI, offering risk assessments for each tool and applying policies based on the sensitivity of information and the user’s role. For instance, marketing teams may be allowed to use certain tools for projects, while HR or legal staff face more restrictions when handling private employee data.
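The idea of sensitivity- and role-aware AI policies described above can be sketched in a few lines of code. This is a hypothetical illustration only: the role names, tool names, and sensitivity levels below are invented for the example and do not reflect Harmonic Security’s actual product or API.

```python
# Hypothetical sketch of role- and sensitivity-based AI usage policies.
# All names here are illustrative assumptions, not a real vendor API.

SENSITIVITY_LEVELS = {"public": 0, "internal": 1, "confidential": 2}

# Per (role, tool) ceiling: the highest data sensitivity that role may
# send to that AI tool. Combinations not listed are denied by default.
POLICY = {
    ("marketing", "design_ai"): "internal",
    ("hr", "design_ai"): "public",
    ("legal", "chat_ai"): "public",
}

def is_allowed(role: str, tool: str, data_sensitivity: str) -> bool:
    """Allow a request only if the data's sensitivity does not exceed
    the ceiling configured for this (role, tool) pair."""
    ceiling = POLICY.get((role, tool))
    if ceiling is None:
        return False  # unknown role/tool combination: deny by default
    return SENSITIVITY_LEVELS[data_sensitivity] <= SENSITIVITY_LEVELS[ceiling]

print(is_allowed("marketing", "design_ai", "internal"))   # marketing may use internal data
print(is_allowed("hr", "design_ai", "confidential"))      # HR blocked with private employee data
```

The deny-by-default behavior for unlisted combinations mirrors the article’s point that unmanaged or invisible AI use is itself the risk to be governed.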
Experts say that as more SaaS (software-as-a-service) applications embed AI, the challenge of invisible, unmanaged adoption will likely increase. Having systems in place to identify, monitor, and manage all AI use has become critical for data protection and compliance with global regulations.
For more details on Shadow AI use and governance, readers can consult Harmonic Security’s research.