- Employee use of Chinese generative AI tools in the US and UK is widespread and often unsanctioned.
- Researchers found that over 1,000 employees accessed China-hosted AI platforms in a single month, with hundreds of incidents of sensitive company information being uploaded.
- Source code, legal documents, and personal data were among the types of sensitive information shared.
- Permissive data policies of Chinese GenAI services raise concerns about confidentiality and compliance.
- Some companies are adopting monitoring and policy enforcement tools to manage risks related to AI use.
New research from Harmonic Security shows that employees in the United States and United Kingdom are using Chinese generative artificial intelligence (GenAI) platforms at significant levels. This activity often occurs without formal approval or oversight from company security teams.
Over a 30-day period, Harmonic Security analyzed the behavior of 14,000 employees across several organizations. Nearly 8% accessed China-based GenAI platforms such as DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (by Alibaba), and Manus. The study identified 535 separate incidents in which users uploaded confidential or sensitive data, including business records, engineering documentation, and personal information.
Of the total 17 megabytes of company content uploaded, about one-third was related to software source code or technical documents, with the remainder involving confidential financial reports, merger documents, legal contracts, and customer details. Harmonic Security noted that DeepSeek was involved in 85% of all reported incidents.
Many of these AI platforms, as highlighted in the study, operate under unclear or permissive data policies. In some cases, terms of service permit user-uploaded data to be used for further AI training. Employee use of these tools can put company confidentiality and regulatory compliance at risk, especially for firms handling sensitive customer or proprietary information.
To address these concerns, Harmonic Security has launched policy enforcement technology that provides real-time monitoring of employee AI use and detection of unsanctioned data uploads. Companies can restrict access to certain apps by location, limit the types of information uploaded, and prompt users with warnings or information about company policy.
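To illustrate the kind of rule such enforcement tools apply, below is a minimal, hypothetical Python sketch: it flags outbound uploads to a list of unapproved GenAI domains and escalates when the content matches simple sensitive-data patterns. The domain list, patterns, and function names are assumptions for illustration only, not Harmonic Security's actual product or configuration.

```python
import re

# Hypothetical illustration only: a minimal policy check of the kind
# described above. Domains, patterns, and thresholds are assumptions.
UNAPPROVED_GENAI_DOMAINS = {"deepseek.com", "kimi.moonshot.cn", "chat.baidu.com"}

SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # credentials
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # SSN-like identifiers
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),            # marked documents
]

def evaluate_upload(destination_domain: str, content: str) -> str:
    """Return 'block', 'warn', or 'allow' for an outbound upload."""
    if destination_domain in UNAPPROVED_GENAI_DOMAINS:
        if any(p.search(content) for p in SENSITIVE_PATTERNS):
            return "block"   # sensitive data headed to an unsanctioned service
        return "warn"        # remind the user of company AI-use policy
    return "allow"

if __name__ == "__main__":
    print(evaluate_upload("deepseek.com", "CONFIDENTIAL merger term sheet"))   # block
    print(evaluate_upload("deepseek.com", "What is the capital of France?"))   # warn
```

In practice, commercial tools layer location-based access rules, content classification, and user prompts on top of checks like this, but the decision flow is broadly similar.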
The research indicates that awareness alone is not preventing risky use of external GenAI tools. Harmonic Security reports that about one in twelve employees works with Chinese GenAI platforms, often without understanding where the data is stored or what risk exposure it creates.
Further information about Harmonic Security’s efforts to protect sensitive data and enforce company AI use policies is available at harmonic.security.