US, UK Employees Risk Data Leaks Using Chinese GenAI Tools, Study Finds

  • Employee use of Chinese generative AI tools in the US and UK is widespread and often unsanctioned.
  • Researchers found that over 1,000 employees accessed China-hosted AI platforms in a single month, producing 535 incidents of sensitive company data uploads.
  • Source code, legal documents, and personal data were among the types of sensitive information shared.
  • Permissive data policies of Chinese GenAI services raise concerns about confidentiality and compliance.
  • Some companies are adopting monitoring and policy enforcement tools to manage risks related to AI use.

A new study by Harmonic Security shows that employees in the United States and United Kingdom are using Chinese generative artificial intelligence (GenAI) platforms at significant levels. This activity often occurs without formal approval or oversight from company security teams.

Over a 30-day period, Harmonic Security analyzed the behavior of 14,000 employees across several organizations. Nearly 8% accessed China-based GenAI platforms, such as DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (by Alibaba), and Manus. The study identified 535 separate incidents where users uploaded confidential or sensitive data, including business records, engineering documentation, and personal information.

Of the total 17 megabytes of company content uploaded, about one-third was related to software source code or technical documents, with the remainder involving confidential financial reports, merger documents, legal contracts, and customer details. Harmonic Security noted that DeepSeek was involved in 85% of all reported incidents.
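The reported figures fit together arithmetically. The short Python check below uses only the numbers cited above (14,000 employees, a usage rate of nearly 8%, 535 incidents, 17 megabytes uploaded, and an 85% DeepSeek share) and is included purely as a sketch of how the totals relate, not as data from the study itself.

```python
# Sanity check of the figures reported by Harmonic Security (values copied from the article).
employees_analyzed = 14_000
usage_rate = 0.08            # "nearly 8%" accessed China-based GenAI platforms
incidents = 535              # separate incidents of sensitive data uploads
uploaded_mb = 17             # total company content uploaded, in megabytes
deepseek_share = 0.85        # share of incidents involving DeepSeek

users_on_chinese_genai = employees_analyzed * usage_rate
print(f"Employees using Chinese GenAI: ~{users_on_chinese_genai:.0f}")  # ~1,120, i.e. "over 1,000 users"

code_and_technical_mb = uploaded_mb / 3
print(f"Source code / technical docs: ~{code_and_technical_mb:.1f} MB of {uploaded_mb} MB")

deepseek_incidents = incidents * deepseek_share
print(f"Incidents involving DeepSeek: ~{deepseek_incidents:.0f} of {incidents}")
```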

Many of these AI platforms, as the study highlights, operate under unclear or permissive data policies; in some cases, their terms of service permit user-uploaded data to be used for further AI training. Employee use of these tools therefore poses risks to company confidentiality and regulatory compliance, especially for firms handling sensitive customer or proprietary information.

To address these concerns, Harmonic Security has launched policy enforcement technology that provides real-time monitoring of employee AI use and detection of unsanctioned data uploads. Companies can restrict access to certain apps by location, limit the types of information uploaded, and prompt users with warnings or information about company policy.
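Harmonic Security has not published its rule format, so the snippet below is only a hypothetical illustration of the kinds of controls described above: blocking apps by hosting location, restricting categories of uploaded data, and prompting users with a policy warning. The field names, categories, and policy structure are assumptions for illustration, not the vendor's actual configuration or API.

```python
# Hypothetical illustration only: a minimal policy check of the kind described above.
# Field names, data categories, and the policy structure are assumptions, not Harmonic Security's product.

POLICY = {
    "blocked_hosting_regions": {"CN"},                       # block GenAI apps hosted in these regions
    "restricted_data_types": {"source_code", "financials",   # uploads in these categories trigger a warning
                              "legal", "customer_pii"},
    "warn_message": "This upload may violate company AI usage policy.",
}

def evaluate_upload(app_region: str, data_type: str, policy: dict = POLICY) -> str:
    """Return 'block', 'warn', or 'allow' for a single upload event."""
    if app_region in policy["blocked_hosting_regions"]:
        return "block"
    if data_type in policy["restricted_data_types"]:
        return "warn"
    return "allow"

# Example: pasting source code into a China-hosted chatbot would be blocked outright,
# while an ordinary upload to a permitted service would be allowed.
print(evaluate_upload("CN", "source_code"))     # -> block
print(evaluate_upload("US", "marketing_copy"))  # -> allow
```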

The research indicates that awareness alone is not preventing risky use of external GenAI tools. Harmonic Security reports that about one in twelve employees works with Chinese GenAI platforms, often without awareness of where the data is stored or of the potential risk exposure.

Further information about Harmonic Security’s efforts to protect sensitive data and enforce company AI use policies is available at harmonic.security.
