US, UK Employees Risk Data Leaks Using Chinese GenAI Tools, Study Finds

  • Employee use of Chinese generative AI tools in the US and UK is widespread and often unsanctioned.
  • Researchers found that over 1,000 users accessed China-hosted AI platforms in a single month, with 535 incidents of sensitive company data being uploaded.
  • Source code, legal documents, and personal data were among the types of sensitive information shared.
  • Permissive data policies of Chinese GenAI services raise concerns about confidentiality and compliance.
  • Some companies are adopting monitoring and policy enforcement tools to manage risks related to AI use.

A new study by Harmonic Security shows that employees in the United States and the United Kingdom are using Chinese generative artificial intelligence (GenAI) platforms at significant levels. This activity often occurs without formal approval or oversight from company security teams.

Over a 30-day period, Harmonic Security analyzed the behavior of 14,000 employees across several organizations. Nearly 8% accessed China-based GenAI platforms, such as DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (by Alibaba), and Manus. The study identified 535 separate incidents where users uploaded confidential or sensitive data, including business records, engineering documentation, and personal information.

Of the total 17 megabytes of company content uploaded, about one-third was related to software source code or technical documents, with the remainder involving confidential financial reports, merger documents, legal contracts, and customer details. Harmonic Security noted that DeepSeek was involved in 85% of all reported incidents.

Many of these AI platforms, the study notes, operate under unclear or permissive data policies; in some cases, their terms of service permit user-uploaded data to be used for further AI training. Employee use of such tools can put company confidentiality and regulatory compliance at risk, especially for firms handling sensitive customer or proprietary information.

To address these concerns, Harmonic Security has launched policy enforcement technology that provides real-time monitoring of employee AI use and detection of unsanctioned data uploads. Companies can restrict access to certain apps by location, limit the types of information uploaded, and prompt users with warnings or information about company policy.
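
The article does not describe how this kind of enforcement works internally. Purely as an illustration, the sketch below shows how a simple policy check might combine a destination blocklist with basic content patterns to block, warn on, or allow an upload. The domain list, pattern names, and the evaluate_upload function are hypothetical and are not Harmonic Security's actual implementation.

```python
import re

# Hypothetical blocklist of China-hosted GenAI domains named in the study;
# a real deployment would pull this from a maintained category feed.
BLOCKED_DOMAINS = {"deepseek.com", "kimi.moonshot.cn", "chat.baidu.com", "tongyi.aliyun.com"}

# Illustrative detectors for sensitive content. Production DLP tools use far
# richer classification (document fingerprinting, ML models), not bare regexes.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"\b(def|class|import|function|public static void)\b"),
    "financial":   re.compile(r"\b(EBITDA|balance sheet|merger agreement)\b", re.IGNORECASE),
    "pii_email":   re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"),
}

def evaluate_upload(destination: str, payload: str) -> tuple[str, list[str]]:
    """Return a policy verdict ('block', 'warn', or 'allow') plus matched categories."""
    matches = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(payload)]
    if destination in BLOCKED_DOMAINS and matches:
        return "block", matches   # sensitive data headed to an unsanctioned app
    if destination in BLOCKED_DOMAINS:
        return "warn", matches    # nudge the user with company AI policy
    return "allow", matches

# Example: source code pasted into a China-hosted chatbot would be blocked.
print(evaluate_upload("deepseek.com", "def transfer_funds(account): ..."))
```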

The research indicates that awareness alone is not preventing risky use of external GenAI tools. Harmonic Security reports that about one in twelve employees works with Chinese GenAI platforms, often unaware of where uploaded data is stored or of the exposure it creates.

Further information about Harmonic Security’s efforts to protect sensitive data and enforce company AI use policies is available at harmonic.security.
