Critical Base44 Flaw Let Hackers Bypass Authentication Controls

Wix Quickly Patches Critical Base44 Flaw Allowing Unauthorized Access to Private Apps, Highlighting Ongoing AI Security Risks

  • Critical vulnerability in Base44 allowed unauthorized access to private apps with only a public app ID.
  • Wiz researchers found and reported the flaw, and Wix patched it within 24 hours.
  • The issue bypassed authentication and Single Sign-On, exposing users’ data.
  • No evidence exists that attackers exploited the bug before the fix.
  • Recent incidents highlight ongoing cybersecurity risks in AI and large language model tools.

On July 29, 2025, researchers disclosed a serious security flaw in the AI-powered coding platform Base44, which is owned by Wix. The security firm Wiz identified and reported the vulnerability, which allowed unauthorized access to private apps built by the platform's users.


The bug let attackers register and verify accounts on private apps using only the app’s public identifier, known as “app_id.” According to Wiz’s report, the flaw could be exploited via two undocumented registration and email-verification endpoints that lacked proper security checks. “The vulnerability we discovered was remarkably simple to exploit — by providing only a non-secret app_id value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform,” the researchers said. Wix issued a patch within 24 hours of being notified on July 9, 2025.

The flaw bypassed standard authentication controls such as Single Sign-On (SSO), putting all app data at risk. Wiz explained that because the app_id was visible in the app’s URL and files, anyone could use it to create and verify new accounts on private projects. “After confirming our email address, we could just login via the SSO within the application page, and successfully bypass the authentication,” said security researcher Gal Nagli. There is no evidence that the flaw was actively exploited before it was fixed.
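The underlying flaw class can be sketched conceptually. The code below is a hypothetical illustration, not Base44's actual implementation or endpoint logic: it shows why a registration check gated only by a public, non-secret app_id admits anyone who can read that ID from an app's URL, and how requiring proof of membership (an invite list or an SSO assertion) closes the gap.

```python
# Hypothetical sketch of the flaw class: a registration check whose only
# gate is a non-secret app_id. Names and data are illustrative.

KNOWN_APPS = {"app_12345"}  # app IDs are public (visible in URLs and files)
INVITED = {("app_12345", "alice@corp.example")}  # intended allow-list

def register_vulnerable(app_id: str, email: str) -> bool:
    # Flawed: the app_id is public knowledge, so this check gates nothing.
    return app_id in KNOWN_APPS

def register_patched(app_id: str, email: str) -> bool:
    # Fixed: require proof of membership, not mere knowledge of the app_id.
    return app_id in KNOWN_APPS and (app_id, email) in INVITED

# An attacker who scraped the app_id from the app's URL:
print(register_vulnerable("app_12345", "attacker@evil.example"))  # True
print(register_patched("app_12345", "attacker@evil.example"))     # False
```

The design point is that a value embedded in client-visible artifacts can identify a resource but can never authorize access to it.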

The incident exposes the challenges companies face as they adopt AI-driven development practices like “vibe coding.” These platforms let users create programs through natural language prompts, but they introduce security issues that traditional controls do not cover. Researchers have also warned of attacks on popular large language model (LLM) systems, including prompt injection, malicious code execution, phishing, and even credential leakage.

Security teams are now exploring strategies like toxic flow analysis, which predicts potential attack scenarios in AI systems. Meanwhile, misconfigured servers in AI ecosystems, such as Model Context Protocol (MCP) servers, have been found exposed to the internet without authentication, risking data leaks and service abuse. According to Knostic, attackers could extract sensitive tokens and keys stored on these servers, gaining access to connected services.
