- Critical vulnerability in Base44 allowed unauthorized access to private apps with only a public app ID.
- Wiz researchers found and reported the flaw, and Wix patched it within 24 hours.
- The issue bypassed authentication and Single Sign-On, exposing users’ data.
- No evidence exists that attackers exploited the bug before the fix.
- Recent incidents highlight ongoing cybersecurity risks in AI and large language model tools.
On July 29, 2025, researchers disclosed a serious security flaw in the AI-powered coding platform Base44, which is owned by Wix. The security firm Wiz identified and reported the vulnerability, which allowed attackers to access users' private apps without authorization.
The bug let attackers register and verify accounts on private apps using only the app’s public identifier, known as “app_id.” According to Wiz’s report, the flaw could be exploited via two registration endpoints that lacked proper security checks. “The vulnerability we discovered was remarkably simple to exploit — by providing only a non-secret app_id value to undocumented registration and email verification endpoints, an attacker could have created a verified account for private applications on their platform,” the researchers said. Wiz notified Wix on July 9, 2025, and Wix issued a patch within 24 hours.
The flaw bypassed standard authentication mechanisms such as Single Sign-On (SSO), putting all app data at risk. Wiz explained that since the app_id was visible in the app’s URL and files, anyone could use it to create and verify new accounts on private projects. “After confirming our email address, we could just login via the SSO within the application page, and successfully bypass the authentication,” said security researcher Gal Nagli. There is no evidence that the flaw was exploited before it was fixed.
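The reported chain boils down to three steps: register against a private app using only its public app_id, confirm the email, then log in through the app's own SSO page. The sketch below is a hypothetical reconstruction for illustration only — the endpoint paths, field names, and app_id value are placeholders, since Wiz did not publish the exact undocumented routes, and the flaw has since been patched:

```python
# Hypothetical reconstruction of the exploit chain described by Wiz.
# All paths, parameter names, and values here are assumptions for
# illustration; they are not the real Base44 endpoints.
from dataclasses import dataclass


@dataclass
class Request:
    method: str
    path: str
    body: dict


def build_register_request(app_id: str, email: str) -> Request:
    # Step 1: register on a private app using only its non-secret app_id,
    # which was visible in the app's URL and files.
    return Request("POST", "/api/auth/register", {"app_id": app_id, "email": email})


def build_verify_request(app_id: str, email: str, otp: str) -> Request:
    # Step 2: confirm the email address, yielding a verified account
    # on the private application.
    return Request("POST", "/api/auth/verify-otp",
                   {"app_id": app_id, "email": email, "otp": otp})


# Step 3 was simply logging in via SSO on the application page with the
# now-verified account -- no secret beyond the public app_id was required.
reg = build_register_request("example-app-id", "attacker@example.com")
ver = build_verify_request("example-app-id", "attacker@example.com", "123456")
```

The point of the sketch is that neither request demands anything an outsider could not obtain: the only "credential" in either payload is the publicly visible app_id.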
The incident exposes challenges as companies adopt AI-driven tools like “vibe coding.” These platforms allow users to create programs through natural language prompts, but they can introduce security issues that traditional controls do not cover. Researchers have also warned of attacks on popular large language model (LLM) systems, such as prompt injection attacks, malicious code execution, phishing, and even leaking credentials.
Security teams are now exploring strategies like toxic flow analysis, which predicts potential attack scenarios in AI systems. Meanwhile, misconfigured servers in AI ecosystems, such as Model Context Protocol (MCP) servers, have been found exposed to the internet without authentication, risking data leaks and service abuse. According to Knostic, attackers could extract sensitive tokens and keys stored on these servers, gaining access to connected services.
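The MCP misconfiguration described above is detectable with a simple probe: a properly locked-down server should reject anonymous HTTP requests with a 401/403 and advertise an auth scheme, while an exposed one answers 200 with no challenge. A minimal sketch of that check, assuming nothing about any specific server or the Knostic methodology:

```python
# Minimal sketch of spotting an unauthenticated, internet-exposed server
# from its HTTP response. This is a generic heuristic for illustration,
# not a tool from the Knostic research.
from typing import Optional


def looks_unauthenticated(status_code: int, www_authenticate: Optional[str]) -> bool:
    # 401/403 means the server demanded credentials -- good.
    if status_code in (401, 403):
        return False
    # A 200 with no WWW-Authenticate challenge suggests anyone who can
    # reach the endpoint can drive its tools and read its stored secrets.
    return status_code == 200 and not www_authenticate
```

For example, a server answering `200` with no `WWW-Authenticate` header would be flagged, while one answering `401` with a `Bearer` challenge would not.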