- Attackers are using Vercel’s v0 AI tool to create convincing fake sign-in pages for phishing attacks.
- Threat actors can generate functional phishing sites quickly using simple text prompts, lowering the skill needed to launch attacks.
- Vercel has blocked access to phishing pages after receiving responsible disclosure from security researchers.
- Cybercriminals are also using uncensored large language models (LLMs) like WhiteRabbitNeo to assist in malicious operations.
- The trend highlights a shift toward AI-driven phishing campaigns, making scams more scalable and automated.
Unknown threat actors have been detected using Vercel’s v0 AI-powered development tool to generate realistic fake login pages for phishing attacks, according to security research released on July 2, 2025. The incidents involve attackers creating websites that closely mimic real sign-in portals in order to steal login credentials.
Researchers from Okta Threat Intelligence reported that these attackers use v0 to quickly produce deceptive websites by entering basic text prompts, eliminating the need for coding skills or complex setup. This approach lets even inexperienced actors launch phishing sites at scale and with speed.
Okta observed that some phishing attempts impersonated multiple brands, including an unnamed Okta customer, and hosted company logos directly on Vercel infrastructure. After receiving reports, Vercel acted to restrict access to the identified phishing resources. According to Okta’s researchers, “This observation signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts.” The use of v0.dev and open-source clones allows fast, large-scale creation of deceptive pages.
Cybersecurity analysts note that this type of AI-driven phishing differs from traditional “phishing kit” methods, which previously required greater technical expertise or time investment. “The observed activity confirms that today’s threat actors are actively experimenting with and weaponizing leading GenAI tools to streamline and enhance their phishing capabilities,” Okta’s researchers stated.
The rise of AI-enabled cybercrime is also evident in criminals’ use of uncensored large language models (LLMs). One model, called WhiteRabbitNeo, is marketed as an “Uncensored AI model for (Dev) SecOps teams,” but researchers say it is being deployed for illicit purposes. Cisco Talos researcher Jaeson Schultz said, “Cybercriminals are increasingly gravitating towards uncensored LLMs, cybercriminal-designed LLMs, and jailbreaking legitimate LLMs.” Schultz added that these models operate without safety constraints, making them well-suited for cybercriminal use.
Recent trends show that phishing now involves AI-generated fake emails, voice clones, and deepfake videos, allowing cybercriminals to automate and expand their operations. As these tools lower technical barriers, the number and sophistication of phishing attacks continue to grow.