BTC $71,807
2026 Bull Run Is Building Start trading with 5% OFF all fees
Sign Up Now
BTC $71,807
Bull Run 2026 | 5% Off Fees Open your Binance account today
Sign Up

Buterin: Grok on X boosts truthfulness and rebukes bias now!

Grok’s unpredictable responses on X may improve truth-seeking but expose hallucinations, adversarial prompting, and risks of centralized AI bias.

  • Grok, the AI chatbot on X, often challenges users who seek confirmation for political beliefs.
  • Vitalik Buterin said Grok’s unpredictability has improved truth-seeking on the platform; he linked this view in two posts here and here.
  • Elon Musk attributed some Grok errors to *“adversarial prompting”* in a post here.
  • Experts warn central control of AI can institutionalize bias, and other chatbots also show factual and safety problems.

Vitalik Buterin said this week that calling Grok on X has made the platform more truth-friendly, because users cannot see responses in advance and the bot sometimes contradicts partisan claims, according to his post here. He added in a second post that a strong case exists that Grok is a “net improvement” for the site here.

- Advertisement -

Grok is built by xAI and is widely used on the platform. The chatbot has produced conspicuous errors, including praising Elon Musk’s athletic ability and suggesting he could have resurrected faster than Jesus. Musk blamed “adversarial prompting” for some hallucinations in a post available here.

“When the most powerful AI systems are owned, trained and governed by a single company, you create conditions for algorithmic bias to become institutionalized knowledge,” said Kyle Okamoto, chief technology officer at Aethir. “Models begin to produce worldviews, priorities and responses as if they’re objective facts, and that’s when bias stops being a bug and becomes the operating logic of the system that’s replicated at scale.”

Other AI services also face issues. OpenAI’s ChatGPT has been criticized for biased responses and factual errors, and Character.ai has been accused of safety failures in a severe case involving a minor.

Definition — Adversarial prompting: deliberate inputs crafted to make a model produce incorrect, biased, or unexpected outputs.
Definition — AI chatbot: a software program that generates text-based responses using machine learning models.

- Advertisement -

✅ Follow BITNEWSBOT on Telegram, Facebook, LinkedIn, X.com, and Google News for instant updates.

Previous Articles:

- Advertisement -
Ad
Altseason Is Loading. Don't watch from the sidelines.
SOL $90.51
DOGE $0.0963
LINK $9.02
SUI $1.00
5% off fees when you sign up
Start Trading
Ad
Pay Less on Every Trade. For Life.
$10K/mo volume Save $60/yr
$50K/mo volume Save $300/yr
$100K/mo volume Save $600/yr
5% off all trading fees when you sign up
Claim Your Discount

Latest News

Harvester Deploys New Linux Backdoor in Espionage

The cyber-espionage group Harvester has deployed a new Linux variant of its GoGra backdoor...

Best Shiba Inu Buy Under $0.00001? Gains 6.5% Monthly

Shiba Inu (SHIB) has rallied 2.5% in the last 24 hours amid a wider...

Bitcoin Surging as Saylor Outpaces BlackRock; Musk Hint

Bitcoin surged nearly 30% from a low of $60,000 in early Q2 2026, approaching...

SEC Close to Issuing Exemption for Tokenized Securities

The SEC is finalizing a new "innovation exemption" for trading tokenized securities onchain.The move...

Lotus Wiper Targets Venezuela’s Energy Infrastructure

Lotus Wiper, a new data-destroying malware, has been used in targeted attacks against Venezuela's...

Must Read

5 Best Hacking eBooks for Beginners

In this article we present the 5 Best Hacking eBooks for beginners as ranked by our editorial teamWelcome to the world of hacking, where...
Ad
Altseason Is Loading. These 4 coins are trending right now.
SOL $92.12
DOGE $0.0950
LINK $9.02
SUI $1.02
5% off spot fees when you sign up
Start Trading