OpenAI Under Scrutiny After Helping Create Math Test Its AI Later Excelled At

OpenAI faced criticism after its o3 model scored 25.2% on FrontierMath, a benchmark test the company helped develop.

  • Epoch AI disclosed that OpenAI commissioned 300 math problems and had access to their solutions.
  • Multiple AI models, including those from Google, Microsoft, and Meta, were found to have memorized benchmark test answers.
  • Epoch AI plans to implement a 50-problem holdout set to ensure genuine testing of AI capabilities.
  • The controversy highlights broader issues in AI performance evaluation methods across the industry.

Questions about artificial intelligence testing integrity emerged after OpenAI's involvement in developing a mathematical benchmark it later used to demonstrate its model's capabilities, raising concerns about the validity of AI performance metrics across the industry.


Testing Transparency Issues

OpenAI's o3 model achieved a 25.2% score on FrontierMath, a mathematical assessment tool created by Epoch AI. However, subsequent revelations showed that OpenAI had funded the benchmark's development and maintained access to problems and solutions.

According to Epoch AI's disclosure, the company provided 300 mathematics problems with solutions to OpenAI through a commissioned agreement.

Tamay Besiroglu, associate director at Epoch AI, revealed that OpenAI initially restricted disclosure of their partnership, stating: “We were restricted from disclosing the partnership until around the time o3 launched.”

The agreement included only a verbal commitment not to use the materials for model training.

Industry-Wide Pattern

AI researcher Louis Hunt’s investigation exposed that leading models from Google, Microsoft, Meta, and Alibaba could reproduce exact answers from MMLU and GSM8K benchmarks, standard tests measuring AI multitasking and mathematical abilities.

RemBrain founder Vasily Morzhakov emphasized the severity of the situation:


“The models are tested in their instruction versions on MMLU and GSM8K tests. But the fact that base models can regenerate tests—it means those tests are already in pre-training.”
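The kind of contamination Morzhakov describes can be checked in a crude way: if a base model's completion reproduces a benchmark's reference answer verbatim, the test material was likely in its pre-training data. A minimal, purely illustrative sketch (the function name and sample strings are invented here, not from any benchmark):

```python
def exact_answer_leakage(model_completion: str, reference_answer: str) -> bool:
    """Crude contamination check: does the model's completion contain
    the benchmark's reference answer verbatim, ignoring case and
    whitespace? A True result suggests the answer was memorized."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return norm(reference_answer) in norm(model_completion)

# Illustrative data only -- not actual benchmark content.
completion = "The answer is 42, because the sum telescopes."
print(exact_answer_leakage(completion, "The Answer Is 42"))   # True
print(exact_answer_leakage(completion, "The answer is 17"))   # False
```

Real contamination audits are more involved (n-gram overlap, perplexity comparisons), but the underlying question is the same one Hunt's investigation raised: can the model regenerate test content it should never have seen?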

Moving Toward a Solution

To address these concerns, Epoch AI announced plans to implement a “hold out set” comprising 50 randomly selected problems that will remain inaccessible to OpenAI.
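Mechanically, a holdout of this kind is just a seeded random split that keeps a fixed subset of problems private. A minimal sketch, assuming 300 commissioned problems and a 50-problem holdout as reported (the function and parameter names are illustrative, not Epoch AI's actual tooling):

```python
import random

def split_holdout(problem_ids, holdout_size=50, seed=0):
    """Randomly reserve `holdout_size` problems as a private holdout set.

    Returns (public_ids, holdout_ids). Holdout problems would be kept
    inaccessible to model developers so scores on them reflect genuine
    capability rather than memorization.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    ids = list(problem_ids)
    holdout = set(rng.sample(ids, holdout_size))
    public = [p for p in ids if p not in holdout]
    return public, sorted(holdout)

# Example: 300 commissioned problems, 50 held out.
public, holdout = split_holdout(range(300), holdout_size=50, seed=42)
print(len(public), len(holdout))  # 250 50
```

The design point is that integrity comes from access control, not the sampling itself: the split is trivial, but the holdout answers must never reach the organization being evaluated.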

Computer scientist Dirk Roeckmann suggests that proper testing requires a neutral evaluation environment, while acknowledging the potential risk of human interference in the testing process.


The controversy parallels historical challenges in standardized testing, where access to test materials has consistently raised questions about assessment validity. This situation highlights the need for independent verification methods in artificial intelligence evaluation.
