- Theta Labs launches an LLM (large language model) inference service on EdgeCloud with distributed, blockchain-based verifiability.
- The update enables independently verified and tamper-proof AI chatbot outputs by integrating public randomness from the blockchain.
- EdgeCloud is now the first platform to offer trustless LLM inference for enterprises, academics, and other users.
- The open-source engine leverages DeepSeek V3/R1 models and makes results reproducible through a combination of deterministic and verifiable random processes.
- Results include metadata such as the blockchain’s random seed, allowing verification by any third party and supporting future on-chain attestations.
Theta Labs has released a new large language model (LLM) inference feature for its EdgeCloud platform, the company announced. The service is designed to bring distributed, blockchain-backed verifiability to AI chatbot and agent outputs. This aims to make LLM results trustworthy and independently confirmable for users across sectors.
According to Theta Labs, this update positions EdgeCloud as the first platform to provide “trustless” LLM inference for both crypto-native and traditional cloud users. The system combines open-source AI models, decentralized computing, and blockchain technology to ensure the integrity of AI-generated outputs. For sensitive applications in enterprise and academia, the company states that EdgeCloud is currently the only solution offering this level of output verification.
Modern AI systems often rely on LLMs, but most services today depend on centralized providers or trusted security hardware that is difficult to audit. The new approach adopts DeepSeek V3/R1, an open-source alternative to proprietary LLMs, which enables transparent verification throughout the process. “This breakthrough allows communities to move away from opaque AI APIs toward transparent, inspectable inference workflows,” states Theta Labs.
The new system uses a two-part approach. First, the model computes next-token probabilities deterministically, so anyone running the same open-source tools can review them. Second, the service integrates a publicly verifiable random seed, drawn from blockchain systems such as Ethereum’s RANDAO, to determine which token is selected, making every inference reproducible and auditable. This design ensures that neither users nor service providers can alter the outcome.
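The two steps above can be sketched in code. This is a minimal illustration, not Theta Labs' actual implementation: it assumes the model exposes raw logits, and it uses a SHA-256 hash of a public seed plus the token position as the source of reproducible randomness. All names here are hypothetical.

```python
# Hypothetical sketch of verifiable token sampling: deterministic
# probabilities plus a public random seed (e.g. an on-chain RANDAO
# value). Anyone with the same logits and seed re-derives the same token.
import hashlib
import math

def softmax(logits):
    """Step 1: deterministically convert logits to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, public_seed: bytes, step: int) -> int:
    """Step 2: select a token using only public, reproducible inputs.

    The seed and step index are hashed to a uniform value in [0, 1);
    the token is then chosen by inverse-CDF sampling.
    """
    probs = softmax(logits)
    digest = hashlib.sha256(public_seed + step.to_bytes(8, "big")).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if u < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Two independent verifiers with the same inputs always agree:
seed = bytes.fromhex("aa" * 32)  # stand-in for an on-chain random value
logits = [2.0, 0.5, -1.0, 0.1]
assert sample_token(logits, seed, 0) == sample_token(logits, seed, 0)
```

Because both the probability computation and the seed derivation are pure functions of public inputs, a mismatch between a provider's claimed token and a verifier's recomputation immediately exposes tampering.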
To support this, Theta Labs has enhanced the popular vLLM inference engine and published it as a public Docker container on DockerHub. The company has also incorporated the feature into its “DeepSeek R1 / Distill-Qwen-7B” model template on the Theta EdgeCloud dedicated model launchpad.
Every model result now includes verification metadata, containing public randomness details sourced from the Theta Blockchain. For higher-security needs, the company suggests that metadata like prompts, distributions, and results may be published on-chain for attestation by decentralized witnesses.
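To make the idea of verification metadata concrete, here is a hypothetical sketch of what such a record might contain and how a third party could check it. The field names and hashing scheme are illustrative assumptions, not Theta Labs' published format.

```python
# Hypothetical verification metadata attached to an inference result.
# A verifier re-hashes the prompt and output and compares against the
# record; the public random seed allows full re-execution if desired.
import hashlib
import json

def make_metadata(prompt: str, output: str, seed_hex: str, model: str) -> dict:
    """Build a metadata record binding a result to its public inputs."""
    return {
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "random_seed": seed_hex,  # public randomness sourced from the chain
    }

def check_metadata(meta: dict, prompt: str, output: str) -> bool:
    """A third party recomputes the hashes and compares them."""
    return (
        meta["prompt_sha256"] == hashlib.sha256(prompt.encode()).hexdigest()
        and meta["output_sha256"] == hashlib.sha256(output.encode()).hexdigest()
    )

meta = make_metadata(
    "What is EdgeCloud?", "A decentralized compute platform.",
    "ab" * 32, "DeepSeek-R1",
)
print(json.dumps(meta, indent=2))
assert check_metadata(meta, "What is EdgeCloud?", "A decentralized compute platform.")
```

Publishing such a record on-chain would let decentralized witnesses attest to it without ever seeing the raw prompt, since only hashes and the public seed are disclosed.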
Theta Labs expects this system to play a key role as AI agents are increasingly used in daily tasks and commercial decisions. With tamper-proof LLM inference, users can independently confirm the accuracy of AI-generated responses. The company highlights this as an important step in adding transparency and security to the expanding AI ecosystem.