- Theta launches the first decentralized On-demand AI Model API Service, allowing developers to easily access AI models without building infrastructure.
- The pay-as-you-use service offers multiple AI models including FLUX.1-schnell, Llama 3.1, and Stable Diffusion, running on Theta’s hybrid cloud-edge infrastructure.
- The service will enable community GPU nodes to host select models after the June 2025 EdgeCloud client release, with node operators earning TFUEL rewards.
Theta has announced the launch of its first decentralized On-demand AI Model API Service. This new offering enables developers to incorporate AI models into their applications without the costly and time-consuming process of building infrastructure from scratch. The service operates on a pay-as-you-use model, giving AI developers access to various AI capabilities from Large Language Models (LLMs) to generative AI through simple API integration.
The service runs on Theta EdgeCloud’s hybrid cloud-edge infrastructure, which sources idle GPU capacity from multiple providers including cloud providers, enterprise data centers, and community-operated edge nodes. An intelligent scheduler optimizes model placement based on real-time demand, ensuring consistent performance even during peak hours.
“This new service enables EdgeCloud to manage AI model execution at much more granular levels such as by image generated, token used and work produced,” according to the announcement from Theta Labs. This capability sets the foundation for a fully hybrid GPU edge network scheduled to launch with the June 2025 release.
Available Models and Capabilities
At launch, the service offers eight AI models:
- FLUX.1-schnell (text to image)
- Llama 3.1 70B Instruct (LLM)
- Whisper (audio to text)
- Stable Diffusion Turbo Vision (text to image)
- Image to Image (style transfer, object erasing, etc.)
- Llama 3 8B (LLM)
- Stable Diffusion XL Turbo (text to image)
- Stable Diffusion Video (text to video)
Notably, several models can run efficiently on consumer-grade community GPUs such as the NVIDIA RTX 4070 and 4090. After the June 2025 EdgeCloud client release, community GPU node operators will be able to host these models and earn TFUEL token rewards for processing jobs.
How Developers Can Get Started
Developers can access the service through the official EdgeCloud dashboard. The interface includes a Playground tab for direct model interaction, an API documentation tab with code examples in multiple languages (cURL, JavaScript/Node.js, Python), and an access key management tab.
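As a rough illustration of what a pay-as-you-use API call might look like, the Python sketch below assembles an authenticated inference request. The base URL, endpoint path, model identifier, and payload shape shown here are placeholders, not Theta's actual API; the real values come from the dashboard's API documentation tab.

```python
import json
import urllib.request

# Placeholder values -- substitute the real base URL, model ID, and
# access key from the EdgeCloud dashboard's API documentation tab.
API_BASE = "https://ondemand.example-edgecloud.com"
API_KEY = "your-access-key"

def build_inference_request(model: str, prompt: str,
                            max_tokens: int = 256) -> urllib.request.Request:
    """Assemble a JSON inference request (request shape is illustrative only)."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/v1/completions",  # path is an assumption
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_inference_request("llama-3.1-70b-instruct",
                              "Explain edge computing in one sentence.")
# resp = urllib.request.urlopen(req)  # uncomment with a valid key and endpoint
print(req.get_full_url())
```

Because billing is metered per unit of work (per token or per image generated), keeping `max_tokens` conservative is a simple way to cap the cost of an individual call.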
For developers needing dedicated model deployments to optimize performance and latency, Theta also offers Dedicated AI Model Serving as an alternative to the on-demand APIs.
The company emphasizes that its primary focus remains on increasing adoption and usage of the Theta edge network, which it believes “drives increased usage of Theta blockchain, velocity and utility of TFUEL tokens and ultimately better economics, security and governance by owning and staking THETA tokens.”