5. Compute Network

Our Compute Network provides reliable and affordable computing resources for AI game inferencing. It optimizes the aggregation, distribution, and verification of game compute supplied by decentralized physical infrastructure networks (DePINs), centralized GPU services, miners, and players equipped with high-end gaming GPUs.
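
To make the aggregation and matching concrete, here is a minimal sketch in Python of how a scheduler might route an inference task to the cheapest eligible supply node. Everything here (ComputeNode, dispatch, and the node fields) is a hypothetical illustration of the idea, not Eternl's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    """One supply node: a DePIN GPU, a centralized service, a miner, or a player's rig."""
    node_id: str
    source: str            # "depin" | "centralized" | "miner" | "player"
    vram_gb: int           # available GPU memory
    price_per_task: float  # quoted cost in platform tokens
    online: bool

def dispatch(task_vram_gb: int, nodes: list[ComputeNode]) -> ComputeNode:
    """Pick the cheapest online node with enough VRAM for the task."""
    eligible = [n for n in nodes if n.online and n.vram_gb >= task_vram_gb]
    if not eligible:
        raise RuntimeError("no eligible compute supply for this task")
    return min(eligible, key=lambda n: n.price_per_task)

# Example: a lightweight model that fits in 8 GB lands on a player's gaming GPU.
pool = [
    ComputeNode("depin-01", "depin", 24, 0.012, True),
    ComputeNode("player-42", "player", 24, 0.008, True),
    ComputeNode("dc-a100", "centralized", 80, 0.050, True),
]
print(dispatch(task_vram_gb=8, nodes=pool).node_id)  # -> player-42
```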

Overview

We believe in partnership over competition when building a large game computing network. We therefore do not aim to profit from platform compute; we simply match game demand with DePIN partners and gamers. This setup benefits all stakeholders:

  1. Game Developers: No setup is needed; developers simply connect a wallet for cost settlement, and inferencing tasks are routed to the Network (see the sketch after this list). The abundant supply keeps compute affordable.

  2. Players: Our AI models are deployed on gaming GPUs for inferencing. Players can earn tokens by supplying computing resources from their high-end gaming laptops.

  3. DePIN Partners: DePINs lack AI use cases because of GPU memory limitations, security concerns, and poor cost-effectiveness. Our Compute Network gives DePIN partners a true and sustainable use case.
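
As referenced in the list above, the developer flow could look like the sketch below: connect a wallet once, then send inferencing tasks that the Network routes and bills automatically. The EternlCompute class, its method names, and the model name are invented stand-ins; no real SDK surface is documented in this section.

```python
# Hypothetical illustration of the developer flow: connect a wallet once,
# then route inferencing tasks to the Compute Network. Names and signatures
# are invented for this sketch.

class EternlCompute:
    def __init__(self, wallet_address: str):
        # The wallet is used only for cost settlement; no infra setup needed.
        self.wallet = wallet_address

    def infer(self, model: str, payload: dict) -> dict:
        # A real client would sign the request and submit it to the network,
        # which dispatches it to an eligible supply node.
        print(f"routing {model} task, billed to {self.wallet}")
        return {"status": "queued", "model": model, "input": payload}

client = EternlCompute(wallet_address="0xDEVELOPER_WALLET")
result = client.infer(
    model="npc-dialogue-small",  # a lightweight, quantized model (hypothetical name)
    payload={"npc": "blacksmith", "player_line": "Any work for me?"},
)
```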

Challenges

Three technical challenges limit the use of decentralized inferencing for most AI use cases. Each is paired below with our Web3-tailored compute use case:

  1. GPU Memory Limitation: LLMs are too big to deploy on the idle GPUs of DePINs and player laptops. Our solution is the Lightweight Model: quantized and distilled models reduce the hardware requirements of compute nodes, so RTX 4090s are sufficient for most features.

  2. Security Concerns: in a trustless setup, it is hard to protect proprietary models or to verify inferencing. Our solution is the Obfuscated Model: obfuscated models are "unreadable", protecting our core technology, and proof of inferencing is embedded into the models.

  3. Not Cost-Effective: adding a gas fee on top of the inferencing cost makes decentralized compute expensive. Our solution is the Minimal Gas Fee: our compute purchasers, the platform's games, are highly concentrated and long-term, so we post compute records per block but settle only once a day.
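
To illustrate what "quantized" means for the Lightweight Model approach, the sketch below applies PyTorch's stock dynamic int8 quantization to a placeholder network. This is a generic, widely used technique for shrinking model memory footprints, not Eternl's actual model pipeline.

```python
import os
import torch
import torch.nn as nn

# Placeholder network standing in for a game AI model; the real models are
# proprietary, so this only illustrates the generic int8 quantization step.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
)

# Dynamic quantization stores Linear weights as 8-bit integers instead of
# 32-bit floats (~4x smaller). Note: PyTorch dynamic quantization targets
# CPU execution; it is shown here only to illustrate the memory savings
# that make lightweight models deployable on consumer hardware.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module, path: str = "/tmp/model.pt") -> float:
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```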

We aim to bring a viable use case to decentralized inferencing networks by sharing our proprietary, obfuscated, lightweight AI models and by offering a daily gas-fee settlement model enabled by the concentrated compute purchasers of our platform games.
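
A minimal sketch of the settlement idea under stated assumptions: each inference produces a per-block record with an embedded digest (standing in for the proof of inferencing), and all of a day's records are rolled into one settlement so a single gas fee is amortized across the batch. The record fields and hashing scheme are illustrative, not the platform's actual protocol.

```python
import hashlib
import json

def inference_record(node_id: str, task_id: str, output: str, block: int) -> dict:
    # Illustrative "proof of inferencing": a digest binding the node, the task,
    # and the model output. The real embedded proof scheme is not specified here.
    proof = hashlib.sha256(f"{node_id}|{task_id}|{output}".encode()).hexdigest()
    return {"node": node_id, "task": task_id, "block": block, "proof": proof}

# Records are posted per block throughout the day...
day_records = [
    inference_record("player-42", f"task-{i}", f"out-{i}", block=1000 + i)
    for i in range(5)
]

# ...but settled once, in a single transaction, so one gas fee is paid and
# amortized across every record in the batch.
def daily_settlement(records: list[dict]) -> dict:
    digest = hashlib.sha256(
        json.dumps(records, sort_keys=True).encode()
    ).hexdigest()
    return {"record_count": len(records), "batch_digest": digest}

print(daily_settlement(day_records))
```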
