Concepts
Understand how the INFER platform works under the hood.
Core Architecture
INFER is a decentralized inference marketplace that connects buyers of AI compute with GPU node operators. These pages explain the platform’s architecture, how inference requests are routed to nodes, and how the token-based billing system works.
Topics
- Architecture — Platform components, data flow, and deployment topology
- Inference Routing — How requests are matched to the best available node
- Billing & Tokens — Token-based payments, pricing models, and settlement
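To build intuition for the routing topic above, here is a minimal sketch of how a request might be matched to the "best" available node. Everything in it is an illustrative assumption: the `Node` fields, the linear latency/price scoring blend, and the weights are hypothetical, not INFER's actual routing policy, which is described in the Inference Routing page.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    latency_ms: float       # recent round-trip latency to the node
    price_per_token: float  # operator's advertised price
    available: bool         # whether the node is accepting work

def score(node: Node, latency_weight: float = 0.7) -> float:
    # Lower is better: blend latency and price into one number.
    # The weights and the linear blend are illustrative only.
    return (latency_weight * node.latency_ms
            + (1 - latency_weight) * node.price_per_token * 1000)

def route(nodes: list[Node]) -> Node:
    # Filter to nodes accepting work, then pick the lowest-scoring one.
    candidates = [n for n in nodes if n.available]
    if not candidates:
        raise RuntimeError("no available nodes")
    return min(candidates, key=score)

nodes = [
    Node("node-a", latency_ms=40.0, price_per_token=0.002, available=True),
    Node("node-b", latency_ms=25.0, price_per_token=0.004, available=True),
    Node("node-c", latency_ms=10.0, price_per_token=0.001, available=False),
]
print(route(nodes).node_id)  # node-c is fastest and cheapest but unavailable
```

In this toy setup, `node-b` wins: `node-c` is excluded as unavailable, and `node-b`'s lower latency outweighs its higher price under the chosen weights. A real marketplace would also weigh factors such as hardware capability and operator reliability.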