# INFER Documentation
Welcome to the documentation for INFER, a decentralized AI inference network connecting developers with GPU providers worldwide.
## What is INFER?
INFER is a marketplace for AI inference compute. Developers send LLM requests through our OpenAI-compatible API, and the network routes them to the best available GPU node. Node operators earn tokens for every request they serve.
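Because the API follows the OpenAI chat completions convention, a request is just the familiar JSON body sent to a `/chat/completions` path. A minimal sketch of such a payload is below; note that the base URL and model id are invented placeholders, not values confirmed by this page:

```python
import json

# Hypothetical values: the endpoint path follows the OpenAI convention,
# but the base URL and model id below are placeholders, not from this page.
BASE_URL = "https://api.infer.example/v1"
ENDPOINT = f"{BASE_URL}/chat/completions"

# An OpenAI-style chat completions body; an OpenAI-compatible client
# sends this JSON with an `Authorization: Bearer <key>` header.
payload = {
    "model": "llama-3-8b-instruct",  # placeholder model id
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello from INFER!"},
    ],
    "max_tokens": 64,
}

print(ENDPOINT)
print(json.dumps(payload, indent=2))
```

Any existing OpenAI client library should work by pointing its base URL at the network's endpoint, which is what "drop-in replacement" means in practice.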
## Quick Links
- Quickstart — Get up and running in 5 minutes
- API Reference — Full API documentation
- Guides — Step-by-step tutorials for operators
- Architecture — How the network works
## Key Features
- OpenAI-Compatible API — Drop-in replacement for OpenAI’s chat completions endpoint
- 500+ GPU Nodes — Global network with sub-100ms latency
- Pay-per-Token — No subscriptions; pay only for what you use
- 90% Revenue Share — Node operators keep the vast majority of fees
- Desktop App — Native macOS app for node operators
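To make the pay-per-token and revenue-share features above concrete, here is a back-of-envelope sketch of the billing arithmetic. The per-token price is a made-up placeholder; only the 90% operator share comes from this page:

```python
# Illustrative pay-per-token math. PRICE_PER_1K_TOKENS is a made-up
# placeholder; only the 90% operator revenue share is stated by the docs.
PRICE_PER_1K_TOKENS = 0.002  # hypothetical price in USD per 1,000 tokens
OPERATOR_SHARE = 0.90        # node operators keep 90% of fees

def request_cost(tokens: int) -> float:
    """Total fee a developer pays for a request of `tokens` tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

def operator_payout(tokens: int) -> float:
    """Portion of the fee paid out to the serving node operator."""
    return request_cost(tokens) * OPERATOR_SHARE

# e.g. 50,000 tokens served in a day at the placeholder price
fee = request_cost(50_000)
payout = operator_payout(50_000)
print(f"fee=${fee:.4f} operator_payout=${payout:.4f}")
```

With no subscription floor, a quiet day costs a developer nothing, and an operator's earnings scale linearly with tokens served.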