# Node Registration

Register your GPU as an INFER node and start earning tokens for serving inference requests.
## Prerequisites
- An INFER account with the Operator role (see Step 2 to upgrade)
- A GPU with 4GB+ VRAM
- An LLM runtime installed (Ollama or Exo)
- A TLS-secured endpoint (see TLS Setup)
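To confirm the VRAM requirement before going further, you can run a quick pre-flight check. This is a sketch that assumes an NVIDIA GPU (`nvidia-smi` query flags are standard); on other hardware, use your vendor's equivalent tool.

```shell
#!/bin/sh
# Pre-flight sketch: verify the GPU meets the 4GB VRAM minimum.
# Assumes an NVIDIA GPU with nvidia-smi on the PATH.

min_vram_mib=4096

vram_ok() {
  # $1 = total VRAM in MiB; succeeds if it meets the minimum
  [ "$1" -ge "$min_vram_mib" ]
}

if command -v nvidia-smi >/dev/null 2>&1; then
  vram=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)
  if vram_ok "$vram"; then
    echo "GPU OK: ${vram} MiB VRAM"
  else
    echo "GPU below minimum: ${vram} MiB VRAM (need ${min_vram_mib})"
  fi
else
  echo "nvidia-smi not found; check VRAM with your GPU vendor's tool"
fi
```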
## Step 1: Install an LLM Runtime
### Ollama (Recommended)
```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.1:8b
```

### Exo
```bash
pip install exo
exo start
```

## Step 2: Become an Operator
If your account is still in the USER role, upgrade to OPERATOR:
- Go to Dashboard → Provide Compute
- Click Become a Provider
- Your role upgrades instantly
Or use the desktop app, which detects your LLM runtime automatically.
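Before moving on to registration, you can confirm your runtime is reachable and see which models it reports. A sketch assuming Ollama on its default port 11434, using its `/api/tags` endpoint:

```shell
# List the models your local Ollama instance is serving, so you know
# which ones to select during registration (default port 11434 assumed).

list_models() {
  # Crude extraction of "name" fields from the /api/tags JSON (no jq needed)
  tr ',' '\n' | grep '"name"' | sed 's/.*"name":"\([^"]*\)".*/\1/'
}

curl -sf http://localhost:11434/api/tags | list_models
```

If nothing prints, the runtime isn't responding; restart it before registering.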
## Step 3: Register Your Node
### Via Dashboard
- Navigate to Operator → Nodes
- Click Register New Node
- Fill in:
  - Node Name: A friendly identifier
  - Endpoint URL: Your TLS-secured URL (e.g., https://node.example.com)
  - GPU Type: Select your hardware
  - Models: Select which models you're serving
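If you script your node setup, it can help to keep the same details as a JSON file alongside your configuration. The field names and values below are hypothetical, simply mirroring the dashboard form above; they are not a documented INFER schema.

```shell
# Hypothetical field names mirroring the dashboard form --
# not a documented INFER registration schema.
cat > node.json <<'EOF'
{
  "name": "my-first-node",
  "endpoint": "https://node.example.com",
  "gpu_type": "RTX 4090",
  "models": ["llama3.1:8b"]
}
EOF
echo "wrote node.json"
```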
### Via Desktop App
- Open the INFER desktop app
- The app auto-detects your LLM runtime and available models
- Click Register as INFER Node
- Confirm the pre-filled details
## Step 4: Verification
After registration, the INFER network will:
- Probe your endpoint's health check
- Verify TLS certificate validity
- Test inference with a small request
- Mark your node as Active if all checks pass
This typically takes 30–60 seconds.
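You can run roughly the same checks yourself before registering. A sketch, assuming your endpoint terminates TLS on port 443; the `/health` route is an assumption, so substitute whatever route your runtime actually exposes:

```shell
#!/bin/sh
# Mirror the network's verification steps locally (sketch).
# NODE is a placeholder; substitute your real endpoint hostname.
NODE="node.example.com"

check_tls() {
  # Print the certificate's validity dates, or a warning on handshake failure
  if echo | openssl s_client -connect "$1:443" -servername "$1" 2>/dev/null \
      | openssl x509 -noout -dates 2>/dev/null; then
    echo "tls: ok"
  else
    echo "tls: handshake failed for $1"
  fi
}

check_health() {
  # The /health path is an assumption; use your runtime's actual route
  if curl -sfm 5 "https://$1/health" >/dev/null 2>&1; then
    echo "health: ok"
  else
    echo "health: unreachable at https://$1/health"
  fi
}

check_tls "$NODE"
check_health "$NODE"
```

If both checks pass here, the network's probe should pass too.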
## Node Status
| Status | Description |
|---|---|
| Active | Receiving and serving inference requests |
| Idle | Online but no incoming requests |
| Offline | Failed health check or unreachable |
| Pending | Awaiting admin review (first-time nodes) |
## Next Steps
- Monitor your earnings
- Set up TLS if you haven’t already
- Use the desktop app for real-time monitoring