
Node Registration

Register your GPU as an INFER node and start earning tokens for serving inference requests.

Prerequisites

  • An INFER account with Operator role (upgrade here)
  • A GPU with 4GB+ VRAM
  • An LLM runtime installed (Ollama or Exo)
  • A TLS-secured endpoint (see TLS Setup)

Step 1: Install an LLM Runtime

Ollama

```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.1:8b
```

Exo

```bash
pip install exo
exo start
```

Step 2: Become an Operator

If your account is still in the USER role, upgrade to OPERATOR:

  1. Go to Dashboard → Provide Compute 
  2. Click Become a Provider
  3. Your role upgrades instantly

Or use the desktop app, which detects your LLM runtime automatically.

Step 3: Register Your Node

Via Dashboard

  1. Navigate to Operator → Nodes
  2. Click Register New Node
  3. Fill in:
    • Node Name: A friendly identifier
    • Endpoint URL: Your TLS-secured URL (e.g., https://node.example.com)
    • GPU Type: Select your hardware
    • Models: Select which models you’re serving
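The registration form boils down to a small set of node details. The snippet below sketches them as a JSON payload; the field names and values are illustrative assumptions, not a documented INFER API schema:

```python
import json

# Illustrative node details -- field names are assumptions,
# not a documented INFER API schema.
node = {
    "name": "home-gpu-01",                   # friendly identifier
    "endpoint": "https://node.example.com",  # TLS-secured URL
    "gpu_type": "NVIDIA RTX 4090",           # select your hardware
    "models": ["llama3.1:8b"],               # models you are serving
}
print(json.dumps(node, indent=2))
```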

Via Desktop App

  1. Open the INFER desktop app
  2. The app auto-detects your LLM runtime and available models
  3. Click Register as INFER Node
  4. Confirm the pre-filled details

Step 4: Verification

After registration, the INFER network will:

  1. Probe your endpoint's health check
  2. Verify TLS certificate validity
  3. Test inference with a small request
  4. Mark your node as Active if all checks pass

This typically takes 30–60 seconds.
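You can run a rough version of check 2 yourself before registering. This sketch uses Python's standard `ssl` module to confirm your endpoint presents a certificate trusted by the system CA store; it is a local pre-check only, not the network's actual probe:

```python
import socket
import ssl

def tls_certificate_valid(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if `host` presents a TLS certificate trusted by the
    system's default CA store (hostname and expiry are verified too)."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

A self-signed certificate will fail this check, just as it would fail the network's verification.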

Node Status

| Status  | Description                               |
|---------|-------------------------------------------|
| Active  | Receiving and serving inference requests  |
| Idle    | Online but no incoming requests           |
| Offline | Failed health check or unreachable        |
| Pending | Awaiting admin review (first-time nodes)  |
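The statuses above follow directly from a node's situation. The helper below restates the table as code so you can sanity-check what status to expect; the logic is illustrative, since the network assigns status server-side:

```python
def expected_status(reachable: bool, serving_requests: bool, first_time: bool) -> str:
    """Infer which status to expect, per the table above.
    Illustrative only -- the INFER network assigns status server-side."""
    if first_time:
        return "Pending"   # awaiting admin review
    if not reachable:
        return "Offline"   # failed health check or unreachable
    return "Active" if serving_requests else "Idle"
```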

Next Steps