Hosting Guide

Set up a DAISI host node, earn credits, and contribute to the decentralized AI network.

Getting Started

Running a DAISI host means your computer serves AI inference requests from the network. In return, you earn credits for every token processed and for your uptime.

Steps to Host

  1. Create a DAISI account at manager.daisinet.com
  2. Download the DAISI Host application from the Downloads page
  3. Configure your host with your account credentials
  4. Select and download an AI model
  5. Start the host and begin earning credits

Windows Installation

The Windows installer handles everything automatically so the host is ready to go after setup:

  • Desktop & Start Menu shortcuts are created so you can easily find and launch the host.
  • Auto-start on login is configured so the host runs whenever you sign in to Windows — no need to launch it manually.
  • Firewall rule is added for port 4242 so the host can receive inference requests without a Windows Firewall prompt.
  • Auto-update — the host checks for updates automatically and applies them in the background. Firewall rules and startup entries are updated to match the new version.

All of this is cleaned up automatically if you uninstall the host from Add/Remove Programs.
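Once the firewall rule is in place, you can verify from another machine that port 4242 is actually reachable. A minimal sketch in Python — the port number comes from the rule above, but the host address shown is a placeholder for your machine's LAN IP:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address): port_open("192.168.1.50", 4242)
# returns True only if the DAISI host is listening and reachable.
```

If this returns False from a machine on the same network while the host is running, check that the firewall rule survived any third-party security software you have installed.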

Hardware Requirements

Component   Minimum                     Recommended
CPU         4 cores, modern x86_64      8+ cores, AVX2 support
RAM         8 GB                        16+ GB
GPU         Optional (CPU-only mode)    NVIDIA GPU with 6+ GB VRAM
Storage     10 GB free                  50+ GB SSD
Network     10 Mbps stable connection   50+ Mbps, low latency
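A quick way to compare your machine against the minimums in the table is a few lines of standard-library Python. This sketch checks only CPU cores and free disk space (RAM and GPU detection are platform-specific, so they are left out); the thresholds are taken from the table above:

```python
import os
import shutil

MIN_CORES = 4      # CPU minimum from the requirements table
MIN_FREE_GB = 10   # storage minimum from the requirements table

def meets_minimums(path: str = ".") -> dict:
    """Compare this machine against the minimum requirements (partial check)."""
    cores = os.cpu_count() or 0
    free_gb = shutil.disk_usage(path).free / 1e9
    return {
        "cpu_ok": cores >= MIN_CORES,
        "storage_ok": free_gb >= MIN_FREE_GB,
        "cores": cores,
        "free_gb": round(free_gb, 1),
    }
```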

Host Dashboard

When launched interactively, the DAISI Host displays a real-time TUI (text user interface) dashboard directly in the terminal. The dashboard shows two side-by-side panels:

Status Panel (left)

Shows live stats: orchestrator connection state, active inference sessions, token throughput, loaded models, tool and skill counts, memory usage, CPU percentage, GPU VRAM, and uptime.

Activity Feed (right)

Scrollable, color-coded log of all host activity in real time. Connection events, inference requests, heartbeat traffic, warnings, and errors are all visible at a glance.

Keyboard Controls
  • Q or Escape — Gracefully shut down the host
  • ↑ / ↓ — Scroll the activity feed
  • PgUp / PgDn — Page through the activity feed
  • Ctrl+C — Force stop the server

When the host runs as a Windows Service or systemd unit, the dashboard is not shown and the host operates silently in the background.
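A common way for an application to decide between the TUI and silent mode is to check whether it is attached to an interactive terminal — under a Windows Service or systemd unit, standard streams are not TTYs. This is a sketch of that pattern, not DAISI's actual check:

```python
import sys

def should_show_dashboard() -> bool:
    """Show the TUI only when attached to an interactive terminal.

    Services and systemd units run without a TTY, so this returns
    False there and the host can run silently instead. (Illustrative
    heuristic; the real host may decide differently.)
    """
    return sys.stdout.isatty() and sys.stdin.isatty()
```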

Auto-Tune Model Loading

The host automatically detects available GPU VRAM and adjusts model loading parameters for optimal performance. You no longer need to manually configure Context Size, GPU Layer Count, or Batch Size — the host handles this at load time.

How It Works
  1. GPU Detection — The host queries NVIDIA GPU VRAM via nvidia-smi. If no GPU is found, it falls back to CPU-only mode.
  2. Model Analysis — The GGUF file header is read to extract layer count, embedding dimensions, and model architecture.
  3. Safe Parameters — Based on available VRAM and model size, the host computes the maximum GPU layers and context size that will fit safely in memory.
  4. Retry on Failure — If model loading fails, the host retries once with reduced parameters (halved GPU layers and context).
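The four steps above can be sketched as follows. The `nvidia-smi` query flags are real, but the safe-parameter math (90% VRAM budget, per-layer size, context halving) and the retry shape are illustrative assumptions — the host's actual heuristics are not published:

```python
import subprocess

def query_vram_mib():
    """Step 1: free/total VRAM via nvidia-smi; None means CPU-only fallback."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.free,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        free, total = out.splitlines()[0].split(", ")
        return int(free), int(total)
    except (OSError, subprocess.CalledProcessError):
        return None

def safe_params(free_mib: int, n_layers: int, layer_mib: int,
                max_ctx: int = 8192):
    """Steps 2-3 (illustrative math): offload as many layers as fit in
    ~90% of free VRAM, and halve the context window when little memory
    is left over for the KV cache."""
    budget = int(free_mib * 0.9)
    gpu_layers = min(n_layers, budget // max(layer_mib, 1))
    leftover = budget - gpu_layers * layer_mib
    ctx = max_ctx if leftover >= 1024 else max_ctx // 2
    return gpu_layers, ctx

def load_with_retry(load, gpu_layers: int, ctx: int):
    """Step 4: on failure, retry once with halved GPU layers and context."""
    try:
        return load(gpu_layers, ctx)
    except MemoryError:
        return load(gpu_layers // 2, ctx // 2)
```

For example, with 8000 MiB free and a 32-layer model at roughly 200 MiB per layer, `safe_params` offloads all 32 layers but halves the context to leave room for the KV cache.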

GPU VRAM Display

The TUI dashboard shows free and total GPU VRAM in the System section. This helps you understand how much capacity is available for model loading.

Per-model backend engine, prompt format, and inference parameters (temperature, TopP, etc.) remain configurable in the Manager UI.
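The per-model settings might be modeled like this — field names, defaults, and valid ranges here are assumptions mirroring what the Manager UI exposes, not DAISI's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ModelSettings:
    """Hypothetical per-model configuration (assumed fields)."""
    backend: str = "llama.cpp"     # backend engine (assumed default)
    prompt_format: str = "chatml"  # prompt template name (assumed)
    temperature: float = 0.7
    top_p: float = 0.9

    def __post_init__(self):
        # Reject out-of-range sampling parameters early.
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0, 2]")
        if not 0.0 < self.top_p <= 1.0:
            raise ValueError("top_p must be in (0, 1]")
```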

Earning Credits

Hosts earn credits through two mechanisms:

Token Processing

Earn credits for every token processed during inference. The more requests you handle, the more you earn.

Uptime Bonuses

Maintain high uptime to unlock bonus tiers: Bronze (90%+, 1.1x), Silver (95%+, 1.2x), Gold (99%+, 1.5x).
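Putting the two mechanisms together, a payout calculation might look like the sketch below. The tier thresholds and multipliers come from the list above; the per-token rate is a made-up placeholder, not a published DAISI rate:

```python
# (minimum uptime, multiplier) — Gold, Silver, Bronze from the tiers above
TIERS = [
    (0.99, 1.5),
    (0.95, 1.2),
    (0.90, 1.1),
]

def uptime_multiplier(uptime: float) -> float:
    """Return the highest bonus multiplier the uptime qualifies for."""
    for threshold, mult in TIERS:
        if uptime >= threshold:
            return mult
    return 1.0  # below Bronze: no bonus

def credits(tokens: int, rate_per_token: float, uptime: float) -> float:
    """Token earnings scaled by the uptime bonus tier."""
    return tokens * rate_per_token * uptime_multiplier(uptime)
```

For instance, at a hypothetical rate of 0.01 credits per token, a host that processed 1,000 tokens at 99% uptime would earn 1,000 × 0.01 × 1.5 = 15 credits.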