Hosting Guide

Set up a DAISI host node, earn credits, and contribute to the decentralized AI network.

Getting Started

Running a DAISI host means your computer serves AI inference requests from the network. In return, you earn credits for every token processed and for your uptime.

Steps to Host

  1. Create a DAISI account at manager.daisinet.com
  2. Download the DAISI Host application from the Downloads page
  3. Configure your host with your account credentials
  4. Select and download an AI model
  5. Start the host and begin earning credits

Windows Installation

The Windows installer handles everything automatically so the host is ready to go after setup:

  • Desktop & Start Menu shortcuts are created so you can easily find and launch the host.
  • Auto-start on login is configured so the host runs whenever you sign in to Windows — no need to launch it manually.
  • Firewall rule is added for port 4242 so the host can receive inference requests without a Windows Firewall prompt.
  • Auto-update — the host checks for updates automatically and applies them in the background. Firewall rules and startup entries are updated to match the new version.

All of this is cleaned up automatically if you uninstall the host from Add/Remove Programs.

Hardware Requirements

Component   Minimum                      Recommended
CPU         4 cores, modern x86_64       8+ cores, AVX2 support
RAM         8 GB                         16+ GB
GPU         Optional (CPU-only mode)     NVIDIA GPU with 6+ GB VRAM
Storage     10 GB free                   50+ GB SSD
Network     10 Mbps stable connection    50+ Mbps, low latency
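
Before installing, you can sanity-check a machine against the minimums in the table above. This is a hypothetical pre-flight script, not something DAISI ships; RAM and GPU checks would need extra tooling (e.g. psutil or nvidia-smi), so only CPU and storage are shown:

```python
# Hypothetical pre-flight check against the minimums in the table above.
import os
import shutil

MIN_CORES = 4        # CPU: 4 cores minimum
MIN_DISK_GB = 10     # Storage: 10 GB free minimum

def meets_minimums(path: str = ".") -> dict:
    """Return a pass/fail flag for each checkable component."""
    cores = os.cpu_count() or 0
    free_gb = shutil.disk_usage(path).free / 1e9
    return {"cpu": cores >= MIN_CORES, "storage": free_gb >= MIN_DISK_GB}

print(meets_minimums())
```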

Host Dashboard

When launched interactively, the DAISI Host displays a real-time TUI (text user interface) dashboard directly in the terminal. The dashboard shows two side-by-side panels:

Status Panel (left)

Shows live stats: orchestrator connection state, active inference sessions, token throughput, loaded models, tool and skill counts, memory usage, CPU percentage, GPU VRAM, and uptime.

Activity Feed (right)

Scrollable, color-coded log of all host activity in real time. Connection events, inference requests, heartbeat traffic, warnings, and errors are all visible at a glance.

Keyboard Controls
  • Q or Escape — Gracefully shut down the host
  • ↑ / ↓ — Scroll the activity feed
  • PgUp / PgDn — Page through the activity feed
  • Ctrl+C — Force stop the server

When the host runs as a Windows Service or systemd unit, the dashboard is not shown and the host operates silently in the background.

Auto-Tune Model Loading

The host automatically detects available GPU VRAM and adjusts model loading parameters for optimal performance. You no longer need to manually configure Context Size, GPU Layer Count, or Batch Size — the host handles this at load time.

How It Works
  1. GPU Detection — The host queries NVIDIA GPU VRAM via nvidia-smi. If no GPU is found, it falls back to CPU-only mode.
  2. Model Analysis — The GGUF file header is read to extract layer count, embedding dimensions, and model architecture.
  3. Safe Parameters — Based on available VRAM and model size, the host computes the maximum GPU layers and context size that will fit safely in memory.
  4. Retry on Failure — If model loading fails, the host retries once with reduced parameters (halved GPU layers and context).
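
The load-time computation in steps 1–4 can be sketched roughly as follows. The headroom factor, per-layer cost estimate, and all function names are illustrative assumptions, not the actual DAISI implementation:

```python
def safe_parameters(free_vram_mb: int, model_layers: int,
                    model_size_mb: int, max_context: int = 8192):
    """Pick a GPU layer count and context size that fit in free VRAM."""
    if free_vram_mb <= 0:
        return 0, 2048                            # no GPU: CPU-only fallback
    per_layer_mb = model_size_mb / model_layers   # rough cost per layer
    budget_mb = free_vram_mb * 0.9                # keep ~10% headroom
    gpu_layers = min(model_layers, int(budget_mb // per_layer_mb))
    # Use the full context window only when every layer fits on the GPU.
    context = max_context if gpu_layers == model_layers else 2048
    return gpu_layers, context

def load_with_retry(load, gpu_layers: int, context: int):
    """Try to load once; on failure retry with halved parameters (step 4)."""
    try:
        return load(gpu_layers, context)
    except MemoryError:
        return load(gpu_layers // 2, context // 2)  # single reduced retry
```
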

GPU VRAM Display

The TUI dashboard shows free and total GPU VRAM in the System section. This helps you understand how much capacity is available for model loading.

Per-model backend engine, prompt format, and inference parameters (temperature, TopP, etc.) remain configurable in the Manager UI.

Tools-Only Hosts

Tools-only hosts are DAISI host devices that run tools on behalf of other hosts without loading or running AI models. They are perfect for devices that lack GPU capability — such as phones, tablets, low-power servers, or any machine that can run .NET but doesn't have the hardware for AI inference.

When a tools-only host connects to the network, it loads all available tools (built-in, custom, and marketplace) but skips model download and loading. The orchestrator (ORC) excludes it from inference session routing, so it will never receive AI inference requests. Instead, it stands ready to execute tools when requested by other hosts.
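
The startup behavior described above can be summarized in a small sketch; the function and step names are illustrative only, not actual DAISI code:

```python
# Sketch of the tools-only startup branch described above.

def start_host(tools_only: bool) -> list:
    """Return the startup steps the host performs, in order."""
    steps = ["connect to orchestrator", "load tools"]  # done in both modes
    if not tools_only:
        # Full hosts also fetch a model and join inference routing.
        steps += ["download model", "load model", "join inference routing"]
    return steps
```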

How Tool Delegation Works

When an inference host encounters a tool call during a session, it can delegate that execution to a tools-only host. This happens through two paths:

ORC-Mediated Path
  1. The inference host sends an ExecuteToolRequest command to the ORC.
  2. The ORC finds an available tools-only host in the same account (least-recently-used first).
  3. The ORC forwards the request to the tools-only host.
  4. The tools-only host executes the tool and returns an ExecuteToolResponse.
  5. The ORC relays the response back to the inference host.
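
The least-recently-used selection in step 2 could look something like this sketch; ToolHostPool and its methods are hypothetical names, not the ORC's real data structures:

```python
# Illustrative least-recently-used rotation over tools-only hosts.
from collections import OrderedDict

class ToolHostPool:
    """Tracks one account's tools-only hosts; pick() returns the LRU host."""
    def __init__(self):
        self._hosts = OrderedDict()   # host_id -> None, least recent first

    def register(self, host_id):
        self._hosts.setdefault(host_id, None)

    def pick(self):
        if not self._hosts:
            return None               # no tools-only host available
        host_id, _ = self._hosts.popitem(last=False)  # take the LRU host
        self._hosts[host_id] = None   # re-insert as most recently used
        return host_id
```

Each pick rotates through the registered hosts, so tool executions spread evenly across an account's tools-only devices.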

Direct Connect Path

If the tools-only host has Direct Connect enabled, inference hosts can call its ToolsRPC.Execute endpoint directly at port 4242, bypassing the ORC for lower latency.

Enabling Tools-Only Mode

  • Manager UI — Go to your host's Settings page in the DAISI Manager and toggle the "Tools Only" switch.
  • Automatic on Mobile — When a DAISI Host is first run on Android or iOS, it automatically defaults to tools-only mode since mobile devices typically lack the GPU power for AI inference. You can override this in the Manager UI if desired.
  • Settings File — Set Host.ToolsOnly to true in your daisi-settings.json file.
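
The settings-file option above might be expressed as the following fragment of daisi-settings.json. The nested object shape is an assumption inferred from the dotted key Host.ToolsOnly; check your existing file's structure before editing:

```json
{
  "Host": {
    "ToolsOnly": true
  }
}
```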

Full Host vs Tools-Only Host

Capability                Full Host    Tools-Only Host
AI Model Loading          Yes          No — skipped at startup
Model Downloads           Yes          No — saves bandwidth and storage
Tool Loading              Yes          Yes — all tools load normally
Inference Sessions        Yes          No — excluded from routing
Tool Execution Requests   Local only   Yes — from ORC or direct connect
Skill Sync                Yes          Yes

In the DAISI Manager host list, tools-only hosts are marked with a wrench icon next to their name.

Earning Credits

Hosts earn credits through two mechanisms:

Token Processing

Earn credits for every token processed during inference. The more requests you handle, the more you earn.

Uptime Bonuses

Maintain high uptime to unlock bonus tiers: Bronze (90%+, 1.1x), Silver (95%+, 1.2x), Gold (99%+, 1.5x).
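
Putting the two mechanisms together, a simplified earnings calculation might look like this. Only the tier thresholds and multipliers come from the text above; the per-token rate is a made-up placeholder:

```python
# Illustrative credit calculation: token earnings times the uptime bonus.

TIERS = [            # (minimum uptime, multiplier), best tier checked first
    (0.99, 1.5),     # Gold
    (0.95, 1.2),     # Silver
    (0.90, 1.1),     # Bronze
]

def uptime_multiplier(uptime: float) -> float:
    for threshold, mult in TIERS:
        if uptime >= threshold:
            return mult
    return 1.0       # below Bronze: no bonus

def credits(tokens: int, uptime: float, rate: float = 0.001) -> float:
    """Placeholder rate of 0.001 credits/token; actual rates are set by DAISI."""
    return tokens * rate * uptime_multiplier(uptime)
```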