
Get Started with Helix

Helix is an AI agent workspace for real software delivery. It connects your codebase, terminal, model providers, and multi-agent execution into one engineering workflow.

With Helix, you can:

  • Ship larger engineering tasks without losing context
  • Split investigation and implementation work into parallel sub tasks
  • Track agent progress in one visible workflow instead of guessing what happened
  • Move between local projects, remote hosts, and separate workspaces without mixing state

Choose Your Start Path

Use this section to choose how you want to start Helix. If you just want the smoothest daily experience, pick the desktop app.

Path A: Desktop App

Best for daily engineering work with the strongest local file, terminal, and workspace integration.

  1. Download Helix from the Download page
  2. Install and launch the desktop app
  3. Add your first workspace
  4. Configure a model provider
  5. Run your first realistic task

Path B: Web App + Backend

Best for quick evaluation, remote environments, or cases where desktop install is not available.

  1. Download the backend binary from the Download page
  2. Start the backend on your machine or server
  3. Open Helix Web
  4. Connect the web app to your backend URL
  5. If the backend asks for pairing, complete the local pairing flow described in Pairing agentui with aiagent
  6. Configure a model provider and start working
Tip: If you are on Windows or Linux and the desktop app is not yet available, use web mode and install Helix as a PWA for a desktop-like experience.
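Before connecting the web app (step 4), it can help to confirm that the backend is actually answering. Here is a minimal Python sketch; it assumes the default health endpoint `http://localhost:8080/health` mentioned in the troubleshooting notes, so adjust the host and port if you started the backend elsewhere:

```python
import urllib.request
import urllib.error

def check_backend(url: str, timeout: float = 3.0) -> bool:
    """Return True if the backend answers the health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: backend is not reachable.
        return False

if __name__ == "__main__":
    # Default backend address; change this if you bound a different host/port.
    if check_backend("http://localhost:8080/health"):
        print("Backend is up; connect the web app to this URL.")
    else:
        print("Backend not reachable; check that the process is running and the port is open.")
```

If this check fails, fix the backend before touching the web app: a web-side connection error almost always means the backend process is down or the URL/port is wrong.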


10-Minute First Success

In about 10 minutes, you should be able to connect a real repository, run a realistic engineering task, and watch Helix split work across agents with visible progress.

Step 1: Add a workspace

Add the repository you actually want to evaluate.

  • Click to add a workspace
  • Select your project root directory
  • Wait for the workspace to finish loading so code intelligence and tools can operate on real files

Step 2: Configure one model provider

Choose one setup path and set a default model for new chats. Most users should start with Option A. Use Option B if you already have your own API endpoint, gateway, or preferred provider account.

Option A: Use Helix built-in models

  • Sign up and log in to Helix
  • Open the model or settings flow if prompted
  • Select one of the built-in Helix models
  • Set it as the default model for new chats
  • This is the fastest path because no separate API key setup is required

Option B: Configure your own LLM provider

  • Open LLM Configuration from Settings, or open it from the model picker in the top bar
  • In the Providers tab, click Add Provider
  • Fill in the provider fields: ID, Name, Interface Type, Base URL, and API Key
  • Choose the interface type that matches your endpoint: OpenAI-compatible, OpenAI Responses API, or Anthropic
  • Save the provider, then expand its card and click Add Model
  • Fill in the model fields: Model ID, Display Name, Context Window, Max Output Tokens, Temperature, and optionally Description
  • Turn on Supports Thinking for reasoning models when applicable, and leave Supports Tools on unless your endpoint does not accept tool parameters
  • Save the model, then set it as default from the provider card menu, the model row menu, or the Models tab
  • The default model is used automatically for new chats unless you switch models manually
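The fields above can be easier to plan out before you open the form. The sketch below mirrors them as a plain Python structure; note that the key names and values are illustrative assumptions, not Helix's actual configuration format, which is not documented here:

```python
# Hypothetical sketch of the provider and model fields described above.
# Key names simply mirror the UI form labels; they are assumptions.
provider = {
    "id": "my-gateway",                     # Provider ID (unique)
    "name": "My Gateway",                   # Name shown in the provider list
    "interface_type": "openai-compatible",  # or "openai-responses", "anthropic"
    "base_url": "https://llm.example.com/v1",
    "api_key": "sk-REPLACE-ME",             # never commit real keys
}

model = {
    "model_id": "my-coder-model",
    "display_name": "My Coder Model",
    "context_window": 128000,
    "max_output_tokens": 8192,
    "temperature": 0.2,
    "description": "Default model for new chats",  # optional
    "supports_thinking": False,  # turn on for reasoning models
    "supports_tools": True,      # leave on unless the endpoint rejects tool params
}

# Every provider field in the form is required; check before entering them.
required = {"id", "name", "interface_type", "base_url", "api_key"}
assert required <= provider.keys(), "missing provider fields"
```

The main decision is `interface_type`: pick the one that matches what your endpoint actually speaks, since a mismatched interface type is the most common cause of silent request failures against custom gateways.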

Common external options:

  • DeepSeek for cost-effective coding
  • Claude for complex reasoning
  • OpenAI-compatible endpoints for custom gateways and self-hosted routing

Step 3: Run one realistic task

Start with a task that reflects real engineering work instead of a toy prompt.

Try one of these prompts:

Audit this module for bugs, security issues, and refactor opportunities.
Use sub tasks when beneficial and summarize with an execution checklist.

Trace this feature end to end, identify the files that matter, explain the current flow,
and propose the smallest safe change to add rate limiting.

Review this recent change for regressions, edge cases, and missing tests.
If needed, split the work into parallel investigations and summarize the findings.

What success looks like:

  • Helix breaks the task into focused sub tasks when that helps
  • You can see which agent is working on what and how the task is progressing
  • The final answer includes concrete findings, implementation guidance, or code changes instead of generic chat output

Why Helix Feels Different

One task, many agents

Helix keeps one main workflow while letting specialized agents handle focused investigation and execution work underneath it.

Parallel work you can actually follow

Instead of hidden background reasoning, Helix can run multiple sub tasks concurrently and show their status in the UI.

Long sessions without context collapse

Large tool outputs are cached and recalled on demand, and older conversation flow can be compacted to keep long-running work stable.

Clean separation across workspaces

Each workspace keeps its own state, tools, and execution context, which makes switching between projects and environments practical.

Useful for real engineering tasks

Helix is especially effective for bug hunts, code review, refactors, architecture investigation, and multi-step implementation work.


Where to Go Next

Understand the product:

Learn core workflows:

Set up backend and configuration details:


Troubleshooting:

  • macOS app security prompts and binary permissions: Download Help
  • Backend health check: http://localhost:8080/health
  • Web connection failures: verify backend URL, port accessibility, and backend process status
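The "port accessibility" part of the last bullet can be checked without a browser. A small Python sketch, assuming the default backend port 8080 (adjust the host and port to match your setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a raw TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening there.
        return False

if __name__ == "__main__":
    # Default backend port; change this if you started the backend elsewhere.
    print("port 8080 open:", port_open("localhost", 8080))
```

If the port is open but the web app still cannot connect, the remaining suspects are the URL itself (scheme, host, path) and the backend process state, per the bullets above.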

If you need a deeper setup reference, continue with API and Configuration Overview.