Quick Start

This guide walks you through your first OpenOrca session.

Start OpenOrca

Make sure LM Studio is running with a model loaded (see Installation), then:

openorca

You'll see a banner with connection status and the loaded model name. If it shows Connected, you're good.

Your First Prompt

At the > prompt, ask OpenOrca to do something:

> Create a Python script that reads a CSV file and prints the top 5 rows

OpenOrca will:

  1. Plan what to do (you'll see its thinking)
  2. Call tools — create files, run shell commands, etc.
  3. Observe the results of each tool call
  4. Iterate — fix errors, refine, continue until done

Each tool call is shown with the tool name, arguments, and result. Depending on your permission settings, you may be asked to approve certain tools.

The Agent Loop

When you send a message, OpenOrca enters an agent loop that runs up to 25 iterations:

  1. Your message is sent to the LLM along with the full conversation context
  2. The LLM streams a response, potentially including tool calls
  3. Tool calls are executed and results are fed back to the LLM
  4. Repeat until the LLM responds without tool calls (task complete)

If the model gets stuck in a retry loop (same tool failing repeatedly), OpenOrca auto-detects it and stops.
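The loop above can be sketched as follows. This is a hypothetical illustration, not OpenOrca's actual internals: `call_llm` and `run_tool` stand in for the real LLM client and tool executor, and the failure threshold of 3 is an assumption.

```python
MAX_ITERATIONS = 25
STUCK_THRESHOLD = 3  # assumed: consecutive identical failures before aborting

def agent_loop(messages, call_llm, run_tool):
    recent_failures = []
    for _ in range(MAX_ITERATIONS):
        # Steps 1-2: send full context to the LLM, get a (streamed) reply
        reply = call_llm(messages)
        messages.append({"role": "assistant",
                         "content": reply["content"],
                         "tool_calls": reply.get("tool_calls", [])})
        # Step 4: no tool calls means the task is complete
        if not reply.get("tool_calls"):
            return reply["content"]
        # Step 3: execute each tool call and feed the result back
        for call in reply["tool_calls"]:
            result = run_tool(call)
            messages.append({"role": "tool", "content": result["output"]})
            if result.get("error"):
                # Retry-loop detection: the same tool failing repeatedly
                recent_failures.append(call["name"])
                if recent_failures[-STUCK_THRESHOLD:] == [call["name"]] * STUCK_THRESHOLD:
                    return "Aborted: repeated failures of " + call["name"]
            else:
                recent_failures.clear()
    return "Stopped after reaching the iteration limit"
```

The key design point is the termination conditions: a normal exit when the LLM stops requesting tools, a hard cap on iterations, and an early abort when the same tool keeps failing.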

Essential Commands

Command     What It Does
/help       Show all available commands
/clear      Clear conversation and start fresh
/context    Show how much of the context window is used
/compact    Summarize and compress the conversation to free context
/doctor     Run diagnostic checks

See Commands & Shortcuts for the full list.

Keyboard Shortcuts

Shortcut    Action
Ctrl+O      Toggle thinking output visibility (default: hidden)
Ctrl+C      First press: cancel current generation. Second press within 2s: exit

When thinking is hidden, you'll see a token counter while the model generates. Press Ctrl+O to see the full streaming output.
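The two-stage Ctrl+C behavior can be modeled as a small state machine: remember when the last press happened, and exit only if a second press arrives inside the window. This is an assumed mechanism for illustration, not OpenOrca's actual implementation.

```python
import time

EXIT_WINDOW_SECONDS = 2.0  # from the table above

class InterruptHandler:
    def __init__(self):
        self.last_press = None

    def on_ctrl_c(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_press is not None and now - self.last_press <= EXIT_WINDOW_SECONDS:
            return "exit"    # second press within the window: quit
        self.last_press = now
        return "cancel"      # first (or stale) press: cancel current generation
```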

Single-Prompt Mode

For scripting or CI, pass a prompt directly and OpenOrca will run non-interactively:

openorca --prompt "List all .cs files in this project"

CI/CD Usage

Pre-approve tools and get structured JSON output for pipeline integration:

openorca --prompt "List all TODO comments" --allow bash,grep --output json
# Output: {"response":"...","tokens":123}
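A pipeline step can consume this output with any JSON parser. A minimal Python sketch, assuming only the two fields shown in the sample output ("response" and "tokens"):

```python
import json

def parse_result(raw: str):
    """Parse OpenOrca's --output json payload into (response, tokens)."""
    data = json.loads(raw)
    return data["response"], data["tokens"]

# e.g. raw = stdout captured from the openorca command above
response, tokens = parse_result('{"response":"3 TODOs found","tokens":123}')
```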

Resume a Session

Pick up where you left off without navigating session menus:

openorca --continue        # Resume most recent session
openorca --resume abc123   # Resume a specific session by ID

Demo Mode

Run a demo without connecting to an LLM server:

openorca --demo

Next Steps