Aura Workshop v1.3.91 Documentation

Complete reference for Aura Workshop -- a model-agnostic AI agent orchestration platform. It covers everything from first launch through advanced multi-agent teams, automation workflows, and the full REST API.

First Launch

When you open Aura Workshop for the first time, the app creates a local SQLite database at:

All settings, conversations, team definitions, workflows, listener configurations, scheduled tasks, credentials, and billing data are stored in this single database file.

On first launch the app also:

Prerequisites

| Requirement | Details |
| --- | --- |
| macOS | macOS 12+ (Monterey or later), Apple Silicon or Intel |
| Windows | Windows 10 (1809) or later, x64 |
| Linux | Ubuntu 20.04+, Debian 11+, or equivalent with webkit2gtk installed |
| Docker | Optional. Required only for Docker execution mode. Native mode works without it. |

Models & Providers (Quick Setup)

To start using Aura Workshop you need at least one LLM provider configured. You can use cloud APIs, local inference, or both.

  1. Navigate to the MODELS page using the sidebar navigation.
  2. Click a provider card (Anthropic, OpenAI, Google, DeepSeek, etc.) to expand it.
  3. Enter your API key in the key field. The key is encrypted with AES-256-GCM before being stored in the database.
  4. Select a model from the dropdown or type a custom model ID.
  5. Optionally adjust sampling parameters: temperature, top-p, max tokens.
  6. The base URL is auto-filled based on the provider preset but can be overridden for custom endpoints.

Each provider stores its API key independently. When you switch between providers, the app loads the previously saved key for that provider.

Local providers (Ollama, Aura AI) do not require an API key. Select one and ensure the corresponding inference server is running locally. See Aura AI and Ollama for setup details.

Quick Model Picker

A model selector in the top-right corner of the app lets you quickly switch between configured providers and models without opening the full MODELS page. Click it to see all available models grouped by provider, and select one to switch immediately. The active model is captured per-task at launch time, so you can switch models between tasks without affecting running ones.

Web UI (Browser Access)

Aura Workshop includes an embedded axum HTTP server that serves the full SolidJS UI to any web browser. This enables deployment on headless Linux servers and remote access from any device on the network.

Accessing the Web UI

The Web UI server auto-starts on port 18800 by default. Open your browser and navigate to:

http://<machine-ip>:18800

The actual machine IP address is displayed in Settings > Connectivity > Web UI Server.

Configuration

Configure the Web UI server in Settings > Connectivity:

| Setting | Description | Default |
| --- | --- | --- |
| Enabled | Toggle the Web UI server on/off | On |
| Port | HTTP port for the web server | 18800 |
| Auth Token | Optional Bearer token for authentication | Empty (no auth) |

Authentication

When an auth token is configured, all requests to the Web UI must include it:
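
As a sketch of what that looks like from a client, the token travels in the `Authorization` header as a Bearer credential. The example IP and the `/api/health` path below are hypothetical, used only to show the header shape:

```python
import urllib.request

def authed_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build a request carrying the Web UI auth token as a Bearer header."""
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Hypothetical host and endpoint, for illustration only:
req = authed_request("http://192.168.1.50:18800", "/api/health", "my-secret-token")
print(req.get_header("Authorization"))  # Bearer my-secret-token
```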

How It Works

The browser-based UI uses the same SolidJS frontend with a web transport layer instead of Tauri IPC:

All features work identically in browser mode: agent tasks, chat, settings, listeners, webhooks, schedules, skills, MCP servers, and teams.

Headless Server Usage

For headless Linux servers without a desktop environment:

  1. Install the .deb or .AppImage package
  2. Launch with a virtual display: xvfb-run aura-workshop
  3. Access the full UI at http://<server-ip>:18800
  4. Configure API keys, models, and all settings through the browser

Unified Task Interface

Aura Workshop uses a single input box for everything. There are no separate "Chat" and "Agent" modes to switch between. Type anything -- a quick question, a complex project brief, or an automation request -- and the system automatically determines the right execution path.

Key principle: One input. Auto-classified. No mode switching required.

Input Field

The prompt input is at the bottom of the main panel. It reads: "Describe what you want to build..."

Below the input field, a toolbar provides additional controls:

Category Prompt Cards

When you start a new task, category prompt cards appear above the input field to help you get started quickly:

| Category | Example Tasks |
| --- | --- |
| Software Dev | Build APIs, create apps, write scripts, debug code |
| Marketing | Campaign copy, social media content, SEO analysis |
| Finance | Financial models, budget reports, forecasting |
| Legal | Contract review, compliance checklists, policy drafts |
| Research | Literature review, competitive analysis, data synthesis |
| Design | UI mockups, brand guidelines, wireframes |
| Operations | Process documentation, SOPs, workflow automation |
| Education | Course outlines, lesson plans, quiz generation |

Clicking a card pre-fills the input with a relevant prompt template that you can customize before sending.

Running Multiple Tasks

Multiple tasks can run simultaneously, each potentially using a different model or provider. Start a new task while another is running -- each gets its own cancel token and event stream. Switch between active tasks in the sidebar to monitor progress. The model active at the time each task was launched is captured and used for that task's duration.

Task Classification

When you send a prompt, Aura's classification system analyzes it and routes to the right execution path automatically:

| Classification | What Happens | Example |
| --- | --- | --- |
| CHAT | Conversational response, no tools | Quick questions, brainstorming, explanations |
| SINGLE | One agent handles it directly with tools | Code scripts, file edits, web research |
| CLARIFY | Asks clarifying questions, pauses for your answer | Vague requests like "help me with my project" |
| TEAM:N | Uses an existing team's workflow | Routes to Software Dev Team, Content Writing Team, etc. |
| WORKFLOW:N | Reuses a previously saved workflow | Same prompt pattern as a prior task |
| NEW | Creates a new team with specialized roles | Complex tasks needing multiple specialists |

You don't need to pick a team or workflow manually -- just describe what you want and the system figures out the best approach.

How Classification Works Internally

The classify_task() function sends a one-shot LLM call that analyzes the prompt and returns the classification type, team name (if applicable), workflow type, and role specifications. For NEW tasks, the classifier also generates complete role definitions on the fly. The classification happens at the application level -- no special model cooperation is required.
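
The labels themselves (CHAT, SINGLE, CLARIFY, TEAM:N, WORKFLOW:N, NEW) come from the table above; how the application splits a label like `TEAM:3` into its parts is not documented, so the parsing below is an illustrative sketch, not the real implementation:

```python
def parse_classification(label: str):
    """Split a classification label like 'TEAM:3' into its kind and
    optional numeric index. Illustrative sketch only."""
    kind, _, suffix = label.partition(":")
    index = int(suffix) if suffix.isdigit() else None
    return kind, index

print(parse_classification("TEAM:3"))  # ('TEAM', 3)
print(parse_classification("CHAT"))    # ('CHAT', None)
```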

Mounted Folders

The mount folder button (folder icon next to Send) tells the agent where to work on your filesystem. Always mount a folder when you expect the agent to create or modify files.

How to Use

  1. Click the folder icon in the input toolbar.
  2. A native file dialog opens. Select one or more directories.
  3. Selected folders appear as a "MOUNTED FOLDERS" badge above the input field.
  4. When you send a message, the selected paths are passed as the project_path to the agent.
  5. In Docker mode, these paths are bind-mounted into the container at /workspace.
  6. In native mode, the first path is used as the working directory for bash commands.
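
Step 5 can be pictured as building Docker bind-mount arguments from the selected paths. The doc only states that paths are mounted at /workspace; mounting each folder under /workspace/<name> and the `-v` flag shape are assumptions for illustration:

```python
def docker_mount_args(mounted_paths):
    """Sketch: turn mounted folders into Docker -v bind-mount arguments.

    Mapping each folder to /workspace/<folder-name> is an assumption;
    only the /workspace mount point is stated in the documentation.
    """
    args = []
    for path in mounted_paths:
        name = path.rstrip("/").split("/")[-1]
        args += ["-v", f"{path}:/workspace/{name}"]
    return args

print(docker_mount_args(["/home/me/todo-api"]))
# ['-v', '/home/me/todo-api:/workspace/todo-api']
```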

When to Mount

| Scenario | Mount folder? |
| --- | --- |
| "Build me a new project" | Yes -- mount where you want the project created |
| "Read this codebase and refactor it" | Yes -- mount the project root |
| "What's the capital of France?" | No -- no file access needed |
| Running a team task | Yes -- mount the output directory so all roles write there |

Important Notes

Sample Prompts (Validated Test Scenarios)

These prompts have been tested end-to-end and demonstrate Aura's core capabilities.

1. Simple Question (CHAT)

What are the three laws of thermodynamics? One sentence each.

What happens: Classified as CHAT. Conversational response, no tools invoked. Status: completed.

2. Code Generation (SINGLE)

Write a Python function called is_palindrome that checks if a string reads the same forwards and backwards. Save it to palindrome.py

What happens: Single agent writes the code, saves the file, optionally runs tests. You'll see write_file and bash tool calls in the sidebar.

3. Clarification (CLARIFY)

Help me with my project

What happens: The system detects the request is too vague. Instead of guessing, it asks 3-5 specific clarifying questions. Task status shows "Needs Response" in amber. Reply with details and the system re-classifies.

4. Team Workflow -- Software Development (TEAM)

Build a full REST API for a todo list app with CRUD endpoints, database schema, and unit tests

What happens: Routes to the Software Dev Team (5 roles: Product Manager, Architect, Developer, QA Engineer, DevOps). The PM may ask clarifying questions -- the workflow pauses until you answer. The Workflow Progress panel shows each role's status.

5. New Team Creation (NEW)

Design and build a data ingestion pipeline that reads CSV files, validates the data, transforms it, and loads it into PostgreSQL

What happens: No existing team matches, so the classification creates a new pipeline team with custom roles. The team is saved for future reuse.

6. Scheduled Task (SCHEDULE)

Every Monday at 9am, compile a summary of all git commits from the past week and email it to [email protected]

What happens: A schedule is created and appears in the AUTO tab. The agent runs the task immediately as a first execution, then fires automatically at the configured time.

7. Parallel Translation (NEW + Fan-Out)

Translate this product documentation into Spanish, French, and German simultaneously, then have each reviewed by a native speaker

What happens: Creates a Localization Team with parallel workflow -- translators run simultaneously, then reviewers check each language. The Workflow Progress panel shows parallel nodes running at the same time.

How Orchestration Works

Aura Workshop has a built-in AI orchestration engine. Instead of doing everything as a single agent, the system can automatically route complex tasks to multi-agent teams, create automation workflows, set up scheduled tasks, and wire triggers -- all from natural language prompts.

Routing Logic

| What you ask | What happens | Example |
| --- | --- | --- |
| Simple one-off task | Single agent handles it directly | "What time is it in Tokyo?" |
| Complex multi-step project | Auto-routes to a multi-agent team | "Build me a REST API for a todo app" |
| Recurring task | Creates a scheduled task | "Every morning at 8am, check my website uptime" |
| Event-driven automation | Creates a workflow with triggers | "When a GitHub webhook fires, run tests and deploy" |
| Messaging automation | Creates a listener | "Set up a WhatsApp bot that answers pricing questions" |

Agent Loop

The agent operates in a loop of up to 50 turns (configurable). Each turn:

  1. Sends the conversation history and available tools to the LLM.
  2. Receives a response that may contain text and/or tool calls.
  3. Executes any tool calls (file reads, writes, bash commands, etc.).
  4. Feeds tool results back to the LLM for the next turn.

The loop ends when the LLM responds with only text (no tool calls), or the maximum turn count is reached.
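
The four steps above can be sketched as a simple loop. The `llm` and `execute_tool` interfaces below are stand-ins invented for illustration, not Aura Workshop's actual internal API:

```python
def run_agent_loop(llm, execute_tool, history, max_turns=50):
    """Minimal sketch of the agent loop; interfaces are assumptions."""
    for _ in range(max_turns):
        response = llm(history)                  # 1. send history (and tools)
        history.append(("assistant", response))  # 2. response: text and/or tool calls
        if not response.get("tool_calls"):       # loop ends on a text-only reply
            return response.get("text", "")
        for call in response["tool_calls"]:      # 3. execute each tool call
            history.append(("tool", execute_tool(call)))  # 4. feed results back
    return "(max turn count reached)"

# Fake single-tool run to show the shape of the loop:
replies = [
    {"tool_calls": [{"tool": "read_file", "args": {"path": "README.md"}}]},
    {"tool_calls": [], "text": "done"},
]
result = run_agent_loop(lambda h: replies.pop(0), lambda call: "file contents", [])
print(result)  # done
```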

Multi-Agent Teams

Teams define multiple AI roles that work together. Each role runs as a separate agent with its own system prompt, and the workflow engine manages execution order, parallel processing, and data passing between roles.

Default Teams

Software Dev Team (5 roles, fan-out enabled):

Content Writing Team (3 roles, sequential):

Using Teams

Automatic -- just describe what you need. Complex tasks auto-route to the matching team:

Build me a Python CLI tool for managing bookmarks

Manual via Settings -- create or edit teams in Settings > Teams with the visual workflow editor.

Via natural language -- ask the agent to create a team:

Create a translation team with a Localization Manager, Translator with fan-out, and Cultural Reviewer

Creating and Editing Teams

  1. Open Settings > Teams and click Create Team.
  2. Add roles -- each role needs a name and a system prompt.
  3. Choose a workflow type: Sequential or Pipeline (with validation gates).
  4. Use the Workflow Editor to customize: add Script, Webhook, Validate, or Approval Gate steps between roles.
  5. Enable Fan-Out on any role by double-clicking the node and checking "Enable Fan-Out".
  6. Import/Export -- click Import to load a team JSON, or Export on any team to download it.

Workflow Progress Panel

When a team runs, the right panel shows:

Role Library (20 Built-in Roles)

Aura Workshop ships with 20 built-in roles organized into 6 categories, providing ready-to-use specialist agents for common tasks. You can also create unlimited custom roles.

Built-in Role Categories

| Category | Roles |
| --- | --- |
| Software | Software Developer, QA Engineer, DevOps Engineer, Architect |
| Content | Technical Writer, Copywriter, Editor, Research Lead |
| Business | Product Manager, Project Manager, Scrum Master |
| Design | UX Designer, UI Designer, Brand Designer |
| Data | Data Engineer, Data Analyst, ML Engineer |
| Operations | Security Analyst, SRE, Support Engineer |

Each built-in role includes a curated system prompt with domain-specific instructions, tool preferences, and output format guidelines.

Managing Roles in Settings

Open Settings → Roles to manage all roles:

Tool Checkboxes

Each role has checkboxes controlling which tools it can access. Default tools include read_file, list_dir, glob, grep, and role_complete. You can enable or disable individual tools per role to enforce constraints (for example, giving a Research Lead read-only access by disabling write_file and bash).

Custom Roles

Custom roles are saved to ~/.aura/roles/ as .md files with YAML frontmatter containing the role name, category, description, and tool list. These files can be version-controlled, shared between machines, or manually edited.

Auto-Created Roles

When the task classifier creates a new team (classification type NEW), it generates role definitions on the fly. These auto-created roles are saved to the Role Library automatically so they can be reused in future teams.

Structured Handoff

When agents in a team workflow complete their role, they pass structured handoff data to downstream agents using the role_complete tool. This ensures critical context is not lost between workflow steps.

Handoff Data Structure

Each handoff includes four structured fields:

How It Works

When a role finishes, the agent calls role_complete with the handoff payload. The workflow engine:

  1. Saves the handoff data to the task record.
  2. Injects the handoff into the system prompt of the next role in the workflow.
  3. Marks the upstream role node as completed in the Workflow Progress panel.
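
The three engine steps can be sketched as follows. The payload shape, field names, and function signature here are invented for illustration; the documentation only specifies that `role_complete` carries structured handoff data:

```python
def role_complete(task, next_role, handoff: dict):
    """Sketch of the workflow engine's three steps; data shapes are
    illustrative, not Aura Workshop's actual schema."""
    task["handoffs"].append(handoff)                 # 1. save to the task record
    next_role["system_prompt"] += (                  # 2. inject into the next role
        "\n\n## Handoff from previous role\n" + str(handoff)
    )
    task["completed_roles"].append(handoff["role"])  # 3. mark the node completed
    return task

task = {"handoffs": [], "completed_roles": []}
next_role = {"system_prompt": "You are the Developer."}
role_complete(task, next_role, {"role": "Architect", "summary": "Use SQLite"})
print(task["completed_roles"])  # ['Architect']
```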

Fan-Out (Parallel Agents)

Fan-Out lets a role automatically spawn multiple agents in parallel -- one per item from an upstream role's output list.

How to enable: Double-click a role node in the Workflow Editor → check "Enable Fan-Out" → set Source Node and Max Parallel Agents.

How it works: The source role produces a numbered list. The fan-out executor detects the list, splits it, and spawns one agent per item. Results are merged for the next role.

| Team Type | Source Role Produces | Fan-Out Role Does |
| --- | --- | --- |
| Software Dev | Architect lists tasks | One developer per task |
| Content Writing | Research Lead lists sections | One writer per section |
| Research | Lead lists questions | One researcher per question |
| Translation | Manager lists languages | One translator per language |
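
The split step can be sketched as detecting numbered lines in the source role's output, one work item per line. Matching the `1. item` format with a regex is an assumption about the list format; the documentation says only that the source role produces a numbered list:

```python
import re

def split_numbered_list(text):
    """Sketch of the fan-out split: one work item per numbered line."""
    items = []
    for line in text.splitlines():
        m = re.match(r"\s*\d+[.)]\s+(.*)", line)
        if m:
            items.append(m.group(1).strip())
    return items

plan = "1. Implement endpoints\n2. Write the schema\n3. Add unit tests"
print(split_numbered_list(plan))
# ['Implement endpoints', 'Write the schema', 'Add unit tests']
```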

Role Guardrails

Every agent in a team workflow automatically receives system-enforced rules:

These guardrails are injected at the executor level and apply to every team, including user-created ones. They prevent agents from overstepping their responsibilities and ensure the workflow progresses cleanly.

Workflow Pause & Resume

When the first agent in a team workflow (typically the PM) asks clarifying questions, the workflow pauses automatically:

  1. The PM asks questions (e.g., "What database do you prefer?")
  2. Task status changes to "Needs Response"
  3. You type your answer in the input box
  4. The workflow resumes from where it paused, feeding your answer to the next agents

This ensures you get exactly the product you want instead of the agent guessing.

Cross-Session Memory

Aura Workshop maintains persistent memory files that survive across sessions, allowing the agent to recall user preferences, project context, and prior feedback without being told again.

Memory Storage

Memory files are stored as markdown in two locations:

| Memory Type | File Pattern | Purpose |
| --- | --- | --- |
| User Preferences | user_*.md | Coding style, language preferences, naming conventions |
| Feedback | feedback_*.md | Corrections and preferences learned from user feedback |
| Project | project_*.md | Project-specific architecture, dependencies, conventions |
| Reference | reference_*.md | External system details, API endpoints, credentials notes |

Auto-Extraction

After a task completes, the agent automatically analyzes the conversation for memorable information. If it detects user preferences, corrections, or important project context, it writes a new memory file (or updates an existing one) without being asked. A memory index file (MEMORY.md) is maintained with a summary of all stored memories.

System Prompt Injection

At the start of every agent session, all memory files are read and injected into the system prompt. This means the agent "remembers" everything it has learned across previous sessions. Memories survive context compaction -- they are re-injected even after the conversation is compressed.

Context Compression

Long-running agent sessions can exceed the model's context window. Aura Workshop automatically compresses the conversation to stay within limits while preserving critical information.

How Compaction Works

  1. Threshold detection -- Before each API call, the system estimates the current context size. When it reaches 80% of the model's context window, compaction triggers automatically.
  2. LLM summarization -- The conversation history is sent to the model with a summarization prompt. The model produces a compressed summary that preserves key decisions, file paths, tool results, and task state.
  3. Context replacement -- The compressed summary replaces the full conversation history. Only the summary and the most recent messages are kept.
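
The threshold check in step 1 can be sketched as below. The ~4-characters-per-token estimate is a common rule of thumb, not the exact heuristic Aura Workshop uses; only the 80% trigger point comes from the documentation:

```python
def should_compact(messages, context_window_tokens, threshold=0.8):
    """Step 1 sketch: estimate context size, trigger at 80% of the window.

    The chars/4 token estimate is an assumption for illustration.
    """
    estimated_tokens = sum(len(m) for m in messages) / 4
    return estimated_tokens >= threshold * context_window_tokens

msgs = ["x" * 4000] * 7            # roughly 7000 estimated tokens
print(should_compact(msgs, 8192))  # True: 7000 >= 0.8 * 8192 (~6554)
```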

Truncation Fallback

If the LLM summarization call itself fails (e.g., the context is already too large to summarize), the system falls back to a simple truncation strategy: it keeps the system prompt, the most recent N messages, and drops the oldest messages.

Circuit Breaker

If compaction fails 3 times in a row, the circuit breaker activates and stops attempting further compaction for the remainder of the session. This prevents infinite retry loops.

Parallel Task Execution

Aura Workshop supports running multiple agent tasks simultaneously, each potentially using a different model or provider.

You can stop tasks individually (click the stop button on a specific task) or stop all running tasks at once.

Crash-Safe Transcripts

Aura Workshop persists the full agent transcript to the database before every API call. If the app crashes, the machine loses power, or the process is killed, the conversation is recoverable up to the last completed turn.

  1. Before each LLM API call, the current conversation history (including all tool calls and results so far) is written to the task_messages table in the SQLite database.
  2. When you relaunch the app, interrupted tasks appear in the sidebar with a "Resume" indicator.
  3. Clicking Resume reloads the persisted transcript and continues the agent loop from where it left off.

You never lose work due to unexpected shutdowns. The agent picks up exactly where it stopped, with full context.

Schedules & Listeners

The AUTO tab manages all automation triggers.

Schedules (Cron-Based)

Schedules run tasks at recurring intervals using cron expressions.
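
For example, the sample prompt "Every Monday at 9am" corresponds to the cron expression `0 9 * * 1`. The minimal matcher below is a sketch to illustrate the five-field format (minute, hour, day-of-month, month, day-of-week); it supports only `*` and single numbers, unlike a real cron engine:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Minimal sketch of five-field cron matching ('*' and numbers only)."""
    minute, hour, dom, month, dow = expr.split()
    fields = [
        (minute, dt.minute),
        (hour, dt.hour),
        (dom, dt.day),
        (month, dt.month),
        (dow, (dt.weekday() + 1) % 7),  # cron convention: 0 = Sunday
    ]
    return all(f == "*" or int(f) == value for f, value in fields)

# "Every Monday at 9am" -> "0 9 * * 1"; 2024-01-01 was a Monday.
print(cron_matches("0 9 * * 1", datetime(2024, 1, 1, 9, 0)))  # True
```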

Listeners (Event-Driven)

Listeners watch for incoming messages from external platforms and trigger agent responses.

Each listener can optionally enable "Agent Tools" for full tool access, or run in chat-only mode.

Automation Workflows

Automation workflows are pipelines that orchestrate triggers, conditions, scripts, webhooks, and teams. Unlike teams (which are multi-agent role-based), workflows handle the plumbing: when to run, what data to route, which conditions to check.

Creating Workflows

Via natural language -- describe the automation you need:

Every morning at 9am, check our server health. If anything is down, have the incident team diagnose and email me a report.

Via Settings -- go to Settings > Workflows > Create Workflow. Use the visual editor to add nodes, connect them, and configure each one.

Via Import -- click Import on the Workflows tab to load a workflow JSON file.

Node Types

| Node | Type | Description |
| --- | --- | --- |
| Agent Task | agent-task | LLM agent with tools |
| Team | team | Runs a saved multi-agent team as a step |
| Script | script | Runs bash, Python, Node.js, or Go code |
| Webhook | webhook | HTTP request (GET/POST/PUT/PATCH/DELETE) |
| Conditional | conditional | IF/ELSE branching based on expressions |
| Transform | transform | Data manipulation via JS/Python expression |
| Fan-Out | fan-out | Splits a list into parallel executions |
| Merge | merge | Combines results from parallel branches |
| Delay | delay | Waits a specified duration before continuing |
| Validate | validate | LLM quality check on a previous node's output |
| Approval Gate | human-in-the-loop | Pauses for human approval |

Conditional Expressions

The conditional node evaluates expressions against workflow data:
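
The expression language is not specified in this section, so as a purely hypothetical sketch, assume simple `field OP literal` comparisons evaluated against the workflow's data dictionary:

```python
import operator

OPS = {"==": operator.eq, "!=": operator.ne, ">": operator.gt, "<": operator.lt}

def evaluate(expr: str, data: dict) -> bool:
    """Evaluate a 'key OP literal' expression against workflow data.

    The syntax here is an invented illustration, not the real conditional
    node's expression language.
    """
    for op_text, op in OPS.items():
        if f" {op_text} " in expr:
            key, literal = [p.strip() for p in expr.split(f" {op_text} ", 1)]
            value = data.get(key)
            try:
                # Compare numerically when both sides look numeric.
                return op(float(value), float(literal))
            except (TypeError, ValueError):
                return op(str(value), literal.strip("\"'"))
    raise ValueError(f"unsupported expression: {expr}")

print(evaluate("status == 'down'", {"status": "down"}))  # True
print(evaluate("error_count > 3", {"error_count": 5}))   # True
```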

Workflow Templates

Pre-built templates are available for common use cases:

| Template | Fan-Out | Use Case |
| --- | --- | --- |
| Marketing Campaign | Copywriter per channel | Multi-channel campaigns |
| Competitive Analysis | Researcher per competitor | Market research |
| Course Creator | Lesson Writer per lesson | Educational content |
| Data Analysis | Collector per source | Analytics and reporting |
| Translation | Translator per language | Localization |
| Code Migration | Migrator per module | Codebase conversion |
| Proposal Writer | Section Writer per section | RFP responses |

Import any template via Settings > Workflows > Import.

Cloud Providers

Aura Workshop supports 9 cloud LLM providers out of the box. Each is represented as an expandable card on the MODELS page.

| Provider | API Format | Description |
| --- | --- | --- |
| Anthropic | Native Messages API | Claude models (Opus, Sonnet, Haiku) -- native streaming with exact token counts |
| OpenAI | Chat Completions | GPT-4o, GPT-4, o1/o3 series |
| Google | Gemini API | Gemini models -- native Gemini API format |
| DeepSeek | OpenAI-compatible | DeepSeek Chat and reasoning models |
| OpenRouter | OpenAI-compatible | Aggregator providing access to hundreds of models through one API key |
| Together | OpenAI-compatible | Open-source models hosted on Together AI infrastructure |
| Groq | OpenAI-compatible | Ultra-fast inference on custom LPU hardware |
| SiliconFlow | OpenAI-compatible | Cost-effective model hosting |
| Fireworks | OpenAI-compatible | Fast inference platform for open-source and fine-tuned models |

Provider Card Details

Click any provider card to expand it and configure:

Aura AI (Local GGUF Inference)

Aura AI is a built-in inference engine bundled with Aura Workshop. It runs GGUF models locally without requiring any external software.

How It Works

The aura-inference binary (Go-based, ~31MB) is bundled as a Tauri sidecar. It provides an OpenAI-compatible API on localhost for serving quantized GGUF models.

Model Discovery

Click Scan Models to discover GGUF files in:

Discovered models appear in a selectable list with file size and quantization level.

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| Context Length | Maximum context window size in tokens | 4096 |
| GPU Layers | Number of layers to offload to GPU (Metal on macOS, CUDA on Windows/Linux) | Auto |
| Quantization | Model quantization level (Q4_K_M, Q5_K_M, Q8_0, etc.) | From model file |
| Batch Size | Tokens processed per batch during inference | 512 |
| KV Cache Type | Key-value cache precision (f16, q8_0, q4_0) | f16 |

Launching the Server

  1. Select a GGUF model from the scanned list.
  2. Configure inference parameters.
  3. Click Launch Server.
  4. The app auto-configures the provider to aura-ai, sets the base URL to localhost, and selects the model.
  5. The server process is managed by the app and automatically killed when the window closes.

Zero API cost -- runs entirely on your hardware.

Ollama

Ollama is auto-detected when running on the local machine. The Ollama card shows all locally available models and lets you pull new ones directly from the Ollama registry. No API key required.

Setup

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.1

# Ollama runs automatically on localhost:11434
# Aura Workshop auto-detects it

Configuration

| Setting | Value |
| --- | --- |
| Base URL | http://localhost:11434/v1 |
| API Key | Leave empty |
| Model | Any Ollama model name (e.g., llama3.1, qwen2.5, deepseek-r1) |

Ollama models run entirely locally with zero API costs. They serve as the final fallback in the provider fallback chain when all cloud provider spend limits are reached.

HuggingFace Downloader

The MODELS page includes a HuggingFace model downloader with:

Downloaded models appear automatically in the Aura AI model scanner and can be launched immediately.

Custom Providers

The Custom Provider Manager lets you save and quickly switch between custom LLM provider configurations.

Adding a Custom Provider

  1. Open Settings and navigate to the provider configuration area.
  2. Click Add Custom Provider.
  3. Fill in the configuration:
    • Name -- A display name for this provider (e.g., "My vLLM Server").
    • Base URL -- The endpoint URL (e.g., http://192.168.1.100:8000/v1).
    • Model ID -- The model identifier the server expects.
    • API Key -- Optional, depending on the server's auth requirements.
  4. Save the configuration.

Saved custom providers appear in the model selector dropdown alongside built-in providers. Supports any OpenAI-compatible endpoint: vLLM, TGI, SGLang, LocalAI, LiteLLM, and more.

Agent Tools

Agents have access to a comprehensive set of tools for interacting with the filesystem, executing commands, and managing platform resources.

Core Tools (Native Mode)

| Tool | Description |
| --- | --- |
| read_file | Read file contents |
| write_file | Create or overwrite a file |
| edit_file | Make targeted edits to a file |
| bash | Execute shell commands on the host |
| glob | Find files by pattern |
| grep | Search file contents with regex |
| list_dir | List directory contents |
| web_fetch | Fetch a web page and return clean text |

Docker Tools

Available when Docker is installed and Docker mode is enabled:

| Tool | Description |
| --- | --- |
| docker_run | Run commands in Docker containers |
| docker_list | List running containers |
| docker_images | List available images |

Device Tools (Opt-in via Settings)

| Tool | Description |
| --- | --- |
| system_notify | Send a system notification |
| screen_capture | Capture a screenshot (macOS) |
| camera_capture | Take a photo via webcam (macOS, requires imagesnap) |

Platform Tools (53 tools)

The agent has full CRUD operations for all platform resources:

| Category | Count | Examples |
| --- | --- | --- |
| Listeners | 8 | platform_create_listener, platform_start_listener, platform_stop_listener |
| Webhooks | 7 | platform_create_webhook, platform_start_webhook, platform_delete_webhook |
| Schedules | 7 | platform_create_schedule, platform_start_schedule, platform_edit_schedule |
| Skills | 5 | platform_create_skill, platform_list_skills, platform_edit_skill |
| MCP Servers | 5 | platform_connect_mcp, platform_list_mcp, platform_disconnect_mcp |
| Teams | 6 | platform_create_team, platform_run_team_task, platform_edit_team |
| Workflows | 4 | platform_create_workflow, platform_run_workflow, platform_delete_workflow |
| Credentials | 5 | platform_store_credential, platform_get_credential, platform_delete_credential |
| Settings | 2 | platform_get_settings, platform_update_settings |
| Orchestration | 4 | role_complete, classify_task, and others |

Security Notes

Skills System

Skills are structured instruction sets that guide the agent when performing specific types of tasks.

How Skills Work

  1. Skills are stored in the skills directory: ~/Library/Application Support/aura-workshop/skills/ (macOS) or %APPDATA%\aura-workshop\skills\ (Windows).
  2. Each skill is a folder containing a SKILL.md file with YAML frontmatter (name, description) and markdown instructions.
  3. When the agent starts, all available skills are listed in the system prompt.
  4. When a user request matches a skill, the agent reads the skill's SKILL.md file and follows its instructions.
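
A SKILL.md file can be parsed by splitting the frontmatter block from the markdown body. The sketch below uses a hand-rolled `key: value` parser to stay dependency-free (a real implementation would use a YAML library), and the skill name and content are invented examples:

```python
def parse_skill_md(text: str):
    """Parse SKILL.md: YAML frontmatter (name, description) plus markdown body."""
    if not text.startswith("---"):
        return {}, text
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.lstrip()

# Hypothetical skill file, for illustration:
skill = """---
name: pdf-report
description: Generate PDF reports from markdown
---
## Instructions
1. Convert the markdown to HTML...
"""
meta, body = parse_skill_md(skill)
print(meta["name"])  # pdf-report
```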

Bundled Skills

Document and creative skills:

Development workflow skills (superpowers):

Platform integration:

Adding Custom Skills

  1. Click "+ Add Skill" in the Skills panel to import a skill folder from your computer.
  2. Alternatively, create a folder in the skills directory with a SKILL.md file.
  3. The SKILL.md must have YAML frontmatter with name and description fields.

Each skill in Settings > Skills has an Edit button for modifying the SKILL.md content. Changes take effect on the next agent task.

MCP Servers

The Model Context Protocol (MCP) allows Aura Workshop to connect to external tool servers, extending the agent's capabilities beyond built-in tools.

Adding an MCP Server

  1. Open Settings > MCPs.
  2. Click Add Server.
  3. Choose a transport type:
    • HTTP: provide the server URL (e.g., http://localhost:3000/mcp).
    • stdio: provide the command and arguments to spawn the server process (e.g., command npx, args @playwright/mcp).
  4. Optionally configure OAuth credentials if the server requires authentication.
  5. Toggle the server to enabled.
  6. The app connects and discovers available tools. These tools appear in the /tools listing with an mcp_ prefix.

Playwright Browser Automation

Add an MCP server with transport stdio, command npx, args @playwright/mcp. The agent can then browse the web, interact with pages, take screenshots, and extract content. MCP tool results are truncated to 8000 characters to prevent context overflow.

Browser-Use MCP (Bundled)

Browser-Use is bundled as a backup browser automation MCP server alongside Playwright. It is auto-connected on startup with no setup required. The Python virtual environment and all dependencies are bundled with the installer.

Custom MCP Servers

Any server implementing the MCP protocol can be added. Tool names are formatted as mcp_{server_id}_{tool_name} with hyphens and colons replaced by underscores.
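
That naming rule can be expressed directly in code. The formatting pattern comes from the documentation; the server and tool names in the example are hypothetical:

```python
def mcp_tool_name(server_id: str, tool_name: str) -> str:
    """Format an MCP tool name as mcp_{server_id}_{tool_name},
    replacing hyphens and colons with underscores."""
    raw = f"mcp_{server_id}_{tool_name}"
    return raw.replace("-", "_").replace(":", "_")

# Hypothetical server/tool names, for illustration:
print(mcp_tool_name("playwright-mcp", "browser:click"))
# mcp_playwright_mcp_browser_click
```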

Settings: General

The General tab controls core execution and interaction preferences.

Settings: Security & License

Settings: Billing & Spend Tracking

Complete cost management for all LLM API usage.

Usage Dashboard

Spend Limits

Set daily and monthly spending caps per configured provider. When a provider hits its limit, Aura automatically switches to the next available provider in your fallback list.

Provider Fallback Order

Configure a priority-ordered list of backup providers. When the primary provider hits its spend limit, Aura switches mid-session with a notification. Local models (Aura AI, Ollama) serve as zero-cost final fallbacks.
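
The selection logic can be sketched as walking the fallback order and picking the first provider still under its spend limit. Treating providers without a configured limit (such as local models) as always eligible is an assumption that matches their role as zero-cost final fallbacks; the data shapes below are illustrative:

```python
def pick_provider(fallback_order, spend, limits):
    """Sketch: first provider in the fallback order still under its limit.

    Providers with no configured limit (e.g. local Ollama) always qualify.
    """
    for provider in fallback_order:
        limit = limits.get(provider)
        if limit is None or spend.get(provider, 0.0) < limit:
            return provider
    return None

order = ["anthropic", "openai", "ollama"]
spend = {"anthropic": 25.0, "openai": 4.0}
limits = {"anthropic": 25.0, "openai": 10.0}  # ollama: no limit (local)
print(pick_provider(order, spend, limits))    # openai
```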

Model Pricing

Editable pricing table with input/output cost per million tokens for every model. Keep these updated with your provider's current rates for accurate cost tracking.

How Token Tracking Works

Settings: Connectivity (Web Server)

Configure the embedded Web UI server for browser-based access.

| Setting | Description | Default |
| --- | --- | --- |
| Web Server Enabled | Toggle the embedded HTTP server on/off | On |
| Port | HTTP port for the web server | 18800 |
| Auth Token | Optional Bearer token for remote access authentication | Empty (no auth) |

The machine's IP address is displayed here for easy reference when connecting from other devices.

Settings: Skills

Settings: MCPs

Settings: Data Management

Database and data management operations:

| Action | Description |
| --- | --- |
| Clear History | Delete all conversation and task history while keeping settings |
| Reset API Keys | Remove all stored provider API keys |
| Reset Database | Drop and recreate the entire database (keeps the app installed) |
| Factory Reset | Full reset: database, settings, skills, memory files -- returns the app to first-launch state |
| Diagnostics | Run system diagnostics to check database integrity, disk space, provider connectivity, and sidecar status |

Settings: Teams

Settings: Roles

Settings: Workflows (Enterprise)

The visual workflow editor provides a node-based canvas with drag-and-drop for creating automation pipelines. This feature is available on Enterprise and Business license tiers.

Settings: Credentials

Encrypted credential store for sensitive data used by agents during task execution.

Settings: Cloud Storage

Connect to cloud storage providers so agents can read from and write to your cloud files.

Supported Providers

For OAuth providers, a local callback server on port 18793 handles the authorization flow. Access to cloud storage is gated behind biometric authentication.

Settings: Updates

The current version (v1.3.91) is displayed at the top of the Updates tab.

Chrome Extension

The Aura Workshop Chrome Extension brings your AI agent into any browser tab. Chat with your agent, ask about the page you're viewing, summarize content, or use selected text -- all from a side panel.

Prerequisites

Installation

  1. Download aura-workshop-chrome-extension.zip from the release page
  2. Unzip the file to a permanent location
  3. Open Chrome and navigate to chrome://extensions
  4. Enable Developer mode
  5. Click Load unpacked and select the unzipped folder

Usage

Click the extension icon to open the side panel. It connects via WebSocket on the WebChat listener port (default 18792).

Keyboard shortcut: Ctrl+Shift+Y (Windows/Linux) or Cmd+Shift+Y (macOS)

Quick Actions

| Button | What it does |
| --- | --- |
| Ask about page | Extracts the current page's title, URL, and text content, and sends it to the agent |
| Summarize | Sends the page content with a summarization prompt |
| Use selection | Sends your highlighted text selection to the agent |
| Stop | Stops the current response mid-stream |
| Task | Converts the current chat into a full agent task in the desktop app |
| New chat | Clears the conversation and starts fresh |

Each browser tab maintains its own conversation context. File attachments via drag-and-drop and screenshot paste are supported.

Privacy

The extension communicates only with your local Aura Workshop instance (localhost:18792). No data is sent to external servers by the extension itself. Page content is only extracted when you click a quick action button.

Embeddable Chat Widget

Aura Workshop provides an embeddable chat widget that you can add to any website, enabling visitors to interact with your configured agent directly from a web page.

Setup

  1. Create a new Listener with the platform type set to WebChat.
  2. Start the listener. It launches a WebSocket server.
  3. Copy the provided JavaScript snippet from the listener configuration panel.
  4. Paste the snippet into your website's HTML.

Features

The JavaScript snippet creates a floating chat button on your page. When clicked, it opens a chat panel that connects to your running Aura Workshop instance.

Cloud Storage Integration

Once a cloud storage provider is connected (see Settings: Cloud Storage), agents can read files from and write files to that storage during tasks.

Example prompts:

REST API Overview

Aura Workshop includes an embedded HTTP server (default port 18800) that exposes the full platform as a REST API. Every feature available in the desktop app is also available via HTTP. All endpoints are prefixed with /api.

Base URL: http://localhost:18800/api
Content-Type: application/json for all POST/PUT requests
Authentication: Optional Bearer token (configured in Settings > Connectivity)

Authentication

When a token is configured, include it as a Bearer token:

curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:18800/api/tasks

If no token is configured, all API requests are allowed without authentication (suitable for local-only access).
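The same header works from any HTTP client. A minimal Python sketch using only the standard library (the helper name and token value are placeholders):

```python
import urllib.request

BASE = "http://localhost:18800/api"

def api_request(path, token=None):
    """Build a request for the Aura Workshop REST API, attaching the
    Bearer token only when one is configured."""
    req = urllib.request.Request(f"{BASE}{path}")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

req = api_request("/tasks", token="YOUR_TOKEN")
# urllib.request.urlopen(req) would then perform the call
```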

Tasks

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/tasks | List all tasks |
| POST | /api/tasks | Create a new task |
| GET | /api/tasks/{id} | Get task details |
| DELETE | /api/tasks/{id} | Delete a task |
| GET | /api/tasks/{id}/messages | Get task conversation messages |
| GET | /api/tasks/interrupted | List interrupted tasks for resume |
| POST | /api/tasks/{id}/run | Run a task agent (SSE stream) |
| POST | /api/tasks/{id}/resume | Resume an interrupted task (SSE stream) |

Conversations (Chat)

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/conversations | List all conversations |
| POST | /api/conversations | Create a new conversation |
| DELETE | /api/conversations/{id} | Delete a conversation |
| PUT | /api/conversations/{id}/title | Update conversation title |
| GET | /api/conversations/{id}/messages | Get conversation messages |
| POST | /api/conversations/{id}/messages | Add a message |
| POST | /api/chat/send | Send chat message (SSE stream) |
| POST | /api/chat/enhanced | Send chat with tools (SSE stream) |

Teams

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/teams | List all teams |
| POST | /api/teams | Create a team |
| PUT | /api/teams/{id} | Update a team |
| DELETE | /api/teams/{id} | Delete a team |
| POST | /api/teams/run | Run a team task |

Automation Workflows

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/workflows | List all workflows |
| POST | /api/workflows | Create a workflow |
| GET | /api/workflows/{id} | Get workflow details |
| PUT | /api/workflows/{id} | Update a workflow |
| DELETE | /api/workflows/{id} | Delete a workflow |
| POST | /api/workflows/{id}/run | Execute a workflow |
| GET | /api/workflow/runs/{run_id} | Get workflow run status |
| POST | /api/workflow/approvals/{id}/resolve | Resolve a human-in-the-loop approval |

Schedules

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/schedules | List all scheduled tasks |
| POST | /api/schedules | Create a schedule |
| DELETE | /api/schedules/{id} | Delete a schedule |
| POST | /api/schedules/{id}/toggle | Enable/disable a schedule |

Listeners

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/listeners | List all listeners |
| POST | /api/listeners | Create a listener |
| PUT | /api/listeners/{id} | Update a listener |
| DELETE | /api/listeners/{id} | Delete a listener |
| POST | /api/listeners/{id}/start | Start a listener |
| POST | /api/listeners/{id}/stop | Stop a listener |
| GET | /api/listeners/{id}/logs | Get listener execution logs |
| GET | /api/listeners/platforms | Get supported messaging platforms |

Webhooks

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/webhooks | List all webhooks |
| POST | /api/webhooks | Create a webhook |
| DELETE | /api/webhooks/{id} | Delete a webhook |
| POST | /api/webhooks/{id}/toggle | Enable/disable a webhook |
| GET | /api/webhooks/{id}/url | Get the webhook's trigger URL |
| GET | /api/webhooks/{id}/logs | Get webhook execution logs |

Billing & Usage

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/billing/summary | Get spend summary per provider |
| GET | /api/billing/limits | Get all spend limits |
| POST | /api/billing/limits | Set a spend limit for a provider |
| GET | /api/billing/fallback-order | Get provider fallback priority |
| POST | /api/billing/fallback-order | Set provider fallback priority |
| GET | /api/billing/pricing | Get model pricing table |
| POST | /api/billing/pricing | Update model pricing |
| POST | /api/billing/reset | Reset all usage data |
| GET | /api/billing/daily | Get daily usage (last 30 days) |
| GET | /api/billing/daily-by-model | Get daily usage per model |

Settings & Data Management

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/settings | Get all settings |
| PUT | /api/settings | Update settings |
| POST | /api/settings/test | Test provider connection |
| GET | /api/platform | Get platform info (OS, version) |
| GET | /api/diagnostics | Run system diagnostics |
| POST | /api/inference/stop | Stop running inference (optional task_id in body) |
| POST | /api/data/clear-history | Clear all conversation history |
| POST | /api/data/reset-keys | Reset all API keys |
| POST | /api/data/reset-database | Reset entire database |
| POST | /api/data/reset-all | Factory reset |

Streaming (Server-Sent Events)

Several endpoints return SSE streams for real-time updates. Connect with EventSource or curl -N.

| Endpoint | Description |
| --- | --- |
| POST /api/tasks/{id}/run | Agent task execution -- streams text, tool calls, plan steps, done/error |
| POST /api/tasks/{id}/resume | Resume interrupted task -- same event types as run |
| POST /api/chat/send | Chat message -- streams text chunks + done |
| POST /api/chat/enhanced | Chat with tools -- streams text + tool events + done |
| GET /api/events | Global event stream -- receives ALL workflow events from any client |

SSE Event Types:

{"type":"text","content":"Hello world..."}            // Streaming text
{"type":"tool_start","tool":"write_file","input":{}}   // Tool execution started
{"type":"tool_end","tool":"write_file","success":true}  // Tool completed
{"type":"node_running","node_id":"role_0","label":"PM"} // Workflow node status
{"type":"done","total_turns":5}                         // Task completed
{"type":"error","message":"..."}                        // Task failed

Example: Create and Run a Task

# Create task
curl -X POST http://localhost:18800/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"title":"Build API","description":"Build a REST API","prompt":"Build a REST API"}'

# Run it (returns SSE stream)
curl -N -X POST http://localhost:18800/api/tasks/{task_id}/run \
  -H "Content-Type: application/json" \
  -d '{"task_id":"...","message":"Build a REST API for a todo app","project_path":"/path/to/folder"}'