Welcome to the LLxprt Code documentation

This documentation provides a comprehensive guide to installing, using, and developing LLxprt Code, a command-line tool that lets you interact with AI models from multiple providers.

Overview

LLxprt Code brings the capabilities of large language models to your terminal in an interactive Read-Eval-Print Loop (REPL) environment. LLxprt Code consists of a client-side application (packages/cli) that communicates with a local server (packages/core), which in turn manages requests to the provider APIs and their AI models. LLxprt Code also includes a variety of tools for tasks such as performing file system operations, running shell commands, and fetching web content, all managed by packages/core.

Navigating the documentation

Getting Started

New to LLxprt Code? Start here:

  • Getting Started Guide: Your first session — installation, authentication, and your first AI-assisted coding task.

Core Workflows

Essential guides for daily use:

  • Provider Configuration: Configure and manage LLM providers (Anthropic, OpenAI, Gemini, and more).
  • Authentication: Set up authentication for various providers.
  • Profiles: Save and manage configurations for different contexts.
  • Sandboxing: Protect your system with container-based isolation.
  • Subagents: Create and manage specialized AI assistants tied to profiles.

Advanced Topics

Power features for advanced users:

Reference

Configuration, commands, and troubleshooting:

Architecture & Development

For contributors and developers:

Tools Documentation

Additional Resources

We hope this documentation helps you make the most of LLxprt Code!