# Prompt Configuration System

LLxprt Code uses a flexible, customizable prompt configuration system that lets you tailor the AI's behavior for different providers, models, and environments. This guide explains how to configure and customize prompts.
## Overview

The prompt configuration system provides:

- **Provider-specific prompts**: Different instructions for Gemini, OpenAI, Anthropic, etc.
- **Model-specific adaptations**: Special handling for models like Flash that need explicit tool-usage reminders
- **Environment awareness**: Automatic adaptation based on Git repositories, sandboxes, and IDE integration
- **Tool-specific instructions**: Detailed guidance for each available tool
- **User customization**: Override any prompt with your own versions
## Default Prompt Location

LLxprt Code looks for prompts in the following location:

```
~/.llxprt/prompts/
```

If custom prompts are not found, the system falls back to built-in defaults that are optimized for each provider and model.
## Directory Structure

The prompt configuration follows a hierarchical structure:

```
~/.llxprt/prompts/
├── core.md                      # Main system prompt
├── compression.md               # Instructions for context compression
├── providers/
│   ├── gemini/
│   │   ├── core.md              # Gemini-specific overrides
│   │   └── models/
│   │       └── gemini-2.5-flash/
│   │           └── core.md      # Flash-specific instructions
│   ├── openai/
│   │   └── core.md              # OpenAI-specific overrides
│   └── anthropic/
│       └── core.md              # Anthropic-specific overrides
├── env/
│   ├── git-repository.md        # Added when in a Git repo
│   ├── sandbox.md               # Added when sandboxed
│   ├── macos-seatbelt.md        # macOS sandbox specifics
│   └── ide-mode.md              # IDE integration context
├── tools/
│   ├── edit.md                  # Edit tool instructions
│   ├── shell.md                 # Shell command guidance
│   ├── web-fetch.md             # Web fetching rules
│   └── ...                      # Other tool-specific prompts
└── services/
    ├── loop-detection.md        # Loop detection warnings
    └── init-command.md          # Init command prompts
```
## Prompt Resolution Order

Prompts are resolved in the following order (later entries override earlier ones):

1. **Built-in defaults**: Core prompts shipped with LLxprt Code
2. **Provider defaults**: Provider-specific adaptations
3. **Model defaults**: Model-specific refinements
4. **User customizations**: Your custom prompts in `~/.llxprt/prompts/`
## Template Variables

Prompts support template variables that are automatically replaced:

- `{{enabledTools}}`: List of available tools
- `{{environment}}`: Current environment details (see below)
- `{{provider}}`: Active provider name
- `{{model}}`: Current model name
### {{environment}} fields

The environment object exposes the same properties as `PromptEnvironment` in code. Common fields include:

| Field | Description |
|---|---|
| `workspaceName` | Basename of the current workspace directory |
| `workspaceRoot` | Absolute path to the workspace root |
| `workspaceDirectories` | Array of directories included in the session |
| `workingDirectory` | The cwd the CLI started in |
| `isGitRepository` | `true` if Git metadata was detected |
| `isSandboxed` | `true` when running inside Docker/Seatbelt/etc. |
| `sandboxType` | `macos-seatbelt`, `generic`, or omitted |
| `hasIdeCompanion` | Indicates VS Code integration status |
| `folderStructure` | A summarized folder tree (may be omitted if unavailable) |

Use these fields in custom prompts, e.g., `{{environment.workspaceName}}`.
### Example Template Usage

```
You have access to these tools: {{enabledTools}}

Current environment:
{{environment}}

You are running on {{provider}} with model {{model}}.
```
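Individual environment fields can be pulled into a custom prompt the same way. A sketch of a custom core prompt doing this (it assumes every field in the table above is addressable with dot syntax):

```bash
mkdir -p ~/.llxprt/prompts
cat > ~/.llxprt/prompts/core.md << 'EOF'
You are assisting in {{environment.workspaceName}},
rooted at {{environment.workspaceRoot}}.
Sandboxed: {{environment.isSandboxed}}

{{enabledTools}}
EOF
```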
## Customizing Prompts

### Method 1: Manual Creation

Create your custom prompts in the `~/.llxprt/prompts/` directory:

```bash
# Create the prompts directory
mkdir -p ~/.llxprt/prompts

# Create a custom core prompt
cat > ~/.llxprt/prompts/core.md << 'EOF'
You are a helpful AI assistant specializing in Python development.
Always write clean, well-documented Python code following PEP 8.

{{enabledTools}}
EOF
```
### Method 2: Using the Installer

The prompt configuration system includes an installer that can set up the default structure:

```bash
# Install default prompts (coming soon)
llxprt prompts install

# Install with custom overrides (coming soon)
llxprt prompts install --custom
```
## Environment-Specific Prompts

The system automatically includes environment-specific prompts based on your context.
### Git Repository Context

When working in a Git repository, the system includes `env/git-repository.md`:

```markdown
## Git Repository Guidelines

You are in a Git repository. Please:

- Respect .gitignore patterns
- Be aware of branch protection rules
- Use conventional commit messages
```
### Sandbox Context

When running in sandbox mode, additional safety instructions are included from `env/sandbox.md`.

### IDE Integration

When IDE mode is active, context about open files and cursor position is included from `env/ide-mode.md`.
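Any of these environment fragments can be overridden the same way as core prompts, by placing a file at the matching path under `~/.llxprt/prompts/env/`. A sketch (the guideline text itself is illustrative):

```bash
mkdir -p ~/.llxprt/prompts/env
cat > ~/.llxprt/prompts/env/git-repository.md << 'EOF'
## Git Repository Guidelines

You are in a Git repository. Please:

- Never commit or push without explicit user approval
- Respect .gitignore patterns
EOF
```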
## Provider-Specific Customization

### Gemini Flash Models

Flash models require explicit reminders about tool usage. Create a custom prompt:

```bash
mkdir -p ~/.llxprt/prompts/providers/gemini/models/gemini-2.5-flash/
cat > ~/.llxprt/prompts/providers/gemini/models/gemini-2.5-flash/core.md << 'EOF'
IMPORTANT: You MUST use the provided tools when appropriate.
Do not try to simulate or pretend tool functionality.

Always use the actual tools for:
- Reading files: Use read_file tool
- Listing directories: Use list_directory tool
- Running commands: Use run_shell_command tool
EOF
```
### OpenAI Models

Customize behavior for OpenAI models:

```bash
mkdir -p ~/.llxprt/prompts/providers/openai/
cat > ~/.llxprt/prompts/providers/openai/core.md << 'EOF'
You are powered by OpenAI. Optimize responses for efficiency
and clarity. Use parallel tool calls when possible.
EOF
```
## Tool-Specific Instructions

Customize instructions for individual tools:

### Shell Command Tool

```bash
mkdir -p ~/.llxprt/prompts/tools
cat > ~/.llxprt/prompts/tools/shell.md << 'EOF'
When using shell commands:
- Always use absolute paths
- Check command existence with 'which' first
- Prefer non-interactive commands
- Explain any complex commands before running
EOF
```

### Edit Tool

```bash
mkdir -p ~/.llxprt/prompts/tools
cat > ~/.llxprt/prompts/tools/edit.md << 'EOF'
When editing files:
- Preserve existing code style
- Make minimal necessary changes
- Add comments for complex changes
- Verify the file exists before editing
EOF
```
## Advanced Configuration

### Compression Prompts

Customize how context compression works:

```bash
mkdir -p ~/.llxprt/prompts
cat > ~/.llxprt/prompts/compression.md << 'EOF'
When compressing conversation history:
- Preserve all technical details
- Keep error messages intact
- Summarize repetitive content
- Maintain chronological order
EOF
```
### Loop Detection

Customize loop detection warnings:

```bash
mkdir -p ~/.llxprt/prompts/services/
cat > ~/.llxprt/prompts/services/loop-detection.md << 'EOF'
You appear to be in a loop. Please:
1. Stop and analyze what went wrong
2. Try a different approach
3. Ask the user for clarification if needed
EOF
```
## Environment Variables

Control prompt behavior with environment variables:

```bash
# Use a custom prompts directory
export LLXPRT_PROMPTS_DIR=/path/to/custom/prompts

# Enable debug mode to see prompt resolution
export DEBUG=true
```
## Debugging Prompts

To see which prompts are being loaded:

1. Enable debug mode:

   ```bash
   DEBUG=true llxprt
   ```

2. Check the prompt resolution in the logs.

3. Use the memory command to see the final composed prompt:

   ```
   /memory show
   ```
## Best Practices

- **Start with defaults**: Only customize what you need to change
- **Test incrementally**: Make small changes and test their effect
- **Use version control**: Keep your custom prompts in Git
- **Document changes**: Add comments explaining why you customized a prompt
- **Share with team**: Use project-specific prompt directories
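One way to share prompts with a team is to keep them in the repository itself and point `LLXPRT_PROMPTS_DIR` at them. A sketch (the `.prompts` directory name and the prompt text are arbitrary choices):

```bash
# Keep team prompts under version control inside the project
mkdir -p .prompts
cat > .prompts/core.md << 'EOF'
You are working on this team's project.
Follow the style guide in CONTRIBUTING.md.

{{enabledTools}}
EOF

# Point LLxprt Code at the project-local prompts before launching
export LLXPRT_PROMPTS_DIR="$PWD/.prompts"
```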
## Examples

### Academic Writing Assistant

```bash
cat > ~/.llxprt/prompts/core.md << 'EOF'
You are an academic writing assistant. Always:
- Use formal academic language
- Cite sources in APA format
- Maintain an objective tone
- Check facts before stating them

{{enabledTools}}
EOF
```
### DevOps Specialist

```bash
cat > ~/.llxprt/prompts/core.md << 'EOF'
You are a DevOps specialist. Focus on:
- Infrastructure as code
- Container best practices
- CI/CD optimization
- A security-first approach

When working with shell commands, prefer:
- Docker and Kubernetes commands
- Terraform for infrastructure
- Ansible for configuration

{{enabledTools}}
EOF
```
### Code Reviewer

```bash
cat > ~/.llxprt/prompts/core.md << 'EOF'
You are a thorough code reviewer. Always check for:
- Security vulnerabilities
- Performance issues
- Code smells
- Missing tests
- Documentation gaps

Provide constructive feedback with examples.

{{enabledTools}}
EOF
```
## Troubleshooting

### Prompts Not Loading

1. Check that the directory exists:

   ```bash
   ls -la ~/.llxprt/prompts/
   ```

2. Verify file permissions. Note that directories need the execute bit to be traversable, so avoid `chmod -R 644`; set directories and files separately:

   ```bash
   find ~/.llxprt/prompts -type d -exec chmod 755 {} +
   find ~/.llxprt/prompts -type f -exec chmod 644 {} +
   ```

3. Enable debug mode to see loading errors:

   ```bash
   DEBUG=true llxprt
   ```
### Template Variables Not Replaced

Ensure you're using the correct syntax:

- Correct: `{{enabledTools}}`
- Wrong: `{enabledTools}` or `{{ enabledTools }}`
### Provider-Specific Prompts Not Working

Check that the directory structure matches exactly:

```
~/.llxprt/prompts/providers/[provider-name]/core.md
```

Provider names must be lowercase: `gemini`, `openai`, `anthropic`.
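A quick sanity check (a sketch) that an override file exists at the expected lowercase path:

```bash
provider=gemini   # must be lowercase
f="$HOME/.llxprt/prompts/providers/$provider/core.md"
if [ -f "$f" ]; then
  echo "found override: $f"
else
  echo "no override at $f; built-in defaults will be used"
fi
```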
## Migration from Hardcoded Prompts

If you were previously modifying LLxprt Code's source code to customize prompts, migrate to the new system:

1. Copy your custom prompts to `~/.llxprt/prompts/`
2. Remove any source code modifications
3. Update to the latest LLxprt Code version
4. Test that your customizations still work
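The copy step above might look like this in practice (a sketch; the source path is hypothetical, and the sample file is created here only so the snippet is runnable):

```bash
# Hypothetical location of your previously patched prompts
SRC=./my-patched-prompts
mkdir -p "$SRC"                    # sketch only: stand in for your real files
echo "custom core" > "$SRC/core.md"

# Copy everything into the user prompts directory
mkdir -p ~/.llxprt/prompts
cp -R "$SRC"/. ~/.llxprt/prompts/
```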
## Contributing Prompt Improvements
If you've created prompts that would benefit others:
- Test thoroughly in various scenarios
- Document the use case and benefits
- Submit a pull request to the LLxprt Code repository
- Consider sharing in the community discussions
## Future Enhancements

Planned improvements to the prompt system:

- **Prompt marketplace**: Share and download community prompts
- **Interactive installer**: GUI for prompt customization
- **A/B testing**: Compare prompt effectiveness
- **Analytics**: Track which prompts work best
- **Hot reload**: Change prompts without restarting
## Related Documentation
- Configuration Guide - General LLxprt Code configuration
- Memory System - How context and memory work
- Provider Guide - Provider-specific features
- Tool Documentation - Available tools and their usage