Advanced Usage
Scripting, automation, and advanced patterns for power users.
This guide covers advanced patterns for power users, including scripting, CI/CD integration, and performance optimization.
Scripting with Whisp
Non-Interactive Mode
Whisp commands work in scripts. The daemon must be running:
#!/bin/bash
# ensure-daemon.sh
if ! whisp status > /dev/null 2>&1; then
  whisp start
  sleep 1
fi
Capturing Generated Commands
Use the shell integration in scripts:
#!/bin/bash
# deploy-helper.sh
# Source whisp shell integration
source ~/.bashrc # or wherever whisp is loaded
# Generate a command and capture it
cmd=$(whisp shell query "list all docker containers older than 7 days")
echo "Generated: $cmd"
# Execute if confirmed
read -p "Run this command? [y/N] " confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
  eval "$cmd"
fi
Piping in Scripts
Process data with whisp in pipelines:
#!/bin/bash
# analyze-logs.sh
# Find error patterns
cat /var/log/app.log | whisp shell pipe "extract unique error types and count them"
# Summarize git history
git log --oneline -50 | whisp shell pipe "categorize these commits by type"
CI/CD Integration
GitHub Actions
# .github/workflows/ai-review.yml
name: AI Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # fetch full history so origin/main exists for the diff below
      - name: Install whisp
        run: |
          curl -fsSL https://whisp.dev/install.sh | sh
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"
      - name: Configure whisp
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          whisp start
          sleep 2
      - name: Analyze changes
        run: |
          git diff origin/main...HEAD | whisp shell pipe "review this diff for potential issues"
GitLab CI
# .gitlab-ci.yml
ai-analysis:
  image: rust:latest
  script:
    - curl -fsSL https://whisp.dev/install.sh | sh
    - export PATH="$HOME/.local/bin:$PATH"
    # OPENAI_API_KEY is already exported to the job from the project's CI/CD variables
    - whisp start && sleep 2
    - git diff $CI_MERGE_REQUEST_DIFF_BASE_SHA | whisp shell pipe "summarize changes"
Pre-commit Hook
#!/bin/bash
# .git/hooks/pre-commit
# Check if whisp is available
if ! command -v whisp &> /dev/null; then
  exit 0
fi
# Get staged diff
staged_diff=$(git diff --cached)
if [ -z "$staged_diff" ]; then
  exit 0
fi
# Quick sanity check
echo "$staged_diff" | whisp shell pipe "check for obvious issues: hardcoded secrets, debug statements, TODO comments. Reply with just 'OK' or list issues." | head -5
Automation Patterns
Scheduled Maintenance Scripts
#!/bin/bash
# maintenance.sh - run via cron
export PATH="$HOME/.local/bin:$PATH"
# Ensure daemon is running
whisp status > /dev/null 2>&1 || whisp start
# Generate cleanup commands
cleanup_cmd=$(whisp shell query "find and list files in /tmp older than 7 days, larger than 100MB")
echo "Suggested cleanup: $cleanup_cmd"
Log Analysis Automation
#!/bin/bash
# analyze-errors.sh
# Collect last hour of errors
errors=$(journalctl --since "1 hour ago" -p err --no-pager)
if [ -n "$errors" ]; then
  echo "$errors" | whisp shell pipe "summarize these system errors and suggest fixes"
fi
Monitoring Integration
#!/bin/bash
# whisp-monitor.sh
# Check daemon health
health=$(whisp health 2>&1)
if echo "$health" | grep -q "Errors:"; then
  error_count=$(echo "$health" | grep "Errors:" | awk '{print $2}')
  if [ "$error_count" -gt 100 ]; then
    echo "WARN: High error count in whisp: $error_count"
  fi
fi
# Export metrics for monitoring
whisp metrics --json >> /var/log/whisp-metrics.jsonl
Performance Tips
Model Selection Guide
Choose models based on your primary use case:
| Use Case | Recommended Model | Why |
|---|---|---|
| Daily commands | gpt-5-nano, gemini-1.5-flash | Fast, cheap, good enough for most shell commands |
| Complex debugging | claude-sonnet, gpt-4o | Better reasoning for tricky problems |
| Privacy-sensitive | llama3.2 (Ollama) | All data stays on your machine |
| Offline work | llama3.2 (Ollama) | No network required |
| High volume | gemini-1.5-flash | Lowest cost per request |
| Budget-conscious | Ollama or gemini-1.5-flash | Free or very cheap |
| Best quality | claude-opus, gpt-4o | Maximum capability, higher cost |
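If you switch tiers often, the table can be encoded as a small shell helper. This is only a sketch: `model_for` and its tier labels are hypothetical names, not whisp features, and the model strings simply follow the table above.

```shell
# Hypothetical helper: map a task tier to a model name from the table above.
model_for() {
  case "$1" in
    daily)   echo "gpt-5-nano" ;;
    debug)   echo "claude-sonnet" ;;
    private) echo "llama3.2" ;;
    quality) echo "claude-opus" ;;
    *)       echo "gemini-1.5-flash" ;;  # cheap default for everything else
  esac
}

model_for debug  # prints claude-sonnet
```

Combined with the per-query override described next, a call might look like `WHISP_MODEL=$(model_for daily) , list files by size`.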
Switch models per-query when needed:
# Use faster model for simple queries
WHISP_MODEL=gpt-5-nano , list files by size
# Use better model for complex queries
WHISP_MODEL=gpt-4o , debug this race condition in the async code
Model Selection for Speed
For fastest responses:
# ~/.config/whisp/config.toml
default_provider = "gemini"
[providers.gemini]
model = "gemini-1.5-flash" # Very fast
Or use Ollama for zero network latency:
default_provider = "ollama"
[providers.ollama]
model = "llama3.2" # Fast local inference
Reducing Latency
- Keep daemon running: Avoid startup overhead
  # Install as service
  whisp install
- Use local provider for simple queries:
  # Quick local query
  WHISP_PROVIDER=ollama , list files by size
- Minimize context: Short queries = faster responses
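Fixed `sleep` delays after `whisp start` also add avoidable latency. One way to return as soon as the daemon answers is a readiness poll; `wait_for` below is a generic illustrative helper, not a built-in whisp command.

```shell
# Illustrative readiness poll: retry a command until it succeeds
# or the attempt budget runs out. Not part of whisp itself.
wait_for() {
  local attempts=$1; shift
  local i=1
  while ! "$@" > /dev/null 2>&1; do
    if [ "$i" -ge "$attempts" ]; then
      return 1  # gave up: command never succeeded
    fi
    sleep 0.5
    i=$((i + 1))
  done
}
```

In a script this replaces a blind delay: `whisp start; wait_for 10 whisp status || exit 1`.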
Batch Operations
For multiple operations, keep the daemon warm:
#!/bin/bash
# batch-process.sh
# Warm up connection
whisp health > /dev/null
# Process multiple items quickly
for file in *.log; do
  cat "$file" | whisp shell pipe "extract error count" &
done
wait
Combining Shortcuts Effectively
Research Workflow
# 1. Find what you need
, find all config files containing 'database'
# 2. Understand what you found
,. cat config/database.yml | head -20
# 3. Get alternatives
,,
# 4. Preview changes before making them
,d sed -i 's/localhost/db.prod/g' config/*.yml
Debugging Workflow
# 1. Check recent errors in logs
cat /var/log/app.log | tail -100 | , find the root cause of errors
# 2. When a command fails, whisp auto-suggests fix
npm install # fails with error
# Whisp suggests: npm cache clean --force && npm install
# 3. Search history for similar issues
,/ permission denied
# 4. Copy solution to share with team
,c
Learning Workflow
# 1. Get command explanation
,. awk '{print $1}' access.log | sort | uniq -c | sort -rn
# 2. Ask for simpler alternative
, same thing but easier to read
# 3. See other approaches
,,
# 4. Try dry-run before committing
,d the-suggested-command
Environment-Specific Configuration
Development vs Production
# ~/.bashrc
if [ "$ENVIRONMENT" = "production" ]; then
  # Use local model in prod (no data leaves machine)
  export WHISP_PROVIDER=ollama
  export WHISP_CONFIRM_DESTRUCTIVE=true
else
  # Use cloud in dev for better quality
  export WHISP_PROVIDER=openai
fi
Per-Project Configuration
Create a wrapper script for project-specific settings:
#!/bin/bash
# project-whisp.sh
# Project-specific model
export WHISP_MODEL=gpt-4o
# Project context
export WHISP_CONTEXT="This is a Django project using PostgreSQL"
# Run whisp with project settings
whisp "$@"
Troubleshooting Advanced Usage
Daemon Not Starting in Scripts
# Add explicit path and wait
export PATH="$HOME/.local/bin:$PATH"
whisp start
sleep 2 # Give daemon time to initialize
whisp status || exit 1
Commands Timing Out
# Increase socket timeout for long operations
export WHISP_TIMEOUT=60000 # 60 seconds
CI Rate Limiting
# Use conservative rate limits in CI
[resilience]
requests_per_minute = 10
max_retries = 2
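On the script side, individual calls in CI can also be wrapped in a retry loop with backoff, complementing the config above. `with_retries` is an illustrative helper, not a whisp feature; avoid wrapping piped commands with it, since stdin is consumed on the first attempt.

```shell
# Illustrative retry wrapper: rerun a command with linear backoff between attempts.
with_retries() {
  local max=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      return 1  # out of attempts
    fi
    sleep $((n * 2))  # back off: 2s, then 4s, ...
    n=$((n + 1))
  done
}
```

Example: `with_retries 3 whisp health` before the analysis step, so a slow daemon start does not fail the job outright.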