feat(backend/copilot): local-LLM AutoPilot for the no-API-key install #12993

Open
ntindle wants to merge 16 commits into dev from feat/copilot-local-ollama-transport

Conversation


@ntindle ntindle commented May 4, 2026

Why

backend/backend/copilot/config.py (ChatConfig) hard-wires the AutoPilot chat path to OpenRouter / Anthropic via OpenRouter-format model slugs. Operators trying to run AutoPilot without a Claude / OpenAI / OpenRouter key today hit two failure modes:

  1. The baseline path 401s because CHAT_API_KEY is empty, so AutoPilot is unusable.
  2. If they point CHAT_BASE_URL at a local OpenAI-compat endpoint, the SDK ("extended thinking") mode still fires for default requests, the CLI tries to speak Anthropic's wire protocol against Ollama, and they get an opaque 500 instead of a graceful downgrade.

This is a launch-blocker for the "no API keys required" install story being validated on the multi-OS test bed.

What

Adds a fourth, explicit local transport to ChatConfig.effective_transport (and a TransportProfile descriptor that consolidates per-transport behaviour for all four). When CHAT_USE_LOCAL=true:

  • Routing — the baseline path talks to CHAT_BASE_URL over OpenAI-compatible HTTP. Wins over OpenRouter / subscription (an inherited token in CI / dev shells doesn't silently re-route).
  • SDK gate — transport.supports_sdk == False. Requests with mode='extended_thinking' are downgraded to fast at the request layer with a logged WARNING; build_sdk_env raises RuntimeError if anything still tries to construct the SDK env (defensive).
  • Vendor validator — skipped, so CHAT_*_MODEL accepts bare Ollama-style names (llama3.1:8b-instruct-q4_K_M) without an anthropic/* placeholder.
  • api_key fallback — empty for local. A stray OPENAI_API_KEY set for graphiti / embedders no longer silently binds to the local Ollama endpoint as the bearer token.
  • Auxiliary models — title_model / simulation_model, when left at their cloud defaults, inherit fast_standard_model so operators set only one model slug instead of three.
  • num_ctx — the local_num_ctx field (default 32 768) is forwarded for OpenAI-compat backends that honor options.num_ctx in the request body. (Ollama itself doesn't — its OpenAI shim ignores it; set OLLAMA_CONTEXT_LENGTH on the systemd unit instead. The bundled installer's --with-ollama does this for you. Documented.)
  • Request timeout — local_request_timeout_s (default 1 800 s) replaces the OpenAI client's 600 s default, so CPU-only local backends don't time out before the model finishes a turn.

Non-local transports keep their existing behaviour exactly — use_local defaults False, the new thinking_available kwarg defaults True, every cloud path is byte-identical.
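A minimal sketch of what the TransportProfile table and precedence resolution could look like — the field names, the exact transport set, and the subscription-before-openrouter ordering here are assumptions based on the description above, not the PR's actual code:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TransportProfile:
    """Per-transport behaviour descriptor (fields are illustrative)."""

    name: str
    supports_sdk: bool          # False disables the extended-thinking SDK path
    allows_key_fallback: bool   # False blocks a stray OPENAI_API_KEY from binding
    validate_vendor_slug: bool  # False accepts bare Ollama-style model names


_TRANSPORT_PROFILES = {
    "anthropic":    TransportProfile("anthropic",    True,  True,  True),
    "openrouter":   TransportProfile("openrouter",   True,  True,  True),
    "subscription": TransportProfile("subscription", True,  False, True),
    "local":        TransportProfile("local",        False, False, False),
}


def effective_transport(
    use_local: bool, has_subscription: bool, has_openrouter_key: bool
) -> TransportProfile:
    # Local wins outright, so an inherited cloud token in a CI/dev shell
    # can't silently re-route traffic away from the local endpoint.
    if use_local:
        return _TRANSPORT_PROFILES["local"]
    if has_subscription:
        return _TRANSPORT_PROFILES["subscription"]
    if has_openrouter_key:
        return _TRANSPORT_PROFILES["openrouter"]
    return _TRANSPORT_PROFILES["anthropic"]
```

Centralising the four behaviours in one frozen table is what lets the validators simply read the profile instead of re-deriving transport rules in each place.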

How

File Change
backend/copilot/config.py TransportProfile dataclass + _TRANSPORT_PROFILES table for all four transports; new use_local / local_num_ctx / local_request_timeout_s fields; transport property; effective_transport adds "local"; vendor validator + api_key fallback + aux-model derivation rewritten as model_validators that read the profile; thinking_available kept as backwards-compat alias for transport.supports_sdk
backend/copilot/baseline/service.py passes extra_body.options.num_ctx from config.local_num_ctx only under the local transport
backend/copilot/service.py _get_openai_client() uses config.local_request_timeout_s under the local transport
backend/copilot/executor/processor.py resolve_use_sdk_for_mode accepts thinking_available, downgrades extended_thinking → fast with WARNING log when the transport doesn't support the SDK
backend/copilot/sdk/env.py defensive RuntimeError when called under a no-SDK transport
backend/.env.default documents CHAT_USE_LOCAL / CHAT_BASE_URL / CHAT_API_KEY for the local-LLM operator
installer/setup-autogpt.sh --with-ollama / --ollama-model= / --ollama-host= flags; bootstrap_ollama installs Ollama + sets OLLAMA_HOST=0.0.0.0:11434 + OLLAMA_CONTEXT_LENGTH=32768 via systemd drop-in; pulls the recommended default model (llama3.1:8b-instruct-q4_K_M); writes backend/.env with the right CHAT_USE_LOCAL block. --ollama-host=URL skips the install for operators wiring an existing remote endpoint
docs/platform/copilot-local-llm.md new — operator guide covering local / LAN / remote scenarios (Ollama, vLLM, LocalAI, LM Studio, LiteLLM proxy), the num_ctx gotcha, and a verification checklist
docs/platform/SUMMARY.md links the new doc
Tests TestTransportProfile (shape per transport), TestApiKeyFallback (local refuses OPENAI_API_KEY fallback; openrouter still falls back), TestLocalAuxModels (auto-derive + explicit override + cloud transports unaffected), TestLocalTransport (precedence over subscription/openrouter, validator skip, env var pickup), TestBuildSdkEnvLocalTransportGuard (defensive RuntimeError), processor test_thinking_unavailable_forces_baseline + test_thinking_available_default_preserves_legacy_behaviour
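The processor.py downgrade described in the table above might be sketched like this (the signature is simplified; the real function takes more request context):

```python
import logging

logger = logging.getLogger(__name__)


def resolve_use_sdk_for_mode(mode: str, thinking_available: bool = True) -> bool:
    """Return True when the turn should take the SDK (extended-thinking) path."""
    if mode == "extended_thinking":
        if not thinking_available:
            # The transport can't speak Anthropic's wire protocol:
            # downgrade to the baseline path instead of surfacing a 500.
            logger.warning(
                "extended_thinking unavailable on this transport; downgrading to fast"
            )
            return False
        return True
    # default / fast modes always take the baseline path.
    return False
```

Keeping `thinking_available=True` as the default is what preserves byte-identical behaviour for every cloud transport.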

Validation

End-to-end on a fresh Ubuntu 24.04 cloud VM:

  1. setup-autogpt.sh --with-ollama from a clean repo → Ollama installed, OLLAMA_CONTEXT_LENGTH=32768 set, model pulled, backend/.env wired, compose up.
  2. Container env confirmed: CHAT_USE_LOCAL=true, CHAT_BASE_URL=http://192.168.1.185:11434/v1, CHAT_FAST_STANDARD_MODEL=qwen3:0.6b, CHAT_TITLE_MODEL=qwen3:0.6b (auto-derived).
  3. Live python -c "from backend.copilot.config import ChatConfig" confirmed transport.name == "local", transport.supports_sdk is False, local_num_ctx == 32768, local_request_timeout_s == 1800.
  4. AutoPilot UI: signup → onboarding → /copilot → sent "What is 17 multiplied by 23? Reply with only the numeric answer." → AutoPilot responded 391 (correct), chat auto-titled "391" (title-model derivation working).
  5. Executor log: Using baseline service (mode=default) (the SDK downgrade path); Ollama log: 200 from /v1/chat/completions.

(Performance note: the test VM is 4-core CPU only — qwen3:0.6b under the AutoPilot tool-call loop took 37 minutes on it. On any GPU host, or with a smaller system prompt, this is interactive. Hardware constraint, not a code defect.)
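The OLLAMA_CONTEXT_LENGTH wiring validated in step 1 is typically done with a systemd drop-in. A hypothetical sketch — the real installer targets /etc/systemd/system/ollama.service.d; the relative default below exists only so the sketch runs without root:

```shell
# Sketch of what bootstrap_ollama's systemd drop-in could look like.
# Real path: /etc/systemd/system/ollama.service.d (requires root).
DROPIN_DIR="${DROPIN_DIR:-./ollama.service.d}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_CONTEXT_LENGTH=32768"
EOF
# Then, on a real host:
#   systemctl daemon-reload && systemctl restart ollama
#   ollama pull llama3.1:8b-instruct-q4_K_M
```

Setting the context length on the service (rather than per request) is required because Ollama's OpenAI shim ignores options.num_ctx in the request body.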

Out of scope (separate PRs)

  • Native Ollama branch in the SDK / extended-thinking path. Not feasible — the Claude Agent SDK CLI hard-requires Anthropic's wire protocol. Downgrading to fast is the correct UX until/unless Anthropic exposes a non-Anthropic SDK backend.
  • The frontend localhost:8006 build-time URL bake-in that prevents external-IP UI access without an SSH tunnel — separate frontend PR.
  • FalkorDB compose env (REDIS_ARGS=--requirepass ${GRAPHITI_FALKORDB_PASSWORD:-} boots with mis-quoted args when password is empty) — separate platform-infra PR. Tolerated here because graphiti is gated by an LD flag that defaults off in self-hosted.
  • setup.agpt.co/install.sh Docker bootstrap. The script today hard-fails when Docker is missing despite docs claiming "no prerequisite setup needed" — separate gap, separate PR.

Checklist

  • Backend tests added/updated for every new behaviour
  • No backwards-incompatible changes to existing transports
  • Conventional commit title with scope (feat(backend/copilot))
  • Documentation added (docs/platform/copilot-local-llm.md) + linked from SUMMARY
  • End-to-end validated on a real Ubuntu VM running Ollama

Note

Medium Risk
Moderate risk because it changes core chat routing/credential resolution and propagates new local-LLM behavior into multiple LLM-backed helpers (chat, simulator, onboarding extraction, embeddings), plus installer automation that edits system services/env files.

Overview
Enables running AutoPilot against a self-hosted OpenAI-compatible endpoint by adding a new local transport (CHAT_USE_LOCAL, CHAT_BASE_URL, CHAT_API_KEY) with centralized per-transport behavior via TransportProfile (SDK availability, api-key fallback policy, and aux-model defaults).

Updates chat execution to downgrade extended_thinking to baseline when the SDK can’t run (local), adds local-specific request tuning (options.num_ctx, longer OpenAI-client timeouts), and makes helper features (simulator, activity-status generation, Tally extraction, store embeddings) respect the local transport and avoid OpenRouter-only request fields.

Adds operator-facing support: .env.default documentation, a new docs/platform/copilot-local-llm.md guide linked from SUMMARY.md, and setup-autogpt.sh --with-ollama automation to install/probe Ollama, pull a model, and idempotently write the required backend .env block.
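The local-only request tuning described above (options.num_ctx, longer client timeout) can be sketched roughly as follows — the helper names are invented for illustration; the PR wires this inline in service.py and baseline/service.py:

```python
from typing import Any


def local_request_kwargs(transport_name: str, num_ctx: int = 32_768) -> dict[str, Any]:
    """Extra request-body kwargs applied only under the local transport."""
    if transport_name != "local":
        return {}
    # Honored by some OpenAI-compat backends (e.g. vLLM, LocalAI);
    # Ollama's shim ignores it -- set OLLAMA_CONTEXT_LENGTH on the service.
    return {"extra_body": {"options": {"num_ctx": num_ctx}}}


def client_timeout_s(transport_name: str, local_timeout_s: int = 1_800) -> int:
    # CPU-only local backends can take far longer than the
    # OpenAI client's 600 s default to finish a turn.
    return local_timeout_s if transport_name == "local" else 600
```

Gating both knobs on the transport name keeps every cloud request byte-identical to today's behaviour.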

Reviewed by Cursor Bugbot for commit 3eef535. Bugbot is set up for automated code reviews on this repo.

@ntindle ntindle requested a review from a team as a code owner May 4, 2026 13:23
@ntindle ntindle requested review from 0ubbe and Swiftyos and removed request for a team May 4, 2026 13:23
@github-project-automation github-project-automation Bot moved this to 🆕 Needs initial review in AutoGPT development kanban May 4, 2026
@github-actions github-actions Bot added documentation Improvements or additions to documentation platform/backend AutoGPT Platform - Back end labels May 4, 2026

coderabbitai Bot commented May 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a "local" transport to route AutoPilot chat to self-hosted OpenAI‑compatible LLMs (e.g., Ollama). Introduces TransportProfile/TransportName, transport-driven defaults/validators, per‑turn thinking_available kill‑switch, SDK guards, local context/timeout wiring, installer support to bootstrap Ollama, tests, and docs.

Changes

Local transport + routing, SDK guards, installer, docs, tests

Layer / File(s) Summary
Data / Types
autogpt_platform/backend/backend/copilot/config.py
Add TransportName, TransportProfile, _TRANSPORT_PROFILES, _DEFAULT_TITLE_MODEL, _DEFAULT_SIMULATION_MODEL.
Config fields & validators
autogpt_platform/backend/backend/copilot/config.py
Add local_num_ctx, local_request_timeout_s, effective_transport, transport, thinking_available; implement _apply_transport_api_key_fallback, _apply_local_aux_models, _validate_local_transport_requirements; make SDK model‑vendor validation transport‑aware.
Baseline caller integration
autogpt_platform/backend/backend/copilot/baseline/service.py
When config.transport.name == "local", set extra_body.options.num_ctx via setdefault to config.local_num_ctx.
Service client wiring
autogpt_platform/backend/backend/copilot/service.py
Construct LangfuseAsyncOpenAI with kwargs and add timeout=config.local_request_timeout_s when transport is local.
Executor routing (per‑turn)
autogpt_platform/backend/backend/copilot/executor/processor.py
Add thinking_available: bool = True param to resolve_use_sdk_for_mode; when false force baseline and log downgrade for extended_thinking. Wire thinking_available=config.thinking_available at call site.
SDK env guard
autogpt_platform/backend/backend/copilot/sdk/env.py
build_sdk_env() now raises RuntimeError early if config.transport.supports_sdk is false (prevents SDK path for local transport).
Installer / Bootstrap
autogpt_platform/installer/setup-autogpt.sh
Add --with-ollama, --ollama-model, --ollama-host flags; add bootstrap_ollama() and write_local_env() to install/verify/pull models and atomically write backend/.env entries (CHAT_USE_LOCAL, CHAT_BASE_URL, CHAT_API_KEY, aux models).
Env defaults
autogpt_platform/backend/.env.default
Add AutoPilot local LLM block with CHAT_USE_LOCAL, CHAT_BASE_URL, CHAT_API_KEY and explanatory comments.
Tests
autogpt_platform/backend/backend/copilot/config_test.py, .../executor/processor_test.py, .../sdk/env_test.py
Add transport profile tests, clear CHAT_USE_LOCAL in test env, update vendor-compatibility message expectation, add thinking_available tests, and add SDK-env guard test.
Documentation
docs/platform/copilot-local-llm.md, docs/platform/SUMMARY.md
Add docs for running AutoPilot with a self‑hosted LLM and a SUMMARY TOC entry.
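The SDK env guard from the table above might look roughly like this (the Transport protocol and return payload are illustrative; the real build_sdk_env assembles the full SDK environment):

```python
from typing import Protocol


class Transport(Protocol):
    name: str
    supports_sdk: bool


def build_sdk_env(transport: Transport) -> dict[str, str]:
    """Construct env vars for the SDK subprocess (sketch)."""
    # Defensive guard: the request layer already downgrades extended_thinking
    # under no-SDK transports, so reaching this under "local" is a bug.
    if not transport.supports_sdk:
        raise RuntimeError(
            f"SDK path is unavailable under the {transport.name!r} transport"
        )
    # Real env assembly elided.
    return {"ANTHROPIC_MODEL": "..."}
```

The guard is intentionally redundant with the per-turn downgrade: it turns any missed call site into a loud, early failure instead of an opaque wire-protocol error.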

Sequence Diagram

sequenceDiagram
    autonumber
    actor User
    participant Backend as "AutoPilot Backend"
    participant Config as "ChatConfig / TransportProfile"
    participant LocalLLM as "Ollama / Local LLM"
    participant SDK as "SDK (Langfuse/OpenAI client)"

    User->>Backend: POST /copilot (chat request)
    Backend->>Config: resolve transport & thinking_available
    alt transport == local
        Backend->>LocalLLM: Send request (base_url, api_key, num_ctx, timeout)
        LocalLLM-->>Backend: Response
        Backend-->>User: Return chat response
    else transport supports SDK
        Backend->>SDK: build_sdk_env() (guarded by transport.supports_sdk)
        SDK-->>Backend: SDK response (extended thinking path)
        Backend-->>User: Return chat response
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

Review effort 4/5

Suggested reviewers

  • Swiftyos
  • 0ubbe
  • Bentlybro

Poem

🐇 I found a snug Ollama nook,
Env keys tucked in a dotted book,
Timeouts stretched and contexts wide,
When thinking sleeps, baseline will guide,
Hooray — local LLMs hop inside!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 63.46% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The title directly and clearly describes the main change: adding local-LLM AutoPilot support for installations without API keys, which aligns with the core objective of enabling OpenAI-compatible local endpoint usage.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Description check ✅ Passed The pull request description comprehensively explains the motivation (enabling no-API-key install via local LLM), the technical solution (local transport with TransportProfile), detailed implementation changes across multiple files, and end-to-end validation on Ubuntu 24.04 with Ollama.



github-actions Bot commented May 4, 2026

🔍 PR Overlap Detection

This check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early.

🔴 Merge Conflicts Detected

The following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.

  • Skip LLM execution analysis for credit exhaustion failures #12614 (Otto-AGPT · updated 1d ago)

    • 📁 autogpt_platform/
      • backend/backend/api/features/subscription_routes_test.py (1 conflict, ~5 lines)
      • backend/backend/api/features/v1.py (1 conflict, ~86 lines)
      • backend/backend/data/credit.py (1 conflict, ~35 lines)
      • backend/backend/executor/activity_status_generator.py (1 conflict, ~37 lines)
      • backend/backend/executor/activity_status_generator_test.py (6 conflicts, ~281 lines)
      • frontend/src/app/(platform)/copilot/components/RateLimitResetDialog/RateLimitGate.tsx (2 conflicts, ~45 lines)
      • frontend/src/app/(platform)/copilot/components/RateLimitResetDialog/__tests__/RateLimitGate.test.tsx (1 conflict, ~15 lines)
      • frontend/src/app/(platform)/settings/billing/__tests__/billing-cards.test.tsx (5 conflicts, ~759 lines)
      • frontend/src/app/(platform)/settings/billing/__tests__/billing-hooks.test.tsx (6 conflicts, ~126 lines)
      • frontend/src/app/(platform)/settings/billing/__tests__/billing-page.test.tsx (1 conflict, ~48 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/AutoRefillCard/AutoRefillCard.tsx (2 conflicts, ~16 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/AutoRefillCard/AutoRefillDialog.tsx (1 conflict, ~13 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/AutoRefillCard/useAutoRefillCard.ts (2 conflicts, ~37 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/BalanceCard/BalanceCard.tsx (3 conflicts, ~32 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/TransactionHistoryCard/TransactionHistoryCard.tsx (5 conflicts, ~184 lines)
      • frontend/src/app/(platform)/settings/billing/components/AutomationCreditsTab/UsageCard/UsageCard.tsx (6 conflicts, ~187 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/AutopilotUsageCard/AutopilotUsageCard.tsx (3 conflicts, ~83 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/InvoicesCard/InvoicesCard.tsx (5 conflicts, ~231 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/InvoicesCard/useInvoicesCard.ts (3 conflicts, ~50 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/PaymentMethodCard/PaymentMethodCard.tsx (2 conflicts, ~16 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/YourPlanCard/YourPlanCard.tsx (10 conflicts, ~201 lines)
      • frontend/src/app/(platform)/settings/billing/components/SubscriptionTab/YourPlanCard/useYourPlanCard.ts (15 conflicts, ~492 lines)
      • frontend/src/app/(platform)/settings/billing/helpers.ts (1 conflict, ~44 lines)
      • frontend/src/app/(platform)/settings/billing/page.tsx (5 conflicts, ~46 lines)
  • feat(platform): share agent chat results via public link #13081 (ntindle · updated 5h ago)

    • 📁 autogpt_platform/backend/backend/api/features/
      • v1.py (1 conflict, ~5 lines)
  • feat(platform): estimate CoPilot turn cost and require approval for high-cost requests #12877 (Rushi-Balapure · updated 2d ago)

🟢 Low Risk — File Overlap Only

These PRs touch the same files but different sections.

Summary: 3 conflict(s), 0 medium risk, 14 low risk (out of 17 PRs with file overlap)


Auto-generated on push. Ignores: openapi.json, lock files.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@autogpt_platform/backend/.env.default`:
- Around line 78-90: Reorder the three AutoPilot env keys in .env.default so
they follow dotenv-linter's expected order: place CHAT_API_KEY first, then
CHAT_BASE_URL, then CHAT_USE_LOCAL (preserve their existing values and
surrounding comment block). Update the block containing CHAT_USE_LOCAL,
CHAT_BASE_URL, CHAT_API_KEY to the new sequence (referencing the exact variable
names CHAT_API_KEY, CHAT_BASE_URL, CHAT_USE_LOCAL) so the linter warning is
resolved.

In `@autogpt_platform/backend/backend/copilot/config.py`:
- Around line 447-494: When use_local is True the config currently allows
falling back to other base URLs; add a post-parse guard (a Pydantic
`@root_validator` or `@validator` with pre=False) on the same Pydantic config model
that defines use_local/local_num_ctx/local_request_timeout_s to raise a clear
ValueError if the OpenAI-compatible client base URL or API key are missing
(check the config fields that map to CHAT_BASE_URL and CHAT_API_KEY), ensuring
misconfiguration is caught at startup; replicate the same validator logic for
the other related config models referenced around the other ranges (the blocks
noted at 649-662 and 698-719).

In `@autogpt_platform/installer/setup-autogpt.sh`:
- Around line 185-187: The current sed call that removes from the marker to EOF
(the '/# === Local-LLM AutoPilot wiring/,$d' invocation) is unsafe and can
delete unrelated user config; change the logic to remove only the bounded block
between a start and end marker (e.g., '# === Local-LLM AutoPilot wiring START'
and '# === Local-LLM AutoPilot wiring END') so reruns are idempotent. Update the
grep/sed sequence that references '# === Local-LLM AutoPilot wiring' to look for
the bounded start marker and delete only the region between start and end
markers (use sed address range deletion like '/START/,/END/d' or equivalent),
and ensure you add the matching end marker when writing the block so future runs
can target the block precisely.
- Around line 132-138: When OLLAMA_HOST_URL is provided the script returns after
checking /api/version and never verifies or pulls the configured OLLAMA_MODEL,
which causes runtime "model not found" errors; update the block guarded by if [
-n "$OLLAMA_HOST_URL" ] (the code that prints "Using existing Ollama at
$OLLAMA_HOST_URL" and currently returns) to call the remote Ollama model
endpoints to confirm the value of OLLAMA_MODEL exists and, if missing, trigger
the appropriate remote model pull (or surface a clear error) before returning —
mirror the local-host logic used elsewhere (the same check referenced around
lines 163-165) so remote hosts get the same model validation/pull behavior.
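The bounded-marker rewrite CodeRabbit suggests for the installer can be sketched as follows (marker text and env values are illustrative, not the PR's actual block):

```shell
# Idempotent .env block rewrite: delete only the bounded region between the
# START and END markers -- never marker-to-EOF, which could eat user config
# that happens to sit below the block -- then append a fresh copy.
ENV_FILE="${ENV_FILE:-backend/.env}"
mkdir -p "$(dirname "$ENV_FILE")"
touch "$ENV_FILE"
sed -i '/^# === Local-LLM AutoPilot wiring START/,/^# === Local-LLM AutoPilot wiring END/d' "$ENV_FILE"
cat >> "$ENV_FILE" <<'EOF'
# === Local-LLM AutoPilot wiring START ===
CHAT_USE_LOCAL=true
CHAT_BASE_URL=http://localhost:11434/v1
CHAT_API_KEY=
# === Local-LLM AutoPilot wiring END ===
EOF
```

Because the sed address range stops at the END marker, rerunning the script replaces exactly one block no matter how many times it runs.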

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 81741ba4-8c91-4dff-aab4-9740ce4426f2

📥 Commits

Reviewing files that changed from the base of the PR and between 2c840ea and b25340d.

📒 Files selected for processing (12)
  • autogpt_platform/backend/.env.default
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/installer/setup-autogpt.sh
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/config.py
🪛 dotenv-linter (4.0.0)
autogpt_platform/backend/.env.default

[warning] 89-89: [UnorderedKey] The CHAT_BASE_URL key should go before the CHAT_USE_LOCAL key

(UnorderedKey)


[warning] 90-90: [UnorderedKey] The CHAT_API_KEY key should go before the CHAT_BASE_URL key

(UnorderedKey)

🪛 LanguageTool
docs/platform/copilot-local-llm.md

[grammar] ~221-~221: Ensure spelling is correct
Context: ...T_API_KEY` so a stray cloud key set for graphiti / embedders doesn't silently bind to your ...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🔇 Additional comments (10)
docs/platform/SUMMARY.md (1)

29-29: TOC entry looks good.

The new link is clear and matches the added local-LLM documentation.

autogpt_platform/backend/backend/copilot/sdk/env.py (1)

51-62: Fail-fast guard looks correct.

Rejecting SDK env construction when the active transport cannot run the SDK is the right protection here.
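
The fail-fast shape described above can be sketched as follows. `TransportProfile`, `supports_sdk`, and `build_sdk_env` are named in the PR description, but the minimal class layout and the returned env dict here are illustrative assumptions, not the real module:

```python
class TransportProfile:
    """Minimal stand-in for the per-transport descriptor (real fields differ)."""

    def __init__(self, name: str, supports_sdk: bool) -> None:
        self.name = name
        self.supports_sdk = supports_sdk


def build_sdk_env(transport: TransportProfile) -> dict[str, str]:
    """Construct the SDK subprocess env, failing fast for SDK-incapable transports."""
    if not transport.supports_sdk:
        # Defensive guard: the request layer should already have downgraded
        # extended-thinking requests, so reaching this point indicates a bug
        # upstream rather than a user error.
        raise RuntimeError(
            f"transport {transport.name!r} cannot run the SDK; "
            "refusing to build a broken subprocess environment"
        )
    return {"TRANSPORT": transport.name}
```

The value of raising here (instead of returning a best-effort env) is that a misrouted request fails at construction time with a clear message, not deep inside a subprocess speaking the wrong wire protocol.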

autogpt_platform/backend/backend/copilot/service.py (1)

68-76: Local timeout wiring looks good.

Extending the OpenAI client timeout only for local keeps the default behavior unchanged for cloud transports.

autogpt_platform/backend/backend/copilot/baseline/service.py (1)

680-690: Local num_ctx override is applied in the right place.

Scoping it to config.transport.name == "local" keeps the OpenAI/OpenRouter payload unchanged.
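
A hedged sketch of that gating: only the `config.transport.name == "local"` check is taken from the review comment; the payload shape, the `options`/`num_ctx` key placement, and the default context size are assumptions (Ollama's OpenAI-compatible endpoint may expect the override elsewhere, e.g. via `extra_body`):

```python
def build_completion_payload(
    transport_name: str,
    model: str,
    messages: list[dict[str, str]],
    local_num_ctx: int = 8192,
) -> dict:
    """Build a chat payload, adding a context-window override only for local."""
    payload: dict = {"model": model, "messages": messages}
    if transport_name == "local":
        # Gated on the transport so cloud providers (OpenAI/OpenRouter) never
        # see a nonstandard key they might reject.
        payload["options"] = {"num_ctx": local_num_ctx}
    return payload
```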

autogpt_platform/backend/backend/copilot/executor/processor_test.py (1)

137-180: Good regression coverage for the new kill-switch.

This locks in the baseline downgrade, the warning path, and the legacy default behavior.

autogpt_platform/backend/backend/copilot/sdk/env_test.py (1)

482-499: Nice regression test for the SDK env guard.

This confirms the builder fails fast under the local transport instead of emitting a broken subprocess env.

autogpt_platform/backend/backend/copilot/executor/processor.py (2)

145-173: The new thinking_available gate is doing the right thing.

Short-circuiting to baseline before the mode/flag checks is the correct behavior for SDK-incompatible transports.


513-519: Wiring the transport flag through the call site is correct.

Passing config.thinking_available here makes the new routing guard effective during turn execution.

docs/platform/copilot-local-llm.md (1)

1-232: Well-structured local-LLM runbook.

This doc is clear, operationally practical, and maps cleanly to the config/installer behavior introduced in this PR.

autogpt_platform/backend/backend/copilot/config_test.py (1)

19-19: Strong coverage additions for transport semantics and local safety behavior.

The new tests pin transport precedence, fallback policy, and local aux-model derivation well, reducing regression risk in config routing.

Also applies to: 296-526

Comment thread autogpt_platform/backend/.env.default
Comment thread autogpt_platform/backend/backend/copilot/config.py
Comment thread autogpt_platform/installer/setup-autogpt.sh
Comment thread autogpt_platform/installer/setup-autogpt.sh Outdated
@codecov

codecov Bot commented May 4, 2026

Codecov Report

❌ Patch coverage is 90.38462% with 25 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.77%. Comparing base (3073d44) to head (3eef535).
⚠️ Report is 2 commits behind head on dev.

Additional details and impacted files
@@            Coverage Diff             @@
##              dev   #12993      +/-   ##
==========================================
- Coverage   71.18%   70.77%   -0.42%     
==========================================
  Files        2203     2192      -11     
  Lines      166435   166565     +130     
  Branches    16966    16973       +7     
==========================================
- Hits       118483   117888     -595     
- Misses      44460    45283     +823     
+ Partials     3492     3394      -98     
| Flag | Coverage Δ |
| --- | --- |
| platform-backend | 79.77% <90.38%> (+0.01%) ⬆️ |
| platform-frontend-e2e | 31.15% <ø> (-0.08%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Components | Coverage Δ |
| --- | --- |
| Platform Backend | 79.77% <90.38%> (+0.01%) ⬆️ |
| Platform Frontend | 37.82% <ø> (-2.53%) ⬇️ |
| AutoGPT Libs | ∅ <ø> (∅) |
| Classic AutoGPT | 28.43% <ø> (ø) |

Comment thread autogpt_platform/backend/backend/copilot/config_test.py
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
autogpt_platform/backend/backend/copilot/executor/processor.py (1)

298-302: ⚡ Quick win

Avoid introducing a new inline # type: ignore in the CLI prewarm path.

This changed block now depends on a private claude-agent-sdk API and suppresses the type error at the call site. Please move this behind a small typed helper or otherwise fix the signature mismatch directly instead of adding a new backend linter suppressor.

As per coding guidelines, "Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@autogpt_platform/backend/backend/copilot/executor/processor.py` around lines
298 - 302, The new inline "# type: ignore" should be removed and the private API
call to SubprocessCLITransport._find_bundled_cli should be wrapped by a small
typed helper that resolves the signature mismatch; create a helper function
(e.g., def _get_bundled_cli_path(cli_arg: Optional[str] = None) -> str) that
imports SubprocessCLITransport and calls
SubprocessCLITransport._find_bundled_cli(cli_arg) while handling/typing any
None/default parameter correctly (or using typing.cast when necessary), then
replace the inline call in processor.py with a call to that helper instead of
suppressing type checks.
autogpt_platform/backend/backend/copilot/config.py (1)

22-57: ⚡ Quick win

Prefer a Pydantic model for TransportProfile.

This new transport descriptor is structured backend config data, but it's introduced as a dataclass. Converting it to a small frozen Pydantic model would keep this module aligned with the rest of the backend config surface and the repo standard.

As per coding guidelines, "Use Pydantic models over dataclass/namedtuple/dict for structured data."
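
A minimal sketch of that conversion under Pydantic v2; the field set shown is illustrative, not the PR's actual `TransportProfile` definition:

```python
from pydantic import BaseModel, ConfigDict


class TransportProfile(BaseModel):
    """Per-transport behaviour descriptor (field set is illustrative)."""

    model_config = ConfigDict(frozen=True)

    name: str
    supports_sdk: bool
    api_key_fallback_envs: tuple[str, ...] = ()


# frozen=True gives the same immutability a frozen dataclass would,
# while keeping validation and the repo's Pydantic convention.
local = TransportProfile(name="local", supports_sdk=False)
```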

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@autogpt_platform/installer/setup-autogpt.sh`:
- Around line 141-142: The conditional that checks for the remote model uses
plain grep which treats regex metacharacters in $OLLAMA_MODEL as special; change
the lookup to use fixed-string matching (grep -F or --fixed-strings) so the
literal pattern "\"name\":\"$OLLAMA_MODEL\"" is searched for instead of a regex.
Update the conditional that calls curl and pipes to grep (the block referencing
OLLAMA_HOST_URL and OLLAMA_MODEL) to use fixed-string grep with -q so model
names like "llama3.1:8b-instruct-q4_K_M" are matched literally.
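
The failure mode is easy to reproduce outside the installer. The actual fix is `grep -F` in `setup-autogpt.sh`; this Python sketch (with a deliberately wrong model name in the fake tags payload) shows how the unescaped `.` in the model tag matches the wrong model under regex semantics but not under fixed-string matching:

```python
import re

model = "llama3.1:8b-instruct-q4_K_M"
# Fake /api/tags payload containing a *different* model ("3X1", not "3.1").
tags_payload = '{"models":[{"name":"llama3X1:8b-instruct-q4_K_M"}]}'

# Regex matching (plain grep): "." matches any character, so the wrong
# model name is accepted as present.
assert re.search(f'"name":"{model}"', tags_payload) is not None

# Fixed-string matching (grep -F / --fixed-strings): the literal name is
# required, so the mismatch is correctly detected.
assert f'"name":"{model}"' not in tags_payload
```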

---

Nitpick comments:
In `@autogpt_platform/backend/backend/copilot/executor/processor.py`:
- Around line 298-302: The new inline "# type: ignore" should be removed and the
private API call to SubprocessCLITransport._find_bundled_cli should be wrapped
by a small typed helper that resolves the signature mismatch; create a helper
function (e.g., def _get_bundled_cli_path(cli_arg: Optional[str] = None) -> str)
that imports SubprocessCLITransport and calls
SubprocessCLITransport._find_bundled_cli(cli_arg) while handling/typing any
None/default parameter correctly (or using typing.cast when necessary), then
replace the inline call in processor.py with a call to that helper instead of
suppressing type checks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c9b2665c-4d02-4c88-9ce2-be61967888fa

📥 Commits

Reviewing files that changed from the base of the PR and between b25340d and 29b9dfd.

📒 Files selected for processing (12)
  • autogpt_platform/backend/.env.default
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/installer/setup-autogpt.sh
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
✅ Files skipped from review due to trivial changes (2)
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: check API types
  • GitHub Check: Cursor Bugbot
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: type-check (3.12)
  • GitHub Check: test (3.12)
  • GitHub Check: Analyze (python)
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (typescript)
🧰 Additional context used
📓 Path-based instructions (5)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/backend/**/*_test.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/*_test.py: Use pytest with snapshot testing for API responses
Colocate test files with source files using *_test.py naming convention
Mock at boundaries — mock where the symbol is used, not where it's defined; after refactoring, update mock targets to match new module paths
Use AsyncMock from unittest.mock for async functions in tests
When writing tests, use Test-Driven Development (TDD): write failing tests marked with @pytest.mark.xfail before implementation, then remove the marker once the implementation is complete
When creating snapshots in tests, use poetry run pytest path/to/test.py --snapshot-update; always review snapshot changes with git diff before committing

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
autogpt_platform/backend/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Backend environment configuration: backend/.env.default provides defaults (tracked in git), backend/.env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
autogpt_platform/**/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Platform environment configuration: .env.default provides Supabase/shared defaults (tracked in git), .env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
🧠 Learnings (9)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/config.py
🔇 Additional comments (5)
autogpt_platform/backend/backend/copilot/executor/processor_test.py (1)

137-179: LGTM — solid coverage of the new thinking_available kwarg.

test_thinking_unavailable_forces_baseline correctly pins the tight invariant that only the explicit "extended_thinking" path emits a warning (via len(warnings) == 1), so future regressions where "fast" or None accidentally start warning would be caught. By leaving the kwarg out of its call site, test_thinking_available_default_preserves_legacy_behaviour explicitly verifies backwards compatibility for the dozens of existing call sites that don't pass it.

autogpt_platform/backend/backend/copilot/config_test.py (4)

19-19: Good catch — prevents env leakage from skewing local-transport tests.

Without CHAT_USE_LOCAL in _ENV_VARS_TO_CLEAR, a developer running tests with CHAT_USE_LOCAL=true exported in their shell would see flapping behavior in TestSdkModelVendorCompatibility and TestOpenrouterActive (since effective_transport would silently flip to "local" and bypass the vendor validator). This addition keeps the test suite hermetic alongside the rest of the new transport's env vars.
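The hermeticity mechanism at stake looks roughly like the following sketch (the fixture shape and exact variable list are assumptions drawn from the review, not the actual test module):

```python
import os
from contextlib import contextmanager

# Every transport-affecting variable must be cleared, or a developer's shell
# (e.g. an exported CHAT_USE_LOCAL=true) silently flips effective_transport
# for the whole suite.
_ENV_VARS_TO_CLEAR = ("CHAT_API_KEY", "CHAT_BASE_URL", "CHAT_USE_LOCAL")


@contextmanager
def hermetic_env():
    """Pop the chat env vars for the duration of a test, then restore them."""
    saved = {name: os.environ.pop(name, None) for name in _ENV_VARS_TO_CLEAR}
    try:
        yield
    finally:
        for name, value in saved.items():
            if value is not None:
                os.environ[name] = value
```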


369-381: Strong security-flavored test — pins the no-silent-leak guarantee.

Asserting on the CHAT_API_KEY-missing error (rather than just on cfg.api_key is None) makes this test simultaneously verify both invariants: (1) the local profile's api_key_fallback_envs == () doesn't pick up OPENAI_API_KEY, and (2) the _validate_local_transport_requirements guard fails fast at boot instead of letting AutoPilot 401 against Ollama on the first turn. This is exactly the kind of regression-pinning that catches "helpful" future changes that re-add OpenAI as a fallback for local.


296-360: Profile-shape pinning is a good single source of truth.

TestTransportProfile locks in the descriptor table per transport, which means any future change that scatters if use_local/if use_openrouter branches back through the codebase will surface as a profile-shape mismatch here rather than as scattered behavioral regressions. The test_thinking_available_alias_matches_profile cross-check (alias ↔ transport.supports_sdk) is the right invariant to keep the executor's per-turn kill-switch aligned with the config's transport descriptor.


487-570: Comprehensive local-transport coverage with the right defenses.

test_local_transport_overrides_subscription and test_local_transport_overrides_openrouter are the right invariants for the precedence rule — an operator opting into local self-hosting must win against inherited cloud credentials in CI/dev envs. test_local_skips_sdk_vendor_validator correctly pairs with test_default_use_local_is_false to confirm the validator skip is gated specifically on the local transport rather than weakening the vendor check globally.
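The precedence rule those tests pin can be sketched as a guard-clause chain (names and the relative order of the non-local branches are assumptions; only "local wins" is asserted by the review):

```python
def effective_transport(
    use_local: bool, has_subscription: bool, has_openrouter_key: bool
) -> str:
    """An explicit local opt-in beats any inherited cloud credentials."""
    if use_local:
        return "local"
    if has_subscription:
        return "subscription"
    if has_openrouter_key:
        return "openrouter"
    return "anthropic"
```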

@ntindle force-pushed the feat/copilot-local-ollama-transport branch from 29b9dfd to 3de03fd on May 6, 2026 00:09
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
autogpt_platform/backend/backend/copilot/config.py (1)

4-5: ⚡ Quick win

Use a Pydantic model for TransportProfile.

Line 22 introduces a new structured backend config object as a dataclass, which diverges from the repo rule for structured data in backend Python modules. Keeping transport descriptors in the same model system as ChatConfig avoids mixing validation conventions inside the config layer.

As per coding guidelines "Use Pydantic models over dataclass/namedtuple/dict for structured data".

Also applies to: 22-58

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@autogpt_platform/backend/backend/copilot/config.py` around lines 4 - 5,
TransportProfile is defined as a dataclass which breaks the repo rule to use
Pydantic models for structured backend config; replace the dataclass
TransportProfile with a Pydantic BaseModel (keep the same field names/types and
Literal usages), update any imports to pull BaseModel and Field from pydantic,
and ensure validation/serialization behavior matches ChatConfig (use same
patterns for Optional/defaults and validators if present) so the config layer
consistently uses Pydantic models.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@autogpt_platform/installer/setup-autogpt.sh`:
- Around line 132-147: Normalize OLLAMA_HOST_URL before using it: strip any
trailing slash and an optional trailing "/v1" so that API probes use a clean
root (e.g., normalize into a variable like OLLAMA_ROOT), then use
"${OLLAMA_ROOT%/}/api/..." for /api/* checks and set CHAT_BASE_URL by appending
a single "/v1" to that normalized root (avoid double "/v1" when constructing
CHAT_BASE_URL); update all usages (the curl probes that reference
OLLAMA_HOST_URL, the model pull logic using OLLAMA_MODEL, and the later
CHAT_BASE_URL assignment) to use the normalized variable consistently and add a
small comment noting the normalization.
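The normalization the prompt describes is a couple of POSIX parameter expansions, sketched here with the variable names from the review (this is not the installer's actual code):

```shell
#!/bin/sh
# Normalize OLLAMA_HOST_URL: strip a trailing slash and an optional trailing
# /v1 so API probes hit a clean root, then append exactly one /v1 for
# CHAT_BASE_URL (avoiding a double /v1).
OLLAMA_HOST_URL="http://localhost:11434/v1/"
OLLAMA_ROOT="${OLLAMA_HOST_URL%/}"   # drop one trailing slash
OLLAMA_ROOT="${OLLAMA_ROOT%/v1}"     # drop an optional trailing /v1
CHAT_BASE_URL="${OLLAMA_ROOT}/v1"
echo "$CHAT_BASE_URL"
```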

---

Nitpick comments:
In `@autogpt_platform/backend/backend/copilot/config.py`:
- Around line 4-5: TransportProfile is defined as a dataclass which breaks the
repo rule to use Pydantic models for structured backend config; replace the
dataclass TransportProfile with a Pydantic BaseModel (keep the same field
names/types and Literal usages), update any imports to pull BaseModel and Field
from pydantic, and ensure validation/serialization behavior matches ChatConfig
(use same patterns for Optional/defaults and validators if present) so the
config layer consistently uses Pydantic models.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4edbff4f-bfaa-4749-984a-52680cb49a0a

📥 Commits

Reviewing files that changed from the base of the PR and between 29b9dfd and 3de03fd.

📒 Files selected for processing (12)
  • autogpt_platform/backend/.env.default
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/installer/setup-autogpt.sh
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
🚧 Files skipped from review as they are similar to previous changes (4)
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • docs/platform/copilot-local-llm.md
  • autogpt_platform/backend/backend/copilot/service.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: check API types
  • GitHub Check: Analyze (typescript)
  • GitHub Check: Analyze (python)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: end-to-end tests
  • GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (5)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/backend/**/*_test.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/*_test.py: Use pytest with snapshot testing for API responses
Colocate test files with source files using *_test.py naming convention
Mock at boundaries — mock where the symbol is used, not where it's defined; after refactoring, update mock targets to match new module paths
Use AsyncMock from unittest.mock for async functions in tests
When writing tests, use Test-Driven Development (TDD): write failing tests marked with @pytest.mark.xfail before implementation, then remove the marker once the implementation is complete
When creating snapshots in tests, use poetry run pytest path/to/test.py --snapshot-update; always review snapshot changes with git diff before committing

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
autogpt_platform/backend/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Backend environment configuration: backend/.env.default provides defaults (tracked in git), backend/.env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
autogpt_platform/**/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Platform environment configuration: .env.default provides Supabase/shared defaults (tracked in git), .env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
🧠 Learnings (9)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config.py

@ntindle force-pushed the feat/copilot-local-ollama-transport branch from 3de03fd to d6bfe04 on May 6, 2026 00:26
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
autogpt_platform/installer/setup-autogpt.sh (1)

142-153: ⚡ Quick win

Add --max-time to remote Ollama probes to prevent installer hangs.

curl -sf has no default timeout — if $OLLAMA_ROOT accepts the TCP connection but the HTTP response stalls (e.g. misconfigured reverse proxy, packet filter that blackholes after handshake), the installer blocks indefinitely with no output and no handle_error path. A short connect+read budget converts that into the same clear "cannot reach" failure as a refused connection.

Pull (line 155-158) should keep its lack of timeout — model downloads legitimately take many minutes.

♻️ Proposed change
-        if ! curl -sf "${OLLAMA_ROOT}/api/version" > /dev/null; then
+        if ! curl -sf --max-time 5 "${OLLAMA_ROOT}/api/version" > /dev/null; then
             handle_error "Cannot reach Ollama at $OLLAMA_ROOT — is it running and listening on 0.0.0.0?"
         fi
@@
-        if ! curl -sf "${OLLAMA_ROOT}/api/tags" \
+        if ! curl -sf --max-time 10 "${OLLAMA_ROOT}/api/tags" \
             | grep -Fq "\"name\":\"$OLLAMA_MODEL\""; then
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@autogpt_platform/installer/setup-autogpt.sh` around lines 142 - 153, The
remote Ollama probes using curl to check "${OLLAMA_ROOT}/api/version" and
"${OLLAMA_ROOT}/api/tags" can hang indefinitely; update those two curl
invocations to include a short overall timeout (e.g. --max-time 10) so stalled
HTTP responses fail fast and trigger handle_error, but do not add a timeout to
the model download/pull step (leave the curl that performs the model pull
unchanged). Ensure you update the two checks that reference OLLAMA_ROOT and the
grep for "\"name\":\"$OLLAMA_MODEL\"" while keeping the model download logic
as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@autogpt_platform/backend/backend/copilot/config_test.py`:
- Around line 463-470: Hoist the local import of OPENROUTER_BASE_URL out of the
test function into the module-level imports: add "from backend.util.clients
import OPENROUTER_BASE_URL" to the top import block of this test module and
remove the inner import inside
test_explicit_openrouter_base_url_under_local_raises; this preserves the
existing reference to OPENROUTER_BASE_URL in that test and adheres to the
top-level import guideline.

In `@autogpt_platform/backend/backend/copilot/config.py`:
- Around line 22-89: TransportProfile is currently a dataclass but should be a
Pydantic model; replace the `@dataclass`(frozen=True) usage with class
TransportProfile(BaseModel):, add model_config = ConfigDict(frozen=True) inside
the class, keep the same typed attributes (name: TransportName, supports_sdk:
bool, sdk_model_vendor_constraint: str | None, api_key_fallback_envs: tuple[str,
...], inherit_fast_model_for_aux: bool), and import BaseModel and ConfigDict
from pydantic so the existing _TRANSPORT_PROFILES construction and read-only
access semantics remain unchanged.

---

Nitpick comments:
In `@autogpt_platform/installer/setup-autogpt.sh`:
- Around line 142-153: The remote Ollama probes using curl to check
"${OLLAMA_ROOT}/api/version" and "${OLLAMA_ROOT}/api/tags" can hang
indefinitely; update those two curl invocations to include a short overall
timeout (e.g. --max-time 10) so stalled HTTP responses fail fast and trigger
handle_error, but do not add a timeout to the model download/pull step (leave
the curl that performs the model pull unchanged). Ensure you update the two
checks that reference OLLAMA_ROOT and the grep for "\"name\":\"$OLLAMA_MODEL\""
while keeping the model download logic as-is.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 28976447-cf9d-4c13-b186-470a7c880864

📥 Commits

Reviewing files that changed from the base of the PR and between 3de03fd and d6bfe04.

📒 Files selected for processing (12)
  • autogpt_platform/backend/.env.default
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/installer/setup-autogpt.sh
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/backend/backend/copilot/baseline/service.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: check API types
  • GitHub Check: Cursor Bugbot
  • GitHub Check: end-to-end tests
  • GitHub Check: type-check (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: type-check (3.12)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (typescript)
  • GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (5)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
autogpt_platform/backend/**/*_test.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/*_test.py: Use pytest with snapshot testing for API responses
Colocate test files with source files using *_test.py naming convention
Mock at boundaries — mock where the symbol is used, not where it's defined; after refactoring, update mock targets to match new module paths
Use AsyncMock from unittest.mock for async functions in tests
When writing tests, use Test-Driven Development (TDD): write failing tests marked with @pytest.mark.xfail before implementation, then remove the marker once the implementation is complete
When creating snapshots in tests, use poetry run pytest path/to/test.py --snapshot-update; always review snapshot changes with git diff before committing

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
autogpt_platform/backend/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Backend environment configuration: backend/.env.default provides defaults (tracked in git), backend/.env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
autogpt_platform/**/.env*

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Platform environment configuration: .env.default provides Supabase/shared defaults (tracked in git), .env provides user overrides (gitignored)

Files:

  • autogpt_platform/backend/.env.default
🧠 Learnings (9)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
🔇 Additional comments (15)
autogpt_platform/backend/.env.default (1)

78-90: LGTM!

The AutoPilot self-hosted LLM block is well-documented with clear guidance on CHAT_BASE_URL, model-name conventions, and the extended-thinking auto-downgrade. Keys are now in alphabetical order to satisfy dotenv-linter.

docs/platform/SUMMARY.md (1)

29-29: LGTM!

The new entry slots correctly into "Using AI Services" alongside the existing Ollama page and points to the new AutoPilot-specific doc.

docs/platform/copilot-local-llm.md (1)

1-232: Comprehensive doc — LGTM.

Solid coverage of the local-transport contract, model selection, the num_ctx Ollama gotcha, networking topologies (same-host/LAN/remote), wiring verification, and troubleshooting. The Anthropic-wire-protocol downgrade caveat is called out in the right places (intro, behaviour table, troubleshooting).

autogpt_platform/installer/setup-autogpt.sh (1)

221-229: 💤 Low value

Bounded-block sed delete — verify marker safety.

The bounded START_MARKER/END_MARKER rewrite is a real improvement over the previous ,$d cliff. One thing worth noting: $START_MARKER contains parentheses ((added by setup-autogpt.sh --with-ollama)) which are literal in sed BRE today, but if anyone later refactors the marker to use BRE-special characters (*, ^, $, [, ], \, /), the address pattern silently breaks. Since markers are constants you control, this is fine as-is — just keep marker characters in the safe set if they ever change.
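A minimal sketch of that bounded-block delete pattern, using demo marker strings (not the installer's actual markers) to show why BRE-literal parentheses are safe while `*`, `^`, `$`, `[`, `]`, `\`, `/` would not be:

```shell
#!/bin/sh
# Demo input file; marker strings here are illustrative placeholders,
# not the installer's actual markers.
cat > /tmp/env.demo <<'EOF'
KEEP_BEFORE=1
# BEGIN ollama block (demo)
CHAT_USE_LOCAL=true
# END ollama block (demo)
KEEP_AFTER=1
EOF

# Delete the whole block, markers included. Both addresses are BREs:
# parentheses match literally, but * ^ $ [ ] \ / would need escaping.
result="$(sed '/^# BEGIN ollama block (demo)$/,/^# END ollama block (demo)$/d' /tmp/env.demo)"
echo "$result"
```

Unlike the old `,$d` form, nothing after the end marker is ever touched.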

autogpt_platform/backend/backend/copilot/config.py (2)

698-758: 💤 Low value

LGTM on the local-transport guard + fallback chain.

The validator pair correctly fails fast when CHAT_USE_LOCAL=true is paired with the OPENROUTER_BASE_URL default or a missing CHAT_API_KEY, and the per-transport fallback table elegantly replaces the old scattered if config.use_X branches. The dependency on definition-order execution of model_validator(mode="after") in Pydantic v2 is documented in the docstring, which is the right call.

One small simplification opportunity: in Pydantic v2, BaseSettings defaults to validate_assignment=False, so plain self.api_key = v (and self.title_model = ..., self.simulation_model = ...) behaves the same as object.__setattr__ here, without the bypass connotation. Not blocking — the current form is correct.
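The fail-fast shape under discussion can be sketched as follows — class and field names are illustrative stand-ins, not the project's actual ChatConfig:

```python
# Minimal sketch of a Pydantic v2 model_validator(mode="after") that
# rejects an invalid local-transport configuration at construction time
# instead of letting the first request 401 at runtime.
from pydantic import BaseModel, model_validator

OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"


class DemoChatConfig(BaseModel):
    use_local: bool = False
    base_url: str = OPENROUTER_BASE_URL
    api_key: str = ""

    @model_validator(mode="after")
    def _validate_local_requirements(self) -> "DemoChatConfig":
        if not self.use_local:
            return self
        # Local transport must point at an explicit local endpoint,
        # never the inherited OpenRouter default.
        if self.base_url.rstrip("/") == OPENROUTER_BASE_URL.rstrip("/"):
            raise ValueError(
                "use_local=true requires an explicit local base_url"
            )
        return self
```

Because mode="after" validators run in definition order, a later validator can rely on this one having already rejected the bad combination.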


779-827: LGTM — vendor-constraint validator now driven by transport.sdk_model_vendor_constraint.

Skip-path correctly short-circuits for transports without an SDK or without a vendor constraint (local, openrouter, subscription). The error message names the active transport and the offending field, which is the actionable form an operator needs to debug at boot.

autogpt_platform/backend/backend/copilot/sdk/env.py (1)

51-62: LGTM — fast-fail guard at the right layer.

Raising before any mode logic runs gives a clear, actionable error (with a pointer back to executor.processor.resolve_use_sdk_for_mode) instead of constructing an env that would route Anthropic-wire traffic to Ollama. Module-level config makes this patchable from tests, which the new TestBuildSdkEnvLocalTransportGuard exercises.
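The guard's shape can be illustrated like this — Transport and build_sdk_env here are simplified stand-ins for the real module's symbols, and the returned env is a placeholder:

```python
# Sketch of a defensive fast-fail: refuse to build an SDK environment
# when the active transport cannot support the SDK path at all.
from dataclasses import dataclass


@dataclass
class Transport:
    name: str
    supports_sdk: bool


def build_sdk_env(transport: Transport) -> dict[str, str]:
    if not transport.supports_sdk:
        # Routing should already have downgraded the request; reaching
        # this point indicates an upstream bug, so fail loudly rather
        # than route Anthropic-wire traffic to an OpenAI-compat server.
        raise RuntimeError(
            f"SDK env requested for transport {transport.name!r}, "
            "which does not support the SDK path"
        )
    return {"ANTHROPIC_MODEL": "claude-sonnet-4-6"}  # placeholder value
```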

autogpt_platform/backend/backend/copilot/service.py (1)

65-77: LGTM — scoped timeout override for the local transport only.

Conditional kwargs cleanly leave cloud transports on the OpenAI client's default 600 s while giving CPU-bound local backends the headroom they need. The _client is module-cached so this evaluates once per process — fine since config.transport.name is stable for the lifetime of the worker.
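The conditional-kwargs pattern looks roughly like this — the timeout value and transport-name check are assumptions for illustration, not the module's verified constants:

```python
# Sketch: only the local transport overrides the client's default
# timeout; cloud transports pass no timeout kwarg and keep the default.
LOCAL_TIMEOUT_SECONDS = 1800.0  # illustrative headroom for CPU inference


def client_kwargs(transport_name: str, base_url: str, api_key: str) -> dict:
    kwargs: dict = {"base_url": base_url, "api_key": api_key}
    if transport_name == "local":
        kwargs["timeout"] = LOCAL_TIMEOUT_SECONDS
    return kwargs
```

Since the client is module-cached, this branch is evaluated once per process, which is safe while the transport name is stable for the worker's lifetime.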

autogpt_platform/backend/backend/copilot/sdk/env_test.py (1)

482-501: LGTM — defensive-guard test mocks at the use site.

Patching backend.copilot.sdk.env.config (where the symbol is consumed) rather than backend.copilot.config.ChatConfig (where it's defined) is the correct boundary per the project's mocking guideline. The regex match also pins the message wording so future refactors can't silently reduce it to a generic error.
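Why the use-site target matters can be shown with two throwaway in-memory modules (the module names below are fabricated for the demo):

```python
# Demonstrates "mock where the symbol is used": patching the consumer's
# binding affects the function under test; patching only the defining
# module does not.
import sys
import types
from unittest import mock

# Defining module, analogous to backend.copilot.config.
defs = types.ModuleType("demo_defs")
defs.config = {"supports_sdk": True}
sys.modules["demo_defs"] = defs

# Consuming module, analogous to backend.copilot.sdk.env, which does
# `from demo_defs import config` and reads it at call time.
consumer = types.ModuleType("demo_consumer")
sys.modules["demo_consumer"] = consumer
exec(
    "from demo_defs import config\n"
    "def uses_sdk():\n"
    "    return config['supports_sdk']\n",
    consumer.__dict__,
)

# Patching the consumer's reference is what the function actually sees.
with mock.patch("demo_consumer.config", {"supports_sdk": False}):
    assert consumer.uses_sdk() is False

# Patching only the defining module leaves the consumer's binding intact.
with mock.patch("demo_defs.config", {"supports_sdk": False}):
    assert consumer.uses_sdk() is True
```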

autogpt_platform/backend/backend/copilot/config_test.py (3)

19-19: LGTM — CHAT_USE_LOCAL correctly added to _ENV_VARS_TO_CLEAR.


207-217: LGTM — updated error-message match aligns with the new vendor-constraint wording.


296-575: New transport/local test classes look solid.

All transport profile shapes, API-key fallback chains, aux-model inheritance, local requirements validation, and effective_transport/thinking_available semantics are covered with clear assertions. The caplog.records[i].message access pattern in test_thinking_unavailable_forces_baseline (line 160) is valid — pytest's LogCaptureHandler formats records before appending, making record.message reliably set.

autogpt_platform/backend/backend/copilot/executor/processor.py (2)

145-182: thinking_available kill-switch cleanly slots in with full backward compatibility.

The default thinking_available: bool = True means every existing call site is unaffected. The early return short-circuits all downstream logic (subscription flag, LaunchDarkly, is_feature_enabled) so the baseline is always forced when the transport can't support the SDK. Warning only fires for the explicitly-requested extended_thinking case, which is the right balance — silent downgrade for fast/None but a visible warning when the operator has explicitly asked for thinking.
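The routing behaviour described above can be condensed into a small sketch — the function name and signature are illustrative, not the project's actual resolve_use_sdk_for_mode:

```python
# Sketch: thinking_available=False always forces the baseline path, but
# only an explicit extended_thinking request produces a visible warning.
import logging
from typing import Optional

logger = logging.getLogger(__name__)


def resolve_use_sdk(mode: Optional[str], thinking_available: bool = True) -> bool:
    if not thinking_available:
        if mode == "extended_thinking":
            logger.warning(
                "Downgrading mode=extended_thinking to fast: SDK is "
                "unavailable under the current transport"
            )
        return False  # early return: no feature-flag lookups run
    return mode == "extended_thinking"
```

The True default keeps every existing call site on the legacy behaviour.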


513-519: LGTM — call site correctly threads config.thinking_available into the routing function.

autogpt_platform/backend/backend/copilot/executor/processor_test.py (1)

137-179: New thinking_available tests are well-structured and cover the right edge cases.

test_thinking_unavailable_forces_baseline correctly validates that all three modes ("fast", "extended_thinking", None) are forced to baseline when thinking_available=False, and that exactly one WARNING is emitted — only for the explicitly-requested extended_thinking mode. The is_feature_enabled mock (returning True) proves the early return fires before any LaunchDarkly call, providing a stronger guarantee than if the mock returned False.

test_thinking_available_default_preserves_legacy_behaviour correctly confirms the True default doesn't break any existing behavior.

@ntindle force-pushed the feat/copilot-local-ollama-transport branch from d6bfe04 to 91fa945 on May 6, 2026 00:44

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
autogpt_platform/backend/backend/copilot/config.py (1)

706-744: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

CHAT_USE_LOCAL can still piggyback inherited base URLs.

This validator only rejects the final OpenRouter default, so use_local=True still passes when base_url was inherited from OPENAI_BASE_URL or OPENROUTER_BASE_URL instead of being set explicitly via CHAT_BASE_URL. That leaves the local transport vulnerable to silent misrouting whenever those global env vars exist in the process.

Suggested fix
     @model_validator(mode="after")
     def _validate_local_transport_requirements(self) -> "ChatConfig":
@@
         if self.transport.name != "local":
             return self
+        explicit_chat_base_url = os.getenv("CHAT_BASE_URL")
+        explicit_base_url = bool(
+            explicit_chat_base_url or "base_url" in self.model_fields_set
+        )
+        if not explicit_base_url:
+            raise ValueError(
+                "CHAT_USE_LOCAL=true requires an explicit CHAT_BASE_URL "
+                "(an OpenAI-compatible /v1 endpoint, e.g. "
+                "http://host.docker.internal:11434/v1 for Ollama). "
+                "Do not rely on OPENAI_BASE_URL / OPENROUTER_BASE_URL fallbacks."
+            )
         if not self.base_url or self.base_url.rstrip("/") == OPENROUTER_BASE_URL.rstrip(
             "/"
         ):
             raise ValueError(
                 "CHAT_USE_LOCAL=true requires an explicit CHAT_BASE_URL "
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `autogpt_platform/backend/backend/copilot/config.py` around lines 706 - 744,
the validator _validate_local_transport_requirements currently only rejects the
OpenRouter default but allows base_url values inherited from other global
defaults; update it to also treat any inherited global defaults as invalid by
rejecting when self.base_url is missing or equals ANY of the known default
endpoints (not just OPENROUTER_BASE_URL) such as OPENAI_BASE_URL and
OPENROUTER_BASE_URL (after normalizing trailing slashes), so when
self.transport.name == "local" you raise the same ValueError if self.base_url is
falsy or matches OPENROUTER_BASE_URL or OPENAI_BASE_URL; keep the existing
api_key check and return self otherwise.
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/copilot/executor/processor.py (1)

169-172: ⚡ Quick win

Make the downgrade warning transport-agnostic.

This branch is driven by thinking_available, but the message hardcodes CHAT_USE_LOCAL=true. If another transport disables SDK support later, this log becomes misleading.

Proposed tweak
-                "Downgrading mode=extended_thinking to fast: SDK is "
-                "unavailable under the current transport (CHAT_USE_LOCAL=true)"
+                "Downgrading mode=extended_thinking to fast: SDK is "
+                "unavailable under the current transport configuration"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `autogpt_platform/backend/backend/copilot/executor/processor.py` around lines
169 - 172, the warning logged when downgrading "mode=extended_thinking" is
hardcoded to "CHAT_USE_LOCAL=true" and should be transport-agnostic; update the
logger.warning call in processor.py (where thinking_available is checked) to
mention the actual transport or reason instead of a hardcoded env var: include
the variable that represents the current transport or a generic phrase like
"current transport does not support SDK" (use the existing thinking_available
and any transport/transport_name variable available in the surrounding scope) so
the message accurately reflects why extended thinking was disabled.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `autogpt_platform/installer/setup-autogpt.sh`:
- Around line 124-187: The script's bootstrap_ollama function assumes curl
exists but check_prerequisites() doesn't validate it, causing unclear failures;
update check_prerequisites() to include curl (and any other tools used in
bootstrap_ollama like grep/tail if not already checked) and/or add an early
explicit check at the start of bootstrap_ollama to test for curl (use command -v
curl) and call handle_error with a clear message if missing; reference the
bootstrap_ollama function and the check_prerequisites function (and the
installer curl usage around the install.sh download and remote probes noted
around the 205-210 area) so the fix is applied where prerequisites are validated
and before any curl usage.
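The suggested prerequisite check can be sketched as follows — the function shape mirrors the review comment, while the real installer's error handling (handle_error) may differ:

```shell
#!/bin/sh
# Sketch: verify required tools exist before bootstrap_ollama uses
# them, so a missing curl fails with a clear message instead of an
# opaque mid-bootstrap error.
check_prerequisites() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "error: required tool '$tool' is not installed" >&2
      return 1
    fi
  done
  return 0
}

# Probe the tools the bootstrap path relies on before touching the network.
check_prerequisites sh && echo "prereqs ok"
```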

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a6be9cc9-8337-4cb9-a8d5-e47b64d44e0a

📥 Commits

Reviewing files that changed from the base of the PR and between d6bfe04 and 91fa945.

📒 Files selected for processing (12)
  • autogpt_platform/backend/.env.default
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/config.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/installer/setup-autogpt.sh
  • docs/platform/SUMMARY.md
  • docs/platform/copilot-local-llm.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • autogpt_platform/backend/.env.default
  • docs/platform/copilot-local-llm.md
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: check API types
  • GitHub Check: Cursor Bugbot
  • GitHub Check: test (3.11)
  • GitHub Check: type-check (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: type-check (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: end-to-end tests
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (typescript)
🧰 Additional context used
📓 Path-based instructions (3)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
autogpt_platform/backend/**/*_test.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/*_test.py: Use pytest with snapshot testing for API responses
Colocate test files with source files using *_test.py naming convention
Mock at boundaries — mock where the symbol is used, not where it's defined; after refactoring, update mock targets to match new module paths
Use AsyncMock from unittest.mock for async functions in tests
When writing tests, use Test-Driven Development (TDD): write failing tests marked with @pytest.mark.xfail before implementation, then remove the marker once the implementation is complete
When creating snapshots in tests, use poetry run pytest path/to/test.py --snapshot-update; always review snapshot changes with git diff before committing

Files:

  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
🧠 Learnings (9)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
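The MIME-type gate this learning describes can be sketched as follows. This is a minimal illustration, not the codebase's `tool_adapter.py`: the set name comes from the learning, but the routing helper and its return values are hypothetical.

```python
# Claude vision accepts raster formats only (PNG, JPEG, GIF, WebP), so
# SVG -- which is XML text -- must take the text path, not the vision path.
MULTIMODAL_TYPES = frozenset({
    "image/png",
    "image/jpeg",
    "image/gif",
    "image/webp",
    # "image/svg+xml" deliberately excluded: not a supported vision type
})


def route_attachment(mime_type: str) -> str:
    """Return which ingestion path an attachment should take (illustrative)."""
    return "vision" if mime_type in MULTIMODAL_TYPES else "text"
```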
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
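A minimal repro of the reachability rule above, using the `VirusScanError` example from the learning (the class body and handler here are illustrative): because the subclass inherits from `ValueError`, the `isinstance` check inside the `except ValueError` block is live code.

```python
class VirusScanError(ValueError):
    """Illustrative subclass: inherits from ValueError."""


def handle(exc: Exception) -> str:
    try:
        raise exc
    except ValueError as e:
        if isinstance(e, VirusScanError):  # reachable, NOT dead code
            return "quarantined"
        return "rejected"
```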
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/service.py
  • autogpt_platform/backend/backend/copilot/sdk/env_test.py
  • autogpt_platform/backend/backend/copilot/baseline/service.py
  • autogpt_platform/backend/backend/copilot/executor/processor.py
  • autogpt_platform/backend/backend/copilot/sdk/env.py
  • autogpt_platform/backend/backend/copilot/config_test.py
  • autogpt_platform/backend/backend/copilot/executor/processor_test.py
  • autogpt_platform/backend/backend/copilot/config.py
🔇 Additional comments (2)
autogpt_platform/backend/backend/copilot/executor/processor.py (1)

513-519: Nice wiring of capability-aware mode routing.

Passing thinking_available=config.thinking_available into resolve_use_sdk_for_mode keeps transport capability enforcement in a single decision point.

autogpt_platform/backend/backend/copilot/executor/processor_test.py (1)

137-179: Strong coverage for the new kill-switch semantics and legacy default behavior.

These tests cleanly pin the forced-baseline path (including warning visibility) and protect the default routing behavior when thinking_available is not passed.

Comment thread autogpt_platform/installer/setup-autogpt.sh
ntindle added 2 commits May 8, 2026 02:28
… simulator

Round-3 self-review follow-up. After the previous commit (2fb6255) gave
both transport branches a non-None ``extra_body`` value, the
``if extra_body is not None: create_kwargs["extra_body"] = extra_body``
guard inside the retry loop became dead code and the
``dict[str, Any] | None`` annotation became wider than the actual value.
Inline ``"extra_body": extra_body`` directly into ``create_kwargs`` and
narrow the annotation.
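The before/after this commit describes can be sketched roughly as below; the helper name and kwargs shape are illustrative, not the PR's actual code. Once both transport branches always produce a dict, the `None` guard is dead and the key can be inlined with a narrowed annotation.

```python
from typing import Any


def build_create_kwargs(model: str, extra_body: dict[str, Any]) -> dict[str, Any]:
    # Before: ``extra_body: dict[str, Any] | None`` plus a guard inside
    # the retry loop:
    #     if extra_body is not None:
    #         create_kwargs["extra_body"] = extra_body
    # After: annotation narrowed to ``dict[str, Any]`` and the key inlined.
    return {"model": model, "extra_body": extra_body}
```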
…meout_s under local transport

Cursor Bugbot caught: ``_LLM_TIMEOUT = 30`` was sized for OpenRouter
latency. Under ``CHAT_USE_LOCAL=true`` on CPU-only hardware (the launch
target for the no-API-key install), small JSON extractions on 0.6B–3B
Ollama models routinely take 30–120+ seconds, so the asyncio.wait_for
timeout fires before the model finishes and every Tally form submission
raises ``TimeoutError``.

Track ``chat_cfg.local_request_timeout_s`` (the same knob that gates the
AsyncOpenAI HTTP-client timeout, default 1800s) under local transport so
the application-layer wait_for and the underlying HTTP timeout stay
aligned. OpenRouter/cloud paths keep the original 30s budget.
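The timeout selection this commit describes can be sketched as a small helper; the field names follow the commit message, but the function itself is a stand-in for the real config plumbing.

```python
_LLM_TIMEOUT = 30  # seconds; sized for OpenRouter latency


def effective_llm_timeout(
    transport_name: str, local_request_timeout_s: float = 1800.0
) -> float:
    """Pick the asyncio.wait_for budget for an LLM call (illustrative)."""
    if transport_name == "local":
        # Keep the application-layer wait_for aligned with the AsyncOpenAI
        # HTTP-client timeout so slow CPU-only generations aren't cut off.
        return local_request_timeout_s
    return _LLM_TIMEOUT
```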
Comment thread autogpt_platform/backend/backend/data/tally.py
…_understanding under local transport

Cursor Bugbot caught the last call site that wasn't forwarding the
``options.num_ctx`` hint under local transport. Without it, Tally's
``extract_business_understanding`` lets Ollama's OpenAI shim cap context
at its 4 k default (ollama/ollama#2714), silently truncating long form
submissions and producing garbage JSON.

Mirror the pattern already in baseline/service.py + simulator.py +
activity_status_generator.py: under ``CHAT_USE_LOCAL=true`` add
``extra_body={"options": {"num_ctx": chat_cfg.local_num_ctx}}`` to the
chat-completions kwargs. Non-Ollama OpenAI-compat backends ignore
unknown ``options`` keys, so the forward is safe across the local stack.
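The `num_ctx` forwarding pattern the commit mirrors looks roughly like this; the knob name comes from the commit message, while the helper and default value are illustrative.

```python
from typing import Any


def local_extra_body(use_local: bool, local_num_ctx: int = 16384) -> dict[str, Any]:
    """Build the chat-completions extra_body for local transport (sketch)."""
    if use_local:
        # Ollama's OpenAI shim caps context at its 4k default unless the
        # hint rides in extra_body; other OpenAI-compat backends ignore
        # unknown ``options`` keys, so forwarding is safe.
        return {"options": {"num_ctx": local_num_ctx}}
    return {}
```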
Comment thread autogpt_platform/backend/backend/executor/activity_status_generator.py Outdated
…nd-cache architecture test

The factor-out helper introduced in 2fb6255 inherited the same
process-wide-@cached(...) loop-binding caveat the architecture test in
``backend/util/architecture_test.py`` already grandfathers for the
parent ``get_openai_client``. Add it to ``_KNOWN_OFFENDERS`` so CI stays
green; the underlying migration to ``per_loop_cached`` will lift both
together.

CI failure was: ``test_no_process_cached_loop_bound_clients`` on Python
3.11/3.12/3.13.
…T per call

Cursor flagged that ``simulator.py`` and ``activity_status_generator.py``
assigned the module-level ``_OPENROUTER_INCLUDE_USAGE_COST`` dict to
``extra_body`` by reference, while ``baseline/service.py`` already wraps
it in ``dict(...)``. The OpenAI SDK treats ``extra_body`` as opaque
pass-through, but an intermediate layer mutating the dict would leak
across coroutines and corrupt every future call. Match baseline's
defensive ``dict(_OPENROUTER_INCLUDE_USAGE_COST)`` idiom in both call
sites so all three OpenRouter spots use the same shape.
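The aliasing hazard this commit closes can be shown in miniature; the dict name matches the commit, the two helpers are illustrative. Returning the module-level dict by reference lets any downstream mutation leak into every future call, while `dict(...)` hands out a fresh top-level copy per call.

```python
_OPENROUTER_INCLUDE_USAGE_COST = {"usage": {"include": True}}


def extra_body_by_reference() -> dict:
    # Hazard: shared object; a caller's mutation persists globally.
    return _OPENROUTER_INCLUDE_USAGE_COST


def extra_body_defensive() -> dict:
    # Safe: fresh top-level dict per call (note: shallow copy, so the
    # nested "usage" dict is still shared -- enough for key additions).
    return dict(_OPENROUTER_INCLUDE_USAGE_COST)
```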
Comment thread autogpt_platform/backend/backend/executor/activity_status_generator.py Outdated
…al transport

Cursor flagged that ``generate_activity_status_for_execution`` silently
discarded the caller's ``model_name`` argument under
``CHAT_USE_LOCAL=true`` — any explicit reroute (admin override, test
fixture, future-feature consumer) was clobbered by ``chat_cfg.title_model``.

Only auto-override when the caller left ``model_name`` at its
cloud-routing default sentinel (``_DEFAULT_MODEL_NAME = "gpt-4o-mini"``);
explicit caller values pass through untouched. Mirrors the pattern
``ChatConfig._apply_local_aux_models`` already uses for the
``title_model`` / ``simulation_model`` field defaults.
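The sentinel-default pattern described above can be sketched as follows; the sentinel constant comes from the commit message, while the resolver function is a stand-in for the real call site.

```python
_DEFAULT_MODEL_NAME = "gpt-4o-mini"  # cloud-routing default sentinel


def resolve_model(model_name: str, use_local: bool, title_model: str) -> str:
    """Auto-override only when the caller kept the default (sketch)."""
    if use_local and model_name == _DEFAULT_MODEL_NAME:
        return title_model  # caller left the cloud-routing default
    return model_name  # explicit override (admin, test fixture) wins
```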
@github-actions

github-actions Bot commented May 9, 2026

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@CLAassistant

CLAassistant commented May 11, 2026

CLA assistant check
All committers have signed the CLA.

Bentlybro
Bentlybro previously approved these changes May 12, 2026

@Bentlybro Bentlybro left a comment


I tested locally and it works so approving 😄

@github-project-automation github-project-automation Bot moved this from 🆕 Needs initial review to 👍🏼 Mergeable in AutoGPT development kanban May 12, 2026
Resolve 4 conflicts in copilot config + clients:

- copilot/config.py: combine local-transport TransportProfile system
  (HEAD) with dev's _host_matches helper, ANTHROPIC_OPENAI_COMPAT_BASE_URL
  constant, and main/aux client credential properties. Update
  _DEFAULT_TITLE_MODEL default to anthropic/claude-haiku-4-5 (dev) so
  direct-Anthropic deployments pass aux validation. Merge SDK vendor
  validator to use transport-profile constraint plus dev's bare-slug
  validation. Add local-transport fast-path to _validate_aux_client_for_direct_main
  so aux under Ollama isn't held to the Anthropic-only title constraint.
- copilot/service.py: keep dev's _get_main_client + _get_aux_client split
  with the back-compat _get_openai_client alias and reset_clients helper,
  while preserving HEAD's local-transport timeout extension on both clients.
- copilot/baseline/service.py: combine HEAD's local-transport branch
  (num_ctx, no OR cost params) with dev's openrouter_active vs
  direct-Anthropic split for reasoning vs anthropic_thinking_extra_body.
- copilot/config_test.py: keep both branches' new tests; update
  test_cloud_transport_does_not_inherit assertion to the new default
  title model.

Boy-scout: drop unused `now = datetime.now()` in two store/db_test.py
helpers and an unused CopilotPermissions import in chat/routes.py
flagged by ruff F841.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@github-actions github-actions Bot removed the conflicts Automatically applied to PRs with merge conflicts label May 13, 2026
@github-actions

Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.

    extra_body.update(reasoning_param)
else:
    extra_body = {}
# Direct mode (non-OR, non-local): use native Anthropic thinking param.

OpenRouter stream_options sent to local backends

Medium Severity

Under local transport with the default use_openrouter=True, openrouter_active returns True (because api_key is truthy, base_url starts with "http", and use_openrouter is True). The extra_body logic at line 676 correctly branches on config.transport.name == "local", but the subsequent config.openrouter_active checks at lines 709 and 740 are not guarded by a local-transport exclusion. This causes OpenRouter-specific stream_options to be sent to Ollama/vLLM and cost tracking to use the wrong mode (treating local responses as OpenRouter). Stricter OpenAI-compat backends may reject the unexpected parameter.
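The missing guard Bugbot describes could take roughly this shape; the property names follow the report, but the helper, the payload, and the transport names are illustrative rather than the PR's actual fix.

```python
from typing import Any


def stream_kwargs(transport_name: str, openrouter_active: bool) -> dict[str, Any]:
    """Attach OpenRouter-only params with a local-transport exclusion (sketch)."""
    kwargs: dict[str, Any] = {}
    # A truthy api_key plus an http base_url can make openrouter_active
    # True even under local transport, so check the transport name first.
    if transport_name != "local" and openrouter_active:
        kwargs["stream_options"] = {"include_usage": True}
    return kwargs
```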

Additional Locations (1)

Reviewed by Cursor Bugbot for commit 3e4f669.

ntindle and others added 2 commits May 13, 2026 21:10
…changed

After the dev merge, ``normalize_model_for_transport`` was only
special-casing the OpenRouter branch; everything else (subscription,
direct_anthropic, local) fell into the Anthropic-only validator and
raised ``ValueError`` for any non-``claude-*`` bare slug.  Local
backends (Ollama, vLLM, LM Studio, ...) use operator-chosen slugs
like ``llama3.1:8b-instruct-q4_K_M`` that don't fit the
``vendor/model`` convention — and rewriting them to strip a missing
prefix would 404 every call against the local endpoint.

Adds ``local`` to the same passthrough branch as ``openrouter`` so
local slugs flow through unchanged.  Caught on a fresh Proxmox VM
running the ``--with-ollama`` installer: the very first chat raised
``'local' transport requires an Anthropic model slug, got
model='llama3.1:8b-instruct-q4_K_M'``.

Regression test pins the local-passthrough behaviour against a bare
``llama3.1:8b-instruct-q4_K_M`` slug, sitting alongside the existing
cloud-transport cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
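The passthrough branch this commit adds can be sketched like so; the function name and error text echo the commit message, while the validation details are simplified and illustrative.

```python
def normalize_model_for_transport(model: str, transport: str) -> str:
    """Sketch: local joins openrouter in the passthrough branch."""
    if transport in ("openrouter", "local"):
        # Operator-chosen slugs (e.g. "llama3.1:8b-instruct-q4_K_M") flow
        # through unchanged; rewriting them would 404 against the endpoint.
        return model
    if not (model.startswith("claude-") or model.startswith("anthropic/")):
        raise ValueError(
            f"{transport!r} transport requires an Anthropic model slug, "
            f"got model={model!r}"
        )
    return model
```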
A first-time user on CPU-only hardware will see AutoPilot appear hung
for 10-15 minutes on the first turn while llama.cpp prefills the
~3 k-token system prompt. The existing troubleshooting entry only
covered the 5-15 s model-load delay; this expands it to set
expectations on the ongoing per-turn prefill cost and points at the
Ollama log line that signals "prefill done, generating".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

There are 3 total unresolved issues (including 1 from previous review).


Reviewed by Cursor Bugbot for commit 3eef535.

"CHAT_BASE_URL + CHAT_API_KEY (local). See "
"docs/platform/copilot-local-llm.md."
)
model = chat_cfg.title_model

Tally extraction model silently changed for cloud transports

Medium Severity

extract_business_understanding now uses chat_cfg.title_model (defaulting to "anthropic/claude-haiku-4-5") for all transports, replacing the previous hardcoded "openai/gpt-4o-mini". The PR description states "Non-local transports keep their existing behaviour exactly", but this model change applies to the cloud (OpenRouter) path too. Claude Haiku is a different model with different pricing and output characteristics than GPT-4o-mini, so existing cloud deployments see a silent model swap for every Tally business-understanding extraction.



# ``activity_status_generator.py``. Defends against intermediate
# layers ever mutating ``extra_body`` and corrupting the shared
# module-level dict for every future call.
extra_body = dict(_OPENROUTER_INCLUDE_USAGE_COST)

Simulator records wrong provider label under local transport

Low Severity

The new local-transport branches in _call_llm_for_simulation and generate_activity_status_for_execution route LLM calls through the local Ollama/vLLM client, but downstream cost-tracking still records provider="open_router" (existing hardcoded value in _track_simulator_cost). Previously these callers returned None and skipped when no OpenRouter key was present, so the hardcoded label was never reached under the equivalent scenario. Now it is, producing misleading cost attribution in PlatformCostLog for local-transport deployments.
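A transport-aware label would address the attribution issue; the hardcoded `"open_router"` value and `PlatformCostLog` come from the report, but this helper is hypothetical and not the PR's fix.

```python
def cost_provider_label(transport_name: str) -> str:
    """Pick the provider label recorded in cost tracking (illustrative)."""
    # Previously hardcoded to "open_router" in _track_simulator_cost;
    # local-transport calls should not inherit that label.
    return "local" if transport_name == "local" else "open_router"
```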

Additional Locations (1)



Labels

  • documentation: Improvements or additions to documentation
  • platform/backend: AutoGPT Platform - Back end
  • size/xl

Projects

Status: 👍🏼 Mergeable

Development

Successfully merging this pull request may close these issues.

3 participants