
feat(platform): LD-configurable rate-limit multipliers + relative UI display #12910

Merged

majdyz merged 6 commits into dev from feat/configurable-tier-multipliers-and-relative-ui on Apr 24, 2026

Conversation

@majdyz
Contributor

@majdyz majdyz commented Apr 24, 2026

Summary

  • Backend (copilot/rate_limit) — TIER_MULTIPLIERS is now float-typed and resolvable through a new LaunchDarkly flag copilot-tier-multipliers. The integer defaults live on as _DEFAULT_TIER_MULTIPLIERS and are merged with whatever LD returns (missing / invalid keys inherit defaults; LD failures fall back to defaults without raising). get_global_rate_limits now honours the flag per-user and casts int(base * multiplier) so downstream microdollar math stays integer even when LD hands back a fractional multiplier (e.g. 8.5×). Cached for 60 s via @cached(ttl_seconds=60, maxsize=8, cache_none=False) to match the pattern in get_subscription_price_id.
  • Backend (api/features/v1) — SubscriptionStatusResponse gains tier_multipliers: dict[str, float], populated for the same set of tiers that make it into tier_costs so hidden tiers never get a rendered badge.
  • Frontend (SubscriptionTierSection) — drops the hard-coded "5x" / "20x" strings from TIERS and introduces formatRelativeMultiplier(tierKey, tierMultipliers): the lowest visible multiplier becomes the baseline (no badge), every other tier renders "N.Nx rate limits" relative to it. Fractional LD values like 8.5× round to one decimal.
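The merge-and-fallback behaviour described in the first bullet can be sketched roughly as follows. This is an illustrative standalone version, not the actual implementation: only the _DEFAULT_TIER_MULTIPLIERS name comes from the PR, the tier keys and values are made up, and the real code fetches the raw flag value from LaunchDarkly rather than taking it as an argument.

```python
import json

# Defaults mirror the role of _DEFAULT_TIER_MULTIPLIERS; values are illustrative.
_DEFAULT_TIER_MULTIPLIERS: dict[str, float] = {"FREE": 1.0, "PLUS": 5.0, "PRO": 20.0}


def merge_tier_multipliers(raw_flag_value):
    """Merge an LD JSON override onto the defaults; any failure falls back."""
    merged = dict(_DEFAULT_TIER_MULTIPLIERS)
    if raw_flag_value is None:
        return merged
    try:
        overrides = json.loads(raw_flag_value)
    except (TypeError, ValueError):
        return merged  # invalid JSON: keep defaults, never raise
    if not isinstance(overrides, dict):
        return merged
    for tier, value in overrides.items():
        # Unknown tiers and non-positive / non-numeric values inherit defaults.
        if (
            tier in merged
            and isinstance(value, (int, float))
            and not isinstance(value, bool)
            and value > 0
        ):
            merged[tier] = float(value)
    return merged
```

Per-key merging means a partial LD payload like `{"PLUS": 8.5}` overrides only that tier while every other tier keeps its default.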

The admin rate-limit page (/admin/rate-limits) keeps the static TIER_MULTIPLIERS defaults — it's admin-facing, infrequently viewed, and fine to lag the LD value until next deploy (noted in-code).

Related upstream: this PR stacks logically after #12903 (which added the MAX tier + LD-configurable prices) but does not require it — each PR can merge in either order. No schema changes, no migration.

Test plan

  • poetry run black backend/... --check + poetry run ruff check backend/... pass
  • pnpm format pass (modified files unchanged)
  • New backend tests: TestGetTierMultipliers (defaults, LD override, invalid JSON, unknown tier / non-positive values, LD failure) — 5 / 5 pass
  • New backend test: TestGetGlobalRateLimitsWithTiers::test_ld_override_applies_fractional_multiplier — pass
  • backend/copilot/rate_limit_test.py — non-DB subset 72 / 72 pass; TestGetUserTier / TestSetUserTier require the full test-server fixture (Redis + Prisma) and are not run in this worktree — same behaviour on clean dev
  • backend/api/features/subscription_routes_test.py — 40 / 40 pass (includes new test_get_subscription_status_tier_multipliers_ld_override)
  • Frontend vitest targeted suite — 51 / 51 pass
    • helpers.test.ts — new formatRelativeMultiplier cases (lowest-tier null, integer ratio, fractional ratio, hidden-tier null, fractional LD)
    • SubscriptionTierSection.test.tsx — three new cases for relative badges, rebasing when the lowest tier is hidden, fractional LD overrides

feat(platform): LD-configurable rate-limit multipliers + relative UI display

- copilot/rate_limit: TIER_MULTIPLIERS is now float-typed and resolved via new
  LD flag copilot-tier-multipliers (JSON per-tier override). Existing integer
  defaults preserved as _DEFAULT_TIER_MULTIPLIERS fallback. get_global_rate_limits
  honours the flag per-user and casts base*multiplier back to int so microdollar
  math stays integer.
- api v1: SubscriptionStatusResponse.tier_multipliers exposes effective
  multipliers so the frontend can render rate-limit badges without knowing
  backend constants.
- frontend SubscriptionTierSection: drop hardcoded "5x" / "20x" strings. A new
  helper formats each tier as "N.Nx rate limits" relative to the lowest visible
  tier; the lowest tier's badge is omitted entirely (it's the baseline).
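The baseline-relative arithmetic the commit message describes can be sketched as follows. The real formatRelativeMultiplier is TypeScript; to keep this page's examples in one language, the behaviour is rendered here in Python, inferred from the description rather than copied from the actual helper:

```python
def format_relative_multiplier(tier_key, tier_multipliers):
    """Render a tier's rate-limit badge relative to the lowest visible tier.

    Returns None for the baseline (lowest positive) tier and for tiers absent
    from the map; otherwise returns "N.Nx rate limits" rounded to one decimal.
    """
    value = tier_multipliers.get(tier_key)
    positives = [m for m in tier_multipliers.values() if m > 0]
    if value is None or value <= 0 or not positives:
        return None  # hidden tier, or no usable baseline at all
    baseline = min(positives)
    if value == baseline:
        return None  # the baseline tier gets no badge
    return f"{round(value / baseline, 1)}x rate limits"
```

Because the baseline is the minimum of the *visible* multipliers, hiding the lowest tier automatically rebases every remaining badge, matching the rebasing test case mentioned in the test plan.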
@majdyz majdyz requested a review from a team as a code owner April 24, 2026 13:31
@majdyz majdyz requested review from ntindle and removed request for a team April 24, 2026 13:31
@majdyz majdyz requested a review from kcze April 24, 2026 13:31
@github-project-automation github-project-automation Bot moved this to 🆕 Needs initial review in AutoGPT development kanban Apr 24, 2026
@github-actions github-actions Bot added platform/frontend AutoGPT Platform - Front end platform/backend AutoGPT Platform - Back end labels Apr 24, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds LaunchDarkly-overridable, float-valued tier multipliers used in rate-limit computation; backend exposes a tier_multipliers map on the subscription status API; frontend computes and displays relative rate-limit badges from that map. Defaults and tolerant fallbacks preserve behavior when flags are missing or invalid.

Changes

  • Rate limit core & tests — autogpt_platform/backend/.../copilot/rate_limit.py, autogpt_platform/backend/.../copilot/rate_limit_test.py: Introduce float _DEFAULT_TIER_MULTIPLIERS and async get_tier_multipliers() that reads/validates LD copilot-tier-multipliers overrides; merge overrides onto defaults; apply multipliers in get_global_rate_limits() and truncate to int. Tests cover LD overrides, invalid values, cache clearing, and fractional truncation.
  • Subscription API & tests — autogpt_platform/backend/.../api/features/v1.py, autogpt_platform/backend/.../api/features/subscription_routes_test.py: Add tier_multipliers to SubscriptionStatusResponse; /credits/subscription calls get_tier_multipliers() and includes the map in responses. Tests stub the multiplier lookup and add an LD-override scenario asserting propagation and hiding of tiers without prices.
  • Feature flag enum — autogpt_platform/backend/.../util/feature_flag.py: Add Flag.COPILOT_TIER_MULTIPLIERS = "copilot-tier-multipliers".
  • Frontend: tier UI & helpers + tests — autogpt_platform/frontend/.../SubscriptionTierSection/SubscriptionTierSection.tsx, .../helpers.ts, .../__tests__/*, .../helpers.test.ts: Remove static per-tier multiplier metadata; add formatRelativeMultiplier(tierKey, tierMultipliers) to compute relative badges against the lowest visible positive multiplier, omit the baseline 1.0x, and support fractional multipliers. Component consumes subscription.tier_multipliers. Tests updated/added for badge logic and fractional values.
  • OpenAPI schema — autogpt_platform/frontend/src/app/api/openapi.json: Document new optional tier_multipliers: Record<string, number> property on SubscriptionStatusResponse.
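The 60 s flag cache referenced in the summary (@cached(ttl_seconds=60, maxsize=8, cache_none=False)) follows a plain TTL-cache pattern. A minimal synchronous sketch is below; the project's real decorator is async-aware and its internals are not shown in this PR, so everything here is illustrative, including the insertion-order eviction policy:

```python
import time
from functools import wraps


def cached(ttl_seconds, maxsize=8, cache_none=False):
    """Tiny TTL cache sketch: keyed by positional args, entries expire after ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (inserted_at, result)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh entry: skip the underlying call
            result = fn(*args)
            if result is not None or cache_none:
                if len(store) >= maxsize:
                    store.pop(next(iter(store)))  # evict oldest insertion
                store[args] = (now, result)
            return result

        wrapper.cache_clear = store.clear  # lets tests reset state between cases
        return wrapper
    return decorator
```

cache_none=False means a failed lookup that resolves to None is retried on the next call instead of being pinned for the full TTL, which matches why the tests clear the cache between LD-patching scenarios.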

Sequence Diagram

sequenceDiagram
    actor User
    participant Frontend as Frontend Client
    participant API as API Server
    participant RateLimit as RateLimit Module
    participant LD as LaunchDarkly
    participant Cache as LD Cache

    User->>Frontend: Open credits page
    Frontend->>API: GET /credits/subscription
    API->>RateLimit: get_tier_multipliers()
    RateLimit->>Cache: check cached flag
    alt cache hit
        Cache-->>RateLimit: return parsed overrides
    else cache miss
        RateLimit->>LD: fetch "copilot-tier-multipliers"
        LD-->>RateLimit: return flag value or error
        RateLimit->>RateLimit: parse/validate overrides, merge onto defaults
        RateLimit->>Cache: store parsed overrides or sentinel
    end
    RateLimit-->>API: tier multipliers map
    API->>Frontend: SubscriptionStatusResponse (includes tier_multipliers)
    Frontend->>Frontend: formatRelativeMultiplier(...) per visible tier
    Frontend-->>User: Render tiers and relative rate-limit badges
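The final backend steps in the diagram (apply the per-tier multiplier, truncate to int) can be illustrated with a small standalone function. The function name and base limits below are made up for the example; only the truncation behaviour reflects the PR:

```python
def apply_tier_multiplier(base_daily, base_weekly, tier, multipliers):
    """Scale base limits by the tier's multiplier, truncating to int so
    downstream microdollar arithmetic stays integral."""
    m = multipliers.get(tier, 1.0)  # unknown tiers fall back to 1x
    return int(base_daily * m), int(base_weekly * m)


# A fractional LD override like 8.5x still yields integer limits:
apply_tier_multiplier(100, 700, "PLUS", {"PLUS": 8.5})  # (850, 5950)
```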

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • ntindle
  • kcze
  • 0ubbe

Poem

🐇 I hopped through flags both bright and new,
Multipliers stretched—some fraction, some true.
LD whispers numbers, backend tucks them in,
Frontend paints badges with a tiny grin.
Hop, hop—rate limits wear their win!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 62.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check — ✅ Passed: the PR title clearly and concisely describes the main changes (LaunchDarkly-configurable rate-limit multipliers and relative UI display), matching the substantial backend and frontend modifications.
  • Description check — ✅ Passed: the PR description comprehensively documents all major changes across backend and frontend, explains the implementation rationale, notes test coverage, and clarifies stacking/dependency relationships with other PRs.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

github-actions Bot commented Apr 24, 2026

🔍 PR Overlap Detection

This check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early.

🔴 Merge Conflicts Detected

The following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.

  • feat(backend): tier-based workspace file storage limits #12780 (ntindle · updated 18h ago)

    • autogpt_platform/backend/backend/api/features/chat/routes.py (3 conflicts, ~14 lines)
    • autogpt_platform/backend/backend/api/features/chat/routes_test.py (2 conflicts, ~734 lines)
    • autogpt_platform/backend/backend/api/features/subscription_routes_test.py (1 conflict, ~209 lines)
    • autogpt_platform/backend/backend/api/features/v1.py (6 conflicts, ~50 lines)
    • autogpt_platform/backend/backend/api/features/workspace/routes_test.py (1 conflict, ~265 lines)
    • autogpt_platform/backend/backend/blocks/_base.py (1 conflict, ~16 lines)
    • autogpt_platform/backend/backend/blocks/orchestrator.py (2 conflicts, ~29 lines)
    • autogpt_platform/backend/backend/copilot/baseline/service.py (11 conflicts, ~351 lines)
    • autogpt_platform/backend/backend/copilot/baseline/service_unit_test.py (2 conflicts, ~919 lines)
    • autogpt_platform/backend/backend/copilot/config.py (2 conflicts, ~51 lines)
    • autogpt_platform/backend/backend/copilot/model_test.py (2 conflicts, ~183 lines)
    • autogpt_platform/backend/backend/copilot/prompting.py (4 conflicts, ~69 lines)
    • autogpt_platform/backend/backend/copilot/prompting_test.py (1 conflict, ~28 lines)
    • autogpt_platform/backend/backend/copilot/rate_limit.py (2 conflicts, ~69 lines)
    • autogpt_platform/backend/backend/copilot/sdk/response_adapter.py (4 conflicts, ~267 lines)
    • autogpt_platform/backend/backend/copilot/sdk/response_adapter_test.py (1 conflict, ~4 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service.py (11 conflicts, ~133 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service_helpers_test.py (1 conflict, ~129 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service_test.py (1 conflict, ~225 lines)
    • autogpt_platform/backend/backend/copilot/service.py (2 conflicts, ~13 lines)
    • autogpt_platform/backend/backend/copilot/token_tracking.py (3 conflicts, ~26 lines)
    • autogpt_platform/backend/backend/copilot/token_tracking_test.py (1 conflict, ~9 lines)
    • autogpt_platform/backend/backend/copilot/tools/__init__.py (1 conflict, ~4 lines)
    • autogpt_platform/backend/backend/copilot/tools/edit_agent.py (1 conflict, ~21 lines)
    • autogpt_platform/backend/backend/copilot/tools/helpers.py (2 conflicts, ~62 lines)
    • autogpt_platform/backend/backend/copilot/tools/models.py (2 conflicts, ~42 lines)
    • autogpt_platform/backend/backend/data/credit.py (3 conflicts, ~200 lines)
    • autogpt_platform/backend/backend/data/credit_subscription_test.py (7 conflicts, ~1279 lines)
    • autogpt_platform/backend/backend/data/model.py (1 conflict, ~12 lines)
    • autogpt_platform/backend/backend/executor/manager.py (4 conflicts, ~55 lines)
    • autogpt_platform/frontend/src/app/(platform)/build/components/BuilderChatPanel/__tests__/helpers.test.ts (deleted here, modified there)
    • autogpt_platform/frontend/src/app/(platform)/build/components/BuilderChatPanel/helpers.ts (deleted here, modified there)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ArtifactPanel/components/ArtifactContent.tsx (1 conflict, ~9 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ArtifactPanel/components/__tests__/reactArtifactPreview.test.ts (1 conflict, ~10 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ArtifactPanel/helpers.ts (1 conflict, ~7 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatContainer/ChatContainer.tsx (1 conflict, ~5 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatContainer/useAutoOpenArtifacts.test.ts (1 conflict, ~52 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatContainer/useAutoOpenArtifacts.ts (1 conflict, ~12 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (2 conflicts, ~10 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/UsageLimits/__tests__/UsagePanelContentRender.test.tsx (2 conflicts, ~42 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/helpers/convertChatSessionToUiMessages.ts (1 conflict, ~5 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useChatSession.ts (1 conflict, ~8 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts (2 conflicts, ~16 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotStream.ts (1 conflict, ~4 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useLoadMoreMessages.ts (2 conflicts, ~42 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx (7 conflicts, ~135 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/useSubscriptionTierSection.ts (1 conflict, ~4 lines)
    • autogpt_platform/frontend/src/app/api/openapi.json (4 conflicts, ~26 lines)
    • docs/integrations/block-integrations/llm.md (7 conflicts, ~35 lines)
    • docs/integrations/block-integrations/misc.md (1 conflict, ~5 lines)
  • fix(copilot): prevent 524 timeout on chat deletion by deferring cleanup #12668 (Otto-AGPT · updated 7d ago)

    • autogpt_platform/backend/backend/api/features/library/db.py (5 conflicts, ~67 lines)
    • autogpt_platform/backend/backend/api/features/library/model.py (1 conflict, ~4 lines)
    • autogpt_platform/backend/backend/api/features/subscription_routes_test.py (22 conflicts, ~1047 lines)
    • autogpt_platform/backend/backend/api/features/v1.py (10 conflicts, ~233 lines)
    • autogpt_platform/backend/backend/copilot/baseline/service.py (2 conflicts, ~15 lines)
    • autogpt_platform/backend/backend/copilot/model_test.py (1 conflict, ~5 lines)
    • autogpt_platform/backend/backend/copilot/prompting.py (1 conflict, ~5 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service.py (3 conflicts, ~51 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service_helpers_test.py (1 conflict, ~129 lines)
    • autogpt_platform/backend/backend/copilot/transcript.py (1 conflict, ~11 lines)
    • autogpt_platform/backend/backend/data/credit.py (12 conflicts, ~783 lines)
    • autogpt_platform/backend/backend/data/credit_subscription_test.py (24 conflicts, ~1633 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/PulseChips/usePulseChips.ts (1 conflict, ~13 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/usageHelpers.ts (1 conflict, ~9 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/components/AgentBriefingPanel/BriefingTabContent.tsx (9 conflicts, ~147 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/components/AgentBriefingPanel/StatsGrid.tsx (2 conflicts, ~9 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/components/ContextualActionButton/ContextualActionButton.tsx (2 conflicts, ~12 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/components/SitrepItem/SitrepItem.tsx (2 conflicts, ~15 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/components/SitrepItem/useSitrepItems.ts (4 conflicts, ~97 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/hooks/useAgentStatus.ts (2 conflicts, ~10 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/hooks/useLibraryFleetSummary.ts (7 conflicts, ~57 lines)
    • autogpt_platform/frontend/src/app/(platform)/library/types.ts (1 conflict, ~4 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx (11 conflicts, ~185 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/__tests__/SubscriptionTierSection.test.tsx (21 conflicts, ~486 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/useSubscriptionTierSection.ts (4 conflicts, ~60 lines)
    • autogpt_platform/frontend/src/app/api/openapi.json (2 conflicts, ~40 lines)
    • docs/integrations/block-integrations/llm.md (7 conflicts, ~35 lines)
    • docs/integrations/block-integrations/misc.md (1 conflict, ~5 lines)
  • feat(platform): Add AllQuiet alert integration alongside Discord alerts #11234 (ntindle · updated 3d ago)

    • autogpt_platform/backend/backend/api/features/chat/routes.py (2 conflicts, ~10 lines)
    • autogpt_platform/backend/backend/api/features/chat/routes_test.py (2 conflicts, ~734 lines)
    • autogpt_platform/backend/backend/api/features/subscription_routes_test.py (1 conflict, ~209 lines)
    • autogpt_platform/backend/backend/api/features/v1.py (6 conflicts, ~50 lines)
    • autogpt_platform/backend/backend/copilot/baseline/service.py (10 conflicts, ~328 lines)
    • autogpt_platform/backend/backend/copilot/baseline/service_unit_test.py (2 conflicts, ~919 lines)
    • autogpt_platform/backend/backend/copilot/config.py (2 conflicts, ~73 lines)
    • autogpt_platform/backend/backend/copilot/model_test.py (2 conflicts, ~183 lines)
    • autogpt_platform/backend/backend/copilot/prompting.py (4 conflicts, ~69 lines)
    • autogpt_platform/backend/backend/copilot/sdk/response_adapter.py (4 conflicts, ~267 lines)
    • autogpt_platform/backend/backend/copilot/sdk/response_adapter_test.py (1 conflict, ~4 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service.py (4 conflicts, ~75 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service_helpers_test.py (1 conflict, ~129 lines)
    • autogpt_platform/backend/backend/copilot/sdk/service_test.py (1 conflict, ~225 lines)
    • autogpt_platform/backend/backend/copilot/tools/__init__.py (1 conflict, ~4 lines)
    • autogpt_platform/backend/backend/copilot/tools/edit_agent.py (1 conflict, ~21 lines)
    • autogpt_platform/backend/backend/copilot/tools/helpers.py (2 conflicts, ~62 lines)
    • autogpt_platform/backend/backend/data/credit.py (3 conflicts, ~200 lines)
    • autogpt_platform/backend/backend/data/credit_subscription_test.py (7 conflicts, ~1279 lines)
    • autogpt_platform/frontend/src/app/(platform)/build/components/BuilderChatPanel/__tests__/helpers.test.ts (deleted here, modified there)
    • autogpt_platform/frontend/src/app/(platform)/build/components/BuilderChatPanel/helpers.ts (deleted here, modified there)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatContainer/ChatContainer.tsx (1 conflict, ~5 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (2 conflicts, ~10 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/helpers/convertChatSessionToUiMessages.ts (1 conflict, ~5 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts (2 conflicts, ~16 lines)
    • autogpt_platform/frontend/src/app/(platform)/copilot/useLoadMoreMessages.ts (2 conflicts, ~42 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx (7 conflicts, ~132 lines)
    • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/useSubscriptionTierSection.ts (1 conflict, ~4 lines)
    • autogpt_platform/frontend/src/app/api/openapi.json (2 conflicts, ~14 lines)
    • docs/integrations/block-integrations/llm.md (7 conflicts, ~35 lines)
    • docs/integrations/block-integrations/misc.md (1 conflict, ~5 lines)
  • feat(platform): estimate CoPilot turn cost and require approval for high-cost requests #12877 (Rushi-Balapure · updated 2d ago)

    • 📁 autogpt_platform/backend/backend/
      • api/features/chat/routes.py (2 conflicts, ~32 lines)
      • util/feature_flag.py (1 conflict, ~10 lines)

🟢 Low Risk — File Overlap Only

These PRs touch the same files but different sections.

Summary: 4 conflict(s), 0 medium risk, 4 low risk (out of 8 PRs with file overlap)


Auto-generated on push. Ignores: openapi.json, lock files.

@codecov

codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 98.02632% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.19%. Comparing base (2cb52e5) to head (c5bf466).
⚠️ Report is 3 commits behind head on dev.

Additional details and impacted files
@@            Coverage Diff             @@
##              dev   #12910      +/-   ##
==========================================
+ Coverage   68.12%   68.19%   +0.06%     
==========================================
  Files        1934     1955      +21     
  Lines      149285   149632     +347     
  Branches    15558    15589      +31     
==========================================
+ Hits       101698   102036     +338     
  Misses      44564    44564              
- Partials     3023     3032       +9     
Flag Coverage Δ
platform-backend 77.80% <98.59%> (+0.03%) ⬆️
platform-frontend 25.86% <90.00%> (+0.59%) ⬆️
platform-frontend-e2e 30.71% <ø> (+0.30%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Components Coverage Δ
Platform Backend 77.80% <98.59%> (+0.03%) ⬆️
Platform Frontend 32.88% <90.00%> (+0.48%) ⬆️
AutoGPT Libs ∅ <ø> (∅)
Classic AutoGPT 28.43% <ø> (ø)

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
autogpt_platform/frontend/src/app/api/openapi.json (1)

15999-16036: ⚠️ Potential issue | 🟡 Minor

Make tier_multipliers required in the response contract.

The field is documented as frontend-consumed output for rate-limit badges, but it is currently optional in the schema. That weakens the API guarantee and generates optional client types.

🛠️ Proposed OpenAPI contract update
         "required": [
           "tier",
           "monthly_cost",
           "tier_costs",
+          "tier_multipliers",
           "proration_credit_cents"
         ],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/frontend/src/app/api/openapi.json` around lines 15999 -
16036, The response schema currently documents "tier_multipliers" but doesn't
include it in the object's "required" list; update the OpenAPI object that
defines the subscription/credits response by adding "tier_multipliers" to the
"required" array (alongside "tier", "monthly_cost", "tier_costs",
"proration_credit_cents") so the contract guarantees its presence while keeping
the existing definition of the "tier_multipliers" property (type: object,
additionalProperties: number).
🧹 Nitpick comments (4)
autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx (1)

182-192: Optional: Inline IIFE adds noise; hoist the label above the JSX.

The (() => { ... })() pattern works but forces an inner arrow + return just to conditionally render. Computing label before the return (...) of the .map callback (or above the outer <div> in the tier card) keeps the JSX flat and avoids nesting an extra lambda inside the render tree.

♻️ Suggested refactor
         }`}
       >
+        const multiplierLabel = formatRelativeMultiplier(
+          tier.key,
+          subscription.tier_multipliers ?? {},
+        );
         <div className="mb-2 flex items-center justify-between">
           ...
         </div>

         <p className="mb-1 text-2xl font-bold">
           {formatCost(cost, tier.key)}
         </p>
-        {(() => {
-          const label = formatRelativeMultiplier(
-            tier.key,
-            subscription.tier_multipliers ?? {},
-          );
-          return label ? (
-            <p className="mb-1 text-sm font-medium text-neutral-600 dark:text-neutral-400">
-              {label}
-            </p>
-          ) : null;
-        })()}
+        {multiplierLabel ? (
+          <p className="mb-1 text-sm font-medium text-neutral-600 dark:text-neutral-400">
+            {multiplierLabel}
+          </p>
+        ) : null}

(Declare const multiplierLabel = ... at the top of the .map callback alongside isCurrent, cost, etc., rather than inside the JSX.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx`
around lines 182 - 192, In SubscriptionTierSection.tsx, remove the inline IIFE
and compute the multiplier label before returning JSX in the .map callback: call
formatRelativeMultiplier(tier.key, subscription.tier_multipliers ?? {}) once at
the top of the map (e.g., const multiplierLabel = ... alongside existing
isCurrent/cost vars) and then conditionally render multiplierLabel inside the
JSX (replace label with multiplierLabel); this hoists the computation out of the
render tree and keeps the JSX flat while still using the existing
formatRelativeMultiplier function and tier/subscription values.
autogpt_platform/backend/backend/api/features/subscription_routes_test.py (1)

94-103: Move _DEFAULT_TIER_MULTIPLIERS import to module scope.

Line 97 introduces a local import for a non-heavy dependency. Prefer a top-level import and keep the fixture body pure setup logic.

♻️ Suggested cleanup
 from prisma.enums import SubscriptionTier
+from backend.copilot.rate_limit import _DEFAULT_TIER_MULTIPLIERS
...
-    from backend.copilot.rate_limit import _DEFAULT_TIER_MULTIPLIERS
-
     mocker.patch(
         "backend.api.features.v1.get_tier_multipliers",
         new_callable=AsyncMock,
         return_value=dict(_DEFAULT_TIER_MULTIPLIERS),
     )

As per coding guidelines: "Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/api/features/subscription_routes_test.py`
around lines 94 - 103, The local import of _DEFAULT_TIER_MULTIPLIERS inside the
fixture should be moved to module scope; change the test to import
_DEFAULT_TIER_MULTIPLIERS at the top of
autogpt_platform/backend/backend/api/features/subscription_routes_test.py and
remove the inline import in the fixture so the fixture only contains setup logic
(keep the mocker.patch call that patches
"backend.api.features.v1.get_tier_multipliers" returning
dict(_DEFAULT_TIER_MULTIPLIERS) unchanged).
autogpt_platform/backend/backend/copilot/rate_limit_test.py (1)

395-399: Remove repeated # type: ignore suppressors in cache-clear fixtures.

Line 398, Line 755, Line 935, and Line 1153 use # type: ignore[attr-defined]. Please replace these with a typed helper instead of suppressing checks.

♻️ Suggested cleanup
+from typing import Protocol, cast
+
+class _CacheClearable(Protocol):
+    def cache_clear(self) -> None: ...
+
+def _clear_tier_multiplier_flag_cache() -> None:
+    cast(_CacheClearable, _fetch_tier_multipliers_flag).cache_clear()
...
 class TestGetTierMultipliers:
     @pytest.fixture(autouse=True)
     def _clear_flag_cache(self):
         """Clear the LD flag cache between tests so patches don't leak."""
-        _fetch_tier_multipliers_flag.cache_clear()  # type: ignore[attr-defined]
+        _clear_tier_multiplier_flag_cache()
...
 class TestGetGlobalRateLimitsWithTiers:
     @pytest.fixture(autouse=True)
     def _clear_flag_cache(self):
         """Clear the LD flag cache between tests so patches don't leak."""
-        _fetch_tier_multipliers_flag.cache_clear()  # type: ignore[attr-defined]
+        _clear_tier_multiplier_flag_cache()
...
 class TestTierLimitsRespected:
     @pytest.fixture(autouse=True)
     def _clear_flag_cache(self):
-        _fetch_tier_multipliers_flag.cache_clear()  # type: ignore[attr-defined]
+        _clear_tier_multiplier_flag_cache()
...
 class TestTierLimitsEnforced:
     @pytest.fixture(autouse=True)
     def _clear_flag_cache(self):
-        _fetch_tier_multipliers_flag.cache_clear()  # type: ignore[attr-defined]
+        _clear_tier_multiplier_flag_cache()

As per coding guidelines: "Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead".

Also applies to: 752-756, 933-936, 1150-1153

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit_test.py` around lines 395
- 399, Replace repeated "# type: ignore[attr-defined]" suppressors by
introducing a small typed helper using a Protocol that exposes cache_clear, e.g.
define a CacheClearable Protocol with def cache_clear(self) -> None and a helper
function clear_cache(item: CacheClearable) -> None that calls
item.cache_clear(); then update the fixtures (the autouse fixture that calls
_fetch_tier_multipliers_flag.cache_clear and the other similar cache_clear
calls) to call clear_cache(_fetch_tier_multipliers_flag) (and the other cached
functions) instead of using the inline type ignore so static type checkers see a
proper typed call.
autogpt_platform/backend/backend/copilot/rate_limit.py (1)

780-791: Optional: parallelize get_user_tier and get_tier_multipliers.

Both calls are independent network-bound lookups (DB + LD flag fetch respectively). In the warm-cache case the sequential awaits are free, but on cold paths (pod start, cache miss, 60s TTL expiry) this adds a serialized round-trip to a call that already did asyncio.gather for the daily/weekly flags above. A gather here would keep the hot path identical and shave cold-path latency.

♻️ Suggested refactor
-    tier = await get_user_tier(user_id)
-    multipliers = await get_tier_multipliers(user_id)
+    tier, multipliers = await asyncio.gather(
+        get_user_tier(user_id),
+        get_tier_multipliers(user_id),
+    )
     multiplier = multipliers.get(tier, 1.0)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit.py` around lines 780 -
791, The calls to get_user_tier(user_id) and get_tier_multipliers(user_id) are
independent and should be executed concurrently to reduce cold-path latency;
replace the sequential awaits with an asyncio.gather call (e.g., tier,
multipliers = await asyncio.gather(get_user_tier(user_id),
get_tier_multipliers(user_id))) then proceed to compute multiplier and apply
int(daily * multiplier) / int(weekly * multiplier) as before; ensure asyncio is
imported where rate_limit.py runs and preserve identical fallback/semantics and
error propagation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@autogpt_platform/frontend/src/app/api/openapi.json`:
- Around line 15999-16036: The response schema currently documents
"tier_multipliers" but doesn't include it in the object's "required" list;
update the OpenAPI object that defines the subscription/credits response by
adding "tier_multipliers" to the "required" array (alongside "tier",
"monthly_cost", "tier_costs", "proration_credit_cents") so the contract
guarantees its presence while keeping the existing definition of the
"tier_multipliers" property (type: object, additionalProperties: number).

---

Nitpick comments:
In `@autogpt_platform/backend/backend/api/features/subscription_routes_test.py`:
- Around line 94-103: The local import of _DEFAULT_TIER_MULTIPLIERS inside the
fixture should be moved to module scope; change the test to import
_DEFAULT_TIER_MULTIPLIERS at the top of
autogpt_platform/backend/backend/api/features/subscription_routes_test.py and
remove the inline import in the fixture so the fixture only contains setup logic
(keep the mocker.patch call that patches
"backend.api.features.v1.get_tier_multipliers" returning
dict(_DEFAULT_TIER_MULTIPLIERS) unchanged).

In `@autogpt_platform/backend/backend/copilot/rate_limit_test.py`:
- Around line 395-399: Replace repeated "# type: ignore[attr-defined]"
suppressors by introducing a small typed helper using a Protocol that exposes
cache_clear, e.g. define a CacheClearable Protocol with def cache_clear(self) ->
None and a helper function clear_cache(item: CacheClearable) -> None that calls
item.cache_clear(); then update the fixtures (the autouse fixture that calls
_fetch_tier_multipliers_flag.cache_clear and the other similar cache_clear
calls) to call clear_cache(_fetch_tier_multipliers_flag) (and the other cached
functions) instead of using the inline type ignore so static type checkers see a
proper typed call.

In `@autogpt_platform/backend/backend/copilot/rate_limit.py`:
- Around line 780-791: The calls to get_user_tier(user_id) and
get_tier_multipliers(user_id) are independent and should be executed
concurrently to reduce cold-path latency; replace the sequential awaits with an
asyncio.gather call (e.g., tier, multipliers = await
asyncio.gather(get_user_tier(user_id), get_tier_multipliers(user_id))) then
proceed to compute multiplier and apply int(daily * multiplier) / int(weekly *
multiplier) as before; ensure asyncio is imported where rate_limit.py runs and
preserve identical fallback/semantics and error propagation.

In
`@autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx`:
- Around line 182-192: In SubscriptionTierSection.tsx, remove the inline IIFE
and compute the multiplier label before returning JSX in the .map callback: call
formatRelativeMultiplier(tier.key, subscription.tier_multipliers ?? {}) once at
the top of the map (e.g., const multiplierLabel = ... alongside existing
isCurrent/cost vars) and then conditionally render multiplierLabel inside the
JSX (replace label with multiplierLabel); this hoists the computation out of the
render tree and keeps the JSX flat while still using the existing
formatRelativeMultiplier function and tier/subscription values.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 571d5d3c-083d-4498-89b9-be401067c166

📥 Commits

Reviewing files that changed from the base of the PR and between 2cb52e5 and e2f2080.

📒 Files selected for processing (10)
  • autogpt_platform/backend/backend/api/features/subscription_routes_test.py
  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
  • autogpt_platform/backend/backend/util/feature_flag.py
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/__tests__/SubscriptionTierSection.test.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.test.ts
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
  • autogpt_platform/frontend/src/app/api/openapi.json

Comment thread autogpt_platform/backend/backend/copilot/rate_limit.py
Comment thread autogpt_platform/backend/backend/copilot/rate_limit.py
majdyz added 2 commits April 24, 2026 20:46
- rate_limit.get_tier_multipliers: drop unused user_id param — evaluation is system-wide; a future per-cohort move can re-introduce it at the call site.
- _fetch_tier_multipliers_flag: maxsize 8 → 1 (no-arg, one entry).
- helpers.ts formatRelativeMultiplier: guard against mine ≤ 0 and compare post-rounding so floats that collapse to the same "1.0" label treat the tier as baseline (no "0.0x" or "1.0x" stray renders).
- SubscriptionTierSection: hoist the formatRelativeMultiplier IIFE into a per-iteration const for readability.
Gaps flagged by the /pr-test audit:
- all-equal-visible-multipliers → every tier null (no baseline emerges).
- zero / negative own multiplier → null (defensive against misconfigured LD).
- 25.6/3 = 8.533… rounds to "8.5x rate limits" (real pricing-derived ratio).

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
autogpt_platform/backend/backend/copilot/rate_limit.py (1)

83-101: LGTM — safe defaults and a backward-compatible alias.

Making defaults float-typed so LD fractional overrides compose naturally, while keeping TIER_MULTIPLIERS as an alias for existing callers, is a clean migration. The int(base * multiplier) in get_global_rate_limits below preserves the downstream microdollar integer contract.

Minor note (nitpick): TIER_MULTIPLIERS = _DEFAULT_TIER_MULTIPLIERS aliases the same dict reference, so any accidental mutation by a caller would leak into the defaults. A MappingProxyType wrap or shallow-copy would make the alias read-only, but this is defensive and not required given current usage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit.py` around lines 83 -
101, TIER_MULTIPLIERS currently aliases the mutable _DEFAULT_TIER_MULTIPLIERS
dict which can leak accidental mutations; change the alias to either a shallow
copy or a read-only wrapper (e.g., use types.MappingProxyType or dict.copy()) so
TIER_MULTIPLIERS is immutable to callers while leaving _DEFAULT_TIER_MULTIPLIERS
and get_tier_multipliers behavior unchanged.
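A minimal sketch of the read-only wrapper (tier keys and values below are illustrative, not the actual defaults in `rate_limit.py`):

```python
from types import MappingProxyType

# Hypothetical defaults mirroring _DEFAULT_TIER_MULTIPLIERS.
_DEFAULT_TIER_MULTIPLIERS: dict[str, float] = {"free": 1.0, "pro": 5.0, "max": 20.0}

# Read-only view: callers can look up values but any item assignment
# raises TypeError instead of silently mutating the shared defaults.
TIER_MULTIPLIERS = MappingProxyType(_DEFAULT_TIER_MULTIPLIERS)
```

`dict(_DEFAULT_TIER_MULTIPLIERS)` (a shallow copy) achieves the same isolation at the cost of a second dict; `MappingProxyType` keeps the alias zero-copy and live against the defaults.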
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@autogpt_platform/backend/backend/copilot/rate_limit.py`:
- Around line 83-101: TIER_MULTIPLIERS currently aliases the mutable
_DEFAULT_TIER_MULTIPLIERS dict which can leak accidental mutations; change the
alias to either a shallow copy or a read-only wrapper (e.g., use
types.MappingProxyType or dict.copy()) so TIER_MULTIPLIERS is immutable to
callers while leaving _DEFAULT_TIER_MULTIPLIERS and get_tier_multipliers
behavior unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8204b690-ac82-4c4d-8e7d-8687c7d23a8f

📥 Commits

Reviewing files that changed from the base of the PR and between e2f2080 and 8993907.

📒 Files selected for processing (5)
  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: check API types
  • GitHub Check: lint
  • GitHub Check: integration_test
  • GitHub Check: Seer Code Review
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: type-check (3.13)
  • GitHub Check: type-check (3.12)
  • GitHub Check: end-to-end tests
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (typescript)
🧰 Additional context used
📓 Path-based instructions (15)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development

Format frontend code using pnpm format

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/**/*.{tsx,ts}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/generated/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use 'ErrorCard' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development

autogpt_platform/frontend/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g. graphID, useBackendAPI
Use function declarations (not arrow functions) for components and handlers
No dark: Tailwind classes — the design system handles dark mode
Use Next.js <Link> for internal navigation — never raw <a> tags
No any types unless the value genuinely can be anything
No linter suppressors (`// @ts-ignore`, `// eslint-disable`) — fix the actual issue
Keep files under ~200 lines; extract sub-components or hooks into their own files when a file grows beyond this
Keep render functions and hooks under ~50 lines; extract named helpers or sub-components when they grow longer
Use generated API hooks from `@/app/api/generated/endpoints/` with pattern `use{Method}{Version}{OperationName}` and regenerate with `pnpm generate:api`
Do not use `useCallback` or `useMemo` unless asked to optimise a given function
Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions in the frontend

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

autogpt_platform/frontend/src/**/*.{ts,tsx}: Use generated API hooks from @/app/api/__generated__/endpoints/ following the pattern use{Method}{Version}{OperationName}, and regenerate with pnpm generate:api
Separate render logic from business logic using component.tsx + useComponent.ts + helpers.ts pattern, colocate state when possible and avoid creating large components, use sub-components in local /components folder
Use function declarations for components and handlers, use arrow functions only for callbacks
Do not use useCallback or useMemo unless asked to optimise a given function

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/**/*.{tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
autogpt_platform/frontend/src/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

Component props should use interface Props { ... } (not exported) unless the interface needs to be used outside the component

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
autogpt_platform/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Never type with any, if no types available use unknown

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.tsx

📄 CodeRabbit inference engine (autogpt_platform/frontend/AGENTS.md)

Put sub-components in local components/ folder

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
autogpt_platform/frontend/**/*.tsx

📄 CodeRabbit inference engine (autogpt_platform/frontend/AGENTS.md)

autogpt_platform/frontend/**/*.tsx: Component props should be type Props = { ... } (not exported) unless it needs to be used outside the component
Use design system components from src/components/ (atoms, molecules, organisms)
Never use src/components/__legacy__/*
Tailwind CSS only for styling, use design tokens, Phosphor Icons only

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
autogpt_platform/backend/backend/api/features/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
autogpt_platform/backend/**/api/**/*.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/api/**/*.py: Use Security() instead of Depends() for authentication dependencies to get proper OpenAPI security specification
Follow SSE (Server-Sent Events) protocol: use data: lines for frontend-parsed events (must match Zod schema) and : comment lines for heartbeats/status

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/frontend/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

No barrel files or index.ts re-exports in the frontend

Do not type hook returns, let Typescript infer as much as possible

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
autogpt_platform/frontend/src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Do not type hook returns, let Typescript infer as much as possible

Extract component logic into custom hooks grouped by concern, not by component. Each hook should represent a cohesive domain of functionality (e.g., useSearch, useFilters, usePagination) rather than bundling all state into one useComponentState hook. Put each hook in its own .ts file.

Files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
🧠 Learnings (37)
📓 Common learnings
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/frontend/src/app/api/openapi.json:11897-11900
Timestamp: 2026-03-04T23:58:18.476Z
Learning: Repo: Significant-Gravitas/AutoGPT — PR `#12284`
Backend/frontend OpenAPI codegen convention: In backend/api/features/store/model.py, the StoreSubmission and StoreSubmissionAdminView models define submitted_at: datetime | None, changes_summary: str | None, and instructions: str | None with no default. This is intentional to produce “required but nullable” fields in OpenAPI (properties appear in required[] and use anyOf [type, null]). This matches Prisma’s submittedAt DateTime? and changesSummary String?. Do not flag this as a required/nullable mismatch.
📚 Learning: 2026-04-08T17:28:40.841Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:40.841Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Keep render functions and hooks under ~50 lines; extract named helpers or sub-components when they grow longer

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-02-27T10:45:49.499Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12213
File: autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunMCPTool/helpers.tsx:23-24
Timestamp: 2026-02-27T10:45:49.499Z
Learning: Prefer using generated OpenAPI types from '@/app/api/__generated__/' for payloads defined in openapi.json (e.g., MCPToolsDiscoveredResponse, MCPToolOutputResponse). Use inline TypeScript interfaces only for payloads that are SSE-stream-only and not exposed via OpenAPI. Apply this pattern to frontend tool components (e.g., RunMCPTool) and related areas where similar SSE/openapi-discrepancies occur; avoid re-implementing types when a generated type is available.

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-03-24T02:05:04.672Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12526
File: autogpt_platform/frontend/src/app/(platform)/copilot/CopilotPage.tsx:0-0
Timestamp: 2026-03-24T02:05:04.672Z
Learning: When gating React component logic on a React Query result (e.g., hooks like `useQuery` / `useGetV2GetCopilotUsage`), prefer destructuring and checking `isSuccess` (or aliasing it to a meaningful boolean like `isSuccess: hasUsage`) instead of relying on `!isLoading`. Reason: `isLoading` can be `false` in error/idle states where `data` may still be `undefined`, while `isSuccess` indicates the query completed successfully and `data` is populated.

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-04-01T18:54:16.035Z
Learnt from: Bentlybro
Repo: Significant-Gravitas/AutoGPT PR: 12633
File: autogpt_platform/frontend/src/app/(platform)/library/components/AgentFilterMenu/AgentFilterMenu.tsx:3-10
Timestamp: 2026-04-01T18:54:16.035Z
Learning: In the frontend, the legacy Select component at `@/components/__legacy__/ui/select` is an intentional, codebase-wide visual-consistency pattern. During code reviews, do not flag or block PRs merely for continuing to use this legacy Select. If a migration to the newer design-system Select is desired, bundle it into a single dedicated cleanup/migration PR that updates all Select usages together (e.g., avoid piecemeal replacements).

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
📚 Learning: 2026-04-07T09:24:16.582Z
Learnt from: 0ubbe
Repo: Significant-Gravitas/AutoGPT PR: 12686
File: autogpt_platform/frontend/src/app/(no-navbar)/onboarding/steps/__tests__/PainPointsStep.test.tsx:1-19
Timestamp: 2026-04-07T09:24:16.582Z
Learning: In Significant-Gravitas/AutoGPT’s `autogpt_platform/frontend` (Vite + `vitejs/plugin-react` with the automatic JSX transform), do not flag usages of React types/components (e.g., `React.ReactNode`) in `.ts`/`.tsx` files as missing `React` imports. Since the React namespace is made available by the project’s TS/Vite setup, an explicit `import React from 'react'` or `import type { ReactNode } ...` is not required; only treat it as missing if typechecking (e.g., `pnpm types`) would actually fail.

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
📚 Learning: 2026-04-02T05:43:49.128Z
Learnt from: 0ubbe
Repo: Significant-Gravitas/AutoGPT PR: 12640
File: autogpt_platform/frontend/src/app/(no-navbar)/onboarding/steps/WelcomeStep.tsx:13-13
Timestamp: 2026-04-02T05:43:49.128Z
Learning: Do not flag `import { Question } from "phosphor-icons/react"` as an invalid import. `Question` is a valid named export from `phosphor-icons/react` (as reflected in the package’s generated `.d.ts` files and re-exports via `dist/index.d.ts`), so it should be treated as a supported named export during code reviews.

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
📚 Learning: 2026-04-13T13:11:07.445Z
Learnt from: 0ubbe
Repo: Significant-Gravitas/AutoGPT PR: 12764
File: autogpt_platform/frontend/src/app/(platform)/library/components/SitrepItem/SitrepItem.tsx:143-145
Timestamp: 2026-04-13T13:11:07.445Z
Learning: In `autogpt_platform/frontend`, do not flag direct interpolation of `executionID` UUID strings into URL query parameters (e.g., `activeItem=${executionID}` in JSX/Next links). If the value is a UUID string matching `[0-9a-f-]`, it contains no reserved URL characters, so additional `encodeURIComponent` or Next.js object-based `href` encoding is unnecessary. Only treat it as an encoding issue if the query-param value is not guaranteed to be UUID-formatted (i.e., may include characters outside `[0-9a-f-]`).

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-04-15T22:49:06.896Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11235
File: autogpt_platform/frontend/src/app/(platform)/admin/diagnostics/components/ExecutionsTable.tsx:0-0
Timestamp: 2026-04-15T22:49:06.896Z
Learning: In the AutoGPT frontend (React Query + toast/ErrorCard patterns), do not require `Sentry.captureException` in React Query mutation `catch` blocks. React Query handles error propagation for mutation paths, so follow the established pattern: show toast notifications for mutation errors and use `ErrorCard` for render/fetch errors. Only add `Sentry.captureException` for truly manual/unexpected exception paths that are outside React Query’s control (e.g., standalone async utilities or event handlers not wired through React Query).

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx
📚 Learning: 2026-04-22T12:26:42.571Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-21T04:36:19.755Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12865
File: autogpt_platform/backend/backend/data/credit_subscription_test.py:1119-1122
Timestamp: 2026-04-21T04:36:19.755Z
Learning: In `autogpt_platform/backend/backend/data/credit_subscription_test.py` (and related subscription test files), test mocks for the user object returned by `get_user_by_id` should use snake_case `subscription_tier` (not camelCase `subscriptionTier`). This is because `get_user_by_id` (defined in `backend/data/user.py`) returns `backend.data.model.User` — a Pydantic application model with `subscription_tier: SubscriptionTier` — not a raw Prisma model. Production code in `backend/data/credit.py` reads `user.subscription_tier` from that Pydantic model. Do NOT flag `mock_user.subscription_tier = ...` as incorrect in these tests.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-04-23T13:53:40.315Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12894
File: autogpt_platform/backend/backend/data/block_cost_config.py:271-277
Timestamp: 2026-04-23T13:53:40.315Z
Learning: In `autogpt_platform/backend/backend/data/block_cost_config.py`, `compute_token_credits()` intentionally returns `MODEL_COST[model]` (the flat tier) on pre-flight (when `stats is None`) for the `TOKENS` billing path. Returning 0 pre-flight would allow a zero-balance user to bypass the credit gate and trigger an LLM call, with the insufficient-balance error only surfacing post-flight (a billing leak). The overcharge concern (actual token cost < MODEL_COST estimate) is handled by `_charge_reconciled_usage_sync` in `autogpt_platform/backend/backend/executor/billing.py`, which issues a negative-delta refund via `spend_credits(cost=negative)` when real usage falls below the pre-flight estimate. Do NOT flag the MODEL_COST pre-flight floor in this function as an overcharge bug; the refund path covers it.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-26T00:32:06.673Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts
📚 Learning: 2026-03-13T15:49:44.961Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-13T15:49:44.961Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, the original per-session token window (with a TTL-based reset) was replaced with fixed daily and weekly windows. `resets_at` is now derived from `_daily_reset_time()` (midnight UTC) and `_weekly_reset_time()` (next Monday 00:00 UTC) — deterministic fixed-boundary calculations that require no Redis TTL introspection.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-12T14:42:40.552Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:141-170
Timestamp: 2026-03-12T14:42:40.552Z
Learning: In Significant-Gravitas/AutoGPT, `check_rate_limit` in `autogpt_platform/backend/backend/copilot/rate_limit.py` is intentionally a "pre-turn soft check" (not a hard atomic reservation). Because LLM token counts are unknown before generation completes, a strict check-and-reserve is impractical. The TOCTOU race (two concurrent turns both passing the pre-check and both committing via `record_token_usage`) is an accepted trade-off. If stricter enforcement is ever needed, the approach is a Lua script doing GET+INCRBY atomically in Redis.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
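For reference, the stricter Lua-based check-and-reserve mentioned in the learning could look like the sketch below. Both the Lua script and the Python stand-in are hypothetical — neither exists in the codebase; the dict-backed `reserve` only simulates Redis's single-threaded script semantics for illustration:

```python
# Hypothetical Lua script for an atomic GET+INCRBY reservation (not in the codebase):
RESERVE_LUA = """
local used = tonumber(redis.call('GET', KEYS[1]) or '0')
local cost = tonumber(ARGV[1])
local limit = tonumber(ARGV[2])
if used + cost > limit then return -1 end
return redis.call('INCRBY', KEYS[1], cost)
"""


def reserve(store: dict[str, int], key: str, cost: int, limit: int) -> int:
    """Pure-Python stand-in for the script's atomic check-then-commit."""
    used = store.get(key, 0)
    if used + cost > limit:
        return -1  # reservation refused; nothing committed
    store[key] = used + cost
    return store[key]


store: dict[str, int] = {}
assert reserve(store, "tokens:daily:u1", 600, 1000) == 600
assert reserve(store, "tokens:daily:u1", 600, 1000) == -1  # concurrent turn refused
assert store["tokens:daily:u1"] == 600
```

With the current soft check, both turns would pass the pre-check and both would commit; the atomic variant refuses the second reservation outright, which is exactly why it is impractical here — the token cost is unknown until generation completes.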
📚 Learning: 2026-03-15T23:39:39.754Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-15T23:39:39.754Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, `record_token_usage` uses the same helper functions (`_daily_reset_time()` / `_weekly_reset_time()`) to compute both `resets_at` (the reset timestamp returned to callers) and the Redis key `expire` seconds. This single-source-of-truth design guarantees that the reported reset times and the actual Redis TTLs are always in sync — there is no separate TTL constant that could diverge from the calendar-boundary calculation.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-03T13:50:29.037Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12206
File: autogpt_platform/backend/backend/api/external/v2/rate_limit.py:24-56
Timestamp: 2026-04-03T13:50:29.037Z
Learning: In `autogpt_platform/backend/backend/api/external/v2/rate_limit.py`, the `RateLimiter` class uses in-process (per-worker) memory for sliding-window rate limiting. This is intentionally documented as a known limitation via WARNING comments in the module and class docstrings. A full Redis-backed migration (using ZADD/ZREMRANGEBYSCORE/ZCARD with TTL/Lua for atomic multi-replica enforcement) is deferred to a later PR. Do not re-flag the in-memory implementation as a blocking bug — the limitation is documented and accepted for the initial v2 external API release.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-10T11:22:18.867Z
Learnt from: Swiftyos
Repo: Significant-Gravitas/AutoGPT PR: 12347
File: autogpt_platform/backend/backend/data/invited_user.py:193-193
Timestamp: 2026-03-10T11:22:18.867Z
Learning: In Significant-Gravitas/AutoGPT, the admin data-layer functions in `autogpt_platform/backend/backend/data/invited_user.py` (`list_invited_users`, `create_invited_user`, `revoke_invited_user`, `retry_invited_user_tally`, `bulk_create_invited_users_from_file`) intentionally omit an acting-user/admin ID parameter. Authorization for these functions is enforced entirely at the FastAPI router layer via `Security(requires_admin_user)` in `user_admin_routes.py`. Do not flag the absence of a user_id/actor_id parameter in these functions as a missing data-access guardrail violation.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-15T22:50:02.270Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11235
File: autogpt_platform/backend/backend/data/diagnostics.py:0-0
Timestamp: 2026-04-15T22:50:02.270Z
Learning: In Significant-Gravitas/AutoGPT, the admin diagnostic data-layer functions in `autogpt_platform/backend/backend/data/diagnostics.py` (e.g., `get_execution_diagnostics`, `get_agent_diagnostics`, `get_schedule_health_metrics`, `get_all_schedules_details`, `get_running_executions_details`, `get_orphaned_executions_details`, `get_long_running_executions_details`, `get_stuck_queued_executions_details`, `get_invalid_executions_details`, `get_failed_executions_count`, `get_failed_executions_details`) intentionally omit a `user_id`/`admin_user_id` parameter. These functions require cross-user, system-wide visibility for admin diagnostics. Authorization is enforced entirely at the FastAPI router layer via `Security(requires_admin_user)` in `diagnostics_admin_routes.py`. Do not flag the absence of a user_id/admin_user_id parameter in these read functions as a missing data-access guardrail violation. Note: write/mutating functions like `cleanup_orphaned_execution`, `stop_all_long_running_executio...

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-10T08:39:22.025Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12356
File: autogpt_platform/backend/backend/copilot/constants.py:9-12
Timestamp: 2026-03-10T08:39:22.025Z
Learning: In Significant-Gravitas/AutoGPT PR `#12356`, the `COPILOT_SYNTHETIC_ID_PREFIX = "copilot-"` check in `create_auto_approval_record` (human_review.py) is intentional and safe. The `graph_exec_id` passed to this function comes from server-side `PendingHumanReview` DB records (not from user input); the API only accepts `node_exec_id` from users. Synthetic `copilot-*` IDs are only ever created server-side in `run_block.py`. The prefix skip avoids a DB lookup for a `AgentGraphExecution` record that legitimately does not exist for CoPilot sessions, while `user_id` scoping is enforced at the auth layer and on the resulting auto-approval record.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-09T08:47:32.750Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12720
File: autogpt_platform/backend/backend/copilot/graphiti/client.py:20-46
Timestamp: 2026-04-09T08:47:32.750Z
Learning: In Significant-Gravitas/AutoGPT, `user_id` values passed to `derive_group_id` in `autogpt_platform/backend/backend/copilot/graphiti/client.py` are always system-generated UUIDv4s (e.g. `883cc9da-fe37-4863-839b-acba022bf3ef`). The character set `[0-9a-f-]` is fully within `[a-zA-Z0-9_-]`, so the sanitization regex never strips any characters and no collision between two different user IDs is possible. Do not flag `derive_group_id` for collision-resistance issues.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
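The no-collision argument above is easy to verify directly. The sanitizer shape below is assumed from the learning's description (strip anything outside `[a-zA-Z0-9_-]`), not copied from `client.py`:

```python
import re
import uuid


def sanitize(user_id: str) -> str:
    # Assumed sanitizer shape: keep only [a-zA-Z0-9_-] (illustrative).
    return re.sub(r"[^a-zA-Z0-9_-]", "", user_id)


# UUIDv4 strings only use [0-9a-f-], a strict subset of the allowed set,
# so sanitization is an identity on them and can never map two distinct
# user IDs to the same group ID.
for _ in range(100):
    uid = str(uuid.uuid4())
    assert sanitize(uid) == uid
```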
📚 Learning: 2026-04-09T16:20:43.788Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12727
File: autogpt_platform/backend/backend/data/credit.py:0-0
Timestamp: 2026-04-09T16:20:43.788Z
Learning: In Significant-Gravitas/AutoGPT, `get_user_by_id(user_id: str) -> User` in `autogpt_platform/backend/backend/data/user.py` raises `ValueError("User not found with ID: ...")` when the user row does not exist — it never returns `None`. Do not flag call sites that dereference the result without a None-check as potential null-pointer issues.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-17T10:57:12.953Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/copilot/workflow_import/converter.py:0-0
Timestamp: 2026-03-17T10:57:12.953Z
Learning: In Significant-Gravitas/AutoGPT PR `#12440`, `autogpt_platform/backend/backend/copilot/workflow_import/converter.py` was fully rewritten (commit 732960e2d) to no longer make direct LLM/OpenAI API calls. The converter now builds a structured text prompt for AutoPilot/CoPilot instead. There is no `response.choices` access or any direct LLM client usage in this file. Do not flag `response.choices` access or LLM client initialization patterns as issues in this file.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-07T07:43:15.754Z
Learnt from: kcze
Repo: Significant-Gravitas/AutoGPT PR: 12328
File: autogpt_platform/frontend/src/app/api/openapi.json:1116-1118
Timestamp: 2026-03-07T07:43:15.754Z
Learning: In Significant-Gravitas/AutoGPT, v2 chat endpoints often declare HTTPBearerJWT at the router level while using Depends(auth.get_user_id) that returns None for unauthenticated users; effective behavior is optional auth. Keep this convention unless doing a repo-wide OpenAPI update; prefer clarifying descriptions over per-operation security changes.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-21T04:35:34.710Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12865
File: autogpt_platform/backend/backend/data/credit.py:1584-1584
Timestamp: 2026-04-21T04:35:34.710Z
Learning: In Significant-Gravitas/AutoGPT, `get_user_by_id(user_id: str)` in `autogpt_platform/backend/backend/data/user.py` returns an application-layer Pydantic `User` model (defined in `autogpt_platform/backend/backend/data/model.py`), NOT the raw Prisma `User` object. This Pydantic model uses snake_case field names (e.g., `subscription_tier`, `stripe_customer_id`, `top_up_config`), which are mapped from camelCase Prisma fields (e.g., `subscriptionTier`, `stripeCustomerId`) inside `User.from_db()`. Do not flag `user.subscription_tier` as a wrong field name — it is correct on the app-layer model.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T05:58:28.595Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12879
File: autogpt_platform/frontend/src/app/api/openapi.json:14576-14577
Timestamp: 2026-04-22T05:58:28.595Z
Learning: Repo: Significant-Gravitas/AutoGPT — autogpt_platform
Process convention: When adding new CoPilot tool response models and updating ToolResponseUnion in backend/api/features/chat/routes.py, regenerate the frontend OpenAPI schema via `poetry run export-api-schema` (do not hand-edit autogpt_platform/frontend/src/app/api/openapi.json).

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-24T21:27:22.326Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12536
File: autogpt_platform/frontend/src/app/api/openapi.json:5732-5752
Timestamp: 2026-03-24T21:27:22.326Z
Learning: Repo: Significant-Gravitas/AutoGPT — Preference: Do not add explicit 403/404 entries to FastAPI route decorators for admin endpoints just to influence OpenAPI. Keep openapi.json autogenerated and use route docstrings to document admin-only (403) and not-found (404) behavior; rely on tests for enforcement. File context: autogpt_platform/backend/backend/api/features/admin/store_admin_routes.py. PR `#12536`.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
🔇 Additional comments (7)
autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/SubscriptionTierSection.tsx (1)

159-190: LGTM!

Extracting rateLimitLabel per-iteration (instead of an IIFE in JSX) reads cleanly, and the conditional render on truthiness correctly hides the line for the baseline tier and tiers absent from the payload. Passing subscription.tier_multipliers ?? {} safely defaults when the backend omits the field.

autogpt_platform/backend/backend/api/features/v1.py (2)

829-839: LGTM — consistent hiding of tiers at both layers.

The comprehension builds tier_multipliers only for tiers already present in tier_costs, which matches the goal of keeping hidden tiers (no LD price) out of both the price and multiplier maps. The .get(t, 1.0) fallback is defensively correct even though get_tier_multipliers() always returns the full default map.
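The hiding behavior can be sketched as below. Names and values are illustrative stand-ins for the review's description — the real maps come from LD and config, not these literals:

```python
# Illustrative shapes; real values come from LaunchDarkly / config.
tier_costs = {"BASIC": 0, "PLUS": 2000, "PRO": 8000}  # visible tiers only
all_multipliers = {"BASIC": 1.0, "PLUS": 5.0, "PRO": 8.5, "MAX": 20.0}

# Only tiers already present in tier_costs get a multiplier entry, so a
# hidden tier (e.g. one with no LD price) never reaches the UI. The
# .get(t, 1.0) fallback is defensive: the multiplier map is expected to
# always be complete.
tier_multipliers = {t: all_multipliers.get(t, 1.0) for t in tier_costs}

assert tier_multipliers == {"BASIC": 1.0, "PLUS": 5.0, "PRO": 8.5}
assert "MAX" not in tier_multipliers
```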


712-720: No action required — frontend OpenAPI schema has been properly regenerated.

The openapi.json correctly includes tier_multipliers in the SubscriptionStatusResponse schema with the appropriate type definition. The frontend is already importing the generated types and using the field correctly in SubscriptionTierSection.tsx (line 161: subscription.tier_multipliers ?? {}), and the test fixtures include it as well. When developers run pnpm run dev, the generation occurs automatically.

autogpt_platform/frontend/src/app/(platform)/profile/(user)/credits/components/SubscriptionTierSection/helpers.ts (1)

61-76: LGTM — edge cases are well handled.

Guards for mine <= 0, empty visible, and the post-rounding "1.0" equality check address both the floating-point drift and the zero-multiplier stray-badge cases raised previously. Also worth noting the side effect that tiers within ~5% of the baseline (e.g. 1.04×) collapse to no badge by rounding, which is a reasonable UX tradeoff.

autogpt_platform/backend/backend/copilot/rate_limit.py (3)

151-172: LGTM — defensive fallback and clean merge semantics.

Catching Exception around _fetch_tier_multipliers_flag ensures LD/SDK/network failures never propagate to the rate-limit path, and merging defaults with overrides means partial LD JSON (e.g. only {"PRO": 8.5}) still yields a complete tier map. The docstring also correctly anchors the system-wide evaluation decision and documents the path to per-cohort overrides.


777-788: LGTM — fractional multiplier path preserves the integer contract.

int(daily * multiplier) after the != 1.0 short-circuit is correct for the microdollar domain (base values are on the order of $100 = 100M microdollars, so single-microdollar truncation is negligible). The BASIC fast-path via strict float equality against 1.0 is fine here because _DEFAULT_TIER_MULTIPLIERS[BASIC] is exactly 1.0 and LD overrides go through float(value) without further arithmetic.
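The negligible-truncation claim checks out numerically. A quick worked example (base values are illustrative microdollar amounts in the order of magnitude the review states):

```python
# Base daily limit on the order of $100, expressed in microdollars.
base = 100_000_000  # $100.00

# A fractional LD multiplier like 8.5x keeps the downstream integer contract:
limit = int(base * 8.5)
assert limit == 850_000_000
assert isinstance(limit, int)

# Worst-case loss from int() truncation is under one microdollar — negligible:
odd = int(99_999_999 * 8.5)  # 849_999_991.5 truncates down
assert odd == 849_999_991
assert 99_999_999 * 8.5 - odd < 1.0
```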


106-148: The Flag.COPILOT_TIER_MULTIPLIERS enum entry exists and is correctly registered.

Line 124's reference to Flag.COPILOT_TIER_MULTIPLIERS.value is valid — the enum entry is properly defined in backend.util.feature_flag.py line 47 with the correct string literal "copilot-tier-multipliers". No AttributeError will occur.

> Likely an incorrect or invalid review comment.

…-local enum mismatch

Pyright rejected `multipliers.get(prisma_tier, 1.0)` because the dict was keyed
by rate_limit.SubscriptionTier (local mirror), not prisma.enums.SubscriptionTier.
Runtime worked (both str+Enum with matching values), but type-check failed.

- get_tier_multipliers now returns dict[str, float] keyed by SubscriptionTier.value.
- get_global_rate_limits and v1.get_subscription_status look up by .value.
- Tests updated accordingly.
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
autogpt_platform/backend/backend/copilot/rate_limit.py (1)

135-148: Consider a debug log when dropping invalid LD entries for operability.

Top-level shape mismatches are logged at warning, but per-key failures (unknown tier name, non-numeric value, non-positive multiplier) are silently dropped. A common misconfiguration — e.g. {"basic": 5} (lowercase) or {"PRO": -1} — will produce no signal at all; get_tier_multipliers will just return the defaults as if the flag were unset. A single debug-level log per skip makes LD misconfigurations diagnosable without adding noise in the happy path.

💡 Proposed refactor to surface skipped entries
     parsed: dict[SubscriptionTier, float] = {}
     for key, value in raw.items():
         try:
             tier = SubscriptionTier(key)
         except ValueError:
+            logger.debug(
+                "copilot-tier-multipliers: skipping unknown tier key %r", key
+            )
             continue
         try:
             multiplier = float(value)
         except (TypeError, ValueError):
+            logger.debug(
+                "copilot-tier-multipliers: non-numeric value for %s: %r", key, value
+            )
             continue
         if multiplier <= 0:
+            logger.debug(
+                "copilot-tier-multipliers: non-positive multiplier for %s: %r",
+                key,
+                multiplier,
+            )
             continue
         parsed[tier] = multiplier
     return parsed or None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit.py` around lines 135 -
148, In get_tier_multipliers, when iterating over raw to build parsed, add a
debug log for each skipped entry (unknown SubscriptionTier(key), non-numeric
value, or non-positive multiplier) including the offending key and value and a
short reason so misconfigurations like lowercase names or negative multipliers
are visible; keep the existing behavior of continuing after logging and use the
module logger (or existing logger object) to emit debug-level messages
referencing SubscriptionTier, key, value, and multiplier as appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@autogpt_platform/backend/backend/copilot/rate_limit.py`:
- Around line 135-148: In get_tier_multipliers, when iterating over raw to build
parsed, add a debug log for each skipped entry (unknown SubscriptionTier(key),
non-numeric value, or non-positive multiplier) including the offending key and
value and a short reason so misconfigurations like lowercase names or negative
multipliers are visible; keep the existing behavior of continuing after logging
and use the module logger (or existing logger object) to emit debug-level
messages referencing SubscriptionTier, key, value, and multiplier as
appropriate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5648e112-f9b0-49bf-9827-4039b607f2bd

📥 Commits

Reviewing files that changed from the base of the PR and between 9196e5f and 354d4e6.

📒 Files selected for processing (2)
  • autogpt_platform/backend/backend/copilot/rate_limit.py
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: check API types
  • GitHub Check: lint
  • GitHub Check: integration_test
  • GitHub Check: end-to-end tests
  • GitHub Check: Seer Code Review
  • GitHub Check: type-check (3.11)
  • GitHub Check: type-check (3.13)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (typescript)
  • GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (2)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
🧠 Learnings (27)
📓 Common learnings
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/frontend/src/app/api/openapi.json:11897-11900
Timestamp: 2026-03-04T23:58:18.476Z
Learning: Repo: Significant-Gravitas/AutoGPT — PR `#12284`
Backend/frontend OpenAPI codegen convention: In backend/api/features/store/model.py, the StoreSubmission and StoreSubmissionAdminView models define submitted_at: datetime | None, changes_summary: str | None, and instructions: str | None with no default. This is intentional to produce “required but nullable” fields in OpenAPI (properties appear in required[] and use anyOf [type, null]). This matches Prisma’s submittedAt DateTime? and changesSummary String?. Do not flag this as a required/nullable mismatch.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12206
File: autogpt_platform/backend/backend/api/external/v2/rate_limit.py:24-56
Timestamp: 2026-04-03T13:50:29.037Z
Learning: In `autogpt_platform/backend/backend/api/external/v2/rate_limit.py`, the `RateLimiter` class uses in-process (per-worker) memory for sliding-window rate limiting. This is intentionally documented as a known limitation via WARNING comments in the module and class docstrings. A full Redis-backed migration (using ZADD/ZREMRANGEBYSCORE/ZCARD with TTL/Lua for atomic multi-replica enforcement) is deferred to a later PR. Do not re-flag the in-memory implementation as a blocking bug — the limitation is documented and accepted for the initial v2 external API release.
📚 Learning: 2026-03-13T15:49:44.961Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-13T15:49:44.961Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, the original per-session token window (with a TTL-based reset) was replaced with fixed daily and weekly windows. `resets_at` is now derived from `_daily_reset_time()` (midnight UTC) and `_weekly_reset_time()` (next Monday 00:00 UTC) — deterministic fixed-boundary calculations that require no Redis TTL introspection.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T12:26:42.571Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-23T13:53:40.315Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12894
File: autogpt_platform/backend/backend/data/block_cost_config.py:271-277
Timestamp: 2026-04-23T13:53:40.315Z
Learning: In `autogpt_platform/backend/backend/data/block_cost_config.py`, `compute_token_credits()` intentionally returns `MODEL_COST[model]` (the flat tier) on pre-flight (when `stats is None`) for the `TOKENS` billing path. Returning 0 pre-flight would allow a zero-balance user to bypass the credit gate and trigger an LLM call, with the insufficient-balance error only surfacing post-flight (a billing leak). The overcharge concern (actual token cost < MODEL_COST estimate) is handled by `_charge_reconciled_usage_sync` in `autogpt_platform/backend/backend/executor/billing.py`, which issues a negative-delta refund via `spend_credits(cost=negative)` when real usage falls below the pre-flight estimate. Do NOT flag the MODEL_COST pre-flight floor in this function as an overcharge bug; the refund path covers it.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-03T13:50:29.037Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12206
File: autogpt_platform/backend/backend/api/external/v2/rate_limit.py:24-56
Timestamp: 2026-04-03T13:50:29.037Z
Learning: In `autogpt_platform/backend/backend/api/external/v2/rate_limit.py`, the `RateLimiter` class uses in-process (per-worker) memory for sliding-window rate limiting. This is intentionally documented as a known limitation via WARNING comments in the module and class docstrings. A full Redis-backed migration (using ZADD/ZREMRANGEBYSCORE/ZCARD with TTL/Lua for atomic multi-replica enforcement) is deferred to a later PR. Do not re-flag the in-memory implementation as a blocking bug — the limitation is documented and accepted for the initial v2 external API release.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-15T23:39:39.754Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-15T23:39:39.754Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, `record_token_usage` uses the same helper functions (`_daily_reset_time()` / `_weekly_reset_time()`) to compute both `resets_at` (the reset timestamp returned to callers) and the Redis key `expire` seconds. This single-source-of-truth design guarantees that the reported reset times and the actual Redis TTLs are always in sync — there is no separate TTL constant that could diverge from the calendar-boundary calculation.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-10T11:22:18.867Z
Learnt from: Swiftyos
Repo: Significant-Gravitas/AutoGPT PR: 12347
File: autogpt_platform/backend/backend/data/invited_user.py:193-193
Timestamp: 2026-03-10T11:22:18.867Z
Learning: In Significant-Gravitas/AutoGPT, the admin data-layer functions in `autogpt_platform/backend/backend/data/invited_user.py` (`list_invited_users`, `create_invited_user`, `revoke_invited_user`, `retry_invited_user_tally`, `bulk_create_invited_users_from_file`) intentionally omit an acting-user/admin ID parameter. Authorization for these functions is enforced entirely at the FastAPI router layer via `Security(requires_admin_user)` in `user_admin_routes.py`. Do not flag the absence of a user_id/actor_id parameter in these functions as a missing data-access guardrail violation.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-15T22:50:02.270Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11235
File: autogpt_platform/backend/backend/data/diagnostics.py:0-0
Timestamp: 2026-04-15T22:50:02.270Z
Learning: In Significant-Gravitas/AutoGPT, the admin diagnostic data-layer functions in `autogpt_platform/backend/backend/data/diagnostics.py` (e.g., `get_execution_diagnostics`, `get_agent_diagnostics`, `get_schedule_health_metrics`, `get_all_schedules_details`, `get_running_executions_details`, `get_orphaned_executions_details`, `get_long_running_executions_details`, `get_stuck_queued_executions_details`, `get_invalid_executions_details`, `get_failed_executions_count`, `get_failed_executions_details`) intentionally omit a `user_id`/`admin_user_id` parameter. These functions require cross-user, system-wide visibility for admin diagnostics. Authorization is enforced entirely at the FastAPI router layer via `Security(requires_admin_user)` in `diagnostics_admin_routes.py`. Do not flag the absence of a user_id/admin_user_id parameter in these read functions as a missing data-access guardrail violation. Note: write/mutating functions like `cleanup_orphaned_execution`, `stop_all_long_running_executio...

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-10T08:39:22.025Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12356
File: autogpt_platform/backend/backend/copilot/constants.py:9-12
Timestamp: 2026-03-10T08:39:22.025Z
Learning: In Significant-Gravitas/AutoGPT PR `#12356`, the `COPILOT_SYNTHETIC_ID_PREFIX = "copilot-"` check in `create_auto_approval_record` (human_review.py) is intentional and safe. The `graph_exec_id` passed to this function comes from server-side `PendingHumanReview` DB records (not from user input); the API only accepts `node_exec_id` from users. Synthetic `copilot-*` IDs are only ever created server-side in `run_block.py`. The prefix skip avoids a DB lookup for a `AgentGraphExecution` record that legitimately does not exist for CoPilot sessions, while `user_id` scoping is enforced at the auth layer and on the resulting auto-approval record.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-09T08:47:32.750Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12720
File: autogpt_platform/backend/backend/copilot/graphiti/client.py:20-46
Timestamp: 2026-04-09T08:47:32.750Z
Learning: In Significant-Gravitas/AutoGPT, `user_id` values passed to `derive_group_id` in `autogpt_platform/backend/backend/copilot/graphiti/client.py` are always system-generated UUIDv4s (e.g. `883cc9da-fe37-4863-839b-acba022bf3ef`). The character set `[0-9a-f-]` is fully within `[a-zA-Z0-9_-]`, so the sanitization regex never strips any characters and no collision between two different user IDs is possible. Do not flag `derive_group_id` for collision-resistance issues.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
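The sanitization described above is essentially one regex substitution — a sketch of the pattern, not necessarily the exact function body:

```python
import re


def derive_group_id(user_id: str) -> str:
    # System-generated UUIDv4s use only [0-9a-f-], a strict subset of the
    # allowed set, so this substitution never strips anything in practice.
    return re.sub(r"[^a-zA-Z0-9_-]", "", user_id)
```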
📚 Learning: 2026-04-09T16:20:43.788Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12727
File: autogpt_platform/backend/backend/data/credit.py:0-0
Timestamp: 2026-04-09T16:20:43.788Z
Learning: In Significant-Gravitas/AutoGPT, `get_user_by_id(user_id: str) -> User` in `autogpt_platform/backend/backend/data/user.py` raises `ValueError("User not found with ID: ...")` when the user row does not exist — it never returns `None`. Do not flag call sites that dereference the result without a None-check as potential null-pointer issues.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-17T10:57:12.953Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/copilot/workflow_import/converter.py:0-0
Timestamp: 2026-03-17T10:57:12.953Z
Learning: In Significant-Gravitas/AutoGPT PR `#12440`, `autogpt_platform/backend/backend/copilot/workflow_import/converter.py` was fully rewritten (commit 732960e2d) to no longer make direct LLM/OpenAI API calls. The converter now builds a structured text prompt for AutoPilot/CoPilot instead. There is no `response.choices` access or any direct LLM client usage in this file. Do not flag `response.choices` access or LLM client initialization patterns as issues in this file.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-07T07:43:15.754Z
Learnt from: kcze
Repo: Significant-Gravitas/AutoGPT PR: 12328
File: autogpt_platform/frontend/src/app/api/openapi.json:1116-1118
Timestamp: 2026-03-07T07:43:15.754Z
Learning: In Significant-Gravitas/AutoGPT, v2 chat endpoints often declare HTTPBearerJWT at the router level while using Depends(auth.get_user_id) that returns None for unauthenticated users; effective behavior is optional auth. Keep this convention unless doing a repo-wide OpenAPI update; prefer clarifying descriptions over per-operation security changes.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-21T04:35:34.710Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12865
File: autogpt_platform/backend/backend/data/credit.py:1584-1584
Timestamp: 2026-04-21T04:35:34.710Z
Learning: In Significant-Gravitas/AutoGPT, `get_user_by_id(user_id: str)` in `autogpt_platform/backend/backend/data/user.py` returns an application-layer Pydantic `User` model (defined in `autogpt_platform/backend/backend/data/model.py`), NOT the raw Prisma `User` object. This Pydantic model uses snake_case field names (e.g., `subscription_tier`, `stripe_customer_id`, `top_up_config`), which are mapped from camelCase Prisma fields (e.g., `subscriptionTier`, `stripeCustomerId`) inside `User.from_db()`. Do not flag `user.subscription_tier` as a wrong field name — it is correct on the app-layer model.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T05:58:28.595Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12879
File: autogpt_platform/frontend/src/app/api/openapi.json:14576-14577
Timestamp: 2026-04-22T05:58:28.595Z
Learning: Repo: Significant-Gravitas/AutoGPT — autogpt_platform
Process convention: When adding new CoPilot tool response models and updating ToolResponseUnion in backend/api/features/chat/routes.py, regenerate the frontend OpenAPI schema via `poetry run export-api-schema` (do not hand-edit autogpt_platform/frontend/src/app/api/openapi.json).

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-24T21:27:22.326Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12536
File: autogpt_platform/frontend/src/app/api/openapi.json:5732-5752
Timestamp: 2026-03-24T21:27:22.326Z
Learning: Repo: Significant-Gravitas/AutoGPT — Preference: Do not add explicit 403/404 entries to FastAPI route decorators for admin endpoints just to influence OpenAPI. Keep openapi.json autogenerated and use route docstrings to document admin-only (403) and not-found (404) behavior; rely on tests for enforcement. File context: autogpt_platform/backend/backend/api/features/admin/store_admin_routes.py. PR `#12536`.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-09T10:50:43.907Z
Learnt from: Bentlybro
Repo: Significant-Gravitas/AutoGPT PR: 0
File: :0-0
Timestamp: 2026-03-09T10:50:43.907Z
Learning: Repo: Significant-Gravitas/AutoGPT — File: autogpt_platform/backend/backend/blocks/llm.py
For xAI Grok models accessed via OpenRouter, the API returns `null` for `max_completion_tokens`. The convention in this codebase is to use the model's context window size as the `max_output_tokens` value in ModelMetadata. For example, Grok 3 uses 131072 (128k) and Grok 4 uses 262144 (256k). Do not flag these as incorrect max output token values.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-14T06:39:52.592Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12773
File: autogpt_platform/frontend/src/app/api/openapi.json:12803-12806
Timestamp: 2026-04-14T06:39:52.592Z
Learning: Repo: Significant-Gravitas/AutoGPT — autogpt_platform
Intentional message length caps:
- StreamChatRequest.message maxLength = 64000.
- QueuePendingMessageRequest.message maxLength = 32000 (matches PendingMessage.content).
Rationale: both feed the same LLM context window; pending must not exceed stream, and larger ceilings replace legacy 4000/16000.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
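The reachability point is easy to demonstrate: a subclass of the caught type still enters the handler, so the `isinstance` check is live code. A minimal illustration with a stand-in exception class:

```python
class VirusScanError(ValueError):
    """Stand-in for an exception that subclasses ValueError."""


def classify(exc_to_raise: Exception) -> str:
    try:
        raise exc_to_raise
    except ValueError as e:
        # Reachable: VirusScanError is-a ValueError, so it lands here too.
        if isinstance(e, VirusScanError):
            return "virus"
        return "value-error"
```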
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit.py
🔇 Additional comments (1)
autogpt_platform/backend/backend/copilot/rate_limit.py (1)

780-793: LGTM on the multiplier integration.

The multipliers.get(tier.value, 1.0) fallback is a nice belt-and-braces for unknown tiers, the multiplier != 1.0 gate cleanly avoids an unnecessary cast in the common path, and the int(...) truncation preserves the microdollar integer contract while the docstring and comment clearly call out the accepted sub-microdollar precision loss. Sequential awaits on get_user_tier + get_tier_multipliers are fine here since both are cache-hot paths.
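The pattern the review praises can be sketched standalone (a hypothetical helper; the real logic lives inside `get_global_rate_limits`):

```python
def apply_tier_multiplier(base: int, tier: str, multipliers: dict[str, float]) -> int:
    # Unknown tiers fall back to 1.0 — no change to the base limit.
    multiplier = multipliers.get(tier, 1.0)
    if multiplier == 1.0:
        # Common path: skip the float round-trip entirely.
        return base
    # int() truncation keeps downstream microdollar math integer; the
    # sub-microdollar precision loss is accepted.
    return int(base * multiplier)
```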


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (3)
autogpt_platform/backend/backend/copilot/rate_limit_test.py (3)

441-456: Optional: assert the unknown key is dropped explicitly.

The test docstring covers "unknown tier keys" but nothing verifies "BOGUS" is excluded from the returned map (it only works implicitly because result is keyed by known SubscriptionTier values). A quick assert "BOGUS" not in result would make the intent explicit and guard against a regression where the resolver naively merges unknown keys through.

♻️ Proposed tweak
         result = await get_tier_multipliers()
         assert result["PRO"] == 3.0
+        # Unknown keys must not leak into the result map.
+        assert "BOGUS" not in result
         # MAX had a non-positive override → falls back to default.
         assert result["MAX"] == _DEFAULT_TIER_MULTIPLIERS[SubscriptionTier.MAX]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit_test.py` around lines 441
- 456, Add an explicit check in test_unknown_tier_key_skipped to assert that
unknown override keys are not present in the returned mapping: after calling
get_tier_multipliers() (the call inside the patched get_feature_flag_value), add
an assertion that the bogus key ("BOGUS") is not in result to ensure
get_tier_multipliers filters unknown keys and only returns keys corresponding to
SubscriptionTier/default multipliers.

742-746: Consider extracting the repeated _clear_flag_cache fixture.

The same autouse fixture body (_fetch_tier_multipliers_flag.cache_clear() # type: ignore[attr-defined]) is duplicated across TestGetTierMultipliers, TestGetGlobalRateLimitsWithTiers, TestTierLimitsRespected, and TestTierLimitsEnforced. A module-level @pytest.fixture(autouse=True) (or a small shared base) would DRY this up and make it harder to forget in future tier-multiplier tests. Non-blocking.

Also applies to: 923-926, 1140-1143

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit_test.py` around lines 742
- 746, Multiple test classes repeat the same autouse fixture body clearing the
LD flag cache; extract this into a single module-level
pytest.fixture(autouse=True) that calls
_fetch_tier_multipliers_flag.cache_clear() (referencing the existing fixture
name _clear_flag_cache and the cached function _fetch_tier_multipliers_flag) so
TestGetTierMultipliers, TestGetGlobalRateLimitsWithTiers,
TestTierLimitsRespected, and TestTierLimitsEnforced (and the occurrences at the
other ranges) can rely on the shared fixture and avoid duplication.

430-439: Optional: exercise more non-object shapes in test_invalid_json_falls_back.

The docstring promises the fallback triggers for "string, list, bool", but only a string is exercised. Consider parametrizing (or adding a couple of asserts) with [] and True to actually cover the list/bool paths — otherwise a future change that, e.g., handles strings specially but still mis-handles lists would slip through.

♻️ Suggested parametrize
-    @pytest.mark.asyncio
-    async def test_invalid_json_falls_back(self):
-        """A non-object LD value (string, list, bool) falls back to defaults."""
-        with patch(
-            "backend.util.feature_flag.get_feature_flag_value",
-            new_callable=AsyncMock,
-            return_value="broken",
-        ):
-            result = await get_tier_multipliers()
-        assert result == {t.value: m for t, m in _DEFAULT_TIER_MULTIPLIERS.items()}
+    @pytest.mark.parametrize("bad_value", ["broken", [], True, 42])
+    @pytest.mark.asyncio
+    async def test_invalid_json_falls_back(self, bad_value):
+        """Non-object LD values fall back to defaults."""
+        with patch(
+            "backend.util.feature_flag.get_feature_flag_value",
+            new_callable=AsyncMock,
+            return_value=bad_value,
+        ):
+            result = await get_tier_multipliers()
+        assert result == {t.value: m for t, m in _DEFAULT_TIER_MULTIPLIERS.items()}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/copilot/rate_limit_test.py` around lines 430
- 439, The test test_invalid_json_falls_back only exercises a string; update it
to parametrize over multiple non-object LD shapes (e.g., "broken", [], True) so
get_tier_multipliers() is validated for string/list/bool fallbacks; use
pytest.mark.parametrize on the test and patch
backend.util.feature_flag.get_feature_flag_value (AsyncMock) to return each
value, then assert the result equals {t.value: m for t, m in
_DEFAULT_TIER_MULTIPLIERS.items()} to cover all promised cases.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 032dfc77-41cd-4232-a455-09a8f8326f40

📥 Commits

Reviewing files that changed from the base of the PR and between 354d4e6 and 0f8c921.

📒 Files selected for processing (1)
  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: CodeQL
  • GitHub Check: check API types
  • GitHub Check: integration_test
  • GitHub Check: lint
  • GitHub Check: Seer Code Review
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: type-check (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: type-check (3.12)
  • GitHub Check: type-check (3.13)
  • GitHub Check: end-to-end tests
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (typescript)
  • GitHub Check: conflicts
🧰 Additional context used
📓 Path-based instructions (3)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
autogpt_platform/backend/**/*_test.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/*_test.py: Use pytest with snapshot testing for API responses
Colocate test files with source files using *_test.py naming convention
Mock at boundaries — mock where the symbol is used, not where it's defined; after refactoring, update mock targets to match new module paths
Use AsyncMock from unittest.mock for async functions in tests
When writing tests, use Test-Driven Development (TDD): write failing tests marked with @pytest.mark.xfail before implementation, then remove the marker once the implementation is complete
When creating snapshots in tests, use poetry run pytest path/to/test.py --snapshot-update; always review snapshot changes with git diff before committing

Files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
🧠 Learnings (18)
📓 Common learnings
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/frontend/src/app/api/openapi.json:11897-11900
Timestamp: 2026-03-04T23:58:18.476Z
Learning: Repo: Significant-Gravitas/AutoGPT — PR `#12284`
Backend/frontend OpenAPI codegen convention: In backend/api/features/store/model.py, the StoreSubmission and StoreSubmissionAdminView models define submitted_at: datetime | None, changes_summary: str | None, and instructions: str | None with no default. This is intentional to produce “required but nullable” fields in OpenAPI (properties appear in required[] and use anyOf [type, null]). This matches Prisma’s submittedAt DateTime? and changesSummary String?. Do not flag this as a required/nullable mismatch.
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12206
File: autogpt_platform/backend/backend/api/external/v2/rate_limit.py:24-56
Timestamp: 2026-04-03T13:50:29.037Z
Learning: In `autogpt_platform/backend/backend/api/external/v2/rate_limit.py`, the `RateLimiter` class uses in-process (per-worker) memory for sliding-window rate limiting. This is intentionally documented as a known limitation via WARNING comments in the module and class docstrings. A full Redis-backed migration (using ZADD/ZREMRANGEBYSCORE/ZCARD with TTL/Lua for atomic multi-replica enforcement) is deferred to a later PR. Do not re-flag the in-memory implementation as a blocking bug — the limitation is documented and accepted for the initial v2 external API release.
📚 Learning: 2026-04-22T12:26:42.571Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/sdk/service.py:0-0
Timestamp: 2026-04-22T12:26:42.571Z
Learning: In `autogpt_platform/backend/backend/copilot/sdk/service.py`, `_resolve_sdk_model_for_request`: when a per-user LaunchDarkly model value fails `_normalize_model_name` (e.g. a `moonshotai/kimi-*` slug in direct-Anthropic mode), the fallback must be tier-specific — `config.thinking_advanced_model` for advanced tier, `config.thinking_standard_model` for standard tier — NOT the generic `_resolve_sdk_model()` (which is standard-only and returns None under subscription mode). If the tier-specific config default also fails `_normalize_model_name`, re-raise the original LD error; this is a deployment-level misconfiguration that `model_validator` should have caught at startup. Established in PR `#12881` commit 637d2fef5.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-13T15:49:44.961Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-13T15:49:44.961Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, the original per-session token window (with a TTL-based reset) was replaced with fixed daily and weekly windows. `resets_at` is now derived from `_daily_reset_time()` (midnight UTC) and `_weekly_reset_time()` (next Monday 00:00 UTC) — deterministic fixed-boundary calculations that require no Redis TTL introspection.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-03T13:50:29.037Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12206
File: autogpt_platform/backend/backend/api/external/v2/rate_limit.py:24-56
Timestamp: 2026-04-03T13:50:29.037Z
Learning: In `autogpt_platform/backend/backend/api/external/v2/rate_limit.py`, the `RateLimiter` class uses in-process (per-worker) memory for sliding-window rate limiting. This is intentionally documented as a known limitation via WARNING comments in the module and class docstrings. A full Redis-backed migration (using ZADD/ZREMRANGEBYSCORE/ZCARD with TTL/Lua for atomic multi-replica enforcement) is deferred to a later PR. Do not re-flag the in-memory implementation as a blocking bug — the limitation is documented and accepted for the initial v2 external API release.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-12T14:42:40.552Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:141-170
Timestamp: 2026-03-12T14:42:40.552Z
Learning: In Significant-Gravitas/AutoGPT, `check_rate_limit` in `autogpt_platform/backend/backend/copilot/rate_limit.py` is intentionally a "pre-turn soft check" (not a hard atomic reservation). Because LLM token counts are unknown before generation completes, a strict check-and-reserve is impractical. The TOCTOU race (two concurrent turns both passing the pre-check and both committing via `record_token_usage`) is an accepted trade-off. If stricter enforcement is ever needed, the approach is a Lua script doing GET+INCRBY atomically in Redis.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-17T07:24:34.302Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-17T07:24:34.302Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, all fail-open `except` blocks catch `(RedisError, ConnectionError, OSError)` specifically — not bare `except Exception`. This applies to `_session_reset_from_ttl`, `get_usage_status`, `check_rate_limit`, and `record_token_usage`. The narrowed tuple ensures only genuine Redis/network failures are swallowed; unexpected exceptions propagate normally.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-15T15:29:20.889Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-15T15:29:20.889Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, the daily and weekly Redis keys encode the current date/week directly in the key name (e.g., `copilot:usage:daily:{user_id}:{YYYY-MM-DD}` and `copilot:usage:weekly:{user_id}:{year}-W{week}`). This means a new key is naturally created at each window boundary, so `resets_at` (derived from `_daily_reset_time()` / `_weekly_reset_time()`) is always accurate without any Redis TTL introspection — the key rotation and reset-time calculation are inherently synchronized.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-15T23:39:39.754Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12385
File: autogpt_platform/backend/backend/copilot/rate_limit.py:0-0
Timestamp: 2026-03-15T23:39:39.754Z
Learning: In `autogpt_platform/backend/backend/copilot/rate_limit.py`, `record_token_usage` uses the same helper functions (`_daily_reset_time()` / `_weekly_reset_time()`) to compute both `resets_at` (the reset timestamp returned to callers) and the Redis key `expire` seconds. This single-source-of-truth design guarantees that the reported reset times and the actual Redis TTLs are always in sync — there is no separate TTL constant that could diverge from the calendar-boundary calculation.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-21T04:36:19.755Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12865
File: autogpt_platform/backend/backend/data/credit_subscription_test.py:1119-1122
Timestamp: 2026-04-21T04:36:19.755Z
Learning: In `autogpt_platform/backend/backend/data/credit_subscription_test.py` (and related subscription test files), test mocks for the user object returned by `get_user_by_id` should use snake_case `subscription_tier` (not camelCase `subscriptionTier`). This is because `get_user_by_id` (defined in `backend/data/user.py`) returns `backend.data.model.User` — a Pydantic application model with `subscription_tier: SubscriptionTier` — not a raw Prisma model. Production code in `backend/data/credit.py` reads `user.subscription_tier` from that Pydantic model. Do NOT flag `mock_user.subscription_tier = ...` as incorrect in these tests.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-04T08:04:35.881Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12273
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:216-220
Timestamp: 2026-03-04T08:04:35.881Z
Learning: In the AutoGPT Copilot backend, ensure that SVG images are not treated as vision image types by excluding 'image/svg+xml' from INLINEABLE_MIME_TYPES and MULTIMODAL_TYPES in tool_adapter.py; the Claude API supports PNG, JPEG, GIF, and WebP for vision. SVGs (XML text) should be handled via the text path instead, not the vision path.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-01T04:17:41.600Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12632
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-01T04:17:41.600Z
Learning: When reviewing AutoGPT Copilot tool implementations, accept that `readOnlyHint=True` (provided via `ToolAnnotations`) may be applied unconditionally to *all* tools—even tools that have side effects (e.g., `bash_exec`, `write_workspace_file`, or other write/save operations). Do **not** flag these tools for having `readOnlyHint=True`; this is intentional to enable fully-parallel dispatch by the Anthropic SDK/CLI and has been E2E validated. Only flag `readOnlyHint` issues if they conflict with the established `ToolAnnotations` behavior (e.g., missing/incorrect propagation relative to the intended annotation mechanism).

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/copilot/rate_limit_test.py
🔇 Additional comments (1)
autogpt_platform/backend/backend/copilot/rate_limit_test.py (1)

394-468: LGTM — solid coverage for the new LD resolver.

The new TestGetTierMultipliers class exercises the key paths (defaults, partial override, invalid payload, unknown/non-positive/unparseable overrides, LD exception) and correctly autouses _fetch_tier_multipliers_flag.cache_clear() so the per-test patches aren't shadowed by stale cache entries. The fractional-multiplier assertion at lines 793–797 (8.5× on PRO) also nicely pins down that the int(base * multiplier) cast survives for downstream microdollar math.
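The cast the comment refers to is plain integer truncation of the scaled base; a minimal illustration (function name hypothetical):

```python
def scaled_limit(base_microdollars: int, multiplier: float) -> int:
    # int() truncates, so the result stays an integer even for a fractional
    # multiplier like 8.5x — downstream microdollar math never sees a float.
    return int(base_microdollars * multiplier)
```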

@majdyz majdyz merged commit 408b205 into dev Apr 24, 2026
44 checks passed
@majdyz majdyz deleted the feat/configurable-tier-multipliers-and-relative-ui branch April 24, 2026 15:05
@github-project-automation github-project-automation Bot moved this from 🆕 Needs initial review to ✅ Done in AutoGPT development kanban Apr 24, 2026
@github-project-automation github-project-automation Bot moved this to Done in Frontend Apr 24, 2026
majdyz added a commit that referenced this pull request Apr 24, 2026
## What

Consolidates two groups of LaunchDarkly flags into single JSON-valued
flags, matching the pattern established by `copilot-tier-multipliers`
(merged in #12910):

**Stripe prices** — 4 string flags → 1 JSON flag:
- ~~`stripe-price-id-basic`~~ / ~~`-pro`~~ / ~~`-max`~~ /
~~`-business`~~
- **New:** `copilot-tier-stripe-prices` (JSON)
  ```json
  { "PRO": "price_xxx", "MAX": "price_yyy" }
  ```

**Cost limits** — 2 number flags → 1 JSON flag:
- ~~`copilot-daily-cost-limit-microdollars`~~ /
~~`copilot-weekly-cost-limit-microdollars`~~
- **New:** `copilot-cost-limits` (JSON)
  ```json
  { "daily": 625000, "weekly": 3125000 }
  ```

## Why

- One flag to manage per config domain (LD UI less cluttered, easier
audit trail).
- Atomic updates — e.g., rotating Pro + Max prices happens in a single
save.
- Fewer LD entities to name, version, target, and explain.
- Mirrors the just-merged `copilot-tier-multipliers` shape so the whole
pricing/limits config is uniform.

## How

- `get_subscription_price_id(tier)` now parses
`copilot-tier-stripe-prices` and looks up `tier.value` — returns `None`
when the flag is unset, non-dict, tier key missing, or value isn't a
non-empty string.
- `get_global_rate_limits` uses a new sibling
`_fetch_cost_limits_flag()` helper (60s cache, `cache_none=False`) that
extracts `daily` / `weekly` int keys independently and falls back to the
existing `ChatConfig` defaults when any key is missing / non-int /
negative. A broken `daily` doesn't wipe out `weekly` (or vice versa).
- Tests rewritten to mock the new JSON shapes + cover partial / invalid
/ missing-key fallbacks.
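The per-key fallback described above can be sketched as follows. Names and structure are illustrative, not the actual backend implementation; the defaults mirror the example values in the flag JSON above:

```python
# Illustrative ChatConfig defaults (microdollars), per the example flag payload.
DEFAULT_DAILY_MICRODOLLARS = 625_000
DEFAULT_WEEKLY_MICRODOLLARS = 3_125_000


def resolve_cost_limits(flag_payload: object) -> tuple[int, int]:
    """Extract daily/weekly limits from the JSON flag, falling back per key."""
    daily = DEFAULT_DAILY_MICRODOLLARS
    weekly = DEFAULT_WEEKLY_MICRODOLLARS
    if isinstance(flag_payload, dict):
        raw_daily = flag_payload.get("daily")
        # bool is an int subclass in Python, so exclude it explicitly.
        if isinstance(raw_daily, int) and not isinstance(raw_daily, bool) and raw_daily >= 0:
            daily = raw_daily
        raw_weekly = flag_payload.get("weekly")
        if isinstance(raw_weekly, int) and not isinstance(raw_weekly, bool) and raw_weekly >= 0:
            weekly = raw_weekly
    # Keys are validated independently: a broken "daily" leaves "weekly" intact.
    return daily, weekly
```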

## ⚠️ Operator action required BEFORE merging

This PR **removes 6 LD flags** and introduces 2 replacements. To avoid a
pricing/rate-limit outage, do this in LaunchDarkly first:

1. Create `copilot-tier-stripe-prices` (type: **JSON**). Default
variation = union of the current `stripe-price-id-*` values:
   ```json
   { "PRO": "<current stripe-price-id-pro>", "MAX": "<current stripe-price-id-max>" }
   ```
   Omit BASIC / BUSINESS if those flags are unset today.

2. Create `copilot-cost-limits` (type: **JSON**). Default variation =
the current two flags' values:
   ```json
   { "daily": <current daily microdollars>, "weekly": <current weekly microdollars> }
   ```

3. Merge this PR.

4. After deploy + smoke test, delete the six legacy flags:
   - `stripe-price-id-{basic,pro,max,business}`
   - `copilot-daily-cost-limit-microdollars`
   - `copilot-weekly-cost-limit-microdollars`

## Testing

- Backend unit tests: `pytest backend/copilot/rate_limit_test.py
backend/data/credit_subscription_test.py
backend/api/features/subscription_routes_test.py` — rewritten to
exercise the JSON flag shapes + fallback paths; passes locally.
- `black --check` / `ruff check` / `isort --check` — all clean.

## Checklist

- [x] I have read the project's contributing guide.
- [x] I have clearly described what this PR changes and why.
- [x] My code follows the style guidelines of this project.
- [x] I have added tests that prove my fix is effective or that my
feature works.
- [ ] New and existing unit tests pass locally with my changes (CI will
confirm).
majdyz added a commit that referenced this pull request Apr 25, 2026
… 1 JSON flag (#12917)

## What

Replaces 4 string-valued LaunchDarkly flags with a single JSON-valued
flag for copilot model routing:

- ~~`copilot-fast-standard-model`~~
- ~~`copilot-fast-advanced-model`~~
- ~~`copilot-thinking-standard-model`~~
- ~~`copilot-thinking-advanced-model`~~

**New:** `copilot-model-routing` (JSON), keyed `{mode: {tier: model}}`:
```json
{
  "fast":     { "standard": "anthropic/claude-sonnet-4-6", "advanced": "anthropic/claude-opus-4-6" },
  "thinking": { "standard": "moonshotai/kimi-k2.6",         "advanced": "anthropic/claude-opus-4-6" }
}
```

## Why

Same pattern as the sibling consolidation in #12915 (pricing /
cost-limits flags) and the merged #12910 (tier-multipliers):

- One flag per config domain — less LD UI clutter, easier audit trail.
- Atomic updates — rotating fast.standard + thinking.standard is a
single save.
- Fewer LD entities to name, version, target, explain.
- Mirrors the now-uniform copilot-* JSON-flag shape.

## How

- `backend/util/feature_flag.py`: drop the four `COPILOT_*_MODEL` enum
values, add `COPILOT_MODEL_ROUTING`.
- `backend/copilot/model_router.py`: rewrite `resolve_model` to fetch
the JSON flag once per call and walk `payload[mode][tier]`. Missing
mode, missing tier-within-mode, non-string cell value, non-dict payload,
or LD failure all fall back to the corresponding `ChatConfig` default
(same user-visible semantics as before). `_FLAG_BY_CELL` removed
entirely; `_config_default` / `ModelMode` / `ModelTier` unchanged.
- Per-user LD targeting preserved — cohorts can still receive different
routing.
- No caching added (preserves existing uncached behaviour).
- Docstring references in `copilot/config.py` + `copilot/sdk/service.py`
updated to point at the new nested key path; one docstring in
`service_test.py` likewise.

## Operator action required BEFORE merging

This PR removes 4 LD flags and introduces 1 replacement.

1. In LaunchDarkly, create `copilot-model-routing` (type: **JSON**,
server-side only). Default variation = union of the current four string
flags, shaped as:
   ```json
   {
     "fast": { "standard": "<current copilot-fast-standard-model>", "advanced": "<current copilot-fast-advanced-model>" },
     "thinking": { "standard": "<current copilot-thinking-standard-model>", "advanced": "<current copilot-thinking-advanced-model>" }
   }
   ```
Omit any cell that's currently unset (its `ChatConfig` default will be
used).

2. Merge this PR.

3. After deploy + smoke, delete the four legacy flags:
   - `copilot-fast-standard-model`
   - `copilot-fast-advanced-model`
   - `copilot-thinking-standard-model`
   - `copilot-thinking-advanced-model`

## Testing

- `backend/copilot/model_router_test.py` rewritten — 27 tests pass:
  - LD unset / `None` payload → fallback for every cell.
  - Full JSON → each cell maps to its value (parametrized).
  - Partial JSON (missing mode, missing tier-within-mode, mode value not a dict).
  - Non-dict payloads (str / list / int / bool) → fallback + warning.
  - Non-string cell values (number, list, bool, dict) → fallback + 'non-string' warning.
  - Empty-string cell → fallback + 'empty string' warning (not 'non-string').
  - LD raises → fallback + warning with `exc_info`.
  - `user_id=None` → skip LD entirely.
  - Single-LD-call regression guard against re-introducing per-cell flag fan-out.
- `backend/copilot/sdk/service_test.py`: 61 tests still pass (it mocks
`_resolve_thinking_model_for_user`, so the inner flag change is
transparent).
- `black --check` / `ruff check` / `isort --check` all clean.

## Sibling

- #12915 — same consolidation pattern for stripe-price / cost-limits
flags.

## Checklist

- [x] I have read the project's contributing guide.
- [x] I have clearly described what this PR changes and why.
- [x] My code follows the style guidelines of this project.
- [x] I have added tests that prove my fix is effective or that my
feature works.
- [ ] New and existing unit tests pass locally with my changes (CI will
confirm).

Labels

platform/backend (AutoGPT Platform - Back end), platform/frontend (AutoGPT Platform - Front end), size/xl
