feat(provider): added regolo.ai #13028
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📜 Recent review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
Walkthrough
Adds Regolo.ai support: new provider enum and secret, 14 Regolo model enum entries plus MODEL_METADATA, and a Regolo branch in llm_call() that calls Regolo’s OpenAI-compatible chat completions endpoint and returns LLMResponse with usage and tool-call extraction.

Changes
Regolo LLM Provider Integration
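The walkthrough mentions an llm_call() branch that hits Regolo's OpenAI-compatible chat completions endpoint. As a rough illustration only, a request to such an endpoint is typically shaped as below; the base URL, model name, and payload fields here are assumptions, not code from the PR:

```python
# Hypothetical sketch of an OpenAI-compatible chat completions request for a
# provider like Regolo. The endpoint URL and model name are assumptions.
from typing import Any

REGOLO_BASE_URL = "https://api.regolo.ai/v1"  # assumed base URL


def build_chat_request(
    model: str, prompt: str, api_key: str
) -> tuple[dict[str, str], dict[str, Any]]:
    """Build headers and JSON body for a POST to {base}/chat/completions."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body


headers, body = build_chat_request("Apertus-70B", "Hello", "sk-test")
```

The actual branch in llm.py would send this body to the provider's chat completions URL and map the JSON response into LLMResponse.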
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@autogpt_platform/backend/backend/blocks/llm.py`:
- Around lines 336-338: The key `REGOLO_APERTUS_70B` in the model metadata map is referenced as a bare name, causing a `NameError` at import time. Qualify it with the enum that actually defines that constant (the same enum used for the map's other keys), e.g. `<EnumName>.REGOLO_APERTUS_70B`, so that the entry assigning `ModelMetadata("regolo", 60000, 30000, "Apertus-70B", "Regolo.ai", "Apertus", 1)` uses the enum-qualified key and no longer crashes on import.
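The flagged bug and its fix can be reproduced in isolation. The enum and metadata shapes below are illustrative stand-ins, not the actual definitions in llm.py:

```python
# Minimal reproduction of the NameError and the enum-qualified fix.
from enum import Enum
from typing import NamedTuple


class ModelMetadata(NamedTuple):
    provider: str
    context_window: int


class LlmModel(str, Enum):
    REGOLO_APERTUS_70B = "Apertus-70B"


# Buggy: a bare `REGOLO_APERTUS_70B` key raises NameError at import time,
# because no module-level variable with that name exists:
# MODEL_METADATA = {REGOLO_APERTUS_70B: ModelMetadata("regolo", 60000)}

# Fixed: qualify the key with the enum that defines the constant.
MODEL_METADATA = {
    LlmModel.REGOLO_APERTUS_70B: ModelMetadata("regolo", 60000),
}
```

Because dict keys are evaluated when the module loads, the bare name fails immediately on import rather than at call time, which is why CI dies during the docs-sync check.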
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5942c6d8-183a-4bc1-8c2b-43342af996a4
📒 Files selected for processing (3)
- autogpt_platform/backend/backend/blocks/llm.py
- autogpt_platform/backend/backend/integrations/providers.py
- autogpt_platform/backend/backend/util/settings.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: end-to-end tests
- GitHub Check: test (3.12)
- GitHub Check: type-check (3.13)
- GitHub Check: test (3.13)
- GitHub Check: type-check (3.12)
- GitHub Check: type-check (3.11)
- GitHub Check: test (3.11)
- GitHub Check: types
- GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (3)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run `poetry run format` (Black + isort) before linting in backend development
Always run `poetry run lint` (ruff) after formatting in backend development
autogpt_platform/backend/**/*.py: Use `poetry run ...` commands for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like `openpyxl`
Use absolute imports with `from backend.module import ...` for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid `hasattr`/`getattr`/`isinstance` for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no `# type: ignore`, `# noqa`, `# pyright: ignore`; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use `%s` for deferred interpolation in `debug` log statements for efficiency; use f-strings elsewhere for readability (e.g., `logger.debug("Processing %s items", count)` vs `logger.info(f"Processing {count} items")`)
Sanitize error paths by using `os.path.basename()` in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use `transaction=True` for Redis pipelines to ensure atomicity on multi-step operations
Use `max(0, value)` guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...
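A few of the guidelines above (guard clauses first, `max(0, ...)` guards, and `%s`-deferred debug logging) can be shown in one small function; the names here are invented for illustration:

```python
# Illustrative-only helper combining three of the backend guidelines:
# early-return guard clauses, a max(0, ...) guard, and %s debug logging.
import logging

logger = logging.getLogger(__name__)


def remaining_credits(balance: int, cost: int) -> int:
    # Guard clause first, instead of nesting the happy path under an if.
    if cost <= 0:
        return balance
    # max(0, ...) guard: the computed value should never go negative.
    remaining = max(0, balance - cost)
    # %s defers string interpolation until the record is actually emitted,
    # so no formatting work happens when DEBUG logging is disabled.
    logger.debug("Charged %s credits, %s remain", cost, remaining)
    return remaining
```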
Files:
- autogpt_platform/backend/backend/integrations/providers.py
- autogpt_platform/backend/backend/util/settings.py
- autogpt_platform/backend/backend/blocks/llm.py
autogpt_platform/{backend,autogpt_libs}/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Format Python code with `poetry run format`
Files:
- autogpt_platform/backend/backend/integrations/providers.py
- autogpt_platform/backend/backend/util/settings.py
- autogpt_platform/backend/backend/blocks/llm.py
autogpt_platform/backend/backend/blocks/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/backend/blocks/**/*.py: Inherit from 'Block' base class with input/output schemas when adding new blocks in backend
Implement 'run' method with proper error handling in backend blocks
Generate block UUID using 'uuid.uuid4()' when creating new blocks in backend
Write tests alongside block implementation when adding new blocks in backend
autogpt_platform/backend/backend/blocks/**/*.py: For blocks handling files, use `store_media_file()` with `return_format="for_local_processing"` when processing with local tools (ffmpeg, MoviePy, PIL)
For blocks handling files, use `store_media_file()` with `return_format="for_external_api"` when sending content to external APIs (Replicate, OpenAI)
For blocks returning files, use `store_media_file()` with `return_format="for_block_output"` to enable auto-adaptation to execution context (workspace:// in CoPilot, data URI in graphs)
When creating new blocks, inherit from `Block` base class, define input/output schemas using `BlockSchema`, implement async `run` method, and generate unique block ID using `uuid.uuid4()`
Files:
autogpt_platform/backend/backend/blocks/llm.py
🧠 Learnings (13)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.
Applied to files:
autogpt_platform/backend/backend/integrations/providers.pyautogpt_platform/backend/backend/util/settings.pyautogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-02-05T04:11:00.596Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11796
File: autogpt_platform/backend/backend/blocks/video/concat.py:3-4
Timestamp: 2026-02-05T04:11:00.596Z
Learning: In autogpt_platform/backend/backend/blocks/**/*.py, when creating a new block, generate a UUID once with uuid.uuid4() and hard-code the resulting string as the block's id parameter. Do not call uuid.uuid4() at runtime; IDs must be constant across all imports and runs to ensure stability.
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:32:21.686Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:32:21.686Z
Learning: In autogpt_platform/backend/backend/blocks/, the Block base class execute() already wraps run() in a try/except to convert uncaught exceptions into BlockExecutionError/BlockUnknownError. Do not add per-block try/except in individual block run() methods, as this is not the established pattern (e.g., Gmail, Slack, Todoist blocks omit it). Only use explicit try/except within blocks that need to distinguish between success and error yield paths inside a generator (e.g., attachment blocks). This guidance applies to all Python files under autogpt_platform/backend/backend/blocks/ and similar block implementations; avoid duplicating error handling in run() unless a block requires generator-based branching.
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-23T12:55:26.122Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12893
File: autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py:24-24
Timestamp: 2026-04-23T12:55:26.122Z
Learning: Cost billing via the cost(*costs) decorator is applied at input-evaluation time (before a block’s run() executes). Therefore, mutating input_data inside run() will not change billing. When a block’s billing depends on a field plus URL/sniff-derived signals, treat the explicitly declared billing field (e.g., is_video) as the only billing source—set it correctly before run() (or in the code path that occurs before the decorator evaluates input_data). This should be checked for all blocks under autogpt_platform/backend/backend/blocks/ so billing signals are not mistakenly assumed to update during run().
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: In autogpt_platform/backend/backend/blocks/ (and related blocks under autogpt_platform/backend/backend/blocks/), do not add try/except blocks around a block's run() method for standard error propagation. The block executor framework (backend/executor/manager.py) catches uncaught exceptions from run() and emits them on the 'error' output. Only add explicit try/except blocks when you need to control partial outputs in failure cases (e.g., certain outputs must not be yielded on error, as in attachment blocks). This is the standard pattern across the codebase; apply it broadly to blocks' run() implementations.
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:23.196Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:30:23.196Z
Learning: In any Python file under autogpt_platform/backend/backend/blocks, do not add a try/except around run() solely for standard error handling. The block framework’s _execute() in _base.py already catches unhandled exceptions and re-raises as BlockExecutionError or BlockUnknownError. If you yield ("error", message), _execute() raises BlockExecutionError immediately, so the error port will not propagate downstream. Reserve explicit try/except for scenarios where you must control partial output (e.g., attachment blocks that must skip yielding content_base64 on failure).
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: Do not wrap synchronous AgentMail SDK calls with asyncio.to_thread() in blocks under autogpt_platform/backend/backend/blocks (and across the codebase). The block executor runs node execution in dedicated threads via asyncio.run_coroutine_threadsafe (see manager.py around lines ~745-752 and ~1079). The existing pattern avoids using asyncio.to_thread for SDK calls inside async run() methods, so maintain that approach and do not add to_thread usage in these code paths.
Applied to files:
autogpt_platform/backend/backend/blocks/llm.py
🪛 GitHub Actions: Block Documentation Sync Check / 0_check-docs-sync.txt
autogpt_platform/backend/backend/blocks/llm.py
[error] 336-336: NameError: name 'REGOLO_APERTUS_70B' is not defined.
🪛 GitHub Actions: Block Documentation Sync Check / check-docs-sync
autogpt_platform/backend/backend/blocks/llm.py
[error] 336-336: NameError: name 'REGOLO_APERTUS_70B' is not defined (while defining REGOLO_APERTUS_70B: ModelMetadata...).
🪛 Ruff (0.15.12)
autogpt_platform/backend/backend/blocks/llm.py
[error] 336-336: Undefined name REGOLO_APERTUS_70B
(F821)
🔇 Additional comments (4)
autogpt_platform/backend/backend/integrations/providers.py (1)
43-43: Provider enum extension looks correct. `ProviderName.REGOLO` is consistent with the provider string used by the new integration path.

autogpt_platform/backend/backend/util/settings.py (1)
658-660: Secrets model update is aligned with the provider integration. Adding `regolo_api_key` here matches the new provider and keeps credential configuration centralized.

autogpt_platform/backend/backend/blocks/llm.py (2)
137-150: Regolo model enum additions are structured well. The new `LlmModel.REGOLO_*` entries are consistently named and fit the existing model catalog pattern.

1319-1357: Regolo dispatch branch follows the existing OpenAI-compatible provider pattern. The new branch mirrors the established flow (tool-call extraction, usage accounting, empty-choice guard) consistently.
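The empty-choice guard, tool-call extraction, and usage accounting mentioned above can be sketched as a generic parser for an OpenAI-compatible response. Field names follow the public chat completions schema; the actual llm.py implementation may differ:

```python
# Hedged sketch of response handling for an OpenAI-compatible provider:
# guard against empty choices, then extract content, tool calls, and usage.
from typing import Any


def parse_chat_response(
    response: dict[str, Any],
) -> tuple[str, list[dict[str, Any]], int, int]:
    choices = response.get("choices") or []
    # Empty-choice guard: fail loudly instead of indexing into nothing.
    if not choices:
        raise ValueError("LLM returned no choices")
    message = choices[0]["message"]
    content = message.get("content") or ""
    tool_calls = message.get("tool_calls") or []
    usage = response.get("usage") or {}
    return (
        content,
        tool_calls,
        usage.get("prompt_tokens", 0),
        usage.get("completion_tokens", 0),
    )
```

The returned tuple maps naturally onto an LLMResponse-style object carrying the text, any tool calls, and the token counts used for billing.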
/dev-screenshot
@ntindle it seems that the automatic check failed
Why / What / How
Adds Regolo.ai to the set of available providers
Changes 🏗️
Checklist 📋
For code changes: