feat(provider): added regolo.ai#13028

Open
Mte90 wants to merge 2 commits into Significant-Gravitas:dev from Mte90:regolo

Conversation

@Mte90 Mte90 commented May 7, 2026

Why / What / How

Adds Regolo.ai as one of the available LLM providers.

Changes 🏗️

Checklist 📋

For code changes:

  • I have clearly listed my changes in the PR description
  • I have made a test plan
  • I have tested my changes according to the test plan

@Mte90 Mte90 requested a review from a team as a code owner May 7, 2026 12:45
@Mte90 Mte90 requested review from Bentlybro and Pwuts and removed request for a team May 7, 2026 12:45
@github-project-automation github-project-automation Bot moved this to 🆕 Needs initial review in AutoGPT development kanban May 7, 2026

CLAassistant commented May 7, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

Contributor

github-actions Bot commented May 7, 2026

This PR targets the master branch but does not come from dev or a hotfix/* branch.

Automatically setting the base branch to dev.

@github-actions github-actions Bot added platform/backend AutoGPT Platform - Back end platform/blocks labels May 7, 2026
@github-actions github-actions Bot changed the base branch from master to dev May 7, 2026 12:46
@github-actions github-actions Bot added the size/l label May 7, 2026
Contributor

coderabbitai Bot commented May 7, 2026

Review Change Stack
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: aa47f33c-e0e0-4809-8c44-50262f49d3d8

📥 Commits

Reviewing files that changed from the base of the PR and between baf2818 and aad4259.

📒 Files selected for processing (1)
  • autogpt_platform/backend/backend/blocks/llm.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/backend/backend/blocks/llm.py
📜 Recent review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: end-to-end tests
  • GitHub Check: Check PR Status

Walkthrough

Adds Regolo.ai support: new provider enum and secret, 14 Regolo model enum entries plus MODEL_METADATA, and a Regolo branch in llm_call() that calls Regolo’s OpenAI-compatible chat completions endpoint and returns LLMResponse with usage and tool-call extraction.
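As a rough illustration of the provider-registration half of the change, the sketch below mirrors the names mentioned in the walkthrough (ProviderName.REGOLO, regolo_api_key). It is a minimal stand-in, not the PR's actual code: the real ProviderName lives in providers.py and the real Secrets is a Pydantic model in settings.py; plain stdlib types are used here so the example is self-contained.

```python
from enum import Enum


# Hypothetical sketch of registering a new provider. In the real codebase
# ProviderName is defined in backend/integrations/providers.py.
class ProviderName(str, Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    REGOLO = "regolo"  # entry added by this PR


# Stand-in for the project's Pydantic Secrets model (backend/util/settings.py),
# which per the walkthrough gains a regolo_api_key field.
class Secrets:
    def __init__(self, regolo_api_key: str = ""):
        self.regolo_api_key = regolo_api_key


secrets = Secrets(regolo_api_key="sk-example")
print(ProviderName.REGOLO.value, bool(secrets.regolo_api_key))
```

Because the enum inherits from str, the new member round-trips cleanly from the stored provider string (ProviderName("regolo") is ProviderName.REGOLO), which is what lets the dispatch in llm_call() key off the plain "regolo" value.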

Changes

Regolo LLM Provider Integration

| Layer | File(s) | Summary |
|---|---|---|
| Provider Definition & Configuration | autogpt_platform/backend/backend/integrations/providers.py, autogpt_platform/backend/backend/util/settings.py | ProviderName gains REGOLO = "regolo"; Secrets adds regolo_api_key for OpenAI-compatible Regolo API authentication. |
| Model Definitions & Metadata | autogpt_platform/backend/backend/blocks/llm.py | LlmModel adds 14 REGOLO_* variants; MODEL_METADATA adds matching ModelMetadata entries (provider "regolo", context windows, max output tokens, display/provider/creator names, and price tiers). |
| Regolo API Integration | autogpt_platform/backend/backend/blocks/llm.py | llm_call(...) adds elif provider == "regolo": to call Regolo's OpenAI-compatible chat completions endpoint (https://api.regolo.ai/v1), forwards model/messages/max_tokens/tools, handles parallel tool calls, extracts tool_calls and reasoning, raises on empty choices, and returns an LLMResponse with content and usage. |
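The integration row above describes a standard OpenAI-compatible flow: build a chat-completions payload, then pull content, tool calls, and token usage out of the response, raising if no choices come back. The sketch below illustrates that shape under stated assumptions; build_payload and parse_response are hypothetical helper names, and the canned dict stands in for a live response from https://api.regolo.ai/v1 so the example runs offline.

```python
# Illustrative sketch of an OpenAI-compatible chat-completions round trip,
# not the PR's actual llm_call() branch.
REGOLO_BASE_URL = "https://api.regolo.ai/v1"  # endpoint named in the summary


def build_payload(model, messages, max_tokens=None, tools=None):
    payload = {"model": model, "messages": messages}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if tools:
        payload["tools"] = tools
        payload["parallel_tool_calls"] = True  # summary notes parallel tool calls
    return payload


def parse_response(data):
    # Guard against an empty choices list, as the summary says the branch does.
    choices = data.get("choices") or []
    if not choices:
        raise ValueError("Regolo API returned no choices")
    msg = choices[0]["message"]
    usage = data.get("usage", {})
    return {
        "content": msg.get("content"),
        "tool_calls": msg.get("tool_calls") or [],
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
    }


# Offline example: parse a canned response instead of making a network call.
sample = {
    "choices": [{"message": {"content": "Hello!", "tool_calls": None}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3},
}
result = parse_response(sample)
print(result["content"], result["prompt_tokens"], result["completion_tokens"])
```

The usage fields extracted here are what would feed the LLMResponse accounting mentioned in the summary.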

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • Pwuts
  • ntindle

Poem

🐰 I hopped into code with a whiskered cheer,
Fourteen new models, Regolo drawing near.
Keys tucked in secrets, metadata in line,
Chat completions hum — the rabbits all dine!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly and specifically identifies the main change: adding Regolo.ai as a provider, which aligns with the changeset. |
| Description check | ✅ Passed | The description is related to the changeset, describing the addition of Regolo.ai as a provider, though it lacks detail about specific changes. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@autogpt_platform/backend/backend/blocks/llm.py`:
- Around line 336-338: The key REGOLO_APERTUS_70B in the model metadata map is
referenced as a bare name causing a NameError at import time; qualify it with
the enum/class that actually defines that constant (the same enum used for other
keys in this map) so the entry becomes e.g. <EnumName>.REGOLO_APERTUS_70B;
update the entry where ModelMetadata("regolo", 60000, 30000, "Apertus-70B",
"Regolo.ai", "Apertus", 1) is assigned to use the enum-qualified key (replace
REGOLO_APERTUS_70B with the appropriate enum like Models.REGOLO_APERTUS_70B) to
prevent import-time crashes.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5942c6d8-183a-4bc1-8c2b-43342af996a4

📥 Commits

Reviewing files that changed from the base of the PR and between 46e5795 and baf2818.

📒 Files selected for processing (3)
  • autogpt_platform/backend/backend/blocks/llm.py
  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.12)
  • GitHub Check: type-check (3.13)
  • GitHub Check: test (3.13)
  • GitHub Check: type-check (3.12)
  • GitHub Check: type-check (3.11)
  • GitHub Check: test (3.11)
  • GitHub Check: types
  • GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (3)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
autogpt_platform/backend/backend/blocks/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/backend/blocks/**/*.py: Inherit from 'Block' base class with input/output schemas when adding new blocks in backend
Implement 'run' method with proper error handling in backend blocks
Generate block UUID using 'uuid.uuid4()' when creating new blocks in backend
Write tests alongside block implementation when adding new blocks in backend

autogpt_platform/backend/backend/blocks/**/*.py: For blocks handling files, use store_media_file() with return_format="for_local_processing" when processing with local tools (ffmpeg, MoviePy, PIL)
For blocks handling files, use store_media_file() with return_format="for_external_api" when sending content to external APIs (Replicate, OpenAI)
For blocks returning files, use store_media_file() with return_format="for_block_output" to enable auto-adaptation to execution context (workspace:// in CoPilot, data URI in graphs)
When creating new blocks, inherit from Block base class, define input/output schemas using BlockSchema, implement async run method, and generate unique block ID using uuid.uuid4()

Files:

  • autogpt_platform/backend/backend/blocks/llm.py
🧠 Learnings (13)
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-15T02:43:36.890Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12780
File: autogpt_platform/backend/backend/copilot/tools/workspace_files.py:0-0
Timestamp: 2026-04-15T02:43:36.890Z
Learning: When reviewing Python exception handlers, do not flag `isinstance(e, X)` checks as dead/unreachable if the caught exception `X` is a subclass of the exception type being handled. For example, if `X` (e.g., `VirusScanError`) inherits from `ValueError` (directly or via an intermediate class) and it can be raised within an `except ValueError:` block, then `isinstance(e, X)` inside that handler is reachable and should not be treated as dead code.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-22T11:46:04.431Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/config.py:0-0
Timestamp: 2026-04-22T11:46:04.431Z
Learning: Do not flag the Claude Sonnet 4.6 model ID as incorrect when it uses the project’s established hyphenated convention: `anthropic/claude-sonnet-4-6`. This hyphen form is the intentional, production convention and should be treated as valid (including in files like llm.py, blocks tests, reasoning.py, `_is_anthropic_model` tests, and config defaults). Note that OpenRouter also accepts the dot variant `anthropic/claude-sonnet-4.6`, so either form may be tolerated, but `anthropic/claude-sonnet-4-6` should be considered the standard to match project usage.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-22T11:46:12.892Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12881
File: autogpt_platform/backend/backend/copilot/baseline/service.py:322-332
Timestamp: 2026-04-22T11:46:12.892Z
Learning: In this codebase (Significant-Gravitas/AutoGPT), OpenRouter-routed Anthropic model IDs should use the hyphen-separated convention (e.g., `anthropic/claude-sonnet-4-6`, `anthropic/claude-opus-4-6`). Although OpenRouter may accept both hyphen and dot variants, treat the hyphen-separated form as the intended, correct codebase-wide convention and do not flag it as an error. Only flag the dot-separated variant (e.g., `anthropic/claude-sonnet-4.6`) as incorrect when reviewing/validating model ID strings for OpenRouter-routed Anthropic models.

Applied to files:

  • autogpt_platform/backend/backend/integrations/providers.py
  • autogpt_platform/backend/backend/util/settings.py
  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-02-05T04:11:00.596Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11796
File: autogpt_platform/backend/backend/blocks/video/concat.py:3-4
Timestamp: 2026-02-05T04:11:00.596Z
Learning: In autogpt_platform/backend/backend/blocks/**/*.py, when creating a new block, generate a UUID once with uuid.uuid4() and hard-code the resulting string as the block's id parameter. Do not call uuid.uuid4() at runtime; IDs must be constant across all imports and runs to ensure stability.

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:32:21.686Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:32:21.686Z
Learning: In autogpt_platform/backend/backend/blocks/, the Block base class execute() already wraps run() in a try/except to convert uncaught exceptions into BlockExecutionError/BlockUnknownError. Do not add per-block try/except in individual block run() methods, as this is not the established pattern (e.g., Gmail, Slack, Todoist blocks omit it). Only use explicit try/except within blocks that need to distinguish between success and error yield paths inside a generator (e.g., attachment blocks). This guidance applies to all Python files under autogpt_platform/backend/backend/blocks/ and similar block implementations; avoid duplicating error handling in run() unless a block requires generator-based branching.

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-04-23T12:55:26.122Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12893
File: autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py:24-24
Timestamp: 2026-04-23T12:55:26.122Z
Learning: Cost billing via the cost(*costs) decorator is applied at input-evaluation time (before a block’s run() executes). Therefore, mutating input_data inside run() will not change billing. When a block’s billing depends on a field plus URL/sniff-derived signals, treat the explicitly declared billing field (e.g., is_video) as the only billing source—set it correctly before run() (or in the code path that occurs before the decorator evaluates input_data). This should be checked for all blocks under autogpt_platform/backend/backend/blocks/ so billing signals are not mistakenly assumed to update during run().

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: In autogpt_platform/backend/backend/blocks/ (and related blocks under autogpt_platform/backend/backend/blocks/), do not add try/except blocks around a block's run() method for standard error propagation. The block executor framework (backend/executor/manager.py) catches uncaught exceptions from run() and emits them on the 'error' output. Only add explicit try/except blocks when you need to control partial outputs in failure cases (e.g., certain outputs must not be yielded on error, as in attachment blocks). This is the standard pattern across the codebase; apply it broadly to blocks' run() implementations.

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:23.196Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:30:23.196Z
Learning: In any Python file under autogpt_platform/backend/backend/blocks, do not add a try/except around run() solely for standard error handling. The block framework’s _execute() in _base.py already catches unhandled exceptions and re-raises as BlockExecutionError or BlockUnknownError. If you yield ("error", message), _execute() raises BlockExecutionError immediately, so the error port will not propagate downstream. Reserve explicit try/except for scenarios where you must control partial output (e.g., attachment blocks that must skip yielding content_base64 on failure).

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: Do not wrap synchronous AgentMail SDK calls with asyncio.to_thread() in blocks under autogpt_platform/backend/backend/blocks (and across the codebase). The block executor runs node execution in dedicated threads via asyncio.run_coroutine_threadsafe (see manager.py around lines ~745-752 and ~1079). The existing pattern avoids using asyncio.to_thread for SDK calls inside async run() methods, so maintain that approach and do not add to_thread usage in these code paths.

Applied to files:

  • autogpt_platform/backend/backend/blocks/llm.py
🪛 GitHub Actions: Block Documentation Sync Check / 0_check-docs-sync.txt
autogpt_platform/backend/backend/blocks/llm.py

[error] 336-336: NameError: name 'REGOLO_APERTUS_70B' is not defined.

🪛 GitHub Actions: Block Documentation Sync Check / check-docs-sync
autogpt_platform/backend/backend/blocks/llm.py

[error] 336-336: NameError: name 'REGOLO_APERTUS_70B' is not defined (while defining REGOLO_APERTUS_70B: ModelMetadata...).

🪛 Ruff (0.15.12)
autogpt_platform/backend/backend/blocks/llm.py

[error] 336-336: Undefined name REGOLO_APERTUS_70B

(F821)

🔇 Additional comments (4)
autogpt_platform/backend/backend/integrations/providers.py (1)

43-43: Provider enum extension looks correct.

ProviderName.REGOLO is consistent with the provider string used by the new integration path.

autogpt_platform/backend/backend/util/settings.py (1)

658-660: Secrets model update is aligned with the provider integration.

Adding regolo_api_key here matches the new provider and keeps credential configuration centralized.

autogpt_platform/backend/backend/blocks/llm.py (2)

137-150: Regolo model enum additions are structured well.

The new LlmModel.REGOLO_* entries are consistently named and fit the existing model catalog pattern.


1319-1357: Regolo dispatch branch follows the existing OpenAI-compatible provider pattern.

The new branch mirrors the established flow (tool-call extraction, usage accounting, empty-choice guard) consistently.

Comment thread on autogpt_platform/backend/backend/blocks/llm.py (outdated)
Member

ntindle commented May 8, 2026

/dev-screenshot

@autogpt-pr-reviewer-in-dev

Queued a review for PR #13028 at aad4259.

@autogpt-pr-reviewer-in-dev

⚠️ Code review could not be completed

The review failed due to a temporary infrastructure issue after multiple retries.

If this persists, please contact support with job ID 84764d32-bd72-48b3-8352-f2d5689c941f.

Author

Mte90 commented May 11, 2026

@ntindle it seems that the automatic check failed
