Supported Google Cloud Model IDs #930
Unanswered
renesoberanes asked this question in Q&A
I am running Accomplish (Openwork) on Linux with an API key from my Google AI Pro plan. Cloud integration works for Gemini 2.5 Flash and Gemini 3, but I am having trouble with the Gemma models listed in the UI.
The Issue:
Both Gemma 3 and Gemma 4 appear in the model selection dropdown under the Google Cloud provider, but when either is selected it fails to generate any response.
Technical Context:
Plan: Google AI Pro (Cloud API).
Hardware: Ryzen 7 5800X | RTX 3080 (10GB) | 32GB RAM.
Goal: I want to utilize the higher token limits/scalability of Gemma 3/4 without the local hardware overhead of Ollama, assuming they are supported via the Google Cloud endpoint.
Questions:
Does the current Linux build of Accomplish support Gemma 3/4 via the Google AI Studio/Vertex AI cloud endpoints?
If so, what are the specific Model IDs required for the configuration?
If these are intended strictly for local use (Ollama), why are they populated in the Cloud Provider dropdown?
I am looking for the most cost-efficient way to leverage these higher-limit models within my existing Pro subscription. Any clarification on whether these are UI artifacts or misconfigured cloud endpoints would be appreciated.
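For reference, the Gemini API exposes a ListModels endpoint (`GET https://generativelanguage.googleapis.com/v1beta/models?key=API_KEY`) that reports exactly which model IDs a key can call and which generation methods each supports. Below is a minimal sketch of filtering that response for usable Gemma IDs; the helper name `gemma_model_ids` and the sample payload are purely illustrative, not output from my key:

```python
def gemma_model_ids(list_models_response: dict) -> list[str]:
    """Return Gemma model IDs from a parsed ListModels response
    that support the generateContent method."""
    return [
        m["name"]
        for m in list_models_response.get("models", [])
        if "gemma" in m["name"].lower()
        and "generateContent" in m.get("supportedGenerationMethods", [])
    ]

# Illustrative payload only -- the live endpoint returns the IDs your key can use.
sample = {
    "models": [
        {"name": "models/gemini-2.5-flash",
         "supportedGenerationMethods": ["generateContent"]},
        {"name": "models/gemma-3-27b-it",
         "supportedGenerationMethods": ["generateContent"]},
    ]
}

print(gemma_model_ids(sample))  # -> ['models/gemma-3-27b-it']
```

Running the same filter against the live endpoint would show whether any Gemma ID is actually exposed to a Pro plan key, which would settle my first question directly.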