Upgrade MiniMax provider to M2.7 with temperature clamping #195

octo-patch wants to merge 1 commit into minitap-ai:main from
Conversation
… config

- Upgrade default model from MiniMax-M1 to MiniMax-M2.7 (1M context window)
- Add temperature clamping to [0.0, 1.0] range for MiniMax API compatibility
- Add MiniMax preset configuration in llm-config.defaults.jsonc with M2.7 and M2.7-highspeed models across all agent nodes
- Add MiniMax model examples to override template comments
- Add MiniMax provider setup documentation in README
- Add comprehensive test suite (23 unit + 3 integration tests) covering M2.7/M2.7-highspeed models, temperature clamping, and provider dispatch
📝 Walkthrough

This pull request adds support for MiniMax as a new LLM provider. Changes include documentation updates describing setup instructions and available models, a new

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant LLMService as LLM Service<br/>(get_llm)
    participant LLMConfig as LLM Config<br/>(resolve provider)
    participant MiniMaxClient as MiniMax Client<br/>(ChatOpenAI)
    participant MiniMaxAPI as MiniMax API<br/>(api.minimax.io)
    Client->>LLMService: get_llm(ctx, role="planner")
    LLMService->>LLMConfig: Lookup role config
    LLMConfig-->>LLMService: {provider: "minimax", model: "MiniMax-M2.7"}
    LLMService->>LLMService: Check if provider == "minimax"
    LLMService->>MiniMaxClient: get_minimax_llm("MiniMax-M2.7", temperature)
    MiniMaxClient->>MiniMaxClient: Clamp temperature to [0.0, 1.0]
    MiniMaxClient->>MiniMaxClient: Create ChatOpenAI with<br/>api_key=MINIMAX_API_KEY<br/>base_url=https://api.minimax.io/v1
    MiniMaxClient-->>LLMService: ChatOpenAI instance
    LLMService-->>Client: Configured LLM client
    Client->>MiniMaxClient: invoke(prompt)
    MiniMaxClient->>MiniMaxAPI: HTTP POST /chat/completions
    MiniMaxAPI-->>MiniMaxClient: Response
    MiniMaxClient-->>Client: LLM response
```
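For illustration, the client construction shown in the diagram can be sketched as plain parameter assembly. This is a hedged sketch: `build_minimax_client_config` is a hypothetical helper name, and the real `get_minimax_llm` passes these values to `ChatOpenAI` rather than returning a dict.

```python
import os

def build_minimax_client_config(model_name: str, temperature: float) -> dict:
    """Sketch of the parameters the diagram shows being passed to ChatOpenAI."""
    # Clamp into MiniMax's accepted [0.0, 1.0] range before sending.
    clamped = max(0.0, min(1.0, temperature))
    return {
        "model": model_name,
        "temperature": clamped,
        "api_key": os.environ.get("MINIMAX_API_KEY", "<unset>"),
        "base_url": "https://api.minimax.io/v1",
    }
```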
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@llm-config.defaults.jsonc`:
- Around line 74-76: The comment for the "minimax" preset is misleading: update
the text around the "minimax" object so it explicitly instructs users to copy
the contents of the "minimax" object (the inner keys) into
llm-config.override.jsonc rather than copying the top-level "minimax": { ... }
wrapper; reference the "minimax" key in the comment and mirror the same
clarified wording in the README if present so users know the override loader
only merges agent keys at the root.
In `@minitap/mobile_use/services/llm.py`:
- Around line 244-245: The provider string check in the LLM dispatch is
mismatched: llm.provider is compared to "minimax" but the config's LLMProvider
only defines "minitap"; update the dispatch in
minitap/mobile_use/services/llm.py (the branch that calls get_minimax_llm) to
check for "minitap" instead of "minimax" so the branch is reachable and uses
get_minimax_llm for the configured provider; alternatively, if the intended
provider name is "minimax", update the LLMProvider definition in config.py to
include "minimax" consistently—pick one name and make both the LLMProvider
enum/constant and the llm.provider comparison use the same identifier.
In `@tests/mobile_use/test_minimax_provider.py`:
- Around line 326-393: Mark the MiniMax integration tests with the pytest
integration marker and declare that marker in the project config: add
`@pytest.mark.integration` to the TestMiniMaxIntegration class or to each test
(test_minimax_m27_client_creation, test_minimax_full_config_validation,
test_minimax_m27_llm_invoke) in tests/mobile_use/test_minimax_provider.py, and
add an "integration" entry to the markers list in pyproject.toml under the
project's pytest configuration (matching the existing pattern used for
ios_simulator/android).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 57f9d947-3d1b-4f6c-8a36-8efe93f2f446
📒 Files selected for processing (5)
- README.md
- llm-config.defaults.jsonc
- llm-config.override.template.jsonc
- minitap/mobile_use/services/llm.py
- tests/mobile_use/test_minimax_provider.py
```jsonc
// MiniMax preset – uses MiniMax M2.7 (1M context) and M2.7-highspeed.
// To use it, copy it to llm-config.override.jsonc.
"minimax": {
```
Clarify that users must copy the object contents, not the "minimax" wrapper.
Line 75 currently reads like the whole top-level preset can be pasted into llm-config.override.jsonc, but the override loader only merges agent keys at the root. Copying "minimax": { ... } verbatim will be ignored, so this comment and the matching README instructions need to say “copy the contents of this object” unless you add real preset selection.
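For illustration, under the clarified wording an llm-config.override.jsonc would carry the preset's inner keys directly at the root. This is a sketch only: the agent key names and value shapes are assumptions based on the config used elsewhere in this PR.

```jsonc
{
  // Contents of the "minimax" preset, copied WITHOUT the "minimax": { ... } wrapper,
  // since the override loader only merges agent keys at the root.
  "planner": { "provider": "minimax", "model": "MiniMax-M2.7" },
  "executor": { "provider": "minimax", "model": "MiniMax-M2.7-highspeed" }
}
```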
✏️ Suggested wording

```diff
- // To use it, copy it to llm-config.override.jsonc.
+ // To use it, copy the contents of this object into llm-config.override.jsonc.
```
```python
elif llm.provider == "minimax":
    return get_minimax_llm(llm.model, temperature)
```
🧩 Analysis chain

🏁 Script executed:

```bash
set -euo pipefail
rg -n -C2 'LLMProvider|validate_provider|case "minimax"|MINIMAX_API_KEY' minitap/mobile_use/config.py
```

Length of output: 1329

🏁 Script executed:

```bash
# Get the complete validate_provider method to see all case handlers
sed -n '129,145p' minitap/mobile_use/config.py
```

Length of output: 864

🏁 Script executed:

```bash
sed -n '244,245p' minitap/mobile_use/services/llm.py
```

Length of output: 155
Fix the provider name mismatch: code checks for "minimax" but config defines "minitap".
The dispatch at lines 244-245 checks llm.provider == "minimax", but LLMProvider in config.py (line 97) defines only "minitap". This provider name mismatch makes the branch unreachable—config validation will pass for "minitap", but the string comparison will never match. Either update the config to use "minimax" throughout (if that is the intended Minimax provider name) or change the code branch to check for "minitap" to match the existing config definition.
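The shape of the fix can be sketched as follows. Hedged: `LLMProvider` and `dispatch_llm` here are illustrative stand-ins, not the real definitions in config.py and llm.py; the point is only that the declared provider literal and the string comparison must use the same identifier for the branch to be reachable.

```python
from typing import Literal

# Illustrative provider type; the real LLMProvider lives in config.py.
LLMProvider = Literal["minimax"]

def dispatch_llm(provider: str, model: str, temperature: float) -> str:
    # This branch is reachable only when the validated provider name
    # matches the string compared here.
    if provider == "minimax":
        return f"get_minimax_llm({model!r}, {temperature})"
    raise ValueError(f"unsupported provider: {provider}")
```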
```python
class TestMiniMaxIntegration:
    """Integration tests for MiniMax provider (require MINIMAX_API_KEY)."""

    @pytest.fixture
    def minimax_api_key(self):
        """Get MiniMax API key from environment, skip if not available."""
        import os

        key = os.environ.get("MINIMAX_API_KEY")
        if not key:
            pytest.skip("MINIMAX_API_KEY not set, skipping integration test")
        return key

    @patch("minitap.mobile_use.services.llm.settings")
    def test_minimax_m27_client_creation(self, mock_settings, minimax_api_key):
        """Integration: Create a real MiniMax M2.7 client with valid config."""
        from pydantic import SecretStr

        from minitap.mobile_use.services.llm import get_minimax_llm

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        client = get_minimax_llm(model_name="MiniMax-M2.7", temperature=0.7)
        assert client is not None
        assert client.model_name == "MiniMax-M2.7"

    @patch("minitap.mobile_use.config.settings")
    def test_minimax_full_config_validation(self, mock_settings, minimax_api_key):
        """Integration: Full LLMConfig with MiniMax validates providers."""
        from pydantic import SecretStr

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        mock_settings.OPENAI_API_KEY = None
        mock_settings.GOOGLE_API_KEY = None
        mock_settings.XAI_API_KEY = None
        mock_settings.OPEN_ROUTER_API_KEY = None
        mock_settings.MINITAP_API_KEY = None

        minimax_llm = LLMWithFallback(
            provider="minimax",
            model="MiniMax-M2.7",
            fallback=LLM(provider="minimax", model="MiniMax-M2.7-highspeed"),
        )
        config = LLMConfig(
            planner=minimax_llm,
            orchestrator=minimax_llm,
            contextor=minimax_llm,
            cortex=minimax_llm,
            executor=minimax_llm,
            utils=LLMConfigUtils(
                outputter=minimax_llm,
                hopper=minimax_llm,
            ),
        )
        # Should not raise
        config.validate_providers()

    @patch("minitap.mobile_use.services.llm.settings")
    def test_minimax_m27_llm_invoke(self, mock_settings, minimax_api_key):
        """Integration: Invoke MiniMax M2.7 LLM with a simple prompt."""
        from pydantic import SecretStr

        from minitap.mobile_use.services.llm import get_minimax_llm

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        client = get_minimax_llm(model_name="MiniMax-M2.7", temperature=0.7)
        response = client.invoke("Say 'hello' and nothing else.")
        assert response is not None
        assert len(response.content) > 0
```
🧩 Analysis chain

🏁 Script executed:

```bash
set -euo pipefail
rg -n -C2 'pytest\.mark\.integration|markers\s*=|integration' pyproject.toml pytest.ini tox.ini tests 2>/dev/null || true
sed -n '326,395p' tests/mobile_use/test_minimax_provider.py
```

Length of output: 3842
Add @pytest.mark.integration and define the marker in pyproject.toml.
These tests make real billable API calls whenever MINIMAX_API_KEY is set, and currently have no pytest marker to prevent them from running by default. The fixture's pytest.skip() only applies when the environment variable is missing—it does not gate execution on secret-enabled machines.
The repo already uses markers (ios_simulator, android). Add the integration marker following the same pattern:
Minimal changes

In tests/mobile_use/test_minimax_provider.py:

```diff
+@pytest.mark.integration
 class TestMiniMaxIntegration:
```

In the pyproject.toml markers list:

```diff
 markers = [
     "ios_simulator: tests requiring iOS simulator (run on macOS with IDB)",
     "android: tests requiring Android device or emulator",
+    "integration: tests requiring external API calls (e.g., MiniMax, OpenAI)",
 ]
```
Summary
Details
The existing MiniMax integration uses the outdated MiniMax-M1 model. This PR upgrades to MiniMax M2.7, the latest generation with 1M context window support.
Changes
Available Models
Temperature Clamping
MiniMax API requires temperature in [0.0, 1.0]. The get_minimax_llm() function now clamps values outside this range to prevent API errors.
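That clamping behavior can be sketched as follows. This is a minimal stand-alone illustration with a hypothetical name, not the actual `get_minimax_llm` code.

```python
def clamp_temperature(temperature: float, low: float = 0.0, high: float = 1.0) -> float:
    """Clamp a sampling temperature into the range the MiniMax API accepts."""
    return max(low, min(high, temperature))
```

Out-of-range values are pinned to the nearest bound, so callers configured for other providers (e.g. temperature 1.5) no longer trigger API errors.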
Test plan
Summary by CodeRabbit
New Features
Documentation