
Upgrade MiniMax provider to M2.7 with temperature clamping#195

Open
octo-patch wants to merge 1 commit into minitap-ai:main from octo-patch:feature/upgrade-minimax-m27

Conversation

@octo-patch octo-patch commented Mar 23, 2026

Summary

  • Upgrade default model from MiniMax-M1 to MiniMax-M2.7 (1M context window, latest generation)
  • Add temperature clamping to [0.0, 1.0] range for MiniMax API compatibility
  • Add MiniMax preset config in llm-config.defaults.jsonc with MiniMax-M2.7 and MiniMax-M2.7-highspeed across all agent nodes
  • Add README documentation for MiniMax provider setup
  • Add comprehensive test suite (23 unit + 3 integration tests)

Details

The existing MiniMax integration uses the outdated MiniMax-M1 model. This PR upgrades to MiniMax M2.7, the latest generation with 1M context window support.

Changes

| File | Change |
| --- | --- |
| services/llm.py | Default model MiniMax-M1 → MiniMax-M2.7; temperature clamping |
| llm-config.defaults.jsonc | New minimax preset section with M2.7/M2.7-highspeed |
| llm-config.override.template.jsonc | MiniMax model examples in comments |
| README.md | MiniMax provider setup instructions |
| tests/test_minimax_provider.py | 23 unit + 3 integration tests |

Available Models

| Model | Context | Best for |
| --- | --- | --- |
| MiniMax-M2.7 | 1M tokens | Reasoning tasks (planner, cortex, orchestrator) |
| MiniMax-M2.7-highspeed | 1M tokens | Fast inference (executor, contextor, outputter) |

Temperature Clamping

MiniMax API requires temperature in [0.0, 1.0]. The get_minimax_llm() function now clamps values outside this range to prevent API errors.
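As a rough sketch of the behavior described above (the helper name and defaults here are illustrative; the real `get_minimax_llm()` in `services/llm.py` wraps this logic around a ChatOpenAI client pointed at the MiniMax endpoint):

```python
def clamp_temperature(temperature: float, low: float = 0.0, high: float = 1.0) -> float:
    """Clamp a sampling temperature into the [low, high] range the MiniMax API accepts."""
    # Values above the upper bound collapse to high; values below the lower
    # bound collapse to low; in-range values pass through unchanged.
    return max(low, min(high, temperature))

print(clamp_temperature(1.7))   # -> 1.0
print(clamp_temperature(-0.2))  # -> 0.0
print(clamp_temperature(0.7))   # -> 0.7
```

Out-of-range values are silently coerced rather than rejected, which avoids hard API errors at the cost of hiding a misconfigured temperature.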

Test plan

  • 23 unit tests pass (config validation, service functions, temperature clamping)
  • 3 integration tests pass with real MiniMax API key
  • Verify MiniMax preset config works end-to-end with a device

Summary by CodeRabbit

  • New Features

    • Added MiniMax as a supported LLM provider with configurable model options.
  • Documentation

    • Updated setup instructions with MiniMax configuration and setup guidance.
    • Expanded supported provider documentation to explicitly list additional LLM options.

… config

- Upgrade default model from MiniMax-M1 to MiniMax-M2.7 (1M context window)
- Add temperature clamping to [0.0, 1.0] range for MiniMax API compatibility
- Add MiniMax preset configuration in llm-config.defaults.jsonc with M2.7
  and M2.7-highspeed models across all agent nodes
- Add MiniMax model examples to override template comments
- Add MiniMax provider setup documentation in README
- Add comprehensive test suite (23 unit + 3 integration tests) covering
  M2.7/M2.7-highspeed models, temperature clamping, and provider dispatch
@coderabbitai

coderabbitai bot commented Mar 23, 2026

📝 Walkthrough

This pull request adds support for MiniMax as a new LLM provider. Changes include documentation updates describing setup instructions and available models, a new minimax preset in the default configuration file, service-layer implementation to construct and route MiniMax clients, and comprehensive test coverage for configuration, validation, dispatch, and optional integration scenarios.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| Documentation | README.md, llm-config.override.template.jsonc | Updated documentation to list MiniMax as a supported LLM provider, added setup instructions with the MINIMAX_API_KEY environment variable and model options (MiniMax-M2.7, MiniMax-M2.7-highspeed), and clarified configuration patterns. |
| Configuration | llm-config.defaults.jsonc | Added a new minimax preset with provider/model/fallback definitions for five agent roles (planner, orchestrator, cortex, executor, contextor) and two utility roles (hopper, outputter), using model-swapped primary/fallback pairs. |
| Service Implementation | minitap/mobile_use/services/llm.py | Implemented get_minimax_llm() to construct a ChatOpenAI client targeting the MiniMax API endpoint, added provider routing logic in get_llm() to dispatch MiniMax requests, and included temperature clamping to the [0.0, 1.0] range. |
| Test Coverage | tests/mobile_use/test_minimax_provider.py | Comprehensive test module covering configuration validation, LLM client construction, provider dispatch, fallback behavior, temperature handling, and optional integration tests for real API calls. |
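For illustration, the minimax preset might look roughly like this (the key names and fallback shape are assumptions inferred from the PR description, not the file's exact contents):

```jsonc
// Hypothetical sketch of the "minimax" preset in llm-config.defaults.jsonc.
"minimax": {
  "planner": {
    "provider": "minimax",
    "model": "MiniMax-M2.7",
    "fallback": { "provider": "minimax", "model": "MiniMax-M2.7-highspeed" }
  },
  "executor": {
    "provider": "minimax",
    "model": "MiniMax-M2.7-highspeed",
    "fallback": { "provider": "minimax", "model": "MiniMax-M2.7" }
  }
  // ...orchestrator, cortex, contextor and the utils roles (hopper, outputter)
  // follow the same model-swapped primary/fallback pattern.
}
```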

Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant LLMService as LLM Service<br/>(get_llm)
    participant LLMConfig as LLM Config<br/>(resolve provider)
    participant MiniMaxClient as MiniMax Client<br/>(ChatOpenAI)
    participant MiniMaxAPI as MiniMax API<br/>(api.minimax.io)

    Client->>LLMService: get_llm(ctx, role="planner")
    LLMService->>LLMConfig: Lookup role config
    LLMConfig-->>LLMService: {provider: "minimax", model: "MiniMax-M2.7"}
    LLMService->>LLMService: Check if provider == "minimax"
    LLMService->>MiniMaxClient: get_minimax_llm("MiniMax-M2.7", temperature)
    MiniMaxClient->>MiniMaxClient: Clamp temperature to [0.0, 1.0]
    MiniMaxClient->>MiniMaxClient: Create ChatOpenAI with<br/>api_key=MINIMAX_API_KEY<br/>base_url=https://api.minimax.io/v1
    MiniMaxClient-->>LLMService: ChatOpenAI instance
    LLMService-->>Client: Configured LLM client
    Client->>MiniMaxClient: invoke(prompt)
    MiniMaxClient->>MiniMaxAPI: HTTP POST /chat/completions
    MiniMaxAPI-->>MiniMaxClient: Response
    MiniMaxClient-->>Client: LLM response
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • cguiguet

Poem

🐰 A new provider hops into sight,
MiniMax models running just right,
With configs so neat and tests burning bright,
Temperature clamped in a cozy bite,
The warren's LLM routers take flight! ✨

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main change: upgrading the MiniMax provider to M2.7 and adding temperature clamping, which are the core technical improvements across the changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 87.10%, which is sufficient. The required threshold is 80.00%. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@llm-config.defaults.jsonc`:
- Around line 74-76: The comment for the "minimax" preset is misleading: update
the text around the "minimax" object so it explicitly instructs users to copy
the contents of the "minimax" object (the inner keys) into
llm-config.override.jsonc rather than copying the top-level "minimax": { ... }
wrapper; reference the "minimax" key in the comment and mirror the same
clarified wording in the README if present so users know the override loader
only merges agent keys at the root.

In `@minitap/mobile_use/services/llm.py`:
- Around line 244-245: The provider string check in the LLM dispatch is
mismatched: llm.provider is compared to "minimax" but the config's LLMProvider
only defines "minitap"; update the dispatch in
minitap/mobile_use/services/llm.py (the branch that calls get_minimax_llm) to
check for "minitap" instead of "minimax" so the branch is reachable and uses
get_minimax_llm for the configured provider; alternatively, if the intended
provider name is "minimax", update the LLMProvider definition in config.py to
include "minimax" consistently—pick one name and make both the LLMProvider
enum/constant and the llm.provider comparison use the same identifier.

In `@tests/mobile_use/test_minimax_provider.py`:
- Around line 326-393: Mark the MiniMax integration tests with the pytest
integration marker and declare that marker in the project config: add
`@pytest.mark.integration` to the TestMiniMaxIntegration class or to each test
(test_minimax_m27_client_creation, test_minimax_full_config_validation,
test_minimax_m27_llm_invoke) in tests/mobile_use/test_minimax_provider.py, and
add an "integration" entry to the markers list in pyproject.toml under the
project's pytest configuration (matching the existing pattern used for
ios_simulator/android).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 57f9d947-3d1b-4f6c-8a36-8efe93f2f446

📥 Commits

Reviewing files that changed from the base of the PR and between e1d31ab and 8baeb08.

📒 Files selected for processing (5)
  • README.md
  • llm-config.defaults.jsonc
  • llm-config.override.template.jsonc
  • minitap/mobile_use/services/llm.py
  • tests/mobile_use/test_minimax_provider.py

Comment thread llm-config.defaults.jsonc
Comment on lines +74 to +76
```jsonc
// MiniMax preset – uses MiniMax M2.7 (1M context) and M2.7-highspeed.
// To use it, copy it to llm-config.override.jsonc.
"minimax": {
```

⚠️ Potential issue | 🟠 Major

Clarify that users must copy the object contents, not the "minimax" wrapper.

Line 75 currently reads like the whole top-level preset can be pasted into llm-config.override.jsonc, but the override loader only merges agent keys at the root. Copying "minimax": { ... } verbatim will be ignored, so this comment and the matching README instructions need to say “copy the contents of this object” unless you add real preset selection.

✏️ Suggested wording

```diff
-  // To use it, copy it to llm-config.override.jsonc.
+  // To use it, copy the contents of this object into llm-config.override.jsonc.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@llm-config.defaults.jsonc` around lines 74 - 76, The comment for the
"minimax" preset is misleading: update the text around the "minimax" object so
it explicitly instructs users to copy the contents of the "minimax" object (the
inner keys) into llm-config.override.jsonc rather than copying the top-level
"minimax": { ... } wrapper; reference the "minimax" key in the comment and
mirror the same clarified wording in the README if present so users know the
override loader only merges agent keys at the root.

Comment on lines +244 to +245
```python
elif llm.provider == "minimax":
    return get_minimax_llm(llm.model, temperature)
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

rg -n -C2 'LLMProvider|validate_provider|case "minimax"|MINIMAX_API_KEY' minitap/mobile_use/config.py
```

Repository: minitap-ai/mobile-use

Length of output: 1329


🏁 Script executed:

```shell
#!/bin/bash
# Get the complete validate_provider method to see all case handlers
sed -n '129,145p' minitap/mobile_use/config.py
```

Repository: minitap-ai/mobile-use

Length of output: 864


🏁 Script executed:

```shell
#!/bin/bash
sed -n '244,245p' minitap/mobile_use/services/llm.py
```

Repository: minitap-ai/mobile-use

Length of output: 155


Fix the provider name mismatch: code checks for "minimax" but config defines "minitap".

The dispatch at lines 244-245 checks llm.provider == "minimax", but LLMProvider in config.py (line 97) defines only "minitap". This provider name mismatch makes the branch unreachable—config validation will pass for "minitap", but the string comparison will never match. Either update the config to use "minimax" throughout (if that is the intended Minimax provider name) or change the code branch to check for "minitap" to match the existing config definition.
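A minimal sketch of the suggested fix, assuming a Literal-based provider type (the names and member list here are illustrative; the actual `LLMProvider` definition in `config.py` may differ):

```python
from typing import Literal

# Hypothetical provider type: the fix is simply that the dispatch string and
# the type definition agree on one identifier ("minimax" here).
LLMProvider = Literal["openai", "google", "minimax"]

def dispatch(provider: LLMProvider, model: str) -> str:
    # In the real code this branch would call get_minimax_llm(model, temperature);
    # it is only reachable because "minimax" exists in LLMProvider above.
    if provider == "minimax":
        return f"minimax:{model}"
    return f"other:{model}"

print(dispatch("minimax", "MiniMax-M2.7"))  # -> minimax:MiniMax-M2.7
```

With mismatched names ("minitap" in the type, "minimax" in the comparison), validation passes but the branch can never fire, which is exactly the silent failure the review flags.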

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` around lines 244 - 245, The provider
string check in the LLM dispatch is mismatched: llm.provider is compared to
"minimax" but the config's LLMProvider only defines "minitap"; update the
dispatch in minitap/mobile_use/services/llm.py (the branch that calls
get_minimax_llm) to check for "minitap" instead of "minimax" so the branch is
reachable and uses get_minimax_llm for the configured provider; alternatively,
if the intended provider name is "minimax", update the LLMProvider definition in
config.py to include "minimax" consistently—pick one name and make both the
LLMProvider enum/constant and the llm.provider comparison use the same
identifier.

Comment on lines +326 to +393
```python
class TestMiniMaxIntegration:
    """Integration tests for MiniMax provider (require MINIMAX_API_KEY)."""

    @pytest.fixture
    def minimax_api_key(self):
        """Get MiniMax API key from environment, skip if not available."""
        import os

        key = os.environ.get("MINIMAX_API_KEY")
        if not key:
            pytest.skip("MINIMAX_API_KEY not set, skipping integration test")
        return key

    @patch("minitap.mobile_use.services.llm.settings")
    def test_minimax_m27_client_creation(self, mock_settings, minimax_api_key):
        """Integration: Create a real MiniMax M2.7 client with valid config."""
        from pydantic import SecretStr

        from minitap.mobile_use.services.llm import get_minimax_llm

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        client = get_minimax_llm(model_name="MiniMax-M2.7", temperature=0.7)
        assert client is not None
        assert client.model_name == "MiniMax-M2.7"

    @patch("minitap.mobile_use.config.settings")
    def test_minimax_full_config_validation(self, mock_settings, minimax_api_key):
        """Integration: Full LLMConfig with MiniMax validates providers."""
        from pydantic import SecretStr

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        mock_settings.OPENAI_API_KEY = None
        mock_settings.GOOGLE_API_KEY = None
        mock_settings.XAI_API_KEY = None
        mock_settings.OPEN_ROUTER_API_KEY = None
        mock_settings.MINITAP_API_KEY = None

        minimax_llm = LLMWithFallback(
            provider="minimax",
            model="MiniMax-M2.7",
            fallback=LLM(provider="minimax", model="MiniMax-M2.7-highspeed"),
        )
        config = LLMConfig(
            planner=minimax_llm,
            orchestrator=minimax_llm,
            contextor=minimax_llm,
            cortex=minimax_llm,
            executor=minimax_llm,
            utils=LLMConfigUtils(
                outputter=minimax_llm,
                hopper=minimax_llm,
            ),
        )
        # Should not raise
        config.validate_providers()

    @patch("minitap.mobile_use.services.llm.settings")
    def test_minimax_m27_llm_invoke(self, mock_settings, minimax_api_key):
        """Integration: Invoke MiniMax M2.7 LLM with a simple prompt."""
        from pydantic import SecretStr

        from minitap.mobile_use.services.llm import get_minimax_llm

        mock_settings.MINIMAX_API_KEY = SecretStr(minimax_api_key)
        client = get_minimax_llm(model_name="MiniMax-M2.7", temperature=0.7)
        response = client.invoke("Say 'hello' and nothing else.")
        assert response is not None
        assert len(response.content) > 0
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

rg -n -C2 'pytest\.mark\.integration|markers\s*=|integration' pyproject.toml pytest.ini tox.ini tests 2>/dev/null || true
sed -n '326,395p' tests/mobile_use/test_minimax_provider.py
```

Repository: minitap-ai/mobile-use

Length of output: 3842


Add @pytest.mark.integration and define the marker in pyproject.toml.

These tests make real billable API calls whenever MINIMAX_API_KEY is set, and currently have no pytest marker to prevent them from running by default. The fixture's pytest.skip() only applies when the environment variable is missing—it does not gate execution on secret-enabled machines.

The repo already uses markers (ios_simulator, android). Add the integration marker following the same pattern:

Minimal changes

In tests/mobile_use/test_minimax_provider.py:

```diff
+@pytest.mark.integration
 class TestMiniMaxIntegration:
```

In pyproject.toml markers list:

```diff
 markers = [
     "ios_simulator: tests requiring iOS simulator (run on macOS with IDB)",
     "android: tests requiring Android device or emulator",
+    "integration: tests requiring external API calls (e.g., MiniMax, OpenAI)",
 ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/mobile_use/test_minimax_provider.py` around lines 326 - 393, Mark the
MiniMax integration tests with the pytest integration marker and declare that
marker in the project config: add `@pytest.mark.integration` to the
TestMiniMaxIntegration class or to each test (test_minimax_m27_client_creation,
test_minimax_full_config_validation, test_minimax_m27_llm_invoke) in
tests/mobile_use/test_minimax_provider.py, and add an "integration" entry to the
markers list in pyproject.toml under the project's pytest configuration
(matching the existing pattern used for ios_simulator/android).
