feat: add Anthropic/Claude as LLM provider#185

Open
singhalkarun wants to merge 1 commit into minitap-ai:main from singhalkarun:feat/add-anthropic-provider

Conversation


@singhalkarun singhalkarun commented Feb 21, 2026

Add langchain-anthropic dependency and wire up Anthropic as a new provider option alongside OpenAI, Google, etc. Supports both standard API keys (ANTHROPIC_API_KEY) and OAuth tokens from claude setup-token (ANTHROPIC_AUTH_TOKEN) with automatic Bearer auth + beta header handling.
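Configuration then amounts to exporting one of the two variables. A minimal sketch (the token values below are placeholders, not real credentials):

```shell
# Option A: standard API key from the Claude Console (pay-per-token)
export ANTHROPIC_API_KEY="sk-ant-api-example"

# Option B: OAuth token from `claude setup-token`
# (takes precedence over ANTHROPIC_API_KEY when both are set)
export ANTHROPIC_AUTH_TOKEN="sk-ant-oat01-example"
```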

🚀 What's new?

Describe the purpose of your pull request. What problem does it solve? What feature does it add? Link to any relevant issues!

🤔 Type of Change

What kind of change is this? Mark the relevant option with an `x`.

  • Bug fix (non-breaking change that solves an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update to the docs)

✅ Checklist

Before you submit, please make sure you've done the following. If you have any questions, we're here to help!

  • I have read the Contributing Guide.
  • My code follows the project's style guidelines (ruff check . and ruff format . pass).
  • I have added necessary documentation (if applicable).

💬 Any questions or comments?

Have a question or need some help? Join us on Discord!

Summary by CodeRabbit

  • New Features

    • Added support for Anthropic Claude as an LLM provider option.
    • Enabled authentication via ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN, with token taking precedence when both are configured.
  • Documentation

    • Updated configuration guides to document Anthropic provider setup and authentication methods.
    • Added Anthropic to the list of supported LLM providers in configuration templates.

Add langchain-anthropic dependency and wire up Anthropic as a new
provider option alongside OpenAI, Google, etc. Supports both standard
API keys (ANTHROPIC_API_KEY) and OAuth tokens from `claude setup-token`
(ANTHROPIC_AUTH_TOKEN) with automatic Bearer auth + beta header handling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

coderabbitai bot commented Feb 21, 2026

📝 Walkthrough

This PR adds Anthropic Claude as a supported LLM provider. It introduces configuration for ANTHROPIC_API_KEY and ANTHROPIC_AUTH_TOKEN environment variables, implements OAuth-based client authentication alongside API-key authentication, integrates Anthropic into the existing provider selection system, and adds the langchain-anthropic dependency.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation & Configuration**<br>`.env.example`, `README.md`, `llm-config.override.template.jsonc` | Added environment variable guidance for `ANTHROPIC_API_KEY` and `ANTHROPIC_AUTH_TOKEN` with token precedence clarification across example files and documentation. |
| **LLM Provider Configuration**<br>`minitap/mobile_use/config.py` | Extended the `LLMProvider` type to include `"anthropic"`; added `ANTHROPIC_API_KEY` and `ANTHROPIC_AUTH_TOKEN` `SecretStr` fields to `Settings`; added validation requiring at least one of these keys for the anthropic provider. |
| **LLM Service Implementation**<br>`minitap/mobile_use/services/llm.py` | Introduced a `_ChatAnthropicOAuth` subclass disabling `api_key` for the OAuth Bearer token flow; added a `get_anthropic_llm()` function to route between OAuth and API-key authentication; integrated the anthropic provider branch into the `get_llm()` selection logic. |
| **Dependencies**<br>`pyproject.toml` | Added `langchain-anthropic>=0.3.0` to project dependencies. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 Hoppity-hop, Claude joins the crew,
With OAuth tokens and API keys too,
Anthropic magic flows through the wires,
Our LLM garden now reaches higher! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The pull request title 'feat: add Anthropic/Claude as LLM provider' directly and accurately summarizes the main change: adding Anthropic/Claude support as a new LLM provider option. |



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
minitap/mobile_use/services/llm.py (1)

139-139: Default model claude-sonnet-4-20250514 is a superseded snapshot.

LiteLLM's model list confirms claude-sonnet-4-20250514 is Claude Sonnet 4 (initial release). The Anthropic migration guide treats claude-sonnet-4-20250514 as the "before" model, with claude-sonnet-4-5-20250929 as the recommended successor. For mobile automation workloads, using a newer, more capable snapshot as the default would give users a better out-of-the-box experience.

♻️ Proposed update

```diff
 def get_anthropic_llm(
-    model_name: str = "claude-sonnet-4-20250514",
+    model_name: str = "claude-sonnet-4-5-20250929",
     temperature: float = 1,
 ) -> ChatAnthropic:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` at line 139, The default model_name
currently uses the superseded snapshot "claude-sonnet-4-20250514"; update the
default to the newer recommended snapshot "claude-sonnet-4-5-20250929" by
changing the model_name default value in the llm configuration (look for the
model_name variable in minitap/mobile_use/services/llm.py), and update any
adjacent comment or docstring mentioning the old snapshot to reflect the new
default so defaults and documentation stay consistent.
pyproject.toml (1)

26-26: Consider tightening the langchain-anthropic upper bound.

The current release is langchain_anthropic-1.3.3. The >=0.3.0 constraint spans all future major versions. Because services/llm.py subclasses ChatAnthropic and overrides the private _client_params property (see that file's review), a rename or restructure of that internal API across a major bump would silently break the OAuth path.

langchain-openai in the same file is pinned exactly. Applying at least an upper-bound cap here is safer:

🔧 Proposed tightening

```diff
-    "langchain-anthropic>=0.3.0",
+    "langchain-anthropic>=0.3.0,<2.0.0",
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 26, Tighten the langchain-anthropic version spec in
pyproject.toml to prevent accidental breaking changes to the internal API used
by services/llm.py: limit the range (for example add an upper bound such as
<2.0.0 or a more conservative <1.4.0) instead of unbounded ">=0.3.0"; this
protects the code that subclasses ChatAnthropic and overrides the private
_client_params property in services/llm.py from breaking on a major
langchain-anthropic release.

Comment on lines +123 to +159

```python
class _ChatAnthropicOAuth(ChatAnthropic):
    """ChatAnthropic subclass that uses OAuth Bearer token auth instead of API key.

    The underlying anthropic SDK reads ANTHROPIC_AUTH_TOKEN from env and sends
    an Authorization: Bearer header. We override _client_params to set api_key=None
    so the SDK doesn't also send an X-Api-Key header (which would fail validation).
    """

    @property
    def _client_params(self) -> dict[str, Any]:
        params = super()._client_params
        params["api_key"] = None
        return params


def get_anthropic_llm(
    model_name: str = "claude-sonnet-4-20250514",
    temperature: float = 1,
) -> ChatAnthropic:
    if settings.ANTHROPIC_AUTH_TOKEN:
        # OAuth token (e.g. from `claude setup-token`)
        client = _ChatAnthropicOAuth(
            model=model_name,
            anthropic_api_key=SecretStr("not-used"),
            default_headers={"anthropic-beta": "oauth-2025-04-20"},
            temperature=temperature,
            max_retries=2,
        )
    else:
        assert settings.ANTHROPIC_API_KEY is not None
        client = ChatAnthropic(
            model=model_name,
            api_key=settings.ANTHROPIC_API_KEY,
            temperature=temperature,
            max_retries=2,
        )
    return client
```
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Anthropic OAuth tokens third-party tools ban January 2026

💡 Result:

In early January 2026, Anthropic began technically blocking OAuth tokens from Claude Free/Pro/Max (consumer subscriptions) when they were used outside Anthropic’s own Claude products (notably outside the official Claude Code client), which broke many third‑party coding tools that had been reusing those tokens instead of API keys. One widely reported enforcement date is January 9, 2026, when third‑party tools started receiving errors indicating the credential was only authorized for Claude Code use. [2]

Anthropic later published an explicit policy statement in the Claude Code “Legal and compliance” documentation clarifying that:

  • OAuth tokens from Free/Pro/Max are intended exclusively for Claude Code and Claude.ai.
  • Using those OAuth tokens “in any other product, tool, or service — including the Agent SDK — is not permitted” and is a violation of the Consumer Terms of Service.
  • Anthropic also states it may enforce these restrictions without prior notice, and that third‑party developers should use API keys instead (Claude Console or supported cloud providers). [1]

Bottom line: The “third‑party tools ban” refers specifically to reusing consumer-subscription OAuth tokens in third‑party apps; Anthropic’s compliant path for third‑party tooling is API-key (pay‑per‑token) access, not Claude subscription OAuth. [1]

Sources:
[1] Anthropic Claude Code Docs – Legal and compliance (Authentication and credential use)
[2] OpenClaw.rocks report summarizing the January 9, 2026 enforcement behavior and resulting third‑party tool breakage


🌐 Web query:

Anthropic claude setup-token authentication third-party restrictions 2026

💡 Result:

  • What claude setup-token is: It generates a long‑lived Claude Code OAuth token (often set as CLAUDE_CODE_OAUTH_TOKEN, tokens commonly start with sk-ant-oat01-...) intended to authenticate Claude Code in non-interactive environments (e.g., containers/servers). [2]
  • 2026 restriction (the key point): Anthropic’s Claude Code “Legal and compliance” docs state that OAuth tokens from Claude Free/Pro/Max are intended exclusively for Claude Code and Claude.ai. Using those OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and is a violation of the Consumer Terms of Service. [1]
  • Third‑party tool / “wrapper” implication: This means you should not use setup-tokens/OAuth tokens to power third‑party apps, agents, IDE extensions, gateways, or proxy services, and you also may not offer “Log in with Claude.ai” or route requests through Free/Pro/Max credentials on behalf of your users. [1]
  • What to do instead: For third‑party tools/products/services, Anthropic directs developers to use API key authentication via Claude Console (or a supported cloud provider) and notes it may enforce these restrictions without prior notice. [1]
  • Security note: Anthropic also recommends caution when giving API keys to third‑party tools (treat it as giving that tool access to your Anthropic account). [3]

Sources: [1] Claude Code Docs – Legal and compliance (Authentication and credential use) • [2] GitHub: cabinlab/litellm-claude-code (setup-token / CLAUDE_CODE_OAUTH_TOKEN) • [3] Anthropic Help Center: API key best practices (third-party tools caution)


🌐 Web query:

Anthropic API authentication methods February 2026 official documentation

💡 Result:

From Anthropic’s official API documentation (as of the currently published docs in Feb 2026), authentication works like this:

  • Primary/standard auth (Anthropic API): API key via the x-api-key header. All requests to the Anthropic API must include x-api-key: <your_api_key> (SDKs set this for you). [1]
  • Admin API auth: also uses x-api-key, specifically your Admin API key, for all Admin API requests. [2]
  • Required companion header (not auth, but mandatory): anthropic-version — you must send an anthropic-version request header (SDKs handle this automatically). [3]
  • Beta features (optional): anthropic-beta header — used to access experimental features; still alongside x-api-key. [4]

Note: Anthropic’s docs also mention OAuth Bearer tokens in the context of the MCP connector (for authenticating to MCP servers you connect to), not as the primary way to authenticate to the Anthropic API itself. [5]

Sources (official Anthropic docs):
[1] (docs.anthropic.com)
[2] (docs.anthropic.com)
[3] (docs.anthropic.com)
[4] (docs.anthropic.com)
[5] (docs.anthropic.com)



🌐 Web query:

"This credential is only authorized for use with Claude Code" Anthropic error

💡 Result:

That error means the credential you’re using is scoped to Anthropic’s first‑party “Claude Code” client and Anthropic is rejecting it for “regular” API traffic (including many third‑party tools/IDEs). Since Jan 9, 2026, Anthropic also began blocking Claude subscription OAuth credentials in third‑party harnesses, which triggers this exact message. [1]

Fix (depends on where you see it)

A) You’re using a 3rd‑party tool (Cursor / OpenCode / Windsurf / etc.) with “Sign in to Claude / OAuth”

  • This is now expected to fail. Use one of these supported alternatives:
    • Anthropic API key (pay‑as‑you‑go) via ANTHROPIC_API_KEY [2][3]
    • Or route via Bedrock / Vertex credentials (if your tool supports it) [3][4]

B) You’re using the official Claude Code CLI / official VS Code extension and still get it

  1. Update Claude Code and log out / log back in (the error is also reported as an intermittent auth issue in official clients). [5]
  2. Check whether you accidentally set an API key env var:
    • If ANTHROPIC_API_KEY is set, Claude Code will prioritize it over subscription auth (which can cause confusing auth/billing behavior). Unset it if you intend to use subscription auth. [2]
  3. If it’s the VS Code extension only (CLI works), it may be an extension auth bug—try reinstalling the extension and re-authing; if it persists, track/report it in the claude-code repo. [5]

If you tell me which app you’re using (Claude Code CLI, VS Code extension, Cursor/OpenCode, GitHub Action, etc.) and how you authenticated (API key vs “Sign in with Claude”), I can give the exact steps for that setup.

Sources: [1] [2] [3] [4] [5]


🌐 Web query:

Anthropic consumer terms of service OAuth token usage restrictions

💡 Result:

Anthropic’s consumer plans (Claude Free/Pro/Max) have explicit restrictions on using OAuth tokens outside Anthropic’s own first-party experiences:

  • OAuth tokens from Claude Free/Pro/Max are only for Claude Code and Claude.ai. Using those OAuth tokens “in any other product, tool, or service — including the Agent SDK” is not permitted and is a violation of the Consumer Terms of Service. [1]
  • Third-party tools can’t use/offer “Sign in with Claude” (Claude.ai login) or proxy requests using Free/Pro/Max credentials on behalf of users; developers should instead use API key authentication via Claude Console (or supported cloud providers). [1]
  • Anthropic says it may enforce these restrictions without prior notice. [1]
  • Separately, the Consumer Terms prohibit accessing the services via “automated or non-human means” (bot/script/etc.) except when accessing via an Anthropic API key or where Anthropic explicitly permits it. [2]
  • The Consumer Terms also say you may not share your account login information or account credentials. [2]

Sources: Anthropic Claude Code “Legal and compliance” (Authentication and credential use) [1]; Anthropic “Consumer Terms of Service” (Use of our Services / automated access; account credentials) [2].


ANTHROPIC_AUTH_TOKEN (OAuth subscription tokens) are banned by Anthropic for third-party tools.

Anthropic began enforcing a technical block on January 9, 2026, rejecting OAuth tokens from consumer subscriptions (Free, Pro, Max) when used outside its first-party products (Claude Code and Claude.ai). This restriction is formalized in Anthropic's Claude Code "Legal and compliance" documentation, which explicitly states that using OAuth tokens "in any other product, tool, or service — including the Agent SDK" is not permitted and constitutes a violation of the Consumer Terms of Service.

Third-party tools receive the error: "This credential is only authorized for use with Claude Code and cannot be used for other API requests."

Impact: Every invocation through ANTHROPIC_AUTH_TOKEN will fail at runtime with a 4xx error from Anthropic. The README and .env.example document this as a supported option for claude setup-token users, but this guidance is now incorrect.

Recommended action: Remove the _ChatAnthropicOAuth class and the ANTHROPIC_AUTH_TOKEN branch from get_anthropic_llm. Anthropic's compliance path for third-party tools is ANTHROPIC_API_KEY (pay-per-token access via Claude Console), which the code already supports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` around lines 123 - 159, Remove the OAuth
flow and token-specific class: delete the _ChatAnthropicOAuth class and remove
the ANTHROPIC_AUTH_TOKEN branch in get_anthropic_llm so the function always
constructs a ChatAnthropic using settings.ANTHROPIC_API_KEY; ensure the code no
longer references ANTHROPIC_AUTH_TOKEN or the temporary
SecretStr("not-used")/default_headers usage and keep max_retries/temperature
handling consistent for the remaining ChatAnthropic instantiation.

Comment on lines +131 to +135

```python
    @property
    def _client_params(self) -> dict[str, Any]:
        params = super()._client_params
        params["api_key"] = None
        return params
```
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's find and examine the file in question
find . -type f -name "llm.py" -path "*/mobile_use/services/*"
```

Repository: minitap-ai/mobile-use
Length of output: 101

🏁 Script executed:

```shell
# Look for the class definition and the _client_params override
grep -n "class\|_client_params\|ChatAnthropic" minitap/mobile_use/services/llm.py | head -30
```

Repository: minitap-ai/mobile-use
Length of output: 532

🏁 Script executed:

```shell
# Check the dependency specifications for langchain-anthropic version
find . -type f \( -name "requirements.txt" -o -name "pyproject.toml" -o -name "setup.py" -o -name "setup.cfg" \) | head -10
```

Repository: minitap-ai/mobile-use
Length of output: 81

🏁 Script executed:

```shell
# Read the full context around the _client_params property
sed -n '120,145p' minitap/mobile_use/services/llm.py
```

Repository: minitap-ai/mobile-use
Length of output: 908

🏁 Script executed:

```shell
# Check pyproject.toml for langchain-anthropic version constraint
cat pyproject.toml | grep -A 20 "dependencies\|langchain"
```

Repository: minitap-ai/mobile-use
Length of output: 1304

🏁 Script executed:

```shell
# Get the full class definition to see the initialization
sed -n '123,160p' minitap/mobile_use/services/llm.py
```

Repository: minitap-ai/mobile-use
Length of output: 1350


🌐 Web query:

langchain-anthropic ChatAnthropic _client_params current version

💡 Result:

  • Current langchain-anthropic version (PyPI): 1.3.3 (released Feb 10, 2026). [1]
  • In that version line, ChatAnthropic._client_params is a (cached) property used to build the underlying Anthropic SDK clients; it returns a dict containing (at least) api_key, base_url, max_retries, default_headers, and conditionally timeout (when default_request_timeout is None or > 0). That dict is then used to construct both _client and _async_client. [2]

Sources
[1] PyPI release history for langchain-anthropic (shows 1.3.3 on Feb 10, 2026). (pypi.org)
[2] chat_models.py excerpt showing _client_params and how it feeds _client / _async_client. (fossies.org)



🌐 Web query:

langchain-anthropic ChatAnthropic client parameter pre-built anthropic.Anthropic

💡 Result:

In Python langchain-anthropic (langchain_anthropic.ChatAnthropic), there is no public client= parameter to inject a pre-built anthropic.Anthropic / anthropic.Client instance.

ChatAnthropic builds its own SDK clients internally from config like api_key, base_url, default_headers, timeout, and max_retries, via cached properties _client / _async_client that instantiate anthropic.Client(...) / anthropic.AsyncClient(...). [1][2]

So, instead of passing a pre-built client, configure ChatAnthropic with the corresponding constructor params (e.g., api_key/env var, base_url, default_headers, timeout, etc.). [1][2]

Sources:
[1] ChatAnthropic fields/params and docsstring in langchain_anthropic/chat_models.py (raw source) https://raw.githubusercontent.com/langchain-ai/langchain/273d282a/libs/partners/anthropic/langchain_anthropic/chat_models.py
[2] Internal client construction (_client_params, _client, _async_client) in the same file https://raw.githubusercontent.com/langchain-ai/langchain/273d282a/libs/partners/anthropic/langchain_anthropic/chat_models.py


_client_params is a private property — fragile override that can break silently.

The _client_params property on ChatAnthropic (in langchain-anthropic v1.3.3) is an internal implementation detail with no stability guarantee. The override here sets api_key=None after calling super()._client_params, which is the only documented way to prevent the Anthropic SDK from sending both an Authorization: Bearer header (from ANTHROPIC_AUTH_TOKEN) and an X-Api-Key header simultaneously.

However, the dependency constraint langchain-anthropic>=0.3.0 permits any future major version. A rename or restructure of _client_params in a later release would produce a subtly broken client, causing authentication failures at runtime rather than a clear startup error.

The current approach is well-documented in the code comment and works as intended. Consider monitoring langchain-anthropic releases for API changes or pinning to a specific minor version while OAuth support is still under development (the beta header anthropic-beta: oauth-2025-04-20 suggests this is still unstable).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` around lines 131 - 135, The override of
ChatAnthropic._client_params is fragile because _client_params is an internal
API and may be removed/renamed in future langchain-anthropic releases; instead
of silently relying on it, add a runtime guard in the code that accesses
ChatAnthropic._client_params (the overridden property) to detect absence or
unexpected shape and raise a clear error, and/or pin the langchain-anthropic
dependency to a safe minor version; concretely, when implementing the fix update
the logic around the _client_params property on the ChatAnthropic-derived class
to (1) check for the existence of super()._client_params and that it is a dict
before mutating, raising a descriptive exception if not present, and (2) add a
note to dependency management to lock langchain-anthropic to a tested minor
version (or add a comment to monitor upstream breaking changes).

Comment thread README.md
Comment on lines +86 to +90
> [!NOTE]
> If you want to use Anthropic Claude, set `ANTHROPIC_API_KEY` in your `.env`.
> Alternatively, if you have an OAuth token from `claude setup-token`, set `ANTHROPIC_AUTH_TOKEN` instead.
> If both are set, `ANTHROPIC_AUTH_TOKEN` takes priority.

Contributor


⚠️ Potential issue | 🟡 Minor

Fix MD028: blank line inside blockquote.

The blank line at Line 90 between the two consecutive > [!NOTE] blocks triggers MD028 (no-blanks-blockquote). Replace it with a > continuation line to maintain spacing while satisfying the linter.

📝 Proposed fix

```diff
     > If both are set, `ANTHROPIC_AUTH_TOKEN` takes priority.
-
+    >
     > [!NOTE]
     > If you want to use Google Vertex AI...
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 > [!NOTE]
 > If you want to use Anthropic Claude, set `ANTHROPIC_API_KEY` in your `.env`.
 > Alternatively, if you have an OAuth token from `claude setup-token`, set `ANTHROPIC_AUTH_TOKEN` instead.
 > If both are set, `ANTHROPIC_AUTH_TOKEN` takes priority.
+>
```
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 90-90: Blank line inside blockquote (MD028, no-blanks-blockquote)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 86 - 90, There is a blank line between two
consecutive blockquote blocks starting with "> [!NOTE]" which triggers MD028;
remove that empty line and replace it with a blockquote continuation line (a
single ">" line) so the spacing is preserved but the blocks are not separated by
a raw blank line—update the blockquote containing the Anthropic instructions
(the lines referencing ANTHROPIC_API_KEY and ANTHROPIC_AUTH_TOKEN and the
precedence note) to use ">" continuation instead of an empty line.
