繁體中文 | English
A resilient, standalone Python bridge connecting your Meshtastic device to powerful Large Language Models (LLMs). This project is designed for "apocalypse-grade" off-grid communication, allowing you to interact with AI even when the internet is down.
It intelligently switches between online (Google Gemini) and offline (Local LLMs like LM Studio or Ollama) modes, providing robust AI assistance in any scenario.
- Dual-Mode LLM Integration: Automatically detects internet connectivity.
- Online Mode: Connects to Google Gemini API for powerful, internet-enabled AI responses.
- Offline Mode: Seamlessly switches to local LLMs (LM Studio or Ollama) for off-grid AI capabilities.
- GPS-Aware Weather Queries: Send a simple command like "weather here" from your device, and the bridge will automatically use your node's GPS location to fetch the local weather forecast, leveraging the `query_surf_spots` tool. No manual coordinates needed!
- Government Alert Broadcast: In online mode, the bridge actively monitors Taiwan's NCDR (National Science and Technology Center for Disaster Reduction) CAP feed. If a severe alert (earthquake, typhoon, air raid, etc.) is issued, it is automatically broadcast to all devices on the mesh network (`^all`).
- Local Knowledge Base (RAG): In offline mode, the LLM can query a local `knowledge_base/` of documents (PDFs, Markdown, text files) to provide informed answers. This is crucial for offline survival and reference.
- Smart Tool Integration: Seamlessly integrates the `find_parking` (parking query) and `query_surf_spots` (surf/weather query) tools, declared in the OpenAI function-calling format for robust LLM dispatch. `find_parking` works when online and returns an "offline" message if the internet is down. `query_surf_spots` provides general surf-spot info and calculates sunrise/sunset offline; real-time tide, wind, and typhoon data are only available when online with a configured CWA API key.
- Robust LLM Response Handling: Compatible with both object-style and dict-style LLM responses via a unified `_get_content()` helper, ensuring stability across different OpenAI-compatible backends.
- Meshtastic Communication: Uses the Meshtastic CLI for sending and receiving messages over LoRa mesh networks.
- Message Chunking & Pagination: Automatically splits long LLM responses into multiple Meshtastic packets with pagination markers (e.g., `(1/3)`), since LoRa payloads are small.
- Resource Optimization: Designed for low-bandwidth, low-power Meshtastic networks.
- Easy Setup: Runs as a standalone Python script with `.env` configuration.
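The chunking-and-pagination behavior can be sketched as below. This is a minimal illustration, not the bridge's actual code; the function name `paginate` and the 200-character budget are assumptions for the example.

```python
# Illustrative sketch of splitting a long LLM reply into LoRa-sized
# pages with "(i/n) " markers. Names and limits are hypothetical.
MAX_PAYLOAD = 200  # conservative per-packet text budget

def paginate(text: str, limit: int = MAX_PAYLOAD) -> list[str]:
    """Split `text` into pages that fit within `limit` chars, marker included."""
    marker_budget = len("(99/99) ")        # reserve room for the page marker
    body = limit - marker_budget
    chunks = [text[i:i + body] for i in range(0, len(text), body)] or [""]
    if len(chunks) == 1:
        return chunks                      # short reply: no marker needed
    total = len(chunks)
    return [f"({i}/{total}) {c}" for i, c in enumerate(chunks, start=1)]
```

A 500-character reply would go out as three packets marked `(1/3)` through `(3/3)`.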
Most LLM solutions rely entirely on internet connectivity. Meshtastic-LLM Bridge offers unparalleled resilience:
- True Off-Grid AI: Ensures you always have access to AI assistance, even in emergencies or remote locations without internet.
- Hybrid Intelligence: Leverages the best of both worlds: powerful cloud LLMs when online, and robust local LLMs when offline.
- Personal Knowledge Hub: Turn your local computer into a private, searchable knowledge base for your AI, accessible via LoRa.
- Open Source & Customizable: A foundation for building your own specialized off-grid AI applications.
- OS: Linux, macOS, or Windows (via WSL2).
- Python: v3.9 or higher.
- Meshtastic Device: A working Meshtastic device connected via USB (or configurable for TCP/IP).
- Local LLM: (Essential for offline chat, reasoning, and local RAG embeddings)
- LM Studio (lmstudio.ai): Recommended for ease of use (GUI). Download a chat model and an embedding model (e.g., `nomic-ai/nomic-embed-text-v1.5`), then start the local server.
- Ollama (ollama.ai): Command-line friendly. Install a chat model (e.g., `ollama run gemma:2b`) and an embedding model (e.g., `ollama pull nomic-embed-text`). Ensure the Ollama server is running.
- Google AI Studio: Obtain your Gemini API Key for online mode (free tier available).
- TDX (Transport Data eXchange): For parking queries in Taiwan.
- CWA Open Data: For surf spot weather in Taiwan.
```
git clone https://github.com/Harperbot/meshtastic-llm-bridge.git
cd meshtastic-llm-bridge

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install Python dependencies
pip install "meshtastic[cli]" requests python-dotenv openai ollama langchain-community pypdf unstructured chromadb
```

- Connect your Meshtastic device via USB.
- Find its path with `meshtastic --info` (e.g., `/dev/cu.usbserial-0001` on macOS, `/dev/ttyUSB0` on Linux).
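If you are unsure which path to use, a small helper like the following can list likely candidates. This is a convenience sketch, not part of the bridge; `guess_device_paths` is a hypothetical name, and `meshtastic --info` remains the authoritative check.

```python
# Hypothetical helper: list plausible USB-serial device paths per OS.
import glob
import sys

def guess_device_paths() -> list[str]:
    """Return likely Meshtastic serial device paths on this machine."""
    if sys.platform == "darwin":
        patterns = ["/dev/cu.usbserial-*", "/dev/cu.usbmodem*"]
    elif sys.platform.startswith("linux"):
        patterns = ["/dev/ttyUSB*", "/dev/ttyACM*"]
    else:  # Windows uses COMx names; glob patterns do not apply
        return []
    return sorted(p for pat in patterns for p in glob.glob(pat))
```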
- Download and install LM Studio.
- In LM Studio, download your preferred LLM (e.g., `Nexusflow/Starling-LM-7B-beta-GGUF`) and an embedding model (e.g., `nomic-ai/nomic-embed-text-v1.5`).
- Go to the "Local Server" tab and click "Start Server". Ensure it is running on `http://localhost:1234/v1`.
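Before starting the bridge, you can confirm the local server is actually reachable. This is an optional sketch, not bridge code; `server_is_up` is a hypothetical name, and the `/models` route is the standard OpenAI-compatible listing endpoint that LM Studio exposes.

```python
# Minimal reachability check for an OpenAI-compatible local server.
import urllib.error
import urllib.request

def server_is_up(base_url: str, timeout: float = 2.0) -> bool:
    """GET <base_url>/models and report whether anything answered."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...

# Example: server_is_up("http://localhost:1234/v1")
```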
- Download and install Ollama.
- Download your preferred LLM (e.g., `ollama run gemma:2b`) and an embedding model (e.g., `ollama pull nomic-embed-text`).
- Ensure the Ollama server is running (it usually starts automatically after `ollama run`).
Copy the example file:

```
cp .env.example .env
```

Edit `.env` with your details:
```
# --- General Configuration ---
MESHTASTIC_DEVICE_PATH=/dev/cu.usbserial-XXXX # <--- IMPORTANT: Update this!
MESHTASTIC_LONGNAME=YourMeshAINode
LOCALIZATION=TW # Set to 'TW' for Taiwan-specific tools, or remove for global LLM

# --- Google Gemini API (Online Mode) ---
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL_ONLINE=gemini-1.5-pro-latest

# --- Local LLM (Offline Mode) ---
# LM Studio Configuration (Priority 1 if enabled)
LOCAL_LLM_API_BASE=http://localhost:1234/v1
LOCAL_LLM_MODEL=Nexusflow/Starling-LM-7B-beta-GGUF # Your downloaded chat model in LM Studio

# Ollama Configuration (Priority 2 if LM Studio fails or is not configured)
LOCAL_LLM_OLLAMA_API_BASE=http://localhost:11434/api
LOCAL_LLM_OLLAMA_MODEL=gemma:2b # Your installed Ollama chat model

# Local Embedding Model (for RAG - Retrieval Augmented Generation)
# Used by LangChain for generating document embeddings when offline.
# Prioritizes LM Studio if LOCAL_EMBEDDING_API_BASE is set, then Ollama.
# If using LM Studio, the API base is usually the same as LOCAL_LLM_API_BASE.
# If using Ollama, ensure an embedding model such as 'nomic-embed-text' is installed ('ollama pull nomic-embed-text').
LOCAL_EMBEDDING_API_BASE=http://localhost:1234/v1 # e.g., LM Studio Embedding API
LOCAL_EMBEDDING_MODEL=nomic-ai/nomic-embed-text-v1.5 # e.g., your downloaded embedding model
```

Place your documents (e.g., survival guides, manuals, Wikipedia exports) into the `./knowledge_base/` directory.
Supported formats: .txt, .md, .pdf.
Each time you add/remove documents or update the embedding model, restart `bridge.py` to rebuild the vector database.
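Before embedding, RAG pipelines (the bridge uses LangChain for this) split each document into overlapping chunks so that retrieval can return focused snippets. The sketch below illustrates the idea only; `split_document` and its parameters are hypothetical, not the bridge's actual configuration.

```python
# Illustrative pre-embedding step: split a document into fixed-size,
# overlapping character chunks. Overlap preserves context across
# chunk boundaries so a sentence is not cut off from its neighbors.
def split_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of `chunk_size` chars sharing `overlap` chars."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text), 1), step)]
```

Each chunk is then embedded and stored in the vector database (ChromaDB in this project), which is why the database must be rebuilt when documents or the embedding model change.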
- Ensure your Meshtastic device is connected via USB and powered on.
- Ensure your chosen Local LLM (LM Studio or Ollama) server is running.
- Activate your Python virtual environment: `source venv/bin/activate`
- Run the bridge: `python3 bridge.py`
Now, send messages to your AI node (e.g., YourMeshAINode) from your Meshtastic mobile app. The bridge will intelligently route your query to Gemini (online) or your local LLM (offline), using your local knowledge base when offline.
This bridge employs a hybrid intelligence architecture:
- Meshtastic CLI Listener: Continuously monitors incoming LoRa messages via `meshtastic --listen`.
- Internet Connectivity Check: Periodically pings a reliable endpoint to determine online/offline status.
- Dynamic LLM Dispatch:
  - Online: Routes queries to the Google Gemini API (via the `openai` client with an `x-goog-api-key` header).
  - Offline: Attempts to connect to LM Studio's OpenAI-compatible API, falling back to Ollama if it is not available.
- Local RAG Integration: In offline mode, queries `./knowledge_base/` for relevant document snippets using LangChain and local embeddings, injecting this context into the LLM prompt.
- Meshtastic Response Sender: Formats LLM responses for Meshtastic's limited payload size, chunking and paginating long messages, then sends them via `meshtastic --sendtext`.
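The dispatch priority described above can be summarized as a simple selection function. This is a hedged sketch of the ordering only; `pick_backend` and its flags are illustrative names, not the bridge's real API.

```python
# Illustrative backend selection: Gemini when online, otherwise
# LM Studio first, then Ollama as the final fallback.
from typing import Optional

def pick_backend(is_online: bool,
                 lm_studio_up: bool,
                 ollama_up: bool) -> Optional[str]:
    """Return the name of the first available backend, or None if all fail."""
    if is_online:
        return "gemini"
    if lm_studio_up:
        return "lm_studio"
    if ollama_up:
        return "ollama"
    return None
```

In the fully offline case with neither local server running, the bridge has no backend to answer with, so keeping LM Studio or Ollama running is essential for off-grid use.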
Due to Meshtastic's low bandwidth, optimize your queries:
- Be Concise: Ask short, direct questions.
- Use Keywords: "Weather [City]", "Manual [Topic]", "Calc [Expression]".
- Expect Summaries: LLM responses will be limited to ~200 characters and may be paginated.
- Encrypted Communication: Traffic to the Google Gemini API is secured with HTTPS; offline traffic to your local LLM never leaves your machine (localhost).
- Physical Security: Your Meshtastic device and local computer should be in a secure location.
- Local LLM Trust: Ensure you trust the local LLM models you download, as they run on your machine.
Pull requests are welcome!
MIT