✅ Stable release (v1.2.0)
LoRaMapr has reached v1.2.0 and is ready for self-hosted use.
Development continues for new features and refinements, with changes tracked through normal release/version notes.
LoRaMapr is an app for mapping real-world coverage around a fixed location with Meshtastic devices.
Position one node at that location, use a second field node to explore the surrounding area, and build coverage maps from real packet data.
By recording the same route multiple times, you can compare antennas, placement, and other setup changes under real-world conditions.
Who it's for:
- Meshtastic users testing coverage from a home, base, or relay location
- People comparing antenna placement or node setup
- Users who want real measured coverage, not just node positions on a map
Key features:
- Meshtastic-first ingest path via Pi Forwarder, plus supported LoRaWAN ingest via TTS webhooks.
- Session-first workflow for manual capture and Home Auto Session (HAS) support for hands-free Meshtastic coverage runs.
- Dedicated Session Comparison workflow for comparing repeated runs side by side with shared map overlays.
- Decision-oriented compare metrics including Max range, Edge RSSI, Edge SNR, and farthest-point map markers.
- Coverage workflow with Bins + Heatmap visualization and Device vs Session scope switching.
- Production-ready self-hosting baseline with Docker Compose, reverse proxy, health/readiness checks, and startup migration flow.
- Built-in operational tooling for API keys, database backup/restore, and retention safety defaults.
- Expanded project documentation in the GitHub Wiki for quickstart, deployment, ingestion, troubleshooting, and operations.
Best fit: fixed base + mobile field testing.
Most commonly, that means a home node and a field node:
- Leave one node at home, at a relay site, or at another fixed base location.
- Carry another node through the field on a walk or drive.
- Let your receiver path send packets into LoRaMapr.
- Review sessions, playback, coverage views, and exports to see where packets were actually received.
Run capture can be manual, or automated with Home Auto Session (HAS) for base-driven workflows.
Home node + field node is the default pattern, but not the only valid setup.
For the standard fixed-base + mobile-field workflow, LoRaMapr requires:
- one node at a fixed location such as home/base/relay
- one field node used while walking or driving
- a Linux machine or Raspberry Pi at the fixed location to run the Receiver service (Pi Forwarder)
- network access from that Receiver host to your LoRaMapr Cloud endpoint (or self-hosted LoRaMapr API endpoint)
The Receiver service ingests Meshtastic data from the fixed-location node and forwards it to LoRaMapr Cloud, where sessions, playback, and coverage maps are generated.
Minimum:
- CPU: 1 vCPU
- RAM: 512 MB
- Free disk: 2 GB
- Network: stable LAN/internet path to your cloud endpoint
- USB: one reliable USB data connection for the Meshtastic node (if using direct USB)
Recommended:
- CPU: 2 vCPU
- RAM: 1 GB
- Free disk: 4 GB or more
- Stable power, networking, and storage for long-running receiver use
Raspberry Pi baseline:
- Minimum proven: Raspberry Pi Zero 2 W with 512 MB RAM
- Recommended: Raspberry Pi 3 or 4 class system with a quality power supply and SD card
Practical note:
- A minimal Linux VM can run the Receiver service in roughly the same class as a Pi Zero 2 W.
- For better stability and upgrade/log/retry headroom, 1 GB RAM and 2 vCPU is the better baseline when available.
Real-world setup (typical)
- You own: one or more Meshtastic field nodes you carry while walking/driving
- You own: a fixed receiver node (often at home) plus a small computer (often a Raspberry Pi)
- Field node(s) and base node commonly run on a private channel for repeatable testing
- LoRaMapr: your self-hosted backend + UI
How ingestion works
- Field node(s) transmit packets into the mesh.
- Your fixed base node hears them.
- The Receiver service (Pi Forwarder) listens to Meshtastic packets locally and POSTs them to LoRaMapr over HTTP/HTTPS.
- LoRaMapr stores the events and normalizes GPS/radio fields into measurements attached to sessions.
Home Auto Session (HAS) supports a home-driven coverage workflow: leave one node at your base location, carry another through the field, and let the base-side workflow automatically open and close coverage runs around real activity. This reduces manual session handling and makes repeated walks or drives easier to capture consistently.
Important: the fixed node does not have to be at home. The Receiver service can run on any Linux host that can read Meshtastic packets (Pi, mini PC, Linux laptop over USB, etc.).
Real-world setup (LoRaWAN)
- You own: a LoRaWAN end device (your sensor/tracker)
- Gateways: can be yours or community/public gateways (any gateway that hears your device helps)
- The Things Stack (TTS/TTN): the network server that receives gateway traffic for your application
- LoRaMapr: your backend + UI
How ingestion works
- Your device transmits an uplink over LoRa.
- One or more gateways receive it and forward it to The Things Stack.
- You configure a Webhook integration in The Things Stack by entering LoRaMapr's HTTPS URL (and a secret).
- The Things Stack automatically POSTs each uplink event to LoRaMapr.
- LoRaMapr stores the event, extracts GPS + radio metadata (RSSI/SNR, gateway IDs when available), and attaches the data to your sessions for visualization.
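For reference, the sketch below writes a trimmed, hypothetical example of a The Things Stack v3 uplink event and sanity-checks it as JSON. Only the fields relevant to LoRaMapr (GPS, RSSI/SNR, gateway IDs) are shown; the `decoded_payload` field names depend on your device's payload formatter, and the device/app/gateway IDs are placeholders.

```shell
# Hypothetical trimmed TTS v3 uplink event; IDs and decoded_payload
# field names are placeholders, not LoRaMapr requirements.
cat > /tmp/tts-uplink.json <<'EOF'
{
  "end_device_ids": {
    "device_id": "my-tracker",
    "application_ids": { "application_id": "my-app" }
  },
  "uplink_message": {
    "decoded_payload": { "latitude": 37.7749, "longitude": -122.4194 },
    "rx_metadata": [
      { "gateway_ids": { "gateway_id": "my-gateway" }, "rssi": -95, "snr": 7.2 }
    ]
  }
}
EOF
# Confirm the sketch is valid JSON before wiring up a real webhook.
python3 -m json.tool /tmp/tts-uplink.json > /dev/null && echo "uplink payload OK"
```

Note that `rx_metadata` is an array: when several gateways hear the same uplink, each contributes its own RSSI/SNR entry, which is what makes per-gateway reception detail possible.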
- Record a walk/drive coverage run and replay it later.
- Compare antennas, node placement, terrain, routes, and settings by repeating the same route over time.
- Use Coverage Bins and Heatmap views to see where packets were actually received.
- Use Home Auto Session (HAS) to capture repeated field runs with less manual session management.
- Inspect reception details (especially strong with LoRaWAN where gateways report RSSI/SNR).
- Export session tracks/points (GeoJSON) for external tools like QGIS.
- Backend: Node.js + TypeScript + NestJS
- Frontend: React + Vite + TypeScript
- Data: PostgreSQL + Prisma
- Supporting libs: RxJS, class-validator, class-transformer
- GitHub Wiki: https://github.com/kpax2049/loramapr/wiki
Start the dev stack (postgres + backend + frontend):

```shell
make keys
make up
```

No manual npm install is required for runtime; containers install and run dependencies.
Default URLs/ports after startup:
- Frontend UI: http://localhost:5173
- Backend API: http://localhost:3000
- Health: http://localhost:3000/health
- Readiness: http://localhost:3000/readyz

These values are controlled by `.env` (FRONTEND_PORT, API_PORT).
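For example, the corresponding entries in `.env` might look like the following (values inferred from the default URLs above; adjust if you remap ports):

```env
# .env (repo root): host ports for the dev stack
FRONTEND_PORT=5173
API_PORT=3000
```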
- Backend listens on http://localhost:3000
- Frontend listens on http://localhost:5173
- Postgres runs as the `postgres` service
- Migrations are applied automatically in the Docker backend flow (`docker compose up --build`)
```shell
curl http://localhost:3000/health
curl http://localhost:3000/readyz
```

- /health: process-level liveness
- /readyz: DB readiness (returns 503 when the database is unreachable)
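In deploy or CI scripts you may want to block until the backend reports ready before running smoke tests. A minimal sketch (the endpoint path matches the readiness URL above; the retry count and interval are arbitrary defaults):

```shell
# wait_for_ready URL [MAX_ATTEMPTS] [SLEEP_SECONDS]
# Polls a readiness endpoint until it answers 2xx, or gives up.
wait_for_ready() {
  url=$1; attempts=${2:-30}; pause=${3:-2}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf --max-time 5 "$url" > /dev/null; then
      echo "ready: $url"
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# Usage: wait_for_ready http://localhost:3000/readyz 30 2
```

Because /readyz returns 503 while the database is unreachable, `curl -f` treats that as failure and the loop keeps polling until the DB is actually up.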
Manual host setup (backend on the host, Postgres in Docker):

```shell
npm install
cp .env.example .env
docker compose up -d postgres
# IMPORTANT: when backend runs on host (not in docker), edit .env and set:
# DATABASE_URL=postgres://postgres:postgres@localhost:5432/loramapr
npm run db:migrate
npm run start:dev
```

This repo has one Python bridge script at apps/pi-forwarder/scripts/meshtastic-json-bridge.py.
Install Python lint/dead-code tooling:

```shell
make py-tools-install
```

This creates/uses a repo-local virtual environment at `.venv-pytools`.

Run Ruff (fast lint for unused imports/variables and obvious issues):

```shell
make py-lint-ruff
make py-lint-ruff-fix
```

Run Vulture (conservative dead-code scan):

```shell
make py-deadcode-vulture
```

Combined pass:

```shell
make py-deadcode
```

Notes:
- Ruff is configured for linting only (`F` checks), not formatting.
- Vulture uses a conservative baseline (`min_confidence = 90`) and is intended for review, not blind deletion.
- For dynamic Python usage (framework registration, decorators, CLI discovery, reflection), prefer narrow Vulture whitelist/ignore updates over broad global ignores.
Run both servers together:

```shell
npm run dev:all
```

Or run them separately:

```shell
npm run start:dev
npm --prefix frontend run dev
```

- Run the simulator to ingest sample points:

```shell
npm run simulate:walk -- --apiKey YOUR_KEY --deviceUid dev-1 --baseLat 37.77 --baseLon -122.43 --minutes 15 --intervalSec 5 --seed demo
```

- Open the frontend dev server in your browser.
- Select the device in the dropdown to see points and track.
Post Meshtastic JSON payloads to `POST /api/meshtastic/event`, using an X-API-Key with INGEST scope. Meshtastic events create webhook events, and if GPS data is present, measurements appear on the map.
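As a sketch, an ingest call might look like the following. The endpoint path and the X-API-Key header come from this README; the payload field names are illustrative only (check your Pi Forwarder's output for the real event shape), and `LORAMAPR_INGEST_KEY` is a placeholder for a key minted with INGEST scope.

```shell
# Hypothetical Meshtastic event payload; field names are illustrative,
# not the documented schema.
cat > /tmp/meshtastic-event.json <<'EOF'
{
  "deviceUid": "field-node-1",
  "rxTime": "2024-05-01T12:00:00Z",
  "rssi": -92,
  "snr": 6.5,
  "lat": 37.7749,
  "lon": -122.4194
}
EOF

# POST it with an INGEST-scope key; prints the API response,
# or a note if the backend is not running.
curl -sS --max-time 5 -X POST http://localhost:3000/api/meshtastic/event \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ${LORAMAPR_INGEST_KEY:-YOUR_KEY}" \
  --data @/tmp/meshtastic-event.json || echo "backend not reachable"
```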
The LoRaWAN and Meshtastic debug panels require VITE_QUERY_API_KEY (QUERY scope) in frontend/.env.
Session playback mode supports scrubber, keyboard shortcuts, and time-window slicing for deterministic replay.
Use the Quickstart above for the recommended flow.
For local Docker-first setup, generate (or preserve existing) QUERY/INGEST keys:
```shell
make keys
```

Advanced/manual minting is also available:

```shell
npm run apikey:mint -- --scopes INGEST --label "dev ingest key"
```

Use the printed key in the X-API-Key header.
Generate and ingest a synthetic walk (posts to POST /api/measurements in batches):
```shell
npm run simulate:walk -- --apiKey YOUR_KEY --deviceUid dev-1 --baseLat 37.77 --baseLon -122.43 --minutes 15 --intervalSec 5 --seed demo
```

Use the seed script when you want more than a single walk. It writes a larger test dataset directly to Postgres, including:
- multiple devices and sessions
- many measurements across several days
- per-gateway Rx metadata
- precomputed coverage bins
Run:

```shell
npx ts-node scripts/seed-data.ts --db
```

Optional controls:

```shell
SEED=1337 CENTER_LAT=37.7749 CENTER_LON=-122.4194 OWNER_USER_ID=<uuid> npx ts-node scripts/seed-data.ts --db
```

If you only want the generated payload (no DB writes):

```shell
npx ts-node scripts/seed-data.ts --json > tmp/dummy.json
```

Build and start:

```shell
npm run build
npm start
```

Inspect logs or reset the dev stack:

```shell
docker compose logs postgres --tail=200
docker compose logs backend --tail=200
docker compose down -v
docker compose up --build
```

Prod compose equivalents:
```shell
docker compose -f docker-compose.prod.yml logs postgres --tail=200
docker compose -f docker-compose.prod.yml logs api --tail=200
docker compose -f docker-compose.prod.yml logs reverse-proxy --tail=200
docker compose -f docker-compose.prod.yml down
docker compose -f docker-compose.prod.yml up -d --build
```

If you see a Prisma engine mismatch (darwin vs linux), run:

```shell
docker compose down -v
docker compose up --build
```

If `npm ci` fails, ensure you are using the committed package-lock.json and rebuild.
Common ports:
- Backend: 3000
- Frontend dev server: 5173
- Postgres: 5432
If API requests fail in dev, check that frontend/.env has VITE_API_BASE_URL=http://localhost:3000 and restart the Vite dev server. In production, frontend requests use same-origin /api/* by default (leave VITE_API_BASE_URL empty).
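A minimal frontend/.env for local development might therefore look like this (the key value is a placeholder for a minted QUERY-scope key):

```env
# frontend/.env (dev)
VITE_API_BASE_URL=http://localhost:3000
# Needed only for the LoRaWAN/Meshtastic debug panels:
VITE_QUERY_API_KEY=YOUR_QUERY_KEY
```

Remember that Vite reads env files at startup, so restart the dev server after editing this file.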
Start the production-style stack:

```shell
make prod-up
```

If your docker-compose.prod.yml maps the proxy to default ports:

```shell
curl -i http://localhost/healthz
curl -i http://localhost/readyz
```

If the proxy is mapped to a custom host port (for example 8080:80), use that port:

```shell
PORT=8080
curl -i "http://localhost:${PORT}/healthz"
curl -i "http://localhost:${PORT}/readyz"
```

License: AGPL-3.0
This project is licensed under the GNU Affero General Public License v3.0. See LICENSE.
- Use `prisma migrate dev` only when changing schema; otherwise use `prisma migrate deploy` (the default in Docker).

