kpax2049/loramapr

LoRaMapr

Stable release (v1.2.0)
LoRaMapr has reached v1.2.0 and is ready for self-hosted use.
Development continues on new features and refinements, with changes tracked in release notes.

LoRaMapr is an app for mapping real-world coverage around a fixed location with Meshtastic devices.

Position one node at that location, use a second field node to explore the surrounding area, and build coverage maps from real packet data.

By recording the same route multiple times, you can compare antennas, placement, and other setup changes under real-world conditions.

Who it is for

  • Meshtastic users testing coverage from a home, base, or relay location
  • People comparing antenna placement or node setup
  • Users who want real measured coverage, not just node positions on a map

What's in v1.2.0

  • Meshtastic-first ingest path via Pi Forwarder, plus supported LoRaWAN ingest via TTS webhooks.
  • Session-first workflow for manual capture and Home Auto Session (HAS) support for hands-free Meshtastic coverage runs.
  • Dedicated Session Comparison workflow for comparing repeated runs side by side with shared map overlays.
  • Decision-oriented compare metrics including Max range, Edge RSSI, Edge SNR, and farthest-point map markers.
  • Coverage workflow with Bins + Heatmap visualization and Device vs Session scope switching.
  • Production-ready self-hosting baseline with Docker Compose, reverse proxy, health/readiness checks, and startup migration flow.
  • Built-in operational tooling for API keys, database backup/restore, and retention safety defaults.
  • Expanded project documentation in the GitHub Wiki for quickstart, deployment, ingestion, troubleshooting, and operations.
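For illustration only, here is one way the Max range, Edge RSSI, and Edge SNR compare metrics above could be computed from a session's measurements. The `Measurement` shape, field names, and exact metric definitions are assumptions, not LoRaMapr's actual code:

```typescript
// Sketch only: shapes and metric definitions are assumptions.
interface Measurement {
  lat: number;
  lon: number;
  rssi: number; // dBm
  snr: number;  // dB
}

// Haversine great-circle distance in kilometers.
function distanceKm(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Max range plus the RSSI/SNR observed at the farthest received point.
function compareMetrics(base: { lat: number; lon: number }, points: Measurement[]) {
  let maxRangeKm = 0;
  let edge: Measurement | undefined;
  for (const p of points) {
    const d = distanceKm(base.lat, base.lon, p.lat, p.lon);
    if (d >= maxRangeKm) {
      maxRangeKm = d;
      edge = p;
    }
  }
  return { maxRangeKm, edgeRssi: edge?.rssi, edgeSnr: edge?.snr };
}
```

Comparing these numbers across repeated runs of the same route is what makes antenna or placement changes measurable.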

[Screenshot: LoRaMapr session details and signal charts]

[Screenshot: LoRaMapr coverage mode view]

How coverage mapping works

Best fit: fixed base + mobile field testing.

Most commonly, that means a home node and a field node:

  1. Leave one node at home, at a relay site, or at another fixed base location.
  2. Carry another node through the field on a walk or drive.
  3. Let your receiver path forward the packets it hears into LoRaMapr.
  4. Review sessions, playback, coverage views, and exports to see where packets were actually received.

Run capture can be manual, or automated with Home Auto Session (HAS) for base-driven workflows.

Home node + field node is the default pattern, but not the only valid setup.

Prerequisites (standard Meshtastic coverage workflow)

For the standard fixed-base + mobile-field workflow, LoRaMapr requires:

  • one node at a fixed location such as home/base/relay
  • one field node used while walking or driving
  • a Linux machine or Raspberry Pi at the fixed location to run the Receiver service (Pi Forwarder)
  • network access from that Receiver host to your LoRaMapr Cloud endpoint (or self-hosted LoRaMapr API endpoint)

The Receiver service ingests Meshtastic data from the fixed-location node and forwards it to LoRaMapr Cloud, where sessions, playback, and coverage maps are generated.

Receiver host baseline

Minimum:

  • CPU: 1 vCPU
  • RAM: 512 MB
  • Free disk: 2 GB
  • Network: stable LAN/internet path to your cloud endpoint
  • USB: one reliable USB data connection for the Meshtastic node (if using direct USB)

Recommended:

  • CPU: 2 vCPU
  • RAM: 1 GB
  • Free disk: 4 GB or more
  • stable power, networking, and storage for long-running receiver use

Raspberry Pi baseline:

  • Minimum proven: Raspberry Pi Zero 2 W with 512 MB RAM
  • Recommended: Raspberry Pi 3 or 4 class system with a quality power supply and SD card

Practical note:

  • A minimal Linux VM can run the Receiver service in roughly the same class as a Pi Zero 2 W.
  • For better stability and headroom for upgrades, logs, and retries, 1 GB RAM and 2 vCPUs make a better baseline when available.

How data gets into LoRaMapr (supported ingest paths)

1) Meshtastic (Forwarder -> HTTP) — primary coverage-mapping workflow

[Diagram: LoRaMapr Meshtastic ingestion flow]

Real-world setup (typical)

  • You own: one or more Meshtastic field nodes you carry while walking/driving
  • You own: a fixed receiver node (often at home) plus a small computer (often a Raspberry Pi)
  • Field node(s) and base node commonly run on a private channel for repeatable testing
  • LoRaMapr: your self-hosted backend + UI

How ingestion works

  1. Field node(s) transmit packets into the mesh.
  2. Your fixed base node hears them.
  3. The Receiver service (Pi Forwarder) listens to Meshtastic packets locally and POSTs them to LoRaMapr over HTTP/HTTPS.
  4. LoRaMapr stores the events and normalizes GPS/radio fields into measurements attached to sessions.
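Step 4 above (normalizing GPS/radio fields into measurements) might look roughly like this. The JSON field names (`latitude_i`, `rxRssi`, etc.) follow Meshtastic's JSON conventions, but the exact shape here is an assumption, not LoRaMapr's actual schema:

```typescript
// Sketch of step 4: flattening a Meshtastic JSON packet into a measurement.
// Field names are assumptions based on Meshtastic's JSON output.
interface NormalizedMeasurement {
  deviceUid: string;
  lat: number;
  lon: number;
  rssi?: number;
  snr?: number;
}

function normalizePacket(pkt: any): NormalizedMeasurement | null {
  const pos = pkt?.payload;
  // Skip packets without a GPS fix; only positioned packets become measurements.
  if (pos?.latitude_i == null || pos?.longitude_i == null) return null;
  return {
    deviceUid: String(pkt.from),
    lat: pos.latitude_i / 1e7, // Meshtastic encodes position as degrees * 1e7
    lon: pos.longitude_i / 1e7,
    rssi: pkt.rxRssi,
    snr: pkt.rxSnr,
  };
}
```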

Home Auto Session (HAS) supports a home-driven coverage workflow: leave one node at your base location, carry another through the field, and let the base-side workflow automatically open and close coverage runs around real activity. This reduces manual session handling and makes repeated walks or drives easier to capture consistently.

Important: Meshtastic is not limited to a home node. The Receiver service can run on any Linux host that can read Meshtastic packets (Pi, mini PC, Linux laptop over USB, etc.).

2) LoRaWAN (The Things Stack webhook) — supported secondary path

[Diagram: LoRaMapr LoRaWAN ingestion flow]

Real-world setup

  • You own: a LoRaWAN end device (your sensor/tracker)
  • Gateways: can be yours or community/public gateways (any gateway that hears your device helps)
  • The Things Stack (TTS/TTN): the network server that receives gateway traffic for your application
  • LoRaMapr: your backend + UI

How ingestion works

  1. Your device transmits an uplink over LoRa.
  2. One or more gateways receive it and forward it to The Things Stack.
  3. You configure a Webhook integration in The Things Stack by entering LoRaMapr's HTTPS URL (and a secret).
  4. The Things Stack automatically POSTs each uplink event to LoRaMapr.
  5. LoRaMapr stores the event, extracts GPS + radio metadata (RSSI/SNR, gateway IDs when available), and attaches the data to your sessions for visualization.
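As a sketch of step 5, per-gateway radio metadata can be pulled from a The Things Stack v3 uplink webhook body like this. The `uplink_message.rx_metadata` path follows TTS's documented JSON, but verify the exact fields against your own payloads; this is not LoRaMapr's actual extraction code:

```typescript
// Illustrative extraction of per-gateway RSSI/SNR from a TTS v3 uplink event.
interface GatewayRx {
  gatewayId: string;
  rssi: number;
  snr: number;
}

function extractRxMetadata(event: any): GatewayRx[] {
  const rx = event?.uplink_message?.rx_metadata ?? [];
  return rx.map((g: any) => ({
    gatewayId: g?.gateway_ids?.gateway_id ?? "unknown",
    rssi: g?.rssi,
    snr: g?.snr,
  }));
}
```

Because several gateways can hear the same uplink, one event may yield multiple `GatewayRx` entries, which is why LoRaWAN reception details are described as especially rich below.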

What users typically do with it

  • Record a walk/drive coverage run and replay it later.
  • Compare antennas, node placement, terrain, routes, and settings by repeating the same route over time.
  • Use Coverage Bins and Heatmap views to see where packets were actually received.
  • Use Home Auto Session (HAS) to capture repeated field runs with less manual session management.
  • Inspect reception details (especially strong with LoRaWAN where gateways report RSSI/SNR).
  • Export session tracks/points (GeoJSON) for external tools like QGIS.
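To make the Coverage Bins idea concrete, here is one way measurements could be snapped to grid cells, keeping the best RSSI seen per cell. The cell size and keying scheme are assumptions for illustration, not LoRaMapr's actual binning:

```typescript
// Illustrative coverage binning: best RSSI per grid cell.
interface Point { lat: number; lon: number; rssi: number; }

function binCoverage(points: Point[], cellDeg = 0.001): Map<string, number> {
  const bins = new Map<string, number>();
  for (const p of points) {
    // Snap to the cell's south-west corner to form a stable key.
    const key = `${Math.floor(p.lat / cellDeg)}:${Math.floor(p.lon / cellDeg)}`;
    const best = bins.get(key);
    if (best === undefined || p.rssi > best) bins.set(key, p.rssi);
  }
  return bins;
}
```

Aggregating per cell rather than per point is what lets repeated runs over the same route accumulate into a single coverage picture.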

Tech stack

  • Backend: Node.js + TypeScript + NestJS
  • Frontend: React + Vite + TypeScript
  • Data: PostgreSQL + Prisma
  • Supporting libs: RxJS, class-validator, class-transformer

Documentation

Quickstart (first-time users, working UI)

Start the dev stack (postgres + backend + frontend):

make keys
make up

No manual npm install is required for runtime; containers install and run dependencies.

Default URLs/ports after startup:

  • Frontend UI: http://localhost:5173
  • Backend API: http://localhost:3000
  • Health: http://localhost:3000/health
  • Readiness: http://localhost:3000/readyz

These values are controlled by .env (FRONTEND_PORT, API_PORT).

What to expect

  • Backend listens on http://localhost:3000
  • Frontend listens on http://localhost:5173
  • Postgres runs as the postgres service
  • Migrations are applied automatically in the Docker backend flow (docker compose up --build)

Health check

curl http://localhost:3000/health
curl http://localhost:3000/readyz
  • /health: process-level liveness
  • /readyz: DB readiness (503 when database is unreachable)

Running locally (contributors)

npm install
cp .env.example .env
docker compose up -d postgres
# IMPORTANT: when backend runs on host (not in docker), edit .env and set:
# DATABASE_URL=postgres://postgres:postgres@localhost:5432/loramapr
npm run db:migrate
npm run start:dev

Python dead-code checks (contributors)

This repo has one Python bridge script at apps/pi-forwarder/scripts/meshtastic-json-bridge.py.

Install Python lint/dead-code tooling:

make py-tools-install

This creates/uses a repo-local virtual environment at .venv-pytools.

Run Ruff (fast lint for unused imports/variables and obvious issues):

make py-lint-ruff
make py-lint-ruff-fix

Run Vulture (conservative dead-code scan):

make py-deadcode-vulture

Combined pass:

make py-deadcode

Notes:

  • Ruff is configured for linting only (F checks), not formatting.
  • Vulture uses a conservative baseline (min_confidence = 90) and is intended for review, not blind deletion.
  • For dynamic Python usage (framework registration, decorators, CLI discovery, reflection), prefer narrow Vulture whitelist/ignore updates over broad global ignores.

Full-stack dev (backend + frontend)

Run both servers together:

npm run dev:all

Or run them separately:

npm run start:dev
npm --prefix frontend run dev

See data in the map

  1. Run the simulator to ingest sample points:
npm run simulate:walk -- --apiKey YOUR_KEY --deviceUid dev-1 --baseLat 37.77 --baseLon -122.43 --minutes 15 --intervalSec 5 --seed demo
  2. Open the frontend dev server in your browser.
  3. Select the device in the dropdown to see points and track.

Meshtastic ingest (MVP)

Post Meshtastic JSON payloads to:

POST /api/meshtastic/event

Use an X-API-Key header with a key that has INGEST scope. Meshtastic events create webhook events, and when GPS data is present, measurements appear on the map.
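As a sketch, an ingest POST could be shaped like this. Only the path and the X-API-Key header usage come from the docs above; the payload fields and helper are placeholders:

```typescript
// Sketch only: builds the URL and fetch options for the Meshtastic ingest
// endpoint. Payload contents are placeholders, not a documented schema.
function buildIngestRequest(baseUrl: string, apiKey: string, event: object) {
  return {
    url: `${baseUrl}/api/meshtastic/event`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-API-Key": apiKey, // key must have INGEST scope
      },
      body: JSON.stringify(event),
    },
  };
}

// Usage against a running instance:
//   const { url, init } = buildIngestRequest("http://localhost:3000", key, pkt);
//   await fetch(url, init);
```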

Debug panels (QUERY key)

The LoRaWAN and Meshtastic debug panels require VITE_QUERY_API_KEY (QUERY scope) in frontend/.env.

Playback

Session playback mode supports a scrubber, keyboard shortcuts, and time-window slicing for deterministic replay.
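One way deterministic scrubbing can work is a binary search over a timestamp-sorted track for the last point at or before the scrubber time. The data shapes here are assumptions, not LoRaMapr's actual playback model:

```typescript
// Illustrative playback scrubbing: find the last track point whose
// timestamp is <= the scrubber time, via binary search.
interface TrackPoint { t: number; lat: number; lon: number; }

function pointAt(track: TrackPoint[], timeMs: number): TrackPoint | undefined {
  let lo = 0, hi = track.length - 1, best = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (track[mid].t <= timeMs) { best = mid; lo = mid + 1; }
    else hi = mid - 1;
  }
  return best >= 0 ? track[best] : undefined;
}
```

Because the lookup depends only on stored timestamps, replaying the same session always renders the same positions at the same scrubber times.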

Docker dev workflow (backend)

Use the Quickstart above for the recommended flow.

API key generation

For local Docker-first setup, generate (or preserve existing) QUERY/INGEST keys:

make keys

Advanced/manual minting is also available:

npm run apikey:mint -- --scopes INGEST --label "dev ingest key"

Use the printed key in the X-API-Key header.

Simulate measurement walk

Generate and ingest a synthetic walk (posts to POST /api/measurements in batches):

npm run simulate:walk -- --apiKey YOUR_KEY --deviceUid dev-1 --baseLat 37.77 --baseLon -122.43 --minutes 15 --intervalSec 5 --seed demo

Seed richer demo data (DB)

Use the seed script when you want more than a single walk. It writes a larger test dataset directly to Postgres, including:

  • multiple devices and sessions
  • many measurements across several days
  • per-gateway Rx metadata
  • precomputed coverage bins

Run:

npx ts-node scripts/seed-data.ts --db

Optional controls:

SEED=1337 CENTER_LAT=37.7749 CENTER_LON=-122.4194 OWNER_USER_ID=<uuid> npx ts-node scripts/seed-data.ts --db

If you only want the generated payload (no DB writes):

npx ts-node scripts/seed-data.ts --json > tmp/dummy.json

Build and run

npm run build
npm start

Troubleshooting

docker compose logs postgres --tail=200
docker compose logs backend --tail=200
docker compose down -v
docker compose up --build

Prod compose equivalents:

docker compose -f docker-compose.prod.yml logs postgres --tail=200
docker compose -f docker-compose.prod.yml logs api --tail=200
docker compose -f docker-compose.prod.yml logs reverse-proxy --tail=200
docker compose -f docker-compose.prod.yml down
docker compose -f docker-compose.prod.yml up -d --build

If you see a Prisma engine mismatch (darwin vs linux), run:

docker compose down -v
docker compose up --build

If npm ci fails, ensure you are using the committed package-lock.json and rebuild.

Common ports:

  • Backend: 3000
  • Frontend dev server: 5173
  • Postgres: 5432

If API requests fail in dev, check that frontend/.env has VITE_API_BASE_URL=http://localhost:3000 and restart the Vite dev server. In production, frontend requests use same-origin /api/* by default (leave VITE_API_BASE_URL empty).

Production smoke test

Start production-style stack:

make prod-up

If your docker-compose.prod.yml maps proxy to default ports:

curl -i http://localhost/healthz
curl -i http://localhost/readyz

If proxy is mapped to custom host port (for example 8080:80), use that port:

PORT=8080
curl -i "http://localhost:${PORT}/healthz"
curl -i "http://localhost:${PORT}/readyz"

License

License: AGPL-3.0

This project is licensed under the GNU Affero General Public License v3.0. See LICENSE.

Contributor note

  • Use prisma migrate dev only when changing schema; otherwise use prisma migrate deploy (the default in Docker).

About

LoRaMapr is a hardware-backed mapping and visualization project focused on collecting, storing, and displaying LoRa-based field data on an interactive map.
