Revolutionary database architecture combining neuromorphic computing, quantum-inspired algorithms, and DNA-storage principles for ultra-efficient edge computing applications on Raspberry Pi 4
⚠️ BETA SOFTWARE - NOT FOR PRODUCTION USE

This project is currently in beta testing. It is under active development and may contain bugs, incomplete features, or breaking changes. Do not use this software in production environments. Use it at your own risk, for testing and evaluation purposes only.
If you find this extension helpful, please consider supporting its development! Your sponsorship helps maintain and improve this project.
Every contribution, no matter the size, is greatly appreciated and helps ensure the continued development of this extension. Thank you for your support!
After cloning the repository, simply run:

```bash
# Clone the repository
git clone git@github.com:twohreichel/NeuroQuantumDB.git
cd NeuroQuantumDB
```

The setup script automatically installs:
- ✅ All required Rust tools (cargo-audit, cargo-deny, cargo-machete)
- ✅ Pre-commit hooks for code quality
- ✅ Git configuration for an optimal workflow
- ✅ Post-merge hooks for dependency updates
- ✅ Commit message validation
NeuroQuantumDB uses secure initialization instead of default credentials:
```bash
# Initialize the database with your first admin key
neuroquantum-api init

# Or non-interactively, with custom settings
neuroquantum-api init --name admin --expiry-hours 8760 --output .env --yes

# Generate a secure JWT secret for production
neuroquantum-api generate-jwt-secret --output config/jwt-secret.txt
```

- ✅ No Default Credentials - Requires explicit initialization
- ✅ JWT Authentication - Secure token-based authentication
- ✅ API Key Management - Granular permission control
- ✅ Rate Limiting - Protection against abuse (5 key generations/hour per IP)
- ✅ IP Whitelisting - Admin endpoints protected by IP whitelist
- ✅ Post-Quantum Crypto - ML-KEM & ML-DSA ready
- ✅ Biometric Auth - EEG-based authentication support
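The rate limiting described above (5 key generations per hour per IP) can be sketched as a fixed-window counter. The Python below is purely illustrative — the server's actual limiter lives in the Rust codebase, and the names `RateLimiter` and `allow` are hypothetical:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Illustrative fixed-window rate limiter: at most `limit` events
    per `window` seconds for each key (e.g. a client IP)."""

    def __init__(self, limit=5, window=3600):
        self.limit = limit
        self.window = window
        self.events = defaultdict(list)  # key -> timestamps of recent events

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # Drop events that have fallen out of the window
        recent = [t for t in self.events[key] if now - t < self.window]
        self.events[key] = recent
        if len(recent) >= self.limit:
            return False  # over the limit: reject this request
        recent.append(now)
        return True

limiter = RateLimiter(limit=5, window=3600)
results = [limiter.allow("203.0.113.7", now=1000 + i) for i in range(6)]
# First five generations within the hour succeed, the sixth is rejected
```

A sliding-window or token-bucket scheme would behave similarly from a client's point of view: after the limit is hit, further key generations from that IP are rejected until the window moves on.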
Edit config/prod.toml:
```toml
[auth]
jwt_secret = "YOUR-GENERATED-SECRET-HERE"
jwt_expiration_hours = 8

[security]
admin_ip_whitelist = [
    "127.0.0.1",
    "::1",
    "YOUR-ADMIN-IP-HERE"
]
```

The following folders contain the latest client libraries for interacting with the database in the programming language of your choice. They are also published as tags and can be installed via the most common package providers (for example Packagist, GitHub, Maven, PyPI):
- connecting-libraries/php/
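The `admin_ip_whitelist` in `config/prod.toml` above can be checked roughly like this sketch (illustrative Python only, not the server's actual implementation; `ip_allowed` is a hypothetical helper):

```python
import ipaddress

ADMIN_IP_WHITELIST = ["127.0.0.1", "::1", "203.0.113.10"]  # example entries

def ip_allowed(client_ip, whitelist=ADMIN_IP_WHITELIST):
    """Return True if client_ip matches an entry in the whitelist.

    ipaddress.ip_address normalizes both IPv4 and IPv6 notation,
    so "::1" and "0:0:0:0:0:0:0:1" compare equal.
    """
    client = ipaddress.ip_address(client_ip)
    return any(client == ipaddress.ip_address(entry) for entry in whitelist)
```

Parsing both sides with `ipaddress` avoids false negatives from equivalent but differently written addresses, which a plain string comparison would miss.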
The cluster mode is currently in development and should not be used in production environments.
The multi-node cluster functionality is available as a Beta/Preview feature for testing and development purposes. The following features are still missing or incomplete:
- ❌ gRPC Network Transport - Inter-node communication not fully implemented
- ❌ Complete Raft Implementation - Consensus protocol is partial
- ❌ Service Discovery - DNS/Consul/etcd integration not yet available
- ❌ Full Replication - Data replication has limitations
| Deployment Type | Status | Use Case |
|---|---|---|
| Single-Node | ✅ Production-Ready | Recommended for all production workloads |
| Multi-Node Cluster | ⚠️ Beta/Preview | Development and testing only |
For production environments, we strongly recommend single-node deployments until the cluster module reaches stable release.
The full cluster implementation is planned for 2026 as part of our distributed architecture milestone. See Future Vision for details on the roadmap.
The complete API can be tested locally with Postman:
1. Import the Postman collection:
   - Open Postman
   - Click "Import"
   - Drag the files from `postman/` into the import window:
     - `NeuroQuantumDB.postman_collection.json`
     - `NeuroQuantumDB.postman_environment.json`
2. Activate the environment:
   - Select "NeuroQuantumDB Local" in the top right
3. Start the server:
   ```bash
   cargo run --bin neuroquantum-api
   ```
4. Test the API:
   - Health Check → Login → Create Table → Insert Data
   - The token is saved automatically! ✨
The Postman Collection contains ready-made requests for:
- ✅ Authentication - Login, Token Refresh, API Key Management
- ✅ CRUD Operations - Create, Read, Update, Delete with SQL
- ✅ Neural Networks - Training and status queries
- ✅ Quantum Search - Grover's algorithm search
- ✅ DNA Compression - DNA sequence compression
- ✅ Biometric Auth - EEG-based authentication
- ✅ Monitoring - Prometheus metrics & Performance Stats
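Outside Postman, the same login flow can be driven from a short script. This Python sketch only builds the requests; the endpoint path and payload field here are assumptions — check the Postman collection for the actual routes and shapes:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # default port from the interactive docs URL

# Hypothetical login route and payload -- verify against the Postman collection.
login_req = urllib.request.Request(
    f"{BASE_URL}/auth/login",
    data=json.dumps({"api_key": "YOUR-API-KEY"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

def authed_headers(token):
    """Headers for follow-up requests once login has returned a JWT."""
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

# urllib.request.urlopen(login_req) would perform the actual call against
# a running server; the returned token then goes into authed_headers().
```

The Postman environment does the token hand-off automatically; in a script you carry the JWT forward yourself via the `Authorization: Bearer` header.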
Detailed Guide: See `postman/README.md`
Comprehensive documentation is available for developers and users:
Complete Documentation - Overview and navigation to all documentation resources
Project Conception - The origin story and design philosophy:
- How a small idea evolved into NeuroQuantumDB over three years
- Neuroscience foundations: the brain as architectural blueprint
- Core principles: Self-learning, DNA encoding, quantum-inspired algorithms
- Technical evolution and milestone timeline
- Future vision and roadmap
Developer Guide - Complete technical reference including:
- System architecture and design principles
- Core component internals (Storage Engine, DNA Compression, Quantum Processor)
- API reference and implementation details
- Development setup and build process
- Testing, benchmarking, and performance optimization
- Security architecture and best practices
- Contributing guidelines
User Guide - Practical guide for using NeuroQuantumDB:
- Quick start and installation instructions
- Configuration and deployment
- Using the REST API with examples
- QSQL query language reference
- Advanced features (DNA compression, quantum search, neural networks)
- Monitoring and maintenance
- Troubleshooting and FAQ
Comprehensive guides explaining Quantum Search, Neural Endpoints, and DNA Compression in detail:
- 🇩🇪 Feature Guide (German) - Detailed explanations of all features
- 🇬🇧 Feature Guide (English) - Detailed explanations of all features
These guides explain complex concepts in simple terms that anyone can understand!
- API Documentation: Run `make docs-api` and open `target/doc/index.html`
- Interactive API Docs: Start the server and visit `http://localhost:8080/api-docs/`
- Generate All Docs: Run `make docs` to generate complete documentation
- Serve Docs Locally: Run `make docs-serve` to browse at `http://localhost:8000`
Have a look at the Wiki for more information.
This repository uses a monorepo structure with automated releases via release-please.
All commits must follow the Conventional Commits specification. This enables automatic versioning and changelog generation.
Commit Format:
```text
<type>(<scope>): <description>

[optional body]

[optional footer(s)]
```
Types:
| Type | Description | Version Bump |
|---|---|---|
| `feat` | New feature | Minor |
| `fix` | Bug fix | Patch |
| `feat!` or `BREAKING CHANGE:` | Breaking change | Major |
| `perf` | Performance improvement | Patch |
| `docs` | Documentation only | None |
| `refactor` | Code refactoring | None |
| `test` | Adding tests | None |
| `ci` | CI/CD changes | None |
| `chore` | Maintenance | None |
Scopes route commits to the correct package for versioning:
| Scope | Target Package | Example |
|---|---|---|
| `api`, `core`, `cluster`, `qsql`, `wasm` | NeuroQuantumDB (main) | `feat(api): add batch endpoint` |
| `php` | PHP-Driver | `fix(php): connection timeout` |
| (no scope) | NeuroQuantumDB (main) | `feat: improve query parser` |
| Component | Tag Format | Example |
|---|---|---|
| NeuroQuantumDB | `v{version}` | `v1.0.0`, `v1.1.0` |
| PHP-Driver | `php-driver/v{version}` | `php-driver/v1.0.0` |
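The tag scheme in the table above can be summed up as a one-liner (illustrative Python; `release_tag` is a made-up helper — release-please derives these tags itself):

```python
def release_tag(component, version):
    """Format a release tag following the table above.

    The main NeuroQuantumDB package uses bare v{version} tags; driver
    packages prefix the tag with their component name.
    """
    if component == "NeuroQuantumDB":
        return f"v{version}"
    return f"{component}/v{version}"
```

This component prefix is what lets independently versioned packages coexist in one monorepo without their tags colliding.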
When a NeuroQuantumDB release is created:
- Multi-platform binaries: Linux x86_64, macOS arm64, macOS x86_64, Windows x86_64
- WASM package: npm tarball for browser integration
- crates.io: All 5 crates published in dependency order
To enable automatic publishing, add the following repository secret:
| Secret | Purpose | How to Obtain |
|---|---|---|
| `CARGO_REGISTRY_TOKEN` | Publish crates to crates.io | Generate at https://crates.io/settings/tokens |
Setup Steps:
- Go to Repository → Settings → Secrets and variables → Actions
- Add `CARGO_REGISTRY_TOKEN` as a repository secret
- Ensure GitHub Actions has write permissions (Settings → Actions → General → Workflow permissions → Read and write permissions)
When adding a new library (e.g., Python, Node.js):
1. Add to `release-please-config.json`:

   ```json
   "connecting-libraries/python": {
     "release-type": "python",
     "component": "python-driver",
     "include-component-in-tag": true,
     "bump-minor-pre-major": false
   }
   ```

2. Add to `.release-please-manifest.json`:

   ```json
   "connecting-libraries/python": "0.1.0"
   ```

3. Add the scope mapping (document it in this README under Scope Mapping)

4. Create `connecting-libraries/<library>/CHANGELOG.md`

5. Add a publish job to `.github/workflows/release.yml` if needed (e.g., PyPI, npm)