Open-Source Intelligent Command Layer
Updated Apr 16, 2026 - Python
Cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.
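Prompt-driven orchestration over a DAG of model/plugin nodes reduces to a topological walk that feeds each node the outputs of its dependencies. A minimal sketch using only the standard library (the node names and `run_dag` helper are hypothetical, not this tool's API):

```python
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """Run callables in dependency order, feeding each the outputs of its deps.

    tasks: {name: callable(dep_results: dict) -> result}
    deps:  {name: set of upstream task names}
    """
    results = {}
    for name in TopologicalSorter(deps).static_order():
        upstream = {d: results[d] for d in deps.get(name, ())}
        results[name] = tasks[name](upstream)
    return results

# Hypothetical three-node workflow: prompt -> draft -> review
tasks = {
    "prompt": lambda up: "Write a haiku about rain",
    "draft":  lambda up: f"[draft of: {up['prompt']}]",
    "review": lambda up: up["draft"].upper(),
}
deps = {"prompt": set(), "draft": {"prompt"}, "review": {"draft"}}
```

In a real workflow each lambda would be a call into a local model or plugin; the executor itself stays this small.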
Local-first Personal AI Memory OS - RAG over your entire life. Git, notes, calendar, location. 100% offline. No cloud.
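The retrieval half of a local-first RAG setup can be approximated offline with a simple token-overlap score in place of real embeddings. A toy sketch (the scoring scheme is illustrative, not this project's):

```python
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenization, keeping only alphanumeric tokens."""
    return [w for w in text.lower().split() if w.isalnum()]

def retrieve(query, docs, k=2):
    """Rank docs by token overlap with the query (a stand-in for embeddings)."""
    q = Counter(tokenize(query))
    scored = []
    for doc in docs:
        d = Counter(tokenize(doc))
        overlap = sum((q & d).values())  # multiset intersection size
        scored.append((overlap, doc))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

notes = [
    "meeting with alice about calendar sync",
    "git rebase workflow notes",
    "grocery list milk eggs",
]
```

A production system would swap `retrieve` for a vector index over local embeddings, but the query-in, ranked-chunks-out contract is the same.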
🎬 Nano Cinema: an all-in-one local AI video production studio. It automatically orchestrates Llama-3 (script), SDXL-Turbo (visuals), EdgeTTS (audio), and LTX-Video (motion) into a seamless Python workflow. Create cinematic short films with no API fees, full privacy, and professional-grade editing logic included. 🚀
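A script→visuals→audio→motion pipeline like this boils down to chaining stage functions, each consuming the previous stage's artifact. A stub sketch with the model calls replaced by string placeholders (none of these bodies reflect Nano Cinema's actual code):

```python
def write_script(idea):        # stand-in for the Llama-3 script stage
    return f"SCRIPT({idea})"

def render_visuals(script):    # stand-in for SDXL-Turbo frame generation
    return f"FRAMES({script})"

def synthesize_audio(script):  # stand-in for EdgeTTS narration
    return f"AUDIO({script})"

def animate(frames, audio):    # stand-in for LTX-Video motion synthesis
    return f"FILM({frames}+{audio})"

def produce(idea):
    """Run the full pipeline: script once, then fan out to visuals and audio."""
    script = write_script(idea)
    return animate(render_visuals(script), synthesize_audio(script))
```

Note that visuals and audio both consume the script, so those two stages could run concurrently before the final motion pass.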
An intelligent local AI agent powered by open-source LLMs, featuring free web search, hybrid memory, and context-aware query rewriting for real-time, grounded answers.
**LocalEcho** is a fully local, open-source text-to-speech engine powered by **Qwen3 TTS** models.
A web-based editor for HWP / HWPX files that can open and edit them directly in the browser. You can modify Hangul documents without installing any separate program, and even use local AI (Ollama) to get Korean synonym suggestions.
A lightweight, self-contained Python project for running a local large language model (LLM) with minimal dependencies. It uses TinyLlama-1.1B-Chat-v1.0 with llama-cpp-python for inference and Rich for a user-friendly console chat interface.
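The core of such a project is a prompt-formatting helper plus a read-eval loop over the model. A minimal sketch, assuming llama-cpp-python and Rich are installed and a TinyLlama GGUF file exists at the (hypothetical) path below:

```python
def format_chat(history):
    """Flatten (role, text) turns into the Zephyr-style prompt TinyLlama-Chat expects."""
    parts = [f"<|{role}|>\n{text}</s>" for role, text in history]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

def main():
    from llama_cpp import Llama          # lazy import: heavy native dependency
    from rich.console import Console

    console = Console()
    llm = Llama(model_path="models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf", n_ctx=2048)
    history = [("system", "You are a concise assistant.")]
    while True:
        user = console.input("[bold cyan]you> [/]")
        history.append(("user", user))
        out = llm(format_chat(history), max_tokens=256, stop=["</s>"])
        reply = out["choices"][0]["text"].strip()
        history.append(("assistant", reply))
        console.print(f"[green]{reply}[/]")

if __name__ == "__main__":
    main()
```

The prompt template is model-specific; a different chat model would need its own `format_chat`.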
Lightweight Ruby gem for interacting with locally running Ollama LLMs with streaming, chat, and full offline privacy.
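Under the hood, a streaming Ollama client consumes newline-delimited JSON chunks from the local server's `/api/generate` endpoint, each carrying a `response` fragment until `done` is true. The parsing side, sketched here in Python for illustration (the gem itself is Ruby):

```python
import json

def iter_fragments(ndjson_lines):
    """Yield text fragments from Ollama's streaming NDJSON chunks."""
    for line in ndjson_lines:
        if not line or not line.strip():
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        yield chunk.get("response", "")

def generate(prompt, model="llama3"):
    """Illustrative only: streams from a local Ollama server on the default port."""
    import requests
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
    )
    return "".join(iter_fragments(r.iter_lines(decode_unicode=True)))
```

Because the server streams token-by-token, a client can print each fragment as it arrives rather than joining at the end.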
Local AI desktop app built for a single user. No accounts. No teams. No telemetry. Just you and your models.
Setup guide for an AI mini PC: hosting local LLMs via LM Studio in an RDP/headless-GUI setup. This example uses a Minisforum AI X1 Pro (AMD Ryzen AI 9 HX 370, 64 GB RAM).
Local-first desktop AI daemon that runs fully offline. Tracks active desktop context, exposes a CLI, streams responses from local LLMs via Ollama, and runs as a systemd user service. Built for systems-level learning: IPC, daemons, streaming inference, OS integration.
Run GGUF models in a GUI: easy, fast, and 100% local.
An automated AI-driven report-generation script designed to help complete assignments in minutes. With local or cloud-based AI integration, users can produce detailed reports effortlessly.
AI chatbot, image creation, image stylization, and video generation powered by an Intel® Arc™ A770 GPU: a high-performance local AI creation platform with chat, image generation, image stylization, and video generation, including unrestricted NSFW image/video creation.
A fully local desktop AI assistant built in C++ with wxWidgets, powered by llama.cpp and running offline.