The AI-Native Desktop Experience built on KDE Neon
Turing AI OS is a customized, intelligent layer built on top of the KDE Neon Linux distribution. It utilizes Ollama as a local, privacy-first AI backend and deeply integrates a suite of intelligent tools right into your desktop workflow.
Turing AI OS seamlessly weaves artificial intelligence into your daily tasks through a set of beautifully crafted, glassmorphism-styled PyQt6 applications:
- 💬 **Turing Sidebar** (`ui/sidebar.py`): A persistent, translucent AI assistant that lives on the edge of your screen. It features an SSD-backed memory system, so Turing remembers previous interactions even after a reboot.
- 🔍 **Spotlight Search** (`ui/spotlight.py`): A lightning-fast, floating command palette. Press a shortcut, type a natural-language query or command, and get instant streaming answers from the local LLM.
- 💻 **Turing Shell** (`ui/turing_shell.py`): A natural-language terminal built with `rich`. Don't know how to do something in Ubuntu/KDE? Just ask in plain English. Turing Shell translates your intent into precise Bash commands, explains them to you, and executes them upon your confirmation.
- 👁️ **Turing Vision** (`ui/vision.py`): Context-aware AI analysis. Point it at a file, a script, or an entire project directory, and Turing Vision will read the contents, analyze the structure, and provide a comprehensive explanation of what it does.
- 🎛️ **AI Control Panel** (`ui/control_panel.py`): Your central hub for AI settings. Easily swap out local models, download new models from Ollama, wipe long-term vector memory, and tweak the system's generation temperature.
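To give a feel for how Turing Shell can rely on "precise Bash commands" coming back from a chatty LLM, here is a minimal sketch of the two pieces such a tool needs: a constraining system prompt and a defensive cleanup step. The prompt text and helper name below are illustrative assumptions, not the project's actual `skills/shell_ops.py`:

```python
# Illustrative only: the real prompt lives in skills/shell_ops.py.
SHELL_SYSTEM_PROMPT = (
    "You are a command translator for KDE Neon/Ubuntu. "
    "Reply with exactly one valid Bash command. "
    "No markdown fences, no backticks, no explanations."
)

def strip_markdown_fences(reply: str) -> str:
    """Defensive cleanup: models sometimes wrap commands in ``` fences anyway."""
    lines = [l for l in reply.strip().splitlines()
             if not l.strip().startswith("```")]
    return "\n".join(lines).strip()
```

Even with a strict system prompt, small local models occasionally emit fenced output, so stripping fences before showing the command for confirmation is a cheap safety net.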
Turing AI OS is designed for speed, privacy, and low resource usage (it runs responsively even on modest CPUs such as an Intel i3), executing all AI tasks locally.
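"Local execution" here means every request stays on localhost. The project's actual bridge is `core/llm_engine.py` via `langchain-ollama`, but as a dependency-free sketch, a client could talk directly to Ollama's `/api/generate` HTTP endpoint on the default port 11434 (helper names below are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str, stream: bool = True) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": stream}

def stream_completion(model: str, prompt: str):
    """Yield response fragments from a locally running Ollama server.

    Requires `ollama serve` to be running; Ollama streams one JSON
    object per line when "stream" is true.
    """
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            chunk = json.loads(line)
            yield chunk.get("response", "")
```

Only `build_request` is pure; `stream_completion` needs a running `ollama serve` in the background.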
- `core/llm_engine.py`: The bridge to the Ollama backend. It uses `langchain-ollama` to interface with the local server, injecting the Turing OS system persona into every interaction and handling token streaming for a lag-free UI experience.
- `memory/chroma_db_manager.py`: A local vector database using `chromadb`. All Sidebar conversations are embedded and saved to SSD. When you talk to Turing, it silently searches this memory bank to construct augmented prompts.
- `skills/`: The system's action layer.
  - `file_ops.py`: Allows the AI to read your directories and files.
  - `shell_ops.py`: A specialized system prompt that forces the LLM to output valid Bash commands without Markdown.
- `ui/`: The graphical layer. All components are built with PyQt6, utilizing frameless windows, translucent backgrounds, and drop shadows to match the custom KDE Neon aesthetics.
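The "augmented prompts" step above is, at its core, plain prompt assembly: snippets retrieved from the vector store are prepended to the user's message before it reaches the LLM. The stdlib-only function below illustrates the idea; in the real system the snippets come from a `chromadb` similarity search, and the exact prompt layout is an assumption:

```python
def build_augmented_prompt(user_message: str,
                           memories: list[str],
                           limit: int = 3) -> str:
    """Prepend the most relevant remembered snippets to the user's message.

    In Turing AI OS the snippets come from chromadb; here they are passed
    in directly to keep the sketch self-contained.
    """
    context = "\n".join(f"- {m}" for m in memories[:limit])
    if not context:
        # Nothing remembered: pass the message through unchanged.
        return user_message
    return (
        "Relevant past conversation:\n"
        f"{context}\n\n"
        f"User: {user_message}"
    )
```

This is why memory "silently" works: the LLM never sees a special API, only a longer prompt.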
- OS: KDE Neon (or any modern Linux distribution)
- Python: 3.12 or newer
- Ollama: Installed and running in the background (Download Ollama)
- **Clone the Repository**

  ```bash
  git clone https://github.com/yourusername/turing-ai-os.git
  cd turing-ai-os
  ```

- **Install Python Dependencies** (using a virtual environment is recommended):

  ```bash
  pip install -r requirements.txt
  ```

- **Start the Ollama Service** and ensure your local AI engine is running:

  ```bash
  ollama serve
  ```

- **Pull the Default Neural Engine**. Turing AI OS defaults to `qwen2.5:1.5b` for extreme speed and efficiency:

  ```bash
  ollama pull qwen2.5:1.5b
  ```

- **Launch the Utilities**. You can map these Python scripts to global KDE keyboard shortcuts to launch them instantly:

  ```bash
  python ui/sidebar.py        # Launch the sliding assistant
  python ui/spotlight.py      # Launch the quick command palette
  python ui/control_panel.py  # Edit settings and download models
  python ui/turing_shell.py   # Start the Natural Language Terminal
  ```
To use Turing Vision, pass a file or folder path as an argument:

```bash
python ui/vision.py /path/to/my/code/
```
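Conceptually, Turing Vision's first step is collecting the readable file contents under the given path into a single analysis context for the LLM. The sketch below shows that step with the stdlib only; the extension filter and per-file size cap are assumptions, not the project's actual `ui/vision.py` logic:

```python
import os

# Assumptions: which file types to read, and a per-file cap to keep prompts small.
TEXT_EXTENSIONS = {".py", ".sh", ".md", ".txt", ".json"}
MAX_BYTES = 20_000

def gather_context(path: str) -> str:
    """Concatenate readable text files under `path` into one LLM context string."""
    if os.path.isfile(path):
        files = [path]
    else:
        files = [os.path.join(root, name)
                 for root, _dirs, names in os.walk(path)
                 for name in sorted(names)]
    parts = []
    for f in files:
        if os.path.splitext(f)[1] not in TEXT_EXTENSIONS:
            continue
        with open(f, "r", encoding="utf-8", errors="replace") as fh:
            # Label each file so the model can attribute code to its source.
            parts.append(f"### {f}\n{fh.read(MAX_BYTES)}")
    return "\n\n".join(parts)
```

The resulting string is what gets handed to the local model along with an "explain this project" instruction.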
The system's behavior is controlled by `core/config.json`. You can edit this directly or use the graphical AI Control Panel.
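For scripting, the same settings file can be read and updated with nothing but the stdlib `json` module. The loader below targets the schema shown in this README; the merge helper and its name are illustrative, not part of the project's API:

```python
import json

def load_config(path: str = "core/config.json") -> dict:
    """Read the Turing AI OS settings file."""
    with open(path, "r", encoding="utf-8") as fh:
        return json.load(fh)

def set_active_model(config: dict, model_name: str) -> dict:
    """Return a copy of the config with a different active model.

    Copies rather than mutates, so the on-disk file is only changed
    when the caller explicitly writes the result back.
    """
    updated = dict(config)
    updated["model"] = {**config.get("model", {}), "active_llm": model_name}
    return updated
```

The AI Control Panel is the friendlier way to make the same changes.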
```json
{
  "model": {
    "active_llm": "qwen2.5:1.5b",
    "temperature": 0.3,
    "max_ram_usage_gb": 2.0
  },
  "memory": {
    "enabled": true,
    "vector_db_path": "./memory/chroma_db"
  },
  ...
}
```

Turing AI OS is released under the Apache License 2.0.