PodGist is an automated tool that "listens" to your podcast habits and creates concise summaries for you. It monitors your gPodder account (or a self-hosted instance) for played episodes, automatically downloads the audio, transcribes it using an OpenAI-compatible Whisper API, and generates a structured summary using any OpenAI-compatible LLM API — including OpenAI itself, Ollama, vLLM, llama.cpp, or Google Gemini (via its OpenAI compatibility endpoint).
> **Warning**
> PodGist is under active development. The master branch and beta releases may contain breaking changes between versions, including config file changes that require a manual migration. Always check the release notes before upgrading.
- 🔄 **Automated Sync**: Polls gPodder.net (or your self-hosted instance) for new "play" actions (episodes you've listened to).
- 📥 **Smart Downloading**: Automatically downloads the audio files for processed episodes.
- 🔒 **Flexible Transcription**: Sends audio to your configured Whisper server (self-hosted or remote).
- 🤖 **AI Summarization**: Generates summaries using any OpenAI-compatible LLM API (OpenAI, Ollama, vLLM, llama.cpp, Gemini, etc.).
- 🐳 **Docker Ready**: Easy to deploy with Docker Compose.
- ⚙️ **Highly Configurable**: Customize models, prompts, and paths easily.
| Tag | Description |
|---|---|
| `latest` / `v*.*.*` | Stable release, pinned to a specific version. Receives weekly security patches if it's the newest tag. |
| `master` | Rolling release, always tracks the latest commit on master. May break without notice. |
For production use, always pin to a specific version tag (e.g. `v1.0.0-beta.1`).
Get up and running without installing Python or building code.
- **Download Files**: Create a directory and save the following files into it:
  - `docker-compose.yml`
  - `config.example/` (the whole directory)

- **Configure**:
  - Copy `config.example/` to `config/`: `cp -r config.example config`
  - Edit `config/config.yaml` to set your preferences (paths, models, etc.).
  - If either file is missing, the app will create it in `config/` from the bundled examples on startup.
  - Set `llm.base_url` to your LLM server URL and `llm.model` to the model name.
  - Set `whisper.base_url` to your Whisper server URL.
  - Create a `.env` file in the same directory with your credentials:

    ```
    GPODDER_USERNAME=your_username
    GPODDER_PASSWORD=your_password
    LLM_API_KEY=your_llm_key         # Optional: only if your LLM server requires auth
    WHISPER_API_KEY=your_whisper_key # Optional: only if your Whisper server requires auth
    ```

- **Run**:

  ```bash
  docker compose up -d
  ```

  The service will start and check for episodes every 10 minutes.
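For reference, a minimal compose file for this setup might look like the sketch below. The image name, tag, and volume paths here are illustrative assumptions; use the `docker-compose.yml` shipped in the repository as the source of truth.

```yaml
# Hypothetical compose sketch; image name and volume layout are assumptions.
services:
  podgist:
    image: ghcr.io/hddq/podgist:v1.0.0-beta.1  # pin a version tag for production
    restart: unless-stopped
    env_file: .env                             # gPodder and API credentials
    volumes:
      - ./config:/app/config                   # config.yaml and prompt files
      - ./data:/app/data                       # downloaded audio and summaries
```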
- **Prerequisites**:
  - Python 3.14
  - ffmpeg
  - Access to an OpenAI-compatible Whisper server
  - Access to an OpenAI-compatible LLM server (e.g. OpenAI, Ollama, vLLM, llama.cpp)

- **Clone & Install**:

  ```bash
  git clone https://github.com/hddq/podgist.git
  cd podgist
  python3 -m venv venv
  source venv/bin/activate
  pip3 install -e .
  ```

- **Configure**:
  - Copy `.env.example` to `.env` and fill in credentials.
  - Copy `config.example/` to `config/`: `cp -r config.example config`
  - **Important**: Set `llm.base_url` and `llm.model` to your LLM backend.
  - **Important**: Set `whisper.base_url` to your Whisper server endpoint.
  - If either file is missing, the app will create it in `config/` from the bundled examples on startup.
  - `.python-version` is the source of truth for the Python minor version. After changing it, run `python3 scripts/sync_python_version.py` to update `pyproject.toml`, `Dockerfile`, and `README.md`.

- **Run**:

  ```bash
  python3 src/main.py
  ```
You can change how the summaries are generated by editing `config/prompt.md`. The `{transcript}` placeholder will be replaced by the actual text of the episode.

For long transcripts, PodGist automatically splits the transcript into chunks and uses a map-reduce flow. The per-chunk prompt lives in `config/prompt_chunk.md`, and the final synthesis prompt lives in `config/prompt_final.md`. In the chunk prompt, `{transcript}`, `{chunk_index}`, and `{total_chunks}` are available placeholders.
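To illustrate how the chunk-prompt placeholders fit together, here is a minimal sketch of the map stage in Python. The function names, the paragraph-based splitting strategy, and the chunk size are assumptions for illustration, not PodGist's actual internals.

```python
# Illustrative sketch of the map stage of a map-reduce summarization flow;
# helper names and chunk size are assumptions, not PodGist internals.

def split_transcript(transcript: str, chunk_chars: int = 8000) -> list[str]:
    """Pack paragraphs into chunks of at most roughly chunk_chars characters."""
    chunks, current = [], ""
    for para in transcript.split("\n\n"):
        if current and len(current) + len(para) > chunk_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def render_chunk_prompts(transcript: str, chunk_template: str) -> list[str]:
    """Fill the per-chunk template's placeholders for each chunk."""
    chunks = split_transcript(transcript)
    return [
        chunk_template.format(
            transcript=chunk,
            chunk_index=i + 1,       # 1-based, as a reader would expect
            total_chunks=len(chunks),
        )
        for i, chunk in enumerate(chunks)
    ]

template = "Part {chunk_index}/{total_chunks}:\n{transcript}"
prompts = render_chunk_prompts("intro\n\nmain topic\n\noutro", template)
print(prompts[0])  # a short transcript yields a single chunk
```

Each rendered chunk prompt would then be sent to the LLM, and the per-chunk summaries fed into the final synthesis prompt from `config/prompt_final.md`.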
PodGist uses the OpenAI Python library under the hood, so it works with any server that exposes an OpenAI-compatible API. Just point `llm.base_url` and `whisper.base_url` at your servers.
Example backends:
| Backend | `llm.base_url` |
|---|---|
| OpenAI | https://api.openai.com |
| Ollama | http://localhost:11434 |
| vLLM | http://localhost:8000 |
| llama.cpp | http://localhost:8080 |
| Gemini | https://generativelanguage.googleapis.com/v1beta/openai |
- `llm.extra_body` lets you pass backend-specific request fields, such as Ollama `options` or other compatible extensions.
- `llm.auto_pull` enables automatic model pulling when using Ollama and the model is not yet downloaded.
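Putting these options together, a `config/config.yaml` pointed at a local Ollama backend might look like the sketch below. Key names beyond those described above (e.g. the `num_ctx` option) are illustrative; verify the exact structure against the bundled `config.example/config.yaml`.

```yaml
# Illustrative config fragment for a local Ollama backend (not authoritative).
llm:
  base_url: http://localhost:11434
  model: llama3.1
  auto_pull: true        # pull the model automatically if it is not downloaded
  extra_body:
    options:
      num_ctx: 16384     # Ollama-specific request field passed via extra_body
whisper:
  base_url: http://localhost:9000
```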