ComfyUI docker images for use in GPU cloud and local environments. Includes AI-Dock base for authentication and improved user experience.
Updated Nov 4, 2024 · Shell
Automatic1111 serverless worker.
RunPod serverless worker for Fooocus-API, standalone or with a network volume.
RunPod Serverless Worker for Face Swapper and Restoration powered by insightface 🔥
RunPod Serverless Worker for the ComfyUI Stable Diffusion API
RunPod Serverless Worker for the Automatic1111 Stable Diffusion API
Getting started with a serverless endpoint on RunPod by creating a custom worker
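A RunPod custom worker of the kind the getting-started repo describes boils down to a single handler function that receives a job dict, with the payload under job["input"], registered via the runpod SDK. A minimal sketch of that handler contract (the echo logic and "prompt" key are illustrative assumptions; the runpod.serverless.start call is shown as a comment since the SDK is only present inside a worker container):

```python
# Minimal RunPod-style worker handler sketch (hypothetical echo worker).
# In a real worker you would register it with the RunPod SDK:
#   import runpod
#   runpod.serverless.start({"handler": handler})

def handler(job):
    """Receive a RunPod job dict; the request payload lives under job["input"]."""
    prompt = job["input"].get("prompt", "")
    # A real worker runs inference here; this sketch just echoes the prompt.
    return {"output": prompt.upper()}

if __name__ == "__main__":
    print(handler({"input": {"prompt": "hello"}}))  # {'output': 'HELLO'}
```

The endpoint then maps each HTTP request body's "input" object to the job dict, and the handler's return value becomes the job's output.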
RunPod Serverless Worker for Real-ESRGAN Restoration and Upscaling
Runpod-LLM provides ready-to-use container scripts for running large language models (LLMs) easily on RunPod.
RunPod Serverless Worker for the Stable Diffusion WebUI Forge API
RunPod Serverless Worker for Oobabooga Text Generation API for LLMs
🎥 Animate reference images using motion from pose videos with the easy-to-use ComfyUI integration for DreamID-V.
InstantID : Zero-shot Identity-Preserving Generation in Seconds | RunPod Serverless Worker
RunPod serverless worker for vLLM text-generation inference. Simple, optimized, and customizable.
LLaVA: Large Language and Vision Assistant | RunPod Serverless Worker
Run custom LLMs via Ollama on the RunPod Serverless Load Balancer.
Adds diarization to the faster-whisper RunPod worker.
vLLM middleware that wraps RunPod-style endpoints so you can proxy fine-tuning requests from the OpenAI SDK, launch jobs, and shuttle training artifacts through an S3-compatible store for custom vLLM fine-tuning workflows.