Recommendarr is an automation service inspired by Sonarr and Radarr. It watches for new releases across TV, anime, and film, evaluates them against user-authored interest profiles, and queues matching titles so operators can forward approved picks to Overseerr.
- Let users describe their preferences once and automatically surface matching titles.
- Keep the initial footprint small: single FastAPI service, SQLite storage, and background tasks.
- Make the matching engine explainable by capturing LLM rationales and decision metadata.
- Preserve user control with optional review queues and per-profile Overseerr routing.
- Web UI for CRUD of interest profiles, including previewing the LLM prompt payload.
- Manual and feed-driven ingestion of release metadata (start with manual form upload, add RSS/traceable feeds next).
- LLM-powered scoring of each release against active profiles with persisted verdicts.
- Queue-based Overseerr requests with manual approval before any title leaves Recommendarr.
- Basic operational tooling: health check, audit log, and `.env`-based configuration.
Recommendarr will launch as a FastAPI app serving both JSON APIs and a lightweight server-rendered admin UI (Jinja + HTMX). SQLite (via SQLAlchemy) backs all persistence. Background jobs (FastAPI BackgroundTasks / asyncio scheduler) handle feed polling and asynchronous OpenAI calls.
- API Server: FastAPI app exposing REST endpoints for profiles, releases, matches, and Overseerr hand-offs.
- Web UI: Jinja templates + HTMX partials for profile management, release submissions, and match review.
- Persistence Layer: SQLAlchemy ORM backed by SQLite with Alembic migrations.
- Matching Service: Prompt builder + OpenAI client that returns a structured verdict (match flag, confidence, rationale, tags); a schema sketch follows this list.
- Integration Layer: Clients for Overseerr API, release ingestion feeds (placeholder for now), and OpenAI.
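For illustration, the structured verdict could be expressed as a small Pydantic schema. `MatchVerdict` and the exact field declarations are assumptions, not shipped code; only the field list comes from the description above:

```python
# A sketch of the verdict schema, assuming Pydantic v2; the class name and
# field options are illustrative.
from pydantic import BaseModel, Field


class MatchVerdict(BaseModel):
    match: bool                                    # the match flag
    confidence: float = Field(ge=0.0, le=1.0)      # compared against AUTO_REQUEST_CONFIDENCE
    rationale: str                                 # LLM explanation, persisted for auditing
    tags: list[str] = Field(default_factory=list)  # free-form labels from the model
```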
- `interest_profiles`: id, name, persona_text, content_filters (json), enabled, auto_approve, created_at, updated_at (modeled below).
- `incoming_releases`: id, title, type, source, external_ids (json), metadata (json), received_at, status.
- `match_results`: id, release_id, profile_id, verdict (match/pending/rejected), confidence, rationale, llm_payload (json), created_at.
- `match_requests`: id, match_id, status (pending/accepted/rejected/failed), created_at, resolved_at, message.
- `overseerr_requests`: id, match_id, overseerr_request_id, status, requested_at, resolved_at.
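As a sketch of how one of these tables might look in SQLAlchemy 2.0 declarative style; the class name and column options are illustrative, and only the column names come from the list above:

```python
# Minimal sketch of the interest_profiles table, assuming SQLAlchemy 2.0.
from datetime import datetime
from sqlalchemy import JSON, Boolean, String, func
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class InterestProfile(Base):
    __tablename__ = "interest_profiles"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(120), unique=True)
    persona_text: Mapped[str]
    content_filters: Mapped[dict] = mapped_column(JSON, default=dict)
    enabled: Mapped[bool] = mapped_column(Boolean, default=True)
    auto_approve: Mapped[bool] = mapped_column(Boolean, default=False)
    created_at: Mapped[datetime] = mapped_column(server_default=func.now())
    updated_at: Mapped[datetime] = mapped_column(server_default=func.now(), onupdate=func.now())
```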
- Normalize incoming release metadata (title, synopsis, genres, runtime, region) and store it.
- For each active profile, build an OpenAI prompt combining the profile persona and release summary.
- Call OpenAI (e.g., `gpt-4o-mini`) with JSON-mode instructions; capture verdict, rationale, and tags (sketched after this list).
- Persist the decision and confidence; create or update a request queue entry when the verdict is a match.
- When a queued request is approved, trigger Overseerr's API and capture the response for auditing.
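A minimal sketch of the scoring call, assuming the `openai>=1.x` Python SDK with JSON mode; the prompt wording and the `score_release` helper are placeholders, not the real templates:

```python
# Hedged sketch of the LLM scoring step; reads OPENAI_API_KEY from the environment.
import json
from openai import OpenAI

client = OpenAI()


def score_release(persona_text: str, release_summary: str, model: str = "gpt-4o-mini") -> dict:
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},  # force a parseable JSON body
        messages=[
            {"role": "system", "content": (
                "You match media releases to a viewer profile. Reply with JSON: "
                '{"match": bool, "confidence": 0-1, "rationale": str, "tags": [str]}'
            )},
            {"role": "user", "content": f"Profile:\n{persona_text}\n\nRelease:\n{release_summary}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

JSON mode guarantees a parseable body but not a particular shape, so the response parser should still validate the payload against the verdict schema.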
Release submissions through `POST /releases` now schedule the full evaluation flow in the background. The worker records `match_results`, updates the release status, and creates a `match_requests` row whenever a profile verdict is a match, so operators can approve or reject the title before Overseerr sees it. A scheduling sketch follows.
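The hand-off could look like the following, using FastAPI's `BackgroundTasks`; `store_release` and `evaluate_release` are stand-in names for the real persistence and matching services:

```python
# Sketch of the background hand-off on release submission.
from fastapi import APIRouter, BackgroundTasks

router = APIRouter()


async def store_release(payload: dict) -> int:
    ...  # persist an incoming_releases row, return its id


def evaluate_release(release_id: int) -> None:
    ...  # score against active profiles, write match_results / match_requests


@router.post("/releases", status_code=202)
async def submit_release(payload: dict, background_tasks: BackgroundTasks):
    release_id = await store_release(payload)
    # The task runs after the 202 response is sent, keeping the request fast.
    background_tasks.add_task(evaluate_release, release_id)
    return {"id": release_id, "status": "queued"}
```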
- Each LLM decision includes an `auto_request` diagnostic payload summarising whether the match cleared the confidence threshold, even though forwarding now requires a manual approval step.
- Matches stay in the queue with a `pending` status until they are approved, rejected, or a dispatch attempt fails.
- Approvals invoke the Overseerr client, persist an `overseerr_requests` record with the payload/response, and mark the queue entry as accepted. Rejections capture the operator's note and resolved timestamp without contacting Overseerr (see the approval sketch below).
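A sketch of what an approval might do; the `OverseerrClient`-style call, the ORM model names, and the module path are hypothetical stand-ins for the integration layer described above:

```python
# Hedged sketch of the approval hand-off, assuming an async SQLAlchemy session.
from datetime import datetime, timezone

from app.models import MatchRequest, OverseerrRequest  # hypothetical module path


async def approve_request(request_id: int, session, overseerr) -> None:
    """Forward an approved match to Overseerr and record the outcome."""
    req = await session.get(MatchRequest, request_id)   # match_requests row
    response = await overseerr.submit(req.match_id)     # POST to Overseerr's API
    session.add(OverseerrRequest(                       # audit record with the response
        match_id=req.match_id,
        overseerr_request_id=response["id"],
        status=response["status"],
        requested_at=datetime.now(timezone.utc),
    ))
    req.status = "accepted"
    req.resolved_at = datetime.now(timezone.utc)
    await session.commit()
```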
- Profile list with quick status toggles and last match summary.
- Profile editor supporting persona text, optional filters (genres, language, minimum rating), auto-approve toggle.
- Release submission form for manual testing; drag/drop JSON or structured form.
- Match review and request queue screens with LLM rationale, confidence slider, and current request status.
- OpenAI: Matching inference; requires `OPENAI_API_KEY`.
- Overseerr: Request submission via REST API using a service account token.
- Content feeds: Start with manual uploads; future integrations include Radarr/Sonarr webhook or RSS.
- `POST /profiles` – create profile.
- `GET /profiles`, `GET /profiles/{id}`, `PATCH /profiles/{id}`, `DELETE /profiles/{id}`.
- `POST /releases` – ingest release payload (manual or webhook).
- `GET /releases/{id}` – show release with match summaries.
- `POST /matches/{id}/approve` – force Overseerr request.
- `GET /requests` – list queued match requests with status metadata.
- `POST /requests/{id}/approve` – approve a queued match and forward it to Overseerr.
- `POST /requests/{id}/reject` – reject a queued match without contacting Overseerr.
- `GET /health` – service heartbeat.
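For exercising these endpoints from Python (mirroring what `curl`/`jq` do in the container shell), a hedged walkthrough with `httpx`; the JSON payload fields are assumptions drawn from the data model, not a frozen contract:

```python
# Happy-path walkthrough against a local instance; port and fields are illustrative.
import httpx

with httpx.Client(base_url="http://localhost:8000") as api:
    profile = api.post("/profiles", json={
        "name": "cozy-scifi",
        "persona_text": "Character-driven sci-fi, no horror.",
    }).json()

    release = api.post("/releases", json={
        "title": "Example Station",
        "type": "tv",
        "source": "manual",
        "metadata": {"genres": ["sci-fi", "drama"]},
    }).json()

    # Later: inspect verdicts, then approve the queued request.
    print(api.get(f"/releases/{release['id']}").json())
    queued = api.get("/requests").json()
    if queued:
        api.post(f"/requests/{queued[0]['id']}/approve")
```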
recommendarr/
├─ app/
│ ├─ api/
│ │ ├─ routes/
│ │ └─ dependencies.py
│ ├─ core/ # settings, logging, startup hooks
│ ├─ models/ # SQLAlchemy models + Pydantic schemas
│ ├─ services/ # match engine, Overseerr client, feed intake
│ ├─ workers/ # background tasks & schedulers
│ └─ main.py # FastAPI entrypoint
├─ web/
│ ├─ templates/
│ └─ static/
├─ migrations/ # Alembic scripts
├─ tests/
└─ README.md
- Install Python 3.11+ and `uv` (or pip) for dependency management.
- Create a virtual environment: `python -m venv .venv && source .venv/bin/activate`.
- Install dependencies (placeholder): `pip install -r requirements.txt` once generated.
- Copy `.env.example` to `.env` and set keys (`OPENAI_API_KEY`, `OVERSEERR__URL`, `OVERSEERR__API_TOKEN`); a sample layout follows this list.
- Run the dev server: `uvicorn app.main:app --reload`.
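A possible `.env.example` layout, built from the keys listed in this README; every value is a placeholder, and the confidence default is an assumption:

```
# .env.example — sketch only; replace all values
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
OVERSEERR__URL=http://overseerr:5055
OVERSEERR__API_TOKEN=changeme
RELEASE_FEED_URLS=
AUTO_REQUEST_CONFIDENCE=0.75
```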
- Ensure the Docker services are up: `docker compose up --build -d`.
- Run the test suite inside the application container (Pytest and tooling are pre-installed): `docker compose exec app python -m pytest`.
- Alternatively, launch a one-off test run without attaching to the long-running container: `docker compose run --rm app python -m pytest`.
- The development image also bundles `curl` and `jq` for ad-hoc API checks while working in the container shell.
- The `app` container now starts `cron` on boot and schedules `scripts/run_tmdb_ingest.sh` for 03:00 server time each day (`0 3 * * *`).
- The wrapper script loads `.env` values so the TMDB credentials and Recommendarr URL are available when the job posts new releases.
- Adjust the frequency or arguments by editing `docker/cron.d/tmdb_ingest` (sketched below) or overriding the command in `scripts/run_tmdb_ingest.sh`.
- Trigger a manual run inside the container with `docker compose exec app /app/scripts/run_tmdb_ingest.sh` (or pass explicit CLI arguments).
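For reference, `docker/cron.d/tmdb_ingest` presumably looks something like the following; the schedule and script path come from this README, while the user field and log redirection are assumptions about the image:

```
# docker/cron.d/tmdb_ingest — sketch only
0 3 * * * root /app/scripts/run_tmdb_ingest.sh >> /var/log/tmdb_ingest.log 2>&1
```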
- `OPENAI_API_KEY`: OpenAI account key with access to the chosen model.
- `OPENAI_MODEL`: Default `gpt-4o-mini` (override for testing).
- `OVERSEERR__URL`, `OVERSEERR__API_TOKEN`: Overseerr API endpoint and auth.
- `RELEASE_FEED_URLS`: Comma-separated list for future feed polling.
- `AUTO_REQUEST_CONFIDENCE`: Confidence threshold (0-1) used when flagging matches as ready for Overseerr review.
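The double-underscore keys suggest nested settings; a sketch of how they could be loaded with `pydantic-settings` (class and field names are assumptions, and the confidence default is illustrative):

```python
# OVERSEERR__URL / OVERSEERR__API_TOKEN populate the nested model via
# env_nested_delimiter="__"; assumes pydantic-settings v2.
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict


class OverseerrSettings(BaseModel):
    url: str
    api_token: str


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_nested_delimiter="__")

    openai_api_key: str
    openai_model: str = "gpt-4o-mini"
    overseerr: OverseerrSettings
    release_feed_urls: str = ""           # comma-separated, parsed downstream
    auto_request_confidence: float = 0.75  # default is an assumption


settings = Settings()  # e.g. settings.overseerr.url
```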
- FastAPI router tests using `httpx.AsyncClient` + `pytest` fixtures (sketched below).
- Matching service unit tests with stubbed OpenAI responses (`tests/test_services.py`).
- Background workflow tests covering match persistence and Overseerr hand-off (`tests/test_workers.py`).
- Integration tests for the Overseerr client using recorded responses (VCR or mocked).
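A sketch of one such router test, assuming `pytest-asyncio` is configured and the app imports cleanly without live credentials; the test name is illustrative:

```python
# In-process router test via httpx's ASGI transport — no network, no server.
import httpx
import pytest

from app.main import app  # FastAPI entrypoint per the project layout


@pytest.mark.asyncio
async def test_health_returns_ok():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/health")
    assert response.status_code == 200
```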
- Scaffold FastAPI app structure with dependency injection and settings management.
- Implement SQLite models + Alembic migration pipeline.
- Build profile CRUD API/view + manual release submission UI.
- Implement OpenAI prompt templates and response parser with JSON schema validation.
- Add Overseerr client and approval workflow.
- Wire automated feed ingestion once manual flow is validated.
- Add logging, metrics, and request tracing.
- Preferred sources for release metadata (e.g., Trakt, TMDb, direct from Sonarr/Radarr).
- Handling duplicates and release versioning (quality upgrades, language variants).
- How aggressively to auto-request vs. prompt for confirmation when confidence is borderline.
- Budget constraints for LLM calls and batching strategy.
- Follow conventional commits and format code with `ruff`/`black` (configs TBD).
- Keep changes small and tested; include migration scripts when touching the schema.
- Document new endpoints and background jobs in this README as the project evolves.