
Recommendarr

Recommendarr is an automation service inspired by Sonarr and Radarr. It watches for new releases across TV, anime, and film, evaluates them against user-authored interest profiles, and queues matching titles so operators can forward approved picks to Overseerr.

Vision

  • Let users describe their preferences once and automatically surface matching titles.
  • Keep the initial footprint small: single FastAPI service, SQLite storage, and background tasks.
  • Make the matching engine explainable by capturing LLM rationales and decision metadata.
  • Preserve user control with optional review queues and per-profile Overseerr routing.

MVP Scope

  • Web UI for CRUD of interest profiles, including previewing the LLM prompt payload.
  • Manual and feed-driven ingestion of release metadata (start with manual form uploads; add RSS or similar automated feeds next).
  • LLM-powered scoring of each release against active profiles with persisted verdicts.
  • Queue-based Overseerr requests with manual approval before any title leaves Recommendarr.
  • Basic operational tooling: health check, audit log, and .env-based configuration.

Architecture Overview

Recommendarr will launch as a FastAPI app serving both JSON APIs and a lightweight server-rendered admin UI (Jinja + HTMX). SQLite (via SQLAlchemy) backs all persistence. Background jobs (FastAPI BackgroundTasks / asyncio scheduler) handle feed polling and asynchronous OpenAI calls.
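
A minimal sketch of that shape, assuming the module names from the planned layout below (handler bodies are placeholders):

# app/main.py — illustrative entrypoint; persistence is stubbed out.
from fastapi import BackgroundTasks, FastAPI

app = FastAPI(title="Recommendarr")

def evaluate_release(release_id: int) -> None:
    # Placeholder for the matching pipeline described below:
    # load active profiles, call OpenAI, persist match_results.
    ...

@app.post("/releases", status_code=202)
async def submit_release(payload: dict, background: BackgroundTasks):
    # Persist the raw payload first, then evaluate asynchronously so the
    # HTTP response does not wait on LLM calls.
    release_id = 1  # stand-in for the row created by the persistence layer
    background.add_task(evaluate_release, release_id)
    return {"status": "queued", "release_id": release_id}

@app.get("/health")
async def health():
    return {"status": "ok"}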

Core Components

  • API Server: FastAPI app exposing REST endpoints for profiles, releases, matches, and Overseerr hand-offs.
  • Web UI: Jinja templates + HTMX partials for profile management, release submissions, and match review.
  • Persistence Layer: SQLAlchemy ORM backed by SQLite with Alembic migrations.
  • Matching Service: Prompt builder + OpenAI client that returns a structured verdict (match flag, confidence, rationale, tags).
  • Integration Layer: Clients for Overseerr API, release ingestion feeds (placeholder for now), and OpenAI.

Data Model (initial tables)

  • interest_profiles: id, name, persona_text, content_filters (json), enabled, auto_approve, created_at, updated_at.
  • incoming_releases: id, title, type, source, external_ids (json), metadata (json), received_at, status.
  • match_results: id, release_id, profile_id, verdict (match/pending/rejected), confidence, rationale, llm_payload (json), created_at.
  • match_requests: id, match_id, status (pending/accepted/rejected/failed), created_at, resolved_at, message.
  • overseerr_requests: id, match_id, overseerr_request_id, status, requested_at, resolved_at.
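
As a sketch, two of these tables might map to SQLAlchemy 2.0 models like this (string lengths and defaults are assumptions; the remaining tables follow the same pattern):

# app/models/tables.py — illustrative SQLAlchemy 2.0 mappings.
from datetime import datetime

from sqlalchemy import JSON, ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class InterestProfile(Base):
    __tablename__ = "interest_profiles"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(200))
    persona_text: Mapped[str]
    content_filters: Mapped[dict] = mapped_column(JSON, default=dict)
    enabled: Mapped[bool] = mapped_column(default=True)
    auto_approve: Mapped[bool] = mapped_column(default=False)
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)
    updated_at: Mapped[datetime] = mapped_column(default=datetime.utcnow, onupdate=datetime.utcnow)

class IncomingRelease(Base):
    __tablename__ = "incoming_releases"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    status: Mapped[str] = mapped_column(String(16), default="received")

class MatchResult(Base):
    __tablename__ = "match_results"

    id: Mapped[int] = mapped_column(primary_key=True)
    release_id: Mapped[int] = mapped_column(ForeignKey("incoming_releases.id"))
    profile_id: Mapped[int] = mapped_column(ForeignKey("interest_profiles.id"))
    verdict: Mapped[str] = mapped_column(String(16))  # match / pending / rejected
    confidence: Mapped[float]
    rationale: Mapped[str]
    llm_payload: Mapped[dict] = mapped_column(JSON)
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)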

Matching Pipeline

  1. Normalize incoming release metadata (title, synopsis, genres, runtime, region) and store it.
  2. For each active profile, build an OpenAI prompt combining the profile persona and release summary.
  3. Call OpenAI (e.g., gpt-4o-mini) with JSON-mode instructions; capture verdict, rationale, and tags.
  4. Persist the decision and confidence; create or update a request queue entry when the verdict is a match.
  5. When a queued request is approved, trigger Overseerr's API and capture the response for auditing.

Release submissions through POST /releases now schedule the full evaluation flow in the background. The worker records match_results, updates release status, and creates a match_requests row whenever a profile verdict is a match so operators can approve or reject the title before Overseerr sees it.
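
Steps 2–3 could look roughly like the following, using the OpenAI Python SDK's JSON mode; the prompt wording and verdict field names are assumptions, not the final schema:

# app/services/matching.py — illustrative prompt build + JSON-mode call.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VERDICT_INSTRUCTIONS = (
    "You evaluate whether a release matches a viewer profile. "
    'Reply with JSON: {"match": bool, "confidence": float (0-1), '
    '"rationale": str, "tags": [str]}'
)

def score_release(persona_text: str, release_summary: str, model: str = "gpt-4o-mini") -> dict:
    """Return the parsed verdict for one profile/release pair."""
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},  # JSON mode
        messages=[
            {"role": "system", "content": VERDICT_INSTRUCTIONS},
            {"role": "user", "content": f"Profile:\n{persona_text}\n\nRelease:\n{release_summary}"},
        ],
    )
    return json.loads(response.choices[0].message.content)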

Request Queue Behaviour

  • Each LLM decision includes an auto_request diagnostic payload summarising whether the match cleared the confidence threshold, even though forwarding now requires a manual approval step.
  • Matches stay in the queue with a pending status until they are approved, rejected, or a dispatch attempt fails.
  • Approvals invoke the Overseerr client, persist an overseerr_requests record with the payload/response, and mark the queue entry as accepted. Rejections capture the operator's note and resolved timestamp without contacting Overseerr. Both transitions are sketched below.
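
A sketch of those two transitions, with plain dataclasses standing in for the ORM rows:

# Illustrative approval/rejection state machine for queue entries.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class QueuedRequest:
    id: int
    match_id: int
    status: str = "pending"  # pending / accepted / rejected / failed
    message: str | None = None
    resolved_at: datetime | None = None

def approve(entry: QueuedRequest, send_to_overseerr) -> None:
    """Dispatch to Overseerr; mark accepted on success, failed otherwise."""
    try:
        send_to_overseerr(entry.match_id)
    except Exception as exc:
        entry.status, entry.message = "failed", str(exc)
    else:
        entry.status = "accepted"
    entry.resolved_at = datetime.utcnow()

def reject(entry: QueuedRequest, note: str) -> None:
    """Record the operator's note without contacting Overseerr."""
    entry.status, entry.message = "rejected", note
    entry.resolved_at = datetime.utcnow()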

Web Interface

  • Profile list with quick status toggles and last match summary.
  • Profile editor supporting persona text, optional filters (genres, language, minimum rating), auto-approve toggle.
  • Release submission form for manual testing; drag/drop JSON or structured form.
  • Match review and request queue screens with LLM rationale, confidence slider, and current request status.

External Integrations

  • OpenAI: Matching inference; requires OPENAI_API_KEY.
  • Overseerr: Request submission via REST API using service account token.
  • Content feeds: Start with manual uploads; future integrations include Radarr/Sonarr webhook or RSS.
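
A minimal async client sketch using httpx, targeting Overseerr's POST /api/v1/request endpoint with an X-Api-Key header (mediaId is a TMDB id; error handling simplified):

# app/services/overseerr.py — minimal async client sketch.
import httpx

class OverseerrClient:
    def __init__(self, base_url: str, api_token: str):
        self._client = httpx.AsyncClient(
            base_url=base_url.rstrip("/"),
            headers={"X-Api-Key": api_token},
            timeout=15.0,
        )

    async def create_request(self, media_type: str, tmdb_id: int) -> dict:
        """Submit a media request; Overseerr expects mediaType and mediaId."""
        resp = await self._client.post(
            "/api/v1/request",
            json={"mediaType": media_type, "mediaId": tmdb_id},
        )
        resp.raise_for_status()
        return resp.json()

    async def aclose(self) -> None:
        await self._client.aclose()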

API Surface (initial draft)

  • POST /profiles – create profile.
  • GET /profiles, GET /profiles/{id}, PATCH /profiles/{id}, DELETE /profiles/{id}.
  • POST /releases – ingest release payload (manual or webhook).
  • GET /releases/{id} – show release with match summaries.
  • POST /matches/{id}/approve – force Overseerr request.
  • GET /requests – list queued match requests with status metadata.
  • POST /requests/{id}/approve – approve a queued match and forward it to Overseerr.
  • POST /requests/{id}/reject – reject a queued match without contacting Overseerr.
  • GET /health – service heartbeat.
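
As an illustration, the queue endpoints might be grouped into a router like this; the handler bodies are placeholders:

# app/api/routes/requests.py — illustrative router for the queue endpoints.
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/requests", tags=["requests"])

@router.get("")
async def list_requests(status: str | None = None):
    # Would query match_requests, optionally filtered by status.
    return []

@router.post("/{request_id}/approve")
async def approve_request(request_id: int):
    # Would load the queue entry, dispatch to Overseerr, and persist the outcome.
    raise HTTPException(status_code=501, detail="not implemented yet")

@router.post("/{request_id}/reject")
async def reject_request(request_id: int, note: str = ""):
    # Would mark the entry rejected with the operator's note; no Overseerr call.
    raise HTTPException(status_code=501, detail="not implemented yet")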

Project Layout (planned)

recommendarr/
├─ app/
│  ├─ api/
│  │  ├─ routes/
│  │  └─ dependencies.py
│  ├─ core/            # settings, logging, startup hooks
│  ├─ models/          # SQLAlchemy models + Pydantic schemas
│  ├─ services/        # match engine, Overseerr client, feed intake
│  ├─ workers/         # background tasks & schedulers
│  └─ main.py          # FastAPI entrypoint
├─ web/
│  ├─ templates/
│  └─ static/
├─ migrations/         # Alembic scripts
├─ tests/
└─ README.md

Local Setup

  1. Install Python 3.11+ and uv (or pip) for dependency management.
  2. Create a virtual environment: python -m venv .venv && source .venv/bin/activate.
  3. Install dependencies: pip install -r requirements.txt (placeholder until the requirements file is generated).
  4. Copy .env.example to .env and set keys (OPENAI_API_KEY, OVERSEERR__URL, OVERSEERR__API_TOKEN).
  5. Run the dev server: uvicorn app.main:app --reload.

Running Tests

  • Ensure the Docker services are up: docker compose up --build -d.
  • Run the test suite inside the application container (Pytest and tooling are pre-installed): docker compose exec app python -m pytest.
  • Alternatively, launch a one-off test run without attaching to the long-running container: docker compose run --rm app python -m pytest.
  • The development image also bundles curl and jq for ad-hoc API checks while working in the container shell.

TMDB Ingestion Automation

  • The app container now starts cron on boot and schedules scripts/run_tmdb_ingest.sh to run daily at 03:00 server time (0 3 * * *).
  • The wrapper script loads .env values so the TMDB credentials and Recommendarr URL are available when the job posts new releases.
  • Adjust the frequency or arguments by editing docker/cron.d/tmdb_ingest or overriding the command in scripts/run_tmdb_ingest.sh.
  • Trigger a manual run inside the container with docker compose exec app /app/scripts/run_tmdb_ingest.sh (or pass explicit CLI arguments).
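
The Python core of that job might resemble the sketch below; the TMDB discover endpoint is real, but the POST /releases payload shape and the environment variable names are assumptions:

# scripts/tmdb_ingest.py — illustrative core of the daily ingest job.
import os

import httpx

TMDB_URL = "https://api.themoviedb.org/3/discover/movie"

def ingest(recommendarr_url: str, tmdb_api_key: str) -> None:
    """Fetch recently released movies from TMDB and post them to Recommendarr."""
    discover = httpx.get(
        TMDB_URL,
        params={"api_key": tmdb_api_key, "sort_by": "primary_release_date.desc"},
        timeout=30.0,
    )
    discover.raise_for_status()
    for movie in discover.json().get("results", []):
        httpx.post(
            f"{recommendarr_url}/releases",
            json={
                "title": movie["title"],
                "type": "movie",
                "source": "tmdb",
                "external_ids": {"tmdb": movie["id"]},
                "metadata": {"overview": movie.get("overview", "")},
            },
            timeout=30.0,
        ).raise_for_status()

if __name__ == "__main__":
    # Env var names are illustrative; the wrapper script loads them from .env.
    ingest(os.environ["RECOMMENDARR_URL"], os.environ["TMDB_API_KEY"])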

Configuration & Secrets

  • OPENAI_API_KEY: OpenAI account key with access to chosen model.
  • OPENAI_MODEL: Default gpt-4o-mini (override for testing).
  • OVERSEERR__URL, OVERSEERR__API_TOKEN: Overseerr API endpoint and auth.
  • RELEASE_FEED_URLS: Comma-separated list for future feed polling.
  • AUTO_REQUEST_CONFIDENCE: Confidence threshold (0-1) used when flagging matches as ready for Overseerr review.
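
The double-underscore variable names suggest pydantic-settings with a nested env delimiter; a sketch, assuming pydantic-settings and an illustrative default threshold:

# app/core/settings.py — illustrative settings with nested env delimiters.
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict

class OverseerrSettings(BaseModel):
    url: str
    api_token: str

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_nested_delimiter="__")

    openai_api_key: str
    openai_model: str = "gpt-4o-mini"
    overseerr: OverseerrSettings       # populated from OVERSEERR__URL etc.
    release_feed_urls: str = ""        # comma-separated; split at use sites
    auto_request_confidence: float = 0.7  # default value is an assumption

settings = Settings()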

Testing Strategy

  • FastAPI router tests using httpx.AsyncClient + pytest fixtures.
  • Matching service unit tests with stubbed OpenAI responses (tests/test_services.py).
  • Background workflow tests covering match persistence and Overseerr hand-off (tests/test_workers.py).
  • Integration tests for Overseerr client using recorded responses (VCR or mocked).
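
The first bullet might translate to a test like this sketch (requires pytest-asyncio; httpx's ASGI transport drives the app without opening a network socket):

# tests/test_api.py — illustrative async router test.
import httpx
import pytest

from app.main import app  # the FastAPI entrypoint

@pytest.mark.asyncio
async def test_health():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        resp = await client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}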

Development Roadmap

  • Scaffold FastAPI app structure with dependency injection and settings management.
  • Implement SQLite models + Alembic migration pipeline.
  • Build profile CRUD API/view + manual release submission UI.
  • Implement OpenAI prompt templates and response parser with JSON schema validation.
  • Add Overseerr client and approval workflow.
  • Wire automated feed ingestion once manual flow is validated.
  • Add logging, metrics, and request tracing.

Open Questions

  • Preferred sources for release metadata (e.g., Trakt, TMDb, direct from Sonarr/Radarr).
  • Handling duplicates and release versioning (quality upgrades, language variants).
  • How aggressively to auto-request vs. prompt for confirmation when confidence is borderline.
  • Budget constraints for LLM calls and batching strategy.

Contributing

  • Follow conventional commits and format code with ruff / black (configs TBD).
  • Keep changes small and tested; include migration scripts when touching the schema.
  • Document new endpoints and background jobs in this README as the project evolves.
