24 Commits

Author SHA1 Message Date
Andy
9952758b38 feat(changelog): Update changelog with enhanced tagging configuration and improvements 2025-08-08 05:03:57 +00:00
Andy
f56e7c1ec8 chore(release): Bump version to 1.4.1 and update changelog with title caching features 2025-08-08 04:57:32 +00:00
Andy
096b7d70f8 Merge remote-tracking branch 'origin/main' into feature/title-caching 2025-08-08 04:50:46 +00:00
Andy
460878777d refactor(tags): Simplify Simkl search logic and soft-fail when no results found 2025-08-07 17:56:36 +00:00
Andy
9eb6bdbe12 feat(tags): Enhance tag_file function to prioritize provided TMDB ID if --tmdb is used 2025-08-06 22:15:16 +00:00
Andy
41d203aaba feat(config): Add options for tagging with group name and IMDB/TMDB details, and a new Simkl API endpoint when no TMDB API key is set. 2025-08-06 21:34:14 +00:00
Andy
0c6909be4e feat(dl): Update language option default to 'orig' if no -l is set, avoids hardcoded en 2025-08-06 21:33:23 +00:00
Andy
f0493292af feat: Implement title caching system to reduce API calls
- Add configurable title caching with fallback support
- Cache titles for 30 minutes by default, with 24-hour fallback on API failures
- Add --no-cache and --reset-cache CLI flags for cache control
- Implement region-aware caching to handle geo-restricted content
- Use SHA256 hashing for cache keys to handle complex title IDs
- Add cache configuration variables to config system
- Document new caching options in example config

This caching system significantly reduces redundant API calls when debugging
or modifying CLI parameters, improving both performance and reliability.
2025-08-06 17:08:58 +00:00
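The caching design described in this commit body can be sketched as a small TTL cache with a longer stale-fallback window. This is an illustrative sketch only — the class name `TitleCache` and its methods are hypothetical, not the project's actual `TitleCacher` API:

```python
import hashlib
import time


class TitleCache:
    """Illustrative TTL cache with a longer stale-fallback window."""

    def __init__(self, ttl: int = 1800, max_retention: int = 86400) -> None:
        self.ttl = ttl                      # fresh window: 30 minutes
        self.max_retention = max_retention  # stale fallback window: 24 hours
        self._store = {}                    # key -> (stored_at, value)

    @staticmethod
    def key(title_id: str, region: str) -> str:
        # SHA256 keys handle arbitrarily complex title IDs; including the
        # region keeps geo-restricted catalogues from colliding.
        return hashlib.sha256(f"{region}:{title_id}".encode()).hexdigest()

    def set(self, key: str, value: object) -> None:
        self._store[key] = (time.time(), value)

    def get(self, key: str, allow_stale: bool = False):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        age = time.time() - stored_at
        if age <= self.ttl:
            return value
        if allow_stale and age <= self.max_retention:
            return value  # serve stale data when the upstream API fails
        return None
```

On a fetch, the caller would try `get(key)` first, hit the API on a miss, and fall back to `get(key, allow_stale=True)` only if the API call raises.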
Andy
ead05d08ac fix(subtitle): Handle ValueError in subtitle filtering for multiple colons in time references; fixes issues with subtitles that contain multiple colons 2025-08-06 01:28:03 +00:00
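The failure mode behind this fix can be illustrated with a parser that soft-fails instead of raising. This is a hypothetical helper for illustration, not the project's actual filtering code:

```python
def parse_time_reference(text: str):
    """Parse an H:MM:SS or MM:SS reference; return None instead of raising.

    A naive `h, m, s = text.split(":")` raises ValueError on cue text such
    as "12:30:45:99" (more colons than expected); this rejects it gracefully.
    """
    parts = text.strip().split(":")
    if len(parts) not in (2, 3):
        return None  # too many (or too few) colon-separated fields
    try:
        numbers = [int(p) for p in parts]
    except ValueError:
        return None  # non-numeric fields
    while len(numbers) < 3:
        numbers.insert(0, 0)  # pad missing hours
    hours, minutes, seconds = numbers
    return hours * 3600 + minutes * 60 + seconds
```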
Andy
8c1f51a431 refactor: Remove Dockerfile and .dockerignore from the repository 2025-08-05 23:56:07 +00:00
Sp5rky
1d4e8bf9ec Update CHANGELOG.md 2025-08-05 17:43:57 -06:00
Andy
b4a1f2236e feat: Bump version to 1.4.0 and update changelog with new features and fixes 2025-08-05 23:37:45 +00:00
Andy
3277ab0d77 feat(playready): Enhance KID extraction from PSSH with base64 support and XML parsing 2025-08-05 23:28:30 +00:00
Andy
be0f7299f8 style(dl): Standardize quotation marks for service attribute checks 2025-08-05 23:27:59 +00:00
Andy
948ef30de7 feat(dl): Add support for services that do not support subtitle downloads 2025-08-05 20:22:08 +00:00
Andy
1bd63ddc91 feat(titles): Better detection of DV across all codecs in Episode and Movie classes; dvhe.05.06 was not being detected correctly. 2025-08-05 18:33:51 +00:00
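Dolby Vision tracks advertise codec strings such as `dvhe.05.06` (a four-character code followed by two-digit profile and level). A matching check might look like the following sketch — the exact pattern the project uses is not shown here:

```python
import re

# DV four-character codes: dvhe/dvh1 (HEVC-based), dvav/dva1 (AVC-based),
# followed by a two-digit profile and level, e.g. "dvhe.05.06".
DV_CODEC_RE = re.compile(r"^dv(?:he|h1|av|a1)\.\d{2}\.\d{2}$", re.IGNORECASE)


def is_dolby_vision(codec: str) -> bool:
    """Return True for Dolby Vision codec strings like 'dvhe.05.06'."""
    return bool(DV_CODEC_RE.match(codec.strip()))
```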
Andy
4dff597af2 feat(dl): Fix track selection to support combining -V, -A, -S flags
Previously, using multiple track selection flags like `-S -A` would not work
as expected. The flags were treated as mutually exclusive, resulting in only
one type of track being downloaded.

This change refactors the track selection logic to properly handle combinations:

- Multiple "only" flags now work together (e.g., `-S -A` downloads both)
- Exclusion flags (`--no-*`) continue to work and override selections
- Default behavior (no flags) remains unchanged

Fixes #10
2025-08-05 15:48:17 +00:00
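The semantics described in this commit — additive "only" flags with exclusions applied on top — can be sketched as follows (hypothetical helper, not the project's exact code):

```python
def select_track_types(video_only=False, audio_only=False, subs_only=False,
                       chapters_only=False, no_subs=False, no_audio=False,
                       no_chapters=False) -> set[str]:
    """Return the set of track types to keep.

    "only" flags are additive rather than mutually exclusive, and "no-*"
    exclusions are applied afterwards so they always win.
    """
    if video_only or audio_only or subs_only or chapters_only:
        keep = set()
        if video_only:
            keep.add("video")
        if audio_only:
            keep.add("audio")
        if subs_only:
            keep.add("subtitles")
        if chapters_only:
            keep.add("chapters")
    else:
        # Default behaviour (no flags) keeps everything.
        keep = {"video", "audio", "subtitles", "chapters"}
    if no_subs:
        keep.discard("subtitles")
    if no_audio:
        keep.discard("audio")
    if no_chapters:
        keep.discard("chapters")
    return keep
```

With this shape, `-S -A` maps to `subs_only=True, audio_only=True` and keeps both track types instead of only one.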
Andy
8dbdde697d feat(hybrid): Enhance extraction and conversion processes with dynamic spinning bars to follow the rest of the codebase. 2025-08-05 14:57:51 +00:00
Andy
63c697f082 feat(series): Enhance tree representation with season breakdown 2025-08-04 19:30:27 +00:00
Andy
3e0835d9fb feat(dl): Improve DRM track decryption handling 2025-08-04 19:30:27 +00:00
Andy
c6c83ee43b feat(dl): Enhance language selection for video and audio tracks, including original language support 2025-08-04 19:30:27 +00:00
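The original-language support added here resolves an 'orig' keyword to the title's original language at selection time. A minimal sketch of the expansion, assuming a simple list of language tags (hypothetical helper):

```python
from typing import Optional


def expand_languages(wanted: list[str], original: Optional[str]) -> list[str]:
    """Replace the 'orig' keyword with the title's original language.

    'orig' resolves to the original language when it is known, duplicates
    are dropped, and order is otherwise preserved.
    """
    result: list[str] = []
    for language in wanted:
        resolved = original if language == "orig" else language
        if resolved is None:
            continue  # original language unknown -- skip 'orig'
        if resolved not in result:
            result.append(resolved)
    return result
```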
Andy
507690834b feat(tracks): Add support for HLG color transfer characteristics in video arguments 2025-08-04 19:28:11 +00:00
Andy
f8a58d966b feat(subtitle): Add filtering for unwanted cues in WebVTT subtitles 2025-08-03 22:10:17 +00:00
Andy
8d12b735ff feat(dl): Add option to include forced subtitle tracks 2025-08-03 22:00:21 +00:00
20 changed files with 931 additions and 371 deletions

View File

@@ -1,62 +0,0 @@
# Logs and temporary files
Logs/
logs/
temp/
*.log
# Sensitive files
key_vault.db
unshackle/WVDs/
unshackle/PRDs/
unshackle/cookies/
*.prd
*.wvd
# Cache directories
unshackle/cache/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
# Development files
.git/
.gitignore
.vscode/
.idea/
*.swp
*.swo
# Documentation and plans
plan/
CONTRIBUTING.md
CONFIG.md
AGENTS.md
OLD-CHANGELOG.md
cliff.toml
# Installation scripts
install.bat
# Test files
*test*
*Test*
# Virtual environments
venv/
env/
.venv/
# OS generated files
.DS_Store
Thumbs.db

View File

@@ -5,6 +5,105 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.4.1] - 2025-08-08
### Added
- **Title Caching System**: Intelligent title caching to reduce redundant API calls
- Configurable title caching with 30-minute default cache duration
- 24-hour fallback cache on API failures for improved reliability
- Region-aware caching to handle geo-restricted content properly
- SHA256 hashing for cache keys to handle complex title IDs
- Added `--no-cache` CLI flag to bypass caching when needed
- Added `--reset-cache` CLI flag to clear existing cache data
- New cache configuration variables in config system
- Documented caching options in example configuration file
- Significantly improves performance when debugging or modifying CLI parameters
- **Enhanced Tagging Configuration**: New options for customizing tag behavior
- Added `tag_group_name` config option to control group name inclusion in tags
- Added `tag_imdb_tmdb` config option to control IMDB/TMDB details in tags
- Added Simkl API endpoint support as fallback when no TMDB API key is provided
- Enhanced tag_file function to prioritize provided TMDB ID when `--tmdb` flag is used
- Improved TMDB ID handling with better prioritization logic
### Changed
- **Language Selection Enhancement**: Improved default language handling
- Updated language option default to 'orig' when no `-l` flag is set
- Avoids hardcoded 'en' default and respects original content language
- **Tagging Logic Improvements**: Simplified and enhanced tagging functionality
- Simplified Simkl search logic with soft-fail when no results found
- Enhanced tag_file function with better TMDB ID prioritization
- Improved error handling in tagging operations
### Fixed
- **Subtitle Processing**: Enhanced subtitle filtering for edge cases
- Fixed ValueError in subtitle filtering for multiple colons in time references
- Improved handling of subtitles containing complex time formatting
- Better error handling for malformed subtitle timestamps
### Removed
- **Docker Support**: Removed Docker configuration from repository
- Removed Dockerfile and .dockerignore files
- Cleaned up README.md Docker-related documentation
- Focuses on direct installation methods
## [1.4.0] - 2025-08-05
### Added
- **HLG Transfer Characteristics Preservation**: Enhanced video muxing to preserve HLG color metadata
- Added automatic detection of HLG video tracks during muxing process
- Implemented `--color-transfer-characteristics 0:18` argument for mkvmerge when processing HLG content
- Prevents incorrect conversion from HLG (18) to BT.2020 (14) transfer characteristics
- Ensures proper HLG playback support on compatible hardware without manual editing
- **Original Language Support**: Enhanced language selection with 'orig' keyword support
- Added support for 'orig' language selector for both video and audio tracks
- Automatically detects and uses the title's original language when 'orig' is specified
- Improved language processing logic with better duplicate handling
- Enhanced help text to document original language selection usage
- **Forced Subtitle Support**: Added option to include forced subtitle tracks
- New functionality to download and include forced subtitle tracks alongside regular subtitles
- **WebVTT Subtitle Filtering**: Enhanced subtitle processing capabilities
- Added filtering for unwanted cues in WebVTT subtitles
- Improved subtitle quality by removing unnecessary metadata
### Changed
- **DRM Track Decryption**: Improved DRM decryption track selection logic
- Enhanced `get_drm_for_cdm()` method usage for better DRM-CDM matching
- Added warning messages when no matching DRM is found for tracks
- Improved error handling and logging for DRM decryption failures
- **Series Tree Representation**: Enhanced episode tree display formatting
- Updated series tree to show season breakdown with episode counts
- Improved visual representation with "S{season}({count})" format
- Better organization of series information in console output
- **Hybrid Processing UI**: Enhanced extraction and conversion processes
- Added dynamic spinning bars to follow the rest of the codebase design
- Improved visual feedback during hybrid HDR processing operations
- **Track Selection Logic**: Enhanced multi-track selection capabilities
- Fixed track selection to support combining -V, -A, -S flags properly
- Improved flexibility in selecting multiple track types simultaneously
- **Service Subtitle Support**: Added configuration for services without subtitle support
- Services can now indicate if they don't support subtitle downloads
- Prevents unnecessary subtitle download attempts for unsupported services
- **Update Checker**: Enhanced update checking logic and cache handling
- Improved rate limiting and caching mechanisms for update checks
- Better performance and reduced API calls to GitHub
### Fixed
- **PlayReady KID Extraction**: Enhanced KID extraction from PSSH data
- Added base64 support and XML parsing for better KID detection
- Fixed issue where only one KID was being extracted for certain services
- Improved multi-KID support for PlayReady protected content
- **Dolby Vision Detection**: Improved DV codec detection across all formats
- Fixed detection of dvhe.05.06 codec which was not being recognized correctly
- Enhanced detection logic in Episode and Movie title classes
- Better support for various Dolby Vision codec variants
## [1.3.0] - 2025-08-03
### Added

View File

@@ -1,78 +0,0 @@
FROM python:3.12-slim
# Set environment variables to reduce image size
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
UV_CACHE_DIR=/tmp/uv-cache
# Add container metadata
LABEL org.opencontainers.image.description="Docker image for Unshackle with all required dependencies for downloading media content"
# Install base dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
gnupg \
git \
curl \
build-essential \
cmake \
pkg-config \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Set up repos for mkvtools and bullseye for ccextractor
RUN wget -O /etc/apt/keyrings/gpg-pub-moritzbunkus.gpg https://mkvtoolnix.download/gpg-pub-moritzbunkus.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/gpg-pub-moritzbunkus.gpg] https://mkvtoolnix.download/debian/ bookworm main" >> /etc/apt/sources.list \
&& echo "deb-src [signed-by=/etc/apt/keyrings/gpg-pub-moritzbunkus.gpg] https://mkvtoolnix.download/debian/ bookworm main" >> /etc/apt/sources.list \
&& echo "deb http://ftp.debian.org/debian bullseye main" >> /etc/apt/sources.list
# Install all dependencies from apt
RUN apt-get update && apt-get install -y \
ffmpeg \
ccextractor \
mkvtoolnix \
aria2 \
libmediainfo-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install shaka packager
RUN wget https://github.com/shaka-project/shaka-packager/releases/download/v2.6.1/packager-linux-x64 \
&& chmod +x packager-linux-x64 \
&& mv packager-linux-x64 /usr/local/bin/packager
# Install N_m3u8DL-RE
RUN wget https://github.com/nilaoda/N_m3u8DL-RE/releases/download/v0.3.0-beta/N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz \
&& tar -xzf N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz \
&& mv N_m3u8DL-RE /usr/local/bin/ \
&& chmod +x /usr/local/bin/N_m3u8DL-RE \
&& rm N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz
# Create binaries directory and add symlinks for all required executables
RUN mkdir -p /app/binaries && \
ln -sf /usr/bin/ffprobe /app/binaries/ffprobe && \
ln -sf /usr/bin/ffmpeg /app/binaries/ffmpeg && \
ln -sf /usr/bin/mkvmerge /app/binaries/mkvmerge && \
ln -sf /usr/local/bin/N_m3u8DL-RE /app/binaries/N_m3u8DL-RE && \
ln -sf /usr/local/bin/packager /app/binaries/packager && \
ln -sf /usr/local/bin/packager /usr/local/bin/shaka-packager && \
ln -sf /usr/local/bin/packager /usr/local/bin/packager-linux-x64
# Install uv
RUN pip install --no-cache-dir uv
# Set working directory
WORKDIR /app
# Copy dependency files and README (required by pyproject.toml)
COPY pyproject.toml uv.lock README.md ./
# Copy source code first
COPY unshackle/ ./unshackle/
# Install dependencies with uv (including the project itself)
RUN uv sync --frozen --no-dev
# Set entrypoint to allow passing commands directly to unshackle
ENTRYPOINT ["uv", "run", "unshackle"]
CMD ["-h"]

View File

@@ -42,45 +42,6 @@ uv tool install git+https://github.com/unshackle-dl/unshackle.git
uvx unshackle --help # or just `unshackle` once PATH updated
```
### Docker Installation
Run unshackle using our pre-built Docker image from GitHub Container Registry:
```bash
# Run with default help command
docker run --rm ghcr.io/unshackle-dl/unshackle:latest
# Check environment dependencies
docker run --rm ghcr.io/unshackle-dl/unshackle:latest env check
# Download content (mount directories for persistent data)
docker run --rm \
-v "$(pwd)/unshackle/downloads:/app/downloads" \
-v "$(pwd)/unshackle/cookies:/app/unshackle/cookies" \
-v "$(pwd)/unshackle/services:/app/unshackle/services" \
-v "$(pwd)/unshackle/WVDs:/app/unshackle/WVDs" \
-v "$(pwd)/unshackle/PRDs:/app/unshackle/PRDs" \
-v "$(pwd)/unshackle/unshackle.yaml:/app/unshackle.yaml" \
ghcr.io/unshackle-dl/unshackle:latest dl SERVICE_NAME CONTENT_ID
# Run interactively for configuration
docker run --rm -it \
-v "$(pwd)/unshackle/cookies:/app/unshackle/cookies" \
-v "$(pwd)/unshackle/services:/app/unshackle/services" \
-v "$(pwd)/unshackle.yaml:/app/unshackle.yaml" \
ghcr.io/unshackle-dl/unshackle:latest cfg
```
**Alternative: Build locally**
```bash
# Clone and build your own image
git clone https://github.com/unshackle-dl/unshackle.git
cd unshackle
docker build -t unshackle .
docker run --rm unshackle env check
```
> [!NOTE]
> After installation, you may need to add the installation path to your PATH environment variable if prompted.

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "unshackle"
-version = "1.3.0"
+version = "1.4.1"
description = "Modular Movie, TV, and Music Archival Software."
authors = [{ name = "unshackle team" }]
requires-python = ">=3.10,<3.13"

View File

@@ -139,7 +139,13 @@ class dl:
default=None,
help="Wanted episodes, e.g. `S01-S05,S07`, `S01E01-S02E03`, `S02-S02E03`, e.t.c, defaults to all.",
)
-@click.option("-l", "--lang", type=LANGUAGE_RANGE, default="en", help="Language wanted for Video and Audio.")
+@click.option(
"-l",
"--lang",
type=LANGUAGE_RANGE,
default="orig",
help="Language wanted for Video and Audio. Use 'orig' to select the original language, e.g. 'orig,en' for both original and English.",
)
@click.option(
"-vl", "-vl",
"--v-lang", "--v-lang",
@@ -148,6 +154,7 @@ class dl:
help="Language wanted for Video, you would use this if the video language doesn't match the audio.",
)
@click.option("-sl", "--s-lang", type=LANGUAGE_RANGE, default=["all"], help="Language wanted for Subtitles.")
@click.option("-fs", "--forced-subs", is_flag=True, default=False, help="Include forced subtitle tracks.")
@click.option(
"--proxy",
type=str,
@@ -233,6 +240,8 @@ class dl:
help="Max workers/threads to download with per-track. Default depends on the downloader.",
)
@click.option("--downloads", type=int, default=1, help="Amount of tracks to download concurrently.")
@click.option("--no-cache", "no_cache", is_flag=True, default=False, help="Bypass title cache for this download.")
@click.option("--reset-cache", "reset_cache", is_flag=True, default=False, help="Clear title cache before fetching.")
@click.pass_context
def cli(ctx: click.Context, **kwargs: Any) -> dl:
return dl(ctx, **kwargs)
@@ -405,6 +414,7 @@ class dl:
lang: list[str],
v_lang: list[str],
s_lang: list[str],
forced_subs: bool,
sub_format: Optional[Subtitle.Codec],
video_only: bool,
audio_only: bool,
@@ -428,6 +438,7 @@ class dl:
**__: Any,
) -> None:
self.tmdb_searched = False
self.search_source = None
start_time = time.time()
# Check if dovi_tool is available when hybrid mode is requested
@@ -452,7 +463,7 @@ class dl:
self.log.info("Authenticated with Service")
with console.status("Fetching Title Metadata...", spinner="dots"):
-titles = service.get_titles()
+titles = service.get_titles_cached()
if not titles:
self.log.error("No titles returned, nothing to download...")
sys.exit(1)
@@ -485,34 +496,34 @@ class dl:
if self.tmdb_id:
tmdb_title = tags.get_title(self.tmdb_id, kind)
else:
-self.tmdb_id, tmdb_title = tags.search_tmdb(title.title, title.year, kind)
+self.tmdb_id, tmdb_title, self.search_source = tags.search_show_info(title.title, title.year, kind)
if not (self.tmdb_id and tmdb_title and tags.fuzzy_match(tmdb_title, title.title)):
self.tmdb_id = None
if list_ or list_titles:
if self.tmdb_id:
console.print(
Padding(
-f"TMDB -> {tmdb_title or '?'} [bright_black](ID {self.tmdb_id})",
+f"Search -> {tmdb_title or '?'} [bright_black](ID {self.tmdb_id})",
(0, 5),
)
)
else:
-console.print(Padding("TMDB -> [bright_black]No match found[/]", (0, 5)))
+console.print(Padding("Search -> [bright_black]No match found[/]", (0, 5)))
self.tmdb_searched = True
if isinstance(title, Movie) and (list_ or list_titles) and not self.tmdb_id:
-movie_id, movie_title = tags.search_tmdb(title.name, title.year, "movie")
+movie_id, movie_title, _ = tags.search_show_info(title.name, title.year, "movie")
if movie_id:
console.print(
Padding(
-f"TMDB -> {movie_title or '?'} [bright_black](ID {movie_id})",
+f"Search -> {movie_title or '?'} [bright_black](ID {movie_id})",
(0, 5),
)
)
else:
-console.print(Padding("TMDB -> [bright_black]No match found[/]", (0, 5)))
+console.print(Padding("Search -> [bright_black]No match found[/]", (0, 5)))
-if self.tmdb_id:
+if self.tmdb_id and getattr(self, 'search_source', None) != 'simkl':
kind = "tv" if isinstance(title, Episode) else "movie"
tags.external_ids(self.tmdb_id, kind)
if self.tmdb_year:
@@ -533,7 +544,12 @@ class dl:
events.subscribe(events.Types.TRACK_REPACKED, service.on_track_repacked)
events.subscribe(events.Types.TRACK_MULTIPLEX, service.on_track_multiplex)
-if no_subs:
+if hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES:
console.log("Skipping subtitles - service does not support subtitle downloads")
no_subs = True
s_lang = None
title.tracks.subtitles = []
elif no_subs:
console.log("Skipped subtitles as --no-subs was used...")
s_lang = None
title.tracks.subtitles = []
@@ -560,8 +576,31 @@ class dl:
)
with console.status("Sorting tracks by language and bitrate...", spinner="dots"):
-title.tracks.sort_videos(by_language=v_lang or lang)
-title.tracks.sort_audio(by_language=lang)
+video_sort_lang = v_lang or lang
+processed_video_sort_lang = []
for language in video_sort_lang:
if language == "orig":
if title.language:
orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
if orig_lang not in processed_video_sort_lang:
processed_video_sort_lang.append(orig_lang)
else:
if language not in processed_video_sort_lang:
processed_video_sort_lang.append(language)
processed_audio_sort_lang = []
for language in lang:
if language == "orig":
if title.language:
orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
if orig_lang not in processed_audio_sort_lang:
processed_audio_sort_lang.append(orig_lang)
else:
if language not in processed_audio_sort_lang:
processed_audio_sort_lang.append(language)
title.tracks.sort_videos(by_language=processed_video_sort_lang)
title.tracks.sort_audio(by_language=processed_audio_sort_lang)
title.tracks.sort_subtitles(by_language=s_lang)
if list_:
@@ -592,12 +631,27 @@ class dl:
self.log.error(f"There's no {vbitrate}kbps Video Track...")
sys.exit(1)
# Filter out "best" from the video languages list.
video_languages = [lang for lang in (v_lang or lang) if lang != "best"]
if video_languages and "all" not in video_languages:
-title.tracks.videos = title.tracks.by_language(title.tracks.videos, video_languages)
+processed_video_lang = []
for language in video_languages:
if language == "orig":
if title.language:
orig_lang = (
str(title.language) if hasattr(title.language, "__str__") else title.language
)
if orig_lang not in processed_video_lang:
processed_video_lang.append(orig_lang)
else:
self.log.warning(
"Original language not available for title, skipping 'orig' selection for video"
)
else:
if language not in processed_video_lang:
processed_video_lang.append(language)
title.tracks.videos = title.tracks.by_language(title.tracks.videos, processed_video_lang)
if not title.tracks.videos:
-self.log.error(f"There's no {video_languages} Video Track...")
+self.log.error(f"There's no {processed_video_lang} Video Track...")
sys.exit(1)
if quality:
@@ -672,7 +726,8 @@ class dl:
self.log.error(f"There's no {s_lang} Subtitle Track...")
sys.exit(1)
-title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
+if not forced_subs:
title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
# filter audio tracks
# might have no audio tracks if part of the video, e.g. transport stream hls
@@ -699,8 +754,24 @@ class dl:
self.log.error(f"There's no {abitrate}kbps Audio Track...")
sys.exit(1)
if lang:
-if "best" in lang:
-# Get unique languages and select highest quality for each
+processed_lang = []
+for language in lang:
if language == "orig":
if title.language:
orig_lang = (
str(title.language) if hasattr(title.language, "__str__") else title.language
)
if orig_lang not in processed_lang:
processed_lang.append(orig_lang)
else:
self.log.warning(
"Original language not available for title, skipping 'orig' selection"
)
else:
if language not in processed_lang:
processed_lang.append(language)
if "best" in processed_lang:
unique_languages = {track.language for track in title.tracks.audio}
selected_audio = []
for language in unique_languages:
@@ -710,30 +781,36 @@ class dl:
)
selected_audio.append(highest_quality)
title.tracks.audio = selected_audio
-elif "all" not in lang:
+elif "all" not in processed_lang:
-title.tracks.audio = title.tracks.by_language(title.tracks.audio, lang, per_language=1)
+per_language = 0 if len(processed_lang) > 1 else 1
title.tracks.audio = title.tracks.by_language(
title.tracks.audio, processed_lang, per_language=per_language
)
if not title.tracks.audio:
-self.log.error(f"There's no {lang} Audio Track, cannot continue...")
+self.log.error(f"There's no {processed_lang} Audio Track, cannot continue...")
sys.exit(1)
if video_only or audio_only or subs_only or chapters_only or no_subs or no_audio or no_chapters:
-# Determine which track types to keep based on the flags
-keep_videos = True
-keep_audio = True
-keep_subtitles = True
-keep_chapters = True
-# Handle exclusive flags (only keep one type)
-if video_only:
-keep_audio = keep_subtitles = keep_chapters = False
-elif audio_only:
-keep_videos = keep_subtitles = keep_chapters = False
-elif subs_only:
-keep_videos = keep_audio = keep_chapters = False
-elif chapters_only:
-keep_videos = keep_audio = keep_subtitles = False
+keep_videos = False
+keep_audio = False
+keep_subtitles = False
+keep_chapters = False
+if video_only or audio_only or subs_only or chapters_only:
+if video_only:
+keep_videos = True
+if audio_only:
+keep_audio = True
+if subs_only:
+keep_subtitles = True
+if chapters_only:
+keep_chapters = True
+else:
+keep_videos = True
+keep_audio = True
+keep_subtitles = True
+keep_chapters = True
+# Handle exclusion flags (remove specific types)
if no_subs:
keep_subtitles = False
if no_audio:
@@ -741,7 +818,6 @@ class dl:
if no_chapters:
keep_chapters = False
# Build the kept_tracks list without duplicates
kept_tracks = []
if keep_videos:
kept_tracks.extend(title.tracks.videos)
@@ -838,6 +914,7 @@ class dl:
while (
not title.tracks.subtitles
and not no_subs
and not (hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES)
and not video_only
and len(title.tracks.videos) > video_track_n
and any(
@@ -926,12 +1003,15 @@ class dl:
with console.status(f"Decrypting tracks with {decrypt_tool}..."):
has_decrypted = False
for track in drm_tracks:
-for drm in track.drm:
+drm = track.get_drm_for_cdm(self.cdm)
-if hasattr(drm, "decrypt"):
+if drm and hasattr(drm, "decrypt"):
drm.decrypt(track.path, use_mp4decrypt=use_mp4decrypt)
has_decrypted = True
events.emit(events.Types.TRACK_REPACKED, track=track)
-break
+else:
self.log.warning(
f"No matching DRM found for track {track} with CDM type {type(self.cdm).__name__}"
)
if has_decrypted:
self.log.info(f"Decrypted tracks with {decrypt_tool}")

View File

@@ -1 +1 @@
-__version__ = "1.3.0"
+__version__ = "1.4.1"

View File

@@ -85,11 +85,17 @@ class Config:
self.set_terminal_bg: bool = kwargs.get("set_terminal_bg", False)
self.tag: str = kwargs.get("tag") or ""
self.tag_group_name: bool = kwargs.get("tag_group_name", True)
self.tag_imdb_tmdb: bool = kwargs.get("tag_imdb_tmdb", True)
self.tmdb_api_key: str = kwargs.get("tmdb_api_key") or ""
self.update_checks: bool = kwargs.get("update_checks", True)
self.update_check_interval: int = kwargs.get("update_check_interval", 24)
self.scene_naming: bool = kwargs.get("scene_naming", True)
self.title_cache_time: int = kwargs.get("title_cache_time", 1800) # 30 minutes default
self.title_cache_max_retention: int = kwargs.get("title_cache_max_retention", 86400) # 24 hours default
self.title_cache_enabled: bool = kwargs.get("title_cache_enabled", True)
@classmethod
def from_yaml(cls, path: Path) -> Config:
if not path.exists():

View File

@@ -39,17 +39,23 @@ class PlayReady:
if not isinstance(pssh, PSSH):
raise TypeError(f"Expected pssh to be a {PSSH}, not {pssh!r}")
-kids: list[UUID] = []
-for header in pssh.wrm_headers:
-try:
-signed_ids, _, _, _ = header.read_attributes()
-except Exception:
-continue
-for signed_id in signed_ids:
-try:
-kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
-except Exception:
-continue
+if pssh_b64:
+kids = self._extract_kids_from_pssh_b64(pssh_b64)
+else:
+kids = []
+# Extract KIDs using pyplayready's method (may miss some KIDs)
+if not kids:
+for header in pssh.wrm_headers:
+try:
+signed_ids, _, _, _ = header.read_attributes()
+except Exception:
+continue
+for signed_id in signed_ids:
+try:
+kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
+except Exception:
+continue
if kid:
if isinstance(kid, str):
@@ -72,6 +78,66 @@ class PlayReady:
if pssh_b64:
self.data.setdefault("pssh_b64", pssh_b64)
    def _extract_kids_from_pssh_b64(self, pssh_b64: str) -> list[UUID]:
        """Extract all KIDs from base64-encoded PSSH data."""
        try:
            import xml.etree.ElementTree as ET

            # Decode the PSSH
            pssh_bytes = base64.b64decode(pssh_b64)

            # Try to find XML in the PSSH data
            # PlayReady PSSH usually has XML embedded in it
            pssh_str = pssh_bytes.decode("utf-16le", errors="ignore")

            # Find WRMHEADER
            xml_start = pssh_str.find("<WRMHEADER")
            if xml_start == -1:
                # Try UTF-8
                pssh_str = pssh_bytes.decode("utf-8", errors="ignore")
                xml_start = pssh_str.find("<WRMHEADER")

            if xml_start != -1:
                clean_xml = pssh_str[xml_start:]
                xml_end = clean_xml.find("</WRMHEADER>") + len("</WRMHEADER>")
                clean_xml = clean_xml[:xml_end]

                root = ET.fromstring(clean_xml)
                ns = {"pr": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}
                kids = []

                # Extract from CUSTOMATTRIBUTES/KIDS
                kid_elements = root.findall(".//pr:CUSTOMATTRIBUTES/pr:KIDS/pr:KID", ns)
                for kid_elem in kid_elements:
                    value = kid_elem.get("VALUE")
                    if value:
                        try:
                            kid_bytes = base64.b64decode(value + "==")
                            kid_uuid = UUID(bytes_le=kid_bytes)
                            kids.append(kid_uuid)
                        except Exception:
                            pass

                # Also get individual KID
                individual_kids = root.findall(".//pr:DATA/pr:KID", ns)
                for kid_elem in individual_kids:
                    if kid_elem.text:
                        try:
                            kid_bytes = base64.b64decode(kid_elem.text.strip() + "==")
                            kid_uuid = UUID(bytes_le=kid_bytes)
                            if kid_uuid not in kids:
                                kids.append(kid_uuid)
                        except Exception:
                            pass

                return kids
        except Exception:
            pass

        return []
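Two details in the helper above are easy to get wrong: PlayReady stores each KID as base64 of the GUID's little-endian byte order (hence `UUID(bytes_le=...)`), and the KID elements live in the default WRMHEADER namespace, so `findall` needs a namespace map. A minimal standalone sketch of the same parse; the sample header and KID value below are fabricated for illustration:

```python
import base64
import xml.etree.ElementTree as ET
from uuid import UUID

# Fabricated KID, encoded the way PlayReady does: base64 of little-endian GUID bytes
KID_UUID = UUID("10000000-2000-3000-4000-500000000000")
kid_b64 = base64.b64encode(KID_UUID.bytes_le).decode()

wrm_xml = (
    '<WRMHEADER xmlns="http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader" version="4.0.0.0">'
    f"<DATA><KID>{kid_b64}</KID></DATA>"
    "</WRMHEADER>"
)


def extract_kids(xml_text: str) -> list[UUID]:
    """Parse KID elements from a WRMHEADER, decoding base64 little-endian GUIDs."""
    root = ET.fromstring(xml_text)
    ns = {"pr": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}
    kids = []
    for elem in root.findall(".//pr:DATA/pr:KID", ns):
        if elem.text:
            raw = elem.text.strip()
            raw += "=" * (-len(raw) % 4)  # tolerate unpadded base64 values
            kids.append(UUID(bytes_le=base64.b64decode(raw)))
    return kids


print(extract_kids(wrm_xml))
```

Note the padding: computing `"=" * (-len(raw) % 4)` is safe for both padded and unpadded values, whereas unconditionally appending `"=="` (as the committed code does) only works when the stored value is unpadded.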
    @classmethod
    def from_track(cls, track: AnyTrack, session: Optional[Session] = None) -> PlayReady:
        if not session:
View File

@@ -21,6 +21,7 @@ from unshackle.core.constants import AnyTrack
from unshackle.core.credential import Credential
from unshackle.core.drm import DRM_T
from unshackle.core.search_result import SearchResult
from unshackle.core.title_cacher import TitleCacher, get_account_hash, get_region_from_proxy
from unshackle.core.titles import Title_T, Titles_T
from unshackle.core.tracks import Chapters, Tracks
from unshackle.core.utilities import get_ip_info
@@ -42,6 +43,12 @@ class Service(metaclass=ABCMeta):
        self.session = self.get_session()
        self.cache = Cacher(self.__class__.__name__)
        self.title_cache = TitleCacher(self.__class__.__name__)

        # Store context for cache control flags and credential
        self.ctx = ctx
        self.credential = None  # Will be set in authenticate()
        self.current_region = None  # Will be set based on proxy/geolocation

        if not ctx.parent or not ctx.parent.params.get("no_proxy"):
            if ctx.parent:
@@ -79,6 +86,15 @@ class Service(metaclass=ABCMeta):
                        ).decode()
                    }
                )
                # Store region from proxy
                self.current_region = get_region_from_proxy(proxy)
            else:
                # No proxy, try to get current region
                try:
                    ip_info = get_ip_info(self.session)
                    self.current_region = ip_info.get("country", "").lower() if ip_info else None
                except Exception:
                    self.current_region = None
    # Optional Abstract functions
    # The following functions may be implemented by the Service.
@@ -123,6 +139,9 @@ class Service(metaclass=ABCMeta):
                raise TypeError(f"Expected cookies to be a {CookieJar}, not {cookies!r}.")
            self.session.cookies.update(cookies)

        # Store credential for cache key generation
        self.credential = credential
    def search(self) -> Generator[SearchResult, None, None]:
        """
        Search by query for titles from the Service.
@@ -187,6 +206,52 @@ class Service(metaclass=ABCMeta):
        This can be useful to store information on each title that will be required like any sub-asset IDs, or such.
        """
    def get_titles_cached(self, title_id: Optional[str] = None) -> Titles_T:
        """
        Cached wrapper around get_titles() to reduce redundant API calls.

        This method checks the cache before calling get_titles() and handles
        fallback to cached data when API calls fail.

        Args:
            title_id: Optional title ID for cache key generation.
                If not provided, will try to extract from service instance.

        Returns:
            Titles object (Movies, Series, or Album)
        """
        # Try to get title_id from service instance if not provided
        if title_id is None:
            # Different services store the title ID in different attributes
            if hasattr(self, "title"):
                title_id = self.title
            elif hasattr(self, "title_id"):
                title_id = self.title_id
            else:
                # If we can't determine title_id, just call get_titles directly
                self.log.debug("Cannot determine title_id for caching, bypassing cache")
                return self.get_titles()

        # Get cache control flags from context
        no_cache = False
        reset_cache = False
        if self.ctx and self.ctx.parent:
            no_cache = self.ctx.parent.params.get("no_cache", False)
            reset_cache = self.ctx.parent.params.get("reset_cache", False)

        # Get account hash for cache key
        account_hash = get_account_hash(self.credential)

        # Use title cache to get titles with fallback support
        return self.title_cache.get_cached_titles(
            title_id=str(title_id),
            fetch_function=self.get_titles,
            region=self.current_region,
            account_hash=account_hash,
            no_cache=no_cache,
            reset_cache=reset_cache,
        )
    @abstractmethod
    def get_tracks(self, title: Title_T) -> Tracks:
        """

View File

@@ -0,0 +1,240 @@
from __future__ import annotations

import hashlib
import logging
from datetime import datetime, timedelta
from typing import Optional

from unshackle.core.cacher import Cacher
from unshackle.core.config import config
from unshackle.core.titles import Titles_T


class TitleCacher:
    """
    Handles caching of Title objects to reduce redundant API calls.

    This wrapper provides:
    - Region-aware caching to handle geo-restricted content
    - Automatic fallback to cached data when API calls fail
    - Cache lifetime extension during failures
    - Cache hit/miss statistics for debugging
    """

    def __init__(self, service_name: str):
        self.service_name = service_name
        self.log = logging.getLogger(f"{service_name}.TitleCache")
        self.cacher = Cacher(service_name)
        self.stats = {"hits": 0, "misses": 0, "fallbacks": 0}

    def _generate_cache_key(
        self, title_id: str, region: Optional[str] = None, account_hash: Optional[str] = None
    ) -> str:
        """
        Generate a unique cache key for title data.

        Args:
            title_id: The title identifier
            region: The region/proxy identifier
            account_hash: Hash of account credentials (if applicable)

        Returns:
            A unique cache key string
        """
        # Hash the title_id to handle complex IDs (URLs, dots, special chars)
        # This ensures consistent length and filesystem-safe keys
        title_hash = hashlib.sha256(title_id.encode()).hexdigest()[:16]

        # Start with base key using hash
        key_parts = ["titles", title_hash]

        # Add region if available
        if region:
            key_parts.append(region.lower())

        # Add account hash if available
        if account_hash:
            key_parts.append(account_hash[:8])  # Use first 8 chars of hash

        # Join with underscores
        cache_key = "_".join(key_parts)

        # Log the mapping for debugging
        self.log.debug(f"Cache key mapping: {title_id} -> {cache_key}")
        return cache_key
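The key scheme above reduces to a small pure function: SHA-256 keeps keys filesystem-safe and fixed-length no matter how messy the title ID is (URLs, query strings, dots), while region and account segments keep geo- and account-specific results apart. A standalone sketch with illustrative values:

```python
import hashlib
from typing import Optional


def make_cache_key(title_id: str, region: Optional[str] = None, account_hash: Optional[str] = None) -> str:
    """Build a filesystem-safe cache key: titles_<16-hex-chars>[_<region>][_<acct8>]."""
    # Hash the raw ID so slashes, dots, and query strings never reach the filesystem
    parts = ["titles", hashlib.sha256(title_id.encode()).hexdigest()[:16]]
    if region:
        parts.append(region.lower())
    if account_hash:
        parts.append(account_hash[:8])
    return "_".join(parts)


key = make_cache_key("https://example.com/title/123?lang=en", region="US", account_hash="a" * 40)
print(key)
```

Truncating the digest to 16 hex characters (64 bits) keeps keys short while making accidental collisions between different title IDs vanishingly unlikely for a per-service cache directory.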
    def get_cached_titles(
        self,
        title_id: str,
        fetch_function,
        region: Optional[str] = None,
        account_hash: Optional[str] = None,
        no_cache: bool = False,
        reset_cache: bool = False,
    ) -> Optional[Titles_T]:
        """
        Get titles from cache or fetch from API with fallback support.

        Args:
            title_id: The title identifier
            fetch_function: Function to call to fetch fresh titles
            region: The region/proxy identifier
            account_hash: Hash of account credentials
            no_cache: Bypass cache completely
            reset_cache: Clear cache before fetching

        Returns:
            Titles object (Movies, Series, or Album)
        """
        # If caching is globally disabled or no_cache flag is set
        if not config.title_cache_enabled or no_cache:
            self.log.debug("Cache bypassed, fetching fresh titles")
            return fetch_function()

        # Generate cache key
        cache_key = self._generate_cache_key(title_id, region, account_hash)

        # If reset_cache flag is set, clear the cache entry
        if reset_cache:
            self.log.info(f"Clearing cache for {cache_key}")
            cache_path = (config.directories.cache / self.service_name / cache_key).with_suffix(".json")
            if cache_path.exists():
                cache_path.unlink()

        # Try to get from cache
        cache = self.cacher.get(cache_key, version=1)

        # Check if we have valid cached data
        if cache and not cache.expired:
            self.stats["hits"] += 1
            self.log.debug(f"Cache hit for {title_id} (hits: {self.stats['hits']}, misses: {self.stats['misses']})")
            return cache.data

        # Cache miss or expired, try to fetch fresh data
        self.stats["misses"] += 1
        self.log.debug(f"Cache miss for {title_id}, fetching fresh data")

        try:
            # Attempt to fetch fresh titles
            titles = fetch_function()
            if titles:
                # Successfully fetched, update cache
                self.log.debug(f"Successfully fetched titles for {title_id}, updating cache")
                cache = self.cacher.get(cache_key, version=1)
                cache.set(titles, expiration=datetime.now() + timedelta(seconds=config.title_cache_time))
            return titles
        except Exception as e:
            # API call failed, check if we have fallback cached data
            if cache and cache.data:
                # We have expired cached data, use it as fallback
                current_time = datetime.now()
                max_retention_time = cache.expiration + timedelta(
                    seconds=config.title_cache_max_retention - config.title_cache_time
                )
                if current_time < max_retention_time:
                    self.stats["fallbacks"] += 1
                    self.log.warning(
                        f"API call failed for {title_id}, using cached data as fallback "
                        f"(fallbacks: {self.stats['fallbacks']})"
                    )
                    self.log.debug(f"Error was: {e}")
                    # Extend cache lifetime
                    extended_expiration = current_time + timedelta(minutes=5)
                    if extended_expiration < max_retention_time:
                        cache.expiration = extended_expiration
                        cache.set(cache.data, expiration=extended_expiration)
                    return cache.data
                else:
                    self.log.error(f"API call failed and cached data for {title_id} exceeded maximum retention time")
            # Re-raise the exception if no fallback available
            raise

    def clear_all_title_cache(self):
        """Clear all title caches for this service."""
        cache_dir = config.directories.cache / self.service_name
        if cache_dir.exists():
            for cache_file in cache_dir.glob("titles_*.json"):
                cache_file.unlink()
                self.log.info(f"Cleared cache file: {cache_file.name}")

    def get_cache_stats(self) -> dict:
        """Get cache statistics."""
        total = sum(self.stats.values())
        if total > 0:
            hit_rate = (self.stats["hits"] / total) * 100
        else:
            hit_rate = 0
        return {
            "hits": self.stats["hits"],
            "misses": self.stats["misses"],
            "fallbacks": self.stats["fallbacks"],
            "hit_rate": f"{hit_rate:.1f}%",
        }
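Stripped of the logging and file handling, get_cached_titles implements one pattern: serve fresh entries directly; on a miss, fetch and cache; on a fetch failure, fall back to stale data while it is still inside the retention window. A toy standalone version of that flow (class and parameter names are hypothetical, not the project's API):

```python
import time


class FallbackCache:
    """Tiny cache: fresh entries served directly; stale ones only when fetching fails."""

    def __init__(self, ttl: float, max_retention: float):
        self.ttl = ttl                      # seconds an entry counts as fresh
        self.max_retention = max_retention  # seconds a stale entry may still be served
        self.store = {}                     # key -> (value, stored_at)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]  # cache hit
        try:
            value = fetch()
            self.store[key] = (value, now)
            return value
        except Exception:
            # Fetch failed: fall back to stale data if still within retention
            if entry and now - entry[1] < self.max_retention:
                return entry[0]
            raise


cache = FallbackCache(ttl=0.05, max_retention=10.0)
print(cache.get("t1", lambda: "titles-v1"))  # prints "titles-v1"
```

The key property, mirrored by the real implementation's 30-minute TTL and 24-hour retention, is that an outage shorter than the retention window degrades to slightly stale titles instead of a hard failure.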
def get_region_from_proxy(proxy_url: Optional[str]) -> Optional[str]:
    """
    Extract region identifier from proxy URL.

    Args:
        proxy_url: The proxy URL string

    Returns:
        Region identifier or None
    """
    if not proxy_url:
        return None

    # Try to extract region from common proxy patterns
    # e.g., "us123.nordvpn.com", "gb-proxy.example.com"
    import re

    # Pattern for NordVPN style
    nord_match = re.search(r"([a-z]{2})\d+\.nordvpn", proxy_url.lower())
    if nord_match:
        return nord_match.group(1)

    # Pattern for country code at start
    cc_match = re.search(r"([a-z]{2})[-_]", proxy_url.lower())
    if cc_match:
        return cc_match.group(1)

    # Pattern for country code subdomain
    subdomain_match = re.search(r"://([a-z]{2})\.", proxy_url.lower())
    if subdomain_match:
        return subdomain_match.group(1)

    return None
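The three patterns above can be exercised in isolation; the hostnames below are made-up examples of the URL shapes the function targets, and pattern order matters (the more specific NordVPN pattern is tried before the generic `xx-`/`xx_` prefix match):

```python
import re
from typing import Optional


def region_from_proxy(proxy_url: Optional[str]) -> Optional[str]:
    """Best-effort two-letter region guess from a proxy URL (mirrors the patterns above)."""
    if not proxy_url:
        return None
    url = proxy_url.lower()
    for pattern in (
        r"([a-z]{2})\d+\.nordvpn",  # e.g. us123.nordvpn.com
        r"([a-z]{2})[-_]",          # e.g. gb-proxy.example.com
        r"://([a-z]{2})\.",         # e.g. https://de.example.com
    ):
        match = re.search(pattern, url)
        if match:
            return match.group(1)
    return None


print(region_from_proxy("https://us123.nordvpn.com:8080"))  # prints "us"
```

This is heuristic by nature: a hostname like `my-proxy.example.com` would yield a spurious "my", which is an accepted trade-off since the region only scopes cache keys.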
def get_account_hash(credential) -> Optional[str]:
    """
    Generate a hash for account identification.

    Args:
        credential: Credential object

    Returns:
        SHA1 hash of the credential or None
    """
    if not credential:
        return None

    # Use existing sha1 property if available
    if hasattr(credential, "sha1"):
        return credential.sha1

    # Otherwise generate hash from username
    if hasattr(credential, "username"):
        return hashlib.sha1(credential.username.encode()).hexdigest()

    return None

View File

@@ -170,8 +170,9 @@ class Episode(Title):
            frame_rate = float(primary_video_track.frame_rate)

        if hdr_format:
            if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                name += " DV"
                if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                    name += " HDR"
            else:
                name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
        elif trc and "HLG" in trc:
@@ -201,9 +202,10 @@ class Series(SortedKeyList, ABC):
    def tree(self, verbose: bool = False) -> Tree:
        seasons = Counter(x.season for x in self)
        num_seasons = len(seasons)
        season_breakdown = ", ".join(f"S{season}({count})" for season, count in sorted(seasons.items()))
        tree = Tree(
            f"{num_seasons} seasons, {season_breakdown}",
            guide_style="bright_black",
        )
        if verbose:

View File

@@ -121,8 +121,9 @@ class Movie(Title):
            frame_rate = float(primary_video_track.frame_rate)

        if hdr_format:
            if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                name += " DV"
                if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                    name += " HDR"
            else:
                name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
        elif trc and "HLG" in trc:

View File

@@ -126,38 +126,40 @@ class Hybrid:
    def extract_stream(self, save_path, type_):
        output = Path(config.directories.temp / f"{type_}.hevc")

        with console.status(f"Extracting {type_} stream...", spinner="dots"):
            returncode = self.ffmpeg_simple(save_path, output)

        if returncode:
            output.unlink(missing_ok=True)
            self.log.error(f"x Failed extracting {type_} stream")
            sys.exit(1)

        self.log.info(f"Extracted {type_} stream")

    def extract_rpu(self, video, untouched=False):
        if os.path.isfile(config.directories.temp / "RPU.bin") or os.path.isfile(
            config.directories.temp / "RPU_UNT.bin"
        ):
            return

        with console.status(
            f"Extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream...", spinner="dots"
        ):
            extraction_args = [str(DoviTool)]
            if not untouched:
                extraction_args += ["-m", "3"]
            extraction_args += [
                "extract-rpu",
                config.directories.temp / "DV.hevc",
                "-o",
                config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin",
            ]

            rpu_extraction = subprocess.run(
                extraction_args,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if rpu_extraction.returncode:
            Path.unlink(config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin")
@@ -168,6 +170,8 @@ class Hybrid:
        else:
            raise ValueError(f"Failed extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")

        self.log.info(f"Extracted{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")

    def level_6(self):
        """Edit RPU Level 6 values"""
        with open(config.directories.temp / "L6.json", "w+") as level6_file:
@@ -185,26 +189,28 @@ class Hybrid:
            json.dump(level6, level6_file, indent=3)

        if not os.path.isfile(config.directories.temp / "RPU_L6.bin"):
            with console.status("Editing RPU Level 6 values...", spinner="dots"):
                level6 = subprocess.run(
                    [
                        str(DoviTool),
                        "editor",
                        "-i",
                        config.directories.temp / self.rpu_file,
                        "-j",
                        config.directories.temp / "L6.json",
                        "-o",
                        config.directories.temp / "RPU_L6.bin",
                    ],
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE,
                )

            if level6.returncode:
                Path.unlink(config.directories.temp / "RPU_L6.bin")
                raise ValueError("Failed editing RPU Level 6 values")

            self.log.info("Edited RPU Level 6 values")

        # Update rpu_file to use the edited version
        self.rpu_file = "RPU_L6.bin"
@@ -212,35 +218,36 @@ class Hybrid:
        if os.path.isfile(config.directories.temp / self.hevc_file):
            return

        with console.status(f"Injecting Dolby Vision metadata into {self.hdr_type} stream...", spinner="dots"):
            inject_cmd = [
                str(DoviTool),
                "inject-rpu",
                "-i",
                config.directories.temp / "HDR10.hevc",
                "--rpu-in",
                config.directories.temp / self.rpu_file,
            ]

            # If we converted from HDR10+, optionally remove HDR10+ metadata during injection
            # Default to removing HDR10+ metadata since we're converting to DV
            if self.hdr10plus_to_dv:
                inject_cmd.append("--drop-hdr10plus")
                self.log.info(" - Removing HDR10+ metadata during injection")

            inject_cmd.extend(["-o", config.directories.temp / self.hevc_file])

            inject = subprocess.run(
                inject_cmd,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if inject.returncode:
            Path.unlink(config.directories.temp / self.hevc_file)
            raise ValueError("Failed injecting Dolby Vision metadata into HDR10 stream")

        self.log.info(f"Injected Dolby Vision metadata into {self.hdr_type} stream")
    def extract_hdr10plus(self, _video):
        """Extract HDR10+ metadata from the video stream"""
        if os.path.isfile(config.directories.temp / self.hdr10plus_file):
@@ -249,20 +256,19 @@ class Hybrid:
        if not HDR10PlusTool:
            raise ValueError("HDR10Plus_tool not found. Please install it to use HDR10+ to DV conversion.")

        with console.status("Extracting HDR10+ metadata...", spinner="dots"):
            # HDR10Plus_tool needs raw HEVC stream
            extraction = subprocess.run(
                [
                    str(HDR10PlusTool),
                    "extract",
                    str(config.directories.temp / "HDR10.hevc"),
                    "-o",
                    str(config.directories.temp / self.hdr10plus_file),
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if extraction.returncode:
            raise ValueError("Failed extracting HDR10+ metadata")
@@ -271,47 +277,49 @@ class Hybrid:
        if os.path.getsize(config.directories.temp / self.hdr10plus_file) == 0:
            raise ValueError("No HDR10+ metadata found in the stream")

        self.log.info("Extracted HDR10+ metadata")
    def convert_hdr10plus_to_dv(self):
        """Convert HDR10+ metadata to Dolby Vision RPU"""
        if os.path.isfile(config.directories.temp / "RPU.bin"):
            return

        with console.status("Converting HDR10+ metadata to Dolby Vision...", spinner="dots"):
            # First create the extra metadata JSON for dovi_tool
            extra_metadata = {
                "cm_version": "V29",
                "length": 0,  # dovi_tool will figure this out
                "level6": {
                    "max_display_mastering_luminance": 1000,
                    "min_display_mastering_luminance": 1,
                    "max_content_light_level": 0,
                    "max_frame_average_light_level": 0,
                },
            }

            with open(config.directories.temp / "extra.json", "w") as f:
                json.dump(extra_metadata, f, indent=2)

            # Generate DV RPU from HDR10+ metadata
            conversion = subprocess.run(
                [
                    str(DoviTool),
                    "generate",
                    "-j",
                    str(config.directories.temp / "extra.json"),
                    "--hdr10plus-json",
                    str(config.directories.temp / self.hdr10plus_file),
                    "-o",
                    str(config.directories.temp / "RPU.bin"),
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if conversion.returncode:
            raise ValueError("Failed converting HDR10+ to Dolby Vision")

        self.log.info("Converted HDR10+ metadata to Dolby Vision")
        self.log.info("✓ HDR10+ successfully converted to Dolby Vision Profile 8")

        # Clean up temporary files

View File

@@ -233,6 +233,7 @@ class Subtitle(Track):
            try:
                caption_set = pycaption.WebVTTReader().read(text)
                Subtitle.merge_same_cues(caption_set)
                Subtitle.filter_unwanted_cues(caption_set)
                subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                self.path.write_text(subtitle_text, encoding="utf8")
            except pycaption.exceptions.CaptionReadSyntaxError:
@@ -241,6 +242,7 @@ class Subtitle(Track):
            try:
                caption_set = pycaption.WebVTTReader().read(text)
                Subtitle.merge_same_cues(caption_set)
                Subtitle.filter_unwanted_cues(caption_set)
                subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                self.path.write_text(subtitle_text, encoding="utf8")
            except Exception:
@@ -444,6 +446,8 @@ class Subtitle(Track):
        caption_set = self.parse(self.path.read_bytes(), self.codec)
        Subtitle.merge_same_cues(caption_set)
        if codec == Subtitle.Codec.WebVTT:
            Subtitle.filter_unwanted_cues(caption_set)
        subtitle_text = writer().write(caption_set)
        output_path.write_text(subtitle_text, encoding="utf8")
@@ -520,6 +524,8 @@ class Subtitle(Track):
        caption_set = self.parse(self.path.read_bytes(), self.codec)
        Subtitle.merge_same_cues(caption_set)
        if codec == Subtitle.Codec.WebVTT:
            Subtitle.filter_unwanted_cues(caption_set)
        subtitle_text = writer().write(caption_set)
        output_path.write_text(subtitle_text, encoding="utf8")
@@ -681,6 +687,24 @@ class Subtitle(Track):
            if merged_captions:
                caption_set.set_captions(lang, merged_captions)

    @staticmethod
    def filter_unwanted_cues(caption_set: pycaption.CaptionSet):
        """
        Filter out subtitle cues containing only &nbsp; or whitespace.
        """
        for lang in caption_set.get_languages():
            captions = caption_set.get_captions(lang)
            filtered_captions = pycaption.CaptionList()
            for caption in captions:
                text = caption.get_text().strip()
                if not text or text == "&nbsp;" or all(c in " \t\n\r\xa0" for c in text.replace("&nbsp;", "\xa0")):
                    continue
                filtered_captions.append(caption)
            caption_set.set_captions(lang, filtered_captions)
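The emptiness predicate inside filter_unwanted_cues can be tested without pycaption: a cue is dropped when, after mapping the `&nbsp;` entity to U+00A0, nothing but whitespace remains. Extracted as a standalone function:

```python
def is_empty_cue(text: str) -> bool:
    """True for cue text that contains only &nbsp; entities and/or whitespace."""
    text = text.strip()
    if not text or text == "&nbsp;":
        return True
    # Replace the entity with a literal non-breaking space, then check every character
    return all(c in " \t\n\r\xa0" for c in text.replace("&nbsp;", "\xa0"))


for cue in ("&nbsp;", " \xa0 ", "&nbsp; &nbsp;", "Hello"):
    print(repr(cue), is_empty_cue(cue))
```

Note that `str.strip()` already removes U+00A0 along with ASCII whitespace, so the `all(...)` branch mainly catches mixtures such as `"&nbsp; &nbsp;"` where entities and spaces are interleaved.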
    @staticmethod
    def merge_segmented_wvtt(data: bytes, period_start: float = 0.0) -> tuple[CaptionList, Optional[str]]:
        """
@@ -846,7 +870,18 @@ class Subtitle(Track):
        elif sdh_method == "filter-subs":
            # Force use of filter-subs
            sub = Subtitles(self.path)
            try:
                sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
            except ValueError as e:
                if "too many values to unpack" in str(e):
                    # Retry without name removal if the error is due to multiple colons in time references
                    # This can happen with lines like "at 10:00 and 2:00"
                    sub = Subtitles(self.path)
                    sub.filter(
                        rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                    )
                else:
                    raise
            sub.save()
            return
        elif sdh_method == "auto":
@@ -882,7 +917,18 @@ class Subtitle(Track):
            )
        else:
            sub = Subtitles(self.path)
            try:
                sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
            except ValueError as e:
                if "too many values to unpack" in str(e):
                    # Retry without name removal if the error is due to multiple colons in time references
                    # This can happen with lines like "at 10:00 and 2:00"
                    sub = Subtitles(self.path)
                    sub.filter(
                        rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                    )
                else:
                    raise
            sub.save()

    def reverse_rtl(self) -> None:

View File

@@ -355,6 +355,14 @@ class Tracks:
                ]
            )

        if hasattr(vt, "range") and vt.range == Video.Range.HLG:
            video_args.extend(
                [
                    "--color-transfer-characteristics",
                    "0:18",  # ARIB STD-B67 (HLG)
                ]
            )

        cl.extend(video_args + ["(", str(vt.path), ")"])

        for i, at in enumerate(self.audio):

View File

@@ -44,6 +44,89 @@ def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    return ratio >= threshold
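Only fuzzy_match's final line is visible in this hunk. One plausible implementation of such a ratio-with-threshold check uses stdlib difflib; this is an assumption about the shape of the function, not necessarily what the module actually does:

```python
from difflib import SequenceMatcher


def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Case-insensitive similarity check between two titles (hypothetical sketch)."""
    ratio = SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()
    return ratio >= threshold


print(fuzzy_match("The Matrix", "the matrix"))  # prints True
```

A check like this is what lets the Simkl verification below accept minor punctuation or casing differences while still rejecting an unrelated show returned by the filename search.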
def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[dict], Optional[str], Optional[int]]:
    """Search Simkl API for show information by filename (no auth required)."""
    log.debug("Searching Simkl for %r (%s, %s)", title, kind, year)

    # Construct appropriate filename based on type
    filename = f"{title}"
    if year:
        filename = f"{title} {year}"
    if kind == "tv":
        filename += " S01E01.mkv"
    else:  # movie
        filename += " 2160p.mkv"

    try:
        resp = requests.post("https://api.simkl.com/search/file", json={"file": filename}, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        log.debug("Simkl API response received")

        # Handle case where SIMKL returns empty list (no results)
        if isinstance(data, list):
            log.debug("Simkl returned list (no matches) for %r", filename)
            return None, None, None

        # Handle TV show responses
        if data.get("type") == "episode" and "show" in data:
            show_info = data["show"]
            show_title = show_info.get("title")
            show_year = show_info.get("year")

            # Verify title matches and year if provided
            if not fuzzy_match(show_title, title):
                log.debug("Simkl title mismatch: searched %r, got %r", title, show_title)
                return None, None, None
            if year and show_year and abs(year - show_year) > 1:  # Allow 1 year difference
                log.debug("Simkl year mismatch: searched %d, got %d", year, show_year)
                return None, None, None

            tmdb_id = show_info.get("ids", {}).get("tmdbtv")
            if tmdb_id:
                tmdb_id = int(tmdb_id)
            log.debug("Simkl -> %s (TMDB ID %s)", show_title, tmdb_id)
            return data, show_title, tmdb_id

        # Handle movie responses
        elif data.get("type") == "movie" and "movie" in data:
            movie_info = data["movie"]
            movie_title = movie_info.get("title")
            movie_year = movie_info.get("year")

            # Verify title matches and year if provided
            if not fuzzy_match(movie_title, title):
                log.debug("Simkl title mismatch: searched %r, got %r", title, movie_title)
                return None, None, None
            if year and movie_year and abs(year - movie_year) > 1:  # Allow 1 year difference
                log.debug("Simkl year mismatch: searched %d, got %d", year, movie_year)
                return None, None, None

            ids = movie_info.get("ids", {})
            tmdb_id = ids.get("tmdb") or ids.get("moviedb")
            if tmdb_id:
                tmdb_id = int(tmdb_id)
            log.debug("Simkl -> %s (TMDB ID %s)", movie_title, tmdb_id)
            return data, movie_title, tmdb_id
    except (requests.RequestException, ValueError, KeyError) as exc:
        log.debug("Simkl search failed: %s", exc)

    return None, None, None
def search_show_info(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str], Optional[str]]:
    """Search for show information, trying Simkl first, then TMDB fallback. Returns (tmdb_id, title, source)."""
    simkl_data, simkl_title, simkl_tmdb_id = search_simkl(title, year, kind)
    if simkl_data and simkl_title and fuzzy_match(simkl_title, title):
        return simkl_tmdb_id, simkl_title, "simkl"
    tmdb_id, tmdb_title = search_tmdb(title, year, kind)
    return tmdb_id, tmdb_title, "tmdb"
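Both search paths defer to a `fuzzy_match` helper defined elsewhere in the module. As a rough illustration only, a normalized-similarity check along these lines would satisfy the call sites above (the `difflib` approach and the 0.85 threshold are assumptions, not the project's actual implementation):

```python
import difflib

def fuzzy_match_sketch(a, b, threshold=0.85):
    """Hypothetical stand-in for fuzzy_match: case-insensitive similarity check."""
    if not a or not b:
        return False
    ratio = difflib.SequenceMatcher(
        None, a.casefold().strip(), b.casefold().strip()
    ).ratio()
    return ratio >= threshold

print(fuzzy_match_sketch("The Matrix", "the matrix"))  # True
print(fuzzy_match_sketch("The Matrix", "Inception"))   # False
```

A tolerance like this matters here because Simkl and TMDB titles can differ in casing or trailing whitespace while still naming the same show.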
def search_tmdb(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str]]:
    api_key = _api_key()
    if not api_key:
@@ -202,10 +285,8 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
     log.debug("Tagging file %s with title %r", path, title)
     standard_tags: dict[str, str] = {}
     custom_tags: dict[str, str] = {}
-    # To add custom information to the tags
-    # custom_tags["Text to the left side"] = "Text to the right side"
-    if config.tag:
+    if config.tag and config.tag_group_name:
         custom_tags["Group"] = config.tag
     description = getattr(title, "description", None)
     if description:
@@ -216,12 +297,6 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
             description = truncated + "..."
         custom_tags["Description"] = description
-    api_key = _api_key()
-    if not api_key:
-        log.debug("No TMDB API key set; applying basic tags only")
-        _apply_tags(path, custom_tags)
-        return
     if isinstance(title, Movie):
         kind = "movie"
         name = title.name
@@ -234,32 +309,60 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
         _apply_tags(path, custom_tags)
         return
-    tmdb_title: Optional[str] = None
-    if tmdb_id is None:
-        tmdb_id, tmdb_title = search_tmdb(name, year, kind)
-        log.debug("Search result: %r (ID %s)", tmdb_title, tmdb_id)
-    if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
-        log.debug("TMDB search did not match; skipping external ID lookup")
-        _apply_tags(path, custom_tags)
-        return
-    tmdb_url = f"https://www.themoviedb.org/{'movie' if kind == 'movie' else 'tv'}/{tmdb_id}"
-    standard_tags["TMDB"] = tmdb_url
-    try:
-        ids = external_ids(tmdb_id, kind)
-    except requests.RequestException as exc:
-        log.debug("Failed to fetch external IDs: %s", exc)
-        ids = {}
-    else:
-        log.debug("External IDs found: %s", ids)
-    imdb_id = ids.get("imdb_id")
-    if imdb_id:
-        standard_tags["IMDB"] = f"https://www.imdb.com/title/{imdb_id}"
-    tvdb_id = ids.get("tvdb_id")
-    if tvdb_id:
-        tvdb_prefix = "movies" if kind == "movie" else "series"
-        standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/{tvdb_prefix}/{tvdb_id}"
+    if config.tag_imdb_tmdb:
+        # If tmdb_id is provided (via --tmdb), skip Simkl and use TMDB directly
+        if tmdb_id is not None:
+            log.debug("Using provided TMDB ID %s for tags", tmdb_id)
+        else:
+            # Try Simkl first for automatic lookup
+            simkl_data, simkl_title, simkl_tmdb_id = search_simkl(name, year, kind)
+            if simkl_data and simkl_title and fuzzy_match(simkl_title, name):
+                log.debug("Using Simkl data for tags")
+                if simkl_tmdb_id:
+                    tmdb_id = simkl_tmdb_id
+                show_ids = simkl_data.get("show", {}).get("ids", {})
+                if show_ids.get("imdb"):
+                    standard_tags["IMDB"] = f"https://www.imdb.com/title/{show_ids['imdb']}"
+                if show_ids.get("tvdb"):
+                    standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/series/{show_ids['tvdb']}"
+                if show_ids.get("tmdbtv"):
+                    standard_tags["TMDB"] = f"https://www.themoviedb.org/tv/{show_ids['tmdbtv']}"
+        # Use TMDB API for additional metadata (either from provided ID or Simkl lookup)
+        api_key = _api_key()
+        if not api_key:
+            log.debug("No TMDB API key set; applying basic tags only")
+            _apply_tags(path, custom_tags)
+            return
+        tmdb_title: Optional[str] = None
+        if tmdb_id is None:
+            tmdb_id, tmdb_title = search_tmdb(name, year, kind)
+            log.debug("TMDB search result: %r (ID %s)", tmdb_title, tmdb_id)
+        if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
+            log.debug("TMDB search did not match; skipping external ID lookup")
+            _apply_tags(path, custom_tags)
+            return
+        tmdb_url = f"https://www.themoviedb.org/{'movie' if kind == 'movie' else 'tv'}/{tmdb_id}"
+        standard_tags["TMDB"] = tmdb_url
+        try:
+            ids = external_ids(tmdb_id, kind)
+        except requests.RequestException as exc:
+            log.debug("Failed to fetch external IDs: %s", exc)
+            ids = {}
+        else:
+            log.debug("External IDs found: %s", ids)
+        imdb_id = ids.get("imdb_id")
+        if imdb_id:
+            standard_tags["IMDB"] = f"https://www.imdb.com/title/{imdb_id}"
+        tvdb_id = ids.get("tvdb_id")
+        if tvdb_id:
+            tvdb_prefix = "movies" if kind == "movie" else "series"
+            standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/{tvdb_prefix}/{tvdb_id}"
     merged_tags = {
         **custom_tags,
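The `merged_tags` construction relies on Python's dict-unpacking order: keys from the later mapping win on collisions. Assuming `standard_tags` is unpacked after `custom_tags`, as the visible context suggests, a provider-sourced URL would override a same-named custom tag:

```python
# Illustrative values only; the tag names mirror the code above.
custom_tags = {"Group": "user_tag", "TMDB": "placeholder"}
standard_tags = {"TMDB": "https://www.themoviedb.org/tv/456",
                 "IMDB": "https://www.imdb.com/title/tt0000001"}

# Later unpacking wins on key clashes, so standard_tags takes precedence.
merged_tags = {**custom_tags, **standard_tags}
print(merged_tags["TMDB"])   # https://www.themoviedb.org/tv/456
print(sorted(merged_tags))   # ['Group', 'IMDB', 'TMDB']
```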
@@ -269,6 +372,8 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
 __all__ = [
+    "search_simkl",
+    "search_show_info",
     "search_tmdb",
     "get_title",
     "get_year",


@@ -33,6 +33,7 @@ class EXAMPLE(Service):
     TITLE_RE = r"^(?:https?://?domain\.com/details/)?(?P<title_id>[^/]+)"
     GEOFENCE = ("US", "UK")
+    NO_SUBTITLES = True
 
     @staticmethod
     @click.command(name="EXAMPLE", short_help="https://domain.com")
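The new `NO_SUBTITLES` class attribute presumably lets a service template opt out of subtitle handling. A hypothetical sketch of how a base class could consult such a class-level flag (the `wants_subtitles` method and the `Service` stub here are invented for illustration):

```python
class Service:
    NO_SUBTITLES = False  # default: subtitles are fetched

    def wants_subtitles(self) -> bool:
        # Hypothetical gate: skip subtitle retrieval for services that opt out
        return not self.NO_SUBTITLES

class EXAMPLE(Service):
    NO_SUBTITLES = True

print(EXAMPLE().wants_subtitles())  # False
print(Service().wants_subtitles())  # True
```

Declaring the flag on the subclass keeps the opt-out a one-line change per service, which matches how `GEOFENCE` and `TITLE_RE` are declared above.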


@@ -1,6 +1,12 @@
 # Group or Username to postfix to the end of all download filenames following a dash
 tag: user_tag
+# Enable/disable tagging with group name (default: true)
+tag_group_name: true
+# Enable/disable tagging with IMDB/TMDB/TVDB details (default: true)
+tag_imdb_tmdb: true
 # Set terminal background color (custom option not in CONFIG.md)
 set_terminal_bg: false
@@ -15,6 +21,12 @@ update_checks: true
 # How often to check for updates, in hours (default: 24)
 update_check_interval: 24
+# Title caching configuration
+# Cache title metadata to reduce redundant API calls
+title_cache_enabled: true  # Enable/disable title caching globally (default: true)
+title_cache_time: 1800  # Cache duration in seconds (default: 1800 = 30 minutes)
+title_cache_max_retention: 86400  # Maximum cache retention for fallback when API fails (default: 86400 = 24 hours)
 # Muxing configuration
 muxing:
   set_title: false

uv.lock (generated)

@@ -1505,7 +1505,7 @@ wheels = [
 [[package]]
 name = "unshackle"
-version = "1.3.0"
+version = "1.4.1"
 source = { editable = "." }
 dependencies = [
     { name = "appdirs" },