Mirror of https://github.com/unshackle-dl/unshackle.git (synced 2026-03-10 00:19:01 +00:00)
Compare commits
41 Commits
| SHA1 |
|---|
| 9952758b38 |
| f56e7c1ec8 |
| 096b7d70f8 |
| 460878777d |
| 9eb6bdbe12 |
| 41d203aaba |
| 0c6909be4e |
| f0493292af |
| ead05d08ac |
| 8c1f51a431 |
| 1d4e8bf9ec |
| b4a1f2236e |
| 3277ab0d77 |
| be0f7299f8 |
| 948ef30de7 |
| 1bd63ddc91 |
| 4dff597af2 |
| 8dbdde697d |
| 63c697f082 |
| 3e0835d9fb |
| c6c83ee43b |
| 507690834b |
| f8a58d966b |
| 8d12b735ff |
| 1aaea23669 |
| e3571b9518 |
| b478a00519 |
| 24fb8fb00c |
| 63e9a78b2a |
| a2bfe47993 |
| cf4dc1ce76 |
| 40028c81d7 |
| 06df10cb58 |
| d61bec4a8c |
| 058bb60502 |
| 7583129e8f |
| 4691694d2e |
| a07345a0a2 |
| 091d7335a3 |
| 8c798b95c4 |
| 46c28fe943 |
62 .dockerignore

```diff
@@ -1,62 +0,0 @@
-# Logs and temporary files
-Logs/
-logs/
-temp/
-*.log
-
-# Sensitive files
-key_vault.db
-unshackle/WVDs/
-unshackle/PRDs/
-unshackle/cookies/
-*.prd
-*.wvd
-
-# Cache directories
-unshackle/cache/
-__pycache__/
-*.pyc
-*.pyo
-*.pyd
-.Python
-
-# Development files
-.git/
-.gitignore
-.vscode/
-.idea/
-*.swp
-*.swo
-
-# Documentation and plans
-plan/
-CONTRIBUTING.md
-CONFIG.md
-AGENTS.md
-OLD-CHANGELOG.md
-cliff.toml
-
-# Installation scripts
-install.bat
-
-# Test files
-*test*
-*Test*
-
-# Virtual environments
-venv/
-env/
-.venv/
-
-# OS generated files
-.DS_Store
-Thumbs.db
```
1 .gitignore (vendored)

```diff
@@ -1,6 +1,7 @@
 # unshackle
 unshackle.yaml
 unshackle.yml
+update_check.json
 *.mkv
 *.mp4
 *.exe
```
147 CHANGELOG.md
@@ -5,6 +5,153 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.4.1] - 2025-08-08

### Added

- **Title Caching System**: Intelligent title caching to reduce redundant API calls (see the config sketch after this list)
  - Configurable title caching with a 30-minute default cache duration
  - 24-hour fallback cache on API failures for improved reliability
  - Region-aware caching to handle geo-restricted content properly
  - SHA256 hashing of cache keys to handle complex title IDs
  - Added a `--no-cache` CLI flag to bypass caching when needed
  - Added a `--reset-cache` CLI flag to clear existing cache data
  - New cache configuration variables in the config system
  - Documented the caching options in the example configuration file
  - Significantly improves performance when debugging or modifying CLI parameters
- **Enhanced Tagging Configuration**: New options for customizing tag behavior
  - Added a `tag_group_name` config option to control group-name inclusion in tags
  - Added a `tag_imdb_tmdb` config option to control IMDB/TMDB details in tags
  - Added Simkl API endpoint support as a fallback when no TMDB API key is provided
  - Enhanced the `tag_file` function to prioritize the provided TMDB ID when the `--tmdb` flag is used
  - Improved TMDB ID handling with better prioritization logic
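The cache configuration variables mentioned above surface later in this compare as `Config` attributes; a minimal `unshackle.yaml` sketch using those keys and their defaults (times are in seconds):

```yaml
title_cache_enabled: true        # set to false to disable title caching globally
title_cache_time: 1800           # 30 minutes: normal cache lifetime
title_cache_max_retention: 86400 # 24 hours: fallback window when the API fails
```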
### Changed

- **Language Selection Enhancement**: Improved default language handling
  - Updated the language option default to `orig` when no `-l` flag is set
  - Avoids a hardcoded `en` default and respects the content's original language
- **Tagging Logic Improvements**: Simplified and enhanced tagging functionality
  - Simplified the Simkl search logic with a soft fail when no results are found
  - Enhanced the `tag_file` function with better TMDB ID prioritization
  - Improved error handling in tagging operations

### Fixed

- **Subtitle Processing**: Enhanced subtitle filtering for edge cases
  - Fixed a ValueError in subtitle filtering when time references contain multiple colons
  - Improved handling of subtitles containing complex time formatting
  - Better error handling for malformed subtitle timestamps

### Removed

- **Docker Support**: Removed Docker configuration from the repository
  - Removed the Dockerfile and .dockerignore files
  - Cleaned up Docker-related documentation in README.md
  - Focuses on direct installation methods
## [1.4.0] - 2025-08-05

### Added

- **HLG Transfer Characteristics Preservation**: Enhanced video muxing to preserve HLG color metadata (see the mkvmerge sketch after this list)
  - Added automatic detection of HLG video tracks during the muxing process
  - Implemented the `--color-transfer-characteristics 0:18` argument for mkvmerge when processing HLG content
  - Prevents incorrect conversion from HLG (18) to BT.2020 (14) transfer characteristics
  - Ensures proper HLG playback on compatible hardware without manual editing
- **Original Language Support**: Enhanced language selection with `orig` keyword support
  - Added support for the `orig` language selector for both video and audio tracks
  - Automatically detects and uses the title's original language when `orig` is specified
  - Improved language processing logic with better duplicate handling
  - Enhanced help text to document original-language selection usage
- **Forced Subtitle Support**: Added an option to include forced subtitle tracks
  - New functionality to download and include forced subtitle tracks alongside regular subtitles
- **WebVTT Subtitle Filtering**: Enhanced subtitle processing capabilities
  - Added filtering of unwanted cues in WebVTT subtitles
  - Improved subtitle quality by removing unnecessary metadata
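For reference, the flag named in the HLG entry pins track 0's transfer characteristics to value 18 (HLG) during muxing; a hedged sketch of an equivalent manual invocation, using the flag exactly as spelled in the entry above (file names are placeholders):

```bash
# Keep HLG (transfer characteristics 18) on the first video track while muxing
mkvmerge --output movie.hlg.mkv \
  --color-transfer-characteristics 0:18 \
  video.hevc audio.aac
```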
### Changed

- **DRM Track Decryption**: Improved DRM decryption track-selection logic
  - Enhanced `get_drm_for_cdm()` usage for better DRM-to-CDM matching
  - Added warning messages when no matching DRM is found for a track
  - Improved error handling and logging for DRM decryption failures
- **Series Tree Representation**: Enhanced episode tree display formatting
  - Updated the series tree to show a season breakdown with episode counts
  - Improved the visual representation with an "S{season}({count})" format
  - Better organization of series information in console output
- **Hybrid Processing UI**: Enhanced extraction and conversion processes
  - Added dynamic spinners to match the rest of the codebase's design
  - Improved visual feedback during hybrid HDR processing operations
- **Track Selection Logic**: Enhanced multi-track selection capabilities
  - Fixed track selection to properly support combining the -V, -A, and -S flags
  - Improved flexibility in selecting multiple track types simultaneously
- **Service Subtitle Support**: Added configuration for services without subtitle support
  - Services can now indicate that they don't support subtitle downloads
  - Prevents unnecessary subtitle download attempts for unsupported services
- **Update Checker**: Enhanced update-checking logic and cache handling
  - Improved rate limiting and caching for update checks
  - Better performance and fewer API calls to GitHub
### Fixed

- **PlayReady KID Extraction**: Enhanced KID extraction from PSSH data
  - Added base64 support and XML parsing for better KID detection
  - Fixed an issue where only one KID was extracted for certain services
  - Improved multi-KID support for PlayReady-protected content
- **Dolby Vision Detection**: Improved DV codec detection across all formats
  - Fixed detection of the dvhe.05.06 codec, which was not being recognized correctly
  - Enhanced detection logic in the Episode and Movie title classes
  - Better support for various Dolby Vision codec variants
## [1.3.0] - 2025-08-03

### Added

- **mp4decrypt Support**: Alternative DRM decryption method using mp4decrypt from Bento4
  - Added `mp4decrypt` binary detection and support in the binaries module
  - New `decryption` configuration option in unshackle.yaml for service-specific decryption methods
  - Enhanced the PlayReady and Widevine DRM classes with mp4decrypt decryption support
  - Service-specific decryption mapping allows choosing between `shaka` and `mp4decrypt` per service
  - Improved error handling and progress reporting for mp4decrypt operations
- **Scene Naming Configuration**: New `scene_naming` option for controlling file-naming conventions
  - Added scene-naming logic to the movie, episode, and song title classes
  - Configurable through unshackle.yaml to enable or disable scene naming standards
- **Terminal Cleanup and Signal Handling**: Enhanced console management
  - Implemented proper terminal cleanup on application exit
  - Added signal handling for graceful shutdown in ComfyConsole
- **Configuration Template**: New `unshackle-example.yaml` template file
  - Replaced the main `unshackle.yaml` with an example template to prevent git conflicts
  - Users can now modify their local config without affecting repository updates
- **Enhanced Credential Management**: Improved CDM and vault configuration
  - Expanded credential-management documentation in the configuration
  - Enhanced CDM configuration examples and guidelines
- **Video Transfer Standards**: Added an `Unspecified_Image` option to the Transfer enum
  - Implements ITU-T H.Sup19 standard value 2 for image characteristics
  - Supports still-image coding systems and unknown transfer characteristics
- **Update Check Rate Limiting**: Enhanced update-checking system
  - Added configurable update-check intervals to prevent excessive API calls
  - Improved rate limiting for GitHub API requests
### Changed

- **DRM Decryption Architecture**: Enhanced decryption system with dual-method support
  - Updated `dl.py` to handle service-specific decryption method selection
  - Refactored the `Config` class to manage the per-service decryption method mapping
  - Enhanced DRM decrypt methods with a `use_mp4decrypt` parameter for method selection
- **Error Handling**: Improved exception handling in the Hybrid class
  - Replaced `log.exit` calls with ValueError exceptions for better error propagation
  - Enhanced error-handling consistency across hybrid processing

### Fixed

- **Proxy Configuration**: Fixed the proxy server mapping in configuration
  - Renamed 'servers' to 'server_map' in the proxy configuration to resolve Nord/Surfshark naming conflicts
  - Updated the configuration structure for better compatibility with proxy providers
- **HTTP Vault**: Improved URL handling and key-retrieval logic
  - Fixed URL processing issues in HTTP-based key vaults
  - Enhanced key-retrieval reliability and error handling

## [1.2.0] - 2025-07-30

### Added
31 CONFIG.md
@@ -213,6 +213,37 @@ downloader:

The `default` entry is optional. If omitted, `requests` will be used for services not listed.

## decryption (str | dict)

Choose what software unshackle uses to decrypt DRM-protected content wherever decryption is needed.
You may provide a single decryption method globally, or a mapping of service tags to decryption methods.

Options:

- `shaka` (default) - Shaka Packager - <https://github.com/shaka-project/shaka-packager>
- `mp4decrypt` - mp4decrypt from Bento4 - <https://github.com/axiomatic-systems/Bento4>

Note that Shaka Packager is the traditional method and works with most services. mp4decrypt is an alternative that may work better with services that use specific encryption formats.

Example mapping:

```yaml
decryption:
  ATVP: mp4decrypt
  AMZN: shaka
  default: shaka
```

The `default` entry is optional. If omitted, `shaka` will be used for services not listed.

Simple configuration (single method for all services):

```yaml
decryption: mp4decrypt
```

## filenames (dict)

Override the default filenames used across unshackle.
78 Dockerfile
```diff
@@ -1,78 +0,0 @@
-FROM python:3.12-slim
-
-# Set environment variables to reduce image size
-ENV PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    UV_CACHE_DIR=/tmp/uv-cache
-
-# Add container metadata
-LABEL org.opencontainers.image.description="Docker image for Unshackle with all required dependencies for downloading media content"
-
-# Install base dependencies
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    wget \
-    gnupg \
-    git \
-    curl \
-    build-essential \
-    cmake \
-    pkg-config \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-
-# Set up repos for mkvtools and bullseye for ccextractor
-RUN wget -O /etc/apt/keyrings/gpg-pub-moritzbunkus.gpg https://mkvtoolnix.download/gpg-pub-moritzbunkus.gpg \
-    && echo "deb [signed-by=/etc/apt/keyrings/gpg-pub-moritzbunkus.gpg] https://mkvtoolnix.download/debian/ bookworm main" >> /etc/apt/sources.list \
-    && echo "deb-src [signed-by=/etc/apt/keyrings/gpg-pub-moritzbunkus.gpg] https://mkvtoolnix.download/debian/ bookworm main" >> /etc/apt/sources.list \
-    && echo "deb http://ftp.debian.org/debian bullseye main" >> /etc/apt/sources.list
-
-# Install all dependencies from apt
-RUN apt-get update && apt-get install -y \
-    ffmpeg \
-    ccextractor \
-    mkvtoolnix \
-    aria2 \
-    libmediainfo-dev \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-
-# Install shaka packager
-RUN wget https://github.com/shaka-project/shaka-packager/releases/download/v2.6.1/packager-linux-x64 \
-    && chmod +x packager-linux-x64 \
-    && mv packager-linux-x64 /usr/local/bin/packager
-
-# Install N_m3u8DL-RE
-RUN wget https://github.com/nilaoda/N_m3u8DL-RE/releases/download/v0.3.0-beta/N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz \
-    && tar -xzf N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz \
-    && mv N_m3u8DL-RE /usr/local/bin/ \
-    && chmod +x /usr/local/bin/N_m3u8DL-RE \
-    && rm N_m3u8DL-RE_v0.3.0-beta_linux-x64_20241203.tar.gz
-
-# Create binaries directory and add symlinks for all required executables
-RUN mkdir -p /app/binaries && \
-    ln -sf /usr/bin/ffprobe /app/binaries/ffprobe && \
-    ln -sf /usr/bin/ffmpeg /app/binaries/ffmpeg && \
-    ln -sf /usr/bin/mkvmerge /app/binaries/mkvmerge && \
-    ln -sf /usr/local/bin/N_m3u8DL-RE /app/binaries/N_m3u8DL-RE && \
-    ln -sf /usr/local/bin/packager /app/binaries/packager && \
-    ln -sf /usr/local/bin/packager /usr/local/bin/shaka-packager && \
-    ln -sf /usr/local/bin/packager /usr/local/bin/packager-linux-x64
-
-# Install uv
-RUN pip install --no-cache-dir uv
-
-# Set working directory
-WORKDIR /app
-
-# Copy dependency files and README (required by pyproject.toml)
-COPY pyproject.toml uv.lock README.md ./
-
-# Copy source code first
-COPY unshackle/ ./unshackle/
-
-# Install dependencies with uv (including the project itself)
-RUN uv sync --frozen --no-dev
-
-# Set entrypoint to allow passing commands directly to unshackle
-ENTRYPOINT ["uv", "run", "unshackle"]
-CMD ["-h"]
```
39 README.md
````diff
@@ -42,45 +42,6 @@ uv tool install git+https://github.com/unshackle-dl/unshackle.git
 uvx unshackle --help # or just `unshackle` once PATH updated
 ```
 
-### Docker Installation
-
-Run unshackle using our pre-built Docker image from GitHub Container Registry:
-
-```bash
-# Run with default help command
-docker run --rm ghcr.io/unshackle-dl/unshackle:latest
-
-# Check environment dependencies
-docker run --rm ghcr.io/unshackle-dl/unshackle:latest env check
-
-# Download content (mount directories for persistent data)
-docker run --rm \
-  -v "$(pwd)/unshackle/downloads:/app/downloads" \
-  -v "$(pwd)/unshackle/cookies:/app/unshackle/cookies" \
-  -v "$(pwd)/unshackle/services:/app/unshackle/services" \
-  -v "$(pwd)/unshackle/WVDs:/app/unshackle/WVDs" \
-  -v "$(pwd)/unshackle/PRDs:/app/unshackle/PRDs" \
-  -v "$(pwd)/unshackle/unshackle.yaml:/app/unshackle.yaml" \
-  ghcr.io/unshackle-dl/unshackle:latest dl SERVICE_NAME CONTENT_ID
-
-# Run interactively for configuration
-docker run --rm -it \
-  -v "$(pwd)/unshackle/cookies:/app/unshackle/cookies" \
-  -v "$(pwd)/unshackle/services:/app/unshackle/services" \
-  -v "$(pwd)/unshackle.yaml:/app/unshackle.yaml" \
-  ghcr.io/unshackle-dl/unshackle:latest cfg
-```
-
-**Alternative: Build locally**
-
-```bash
-# Clone and build your own image
-git clone https://github.com/unshackle-dl/unshackle.git
-cd unshackle
-docker build -t unshackle .
-docker run --rm unshackle env check
-```
-
 > [!NOTE]
 > After installation, you may need to add the installation path to your PATH environment variable if prompted.
````
pyproject.toml

```diff
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "unshackle"
-version = "1.2.0"
+version = "1.4.1"
 description = "Modular Movie, TV, and Music Archival Software."
 authors = [{ name = "unshackle team" }]
 requires-python = ">=3.10,<3.13"
```
dl.py

```diff
@@ -139,7 +139,13 @@ class dl:
         default=None,
         help="Wanted episodes, e.g. `S01-S05,S07`, `S01E01-S02E03`, `S02-S02E03`, e.t.c, defaults to all.",
     )
-    @click.option("-l", "--lang", type=LANGUAGE_RANGE, default="en", help="Language wanted for Video and Audio.")
+    @click.option(
+        "-l",
+        "--lang",
+        type=LANGUAGE_RANGE,
+        default="orig",
+        help="Language wanted for Video and Audio. Use 'orig' to select the original language, e.g. 'orig,en' for both original and English.",
+    )
     @click.option(
         "-vl",
         "--v-lang",
@@ -148,6 +154,7 @@ class dl:
         help="Language wanted for Video, you would use this if the video language doesn't match the audio.",
     )
     @click.option("-sl", "--s-lang", type=LANGUAGE_RANGE, default=["all"], help="Language wanted for Subtitles.")
+    @click.option("-fs", "--forced-subs", is_flag=True, default=False, help="Include forced subtitle tracks.")
     @click.option(
         "--proxy",
         type=str,
@@ -233,6 +240,8 @@ class dl:
         help="Max workers/threads to download with per-track. Default depends on the downloader.",
     )
     @click.option("--downloads", type=int, default=1, help="Amount of tracks to download concurrently.")
+    @click.option("--no-cache", "no_cache", is_flag=True, default=False, help="Bypass title cache for this download.")
+    @click.option("--reset-cache", "reset_cache", is_flag=True, default=False, help="Clear title cache before fetching.")
     @click.pass_context
     def cli(ctx: click.Context, **kwargs: Any) -> dl:
        return dl(ctx, **kwargs)
```
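Taken together, the new options can be combined on one invocation; a hedged usage sketch (the service tag and title ID are placeholders):

```bash
# Original-language video/audio plus English, forced subs included,
# bypassing the new title cache for this run
unshackle dl -l orig,en -fs --no-cache SERVICE TITLE_ID
```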
```diff
@@ -405,6 +414,7 @@ class dl:
         lang: list[str],
         v_lang: list[str],
         s_lang: list[str],
+        forced_subs: bool,
         sub_format: Optional[Subtitle.Codec],
         video_only: bool,
         audio_only: bool,
@@ -428,6 +438,7 @@ class dl:
         **__: Any,
     ) -> None:
         self.tmdb_searched = False
+        self.search_source = None
         start_time = time.time()
 
         # Check if dovi_tool is available when hybrid mode is requested
@@ -452,7 +463,7 @@ class dl:
         self.log.info("Authenticated with Service")
 
         with console.status("Fetching Title Metadata...", spinner="dots"):
-            titles = service.get_titles()
+            titles = service.get_titles_cached()
             if not titles:
                 self.log.error("No titles returned, nothing to download...")
                 sys.exit(1)
```
```diff
@@ -485,34 +496,34 @@ class dl:
                 if self.tmdb_id:
                     tmdb_title = tags.get_title(self.tmdb_id, kind)
                 else:
-                    self.tmdb_id, tmdb_title = tags.search_tmdb(title.title, title.year, kind)
+                    self.tmdb_id, tmdb_title, self.search_source = tags.search_show_info(title.title, title.year, kind)
                 if not (self.tmdb_id and tmdb_title and tags.fuzzy_match(tmdb_title, title.title)):
                     self.tmdb_id = None
                 if list_ or list_titles:
                     if self.tmdb_id:
                         console.print(
                             Padding(
-                                f"TMDB -> {tmdb_title or '?'} [bright_black](ID {self.tmdb_id})",
+                                f"Search -> {tmdb_title or '?'} [bright_black](ID {self.tmdb_id})",
                                 (0, 5),
                             )
                         )
                     else:
-                        console.print(Padding("TMDB -> [bright_black]No match found[/]", (0, 5)))
+                        console.print(Padding("Search -> [bright_black]No match found[/]", (0, 5)))
                 self.tmdb_searched = True
 
             if isinstance(title, Movie) and (list_ or list_titles) and not self.tmdb_id:
-                movie_id, movie_title = tags.search_tmdb(title.name, title.year, "movie")
+                movie_id, movie_title, _ = tags.search_show_info(title.name, title.year, "movie")
                 if movie_id:
                     console.print(
                         Padding(
-                            f"TMDB -> {movie_title or '?'} [bright_black](ID {movie_id})",
+                            f"Search -> {movie_title or '?'} [bright_black](ID {movie_id})",
                             (0, 5),
                         )
                     )
                 else:
-                    console.print(Padding("TMDB -> [bright_black]No match found[/]", (0, 5)))
+                    console.print(Padding("Search -> [bright_black]No match found[/]", (0, 5)))
 
-            if self.tmdb_id:
+            if self.tmdb_id and getattr(self, 'search_source', None) != 'simkl':
                 kind = "tv" if isinstance(title, Episode) else "movie"
                 tags.external_ids(self.tmdb_id, kind)
                 if self.tmdb_year:
```
```diff
@@ -533,7 +544,12 @@ class dl:
             events.subscribe(events.Types.TRACK_REPACKED, service.on_track_repacked)
             events.subscribe(events.Types.TRACK_MULTIPLEX, service.on_track_multiplex)
 
-            if no_subs:
+            if hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES:
+                console.log("Skipping subtitles - service does not support subtitle downloads")
+                no_subs = True
+                s_lang = None
+                title.tracks.subtitles = []
+            elif no_subs:
                 console.log("Skipped subtitles as --no-subs was used...")
                 s_lang = None
                 title.tracks.subtitles = []
```
```diff
@@ -560,8 +576,31 @@ class dl:
             )
 
             with console.status("Sorting tracks by language and bitrate...", spinner="dots"):
-                title.tracks.sort_videos(by_language=v_lang or lang)
-                title.tracks.sort_audio(by_language=lang)
+                video_sort_lang = v_lang or lang
+                processed_video_sort_lang = []
+                for language in video_sort_lang:
+                    if language == "orig":
+                        if title.language:
+                            orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
+                            if orig_lang not in processed_video_sort_lang:
+                                processed_video_sort_lang.append(orig_lang)
+                    else:
+                        if language not in processed_video_sort_lang:
+                            processed_video_sort_lang.append(language)
+
+                processed_audio_sort_lang = []
+                for language in lang:
+                    if language == "orig":
+                        if title.language:
+                            orig_lang = str(title.language) if hasattr(title.language, "__str__") else title.language
+                            if orig_lang not in processed_audio_sort_lang:
+                                processed_audio_sort_lang.append(orig_lang)
+                    else:
+                        if language not in processed_audio_sort_lang:
+                            processed_audio_sort_lang.append(language)
+
+                title.tracks.sort_videos(by_language=processed_video_sort_lang)
+                title.tracks.sort_audio(by_language=processed_audio_sort_lang)
                 title.tracks.sort_subtitles(by_language=s_lang)
 
             if list_:
```
```diff
@@ -592,12 +631,27 @@ class dl:
                     self.log.error(f"There's no {vbitrate}kbps Video Track...")
                     sys.exit(1)
 
                 # Filter out "best" from the video languages list.
                 video_languages = [lang for lang in (v_lang or lang) if lang != "best"]
                 if video_languages and "all" not in video_languages:
-                    title.tracks.videos = title.tracks.by_language(title.tracks.videos, video_languages)
+                    processed_video_lang = []
+                    for language in video_languages:
+                        if language == "orig":
+                            if title.language:
+                                orig_lang = (
+                                    str(title.language) if hasattr(title.language, "__str__") else title.language
+                                )
+                                if orig_lang not in processed_video_lang:
+                                    processed_video_lang.append(orig_lang)
+                            else:
+                                self.log.warning(
+                                    "Original language not available for title, skipping 'orig' selection for video"
+                                )
+                        else:
+                            if language not in processed_video_lang:
+                                processed_video_lang.append(language)
+                    title.tracks.videos = title.tracks.by_language(title.tracks.videos, processed_video_lang)
                     if not title.tracks.videos:
-                        self.log.error(f"There's no {video_languages} Video Track...")
+                        self.log.error(f"There's no {processed_video_lang} Video Track...")
                         sys.exit(1)
 
             if quality:
```
```diff
@@ -672,7 +726,8 @@ class dl:
                     self.log.error(f"There's no {s_lang} Subtitle Track...")
                     sys.exit(1)
 
-                title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
+                if not forced_subs:
+                    title.tracks.select_subtitles(lambda x: not x.forced or is_close_match(x.language, lang))
 
             # filter audio tracks
             # might have no audio tracks if part of the video, e.g. transport stream hls
```
```diff
@@ -699,8 +754,24 @@ class dl:
                     self.log.error(f"There's no {abitrate}kbps Audio Track...")
                     sys.exit(1)
                 if lang:
-                    if "best" in lang:
+                    processed_lang = []
+                    for language in lang:
+                        if language == "orig":
+                            if title.language:
+                                orig_lang = (
+                                    str(title.language) if hasattr(title.language, "__str__") else title.language
+                                )
+                                if orig_lang not in processed_lang:
+                                    processed_lang.append(orig_lang)
+                            else:
+                                self.log.warning(
+                                    "Original language not available for title, skipping 'orig' selection"
+                                )
+                        else:
+                            if language not in processed_lang:
+                                processed_lang.append(language)
+
+                    if "best" in processed_lang:
                         # Get unique languages and select highest quality for each
                         unique_languages = {track.language for track in title.tracks.audio}
                         selected_audio = []
                         for language in unique_languages:
```
```diff
@@ -710,30 +781,36 @@ class dl:
                             )
                             selected_audio.append(highest_quality)
                         title.tracks.audio = selected_audio
-                    elif "all" not in lang:
-                        title.tracks.audio = title.tracks.by_language(title.tracks.audio, lang, per_language=1)
+                    elif "all" not in processed_lang:
+                        per_language = 0 if len(processed_lang) > 1 else 1
+                        title.tracks.audio = title.tracks.by_language(
+                            title.tracks.audio, processed_lang, per_language=per_language
+                        )
                         if not title.tracks.audio:
-                            self.log.error(f"There's no {lang} Audio Track, cannot continue...")
+                            self.log.error(f"There's no {processed_lang} Audio Track, cannot continue...")
                             sys.exit(1)
 
-            if video_only or audio_only or subs_only or chapters_only or no_subs or no_audio or no_chapters:
-                # Determine which track types to keep based on the flags
-                keep_videos = True
-                keep_audio = True
-                keep_subtitles = True
-                keep_chapters = True
-
-                # Handle exclusive flags (only keep one type)
-                if video_only:
-                    keep_audio = keep_subtitles = keep_chapters = False
-                elif audio_only:
-                    keep_videos = keep_subtitles = keep_chapters = False
-                elif subs_only:
-                    keep_videos = keep_audio = keep_chapters = False
-                elif chapters_only:
-                    keep_videos = keep_audio = keep_subtitles = False
+            keep_videos = False
+            keep_audio = False
+            keep_subtitles = False
+            keep_chapters = False
+
+            if video_only or audio_only or subs_only or chapters_only:
+                if video_only:
+                    keep_videos = True
+                if audio_only:
+                    keep_audio = True
+                if subs_only:
+                    keep_subtitles = True
+                if chapters_only:
+                    keep_chapters = True
+            else:
+                keep_videos = True
+                keep_audio = True
+                keep_subtitles = True
+                keep_chapters = True
 
             # Handle exclusion flags (remove specific types)
             if no_subs:
                 keep_subtitles = False
             if no_audio:
```
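The rework above replaces the mutually exclusive elif chain with independent keep flags, which is what makes the selectors combinable; a hedged usage sketch (short flag spellings taken from the changelog's "-V, -A, -S" note, service and title are placeholders):

```bash
# Keep video and audio tracks, drop subtitles and chapters
unshackle dl -V -A SERVICE TITLE_ID
```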
```diff
@@ -741,7 +818,6 @@ class dl:
             if no_chapters:
                 keep_chapters = False
 
-            # Build the kept_tracks list without duplicates
             kept_tracks = []
             if keep_videos:
                 kept_tracks.extend(title.tracks.videos)
```
```diff
@@ -838,6 +914,7 @@ class dl:
                 while (
                     not title.tracks.subtitles
                     and not no_subs
+                    and not (hasattr(service, "NO_SUBTITLES") and service.NO_SUBTITLES)
                     and not video_only
                     and len(title.tracks.videos) > video_track_n
                     and any(
```
```diff
@@ -911,6 +988,34 @@ class dl:
                 if font_count:
                     self.log.info(f"Attached {font_count} fonts for the Subtitles")
 
+                # Handle DRM decryption BEFORE repacking (must decrypt first!)
+                service_name = service.__class__.__name__.upper()
+                decryption_method = config.decryption_map.get(service_name, config.decryption)
+                use_mp4decrypt = decryption_method.lower() == "mp4decrypt"
+
+                if use_mp4decrypt:
+                    decrypt_tool = "mp4decrypt"
+                else:
+                    decrypt_tool = "Shaka Packager"
+
+                drm_tracks = [track for track in title.tracks if track.drm]
+                if drm_tracks:
+                    with console.status(f"Decrypting tracks with {decrypt_tool}..."):
+                        has_decrypted = False
+                        for track in drm_tracks:
+                            drm = track.get_drm_for_cdm(self.cdm)
+                            if drm and hasattr(drm, "decrypt"):
+                                drm.decrypt(track.path, use_mp4decrypt=use_mp4decrypt)
+                                has_decrypted = True
+                                events.emit(events.Types.TRACK_REPACKED, track=track)
+                            else:
+                                self.log.warning(
+                                    f"No matching DRM found for track {track} with CDM type {type(self.cdm).__name__}"
+                                )
+                        if has_decrypted:
+                            self.log.info(f"Decrypted tracks with {decrypt_tool}")
+
+                # Now repack the decrypted tracks
                 with console.status("Repackaging tracks with FFMPEG..."):
                     has_repacked = False
                     for track in title.tracks:
```
env.py

```diff
@@ -45,6 +45,13 @@ def check() -> None:
             "desc": "DRM decryption",
             "cat": "DRM",
         },
+        {
+            "name": "mp4decrypt",
+            "binary": binaries.Mp4decrypt,
+            "required": False,
+            "desc": "DRM decryption",
+            "cat": "DRM",
+        },
         # HDR Processing
         {"name": "dovi_tool", "binary": binaries.DoviTool, "required": False, "desc": "Dolby Vision", "cat": "HDR"},
         {
```
```diff
@@ -1 +1 @@
-__version__ = "1.2.0"
+__version__ = "1.4.1"
```
binaries.py

```diff
@@ -53,6 +53,7 @@ MKVToolNix = find("mkvmerge")
 Mkvpropedit = find("mkvpropedit")
 DoviTool = find("dovi_tool")
 HDR10PlusTool = find("hdr10plus_tool", "HDR10Plus_tool")
+Mp4decrypt = find("mp4decrypt")
 
 
 __all__ = (
@@ -71,5 +72,6 @@ __all__ = (
     "Mkvpropedit",
     "DoviTool",
     "HDR10PlusTool",
+    "Mp4decrypt",
     "find",
 )
```
config.py

```diff
@@ -75,10 +75,26 @@ class Config:
         self.proxy_providers: dict = kwargs.get("proxy_providers") or {}
         self.serve: dict = kwargs.get("serve") or {}
         self.services: dict = kwargs.get("services") or {}
+        decryption_cfg = kwargs.get("decryption") or {}
+        if isinstance(decryption_cfg, dict):
+            self.decryption_map = {k.upper(): v for k, v in decryption_cfg.items()}
+            self.decryption = self.decryption_map.get("DEFAULT", "shaka")
+        else:
+            self.decryption_map = {}
+            self.decryption = decryption_cfg or "shaka"
+
         self.set_terminal_bg: bool = kwargs.get("set_terminal_bg", False)
         self.tag: str = kwargs.get("tag") or ""
+        self.tag_group_name: bool = kwargs.get("tag_group_name", True)
+        self.tag_imdb_tmdb: bool = kwargs.get("tag_imdb_tmdb", True)
         self.tmdb_api_key: str = kwargs.get("tmdb_api_key") or ""
         self.update_checks: bool = kwargs.get("update_checks", True)
+        self.update_check_interval: int = kwargs.get("update_check_interval", 24)
+        self.scene_naming: bool = kwargs.get("scene_naming", True)
+
+        self.title_cache_time: int = kwargs.get("title_cache_time", 1800)  # 30 minutes default
+        self.title_cache_max_retention: int = kwargs.get("title_cache_max_retention", 86400)  # 24 hours default
+        self.title_cache_enabled: bool = kwargs.get("title_cache_enabled", True)
 
     @classmethod
     def from_yaml(cls, path: Path) -> Config:
```
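To illustrate the mapping logic above: service keys are upper-cased and a `default` entry (any case) becomes the global fallback. A minimal standalone sketch of that behavior, assuming the same semantics as the `Config` code:

```python
def resolve_decryption(decryption_cfg, service_name: str) -> str:
    """Illustrative mirror of the Config normalization shown above."""
    if isinstance(decryption_cfg, dict):
        decryption_map = {k.upper(): v for k, v in decryption_cfg.items()}
        default = decryption_map.get("DEFAULT", "shaka")
        return decryption_map.get(service_name.upper(), default)
    return decryption_cfg or "shaka"

# e.g. with the mapping from the CONFIG.md example:
cfg = {"ATVP": "mp4decrypt", "AMZN": "shaka", "default": "shaka"}
assert resolve_decryption(cfg, "atvp") == "mp4decrypt"          # case-insensitive
assert resolve_decryption(cfg, "NF") == "shaka"                 # falls back to default
assert resolve_decryption("mp4decrypt", "NF") == "mp4decrypt"   # simple (string) form
```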
playready.py

```diff
@@ -39,17 +39,23 @@ class PlayReady:
         if not isinstance(pssh, PSSH):
             raise TypeError(f"Expected pssh to be a {PSSH}, not {pssh!r}")
 
-        kids: list[UUID] = []
-        for header in pssh.wrm_headers:
-            try:
-                signed_ids, _, _, _ = header.read_attributes()
-            except Exception:
-                continue
-            for signed_id in signed_ids:
-                kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
+        if pssh_b64:
+            kids = self._extract_kids_from_pssh_b64(pssh_b64)
+        else:
+            kids = []
+
+        # Extract KIDs using pyplayready's method (may miss some KIDs)
+        if not kids:
+            for header in pssh.wrm_headers:
+                try:
+                    signed_ids, _, _, _ = header.read_attributes()
+                except Exception:
+                    continue
+                for signed_id in signed_ids:
+                    try:
+                        kids.append(UUID(bytes_le=base64.b64decode(signed_id.value)))
+                    except Exception:
+                        continue
 
         if kid:
             if isinstance(kid, str):
```
```diff
@@ -72,6 +78,66 @@ class PlayReady:
         if pssh_b64:
             self.data.setdefault("pssh_b64", pssh_b64)
 
+    def _extract_kids_from_pssh_b64(self, pssh_b64: str) -> list[UUID]:
+        """Extract all KIDs from base64-encoded PSSH data."""
+        try:
+            import xml.etree.ElementTree as ET
+
+            # Decode the PSSH
+            pssh_bytes = base64.b64decode(pssh_b64)
+
+            # Try to find XML in the PSSH data
+            # PlayReady PSSH usually has XML embedded in it
+            pssh_str = pssh_bytes.decode("utf-16le", errors="ignore")
+
+            # Find WRMHEADER
+            xml_start = pssh_str.find("<WRMHEADER")
+            if xml_start == -1:
+                # Try UTF-8
+                pssh_str = pssh_bytes.decode("utf-8", errors="ignore")
+                xml_start = pssh_str.find("<WRMHEADER")
+
+            if xml_start != -1:
+                clean_xml = pssh_str[xml_start:]
+                xml_end = clean_xml.find("</WRMHEADER>") + len("</WRMHEADER>")
+                clean_xml = clean_xml[:xml_end]
+
+                root = ET.fromstring(clean_xml)
+                ns = {"pr": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}
+
+                kids = []
+
+                # Extract from CUSTOMATTRIBUTES/KIDS
+                kid_elements = root.findall(".//pr:CUSTOMATTRIBUTES/pr:KIDS/pr:KID", ns)
+                for kid_elem in kid_elements:
+                    value = kid_elem.get("VALUE")
+                    if value:
+                        try:
+                            kid_bytes = base64.b64decode(value + "==")
+                            kid_uuid = UUID(bytes_le=kid_bytes)
+                            kids.append(kid_uuid)
+                        except Exception:
+                            pass
+
+                # Also get individual KID
+                individual_kids = root.findall(".//pr:DATA/pr:KID", ns)
+                for kid_elem in individual_kids:
+                    if kid_elem.text:
+                        try:
+                            kid_bytes = base64.b64decode(kid_elem.text.strip() + "==")
+                            kid_uuid = UUID(bytes_le=kid_bytes)
+                            if kid_uuid not in kids:
+                                kids.append(kid_uuid)
+                        except Exception:
+                            pass
+
+                return kids
+
+        except Exception:
+            pass
+
+        return []
+
     @classmethod
     def from_track(cls, track: AnyTrack, session: Optional[Session] = None) -> PlayReady:
         if not session:
```
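One subtlety worth a worked example: PlayReady KIDs are stored in GUID little-endian byte order, which is why the code above builds `UUID(bytes_le=...)` rather than `UUID(bytes=...)`. A small illustrative check (the KID value is made up):

```python
import base64
from uuid import UUID

# A hypothetical 16-byte KID as it would appear base64-encoded in a WRMHEADER
kid_b64 = base64.b64encode(bytes(range(16))).decode()

raw = base64.b64decode(kid_b64)
print(UUID(bytes=raw))     # 00010203-0405-0607-0809-0a0b0c0d0e0f (straight big-endian read)
print(UUID(bytes_le=raw))  # 03020100-0504-0706-0809-0a0b0c0d0e0f (GUID mixed-endian read)
```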
```diff
@@ -187,14 +253,69 @@ class PlayReady:
         if not self.content_keys:
             raise PlayReady.Exceptions.EmptyLicense("No Content Keys were within the License")
 
-    def decrypt(self, path: Path) -> None:
+    def decrypt(self, path: Path, use_mp4decrypt: bool = False) -> None:
         """
         Decrypt a Track with PlayReady DRM.
+
+        Args:
+            path: Path to the encrypted file to decrypt
+            use_mp4decrypt: If True, use mp4decrypt instead of Shaka Packager
+
         Raises:
             EnvironmentError if the required decryption executable could not be found.
             ValueError if the track has not yet been downloaded.
             SubprocessError if the decryption process returned a non-zero exit code.
         """
         if not self.content_keys:
             raise ValueError("Cannot decrypt a Track without any Content Keys...")
-        if not binaries.ShakaPackager:
-            raise EnvironmentError("Shaka Packager executable not found but is required.")
 
         if not path or not path.exists():
             raise ValueError("Tried to decrypt a file that does not exist.")
 
+        if use_mp4decrypt:
+            return self._decrypt_with_mp4decrypt(path)
+        else:
+            return self._decrypt_with_shaka_packager(path)
+
+    def _decrypt_with_mp4decrypt(self, path: Path) -> None:
+        """Decrypt using mp4decrypt"""
+        if not binaries.Mp4decrypt:
+            raise EnvironmentError("mp4decrypt executable not found but is required.")
+
+        output_path = path.with_stem(f"{path.stem}_decrypted")
+
+        # Build key arguments
+        key_args = []
+        for kid, key in self.content_keys.items():
+            kid_hex = kid.hex if hasattr(kid, "hex") else str(kid).replace("-", "")
+            key_hex = key if isinstance(key, str) else key.hex()
+            key_args.extend(["--key", f"{kid_hex}:{key_hex}"])
+
+        cmd = [
+            str(binaries.Mp4decrypt),
+            "--show-progress",
+            *key_args,
+            str(path),
+            str(output_path),
+        ]
+
+        try:
+            subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+        except subprocess.CalledProcessError as e:
+            error_msg = e.stderr if e.stderr else f"mp4decrypt failed with exit code {e.returncode}"
+            raise subprocess.CalledProcessError(e.returncode, cmd, output=e.stdout, stderr=error_msg)
+
+        if not output_path.exists():
+            raise RuntimeError(f"mp4decrypt failed: output file {output_path} was not created")
+        if output_path.stat().st_size == 0:
+            raise RuntimeError(f"mp4decrypt failed: output file {output_path} is empty")
+
+        path.unlink()
+        shutil.move(output_path, path)
+
+    def _decrypt_with_shaka_packager(self, path: Path) -> None:
+        """Decrypt using Shaka Packager (original method)"""
+        if not binaries.ShakaPackager:
+            raise EnvironmentError("Shaka Packager executable not found but is required.")
+
+        output_path = path.with_stem(f"{path.stem}_decrypted")
+        config.directories.temp.mkdir(parents=True, exist_ok=True)
```
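The command list built above reduces to a plain mp4decrypt invocation with one `--key kid:key` pair per content key; a hedged example with placeholder hex values:

```bash
# KID and key are 32-character hex strings (no dashes)
mp4decrypt --show-progress \
  --key 0123456789abcdef0123456789abcdef:fedcba9876543210fedcba9876543210 \
  encrypted.mp4 decrypted.mp4
```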
widevine.py

```diff
@@ -227,22 +227,69 @@ class Widevine:
         finally:
             cdm.close(session_id)
 
-    def decrypt(self, path: Path) -> None:
+    def decrypt(self, path: Path, use_mp4decrypt: bool = False) -> None:
         """
         Decrypt a Track with Widevine DRM.
+
+        Args:
+            path: Path to the encrypted file to decrypt
+            use_mp4decrypt: If True, use mp4decrypt instead of Shaka Packager
+
         Raises:
-            EnvironmentError if the Shaka Packager executable could not be found.
+            EnvironmentError if the required decryption executable could not be found.
             ValueError if the track has not yet been downloaded.
-            SubprocessError if Shaka Packager returned a non-zero exit code.
+            SubprocessError if the decryption process returned a non-zero exit code.
         """
         if not self.content_keys:
             raise ValueError("Cannot decrypt a Track without any Content Keys...")
 
-        if not binaries.ShakaPackager:
-            raise EnvironmentError("Shaka Packager executable not found but is required.")
         if not path or not path.exists():
             raise ValueError("Tried to decrypt a file that does not exist.")
 
+        if use_mp4decrypt:
+            return self._decrypt_with_mp4decrypt(path)
+        else:
+            return self._decrypt_with_shaka_packager(path)
+
+    def _decrypt_with_mp4decrypt(self, path: Path) -> None:
+        """Decrypt using mp4decrypt"""
+        if not binaries.Mp4decrypt:
+            raise EnvironmentError("mp4decrypt executable not found but is required.")
+
+        output_path = path.with_stem(f"{path.stem}_decrypted")
+
+        # Build key arguments
+        key_args = []
+        for kid, key in self.content_keys.items():
+            kid_hex = kid.hex if hasattr(kid, "hex") else str(kid).replace("-", "")
+            key_hex = key if isinstance(key, str) else key.hex()
+            key_args.extend(["--key", f"{kid_hex}:{key_hex}"])
+
+        cmd = [
+            str(binaries.Mp4decrypt),
+            "--show-progress",
+            *key_args,
+            str(path),
+            str(output_path),
+        ]
+
+        try:
+            subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+        except subprocess.CalledProcessError as e:
+            error_msg = e.stderr if e.stderr else f"mp4decrypt failed with exit code {e.returncode}"
+            raise subprocess.CalledProcessError(e.returncode, cmd, output=e.stdout, stderr=error_msg)
+
+        if not output_path.exists():
+            raise RuntimeError(f"mp4decrypt failed: output file {output_path} was not created")
+        if output_path.stat().st_size == 0:
+            raise RuntimeError(f"mp4decrypt failed: output file {output_path} is empty")
+
+        path.unlink()
+        shutil.move(output_path, path)
+
+    def _decrypt_with_shaka_packager(self, path: Path) -> None:
+        """Decrypt using Shaka Packager (original method)"""
+        if not binaries.ShakaPackager:
+            raise EnvironmentError("Shaka Packager executable not found but is required.")
+
+        output_path = path.with_stem(f"{path.stem}_decrypted")
+        config.directories.temp.mkdir(parents=True, exist_ok=True)
```
service.py

```diff
@@ -21,6 +21,7 @@ from unshackle.core.constants import AnyTrack
 from unshackle.core.credential import Credential
 from unshackle.core.drm import DRM_T
 from unshackle.core.search_result import SearchResult
+from unshackle.core.title_cacher import TitleCacher, get_account_hash, get_region_from_proxy
 from unshackle.core.titles import Title_T, Titles_T
 from unshackle.core.tracks import Chapters, Tracks
 from unshackle.core.utilities import get_ip_info
@@ -42,6 +43,12 @@ class Service(metaclass=ABCMeta):
 
         self.session = self.get_session()
         self.cache = Cacher(self.__class__.__name__)
+        self.title_cache = TitleCacher(self.__class__.__name__)
+
+        # Store context for cache control flags and credential
+        self.ctx = ctx
+        self.credential = None  # Will be set in authenticate()
+        self.current_region = None  # Will be set based on proxy/geolocation
 
         if not ctx.parent or not ctx.parent.params.get("no_proxy"):
             if ctx.parent:
@@ -79,6 +86,15 @@ class Service(metaclass=ABCMeta):
                     ).decode()
                 }
             )
+            # Store region from proxy
+            self.current_region = get_region_from_proxy(proxy)
+        else:
+            # No proxy, try to get current region
+            try:
+                ip_info = get_ip_info(self.session)
+                self.current_region = ip_info.get("country", "").lower() if ip_info else None
+            except Exception:
+                self.current_region = None
 
     # Optional Abstract functions
     # The following functions may be implemented by the Service.
@@ -123,6 +139,9 @@ class Service(metaclass=ABCMeta):
             raise TypeError(f"Expected cookies to be a {CookieJar}, not {cookies!r}.")
         self.session.cookies.update(cookies)
 
+        # Store credential for cache key generation
+        self.credential = credential
+
     def search(self) -> Generator[SearchResult, None, None]:
         """
         Search by query for titles from the Service.
@@ -187,6 +206,52 @@ class Service(metaclass=ABCMeta):
         This can be useful to store information on each title that will be required like any sub-asset IDs, or such.
         """
 
+    def get_titles_cached(self, title_id: str = None) -> Titles_T:
+        """
+        Cached wrapper around get_titles() to reduce redundant API calls.
+
+        This method checks the cache before calling get_titles() and handles
+        fallback to cached data when API calls fail.
+
+        Args:
+            title_id: Optional title ID for cache key generation.
+                If not provided, will try to extract from service instance.
+
+        Returns:
+            Titles object (Movies, Series, or Album)
+        """
+        # Try to get title_id from service instance if not provided
+        if title_id is None:
+            # Different services store the title ID in different attributes
+            if hasattr(self, "title"):
+                title_id = self.title
+            elif hasattr(self, "title_id"):
+                title_id = self.title_id
+            else:
+                # If we can't determine title_id, just call get_titles directly
+                self.log.debug("Cannot determine title_id for caching, bypassing cache")
+                return self.get_titles()
+
+        # Get cache control flags from context
+        no_cache = False
+        reset_cache = False
+        if self.ctx and self.ctx.parent:
+            no_cache = self.ctx.parent.params.get("no_cache", False)
+            reset_cache = self.ctx.parent.params.get("reset_cache", False)
+
+        # Get account hash for cache key
+        account_hash = get_account_hash(self.credential)
+
+        # Use title cache to get titles with fallback support
+        return self.title_cache.get_cached_titles(
+            title_id=str(title_id),
+            fetch_function=self.get_titles,
+            region=self.current_region,
+            account_hash=account_hash,
+            no_cache=no_cache,
+            reset_cache=reset_cache,
+        )
+
     @abstractmethod
     def get_tracks(self, title: Title_T) -> Tracks:
         """
```
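Because `get_titles_cached()` looks for `self.title` or `self.title_id`, a service opts into caching simply by storing its ID under one of those names; a minimal hypothetical service sketch (class name and constructor shape are illustrative, not from the source):

```python
class EXAMPLE(Service):
    """Hypothetical service: keeping the ID as self.title enables title caching."""

    def __init__(self, ctx, title: str):
        super().__init__(ctx)
        self.title = title  # picked up by get_titles_cached() for the cache key

    def get_titles(self):
        ...  # the real API call; only hit on a cache miss or with --no-cache
```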
240 unshackle/core/title_cacher.py (new file)
@@ -0,0 +1,240 @@

```python
from __future__ import annotations

import hashlib
import logging
from datetime import datetime, timedelta
from typing import Optional

from unshackle.core.cacher import Cacher
from unshackle.core.config import config
from unshackle.core.titles import Titles_T


class TitleCacher:
    """
    Handles caching of Title objects to reduce redundant API calls.

    This wrapper provides:
    - Region-aware caching to handle geo-restricted content
    - Automatic fallback to cached data when API calls fail
    - Cache lifetime extension during failures
    - Cache hit/miss statistics for debugging
    """

    def __init__(self, service_name: str):
        self.service_name = service_name
        self.log = logging.getLogger(f"{service_name}.TitleCache")
        self.cacher = Cacher(service_name)
        self.stats = {"hits": 0, "misses": 0, "fallbacks": 0}

    def _generate_cache_key(
        self, title_id: str, region: Optional[str] = None, account_hash: Optional[str] = None
    ) -> str:
        """
        Generate a unique cache key for title data.

        Args:
            title_id: The title identifier
            region: The region/proxy identifier
            account_hash: Hash of account credentials (if applicable)

        Returns:
            A unique cache key string
        """
        # Hash the title_id to handle complex IDs (URLs, dots, special chars)
        # This ensures consistent length and filesystem-safe keys
        title_hash = hashlib.sha256(title_id.encode()).hexdigest()[:16]

        # Start with base key using hash
        key_parts = ["titles", title_hash]

        # Add region if available
        if region:
            key_parts.append(region.lower())

        # Add account hash if available
        if account_hash:
            key_parts.append(account_hash[:8])  # Use first 8 chars of hash

        # Join with underscores
        cache_key = "_".join(key_parts)

        # Log the mapping for debugging
        self.log.debug(f"Cache key mapping: {title_id} -> {cache_key}")

        return cache_key

    def get_cached_titles(
        self,
        title_id: str,
        fetch_function,
        region: Optional[str] = None,
        account_hash: Optional[str] = None,
        no_cache: bool = False,
        reset_cache: bool = False,
    ) -> Optional[Titles_T]:
        """
        Get titles from cache or fetch from API with fallback support.

        Args:
            title_id: The title identifier
            fetch_function: Function to call to fetch fresh titles
            region: The region/proxy identifier
            account_hash: Hash of account credentials
            no_cache: Bypass cache completely
            reset_cache: Clear cache before fetching

        Returns:
            Titles object (Movies, Series, or Album)
        """
        # If caching is globally disabled or no_cache flag is set
        if not config.title_cache_enabled or no_cache:
            self.log.debug("Cache bypassed, fetching fresh titles")
            return fetch_function()

        # Generate cache key
        cache_key = self._generate_cache_key(title_id, region, account_hash)

        # If reset_cache flag is set, clear the cache entry
        if reset_cache:
            self.log.info(f"Clearing cache for {cache_key}")
            cache_path = (config.directories.cache / self.service_name / cache_key).with_suffix(".json")
            if cache_path.exists():
                cache_path.unlink()

        # Try to get from cache
        cache = self.cacher.get(cache_key, version=1)

        # Check if we have valid cached data
        if cache and not cache.expired:
            self.stats["hits"] += 1
            self.log.debug(f"Cache hit for {title_id} (hits: {self.stats['hits']}, misses: {self.stats['misses']})")
            return cache.data

        # Cache miss or expired, try to fetch fresh data
        self.stats["misses"] += 1
        self.log.debug(f"Cache miss for {title_id}, fetching fresh data")

        try:
            # Attempt to fetch fresh titles
            titles = fetch_function()

            if titles:
                # Successfully fetched, update cache
                self.log.debug(f"Successfully fetched titles for {title_id}, updating cache")
                cache = self.cacher.get(cache_key, version=1)
                cache.set(titles, expiration=datetime.now() + timedelta(seconds=config.title_cache_time))

            return titles

        except Exception as e:
            # API call failed, check if we have fallback cached data
            if cache and cache.data:
                # We have expired cached data, use it as fallback
                current_time = datetime.now()
                max_retention_time = cache.expiration + timedelta(
                    seconds=config.title_cache_max_retention - config.title_cache_time
                )

                if current_time < max_retention_time:
                    self.stats["fallbacks"] += 1
                    self.log.warning(
                        f"API call failed for {title_id}, using cached data as fallback "
                        f"(fallbacks: {self.stats['fallbacks']})"
                    )
                    self.log.debug(f"Error was: {e}")

                    # Extend cache lifetime
                    extended_expiration = current_time + timedelta(minutes=5)
                    if extended_expiration < max_retention_time:
                        cache.expiration = extended_expiration
                        cache.set(cache.data, expiration=extended_expiration)

                    return cache.data
                else:
                    self.log.error(f"API call failed and cached data for {title_id} exceeded maximum retention time")

            # Re-raise the exception if no fallback available
            raise

    def clear_all_title_cache(self):
        """Clear all title caches for this service."""
        cache_dir = config.directories.cache / self.service_name
        if cache_dir.exists():
            for cache_file in cache_dir.glob("titles_*.json"):
                cache_file.unlink()
                self.log.info(f"Cleared cache file: {cache_file.name}")

    def get_cache_stats(self) -> dict:
        """Get cache statistics."""
        total = sum(self.stats.values())
        if total > 0:
            hit_rate = (self.stats["hits"] / total) * 100
        else:
            hit_rate = 0

        return {
            "hits": self.stats["hits"],
            "misses": self.stats["misses"],
            "fallbacks": self.stats["fallbacks"],
            "hit_rate": f"{hit_rate:.1f}%",
        }


def get_region_from_proxy(proxy_url: Optional[str]) -> Optional[str]:
    """
    Extract region identifier from proxy URL.

    Args:
        proxy_url: The proxy URL string

    Returns:
        Region identifier or None
    """
    if not proxy_url:
        return None

    # Try to extract region from common proxy patterns
    # e.g., "us123.nordvpn.com", "gb-proxy.example.com"
    import re

    # Pattern for NordVPN style
    nord_match = re.search(r"([a-z]{2})\d+\.nordvpn", proxy_url.lower())
    if nord_match:
        return nord_match.group(1)

    # Pattern for country code at start
    cc_match = re.search(r"([a-z]{2})[-_]", proxy_url.lower())
    if cc_match:
        return cc_match.group(1)

    # Pattern for country code subdomain
    subdomain_match = re.search(r"://([a-z]{2})\.", proxy_url.lower())
    if subdomain_match:
        return subdomain_match.group(1)

    return None


def get_account_hash(credential) -> Optional[str]:
    """
    Generate a hash for account identification.

    Args:
        credential: Credential object

    Returns:
        SHA1 hash of the credential or None
    """
    if not credential:
        return None

    # Use existing sha1 property if available
    if hasattr(credential, "sha1"):
        return credential.sha1

    # Otherwise generate hash from username
    if hasattr(credential, "username"):
        return hashlib.sha1(credential.username.encode()).hexdigest()

    return None
```
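A worked example of the key scheme above: for a US proxy and a credentialed account, the key takes the form `titles_<sha256-prefix>_<region>_<account-prefix>`. The values below are illustrative:

```python
import hashlib

title_id = "https://example.service/show/12345"  # hypothetical complex title ID
title_hash = hashlib.sha256(title_id.encode()).hexdigest()[:16]
account_hash = hashlib.sha1(b"user@example.com").hexdigest()

cache_key = "_".join(["titles", title_hash, "us", account_hash[:8]])
print(cache_key)  # titles_<16 hex chars>_us_<8 hex chars>
```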
@@ -107,75 +107,87 @@ class Episode(Title):
            name=self.name or "",
        ).strip()

        # Resolution
        if primary_video_track:
            resolution = primary_video_track.height
            aspect_ratio = [int(float(plane)) for plane in primary_video_track.other_display_aspect_ratio[0].split(":")]
            if len(aspect_ratio) == 1:
                # e.g., aspect ratio of 2 (2.00:1) would end up as `(2.0,)`, add 1
                aspect_ratio.append(1)
            if aspect_ratio[0] / aspect_ratio[1] not in (16 / 9, 4 / 3):
                # We want the resolution represented in a 4:3 or 16:9 canvas.
                # If it's not 4:3 or 16:9, calculate as if it's inside a 16:9 canvas,
                # otherwise the track's height value is fine.
                # We are assuming this title is some weird aspect ratio so most
                # likely a movie or HD source, so it's most likely widescreen so
                # 16:9 canvas makes the most sense.
                resolution = int(primary_video_track.width * (9 / 16))
            name += f" {resolution}p"
        if config.scene_naming:
            # Resolution
            if primary_video_track:
                resolution = primary_video_track.height
                aspect_ratio = [
                    int(float(plane)) for plane in primary_video_track.other_display_aspect_ratio[0].split(":")
                ]
                if len(aspect_ratio) == 1:
                    # e.g., aspect ratio of 2 (2.00:1) would end up as `(2.0,)`, add 1
                    aspect_ratio.append(1)
                if aspect_ratio[0] / aspect_ratio[1] not in (16 / 9, 4 / 3):
                    # We want the resolution represented in a 4:3 or 16:9 canvas.
                    # If it's not 4:3 or 16:9, calculate as if it's inside a 16:9 canvas,
                    # otherwise the track's height value is fine.
                    # We are assuming this title is some weird aspect ratio so most
                    # likely a movie or HD source, so it's most likely widescreen so
                    # 16:9 canvas makes the most sense.
                    resolution = int(primary_video_track.width * (9 / 16))
                name += f" {resolution}p"

        # Service
        if show_service:
            name += f" {self.service.__name__}"
            # Service
            if show_service:
                name += f" {self.service.__name__}"

        # 'WEB-DL'
        name += " WEB-DL"
            # 'WEB-DL'
            name += " WEB-DL"

        # DUAL
        if unique_audio_languages == 2:
            name += " DUAL"
            # DUAL
            if unique_audio_languages == 2:
                name += " DUAL"

        # MULTi
        if unique_audio_languages > 2:
            name += " MULTi"
            # MULTi
            if unique_audio_languages > 2:
                name += " MULTi"

        # Audio Codec + Channels (+ feature)
        if primary_audio_track:
            codec = primary_audio_track.format
            channel_layout = primary_audio_track.channel_layout or primary_audio_track.channellayout_original
            if channel_layout:
                channels = float(sum({"LFE": 0.1}.get(position.upper(), 1) for position in channel_layout.split(" ")))
            else:
                channel_count = primary_audio_track.channel_s or primary_audio_track.channels or 0
                channels = float(channel_count)

            features = primary_audio_track.format_additionalfeatures or ""
            name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
            if "JOC" in features or primary_audio_track.joc:
                name += " Atmos"

        # Video (dynamic range + hfr +) Codec
        if primary_video_track:
            codec = primary_video_track.format
            hdr_format = primary_video_track.hdr_format_commercial
            trc = primary_video_track.transfer_characteristics or primary_video_track.transfer_characteristics_original
            frame_rate = float(primary_video_track.frame_rate)
            if hdr_format:
                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                    if (primary_video_track.hdr_format_commercial) != "Dolby Vision":
                        name += f" DV {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            # Audio Codec + Channels (+ feature)
            if primary_audio_track:
                codec = primary_audio_track.format
                channel_layout = primary_audio_track.channel_layout or primary_audio_track.channellayout_original
                if channel_layout:
                    channels = float(
                        sum({"LFE": 0.1}.get(position.upper(), 1) for position in channel_layout.split(" "))
                    )
                else:
                    name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            elif trc and "HLG" in trc:
                name += " HLG"
            if frame_rate > 30:
                name += " HFR"
            name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"
                    channel_count = primary_audio_track.channel_s or primary_audio_track.channels or 0
                    channels = float(channel_count)

        if config.tag:
            name += f"-{config.tag}"
                features = primary_audio_track.format_additionalfeatures or ""
                name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
                if "JOC" in features or primary_audio_track.joc:
                    name += " Atmos"

        return sanitize_filename(name)
            # Video (dynamic range + hfr +) Codec
            if primary_video_track:
                codec = primary_video_track.format
                hdr_format = primary_video_track.hdr_format_commercial
                trc = (
                    primary_video_track.transfer_characteristics
                    or primary_video_track.transfer_characteristics_original
                )
                frame_rate = float(primary_video_track.frame_rate)
                if hdr_format:
                    if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                        name += " DV"
                        if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                            name += " HDR"
                    else:
                        name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
                elif trc and "HLG" in trc:
                    name += " HLG"
                if frame_rate > 30:
                    name += " HFR"
                name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"

            if config.tag:
                name += f"-{config.tag}"

            return sanitize_filename(name)
        else:
            # Simple naming style without technical details - use spaces instead of dots
            return sanitize_filename(name, " ")
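
# Putting the two styles side by side (hypothetical episode, tokens for illustration only):
#   scene_naming: true  -> Show.Name.S01E01.Pilot.2160p.SERVICE.WEB-DL.DDP5.1.Atmos.DV.HDR.H.265-TAG
#   scene_naming: false -> Show Name S01E01 Pilot
# The exact tokens depend on the detected tracks (codec, channels, HDR metadata) and config.tag.
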
class Series(SortedKeyList, ABC):
@@ -190,9 +202,10 @@ class Series(SortedKeyList, ABC):
    def tree(self, verbose: bool = False) -> Tree:
        seasons = Counter(x.season for x in self)
        num_seasons = len(seasons)
        num_episodes = sum(seasons.values())
        sum(seasons.values())
        season_breakdown = ", ".join(f"S{season}({count})" for season, count in sorted(seasons.items()))
        tree = Tree(
            f"{num_seasons} Season{['s', ''][num_seasons == 1]}, {num_episodes} Episode{['s', ''][num_episodes == 1]}",
            f"{num_seasons} seasons, {season_breakdown}",
            guide_style="bright_black",
        )
        if verbose:

@@ -58,75 +58,87 @@ class Movie(Title):
        # Name (Year)
        name = str(self).replace("$", "S")  # e.g., Arli$$

        # Resolution
        if primary_video_track:
            resolution = primary_video_track.height
            aspect_ratio = [int(float(plane)) for plane in primary_video_track.other_display_aspect_ratio[0].split(":")]
            if len(aspect_ratio) == 1:
                # e.g., aspect ratio of 2 (2.00:1) would end up as `(2.0,)`, add 1
                aspect_ratio.append(1)
            if aspect_ratio[0] / aspect_ratio[1] not in (16 / 9, 4 / 3):
                # We want the resolution represented in a 4:3 or 16:9 canvas.
                # If it's not 4:3 or 16:9, calculate as if it's inside a 16:9 canvas,
                # otherwise the track's height value is fine.
                # We are assuming this title is some weird aspect ratio so most
                # likely a movie or HD source, so it's most likely widescreen so
                # 16:9 canvas makes the most sense.
                resolution = int(primary_video_track.width * (9 / 16))
            name += f" {resolution}p"
        if config.scene_naming:
            # Resolution
            if primary_video_track:
                resolution = primary_video_track.height
                aspect_ratio = [
                    int(float(plane)) for plane in primary_video_track.other_display_aspect_ratio[0].split(":")
                ]
                if len(aspect_ratio) == 1:
                    # e.g., aspect ratio of 2 (2.00:1) would end up as `(2.0,)`, add 1
                    aspect_ratio.append(1)
                if aspect_ratio[0] / aspect_ratio[1] not in (16 / 9, 4 / 3):
                    # We want the resolution represented in a 4:3 or 16:9 canvas.
                    # If it's not 4:3 or 16:9, calculate as if it's inside a 16:9 canvas,
                    # otherwise the track's height value is fine.
                    # We are assuming this title is some weird aspect ratio so most
                    # likely a movie or HD source, so it's most likely widescreen so
                    # 16:9 canvas makes the most sense.
                    resolution = int(primary_video_track.width * (9 / 16))
                name += f" {resolution}p"

        # Service
        if show_service:
            name += f" {self.service.__name__}"
            # Service
            if show_service:
                name += f" {self.service.__name__}"

        # 'WEB-DL'
        name += " WEB-DL"
            # 'WEB-DL'
            name += " WEB-DL"

        # DUAL
        if unique_audio_languages == 2:
            name += " DUAL"
            # DUAL
            if unique_audio_languages == 2:
                name += " DUAL"

        # MULTi
        if unique_audio_languages > 2:
            name += " MULTi"
            # MULTi
            if unique_audio_languages > 2:
                name += " MULTi"

        # Audio Codec + Channels (+ feature)
        if primary_audio_track:
            codec = primary_audio_track.format
            channel_layout = primary_audio_track.channel_layout or primary_audio_track.channellayout_original
            if channel_layout:
                channels = float(sum({"LFE": 0.1}.get(position.upper(), 1) for position in channel_layout.split(" ")))
            else:
                channel_count = primary_audio_track.channel_s or primary_audio_track.channels or 0
                channels = float(channel_count)

            features = primary_audio_track.format_additionalfeatures or ""
            name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
            if "JOC" in features or primary_audio_track.joc:
                name += " Atmos"

        # Video (dynamic range + hfr +) Codec
        if primary_video_track:
            codec = primary_video_track.format
            hdr_format = primary_video_track.hdr_format_commercial
            trc = primary_video_track.transfer_characteristics or primary_video_track.transfer_characteristics_original
            frame_rate = float(primary_video_track.frame_rate)
            if hdr_format:
                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                    if (primary_video_track.hdr_format_commercial) != "Dolby Vision":
                        name += f" DV {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            # Audio Codec + Channels (+ feature)
            if primary_audio_track:
                codec = primary_audio_track.format
                channel_layout = primary_audio_track.channel_layout or primary_audio_track.channellayout_original
                if channel_layout:
                    channels = float(
                        sum({"LFE": 0.1}.get(position.upper(), 1) for position in channel_layout.split(" "))
                    )
                else:
                    name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            elif trc and "HLG" in trc:
                name += " HLG"
            if frame_rate > 30:
                name += " HFR"
            name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"
                    channel_count = primary_audio_track.channel_s or primary_audio_track.channels or 0
                    channels = float(channel_count)

        if config.tag:
            name += f"-{config.tag}"
                features = primary_audio_track.format_additionalfeatures or ""
                name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
                if "JOC" in features or primary_audio_track.joc:
                    name += " Atmos"

        return sanitize_filename(name)
            # Video (dynamic range + hfr +) Codec
            if primary_video_track:
                codec = primary_video_track.format
                hdr_format = primary_video_track.hdr_format_commercial
                trc = (
                    primary_video_track.transfer_characteristics
                    or primary_video_track.transfer_characteristics_original
                )
                frame_rate = float(primary_video_track.frame_rate)
                if hdr_format:
                    if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                        name += " DV"
                        if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                            name += " HDR"
                    else:
                        name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
                elif trc and "HLG" in trc:
                    name += " HLG"
                if frame_rate > 30:
                    name += " HFR"
                name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"

            if config.tag:
                name += f"-{config.tag}"

            return sanitize_filename(name)
        else:
            # Simple naming style without technical details - use spaces instead of dots
            return sanitize_filename(name, " ")


class Movies(SortedKeyList, ABC):

@@ -100,22 +100,26 @@ class Song(Title):
        # NN. Song Name
        name = str(self).split(" / ")[1]

        # Service
        if show_service:
            name += f" {self.service.__name__}"
        if config.scene_naming:
            # Service
            if show_service:
                name += f" {self.service.__name__}"

        # 'WEB-DL'
        name += " WEB-DL"
            # 'WEB-DL'
            name += " WEB-DL"

        # Audio Codec + Channels (+ feature)
        name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
        if "JOC" in features or audio_track.joc:
            name += " Atmos"
            # Audio Codec + Channels (+ feature)
            name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
            if "JOC" in features or audio_track.joc:
                name += " Atmos"

        if config.tag:
            name += f"-{config.tag}"
            if config.tag:
                name += f"-{config.tag}"

        return sanitize_filename(name, " ")
            return sanitize_filename(name, " ")
        else:
            # Simple naming style without technical details
            return sanitize_filename(name, " ")


class Album(SortedKeyList, ABC):

@@ -43,7 +43,7 @@ class Hybrid:

        for video in self.videos:
            if not video.path or not os.path.exists(video.path):
                self.log.exit(f" - Video track {video.id} was not downloaded before injection.")
                raise ValueError(f"Video track {video.id} was not downloaded before injection.")

        # Check if we have DV track available
        has_dv = any(video.range == Video.Range.DV for video in self.videos)
@@ -51,14 +51,14 @@ class Hybrid:
        has_hdr10p = any(video.range == Video.Range.HDR10P for video in self.videos)

        if not has_hdr10:
            self.log.exit(" - No HDR10 track available for hybrid processing.")
            raise ValueError("No HDR10 track available for hybrid processing.")

        # If we have HDR10+ but no DV, we can convert HDR10+ to DV
        if not has_dv and has_hdr10p:
            self.log.info("✓ No DV track found, but HDR10+ is available. Will convert HDR10+ to DV.")
            self.hdr10plus_to_dv = True
        elif not has_dv:
            self.log.exit(" - No DV track available and no HDR10+ to convert.")
            raise ValueError("No DV track available and no HDR10+ to convert.")

        if os.path.isfile(config.directories.temp / self.hevc_file):
            self.log.info("✓ Already Injected")
@@ -68,7 +68,7 @@ class Hybrid:
            # Use the actual path from the video track
            save_path = video.path
            if not save_path or not os.path.exists(save_path):
                self.log.exit(f" - Video track {video.id} was not downloaded or path not found: {save_path}")
                raise ValueError(f"Video track {video.id} was not downloaded or path not found: {save_path}")

            if video.range == Video.Range.HDR10:
                self.extract_stream(save_path, "HDR10")
@@ -126,47 +126,51 @@ class Hybrid:
    def extract_stream(self, save_path, type_):
        output = Path(config.directories.temp / f"{type_}.hevc")

        self.log.info(f"+ Extracting {type_} stream")

        returncode = self.ffmpeg_simple(save_path, output)
        with console.status(f"Extracting {type_} stream...", spinner="dots"):
            returncode = self.ffmpeg_simple(save_path, output)

        if returncode:
            output.unlink(missing_ok=True)
            self.log.error(f"x Failed extracting {type_} stream")
            sys.exit(1)

        self.log.info(f"Extracted {type_} stream")

    def extract_rpu(self, video, untouched=False):
        if os.path.isfile(config.directories.temp / "RPU.bin") or os.path.isfile(
            config.directories.temp / "RPU_UNT.bin"
        ):
            return

        self.log.info(f"+ Extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")
        with console.status(
            f"Extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream...", spinner="dots"
        ):
        extraction_args = [str(DoviTool)]
        if not untouched:
            extraction_args += ["-m", "3"]
        extraction_args += [
            "extract-rpu",
            config.directories.temp / "DV.hevc",
            "-o",
            config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin",
        ]

            extraction_args = [str(DoviTool)]
            if not untouched:
                extraction_args += ["-m", "3"]
            extraction_args += [
                "extract-rpu",
                config.directories.temp / "DV.hevc",
                "-o",
                config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin",
            ]

        rpu_extraction = subprocess.run(
            extraction_args,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
            rpu_extraction = subprocess.run(
                extraction_args,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if rpu_extraction.returncode:
            Path.unlink(config.directories.temp / f"{'RPU' if not untouched else 'RPU_UNT'}.bin")
            if b"MAX_PQ_LUMINANCE" in rpu_extraction.stderr:
                self.extract_rpu(video, untouched=True)
            elif b"Invalid PPS index" in rpu_extraction.stderr:
                self.log.exit("x Dolby Vision VideoTrack seems to be corrupt")
                raise ValueError("Dolby Vision VideoTrack seems to be corrupt")
            else:
                self.log.exit(f"x Failed extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")
                raise ValueError(f"Failed extracting{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")

        self.log.info(f"Extracted{' untouched ' if untouched else ' '}RPU from Dolby Vision stream")

    def level_6(self):
        """Edit RPU Level 6 values"""
@@ -185,25 +189,27 @@ class Hybrid:
            json.dump(level6, level6_file, indent=3)

        if not os.path.isfile(config.directories.temp / "RPU_L6.bin"):
            self.log.info("+ Editing RPU Level 6 values")
            level6 = subprocess.run(
                [
                    str(DoviTool),
                    "editor",
                    "-i",
                    config.directories.temp / self.rpu_file,
                    "-j",
                    config.directories.temp / "L6.json",
                    "-o",
                    config.directories.temp / "RPU_L6.bin",
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            with console.status("Editing RPU Level 6 values...", spinner="dots"):
                level6 = subprocess.run(
                    [
                        str(DoviTool),
                        "editor",
                        "-i",
                        config.directories.temp / self.rpu_file,
                        "-j",
                        config.directories.temp / "L6.json",
                        "-o",
                        config.directories.temp / "RPU_L6.bin",
                    ],
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE,
                )

            if level6.returncode:
                Path.unlink(config.directories.temp / "RPU_L6.bin")
                self.log.exit("x Failed editing RPU Level 6 values")
                raise ValueError("Failed editing RPU Level 6 values")

            self.log.info("Edited RPU Level 6 values")

        # Update rpu_file to use the edited version
        self.rpu_file = "RPU_L6.bin"
@@ -212,34 +218,35 @@ class Hybrid:
        if os.path.isfile(config.directories.temp / self.hevc_file):
            return

        self.log.info(f"+ Injecting Dolby Vision metadata into {self.hdr_type} stream")
        with console.status(f"Injecting Dolby Vision metadata into {self.hdr_type} stream...", spinner="dots"):
        inject_cmd = [
            str(DoviTool),
            "inject-rpu",
            "-i",
            config.directories.temp / "HDR10.hevc",
            "--rpu-in",
            config.directories.temp / self.rpu_file,
        ]

            inject_cmd = [
                str(DoviTool),
                "inject-rpu",
                "-i",
                config.directories.temp / "HDR10.hevc",
                "--rpu-in",
                config.directories.temp / self.rpu_file,
            ]
        # If we converted from HDR10+, optionally remove HDR10+ metadata during injection
        # Default to removing HDR10+ metadata since we're converting to DV
        if self.hdr10plus_to_dv:
            inject_cmd.append("--drop-hdr10plus")
            self.log.info(" - Removing HDR10+ metadata during injection")

            # If we converted from HDR10+, optionally remove HDR10+ metadata during injection
            # Default to removing HDR10+ metadata since we're converting to DV
            if self.hdr10plus_to_dv:
                inject_cmd.append("--drop-hdr10plus")
                self.log.info(" - Removing HDR10+ metadata during injection")
        inject_cmd.extend(["-o", config.directories.temp / self.hevc_file])

            inject_cmd.extend(["-o", config.directories.temp / self.hevc_file])

        inject = subprocess.run(
            inject_cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
            inject = subprocess.run(
                inject_cmd,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if inject.returncode:
            Path.unlink(config.directories.temp / self.hevc_file)
            self.log.exit("x Failed injecting Dolby Vision metadata into HDR10 stream")
            raise ValueError("Failed injecting Dolby Vision metadata into HDR10 stream")

        self.log.info(f"Injected Dolby Vision metadata into {self.hdr_type} stream")

    def extract_hdr10plus(self, _video):
        """Extract HDR10+ metadata from the video stream"""
@@ -247,71 +254,72 @@ class Hybrid:
            return

        if not HDR10PlusTool:
            self.log.exit("x HDR10Plus_tool not found. Please install it to use HDR10+ to DV conversion.")
            raise ValueError("HDR10Plus_tool not found. Please install it to use HDR10+ to DV conversion.")

        self.log.info("+ Extracting HDR10+ metadata")

        # HDR10Plus_tool needs raw HEVC stream
        extraction = subprocess.run(
            [
                str(HDR10PlusTool),
                "extract",
                str(config.directories.temp / "HDR10.hevc"),
                "-o",
                str(config.directories.temp / self.hdr10plus_file),
            ],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        with console.status("Extracting HDR10+ metadata...", spinner="dots"):
            # HDR10Plus_tool needs raw HEVC stream
            extraction = subprocess.run(
                [
                    str(HDR10PlusTool),
                    "extract",
                    str(config.directories.temp / "HDR10.hevc"),
                    "-o",
                    str(config.directories.temp / self.hdr10plus_file),
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if extraction.returncode:
            self.log.exit("x Failed extracting HDR10+ metadata")
            raise ValueError("Failed extracting HDR10+ metadata")

        # Check if the extracted file has content
        if os.path.getsize(config.directories.temp / self.hdr10plus_file) == 0:
            self.log.exit("x No HDR10+ metadata found in the stream")
            raise ValueError("No HDR10+ metadata found in the stream")

        self.log.info("Extracted HDR10+ metadata")

    def convert_hdr10plus_to_dv(self):
        """Convert HDR10+ metadata to Dolby Vision RPU"""
        if os.path.isfile(config.directories.temp / "RPU.bin"):
            return

        self.log.info("+ Converting HDR10+ metadata to Dolby Vision")
        with console.status("Converting HDR10+ metadata to Dolby Vision...", spinner="dots"):
        # First create the extra metadata JSON for dovi_tool
        extra_metadata = {
            "cm_version": "V29",
            "length": 0,  # dovi_tool will figure this out
            "level6": {
                "max_display_mastering_luminance": 1000,
                "min_display_mastering_luminance": 1,
                "max_content_light_level": 0,
                "max_frame_average_light_level": 0,
            },
        }

            # First create the extra metadata JSON for dovi_tool
            extra_metadata = {
                "cm_version": "V29",
                "length": 0,  # dovi_tool will figure this out
                "level6": {
                    "max_display_mastering_luminance": 1000,
                    "min_display_mastering_luminance": 1,
                    "max_content_light_level": 0,
                    "max_frame_average_light_level": 0,
                },
            }
        with open(config.directories.temp / "extra.json", "w") as f:
            json.dump(extra_metadata, f, indent=2)

            with open(config.directories.temp / "extra.json", "w") as f:
                json.dump(extra_metadata, f, indent=2)

        # Generate DV RPU from HDR10+ metadata
        conversion = subprocess.run(
            [
                str(DoviTool),
                "generate",
                "-j",
                str(config.directories.temp / "extra.json"),
                "--hdr10plus-json",
                str(config.directories.temp / self.hdr10plus_file),
                "-o",
                str(config.directories.temp / "RPU.bin"),
            ],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
            # Generate DV RPU from HDR10+ metadata
            conversion = subprocess.run(
                [
                    str(DoviTool),
                    "generate",
                    "-j",
                    str(config.directories.temp / "extra.json"),
                    "--hdr10plus-json",
                    str(config.directories.temp / self.hdr10plus_file),
                    "-o",
                    str(config.directories.temp / "RPU.bin"),
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )

        if conversion.returncode:
            self.log.exit("x Failed converting HDR10+ to Dolby Vision")
            raise ValueError("Failed converting HDR10+ to Dolby Vision")

        self.log.info("Converted HDR10+ metadata to Dolby Vision")
        self.log.info("✓ HDR10+ successfully converted to Dolby Vision Profile 8")

        # Clean up temporary files

@@ -233,6 +233,7 @@ class Subtitle(Track):
            try:
                caption_set = pycaption.WebVTTReader().read(text)
                Subtitle.merge_same_cues(caption_set)
                Subtitle.filter_unwanted_cues(caption_set)
                subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                self.path.write_text(subtitle_text, encoding="utf8")
            except pycaption.exceptions.CaptionReadSyntaxError:
@@ -241,6 +242,7 @@ class Subtitle(Track):
            try:
                caption_set = pycaption.WebVTTReader().read(text)
                Subtitle.merge_same_cues(caption_set)
                Subtitle.filter_unwanted_cues(caption_set)
                subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                self.path.write_text(subtitle_text, encoding="utf8")
            except Exception:
@@ -444,6 +446,8 @@ class Subtitle(Track):

        caption_set = self.parse(self.path.read_bytes(), self.codec)
        Subtitle.merge_same_cues(caption_set)
        if codec == Subtitle.Codec.WebVTT:
            Subtitle.filter_unwanted_cues(caption_set)
        subtitle_text = writer().write(caption_set)

        output_path.write_text(subtitle_text, encoding="utf8")
@@ -520,6 +524,8 @@ class Subtitle(Track):

        caption_set = self.parse(self.path.read_bytes(), self.codec)
        Subtitle.merge_same_cues(caption_set)
        if codec == Subtitle.Codec.WebVTT:
            Subtitle.filter_unwanted_cues(caption_set)
        subtitle_text = writer().write(caption_set)

        output_path.write_text(subtitle_text, encoding="utf8")
@@ -681,6 +687,24 @@ class Subtitle(Track):
        if merged_captions:
            caption_set.set_captions(lang, merged_captions)

    @staticmethod
    def filter_unwanted_cues(caption_set: pycaption.CaptionSet):
        """
        Filter out subtitle cues containing only &nbsp; or whitespace.
        """
        for lang in caption_set.get_languages():
            captions = caption_set.get_captions(lang)
            filtered_captions = pycaption.CaptionList()

            for caption in captions:
                text = caption.get_text().strip()
                if not text or text == "&nbsp;" or all(c in " \t\n\r\xa0" for c in text.replace("&nbsp;", "\xa0")):
                    continue

                filtered_captions.append(caption)

            caption_set.set_captions(lang, filtered_captions)
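
    # For illustration: a cue whose stripped text is "", "&nbsp;", or nothing but
    # spaces/tabs/newlines/non-breaking spaces is dropped; cues with real dialogue
    # are kept unchanged.
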
    @staticmethod
    def merge_segmented_wvtt(data: bytes, period_start: float = 0.0) -> tuple[CaptionList, Optional[str]]:
        """
@@ -846,7 +870,18 @@ class Subtitle(Track):
        elif sdh_method == "filter-subs":
            # Force use of filter-subs
            sub = Subtitles(self.path)
            sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
            try:
                sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
            except ValueError as e:
                if "too many values to unpack" in str(e):
                    # Retry without name removal if the error is due to multiple colons in time references
                    # This can happen with lines like "at 10:00 and 2:00"
                    sub = Subtitles(self.path)
                    sub.filter(
                        rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                    )
                else:
                    raise
            sub.save()
            return
        elif sdh_method == "auto":
@@ -882,7 +917,18 @@ class Subtitle(Track):
                )
            else:
                sub = Subtitles(self.path)
                sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
                try:
                    sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
                except ValueError as e:
                    if "too many values to unpack" in str(e):
                        # Retry without name removal if the error is due to multiple colons in time references
                        # This can happen with lines like "at 10:00 and 2:00"
                        sub = Subtitles(self.path)
                        sub.filter(
                            rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                        )
                    else:
                        raise
                sub.save()

    def reverse_rtl(self) -> None:

@@ -355,6 +355,14 @@ class Tracks:
                ]
            )

            if hasattr(vt, "range") and vt.range == Video.Range.HLG:
                video_args.extend(
                    [
                        "--color-transfer-characteristics",
                        "0:18",  # ARIB STD-B67 (HLG)
                    ]
                )

            cl.extend(video_args + ["(", str(vt.path), ")"])

        for i, at in enumerate(self.audio):

@@ -116,6 +116,7 @@ class Video(Track):
    class Transfer(Enum):
        Unspecified = 0
        BT_709 = 1
        Unspecified_Image = 2
        BT_601 = 6
        BT_2020 = 14
        BT_2100 = 15

@@ -1,16 +1,165 @@
from __future__ import annotations

import asyncio
import json
import time
from pathlib import Path
from typing import Optional

import requests


class UpdateChecker:
    """Check for available updates from the GitHub repository."""
    """
    Check for available updates from the GitHub repository.

    This class provides functionality to check for newer versions of the application
    by querying the GitHub releases API. It includes rate limiting, caching, and
    both synchronous and asynchronous interfaces.

    Attributes:
        REPO_URL: GitHub API URL for latest release
        TIMEOUT: Request timeout in seconds
        DEFAULT_CHECK_INTERVAL: Default time between checks in seconds (24 hours)
    """

    REPO_URL = "https://api.github.com/repos/unshackle-dl/unshackle/releases/latest"
    TIMEOUT = 5
    DEFAULT_CHECK_INTERVAL = 24 * 60 * 60

    @classmethod
    def _get_cache_file(cls) -> Path:
        """Get the path to the update check cache file."""
        from unshackle.core.config import config

        return config.directories.cache / "update_check.json"

    @classmethod
    def _load_cache_data(cls) -> dict:
        """
        Load cache data from file.

        Returns:
            Cache data dictionary or empty dict if loading fails
        """
        cache_file = cls._get_cache_file()

        if not cache_file.exists():
            return {}

        try:
            with open(cache_file, "r") as f:
                return json.load(f)
        except (json.JSONDecodeError, OSError):
            return {}

    @staticmethod
    def _parse_version(version_string: str) -> str:
        """
        Parse and normalize version string by removing 'v' prefix.

        Args:
            version_string: Raw version string from API

        Returns:
            Cleaned version string
        """
        return version_string.lstrip("v")
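
    # e.g. (illustrative): _parse_version("v1.4.1") -> "1.4.1", while a bare "1.4.1"
    # passes through unchanged; lstrip("v") strips every leading "v", which is harmless
    # for normal release tags.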

    @staticmethod
    def _is_valid_version(version: str) -> bool:
        """
        Validate version string format.

        Args:
            version: Version string to validate

        Returns:
            True if version string is valid semantic version, False otherwise
        """
        if not version or not isinstance(version, str):
            return False

        try:
            parts = version.split(".")
            if len(parts) < 2:
                return False

            for part in parts:
                int(part)

            return True
        except (ValueError, AttributeError):
            return False
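
    # Illustrative results: "1.4.1" and "1.4" pass; "1" (a single part), "v1.4"
    # (non-numeric part) and "" are rejected.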

    @classmethod
    def _fetch_latest_version(cls) -> Optional[str]:
        """
        Fetch the latest version from GitHub API.

        Returns:
            Latest version string if successful, None otherwise
        """
        try:
            response = requests.get(cls.REPO_URL, timeout=cls.TIMEOUT)

            if response.status_code != 200:
                return None

            data = response.json()
            latest_version = cls._parse_version(data.get("tag_name", ""))

            return latest_version if cls._is_valid_version(latest_version) else None

        except Exception:
            return None

    @classmethod
    def _should_check_for_updates(cls, check_interval: int = DEFAULT_CHECK_INTERVAL) -> bool:
        """
        Check if enough time has passed since the last update check.

        Args:
            check_interval: Time in seconds between checks (default: 24 hours)

        Returns:
            True if we should check for updates, False otherwise
        """
        cache_data = cls._load_cache_data()

        if not cache_data:
            return True

        last_check = cache_data.get("last_check", 0)
        current_time = time.time()

        return (current_time - last_check) >= check_interval

    @classmethod
    def _update_cache(cls, latest_version: Optional[str] = None, current_version: Optional[str] = None) -> None:
        """
        Update the cache file with the current timestamp and version info.

        Args:
            latest_version: The latest version found, if any
            current_version: The current version being used
        """
        cache_file = cls._get_cache_file()

        try:
            cache_file.parent.mkdir(parents=True, exist_ok=True)

            cache_data = {
                "last_check": time.time(),
                "latest_version": latest_version,
                "current_version": current_version,
            }

            with open(cache_file, "w") as f:
                json.dump(cache_data, f, indent=2)

        except (OSError, TypeError):  # json.dump raises TypeError for unserializable values; json has no JSONEncodeError
            pass

    @staticmethod
    def _compare_versions(current: str, latest: str) -> bool:
@@ -24,6 +173,9 @@ class UpdateChecker:
        Returns:
            True if latest > current, False otherwise
        """
        if not UpdateChecker._is_valid_version(current) or not UpdateChecker._is_valid_version(latest):
            return False

        try:
            current_parts = [int(x) for x in current.split(".")]
            latest_parts = [int(x) for x in latest.split(".")]
|
||||
Returns:
|
||||
The latest version string if an update is available, None otherwise
|
||||
"""
|
||||
if not cls._is_valid_version(current_version):
|
||||
return None
|
||||
|
||||
try:
|
||||
loop = asyncio.get_event_loop()
|
||||
response = await loop.run_in_executor(None, lambda: requests.get(cls.REPO_URL, timeout=cls.TIMEOUT))
|
||||
latest_version = await loop.run_in_executor(None, cls._fetch_latest_version)
|
||||
|
||||
if response.status_code != 200:
|
||||
return None
|
||||
|
||||
data = response.json()
|
||||
latest_version = data.get("tag_name", "").lstrip("v")
|
||||
|
||||
if not latest_version:
|
||||
return None
|
||||
|
||||
if cls._compare_versions(current_version, latest_version):
|
||||
if latest_version and cls._compare_versions(current_version, latest_version):
|
||||
return latest_version
|
||||
|
||||
except Exception:
|
||||
@@ -75,32 +221,56 @@ class UpdateChecker:
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def check_for_updates_sync(cls, current_version: str) -> Optional[str]:
|
||||
def _get_cached_update_info(cls, current_version: str) -> Optional[str]:
|
||||
"""
|
||||
Synchronous version of update check.
|
||||
Check if there's a cached update available for the current version.
|
||||
|
||||
Args:
|
||||
current_version: The current version string
|
||||
|
||||
Returns:
|
||||
The latest version string if an update is available from cache, None otherwise
|
||||
"""
|
||||
cache_data = cls._load_cache_data()
|
||||
|
||||
if not cache_data:
|
||||
return None
|
||||
|
||||
cached_current = cache_data.get("current_version")
|
||||
cached_latest = cache_data.get("latest_version")
|
||||
|
||||
if cached_current == current_version and cached_latest:
|
||||
if cls._compare_versions(current_version, cached_latest):
|
||||
return cached_latest
|
||||
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def check_for_updates_sync(cls, current_version: str, check_interval: Optional[int] = None) -> Optional[str]:
|
||||
"""
|
||||
Synchronous version of update check with rate limiting.
|
||||
|
||||
Args:
|
||||
current_version: The current version string (e.g., "1.1.0")
|
||||
check_interval: Time in seconds between checks (default: from config)
|
||||
|
||||
Returns:
|
||||
The latest version string if an update is available, None otherwise
|
||||
"""
|
||||
try:
|
||||
response = requests.get(cls.REPO_URL, timeout=cls.TIMEOUT)
|
||||
if not cls._is_valid_version(current_version):
|
||||
return None
|
||||
|
||||
if response.status_code != 200:
|
||||
return None
|
||||
if check_interval is None:
|
||||
from unshackle.core.config import config
|
||||
|
||||
data = response.json()
|
||||
latest_version = data.get("tag_name", "").lstrip("v")
|
||||
check_interval = config.update_check_interval * 60 * 60
|
||||
|
||||
if not latest_version:
|
||||
return None
|
||||
if not cls._should_check_for_updates(check_interval):
|
||||
return cls._get_cached_update_info(current_version)
|
||||
|
||||
if cls._compare_versions(current_version, latest_version):
|
||||
return latest_version
|
||||
|
||||
except Exception:
|
||||
pass
|
||||
latest_version = cls._fetch_latest_version()
|
||||
cls._update_cache(latest_version, current_version)
|
||||
if latest_version and cls._compare_versions(current_version, latest_version):
|
||||
return latest_version
|
||||
|
||||
return None
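
    # Sketch of the intended call site (hypothetical version string, for illustration):
    #   newer = UpdateChecker.check_for_updates_sync("1.4.1")
    #   if newer:
    #       print(f"Update available: {newer}")
    # Inside the rate-limit window this answers from the cached update_check.json
    # instead of hitting the GitHub API again.
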
@@ -44,6 +44,89 @@ def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    return ratio >= threshold


def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[dict], Optional[str], Optional[int]]:
    """Search Simkl API for show information by filename (no auth required)."""
    log.debug("Searching Simkl for %r (%s, %s)", title, kind, year)

    # Construct appropriate filename based on type
    filename = f"{title}"
    if year:
        filename = f"{title} {year}"

    if kind == "tv":
        filename += " S01E01.mkv"
    else:  # movie
        filename += " 2160p.mkv"

    try:
        resp = requests.post("https://api.simkl.com/search/file", json={"file": filename}, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        log.debug("Simkl API response received")

        # Handle case where SIMKL returns empty list (no results)
        if isinstance(data, list):
            log.debug("Simkl returned list (no matches) for %r", filename)
            return None, None, None

        # Handle TV show responses
        if data.get("type") == "episode" and "show" in data:
            show_info = data["show"]
            show_title = show_info.get("title")
            show_year = show_info.get("year")

            # Verify title matches and year if provided
            if not fuzzy_match(show_title, title):
                log.debug("Simkl title mismatch: searched %r, got %r", title, show_title)
                return None, None, None
            if year and show_year and abs(year - show_year) > 1:  # Allow 1 year difference
                log.debug("Simkl year mismatch: searched %d, got %d", year, show_year)
                return None, None, None

            tmdb_id = show_info.get("ids", {}).get("tmdbtv")
            if tmdb_id:
                tmdb_id = int(tmdb_id)
            log.debug("Simkl -> %s (TMDB ID %s)", show_title, tmdb_id)
            return data, show_title, tmdb_id

        # Handle movie responses
        elif data.get("type") == "movie" and "movie" in data:
            movie_info = data["movie"]
            movie_title = movie_info.get("title")
            movie_year = movie_info.get("year")

            # Verify title matches and year if provided
            if not fuzzy_match(movie_title, title):
                log.debug("Simkl title mismatch: searched %r, got %r", title, movie_title)
                return None, None, None
            if year and movie_year and abs(year - movie_year) > 1:  # Allow 1 year difference
                log.debug("Simkl year mismatch: searched %d, got %d", year, movie_year)
                return None, None, None

            ids = movie_info.get("ids", {})
            tmdb_id = ids.get("tmdb") or ids.get("moviedb")
            if tmdb_id:
                tmdb_id = int(tmdb_id)
            log.debug("Simkl -> %s (TMDB ID %s)", movie_title, tmdb_id)
            return data, movie_title, tmdb_id

    except (requests.RequestException, ValueError, KeyError) as exc:
        log.debug("Simkl search failed: %s", exc)

    return None, None, None
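
# The probe filename drives Simkl's matcher, e.g. (hypothetical titles):
#   search_simkl("Severance", 2022, "tv")  posts {"file": "Severance 2022 S01E01.mkv"}
#   search_simkl("Heat", 1995, "movie")    posts {"file": "Heat 1995 2160p.mkv"}
# and the fuzzy title/year checks guard against Simkl returning an unrelated record.
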
def search_show_info(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str], Optional[str]]:
    """Search for show information, trying Simkl first, then TMDB fallback. Returns (tmdb_id, title, source)."""
    simkl_data, simkl_title, simkl_tmdb_id = search_simkl(title, year, kind)

    if simkl_data and simkl_title and fuzzy_match(simkl_title, title):
        return simkl_tmdb_id, simkl_title, "simkl"

    tmdb_id, tmdb_title = search_tmdb(title, year, kind)
    return tmdb_id, tmdb_title, "tmdb"


def search_tmdb(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str]]:
    api_key = _api_key()
    if not api_key:
@@ -202,10 +285,8 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
    log.debug("Tagging file %s with title %r", path, title)
    standard_tags: dict[str, str] = {}
    custom_tags: dict[str, str] = {}
    # To add custom information to the tags
    # custom_tags["Text to the left side"] = "Text to the right side"

    if config.tag:
    if config.tag and config.tag_group_name:
        custom_tags["Group"] = config.tag
    description = getattr(title, "description", None)
    if description:
@@ -216,12 +297,6 @@ ...
            description = truncated + "..."
        custom_tags["Description"] = description

    api_key = _api_key()
    if not api_key:
        log.debug("No TMDB API key set; applying basic tags only")
        _apply_tags(path, custom_tags)
        return

    if isinstance(title, Movie):
        kind = "movie"
        name = title.name
@@ -234,32 +309,60 @@ ...
        _apply_tags(path, custom_tags)
        return

    tmdb_title: Optional[str] = None
    if tmdb_id is None:
        tmdb_id, tmdb_title = search_tmdb(name, year, kind)
        log.debug("Search result: %r (ID %s)", tmdb_title, tmdb_id)
        if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
            log.debug("TMDB search did not match; skipping external ID lookup")
    if config.tag_imdb_tmdb:
        # If tmdb_id is provided (via --tmdb), skip Simkl and use TMDB directly
        if tmdb_id is not None:
            log.debug("Using provided TMDB ID %s for tags", tmdb_id)
        else:
            # Try Simkl first for automatic lookup
            simkl_data, simkl_title, simkl_tmdb_id = search_simkl(name, year, kind)

            if simkl_data and simkl_title and fuzzy_match(simkl_title, name):
                log.debug("Using Simkl data for tags")
                if simkl_tmdb_id:
                    tmdb_id = simkl_tmdb_id

                show_ids = simkl_data.get("show", {}).get("ids", {})
                if show_ids.get("imdb"):
                    standard_tags["IMDB"] = f"https://www.imdb.com/title/{show_ids['imdb']}"
                if show_ids.get("tvdb"):
                    standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/series/{show_ids['tvdb']}"
                if show_ids.get("tmdbtv"):
                    standard_tags["TMDB"] = f"https://www.themoviedb.org/tv/{show_ids['tmdbtv']}"

        # Use TMDB API for additional metadata (either from provided ID or Simkl lookup)
        api_key = _api_key()
        if not api_key:
            log.debug("No TMDB API key set; applying basic tags only")
            _apply_tags(path, custom_tags)
            return

    tmdb_url = f"https://www.themoviedb.org/{'movie' if kind == 'movie' else 'tv'}/{tmdb_id}"
    standard_tags["TMDB"] = tmdb_url
    try:
        ids = external_ids(tmdb_id, kind)
    except requests.RequestException as exc:
        log.debug("Failed to fetch external IDs: %s", exc)
        ids = {}
    else:
        log.debug("External IDs found: %s", ids)
        tmdb_title: Optional[str] = None
        if tmdb_id is None:
            tmdb_id, tmdb_title = search_tmdb(name, year, kind)
            log.debug("TMDB search result: %r (ID %s)", tmdb_title, tmdb_id)
            if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
                log.debug("TMDB search did not match; skipping external ID lookup")
                _apply_tags(path, custom_tags)
                return

    imdb_id = ids.get("imdb_id")
    if imdb_id:
        standard_tags["IMDB"] = f"https://www.imdb.com/title/{imdb_id}"
    tvdb_id = ids.get("tvdb_id")
    if tvdb_id:
        tvdb_prefix = "movies" if kind == "movie" else "series"
        standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/{tvdb_prefix}/{tvdb_id}"
        tmdb_url = f"https://www.themoviedb.org/{'movie' if kind == 'movie' else 'tv'}/{tmdb_id}"
        standard_tags["TMDB"] = tmdb_url
        try:
            ids = external_ids(tmdb_id, kind)
        except requests.RequestException as exc:
            log.debug("Failed to fetch external IDs: %s", exc)
            ids = {}
        else:
            log.debug("External IDs found: %s", ids)

        imdb_id = ids.get("imdb_id")
        if imdb_id:
            standard_tags["IMDB"] = f"https://www.imdb.com/title/{imdb_id}"
        tvdb_id = ids.get("tvdb_id")
        if tvdb_id:
            tvdb_prefix = "movies" if kind == "movie" else "series"
            standard_tags["TVDB"] = f"https://thetvdb.com/dereferrer/{tvdb_prefix}/{tvdb_id}"

    merged_tags = {
        **custom_tags,
@@ -269,6 +372,8 @@ ...


__all__ = [
    "search_simkl",
    "search_show_info",
    "search_tmdb",
    "get_title",
    "get_year",

@@ -33,6 +33,7 @@ class EXAMPLE(Service):

    TITLE_RE = r"^(?:https?://?domain\.com/details/)?(?P<title_id>[^/]+)"
    GEOFENCE = ("US", "UK")
    NO_SUBTITLES = True

    @staticmethod
    @click.command(name="EXAMPLE", short_help="https://domain.com")

@@ -1,20 +1,55 @@
# Group or Username to postfix to the end of all download filenames following a dash
tag: user_tag

# Enable/disable tagging with group name (default: true)
tag_group_name: true

# Enable/disable tagging with IMDB/TMDB/TVDB details (default: true)
tag_imdb_tmdb: true

# Set terminal background color (custom option not in CONFIG.md)
set_terminal_bg: false

# Set file naming convention
# true for style - Prime.Suspect.S07E01.The.Final.Act.Part.One.1080p.ITV.WEB-DL.AAC2.0.H.264
# false for style - Prime Suspect S07E01 The Final Act - Part One
scene_naming: true

# Check for updates from GitHub repository on startup (default: true)
update_checks: true

# How often to check for updates, in hours (default: 24)
update_check_interval: 24

# Title caching configuration
# Cache title metadata to reduce redundant API calls
title_cache_enabled: true  # Enable/disable title caching globally (default: true)
title_cache_time: 1800  # Cache duration in seconds (default: 1800 = 30 minutes)
title_cache_max_retention: 86400  # Maximum cache retention for fallback when API fails (default: 86400 = 24 hours)

# Muxing configuration
muxing:
  set_title: false

# Login credentials for each Service
credentials:
  # Direct credentials (no profile support)
  EXAMPLE: email@example.com:password
  EXAMPLE2: username:password

  # Per-profile credentials with default fallback
  SERVICE_NAME:
    default: default@email.com:password  # Used when no -p/--profile is specified
    profile1: user1@email.com:password1
    profile2: user2@email.com:password2

  # Per-profile credentials without default (requires -p/--profile)
  SERVICE_NAME2:
    john: john@example.com:johnspassword
    jane: jane@example.com:janespassword

  # You can also use list format for passwords with special characters
  SERVICE_NAME3:
    default: ["user@email.com", ":PasswordWith:Colons"]

# Override default directories used across unshackle
directories:
@@ -36,8 +71,17 @@ directories:

# Pre-define which Widevine or PlayReady device to use for each Service
cdm:
  # Global default CDM device (fallback for all services/profiles)
  default: WVD_1
  EXAMPLE: PRD_1

  # Direct service-specific CDM
  DIFFERENT_EXAMPLE: PRD_1

  # Per-profile CDM configuration
  EXAMPLE:
    john_sd: chromecdm_903_l3  # Profile 'john_sd' uses Chrome CDM L3
    jane_uhd: nexus_5_l1  # Profile 'jane_uhd' uses Nexus 5 L1
    default: generic_android_l3  # Default CDM for this service

# Use pywidevine Serve-compliant Remote CDMs
remote_cdm:
@@ -154,20 +198,45 @@ serve:
# Configuration data for each Service
services:
  # Service-specific configuration goes here
  # EXAMPLE:
  #   api_key: "service_specific_key"
  # Profile-specific configurations can be nested under service names

  # Example: with profile-specific device configs
  EXAMPLE:
    # Global service config
    api_key: "service_api_key"

    # Profile-specific device configurations
    profiles:
      john_sd:
        device:
          app_name: "AIV"
          device_model: "SHIELD Android TV"
      jane_uhd:
        device:
          app_name: "AIV"
          device_model: "Fire TV Stick 4K"

  # Example: Service with different regions per profile
  SERVICE_NAME:
    profiles:
      us_account:
        region: "US"
        api_endpoint: "https://api.us.service.com"
      uk_account:
        region: "GB"
        api_endpoint: "https://api.uk.service.com"

# External proxy provider services
proxy_providers:
  nordvpn:
    username: username_from_service_credentials
    password: password_from_service_credentials
    servers:
      server_map:
        - us: 12  # force US server #12 for US proxies
  surfsharkvpn:
    username: your_surfshark_service_username  # Service credentials from https://my.surfshark.com/vpn/manual-setup/main/openvpn
    password: your_surfshark_service_password  # Service credentials (not your login password)
    servers:
      server_map:
        - us: 3844  # force US server #3844 for US proxies
        - gb: 2697  # force GB server #2697 for GB proxies
        - au: 4621  # force AU server #4621 for AU proxies
@@ -30,7 +30,7 @@ class HTTP(Vault):
            api_mode: "query" for query parameters or "json" for JSON API
        """
        super().__init__(name)
        self.url = host.rstrip("/")
        self.url = host
        self.password = password
        self.username = username
        self.api_mode = api_mode.lower()
@@ -88,21 +88,23 @@ class HTTP(Vault):

        if self.api_mode == "json":
            try:
                title = getattr(self, "current_title", None)
                response = self.request(
                    "GetKey",
                    {
                        "kid": kid,
                        "service": service.lower(),
                        "title": title,
                    },
                )
                params = {
                    "kid": kid,
                    "service": service.lower(),
                }

                response = self.request("GetKey", params)
                if response.get("status") == "not_found":
                    return None
                keys = response.get("keys", [])
                for key_entry in keys:
                    if key_entry["kid"] == kid:
                        return key_entry["key"]
                    if isinstance(key_entry, str) and ":" in key_entry:
                        entry_kid, entry_key = key_entry.split(":", 1)
                        if entry_kid == kid:
                            return entry_key
                    elif isinstance(key_entry, dict):
                        if key_entry.get("kid") == kid:
                            return key_entry.get("key")
            except Exception as e:
                print(f"Failed to get key ({e.__class__.__name__}: {e})")
                return None
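
            # Response shapes this parser tolerates (a sketch, not an official schema):
            #   {"status": "not_found"}
            #   {"keys": ["KID:KEY", ...]}                          # "kid:key" strings
            #   {"keys": [{"kid": "...", "key": "..."}, ...]}       # dict entries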