17 Commits

Author SHA1 Message Date
kenzuya
b308669221 Update Netflix 2026-03-17 14:04:36 +07:00
kenzuya
dfd3cdb8a2 Update Netflix 2026-03-14 16:42:21 +07:00
kenzuya
b61135175d Update Netflix service 2026-03-13 06:29:50 +07:00
kenzuya
528a62c243 Update config 2026-03-12 03:01:05 +07:00
kenzuya
81661a44b9 Update config 2026-03-11 00:48:08 +07:00
kenzuya
b22c422408 Update config and .gitignore 2026-03-11 00:45:24 +07:00
kenzuya
f4152bc777 Add Widevine and Playready Devices 2026-03-11 00:44:40 +07:00
kenzuya
9c7af72cad feat(netflix): support templated Android ESN generation
Add support for `{randomchar_N}` placeholders in Netflix Android `esn_map` values and generate those segments at runtime. Reuse a cached ESN only when it matches the derived template pattern, is Android-typed, and is not expired; otherwise regenerate and refresh the cache.

This keeps static ESN mappings working as before while enabling dynamic ESN templates (e.g., system_id `7110`) to avoid fixed identifiers and keep ESNs valid per template.
2026-03-10 14:58:08 +07:00
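The `{randomchar_N}` placeholder mechanism described in this commit can be sketched as follows. This is a minimal illustration, not the project's implementation: the helper names, the A-Z/0-9 character set, and the derived-pattern check are assumptions.

```python
import random
import re
import string

def expand_esn_template(template: str) -> str:
    """Expand each "{randomchar_N}" placeholder into N random alphanumerics."""
    def repl(match: re.Match) -> str:
        n = int(match.group(1))
        # Assumed character set: uppercase letters and digits.
        return "".join(random.choices(string.ascii_uppercase + string.digits, k=n))
    return re.sub(r"\{randomchar_(\d+)\}", repl, template)

def template_pattern(template: str) -> re.Pattern:
    """Derive a regex from the template so a cached ESN can be validated
    against it before reuse (the "matches the derived template pattern"
    condition in the commit message)."""
    out = ""
    for part in re.split(r"(\{randomchar_\d+\})", template):
        m = re.fullmatch(r"\{randomchar_(\d+)\}", part)
        out += f"[A-Z0-9]{{{m.group(1)}}}" if m else re.escape(part)
    return re.compile(out)

esn = expand_esn_template("NFANDROID1-PRV-{randomchar_8}")
assert template_pattern("NFANDROID1-PRV-{randomchar_8}").fullmatch(esn)
```

A cached ESN failing the `template_pattern` check would be discarded and regenerated, matching the cache-refresh behavior the commit describes.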
kenzuya
1244141df2 fix(netflix): align MSL manifest payload with Chrome Widevine
Update Netflix manifest request construction to better match current
Widevine-on-Chrome behavior by:
- setting top-level and param `clientVersion` to `9999999`
- sending `challenge` only for Chrome Widevine requests
- removing hardcoded device/platform fields from params

Also refresh Android TV ESN mappings in config by replacing ESN `7110`
and adding ESN `16401` for Hisense devices to improve request validity.
2026-03-10 12:45:59 +07:00
kenzuya
5dde031bd8 feat(netflix-msl): support UserIDToken auth and raw responses
Add `UserAuthentication.UserIDToken()` to build MSL user auth payloads
for token-based Netflix authentication flows.

Extend MSL message handling to be more flexible by:
- allowing custom HTTP headers in `send_message()`
- adding `unwrap_result` to `send_message()`, `parse_message()`, and
  `decrypt_payload_chunks()` so callers can receive either full payload
  data or only `result`

Also lower key/KID and payload logging from `info` to `debug` to reduce
noisy and sensitive runtime logs while keeping diagnostics available.
2026-03-10 00:54:59 +07:00
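The `unwrap_result` flag added here follows a common API pattern: return the full decoded payload by default, or unwrap to just its `result` field on request. A minimal sketch of the idea, separate from the project's MSL code; the payload layout is invented for illustration:

```python
import json

def parse_message(raw: str, unwrap_result: bool = True):
    """Decode a JSON payload, optionally unwrapping to its "result" field."""
    data = json.loads(raw)
    if unwrap_result:
        # Callers that only care about the application result get it directly.
        return data.get("result")
    # Callers doing their own inspection receive the full payload.
    return data

payload = '{"result": {"ok": true}, "meta": {"seq": 1}}'
assert parse_message(payload) == {"ok": True}
assert parse_message(payload, unwrap_result=False)["meta"] == {"seq": 1}
```

Threading the same flag through `send_message()` and `decrypt_payload_chunks()` keeps the default behavior unchanged while letting new token-based flows inspect the raw response.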
kenzuya
a07302cb88 chore(gitignore): ignore capitalized Logs directory too
Add `Logs` to `.gitignore` so log output from environments that use an uppercase directory name is not accidentally staged or committed.
2026-03-10 00:54:47 +07:00
kenzuya
0a820e6552 fix(dl): normalize non-scene episode folder/file naming
Ensure episode downloads use a consistent non-scene layout when
`scene_naming` is disabled by:

- adding a sanitized series-title directory before season/episode folders
  for sidecars, sample-based paths, and muxed outputs
- updating `Episode.get_filename()` to return `Season XX` for folder names
  in non-scene mode
- generating non-scene episode file names as
  `Title SXXEXX - Episode Name`
- adding token append helpers to avoid duplicate/empty naming tokens

This keeps output paths predictable across download stages and prevents
naming inconsistencies or duplicated suffixes.
2026-03-02 22:35:48 +07:00
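The non-scene layout this commit describes (a sanitized series folder, `Season XX` subfolders, `Title SXXEXX - Episode Name` files) can be illustrated with a small sketch. The function here is illustrative only, not the project's `get_filename()`:

```python
from pathlib import PurePosixPath

def non_scene_path(title: str, season: int, number: int, episode_name: str) -> PurePosixPath:
    # Series dir / "Season XX" / "Title SXXEXX - Episode Name"
    series_dir = title.replace("$", "S")  # e.g. Arli$$ -> ArliSS, as in the source
    return (
        PurePosixPath(series_dir)
        / f"Season {season:02}"
        / f"{series_dir} S{season:02}E{number:02} - {episode_name}"
    )

p = non_scene_path("Dark", 1, 3, "Past and Present")
assert str(p) == "Dark/Season 01/Dark S01E03 - Past and Present"
```

Because every download stage derives the same three components, sidecars, samples, and muxed outputs all land under one predictable tree.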
kenzuya
8748ce8a11 feat(movie): improve non-scene filename sanitization
Add dedicated non-scene filename sanitization for movies to produce cleaner, filesystem-safe names when `scene_naming` is disabled. The new logic:
- optionally transliterates unicode based on `unicode_filenames`
- removes combining marks/diacritics and disallowed punctuation
- normalizes separators and extra whitespace

Also update movie name construction to explicitly format `Name (Year)` and append a trailing ` -` for non-scene naming before metadata tokens, improving readability and consistency.

Update default config values in `unshackle.yaml` to match non-scene/local usage:
- disable `scene_naming` by default
- comment out default `tag`
- change downloads directory to `Downloads`
2026-03-02 19:42:53 +07:00
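The sanitization steps listed above can be sketched as a standalone function. Note this uses stdlib `unicodedata` (NFKD decomposition plus stripping combining marks) in place of the `unidecode` transliteration the commit actually uses, so the behavior differs for characters with no decomposition; the punctuation set mirrors the commit's intent but is an approximation:

```python
import re
import unicodedata

def sanitize_non_scene(filename: str) -> str:
    # Decompose accented characters, then drop combining marks (diacritics).
    filename = unicodedata.normalize("NFKD", filename)
    filename = "".join(c for c in filename if unicodedata.category(c) != "Mn")
    # Path separators read better as " & "; strip other disallowed punctuation.
    filename = filename.replace("/", " & ").replace(";", " & ")
    filename = re.sub(r"[\\:*!?¿,'\"<>|$#~]", "", filename)
    # Collapse whitespace runs left behind by the removals.
    return re.sub(r"\s{2,}", " ", filename).strip()

assert sanitize_non_scene("Amélie: What's Up?") == "Amelie Whats Up"
```

The result stays readable for local library use while avoiding characters that are invalid or awkward on common filesystems.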
kenzuya
3e45f3efe7 fix(netflix): harden ESN cache checks and Widevine type test
Handle Netflix ESN cache values more defensively to avoid key/type errors and
stale reuse by validating cache shape, cache expiry, and device type before
reusing values. Also log the final ESN safely when cache data is not a dict.

Alias `pywidevine.Cdm` to `WidevineCDM` and use it in DRM system detection so
Widevine instances are identified correctly.

Also include related config updates: add ESN map entry for system ID `12063`,
ignore `binaries/`, and refresh local runtime defaults in `unshackle.yaml`.
2026-03-02 17:29:32 +07:00
kenzuya
fb14f412d4 update .gitignore 2026-03-02 02:59:45 +07:00
kenzuya
27048d56ee fix(netflix): scope 720p QC filter to explicit 720 requests
Refine QC manifest profile selection so `l40` profiles are filtered out
only when the user requests **only** 720p, instead of whenever 720 is
included. This prevents unintended profile narrowing for mixed-quality
requests and default quality runs.

Also bump the Netflix client `platform` version from `138.0.0.0` to
`145.0.0.0` to keep manifest requests aligned with current expectations.
2026-03-02 02:59:33 +07:00
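The scoping rule above reduces to a small predicate: filter `l40` profiles only when 720p is the sole requested quality. A minimal sketch (the function name is illustrative, not from the source):

```python
def should_filter_l40(requested_qualities: list[int]) -> bool:
    # True only when the user asked for exactly 720p and nothing else;
    # mixed requests like [720, 1080] keep the full profile list.
    return set(requested_qualities) == {720}

assert should_filter_l40([720]) is True
assert should_filter_l40([720, 1080]) is False  # 720 merely *included*: no filtering
assert should_filter_l40([1080]) is False
```

The earlier behavior, checking only membership (`720 in requested_qualities`), is what caused the unintended narrowing for mixed-quality and default runs.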
kenzuya
66ba78a928 Update payload challenge 2026-03-02 02:58:56 +07:00
19 changed files with 974 additions and 511 deletions

.gitignore
View File

@@ -6,8 +6,6 @@ update_check.json
 *.exe
 *.dll
 *.crt
-*.wvd
-*.prd
 *.der
 *.pem
 *.bin
@@ -21,12 +19,11 @@ device_vmp_blob
 unshackle/cache/
 unshackle/cookies/
 unshackle/certs/
-unshackle/WVDs/
-unshackle/PRDs/
 temp/
 logs/
-services/
 Temp/
+binaries/
+Logs

 # Byte-compiled / optimized / DLL files
 __pycache__/

View File

@@ -60,7 +60,7 @@ from unshackle.core.tracks import Audio, Subtitle, Tracks, Video
 from unshackle.core.tracks.attachment import Attachment
 from unshackle.core.tracks.hybrid import Hybrid
 from unshackle.core.utilities import (find_font_with_fallbacks, get_debug_logger, get_system_fonts, init_debug_logger,
-                                      is_close_match, suggest_font_packages, time_elapsed_since)
+                                      is_close_match, sanitize_filename, suggest_font_packages, time_elapsed_since)
 from unshackle.core.utils import tags
 from unshackle.core.utils.click_types import (AUDIO_CODEC_LIST, LANGUAGE_RANGE, QUALITY_LIST, SEASON_RANGE,
                                               ContextData, MultipleChoice, SubtitleCodecChoice, VideoCodecChoice)
@@ -2015,6 +2015,8 @@ class dl:
         sidecar_dir = config.directories.downloads
         if not no_folder and isinstance(title, (Episode, Song)) and media_info:
+            if isinstance(title, Episode) and not config.scene_naming:
+                sidecar_dir /= sanitize_filename(title.title.replace("$", "S"), " ")
             sidecar_dir /= title.get_filename(media_info, show_service=not no_source, folder=True)
             sidecar_dir.mkdir(parents=True, exist_ok=True)
@@ -2080,6 +2082,8 @@ class dl:
         )
         if sample_track and sample_track.path:
             media_info = MediaInfo.parse(sample_track.path)
+            if isinstance(title, Episode) and not config.scene_naming:
+                final_dir /= sanitize_filename(title.title.replace("$", "S"), " ")
             final_dir /= title.get_filename(media_info, show_service=not no_source, folder=True)
             final_dir.mkdir(parents=True, exist_ok=True)
@@ -2122,6 +2126,8 @@ class dl:
         audio_codec_suffix = muxed_audio_codecs.get(muxed_path)
         if not no_folder and isinstance(title, (Episode, Song)):
+            if isinstance(title, Episode) and not config.scene_naming:
+                final_dir /= sanitize_filename(title.title.replace("$", "S"), " ")
             final_dir /= title.get_filename(media_info, show_service=not no_source, folder=True)
             final_dir.mkdir(parents=True, exist_ok=True)

View File

@@ -123,14 +123,36 @@ class Episode(Title):
                 scan_suffix = "i"
             return f"{resolution}{scan_suffix}"

+        def _append_token(current: str, token: Optional[str]) -> str:
+            token = (token or "").strip()
+            current = current.rstrip()
+            if not token:
+                return current
+            if current.endswith(f" {token}"):
+                return current
+            return f"{current} {token}"
+
+        def _append_unique_token(tokens: list[str], token: Optional[str]) -> None:
+            token = (token or "").strip()
+            if token and token not in tokens:
+                tokens.append(token)
+
+        if folder and not config.scene_naming:
+            return sanitize_filename(f"Season {self.season:02}", " ")
+
+        non_scene_episode_file = not config.scene_naming and not folder
+        extra_tokens: list[str] = []
+
         # Title [Year] SXXEXX Name (or Title [Year] SXX if folder)
         if folder:
             name = f"{self.title}"
             if self.year and config.series_year:
                 name += f" {self.year}"
             name += f" S{self.season:02}"
-        else:
-            if config.dash_naming:
+        elif non_scene_episode_file:
+            episode_label = self.name or f"Episode {self.number:02}"
+            name = f"{self.title.replace('$', 'S')} S{self.season:02}E{self.number:02} - {episode_label}"
+        elif config.dash_naming:
             # Format: Title - SXXEXX - Episode Name
             name = self.title.replace("$", "S")  # e.g., Arli$$
@@ -159,10 +181,13 @@ class Episode(Title):
         if primary_video_track:
             resolution_token = _get_resolution_token(primary_video_track)
             if resolution_token:
-                name += f" {resolution_token}"
+                if non_scene_episode_file:
+                    _append_unique_token(extra_tokens, resolution_token)
+                else:
+                    name += f" {resolution_token}"

         # Service (use track source if available)
-        if show_service:
+        if show_service and config.scene_naming:
             source_name = None
             if self.tracks:
                 first_track = next(iter(self.tracks), None)
@@ -171,14 +196,21 @@ class Episode(Title):
             name += f" {source_name or self.service.__name__}"

         # 'WEB-DL'
+        if config.scene_naming:
             name += " WEB-DL"

         # DUAL
         if unique_audio_languages == 2:
-            name += " DUAL"
+            if non_scene_episode_file:
+                _append_unique_token(extra_tokens, "DUAL")
+            else:
+                name += " DUAL"

         # MULTi
         if unique_audio_languages > 2:
-            name += " MULTi"
+            if non_scene_episode_file:
+                _append_unique_token(extra_tokens, "MULTi")
+            else:
+                name += " MULTi"

         # Audio Codec + Channels (+ feature)
@@ -194,8 +226,15 @@ class Episode(Title):
                 channels = float(channel_count)
             features = primary_audio_track.format_additionalfeatures or ""
-            name += f" {AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
+            audio_token = f"{AUDIO_CODEC_MAP.get(codec, codec)}{channels:.1f}"
+            if non_scene_episode_file:
+                _append_unique_token(extra_tokens, audio_token)
+            else:
+                name += f" {audio_token}"
             if "JOC" in features or primary_audio_track.joc:
-                name += " Atmos"
+                if non_scene_episode_file:
+                    _append_unique_token(extra_tokens, "Atmos")
+                else:
+                    name += " Atmos"

         # Video (dynamic range + hfr +) Codec
@@ -210,36 +249,55 @@ class Episode(Title):
             )
             frame_rate = float(primary_video_track.frame_rate)

-            def _append_token(current: str, token: Optional[str]) -> str:
-                token = (token or "").strip()
-                current = current.rstrip()
-                if not token:
-                    return current
-                if current.endswith(f" {token}"):
-                    return current
-                return f"{current} {token}"
-
             # Primary HDR format detection
             if hdr_format:
                 if hdr_format_full.startswith("Dolby Vision"):
-                    name = _append_token(name, "DV")
+                    if non_scene_episode_file:
+                        _append_unique_token(extra_tokens, "DV")
+                    else:
+                        name = _append_token(name, "DV")
                     if any(
                         indicator in (hdr_format_full + " " + hdr_format)
                         for indicator in ["HDR10", "SMPTE ST 2086"]
                     ):
-                        name = _append_token(name, "HDR")
+                        if non_scene_episode_file:
+                            _append_unique_token(extra_tokens, "HDR")
+                        else:
+                            name = _append_token(name, "HDR")
                 elif "HDR Vivid" in hdr_format:
-                    name = _append_token(name, "HDR")
+                    if non_scene_episode_file:
+                        _append_unique_token(extra_tokens, "HDR")
+                    else:
+                        name = _append_token(name, "HDR")
                 else:
                     dynamic_range = DYNAMIC_RANGE_MAP.get(hdr_format) or hdr_format or ""
-                    name = _append_token(name, dynamic_range)
+                    if non_scene_episode_file:
+                        _append_unique_token(extra_tokens, dynamic_range)
+                    else:
+                        name = _append_token(name, dynamic_range)
             elif "HLG" in trc or "Hybrid Log-Gamma" in trc or "ARIB STD-B67" in trc or "arib-std-b67" in trc.lower():
-                name += " HLG"
+                if non_scene_episode_file:
+                    _append_unique_token(extra_tokens, "HLG")
+                else:
+                    name += " HLG"
             elif any(indicator in trc for indicator in ["PQ", "SMPTE ST 2084", "BT.2100"]) or "smpte2084" in trc.lower() or "bt.2020-10" in trc.lower():
-                name += " HDR"
+                if non_scene_episode_file:
+                    _append_unique_token(extra_tokens, "HDR")
+                else:
+                    name += " HDR"

             if frame_rate > 30:
-                name += " HFR"
+                if non_scene_episode_file:
+                    _append_unique_token(extra_tokens, "HFR")
+                else:
+                    name += " HFR"

-            name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"
+            video_codec_token = VIDEO_CODEC_MAP.get(codec, codec)
+            if non_scene_episode_file:
+                _append_unique_token(extra_tokens, video_codec_token)
+            else:
+                name += f" {video_codec_token}"
+
+        if non_scene_episode_file and extra_tokens:
+            name += f" - {' '.join(extra_tokens)}"

         if config.tag:
             name += f"-{config.tag}"

View File

@@ -1,3 +1,5 @@
+import re
+import unicodedata
 from abc import ABC
 from typing import Any, Iterable, Optional, Union
@@ -5,6 +7,7 @@ from langcodes import Language
 from pymediainfo import MediaInfo
 from rich.tree import Tree
 from sortedcontainers import SortedKeyList
+from unidecode import unidecode

 from unshackle.core.config import config
 from unshackle.core.constants import AUDIO_CODEC_MAP, DYNAMIC_RANGE_MAP, VIDEO_CODEC_MAP
@@ -52,6 +55,18 @@ class Movie(Title):
             return f"{self.name} ({self.year})"
         return self.name

+    @staticmethod
+    def _sanitize_non_scene_filename(filename: str) -> str:
+        if not config.unicode_filenames:
+            filename = unidecode(filename)
+        filename = "".join(c for c in filename if unicodedata.category(c) != "Mn")
+        filename = filename.replace("/", " & ").replace(";", " & ")
+        filename = re.sub(r"[\\:*!?¿,'\"<>|$#~]", "", filename)
+        filename = re.sub(r"\s{2,}", " ", filename)
+        return filename.strip()
+
     def get_filename(self, media_info: MediaInfo, folder: bool = False, show_service: bool = True) -> str:
         primary_video_track = next(iter(media_info.video_tracks), None)
         primary_audio_track = None
@@ -89,7 +104,11 @@ class Movie(Title):
             return f"{resolution}{scan_suffix}"

         # Name (Year)
-        name = str(self).replace("$", "S")  # e.g., Arli$$
+        name = self.name.replace("$", "S")  # e.g., Arli$$
+        if self.year:
+            name = f"{name} ({self.year})"
+        if not config.scene_naming:
+            name += " -"

         if primary_video_track:
             resolution_token = _get_resolution_token(primary_video_track)
@@ -179,7 +198,9 @@ class Movie(Title):
         if config.tag:
             name += f"-{config.tag}"

-        return sanitize_filename(name, "." if config.scene_naming else " ")
+        if config.scene_naming:
+            return sanitize_filename(name, ".")
+        return self._sanitize_non_scene_filename(name)

 class Movies(SortedKeyList, ABC):

View File

@@ -10,7 +10,7 @@ import time
import zlib import zlib
from datetime import datetime from datetime import datetime
from io import BytesIO from io import BytesIO
from typing import Optional, Any from typing import Any, Optional
import jsonpickle import jsonpickle
import requests import requests
@@ -19,15 +19,18 @@ from Cryptodome.Hash import HMAC, SHA256
from Cryptodome.PublicKey import RSA from Cryptodome.PublicKey import RSA
from Cryptodome.Random import get_random_bytes from Cryptodome.Random import get_random_bytes
from Cryptodome.Util import Padding from Cryptodome.Util import Padding
from pywidevine import PSSH, Cdm, Key
from unshackle.core.cacher import Cacher from unshackle.core.cacher import Cacher
from .MSLKeys import MSLKeys from .MSLKeys import MSLKeys
from .schemes import EntityAuthenticationSchemes # noqa: F401 from .schemes import (
from .schemes import KeyExchangeSchemes EntityAuthenticationSchemes, # noqa: F401
KeyExchangeSchemes,
)
from .schemes.EntityAuthentication import EntityAuthentication from .schemes.EntityAuthentication import EntityAuthentication
from .schemes.KeyExchangeRequest import KeyExchangeRequest from .schemes.KeyExchangeRequest import KeyExchangeRequest
from pywidevine import Cdm, PSSH, Key
class MSL: class MSL:
log = logging.getLogger("MSL") log = logging.getLogger("MSL")
@@ -41,7 +44,16 @@ class MSL:
self.message_id = message_id self.message_id = message_id
@classmethod @classmethod
def handshake(cls, scheme: KeyExchangeSchemes, session: requests.Session, endpoint: str, sender: str, cache: Cacher, cdm: Optional[Cdm] = None, config: Any = None): def handshake(
cls,
scheme: KeyExchangeSchemes,
session: requests.Session,
endpoint: str,
sender: str,
cache: Cacher,
cdm: Optional[Cdm] = None,
config: Any = None,
):
cache = cache.get(sender) cache = cache.get(sender)
message_id = random.randint(0, pow(2, 52)) message_id = random.randint(0, pow(2, 52))
msl_keys = MSL.load_cache_data(cache) msl_keys = MSL.load_cache_data(cache)
@@ -53,20 +65,18 @@ class MSL:
if scheme != KeyExchangeSchemes.Widevine: if scheme != KeyExchangeSchemes.Widevine:
msl_keys.rsa = RSA.generate(2048) msl_keys.rsa = RSA.generate(2048)
if scheme == KeyExchangeSchemes.Widevine: if scheme == KeyExchangeSchemes.Widevine:
if not cdm: if not cdm:
raise Exception('Key exchange scheme Widevine but CDM instance is None.') raise Exception("Key exchange scheme Widevine but CDM instance is None.")
session_id = cdm.open() session_id = cdm.open()
msl_keys.cdm_session = session_id msl_keys.cdm_session = session_id
cdm.set_service_certificate(session_id, config["certificate"]) cdm.set_service_certificate(session_id, config["certificate"])
challenge = cdm.get_license_challenge( challenge = cdm.get_license_challenge(
session_id=session_id, session_id,
pssh=PSSH("AAAANHBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAABQIARIQAAAAAAPSZ0kAAAAAAAAAAA=="), PSSH("AAAANHBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAABQIARIQAAAAAAPSZ0kAAAAAAAAAAA=="),
license_type="OFFLINE", "OFFLINE",
privacy_mode=True, True,
) )
keyrequestdata = KeyExchangeRequest.Widevine(challenge) keyrequestdata = KeyExchangeRequest.Widevine(challenge)
entityauthdata = EntityAuthentication.Unauthenticated(sender) entityauthdata = EntityAuthentication.Unauthenticated(sender)
@@ -76,47 +86,48 @@ class MSL:
keyrequestdata = KeyExchangeRequest.AsymmetricWrapped( keyrequestdata = KeyExchangeRequest.AsymmetricWrapped(
keypairid="superKeyPair", keypairid="superKeyPair",
mechanism="JWK_RSA", mechanism="JWK_RSA",
publickey=msl_keys.rsa.publickey().exportKey(format="DER") publickey=msl_keys.rsa.publickey().exportKey(format="DER"),
) )
data = jsonpickle.encode({ data = jsonpickle.encode(
{
"entityauthdata": entityauthdata, "entityauthdata": entityauthdata,
"headerdata": base64.b64encode(MSL.generate_msg_header( "headerdata": base64.b64encode(
message_id=message_id, MSL.generate_msg_header(
sender=sender, message_id=message_id, sender=sender, is_handshake=True, keyrequestdata=keyrequestdata
is_handshake=True, ).encode("utf-8")
keyrequestdata=keyrequestdata ).decode("utf-8"),
).encode("utf-8")).decode("utf-8"), "signature": "",
"signature": "" },
}, unpicklable=False) unpicklable=False,
data += json.dumps({ )
"payload": base64.b64encode(json.dumps({ data += json.dumps(
"messageid": message_id, {
"data": "", "payload": base64.b64encode(
"sequencenumber": 1, json.dumps({"messageid": message_id, "data": "", "sequencenumber": 1, "endofmsg": True}).encode(
"endofmsg": True "utf-8"
}).encode("utf-8")).decode("utf-8"), )
"signature": "" ).decode("utf-8"),
}) "signature": "",
}
)
try: try:
r = session.post( r = session.post(url=endpoint, data=data)
url=endpoint,
data=data
)
except requests.HTTPError as e: except requests.HTTPError as e:
raise Exception(f"- Key exchange failed, response data is unexpected: {e.response.text}") raise Exception(f"- Key exchange failed, response data is unexpected: {e.response.text}")
key_exchange = r.json() # expecting no payloads, so this is fine key_exchange = r.json() # expecting no payloads, so this is fine
if "errordata" in key_exchange: if "errordata" in key_exchange:
raise Exception("- Key exchange failed: " + json.loads(base64.b64decode( raise Exception(
key_exchange["errordata"] "- Key exchange failed: "
).decode())["errormsg"]) + json.loads(base64.b64decode(key_exchange["errordata"]).decode())["errormsg"]
)
# parse the crypto keys # parse the crypto keys
key_response_data = json.JSONDecoder().decode(base64.b64decode( key_response_data = json.JSONDecoder().decode(base64.b64decode(key_exchange["headerdata"]).decode("utf-8"))[
key_exchange["headerdata"] "keyresponsedata"
).decode("utf-8"))["keyresponsedata"] ]
if key_response_data["scheme"] != str(scheme): if key_response_data["scheme"] != str(scheme):
raise Exception("- Key exchange scheme mismatch occurred") raise Exception("- Key exchange scheme mismatch occurred")
@@ -129,46 +140,40 @@ class MSL:
raise Exception("- No CDM available") raise Exception("- No CDM available")
cdm.parse_license(msl_keys.cdm_session, key_data["cdmkeyresponse"]) cdm.parse_license(msl_keys.cdm_session, key_data["cdmkeyresponse"])
keys = cdm.get_keys(msl_keys.cdm_session) keys = cdm.get_keys(msl_keys.cdm_session)
cls.log.info(f"Keys: {keys}") cls.log.debug(f"Keys: {keys}")
encryption_key = MSL.get_widevine_key( encryption_key = MSL.get_widevine_key(
kid=base64.b64decode(key_data["encryptionkeyid"]), kid=base64.b64decode(key_data["encryptionkeyid"]),
keys=keys, keys=keys,
permissions=["allow_encrypt", "allow_decrypt"] permissions=["allow_encrypt", "allow_decrypt"],
) )
msl_keys.encryption = encryption_key msl_keys.encryption = encryption_key
cls.log.info(f"Encryption key: {encryption_key}") cls.log.debug(f"Encryption key: {encryption_key}")
sign = MSL.get_widevine_key( sign = MSL.get_widevine_key(
kid=base64.b64decode(key_data["hmackeyid"]), kid=base64.b64decode(key_data["hmackeyid"]),
keys=keys, keys=keys,
permissions=["allow_sign", "allow_signature_verify"] permissions=["allow_sign", "allow_signature_verify"],
) )
cls.log.info(f"Sign key: {sign}") cls.log.debug(f"Sign key: {sign}")
msl_keys.sign = sign msl_keys.sign = sign
elif scheme == KeyExchangeSchemes.AsymmetricWrapped: elif scheme == KeyExchangeSchemes.AsymmetricWrapped:
cipher_rsa = PKCS1_OAEP.new(msl_keys.rsa) cipher_rsa = PKCS1_OAEP.new(msl_keys.rsa)
msl_keys.encryption = MSL.base64key_decode( msl_keys.encryption = MSL.base64key_decode(
json.JSONDecoder().decode(cipher_rsa.decrypt( json.JSONDecoder().decode(
base64.b64decode(key_data["encryptionkey"]) cipher_rsa.decrypt(base64.b64decode(key_data["encryptionkey"])).decode("utf-8")
).decode("utf-8"))["k"] )["k"]
) )
msl_keys.sign = MSL.base64key_decode( msl_keys.sign = MSL.base64key_decode(
json.JSONDecoder().decode(cipher_rsa.decrypt( json.JSONDecoder().decode(
base64.b64decode(key_data["hmackey"]) cipher_rsa.decrypt(base64.b64decode(key_data["hmackey"])).decode("utf-8")
).decode("utf-8"))["k"] )["k"]
) )
msl_keys.mastertoken = key_response_data["mastertoken"] msl_keys.mastertoken = key_response_data["mastertoken"]
MSL.cache_keys(msl_keys, cache) MSL.cache_keys(msl_keys, cache)
cls.log.info("MSL handshake successful") cls.log.info("MSL handshake successful")
return cls( return cls(session=session, endpoint=endpoint, sender=sender, keys=msl_keys, message_id=message_id)
session=session,
endpoint=endpoint,
sender=sender,
keys=msl_keys,
message_id=message_id
)
@staticmethod @staticmethod
def load_cache_data(cacher: Cacher): def load_cache_data(cacher: Cacher):
@@ -184,9 +189,24 @@ class MSL:
# to an RsaKey :) # to an RsaKey :)
msl_keys.rsa = RSA.importKey(msl_keys.rsa) msl_keys.rsa = RSA.importKey(msl_keys.rsa)
# If it's expired or close to, return None as it's unusable # If it's expired or close to, return None as it's unusable
if msl_keys.mastertoken and ((datetime.utcfromtimestamp(int(json.JSONDecoder().decode( if (
msl_keys.mastertoken
and (
(
datetime.utcfromtimestamp(
int(
json.JSONDecoder().decode(
base64.b64decode(msl_keys.mastertoken["tokendata"]).decode("utf-8") base64.b64decode(msl_keys.mastertoken["tokendata"]).decode("utf-8")
)["expiration"])) - datetime.now()).total_seconds() / 60 / 60) < 10: )["expiration"]
)
)
- datetime.now()
).total_seconds()
/ 60
/ 60
)
< 10
):
return None return None
return msl_keys return msl_keys
@@ -204,8 +224,9 @@ class MSL:
msl_keys.rsa = RSA.importKey(msl_keys.rsa) msl_keys.rsa = RSA.importKey(msl_keys.rsa)
@staticmethod @staticmethod
def generate_msg_header(message_id, sender, is_handshake, userauthdata=None, keyrequestdata=None, def generate_msg_header(
compression="GZIP"): message_id, sender, is_handshake, userauthdata=None, keyrequestdata=None, compression="GZIP"
):
""" """
The MSL header carries all MSL data used for entity and user authentication, message encryption The MSL header carries all MSL data used for entity and user authentication, message encryption
and verification, and service tokens. Portions of the MSL header are encrypted. and verification, and service tokens. Portions of the MSL header are encrypted.
@@ -228,7 +249,7 @@ class MSL:
"capabilities": { "capabilities": {
"compressionalgos": [compression] if compression else [], "compressionalgos": [compression] if compression else [],
"languages": ["en-US"], # bcp-47 "languages": ["en-US"], # bcp-47
"encoderformats": ["JSON"] "encoderformats": ["JSON"],
}, },
"timestamp": int(time.time()), "timestamp": int(time.time()),
# undocumented or unused: # undocumented or unused:
@@ -244,7 +265,7 @@ class MSL:
@classmethod @classmethod
def get_widevine_key(cls, kid, keys: list[Key], permissions): def get_widevine_key(cls, kid, keys: list[Key], permissions):
cls.log.info(f"KID: {Key.kid_to_uuid(kid)}") cls.log.debug(f"KID: {Key.kid_to_uuid(kid)}")
for key in keys: for key in keys:
# cls.log.info(f"KEY: {key.kid_to_uuid}") # cls.log.info(f"KEY: {key.kid_to_uuid}")
if key.kid != Key.kid_to_uuid(kid): if key.kid != Key.kid_to_uuid(kid):
@@ -258,10 +279,10 @@ class MSL:
return key.key return key.key
return None return None
def send_message(self, endpoint, params, application_data, userauthdata=None): def send_message(self, endpoint, params, application_data, userauthdata=None, headers=None, unwrap_result=True):
message = self.create_message(application_data, userauthdata) message = self.create_message(application_data, userauthdata)
res = self.session.post(url=endpoint, data=message, params=params) res = self.session.post(url=endpoint, data=message, params=params, headers=headers)
header, payload_data = self.parse_message(res.text) header, payload_data = self.parse_message(res.text, unwrap_result=unwrap_result)
if "errordata" in header: if "errordata" in header:
raise Exception( raise Exception(
"- MSL response message contains an error: {}".format( "- MSL response message contains an error: {}".format(
@@ -272,37 +293,46 @@ class MSL:
     def create_message(self, application_data, userauthdata=None):
         self.message_id += 1  # a new message must use a new message id
-        headerdata = self.encrypt(self.generate_msg_header(
-            message_id=self.message_id,
-            sender=self.sender,
-            is_handshake=False,
-            userauthdata=userauthdata
-        ))
+        headerdata = self.encrypt(
+            self.generate_msg_header(
+                message_id=self.message_id, sender=self.sender, is_handshake=False, userauthdata=userauthdata
+            )
+        )
-        header = json.dumps({
-            "headerdata": base64.b64encode(headerdata.encode("utf-8")).decode("utf-8"),
-            "signature": self.sign(headerdata).decode("utf-8"),
-            "mastertoken": self.keys.mastertoken
-        })
+        header = json.dumps(
+            {
+                "headerdata": base64.b64encode(headerdata.encode("utf-8")).decode("utf-8"),
+                "signature": self.sign(headerdata).decode("utf-8"),
+                "mastertoken": self.keys.mastertoken,
+            }
+        )
-        payload_chunks = [self.encrypt(json.dumps({
-            "messageid": self.message_id,
-            "data": self.gzip_compress(json.dumps(application_data).encode("utf-8")).decode("utf-8"),
-            "compressionalgo": "GZIP",
-            "sequencenumber": 1,  # todo ; use sequence_number from master token instead?
-            "endofmsg": True
-        }))]
+        payload_chunks = [
+            self.encrypt(
+                json.dumps(
+                    {
+                        "messageid": self.message_id,
+                        "data": self.gzip_compress(json.dumps(application_data).encode("utf-8")).decode("utf-8"),
+                        "compressionalgo": "GZIP",
+                        "sequencenumber": 1,  # todo ; use sequence_number from master token instead?
+                        "endofmsg": True,
+                    }
+                )
+            )
+        ]
         message = header
         for payload_chunk in payload_chunks:
-            message += json.dumps({
-                "payload": base64.b64encode(payload_chunk.encode("utf-8")).decode("utf-8"),
-                "signature": self.sign(payload_chunk).decode("utf-8")
-            })
+            message += json.dumps(
+                {
+                    "payload": base64.b64encode(payload_chunk.encode("utf-8")).decode("utf-8"),
+                    "signature": self.sign(payload_chunk).decode("utf-8"),
+                }
+            )
         return message
 
-    def decrypt_payload_chunks(self, payload_chunks):
+    def decrypt_payload_chunks(self, payload_chunks, unwrap_result=True):
         """
         Decrypt and extract data from payload chunks
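As a minimal stdlib sketch of the payload-chunk body built above (assuming, as the `.decode("utf-8")` call suggests, that `gzip_compress` returns base64-encodable GZIP data — the function names here are illustrative, not the module's actual API):

```python
import base64
import gzip
import json


def build_payload_chunk(message_id: int, application_data: dict) -> dict:
    """Build the plaintext body of an MSL-style payload chunk (before encryption).

    The application data is JSON-serialized, GZIP-compressed, then
    base64-encoded, matching the "compressionalgo": "GZIP" declaration.
    """
    compressed = gzip.compress(json.dumps(application_data).encode("utf-8"))
    return {
        "messageid": message_id,
        "data": base64.b64encode(compressed).decode("utf-8"),
        "compressionalgo": "GZIP",
        "sequencenumber": 1,
        "endofmsg": True,
    }


def read_payload_chunk(chunk: dict) -> dict:
    """Reverse the transform: base64-decode, GZIP-decompress, JSON-parse."""
    raw = gzip.decompress(base64.b64decode(chunk["data"]))
    return json.loads(raw.decode("utf-8"))
```

Round-tripping `read_payload_chunk(build_payload_chunk(...))` recovers the original application data, which is essentially what `decrypt_payload_chunks` does after the AES layer is removed.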
@@ -310,16 +340,13 @@ class MSL:
         :return: json object
         """
         raw_data = ""
         for payload_chunk in payload_chunks:
             # todo ; verify signature of payload_chunk["signature"] against payload_chunk["payload"]
             # expecting base64-encoded json string
             payload_chunk = json.loads(base64.b64decode(payload_chunk["payload"]).decode("utf-8"))
             # decrypt the payload
             payload_decrypted = AES.new(
-                key=self.keys.encryption,
-                mode=AES.MODE_CBC,
-                iv=base64.b64decode(payload_chunk["iv"])
+                key=self.keys.encryption, mode=AES.MODE_CBC, iv=base64.b64decode(payload_chunk["iv"])
             ).decrypt(base64.b64decode(payload_chunk["ciphertext"]))
             payload_decrypted = Padding.unpad(payload_decrypted, 16)
             payload_decrypted = json.loads(payload_decrypted.decode("utf-8"))
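The `Padding.unpad(payload_decrypted, 16)` call strips PKCS#7 padding after AES-CBC decryption. A stdlib-only sketch of that padding scheme (the real code uses pycryptodome's `Crypto.Util.Padding`):

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Pad to a multiple of block_size; every pad byte equals the pad length."""
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len


def pkcs7_unpad(data: bytes, block_size: int = 16) -> bytes:
    """Reverse pkcs7_pad, validating the padding before stripping it."""
    if not data or len(data) % block_size:
        raise ValueError("invalid padded length")
    pad_len = data[-1]
    if not 1 <= pad_len <= block_size or data[-pad_len:] != bytes([pad_len]) * pad_len:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-pad_len]
```

Note that input already a multiple of the block size still gains a full padding block, which is why ciphertext is always at least one block longer than the plaintext.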
@@ -344,10 +371,12 @@ class MSL:
             self.log.critical(f"- {error}")
             raise Exception(f"- MSL response message contains an error: {error}")
             # sys.exit(1)
-        return data["result"]
+        self.log.debug(f"Payload Chunks: {data}")
+        if unwrap_result:
+            return data["result"]
+        return data
-    def parse_message(self, message):
+    def parse_message(self, message, unwrap_result=True):
         """
         Parse an MSL message into a header and list of payload chunks
@@ -359,7 +388,7 @@ class MSL:
         header = parsed_message[0]
         encrypted_payload_chunks = parsed_message[1:] if len(parsed_message) > 1 else []
         if encrypted_payload_chunks:
-            payload_chunks = self.decrypt_payload_chunks(encrypted_payload_chunks)
+            payload_chunks = self.decrypt_payload_chunks(encrypted_payload_chunks, unwrap_result=unwrap_result)
         else:
             payload_chunks = {}
@@ -390,22 +419,19 @@ class MSL:
         :return: Serialized JSON String of the encryption Envelope
         """
         iv = get_random_bytes(16)
-        return json.dumps({
-            "ciphertext": base64.b64encode(
-                AES.new(
-                    self.keys.encryption,
-                    AES.MODE_CBC,
-                    iv
-                ).encrypt(
-                    Padding.pad(plaintext.encode("utf-8"), 16)
-                )
-            ).decode("utf-8"),
-            "keyid": "{}_{}".format(self.sender, json.loads(
-                base64.b64decode(self.keys.mastertoken["tokendata"]).decode("utf-8")
-            )["sequencenumber"]),
-            "sha256": "AA==",
-            "iv": base64.b64encode(iv).decode("utf-8")
-        })
+        return json.dumps(
+            {
+                "ciphertext": base64.b64encode(
+                    AES.new(self.keys.encryption, AES.MODE_CBC, iv).encrypt(Padding.pad(plaintext.encode("utf-8"), 16))
+                ).decode("utf-8"),
+                "keyid": "{}_{}".format(
+                    self.sender,
+                    json.loads(base64.b64decode(self.keys.mastertoken["tokendata"]).decode("utf-8"))["sequencenumber"],
+                ),
+                "sha256": "AA==",
+                "iv": base64.b64encode(iv).decode("utf-8"),
+            }
+        )
     def sign(self, text):
         """


@@ -31,6 +31,19 @@ class UserAuthentication(MSLObject):
             }
         )
+    @classmethod
+    def UserIDToken(cls, token_data, signature, master_token):
+        return cls(
+            scheme=UserAuthenticationSchemes.UserIDToken,
+            authdata={
+                "useridtoken": {
+                    "tokendata": token_data,
+                    "signature": signature
+                },
+                "mastertoken": master_token
+            }
+        )
     @classmethod
     def NetflixIDCookies(cls, netflixid, securenetflixid):
         """


@@ -15,6 +15,7 @@ class EntityAuthenticationSchemes(Scheme):
 class UserAuthenticationSchemes(Scheme):
     """https://github.com/Netflix/msl/wiki/User-Authentication-%28Configuration%29"""
     EmailPassword = "EMAIL_PASSWORD"
+    UserIDToken = "USER_ID_TOKEN"
     NetflixIDCookies = "NETFLIXID"

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -1,5 +1,5 @@
 # Group or Username to postfix to the end of all download filenames following a dash
-tag: Kenzuya
+# tag: Kenzuya
 # Enable/disable tagging with group name (default: true)
 tag_group_name: true
@@ -13,7 +13,7 @@ set_terminal_bg: false
 # Set file naming convention
 # true for style - Prime.Suspect.S07E01.The.Final.Act.Part.One.1080p.ITV.WEB-DL.AAC2.0.H.264
 # false for style - Prime Suspect S07E01 The Final Act - Part One
-scene_naming: true
+scene_naming: false
 # Whether to include the year in series names for episodes and folders (default: true)
 # true for style - Show Name (2023) S01E01 Episode Name
@@ -36,62 +36,32 @@ title_cache_max_retention: 86400 # Maximum cache retention for fallback when API
 muxing:
   set_title: true
-# Configuration for serve
-serve:
-  api_secret: "kenzuya"
-  users:
-    secret_key_for_user:
-      devices:
-        - generic_nexus_4464_l3
-      username: user
 # Login credentials for each Service
 credentials:
-  # Direct credentials (no profile support)
-  EXAMPLE: email@example.com:password
-  # Per-profile credentials with default fallback
-  SERVICE_NAME:
-    default: default@email.com:password  # Used when no -p/--profile is specified
-    profile1: user1@email.com:password1
-    profile2: user2@email.com:password2
-  # Per-profile credentials without default (requires -p/--profile)
-  SERVICE_NAME2:
-    john: john@example.com:johnspassword
-    jane: jane@example.com:janespassword
-  # You can also use list format for passwords with special characters
-  SERVICE_NAME3:
-    default: ["user@email.com", ":PasswordWith:Colons"]
   Netflix:
-    default: ["sako.sako1109@gmail.com", "sako1109"]
+    default: ["ariel-prinsess828@ezweb.ne.jp", "AiNe892186"]
+    secondary: ["csyc5478@naver.com", "wl107508!"]
+    third: ["erin.e.pfleger@gmail.com", "Pfleger93"]
     # default: ["pbgarena0838@gmail.com", "Andhika1978"]
 # Override default directories used across unshackle
 directories:
   cache: Cache
-  # cookies: Cookies
   dcsl: DCSL  # Device Certificate Status List
-  downloads: Downloads
+  downloads: /mnt/ketuakenzuya/Downloads/
   logs: Logs
-  temp: Temp
+  temp: /tmp/unshackle
-  # wvds: WVDs
+  prds: PRDs
-  # Additional directories that can be configured:
-  # commands: Commands
-  # services:
-  #   - /path/to/services
-  #   - /other/path/to/services
-  # vaults: Vaults
-  # fonts: Fonts
 # Pre-define which Widevine or PlayReady device to use for each Service
 cdm:
-  # Global default CDM device (fallback for all services/profiles)
   default: chromecdm
-  # Direct service-specific CDM
-  DIFFERENT_EXAMPLE: PRD_1
-  # Per-profile CDM configuration
-  EXAMPLE:
-    john_sd: chromecdm_903_l3  # Profile 'john_sd' uses Chrome CDM L3
-    jane_uhd: nexus_5_l1  # Profile 'jane_uhd' uses Nexus 5 L1
-    default: generic_android_l3  # Default CDM for this service
-# Use pywidevine Serve-compliant Remote CDMs
 remote_cdm:
   - name: "chromecdm"
     device_name: widevine
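The deleted comments above document the credentials semantics: a service maps either to a direct credential or to a per-profile mapping where `default` is used when no `-p`/`--profile` is given. A hypothetical resolver sketch of that lookup (unshackle's actual implementation may differ):

```python
def resolve_credential(credentials: dict, service: str, profile: str = None):
    """Resolve a credential entry for a service.

    A plain string/list applies regardless of profile; a dict is keyed by
    profile name, with the "default" key used when no profile is requested.
    Returns None when nothing matches.
    """
    entry = credentials.get(service)
    if entry is None:
        return None
    if not isinstance(entry, dict):
        return entry  # direct credential, no profile support
    if profile:
        return entry.get(profile)
    return entry.get("default")
```

Under this reading, the `Netflix` block above would yield its `default` pair normally and the `secondary`/`third` pairs only when those profiles are requested explicitly.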
@@ -100,7 +70,7 @@ remote_cdm:
     security_level: 3
     type: "decrypt_labs"
     host: https://keyxtractor.decryptlabs.com
-    secret: 7547150416_41da0a32d6237d83_KeyXtractor_api_ext
+    secret: 919240143_41d9c3fac9a5f82e_KeyXtractor_ultimate
   - name: "android"
     device_name: andorid
     device_type: ANDROID
@@ -128,22 +98,7 @@ key_vaults:
     api_mode: "decrypt_labs"
     host: "https://keyvault.decryptlabs.com"
     password: "7547150416_41da0a32d6237d83_KeyXtractor_api_ext"
-  # Additional vault types:
-  # - type: API
-  #   name: "Remote Vault"
-  #   uri: "https://key-vault.example.com"
-  #   token: "secret_token"
-  #   no_push: true  # This vault will only provide keys, not receive them
-  # - type: MySQL
-  #   name: "MySQL Vault"
-  #   host: "127.0.0.1"
-  #   port: 3306
-  #   database: vault
-  #   username: user
-  #   password: pass
-  #   no_push: false  # Default behavior - vault both provides and receives keys
-# Choose what software to use to download data
 downloader: aria2c
 # Options: requests | aria2c | curl_impersonate | n_m3u8dl_re
 # Can also be a mapping:
@@ -200,26 +155,10 @@ filenames:
 # API key for The Movie Database (TMDB)
 tmdb_api_key: "8f5c14ef648a0abdd262cf809e11fcd4"
-# conversion_method:
-#   - auto (default): Smart routing - subby for WebVTT/SAMI, standard for others
-#   - subby: Always use subby with advanced processing
-#   - pycaption: Use only pycaption library (no SubtitleEdit, no subby)
-#   - subtitleedit: Prefer SubtitleEdit when available, fall back to pycaption
 subtitle:
   conversion_method: auto
   sdh_method: auto
-# Configuration for pywidevine's serve functionality
-serve:
-  users:
-    secret_key_for_user:
-      devices:
-        - generic_nexus_4464_l3
-      username: user
-  # devices:
-  #   - '/path/to/device.wvd'
-# Configuration data for each Service
 services:
   # Service-specific configuration goes here
   # Profile-specific configurations can be nested under service names
@@ -252,6 +191,24 @@ services:
 # External proxy provider services
 proxy_providers:
+  gluetun:
+    base_port: 8888
+    auto_cleanup: true
+    container_prefix: "unshackle-gluetun"
+    verify_ip: true
+    providers:
+      protonvpn:
+        vpn_type: "openvpn"
+        credentials:
+          username: "L83JaCnXKIviymQm"
+          password: "UewUDYdthTLLhOBJDympFFxJn4uG12BV"
+        server_countries:
+          us: United States
+          id: Indonesia
+          kr: Korea
+  basic:
+    SG:
+      - "http://127.0.0.1:6004"
   surfsharkvpn:
     username: SkyCBP7kH8KqxDwy5Qw36mQn  # Service credentials from https://my.surfshark.com/vpn/manual-setup/main/openvpn
     password: pcmewxKTNPvLENdbKJGh8Cgt  # Service credentials (not your login password)