
This setup gives you a production-grade, multithreaded job queue for MakeMKV automation. Adjust thread counts and memory based on your actual hardware.
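As a rough starting point for that tuning, a small helper can derive values from the host. The 50% RAM ratio and one-thread-per-two-cores heuristic below are illustrative assumptions, not official KeyDB guidance:

```python
import os

def suggest_keydb_settings(total_ram_gb: float, cpu_cores: int) -> dict:
    """Illustrative sizing heuristic (an assumption, not KeyDB's recommendation):
    give KeyDB at most half of system RAM, and one server thread per two
    cores, capped at 8 since extra threads see diminishing returns."""
    return {
        "maxmemory": f"{max(1, int(total_ram_gb * 0.5))}gb",
        "server-threads": max(1, min(8, cpu_cores // 2)),
    }

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    print(suggest_keydb_settings(total_ram_gb=16, cpu_cores=cores))
```

Paste the resulting values into the `maxmemory` and `server-threads` lines of the config below.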

Load the script, then push a disc to the queue:

```shell
# Load the claim script (prints the SHA1 hash to use with EVALSHA)
keydb-cli --pass MakemkvR0cks! SCRIPT LOAD "$(cat claim_job.lua)"

# Push a disc to the queue
keydb-cli --pass MakemkvR0cks! LPUSH makemkv:queue:waiting "/dev/sr0"
```

Worker loop (simplified):

```shell
while true; do
  JOB=$(keydb-cli --pass MakemkvR0cks! EVALSHA <hash> 2 \
        makemkv:queue:waiting makemkv:queue:processing \
        "worker-$$" "/dev/sr0")
  if [ -n "$JOB" ]; then
    makemkvcon mkv disc:0 all /output --progress=-same
    keydb-cli --pass MakemkvR0cks! HDEL makemkv:queue:processing "worker-$$"
  fi
  sleep 2
done
```

This configuration assumes you are using KeyDB as a job queue, metadata cache, or progress tracker for a MakeMKV automation script.

```conf
# ============================================
# KeyDB Configuration for MakeMKV Automation
# ============================================
# Purpose: High-performance job queue for disc ripping
# Tuned for: Many parallel ripping tasks, large metadata

# --- NETWORK & PORT ---
port 6379
tcp-backlog 511
timeout 300
tcp-keepalive 300

# --- MEMORY MANAGEMENT (optimized for large file lists) ---
maxmemory 8gb
maxmemory-policy allkeys-lru
maxmemory-samples 10

# --- SNAPSHOTTING (disabled for pure queue mode) ---
save ""            # Disable RDB snapshots to reduce I/O
appendonly no      # Disable AOF (queue can rebuild from source)

# --- THREADING (KeyDB specific) ---
server-threads 4   # Match CPU cores for parallel ripping queues
server-thread-affinity false
io-threads 4
io-threads-do-reads yes

# --- REPLICATION (optional: for backup of job status) ---
replica-serve-stale-data yes
replica-read-only yes

# --- SECURITY & COMMANDS ---
requirepass MakemkvR0cks!   # CHANGE THIS
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG "Makemkv_CONFIG_ADMIN"

# --- SLOW LOG & MONITORING ---
slowlog-log-slower-than 10000   # 10 ms, good for queue operations
slowlog-max-len 128
latency-monitor-threshold 100

# --- ADVANCED QUEUE SETTINGS ---
# Prevent head-of-line blocking for large MKV jobs
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```

Suggested MakeMKV-specific key structure:

- `makemkv:queue:waiting` -> List of pending disc paths
- `makemkv:queue:processing` -> Hash of active jobs (pid -> disc)
- `makemkv:status:{job_id}` -> Hash with progress, ETA, title
- `makemkv:completed` -> Sorted Set (timestamp -> output file)
- `makemkv:failure` -> List of failed discs + reason

Bonus: a Lua script for atomic job claim (atomic pop + register). Save it as `claim_job.lua` and load it into KeyDB:

```lua
-- Atomic claim from waiting queue to processing
-- KEYS[1] = waiting list
-- KEYS[2] = processing hash
-- ARGV[1] = worker_id (e.g., PID or hostname)
-- ARGV[2] = disc_path
-- Returns: claimed job info or nil
local job = redis.call('LPOP', KEYS[1])
if job then
  redis.call('HSET', KEYS[2], ARGV[1], job)
  return job
end
return nil
```
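Because the script runs server-side as a single unit, the LPOP and HSET cannot be interleaved with another worker's claim, so two workers can never pop the same disc. The same semantics can be modeled in plain Python (a list standing in for the waiting queue, a dict for the processing hash):

```python
def claim_job(waiting: list, processing: dict, worker_id: str):
    """Plain-Python mirror of claim_job.lua: pop the head of the waiting
    list and, if a job was there, register it under worker_id in the
    processing map before returning it."""
    if not waiting:
        return None              # nothing to claim (the Lua script returns nil)
    job = waiting.pop(0)         # LPOP
    processing[worker_id] = job  # HSET
    return job
```

For example, two successive claims by different workers always receive different discs, which is exactly the guarantee the atomic script provides under concurrency.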
