Video transcoding is extremely CPU-intensive. If we handled file uploads and FFmpeg processing on the same Node.js thread, a single video upload would block the event loop and take the entire API offline for other users.
To solve this, I designed a distributed, microservices-based architecture using Docker Compose.
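A compose file for this topology might look roughly like the following. This is a sketch only; the service names, image tags, and credentials are illustrative assumptions, not the project's actual configuration:

```yaml
services:
  api:
    build: ./api            # Express upload endpoint
    ports: ["3000:3000"]
    depends_on: [postgres, rabbitmq, minio]
  worker:
    build: ./worker         # FFmpeg transcoder; scale with `docker compose up --scale worker=3`
    depends_on: [rabbitmq, minio]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  rabbitmq:
    image: rabbitmq:3-management
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
```

Because the worker is a separate service, throughput scales horizontally by adding worker replicas without touching the API.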
When a user uploads a video via the Express API, the server doesn't process the video. Instead, it securely uploads the raw file to MinIO, creates a database record, and instantly fires off a tiny metadata payload to a RabbitMQ queue. This ensures the API response time remains under 200ms.
app.post('/api/upload', upload.single('video'), async (req, res) => {
  // Upload raw file to MinIO bucket
  const fileExt = req.file.originalname.split('.').pop();
  const fileName = `${crypto.randomUUID()}.${fileExt}`;
  await minioClient.putObject('raw-videos', fileName, req.file.buffer);
  const result = await pgPool.query(
    `INSERT INTO videos (title, raw_path, status) VALUES ($1, $2, $3) RETURNING id`,
    [req.file.originalname, fileName, 'queued']
  );
  const videoId = result.rows[0].id;
  // Decouple heavy processing by sending a task to RabbitMQ
  rabbitChannel.sendToQueue(
    'transcode_tasks',
    Buffer.from(JSON.stringify({ videoId, fileName }))
  );
  res.status(202).json({ id: videoId, status: 'queued' });
});

The worker service listens to the RabbitMQ queue. I specifically configured the consumer with channel.prefetch(1). This is crucial: it tells RabbitMQ to deliver only one unacknowledged message to each worker at a time, preventing memory exhaustion if hundreds of videos are uploaded simultaneously.
Once a job is received, the worker pulls the raw video from MinIO and spins up a child process to run FFmpeg.
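That launch can be sketched as follows. A minimal sketch: runFfmpeg is a hypothetical helper, the args array is the one assembled below, and the binary name is parameterized purely so the helper can be exercised without FFmpeg installed:

```typescript
import { spawn } from 'node:child_process';

// Run FFmpeg as a child process; resolve on exit code 0, reject otherwise
function runFfmpeg(args: string[], binary = 'ffmpeg'): Promise<void> {
  return new Promise((resolve, reject) => {
    const proc = spawn(binary, ['-y', ...args]);
    let lastStderr = '';
    // FFmpeg logs to stderr; keep the most recent chunk for error reporting
    proc.stderr.on('data', (chunk: Buffer) => { lastStderr = chunk.toString(); });
    proc.on('error', reject); // e.g. binary not found on PATH
    proc.on('close', (code) => {
      if (code === 0) resolve();
      else reject(new Error(`ffmpeg exited with code ${code}: ${lastStderr}`));
    });
  });
}
```

Running FFmpeg in a child process keeps the heavy encoding work off the worker's event loop, which stays free to stream progress updates.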
To support Adaptive Bitrate Streaming, the video is encoded into four resolution tiers (1080p, 720p, 480p, and 360p) in a single FFmpeg pass. The -var_stream_map argument binds these variant streams into a single master .m3u8 playlist.
// Binding output arguments for FFmpeg
// (each tier also needs a -map 0:v / -map 0:a pair and an audio codec,
// e.g. -c:a aac, configured earlier in the argument list)
args.push(
  '-s:v:0', '1920x1080', '-c:v:0', 'libx264', '-b:v:0', '5000k',
  '-s:v:1', '1280x720', '-c:v:1', 'libx264', '-b:v:1', '2800k',
  '-s:v:2', '854x480', '-c:v:2', 'libx264', '-b:v:2', '1400k',
  '-s:v:3', '640x360', '-c:v:3', 'libx264', '-b:v:3', '800k'
);
args.push('-var_stream_map', 'v:0,a:0,name:1080p v:1,a:1,name:720p v:2,a:2,name:480p v:3,a:3,name:360p');
// Setting up HLS chunking
args.push(
  '-master_pl_name', 'master.m3u8',
  '-f', 'hls', '-hls_time', '4',
  '-hls_playlist_type', 'vod',
  '-hls_segment_filename', path.join(workDir, '%v/fileSequence%d.ts'),
  path.join(workDir, '%v/prog_index.m3u8')
);

Because FFmpeg is an external binary, tracking its progress from Node.js takes some extra plumbing. FFmpeg writes its operational log to stderr (not stdout). By attaching a listener to the stderr stream, I extract the current encoding timestamp with a regex, calculate the percentage against the total video duration, and throttle updates to the database to avoid exhausting the connection pool.
proc.stderr.on('data', (data: Buffer) => {
  const output = data.toString();
  // Parse current time from FFmpeg log
  const timeMatch = output.match(/time=(\d{2}):(\d{2}):(\d{2}\.\d{2})/);
  if (timeMatch && totalDuration > 0) {
    const currentTime =
      parseFloat(timeMatch[1]) * 3600 + parseFloat(timeMatch[2]) * 60 + parseFloat(timeMatch[3]);
    const progress = Math.min(Math.round((currentTime / totalDuration) * 100), 99);
    // Throttle DB updates to once every 3 seconds
    if (Date.now() - lastUpdate > 3000) {
      updateStatus(videoId, 'processing', null, progress).catch(console.error);
      lastUpdate = Date.now();
    }
  }
});

If the encoding completes successfully, the worker pushes the resulting HLS folders back into MinIO and acknowledges the message in RabbitMQ (channel.ack(msg)). If the file is corrupted and FFmpeg crashes, the worker calls channel.nack(msg, false, false). This explicitly rejects the message without requeuing it, preventing the "infinite loop of death" where a bad file continuously crashes the worker.
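The completion path can be sketched like so. The helper names (objectKey, listFiles, uploadHlsDir) and the 'hls-videos' bucket are illustrative assumptions; fPutObject is the MinIO client's upload-a-file-from-disk call:

```typescript
import { promises as fs } from 'node:fs';
import * as path from 'node:path';

// Map a local file inside workDir to its MinIO object key under the video's prefix
function objectKey(videoId: number, workDir: string, filePath: string): string {
  return `${videoId}/${path.relative(workDir, filePath).split(path.sep).join('/')}`;
}

// Recursively collect every file in a directory tree
async function listFiles(dir: string): Promise<string[]> {
  const files: string[] = [];
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) files.push(...await listFiles(full));
    else files.push(full);
  }
  return files;
}

// Push every playlist and segment produced by FFmpeg into the HLS bucket
async function uploadHlsDir(minioClient: any, videoId: number, workDir: string): Promise<void> {
  for (const file of await listFiles(workDir)) {
    await minioClient.fPutObject('hls-videos', objectKey(videoId, workDir, file), file);
  }
}
```

In the consumer, this upload sits inside the same try/catch as the transcode itself: success falls through to channel.ack(msg), while any thrown error reaches the catch block that issues channel.nack(msg, false, false).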