Some useful FFmpeg commands
- FFmpeg intro
- FFplay video viewing
- FFmpeg video editing
- Convert MKV to MP4
- Convert MP4 to M4A (audio only mp4)
- Edit metadata (add chapters)
- Add thumbnail
- Add subtitles
- Extract frames
- Create video from frames
- crop video
- compress video
- cut video
- loop video
- Reverse video and/or Audio
- Concatenate multiple videos into one
- Cut and combine multiple sections of multiple files
- Create/download video with m3u8 playlist
- find silence parts in video
- Libavfilter virtual input device (lavfi filtergraph)
- sierpinski (pan)
- mandelbrot (zoom)
- (elementary) cellular automaton
- life (Cellular automaton)
- mptestsrc (animated test patterns)
- empty (input)
- color (input)
- smptebars (input)
- smptehdbars (input)
- testsrc (input)
- testsrc2 (input)
- rgbtestsrc (input)
- yuvtestsrc (input)
- colorspectrum (input)
- colorchart (input)
- allrgb (input)
- allyuv (input)
Important
The order of (some) parameters/flags matters: whether they appear before or after a given input or output (FFmpeg supports multiple input files and creating multiple output streams with one command).
- Full FFmpeg documentation @ https://ffmpeg.org/ffmpeg-all.html (archive)
- Full FFplay documentation @ https://ffmpeg.org/ffplay-all.html (archive)
- Full FFprobe documentation @ https://ffmpeg.org/ffprobe-all.html (archive)
Get FFmpeg from https://ffmpeg.org/download.html
I currently use the FFmpeg builds from https://www.gyan.dev/ffmpeg/builds/ (for Windows 7+),
under the release builds section, the file ffmpeg-release-full.7z,
or directly https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full.7z
see Generic options
ffmpeg -h # CLI help
ffmpeg -L # FFmpeg licence
ffmpeg -version # FFmpeg build version
ffmpeg -buildconf # FFmpeg build configuration
ffmpeg -formats # files (and devices) de-/muxing support
ffmpeg -pix_fmts # pixel formats (in/out/hardware acceleration/palette/bitstream)
ffmpeg -protocols # protocols (in/out) like: file, https, sftp
ffmpeg -codecs # video/audio/subtitle/data en-/decoders (also shows if lossy or lossless)
ffmpeg -filters # video/audio (libavfilter) filters like: avgblur V->V (video in; video out)
ffmpeg -bsfs # bitstream filters like: null, h264_metadata, hevc_metadata
ffmpeg -dispositions # how a stream is added to an output file; for example attached_pic is the file thumbnail/cover art for video files like MP4 (visible in file explorer)
ffmpeg -colors # color names with their hex value; for example: Lime #00ff00
ffmpeg -hide_banner # does not log version/copyright/buildconfig
ffmpeg -v level+warning # only log warnings and worse, and show the level prefix: "[warning] ..."
# debug > verbose > info (default) > warning > error > fatal > quiet (nothing)
# banner is info so -hide_banner is not needed with warning or less
ffmpeg -stats # always show stats (en-/decoding progress), even when log level is less than info
CLI keyboard hotkeys mid-process:
key | function |
---|---|
? | show this table |
+ | increase verbosity (logging level) |
- | decrease verbosity (logging level) |
q | quit |
c | Send command to first matching filter supporting it |
C | Send/Queue command to all matching filters |
D | cycle through available debug modes |
h | dump packets / hex (press repeatedly to cycle through the 3 states) |
s | Show QP histogram |
I didn't find official documentation for these...
ffplay -v level+warning -stats -loop -1 INPUT.mp4
A window will show the video looping infinitely (see FFplay video controls)
# random simulation, window size 1280*960 (4 times 320*240, which is the default size)
ffplay -v level+warning -stats -f lavfi life=mold=25:life_color=\#00ff00:death_color=\#aa0000,scale=4*iw:-1:flags=neighbor
A window will show the simulation indefinitely (see FFplay video controls)
Note
Seeking is not available; only pause/resume or frame-by-frame playback
- `-v` documentation
- `-stats` documentation
- see the life (cellular automaton) section below
- `scale` filter documentation
Key | Action |
---|---|
Q / ESC | Quit |
F or left mouse double-click | Toggle full screen |
P / SPACE | Pause/Resume |
S | Step to next frame (and pause) frame-by-frame |
← / → | Seek back-/forward 10 seconds |
↓ / ↑ | Seek back-/forward 1 minute |
PAGE DOWN / PAGE UP | Seek to the previous/next chapter (or 10 minutes) |
right mouse click | Seek to percentage by click position (of window width) |
9 / 0 or / / * | Decrease/increase volume |
M | Toggle mute |
A | Cycle audio channel (current program) |
T | Cycle subtitle channel (current program) |
C | Cycle program |
V | Cycle video channel |
W | Cycle video filter/show modes |
- Convert MKV to MP4
- Convert MP4 to M4A (audio only mp4)
- Edit metadata (add chapters)
- Add thumbnail
- Add subtitles
- Extract frames
- Create video from frames
- crop video
- compress video
- cut video
- loop video
- Reverse video and/or Audio
- Concatenate multiple videos into one
- Cut and combine multiple sections of multiple files
- Create/download video with m3u8 playlist
- find silence parts in video
Scroll TOP
the MKV video file format is suggested when streaming or recording (via OBS) since it can be easily recovered
# Audio codec already is AAC, so it can be copied to save some time
# Also use some compression to shrink the file size a bit
ffmpeg -v level+warning -stats -i INPUT.mkv -c:a copy -c:v libx264 -crf 12 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-c` documentation
- `-crf` documentation (the best description is under libaom-AV1, but it also applies to other encoders like MPEG-4)
- also see this guide for CRF with `libx264`
only include audio and subtitles (if present)
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -map 0:a -map 0:s? OUTPUT.m4a
or only exclude video
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -map 0 -map -0:v OUTPUT.m4a
export all metadata to a file
ffmpeg -v level+warning -stats -i INPUT.mp4 -f ffmetadata FFMETADATAFILE.txt
it looks something like this
;FFMETADATA1
# empty lines or lines starting with ; or # will be ignored
# whitespace will not be ignored so "title = A" would be interpreted as key "title " and value " A"
title=Video Title
artist=Artist Name
# newlines and other special characters like = ; # \ must be escaped with a \
description=Text\
Line two\
\
\
Line five\
Line with Û̕͝͡n̊̑̓̊i͚͚ͬ́c̗͕̈́̀o̵̯ͣ͊ḑ̴̱̐ḛ̯̓̒
# adding chapters is simple | order does not matter (chapters must not overlap, of course), so the easiest is to append them at the end of the file
[CHAPTER]
# fractions of a second so 1/1000 says the following START and END are in milliseconds
TIMEBASE=1/1000
# start and end might change a bit when reinserting (snaps to nearest frame when video stream is copied and not encoded)
START=0
END=10000
title=0 to 10sec of the video
[CHAPTER]
TIMEBASE=1/1000
START=10000
END=20000
title=10sec to 20sec of the video
then to reinsert the edited metadata file
ffmpeg -v level+warning -stats -i INPUT.mp4 -i FFMETADATAFILE.txt -map_metadata 1 -c copy OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- full metadata documentation
- You might also want to look at the `-metadata` documentation
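The `[CHAPTER]` blocks follow a simple enough format to generate from code. A minimal Python sketch (the helper name `ffmetadata_chapters` is made up for illustration), assuming millisecond timestamps with `TIMEBASE=1/1000` as in the example above:

```python
def ffmetadata_chapters(chapters):
    """Build [CHAPTER] blocks for an ffmetadata file.

    `chapters` is a list of (start_ms, end_ms, title) tuples.
    Special characters (= ; # \ and newlines) in titles are escaped
    with a backslash, as the ffmetadata format requires.
    """
    def esc(text):
        return "".join("\\" + ch if ch in "=;#\\\n" else ch for ch in text)

    lines = []
    for start, end, title in chapters:
        lines += ["[CHAPTER]", "TIMEBASE=1/1000",
                  f"START={start}", f"END={end}", f"title={esc(title)}"]
    return "\n".join(lines) + "\n"
```

The returned string can be appended to an exported FFMETADATAFILE.txt before reinserting it.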
ffmpeg -v level+warning -stats -i INPUT.mp4 -i IMAGE.png -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-c` documentation
- `-map` documentation
- `-disposition` documentation
- How To add an embedded cover/thumbnail (within the `-disposition` documentation)
Adding subtitles as an extra stream so they can be turned on and off.
Needs a video player that supports this feature like VLC.
# for mkv output
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB.srt -c copy OUTPUT.mkv
# for mp4 output
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB.srt -c copy -c:s mov_text OUTPUT.mp4
# ... with multiple subtitle files
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB_ENG.srt -i SUB_GER.srt -map 0:0 -map 1:0 -map 2:0 -c copy -c:s mov_text OUTPUT.mp4
# ... with language codes
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB_ENG.srt -i SUB_GER.srt -map 0:0 -map 1:0 -map 2:0 -c copy -c:s mov_text -metadata:s:s:0 language=eng -metadata:s:s:1 language=ger OUTPUT.mp4
A subtitle file (`.srt`) may look like this:
1
00:00:00,000 --> 00:00:03,000
hello there
2
00:00:04,000 --> 00:00:08,000
general kenobi
3
00:00:10,000 --> 00:01:00,000
multi
line
subtitles
displayed like in the file → new line in SRT = new line in video
Unicode can be used → tested with z̵̢͎̟͛ͥ̄͑̐͐a̡͈̳̟ͧ̑̓͆̔ͬl̗̠̭͖͓͚ͭ̐͊͊ģ͖͈̍̓ͭͩ̚͝͞ơ̢̞̫̜̞̓͗͊ͪ text, and it "pushed" the subtitles off screen (big line height)
Note
Not all subtitle files are supported by FFmpeg.
# dump ALL frames
ffmpeg -v level+warning -stats -i INPUT.mp4 ./_dump/frame%03d.png
# dump frames with custom frame rate (here 1fps)
ffmpeg -v level+warning -stats -i INPUT.mp4 -r 1 ./_dump/frame%03d.png
# dump custom number of frames
ffmpeg -v level+warning -stats -i INPUT.mp4 -frames:v 3 ./_dump/frame%03d.png
# dump all frames in a timeframe (here from 0:00:02 to 0:00:05)
ffmpeg -v level+warning -stats -ss 2 -i INPUT.mp4 -t 3 ./_dump/frame%03d.png
ffmpeg -v level+warning -stats -ss 2 -i INPUT.mp4 -to 5 ./_dump/frame%03d.png
Important
The directory path must exist, i.e., folders must be created beforehand.
- `png` is a good middle ground (lossless compression, but supports fewer colors)
- `jpeg` is slower but has good compression (lossy compression)
- `bmp` is faster but has a large file size (uncompressed)

The format `frame%03d.png` means files will be named: `frame001.png`, `frame002.png`, ..., `frame050.png`, ..., `frame1000.png`, and so on
Tip
use `-start_number 0` (before output) to start at `frame000.png`
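As a quick sanity check of that numbering, Python's `%`-formatting follows the same printf rules as FFmpeg's image2 pattern: `%03d` pads to three digits but keeps growing past 999 instead of truncating.

```python
# printf-style numbering as used for image sequence output names:
# %03d zero-pads to 3 digits and simply widens beyond 999
names = ["frame%03d.png" % n for n in (1, 50, 999, 1000)]
print(names)
```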
- `-v` documentation
- `-stats` documentation
- image file muxer (output)
- `-r` documentation
- `-ss` documentation
- `-t` documentation
- `-to` documentation

`-ss`, `-t`, and `-to` expect a specific time format, in short `[-][HH:]MM:SS[.m...]` or `[-]S+[.m...][s|ms|us]`
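For illustration, a rough Python sketch of that duration syntax (the function name is ours; FFmpeg's own parser remains authoritative):

```python
import re

def parse_ffmpeg_time(value: str) -> float:
    """Parse FFmpeg's duration syntax into seconds.

    Handles "[-][HH:]MM:SS[.m...]" and "[-]S+[.m...][s|ms|us]".
    Illustrative sketch only, not FFmpeg's actual implementation.
    """
    sign, rest = (-1.0, value[1:]) if value.startswith("-") else (1.0, value)
    if ":" in rest:  # [HH:]MM:SS[.m...]
        parts = rest.split(":")
        if len(parts) not in (2, 3):
            raise ValueError(f"bad time: {value!r}")
        seconds = 0.0
        for part in parts:  # fold HH, MM, SS into seconds
            seconds = seconds * 60 + float(part)
        return sign * seconds
    # S+[.m...] with optional unit suffix s/ms/us
    match = re.fullmatch(r"(\d+(?:\.\d+)?)(s|ms|us)?", rest)
    if not match:
        raise ValueError(f"bad time: {value!r}")
    scale = {None: 1.0, "s": 1.0, "ms": 1e-3, "us": 1e-6}[match.group(2)]
    return sign * float(match.group(1)) * scale
```

For example, `-ss 01:02:03.5` and `-ss 3723.5` mean the same seek position.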
# uses files INPUT000.png, INPUT001.png, etc to create the mp4 video (with 24fps)
ffmpeg -v level+warning -stats -framerate 24 -i INPUT%03d.png OUTPUT.mp4
# uses every png file that starts with INPUT (at 24fps); glob patterns may not be available on Windows builds
ffmpeg -v level+warning -stats -framerate 24 -pattern_type glob -i 'INPUT*.png' OUTPUT.mp4
# uses every png file (at 24fps)
ffmpeg -v level+warning -stats -framerate 24 -pattern_type glob -i '*.png' OUTPUT.mp4
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf crop=WIDTH:HEIGHT:POSX:POSY OUTPUT.mp4
- `WIDTH` - the width of the cropped window
- `HEIGHT` - the height of the cropped window
- `POSX` - the X position of the cropped window (can be omitted = auto center)
- `POSY` - the Y position of the cropped window (can be omitted = auto center)
- all values are in pixels, but without a "px" suffix (or an expression that gets evaluated each frame)
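The auto-center behavior is plain arithmetic; a tiny Python sketch (helper name is ours) of the documented defaults `x=(in_w-out_w)/2` and `y=(in_h-out_h)/2`:

```python
def crop_center(in_w: int, in_h: int, out_w: int, out_h: int):
    """POSX/POSY used by the crop filter when they are omitted:
    the window is centered in the input frame."""
    return (in_w - out_w) // 2, (in_h - out_h) // 2
```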
lower values are better (higher bitrate), but also lead to larger file size
# for `h.264` values from 18 to 23 are very good
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -c:v libx264 -crf 20 OUTPUT.mp4
# for `h.265` values from 24 to 30 are very good
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -c:v libx265 -crf 25 OUTPUT.mp4
faster with GPU hardware acceleration / NVIDIA CUDA
# for h.264 → h264_nvenc with NVIDIA CUDA
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -c copy -c:v h264_nvenc -fps_mode passthrough -b_ref_mode disabled -preset medium -tune hq -rc vbr -multipass disabled -qp 20 OUTPUT.mp4
# for h.265 → hevc_nvenc with NVIDIA CUDA
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -c copy -c:v hevc_nvenc -fps_mode passthrough -b_ref_mode disabled -preset medium -tune hq -rc vbr -multipass disabled -qp 25 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-c` documentation
- `-crf` documentation (the best description is under libaom-AV1, but it also applies to other encoders like MPEG-4)
- also see this FFmpeg guide for CRF with `libx264`
- and "this FFmpeg guide" for hardware acceleration with different OS/hardware (specifically section CUDA (NVENC/NVDEC))
- and "Using FFmpeg with NVIDIA GPU Hardware Acceleration" on the NVIDIA Documentation Hub
- the CUDA encoders ignore `-crf`, so use `-qp` for hardware acceleration instead
# start at 0:00:01 and stop at 0:00:10
ffmpeg -v level+warning -stats -ss 1 -i INPUT.mp4 -to 10 -c copy OUTPUT.mp4
# start at 0:00:10 and stop at 0:00:20 (0:00:10 duration)
ffmpeg -v level+warning -stats -ss 10 -i INPUT.mp4 -t 10 -c copy OUTPUT.mp4
# caps output to be 0:00:30 max
ffmpeg -v level+warning -stats -i INPUT.mp4 -t 30 -c copy OUTPUT.mp4
timing from `-ss`, `-to`, and `-t` snaps to the nearest frame, not the exact timestamp, when the stream is copied (like here)
if the exact time is needed, the video has to be re-encoded (`-c:v libx264` after or instead of `-c copy`), which obviously takes longer
when `-ss` is placed after `-i`, FFmpeg decodes and discards the video until the given time is reached;
when it is placed before `-i` (like here), it seeks into the video without decoding first (during the seek), so it is faster
- `-v` documentation
- `-stats` documentation
- `-ss` documentation
- `-to` documentation
- `-t` documentation
- `-c` documentation
# loop video infinitely but stop after 0:00:30
ffmpeg -v level+warning -stats -stream_loop -1 -i INPUT.mp4 -t 30 -c copy OUTPUT.mp4
# loop video to length of audio
ffmpeg -v level+warning -stats -stream_loop -1 -i INPUT.mp4 -i INPUT.mp3 -shortest -map 0:v -map 1:a OUTPUT.mp4
# loop audio to length of video
ffmpeg -v level+warning -stats -i INPUT.mp4 -stream_loop -1 -i INPUT.mp3 -shortest -map 0:v -map 1:a OUTPUT.mp4
if exact timing is needed, it is better to re-encode the video (`-c:v libx264` after or instead of `-c copy`)
Note
Looping once means two playthroughs.
- `-v` documentation
- `-stats` documentation
- `-stream_loop` documentation
- `-t` documentation
- `-c` documentation
- `-shortest` documentation
- `-map` documentation
Warning: these filters require a lot of memory (they buffer the entire clip), so it's suggested to also use the trim filter, as shown
# reverse video only (first 5sec)
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf trim=end=5,reverse OUTPUT.mp4
# reverse audio only (first 5sec)
ffmpeg -v level+warning -stats -i INPUT.mp4 -af atrim=end=5,areverse OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- reverse filter documentation
- trim filter documentation
- areverse filter documentation
- atrim filter documentation
# using filter complex and the concat filter (if video formats are not the same add `:unsafe` to the `concat` filter)
ffmpeg -v level+warning -stats -i INPUT_0.mp4 -i INPUT_1.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v1] [a1]" -map "[v1]" -map "[a1]" OUTPUT.mp4
# using a list file and demuxer
ffmpeg -v level+warning -stats -safe 0 -f concat -i VIDEO_LIST.txt -c copy OUTPUT.mp4
content of `VIDEO_LIST.txt` as follows
file 'INPUT_0.mp4'
file 'INPUT_1.mp4'
- `-v` documentation
- `-stats` documentation
- `-filter_complex` documentation
  - can also be read from a file via `-filter_complex_script` with `path/to/file.txt`, although this is not mentioned in the official documentation
- concat multimedia filter
- `-map` documentation
- concat demuxer documentation
- `-safe` option for concat demuxer
- `-c` documentation
Cut clips and concat them (with re-encoding) as follows (video and audio are cut and combined separately).
# 00:00 to 00:02 video and audio of INPUT_0.mp4
# 00:04 to 00:08 video and audio of INPUT_0.mp4
# 00:01 to 00:05 video and audio of INPUT_1.mp4
# 00:06 to 00:08 video and audio of INPUT_1.mp4
ffmpeg -v level+warning -stats -i INPUT_0.mp4 -i INPUT_1.mp4 -filter_complex "[0:v]trim=0:2,setpts=PTS-STARTPTS[i0v0];[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];[0:v]trim=4:8,setpts=PTS-STARTPTS[i0v1];[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];[1:v]trim=1:5,setpts=PTS-STARTPTS[i1v0];[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];[1:v]trim=6:8,setpts=PTS-STARTPTS[i1v1];[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];[i0v0][i0a0][i0v1][i0a1][i1v0][i1a0][i1v1][i1a1]concat=n=4:v=1:a=1[cv][ca]" -map "[cv]" -map "[ca]" OUTPUT.mp4
# with (h.264) NVIDIA CUDA and slow/high-quality compression (4 to 8 Mbit/s variable bitrate and QP 4) for the first video stream of the combined video clips
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT_0.mp4 -hwaccel cuda -hwaccel_output_format cuda -i INPUT_1.mp4 -filter_complex "[0:v]trim=0:2,setpts=PTS-STARTPTS[i0v0];[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];[0:v]trim=4:8,setpts=PTS-STARTPTS[i0v1];[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];[1:v]trim=1:5,setpts=PTS-STARTPTS[i1v0];[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];[1:v]trim=6:8,setpts=PTS-STARTPTS[i1v1];[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];[i0v0][i0a0][i0v1][i0a1][i1v0][i1a0][i1v1][i1a1]concat=n=4:v=1:a=1[cv][ca]" -map "[cv]" -c:v:0 h264_nvenc -preset p7 -tune hq -profile:v:0 high -level:v:0 auto -rc vbr -b:v:0 4M -minrate:v:0 500k -maxrate:v:0 8M -bufsize:v:0 8M -multipass disabled -fps_mode passthrough -b_ref_mode:v:0 disabled -rc-lookahead:v:0 32 -qp 4 -map "[ca]" OUTPUT.mp4
# the `-hwaccel cuda -hwaccel_output_format cuda` must be in front of every input video (that is in the filter and gets encoded as video stream)
Click to show formatted filtergraph
[0:v] trim=0:2, setpts=PTS-STARTPTS[i0v0];
[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];
[0:v] trim=4:8, setpts=PTS-STARTPTS[i0v1];
[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];
[1:v] trim=1:5, setpts=PTS-STARTPTS[i1v0];
[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];
[1:v] trim=6:8, setpts=PTS-STARTPTS[i1v1];
[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];
[i0v0][i0a0]
[i0v1][i0a1]
[i1v0][i1a0]
[i1v1][i1a1]
concat=n=4:v=1:a=1
[cv][ca]
Click to show formatted CUDA (video output) codec arguments
-map "[cv]"
-c:v:0 h264_nvenc
-preset p7
-tune hq
-profile:v:0 high
-level:v:0 auto
-rc vbr
-b:v:0 4M
-minrate:v:0 500k
-maxrate:v:0 8M
-bufsize:v:0 8M
-multipass disabled
-fps_mode passthrough
-b_ref_mode:v:0 disabled
-rc-lookahead:v:0 32
-qp 4
- `-v` documentation
- `-stats` documentation
- `-filter_complex` documentation
  - can also be read from a file via `-filter_complex_script` with `path/to/file.txt`, although this is not mentioned in the official documentation
- trim multimedia filter
- atrim multimedia filter
- concat multimedia filter
- `-map` documentation
- `-c` documentation
- also, see the section about video compression, specifically with GPU hardware acceleration / NVIDIA CUDA
- "this FFmpeg guide" for hardware acceleration with different OS/hardware (specifically section CUDA (NVENC/NVDEC))
- and "Using FFmpeg with NVIDIA GPU Hardware Acceleration" on the NVIDIA Documentation Hub
# this will whitelist urls (`-i`) for files available via file, http/s, tcp, tls, or crypto protocol (for this command, not permanent)
ffmpeg -v level+warning -stats -protocol_whitelist file,http,https,tcp,tls,crypto -i INPUT.m3u8 -c copy OUTPUT.mp4
ffmpeg -v level+warning -stats -protocol_whitelist file,http,https,tcp,tls,crypto -i https://example.com/INPUT.m3u8 -c copy OUTPUT.mp4
# finds sections at least 240sec long and at most -70dB loud and writes them to LOG.txt
ffmpeg -v level+warning -stats -i INPUT.mp4 -af silencedetect=noise=-70dB:d=240 -f null - 2> LOG.txt
look for `[silencedetect @ ...]` lines in the log file
[silencedetect @ 0000000000******] silence_start: 01:00:02.500
[silencedetect @ 0000000000******] silence_end: 01:10:02.500 | silence_duration: 00:09:59.989
[silencedetect @ 000000000*******] silence_start: 02:00:02.500
[silencedetect @ 000000000*******] silence_end: 02:10:02.500 | silence_duration: 00:09:59.989
[...]
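A small Python sketch (helper name is ours) to pair up those start/end timestamps from the log; it assumes the line format shown above (depending on the FFmpeg version and options, silencedetect may print plain seconds instead):

```python
import re

def parse_silences(log_text: str):
    """Pair silence_start / silence_end timestamps from a
    silencedetect log. Assumes starts and ends alternate."""
    starts = re.findall(r"silence_start:\s*(\S+)", log_text)
    ends = re.findall(r"silence_end:\s*(\S+)", log_text)
    return list(zip(starts, ends))
```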
Create new videos via the `lavfi` virtual input device and a video source.
- sierpinski (pan)
- mandelbrot (zoom)
- (elementary) cellular automaton
- life (Cellular automaton)
- mptestsrc (animated test patterns)
- empty (input)
- color (input)
- smptebars (input)
- smptehdbars (input)
- testsrc (input)
- testsrc2 (input)
- rgbtestsrc (input)
- yuvtestsrc (input)
- colorspectrum (input)
- colorchart (input)
- allrgb (input)
- allyuv (input)
Honorable mention: `ddagrab`, which can be used to capture the (Windows) desktop screen (or a cutout of it).
Random pan over the Sierpinski carpet/triangle fractal.
defaults: `s=640x480`, `r=25` (fps), and `type=carpet`
ffmpeg -v level+warning -stats -f lavfi -i sierpinski OUTPUT.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski=type=triangle OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `sierpinski` video source
lavfi_sierpinski.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski -t 60 lavfi_sierpinski.mp4
lavfi_sierpinski_triangle.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski=type=triangle -t 60 lavfi_sierpinski_triangle.mp4
Continuous zoom into the Mandelbrot set.
# Mandelbrot with the "inside" set to black (how it usually is displayed)
# default: 640*480 25fps and position:
# X = -0.743643887037158704752191506114774 (real axis)
# Y = -0.131825904205311970493132056385139 (imaginary axis, inverted to how it usually is displayed)
ffmpeg -v level+warning -stats -f lavfi -i mandelbrot=inner=black -t 60 OUTPUT.mp4
# limited to 60sec
# ! frame generation gets slower the further the zoom progresses
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `mandelbrot` video source
lavfi_mandelbrot_black_blur.mp4
sped up and blurred to decrease file size
ffmpeg -v level+warning -stats -f lavfi -i mandelbrot=inner=black:s=300x300:end_pts=75,avgblur=1 -t 43 lavfi_mandelbrot_black_blur.mp4
Also, see the same zoom (position, vertically flipped so it looks the same) in my (interactive) Mandelbrot viewer:
Source code and documentation (controls): https://github.com/MAZ01001/AlmondBreadErkunder
"Waterfall" of a 1D cellular automaton.
# random seed, no custom pattern, default rule (110), start with an empty screen
# fallback/defaults: s=320x508 r=24 rule=110
ffmpeg -v level+warning -stats -f lavfi -i cellauto=full=0 -t 60 OUTPUT.mp4
# limited to 60sec
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `cellauto` video source
lavfi_cellauto_3.mp4
ffmpeg -v level+warning -stats -f lavfi -i cellauto=full=0:seed=3 -t 60 lavfi_cellauto_3.mp4
2D cellular automaton.
# Conway's Game of Life
# default: random grid 320*240 25fps rule S23/B3 (stay alive with 2/3 neighbors and born with 3 neighbors)
ffmpeg -v level+warning -stats -f lavfi -i life -t 60 OUTPUT.mp4
# limited to 60sec
# as above but with green color and a red afterglow of dying cells
ffmpeg -v level+warning -stats -f lavfi -i life=mold=25:life_color=\#00ff00:death_color=\#aa0000 -t 60 OUTPUT.mp4
# limited to 60sec
lavfi_life_3_200x200_scaled.mp4
smaller initial size and scaled up 4x (nearest neighbor) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i life=mold=25:life_color=\#00ff00:death_color=\#aa0000:seed=3:s=200x200,scale=4*iw:-1:flags=neighbor -t 124 lavfi_life_3_200x200_scaled.mp4
These patterns are equal to those from the MPlayer test filter.
default: `r=25` (fps), `t=all` (all 10 tests repeating), `m=30` (frames per test), and `d=-1` (infinite duration)
tests: `dc_luma`, `dc_chroma`, `freq_luma`, `freq_chroma`, `amp_luma`, `amp_chroma`, `cbp`, `mv`, `ring1`, and `ring2`.
# 60sec, all tests (each 3sec)
ffmpeg -v level+warning -stats -f lavfi -i mptestsrc=m=3*25:d=60 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `mptestsrc` video source
lavfi_mptestsrc_all_3s.mp4
ffmpeg -v level+warning -stats -f lavfi -i mptestsrc=m=3*25:d=60 lavfi_mptestsrc_all_3s.mp4
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# 1sec 1920*1080 60fps nothing (green)
ffmpeg -v level+warning -stats -f lavfi -i nullsrc=s=1920x1080:r=60:d=1 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `nullsrc` video source
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# 1sec solid color #ff9900
# default: 320*240 25 fps
ffmpeg -v level+warning -stats -f lavfi -i color=c=\#ff9900:d=1 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `color` video source
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# color bars pattern, based on the SMPTE Engineering Guideline EG 1-1990
ffmpeg -v level+warning -stats -f lavfi -i smptebars OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `smptebars` video source
ffmpeg -v level+warning -stats -f lavfi -i smptebars -frames 1 lavfi_smptebars.png
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# color bars pattern, based on the SMPTE RP 219-2002
ffmpeg -v level+warning -stats -f lavfi -i smptehdbars OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `smptehdbars` video source
ffmpeg -v level+warning -stats -f lavfi -i smptehdbars -frames 1 lavfi_smptehdbars.png
default: `s=320x240`, `r=25` (fps), `d=-1` (infinite duration), and `n=0`
(`n=0` shows the timestamp in seconds, `n=3` shows it in milliseconds)
# test pattern with animated gradient and timecode (seconds)
ffmpeg -v level+warning -stats -f lavfi -i testsrc OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `testsrc` video source
lavfi_testsrc_n3.mp4
ffmpeg -v level+warning -stats -f lavfi -i testsrc=n=3:d=60 lavfi_testsrc_n3.mp4
default: `s=320x240`, `r=25` (fps), `d=-1` (infinite duration), and `alpha=255` (opacity of background, 0 to 255)
I couldn't see a difference with different `alpha` values, at least for `mp4`/`webm`/`webp`/`png`/`gif` file formats
# animated test pattern
ffmpeg -v level+warning -stats -f lavfi -i testsrc2 OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `testsrc2` video source
lavfi_testsrc2.mp4
ffmpeg -v level+warning -stats -f lavfi -i testsrc2=d=60 lavfi_testsrc2.mp4
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# RGB test pattern (useful for detecting RGB vs BGR issues)
ffmpeg -v level+warning -stats -f lavfi -i rgbtestsrc OUTPUT.mp4
# there should be red, green, and blue stripes from top to bottom
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `rgbtestsrc` video source
ffmpeg -v level+warning -stats -f lavfi -i rgbtestsrc -frames 1 lavfi_rgbtestsrc.png
default: `s=320x240`, `r=25` (fps), and `d=-1` (infinite duration)
# YUV test pattern
ffmpeg -v level+warning -stats -f lavfi -i yuvtestsrc OUTPUT.mp4
# Y (luminance, black/white)
# Cb (blue-difference chroma, yellow/grey/blue)
# Cr (red-difference chroma, turquoise/grey/red)
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `yuvtestsrc` video source
ffmpeg -v level+warning -stats -f lavfi -i yuvtestsrc -frames 1 lavfi_yuvtestsrc.png
default: `s=320x240`, `r=25` (fps), `d=-1` (infinite duration), and `type=black` (`black`/`white`/`all`)
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `colorspectrum` video source
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum -frames 1 lavfi_colorspectrum_black.png
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum=type=white -frames 1 lavfi_colorspectrum_white.png
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum=type=all -frames 1 lavfi_colorspectrum_all.png
default: `s=320x240`, `r=25` (fps), `d=-1` (infinite duration), `preset=reference` (`reference`/`skintones`), and `patch_size=64x64` (size of each tile)
# color checker chart (6↔ * 4↕ = 24 tiles)
ffmpeg -v level+warning -stats -f lavfi -i colorchart OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `colorchart` video source
ffmpeg -v level+warning -stats -f lavfi -i colorchart=patch_size=32x32 -frames 1 lavfi_colorchart_reference_32x32.png
ffmpeg -v level+warning -stats -f lavfi -i colorchart=preset=skintones:patch_size=32x32 -frames 1 lavfi_colorchart_skintones_32x32.png
default: `r=25` (fps), and `d=-1` (infinite duration)
Important
fixed size of `4096x4096` (use the `scale` filter to change the size)
# all rgb colors (static 4096x4096 frames)
ffmpeg -v level+warning -stats -f lavfi -i allrgb OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `allrgb` video source
scaled down to half size (bicubic) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i allrgb,scale=iw/2:-1 -frames 1 lavfi_allrgb_halfed.png
default: `r=25` (fps), and `d=-1` (infinite duration)
Important
fixed size of `4096x4096` (use the `scale` filter to change the size)
# all yuv colors (static 4096x4096 frames)
ffmpeg -v level+warning -stats -f lavfi -i allyuv OUTPUT.mp4
- `-v` documentation
- `-stats` documentation
- `-f` documentation
- `lavfi` virtual input device
- `allyuv` video source
scaled down to half size (bicubic) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i allyuv,scale=iw/2:-1 -frames 1 lavfi_allyuv_halfed.png