FFmpeg is the most commonly used open source software for video processing.
It is powerful and versatile, used extensively by video websites and commercial software (such as YouTube and iTunes), and serves as the standard encoding/decoding implementation for many audio and video formats.
FFmpeg itself is a huge project containing many components and library files; the most commonly used part is its command-line tool. This article describes how the FFmpeg command line can handle video more concisely and efficiently than desktop video-processing software.
If you haven’t installed it yet, you can follow the official documentation to complete the installation first.
Before introducing FFmpeg usage, you need to understand some basic concepts of video processing.
The video file itself is actually a container, which includes video and audio, and may also have other content such as subtitles.
There are several common container formats. Generally speaking, a video file's extension reflects its container format.
The following command lists the container formats supported by FFmpeg.
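Assuming ffmpeg is on your PATH:

```shell
# List every container format; the flags in the first column mean
# D = demuxing (reading) supported, E = muxing (writing) supported.
ffmpeg -formats
```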
Both video and audio need to be encoded before they can be saved as files. Different encoding formats (codecs) have different compression ratios, which leads to differences in file size and clarity.
Commonly used video encoding formats are listed below.

- H.262
- H.264
- H.265

The above encoding formats are patented, but can be used free of charge. In addition, there are several royalty-free video encoding formats.

- VP8
- VP9
- AV1
The commonly used audio encoding formats are as follows.

- MP3
- AAC
All of the above are lossy encoding formats: some detail is lost during encoding in exchange for a smaller file after compression. Lossless encoding formats produce much larger files after compression, so they are not covered here.
The following command lists the encoding formats supported by FFmpeg; both video and audio encodings are included.
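Again assuming ffmpeg is installed:

```shell
# List all known codecs; the flag columns mark decode (D) and
# encode (E) support, and V/A/S for video, audio, and subtitle codecs.
ffmpeg -codecs
```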
Encoders are library files that implement a certain encoding format. Encoding and decoding of video/audio in a format can only be achieved if the encoders for that format are installed.
Here are some of the video encoders built into FFmpeg.
- libx264: The most popular open source H.264 encoder
- NVENC: NVIDIA GPU-based H.264 encoder
- libx265: open source HEVC encoder
- libvpx: Google’s VP8 and VP9 encoders
- libaom: AV1 encoder
The built-in audio encoders are as follows.

- libfdk-aac
- aac
The following command allows you to view the installed encoders of FFmpeg.
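For example:

```shell
# List only the encoders compiled into this FFmpeg build.
ffmpeg -encoders
```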
Format of FFmpeg usage
FFmpeg’s command line parameters are very numerous and can be divided into five parts.
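In outline (the bracketed parts are placeholders, not literal flags):

```shell
ffmpeg [global parameters] [input file parameters] -i [input file] [output file parameters] [output file]
```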
The parameters of the five parts in the above command are in the following order.
- global parameters
- input file parameters
- input file
- output file parameters
- output file
When there are too many parameters, the ffmpeg command can be written in multiple lines for easy viewing.
Here is an example.
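A sketch of such a command, with hypothetical file names, converting an mp4 (aac audio, H.264 video) to a webm (VP9 video, Vorbis audio):

```shell
ffmpeg \
  -y \
  -c:a aac -c:v h264 \
  -i input.mp4 \
  -c:v libvpx-vp9 -c:a libvorbis \
  output.webm
```

Here `-y` is a global parameter, `-c:a aac -c:v h264` are input file parameters (the decoders), `-i input.mp4` is the input file, `-c:v libvpx-vp9 -c:a libvorbis` are output file parameters (the encoders), and `output.webm` is the output file.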
The above command converts an mp4 file to a webm file, both of which are container formats. The audio encoding format of the input mp4 file is aac, and the video encoding format is H.264; the video encoding format of the output webm file is VP9, and the audio format is Vorbis.
If no encoding format is specified, FFmpeg will determine the encoding of the input file by itself. Therefore, the above command can be simply written as follows.
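With hypothetical file names:

```shell
# FFmpeg detects the input codecs itself and uses the default
# encoders for the webm container.
ffmpeg -i input.mp4 output.webm
```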
Common command line parameters
The following command line parameters are commonly used in FFmpeg.
- -c: specify the encoder
- -c copy: copy the streams directly, without re-encoding (this is much faster)
- -c:v: specify the video encoder
- -c:a: specify the audio encoder
- -i: specify the input file
- -an: remove the audio stream
- -vn: remove the video stream
- -preset: specify the encoding preset, which trades encoding speed against compression efficiency; the available values are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, and veryslow
- -y: overwrite an existing output file of the same name without asking for confirmation
Here are a few common uses of FFmpeg.
View file information
To view the meta information of a video file, such as encoding format and bitrate, you only need the -i parameter.
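For a hypothetical input.mp4:

```shell
# No output file is given, so ffmpeg exits with an error after
# printing the stream and metadata information.
ffmpeg -i input.mp4
```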
The above command outputs a lot of extraneous information; adding the -hide_banner argument displays only the meta information.
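For example:

```shell
ffmpeg -i input.mp4 -hide_banner
```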
Convert encoding format
Transcoding means converting a video file from one encoding format to another. For example, to convert to H.264 encoding, the encoder libx264 is usually used, so you only need to specify the video encoder of the output file.
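With a hypothetical file name:

```shell
# Re-encode the video stream as H.264; the audio stream is handled
# by the output container's default encoder unless specified.
ffmpeg -i input.mp4 -c:v libx264 output.mp4
```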
Here is how to convert to H.265 encoding.
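```shell
# libx265 must be compiled into your FFmpeg build for this to work.
ffmpeg -i input.mp4 -c:v libx265 output.mp4
```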
Convert container format
Converting container formats (transmuxing) means moving the audio and video from one container into another. Here is how to convert mp4 to webm.
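A sketch with hypothetical file names:

```shell
# Stream copying only succeeds if the target container accepts the
# source codecs; webm normally expects VP8/VP9 video and
# Vorbis/Opus audio.
ffmpeg -i input.mp4 -c copy output.webm
```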
In the above example, only the container is converted and the internal encoding formats remain unchanged, so -c copy specifies a direct copy without transcoding, which is much faster.
Adjusting the bit rate
Transrating refers to changing the bit rate of the encoding, and is generally used to make the video file smaller in size. The following example specifies a minimum bitrate of 964K, a maximum of 3856K, and a buffer size of 2000K.
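A sketch with a hypothetical file name:

```shell
ffmpeg -i input.mp4 \
  -minrate 964K -maxrate 3856K -bufsize 2000K \
  output.mp4
```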
Changing resolution (transsizing)
Here is an example of changing the video resolution from 1080p to 480p.
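One way to write it, using the scale video filter (the -2 asks FFmpeg to pick an even width that preserves the aspect ratio, which many encoders require):

```shell
ffmpeg -i input.mp4 -vf scale=-2:480 output.mp4
```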
Extracting audio

Sometimes it is necessary to extract the audio (demuxing) from a video. It can be written like this.
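A sketch with hypothetical file names (the output extension should match the audio codec, here assumed to be AAC):

```shell
ffmpeg -i input.mp4 -vn -c:a copy output.aac
```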
In the above example, -vn removes the video, and -c:a copy copies the audio directly without changing its encoding.
Adding audio tracks
Adding audio tracks (muxing) means that external audio is added to the video, such as adding background music or narration.
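A sketch with hypothetical file names:

```shell
# Two inputs; by default FFmpeg picks one video stream and one
# audio stream for the output.
ffmpeg -i input.aac -i input.mp4 output.mp4
```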
In the example above, there are two input files, audio and video, and FFmpeg will combine them into one file.
Taking screenshots

The following example takes a series of screenshots, capturing one second of video starting from the specified time.
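A sketch with a hypothetical file name and start time:

```shell
# Starting at 00:01:24, grab one second of frames and write them
# as numbered images: output_001.jpg, output_002.jpg, ...
ffmpeg -y -i input.mp4 -ss 00:01:24 -t 00:00:01 output_%3d.jpg
```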
If you only need a single screenshot, you can specify capturing just one frame.
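For example:

```shell
ffmpeg -ss 01:23:45 -i input.mp4 -vframes 1 -q:v 2 output.jpg
```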
In the above example, -vframes 1 specifies capturing only one frame, and -q:v 2 indicates the quality of the output image, usually a value between 1 and 5 (1 being the highest quality).
Cutting

Cutting means extracting a clip from the original video and outputting it as a new video. You can specify a start time (start) and a duration (duration), or a start time and an end time (end).
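In outline (bracketed values are placeholders):

```shell
# start time plus duration
ffmpeg -ss [start] -i [input] -t [duration] -c copy [output]
# start time plus end time
ffmpeg -ss [start] -i [input] -to [end] -c copy [output]
```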
In the above example, -c copy copies the audio and video directly without changing the encoding format, which is much faster.
Adding a cover to audio
Some video sites only allow uploading video files. To upload an audio file, you must add a cover to the audio, convert it to a video, and then upload it.
The following command converts an audio file into a video file with a cover image.
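A sketch, assuming the audio file is named input.mp3:

```shell
ffmpeg \
  -loop 1 -i cover.jpg \
  -i input.mp3 \
  -c:v libx264 -c:a aac -b:a 192k \
  -shortest \
  output.mp4
```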
In the above command, there are two input files: one is the cover image cover.jpg, and the other is the audio file. The -loop 1 parameter makes the picture loop infinitely, and the -shortest parameter makes the output video end when the audio file ends.