FFmpeg is the most commonly used open source software for video processing.

It is powerful and versatile, used extensively by video websites and commercial software (such as YouTube and iTunes), and serves as the standard encoding/decoding implementation for many audio and video formats.

FFmpeg itself is a huge project containing many components and library files, the most commonly used of which are its command line tools. This article describes how the FFmpeg command line tool can handle video more concisely and efficiently than desktop video processing software.

If you haven’t installed it yet, you can follow the official documentation to complete the installation first.

Concept

Before introducing FFmpeg usage, you need to understand some basic concepts of video processing.

Container

The video file itself is actually a container, which includes video and audio, and may also have other content such as subtitles.

There are several common container formats. Generally speaking, a video file's extension reflects its container format.

  • MP4
  • MKV
  • WebM
  • AVI

The following command lists the container formats supported by FFmpeg.

$ ffmpeg -formats

Encoding format

Both video and audio must be encoded before they can be saved to a file. Different encoding formats (codecs) use different compression methods, which leads to differences in file size and quality.

Commonly used video encoding formats are listed below.

  • H.262
  • H.264
  • H.265

The above encoding formats are patent-encumbered, although they can generally be used free of charge. There are also several royalty-free video encoding formats.

  • VP8
  • VP9
  • AV1

The commonly used audio encoding formats are as follows.

  • MP3
  • AAC

All of the above are lossy encoding formats: some detail is lost during encoding in exchange for a smaller file after compression. Lossless encoding formats produce much larger files, so they are not covered here.

The following command lists the encoding formats supported by FFmpeg; both video and audio encodings are included.

$ ffmpeg -codecs
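The list is long, so if you only want to check whether a particular codec is supported, you can filter the output. A minimal sketch, assuming a Unix-like shell with grep available:

$ ffmpeg -hide_banner -codecs | grep 264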

Encoder

Encoders are the library components that implement a particular encoding format. Video or audio can only be encoded into a given format if an encoder for that format is installed (and decoded only if a corresponding decoder is available).

Here are some of the video encoders built into FFmpeg.

  • libx264: The most popular open source H.264 encoder
  • NVENC: NVIDIA GPU-based H.264 encoder
  • libx265: open source HEVC encoder
  • libvpx: Google’s VP8 and VP9 encoders
  • libaom: AV1 encoder

Commonly used audio encoders include the following.

  • libfdk_aac
  • aac

The following command allows you to view the installed encoders of FFmpeg.

$ ffmpeg -encoders
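You can also ask FFmpeg to describe the options accepted by a specific encoder. For example, assuming libx264 is included in your build:

$ ffmpeg -h encoder=libx264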

Format of FFmpeg usage

FFmpeg’s command line parameters are very numerous and can be divided into five parts.

$ ffmpeg {1} {2} -i {3} {4} {5}

The parameters of the five parts in the above command are in the following order.

  1. global parameters
  2. input file parameters
  3. input file
  4. output file parameters
  5. output file

When there are too many parameters, the ffmpeg command can be written in multiple lines for easy viewing.

$ ffmpeg \
[global parameters] \
[input file arguments] \
-i [input file] \
[output file arguments] \
[output file]

Here is an example.

$ ffmpeg \
-y \ # global parameters
-c:a libfdk_aac -c:v libx264 \ # input file arguments
-i input.mp4 \ # input file
-c:v libvpx-vp9 -c:a libvorbis \ # output file arguments
output.webm # output file

The above command converts an MP4 file to a WebM file; both are container formats. The audio encoding of the input MP4 file is AAC and its video encoding is H.264; the output WebM file uses VP9 for video and Vorbis for audio. Note that the inline # comments only label the five parts and should be removed before actually running the command, and that options placed before -i apply to reading the input, so in practice encoder-only names such as libx264 cannot be used there.

If no encoding format is specified, FFmpeg chooses suitable default encoders for the output container on its own. For example, the following command converts an AVI file to MP4 using the defaults.

$ ffmpeg -i input.avi output.mp4

Common command line parameters

The following command line parameters are commonly used in FFmpeg.

  • -c : Specify the encoder
  • -c copy : Copy directly, without re-encoding (this is faster)
  • -c:v : Specify the video encoder
  • -c:a : Specify the audio encoder
  • -i : Specify the input file
  • -an : remove the audio stream
  • -vn : remove the video stream
  • -preset : specify the encoding preset, which trades encoding speed against compression efficiency; available values are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow (slower presets produce smaller files at the same quality).
  • -y : Overwrite the file with the same name directly on output without confirmation.
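As a small illustration of these flags working together, the following sketch (with assumed file names) produces a silent copy of a video: the video stream is copied without re-encoding, the audio stream is dropped, and any existing output file is overwritten without confirmation.

$ ffmpeg -y -i input.mp4 -c:v copy -an output.mp4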

Common Uses

Here are a few common uses of FFmpeg.

View file information

To view meta information about a video file, such as encoding format and bitrate, you can use only the -i parameter.

$ ffmpeg -i input.mp4

The above command outputs a lot of extra information; adding the -hide_banner argument displays only the meta information.

$ ffmpeg -i input.mp4 -hide_banner
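For more structured metadata, the companion tool ffprobe, which is installed alongside ffmpeg in most distributions, can print the container and stream information directly:

$ ffprobe -hide_banner -show_format -show_streams input.mp4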

Convert encoding format

Transcoding refers to the conversion of a video file from one encoding to another. For example, to convert to H.264 encoding, the encoder libx264 is usually used, so you only need to specify the video encoder of the output file.

$ ffmpeg -i [input.file] -c:v libx264 output.mp4
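When re-encoding with libx264 you will usually also want to control the output quality. A common approach, sketched here with assumed file names, is to combine a -preset with a -crf value (lower means higher quality; 23 is the libx264 default) while copying the audio unchanged:

$ ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 23 -c:a copy output.mp4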

Here is the corresponding command for converting to H.265 encoding.

$ ffmpeg -i [input.file] -c:v libx265 output.mp4

Convert container format

Converting container formats (transmuxing) means repackaging the streams from one container into another without re-encoding them. Here is how to convert MP4 to MKV.

$ ffmpeg -i input.mp4 -c copy output.mkv

In the above example, only the container is converted and the internal encoding formats remain unchanged, so -c copy is used to copy the streams directly without transcoding, which is much faster. Note that each container supports only certain codecs; WebM, for example, accepts only VP8/VP9/AV1 video, so an H.264 stream cannot simply be copied into it and would have to be re-encoded as in the earlier example.

Adjusting the bit rate

Transrating refers to changing the bit rate of the encoding, and is generally used to make the video file smaller in size. The following example specifies a minimum bitrate of 964K, a maximum of 3856K, and a buffer size of 2000K.

$ ffmpeg \
-i input.mp4 \
-minrate 964K -maxrate 3856K -bufsize 2000K \
output.mp4
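If you simply want to aim for an average video bitrate rather than setting minimum and maximum bounds, -b:v can be used instead; a sketch with assumed file names:

$ ffmpeg -i input.mp4 -b:v 2M -c:a copy output.mp4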

Changing resolution (transsizing)

Here is an example of changing the video resolution from 1080p to 480p.

$ ffmpeg \
-i input.mp4 \
-vf scale=-2:480 \
output.mp4

Here scale=-2:480 sets the output height to 480 pixels and lets FFmpeg pick a width that preserves the aspect ratio, rounded to an even number as most encoders require.
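You can also force an exact output size by giving both width and height, although this may distort the picture if the target aspect ratio differs from the source; a sketch with assumed file names:

$ ffmpeg -i input.mp4 -vf scale=1280:720 -c:a copy output.mp4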

Extracting audio

Sometimes it is necessary to extract the audio track from a video (demuxing). You can write it like this.

$ ffmpeg \
-i input.mp4 \
-vn -c:a copy \
output.aac

In the above example, -vn means remove the video, -c:a copy means copy directly without changing the audio encoding.
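If you need the audio in a different format than the one stored in the container, re-encode it instead of copying it. For example, assuming your FFmpeg build includes the libmp3lame encoder, the audio can be extracted as an MP3:

$ ffmpeg -i input.mp4 -vn -c:a libmp3lame -b:a 192k output.mp3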

Adding audio tracks

Adding audio tracks (muxing) means that external audio is added to the video, such as adding background music or narration.

$ ffmpeg \
-i input.aac -i input.mp4 \
output.mp4

In the example above, there are two input files, audio and video, and FFmpeg will combine them into one file.
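By default, FFmpeg picks one stream of each type on its own, which may not be the combination you want when several inputs contain audio. The -map option states explicitly which streams go into the output; the following sketch takes the video from the second input and the audio from the first, copying both without re-encoding:

$ ffmpeg \
-i input.aac -i input.mp4 \
-map 1:v -map 0:a -c copy \
output.mp4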

Screenshots

The following example takes a continuous series of screenshots covering one second of video, starting from the specified time.

$ ffmpeg \
-y \
-i input.mp4 \
-ss 00:01:24 -t 00:00:01 \
output_%3d.jpg
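If you want thumbnails spread across the whole video rather than every frame of a single second, the fps filter can limit the capture rate. The following sketch, with assumed file names, saves one image per second:

$ ffmpeg -i input.mp4 -vf fps=1 thumb_%03d.jpg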

If you only need a single image, you can specify that just one frame is captured.

$ ffmpeg \
-ss 01:23:45 \
-i input \
-vframes 1 -q:v 2 \
output.jpg

In the above example, -vframes 1 specifies that only one frame is captured, and -q:v 2 sets the quality of the output image; lower values mean higher quality, and values between 2 and 5 are typical for JPEG output.

Crop

Cutting means extracting a clip from the original video and writing it out as a new video. You can specify a start time together with either a duration or an end time.

$ ffmpeg -ss [start] -i [input] -t [duration] -c copy [output]
$ ffmpeg -ss [start] -i [input] -to [end] -c copy [output]

Here is a practical example.

$ ffmpeg -ss 00:01:50 -i [input] -t 10.5 -c copy [output]
$ ffmpeg -ss 2.5 -i [input] -to 10 -c copy [output]

In the above example, -c copy means to copy the audio and video directly without changing the encoding format, which will be much faster.
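One caveat: with -c copy, FFmpeg can only cut at keyframes, so the actual start of the clip may differ slightly from the requested time. If a frame-accurate cut is needed, re-encode instead of copying; a sketch with assumed file names and encoders:

$ ffmpeg -ss 00:01:50 -i input.mp4 -t 10 -c:v libx264 -c:a aac output.mp4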

Adding a cover to audio

Some video sites only allow uploading video files. To upload an audio file, you must add a cover to the audio, convert it to a video, and then upload it.

The following command converts an audio file into a video file with a cover image.

$ ffmpeg \
-loop 1 \
-i cover.jpg -i input.mp3 \
-c:v libx264 -c:a aac -b:a 192k -shortest \
output.mp4

In the above command, there are two input files: the cover image cover.jpg and the audio file input.mp3. The -loop 1 parameter loops the image indefinitely, and -shortest makes the output end as soon as the shortest input, the audio file, ends.
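Some players and sites are also picky about the pixel format and require even frame dimensions. If the resulting file is refused by certain players, a commonly used variant (a sketch based on the same inputs) forces the widely supported yuv420p pixel format and rounds the image dimensions down to even numbers:

$ ffmpeg \
-loop 1 \
-i cover.jpg -i input.mp3 \
-c:v libx264 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" \
-c:a aac -b:a 192k -shortest \
output.mp4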


Reference https://www.ruanyifeng.com/blog/2020/01/ffmpeg.html