
FFmpeg – the swiss army knife of Internet Streaming – part VI

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

The fabulous world of FFmpeg  filtering

Transcoding is not a “static” matter; it is dynamic, because the input can span a very wide range of content types and you may have to set the encoding parameters accordingly (this is particularly true for user-generated content).

Moreover, the processing you need to do in a video project may go beyond simple transcoding and require a deeper capacity for analysis, manipulation and “filtering” of video files.

Let’s consider some examples:

1. you have input files with several resolutions and aspect ratios and you have to encode them to two target output formats (one for 16:9 and one for 4:3). In this case you need to analyze the input file and decide which profile to apply depending on the input aspect ratio.

2. now let’s suppose you also want to encode the video at the target resolution only if the input has an equal or higher resolution, and keep the original otherwise. Again you’d need some external logic to read the metadata of the input and set up a dedicated encoding profile.

3. sometimes video needs to be filtered, scaled and filtered again: for instance deinterlacing, watermarking and denoising. You need to be able to specify a sequence of filtering and/or manipulation tasks.

4. everybody needs thumbnail generation, but it’s difficult to find a shot that is really representative of the video content. Grabbing shots only on scene changes can be far more effective.

FFmpeg can satisfy these kinds of complex analysis, handling and filtering tasks even without external logic, using the embedded filtering engine (-vf). For very complex workflows an external controller is still necessary, but filters come in handy when you need to do the job straight and simple.

FFmpeg filtering is a wide topic because there are hundreds of filters and thousands of combinations. So, using the same “recipe” style as the previous articles in this series, I’ll try to solve some common problems with specific command line samples focused on filtering. Note that to simplify the command lines I’ll omit the parameters dedicated to H.264 and AAC encoding; take a look at the previous articles for that information.

1. Adaptive Resize

In FFmpeg you can use the -s switch to set the resolution of the output, but this is not a flexible solution. Far more control is provided by the “scale” filter. The following command line scales the input to the desired resolution the same way as -s:

ffmpeg -i input.mp4 -vf  "scale=640:360" output.mp4

But scale also provides a way to specify only the vertical or horizontal resolution and calculate the other so as to keep the same aspect ratio as the input:

ffmpeg -i input.mp4 -vf  "scale=640:-1" output.mp4

With -1 as the vertical resolution you delegate to FFmpeg the calculation of the right value to keep the same aspect ratio as the input (default) or obtain the aspect ratio specified with the -aspect switch (if present). Unfortunately, depending on the input resolution, this may produce an odd value, which is not divisible by 2 as required by H.264. To enforce a “divisible by x” rule, you can simply use the embedded expression evaluation engine:

ffmpeg -i input.mp4 -vf  "scale=640:trunc(ow/a/2)*2" output.mp4

The expression trunc(ow/a/2)*2 as the vertical resolution means: use as output height the output width (ow, in this case 640) divided by the input aspect ratio and truncated down to the nearest multiple of 2 (I’m sure most of you are familiar with this kind of calculation).
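
The same arithmetic can be checked outside FFmpeg. Here is a quick sketch in plain shell (the 1280x720 and 854x480 inputs are hypothetical examples, not from the article):

```shell
# trunc(ow/a/2)*2 for a 16:9 input (e.g. 1280x720) and ow=640:
# 640 / (16/9) = 360, already even, so the height stays 360.
awk 'BEGIN { ow = 640; a = 16/9; print int(ow/a/2)*2 }'   # -> 360

# For an 854x480 input, plain ow/a would give 359.77...;
# the trunc(.../2)*2 trick snaps it down to an even 358.
awk 'BEGIN { ow = 640; a = 854/480; print int(ow/a/2)*2 }'   # -> 358
```

awk's int() truncates toward zero exactly like FFmpeg's trunc(), so the two lines reproduce what the scale filter computes internally.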

2. Conditional resize

Let’s go further and find a solution to the problem 2 mentioned above: how to skip resize if the input resolution is lower than the target ?

ffmpeg -i input.mp4 -vf  "scale=min(640\,iw):trunc(ow/a/2)*2" output.mp4

This command line uses as width the minimum between 640 and the input width (iw), and then scales the height to maintain the original aspect ratio. Notice that the “,” inside an expression must be escaped as “\,”, otherwise it is interpreted as a filter separator (and some shells may require further escaping).
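
You can verify the two branches of min(640,iw) with the same kind of sketch (the input widths 1280 and 480 are hypothetical):

```shell
# HD input (iw=1280): min(640,1280) = 640, so the video is downscaled.
awk 'BEGIN { iw = 1280; w = (iw < 640) ? iw : 640; print w }'   # -> 640

# Small input (iw=480): min(640,480) = 480, so the original width is kept.
awk 'BEGIN { iw = 480; w = (iw < 640) ? iw : 640; print w }'   # -> 480
```

In both cases the height expression then follows whatever width was chosen, so the aspect ratio is preserved either way.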

With this kind of filtering you can easily set up a command line for massive batch transcoding that smartly adapts the output resolution to the target. Why use the original resolution when it is lower than the target? Well, if you encode with -crf this can save a lot of bandwidth!

3. Deinterlace

SD content is almost always interlaced and FullHD is very often interlaced. If you encode for the web you need to deinterlace and produce progressive video, which is also easier to compress. FFmpeg has a good deinterlacing filter named yadif (yet another deinterlacing filter) which is more efficient than the standard -deinterlace switch.

ffmpeg -i input.mp4 -vf  "yadif=0:-1:0, scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4

This command deinterlaces the source (only if it is interlaced) and then scales it down to half the horizontal and vertical resolution. In this case the sequence is mandatory: always deinterlace prior to scaling!

4. Interlacing aware scaling

Sometimes, especially if you work on IPTV projects, you may need to encode interlaced output (legacy STBs require interlaced content, and interlaced video can also have a higher temporal resolution). This is simple: just add -tff or -bff (top field first or bottom field first) to the x264 parameters. But there’s a problem: when you start from 1080i and want to go down to an interlaced SD output (576i or 480i) you need interlacing-aware scaling, because standard scaling will break the interlacing. No fear, FFmpeg has recently introduced this option in the scale filter:

ffmpeg -i input.mp4 -vf  "scale=720:576:-1" output.mp4

The third optional parameter of the filter is dedicated to interlaced scaling: -1 means automatic detection; use 1 instead to force interlaced scaling.

5. Denoising

When seeking a high compression ratio it is very useful to reduce the video noise of the input. There are several possibilities; my favorite is the hqdn3d filter (high quality de-noising 3D filter):

ffmpeg -i input.mp4 -vf  "yadif,hqdn3d=1.5:1.5:6:6,scale=640:360" output.mp4

The filter can denoise video using a spatial function (the first two parameters set the strength) and a temporal function (the last two parameters). Depending on the type of source (level of motion), the spatial or the temporal function may be more useful. Pay attention also to the order of the filters: deinterlace -> denoise -> scale is usually the best.

6. Select only specific frames from input

Sometimes you need to control which frames are passed to the encoding stage, or more simply change the fps. Here are some useful usages of the select filter:

ffmpeg -i input.mp4 -vf  "select=eq(pict_type\,I)" output.mp4

This sample command filters out every frame that is not an I-frame. This is useful when you know the GOP structure of the original and want to create a fast preview of the video. Specifying a frame rate for the output with -r accelerates the playback, while using -vsync 0 copies the PTS from the input and keeps the playback real-time.

Note: the previous command is similar to the input switch -skip_frame nokey (-skip_frame bidir instead drops B-frames during decoding, useful to speed up the decoding of big files in special cases).

ffmpeg -i input.mp4 -vf  "select=not(mod(n,3))" output.mp4

This command selects one frame out of every 3, so it is possible to decimate the original framerate by an integer factor N, which is useful for mobile low-bitrate encoding.
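
To see which frames not(mod(n,3)) keeps, you can enumerate the frame counter n (which is 0-based in the select filter); a small sketch:

```shell
# select=not(mod(n,3)) evaluates true when n % 3 == 0,
# so frames 0, 3, 6, 9, ... survive: one frame out of every three.
awk 'BEGIN { for (n = 0; n < 10; n++) if (n % 3 == 0) s = s (s == "" ? "" : " ") n; print s }'
# -> 0 3 6 9
```

Replace 3 with any integer factor N to decimate the framerate by N.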

7. Speed-up or slow-down the video

It is also fun to play with PTS (presentation time stamps):

ffmpeg -i input.mp4 -vf  "setpts=0.5*PTS" output.mp4

Use this to speed up your video by a factor of 2 (frames are dropped accordingly), or the one below to slow it down:

ffmpeg -i input.mp4 -vf  "setpts=2.0*PTS" output.mp4
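
The effect of setpts is plain timestamp arithmetic. With hypothetical 25 fps timestamps (40 ms apart), a sketch of what 0.5*PTS does:

```shell
# setpts=0.5*PTS halves every timestamp: frames at 0, 40, 80 ms
# are presented at 0, 20, 40 ms, so the clip runs twice as fast.
# setpts=2.0*PTS does the opposite (half speed, double duration).
awk 'BEGIN { split("0 40 80", pts); for (i = 1; i <= 3; i++) s = s (s == "" ? "" : " ") pts[i]*0.5; print s }'
# -> 0 20 40
```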

8. Generate thumbnails on scene changes

The thumbnail filter tries to find the most representative frames in the video, which makes it good for generating thumbnails.

ffmpeg -i input.mp4 -vf  "thumbnail,scale=640:360" -frames:v 1 thumb.png

A different way to achieve this is to use the select filter again. The following command selects only frames that differ by more than 40% from the previous one (and so are probably scene changes) and generates a sequence of 5 PNGs. Since select produces a variable frame rate, add -vsync vfr so the output files match the selected frames.

ffmpeg -i input.mp4 -vf  "select=gt(scene\,0.4),scale=640:360" -vsync vfr -frames:v 5 thumb%03d.png

Conclusions

The world of FFmpeg filtering is very wide and this is only a quick and “filtered” view on this world. Let me know in the comments or on twitter (@sonnati) if you need more complex filters or have problems adventuring in this fabulous world ;-)

Categories: ffmpeg, Video
  1. 20 October 2012 at 6:22 am | #1

    Fabio. Very slick. Excellent article and nicely done. This is an excellent details that I’ve yet to explore. Thanks for the advanced heads up. Kudos to your work and findings. FFMPEG and its endless possibilities.

  2. sonnati
    24 October 2012 at 11:36 pm | #2

    thank you Maxim

  3. David González Clemente
    14 November 2012 at 3:15 pm | #3

    Hi Fabio, I have used FFmpeg for the last two years and this is by far the best article I’ve ever read. Congratulations and keep going!

    • sonnati
      14 November 2012 at 3:22 pm | #4

      thank you David

  4. walo
    15 November 2012 at 4:28 pm | #5

    Hi,

    i’ve read the full article because i’m trying to broadcasto to rtmp server from ffmpeg and a Yuv2 webcam, but when enconding, it shows me some errors:

    ffmpeg -f video4linux2 -i /dev/video0 -pix_fmt yuv420p -r 25 -re -vcodec libx264 -f flv -y rtmp://myip/live/myStream

    ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
    built on Aug 29 2012 12:42:57 with gcc 4.7.0 20120414 (prerelease)
    configuration: --prefix=/usr --enable-libmp3lame --enable-libvorbis --enable-libxvid --enable-libx264 --enable-libvpx --enable-libtheora --enable-libgsm --enable-postproc --enable-shared --enable-x11grab --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libschroedinger --enable-libopenjpeg --enable-librtmp --enable-libpulse --enable-gpl --enable-version3 --enable-runtime-cpudetect --enable-libspeex --disable-debug --disable-static
    libavutil 51. 54.100 / 51. 54.100
    libavcodec 54. 23.100 / 54. 23.100
    libavformat 54. 6.100 / 54. 6.100
    libavdevice 54. 0.100 / 54. 0.100
    libavfilter 2. 77.100 / 2. 77.100
    libswscale 2. 1.100 / 2. 1.100
    libswresample 0. 15.100 / 0. 15.100
    libpostproc 52. 0.100 / 52. 0.100
    [video4linux2,v4l2 @ 0x140c140] Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2,v4l2, from '/dev/video0':
    Duration: N/A, start: 31876.167607, bitrate: 147456 kb/s
    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 tbr, 1000k tbn, 30 tbc
    [buffer @ 0x1409660] w:640 h:480 pixfmt:yuyv422 tb:1/1000000 sar:0/1 sws_param:flags=2
    [buffersink @ 0x1409b80] No opaque field provided
    [format @ 0x1409fe0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'format'
    [scale @ 0x140aaa0] w:640 h:480 fmt:yuyv422 sar:0/1 -> w:640 h:480 fmt:yuv420p sar:0/1 flags:0x4
    [libx264 @ 0x140cb80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
    [libx264 @ 0x140cb80] profile High, level 3.0
    [libx264 @ 0x140cb80] 264 - core 123 r1775+359M ddcf640 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, flv, to 'rtmp://myip/live/myStream':
    Metadata:
    encoder : Lavf54.6.100
    Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p, 640x480, q=-1--1, 1k tbn, 25 tbc
    Stream mapping:
    Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Press [q] to stop, [?] for help

    DTS 48670997, next:89352230 st:0 invalid dropping
    PTS 48670997, next:89352230 invalid dropping st:0

    • sonnati
      15 November 2012 at 4:42 pm | #6

      try setting rate for input :

      ffmpeg -r 25 -re -f video4linux2 -i /dev/video0 -pix_fmt yuv420p -vcodec libx264 -f flv -y rtmp://myip/live/myStream

  5. walo
    15 November 2012 at 9:39 pm | #7

    Thank you very much, it seems to work although this message is always running:

    DTS 48670997, next:89352230 st:0 invalid dropping
    PTS 48670997, next:89352230 invalid dropping st:0

  6. 10 January 2013 at 10:21 pm | #8

    Hi Fabio, ffmpeg now supports “burning” .srt subtitles into a file http://goo.gl/H7ZJd. How can i bottom-place a readable (proportioned for 1080p) white font subtitle into an mpeg2 1080p file without reencoding it (if possible)?

  7. Saravanan
    19 February 2013 at 6:29 am | #9

    Hi… your series of “the swiss army knife” is very helpful.. thanks
    Is there a way to create multiple thumbnails with different resolutions like thum1.png(640×360), thumb2.png(126×86)… in a single ffmpeg command?

  8. MQ
    22 March 2013 at 8:18 am | #10

    This is a really helpful article – thanks for posting it.

    One tip in case anyone else has the same problem I have. I copied and pasted some code from here and got the following errors:
    No such filter: ‘“setpts’
    Error opening filters!

    Yet when I ran avconv -filters I could clearly see that setpts is in the list of available filters. I eventually worked out that the double-quotes in the web page version I was copying from were pretty quotes of some kind. By deleting them and inserting normal command-line ” quotes, I got it to work.

  9. 24 March 2013 at 9:37 pm | #11

    Wat is the best method to deal with anamorphic videos. How do I get rid of the anamorphic stuff.

  10. Shin Muraoka
    25 March 2013 at 11:44 am | #12

    Hi Fabio!

    And thank you–you have already been a great help! As an everyday user of FFmpeg, I refer to your series of articles more often than I do the canonical manpages.

    I have an oddball problem that seems to have no graceful solution. As I’m sure you are aware, Google has established a cloud storage service (Google Drive) and provides 5 GB free to every registered user.

    In the course of archiving large-ish files (videos of about 1 GB in size) to the cloud, I have discovered that among the cloud apps (formerly Google Docs) associated with the Drive service is a video preview app that allows the user to stream previously uploaded video files to a browser window.

    This feature appears to be closely-related to the YouTube API, offering resolution switching, closed-caption selection, player sizing, and audio volume adjustment in a very familiar layout. An uploaded video file is processed and made available at a number of fixed resolutions, so that a 1080p upload can be streamed at 720p, 480p, or 360p (in addition to 1080p).

    Non-standard resolutions are not upscaled, so a file with a native resolution of 832:468 can be streamed at 360p only The original video remains available for download in full. I am committed to archiving the original file unaltered, but i would like to be able to preview it in a higher-quality stream.

    I had hoped that padding the video can be performed without transcoding the video, that would be acceptable–alas, combining

    -vf pad=832:480:0:0

    with

    -c copy

    accomplishes nothing. A flag that could make the client side behave as though it had been presented with a video of height 480 without transcoding the video–something that works like

    -vf setdar

    –would be ideal (e.g.,

    -vf setdw=480

    ).

    Thank you for your time, Fabio. I’m no coder, but if anyone can point me in the right direction, I’m sure you can. In order of preference, here is how i hope my problem is resolved:

    1. What I’m looking for already exists as an FFMpeg filter, and I’ve just missed it.
    2. What I’m looking for exists but I’ll have to learn MEncoder,
    3. What I’m looking for exists but I’ll have to learn X264;
    4. What I’m looking for exists but I’ll have to learn AviSynth;
    5. What I’m looking for exists but I’ll have to learn Chrome’s JavaScript Console.

    • sonnati
      25 March 2013 at 12:22 pm | #13

      unfortunately it is not possible to change frame dimensions without re-encoding because of the inner nature of h264 bitstream

      • Shin Muraoka
        25 March 2013 at 1:55 pm | #14

        Thank you so very much for the speedy reply.

        I suspected that might be the case–I was just hoping I was wrong.

        Here’s a longshot–I realise that even if it’s possible to preserve the original H264 stream in this way, the result might be unlikely to survive processing into a 480p Flash preview:

        Would it be possible to mux two video streams–the first being the unaltered 832×468 H264 stream; the second a black 832×12 bar–to play simultaneously (one above the other) in a Matroska (.mkv) container?

        I’m sure this sounds insane. I cannot find any satisfactory answers to the following::

        • Would a typical player decode both streams simultaneously?
        • Is it possible to align them so the combined effect is a single 832×480 frame?

        Thanks again for your time. If this is too crazy to merit a response, I understand completely.

  11. Evgeny Kaminsky
    26 March 2013 at 9:48 pm | #15

    Hi Fabio, When we encode h.264 video with ffmpeg from live source such as a football match, the video comes out with jerky artifacts especially when there is a strong movement of the camera. May be necessary to use more than one filter?

    Thank you in advance

    • sonnati
      27 March 2013 at 11:18 am | #16

      the source is interlaced. Add -vf yadif

      • Evgeny Kaminsky
        27 March 2013 at 4:44 pm | #17

        Thanks for your quick reply.
        I tried yadif, without parameters, and yadif= 0:-1:0, but it does not help with fast camera movements.

  12. 12 April 2013 at 6:18 am | #18

    Question- How to use the timestamp? I want to stream files on a timeline (and Im tired of having to connect by ssh when the stream ends) I am streaming flv files to red5 and jwplayer using ffmpeg -re -i file.flv -acodec copy -vcodec copy -f flv rtmp://localhost:1935/live/livestream and it works pretty well. So far i tried typing in:
    Begin at 11:00 ffmpeg -re -i file.flv -acodec copy -vcodec copy -f flv rtmp://localhost:1935/live/livestream timecode=’00//:11//:00//;02′
    ffmpeg says it cannot find a suitable output format. I’m sure that I’m doing something wrong.

  13. Alvgarci
    18 April 2013 at 11:16 pm | #19

    Fabio: Really congrats. This is a great article. Let me leave a wish here, I love to see some real examples about creation date metadata add, copy, modification. It’s extremely common that some of this info is deleted on any default ffmpeg process. People like me that love to store all source data for indexing loose hours on scripting the correct way to copy this metadata on any conversion. Can you please share correct syntax (and tested :-)) to add creation media data metadata into a new MP4 file I.E? I hurt my eyes in google without success..

  14. ux
    10 May 2013 at 11:03 am | #21

    scene change with select requires escaping; your post is lacking simple quotes around the scene expression (or maybe your blog engine strips then, dunno). You also have a typo in the same filtergraph around the scale arguments (‘:’ and ‘x’). Last, since you’re picking multiple frames with variable frame rate, you need to either -vsync vfr, or setpts after the select, otherwise the output frames won’t look like what you want.

  15. djimmi
    4 June 2013 at 1:07 am | #22

    great stuff here … what are you guys doing for rtmp (red5,ffmpeg) to prevent hot linking ??

  16. 2 August 2013 at 4:50 pm | #23

    What do the values mean for the yadif filter? I can’t seem to find that info anywhere. I’d like to know what each number means and what other options are available for that filter.

  17. Evgeny Kaminsky
    19 September 2013 at 3:54 pm | #25

    Hi Fabio,
    After deinterlacing of original 1080i@50 video using yadif I see a flicker of letters in a running band at the bottom of the frame. How can this be reduced?

  18. 7 October 2013 at 12:57 pm | #26

    Hello Fabio – Thank you for this page :-)

    Question: do you know of any filter to reduce light flickering issues, say when footage was shot under fluorescent lights, hz prob, etc. ? Some say hqdn3s, but it doesn’t for me.

    FYI, you’re missing the “\” in the following command:

    DOESN’T WORK
    ffmpeg -i input.mp4 -vf “select=eq(pict_type,I)” output.mp4

    WORKS
    ffmpeg -i input.mp4 -vf “select=eq(pict_type\,I)” output.mp4

  19. Vinod
    6 November 2013 at 12:35 pm | #27

    hi Fabio, How to engrave timestamp in the live http stream in ffmpeg kindly help me with this ?

  20. 15 November 2013 at 2:32 pm | #28

    i am using avconv 9.10, but cannot get the scene filter working.

    Error while parsing expression ‘gt(scene,0.4)’

    was it removed from avconv!?

  21. Arnaud
    18 November 2013 at 7:11 pm | #29

    Hi Fabio

    I was doing some tests with some IMX50 material : crop + scale + overlay. The result was blurry which is normal because I downscaled the original video. However I wanted the text to be clearer so I tried using the unsharp filter with its default values. As a result I got sub-blacks and super-whites on a Tektronix directly fed with a playout from an Omneon system over SDI. I then used 0:0:0.0:3:3:1.0 and got some chroma errors instead.

    I have no such problem without the unsharp filter. Would you have any suggestion?

    Thanks in advance!

  22. Parth
    25 November 2013 at 1:26 pm | #30

    Hi Fabio, I want the ffmpeg output of a live stream to a video card/port/audio video port. Is it possible to get the ffmpeg output on hardware?

  23. Evgeny Kaminsky
    28 January 2014 at 4:42 pm | #31

    Hi Fabio,

    What is the best SD to HD upscale solution for compressed streams? It is possible to increase the resolution with high quality using FFmpeg only, or do I need to use third-party applications?

    Thank you in advance!
