FFmpeg – the Swiss Army knife of Internet Streaming – part II

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised 19-oct-2012)
PART VI – Filtering (new 19-oct-2012)

 

(Because of the success of this series I have decided to revise the content of the articles, update the command-line syntax to recent API changes, and extend the series with new parts. Good reading!)

Second part

After the short introduction of the previous article, it's time to see FFmpeg in action. This post is dedicated to the most important parameters and ends with an example of transcoding to H.264. FFmpeg supports hundreds of AV formats and codecs as input and output (for the complete lists, type ffmpeg -formats and ffmpeg -codecs), but nowadays the most important output format is without doubt H.264.

H.264 is supported by Flash (99% of computers, plus mobile), iOS, Android, BlackBerry, low-end mobile devices, STBs and connected TVs. So why target different formats? A streaming professional must first and foremost master H.264. Other formats are marginal, even Google's VP8, which is still in too early a phase of adoption.

FFmpeg is a vast subject, so I'll focus on what I consider the most important options and a useful selection of command lines. But first of all, how do you find a build of FFmpeg to start managing your video?

The shortest path, if you don't want to build from source code, is to visit Zeranoe's FFmpeg builds. Remember that there are several libraries that can be included in or excluded from the build, so if you need something special or fine control over the capabilities of your build, the longest path is the best.

KEY PARAMETERS

This is the base structure of an FFmpeg invocation:

ffmpeg -i input_file […parameter list…] output_file

input_file and output_file can be defined not only as file-system objects: a number of protocols are supported, including file, http, pipe, rtp/rtsp, raw udp and rtmp. I'll focus on the possibility of using RTMP as input and/or output in the fourth part of this series, while in the following examples I'll use only local files.
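
For example, nothing changes in the basic structure when the input is a URL rather than a local file; a minimal sketch (the URL is a placeholder):

ffmpeg -i http://example.com/source.mp4 output.flv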

FFmpeg supports literally hundreds of parameters and options. Very often FFmpeg infers parameters from the context, for example the input or output format from the file extension, and it applies default values to unspecified parameters. Sometimes, though, it is necessary to specify important parameters explicitly to avoid errors or to optimize the encoding.
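
To make the inference concrete: in the first command below the FLV output format is inferred from the file extension, while the second forces it explicitly with -f (file names are placeholders):

ffmpeg -i input.mov output.flv
ffmpeg -i input.mov -f flv output.flv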

Let's start with a selection of the most important, non codec-related, parameters (a combined example follows the list):

-formats   print the list of supported file formats
-codecs    print the list of supported codecs (E=encode, D=decode)
-i         set the input file. Multiple -i switches can be used
-f         set the format (of the input if placed before -i, of the output otherwise)
-an        ignore audio
-vn        ignore video
-ar        set the audio sampling rate (in Hz)
-ac        set the number of audio channels
-ab        set the audio bitrate
-acodec    choose the audio codec, or use "copy" to bypass audio encoding
-vcodec    choose the video codec, or use "copy" to bypass video encoding
-r         set the video frame rate. You can also use fractional values like 30000/1001 instead of 29.97
-s         frame size (w x h, e.g. 320x240)
-aspect    set the aspect ratio, e.g. 4:3 or 16:9
-sameq     use the same quantizer as the source (despite common belief, this does not guarantee the same visual quality; deprecated in recent builds)
-t N       encode only N seconds of video (you can also use the hh:mm:ss.ddd format)
-croptop, -cropleft, -cropright, -cropbottom   crop the input video frame on each side (deprecated in recent builds in favor of the crop filter)
-y         overwrite the output file without asking
-ss        select the starting time in the source file
-vol       change the volume of the audio
-g         GOP size (distance between keyframes)
-b         video bitrate
-bt        video bitrate tolerance
-metadata  add a key=value metadata entry
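
To see several of these switches working together, here is a sketch (file names are placeholders) that resizes the video, caps the frame rate and bitrate, drops the audio and keeps only the first 30 seconds:

ffmpeg -y -i input.avi -s 640x360 -aspect 16:9 -r 25 -b 700k -an -t 30 output.mp4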

SYNTAX UPDATE

The syntax of some options has changed. Options like -b (is the bitrate related to audio or video?) now have a stream-specific syntax:

Use:

-b:a instead of -ab to set the audio bitrate
-b:v instead of -b to set the video bitrate
-codec:a or -c:a instead of -acodec
-codec:v or -c:v instead of -vcodec

Most of these parameters also accept a third, optional number to specify the index of the audio or video stream to use.

E.g.: -b:a:1 specifies the bitrate of the second audio stream (the index is 0-based).
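
Rewritten entirely in the new syntax, a typical encode looks like this (a sketch with placeholder file names):

ffmpeg -i input.mov -c:v libx264 -b:v 1000k -c:a copy output.mp4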

Read the official documentation for more info: http://ffmpeg.org/ffmpeg.html

COMMAND LINE EXAMPLES

And now let’s combine these parameters to manipulate AV files:

1. Getting info from a video file

ffmpeg -i video.mpg

Useful to retrieve info from a media file, like audio/video codecs, fps, frame size and other parameters. You can parse the output of the command in a script by redirecting the stderr channel to a file with a command like this:

ffmpeg -i inputfile 2>info.txt
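
If you only need a specific field, you can filter the redirected output directly; a minimal sketch for the duration on a Unix-like shell (on Windows, findstr can replace grep):

ffmpeg -i inputfile 2>&1 | grep Duration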

2. Encode a video to FLV

ffmpeg -i input.avi -r 15 -s 320x240 -an video.flv

By default FFmpeg encodes FLV files with the old Sorenson Spark codec (an H.263 variant). Today this is useful only for compatibility with older systems, or if you want to encode for a Wii (which supports only Flash Player 7).
Before the introduction of H.264 in Flash Player, I used to re-encode the FLVs recorded by FMS with a command like this:

ffmpeg -i input.flv -acodec copy -sameq output.flv
(ffmpeg -i input.flv -codec:a copy -sameq output.flv)

This produced a file 40-50% smaller with the same quality as the input, while preserving the Nellymoser ASAO audio codec, which was not supported by FFmpeg in those days and therefore could not be transcoded to anything else.

Today you can easily re-encode to H.264 and also transcode ASAO or Speex to something else.
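
As a sketch of such a transcode (assuming your build includes libmp3lame; FLV supports MP3 audio), this keeps the video as is and converts only the audio track:

ffmpeg -i input.flv -c:v copy -c:a libmp3lame -b:a 96k -ar 44100 output.flv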

The VP6 codec (Flash Player 8+) is officially supported only for decoding.

3. Encode from a sequence of pictures

ffmpeg -f image2 -i image%d.jpg -r 25 video.flv

Builds a video from a sequence of frames with names like image1.jpg, image2.jpg, ..., imageN.jpg.
It is possible to use different naming conventions, like image%03d.jpg, where FFmpeg searches for files with names like image001.jpg, image002.jpg, etc. The output is an FLV file at 25 fps.

4. Decode a video into a sequence of frames

ffmpeg -i video.mp4 -r 25 image%d.jpg

Decodes the video into a sequence of images (25 images per second) with names like image1.jpg, image2.jpg, ..., imageN.jpg. It's possible to change the naming convention.

ffmpeg -i video.mp4 -r 0.1 image%03d.jpg

Decodes one picture every 10 seconds (1/0.1 = 10). Useful to create a thumbnail gallery for your video. In this case the output files have names like image001.jpg, image002.jpg, etc.

ffmpeg -i video.mp4 -r 0.1 -t 20 image%03d.jpg

Extracts 2-3 images from the first 20 seconds of the source.

5. Extract an image from a video

ffmpeg -i video.avi -vframes 1 -ss 00:01:00 -f image2 image-%03d.jpg
(ffmpeg -i video.avi -frames:v 1 -ss 00:01:00 -f image2 image-%03d.jpg)

This is a more accurate command for image extraction. It extracts a single frame (-vframes 1) starting one minute from the beginning of the video. The thumbnail will be named image-001.jpg.

ffmpeg -i video.avi -r 0.5 -vframes 3 -ss 00:00:05 -f image2 image-%03d.jpg
(ffmpeg -i video.avi -r 0.5 -frames:v 3 -ss 00:00:05 -f image2 image-%03d.jpg)

In this case FFmpeg will extract 3 frames, one every 1/0.5 = 2 seconds, starting from time 5 s. Useful for a video CMS where you want to offer a selection of thumbnails and let a backend user choose the best one.

6. Extract only the audio track without re-encoding

ffmpeg -i video.flv -vn -c:a copy audio.mp3

Here I assume that the audio track is MP3. Use audio.mp4 if it is AAC, or audio.flv if it is ASAO or Speex. Similarly, you can extract the video track without re-encoding (see the sketch below).
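
The symmetrical command for the video track could look like this (a sketch; choose a container compatible with the video codec):

ffmpeg -i video.flv -an -c:v copy video_only.flv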

7. Extract the audio track with re-encoding

ffmpeg -i video.flv -vn -ar 44100 -ac 2 -ab 128k -f mp3 audio.mp3

This command extracts the audio and transcodes it to MP3. Useful when video.flv has been saved by FMS and has an audio track encoded with ASAO or Speex.

ffmpeg -i video.flv -vn -c:a libfaac -ar 44100 -ac 2 -ab 64k audio.mp4

The same as above but encoded to AAC.

8. Mux audio + video

ffmpeg -i audio.mp4 -i video.mp4 output.mp4

Depending on the content of the input files, you may need to use the -map option to choose and combine the audio and video tracks correctly. Here I assume one audio-only and one video-only input file.
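
As a sketch, an explicit mapping that takes the audio from the first input and the video from the second would be:

ffmpeg -i audio.mp4 -i video.mp4 -map 0:a -map 1:v -c:a copy -c:v copy output.mp4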

9. Change container

ffmpeg -i input.mp4 -c:a copy -c:v copy output.flv

I use this to put H.264 into an FLV container; sometimes it is useful. This kind of syntax will come back when we talk about FFmpeg and RTMP.

10. Grab from a webcam

On Linux it is easy to use an audio or video grabbing device as input to FFmpeg:

ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 out.flv

On Windows it is possible to use vfwcap (video only) or DirectShow (audio and video):

ffmpeg -r 15 -f vfwcap -s 320x240 -i 0 -r 15 -f mp4 webcam.mp4

ffmpeg -r 15 -f dshow -s 320x240 -i video="video source name":audio="audio source name" webcam.flv

Notice that here the parameters -r, -f and -s are set before -i.
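
If you don't know the exact device names to use, FFmpeg can list the available DirectShow devices:

ffmpeg -list_devices true -f dshow -i dummy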

11. Extract a slice without re-encoding

ffmpeg -i input -ss 00:01:00 -t 00:01:00 -c:a copy -c:v copy output.mp4

Extracts 1 minute of video starting from time 00:01:00. Be aware that putting the -ss and -t parameters before or after -i has different effects (see the sketch below).
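
As a sketch of the difference: the first form below seeks on the input before decoding (fast, but it may snap to the nearest keyframe), while the second decodes and discards everything up to the start point (slower, but frame-accurate when re-encoding):

ffmpeg -ss 00:01:00 -i input -t 00:01:00 -c:a copy -c:v copy output.mp4
ffmpeg -i input -ss 00:01:00 -t 00:01:00 -c:a copy -c:v copy output.mp4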

12. Make a video file from a single frame

ffmpeg -loop_input -frames:v 1 -i frame.jpg -t 10s -r 25 output.mp4

Generates a 10-second video at 25 fps from a single frame. Playing with -vframes it is possible to loop a sequence of frames (not a video). Note: -loop_input is deprecated in recent builds in favor of the -loop input option, shown below.
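
With recent builds the same result can be sketched with the -loop input option (placeholder file names):

ffmpeg -loop 1 -i frame.jpg -t 10 -r 25 output.mp4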

13. Add metadata

ffmpeg -i input.flv -c:v copy -c:a copy -metadata title="MyVideo" output.flv

Useful to change or add metadata, like the title or other info.
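
Multiple -metadata switches can be chained; a sketch (how each key is stored depends on the target container):

ffmpeg -i input.flv -c:v copy -c:a copy -metadata title="MyVideo" -metadata comment="encoded with FFmpeg" output.flv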

14. Encode to H.264

Let's conclude this second part of the series with an example of encoding to H.264 + AAC. In the examples above I have used, for simplicity's sake, FLV or MP4 output with default codecs. But to encode to H.264 you have to explicitly set the output codec and some required parameters.

ffmpeg -y -i input.mov -r 25 -b 1000k -c:v libx264 -pass 1 -vpre fastfirstpass -an output.mp4
ffmpeg -y -i input.mov -r 25 -b 1000k -c:v libx264 -pass 2 -vpre hq -acodec libfaac -ac 2 -ar 44100 -ab 128k output.mp4

This first example tells FFmpeg to use libx264 to produce an H.264 output. We are using two-pass encoding (-pass 1 generates only a stats file that will be used by the second pass). The -vpre option tells FFmpeg to use the preset file "fastfirstpass", found in the presets folder of the FFmpeg installation directory. The second line performs the second pass using a more accurate preset (-vpre hq) and adds the audio encoding.
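
With recent builds the same two-pass flow is usually written with the built-in -preset names instead of preset files; a sketch, assuming your build includes libfaac (on Linux write /dev/null instead of NUL):

ffmpeg -y -i input.mov -r 25 -b:v 1000k -c:v libx264 -preset slow -pass 1 -an -f rawvideo NUL
ffmpeg -y -i input.mov -r 25 -b:v 1000k -c:v libx264 -preset slow -pass 2 -c:a libfaac -ac 2 -ar 44100 -b:a 128k output.mp4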

FFmpeg uses a dedicated remapping of the most important libx264 parameters. x264 has a high number of parameters, and if you know what you are doing you can set each of them individually instead of using a predefined preset. This is an example of two-pass encoding without a preset:

ffmpeg -y -i input -r 25 -b 1000k -c:v libx264 -pass 1 -flags +loop -me_method dia -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 1 -me_range 16 -coder 1 -sc_threshold 40 -flags2 -bpyramid-wpred-mixed_refs-dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 0 -directpred 1 -partitions -parti8x8-parti4x4-partp8x8-partp4x4-partb8x8 -an output.mp4

ffmpeg -y -i input -r 25 -b 1000k -c:v libx264 -pass 2 -flags +loop -me_method umh -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 8 -me_range 16 -coder 1 -sc_threshold 40 -flags2 +bpyramid+wpred+mixed_refs+dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 1 -directpred 3 -partitions +parti8x8+parti4x4+partp8x8+partb8x8 -acodec libfaac -ac 2 -ar 44100 -ab 128k output.mp4

IN THE NEXT PART

In this second part of the series we have taken a look at the most important "non codec-related" features of FFmpeg.
The next part will be entirely dedicated to how to encode to H.264 using FFmpeg and libx264.


55 thoughts on "FFmpeg – the Swiss Army knife of Internet Streaming – part II"

  1. -croptop, -cropleft, -cropright and -cropbottom are all deprecated, since vhooks were removed from the FFmpeg base months ago. You must use the libavfilter notation instead if using a recent build.
    The notation can be seen here:
    http://ffmpeg.org/libavfilter.html#SEC17

    A neat trick in regard to "5. Extract an image from a video"
    is to move the -ss argument before the -i argument, which will speed up the seek a lot. It doesn't work for all formats (if it doesn't, all you get is a grey frame), but it does for most formats like H.264.

    Also, it's worth noting that you can grab images to BMP, which is useful if you need to do further processing of the grab outside FFmpeg, and also since FFmpeg doesn't produce the greatest JPG compression in comparison to e.g. Photoshop.
    All you do is change the extension of the output file from .jpg to .bmp.

    Regards,

    Frederik


  2. FFmpeg, for various reasons, is provided officially only as source files.

    not true
    I got ffmpeg.exe

  3. but even with that i can’t get the

    ffmpeg -r 15 -f vfwcap -s 320x240 -i 0 -r 15 -f mp4 webcam.mp4

    “time base must be specified” or “could not connect to the device”

    1. are you sure your video source is VfW compatible? VfW is an older capture standard and some cams are not compatible.

  4. Hi,

    I left a message a few days ago but it looks like WordPress didn't post it. I apologize if I am repeating myself 🙂

    I'm making a film from JPEG images. Because it's a movie made from images I can't use 25 fps, the film runs just too fast; instead I'm using 7 fps. The video has 179 images. At 25 fps the movie length is 00:07. The problem is that even though I'm using 7 fps and FFmpeg respects the frame rate, it keeps the same length. That means a lot of images are eliminated during the process, as the final output film length should be around 00:25.

    I’ve used the 2 pass code posted above (it works great in terms of quality and speed, thanks!)

    ffmpeg -y -i proves/img/img%d.jpg -r 7 -b 1000k -vcodec libx264 -pass 1 -flags +loop -me_method dia -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 1 -me_range 16 -coder 1 -sc_threshold 40 -flags2 -bpyramid-wpred-mixed_refs-dct8x8+fastpskip -keyint_min 7 -refs 3 -trellis 0 -directpred 1 -partitions -parti8x8-parti4x4-partp8x8-partp4x4-partb8x8 -an /proves/9.mp4

    ffmpeg -y -i proves/img/img%d.jpg -r 7 -b 1000k -vcodec libx264 -pass 2 -flags +loop -me_method umh -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 8 -me_range 16 -coder 1 -sc_threshold 40 -flags2 +bpyramid+wpred+mixed_refs+dct8x8+fastpskip -keyint_min 7 -refs 3 -trellis 1 -directpred 3 -partitions +parti8x8+parti4x4+partp8x8+partb8x8 -acodec libfaac -ac 2 -ar 44100 -ab 128k proves/9.mp4

    Is there any parameter I’m missing or I should change? I’m really not an expert, I’m starting to learn.

    Many thanks and thanks for your blog, it is very useful to me!

    1. try duplicating "-r 7" before "-i" and let me know;
      this should force FFmpeg to see the source as 7 fps and then
      encode it at 7 fps. Without this trick FFmpeg could see
      the source as 25 fps but encode it at 7 fps, dropping frames.

  5. Hi,
    We are new to this and were delighted to see your suggestion. We tried the two commands you mentioned. Pass 1 went through fine, but we are unable to encode audio in pass 2 and get the error below:
    Unknown encoder 'libfaac'

    Tried all the package installations suggested in the Ubuntu forums. Is there any library missing?
    We installed ffmpeg 0.8.1-4:0.8.1-0ubuntu1 on Ubuntu 12.04. Any help is highly appreciated.

  6. Hi,

    The cool thing about FFmpeg is that it changes constantly and the developers are not afraid to deprecate things. The bad thing is that tutorials and guides quickly become obsolete.
    The error you are seeing has to do with the library libfaac. Libfaac was pulled out of most builds in the last year or so because it's not LGPL – like the rest of FFmpeg – but only GPL:
    http://en.wikipedia.org/wiki/FAAC

    There's a new AAC encoder included with most builds, which is enabled by using -strict experimental, but it is exactly that – experimental.

    I have done the following for the last 4 years:
    I use the external executable Nero AAC for all AAC encoding:
    http://www.nero.com/eng/downloads-nerodigital-nero-aac-codec.php
    Nero AAC is freeware, not open source, but it sounds very, very good.

    I do the following:
    decode the input video file to wav (uncompressed), encode it to AAC, encode the video using FFmpeg, then mux video and audio together using MP4Box. I use Windows and the following works great for my use – which is encoding video for the web with playback through Flash or native (HTML5 video):

    Video encoding (size 768x432 pixels, 1 megabit; -tune film is the default, use -tune animation for non-film inputs):

    Pass 1:
    ffmpeg -i "input.mov" -vf scale=768:432 -pass 1 -vcodec libx264 -x264opts level=4.1 -preset slow -tune film -y -an -b 1000000 -f rawvideo NUL
    Pass 2:
    ffmpeg -i "input.mov" -vf scale=768:432 -pass 2 -vcodec libx264 -x264opts level=4.1 -preset slow -tune film -y -an -b 1000000 temp.m4v

    Audio encoding:
    ffmpeg -i "input.mov" -y -ac 2 -f wav temp.wav
    neroAacEnc -q 0.35 -if temp.wav -of temp.m4a

    Muxing together:
    mp4box -add temp.m4a#audio "out.mp4"
    mp4box -add temp.m4v#video "out.mp4"

    Regards,

    Frederik

  8. Hello Sonnati or Frederik, would you help me build a ffmpeg command line, please?

    I have two files: a 1080p .mp4 with video/audio, and a .mp3 with audio that I want to mix into the main audio stream of the .mp4. This is for YouTube. I want to stream-copy the video and not re-encode it. What command line do I need to use? I also want to increase the volume level of the audio in the .mp4 by 100% and the volume level in the .mp3 by 50%. The .mp3 is commentary for the video; my capture card does not mix it into the .mp4, which has only computer audio. Thank you a whole lot for your time!! This will save me many, many hours of needlessly re-encoding videos with Sony Vegas to mix the audio streams.

    PS: In addition, is it possible to add a video 'intro' sequence at the beginning without having to re-encode the main video? If so, how?

    1. Hey JJ,
      When mixing or changing the volume you have to re-encode the audio.
      For that, I would decode the two signals using ffmpeg:
      ffmpeg -i input.mp4 output.wav
      and use the command-line audio program called SoX:
      http://sox.sourceforge.net/

      Mixing two audio signals:
      sox -m music.mp3 voice.wav mixed.flac
      http://sox.sourceforge.net/sox.html

      For the gain of the two signals it gets a little trickier. Audio loudness is measured in decibels, not percentages. And if you boost the signal too much, it's going to clip.
      For that, I would use a DAW – my favorite is Reaper:
      http://www.reaper.fm/
      and actually remix the signal with limiters to prevent it from clipping, which causes a nasty digital distortion.

      If there's enough headroom, you can boost the signals before mixing using SoX's gain switch.

      If you use a good AAC encoder, the quality loss suffered from the transcoding is not a whole lot.

      Editing MP4 files is not optimal because it's such a complex format, but you might want to give the following a try:

      http://svnpenn.blogspot.dk/2011/10/ffmpeg-join-video-files.html

      Regards,

      Frederik

      BTW: I have a little blog of my own:
      http://transcoding.wordpress.com/

  9. Just a note that -loop_input has been deprecated. The man page now suggests “-loop 1”.
    all the best,
    Robert

  10. Hi sonnati/Fred,
    I developed a video chat application in Visual C# with JPEG encoding. Is there any way I can switch it to H.264, since I would really like to minimize the bandwidth usage? Currently I'm using DirectShow. Any guidance on how to go about it would be much appreciated!

  11. Hey Littledrops,

    I don't know. I've only used FFmpeg together with C# by calling the command from .NET.
    I think it's possible to compile FFmpeg (I guess libavcodec in particular) with Visual Studio with some tweaks. You can Google most of the answers for that.
    What remains is for you to find a way to stream between the two clients using a protocol. Maybe RTSP is the way to go? A quick googling led me to: http://www.live555.com/liveMedia/ but I have no experience with it….

    Fred

  12. I used this command to extract thumbnails from my video, which is in MP4 format, on my Android phone, but the process is awfully slow, even on a dual-core device clocked at 1.3 GHz.
    It takes 60-90 secs to extract images from a 15-sec movie clip; any pointers on how to improve the speed?

  13. Thanks for the prompt reply. Well, that works fast enough for a single image, but I want frames at different time intervals, say every one second; for that I use the -r parameter and set its value to 1. It gets me the frames, but the lower the value of -r, the slower the extraction: if I set -r to 25 it takes 3 secs for the operation but the frames are almost the same, while if I reduce -r to a value like 1, the extracted images are perfectly one second apart but unfortunately take longer to extract. I guess you understand my situation.

    I use this command now

    ffmpeg -ss 00:00:04 -i input.mp4 -r 1 -vframes 10 -f image2 -s 120x96 output-%3d.jpg

    I find that this is the best speed I can obtain, but still, is there any way to reduce the time for image extraction further?

  14. Hi Sonnati/Fred,

    I am trying to get a simple streaming session going to Wowza; however, I am unable to switch the H.264 profile.

    With this setting:

    ffmpeg -f dshow -i video="Logitech HD Pro Webcam C920" -c:v libx264 -s 320x180 -preset fast -tune zerolatency -y -an -f flv rtmp://wowza_server_ip/live/myStream

    I can get the stream created in Wowza, but the player won't show anything. This stream gets coded as:

    profile High 4:4:4 Predictive, level = 1.3, 4:4:4 8-bit

    Publishing a stream via a tool like Wirecast, I can see the stream. The difference being, Wirecast sends the stream as Main, level 3.1.

    When I try to spell out the profile via -profile:v main, I get an error:

    main profile does not support 4:4:4
    Error setting profile main.
    Possible profiles: ultrafast superfast veryfast faster fast medium

    What am I doing wrong?

    Appreciate the help.

    Thanks
    mobi

    1. Thanks a bunch. That did the trick. However not home yet …

      I have a real-time app in which I would like to display what's being recorded by the webcam as instantaneously as possible. On my local setup, with the options (-preset ultrafast -profile main -tune zerolatency) sending to a local UDP port and then playing it back from the same, I get the playback lagging by about 300-400 ms. However, routing it through Wowza, there is an easy 5-6 sec of lag.

      This definitely appears to be in the Wowza domain – probably too much stuff in the jitter buffer! Any recommendation on this aspect? I am considering replacing Wowza with Red5 as well down the road.

      Assuming the round-trip latency to the media server is in the ~100 ms range, and using my local processing numbers of ~300-400 ms, is it realistic to expect a sub-second lag time while publishing the stream to a media server and then playing it back via a Flash client?

      Once again thanks for the support.
      mobi

      1. Mobi, does your app need to display the feed from the camera on a local or a remote machine? If it's a local machine you might want to take a look at VideoLan DotNet as an alternative to FFmpeg. I've been using it fairly successfully in a C#/WPF project recently. It allows you to embed a VLC video player component in a WPF application. This component can then (through the use of command strings similar to FFmpeg's) duplicate an incoming stream to a variety of destinations: on-screen display in almost perfect real time, streaming to a server over a variety of protocols, transcoding and storing to disk, etc. See: http://vlcdotnet.codeplex.com/ .

  15. Fabio
    Great articles! However, many of the examples as listed won't work. You're using an en-dash instead of the hyphen-minus (–, not -), and the multiplication sign instead of a latin x for the size (×, not x).

  16. Hi Fred/Sonnati,
    I am trying to stream my webcam video to a Red5 server using FFmpeg (latest build from Zeranoe). I am able to stream successfully with the command below, without the H.264 codec (with a ~2 sec delay though):

    ffmpeg -f dshow -r 15 -i video="Logitech HD Pro Webcam C920":audio="Microphone (HD Pro Webcam C920)" -f flv rtmp://localhost:1935/live/livestream

    However, when I include the vcodec h264, it is not able to send the video data, but the audio still gets transferred! This is the command line I use in FFmpeg:

    ffmpeg -f dshow -r 15 -i video="Logitech HD Pro Webcam C920":audio="Microphone (HD Pro Webcam C920)" -pass 1 -codec:v libx264 -b:v 200k -bt 200k -maxrate 1000k -bufsize 100000k -threads 0 -f flv rtmp://localhost:1935/live/livestream

    Any idea what I am doing wrong or what is missing in my command line? Any input would be greatly appreciated! Thank you!

    1. Hi, I was browsing through the other comments earlier and managed to solve the problem by using the 4:2:0 pixel format. Still, the delay of ~2 sec is there. Is there any way I can make it minimal / almost real-time?

      1. Hi,

        I never tried using FFmpeg this way, but your -bufsize switch seems way too big. Try 2000k instead – that ought to do it.

        If the delay needs to be smaller than that, you can try a smaller bufsize, but after that you need to use a different x264 preset that doesn't compress as much – thus making it possible to send the stream with less delay – something that will include options like: "-preset ultrafast -tune zerolatency -x264opts "vbv-maxrate=2000:vbv-bufsize=200:slice-max-size=1500" -f h264"

        This will however make the stream much bigger in terms of data.

        Let us know if you manage to make it work…

        Frederik

  17. Hi Fred, I did try reducing the buffer size and it wouldn't help at all in minimizing the delay. I also tried the method you mentioned above ("-preset ultrafast -tune zerolatency -x264opts "vbv-maxrate=2000:vbv-bufsize=200:slice-max-size=1500" -f h264") but the video is not displayed on the Flash player on the Red5 server. Even when I use the yuv420 command, it does not show up.
    No idea why that is so. Here's the command line that I use now:
    ffmpeg -an -f dshow -r 15 -i video="Logitech HD Pro Webcam C920" -pix_fmt yuv420p -vcodec libx264 -preset veryfast -tune zerolatency -x264opts "vbv-maxrate=5000:vbv-bufsize=200:slice-max-size=1500" -f h264 rtmp://localhost:1935/live/livestream

  18. (replying to littledrops, quoted above)

    Can you try using the pix_fmt option on the input (i.e. before -i)?

  19. Hi mobi,
    Thank you so much for the input. I just tried that; it works now, and I think it was actually due to the h264 format I used at the end. After changing it to flv, the Flash player on the Red5 server can display the video properly. However, this command line still does not help the latency. I still have a delay of ~3 sec at the playback end even though I am using the local machine to view it.

    1. In that case, you need to look at buffering at the Red5 level and in the Flash Player itself. In a local setup the best I could get so far is 1.5 sec, but using WMS. I heard Red5 is supposed to be better.

      1. Thanks for your response, mobi. I will look further into the Red5 side then. Hopefully I can find something to reduce the latency. My ultimate aim is to use it for videoconferencing!

  20. Hi people, I'm from Argentina and I'm trying to stream an HD channel with multicast. I have a Windows Server 2008 machine with a PCI Osprey 700e HD capture card, connected by SDI to an A/V converter. I have previewed the A/V signal with VLC and it's working fine.
    So, I've tried to stream the signal with FFmpeg but there's an error that is keeping me stuck.
    Error: "Could not run filter"

    This is the command line I use:

    ffmpeg.exe -f dshow -s 1920x1080 -i video="Osprey-700 HD Video Device 1" audio="Osprey-700 HD Video Device 1" -codec:v mpeg4 -b 10000k -codec:a mp3 -ab 128k -f mpegts udp://239.192.41.85:1234

    I have to encode in MPEG-4 Part 2 only for now.

    I really appreciate any help.
    Best regards,
    Diego

  22. Hi, my name is Renato. I would like to know if it's possible to grab from a webcam on a Samsung Android phone, or from the webcam of a MacBook Pro with Mac OS X 10.8.4, to make a stream over UDP?

  23. Is it possible to stream a live event and let watchers choose between the video (with audio) and an audio-only version of the same broadcast, in order to save on expenses by streaming only audio to the people who don't need the video?
