FFmpeg – the swiss army knife of Internet Streaming – part IV


PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

Fourth Part

In this article I will focus on FFmpeg's support for RTMP, which makes it an excellent tool for enhancing the capabilities of the Adobe Flash Streaming Ecosystem.

FFmpeg introduced strong support for RTMP streaming in release 0.5 by including the librtmp (rtmpdump) core. An RTMP stream can be used both as an input and/or as an output in a command line.

The required syntax is:

rtmp_proto://server[:port][/application][/stream] options

where rtmp_proto can be "rtmp", "rtmpt", "rtmpe", "rtmpte", "rtmps" or "rtmpts", and options contains a list of space-separated options in the form key=val (more info here).

Using some of the parameters that we have seen in the first three parts of the series, it's possible to do a lot of things that the standard Flash Streaming Ecosystem cannot offer. There are occasional minor bugs, but generally speaking librtmp works well and helps FMS close the gap with some advanced features of Wowza Server (like repurposing of RTP/RTSP streams, TS streams and so on). FFmpeg works with FMS as well as Wowza Server and Red5, so in this article I will use FMS as a generic term for any RTMP server.


With the help of FFmpeg it is possible, for example, to stream a pre-encoded file to FMS as if it were a live source. This can be very useful for testing purposes but also for creating pseudo-live channels.

 ffmpeg -re -i localFile.mp4 -c copy -f flv rtmp://server/live/streamName 

The -re option tells FFmpeg to read the input file in real time and not in the standard as-fast-as-possible manner. With -c copy (an alias for -acodec copy -vcodec copy) I'm telling FFmpeg to copy the essences of the input file without transcoding, then package them in an FLV container (-f flv) and send the final bitstream to an RTMP destination (rtmp://server/live/streamName).

The input file must have audio and video codecs compatible with FMS, for example H.264 for video and AAC for audio, but any supported codec combination should work.
Obviously it would also be possible to encode the input video on the fly. In that case, remember that the CPU power required for live encoding can be high and may cause dropped frames or stuttering playback on the subscribers' side.
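To make the copy-versus-transcode decision concrete, here is a minimal sketch. The codec whitelists are my own assumption of what the FLV/RTMP path accepts, not an exhaustive list; in practice the codec names would come from parsing the output of ffmpeg -i.

```python
# Assumed whitelists of codec names the FLV container accepts; not exhaustive.
FLV_VIDEO = {"h264", "flv1", "vp6f"}
FLV_AUDIO = {"aac", "mp3", "speex", "nellymoser"}

def can_stream_copy(video_codec: str, audio_codec: str) -> bool:
    """True if both essences can be sent with -c copy, False if we must transcode."""
    return video_codec in FLV_VIDEO and audio_codec in FLV_AUDIO

print(can_stream_copy("h264", "aac"))   # typical MP4 file: copy is enough
print(can_stream_copy("mpeg4", "aac"))  # video track would need transcoding
```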

In which scenarios can a command like this be useful?

For example, suppose you have created a communication or conference tool in AIR. One of the participants in the conference could pick a local file and stream it to the conference FMS to show the same file, in real time, to the other participants. Leveraging the NativeProcess feature of AIR, it is simple to launch a command line like the one above and do the job. In this scenario you will probably have to transcode the input, or check the compatibility of the codecs by analyzing the input up front (remember the ffmpeg -i INPUT trick we spoke about in the second article).
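As a sketch of that scenario (server, application and stream names here are hypothetical), the pseudo-live push command can be assembled and launched without blocking, much like NativeProcess.start() does in AIR:

```python
import subprocess

def build_push_cmd(local_file, server, app, stream):
    # -re paces reading at the native frame rate; -c copy avoids transcoding.
    return ["ffmpeg", "-re", "-i", local_file,
            "-c", "copy", "-f", "flv",
            f"rtmp://{server}/{app}/{stream}"]

def push_pseudo_live(local_file, server, app, stream):
    # Non-blocking launch of the external ffmpeg process.
    return subprocess.Popen(build_push_cmd(local_file, server, app, stream))

print(" ".join(build_push_cmd("meeting.mp4", "conf.example.com", "live", "room1")))
```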


Using a command like this:

 ffmpeg -i rtmp://server/live/streamName -c copy dump.flv 

It's possible to dump the content of a remote RTMP stream to a local file. This can be useful for test/audit/validation purposes. It works for both live and on-demand content.


One of the more interesting scenarios is converting a stream from one format to another for compatibility's sake, or to change the characteristics of the original stream.

Suppose you have a Flash Player based app that does live broadcasting. You know that until FP11, Flash could only encode video with the old Sorenson Spark codec and audio with NellyMoser ASAO or Speex. You can use a live transcoding command to improve the video compression, transcoding from Sorenson to H.264:

 ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv rtmp://server/live/h264Stream 

This can be useful to reduce bandwidth usage, especially in live broadcasts where latency is not a problem.
The next release of FMS will also offer support for Apple HTTP Live Streaming (as Wowza already does), so it will be possible to use FMS to stream live to iOS devices. But FMS does not transcode the stream essences; it only performs a repackaging, or repurposing, of the original essences. FFmpeg, however, can help us convert the non-compliant Sorenson/Speex stream to an H.264/AAC stream in this way:

 ffmpeg -i rtmp://server/live/originalStream -c:a libfaac -ar 44100 -ab 48k -c:v libx264 -vpre slow -vpre baseline -f flv rtmp://server/live/h264Stream 

(UPDATE: libfaac is now an external library and you may have problems encoding in AAC – read part V of the series to learn more about this topic.)

See also points 4 and 5 to learn how to generate a multi-bitrate stream that complies with Apple's requirements for HLS. This approach will also be useful with FP11, which encodes in H.264 but generates only one stream.

Another common scenario is when you are using FMLE for a live broadcast. The standard Windows version of FMLE supports only MP3, not AAC, for audio encoding (a plug-in is required). This may be a problem when you also want to use your stream to reach iOS devices with FMS or Wowza (iOS requires AAC for HLS streams). Again, FFmpeg can help us:

 ffmpeg -i rtmp://server/live/originalStream -acodec libfaac -ar 44100 -ab 48k -vcodec copy -f flv rtmp://server/live/h264_AAC_Stream 

On the other hand, I recently had the opposite problem with an AIR 2.7+ app for iOS. AIR for iOS does not currently support H.264 or AAC streaming with the classic NetStream object, but I needed to subscribe to the AAC streams generated for desktop clients. FFmpeg helped me transcode the AAC streams to MP3 for the AIR on iOS app.

Again, you probably know that Apple HLS compliance requires video streaming apps to offer an audio-only AAC stream with a bitrate below 64 Kbit/s, but at the same time you probably want to offer a higher audio quality for your live stream (on desktop, for instance). Unfortunately FMLE encodes only the video track at multiple bitrates and uses a single audio preset for all bitrates. With FFmpeg it is possible to generate a dedicated audio-only AAC stream with a bitrate below 64 Kbit/s.
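As a sketch (the stream name, libfaac encoder and 44100 Hz sample rate are simply the values used elsewhere in this article), the output arguments for the compliant audio-only rendition could be generated like this, refusing any bitrate above Apple's 64 Kbit/s ceiling:

```python
APPLE_AUDIO_CEILING_K = 64  # Apple HLS: one audio-only stream at or below 64 Kbit/s

def audio_only_args(stream_url, bitrate_k=48):
    """Extra ffmpeg output args for a compliant audio-only AAC rendition."""
    if bitrate_k > APPLE_AUDIO_CEILING_K:
        raise ValueError(f"{bitrate_k}k exceeds the {APPLE_AUDIO_CEILING_K}k HLS ceiling")
    return ["-vn", "-acodec", "libfaac", "-ar", "44100",
            "-ab", f"{bitrate_k}k", "-f", "flv", stream_url]

print(audio_only_args("rtmp://server/live/audio_only_AAC_48k"))
```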


Very similarly, if you want to be compatible with older iOS versions or other mobile devices (older BlackBerry models, for instance) you need to encode in Baseline profile, but at the same time you may want to leverage High profile for desktop HDS. So you could use FMLE to generate High-profile streams with high-quality AAC, and then generate server-side a Baseline set of multi-bitrate streams for HLS and/or low-end device compatibility.

This command reads from FMS the highest quality of a multi-bitrate set generated by FMLE and, starting from that, generates three scaled-down versions in Baseline profile for HLS or mobile. The last output is an audio-only AAC bitstream at 48 Kbit/s.

 ffmpeg -re -i rtmp://server/live/high_FMLE_stream -acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline -f flv rtmp://server/live/baseline_500k -acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline -f flv rtmp://server/live/baseline_300k -acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline -f flv rtmp://server/live/baseline_150k -acodec libfaac -vn -ab 48k -f flv rtmp://server/live/audio_only_AAC_48k 

UPDATE: using the -x264opts parameter you may rewrite the command like this:

 ffmpeg -re -i rtmp://server/live/high_FMLE_stream -c:a copy -c:v libx264 -s 640x360 -x264opts bitrate=500:profile=baseline:preset=slow -f flv rtmp://server/live/baseline_500k -c:a copy -c:v libx264 -s 480x272 -x264opts bitrate=300:profile=baseline:preset=slow -f flv rtmp://server/live/baseline_300k -c:a copy -c:v libx264 -s 320x200 -x264opts bitrate=150:profile=baseline:preset=slow -f flv rtmp://server/live/baseline_150k -c:a libfaac -vn -b:a 48k -f flv rtmp://server/live/audio_only_AAC_48k 
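The fan-out above lends itself to being generated from a rendition table. Here is a minimal sketch (sizes and bitrates are copied from the command; the rest is my own arrangement) that builds the full argv, so adding or removing a rendition becomes a one-line change:

```python
# (width x height, video Kbit/s) pairs taken from the command above.
RENDITIONS = [("640x360", 500), ("480x272", 300), ("320x200", 150)]

def fanout_cmd(source, server):
    """Build the multi-output ffmpeg argv for the Baseline fan-out."""
    cmd = ["ffmpeg", "-re", "-i", source]
    for size, kbps in RENDITIONS:
        cmd += ["-c:a", "copy", "-c:v", "libx264", "-s", size,
                "-x264opts", f"bitrate={kbps}:profile=baseline:preset=slow",
                "-f", "flv", f"rtmp://{server}/live/baseline_{kbps}k"]
    # Trailing audio-only AAC rendition for HLS compliance.
    cmd += ["-c:a", "libfaac", "-vn", "-b:a", "48k",
            "-f", "flv", f"rtmp://{server}/live/audio_only_AAC_48k"]
    return cmd

print(" ".join(fanout_cmd("rtmp://server/live/high_FMLE_stream", "server")))
```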

(UPDATE: libfaac is now an external library and you may have problems encoding in AAC – read part V of the series to learn more about this topic.)


FFmpeg can also use a local A/V source, so it's possible to encode live directly with FFmpeg and bypass FMLE completely. I suggest doing that only in very controlled scenarios, because FMLE offers valuable additional functions, like automatic encoding adjustment, to keep latency as low as possible when the bandwidth between the acquisition point and the server is not perfect.

This is a single-bitrate example:

 ffmpeg -r 25 -f dshow -s 640x480 -i video="video source name":audio="audio source name" -vcodec libx264 -b 600k -vpre slow -acodec libfaac -ab 128k -f flv rtmp://server/application/stream_name 

Combine this command line with the previous one and you have a multi-bitrate live encoding configuration for desktop and mobile.


H.264 has a very efficient intra-frame compression mode, so it is possible to leverage it for still-picture compression. I have estimated an improvement of around 50% in compression compared to JPEG. Last year I discussed extensively the possibility of using this kind of image compression to protect professional footage with FMS and RTMPE. You can find the article here, and this is the command line:

 ffmpeg.exe -i INPUT.jpg -an -vcodec libx264 -coder 1 -flags +loop -cmp +chroma -subq 10 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -flags2 +dct8x8 -trellis 2 -partitions +parti8x8+parti4x4 -crf 24 -threads 0 -r 25 -g 25 -y OUTPUT.mp4 

Change the -crf value to modulate encoding quality (and compression ratio).
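To pick the right CRF for a given source, one approach is to batch-encode the same input at several CRF values and compare the resulting file sizes. A sketch (output file names are hypothetical, and the command is a trimmed-down variant of the one above):

```python
def crf_sweep_cmds(input_jpg, crf_values=(18, 24, 30)):
    """One intra-only encode command per CRF value, for size/quality comparison."""
    return [["ffmpeg", "-i", input_jpg, "-an", "-vcodec", "libx264",
             "-crf", str(crf), "-r", "25", "-g", "25", "-y",
             f"out_crf{crf}.mp4"]
            for crf in crf_values]

for cmd in crf_sweep_cmds("INPUT.jpg"):
    print(" ".join(cmd))
```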


Sometimes when connecting to FMS you may receive a cryptic error. It may help to enclose the destination RTMP address in double quotes and add the option live=1. E.g.:

 ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv "rtmp://server/live/h264Stream live=1" 

Other info on the RTMP (librtmp) library: http://ffmpeg.org/ffmpeg.html#toc-rtmp


There are many other scenarios where using FFmpeg with FMS (or Wowza) can help you create exciting new services for your projects and overcome the limitations of the current Flash Video Ecosystem, so now it's up to you. Try mixing my examples and post comments about new ways you have found to customize your RTMP delivery system.
Remember also to follow the discussion on my Twitter account (@sonnati).




66 thoughts on “FFmpeg – the swiss army knife of Internet Streaming – part IV”

    1. pyke3698,

      This is awesome!!! I was able to download and build ffmpeg within an hour on my CentOS 5.6 server.

      My first attempt failed due to missing “librtmp” so I removed “–enable-librtmp” from ffmpeg.

      Also, I was able to update to libvpx to 0.9.7.

      Two questions if you feel like answering…

      1) How can I correct the librtmp issue?
      2) How can I use “GIT” to get latest libraries?

      Thanks so much to you and Sonnati with FFMPEG.

  1. If I understand correctly, I can use ffmpeg to read data from a webcam and stream it? I’m hoping to port this into .NET Gadgeteer. Thanks!

  2. Hi,

    I’m testing FFMPEG to grab single (or a series of) still frames from an rtmp source. It works, but it can take upwards of 2 minutes before ffmpeg starts saving the stills. I’m at a loss as to why this is, as the streams come straight down in any regular player.

    ffmpeg -i “rtmp:// playpath=point_of_rocks.stream” image%d.jpg

    Any ideas of options to help speed this up?


    1. Same issue here, no luck so far,using RED5. However, their test app oflaDemo works great ie I can stream from a camera and send to other clients. Just can’t get ffmpeg to read in the rtmp, ie using rtmp://localhost/oflaDemo/mytestname Running Ubuntu 11.04, with red5 v1.0 and ffmpeg installed via default methods (ie apt-get install). I also have compiled my own bleeding edge ffmpeg but no differences Okay here’s a log (hope you all don’t get angry… its not too long

      ../ffmpeg/ffmpeg -loglevel verbose -re -i “rtmp://localhost/oflaDemo/quandt” -acodec copy -vcodec copy -y a.flv
      Parsed protocol: 0
      Parsed host : localhost
      Parsed app : oflaDemo
      RTMP_Connect1, … connected, handshaking
      HandShake: Type Answer : 03
      HandShake: Server Uptime : 16205
      HandShake: FMS Version :
      HandShake: Handshaking finished….
      RTMP_Connect1, handshaked
      Invoking connect
      HandleClientBW: client BW = 64000 0
      HandleCtrl, received ctrl. type: 0, len: 6
      HandleCtrl, Stream Begin 0
      RTMP_ClientPacket, received: invoke 161 bytes
      (object begin)
      Property: NULL
      (object begin)
      (object end)
      (object end)
      HandleInvoke, server invoking
      HandleInvoke, received result for method call
      sending ctrl. type: 0x0003
      Invoking createStream
      RTMP_ClientPacket, received: bytes read report
      RTMP_ClientPacket, received: invoke 29 bytes
      (object begin)
      Property: NULL
      (object end)
      HandleInvoke, server invoking
      HandleInvoke, received result for method call
      SendPlay, seekTime=0, stopTime=0, sending play: quandt
      Invoking play
      sending ctrl. type: 0x0003
      RTMP_ClientPacket, received: invoke 131 bytes
      (object begin)
      Property: NULL
      (object begin)
      (object end)
      (object end)
      HandleInvoke, server invoking
      HandleInvoke, onStatus: NetStream.Play.StreamNotFound
      Closing connection: NetStream.Play.StreamNotFound
      rtmp://localhost/oflaDemo/quandt: Operation not permitted

  3. Hi,
    Great post.

    I’m trying to follow the examples and to transmit local flv file to FMS and then to watch with ffplayer.

    ffmpeg -loglevel verbose -re -i host.flv -acodec copy -vcodec copy -f flv rtmp://server_address/live/test

    ffplay -loglevel verbose rtmp://server_address/live/test

    on the ffplay side I’m getting: Closing connection: NetStream.Play.StreamNotFound.

    Will appreciate advice.


  4. Hi,

    Does the step 4, “4. GENERATE BASELINE FOR LOW-END DEVICES” actually allow you to stream to an iPad for example?

    I’m trying to find out how to use ffmpeg to do live streaming out to mobile devices

    1. You need to encode in multibitrate (HLS) to be compliant with Apple’s AppStore requirements.
      I’ll update soon the FFmpeg codebase with multibitrate support and more.

      1. That’s great, I really appreciate the help. Is there a need to utilize another program to do any segmenting or can it be done straight from ffmpeg to the mobile devices?

  5. I’m hoping to use ffmpeg rtmp to take an already encoded stream and push it to one of the “free” (ad-based) video systems. I have been working unsuccessfully on ustream (FMS) and find others on the net who agree. It seems that the results on justin.tv and livestream are better with ffmpeg rtmp. Any ideas out there? Is there some portion of rtmp ‘keepalive’ that ffmpeg is missing?

    1. More poking around the web indicates this is due to SWF Verification on the RTMP connection. It looks like other open source projects have implemented support for this with the ability to specify a token/hash. Most folks are just looking to download/play content from a RTMP server, whereas I need to push RTMP to ustream. The main reason I’m inclined to use ffmpeg is that my source is already encoded, so I’m just using “-vcodec copy”. I don’t know if any other piece of software can push RTMP without touching the encoding. Any help out there? Is there a better forum for this discussion?

  6. I think this could help many of you:

    if the input url is of a live rtmp – it needs to be enclosed in quotes and be followed by a live=1, for example:

    ffmpeg -i “rtmp://server-ip/appname/streamname live=1”

    Otherwise it gives you the “StreamNotFound” error.
    I found this out when I saw somebody use it like that as a parameter for ffplay, so I tried it for ffmpeg and it worked!

    1. I’ve seen this as well, but have not been able to make it work against RED5, was your testing against ustream or other site? Or a red5 site?

  7. hi i have flv file generated using wowza, it is videoless audio file

    how can i convert it into mp3 using ffmpeg? appreciate if you can give me the command…

  8. from:

    ffmpeg -re -i /home/alacret/Escritorio/test3.ogv -acodec copy -vcodec copy -f flv rtmp://localhost:9001/live/streamName
    rtmp://localhost:9001/live/streamName: Operation not permitted

    I got the message
    RTMP_Connect0, failed to connect socket. 111 (Connection refused)

your flash media server is running on port 9001 ? (default is 1935)
      in general this seems to be a wrong port error

      ps. sometimes you should write output in this way

      -f flv “rtmp://localhost:9001/live/streamName live=1”

    2. it seems to be – wrong port error, maybe 1935 instead of 9001

      and sometimes you should write output from ffmpeg in this way:

      -f flv “rtmp://localhost:1935/live/streamName live=1”


      -f flv “rtmp://localhost:9001/live/streamName live=1”
      if you use custom port, which is 9001 in this example

  9. hi friends
    i have a doubt in ffmpeg with rtmp
    i want to edit the video which is getting from rtmp server and save to same rtmp server
    i try the bellow but not working
    ffmpeg -i rtmp://server1/vod/sample.flv -flags gray rtmp://server1/vod/fmsgrayout.flv

you are using an insufficient number of parameters. Try to encode at least the video track in h264 or add “-acodec copy -f flv” before the output

  10. Hi,

    First of all great tutorial! it was really useful to me.

    I have problem with ffmpeg that I can’t fix. I’m making a video from jpeg images. I need to use a frame rate between 7-9. The problem is that doesn’t matter how much I change the frame rate (25fps or 7fps) the movie lenght remains the same. That means that ffmpeg skips some input images in order to keep that length. Therefore, I need ffmpeg to include the entire image sequence and move the final output video length.

    I’m using ffmpeg 0.7.11 and the following command line:

    ffmpeg -i proves/img/img%d.jpg -r 7 -b 1200k -vcodec libx264 -flags +loop -me_method hex -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 8 -me_range 16 -coder 1 -sc_threshold 40 -flags2 +bpyramid+wpred+mixed_refs+dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 1 -directpred 1 -partitions -parti8x8-parti4x4-partp8x8-partp4x4-partb8x8 -threads 0 -acodec libfaac -ar 44100 -ab 96k -y proves/8.mp4

    Is there something I’m missing or I should change in order to let ffmpeg keep all the input frames and make the movie longer (and more heavy)?

    Many thanks!

  11. About point 3 “TRANSCODE LIVE RTMP TO LIVE RTMP”, how do you integrate this command
    “ffmpeg -i rtmp://server/live/originalStream -acodec libfaac -ar 44100 -ab 48k -vcodec libx264 -vpre slow -vpre baseline -f flv rtmp://server/live/h264Stream”
    with FMS?

  12. Hello again, I was trying to use ffmpeg to transcode a rtmp stream from speex to aac using:

    ffmpeg -re -i “rtmp://localhost/live/_definst_/livestream swfVfy=0 live=1” -acodec libfaac -ar 44100 -ab 48k -vcodec copy -f flv ‘rtmp://localhost/livepkgr/livestream?adbe-live-event=liveevent&adbe-record-mode=”record”‘

    but ffmpeg wont start streaming until the input stream is over. Is there a way of making it start streaming right away?

      1. So it’s supposed to start immediately from what I understand. Well, I tried the most recent version from git.
        I’ll try some other version. Can you post the version you used when you wrote this post? Or at least one you know works? =)
        Thank you.

    1. Ok, let me answer my own question. I just found out the problem is not related to my ffmpeg binary. It’s just that I got this error from rtmpdump “Missing Speex header, assuming defaults.”

      I needed a file with audio in speex format in order to test this functionality, so I created one using
      ffmpeg -i some_file.mp4 -acodec libspeex -ar 16000 -vcodec copy -f flv sample.flv

      The problem is that ffmpeg creates a file without a “Speex header”, and I could not (yet) find how to add that header to the conversion command.

      What I’ll be assuming for now, is that audio encoded in speex coming from a flash app does have this header.

      Indeed, if I try with a file encoded in any other format, the conversion goes normally.
      I wish they would just add transcoding to fms.

  13. I have a almost similar issue as Paulo, but but slightly different.

    First I have a flex application to publish a live thru a Wowza server.
    Then I use the command line to encode only the sound, the video is already in H264.
    which is :
    “ffmpeg -i rtmp://server/live/originalStream -acodec libfaac -ar 44100 -ab 48k -vcodec copy -f flv rtmp://server/live/h264_AAC_Stream”

    ffmpeg starts (with some log info) then crashes with these errors :
    [flv @ 0x2050e60]st:0 error, non monotone timestamps 1607 >= 1607
    av_interleaved_write_frame(): Operation not permitted
    The timestamps number change each tries.

    Do you know why ?

    Thanks in advance.


    1. Hi Adrien,

      try to patch ffmpeg. I resolved the same problem with:

libavformat/utils.c (line 2918), change:

if(st->cur_dts && st->cur_dts != AV_NOPTS_VALUE && st->cur_dts >= pkt->dts){

to:

if(st->cur_dts && st->cur_dts != AV_NOPTS_VALUE && st->cur_dts > pkt->dts){


  14. Hi, i want to stream VOD, i tried wowza media server but i didn’t find any solution to stream VOD on RTMP. can you give me the process how to set-up RTMP streaming. Please fetch in step-by-step. i’m using two laptops for this streaming.

  15. sree7k7 :
    Hi, i want to stream VOD, i tried wowza media server but i didn’t find any solution to stream VOD on RTMP. can you give me the process how to set-up RTMP or HTTP streaming. Please fetch in step-by-step. i’m using two laptops for this streaming.

  16. Hi,
    I want to stream my webcam (or any ip cam) through ffmpeg on a rmtp server.
    I try this command but it does not stream :
    ffmpeg -f video4linux2 -s qvga -i /dev/video0 -re -vcodec copy -acodec copy -f flv -y rtmp://live-ams.dacast.com/xxxxxxxxx
    What am I doing wrong?
    It is the save on another platform (infomaniak).

  17. Hi,
    i want to know if this(ffmpeg encoding to H.264) can be used if i want to use TCP protocol.
    Thanks for the reply!

  18. even with -preset veryfast and -tune zerolatency I’m noticing about 6 seconds latency to my Wowza instance. Any suggestions to get the latency lower?

      1. Thank you Fabio,

        I tried all the modifications I have read about both for wowza and the flash player (setting buffer to 0) but I still see about 5 seconds latency. I see this latency regardless of what settings I use in ffmpeg and x264. Is there another step I am missing? Does ffmpeg introduce latency when muxing to flv for the rtmp stream? I noticed fps went faster when muxing to different containers. What latency do you generally get when streaming from a webcam? I’m stuck with this one so any advise would be greatly appreciated!

  19. Hi All,

    Thanx for such a nice tutorial. I jsut want to merge two RTMP stream into a single RTMP stream. i.e. One RTMP stream is of Audio file and other is of Video file (without Audio). I simply want to merge these streams to produce a real-time stream which contains both audio and video. Can anyone tell me what will be the command for this. Thanx in advance.

  20. Awesome tutorial and very insightful..I am looking to use ffmpeg to take in an existing rtmp stream, then output as a source for FMLE to utilize the Muticast cabilities in FMS 4.5-5.0. Any insight and and assistance? My limited understanding is that I may need to pass it through a directshow filter in ffmpeg for FMLE to recognize the ffmpeg output as a valid video source. I assume I need to do the something simlar for the audio as well. Appreciate any insight.

      1. Ah. Shame. I’ve got BubbleUPnP server bridging UPnP/DLNA servers from my home server. Ideally I’d like for it to be able to dynamically change the quality so it matches the bandwidth available. Even something as simple as when writing keep track of the data rate and adjust should there be delays in writing. Using strace it looks like you could gain enough info from the blocked writes to adjust the quality.

    1. ffsplit does this today by dropping incoming frames from dshow, but ffmpeg isn’t adjusting quality per se. libx264 provides a way to do this but it’s not plugged into ffmpeg.

  21. Hi,
    Please help, I cannot make it working, what I do wrong:
    ffmpeg -re -i “” -c:v copy -c:a copy -f flv rtmp:// or
    ffmpeg -re -i “” -c:v libx264 -c:a libfaac -f flv rtmp://
    I got the message :
    RTMP_Connect0, failed to connect socket. 111 (Connection refused)
    rtmp:// Unknown error occurred

    (it is with –enable-librtmp)
    ffmpeg version git-2013-02-10-3acaea2
    built on Feb 10 2013

    Thanx in advance.

  22. please someone help me with this……. I am trying to do a live stream with multiple camera angles so Im doing as such: webcamstudio> ffmpeg > rtmp://localhost so I was trying to use this command: ffmpeg -f video4linux2 -i /dev/video2 -f alsa -i hw:0 -acodec libmp3lame -b 64k -ar 44100 -ac 2 -vcodec libx264 -r 30 -b 1500k -s 720×480 -f flv rtmp://localhost:1935/oflaDemo/stream and was getting horrible audio and alsa xrun buffer errors every frame. so I tried: avconv -f video4linux2 -i /dev/video2 -f alsa -i hw:0 -acodec libmp3lame -b 64k -ar 44100 -ac 2 -vcodec libx264 -r 30 -b 1500k -s 720×480 -f flv rtmp://localhost:1935/oflaDemo/stream and it works, but audio is out of sync (i hear the sound before my lips move) by 4 seconds. I tried -itsoffset 4.267 but Im just scratching my head…. please someone help…

  23. hi, can anyone show me how to send a video clip from a home computer1 and receive/play from another home computer2 please

  24. Im using ffmpeg to broadcast a playlist to justin.tv. I am using kmplayer skin for mplayer to play it and ffmpeg to send it using this script. I am getting really bad sync issues though. Any ideas?

    INRES=”640×360″ # input resolution
    OUTRES=”640×360″ # Output resolution
    FPS=”25″ # target FPS
    QUAL=”fast” # one of the many FFMPEG presets in /usr/share/ffmpeg
    SAMPLERATE=”44100″ # AAC: any, MP3: max 44.1 KHz
    BITRATE=”96k” # AAC: 160 kb/s, MP3: 128 kb/s
    BUFFER=”500k” # adjust if you have issues

    ffmpeg \
    -f x11grab -s “$INRES” -r “$FPS” -i :0.0+4,180 \
    -f alsa -ac 2 -i pulse \
    -vcodec libx264 -crf 18 -s “$OUTRES” -pix_fmt yuv420p -g 2 -minrate 1000k -maxrate 1000k -bufsize $BUFFER -b:v 1000k \
    -acodec libfdk_aac -ar $SAMPLERATE -b:a $BITRATE -threads 0 -q:a 3 \
    -f flv “rtmp://live.justin.tv/app/$STREAM_KEY”

  25. Hello,
    I’ve been trying to loop an output stream with the command:
    ffmpeg -re -i -loop 1 -i outout220.ts -vcodec copy -acodec copy -f mpegts “udp://” but i get the error “option loop not found”
    Is there a way for an output stream to be looped?

  26. Hey there – really appreciate all the info you’ve got here. I wonder if you could recommend a best-practice for the following project:

    We are attempting to dump a live video feed (such as from SDI or HDMI) into a UPNP destination so that it can be grabbed and viewed using UPNP players.

    Have investigated many options – Wirecast, Serviio, Flash Media Encoder, etc. – and not sure what the best approach would be. We are using a Blackmagic Ultrastudio video capture device to bring in the live video/audio feed.

    Thanks so much in advance for any pointers!

    1. Hi, FFmpeg can acquire from some Blackmagic cards (SDI) and encode on the fly. Search for input device into the documentation of FFmpeg

  27. Hi there, this is a great post and really informative but was wondering if you could put me in the right direction on a particular piece of work. I need to sync multiple live radio streams with a pre-recorded, looped video whilst also varying the volume at certain frames.

    Would this, or at least some of it, be possible with ffmpeg?

    Any information greatly appreciated.

  28. Hello Fabio,

    First, I really want to thank you for taking time to create the excellent tutorial on FFMPEG. Thank you very much!!! It really helped a beginner like me to gain confidence.

    I am using a FFMPEG application to stream video to my hard drive folder. I am now trying to write the proper code to stream multi-bitrate. This is what I have now and it works very well for 1 fixed bitrate. It exports HLS files direct to my hard drive:

    ADD 1 STREAM Z:/video/stream.m3u8 -vcodec libx264 -tune zerolatency -preset ultrafast -crf 25 -maxrate 300k -bufsize 600k -vf “scale=426:-1,format=yuv420p” -g 60 -c:a libvo_aacenc -b:a 32k -hls_time 10 -hls_wrap 10

    I want to have multiple-bitrate like you explained in your tutorial but I am very very confused about the proper way to write the code. What else do I need to add in the code? Where do I add it? Please kindly help me.

    Thanks a lot!

    – Mark
