Archive

Author Archive

Be quick! 5 promo codes to save $200 on a MAX 2011 full conference pass

22 September 2011 1 comment

I’m happy to offer 5 of my readers a $200 discount on a full conference pass for Adobe MAX 2011. The first 5 of you who use the promo code ESSONNATI while registering for the conference (max.adobe.com) will get the discounted rate of $1,095 instead of $1,295. Be quick! MAX is approaching fast!

If you get in, remember to attend my presentation: “Encoding for Performance on Multiple Devices”
http://bit.ly/qvKjP0

Categories: Uncategorized

Bandwidth is running out. Let’s save bandwidth

15 September 2011 19 comments

Global bandwidth consumption is growing every day, and one of the main causes is the explosion of bandwidth-hungry Internet services based on Video On Demand (VOD) or live streaming. YouTube accounts for a considerable portion of total Internet bandwidth usage, but Hulu and Netflix are also first-class consumers.

One of the causes of this abnormal consumption (apart from high popularity) is the low level of optimization used in video encoding: for example, YouTube encodes 480p video @1Mbit/s, 720p @2.1Mbit/s and 1080p @3.5Mbit/s, which are rather high values. Netflix, BBC and Hulu also use conservative settings. You may observe that Netflix and Hulu use adaptive streaming to offer different quality levels depending on network conditions, but such techniques are aimed at improving QoS, not at reducing bandwidth consumption. So it’s very important to offer a quality/bitrate ratio as high as possible and not to underestimate the consequences of unoptimized encoding.

The main consequence of an unoptimized video is higher overall bandwidth consumption and therefore a higher CDN bill. For those giants this is not always a problem because, thanks to very high volumes, they can negotiate a very low cost per GByte.

However, it is not only a matter of pure bandwidth cost; there are many other hidden “costs”. For example, at peak hours it may be difficult to stream HD video from YouTube without frequent, annoying rebuffering. Furthermore, many users nowadays use mobile connections for their laptop or tablet, and such connections rarely offer more than 1-2 Mbit/s of real average bandwidth. If the video streaming service, unlike YouTube, uses dynamic streaming (like Hulu, Netflix, EpicHD, etc.), the user is still able to watch the video without rebuffering, but in these bandwidth-constrained scenarios it is very likely that he will get one of the lower-quality versions of the stream, not the high-quality one.

In fact, dynamic streaming is today very often used as an alibi for poorly optimized encoding workflows…

This state of insufficient bandwidth is more frequent in less developed countries. But even highly developed countries can have problems if we consider the recent data transfer caps introduced in Canada or in the USA by some network providers (AT&T’s 150 GB/month, for example).

These limits are established especially because, at peak hours, heavy video streaming consumption can saturate the infrastructure, even of an entire nation, as happened in 2008-2009 in the UK after the launch and consequent extraordinary success of the BBC’s iPlayer.

So dynamic streaming can help, but it must not be used as an excuse for poorly optimized encodings, and it is absurd to advertise a streaming service as HD when it requires 3-4+ Mbit/s of average bandwidth to stream the highest-quality bitrate while the US average is around 2.9 Mbit/s (meaning that more than 50% of users will receive a lower-quality stream and not the HD one).

How many customers can really watch an HD stream from start to end in a real-world scenario at these bitrates?

The solution: invest in video optimization

Fortunately, today every first-class video provider uses H.264, and H.264 still offers much room for improvement.
In the past I have shown several examples of optimized encodings. They were often experiments to explore the limits of H.264, or the possibilities of further quality improvement that the Flash Player can provide to a video streaming service (take a look at my “best articles” area).

In those experiments I usually tried to encode 720p video at a very low bitrate such as 500 Kbit/s. 500 Kbit/s is more than a psychological threshold: at this bitrate it is really complex to achieve a satisfactory level of quality in 720p. Therefore my first experiments were performed on not overly complex content.

But in the last 3 years I have considerably improved my skills and my knowledge of the inner principles of H.264. I have worked for first-class media companies and contributed to the creation of advanced video platforms capable of offering excellent video quality on desktop (Flash, Silverlight, Widevine), mobile (Flash, HLS, native) and STBs (VBR or CBR .ts).

So now I’m able to show you some examples of complex content encoded with very good quality/bitrate ratios in a real-world scenario.

I’m not afraid

To show this new level of H.264 optimization I have chosen one of the most watched videos on YouTube: “Not Afraid” by Eminem.
It is a complex clip with a lot of movement, dark scenes, some transparencies, lens flares and a lot of fine detail on the artist’s face.

YouTube offers the video in these 4 versions (plus a 240p one):

1080p @ 3.5Mbit/s
720p @ 2.1Mbit/s
480p @ 1Mbit/s
360p @ 0.5Mbit/s

Starting from this “state of the art”, I have tried to show what can be obtained with a little optimization.
Why not offer the quality of the first three stream options at half the bitrate? Let’s say:

1080p @ 1.7Mbit/s
720p @ 1Mbit/s
576p @ 0.5Mbit/s

Such a replacement would have two consequences:

A. Total bandwidth consumption reduced approximately by a factor of 2.
B. Many more users would be able to watch high-quality video, even in low-speed scenarios (mobile, capped connections, peak hours and developing countries).

But first of all, let’s take a look at the final result. Here is a comparison page: on the left you have the YouTube video, on the right the optimized set of encodings. It is not simple to compare two 1080p or 720p videos (follow the instructions on the comparison page), so I have extracted some screenshots to compare the original YouTube version with the optimized encoding.

1. YouTube 1080p @ 3.5Mbit/s vs optimized 1080p @ 1.7Mbit/s

Notice the skin details and imperfections. The optimized encoding offers virtually the same quality at half the bitrate. Consequently, you get 1080p quality at 15% less bitrate than YouTube’s 720p version.

2. YouTube 720p @ 2.1Mbit/s vs optimized 720p @ 1Mbit/s

Again, virtually the same quality at half the bitrate. Consequently, 720p video can be offered in place of 480p at the same bitrate:

3. YouTube 480p @ 1Mbit/s vs optimized 720p @ 1Mbit/s

Optimized 720p offers higher quality (details, grain, spatial resolution) at the same bitrate.

4. YouTube 480p @ 1Mbit/s vs optimized 576p @ 500Kbit/s

Instead of 854x480 @ 500 Kbit/s I preferred 1024x576 (576p). I also tried encoding 720p @ 600-700 Kbit/s with very good results, but I liked the factor-2 reduction in bitrate, so in the end I opted for 576p, which offered more stable results across the whole video. In this case the quality, level of detail and spatial resolution are higher than the original, at half the bitrate.

5. YouTube 360p @ 500Kbit/s vs optimized 576p @ 500Kbit/s

Again, much higher spatial resolution, level of detail and overall quality at the same bitrate.

For the sake of optimization

How have I obtained a quality/bitrate ratio like this? Well, it is not simple, but I will try to explain the basic principle.

Modern encoders do a lot of work to optimize the encoding from a mathematical/machine point of view. For example, a metric such as PSNR or SSIM is used for Rate-Distortion Optimization. But this kind of approach is not always useful at low bitrates, or when a high quality/bitrate ratio is required. In these scenarios the standard approach may not lead to the best encoding, because it cannot predict which pictures matter most for the quality perceived by the average user. Not all keyframes or portions of video are equally important.

These optimized encodings were obtained with a mix of automated video analysis tools (for dynamic filtering, for instance) and a human-guided fitting approach (for keyframe placement and quality bursts). I’m currently developing a fully automated pipeline, but for now the process produces better results when guided by an expert eye.

Unfortunately there is a downside to ultra-optimized encoding: the encoding time rises considerably, so it is not realistic to think that YouTube could re-encode every single video with new optimized profiles.

But, you know, when we talk about big numbers, there’s an empirical law which may help us in a real-world scenario: the Pareto principle. Let’s apply it to YouTube…

The Pareto principle

The Pareto principle (aka the 80-20 law) states that, for many events, roughly 80% of the effects come from 20% of the causes. Applying this rule to YouTube, it’s very likely that 80% of traffic comes from 20% of videos. A derivation of the Pareto law known as the 64-4 rule states that 64% of effects come from 4% of causes (and so on). So optimizing a reduced set of the most popular videos would lead to huge savings and an optimal user experience with only a limited amount of extra effort (the 4%).

But “Not Afraid” is among the top 10 most popular videos on YouTube, so it’s a perfect candidate for an extreme application of the Pareto law.

Let’s do some calculations. My samples reduce the bandwidth by a factor of 2 for every version. So if we suppose that the most requested version of the video is 720p, and consider that the video has been watched more than 250 million times in the last 12 months, YouTube has consumed 64 MB x 250 M views = 16 PBytes just to stream “Not Afraid” for one year.

Assuming an “equivalent” cost of $0.02/GByte*, this means $320,000 (* the lowest cost in the CDN industry for huge volumes; YouTube probably uses different billing models, so consider this a rough estimate).
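
As a rough back-of-the-envelope check (assuming a clip length of roughly four minutes at the 720p bitrate of 2.1 Mbit/s):

2.1 Mbit/s x 240 s ≈ 504 Mbit ≈ 63 MB per view (in line with the 64 MB used above)
64 MB x 250,000,000 views = 16,000,000 GBytes = 16 PBytes per year
16,000,000 GBytes x $0.02/GByte = $320,000 per year

Halving the bitrate halves each of these figures.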

So a hand-tuned encoding of just 1 video could generate a saving of $160,000. Wow. Encoding even only the top 10 YouTube videos probably means at least $1M of savings; multiply this by the top 1000 videos and we are probably talking about tens of millions per year. What can I say: YouTube, you know where to find me ;-)

Moral of the story

The proposed application of the Pareto rule is an example of an adaptive strategy. Instead of encoding all videos with a complex process that may not be affordable, why not optimize only a limited subset of very popular videos? Why not encode them with the standard set first, and then re-process them without hurry only if their popularity rises above an interesting threshold?

Adaptive strategies are always the most productive. If you apply this to the YouTube model, you get huge bandwidth (money) savings; if you apply it to a Netflix model (dynamic streaming), you get a sudden increase in the average quality delivered to clients; and so on.

In conclusion, the moral of the story is that every investment in encoding optimization and adaptive encoding workflows can have very positive effects on user experience and/or the business balance sheet.

PS: I’ll speak about encoding and adaptive strategies at Adobe MAX 2011 (2-5 October). If you are there and interested in encoding, join my presentation: http://bit.ly/qvKjP0

Categories: Video

FFmpeg – the swiss army knife of Internet Streaming – part IV

30 August 2011 60 comments

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

Fourth Part

In this article I will focus on the support for RTMP that makes FFmpeg an excellent tool for enhancing the capabilities of the Adobe Flash Streaming Ecosystem.

FFmpeg introduced strong support for RTMP streaming in release 0.5 with the inclusion of the librtmp (rtmpdump) core. An RTMP stream can be used as an input and/or an output in a command line.

The required syntax is:

rtmp_proto://server[:port][/application][/stream] options

where rtmp_proto can be: “rtmp“, “rtmpt“, “rtmpe“, “rtmps“, “rtmpte“, “rtmpts” and options contains a list of space-separated options in the form key=val (more info here).

Using some of the parameters that we have seen in the first three parts of the series, it’s possible to do a lot of things that the standard Flash Streaming Ecosystem cannot offer. Sometimes there are minor bugs, but generally speaking librtmp works well and helps FMS close the gap with some advanced features of Wowza Server (like repurposing of RTP/RTSP streams, TS streams and so on). FFmpeg works with FMS as well as Wowza Server and RED5, so in this article I will use FMS as a generic term for any RTMP server.

1. STREAM A FILE TO FMS AS IF IT WERE LIVE

With the help of FFmpeg it is possible for example to stream a pre-encoded file to FMS as if it were a live source. This can be very useful for test purpose but also to create pseudo-live channels.

 ffmpeg -re -i localFile.mp4 -c copy -f flv rtmp://server/live/streamName 


The -re option tells FFmpeg to read the input file in real time and not in the standard as-fast-as-possible manner. With -c copy (alias for -acodec copy -vcodec copy) I’m telling FFmpeg to copy the essences of the input file without transcoding, then to package them in an FLV container (-f flv) and send the final bitstream to an RTMP destination (rtmp://server/live/streamName).

The input file must have audio and video codecs compatible with FMS, for example H.264 for video and AAC for audio, but any supported codec combination should work.
Obviously it would also be possible to encode the input video on the fly. In that case remember that the CPU power required for live encoding can be high and may cause frame-rate loss or stuttering playback on the subscribers’ side.
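
If you need to transcode on the fly instead of copying, a minimal sketch could look like this (the bitrates and preset are only illustrative):

 ffmpeg -re -i localFile.mp4 -acodec libfaac -ar 44100 -ab 96k -vcodec libx264 -vpre medium -b 700k -f flv rtmp://server/live/streamName 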

In which scenarios can a command like this be useful?

For example, suppose you have created a communication or conferencing tool in AIR. One of the participants in the conference could pick a local file and stream it to the conference FMS to show the same file, in real time, to the other participants. Leveraging the “native process” feature of AIR, it is simple to launch a command line like the one above to do the job. In this scenario you will probably have to transcode the input, or check codec compatibility by analyzing the input up front (remember the ffmpeg -i INPUT trick we discussed in the second article).

2. GRAB AN RTMP SOURCE

Using a command like this:

 ffmpeg -i rtmp://server/live/streamName -c copy dump.flv 

it’s possible to dump the content of a remote RTMP stream to a local file. This can be useful for test/audit/validation purposes. It works for both live and on-demand content.

3. TRANSCODE LIVE RTMP TO LIVE RTMP

One of the most interesting scenarios is when you want to convert one format into another for compatibility’s sake, or to change the characteristics of the original stream.

Let’s suppose you have a Flash Player-based app that does a live broadcast. Until FP11, Flash can only encode using the old Sorenson Spark codec for video and NellyMoser ASAO or Speex for audio. You can use a live transcoding command to improve the compression, transcoding the video from Sorenson to H.264:

 ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv rtmp://server/live/h264Stream 

This can be useful to reduce bandwidth usage, especially in live broadcasts where latency is not a problem.
The next release of FMS will also offer support for Apple HTTP Live Streaming (as Wowza already does), so it will be possible to use FMS to stream live to iOS devices. But FMS does not transcode the stream essence; it only repackages or repurposes the original essences. FFmpeg can help us convert a non-compliant Sorenson-Speex stream into an H.264-AAC stream in this way:

 ffmpeg -i rtmp://server/live/originalStream -c:a libfaac -ar 44100 -ab 48k -c:v libx264 -vpre slow -vpre baseline -f flv rtmp://server/live/h264Stream 

(UPDATE: libfaac is now an external library and you may have problems encoding in AAC. Read part V of the series to learn more about this topic.)

See also points 4 and 5 to learn how to generate a multi-bitrate stream compliant with Apple’s requirements for HLS. This approach will also be useful with FP11, which encodes in H.264 but generates only one stream.

Another common scenario is when you are using FMLE to make a live broadcast. The standard Windows version of FMLE supports only MP3, not AAC, for audio encoding (a plug-in is required for AAC). This may be a problem when you also want your stream to reach iOS devices via FMS or Wowza (iOS requires AAC for HLS streams). Again, FFmpeg can help us:

 ffmpeg -i rtmp://server/live/originalStream -acodec libfaac -ar 44100 -ab 48k -vcodec copy -f flv rtmp://server/live/h264_AAC_Stream 

On the other hand, I recently had the opposite problem with an AIR 2.7+ app for iOS. AIR for iOS does not currently support H.264 or AAC streaming with the classic NetStream object, but I needed to subscribe to AAC streams generated for desktops. FFmpeg helped me transcode the AAC streams to MP3 for the AIR-on-iOS app.
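
The command was something along these lines (a sketch; the stream names are hypothetical):

 ffmpeg -i rtmp://server/live/desktopStream -c:v copy -c:a libmp3lame -ar 44100 -b:a 96k -f flv rtmp://server/live/iosStream 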

Again, you probably know that Apple requires video streaming apps to provide an audio-only AAC stream with a bitrate below 64 Kbit/s for HLS compliance, but at the same time you probably want to offer higher audio quality for your live streaming (on desktop, for instance). Unfortunately FMLE encodes only the video track at multiple bitrates, using a single audio preset for all of them. With FFmpeg it is possible to generate a dedicated audio-only AAC stream with a bitrate below 64 Kbit/s, as sketched below.
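
A minimal sketch of such a command, assuming FMLE is publishing the source stream:

 ffmpeg -i rtmp://server/live/high_FMLE_stream -vn -c:a libfaac -ar 44100 -b:a 48k -f flv rtmp://server/live/audio_only_AAC_48k 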

4. GENERATE BASELINE FOR LOW-END DEVICES

Very similarly, if you want to be compliant with older iOS versions or other mobile devices (older BlackBerrys, for instance) you need to encode in Baseline profile, but at the same time you may want to leverage High profile for desktop HDS. So you could use FMLE to generate High-profile streams with high-quality AAC, and then generate server-side a Baseline set of multi-bitrate streams for HLS and/or low-end device compatibility.

This command reads from FMS the highest quality of a multi-bitrate set generated by FMLE and, starting from that, generates 3 scaled-down versions in Baseline profile for HLS or mobile. The last stream is an audio-only AAC bitstream at 48 Kbit/s.

 ffmpeg -re -i rtmp://server/live/high_FMLE_stream -acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k -acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k -acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k -acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k 

UPDATE: using the -x264opts parameter you may rewrite the command like this:

 ffmpeg -re -i rtmp://server/live/high_FMLE_stream -c:a copy -c:v libx264 -s 640x360 -x264opts bitrate=500:profile=baseline:preset=slow rtmp://server/live/baseline_500k -c:a copy -c:v libx264 -s 480x272 -x264opts bitrate=300:profile=baseline:preset=slow rtmp://server/live/baseline_300k -c:a copy -c:v libx264 -s 320x200 -x264opts bitrate=150:profile=baseline:preset=slow rtmp://server/live/baseline_150k -c:a libfaac -vn -b:a 48k rtmp://server/live/audio_only_AAC_48k 



(UPDATE: libfaac is now an external library and you may have problems encoding in AAC. Read part V of the series to learn more about this topic.)

5. ENCODE LIVE FROM LOCAL GRABBING DEVICES

FFmpeg can also use a local AV source, so it’s possible to encode live directly from FFmpeg and bypass FMLE completely. I suggest doing that only in very controlled scenarios, because FMLE offers precious additional functions like automatic encoding adjustment to keep latency as low as possible when the bandwidth between the acquisition point and the server is not perfect.

This is a single-bitrate example:

 ffmpeg -r 25 -f dshow -s 640x480 -i video="video source name":audio="audio source name" -vcodec libx264 -b 600k -vpre slow -acodec libfaac -ab 128k rtmp://server/application/stream_name 

Combine this command line with the previous one and you have a multi-bitrate live encoding configuration for desktop and mobile.

6. ENCODE SINGLE PICTURES WITH H.264 INTRA COMPRESSION

H.264 has a very efficient Intra compression mode, so it is possible to leverage it for picture compression. I have estimated an improvement of around 50% in compression compared to JPEG. Last year I discussed extensively the possibility of using this kind of image compression to protect professional footage with FMS and RTMPE. Here you can find the article, and this is the command line:

 ffmpeg.exe -i INPUT.jpg -an -vcodec libx264 -coder 1 -flags +loop -cmp +chroma -subq 10 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -flags2 +dct8x8 -trellis 2 -partitions +parti8x8+parti4x4 -crf 24 -threads 0 -r 25 -g 25 -y OUTPUT.mp4 

Change -crf to modulate encoding quality (and compression rate).

UPDATES

Sometimes when connecting to FMS you may receive cryptic errors. It may help to enclose the destination RTMP address in double quotes and add the option live=1. For example:

 ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv "rtmp://server/live/h264Stream live=1" 

More info on the librtmp library: http://ffmpeg.org/ffmpeg.html#toc-rtmp

CONCLUSIONS

There are many other scenarios where using FFmpeg with FMS (or Wowza) can help you create exciting new services for your projects and overcome the limitations of the current Flash Video Ecosystem, so now it’s up to you. Try mixing my examples and post comments about new ways you have found to customize your RTMP delivery system.
Remember also to follow the discussion on my Twitter account (@sonnati).

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

Categories: Video

FFmpeg – the swiss army knife of Internet Streaming – part III

19 August 2011 9 comments

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 


Third part

In this third part we will look more closely at the parameters you need to know to encode to H.264.

FFmpeg uses the x264 library to encode to H.264. x264 offers a very wide set of parameters and therefore accurate control over compression. However, be aware that FFmpeg re-maps parameter names and doesn’t expose the whole set of x264 options.

UPDATE: FFmpeg allows you to pass parameters directly to the underlying x264 library using the -x264opts option. -x264opts accepts parameters as key=value pairs separated by “:”. For example: -x264opts bitrate=1000:profile=baseline:level=4.1, etc.

Explaining the meaning of every parameter is a long task and not the aim of this article, so I’ll describe only the most important ones and provide some useful samples. If you want to go deeper into FFmpeg’s parameterization, I suggest reading this article to learn the meaning of each x264 parameter and the mapping between FFmpeg and x264. To learn more about the technical principles of H.264 encoding, I also suggest taking a look at the first part of my presentations at MAX 2008, MAX 2009 and MAX 2010.

ENCODING IN H.264 WITH FFMPEG

Let’s start analyzing a sample command line to encode in H.264 :

ffmpeg -i INPUT -r 25 -b 1000k -s 640x360 -c:v libx264 -flags +loop -me_method hex -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 8 -me_range 16 -coder 1 -sc_threshold 40 -flags2 +bpyramid+wpred+mixed_refs+dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 1 -level 30 -directpred 1 -partitions -parti8x8-parti4x4-partp8x8-partp4x4-partb8x8 -threads 0 -acodec libfaac -ar 44100 -ab 96k -y OUTPUT.mp4

(UPDATE: libfaac is now an external library and you may have problems encoding in AAC. Read part V of the series to learn more about this topic.)

This command line encodes the INPUT file using a framerate of 25 fps (-r), a target bitrate of 1000 Kbit/s (-b), a GOP max size of 250 frames (-g) and 3 B-frames (-bf), and resizes the input to 640x360 (-s). The level is set to 3.0 (-level), the entropy coder to CABAC (-coder 1) and the number of reference frames to 3 (-refs). The profile is determined by the presence of B-frames, dct8x8 and CABAC, so this is High profile. Notice the syntax to enable/disable options in multi-option parameters like -partitions, -flags2 and -cmp. The string -flags2 +bpyramid+wpred+mixed_refs+dct8x8 means that you are enabling b-pyramid, weighted prediction, mixed reference frames and the 8x8 DCT. So, for example, if you want to disable dct8x8 to generate an output compliant with Main profile, you can change the previous string to -flags2 +bpyramid+wpred+mixed_refs-dct8x8 (notice the “-” character in front of dct8x8 instead of “+”). Disabling dct8x8 gives you Main profile; disabling also B-frames and CABAC (setting “-bf 0” and “-coder 0”) gives you Baseline profile.

Profiles and levels are very important for device compatibility, so it is important to know how to produce a specific profile and level pair. You can find a short primer on profiles and levels here and generic recommendations for multi-device encoding here.

MAIN PARAMETERS

Here is a short explanation of the most significant parameters.

-me_method

Sets the accuracy of the search method in motion estimation. Allowed values: dia (fastest), hex, umh, full (slowest). dia is usually used for first-pass encoding only, and full is too slow and not significantly better than umh. For single-pass encoding, or the second pass in multi-pass encoding, use umh or hex depending on encoding speed requirements or constraints.

-subq

Sets the accuracy of motion vectors. Accepts values in the range 1-10. Use lower values like 1-3 for the first pass and higher values like 7-10 for the second pass. Again, the effective value depends on the quality/speed tradeoff.

-g, -keyint_min, -sc_threshold

x264 uses a dynamic GOP size by default. -g sets the max GOP size, -keyint_min the min size. -sc_threshold is the scene-change sensitivity (0-100). At every scene change a new i-frame (intra-compressed frame) is inserted; depending on -g and -keyint_min, an I-frame (IDR frame, alias keyframe) is inserted instead. The GOP can be long (e.g. -g 300) for compression efficiency, or short (e.g. 25/50) for seekability. This depends on what you need to achieve and on the delivery technique used (with RTMP streaming you can seek to any frame; with progressive download only to IDRs). Sometimes you may need a consistent, constant GOP size across multiple bitrates (e.g. for HTTP Dynamic Streaming or HLS). To do that, set min and max GOP size equal and disable scene change completely (i.e. -g 100 -keyint_min 100 -sc_threshold 0), as in the sketch below.
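
A minimal sketch of a fixed-GOP encode suitable for segmented delivery (the bitrates and GOP length are only illustrative):

ffmpeg -i INPUT -c:v libx264 -b:v 1000k -g 100 -keyint_min 100 -sc_threshold 0 -c:a libfaac -b:a 96k OUTPUT.mp4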

-bf, -b_strategy

-bf sets the max number of consecutive B-frames (H.264 supports up to 16 B-frames). Remember that B-frames are not allowed in Baseline profile. -b_strategy defines the technique used for B-frame placement.

Use 0 to disable dynamic placement.
Use 1 to enable a fast-choice technique for dynamic placement. Fast but less accurate.
Use 2 to enable a slow-and-accurate mode. Can be really slow if used with a high number of b-frames.

-refs

Sets the number of reference frames (H.264 supports up to 16). It influences the encoding time. Using more than 4-5 refs commonly gives very little or no gain.

 -partitions

H.264 supports several partition modes for macroblock estimation and compensation. P-macroblocks can be subdivided into 16x8, 8x16, 8x8, 4x8, 8x4, and 4x4 partitions. B-macroblocks can be divided into 16x8, 8x16, and 8x8 partitions. I-macroblocks can be divided into 4x4 or 8x8 partitions. Analyzing more partition options improves quality at the cost of speed. The default in FFmpeg is to analyze all partitions except p4x4 (p8x8, i8x8, i4x4, b8x8). Note that i8x8 requires 8x8 DCT and is the only High Profile-specific partition. p4x4 is rarely useful (i.e. only for small frame sizes).

-b, -pass, -crf, -maxrate, -bufsize

-b sets the desired bitrate, which will be achieved using a single-pass or multi-pass process via the -pass parameter. -crf defines a desired average quality instead of a target bitrate.
These are all options related to bitrate allocation and rate control. Rate control is a key area of video encoding and deserves a wider description.

RATE CONTROL OPTIONS

Particular attention must be paid to the rate control mode used. x264 supports different rate control techniques: Average Bit Rate (ABR), Constant Bit Rate (CBR) and Variable Bit Rate (VBR, at constant quality or constant quantization). Furthermore, it is possible to use 1, 2 or more passes.

MultiPass encoding

FFmpeg supports multi-pass encoding; the most common is 2-pass encoding. In the first pass the encoder collects information about the video’s complexity and creates a stats file. In the second pass the stats file is used for the final encoding and better bit allocation. This is the generic syntax:

ffmpeg -i input -pass 1 [parameters] output.mp4
ffmpeg -i input -pass 2 [parameters] output.mp4

-pass 1 tells FFmpeg to analyze the video and write a stats file. -pass 2 tells it to read the stats file and encode accordingly. There is also a -pass 3 option that reads and updates the stats. So if you want to do a 3-pass encoding, the correct sequence is:

ffmpeg -i input -pass 1 [parameters] output.mp4
ffmpeg -i input -pass 3 [parameters] output.mp4
ffmpeg -i input -pass 2 [parameters] output.mp4

3-pass encoding is rarely useful.

ABR

Average Bit Rate is the default rate control mode. Simply set the desired target average bitrate using -b. Remember that the bitrate can fluctuate freely locally; only the average value over the whole video duration is controlled. ABR can be performed with 1 or 2 passes, but I suggest always using 2-pass for better data allocation, as sketched below.
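
A minimal 2-pass ABR sketch (the preset names follow the examples in part II; on Windows replace /dev/null with NUL):

ffmpeg -y -i input -pass 1 -b:v 1000k -c:v libx264 -vpre fastfirstpass -an -f mp4 /dev/null
ffmpeg -i input -pass 2 -b:v 1000k -c:v libx264 -vpre hq -c:a libfaac -b:a 96k output.mp4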

CBR

Using the VBV (Video Buffering Verifier) model it’s possible to obtain CBR encoding with custom buffer control. For example, to encode in canonical CBR mode use:

ffmpeg -i input -b 1000k -maxrate 1000k -bufsize 1000k [parameters] output.mp4

CBR encoding can be performed in single-pass or multi-pass mode. Single-pass CBR is sufficiently efficient.

VBR

libx264 supports two unconstrained VBR modes. In pure VBR you don’t know the final average bitrate of your video but you set a target quality (or quantization) that is applied by the encoder across the whole video.

-cqp sets a constant quantization for each frame. It is rarely useful.
-crf (Constant Rate Factor) sets a target quality factor and lets the encoder change the quantization depending on frame type and sequence complexity. The Adaptive Quantization and MB-Tree techniques change quantization at the macroblock level according to macroblock importance. The -crf factor can usually be chosen in the range 18 (transparent quality) to 30-35 (low quality; the perceived quality depends on frame resolution and device DPI).

Usually VBR encoding is performed in a single pass.
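
A useful middle ground is “capped CRF”: constant quality plus VBV constraints to bound the peak bitrate. A sketch, with purely indicative values:

ffmpeg -i input -c:v libx264 -crf 22 -maxrate 1500k -bufsize 3000k -c:a libfaac -b:a 96k output.mp4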

SIMPLIFY YOUR LIFE USING PRESETS

Fortunately it is possible to avoid long command lines by using pre-defined or custom encoding settings. I don’t like this approach very much, because there are many cases where you need accurate control over the parameters, as with HLS or HDS. But I recognize that presets can save a lot of time in everyday work.

Presets are simply sets of parameters enclosed in a preset file, which you find in the ffpresets folder after unzipping the FFmpeg build package. Presets can change depending on the version of FFmpeg you have, so it’s best to take a look at the content of the preset files. Commonly you will find quality presets like libx264-hq.ffpreset or libx264-slow.ffpreset, first-pass presets like libx264-hq_firstpass.ffpreset, and constraint presets like libx264-main.ffpreset or libx264-baseline.ffpreset.

So, to make a 2-pass encoding in baseline profile with the HQ preset you can use a command like this:

ffmpeg -i INPUT -pass 1 -an -vcodec libx264 -vpre hq_firstpass -vpre baseline -b 1000k -s 640x360 OUTPUT.mp4
ffmpeg -i INPUT -pass 2 -acodec libfaac -ab 96k -ar 44100 -vcodec libx264 -vpre hq -b 1000k -vpre baseline -s 640x360 OUTPUT.mp4

(UPDATE: libfaac is now an external library and maybe you can have problem encoding in AAC – Read part V of the series to know more about this topic.)

Notice that the constraint preset is applied with a second -vpre and that the first pass has audio encoding disabled.
Sometimes I have had problems with presets on Windows. You can bypass preset-location problems simply by using -fpre instead of -vpre. When using -fpre you must specify the absolute path to the preset file, not just the short name as with -vpre.

UPDATE:

Since FFmpeg introduced direct access to x264 parameters, it is also possible to use native x264 presets. For example:

ffmpeg -i INPUT -an -c:v libx264 -s 960x540 -x264opts preset=slow:tune=ssim:bitrate=1000 OUTPUT.mp4

ENCODING FOR DIFFERENT DEVICES

Using the constraint presets it is possible to encode for mobile devices, which usually require Baseline profile to enable hardware acceleration. This limit is rapidly being surpassed by current hardware and operating systems, but if you need to target older devices (for example iOS 3 devices) and newer ones with the same video, it’s still necessary to be able to easily generate video compliant with Baseline profile. You can find other generic recommendations for multi-device encoding here.

THE NEXT PART

In this part we have seen how to encode to H.264 using FFmpeg, as well as the richness of its encoding parameters. In part IV of this series we will see how to leverage FFmpeg’s support for RTMP streaming to enhance the Flash Video Ecosystem’s capabilities.

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 

Categories: Video

FFmpeg – the swiss army knife of Internet Streaming – part II

8 August 2011 55 comments

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 

(Because of the success of this series I have decided to revise the content of the articles, update the command-line syntax to recent API changes and extend the series with further parts. Good reading!)

Second part

After the short introduction of the previous article, now it’s time to see FFmpeg in action. This post is dedicated to the most important parameters and ends with an example of transcoding to H.264. FFmpeg supports hundreds of AV formats and codecs as input and output (for a complete list type ffmpeg -formats and ffmpeg -codecs), but nowadays the most important output format is without doubt H.264.

H.264 is supported by Flash (99% of computers, plus mobile), iOS, Android, BlackBerry, low-end mobile devices, STBs and connected TVs. So why target other formats? A streaming professional primarily has to master H.264. Other formats are marginal, even Google’s VP8, which is still in too early a phase of adoption.

FFmpeg is a vast subject, so I’ll focus on what I think are the most important options and a useful selection of command lines. But first of all, how do you find a build of FFmpeg to start managing your video?

The shortest path, if you don’t want to build from source code, is to visit Zeranoe’s FFmpeg builds. Remember that there are several libraries that can be included in or excluded from a build, so if you need something special or fine control over the capabilities of your build, the longer path is best.

KEY PARAMETERS

This is the base structure of an FFmpeg invocation:

ffmpeg -i input_file [...parameter list...] output_file

input_file and output_file are not limited to file-system objects: a number of protocols are supported (file, http, pipe, rtp/rtsp, raw udp, rtmp). I’ll focus on the possibility of using RTMP as input and/or output in the fourth part of this series; in the following examples I’ll use only local files.

FFmpeg supports literally hundreds of parameters and options. Very often FFmpeg infers parameters from the context, for example the input or output format from the file extension, and it applies default values to unspecified parameters. Sometimes, though, it is necessary to specify some important parameters explicitly to avoid errors or to optimize the encoding.

Let’s start with a selection of the most important, not codec related, parameters:

-formats   print the list of supported file formats
-codecs    print the list of supported codecs (E=encode,D=decode)
-i         set the input file. Multiple -i switches can be used
-f         set video format (for the input if before of -i, for output otherwise)
-an        ignore audio
-vn        ignore video
-ar        set audio rate (in Hz)
-ac        set the number of channels
-ab        set audio bitrate
-acodec    choose audio codec or use “copy” to bypass audio encoding
-vcodec    choose video codec or use “copy” to bypass video encoding
-r         video fps. You can also use fractional values like 30000/1001 instead of 29.97
-s         frame size (w x h, e.g. 320x240)
-aspect    set the aspect ratio, e.g. 4:3 or 16:9
-sameq     ffmpeg tries to keep the visual quality of the input
-t N       encode only N seconds of video (you can use also the hh:mm:ss.ddd format)
-croptop, -cropleft, -cropright, -cropbottom   crop input video frame on each side
-y         automatic overwrite of the output file
-ss        select the starting time in the source file
-vol       change the volume of the audio
-g         Gop size (distance between keyframes)
-b         Video bitrate
-bt        Video bitrate tolerance
-metadata  add a key=value metadata

SYNTAX UPDATE (*)

The syntax of some options has changed. Options like -b (is the bitrate related to audio or video?) now have a different syntax:

Use:

-b:a instead of -ab to set audio bitrate
-b:v instead of -b to set video bitrate
-codec:a or -c:a instead of -acodec
-codec:v or -c:v instead of -vcodec

Most of these parameters also allow an optional third element to specify the index of the audio or video stream to address.

E.g.: -b:a:1 specifies the bitrate of the second audio stream (the index is 0-based).

Read official documentation for more info: http://ffmpeg.org/ffmpeg.html

COMMAND LINES EXAMPLES

And now let’s combine these parameters to manipulate AV files:

1. Getting info from a video file

ffmpeg -i video.mpg

Useful for retrieving info from a media file, like audio/video codecs, fps, frame size and other parameters. You can parse the output in a script by redirecting the stderr channel to a file with a command like this:

 ffmpeg -i inputfile 2>info.txt

2. Encode a video to FLV

ffmpeg -i input.avi -r 15 -s 320x240 -an video.flv

By default FFmpeg encodes FLV files in the old Sorenson Spark format (H.263). This is useful today only for compatibility with older systems, or if you want to encode for a Wii (which supports only Flash Player 7).
Before the introduction of H.264 in Flash Player, I used to re-encode the FLVs recorded by FMS with a command like this:

ffmpeg -i input.flv -acodec copy -sameq output.flv
(ffmpeg -i input.flv -codec:a copy -sameq output.flv)

This produced a file 40-50% smaller with the same quality as the input, while preserving the NellyMoser ASAO audio codec, which was not supported by FFmpeg in those days and therefore could not be transcoded into something else.

Today you can easily re-encode to H.264, and also transcode ASAO or Speex to something else, as sketched below.
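
For instance, a sketch of re-encoding an old FLV to H.264 + AAC (the bitrates are only illustrative):

ffmpeg -i input.flv -c:a libfaac -ar 44100 -b:a 96k -c:v libx264 -vpre hq -b:v 500k output.mp4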

The VP6 codec (Flash Player 8+) is officially supported only for decoding.

3. Encode from a sequence of pictures

ffmpeg -f image2 -i image%d.jpg -r 25 video.flv

Builds a video from a sequence of frames with names like image1.jpg, image2.jpg, ..., imageN.jpg.
It is possible to use different naming conventions, like image%3d.jpg, where FFmpeg searches for files with names like image001.jpg, image002.jpg, etc. The output is an FLV file at 25 fps.

4. Decode a video into a sequence of frames

ffmpeg -i video.mp4 -r 25 image%d.jpg

Decodes the video into a sequence of images (25 images per second) with names like image1.jpg, image2.jpg, ..., imageN.jpg. It’s possible to change the naming convention.

ffmpeg -i video.mp4 -r 0.1 image%3d.jpg

Decodes one picture every 10 seconds (1/0.1 = 10). Useful for creating a thumbnail gallery for your video. In this case the output files have names like image001.jpg, image002.jpg, etc.

ffmpeg -i video.mp4 -r 0.1 -t 20 image%3d.jpg

Extracts 2-3 images from the first 20 seconds of the source.

5. Extract an image from a video

ffmpeg -i video.avi -vframes 1 -ss 00:01:00 -f image2 image-%3d.jpg
(ffmpeg -i video.avi -frames:v 1 -ss 00:01:00 -f image2 image-%3d.jpg)

This is a more accurate command for image extraction. It extracts a single frame (-vframes 1) starting 1 minute from the start of the video. The thumbnail will have the name image-001.jpg.

ffmpeg -i video.avi -r 0.5 -vframes 3 -ss 00:00:05 -f image2 image-%3d.jpg
(ffmpeg -i video.avi -r 0.5 -frames:v 3 -ss 00:00:05 -f image2 image-%3d.jpg)

In this case FFmpeg will extract 3 frames, one every 1/0.5 = 2 seconds, starting from time 5s. Useful for a video CMS where you want to offer a selection of thumbnails and let a backend user choose the best one.

6. Extract only audio track without re-encoding

ffmpeg -i video.flv -vn -c:a copy audio.mp3

Here I assume that the audio track is MP3. Use audio.mp4 if it is AAC, or audio.flv if it is ASAO or Speex. Similarly, you can extract the video track without re-encoding.

7. Extract audio track with re-encoding

ffmpeg -i video.flv -vn -ar 44100 -ac 2 -ab 128k -f mp3 audio.mp3

This command extracts the audio and transcodes it to MP3. Useful when the video.flv was saved by FMS and has an audio track encoded with ASAO or Speex.

ffmpeg -i video.flv -vn -c:a libfaac -ar 44100 -ac 2 -ab 64k audio.mp4

The same as above but encoded to AAC.

8. Mux audio + video

ffmpeg -i audio.mp4 -i video.mp4 output.mp4

Depending on the content of the input files, you may need to use the -map option to choose and combine the audio and video tracks correctly; see the sketch below. Here I assume an audio-only and a video-only input file.
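
A sketch with explicit mapping (assuming the video track comes from the second input and the audio track from the first):

ffmpeg -i audio.mp4 -i video.mp4 -map 1:v -map 0:a -c:a copy -c:v copy output.mp4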

9. Change container

ffmpeg -i input.mp4 -c:a copy -c:v copy output.flv

I use this to put H.264 into an FLV container; sometimes it is useful. This kind of syntax will come back when we talk about FFmpeg and RTMP.

10. Grab from a webcam

On Linux it is easy to use an audio or video grabbing device as input to FFmpeg:

ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 out.flv

On windows it is possible to use vfwcap (only video) or direct show (audio and video):

ffmpeg -r 15 -f vfwcap -s 320x240 -i 0 -r 15 -f mp4 webcam.mp4

ffmpeg -r 15 -f dshow -s 320x240 -i video="video source name":audio="audio source name" webcam.flv

Notice here that the parameters -r, -f and -s are set before -i.

11. Extract a slice without re-encoding

ffmpeg -i input -ss 00:01:00 -t 00:01:00 -c:a copy -c:v copy output.mp4

Extracts 1 minute of video starting from time 00:01:00. Be aware that putting the -ss and -t parameters before or after -i has different effects, as sketched below.
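
For example, these two sketches are not equivalent: in the first, -ss before -i performs a fast input seek (keyframe-aligned when stream-copying); in the second, -ss after -i decodes and discards up to the cut point, which is slower but frame-accurate when re-encoding:

ffmpeg -ss 00:01:00 -i input -t 00:01:00 -c:a copy -c:v copy output.mp4
ffmpeg -i input -ss 00:01:00 -t 00:01:00 output.mp4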

12. Make a video file from a single frame

ffmpeg -loop_input -frames:v 1 -i frame.jpg -t 10s -r 25 output.mp4

Generates a 10-second video at 25 fps from a single frame. Playing with -frames:v it is possible to loop a sequence of frames (not a video). Note: -loop_input is deprecated in favor of -loop (a filter option).

13. Add metadata

ffmpeg -i input.flv -c:v copy -c:a copy -metadata title="MyVideo" output.flv

Useful to change or add metadata like the title, bitrate or other info.

14. Encode to H.264

Let’s conclude this second part of the series with an example of encoding to H.264 + AAC. In the examples above I have used, for simplicity’s sake, FLV or MP4 output. But to encode to H.264 you have to explicitly set the output codec and some required parameters.

ffmpeg -y -i input.mov -r 25 -b 1000k -c:v libx264 -pass 1 -vpre fastfirstpass -an output.mp4
ffmpeg -y -i input.mov -r 25 -b 1000k -c:v libx264 -pass 2 -vpre hq -acodec libfaac -ac 2 -ar 44100 -ab 128k output.mp4

This first example tells FFmpeg to use libx264 to produce H.264 output. We are using two-pass encoding (-pass 1 only generates a stats file that will be used by the second pass). The -vpre option tells FFmpeg to use the preset file “fastfirstpass” found in the presets folder of the FFmpeg installation directory. The second line performs the second pass using a more accurate preset (-vpre hq) and adds the audio encoding.

FFmpeg uses a dedicated remapping of the most important libx264 parameters. x264 has a high number of parameters, and if you know what you are doing you can set each of them individually instead of using a predefined preset. This is an example of two-pass encoding without presets:

ffmpeg -y -i input -r 25 -b 1000k -c:v libx264 -pass 1 -flags +loop -me_method dia -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 1 -me_range 16 -coder 1 -sc_threshold 40 -flags2 -bpyramid-wpred-mixed_refs-dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 0 -directpred 1 -partitions -parti8x8-parti4x4-partp8x8-partp4x4-partb8x8 -an output.mp4

ffmpeg -y -i input -r 25 -b 1000k -c:v libx264 -pass 2 -flags +loop -me_method umh -g 250 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 -b_strategy 1 -i_qfactor 0.71 -cmp +chroma -subq 8 -me_range 16 -coder 1 -sc_threshold 40 -flags2 +bpyramid+wpred+mixed_refs+dct8x8+fastpskip -keyint_min 25 -refs 3 -trellis 1 -directpred 3 -partitions +parti8x8+parti4x4+partp8x8+partb8x8 -acodec libfaac -ac 2 -ar 44100 -ab 128k output.mp4

IN THE NEXT PART

In this second part of the series we have taken a look at the most important “non codec-related” features of FFmpeg.
The next part will be entirely dedicated to how to encode to H.264 using FFmpeg and libx264.

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 

Categories: Video

I’ll speak about Encoding at MAX 2011

19 July 2011 5 comments

This is my fourth participation in Adobe MAX as a speaker. I’ll be in Los Angeles in October to speak about what I know best: how to best encode video for every device.

At every edition I have had very good feedback from the audience, so I thank you all; this year I want to address a request I have often had from attendees: how to get the max out of FFmpeg?

So let’s take a look at the abstract of my session this year:

Encoding for Performance on Multiple Devices

Learn how to create amazing H.264 video that performs well on multiple devices from one of the industry masters. The session will begin by discussing the fundamentals of encoding H.264 for Adobe Flash Player and will focus on techniques using FFmpeg for live and on-demand encoding. We’ll also cover hardware acceleration and optimizing H.264 for tablets and smartphones. Find out Adobe’s recommendations for video encoding for HTTP delivery and how you can make your video look great.
In the session I’ll speak about encoding VOD and live content with Adobe products, but also with FFmpeg. I’ll discuss tricks and tips about encoding, but most of all how to use this tool to overcome some limitations of the Flash Video platform. Furthermore, a specific chapter of the presentation will be dedicated to encoding and delivery optimizations for AIR apps on iOS devices.
So if you are at MAX 2011 on Monday, October 3, from 3:30 p.m. to 4:30 p.m., you know where to find me!
Categories: Video

FFmpeg – the swiss army knife of Internet Streaming – part I

11 July 2011 9 comments

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 

(Because of the success of this series I have decided to revise the content of the articles, update the command-line syntax to recent API changes and extend the series with further parts. Good reading!)

First part – Intro

This is the first post of a small series dedicated to FFmpeg. I already talked about it 6 years ago, when the tool was still young (Why I love FFmpeg: post1 and post2), but in these 6 years it has evolved widely and is now a really useful “swiss army knife” for Internet streaming. I would also define FFmpeg as one of the pillars of Internet video. Sites like YouTube, Vimeo and Google Video, and the entire UGC trend, would not exist without FFmpeg. It is an exceptionally flexible tool that can be very useful for anyone working in the streaming business, and most of all it is open source, free and well supported.

A brief history of FFmpeg

From Wikipedia:

“FFmpeg is a free software / open source project that produces libraries and programs for handling multimedia data. The most notable parts of FFmpeg are libavcodec, an audio/video codec library used by several other projects, libavformat, an audio/video container mux and demux library, and the ffmpeg command line program for transcoding multimedia files. FFmpeg is published under the GNU Lesser General Public License 2.1+ or GNU General Public License 2+ (depending on which options are enabled). The project was started by Fabrice Bellard, and has been maintained by Michael Niedermayer since 2004.”

So FFmpeg is the command-line program that, using libavcodec, libavformat and several other open source projects (notably x264 for H.264 encoding), offers exceptional transcoding capabilities, especially for server-side batch transcoding, but also for live encoding and audio/video file manipulation (muxing, demuxing, slicing, splitting and so on).

Me and FFmpeg

I started studying video encoding optimization at university, and over the last decade I have used several open source and commercial encoders and designed my own original approaches and optimizations (take a look at the best articles page for some experiments). But video encoding and streaming became a business for me only after the release of FMS and Flash Player 6 in late 2002. In the following years I developed several real-time communication programs for my clients and a couple of products. One of these was the “BlackBox” (2005), an HW/SW appliance that acquired multiple video sources using a Flash front-end and an FMS back-end. The system not only acquired video but also provided video editing features.

And here my FFmpeg discovery started. I needed a tool, possibly free, to manipulate FLVs (cut, join, resize, re-encode, etc.) to provide the primitive operations of video editing. Being a .NET developer, I created my own tools (especially for splitting and joining), but then I found FFmpeg to be the best solution for the most complex parts of the work. So my familiarity with FFmpeg dates back to 2005. Around the same year YouTube started to use it as a free and fast way to encode (almost) any input video format to a common output format (Sorenson Spark) for playback in Flash… you know the rest of the story.

Services like YouTube could not afford massive video transcoding using commercial solutions, so a tool like FFmpeg has been of fundamental importance to the sustainability of their business model. This is why I called FFmpeg one of the pillars of Internet video.

What is it possible to do with FFmpeg ?

The most obvious functionality is the decoding and encoding of audio and video files (transcoding). FFmpeg accepts an exceptional number of input formats and is capable of decoding, processing (resizing, changing fps, filtering, deinterlacing, and so on) and finally encoding to several output formats. But it can do a lot of other useful tasks, like extracting the elementary AV streams from a container, muxing elementary streams into a new container, cutting portions from a video, and extracting track information.

These are features FFmpeg has had for many years, while support for the RTMP protocol (librtmp) was added only more recently.

I think that, for an Internet streaming professional, this has become one of the most important features of FFmpeg. Before, if you wanted to acquire streams or push streams to a server in live mode, you needed to use RTP/RTSP, which is complex and whose implementations are not really stable. RTMP, on the other hand, is a simple yet powerful protocol, and most important of all, it is supported by FMS and Wowza, the most-used streaming servers of the last 5 years.

For example with librtmp it is possible to:

1. Connect to FMS, subscribe to a live or VOD stream and record it to the file system.
2. Connect to FMS, subscribe to a stream, transcode it and publish a new version to a different (or the same) FMS.
3. Publish a local video file to FMS to simulate live streaming (with or without transcoding).
4. Acquire a live feed on the local PC, transcode it and publish to FMS.

The series

After this conceptual introduction, I invite you to get into the details by reading the other chapters of this series. The project has gained attention, and I have decided to transform it from a simple series of blog posts into a permanent knowledge base about FFmpeg and how to use it for simple and complex tasks in the video streaming business.

In conclusion, I’d like to underline that with smart use of FFmpeg and RTMP it’s possible to create infinite combinations and overcome the current limitations of the Flash Video platform.
For example, one of the most interesting consequences of point 3 is that, using this tool with Wowza Media Server or FMS 4.5 (which offer HLS compatibility), it is possible to transcode a stream generated by Flash Player (Sorenson + ASAO or Speex) on the fly to HLS for consumption on iOS devices… not bad… continue reading to learn more, and follow the discussion on my Twitter account too (@sonnati).

[Index]

PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)

 

Categories: Video

Mobile development with AIR and Flex 4.5.1

8 July 2011 13 comments

I recently had a very pleasant experience developing a set of mobile applications for the BlackBerry PlayBook using Adobe Flex 4.5.1.

In the past I have been critical of Adobe, because I believed that Flex for Mobile was not sufficiently smooth on devices and the workflow not efficient, but after this project I had to think again. The main application is not very complex, but it has given me the opportunity to evaluate in a real scenario the efficiency of the framework and its performance level on multiple devices.

My final impression is that Adobe is doing really well: after a year of testing and improvements, Flex is becoming an efficient and powerful cross-device development framework. There are still some points to improve and some features to implement or enhance, but I’m not so critical anymore.

The application I developed is a classic multimedia app commissioned by a media client (Virgin Radio Italy) to offer multimedia content for the entertainment of their mobile users. The app offers:

- A selection of thematic web radios, plus the live broadcast of the main radio channel (Virgin Radio Italy)
- A selection of podcasts (MP3) from the radio’s main programs
- The charts/playlists created by Virgin’s sound designers or voted on by the users
- A multi-touch photo gallery
- A selection of VOD content such as video clips, interviews and concerts

The application is now under review and should be available in the PlayBook’s App World in a few days. In the meantime you can take a look at the UX in this preview video:

Categories: Uncategorized

FlashCamp Milan – presentation online

21 May 2011 Leave a comment

One of the few posts I write in Italian on this blog, to thank Flash Mind, the Adobe Flash Platform User Group Italy, for the great experience at the FlashCamp held in Milan yesterday. It was my first time at the WhyMCA conference, which hosted the FlashCamp, and I must say I was positively impressed. Here you can find the slides of my presentation on optimizing video encoding and video delivery for mobile.

Categories: Video

A dream comes true: H.264 encoding into Flash Player 11

14 May 2011 29 comments

Yes. A dream comes true. After almost 9 years the Flash Player will have a new video codec. Sorenson Spark is finally about to retire. But let’s recap the whole story starting, obviously, from the beginning.

In 2002 Macromedia included in Flash Player 6 a video codec provided by Sorenson. The Spark video codec was a custom, simple implementation of the international standard H.263. Spark supported simple encoding techniques derived from H.263v1 (P-frames with motion estimation and compensation, half-pixel accuracy, 1 reference frame, +-16 pixel motion vectors, and RLE and Huffman entropy coding, to name a few) plus some enhanced features like deblocking in post-processing and the special D-frames (Disposable frames), which are like P-frames but cannot be used as references. This last technique in particular was introduced to support the main objective of Spark: providing a low-latency codec for video communication over the Internet.

Flash Player 6 was thus capable of generating and consuming video streams, but only from a new Macromedia server product: the Flash Communication Server (FCS), a revolutionary product years ahead of the market. To be honest, everybody knows the story: FCS had the potential to be a disruptive product, but it was heavily hampered by an absurdly inaccessible price tag and strongly capped configurations ($4,500 for a 10 Mbit/s-capped version, $990 for a 1 Mbit/s-capped version, no developer versions, etc. I’m not kidding). The result: years passed, and only a few mad developers (I’m one of them) continued to support the product, hoping for a brighter future.

With the subsequent Flash Player 7, Macromedia decided to unlock the use of Sorenson video for progressive download. This is the spark that ignited the revolution of video on the Internet. After some years of limbo, even FCS/FMS was relaunched with less restrictive licenses, and the product became more mature release after release (here we could start a different debate about Adobe’s slowness in improving FMS and the recent lawsuit with Wowza, but this post has a different topic…).

After only 2-3 years Spark started to become obsolete, partly because the encoder implementation in the Flash Player was not well optimized and used very simple approaches to rate control. The community started asking for improvements in this area, but without response until now. I think I have been asking for a new encoder for almost 5-6 years. In the past I also developed some optimizations to enhance encoder performance for screen grabbing and webcam communication, but 9 years in computer programming is an entire age, and a 30% improvement in efficiency was still insufficient to compare Flash video with Skype video, for example.

So yes, a dream comes true, because Adobe has introduced an H.264 video encoder in the current Flash Incubator build. Oh yes! H.264 is the state of the art in video encoding; B-frames could usefully replace D-frames, and the potential of this codec is excellent, so much so that even a poor implementation can yield excellent improvements over Spark. I’m only a bit worried about real-time use, but from some comments found on the Internet I think it will be possible to opt for different configurations to balance latency and encoding efficiency.

From the Incubator forum I have extracted this piece of code that shows how to change the codec from Spark to H.264 and configure it.

 var h264Settings:H264VideoStreamSettings = new H264VideoStreamSettings();
 h264Settings.setProfileLevel(H264Profile.BASELINE, H264Level.LEVEL_2);
 stream.videoStreamSettings = h264Settings;

* stream is the NetStream instance that will publish the encoded stream to FMS.

Notice the H264Profile enumeration, which can probably specify not only BASELINE but also MAIN or perhaps HIGH profile for encoding. Similarly, H264Level specifies the level (essentially constraints on resolution, bitrate and reference frames), presumably from 2 to 5. I hope to also be able to define the number of consecutive B-frames, and/or something like an accuracy switch (say H264accuracy.SLOW, H264accuracy.FAST and so on).

I’m starting to do some testing myself, because I’m really excited about this future Flash Player feature. If you add the new echo cancellation API, support for the open source Speex codec, P2P, and mobile availability, I think a new youth is beginning for Flash-based communication application development.

Better late than never.

Categories: FMS, Video