Bandwidth is running out. Let’s save the bandwidth

Global bandwidth consumption is growing every day, and one of the main causes is the explosion of bandwidth-hungry Internet services based on Video On Demand (VOD) or live streaming. YouTube accounts for a considerable portion of overall Internet bandwidth usage, but Hulu and Netflix are first-class consumers as well.

One of the causes of this abnormal consumption (apart from the high popularity) is the low level of optimization used in video encoding: for example, YouTube encodes 480p video @ 1 Mbit/s, 720p @ 2.1 Mbit/s and 1080p @ 3.5 Mbit/s, which are rather high values. Netflix, BBC and Hulu also use conservative settings. You may observe that Netflix and Hulu use adaptive streaming to offer different quality levels depending on network conditions, but such techniques are aimed at improving QoS, not at reducing bandwidth consumption. So it is very important to offer a quality/bitrate ratio that is as high as possible, and not to underestimate the consequences of an unoptimized encoding.

The main consequence of an unoptimized video is higher overall bandwidth consumption and therefore a higher CDN bill. For those giants this is not always a problem because, thanks to very high volumes, they can negotiate a very low cost per GByte.

However, it is not only a matter of pure bandwidth cost. There are many other hidden “costs”. For example, at peak hours it may be difficult to stream HD videos from YouTube without frequent, and annoying, rebuffering. Furthermore, many users nowadays use mobile connections for their laptop or tablet, and such connections rarely offer more than 1-2 Mbit/s of real average bandwidth. If the video streaming service, unlike YouTube, uses dynamic streaming (like Hulu, Netflix, EpicHD, etc…) the user is still able to watch the video without rebuffering, but in these bandwidth-constrained scenarios it is very likely that they will get one of the lower-quality versions of the stream and not the high-quality one.

In fact, dynamic streaming is today very often used as an alibi for poorly optimized encoding workflows…

This state of insufficient bandwidth is more frequent in less developed countries. But even highly developed countries can have problems, if we think of the recent data transfer caps introduced in Canada or in the USA by some network providers (AT&T’s 150 GB/month, for example).

These limits are established mainly because, at peak hours, heavy video streaming consumption can saturate the infrastructure, even of an entire nation, as happened in 2008-2009 in the UK after the launch, and consequent extraordinary success, of the BBC’s iPlayer.

So dynamic streaming can help, but it must not be used as an excuse for poorly optimized encodings. It is absurd to advertise a streaming service as HD when it requires 3-4 Mbit/s+ of average bandwidth to stream the highest-quality bitrate, while in the USA the average is around 2.9 Mbit/s (meaning that more than 50% of users will receive a lower-quality stream and not the HD one).

How many customers are really able to watch an HD stream from start to end in a real-world scenario with these kinds of bitrates?

The solution is: invest in video optimization

Fortunately, today every first-class video provider uses H.264 for their video, and H.264 still offers much room for improvement.
In the past I have shown several examples of optimized encodings. They were often experiments to explore the limits of H.264, or the possibilities for further quality improvements that the Flash Player can bring to a video streaming service (take a look at my “best articles” area).

In those experiments I usually tried to encode a 720p video at a very low bitrate such as 500 Kbit/s. 500 Kbit/s is more than a psychological threshold, because at this bitrate it is really complex to achieve a satisfactory level of quality in 720p. Therefore my first experiments were carried out on content that was not too complex.

But in the last three years I have considerably improved my skills and my knowledge of the inner principles of H.264. I have worked for first-class media companies and contributed to the creation of advanced video platforms capable of offering excellent video quality on desktop (Flash, Silverlight, Widevine), mobile (Flash, HLS, native) and STB (VBR or CBR .ts).

So now I’m able to show you some examples of complex content encoded with very good quality/bitrate ratios in a real-world scenario.

I’m not afraid

To show you this new level of H.264 optimization I have chosen one of the most watched videos on YouTube: “Not Afraid” by Eminem.
This is a complex clip with a lot of movement, dark scenes, some transparencies, lens flares and a lot of fine detail on the artist’s face.

YouTube offers the video in these four versions (plus a 240p one):

1080p @ 3.5Mbit/s
720p @ 2.1Mbit/s
480p @ 1Mbit/s
360p @ 0.5Mbit/s

Starting from this “state of the art”, I have tried to show what can be obtained with a little bit of optimization.
Why not try to offer the quality of the first three stream options at half the bitrate? Let’s say:

1080p @ 1.7Mbit/s
720p @ 1Mbit/s
576p @ 0.5Mbit/s

Such a replacement would have two consequences:

A. Total bandwidth consumption reduced approximately by a factor of 2 (a rough estimate follows after this list).
B. Many more users would be able to watch high-quality video, even in low-speed scenarios (mobile, capped connections, peak hours and developing countries).
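To put a rough number on point A, here is a quick sketch in Python. The clip duration is approximate and the per-view figures are only illustrative, since real consumption depends on how views are spread across the renditions and how much of the clip is actually watched:

    # Rough per-view transfer for the YouTube ladder vs. the proposed optimized
    # ladder. Bitrates are taken from the lists above; the ~4:08 duration of
    # "Not Afraid" is approximate.
    DURATION_S = 4 * 60 + 8

    youtube_ladder = {"1080p": 3.5, "720p": 2.1, "480p": 1.0}    # Mbit/s
    optimized_ladder = {"1080p": 1.7, "720p": 1.0, "576p": 0.5}  # Mbit/s

    def mbytes_per_view(mbit_s):
        """Average bitrate (Mbit/s) -> MBytes transferred for one full view."""
        return mbit_s / 8 * DURATION_S

    for label, ladder in (("YouTube", youtube_ladder), ("optimized", optimized_ladder)):
        for rendition, rate in ladder.items():
            print(f"{label:>9} {rendition}: {mbytes_per_view(rate):5.1f} MB/view")
    # The optimized ladder moves roughly half the data at every quality level.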

But first of all, let’s take a look at the final result. Here you can find a comparison page. On the left you have the YouTube video, on the right the optimized set of encodings. It is not simple to compare two 1080p or 720p videos (follow the instructions on the comparison page), so I have extracted some screenshots to compare the original YouTube version with the optimized encoding.

1. YouTube 1080p @ 3.5 Mbit/s vs. optimized 1080p @ 1.7 Mbit/s

Notice the skin details and imperfections. The optimized encoding offers virtually the same quality at half the bitrate. Consequently, you get 1080p quality at about 15% less bitrate than YouTube’s 720p version.

2. YouTube 720p @ 2 Mbit/s vs. optimized 720p @ 1 Mbit/s

Again, virtually the same quality at half the bitrate. Consequently, 720p video can be offered in place of the 480p version, which has the same bitrate:

3. YouTube 480p @ 1 Mbit/s vs. optimized 720p @ 1 Mbit/s

Optimized 720p offers higher quality (details, grain, spatial resolution) at the same bitrate.

4. YouTube 480p @ 1 Mbit/s vs. optimized 576p @ 500 Kbit/s

Instead of encoding at 854×480 @ 500 Kbit/s, I preferred to use 1024×576 (576p). I also tried encoding 720p @ 600-700 Kbit/s with very good results, but I liked the factor-of-2 reduction in bitrate, so in the end I opted for 576p, which offered more stable results across the whole video. In this case the quality, level of detail and spatial resolution are higher than the original, at half the bitrate.

5. YouTube 360p @ 500 Kbit/s vs. optimized 576p @ 500 Kbit/s

Again, much higher spatial resolution, level of detail and overall quality at the same bitrate.

For the sake of optimization

How have I obtained a quality/bitrate ratio like this? Well, it is not simple, but I will try to explain the basic principle.

Modern encoders do a lot of work to optimize the encoding from a mathematical/machine point of view: for example, a metric such as PSNR or SSIM is used for Rate Distortion Optimization. But this kind of approach is not always useful at low bitrates, or when a high quality/bitrate ratio is required. In that scenario the standard approach may not lead to the best encoding, because it is not capable of predicting which pictures matter most for the quality perceived by the average user. Not every keyframe or portion of the video is equally important.
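For reference, this is what such a metric computes: a minimal PSNR implementation in Python (NumPy), with synthetic frames standing in for a decoded source/encode pair. A pure pixel-difference score like this has no notion of which frames or regions matter most to a viewer, which is exactly the limitation discussed above:

    import numpy as np

    def psnr(reference, encoded, peak=255.0):
        """Peak signal-to-noise ratio between two 8-bit frames (higher = closer)."""
        mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Synthetic stand-ins: in practice these would be the source frame and the
    # decoded frame from the encoded stream at the same timestamp.
    ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
    noise = np.random.randint(-3, 4, ref.shape)
    enc = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, enc):.2f} dB")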

These examples of optimized encodings were obtained with a mix of automated video analysis tools (for dynamic filtering, for instance) and a human-guided fitting approach (for keyframe placement and quality bursts). I’m currently developing a fully automated pipeline, but for now the process produces better results when an expert eye guides it.
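To make the keyframe-placement part concrete, here is a simplified sketch of feeding hand-picked keyframe positions into a two-pass x264 encode through ffmpeg. The file name, timestamps and settings are placeholders, and this is only one piece of the puzzle, not the full workflow described above:

    import subprocess

    # Placeholder inputs: the source mezzanine and a hand-picked list of
    # scene-change timestamps (in seconds) where keyframes should be forced.
    SOURCE = "not_afraid_master.mov"
    KEYFRAMES = [0.0, 12.5, 47.2, 95.8, 143.0]
    force_kf = ",".join(f"{t:.3f}" for t in KEYFRAMES)

    common = [
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", "scale=1280:720",
        "-c:v", "libx264", "-preset", "veryslow",
        "-b:v", "1000k",                   # 720p @ 1 Mbit/s target
        "-force_key_frames", force_kf,     # pin keyframes to the chosen cuts
    ]

    # Pass 1: analysis only, no output file (use NUL instead of /dev/null on Windows).
    subprocess.run(common + ["-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)
    # Pass 2: final encode using the statistics gathered in pass 1.
    subprocess.run(common + ["-pass", "2", "-c:a", "aac", "-b:a", "96k", "out_720p_1mbit.mp4"], check=True)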

Unfortunately, there is a downside to ultra-optimized encoding: the encoding time rises considerably, so it is not realistic to think that YouTube could re-encode every single video with the new optimized profiles.

But when we talk about big numbers, there is an empirical law that may help us in a real-world scenario: the Pareto principle. Let’s apply it to YouTube…

The Pareto principle

The Pareto principle (a.k.a. the 80-20 rule) states that, for many events, roughly 80% of the effects come from 20% of the causes. Applying this rule to YouTube, it’s very likely that 80% of the traffic comes from 20% of the videos. A derivation of the Pareto principle known as the 64-4 rule, obtained by applying the 80-20 split to itself, states that 64% of the effects come from 4% of the causes (and so on). So optimizing a small set of the most popular videos would lead to huge savings and an optimal user experience with only a limited amount of extra effort (the 4%).
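A quick check of that derivation in Python:

    # Iterating the 80-20 split: 80% of 80% of the effects come from 20% of 20% of the causes.
    effects, causes = 1.0, 1.0
    for step in range(1, 4):
        effects, causes = effects * 0.8, causes * 0.2
        print(f"step {step}: {effects:.0%} of the traffic from {causes:.2%} of the videos")
    # step 1: 80% from 20.00%, step 2: 64% from 4.00%, step 3: 51% from 0.80%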

And “Not Afraid” belongs to the top 10 most popular videos on YouTube, so it’s a perfect candidate for pushing the Pareto principle to the extreme.

Let’s do some calculations. My samples reduce the bandwidth by a factor of 2 for every version. So if we suppose that the most requested version of the video is 720p (around 64 MB for a full view at 2.1 Mbit/s) and consider that the video has been watched more than 250 million times in the last 12 months, YouTube has consumed 64 MB * 250 M views = 16 PBytes just to stream “Not Afraid” for one year.

Supposing an “equivalent” cost of $0.02/GByte*, this means $320,000 (*this is the lowest cost in the CDN industry for huge volumes; YouTube probably uses different billing models, so consider it a rough estimate).

So the hand-tuned encoding of a single video could generate a saving of $160,000. Wow… Optimizing even just the top 10 YouTube videos probably means at least $1M of savings… multiply this by the top 1000 videos and we are probably talking about tens of millions per year… what can I say… YouTube, you know where to find me 😉
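The same back-of-the-envelope numbers in code form; the per-view size, view count and CDN price are just the rough assumptions stated above:

    MB_PER_VIEW = 64          # ~4 minutes of 720p at ~2.1 Mbit/s
    VIEWS_PER_YEAR = 250e6    # views of "Not Afraid" over the last 12 months
    COST_PER_GB = 0.02        # assumed "equivalent" CDN cost in $/GByte

    total_gb = MB_PER_VIEW * VIEWS_PER_YEAR / 1000       # ~16 million GB = ~16 PB
    yearly_cost = total_gb * COST_PER_GB                 # ~$320,000
    saving = yearly_cost / 2                             # halving every bitrate
    print(f"traffic ~{total_gb / 1e6:.0f} PB, cost ~${yearly_cost:,.0f}, saving ~${saving:,.0f}")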

Moral of the story

The proposed application of the Pareto rule is an example of an adaptive strategy. Instead of encoding all videos with a complex process that might not be affordable, why not optimize only a limited subset of very popular videos? Why not encode them with the standard set first and then re-process them, without hurry, only if their popularity rises above an interesting threshold?
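As a minimal sketch of this idea in Python (the threshold, names and queue are hypothetical; this is not a description of how YouTube actually works):

    # Popularity-triggered re-encoding: every video ships with the standard fast
    # profile; only titles that cross a view threshold are queued for the slow,
    # optimized second pass.
    REENCODE_THRESHOLD = 1_000_000  # views; arbitrary illustrative value

    def maybe_queue_optimized_encode(video_id, views, already_optimized, queue):
        if views >= REENCODE_THRESHOLD and video_id not in already_optimized:
            queue.append(video_id)          # picked up later by a background encoder
            already_optimized.add(video_id)

    queue, done = [], set()
    maybe_queue_optimized_encode("not_afraid", 250_000_000, done, queue)
    maybe_queue_optimized_encode("obscure_clip", 4_200, done, queue)
    print(queue)  # ['not_afraid']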

Adaptive strategies are always the most productive: apply this to the YouTube model and you get huge bandwidth (money) savings; apply it to a Netflix-style model (dynamic streaming) and you get a sudden increase in the average quality delivered to clients; and so on.

To conclude, the moral of the story is that every investment in encoding optimization and adaptive encoding workflows can have very positive effects on user experience and/or the balance sheet.

PS: I’ll speak about encoding and adaptive strategies at Adobe MAX 2011 (2-5 October). If you are there and interested in encoding, join my presentation: http://bit.ly/qvKjP0

22 thoughts on “Bandwidth is running out. Let’s save the bandwidth”

  1. Please tell me how to optimize my videos as you describe here.
    I mean, what is the method to convert videos into this optimized form?
    Thanks..

  2. Could you please include a link to an actual video that has this new encoding, and any new software needed to watch it?
    Thanks – Maybe Akamai would be interested in your work?

  3. In the second paragraph it is mentioned that Hulu and Netflix are improving QoS and not bandwidth consumption. This is not clear to me; I thought that adaptive streaming adapts to the client’s needs and chooses the appropriate bitrate, thereby using the available bandwidth.

    1. Yes, but if you use bitrates that are too high or inefficient, it is not good for the client.
      Suppose you have 2 Mbit/s and Netflix is offering you 1080p @ 4 Mbit/s, 720p @ 3 Mbit/s, 576p @ 2 Mbit/s, 360p @ 1 Mbit/s and 240p @ 0.5 Mbit/s.
      What happens? You can stream only the 360p version. QoS is assured, but what about the final quality?
      With an optimized set instead, for example 1080p @ 3, 720p @ 2, 576p @ 1.2, 360p @ 0.7, 240p @ 0.3, you could stream consistently at 576p instead of 360p,
      with higher client-end quality. Simply using dynamic streaming is not a guarantee of quality.
      Now suppose we push the limits and are able to offer a set like this: 1080p @ 2.5, 720p @ 1.5, 576p @ 0.8, 360p @ 0.5, 240p @ 0.25…
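      For illustration, a simplified selection rule in Python; the 0.8 safety margin is an assumption of mine, real players use more elaborate heuristics and buffer models:

          # Simplified adaptive-streaming choice: pick the highest rendition whose
          # bitrate fits within the measured bandwidth after a safety margin.
          def pick_rendition(ladder, bandwidth_mbit, margin=0.8):
              usable = bandwidth_mbit * margin
              fitting = [r for r in ladder if r[1] <= usable]
              return max(fitting, key=lambda r: r[1]) if fitting else min(ladder, key=lambda r: r[1])

          heavy = [("1080p", 4.0), ("720p", 3.0), ("576p", 2.0), ("360p", 1.0), ("240p", 0.5)]
          optimized = [("1080p", 3.0), ("720p", 2.0), ("576p", 1.2), ("360p", 0.7), ("240p", 0.3)]

          for label, ladder in (("heavy ladder", heavy), ("optimized ladder", optimized)):
              print(label, "->", pick_rendition(ladder, bandwidth_mbit=2.0))
          # heavy ladder -> ('360p', 1.0)
          # optimized ladder -> ('576p', 1.2)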

  4. Would love to start contributing to the web by reducing the size of files with minimal quality loss, but I have no idea where to begin. I tried to follow your series on FFmpeg, but the command line and presets have all changed in the new versions. For what it’s worth, we’re using Sony Vegas Pro for editing but are not satisfied with its encoding. Reading through your articles shows that we really don’t know anything about encoding, so even a human-guided process might be out of most of our leagues, especially considering we don’t have much time.

    Would love to hear some suggestions from you. Thanks for all your contributions!

  5. Your blog has helped me in many ways. I am impressed by the comparison above. How did you reach the final result? Did you try different parameters, look at the results and keep on optimising?

    How do you propose to do something like that in a dynamic environment, where thousands of videos are uploaded every hour and all of those videos have different profiles (some right, some that could be better)?

    How do you figure out what the right bitrate is for one particular video? Some people argue for bits/(pixel*frame) as an indicator, though it is not a good one.

    Looking at a reference (YouTube’s version) and manually optimising to match it is a lot easier than doing the same in an environment where these decisions have to be made on the fly (by software) and there is no reference to compare against. And it is not really practical to review the results of every video.

    I am sure YouTube and others can do a better job, which they must be doing because it costs them, right? I am impressed by their work and by the scale they operate at.

    Looking forward to hearing your thoughts.

    Thanks
    -abdul

    1. Well, for now it is quite difficult to find an automatic heuristic for this kind of optimization. I’m working on it, but it is still not finished.

  6. Netflix HD is like 4 Mbit/s, isn’t it? That seems pretty compressed to me… 🙂 Also see H.265, I guess, though I’ve never used it…

  7. “YouTube encodes 480p video @ 1 Mbit/s, 720p @ 2.1 Mbit/s and 1080p @ 3.5 Mbit/s, which are rather high values”

    “Let’s save the bandwidth”

    That whole idea is preposterous. Bandwidth will not go down and we will not have to “save” anything. On the contrary, the demand for higher-quality video will increase and at some point there will be lossless streaming of video on demand. Therefore, traffic will increase and, instead of wasting money and effort in futile and counterproductive attempts to stop that, the solution is not to care about saving some 500 kbits, but to invest in better distribution technology and infrastructure to support the needs, especially as commercial TV will very likely be replaced by video on demand in the not-too-distant future.

    1. I don’t agree with you. The point is not to save bandwidth forever or in absolute terms, but to make better use of it today. Bandwidth consumption will increase for sure, but optimizing existing streaming services, per user and globally, is not a “waste of money and effort”; on the contrary, with minimal effort it is a huge money saver, and this is confirmed by Netflix’s actions, which at the same time decrease per-GB costs and try to increase encoding efficiency. It is not just my opinion that in the US the average bandwidth is well below the bitrate used by, for instance, Netflix for HD streaming. Is it better to wait for the whole nation to have 50% higher bandwidth, or to optimize the encoding?

  8. Hi…
    In an e-learning environment I want to build my own video server. Is there any way I can optimize the bandwidth requirements of the video server?

  9. Hi, how have you calculated the bandwidth of the YouTube videos before your optimization?
    Can you please elaborate? I am using Wireshark to check the bandwidth and I am not getting the values that you have written for this video.
    Can you please tell how you got these values (in a bit more detail)?
    1080p @ 3.5Mbit/s
    720p @ 2.1Mbit/s
    480p @ 1Mbit/s
    360p @ 0.5Mbit/s

    1. Hi, YouTube has changed a lot in the last few years. Today it is not easy to measure the bandwidth because they use a variation of DASH to stream.
      However, you can use a tool like 4K Video Downloader to download any single rendition offered by YouTube and then check the bitrate with MediaInfo.
      You will see that today the bitrate is variable, because YouTube uses a smart approach to encoding which is a bit difficult to explain
      here. If I have time I will post something about it in the future.
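      If you prefer the command line, ffprobe (part of FFmpeg) can report the same numbers once you have downloaded a rendition; “rendition.mp4” here is just a placeholder file name:

          import json
          import subprocess

          # Read the container- and stream-level bitrates that ffprobe reports
          # for a downloaded rendition.
          out = subprocess.run(
              ["ffprobe", "-v", "error",
               "-show_entries", "format=bit_rate:stream=codec_type,bit_rate",
               "-of", "json", "rendition.mp4"],
              capture_output=True, text=True, check=True,
          ).stdout
          info = json.loads(out)
          print("overall bit_rate:", info["format"].get("bit_rate"))
          for stream in info.get("streams", []):
              print(stream.get("codec_type"), "bit_rate:", stream.get("bit_rate"))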
