Adobe has announced that the new Adobe Media Server 5 (formerly Flash Media Server) will be available soon. The name change reflects Adobe's recent shift in strategy [sarcastic mode on] of running away from Flash as fast as possible to embrace cooler technologies like, in this case, HLS [sarcastic mode off].
Irony aside, I'm very happy with this announcement, because I have always said that FMS could open interesting new possibilities by supporting streaming for iOS with content protection. FMS has supported HLS streaming since release 4.5, but this alone is not sufficient to keep the leadership.
I work for large media clients that need content protection, and in the last two years I have seen such clients choose Microsoft's Smooth Streaming and PlayReady too many times, because some independent vendors were able to offer them native iOS clients supporting PlayReady DRM inside the HLS protocol.
So it's impossible to remain on the technology and adoption edge without supporting all the business needs of the most pre-eminent mobile platform.
Now with AMS 5 we are able to use our favorite streaming server to deliver protected content to iOS devices too, both in applications created with AIR and in native Objective-C apps.
We have two protection techniques: full-blown DRM protection using Adobe Access 4 (again, "Flash" is flashed away), or PHLS (Protected HLS), the HLS counterpart of PHDS (Protected HTTP Dynamic Streaming).
This last feature is particularly interesting because it offers stronger protection than the very simple encryption possible with standard HLS, without the costs and worries of DRM servers.
More info on Kevin Towes' blog.
I have already talked (perhaps too much) about the future of Flash in this post. There I didn't hide my perplexity about the market position of Flash compared to alternative technologies. After Flash Player for mobile was dropped, there was a strong decline in confidence in the Flash platform. But now the scenario is beginning to emerge sharply, and I'm beginning to understand the purpose of Adobe's strategy.
Yesterday Adobe released a public beta of AIR 3.2 for mobile application development. This version implements the promised support for Stage3D on mobile platforms like iOS and Android. A number of demo videos have appeared on the web showing excellent 3D performance, and there is a lot of renewed interest in mobile game development using AIR:
Square Enix’s [Barts] running on Android
Time will tell, but AIR has the potential to become a leading platform for 2D/3D game development. A single code base is sufficient to create a game for desktop (AIR's captive runtime), browser (someone mentioned Facebook?) and now iOS and Android. With connected TV and STB support to come (already shown during MAX), the dream of the Open Screen Project is becoming reality, at least in game development (but graphics- and media-intensive applications may leverage 2D/3D acceleration as well).
Therefore Adobe has concentrated its resources in a promising field where Flash could easily become the leader. In 2D/3D browser gaming it already is the leader (500 million players on Facebook may be a sufficient business card?). Try searching YouTube for Stage3D demos yourself to see the huge amount of interest in this technology from game developers big and small.
The second strong commitment of the platform is video delivery, where Flash has been the leader for the past five years and still is today. The performance of video decoding in the browser has been widely improved with a completely redesigned pipeline that now exploits multi-threading heavily. But most important, support for accelerated H.264 streaming has been added to AIR for iOS using the standard Apple HLS (already supported by FMS 4.5 and Wowza Server).
During the spring Adobe will release the new version of Flash Access (now Adobe Access 4), which will include content protection for iOS devices (both in AIR and native applications) in the form of DRM on HLS. This move has the potential to make Adobe regain the favor of the majors and big content providers, who would gain the possibility to unify DRM across Android, iOS, desktop apps, browsers, Google TV and some STBs.
The support for HW-accelerated 2D, 3D and video playback on mobile, plus an improvement in performance for Flex applications, plus the possibility to integrate HTML5 content with StageWebView, plus the DRM, plus native extensions, **finally**, makes AIR (for Mobile) an interesting, efficient, effective and valuable solution for cross-platform application development.
(Updated 03 March 2012)
I think the platform is 99% complete now, which is very good, but I would like to see the following issues addressed ASAP to complete the feature list of AIR for Mobile:
- H.264/AAC over RTMP: necessary for efficient real-time video applications, especially now that the Flash Player supports H.264 encoding.
- Echo cancellation: see the previous point.
- Effective and robust support for key native features like in-app purchase and notifications. I like the idea of native extensions, but I'd prefer an official API for critical features like these.
- Better integration/communication between AS3 and JS in StageWebView. No more hacks, please.
Leave a comment if you think there's something else important to add to AIR for Mobile/AIR for iOS.
A long time has passed since my last post on this blog. I have been very busy with an important video streaming project, but that is not the only reason for my absence. I also wanted to wait and take all the time necessary to analyze, ponder and "digest" the infamous Flash affair.
I will not hide my bitterness about it, but I'm also more optimistic now, after seeing the real consequences and having had time to elaborate on the future scenario. It's not all a bed of roses, but I'm somewhat optimistic.
First of all, fortunately, I'm not limited to Flash technology in my consultancies. I have worked with .NET technologies for many years, and I have designed and deployed successful HLS streaming services with both Wowza Server and FMS 4.5.
You may also know that I'm an encoding expert with important success cases and a deep knowledge of commercial and open source encoders like FFmpeg, x264, Flip Factory, Telestream Vantage, ATEME KFE, Rhozet Carbon Coder and Digital Rapids, to name a few.
I have created encoding pipelines and optimized existing ones for delivery platforms based on HLS, Flash HDS, Microsoft Silverlight and IPTV, and designed decoding and delivery optimizations for Flash and Silverlight.
So when I talk about my bitterness, it is not driven by fear of the future but by awareness of the big mistake Adobe has made in stabbing Flash in the back. I want to focus this post on the future prospects of Flash, not on Adobe's disastrous announcement (a masterpiece of masochism, at least from a PR point of view); however, a brief summary of my thoughts on the topic is in order. Two short considerations:
1. Adobe may well have had good, long-term strategic reasons for dropping Flash for the mobile browser, but they could have chosen modes and terms with much less collateral damage. Why not reduce the commitments and investments progressively across the lifespan of FP11, to avoid harming the Flash community? After all, FP11 has been released for Android and QNX, and it has brought important improvements in performance and stability. I know that Flash for mobile browsing has a lot of problems, and those problems are due to the excessive amount of bad Flash coding that has been done over time, especially for advertising. Obviously, if a page with 5-6 Flash banners can kill an old desktop computer, how can a tablet be expected to handle it?
A simple solution could have been to put every SWF on a page into an idle mode, with a clickable poster image that activates the SWF only when touched. Simple, clear and always better than having no Flash support in mobile browsing.
2. Adobe simply does not realize that it is killing the goose that lays the golden eggs. Have you ever thought about the fact that Flash is used every day by 2 billion people? It's probably the most pervasive piece of software after Microsoft Windows. A giant like Steve Jobs would have exploited such a competitive advantage in ways that the current Adobe management is not even able to imagine. Yet it is not difficult to imagine, for example, a marketplace of Flash and AIR apps on the model of the Mac App Store (but with 20 times more potential customers). What is this kind of power worth? Evidently close to zero for Adobe.
But now the damage is done and complaining is worth nothing, so there will be some short-, medium- and long-term consequences. The short-term consequences are paradoxically positive for experienced Flash developers. This is because new developers, creative shops and consultancy firms are shifting their interest to HTML5, both because of the bad medium- and long-term outlook for Flash and for marketing reasons. But the demand for Flash technology is not decreasing as fast as the supply, so there is a burst in the amount of work available for skilled developers.
In the medium term I see a closer convergence between demand and supply for Flash-based projects in general. Flash will maintain or increase its penetration in web gaming thanks to 3D (remember that the casual game market on the Internet is completely Flash-centric today; how could we forget that every day 200+ million people play Flash games on Facebook?) and will probably remain the reference for video streaming, but in the RIA and creative markets HTML5 will definitely gain momentum (in real terms, not like now, where only a few important creative, video or gaming projects have migrated from Flash to HTML5).
Flash in the mobile market, as a cross-platform mobile development technology, does not, in my opinion, have a clear outlook for the future. The sudden drop of Flash for the mobile browser and the drastic reduction of commitment to Flex have been perceived as a betrayal by Adobe from the point of view of its loyal base of supporters and developers, and as a definitive change in the wind from the point of view of customers and stakeholders. How can we blame them? The lack of support from its own creator is a mortal stab for a technology, and the message from Adobe is clear: in the long term we'll substitute Flash with HTML5. And not only that: we will focus more on tools than technologies (Flex docet).
No place for developers in the future of Adobe? I don't know, but the long-term perspective of Flash, Flex and other Flash-related technologies (FMS?) has been heavily perturbed by the infamous move. Flex is now an Apache-backed project, but is that a guarantee of evolution and support? Who will invest time and credibility among customers in a technology for mobile development that has no clear commitment from its creator and controller?
To conclude, what do I intend to do as a Flash developer? In the short term I have a lot of Flash-related projects to do, so no problem. In the medium term I plan to continue using Flash/AIR for mobile development. This is a clear path for me: I can capitalize on my AS3, Flash and Flex platform skills to develop desktop, browser and mobile apps. The level of features for Android and iOS has now become good enough to develop almost any kind of app without the need to add Java and Objective-C to your skill portfolio (in my opinion, the recent support for notifications, in-app purchase and HLS has cleared the top three entries of the most-wanted-features list).
And in the long term? I don't have an answer; I think I'll simply wait and see.
PS: A very interesting article about "migrating" from Flex to JS (thanks to Anna Karim) – https://plus.google.com/109047477151984864676/posts/CVGJKLMMehs
Finally, the recording of my presentation at MAX 2011 (Encoding for performance on multiple devices) is available on Adobe TV.
You can also download the PDF version here. My use of FFmpeg for repurposing FMS streams has attracted quite a lot of interest and attention. I'm planning to extend the series of articles dedicated to FFmpeg and also to transform it into a permanent knowledge base on FFmpeg and related best practices.
Finally, the applications I have developed with Flex and AIR for the PlayBook are officially in the RIM App World market.
They are six media apps developed for Finelco, the owner of the biggest radio network in Italy.
– A selection of thematic web radios plus the live broadcast of the main radio channel
– A selection of podcasts (MP3) from the radio's main programs
– Charts/playlists created by the sound designers or voted by the users
– A multi-touch photo gallery
– A selection of VOD content like video clips, interviews and concerts
I'm very proud because the apps are collecting a lot of 5-star reviews. Flex and AIR can assure an excellent UX, especially for multimedia (live audio, live video, VOD and so on), and easy customization.
Pictures from the Virgin Radio app:
If you don't have a PlayBook, take a look at the UX in this video:
I'm happy to offer 5 of my readers the opportunity to obtain a $200 discount on a full conference pass for Adobe MAX 2011. The first 5 of you who use the promo code ESSONNATI while registering at the conference (max.adobe.com) will obtain the discounted rate of $1,095 instead of $1,295. Be quick! MAX is approaching fast!
If you get in, remember to attend my presentation: "Encoding for performance on multiple devices".
Global bandwidth consumption is growing every day, and one of the main causes is the explosion of bandwidth-hungry Internet services based on Video On Demand (VOD) or live streaming. YouTube accounts for a considerable portion of total Internet bandwidth usage, but Hulu and Netflix are first-class consumers too.
One of the causes of this abnormal consumption (apart from high popularity) is the low level of optimization used in video encoding: for example, YouTube encodes 480p video at ~1 Mbit/s, 720p at ~2.1 Mbit/s and 1080p at ~3.5 Mbit/s, which are rather high values. Netflix, the BBC and Hulu also use conservative settings. You may observe that Netflix and Hulu use adaptive streaming to offer different quality levels depending on network conditions, but such techniques are aimed at improving QoS, not at reducing bandwidth consumption. So it is very important to offer as high a quality/bitrate ratio as possible, and not to underestimate the consequences of un-optimized encoding.
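To put those numbers in perspective, here is a quick back-of-the-envelope conversion from bitrate to transferred data per viewer (a sketch; the bitrates are the ones quoted above, and I use decimal units since that's how CDNs typically bill):

```python
# Rough data volume per hour of playback for the quoted YouTube bitrates.
# 1 Mbit/s = 1e6 bits/s; 1 GB = 1e9 bytes (decimal units).

def gb_per_hour(mbit_per_s):
    """Data transferred by one viewer streaming for one hour."""
    bytes_per_s = mbit_per_s * 1e6 / 8
    return bytes_per_s * 3600 / 1e9

for label, rate in [("480p", 1.0), ("720p", 2.1), ("1080p", 3.5)]:
    print(f"{label} @ {rate} Mbit/s -> {gb_per_hour(rate):.2f} GB/hour per viewer")
```

A single 1080p viewer pulls more than 1.5 GB per hour, which is why even a modest bitrate reduction compounds into huge savings at YouTube scale.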
The main consequence of un-optimized video is high overall bandwidth consumption and therefore a high CDN bill. But for those giants this is not always a problem because, thanks to very high volumes, they can negotiate a very low cost per GByte.
However, it is not only a matter of pure bandwidth cost. There are many other hidden "costs". For example, at peak hours it may be difficult to stream HD videos from YouTube without frequent, annoying rebuffering. Furthermore, a lot of users nowadays use mobile connections for their laptop/tablet, and such connections rarely offer more than 1-2 Mbit/s of real average bandwidth. If the video streaming service, unlike YouTube, uses dynamic streaming (like Hulu, Netflix, EpicHD, etc.), the user is still able to watch the video without rebuffering, but in these bandwidth-constrained scenarios it is very likely that he will get one of the lower-quality versions of the stream, not the high-quality one.
In fact, dynamic streaming is today very often used as an alibi for poorly optimized encoding workflows…
This state of insufficient bandwidth is more frequent in less developed countries. But even highly developed countries can have problems, if we think of the recent data transfer caps introduced in Canada or in the USA by some network providers (AT&T: 150 GB/month, for example).
These limits are established mainly because at peak hours heavy video streaming consumption can saturate the infrastructure, even of an entire nation, as happened in the UK in 2008-2009 after the launch and consequent extraordinary success of the BBC's iPlayer.
So dynamic streaming can help, but it must not be used as an excuse for poorly optimized encodings, and it is absurd to advertise a streaming service as HD when it requires 3-4+ Mbit/s of average bandwidth to stream the highest-quality bitrate, while in the USA the average is around 2.9 Mbit/s (meaning that more than 50% of users will stream a lower-quality stream and not the HD one).
How many customers are really able to watch an HD stream from start to end in a real-world scenario with these bitrates?
The solution is: invest in video optimization.
Fortunately, today every first-class video provider uses H.264 for their video, and H.264 still offers much room for improvement.
In the past I have shown several examples of optimized encodings. They were often experiments to explore the limits of H.264, or the possibilities for further quality improvements that the Flash Player can provide to a video streaming service (take a look at my "best articles" area).
In such experiments I usually tried to encode a 720p video at a very low bitrate like 500 Kbit/s. 500 Kbit/s is more than a psychological threshold, because at this bitrate it is really, really complex to achieve a satisfactory level of quality in 720p. Therefore my first experiments were usually performed on not-too-complex content.
But in the last 3 years I have considerably improved my skills and my knowledge of the inner principles of H.264. I have worked for first-class media companies and contributed to the creation of advanced video platforms capable of offering excellent video quality on desktop (Flash, Silverlight, Widevine), mobile (Flash, HLS, native) and STB (VBR or CBR .ts).
So now I can show you some examples of complex content encoded with very good quality/bitrate ratios in a real-world scenario.
I’m not afraid
To show you this new level of H.264 optimization I have chosen one of the most watched videos on YouTube: "Not Afraid" by Eminem.
This is a complex clip with a lot of movement, dark scenes, some transparencies, lens flares and a lot of fine detail on the artist's face.
YouTube offers the video in these 4 versions (plus a 240p one):
1080p @ 3.5Mbit/s
720p @ 2.1Mbit/s
480p @ 1Mbit/s
360p @ 0.5Mbit/s
Starting from this "state of the art", I have tried to show what can be obtained with a little bit of optimization.
Why not try to offer the quality of the first three stream options at half the bitrate? Let's say:
1080p @ 1.7Mbit/s
720p @ 1Mbit/s
576p @ 0.5Mbit/s
Such a replacement would have two consequences:
A. Total bandwidth consumption reduced approximately by a factor of 2.
B. Many more users would be able to watch high-quality video, even in low-speed scenarios (mobile, capped connections, peak hours and developing countries).
But first of all, let's take a look at the final result. Here you'll find a comparison page. On the left you have the YouTube video, on the right the optimized set of encodings. It is not simple to compare two 1080p or 720p videos (follow the instructions on the comparison page), so I have extracted some screenshots to compare the original YouTube version with the optimized encoding.
1. YouTube 1080p @ 3.5Mbit/s vs optimized 1080p @ 1.7Mbit/s
Notice the skin details and imperfections. The optimized encoding offers virtually the same quality at half the bitrate. Consequently, 1080p can be offered at 15% less bitrate than YouTube's 720p version.
2. YouTube 720p @ 2Mbit/s vs optimized 720p @ 1Mbit/s
Again, virtually the same quality at half the bitrate. Consequently, 720p video can be offered instead of the 480p that has the same bitrate:
3. YouTube 480p @ 1Mbit/s vs optimized 720p @ 1Mbit/s
The optimized 720p offers higher quality (details, grain, spatial resolution) at the same bitrate.
4. YouTube 480p @ 1Mbit/s vs optimized 576p @ 500Kbit/s
Instead of using 854×480 @ 500 Kbit/s, I preferred to use 1024×576 (576p). I also tried encoding 720p @ 600-700 Kbit/s with very good results, but I liked the factor-of-2 reduction in bitrate, so in the end I opted for 576p, which offered more stable results across the whole video. In this case the quality, level of detail and spatial resolution are higher than the original, but at half the bitrate.
5. YouTube 360p @ 500Kbit/s vs optimized 576p @ 500Kbit/s
Again, much higher spatial resolution, level of detail and overall quality at the same bitrate.
For the sake of optimization
How have I obtained a quality/bitrate ratio like this? Well, it is not simple, but I will try to explain the basic principle.
Modern encoders do a lot of work to optimize the encoding from a mathematical/machine point of view. For example, a metric like PSNR or SSIM is used for rate-distortion optimization. But this kind of approach is not always useful at low bitrates, or when a high quality/bitrate ratio is required. In this scenario the standard approach may not lead to the best encoding, because it is not capable of forecasting which pictures matter most for the quality perceived by the average user. Not all keyframes or portions of video are equally important.
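As a side note on the metric just mentioned, PSNR is simple enough to sketch in a few lines of pure Python (a toy illustration on flat lists of 8-bit pixel values; real encoders compute it per frame or per macroblock on actual decoded video):

```python
import math

def psnr(original, encoded, max_value=255):
    """Peak Signal-to-Noise Ratio between two equally sized lists of
    8-bit pixel values. Higher = closer to the original; inf = identical."""
    mse = sum((o - e) ** 2 for o, e in zip(original, encoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Toy example: a tiny "frame" and a slightly distorted decoded copy.
frame = [100, 120, 130, 140]
decoded = [101, 119, 131, 138]
print(f"PSNR: {psnr(frame, decoded):.1f} dB")
```

The limitation described above is exactly this: PSNR weighs every pixel error equally, while a human viewer cares far more about errors on a face in a close-up than on a dark, blurred background.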
These examples of optimized encodings were obtained with a mix of automated video analysis tools (for dynamic filtering, for instance) and a human-guided fitting approach (for keyframe placement and quality bursts). I'm currently developing a fully automated pipeline, but for now the process produces better results when guided by an expert eye.
Unfortunately, there is a downside to ultra-optimized encoding: the encoding time rises considerably, so it is not realistic to think that YouTube could re-encode every single video with the new optimized profiles.
But, you know, when we talk about big numbers, there's an empirical law that may help us in a real-world scenario: the Pareto principle. Let's apply the Pareto principle to YouTube…
The Pareto principle
The Pareto principle (aka the 80-20 rule) states that, for many events, roughly 80% of the effects come from 20% of the causes. Applying this rule to YouTube, it's very likely that 80% of the traffic comes from 20% of the videos. A derivation of the Pareto law known as the 64-4 rule states that 64% of the effects come from 4% of the causes (and so on). So optimizing a reduced set of the most popular videos would lead to huge savings and an optimal user experience with only a limited amount of extra effort (the 4%).
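The 64-4 rule follows from applying the 80-20 split to itself: the top 20% of the top 20% (4% of causes) yields 80% of 80% (64% of effects). A tiny sketch of the iteration:

```python
# Iterating the 80-20 rule: at each step, the top 20% of the remaining
# causes account for 80% of the remaining effects.
causes, effects = 1.0, 1.0
for step in range(1, 4):
    causes *= 0.20
    effects *= 0.80
    print(f"step {step}: {causes:.1%} of videos -> {effects:.1%} of traffic")
```

Step 1 is the classic Pareto split (20% of videos for 80% of traffic), step 2 is the 64-4 rule, and step 3 already attributes half the traffic to less than 1% of the videos.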
But "Not Afraid" is in the top 10 most popular videos on YouTube, so it's a perfect candidate for an extreme application of the Pareto law.
Let's do some calculations. My samples reduce the bandwidth by a factor of 2 for every version. If we suppose that the favorite version of the video is 720p (about 64 MB per view at YouTube's bitrate) and consider that the video has been watched more than 250 million times in the last 12 months, YouTube has consumed 64 MB × 250 M views = 16 PBytes, just to stream "Not Afraid" for one year.
Assuming an "equivalent" cost of 2¢/GByte*, this means $320,000 (*it's the lowest cost in the CDN industry for huge volumes; YouTube probably uses different billing models, so consider it a rough estimate).
So the hand-made encoding of just one video could generate a saving of $160,000. Wow… Encoding even just the top 10 YouTube videos probably means at least $1M in savings… multiply that for the top 1000 videos and we are probably talking about tens of millions per year… what can I say… YouTube, you know where to find me 😉
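For the curious, here is the arithmetic above spelled out as a script (same assumptions as in the post: ~64 MB per 720p view, 250 M views in a year, an "equivalent" CDN cost of 2¢/GB, and a factor-of-2 reduction from the optimized encodes):

```python
# Back-of-the-envelope CDN saving for a single ultra-popular video.
mb_per_view = 64            # YouTube's 720p version of the clip
views_per_year = 250e6      # views in the last 12 months
cost_per_gb = 0.02          # USD; lowest bulk CDN pricing (rough)

total_gb = mb_per_view * views_per_year / 1000   # MB -> GB (decimal)
yearly_cost = total_gb * cost_per_gb
saving = yearly_cost / 2    # optimized encodes halve the bitrate

print(f"traffic: {total_gb / 1e6:.0f} PB/year")
print(f"cost:    ${yearly_cost:,.0f}/year")
print(f"saving:  ${saving:,.0f}/year")
```

The script prints 16 PB/year of traffic, a $320,000 yearly cost and a $160,000 saving, matching the figures in the text.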
Moral of the story
The proposed application of the Pareto rule is an example of an adaptive strategy. Instead of encoding all the videos with a complex process that might not be affordable, why not encode only a limited subset of very popular videos? Why not encode them with the standard settings first, and then re-process them, without hurry, only if their popularity rises above an interesting threshold?
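That re-process-on-popularity idea can be sketched as a tiny policy function (a hypothetical illustration: the catalog shape, the threshold value and the function name are all made up for the example):

```python
# Sketch of a popularity-driven re-encoding policy: everything gets the
# cheap standard encode; only titles whose traffic crosses a threshold
# are queued for the expensive, optimized encode.

REENCODE_THRESHOLD_VIEWS = 1_000_000   # hypothetical trigger

def pick_for_reencoding(catalog):
    """catalog: list of (video_id, monthly_views) tuples.
    Returns the ids worth the extra encoding effort, most popular first."""
    hot = [(views, vid) for vid, views in catalog
           if views >= REENCODE_THRESHOLD_VIEWS]
    return [vid for views, vid in sorted(hot, reverse=True)]

catalog = [("a1", 50_000), ("b2", 4_000_000), ("c3", 1_200_000)]
print(pick_for_reencoding(catalog))  # ['b2', 'c3']
```

Sorting by popularity also means the queue naturally spends the scarce encoding time where the Pareto payoff is highest.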
Adaptive strategies are always the most productive. If you apply this to the YouTube model, you get huge bandwidth (money) savings; if you apply it to a Netflix model (dynamic streaming), you get a sudden increase in the average quality delivered to clients; and so on.
In conclusion, the moral of the story is that every investment in encoding optimization and adaptive encoding workflows can have very positive effects on user experience and/or the business balance.
PS: I'll speak about encoding and adaptive strategies at Adobe MAX 2011 (2-5 October). If you are there and interested in encoding, join my presentation: http://bit.ly/qvKjP0