Archive for the ‘Video’ Category

FFmpeg – the swiss army knife of Internet Streaming – part I

11 July 2011 9 comments


PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)


(Because of the success of this series I have decided to revise the content of the articles, update the syntax of the command lines to recent API changes and extend the series with new parts. Good reading!)

First part – Intro

This is the first post of a small series dedicated to FFmpeg. I already talked about it six years ago, when this tool was still young (Why I love FFmpeg: post1 and post2), but in these six years it has evolved greatly and it is now a really useful “swiss army knife” for Internet streaming. I would also define FFmpeg as one of the pillars of Internet video. Sites like YouTube, Vimeo, Google Video and the entire trend of UGC would not exist without FFmpeg. It is an exceptionally flexible tool that can be very useful for anyone who works in the streaming business and, most of all, it is open source, free and well supported.

A brief history of FFmpeg

From Wikipedia:

“FFmpeg is a free software / open source project that produces libraries and programs for handling multimedia data. The most notable parts of FFmpeg are libavcodec, an audio/video codec library used by several other projects, libavformat, an audio/video container mux and demux library, and the ffmpeg command line program for transcoding multimedia files. FFmpeg is published under the GNU Lesser General Public License 2.1+ or GNU General Public License 2+ (depending on which options are enabled). The project was started by Fabrice Bellard, and has been maintained by Michael Niedermayer since 2004.”

So FFmpeg is the command line program that, using libavcodec, libavformat and several other open source projects (notably x264 for H.264 encoding), offers exceptional transcoding capabilities, especially for server-side batch transcoding but also for live encoding and audio/video file manipulation (muxing, demuxing, slicing, splitting and so on).

Me and FFmpeg

I started to study video encoding optimization at University and in the last decade I have used several open source and commercial encoders and designed my own original approaches and optimizations to encoding (take a look at the best articles page for some experiments). But video encoding and streaming became a business for me only after the release of FMS and Flash Player 6 in late 2002. In the following years I developed several real-time communication programs for my clients and a couple of products. One of these was the “BlackBox” (2005), an HW/SW appliance that acquired multiple video sources using a Flash front-end and an FMS back-end. The system not only acquired video but also provided video editing features.

And here my FFmpeg discovery started. I needed a tool, possibly free, to manipulate FLV (cut, join, resize, re-encode, etc…) in order to provide the primitive operations of video editing. Being a .NET developer I created my own tools (especially for splitting and joining), but then I found FFmpeg to be the best solution for the most complex parts of the work. So my confidence with FFmpeg dates back to 2005. Around the same year YouTube started to use it as a free and fast way to encode (almost) any input video format to a common output format (Sorenson’s Spark) for playback in Flash… you know the rest of the story.

Services like YouTube could not afford massive video transcoding using commercial solutions, so a tool like FFmpeg has been of fundamental importance for the sustainability of their business model. This is why I defined FFmpeg as one of the pillars of Internet video.

What is it possible to do with FFmpeg ?

The most obvious functionality is the decoding and encoding of audio and video files (transcoding). FFmpeg accepts an exceptional number of input formats and is capable of decoding, processing (resize, change fps, filter, deinterlace, and so on) and finally encoding to several output formats. But it can do a lot of other useful tasks, like extracting the elementary A/V streams from a container, muxing elementary streams into a new container, cutting portions from a video or extracting track information.
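A few sketches of these basic tasks (file names are placeholders, and the exact codec options available depend on how your FFmpeg build was configured):

```shell
# Transcode to H.264/AAC, resizing to 640x360 and forcing 25 fps
ffmpeg -i input.avi -vf scale=640:360 -r 25 -c:v libx264 -c:a aac output.mp4

# Extract the elementary streams from a container (no re-encoding)
ffmpeg -i input.mp4 -c:v copy -an video.h264
ffmpeg -i input.mp4 -c:a copy -vn audio.aac

# Mux elementary streams into a new container
ffmpeg -i video.h264 -i audio.aac -c copy remuxed.mp4

# Cut a 60-second portion starting at 00:01:30, again without re-encoding
ffmpeg -ss 00:01:30 -i input.mp4 -t 60 -c copy cut.mp4

# Print track information for a file
ffmpeg -i input.mp4
```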

These are features that FFmpeg has had for many years, while support for the RTMP protocol (librtmp) has been added only more recently.

I think that, for an Internet streaming professional, this has become one of the most important features of FFmpeg. Before, if you wanted to acquire streams or push streams to a server in live mode, you needed to use the RTP/RTSP protocols, but they are complex and the implementations are not really stable. On the other hand, RTMP is a simple yet powerful protocol and, most important of all, it is supported by FMS and Wowza, which have been the most used streaming servers in the last 5 years.

For example with librtmp it is possible to:

1. Connect to FMS, subscribe to a live or VOD stream and record it to the file system.
2. Connect to FMS, subscribe to a stream, transcode it and publish a new version to a different (or the same) FMS.
3. Publish a local video file to FMS to simulate live streaming (with or without transcoding).
4. Acquire a live feed on the local PC, transcode it and publish to FMS.
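The four scenarios above can be sketched with command lines like the following (the rtmp:// URLs, stream names and capture device are placeholders for your own setup, and the quoted URL syntax with options like `live=1` requires a build with librtmp support):

```shell
# 1. Subscribe to a live stream from FMS and record it to the file system
ffmpeg -i "rtmp://fms.example.com/live/myStream live=1" -c copy recording.flv

# 2. Subscribe to a stream, transcode it and publish a new version to another FMS
ffmpeg -i "rtmp://fms.example.com/live/source live=1" \
       -c:v libx264 -b:v 500k -c:a aac -b:a 64k \
       -f flv "rtmp://fms2.example.com/live/transcoded"

# 3. Publish a local file to FMS to simulate live streaming
#    (-re reads the input at its native frame rate)
ffmpeg -re -i localFile.mp4 -c copy -f flv "rtmp://fms.example.com/live/simulatedLive"

# 4. Acquire a local live feed (here a V4L2 webcam on Linux) and publish to FMS
ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -f flv "rtmp://fms.example.com/live/webcam"
```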

The series

After this conceptual introduction I invite you to go into the details by reading the other chapters of this series. This project has gained attention and I have decided to transform it from a simple series of blog posts into a permanent knowledge base about FFmpeg and how to use it for simple and complex tasks in the video streaming business.

To conclude, I would like to underline that with a smart usage of FFmpeg and RTMP it’s possible to create infinite combinations and overcome current limitations of the Flash video platform.
For example, one of the most interesting consequences of point 3 is that, using this tool together with Wowza Media Server or FMS 4.5 (which offer HLS compatibility), it is possible to transcode on-the-fly a stream generated by Flash Player (Sorenson + Asao or Speex) into HLS for consumption on iOS devices… not bad… continue reading to know more and follow the discussion on my Twitter account too (@sonnati).
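As a sketch of that last idea (hostnames and stream names are placeholders): FFmpeg subscribes to the Sorenson + Speex stream published by Flash Player, transcodes it to H.264 + AAC and re-publishes it, and the server takes care of the HLS packaging for iOS:

```shell
# Pull the webcam stream published by Flash Player, transcode and re-publish;
# Wowza or FMS 4.5 then repackage the H.264/AAC stream as HLS for iOS devices
ffmpeg -i "rtmp://server.example.com/live/webcamStream live=1" \
       -c:v libx264 -profile:v baseline -level 3.0 -b:v 400k \
       -c:a aac -b:a 64k -ar 44100 \
       -f flv "rtmp://server.example.com/live/webcamStream_h264"
```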


PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)


Categories: Video

FlashCamp Milan – presentation online

21 May 2011 Leave a comment

One of the few posts in Italian that I write on this blog, to thank Flash Mind, the Adobe Flash Platform User Group Italy, for the great experience I had at the Flash Camp held in Milan yesterday. It was the first time I attended the whymca conference, which hosted the Flash Camp, and I must say I was positively impressed. Here you can find the slides of my presentation on the optimization of video encoding and video delivery for mobile.

Categories: Video

A dream comes true: H.264 encoding into Flash Player 11

14 May 2011 29 comments

Yes. A dream comes true. After almost 9 years the Flash Player will have a new video codec. Sorenson’s Spark is about to retire, finally. But let’s recap the whole story starting, obviously, from the beginning.

In 2002 Macromedia included in Flash Player 6 a video codec provided by Sorenson. The Spark video codec was a custom and simple implementation of the international standard H.263. Spark supported simple encoding techniques derived from H.263v1 (P-frames with motion estimation and compensation, half-pixel accuracy, a single reference frame, motion vectors limited to ±16 pixels, RLE and Huffman entropy coding, to name a few) plus some enhanced features like deblocking in post-processing and the special D-frames (Disposable frames), which are like P-frames but cannot be used as reference. This last technique in particular was introduced to support the main objective of Spark: to provide a low-latency codec for video communication over the Internet.

Flash Player 6 was thus capable of generating and consuming video streams, but only with a new Macromedia server product: the Flash Communication Server (FCS), a revolutionary product ahead of the market by several years. To be honest, everybody knows the story: FCS had the potential to be a disruptive product but was heavily hampered by an absurdly inaccessible price tag and strongly capped configurations ($4,500 for a 10 Mbit/s capped version, $990 for a 1 Mbit/s capped version, no developer version, etc… I’m not kidding). The result: years passed and only a few mad developers (I’m one of them) continued to support the product, hoping for a brighter future.

With the subsequent Flash Player 7, Macromedia decided to unlock the use of Sorenson video for progressive download. This is the spark that ignited the revolution of video on the Internet. After some years of limbo even FCS/FMS was relaunched with less restrictive licenses and the product became more mature release after release (here we could start a different debate about the slowness of Adobe in improving FMS and the recent lawsuit with Wowza, but this post has a different topic…).

After only 2-3 years Spark started to become obsolete, partly because the implementation of the encoder in the Flash Player was not well optimized and used very simple approaches to rate control in video encoding. The community started to ask for improvements in this area, but without response until now. I think I have been asking for a new encoder for almost 5-6 years. In the past I also developed some optimizations to enhance the encoder performance for screen grabbing or webcam communication, but nine years in computer programming are an entire era, and a 30% improvement in efficiency was still insufficient to make Flash video comparable with Skype video, for example.

So yes, a dream comes true, because Adobe has introduced an H.264 video encoder in the current Flash Incubator. Oh yes! H.264 is the state of the art in video encoding, the presence of B-frames could be useful to replace the D-frames, and the potential of this codec is so great that even a poor implementation can lead to excellent improvements over Spark. I’m only a bit worried about real-time use, but from some comments found on the Internet I think it will be possible to opt for different configurations to balance latency and encoding efficiency.

From the Incubator’s forum I have extracted this piece of code, which shows how to change the codec from Spark to H.264 and configure it:

 // requires the flash.media classes H264VideoStreamSettings, H264Profile and H264Level
 var h264Settings:H264VideoStreamSettings = new H264VideoStreamSettings();
 h264Settings.setProfileLevel(H264Profile.BASELINE, H264Level.LEVEL_2);
 stream.videoStreamSettings = h264Settings;

* stream is the NetStream instance that will publish the encoded stream to the FMS.

Notice the H264Profile enumeration, which is probably capable of specifying not only a BASELINE but also a MAIN or perhaps HIGH profile for encoding. Similarly, H264Level specifies the level (substantially the number of reference frames), presumably from 2 to 5. I hope to also be able to define the number of consecutive B-frames, and/or something like an accuracy switch (say H264accuracy.SLOW, H264accuracy.FAST and so on).

I’m starting to do some testing by myself because I’m really excited about this future Flash Player feature. If you add the new echo cancellation API, the support for the open source Speex codec, P2P and mobile availability, I think that a new youth is starting for the development of Flash-based communication applications.

Better late than never.

Categories: FMS, Video

Flash Camp in Milan

2 May 2011 1 comment

In a few weeks (20-21 May) a Flash Camp dedicated to mobile app development with the Flash Platform will take place in Milan. The camp is hosted by whymca, a mobile developer conference that covers several mobile platforms and development tools. This year, thanks to the efforts of my friend Andrea Trento, an entire conference track will be dedicated to mobile development using the Flash Platform.

I’m one of the crew (4 Adobe Community Professionals + 1 Evangelist) that will speak at the Camp.

First of all, Mihai Corlan (Adobe Evangelist) will open the track, talking about Flex on mobile and the new possibilities for cross-platform development offered by the Flash Platform. Andrea Trento will show how to create a cross-platform game with Flash, Luca Mezzalira will show the use of design patterns in mobile development, Piergiorgio Niero will focus on code optimization for mobile, while I’ll speak about video encoding and delivery optimization for mobile.

It will be a great event for learning about the cutting edge of mobile development technology and for keeping in touch with the ever-growing Flash community.

The event will take place in Milanofiori – Assago – Milan (more info on the whymca web site), and is completely free. Due to the limited number of seats I suggest you book your place early.

Categories: Flash, Mobile, Video

AIR 2.6 for iOS and video playback

26 April 2011 43 comments

If you work with audio and video streaming, one of the worst limitations of AIR 2.6 for iOS is that it is not possible to stream video encoded in H.264 (and audio in AAC) inside your AIR application. AIR 2.6 for iOS supports NetConnection and NetStream but can decode only the Spark, VP6, MP3, NellyMoser and Speex formats. So no H.264 and no AAC (don’t ask me why).

This is a real problem. Who is using VP6 for video streaming or MP3 for audio today? On top of that, the performance of audio/video streaming is not perfect in AIR for mobile today (even on Android), especially because you get a lot of dropped frames and no frame interpolation, so delivering a stream in VP6 and MP3 to iOS devices is a very sub-optimal solution that cannot compete with the very good native streaming capabilities of iOS to which the user is accustomed.

The AIR for iOS documentation mentions that it is possible to launch the native iOS video player pointing to an .mp4 (or .m3u8) video file, but this is not handy because the video opens outside the AIR application and, especially on iPad, the user experience is really bad.

Fortunately there’s a sufficiently workable solution to integrate the flawless experience of the native player in the AIR app: use the StageWebView object.

The StageWebView object is a powerful way to integrate the elements that AIR is lacking today. Do you need a list with perfect scrolling? AAC audio streaming? H.264 streaming? Well, you can use a StageWebView to load “HTML5” code inside the AIR app and integrate this kind of content. Let’s look at a simple example:

var webView:StageWebView = new StageWebView(); // requires flash.media.StageWebView, flash.geom.Rectangle, flash.filesystem.File
webView.stage = this.stage;
webView.viewPort = new Rectangle( 0, 0, stage.stageWidth, stage.stageHeight);
var path:String = new File(new File("app:/html/service.htm").nativePath).url; // iOS hack to get a valid local URL
webView.loadURL( path );

This code will open a fullscreen instance of the native browser (without UI) and load a locally stored .html file (notice that on iOS it is necessary to use a hack like the nested File/nativePath trick above to obtain a valid URL for accessing local HTML). Now you can easily understand that the best approach is to mix the AIR UI and this windowed browser to exploit HTML5 capabilities, especially for media streaming. A simple <video> tag inside the HTML code can do the job and offer perfect H.264 progressive and streaming playback in your AIR apps, both windowed and at fullscreen.

Communicate with StageWebView

It is not so easy to communicate with the page loaded inside the StageWebView; the object does not provide specific APIs for this. Fortunately there exists a class (StageWebViewBridge) developed to overcome the limitations of the standard StageWebView object. With StageWebViewBridge it is possible to communicate bidirectionally with the hosted HTML page and so create something similar to PhoneGap with Flash/AIR.

While waiting for the future AIR 2.7 (which could still have problems in the video area if it does not implement StageVideo on iOS), this is the best solution I have found to overcome the limitations of AIR for iOS.

Categories: Flash, Mobile, Video

FlashPlayer 10.2 is coming on Android

14 February 2011 4 comments

Adobe announced today, during the Mobile World Congress, that Flash Player 10.2 will be available soon for Android devices. This is very important because FP 10.2 introduced the StageVideo object, which offers direct control over hardware acceleration in video decoding. In my opinion the worst weakness of FP 10.1 for Android is the performance of video decoding, so I’m very happy to have StageVideo ASAP.

In Flash Player 10.1 for Android the decoding of H.264 can be hardware accelerated (depending on the hardware of the device), but the color conversion, the blending and the compositing of the video on the display are still delegated to a software layer. This is because the canonical Video object is part of the display list and is therefore injected into the display list rendering pipeline. StageVideo is an alternative way to access the video layer and it is not part of the display list. At the cost of lower flexibility you get direct access to hardware acceleration, from bitstream decoding to video compositing.

StageVideo will be available only on Android 3.0, which means tablets like the Motorola Xoom, Samsung Galaxy Tab 10.1 and so on. The need for full hardware acceleration is much more important for a tablet, which has a big screen compared to a smartphone, but I suppose we will also see new smartphones equipped with Android 3.0 soon.

Now I feel only the need for a new, efficient and accelerated iOS packager…

Categories: Flash, Mobile, Video

The NAB Show 2011

11 February 2011 Leave a comment

I’m more and more involved every day in matters related to the TV of the future, therefore this year I’m planning to attend the NAB Show in Las Vegas.

The 2011 edition of the NAB Show, coming up April 9-14 in Las Vegas, is the world’s largest event for video, audio and digital media professionals. This year’s show will feature a lot of products, technology pavilions and educational sessions specifically focused on online video.

With the registration code SM06 you can have free access to the exhibit floor, the Opening Keynote and State of the Industry Address, Info Sessions, Content Theater, Exhibits and PITS (around $150 value). Register on the NAB Show web site with the code SM06.

See you in Las Vegas.

Categories: Video

Flash to HTML5 fallback

12 January 2011 8 comments

After the debate about whether Flash or HTML5 is better at serving video, I have found a lot of code on the Internet about how to serve video in HTML5 with a Flash fallback.

Regardless of the fact that the debate is far from concluded, I think it is nonsense to check first for HTML5 and then for Flash instead of the contrary. Let’s follow this simple reasoning:

Today you have Flash Player on 98% of desktops and on Android 2.2. Even excluding the fact that in a few months it will be available on RIM OS, webOS, Windows Phone 7, Samsung and Sony connected TVs and Google TV, the penetration of Flash Player is very, very high because the computer desktop is still the most important place where Internet video is consumed.

The performance of Flash on these platforms is very good, and I’m not speaking only about raw fps or CPU usage statistics, but about overall performance including video control, personalization, ease of development, advertising compatibility, content protection and QoS. And now, with the StageVideo API, the performance is very good on Mac too.

On the other hand you have HTML5, which is a compact, standard and effective technology for video… Sorry, let me correct myself: it WILL BE standard and effective, because right now it is a mess, with every vendor implementing (or anticipating) the standard in its own way. It works well on mobile, especially on iOS devices (even if the implementation does not adhere completely to the ‘draft’ standard), but on the desktop (over a billion screens) it simply doesn’t work today, at least for video delivery.

The scenario is clear: on the desktop we have 50% IE (below version 9) browsers, which do not support HTML5 at all, and 35% Firefox browsers, which at the moment support only Theora for video. Only Chrome and Safari seem to work flawlessly with HTML5, but yesterday Google complicated everything by announcing that Chrome will drop support for H.264 in favor of VP8. It is definitely not a good scenario…

Therefore, I wonder: why should I serve my video first with HTML5 and THEN with Flash? That would be crazy.

Use HTML5 as a Flash fall-back

You know that I develop video encoding pipelines as well as video delivery platforms and optimizations. So for my clients I propose Flash-based players (custom or OSMF based). With Flash, not only do they reach the widest audience possible, but they also get a great and consistent interactive experience. Furthermore, one very important point for top media companies is content protection, and with Flash you can protect content in several ways. This is not possible today using HTML5 (indeed it is not even possible to stream, you have only progressive downloading); only iOS devices support a non-standard streaming and protection scheme.

But the new “mobile wave” cannot be ignored. Everyone wants to watch video on iPhone, iPad or Android. And BlackBerry, Symbian and Windows Mobile devices also need to be served in some way.

So, what do I propose to my clients? The following fallback approach:

IF the device has Flash (Desktop + Android 2.2) THEN
……stream using Flash with dynamic streaming / interactivity / protection
ELSE IF the device supports HTML5 (iOS, Android, etc…) THEN
……stream (on iOS) or progressive download (on Android and other WebKit based browsers)
ELSE
……serve video in a low-end format in progressive downloading

This kind of multiple fallback schema can be achieved in several ways. I like to use a compact schema, easily applicable to videos that can be embedded in external pages. The example below uses the fallback features of the HTML tag <object> and of the HTML5 tag <video>:

<object id="VideoPlayer" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase=",0,0,0" width="600" height="400" >
 <param name="movie" value="videoPlayer.swf?videoID=10" />
 <param name="quality" value="high" />
 <param name="allowfullscreen" value="true" />
 <param name="allowscriptaccess" value="always" />
 <param name="wmode" value="opaque" />
 <object type="application/x-shockwave-flash" data="videoPlayer.swf?videoID=10" width="600" height="400" >
  <param name="movie" value="videoPlayer.swf?videoID=10" />
  <param name="quality" value="high" />
  <param name="allowfullscreen" value="true" />
  <param name="allowscriptaccess" value="always" />
  <param name="wmode" value="opaque" />
  <video controls preload width="600" height="400" poster="thumbnail.jpg" >
   <source src="iPhoneVersion.m3u8" type="video/mp4" />
   <source src="AndroidVersion.mp4" type="video/mp4" />
   <a href="LowEndMobileVersion.3gp"><img src="thumbnail.jpg" /></a>
  </video>
 </object>
</object>

First of all I have used two nested <object> tags, which is one of the techniques for embedding the Flash Player in an HTML page. The first is for IE, the second (as fallback) is for Firefox, Chrome and the others (change videoPlayer.swf?videoID=10 to your video player of choice). If even this second tag fails, you are probably on a mobile device or on a tablet, so it is logical to try the HTML5 <video> tag, which works on iOS devices, Android and other OSes with WebKit-based mobile browsers. The video tag allows you to specify a list of possible video sources that are tried in sequence. Here I propose first the Apple HTTP Live Streaming technique, serving the iPhoneVersion .m3u8 manifest file, and only if this is not supported by the device do I fall back to a progressively downloaded video in .mp4 format called AndroidVersion.mp4 (for Android, webOS, RIM OS6, etc…). If even the <video> tag fails, you are probably on an early version of Android, Symbian, BlackBerry or Windows Mobile, so a low-end version of the video is served in progressive download (LowEndMobileVersion.3gp).

This approach guarantees the best performance on the desktop (Flash Player, streaming with bitrate switching, advanced QoS and protection) and on iOS (Apple HTTP Live Streaming with bitrate switching and protection), where this schema works for both live and on-demand video. On the rest of the devices you can still serve on-demand video without protection. It is up to your video platform to selectively build the right fallback schema, excluding the lower-end versions of the video when it must be protected or when it is a live stream.

Categories: Flash, Video

AIR on TV and the StageVideo Object

31 October 2010 3 comments

One of the most important new technologies presented at Adobe MAX 2010 is, in my opinion, “AIR for TV”. Flash Player is now present on multiple screens, ranging from desktop to mobile, passing through tablets and set-top boxes.
The TV screen has always been one of the most desired, and the availability of Flash on TVs and STBs is a strategic move.

We already knew that Flash would be supported by Google TV and by Sony connected TVs, but the big news is that, starting from now, every connected TV by Samsung (probably the most important TV producer) will support Flash and AIR for TV. Not only that: in a short time frame we will see on the market several Blu-ray players and STBs, all Flash enabled.

Wow, the world of applications for the living room is one of the most promising and “rich”. The income generated by the traditional TV world is still huge, and the kind of interactivity that a connected TV can provide can only enhance the business, bringing tailored advertising to a generally untargeted medium.

In this scenario Flash is the perfect mate for the big companies that want to create new business models in this market (Sony, Google and Samsung in primis) but have to fight, at the same time, the power of Apple and its vision. In this context Flash is a tool that assures the availability of millions of developers ready to create very good applications for the new app markets.

The most important feature of AIR for TV is StageVideo. StageVideo can be used instead of the classic Video object when direct hardware acceleration is desired. Using this object it is possible to access the underlying hardware to accelerate H.264 video decoding. This approach offloads video decoding from the CPU (usually a low-power RISC processor in STBs) and can guarantee excellent decoding performance. Every supported chipset will be able to decode one or multiple FullHD streams, at any level and any profile, at full frame rate without losing a single frame.

The drawback is that the video plane is composited separately from the stage rendering (StageVideo is not a child of the stage and does not belong to the DisplayObject hierarchy), so you have only a limited amount of control over it.

I think it is a fair price to pay for a perfect video experience, which is what any viewer wants from their TV set.

Fortunately I have an AIR for TV STB, so I can start to experiment with this new, exciting possibility for Flash development. I’ll give you further impressions, but what I have already seen is awesome.

Here you find detailed information about the new StageVideo API.

I conclude by saying that this kind of low-level control of hardware acceleration will soon be extended by Adobe to every platform, so you will be able to decode FullHD video in hardware, with excellent performance and very low CPU usage, on Windows, Mac, mobile and tablets. Very exciting days are ahead…

Categories: Flash, Video

Flash + H.264 = H.264 Squared – Part III

6 October 2010 18 comments

In the last post I explained my vision of an ultra-optimized video encoding workflow, with video enhancements done inside the Flash Player at run-time.

I want to conclude this series with some additional information on how to restore video details and film grain.

Details restoration

Flash Player has been able to apply predefined filters since version 8 (e.g. emboss, drop shadow, etc.). It also supports the definition of custom filters using two different techniques: convolution matrices or Pixel Bender.

Pixel Bender is a rather complex topic because you need to learn a dedicated, yet simple, language to manipulate pixels during the filtering iterations. Therefore I’ll concentrate on the simpler convolution method.

A convolution matrix is a mathematical object (aka “kernel”) used to alter a pixel depending on the values of the surrounding pixels. Usually 3×3 or 5×5 matrices are used.

This is an example of a strong sharpening kernel (the same 3×3 matrix used in the code below, applied with a divisor of 4):

-1  0 -1
 0  8  0
-1  0 -1

What do you need to implement that filter in AS3? It is indeed really simple, use this code:

// "video" is your video object istance
var filter = new flash.filters.ConvolutionFilter();
filter.matrixX = 3;
filter.matrixY = 3;
filter.matrix = [-1, 0, -1, 0, 8, 0, -1, 0, -1];
filter.bias =  0;
filter.divisor = 4;

Film Grain restoration

One of the more interesting things in my last experiment is the generation of credible video noise. During strong compression, video noise (film grain) is strongly reduced. Adding “synthetic” grain can efficiently enhance the perceived quality of the final video. Indeed, this approach is also defined in a minor annex of the H.264 standard, but it has rarely been implemented in commercial video players.

Creating credible video noise is not an easy task. I started with pre-rendered noise clips to overlay in transparency over the video, but this approach was not good. Then I tried to implement a noise generator in Pixel Bender, but it had poor performance. Finally I found a simple and self-contained method: it uses the Perlin noise generator built into the Flash Player.

Perlin noise is very different from random noise (the type we need), but with proper parametrization it can become very similar. The problem is that it is practically impossible to generate “noise frames” continuously, because the calculations for Perlin noise are very intensive. So I used a pre-rendering approach: during the player initialization I pre-calculate a number of “noise frames” and store them in separate bitmap objects. During playback the frames are overlaid on the video in sequence to create the film grain effect. Take a look at the code:

// Init: pre-render a pool of Perlin noise frames during player initialization

var baseX:Number = 1.3;
var baseY:Number = 1.3;
var numOctaves:Number = 1;
var stitch:Boolean = true;
var fractalNoise:Boolean = true;
var channelOptions:Number = BitmapDataChannel.BLUE | BitmapDataChannel.RED | BitmapDataChannel.GREEN;
var grayScale:Boolean = false;
var offsets:Array = new Array(new Point(), new Point());
var bitmapArray:Array = new Array();
var frameNumber:Number = 12;

for (var i = 0; i < frameNumber; i++) {
  var bmpData:BitmapData = new BitmapData(1280, 720);
  var bmp:Bitmap = new Bitmap(bmpData);
  bmpData.perlinNoise(baseX, baseY, numOctaves, Math.random()*65000, stitch, fractalNoise, channelOptions, grayScale, offsets);
  bmp.visible = false;
  bitmapArray.push(bmp);
  addChild(bmp); // the noise bitmaps sit above the video in the display list
}

// Noise video sequence: call alternate() on each tick to cycle the noise frames

var altcnt:Number = 0;

function alternate():void {
  altcnt++;
  if (altcnt >= frameNumber) { altcnt = 0; }
  for (var i:int = 0; i < frameNumber; i++) { bitmapArray[i].visible = false; }
  bitmapArray[altcnt].alpha = 0.05 + Math.random()*0.05; // random alpha around a target value
  bitmapArray[altcnt].visible = true;
}

Notice that every noise frame gets an alpha value that is generated randomly around a target value. This way the final effect is less repetitive and less predictable.


It is not an easy task to apply a convolution matrix to a 1280×720 H.264 video and overlay a synthetic noise on top of it. It requires a lot of processing power and so it cannot be used in every scenario.

For example, it is impossible today to use it on mobile, and on desktops you need to check carefully the performance of the computer and selectively disable the effects to save processing power when the CPU is under pressure. I will speak about how to effectively measure video performance in a future post. In the next release of Flash Player we will probably be able to leverage GPU compositing and acceleration more deeply. Video enhancement is a perfect way of exploiting GPU acceleration, so I’m looking forward to a GPU-accelerated version of Pixel Bender in the Flash Player (outside it, it is already accelerated).

In the meantime, let me know about your experience with the enhanced video, reporting computer specs and performance. In my experience it works well on a Core2 Duo 2 GHz+ on Windows (on Mac it’s better to upgrade to the latest accelerated version of Flash Player).

Categories: Flash, Video