Wow. I don't know how well this actually works, but you have to be impressed with the demo!
^--- I flagged the first comment for abuse, let's see if I can claim my spot back muhahaha ! 1st Comment ... YES!
So they can stop making Vibration Reduction equipment now? Or has more drunk photography just become acceptable?
Wow, it will be interesting to see what it ends up being in the consumer product.
It's hard to determine what it does regarding image quality from this tape. Did you actually see it in action?
Cue a thousand "Zoom... enhance!" CSI jokes.

Very, very awesome though :D
This will play out just like the content-aware fill feature that was very meme-ish last year. I'm sure it works well in some fairly specific scenarios, but not enough to make it practically useful without lots of fussing around with parameters.
That is amazing! Can't wait to see that in a future release!
This video could benefit from the technology! Cool stuff though.

I wonder if this would make it possible to uncensor Japanese porn??
I tend to agree with Chris Harrington. Demos always look great. But using these kinds of tools on our own pictures is not always desirable in terms of time and/or image quality.
Who is the speaker - Sze Meng Tan?

Generating the point-spread function is the hard part. I used to do this in Matlab.
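For the curious: the simplest case, a straight-line motion PSF, can be rasterized in a few lines of numpy. This is a hypothetical sketch (function name and sampling density are my own choices, not anything from the demo or the commenter's Matlab code):

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Rasterize a straight camera movement of `length` pixels at
    `angle_deg` degrees onto a size x size grid."""
    psf = np.zeros((size, size))
    center = size // 2
    angle = np.deg2rad(angle_deg)
    # Sample the motion path densely and mark every pixel it crosses.
    for t in np.linspace(-length / 2, length / 2, num=4 * length):
        x = int(round(center + t * np.cos(angle)))
        y = int(round(center + t * np.sin(angle)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()  # normalize so the blur preserves brightness

psf = motion_psf(length=9, angle_deg=30, size=15)
```

The hard part the comment alludes to is estimating this kernel *blindly* from the blurred photo itself; building one from a known trajectory is the easy direction.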
OMG, that is so amazing. I want! :)
So how long until this tech gets on my phone? Far too many of my pictures look like the 1st one.
lol I love the delay in the commenting as everybody watches the video
Kinda feel bad for deleting so many blurry photos now...
The auto camera movement (or what I'm assuming is auto movement) was horribly distracting.
Photoshop is really depressing to use these days. PSP X4 is far faster and more advanced; it has a feature like this, as did X3. It's utterly dependent on the original image. Kudos to the marketing department for making it "secret" - reminds me of the iPhones that get "lost" on buses pre-release.
It might be very impressive, but you'd never know it from that poor quality (even at 720p) video.
+Seth Aaron I was thinking the same thing; this on a phone would be incredible; its like steady-cam on steroids. Imagine if it could utilize gyroscope data from your phone how much more accurate it could be
Saw this from a SIGGRAPH paper many years ago.
wow. Can't wait to get home and listen to this with sound!
Note: you cannot tell how good this one is because
a) low-quality video
b) he loaded "special" presets for each run ... these presets do important weighting.
c) even if the resolution were good, the display is too far away to see artifacts of the algorithm used

Note: we already have deblurring algorithms - there was no comparison for the audience. You cannot tell from the video whether the final result is an improvement at all.
This is pretty neat for recovering from camera shake. For focus blur, get a light field camera, then you can choose the focal point any time you want. I wonder if this processing technology will eat into the sales of vibration reduction lenses and tripods once it becomes available?
+Amanda Johnson : please be specific.

But you are right - as usual, Photoshop is late.

If you want to see what is available already - e.g. for GIMP users - look for "deconvolution" in the description of plugins.

Deconvolution to deblur a photo is a standard exercise for students of signal processing, physical inverse problems, and suchlike.
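The textbook version of that exercise is Wiener deconvolution. Here is a minimal numpy sketch, assuming the PSF is already known (the function name and the constant noise-to-signal ratio `k` are my own illustrative choices):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF.

    k approximates the noise-to-signal power ratio; larger values
    suppress noise amplification at the cost of sharpness.
    """
    # Pad the PSF to the image size so the FFTs line up
    # (top-left placement matches a circular blur model).
    psf_padded = np.zeros_like(blurred, dtype=float)
    psf_padded[:psf.shape[0], :psf.shape[1]] = psf
    H = np.fft.fft2(psf_padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k), applied per frequency.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))
```

The entire difficulty in the blind setting (like Adobe's demo) is that you don't know the PSF; with it in hand, the deconvolution itself really is a homework problem.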
Content Aware Fill elicited the same response when it was first presented. I appreciate Adobe's continuing work that pushes the edge of the envelope and, in the end, helps the working photographer produce better images... as long as it is used for good...
psp x3 - that would be Corel Paintshop Pro x3?
Did it explicitly calculate the point-spread function to do blind deconvolution? What did they call it?
You lot who are calling this "awesome" - could you please state how you make this determination? You realise that Photoshop already has deblurring methods, right? How do you know this is any better?
I saw this "Sneak" when he was putting the demo together - it actually works as well as it looks in the demo. It's still a prototype, but it'll be very cool if it makes it into Photoshop
a low res presentation on anti-blurring. funny!
How about integrating this function into Android, so that you can unblur pictures you've taken through an option in the Gallery?
I'm thinking this might make all those low light cell pics usable at last! I would love to see it as a Lightroom or Bridge plugin. 
Maybe the algorithm compares the blurry image with a corpus of exact image patterns and uses matching patterns to "patch" the blurry image.
I saw this in a movie a long time ago. "Enlarge this! Pick up the pixels outside of the picture, zoom in on that reflection, okay now sharpen it to make it look clear."

We are one step closer.
If they really can turn a blurry image into a sharp one without any other reference photos, American recon satellites would have finer resolutions than is physically possible now.
+Ziyuan Yao You will find that it corrects for distortion due to moving the camera. It doesn't increase the resolution like what you are suggesting.
Just challenge them to sharpen Google Maps satellite photos :-)
Meh. CSI has had this technology for 12 seasons.
Looks pretty cool.

Shame I can't afford to buy Photoshop!
I think (simplified) what it is doing is looking at the image to find a single pixel's motion and then building a camera motion "map" to reconstruct it. I'm betting it's not perfect, but better is good.

I think, just like content aware, you won't take a picture with the intent of using it. It's more of a backup plan for when you don't have a good enough shot. 
How about building this into all cameras, so that we never have a blurry image ever again?
This could be useful for a lot more than just photography if they can speed it up.
This is only possible if the information is in the picture already (which may be so if the picture was taken with a hi-res camera that wasn't completely steady). If the blurring is caused by insufficient resolution, as is most often the case, this is impossible.... You cannot "create" information which isn't there.... Or, rather, you can, but then your picture has nothing to do with what is really out there. +Ziyuan Yao is right here: if this were truly possible you would not need expensive equipment! Simply take all pictures with a 1-megapixel camera and sharpen the blurred image.... clearly ridiculous.
Wow this is so cool! What kind of algorithm is used in this demo???
thanks for the link to my recording +Vic Gundotra - it was definitely an impressive demo, the video quality doesn't do it justice
ahh but Peter - the crowd intake of breath is palpable! well captured
Good presentation. However, image de-blurring has been around for years (the Lucy-Richardson algorithm in the '70s, the Wiener filter in the '80s, blind deconvolution, etc.). With a few lines of code you can perform the magic in Matlab or Python, even with your own motion trajectory :)
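The "few lines of code" claim holds up: here is a bare-bones Richardson-Lucy iteration in Python with numpy and scipy, assuming a known PSF (the function name, starting estimate, and iteration count are my own choices for illustration):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Plain Richardson-Lucy deconvolution for a known PSF.

    Multiplicative update derived from a Poisson noise model:
    re-blur the current estimate, compare it to the observed image,
    and push the correction back through the mirrored PSF.
    """
    estimate = np.full(blurred.shape, 0.5)
    psf_mirror = psf[::-1, ::-1]  # adjoint of the blur operator
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)  # guard against divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

The multiplicative update keeps the estimate non-negative, which is why Richardson-Lucy tends to be better behaved than naive inverse filtering on noisy photos.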
There was even a Santa Claus Oh-Oh-Ohing at the 1:20 mark.
Quite amazing, I really hope that feature makes it into CS6
As someone that was in the audience, I can vouch that it was a stark difference -- definitely not conveyed by the video. Total night and day difference.
Another tool for 'photographers' to look better than they are. Extreme post-processing is killing true photography.
That would depend on how you define "true photography," eh? How much processing is too much processing? Dodging and burning your (physical) print? Doing the same in Photoshop? A Hoya filter on the camera? Using an auto-focus camera with in-body image stabilization? What if you use a digital camera's b&w setting rather than Ilford XP2 or Kodak Tri-X? How much Photoshoppery kills "true photography"? Should we all go back to wet-plate, large-format work?

I'm primarily interested in the final image the photographer presents to me, regardless what goes into getting it there. I say this as an amateur photographer who does primarily natural-life photography (never learned proper use of a flash, don't own one for my (film) SLRs) and whose digital editing skills are limited to the occasional crop. And who does a lot of work with an uncomplicated digital P&S these days.

(That's my way of saying, since I don't use PhotoShop or other digital editing software, I have no personal use for this feature.)
I could see this being utilized by local authorities.
Damn I missed this years MAX and it was in my calendar for one year...
+Lisa Hirsch This!

It always irks me to hear or read that manipulating digital images in Photoshop is somehow an abomination. This mostly from those who have never stepped into a darkroom. Look into the past of some of the great photographers and you will find multiple prints of the same photo, some wildly different due to different chems, development times, filters, crops, burns and dodges. The idea is to capture the feeling of the original scene, not necessarily to reproduce it exactly.
I wonder what the parameters he's loading do?
I am now proud to say I can be a great test subject!
The unblurrer isn't that awesome; the analyzer that identifies the shutter pattern is ... AMAZING
+Lisa Hirsch First off, I'm not against any of the modern photography technology; in fact I would never go back to film, because it takes too much time for an amateur like me. My 'issue' is that photographers now (less so the ones from film) don't portray any emotions. When I look at a photo I want to feel something - and when you are just pressing a button and making it look fancy in Photoshop, it's hollow (in a way it's the same feeling as when I see an airbrushed celebrity: you know it's fake). When you had a limited number of shots, you made it work when you pressed the shutter.
I use editing software because you can't go without it. As you mentioned, it's practically an analogue to the darkroom. Overuse gives false hope and over-processed shots.

+Jason Woods Can't disagree, I just wish it wasn't used so much for the sake of it.
I think it would be better if it was implemented directly into cameras.
Awesome!!! Can't wait to get my hands on this :D
i like the moment the audience screams WOOOW :D
That's just incredible - I have no idea how that works but I'm happy to live in ignorance so long as it works!
Damn, I want freebies too... and that awesome huge screen.

It does take me back a couple of years: I used to teach webdesign and a part of that was some basic photoshopping. The fun part of it was that all the kids had photoshop on their computers, but I as a teacher didn't have photoshop on any computer, so I was teaching a program I had no access to. Since I refused to pirate software to do my job, I ended up refusing to teach webdesign, multimedia and several other computer related courses. ...but daaa#$%& I miss Photoshop, Dreamweaver, Flash and writing CSS templates.
Reconstruction is the way to future.. (in every form, and platform..:-)
Idk what the big deal is? We've been able to do this for years. Just go to image effects>blur and then CTRL-Z :)
I thought this was already commonplace; we see it used on even worse or grainier images all the time on cop shows...

From what was shown, it is really impressive, though probably not feasible for video if it takes 5-8 seconds per frame.
Another thought, I wonder how the de-Blur handles the Gaussian blur most commonly used to obscure faces and text online and on TV...
Just reverse it, +Asbjørn Grandt. Photoshop stems from basic (de-)convolution filters and you can still use your own: just fill in the numbers you want to use for weighing the surrounding pixels. I think it's listed under Custom filters, off the top of my head. I have used custom filters for longer than Photoshop has existed and it's very handy. However, for that kind of work Matlab is much handier.

Focus blur and motion blur (not only of the camera but also of the objects in the picture) can be restored if you analyze the vectors. I guess it can be done quite fast in the future if GPUs are used, as it's a good subject for parallelism.
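The custom-filter idea above (hand-weighting the surrounding pixels) is easy to try outside Photoshop too, e.g. with scipy. The kernel values below are a standard sharpening example, not anything specific to Photoshop's Custom filter dialog:

```python
import numpy as np
from scipy.ndimage import convolve

# A classic 3x3 sharpening kernel: boost the center pixel and
# subtract the four direct neighbours. The weights sum to 1,
# so flat regions keep their original brightness.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

image = np.random.rand(8, 8)
sharpened = convolve(image, sharpen, mode="nearest")
```

Entering those nine numbers by hand is essentially what the Custom filter workflow described above amounts to.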