I predict that in the future, we'll see many of those videos from the low-res "dark ages" of video in high-def, thanks to very clever video reconstruction technology.
 
So, extrapolating from those thoughts, would you expect future movies to be filmed with one or more cameras, with a computer building a 3D model, which is what the audience would then watch?
Which is something we're almost doing already (since having depth & lighting information makes it faster to add CG). I could imagine having the "full 3D" animated model of the scene, letting the director make after-the-fact decisions about camera angles, zooms, and pans; a rough sketch of that idea follows.
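
(A minimal sketch of that "after-the-fact camera" idea, assuming you already have a per-pixel depth map and a simple pinhole camera model. All names, parameters, and the model itself are illustrative assumptions, not any real production pipeline: back-project a captured frame to a 3D point cloud, then re-project it through whatever virtual camera the director picks.)

```python
# Toy sketch: render a captured frame from a new virtual camera, given its
# per-pixel depth. Pinhole model with focal lengths (fx, fy) and principal
# point (cx, cy); R, t are the new camera's rotation and translation.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def rerender(points, colors, R, t, fx, fy, cx, cy, h, w):
    """Project the point cloud through a new camera pose, z-buffered."""
    cam = points @ R.T + t                 # move points into the new camera's frame
    z = cam[:, 2]
    keep = z > 1e-6                        # drop points behind the camera
    u = np.round(cam[keep, 0] * fx / z[keep] + cx).astype(int)
    v = np.round(cam[keep, 1] * fy / z[keep] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v = u[inside], v[inside]
    zc, col = z[keep][inside], colors[keep][inside]
    image = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    for ui, vi, zi, ci in zip(u, v, zc, col):  # nearest point wins each pixel
        if zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image  # holes remain where the new view sees occluded geometry
```

(Colors would come from the original frame, e.g. `frame.reshape(-1, 3)`; the holes are exactly why real pipelines capture from more than one camera.)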

Or perhaps we'll just do away with physical filming altogether? Considering that there were some pretty impressive close-ups of Davy Jones's eyes in the Pirates movies, which were entirely CG, I could imagine someone grabbing a model of a set (or walking a camera through the set and letting the computer construct a model), then adding the mocap actors, loading the model of the actor they want, and dropping in the voice track. DIY movie. "Oh, let's put Hugh Jackman in as the main character instead of Will Smith. See if it looks any better."

(I look forward to that future, where movies have to compete on story & imagination rather than big-name actors & budgets.)
 
Well, a lot of them were actually shot on film; those can gain a lot just from rescanning at higher resolution than NTSC. And of course any CGI-heavy show could benefit as well, at least partially.

But, yeah, the same techniques that can turn 2D into 3D can also be used to make low-def look better.

(Shall I rant about OTA, cable, and satellite "high def"?)
 
For what was shot on film, of course they can remaster. I expect to see this tech applied even to people's home movies. In that case you won't build digital actors, but you might integrate still photos with complex video analysis and video restoration to produce something better than what existed before.
 
Yes, it's the sub-pixel work (which I first saw in photos from space) that led me to this prediction. The leap is to take the reconstructed still images and figure out how to animate them based on the original video, even as the angles of view are moving. It may mean things are blurry when they spin or move, but sharpen up when more still, like a slow exposure rate. The computation will get cheaper; this parallelizes nicely.
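
(To make the sub-pixel idea concrete, here's a minimal sketch of the classic "shift-and-add" flavor of multi-frame super-resolution. It assumes each frame's fractional shift is already known; a real system would estimate those shifts by registering the frames. The function name and parameters are illustrative.)

```python
# Toy sketch: several low-res frames of the same scene, each offset by a
# fraction of a pixel, are accumulated onto a finer grid. The sub-pixel
# offsets are what let detail beyond one frame's resolution be recovered.
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of (H, W) arrays; shifts: (dy, dx) per frame, in
    low-res pixels; scale: integer upsampling factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest high-res grid cell.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        np.add.at(acc, np.ix_(ys, xs), frame)
        np.add.at(weight, np.ix_(ys, xs), 1.0)
    filled = weight > 0
    acc[filled] /= weight[filled]
    return acc  # cells no frame landed on stay empty (would be interpolated)
```

(Each extra frame with a distinct fractional shift fills in more cells of the fine grid, which is exactly the blur trade-off above: parts of the scene that hold still accumulate samples and get sharp, while fast motion doesn't. And since each output region depends only on its own samples, the work parallelizes nicely.)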