Miguel Algaba Borrego
Miguel Algaba Borrego's posts

Post has attachment
Occipital has launched Bridge, a Mixed Reality and Positional Tracking system for iOS devices. Don't miss the introductory video at the following link

Post has attachment
Check out the Bridge Engine demo from Occipital at CES 2016. Mixed reality for iOS devices is becoming a reality

Post has shared content
Today I calibrated one of our kinect2 (kinect-ONE) sensors. So I took the time to improve the documentation on the calibration system from my colleague, +Thiemo Wiedemeyer, for his +ROS kinect2_bridge package. Find the documentation here:

It took about 90 minutes to collect all the necessary images, and the result is very nice. Look at the difference! (See images)

On this sensor, the depth calibration found an offset of 21mm between the kinect2 depth data and the optically measured data. This is corrected from now on by the kinect2_bridge.

I tried to make the steps in the documentation on github as clear as possible, including some advice on preparation, and all the commands you need to run. Also, there are now pictures of the calibration setup using two tripods (useful if your sensor is not yet mounted on a robot), and example images before and after the calibration.
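The per-sensor depth correction described above can be pictured as a constant shift applied to every valid depth sample. A minimal sketch of that idea, where the function name is illustrative and the sign of the shift depends on the calibration result (this is not kinect2_bridge's actual code):

```python
# Sketch: applying a per-sensor depth offset found by calibration.
# The 21 mm value matches the offset reported in the post; the names
# below are illustrative, not kinect2_bridge's API.

DEPTH_OFFSET_MM = 21.0  # difference between sensor depth and optical ground truth

def correct_depth(raw_depth_mm):
    """Shift each valid raw depth sample (mm) by the calibrated offset.

    Zero is treated as 'no measurement' and left untouched.
    """
    return [d - DEPTH_OFFSET_MM if d > 0 else 0.0 for d in raw_depth_mm]

row = [0.0, 1021.0, 1521.0, 2021.0]  # one row of raw depth values; 0 = invalid
print(correct_depth(row))            # [0.0, 1000.0, 1500.0, 2000.0]
```

The real pipeline also corrects intrinsics, distortion, and the depth-to-color transform; the constant offset is just the last, simplest piece of the calibration.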

Post has shared content
Apple might be a bit late to the Virtual Reality race, but they are coming up with ideas that sound pretty good.

Here we can check out their latest patent for a VR HMD controllable with your iPhone.

Not bad, Apple! What do you think?

#apple #appletech #appletechnology #technology #technews #Macrumors #macstuff #macVR #macvirtualreality #Mactechnology #futuretech #HMD #appleHMD #VirtualrealityHMD #VRHMD #technologytoday #virtualreality #VR +AppleInsider

Post has shared content
Our new SIGGRAPH Asia paper addresses a common problem: mocap results in unnatural animations.

Mocap systems output skeletons, throwing away all the soft tissue motions of real humans. The result is a lifeless animation.

In contrast, MoSh estimates body shape, pose and soft tissue deformations directly from sparse markers. MoSh turns a mocap system into a body scanner. Animation is driven directly by the markers, preserving subtle nuance.

MoSh captures the soft tissue motions that make animations realistic, without requiring large marker sets. These motions can be amplified or attenuated, and even retargeted to new characters.

MoSh is automatic, does not need a body scan and can work with any marker set.
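The "amplified or attenuated" part can be pictured as scaling the per-vertex residual between the soft-tissue-deformed surface and the rigidly posed body. A toy sketch under that assumption; the linear model and all names here are illustrative, not MoSh's actual formulation:

```python
# Toy sketch: exaggerating soft tissue motion by scaling the per-vertex
# residual between the deformed surface and the rigidly posed body.
# This linear model and all names are illustrative, not MoSh's method.

def amplify_soft_tissue(posed, deformed, gain):
    """Blend vertex positions: gain=1 reproduces the capture,
    gain>1 exaggerates and 0<gain<1 attenuates the deformation."""
    return [(px + gain * (dx - px),
             py + gain * (dy - py),
             pz + gain * (dz - pz))
            for (px, py, pz), (dx, dy, dz) in zip(posed, deformed)]

posed    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # rigid skeletal pose
deformed = [(0.0, 0.1, 0.0), (1.0, -0.05, 0.0)] # with soft tissue motion
print(amplify_soft_tissue(posed, deformed, 2.0))
# doubling the gain doubles each vertex's displacement
```

Setting the gain to zero recovers the rigid pose, which is exactly the lifeless result the post says plain mocap produces.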

Post has shared content
PatchMatch Based Joint View Selection and Depthmap Estimation. #CVPR14
A variational inference approximation over a probabilistic graphical model, combined with PatchMatch propagation.
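The PatchMatch propagation idea behind this line of work is simple: each pixel keeps a depth hypothesis, and sweeps across the image let good hypotheses spread to neighbors whenever they score a lower matching cost. A toy 1D sketch of just that propagation step, with a stand-in cost function instead of the paper's photometric model and no random-refinement step:

```python
# Toy sketch of PatchMatch-style propagation for depth estimation:
# each pixel holds one depth hypothesis; forward and backward sweeps
# test the neighbor's hypothesis and keep whichever has lower cost.
# The cost function is a stand-in for a real photometric matching cost,
# and the random-refinement step of full PatchMatch is omitted.
import random

def cost(x, depth, true_depth):
    """Stand-in matching cost: distance to a hidden ground-truth depth."""
    return abs(depth - true_depth[x])

def patchmatch_1d(true_depth, sweeps=3, seed=0):
    rng = random.Random(seed)
    n = len(true_depth)
    est = [rng.uniform(0.0, 10.0) for _ in range(n)]     # random init
    for _ in range(sweeps):
        for x in range(1, n):                            # propagate left -> right
            if cost(x, est[x - 1], true_depth) < cost(x, est[x], true_depth):
                est[x] = est[x - 1]
        for x in range(n - 2, -1, -1):                   # propagate right -> left
            if cost(x, est[x + 1], true_depth) < cost(x, est[x], true_depth):
                est[x] = est[x + 1]
    return est

truth = [2.0] * 5 + [5.0] * 5                            # piecewise-constant scene
print(patchmatch_1d(truth))
```

Each replacement strictly lowers that pixel's cost, so the total error never increases with more sweeps; the real 2D algorithm adds random refinement, view selection, and slanted-plane hypotheses on top of this skeleton.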
