Thomas B. Myers
672 followers
Proud dad. :) Humble Husband. Medical Imaging Geek. Entrepreneur. Avid Fly Fisherman. Usually in that order. :)

Posts
Post is pinned. Post has attachment
Forming the edge and tip of the blade....
Post has attachment
Norwegian lake sunset....
Post has attachment
Midsummer's bonfire.
Post has shared content
Awesome. 😀
First a shout out to old Tango friends – yup, I stayed with it, I caught it, and I beat it. That said, it was f**g scary and most of the time I thought I was going to die an ignominious death.

This video is the result of a very long strange trip, starting with when I was invited into the Google Tango project, and then starting in earnest when Google steadfastly refused to do a decent job and I wandered off like a 3-year-old going “Imma do it myself!” I ended up focusing on inexpensive edge AI stereo machine vision lower down the stack than the more commonly encountered segmentation and classification activities, though I now plan on romping about extensively with them using this data.

This shows how my system processes the parallax error it locates in left and right views in a stereo camera. There’s a vicious twist right out of the gate in that the two physical cameras are fused into a single synthetic camera based on the overlapping FoV. It was an interesting day when I discovered you still had to calibrate a synthetic camera, even if it was consuming fully rectified data from the physical cameras.
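For the curious, here's a rough numpy sketch of one way that overlap-based fusion could look – the function and variable names are mine, and the real pipeline surely does far more:

```python
import numpy as np

def fuse_rectified_pair(K, width_px, baseline_m, min_depth_m):
    # Toy fusion: given a rectified stereo pair sharing intrinsics K,
    # find the column range BOTH cameras see for everything at or
    # beyond min_depth_m, and build a synthetic camera re-centered
    # on that overlap.
    f = K[0, 0]
    # Largest disparity we care about, at the nearest depth of interest.
    d_max = f * baseline_m / min_depth_m
    # Left image sees the overlap in columns [d_max, width);
    # right image sees it in columns [0, width - d_max).
    overlap_cols = int(width_px - d_max)
    # Same focal length, principal point moved to the middle of the
    # shared region -- and this synthetic camera still needs its own
    # calibration pass before you can trust it.
    K_syn = K.copy()
    K_syn[0, 2] = overlap_cols / 2.0
    return K_syn, overlap_cols
```

The point being: the synthetic camera only "exists" over the columns both physical cameras can see, which is part of why it still wants its own calibration even on fully rectified input.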

For every image it uses a bunch of math to calculate the relationships with adjacent off-axis images, about 5 degrees to the right and left. In other words, each image's parallax data is a definite integral from -5 degrees to +5 degrees around the axis where the camera is currently aimed. Of course, there's also a homography that has to work inside that integral in order to correctly remap all of the off-axis coordinates. It's a prickly beast.
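Mechanically, that per-image integral might be sketched like this – purely illustrative, with the sample function standing in for the actual parallax measurement; the one real piece is that H = K R K⁻¹ is the standard rotation-only homography:

```python
import numpy as np

def rotation_homography(K, theta_deg):
    # Standard result: for a pure rotation R, image points remap
    # through H = K R K^-1 (no depth dependence).
    t = np.deg2rad(theta_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return K @ R @ np.linalg.inv(K)

def integrate_offaxis(K, sample_fn, half_range_deg=5.0, steps=11):
    # Approximate the definite integral of per-angle data from
    # -half_range to +half_range with a trapezoidal sum, handing each
    # sample the homography that remaps that view's coordinates.
    # sample_fn(H, theta) is a stand-in for the real parallax measurement.
    thetas = np.linspace(-half_range_deg, half_range_deg, steps)
    samples = np.array([sample_fn(rotation_homography(K, th), th)
                        for th in thetas])
    dt = thetas[1] - thetas[0]
    return dt * (samples.sum() - 0.5 * (samples[0] + samples[-1]))
```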

Where it measures parallax error with the left camera it paints it blue, and where it measures parallax error with the right camera it paints it red. As the color changes to magenta and becomes more intense, you're looking at the very high accuracy data, i.e. it's seeing it with both cameras from a number of angles and everything keeps agreeing. It processes this data first, and then uses the weaker magenta points and the blue and red points like small children in the kitchen – they may be useful, but you can't quite trust them.
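The painting scheme itself is simple enough to sketch (array names and the agreement score are my illustrative guesses, not the actual code):

```python
import numpy as np

def paint_parallax(err_left, err_right):
    # Left-camera parallax drives the blue channel, right-camera
    # drives red; where both agree the pixel comes out magenta.
    h, w = err_left.shape
    img = np.zeros((h, w, 3), dtype=np.float32)
    img[..., 0] = err_right        # red   <- right camera
    img[..., 2] = err_left         # blue  <- left camera
    # Trust only what BOTH cameras report: a simple agreement score.
    confidence = np.minimum(err_left, err_right)
    return img, confidence
```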

I only use about a 5 degree off-axis range with these cameras, as they're cheap, thermally noisy and have poor lenses. Penalties of self-funded research, I guess. As you watch the colored pixels shift between red, blue and magenta, you can see more easily how error is cancelled and concordance is merged. One thing to keep in mind is that there's a true parallax measurement behind each of those colored pixels, but to see that in bulk takes a movie of vector fields, and trust me, those can be profoundly disorienting. I don't need hate mail from people who barfed on their keyboards.

I’m looking forward to making another version where the computers and cameras get jumped up, for example switching the Raspberry Pis out for bigger SoCs and giving it a couple of high-end Nikon backs and lenses for imagers. Should be very interesting, and the visual resolution and tracking should approach bone-chilling.

One thing I’ve deduced on this journey is that the deep levels of this visual cortex (all this is, is the deep levels – Googz is playing in the higher levels and I’ll just stand on their shoulders for that) are probably similar to the deep levels of biological stacks. They’re target acquisition and tracking machines. Nothing more, nothing less. They rapidly identify deltas in the stereo image, and they track and combine those. I’ve worked on letting the head run active tracking on me, and it’s an odd experience. Simplifying the math behind it to a ridiculous degree, it’s just running the servos to make me as magenta as it possibly can, and it does a damn good job at it. The only time I managed to get it to lose me, I crashed into the tool chest and it broke a servo axle trying to keep up and then stop.
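That "make me as magenta as possible" servo loop, simplified to absurdity, might look like the toy below – my names, my gains, a proportional controller where the real system is certainly doing something smarter:

```python
import numpy as np

def servo_step(confidence, gain=0.01):
    # One tick of a toy pan/tilt loop: steer toward the centroid of
    # the high-confidence (magenta) pixels so the target stays centered.
    h, w = confidence.shape
    total = confidence.sum()
    if total == 0:
        return 0.0, 0.0            # nothing to track; hold position
    ys, xs = np.mgrid[0:h, 0:w]
    cx = (xs * confidence).sum() / total
    cy = (ys * confidence).sum() / total
    # Proportional command from the centroid's offset from image center.
    pan = gain * (cx - (w - 1) / 2.0)
    tilt = gain * (cy - (h - 1) / 2.0)
    return pan, tilt
```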

In closing, it’s kind of bittersweet to be delivering this at last and knowing Tango died early this year. It never should have happened; there was talent and skill, but it was unfortunately overwhelmed by ineptness and arrogance. That is one reason I do love working in ML and MV: it is absolutely unforgiving to BS. I’ve got my popcorn stored up for the inevitable implosions to come, as ML ‘players’ ignore this not just to their peril, but to their absolute destruction. ML is incredibly powerful, and it can be fiendishly complex. As to what it will do in any new situation, all you’ve got is a wing and a prayer. If you pay attention, if you are rigorous in your model building, if you don’t contribute additional error to a system already trying to suppress the error pouring in, victory may be yours. Fail to do that and you will almost certainly die, it will be unpleasant, and it will hurt the entire time it’s happening.

For those going “cool, I wanna go do this too,” I can offer one item of note. What I didn’t know when I started was exactly how important calibration was. Calibration in this world is the attempt to solve for n^2 unknowns with n equations – only slightly tongue in cheek. Calibration will rule your every waking moment, and much of your dreams as well. If you do not calibrate, then all you do is add more noise, and the whole house of cards keeps falling over and bursting into flames. Success in this space is driven by the quality of your calibration; it is the absolute limit on what you can achieve.
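To make the unknowns-versus-equations point concrete, here's a toy that goes the lucky direction – one unknown, many observations – solving just the focal length of an ideal pinhole camera by least squares. A real calibration has far more unknowns (principal point, distortion, extrinsics) and nothing like this closed form:

```python
import numpy as np

def fit_focal_length(points_3d, points_2d, cx, cy):
    # Pinhole model: x - cx = f * X/Z and y - cy = f * Y/Z, both
    # linear in the single unknown f, so n points give 2n equations.
    X, Y, Z = points_3d.T
    u = np.concatenate([points_2d[:, 0] - cx, points_2d[:, 1] - cy])
    a = np.concatenate([X / Z, Y / Z])
    # Closed-form least squares for a*f ~= u: every extra observation
    # beats the noise down instead of adding a new unknown.
    return (a @ u) / (a @ a)
```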

https://youtu.be/byfJ-lt6Eus
Post has shared content
Nicely done Lubos! Tango is Dead! Long live Tango/ARCore!
Let's go back to the future. Today I released the June version of the 3D Scanner for ARCore. The download link is: https://play.google.com/store/apps/details?id=com.lvonasek.arcore3dscanner

This version is a big milestone for me. I fixed all major issues regarding device compatibility. Now all ARCore-compatible Android devices should work with my app: https://developers.google.com/ar/discover/supported-devices

As you can see, outdoor scanning is reaching almost the same quality as Tango: https://skfb.ly/6yYzE

Indoors, the improvement is also big, but it is still not good enough. I had to disable passive depth sensing for places without feature points. I will have to redesign that part of the algorithm to make it work again. For now, walls and floors without visible patterns are not scannable.

The depth sensor and the stereo camera system are not supported by ARCore yet. There is a feature request for it on GitHub, and you can support me (and others) here: https://github.com/google-ar/arcore-android-sdk/issues/120
Post has shared content
Very impressive!

The video shows a group of high-quality 3D models created using photogrammetry.

Photogrammetry is a method of 3D scanning. To create such a 3D model, you have to capture many high-resolution photos of the object, and then use PC software to generate the 3D model.

It may sound easy, but it is not. Photogrammetry requires captures from specific angles and distances, and the 3D model computation can take many hours – in some cases, even days.

There are other, faster methods to create 3D models. There are smartphones with a 3D sensor that make it possible to create a 3D model of a whole room in less than 5 minutes. However, 3D models scanned by smartphones lack good quality.

In 2017, we developed the best-rated 3D scanning app for Android smartphones using Google Tango. Google Tango devices use the 1st generation of mobile 3D sensors, so the quality may not be the best.

The 2nd generation of mobile 3D sensors is already available, but not yet for end customers. We believe that with a 2nd generation sensor on smartphones, we can reach a 3D scan quality comparable to the one you are watching in this video.

But, as the 2nd generation 3D sensor provides much more data than the first, it is more complicated to process the data on a mobile device. We are working hard to make it possible.

What we need for further development is a mobile device with a 2nd generation 3D sensor. We are searching for manufacturers who would provide us with a device prototype we can test on.

Post has attachment
Vancouver!
Is anyone else experiencing a Project Fi outage right now?

I can't get in touch with support over landline or Wi-Fi...

Says my SIM card is no longer active.

Seems MIA right now...

Bueller? Bueller? Anyone?
Looks like Fi is down for me in MSP? Anyone else?

No connection to the 1-(844) 825-5234 number from landline or from Wifi.

Support Chat is offline.

SIM card says it is no longer active.

Anyone else having issues?
Post has shared content
Congratulations on the long and dedicated road to bringing technology to bear on making a difference in people's lives!
I live to help people, which is why I became involved in the Tango project at the beginning.
Over the last three years my colleagues and I have been working with Tango technology to help those less fortunate than ourselves, and we are presenting the results of our work tomorrow at the University of Copenhagen in Denmark. All are welcome.
http://www.ibos.dk/konferencer/konference-teknologiske-hjaelpemidler-til-mennesker-med-synshandicap.html