Post has attachment
VisualData (www.visualdata.io) is a search engine for computer vision datasets, together with models and code.

Each dataset is tagged with related topics, so you can easily filter and search for the datasets you are interested in working with.

You can also sign in to add your own dataset for others to use. Currently, only public datasets are allowed.

All the dataset information is hand curated and we are constantly adding more!
8/23/17

Post has attachment

How do you track and detect multiple moving objects in real time?
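One common baseline answer (a sketch, not a full tracker) is frame differencing followed by connected-component labeling: subtract consecutive frames, threshold the difference, and group the changed pixels into blobs. Production systems would typically use OpenCV's background subtractors (e.g. MOG2) plus a tracker; the pure-NumPy version below with an illustrative `detect_moving_objects` helper just shows the idea.

```python
import numpy as np
from collections import deque

def detect_moving_objects(prev_frame, frame, thresh=30, min_area=20):
    """Detect moving regions by frame differencing.

    Returns a list of (x, y, w, h) bounding boxes, one per moving blob
    with at least `min_area` changed pixels.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    labels = np.zeros(mask.shape, dtype=np.int32)
    boxes = []
    next_label = 0
    h, w = mask.shape
    # Simple 4-connected component labeling via BFS flood fill.
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                ys, xs = [sy], [sx]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                if len(ys) >= min_area:
                    boxes.append((min(xs), min(ys),
                                  max(xs) - min(xs) + 1,
                                  max(ys) - min(ys) + 1))
    return boxes
```

Tracking across frames would then associate each new box with the nearest box from the previous frame (or feed the boxes to a Kalman filter); detection and association are the two halves of the real-time problem.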

Post has shared content
The point cloud generated by Apple ARKit on an iPhone/iPad without a 3-D camera is highly noisy.

However, one advantage of the current combination of ARKit with iPhone/iPad is that the measurement distance is unlimited, provided you can translate your iPhone/iPad far enough.

You could even measure the moon with an iPhone/iPad IF you could translate it (i.e. move it without rotating) far enough.

Passive 3-D measuring methods (photogrammetry, stereo vision, multi-view geometry) will dominate within 5 years. Active methods (laser scanners) will disappear.
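The claim that range grows with how far you translate the device follows from stereo triangulation: for focal length f (in pixels), baseline B (the translation between viewpoints), and measured disparity d (in pixels), depth is Z = f·B/d. Since the smallest measurable disparity is bounded below, the maximum measurable depth scales linearly with the baseline. The numbers below are purely illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth from a stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative: with a 1000-pixel focal length and a 1-pixel minimum
# measurable disparity, a 0.1 m baseline tops out around 100 m of depth,
# while a 100 m baseline (a long walk with the phone) reaches 100 km.
near = depth_from_disparity(1000, 0.1, 1)    # 100.0 m
far = depth_from_disparity(1000, 100.0, 1)   # 100000.0 m
```

This is why translating (not rotating) matters: pure rotation adds no baseline and therefore no depth information.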

Post has attachment
We have demonstrated how CurvSurf FindSurface works with Google Tango, Intel RealSense, and Microsoft HoloLens for real-time 3-D data streaming.
https://www.youtube.com/c/CurvSurf
https://developers.curvsurf.com/docu.jsp


In the coming months, CurvSurf will reveal:

1. A video demonstrating an AR application built on Apple ARKit. In our experience, the point cloud from Apple ARKit on the latest iPhone/iPad (which lacks a 3-D camera option) is highly noisy. Nevertheless, CurvSurf FindSurface may still extract geometric information from such a noisy point cloud.

2. Autodesk Revit plugin source code in C# for automatic feature extraction from point clouds using CurvSurf FindSurface.

3. Real-time tracking & occlusion C/C++ code for the Intel RealSense ZR300.

Joon

Post has attachment
Built-in fonts of OpenCV aren't enough for you? Learn how to use your own fonts.
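OpenCV's `cv2.putText` only supports the built-in Hershey vector fonts, so a common workaround (a sketch of the general technique, not necessarily the linked tutorial's exact method) is to render the text with Pillow and convert back to a NumPy image. The `draw_text` helper below is illustrative; in practice you would pass `ImageFont.truetype("YourFont.ttf", size)` instead of Pillow's built-in bitmap font used here as a stand-in.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def draw_text(img_bgr, text, xy, font=None, color=(255, 255, 255)):
    """Render `text` onto a BGR NumPy image using a Pillow font.

    `font` would normally be ImageFont.truetype("YourFont.ttf", size);
    Pillow's default bitmap font is used here as a stand-in.
    """
    pil_img = Image.fromarray(img_bgr[:, :, ::-1])  # BGR -> RGB
    draw = ImageDraw.Draw(pil_img)
    draw.text(xy, text, font=font or ImageFont.load_default(), fill=color)
    return np.asarray(pil_img)[:, :, ::-1].copy()   # RGB -> BGR

canvas = np.zeros((40, 200, 3), dtype=np.uint8)    # black BGR canvas
out = draw_text(canvas, "Hello", (5, 5))           # white text drawn on it
```

The round trip through Pillow costs one copy per frame, so for video you would typically restrict it to the text region rather than the whole frame.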

Post has attachment
e-CAM131_CUTX2 - 4K MIPI NVIDIA® Jetson TX2 Camera Board: https://goo.gl/VJT7m8

Post has attachment
Learn how to build your own Snapchat-like image overlays with Dlib, OpenCV, and Python.
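Such filters have two parts: dlib's facial landmark detector supplies the placement (e.g. eye positions for glasses), and the sticker is alpha-blended onto the frame at that location. Loading a landmark model is environment-specific, so the sketch below covers only the compositing step, with an illustrative `overlay_rgba` helper; the coordinates would come from the landmarks in a real filter.

```python
import numpy as np

def overlay_rgba(frame, sticker_rgba, x, y):
    """Alpha-blend an RGBA sticker onto a frame at top-left (x, y).

    In a real filter, (x, y) and the sticker scale would come from dlib's
    68-point facial landmarks (e.g. the eye corners for a glasses sticker).
    Assumes the sticker fits entirely inside the frame, and that the
    sticker's first three channels match the frame's channel order.
    """
    h, w = sticker_rgba.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    rgb = sticker_rgba[:, :, :3].astype(np.float32)
    alpha = sticker_rgba[:, :, 3:4].astype(np.float32) / 255.0  # (h, w, 1)
    blended = alpha * rgb + (1.0 - alpha) * roi
    frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame
```

The per-pixel alpha is what makes the result look like a sticker rather than a pasted rectangle: fully transparent sticker pixels leave the underlying face untouched.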

Post has attachment
Two simple approaches for implementing visual recognition. No previous #MachineLearning experience required!

https://medium.com/ibm-developer-advocacy/visual-recognition-378dd49ee272

Post has shared content
Apple has recently announced ARKit, a promising SDK for AR development that is simple and easy to use. Apple has trimmed off all the subjects that bother developers. ARKit is based on visual-inertial odometry and provides developers with motion tracking and plane detection, the two core functionalities for developing an AR application. Additionally, a sparse point cloud is generated as a by-product of motion tracking.

- Size of audience: ARKit > Tango > HoloLens
- Simplicity of hardware: ARKit > Tango > HoloLens
- Convenience in development: ARKit > Tango > HoloLens
- Potential for application: HoloLens > Tango > ARKit
- Motion tracking accuracy: HoloLens > Tango > ARKit
- Motion tracking speed: ARKit > HoloLens > Tango
- Point cloud density: Tango > HoloLens > ARKit

The weakness of ARKit originates from the minimal hardware requirement of visual-inertial odometry, i.e. a plain single moving camera. Even though users need not be aware of it, a B&W fish-eye camera (as with Tango) or multiple cameras (as with HoloLens) are necessary to increase the accuracy of motion tracking.