Two fundamental capabilities are required to endow a mobile robot with autonomy: SLAM (simultaneous localization and mapping) and 3D obstacle avoidance. Without an initial map or an absolute localization source, the robot must solve the localization and mapping problems concurrently. For this purpose, vision is a powerful sensing modality, because it provides data from which stable features can be extracted and matched as the robot moves. Among vision sensors, the stereo camera directly provides 3D information, which can be used to estimate the geometry of the environment. In addition, a stereo camera delivers a high-resolution depth map in real time, making it an ideal sensor for 3D obstacle avoidance.
A traditional stereo camera relies on texture and features on objects to match pixels between the two views and compute depth. This makes it the only 3D vision sensor that works both indoors and outdoors, but it struggles with textureless indoor walls and dark environments. We have developed new 3D vision sensors that combine stereo cameras with active structured light and motion sensors, so they handle all of these corner cases; the result is an accurate and the most affordable sensor for SLAM and 3D obstacle avoidance on robots and cars.
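The texture dependence described above is easy to see in code. Below is a minimal pure-NumPy sketch of stereo block matching on a synthetic image pair: the left view is random texture, the right view is the same scene shifted by a known disparity, and depth follows from triangulation as focal length times baseline divided by disparity. All parameters here are illustrative, not actual MYNT camera specifications.

```python
import numpy as np

# Illustrative parameters only -- not real MYNT camera specifications.
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.12   # distance between the two cameras, in metres
TRUE_DISP = 8       # disparity baked into the synthetic scene
MAX_DISP = 16       # disparity search range
WIN = 5             # half-width of the matching window

rng = np.random.default_rng(0)
# A richly textured left image: block matching needs this texture to work.
# On a textureless wall, many disparities would give near-identical costs.
left = rng.uniform(0.0, 255.0, size=(40, 80))
# The right camera sees every point TRUE_DISP pixels further left.
right = np.roll(left, -TRUE_DISP, axis=1)

def match_pixel(row, col):
    """Return the disparity minimising the sum-of-absolute-differences cost."""
    patch_l = left[row - WIN:row + WIN + 1, col - WIN:col + WIN + 1]
    best_d, best_cost = 0, np.inf
    for d in range(MAX_DISP):
        patch_r = right[row - WIN:row + WIN + 1,
                        col - d - WIN:col - d + WIN + 1]
        cost = np.abs(patch_l - patch_r).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

d = match_pixel(20, 40)
depth_m = FOCAL_PX * BASELINE_M / d   # triangulation: depth = f * B / d
```

Because the synthetic texture is rich, the matcher recovers the true disparity exactly; replacing `left` with a constant image would make every disparity cost zero, which is precisely the failure mode that active structured light resolves by projecting texture onto the scene.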
Our object detection and recognition algorithms, based on Convolutional Neural Networks (CNNs), can recognize hundreds of objects in real time at up to 120 fps. You can use our deep learning models to train on the objects your application cares about.
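Whatever the CNN backbone, a real-time detection pipeline typically ends with a post-processing step: discard low-confidence boxes, then apply non-maximum suppression so each object is reported once. The sketch below shows that step in plain NumPy; the thresholds and box values are illustrative, not our production settings.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def postprocess(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Drop low-confidence boxes, then greedily suppress overlapping ones."""
    keep = []
    for i in np.argsort(scores)[::-1]:      # strongest detection first
        if scores[i] < score_thr:
            break                           # the rest are weaker still
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(int(i))
    return keep

# Two boxes on the same object plus one distinct object: the duplicate
# (heavy overlap, lower score) is suppressed.
boxes = np.array([[0., 0., 10., 10.],
                  [1., 1., 11., 11.],
                  [50., 50., 60., 60.]])
scores = np.array([0.9, 0.8, 0.7])
kept = postprocess(boxes, scores)
```

This greedy pass is cheap relative to the network forward pass, which is why the end-to-end frame rate is dominated by how well the CNN itself is optimized for the target embedded processor.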
We have also developed real-time face recognition based on deep learning, optimized for the embedded processors that most mobile robots use.
At MYNT AI, our strength is designing and manufacturing the industry's state-of-the-art stereo camera hardware and developing VIO and VSLAM software optimized for our cameras and for the embedded processors used on robots. You will find that our stereo cameras have the widest open-source SLAM support, and that our SLAM module delivers the best stability and performance on processors that customers can afford.