Our self-developed vSLAM visual core gives robots, drones, unmanned vehicles, and AR and VR devices spatial perception, allowing them to recognize their location much as humans do. This self-positioning capability supports 3D reconstruction of environments, environmental positioning, and obstacle avoidance. Compared with traditional navigation and obstacle avoidance, our technology helps prevent the errors and disorientation caused by reliance on fixed algorithms and changing road conditions.
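At its core, visual self-positioning means estimating the camera's own motion from how tracked image features move between frames. The sketch below is a deliberately minimal illustration of that idea, not MYNTAI's implementation: it averages feature displacements to recover a 2D translation, whereas a production vSLAM pipeline would use robust multi-view geometry (e.g. essential-matrix estimation with RANSAC) and full 6-DoF poses.

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    """Estimate frame-to-frame image translation as the mean feature
    displacement. A toy stand-in for the robust geometric solvers used
    in real vSLAM systems."""
    return np.mean(np.asarray(curr_pts, float) - np.asarray(prev_pts, float), axis=0)

class ToyVisualOdometry:
    """Accumulates per-frame translations into a global 2D pose,
    illustrating how a device localizes itself from its own camera."""

    def __init__(self):
        self.pose = np.zeros(2)  # (x, y) position in the map frame

    def update(self, prev_pts, curr_pts):
        # Features appear to move opposite to the camera's own motion,
        # so the camera's displacement is the negated feature motion.
        self.pose -= estimate_translation(prev_pts, curr_pts)
        return self.pose
```

For example, if all tracked features shift one pixel to the left between frames, the toy model concludes the camera moved one unit to the right; accumulating these increments over time yields the self-position that mapping and obstacle avoidance build on.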
Our self-developed CNN algorithm recognizes objects at speeds of up to 120 fps, identifying different objects quickly and efficiently. Combined with the ability to compare them in real time against objects stored in the cloud, and with learning and memory, it provides real-time scene recognition for robots, drones, unmanned vehicles, and AR and VR devices. Detection of humans, vehicles, and roads plays an important role in autonomous driving.
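One common way to realize "compare in real time against objects stored in the cloud" is nearest-neighbor matching of CNN feature vectors against a reference library. The following is a minimal sketch under that assumption; the `CLOUD_DB` contents, the embedding dimensionality, and the threshold are all hypothetical, and the CNN forward pass that would produce `frame_embedding` is omitted.

```python
import numpy as np

# Hypothetical cloud-hosted library of reference embeddings (class -> vector).
CLOUD_DB = {
    "human":   np.array([1.0, 0.0, 0.0]),
    "vehicle": np.array([0.0, 1.0, 0.0]),
    "road":    np.array([0.0, 0.0, 1.0]),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(frame_embedding, db=CLOUD_DB, threshold=0.5):
    """Match a per-frame CNN feature vector against the cloud references.

    Returns (label, score) for the best match, or (None, score) when no
    reference is similar enough -- the point where a learning system could
    enroll the new object into its memory.
    """
    label, score = max(((name, cosine(frame_embedding, ref))
                        for name, ref in db.items()),
                       key=lambda kv: kv[1])
    return (label, score) if score >= threshold else (None, score)
```

Because the per-frame work is one embedding plus a handful of dot products, this lookup step is cheap enough to keep pace with a high-frame-rate detector; the real bottleneck in such a design is the CNN inference itself.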
Our self-developed real-time identification algorithm, built on machine learning, CNNs, and local real-time computation, delivers efficient identification of single and multiple persons with up to 95% accuracy. Similarity detection compares photos against identities, helping enterprises build corporate databases, carry out rapid look-ups, and create security black/white lists.
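Photo-to-identity comparison of this kind is typically built on face embeddings: each enrolled person is stored as a feature vector, and a query photo is matched by similarity. The sketch below shows that pattern together with a black/white-list flag; the class name, threshold, and two-dimensional embeddings are illustrative assumptions, not the product's actual schema.

```python
import numpy as np

def face_similarity(emb_a, emb_b):
    """Cosine similarity between two face embeddings (illustrative)."""
    a, b = np.asarray(emb_a, float), np.asarray(emb_b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class IdentityStore:
    """Toy corporate database supporting enrollment, rapid look-ups,
    and a security blacklist flag on matched identities."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # minimum similarity to accept a match
        self.records = {}           # name -> reference embedding
        self.blacklist = set()

    def enroll(self, name, embedding, blacklisted=False):
        self.records[name] = np.asarray(embedding, float)
        if blacklisted:
            self.blacklist.add(name)

    def lookup(self, embedding):
        """Return (name, flagged) for the best match above threshold,
        or (None, False) when the face is unknown."""
        best, score = None, -1.0
        for name, ref in self.records.items():
            s = face_similarity(embedding, ref)
            if s > score:
                best, score = name, s
        if score < self.threshold:
            return None, False
        return best, best in self.blacklist
```

Multiple-person identification then reduces to running this lookup once per detected face in the frame; the threshold trades off false accepts against false rejects and would be tuned against the claimed accuracy target.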
In addition to core technology, MYNTAI provides a range of hardware and software products, including vSLAM vision modules, binocular (stereo) solutions, binocular camera modules, and robot products based on visual technology. We offer exclusive, customized visual solutions to upstream and downstream enterprises through a multi-step strategy that integrates software and hardware.