[Photo caption: Carnegie Mellon University's TagBots use Intel boards.]
With advances in microchip design, nanotechnology, software architecture, and miniature power cells, robot systems can be more than just another pair of eyes. Already being tested and deployed in a variety of applications, they can traverse difficult, even dangerous environments and perform complex tasks on their own. For example, mil-spec iRobot PackBots have been used in Afghanistan to detect and map the locations and contents of caves. Another iRobot rover was used in the historic exploration of the southern and northern shafts leading to the Queen's Chamber in the Great Pyramid at Giza, Egypt. The rover illuminated areas beyond the blocking stones in the shafts, which had last been viewed by human eyes some 4,500 years ago.
Robot mobility issues
Regardless of a robot's design or tasks, its mobility raises three main issues:
* Localization: How does a robot know where it is in its environment?
* Mapping: How does a robot know the details of its environment?
* Navigation: How does a robot traverse its environment?
Intel works closely with researchers to identify novel ways for a robot to perform these mobility tasks. The company is particularly interested in machine-vision libraries that can perform localization and mapping from monocular- or stereo-vision systems. Today, most robots navigate by using infrared or radio waves simply to avoid objects in their paths; vision-based approaches promise far richer information about the surroundings. Intel software researchers recently developed several libraries that apply directly to robotic systems, and Intel's computer vision library is already used extensively by vision researchers.
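As a rough illustration of the kind of stereo-vision processing such libraries enable, here is a minimal sketch using OpenCV, the open-source library that grew out of Intel's computer vision work. The file names, camera parameters, and tuning values are illustrative assumptions, not taken from any Intel robotics code.

```python
# A minimal sketch of stereo depth estimation with OpenCV (cv2).
# File names, camera parameters, and tuning values below are
# illustrative assumptions.
import cv2
import numpy as np

# Load a rectified stereo pair as grayscale images (hypothetical files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo correspondence: for each pixel in the left
# image, find the best horizontal match in the right image.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity:
#   depth = focal_length * baseline / disparity
focal_length_px = 700.0   # assumed focal length, in pixels
baseline_m = 0.12         # assumed distance between cameras, in meters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]

# A robot could threshold this depth map to flag nearby obstacles.
print("Nearest obstacle: %.2f m" % depth[valid].min())
```

The resulting depth map gives a robot exactly the kind of geometric detail needed for the mapping and navigation tasks listed above, something infrared or radio proximity sensing cannot match.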
Intel has also released a test version of a technical library for building Bayesian networks to support machine-learning work. Bayesian networks are a form of probability-based artificial intelligence that reasons under uncertainty. Such a network could let a robot navigate by matching noisy sensor data against a map stored in its memory.
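To make the idea concrete, the sketch below implements a simple one-dimensional Bayes filter in plain Python. It is not Intel's Bayesian-network library, and it uses a basic Bayes filter rather than a full Bayesian network, but it shows the core inference step: weighting each possible location by how well a (hypothetical) door-sensor reading matches the stored map.

```python
# A minimal 1-D Bayes filter sketch: the robot matches a door sensor
# against a stored corridor map to estimate where it is. Plain Python
# for illustration only; not Intel's Bayesian-network library.

# Map of a corridor: True marks cells that contain a door.
corridor = [True, False, False, True, False, False, False, True]

# Start with a uniform belief: the robot could be in any cell.
belief = [1.0 / len(corridor)] * len(corridor)

SENSOR_ACCURACY = 0.9  # assumed probability the door sensor is correct

def sense(belief, saw_door):
    """Bayes update: weight each cell by how well it explains the reading."""
    posterior = []
    for prob, has_door in zip(belief, corridor):
        match = (saw_door == has_door)
        likelihood = SENSOR_ACCURACY if match else 1.0 - SENSOR_ACCURACY
        posterior.append(prob * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]  # normalize to sum to 1

def move(belief, steps):
    """Shift the belief as the robot moves (exact motion assumed)."""
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

# The robot sees a door, moves one cell, then sees no door.
belief = sense(belief, saw_door=True)
belief = move(belief, 1)
belief = sense(belief, saw_door=False)
print("Most likely cell:", max(range(len(belief)), key=lambda i: belief[i]))
```

After just two sensor readings, the belief concentrates on the cells consistent with both observations, which is the essence of probabilistic localization against a stored map.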