Google It All

Friday, May 22, 2009

Artificial Nose: Mimicking the Human Sense of Smell with Computerized Sensor Technology

Introduction

Artificial eyes, ears, and noses promise stronger, safer troops. In the human nose, a layer of mucus dissolves the arriving scents and separates out different odor molecules so that they arrive at the receptors at different speeds and times. The brain is able to interpret this pattern to distinguish a diverse range of smells.

In contrast, an artificial nose consists of a much smaller array of chemical sensors, typically between six and 12, connected to a computer or neural network capable of recognizing patterns of molecules.

A neural network is a collection of computer processors that function in a similar way to a simple animal brain. The human nose contains more than 100 million receptors, yet it doesn't have a specific receptor for the smell of roses; instead it detects a particular mixture of sweet, sour, and floral notes, which the brain recognizes as a rose. Similarly, the Tufts artificial nose has 16 fluorescent sensor strips, each sensitive to a different range of molecules, and a computer that interprets their response pattern to determine whether or not they have sniffed a mine. While this method can be better at filtering out false alarms than the Fido approach, it may not be quite as sensitive to explosives-related chemicals.
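As a rough illustration of this pattern-matching idea, the sketch below classifies a hypothetical sensor-array response by comparing it against stored reference patterns. All sensor values and odor labels here are invented for illustration; a real e-nose would use more sensors and a trained classifier rather than nearest-neighbor matching.

```python
import math

# Hypothetical reference response patterns for a 4-sensor array.
REFERENCE = {
    "rose":  [0.9, 0.2, 0.7, 0.1],
    "mine":  [0.1, 0.8, 0.2, 0.9],
    "clean": [0.1, 0.1, 0.1, 0.1],
}

def classify(response):
    """Return the reference odor whose pattern is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE, key=lambda name: dist(REFERENCE[name], response))

print(classify([0.85, 0.25, 0.65, 0.15]))  # prints "rose"
```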

Initially developed as laboratory instruments, electronic noses that mimic the human sense of smell are moving into food, beverage, medical, and environmental applications. Researchers and manufacturers alike have long envisioned creating devices that can 'smell' odors in many different applications. Thanks to recent advances in organic chemistry, sensor technology, electronics, and artificial intelligence, the measurement and characterization of aromas by electronic noses (or e-noses) is on the verge of becoming a commercial reality.

Research Topic: Distributed Detection for Smart Sensor Networks

Distributed Detection and Estimation

The literature on distributed detection and estimation is quite extensive, including the topic of multi-sensor data fusion. Initially, let us consider distributed detection; a good, relatively recent tutorial on the subject is given by Viswanathan and Varshney [1]. The basic idea is to have a number of independent sensors each make a local decision (typically a binary one) and then to combine these decisions at a fusion center to generate a global decision. The figure below illustrates the parallel fusion topology, which implements this processing. Either the Bayesian or the Neyman-Pearson criterion can be used. Under the Neyman-Pearson formulation, where one imposes a bound on the global probability of false alarm, the goal is to determine the optimum local and global decision rules that minimize the global probability of miss (or, equivalently, maximize the global probability of detection). When the sensors are deployed so that their observations are conditionally independent, one can show that these decision rules are threshold rules based on likelihood ratios [2]. The problem now becomes one of determining the optimal threshold at each sensor, as well as at the fusion center. While this task is quite non-trivial, it can still be done for a reasonably small number of sensors using iterative techniques [3]. More importantly, by using soft, multi-bit decisions at each of the sensors, it is possible to increase the performance so that it is asymptotically close to that of the optimal centralized scheme [4].
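A minimal sketch of the parallel fusion topology described above, assuming unit-variance Gaussian observations and a simple k-out-of-n fusion rule. The observation values and the choice of k are invented for illustration; a real design would jointly optimize the local and global thresholds as discussed.

```python
def local_decision(x, threshold=0.5):
    """Local likelihood-ratio test. For unit-variance Gaussians with
    means 0 (H0) and 1 (H1), the test reduces to comparing the
    observation against the midpoint between the two means."""
    return 1 if x > threshold else 0

def fuse(decisions, k):
    """Fusion center: declare H1 if at least k of the sensors did."""
    return 1 if sum(decisions) >= k else 0

# Seven hypothetical sensor observations (signal present, so most exceed 0.5).
obs = [1.3, 0.9, 0.2, 1.1, -0.4, 0.8, 0.6]
decisions = [local_decision(x) for x in obs]
print(fuse(decisions, k=4))  # 5 of 7 local decisions say H1, so prints 1
```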

Depending on the sensor network topology, it may be more useful to implement the distributed detection or estimation using a tree structure. Tsitsiklis [3] shows that the optimal decision rules are still in the form of threshold tests. Tang et al. [5] consider the case where the local decisions made at a number of sensors are communicated to multiple root nodes for data fusion. With each sensor node characterized by a receiver operating characteristic (ROC) curve and assuming a Bayes' risk criterion, they reformulate the problem as a nonlinear optimal control problem that can be solved numerically. Furthermore, they briefly examine communication and robustness issues for two types of tree structures: a functional hierarchy and a decentralized market. One conclusion is that if communication costs are a primary concern, then the functional hierarchy is preferred because it leads to less network traffic. However, if robustness is the primary issue, then the decentralized market structure may be a better choice.

In the cases discussed above, the information flows in one direction from the sensors to either the single fusion center or to a number of root nodes. Even in the decentralized market topology, where numerous sensors report to multiple intermediate nodes, the graph of the network is still acyclic. If the communication network is able to handle the increased load, performance can be improved through the use of decision feedback [6, 7]. Pados et al. [6] examine two distributed structures: 1.) a network where the fusion center provides decision feedback connections to each of the sensor nodes, and 2.) a set of sensors that are fully interconnected via decision feedback. The performance of the fully connected network is quantifiably better, but their initial system was non-robust to variations in the statistical descriptions of the two hypotheses. Robust testing functions are able to overcome this problem, and they show that robust networks tend to reject the feedback when operating with contaminated data. Alhakem and Varshney [7] study a distributed detection system with feedback and memory. That is, each sensor not only uses its present input and the previous fed-back decision from the fusion center, but it also uses its own previous inputs. They derive the optimal fusion rule and local decision rules, and they show that the probability of error in a Bayesian formulation goes to zero asymptotically. Additionally, they address the communication requirements by developing two data transmission protocols that reduce the number of messages sent among the nodes.

Swaszek and Willett propose a more extensive feedback approach that they denote parleying [8]. The basic idea is that each sensor makes an initial binary decision that is then distributed to all the other sensors. The goal is to achieve a consensus on the given hypothesis through multiple iterations. They develop two versions of the algorithm; the first is a greedy approach that achieves fast convergence at the expense of performance. The nth-root approach constrains the consensus to be optimum in that it matches the decision of a centralized processor having access to all the data. The main issue is the number of parleys (iterations) required to reach this consensus.
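The flavor of the parley idea can be sketched as follows. This toy version simply iterates majority votes until they stop changing, which is a drastic simplification of the actual algorithm in [8], where each sensor re-evaluates its own likelihood ratio in light of the others' votes.

```python
def parley(initial_votes, max_rounds=10):
    """Toy consensus loop: each round, every node adopts the current
    majority vote; stop when the votes no longer change."""
    votes = list(initial_votes)
    for _ in range(max_rounds):
        majority = 1 if sum(votes) * 2 > len(votes) else 0
        new_votes = [majority] * len(votes)
        if new_votes == votes:
            return votes, True   # consensus reached
        votes = new_votes
    return votes, False          # no consensus within max_rounds

print(parley([1, 0, 1, 1, 0]))  # prints ([1, 1, 1, 1, 1], True)
```

Here consensus is reached in two rounds; the interesting question in [8] is how many rounds are needed when the nodes combine votes with their own observations instead of blindly adopting the majority.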



Wireless Networks

The second component of a smart sensor network is the wireless communications network used to relay the sensor information. In essentially all of the work discussed above, the initialization, routing, and reconfiguration details of this network are not considered. The effect of the distributed algorithm on the use of networking resources is often not examined. When it has been examined, the effects of lost or corrupted messages on the performance of the detection or estimation algorithm have been typically neglected. An exception is Tang et al. [5], who studied robustness to the loss of communication links. Also, Thomopoulos and Zhang [9] make some assumptions about networking delays and channel errors. Recent work in distributed estimation [10] assumes error-free communication channels with capacity constraints.

Since a real wireless network imposes channel errors, delays, packet losses, and power and topology constraints, it is essential that the overall sensor network design consider these factors. Typically, the first action for a sensor network after deployment is to determine its topology. This step is done because many of the traditional routing protocols require topological information for initialization. This is especially true for link state routing, which forms the basis for the open shortest path first algorithm used within autonomous systems in the Internet [11]. In order both to conserve battery power and to reduce the probability of detection by hostile forces, it is better to use a reactive routing protocol, that is, one that determines a route only when it is required.

Another design choice is whether the network has a flat or hierarchical architecture. Both have advantages and disadvantages. The former is more survivable since it does not have a single point of failure; it also allows multiple routes between nodes. The latter provides simpler network management, and can help further reduce transmissions.

In a mobile ad-hoc network, the changing network topology requires that the network periodically re-configure itself. Not only must the routing protocols be able to handle this situation, but so must the medium access mechanism. Link parameters, such as modulation type, amount of channel coding, transmitter power, etc., must adapt to the new configuration. While we initially assume that the sensors are stationary, the possibility of (deliberate) sensor destruction requires that the communications network be re-configurable. Moreover, the distributed detection and estimation algorithms must also have this capability.
Summary of Research Issues
As discussed above, there has been a relatively long history of research in distributed detection and estimation in one research community, and wireless networking in another. However, there has been much less overlap between these two communities. The DARPA-sponsored SensIt program is also beginning to address this topic, with the focus primarily on collaborative signal processing, network routing protocols, and query procedures. The main contribution of our work is to combine the two disciplines of distributed detection/estimation and wireless networking so that realistic smart sensor network configurations are developed, evaluated, and optimized. Based on these designs, research is being conducted to answer the following inter-related questions:

* How do the communication and networking effects, specifically routing, bandwidth, and power constraints, determine the quality of the distributed detection and/or estimation?
* What is the load on the communication network caused by the distributed processing? How many messages/second and how many system resources are required to achieve a desired quality of service?
* How robust is the resulting sensor network to lost nodes? What is the mechanism for reconfiguration that allows the network to adapt to such loss events?

WIRELESS/MOBILE NETWORKS

Wireless technology, which uses electromagnetic waves to communicate information from one point to another, can be applied to computers and other electronic devices. Although wireless technologies have been used in specific applications for decades, wireless networks have recently become much more widespread due to better technology and lower prices. Once the IEEE defined the first wireless standards in the late 1990s, wireless networking became feasible for a wide range of business and personal applications. Wireless networking offers various advantages over wired connections, including mobility, connectivity, adaptability, and ease of use in locations that prohibit wiring. Universities, airports, and major public places are currently taking advantage of wireless technology, and many businesses, health care facilities, and major cities are developing their own wireless networks. Since the cost of wireless networks has dropped dramatically in recent years, they are also becoming more popular in home computing.

The Special Topics list of highly cited wireless/mobile network papers from the past decade covers various aspects of wireless technology, but focuses on improving network performance. Some articles deal with improving wireless network speed through modifying transmission protocols. Attempts to increase performance when transmitting multimedia and video data are also present. Routing protocols, call admission schemes, and mobility management are examined for the purpose of alleviating network congestion and increasing overall performance. Other articles focus on energy concerns in wireless networks, including battery life and power-sensitive networks, while another article concentrates on security issues. The use of beamforming to exploit multiuser diversity for increased capacity emerges in two of the later articles.

The highly cited wireless/mobile network articles from the past two years cover diverse topics emerging in the wireless technology field. Improving performance remains a major issue, as shown in articles on relay channel signaling protocols and spectral efficiency, along with articles on improved models and metrics for assessing performance. Cooperation, in particular multiuser and spatial diversity, is explored for the purpose of increasing performance and capacity. Other topics include energy usage, security in location-based services, mobility management, and Bluetooth-based networks. Some specific wireless applications are studied, including wireless sensor networks and wireless devices used in elementary school classrooms. Since the FCC allocated bandwidth for commercial ultra-wideband (UWB) devices in 2002, UWB system design has also emerged as a wireless network topic.

Methodology

To construct this database, papers were extracted based on article-supplied keywords for Wireless/Mobile Networks. The keywords used were as follows:

wireless network*
OR
mobile network*

The baseline time span for this database is 1995 through December 31, 2005. The resulting database contained 3,249 (10 years) and 1,449 (2 years) papers; 6,142 authors; 63 countries; 313 journals; and 1,511 institutions.

Rankings

Once the database was in place, it was used to generate the lists of top 20 papers (two- and ten-year periods), authors, journals, institutions, and nations, covering a time span of 1995-December 31, 2005 (sixth bimonthly, an 11-year period).

The top 20 papers are ranked according to total cites. Rankings for author, journal, institution, and country are listed in three ways: according to total cites, total papers, and total cites/paper. The paper thresholds and corresponding percentages used to determine scientist, institution, country, and journal rankings according to total cites/paper, and total papers respectively are as follows:

Entity:       Scientists   Institutions   Countries   Journals
Thresholds:        9            23             9          6
Percentage:       1%            2%           50%        20%

Wednesday, May 20, 2009

Convergence of Mobile Phones and Sensor Networks


Pervasive computing with mobile phones and sensor networks is an emerging area of research. In a partner project with the Technical University of Berlin, we develop our own sensors, which can use the mobile phone as a data gateway or as a user interface for information retrieval on sensor data. The sensors communicate via the ISM (industrial, scientific, and medical) band and employ a family of 16-bit mixed-signal controllers. The sensor platform contains a large set of ready-to-use data processing functions. For ease of use, researchers and students may develop and program their own applications in C. The development tools are employed in a sensor programming class at TU Berlin and in a future programming class at AAU.

Mobile Emulab: A Robotic Wireless and Sensor Network Testbed

Simulation has been the dominant research methodology in wireless and sensor networking. When mobility is added, real-world experimentation is especially rare. However, it is becoming clear that simulation models do not sufficiently capture radio and sensor irregularity in a complex, real-world environment, especially indoors. Unfortunately, the high labor and equipment costs of truly mobile experimental infrastructure present high barriers to such experimentation.

We describe our experience in creating a testbed to lower those barriers. We have extended the Emulab network testbed software to provide the first remotely-accessible mobile wireless and sensor testbed. Robots carry motes and single board computers through a fixed indoor field of sensor-equipped motes, all running the user's selected software. In real-time, interactively or driven by a script, remote users can position the robots, control all the computers and network interfaces, run arbitrary programs, and log data. Our mobile testbed provides simple path planning, a vision-based tracking system accurate to 1 cm, live maps, and webcams. Precise positioning and automation allow quick and painless evaluation of location and mobility effects on wireless protocols, location algorithms, and sensor-driven applications. The system is robust enough that it is deployed for public use.

We present the design and implementation of our mobile testbed, evaluate key aspects of its performance, and describe a few experiments demonstrating its generality and power.

Robotic Face Prosthetics for Patients with Severe Paralysis (technology currently patent pending)

Over the past few years, there has been a great leap in the development of prosthetic limbs. Today, companies create prosthetics that feature "mechatronic" elements, which are normally used in building robots. These elements turn simple prosthetics into functional substitutes for missing body parts; some of the latest inventions even allow users to control their prosthetics with their brains.

However, internal prosthetics, like the ones used in the reconstruction of patients' injured faces, still do not include such advanced technologies, which is why they are somewhat awkward and look unrealistic. But that is about to change, as surgeons Craig Senders and Travis Tollefson of the University of California, Davis, aim to apply artificial polymer muscles to restore the facial features of patients suffering from severe paralysis.



"The face is an area where natural-appearing active prosthetics would be particularly welcome," the surgeons write in a current patent application. The two experts hope that their latest invention will provide a solution. They reported that tests carried out on cadavers proved successful, but they have not yet had the chance to experiment on live patients.



A complete example provided in the patent document explains how the artificial muscles could help patients who have suffered spinal injuries, or who have nervous disorders such as Bell's palsy, regain control over their eyelids. There are a number of disadvantages for people who have lost control of their eyelids, including the fact that the eyes can become ulcerated, which can lead to blindness.



Senders and Tollefson describe their invention by explaining that a polymer muscle attached to the skull pulls on cords that hook up to the upper and lower eyelids. When a person attempts to close their eyes, the attempt generates electrical activity in the muscles that would normally close the eyelids. The polymer muscle detects this activity and contracts, pulling on its cords to close the eyelids completely.
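The control loop the surgeons describe can be caricatured in a few lines. The threshold and signal values below are invented; the actual device senses muscle activity continuously rather than processing a digital sample stream.

```python
def eyelid_controller(emg_samples, threshold=0.5):
    """Toy sketch of the described control loop: when electrical
    activity in the intact closing muscles crosses a threshold,
    command the polymer muscle to contract and close the lids."""
    return ["contract" if s > threshold else "relax" for s in emg_samples]

print(eyelid_controller([0.1, 0.2, 0.9, 0.8, 0.1]))
# prints ['relax', 'relax', 'contract', 'contract', 'relax']
```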



According to the surgeons, the polymer muscle could be used in other ways as well. For example, if a patient has lost control of one eye as a result of a stroke, the system can monitor the activity of the person's normal eye and then synchronize the actions of the damaged eye to match. Interestingly, the patent also states that other sensors could be used to close the eyes in response to bright light, or when an object moves very close to a person's eye. The experts suggest using timing systems to replicate natural blinking patterns.



The two surgeons believe that their latest invention could also be used to restore other facial features, to develop an artificial diaphragm to help a patient breathe, or to substitute for fingers and hands.

Sunday, May 10, 2009

Revolutionary Robotic Technology--the da Vinci Surgical System

Hearing the name da Vinci inspires thoughts of a visionary, an inventor, and an extraordinary artist of epic proportion. The da Vinci Surgical System, used for a variety of complex medical procedures, pays homage to this great artist and inventor, both in terms of its advanced technological capabilities and its precision.

Since early 2005, David Sowden, MD, a thoracic and cardiovascular surgeon with Fort Wayne Cardiovascular Surgeons, has been using this state-of-the-art equipment to perform select procedures at Parkview Hospital's Randallia campus. He has repaired atrial septal heart defects in adults and removed thymus glands, and he has the ability to repair mitral valves in adult patients. In all, he has touched the lives of nearly two dozen patients so far, including one who traveled from northern Michigan to take advantage of this technology.

What makes the da Vinci system so unique is the fact that it is actually a robotic surgery system operated from a console. The surgeon looks into the console and uses master hand controls and foot pedals to operate any or all of the four robotic arms positioned over a patient. These arms hold the necessary instruments and mimic the surgeon's movements, which can actually be scaled down to address the most intricate surgical maneuvers. "The technology is unbelievable," says Dr. Sowden. "The instruments themselves are like wrists and attach to the robotic arms. And, although the controls follow the movements of the physician's hands, the system filters out any fine tremors that might occur."
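The scaling and tremor-filtering behavior Dr. Sowden describes can be illustrated with a toy sketch, assuming an exponential-moving-average low-pass filter and a fixed scale factor. This is not the actual da Vinci algorithm, just a minimal demonstration of the two ideas: smooth out fine tremor, then shrink the motion for intricate work.

```python
def filter_and_scale(hand_positions, scale=0.2, alpha=0.3):
    """Smooth a sequence of hand positions with an exponential moving
    average (suppressing fine tremor), then scale the motion down."""
    out = []
    smoothed = hand_positions[0]
    for p in hand_positions:
        smoothed = alpha * p + (1 - alpha) * smoothed  # low-pass filter
        out.append(smoothed * scale)                   # motion scaling
    return out

# A 10 mm hand movement with a small tremor becomes a smooth ~2 mm tool movement.
path = [0.0, 10.2, 9.8, 10.1, 9.9, 10.0]
print(filter_and_scale(path))
```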

But this system doesn't simply duplicate procedures traditionally done by surgeons as they stand over

Emerging Robotic Technologies

Enhanced Artificial Intelligence Key to Spurring Robotics Technology Deployment

Technical improvements in fields such as microelectronics, microprocessing, and artificial intelligence have given rise to prototype versions of humanoids. With an ability to emulate limited human behavior accurately, this robotics technology is finding application in diverse sectors ranging from manufacturing to defense. Robotics software constantly undergoes improvements with developments in areas such as neural networking, natural language processing, image recognition, and speech recognition/synthesis. For instance, the University of Missouri has created a machine that comprehends basic spatial relationships in a real-time interactive mode and uses that information to navigate complex obstacle courses.

This Technical Insights study, World Emerging Robotics Technology, provides an in-depth analysis of the latest trends. The study throws light on worldwide technology developments in this field and analyses applications in different verticals such as manufacturing, defense, healthcare, and consumer robotics. The research service also provides the unique advantage of discovering the trends of the vendor and the needs of the user communities. This will enable participants to plan strategically for long-term benefits.

Robotics Technology Increasingly Used in Real-time Applications

Robotics technology is moving out of research labs and is increasingly being deployed in various applications. Moving beyond traditional fields such as industrial automation and manufacturing, robots are now utilized even in the healthcare and defense segments. Progress in video imaging, endoscopic technology, and instrumentation is changing conventional open surgeries into endoscopic ones. Another area of use is defense, with increased research and development of equipment that can minimize risk to human life.

"Surveillance, mine detection, and reconnaissance are some of the applications provoking maximum interest for the use of robots," observes the analyst of this research. "The applications of robots are growing manifold with miniaturization, development of micro-electromechanical systems (MEMS), and biologically inspired robots that can move swiftly like insects."

Nano-robots Impacting Medical Industry on a Large Scale

A pioneering innovation, nano-robots, or nanobots, are expected to revolutionize the medical industry. With the ability to treat at the cellular level, nanobots can be used for functions such as repairing genes, delivering drugs locally, and battling cancer cells and viruses.

"These robots can be introduced into the human body for drug delivery purposes, for clearing clogged arteries, for inspection, or for excision purposes," explains the analyst. "They also can be used to target a specific cancer cell in the body and deliver the drug to that specific location for enhanced effectiveness, while avoiding side effects and damage to surrounding normal cells."

Wednesday, April 22, 2009

A neural network for Java Lego robots - Learn to program intelligent Lego Mindstorms robots with Java

By Julio César Sandria Reynoso, JavaWorld.com, 05/16/05

This article shows how to develop a robot that can learn, using the backpropagation algorithm and a basic neural network implemented on a Lego Roverbot. Using the algorithm and Java, the Roverbot, a Lego robot vehicle, can learn some basic rules for moving forward, backward, left, and right.

In this article, we use the Lego Mindstorms Robotics Invention System 2.0 for building the Lego robot; leJOS 2.1.0, a little Java operating system for downloading and running Java programs inside the Roverbot; and J2SE for compiling the Java programs under leJOS.
Lego robots

The Lego Mindstorms Robotics Invention System (RIS) is a kit for building and programming Lego robots. It has 718 Lego bricks including two motors, two touch sensors, one light sensor, an infrared tower, and a robot brain called the RCX.

The RCX is a large brick that contains a microcontroller and an infrared port. You can attach the kit's two motors (as well as a third motor) and three sensors by snapping wire bricks on the RCX. The infrared port allows the RCX to communicate with your desktop computer through the infrared tower.

In this article, we use a Roverbot as it is constructed in the Lego Mindstorms Constructopedia, the guide for constructing robots. This Roverbot, as shown in Figure 1, has been configured to use all three sensors and two motors included in Lego Mindstorms RIS 2.0.

Figure 1. A Lego Roverbot with two touch sensors, one light sensor, and two motors
leJOS

leJOS is a small Java-based operating system for the Lego Mindstorms RCX. Because the RCX contains just 32 KB of RAM, only a small subset of the JVM and APIs can be implemented on the RCX. leJOS includes just a few commonly used Java classes from java.lang, java.io, and java.util, and thus fits well on the RCX.

You must load the RAM with the Lego firmware, or, in our case, with the leJOS firmware, and your programs. The firmware contains a bytecode interpreter, which can run programs downloaded from RCX code.

For setting up your leJOS installation, please take a look at Jonathan Knudsen's article "Imaginations Run Wild with Java Lego Robots," (JavaWorld, February 2001), Programming Lego Mindstorms with Java (Syngress Publishing, 2002), or the leJOS readme file contained in the leJOS zip file, which you can download from the leJOS homepage.
Neural networks

If we want to build intelligent machines, we should model the human brain. Early in the 1940s, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts began working on the idea of building an intelligent machine out of artificial neurons. One of the earliest neural network models was the perceptron, introduced by Frank Rosenblatt in 1962. A perceptron can learn; it models a neuron by taking a weighted sum of its inputs and sending an output of 1 if the sum is greater than some adjustable threshold value; otherwise it sends 0. Anything a perceptron can compute, it can learn to compute. Figure 2 shows a neuron and Figure 3 shows a perceptron.
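The perceptron described above fits in a few lines. This sketch trains one on the AND function, a linearly separable target chosen purely for illustration; the article's Roverbot uses the same weighted-sum-and-threshold idea.

```python
def perceptron_train(samples, lr=0.1, epochs=20):
    """Train a single perceptron: weighted sum vs. threshold,
    adjusting weights by the error on each example."""
    n = len(samples[0][0])
    w = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            s = sum(wi * xi for wi, xi in zip(w, x)) + bias
            out = 1 if s > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

def predict(w, bias, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

# Learn the AND function (linearly separable, so the perceptron converges).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([predict(w, b, x) for x, _ in data])  # prints [0, 0, 0, 1]
```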

ADAPTIVE NEURAL NETWORK CONTROL OF ROBOTIC MANIPULATORS

by S S Ge, T H Lee (National University of Singapore) & C J Harris (University of Southampton)

Recently, there has been considerable research interest in neural network control of robots, and satisfactory results have been obtained in solving some of the special issues associated with the problems of robot control in an "on-and-off" fashion. This book is dedicated to issues on adaptive control of robots based on neural networks. The text has been carefully tailored to (i) give a comprehensive study of robot dynamics, (ii) present structured network models for robots, and (iii) provide systematic approaches for neural network based adaptive controller design for rigid robots, flexible joint robots, and robots in constraint motion. Rigorous proof of the stability properties of adaptive neural network controllers is provided. Simulation examples are also presented to verify the effectiveness of the controllers, and practical implementation issues associated with the controllers are also discussed.


Contents:

* Mathematical Background
* Dynamic Modelling of Robots
* Structured Network Modelling of Robots
* Adaptive Neural Network Control of Robots
* Neural Network Model Reference Adaptive Control
* Flexible Joint Robots
* Task Space and Force Control

Robotics Technology And Flex

In recent years, robotics has dramatically changed the arena of manufacturing, fabrication, and assembly. In the process of its evolution, the robot has become an intelligent being possessing sensory and control capability and improved dexterity. This book unfolds the full potential of robotics technology and automation and explores the field of robotics as a powerful manufacturing tool. Topics covered include: robot kinematics, sensors, AI, robot vision, robot languages, programming, simulation, installation, and design.

Wednesday, April 1, 2009

Neural Network Control of Robot Manipulators

In this article, the author describes neural network controllers for robot manipulators in a variety of applications, including position control, force control, parallel-link mechanisms, and digital neural network control. These "model-free" controllers offer a powerful and robust alternative to adaptive control.

In recent years there has been increasing interest in universal model-free controllers. Mimicking human learning processes, these controllers learn about the systems they are controlling on-line, and thus automatically improve their performance. So far, neural networks have made their mark in the areas of classification and pattern recognition; with this success they've become an important tool in the repertoire of the signal processor and computer scientist. However, the same cannot be said for neural networks in system theory applications.

There has been a good deal of research on the use of neural networks for control, although most of the articles have been ad hoc discussions lacking theoretical proofs and repeatable design algorithms. As a result, neither the control systems community nor US industry has fully accepted neural networks for closed-loop control applications. In this article, I address the major problems facing neural network control and demonstrate that neural networks do indeed fulfill the promise of providing model-free learning controllers for a class of nonlinear systems.

The basic challenges for neural network control are

* providing repeatable design algorithms,
* providing on-line learning algorithms that do not require preliminary off-line tuning,
* initializing the neural network weights for guaranteed stability,
* demonstrating closed-loop trajectory following,
* computing various weight tuning gradients, and
* demonstrating that the neural network weights remain bounded despite unmodelled dynamics, because bounded weights guarantee bounded control signals.

K.S. Narendra and others have paved the way for neural network control by studying the dynamical behavior of neural networks in closed-loop applications, including computation of the gradients needed for backpropagation tuning. (There are also several groups currently analyzing neural network controllers using a variety of techniques.) Unfortunately, the necessary gradients often depend on the unknown system or satisfy their own differential equations. Thus, although researchers have rigorously applied neural networks to identification, they have not fully developed them for direct closed-loop control.

Neural networks and artificial intelligence in robotics

When talking of artificial intelligence (AI), many people expect neural nets to be as intelligent as the human brain. Some people don't even realize how widely neural nets are already used in their everyday lives. Let's narrow the topic down to something simpler and more understandable.

The most exciting use of AI is robotics. Today it isn't very hard to build a simple robot with a few sensors and a couple of motors. The harder part is making it react to the real world the way you expect. When programming a robot's brain, the usual approach is to check sensor states and react to particular circumstances, such as hitting a wall or crossing a line on the ground. But the more sensors a robot has, the more complex its reactions can be. Programming such complex systems can be painful without using simple neural networks.

In neural networks we deal with two subjects: "knowledge" and "learning". An intelligent system has some knowledge, or so-called experience, and the ability to learn and improve. Let's take an example. Suppose we have a T-shaped maze with a mouse in it. On one side of the T there is an electric shock, and on the other side there is cheese. Over several tries the mouse will learn which side to choose in order to avoid the shock and get the cheese. You can see that we have touched both subjects: learning and knowledge. On the first try the two choices are equally likely, 50/50, but over repeated tries the probability of the right choice grows as experience accumulates. Everything seems fine until we face the hardware, a robot. How do we make a robot feel hunger, anger, thirst, pain, or satisfaction?
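The T-maze story maps directly onto a tiny reinforcement-learning loop: the "mouse" keeps a value estimate for each arm and updates it from reward (cheese, +1) or punishment (shock, -1). The arm names, rewards, learning rate, and exploration rate below are all illustrative assumptions:

```python
import random

# T-maze sketch: value learning with occasional exploration.
random.seed(1)
values = {"left": 0.0, "right": 0.0}    # knowledge: starts out 50/50
reward = {"left": -1.0, "right": +1.0}  # shock on the left, cheese on the right
alpha, epsilon = 0.2, 0.1               # learning rate, exploration rate

for trial in range(200):
    if random.random() < epsilon:                 # occasionally explore
        arm = random.choice(["left", "right"])
    else:                                         # otherwise exploit experience
        arm = max(values, key=values.get)
    # learning step: nudge the estimate toward the outcome just observed
    values[arm] += alpha * (reward[arm] - values[arm])

print(max(values, key=values.get))  # → right
```

After a handful of trials the "right" estimate dominates, which is exactly the 50/50-to-experience shift described above.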

We humans are born with standard reflexes carried by our genes, and using them we grow and learn. Once you have burned a hand on an oven, you will always remember how painful it was and try to avoid the heat. Artificial intelligence works in almost the same way, just at a different level. Modern AI is taking its first steps toward understanding how consciousness works, how neurons interact, and how the brain functions.

In digital electronics a neuron could be interpreted as a multiple-input AND or OR element (XOR, notably, requires a small network of neurons rather than a single one). In reality, though, a neuron is an analogue element with multiple inputs of different sensitivities. The sum of these input signals defines the activity of the neuron. The output signal may be taken as the result or forwarded to another neuron's input.
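That description, inputs with individual sensitivities, a summed activity, and a thresholded output, can be written in a few lines. The weights below are an illustrative choice that happens to make the neuron behave like a 2-input AND gate:

```python
# A single artificial neuron: each input has its own sensitivity (weight),
# the weighted sum plus a bias sets the activity, and a threshold turns
# that activity into an output.

def neuron(inputs, weights, bias):
    activity = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activity > 0 else 0

# With these weights the neuron acts as a 2-input AND gate:
and_weights, and_bias = [1.0, 1.0], -1.5
print([neuron([a, b], and_weights, and_bias)
       for a in (0, 1) for b in (0, 1)])
# → [0, 0, 0, 1]
```

Changing the bias to -0.5 turns the same neuron into an OR gate, which is why the digital-logic analogy works for AND and OR but not for XOR.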

In general, a neural network is a set of interconnected elements, each of which takes its own input signals and produces a resulting output signal. Consider, for instance, a simple robot platform:





Mathematically, everything can be described with a formula: Outputs = f(Inputs).

The function may be any logical algorithm, a finite-state machine, or simply a set of operations in any programming language. But understand that the algorithm isn't a reaction to one input or another; it simply describes the method the neural network uses. Teaching a neural network is done by example: the network's inputs are driven by some stimuli, and the output signals are compared to the reaction we expect. If they differ, we get a so-called "error". To reduce this error, the sensitivities of the inputs are adjusted so that the error becomes minimal. This process is repeated many times until the network reacts as expected.
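That training loop, present examples, compare the output to the expected reaction, and nudge the input sensitivities to shrink the error, is exactly the classic perceptron rule. The OR-gate training set and learning rate here are illustrative choices:

```python
# Error-driven training of a single neuron (perceptron rule).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR gate
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(20):                 # repeat until the network reacts as expected
    for x, expected in examples:
        error = expected - predict(x)   # the "error" described above
        # adjust each input's sensitivity in proportion to its contribution
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # → [0, 1, 1, 1]
```

For this linearly separable problem the loop settles after only a few epochs; the remaining passes simply confirm that the error is zero everywhere.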

Some science has to be done when choosing the right structure for a neural network. If the network is too small, it will be ineffective and unable to learn what you want. If there are too many neurons, learning can take much longer than expected.

After initial training, the neural network can be used to control the robot platform. It can continue learning by itself as it reacts to real-world objects. For instance, it can learn the most effective way to turn toward a light with the minimal number of collisions, and so on.

We have touched only a very narrow area in this article. Building effective neural networks requires some knowledge and experience, but such simple solutions can be a good starting point.

Looking further, there are many more interesting topics that touch on neural networking, such as genetic algorithms. They copy a real-world process: a population of several neural networks reproduces and evolves toward the network that is most effective for a particular purpose. But it is too early to call this artificial intelligence; it is more like an artificial reflex.

Tuesday, March 10, 2009

NovAtel Inc. Technology is OnBoard Neural Robotics’ Autonomous Mini-Helicopter

NovAtel Inc. announced that its GPS engine is onboard Neural Robotics Inc.’s (NRI) AutoCopter™ Express Unmanned Aerial Vehicle (UAV).

10.01.2007 — NovAtel Inc. (NASDAQ: NGPS), a precise positioning technology company, announced that its GPS engine is onboard Neural Robotics Inc.’s (NRI) AutoCopter™ Express Unmanned Aerial Vehicle (UAV) - a fully autonomous and electric-powered mini-helicopter that can be used as an aerial platform in the sky for applications including aerial photography, surveillance, pipeline and utility line inspection, convoy escort, and mine detection.

The key technology in the AutoCopter is NRI’s patented neural network-based flight control system. The algorithms in this system provide the “intelligence” to allow operators, without any piloting experience, to maneuver the helicopter (via joysticks on the transmitter) to takeoff and land, fly to a point and hover, etc. The system can also be flown in a fully autonomous mode (automatic takeoff, pre-programmed flight path, and automatic landing). The system consists of NovAtel’s GPS+ Wide Area Augmentation System (WAAS) capable engine combined with a PC/104 computer, and attitude and heading reference system.

“NovAtel is excited to be onboard such an innovative UAV. Our feature rich, precision GPS engines are ideally suited to applications such as the AutoCopter. NovAtel continues to make strides in this emerging market segment with currently over 25 design wins in both military and commercial sectors,” according to Graham Purves, Vice President of Sales for NovAtel.

“NRI has depended on the quality and reliability of NovAtel’s GPS receivers since the inception of our company. The flexibility of their product line allows us to tailor GPS solutions according to the requirements of each AutoCopter customer,” according to Michael Fouche, CEO of NRI.

About Neural Robotics Inc.
NRI specializes in the design and manufacture of affordable unmanned rotorcraft (called AutoCopter™) for both the commercial and government markets and has exported AutoCopters to three continents (Europe, Asia, and South America). NRI’s core technology is the set of neural network flight control algorithms that allow the operator to fly the AutoCopter with confidence - whether performing aerial surveys or carrying a payload that is slung 30 feet under the AutoCopter. For more information, visit http://www.neural-robotics.com.

About NovAtel Inc.
NovAtel designs, markets and sells high-precision GPS and other positioning components and sub-systems used in a wide variety of commercial applications principally in the aviation, geomatics (surveying and mapping), mining, precision agriculture, marine and defence industries. NovAtel is also the principal supplier of reference receivers to national aviation ground networks in the US, Japan, Europe, China and India. NovAtel’s solutions combine hardware, such as receivers and antennas, with software to enable its customers to fully integrate the Company’s high-precision GPS technology into their respective products and systems. NovAtel, an ISO 9001 certified company, is focused on supplying core high-precision positioning technology to OEMs and system integrators who build systems for various end market applications. For more information, visit http://www.novatel.com.

Monday, March 2, 2009

Standardized building blocks

As one of the industry leaders of the RETF, Intel is devising low-cost reference designs for relatively small robots. The reference designs are based on silicon for Intel's XScale microprocessor and Intel Centrino mobile technology, flash memory, and 802.11 wireless networking with built-in support for wireless sensor networks. The designs give researchers an intermediate scale between the embedded microprocessors currently used in internal robotics and the large-scale laptops used for mobile intelligence.

The robotics package also includes the open-source Linux 2.4.19 operating system, as well as a multitude of open-source drivers. Drivers include vision-system drivers for sensing infrared, drivers for ultrasonic devices that measure the distance from a robot to objects in the robot's environment, and so on. The software platform also supports Java applications, and integrates USC's Player device server for robotics systems. All elements in the open-source robotics package are wirelessly connected using 802.11 networks.

With internal robot systems standardized, researchers and developers will not have to reinvent the wheel for each robot's brain. Instead, developers can spend more time on mobility, visual recognition systems, and the software for artificial intelligence (AI).

Robotics task force

The thrust of Intel's robotics effort is to reduce the cost and engineering required to build small, powerful, sophisticated robots. This thrust, however, requires standards and protocols. Right now, robotics standards and protocols are in their infancy. With technology convergence becoming increasingly important in Intel's areas of interest, Intel is leading industry efforts for the Robotics Engineering Task Force (RETF).

The RETF is modeled after the Internet Engineering Task Force (IETF). RETF allows government and university researchers to work together to establish standard software protocols and interfaces for robotics systems. Currently, government representatives include researchers from NASA, DARPA, and NIST (National Institute of Standards and Technology). All told, approximately 35 government and university researchers are already participating in the RETF.

The most pressing issue for the RETF is devising standards for commanding and controlling the mobile robots. The task force has already defined a charter to develop standards for robotics systems. A working draft of the first framework document is now being reviewed for comments.

The task force has also begun work on standards for bridging networks, on protocols, and on application programming interfaces (APIs). Current issues being discussed include intellectual property rights and copyright. The task force hopes to begin work on full specifications as soon as the framework document is approved. The task force expects to publish its work as open-source code when the work is complete, something it hopes to finish in about two years.

Numerous collaborations on robotics projects

Overall, Intel is working with approximately 20 robotics research groups, including Carnegie Mellon University (CMU), University of Southern California (USC), University of Pennsylvania, Northwestern, and Georgia Tech. Intel is also in discussions with universities and robotics manufacturers, such as Sony, about robotic dogs, and Honda and Samsung on using Intel silicon to build robotic humanoids. Intel is also in discussion with NASA and DARPA (the Defense Advanced Research Projects Agency) on several major projects.

Other pilot projects include professor Sebastian Thrun's CMU research into an aerial mapping helicopter (photo below), which is currently about 4 feet in length and which has been demonstrated in certain DARPA programs. Acroname is also using Intel's open-source robotics package in their latest commercial robot, called Garcia (see photo at beginning).


Sebastian Thrun's aerial mapping helicopter


In other collaborations, professor Balch of Georgia Tech is using Intel technology to develop hundreds of mobile robots in order to model the swarm behavior of insects. Professor Vijay Kumar is using Intel's XScale boards (photo below) and open-source software for off-road robot investigations. Professor Illah Nourbakhsh is teaching mobile robot programming using new robotics systems with Intel XScale boards and the Linux operating system.


Intel boards are being used in a number of robotics projects

Gateways into sensor networks

Two technologies in particular seem to be moving toward an interesting convergence: mobile robotics and wireless sensor networks. The two main questions here are:

* Can a mobile robot act as a gateway into a wireless sensor network?
* Can sensor networks take advantage of a robot's mobility and intelligence?

One major issue with a mobile robot acting as a gateway is the communication between the robot and the sensor network. Sensor networks typically communicate using 900 MHz radio waves. Mobile robots use laptops that communicate via 802.11, in the 2.4- to 2.483-GHz range. Intel hopes to prove that a sensor net can be equipped with 802.11 capabilities to bridge the gap between robotics and wireless networks.

Intel recently demonstrated how a few motes equipped with 802.11 wireless capabilities can be added to a sensor network to act as wireless hubs. Other motes in the network then use each other as links to reach the 802.11-equipped hubs. The hubs forward the data packets to the main 802.11-capable gateway, which is usually a laptop. Using some motes as hubs cuts down on the number of hops any one data packet has to make to reach the main gateway. It also reduces power consumption across the sensor net.
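The hop-count saving is easy to quantify. Routing each mote's packets to its nearest 802.11 gateway is a multi-source shortest-path problem; promoting a mid-chain mote to a hub shortens the worst-case path. The 6-mote chain topology below is an illustrative assumption, not Intel's actual deployment:

```python
from collections import deque

# Motes relay packets hop by hop to the nearest 802.11-equipped gateway.
# Adjacency list for a simple 6-mote chain: 0-1-2-3-4-5.
links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

def hops_to_nearest(gateways):
    """Multi-source BFS: hops from every mote to its nearest gateway."""
    dist = {g: 0 for g in gateways}
    queue = deque(gateways)
    while queue:
        node = queue.popleft()
        for nbr in links[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

worst_without_hub = max(hops_to_nearest({0}).values())   # gateway only at mote 0
worst_with_hub = max(hops_to_nearest({0, 3}).values())   # mote 3 upgraded to a hub
print(worst_without_hub, worst_with_hub)  # → 5 2
```

Fewer hops per packet means fewer radio transmissions overall, which is where the power savings across the sensor net comes from.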

Intel believes that one of the most interesting technology convergences will be in designing mobile robots that can act as gateways into the wireless sensor networks. For example, Intel recently installed small sensors in a vineyard in Oregon to monitor microclimates. The sensors measured temperature, humidity, and other factors to monitor the growing cycle of the grapes, then transmitted the data from sensor to sensor until the data reached a gateway. There, the data was interpreted and used to help prevent frostbite, mold, and other agricultural problems.

The agricultural example shows just how a sensor network could take advantage of a mobile robot's capabilities. Over time, sensors need to be recalibrated, just like any other measuring equipment. If a robot could act as a gateway to the sensor network, it could automatically perform tasks such as calibration. For example, a robot could periodically collect data along the network, determine which sensors are out of tolerance, move to the appropriate location, and recalibrate each out-of-tolerance device.
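The recalibration sweep described above amounts to a simple filter-then-visit loop: read each sensor through the gateway, flag readings that drift beyond tolerance, and dispatch the robot only to those. The sensor names, readings, and tolerance below are illustrative assumptions:

```python
# Robot-as-gateway recalibration sweep (sketch).
sensors = {
    "vine-row-1": {"reading": 20.1, "reference": 20.0},
    "vine-row-2": {"reading": 23.7, "reference": 20.0},  # drifted out of tolerance
    "vine-row-3": {"reading": 19.8, "reference": 20.0},
}
tolerance = 0.5

def recalibration_plan(sensors, tolerance):
    """Return the names of sensors whose reading drifted beyond tolerance."""
    return [name for name, s in sensors.items()
            if abs(s["reading"] - s["reference"]) > tolerance]

for name in recalibration_plan(sensors, tolerance):
    # the robot moves to this sensor's location and recalibrates it on site
    sensors[name]["reading"] = sensors[name]["reference"]

print(recalibration_plan(sensors, tolerance))  # → []
```

After the sweep the plan is empty: every device is back within tolerance, with no human visit to the vineyard required.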

To look into using mobile robots as gateways to such wireless sensor networks, Intel is bringing in a Ph.D. candidate from the University of Southern California, under the guidance of professor Gaurav Sukhatme. This person will work with Intel on integrating wireless sensor networks into robotics research for localization techniques. This type of collaboration is just one example of how Intel is promoting the convergence of microelectronics and robotics.

Robots growing in sophistication

Although we are surrounded by robots that we think of as automated tools, there are some sophisticated robots already in use (photo below). A remote telepresence is one of the most common applications that today's mobile, autonomous robots provide. Intelligence for these robots is handled via an embedded microcontroller that manages internal systems, and by a laptop that is attached to the robot. Humans control the robot through wireless communications. In this way, humans can tell the robot to change directions, shift a camera angle, take measurements, grasp objects, and so on. For example, mobile robots can let security personnel stay in a central office and still check out unsupervised areas in a warehouse or other remote site.


Carnegie Mellon University's TagBots use Intel boards


With advances in microchip design, nanotech sciences, software architecture, and mini-power cells, robot systems can be more than just another pair of eyes. They are already being tested and used in a variety of applications. They can traverse different, even dangerous environments and perform complex tasks on their own. For example, mil-spec iRobot Packbots have been used in Afghanistan to detect and map the locations and contents of caves. Another iRobot rover was used in the historic exploration of both the southern and northern shafts that led to the Queen's Chamber in the Great Pyramid at Giza (Egypt). The rover was able to illuminate areas beyond the blocking stones in the shafts, which had last been viewed by human eyes some 4,500 years ago.

Robot mobility issues

Regardless of a robot's design or tasks, there are still three main issues with its mobility:

* Localization: How does a robot know where it is in its environment?
* Mapping: How does the robot know the details of its environment?
* Navigation: How does a robot traverse its environment?

Intel works closely with researchers to identify novel ways for a robot to perform its mobility tasks. Intel is particularly interested in machine-vision libraries that can be used to perform localization and mapping based on monocular- or stereo-vision systems. For example, right now, most robots navigate by using infrared or radio waves to avoid objects in their paths. However, Intel software researchers recently developed several libraries that are very applicable to robotics systems. Intel's computer vision library is already used extensively by vision researchers.

Intel has also released a test version of a technical library for building Bayesian networks to support machine-learning activities. Bayesian networks are a form of probability-based artificial intelligence. Such a network would let a robot navigate by matching sensor data to a map stored in its memory.
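The "match sensor data to a stored map" idea can be sketched as a one-step Bayes filter: the robot holds a probability over map cells and reweights it whenever a sensor reading arrives. The four-cell corridor map and the sensor model probabilities are illustrative assumptions, not part of Intel's library:

```python
# Grid localization by Bayes' rule (sketch).
corridor = ["door", "wall", "door", "wall"]  # the map stored in memory
belief = [0.25, 0.25, 0.25, 0.25]            # initially: could be anywhere
p_hit, p_miss = 0.8, 0.2                     # sensor agrees / disagrees with map

def sense(belief, measurement):
    """Bayes update: weight each cell by how well it explains the reading."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, corridor)]
    total = sum(weighted)
    return [w / total for w in weighted]     # normalize back to a probability

belief = sense(belief, "door")               # the robot's camera sees a door
print([round(b, 2) for b in belief])  # → [0.4, 0.1, 0.4, 0.1]
```

One reading cannot distinguish the two doors, but further measurements taken as the robot moves would concentrate the belief on a single cell, which is exactly how such a network lets a robot navigate against a stored map.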

What is a robot?

Robotics is not a new field. It has been around for decades. In fact, most people have robots in their own home, even if they don't recognize the robots as such. For example, a dishwasher automatically washes and dries your dishes, then grinds up the rinsed-off food so the organic matter doesn't clog your drains. A washing machine soaks, soaps, agitates, and rinses your clothes. Down the street, the car wash-n-wax cleans, brushes, washes, and waxes your car, all for a few dollars. One of the better known home-oriented robots is iRobot's smart vacuum cleaner, called the Roomba, which has already won the Good Housekeeping Award for efficiency and ease of use.

More sophisticated robots are used in manufacturing plants and warehouses. Car makers use automated machines to position car frames, bolt pieces together, and even do welding and priming. In semiconductor wafer testing, automated systems position themselves along grids, take measurements, and then correlate the data into graphs. Robot-assisted heart microsurgery is now performed routinely in the U.S.

To some extent, we have become so used to robots that we no longer pay attention to the automated machines. We look only at the tasks they complete, and we think of them simply as tools. It is easy to think this way: most of today's robots are stationary tools in fixed locations, like a fruit sorter in a cannery, or an alarm sensor that triggers a call to security.

Monday, January 26, 2009

Tower PC & Server Cooling Systems to Cure your PC & Server of Overheating problems.

PC cooling is a serious matter with today's computers, and it is an essential part of your computer's needs. Without proper system cooling (CPU, video card, motherboard, hard drive, and RAM cooling), the life expectancy of your PC or server will be cut short. You can spend a lot of extra money trying to fix each overheating problem separately, or you can look into our new line of system coolers that tackle all of these needs in one easy-to-use Tower PC & Server Cooling System.

Without proper air flow cooling, your computer hardware suffers from overheating. This overheating causes slow downs, system error messages, and crashing. Also, the life expectancy of your PC's components will greatly diminish.

* CPU Cooling
* Video Card Cooling
* Mother Board Cooling
* System RAM Cooling
* Hard Drive Cooling

Other manufacturers have come up with some very unique and interesting ideas on how to better cool a single component in your computer system. We have tested some of these devices, and while some do work for what they are made to do, you could spend a lot of money to cover your complete computer system, or you can purchase one system that does it all!

2coolpc
FrozenCPU
Corsair Water Cooling System

Saturday, January 17, 2009

MOBILITY AND ROBOTIC SYSTEMS

Richard Volpe, Manager
Gabriel Udomkesmalee, Deputy Manager

Welcome to the JPL Robotics website! Here you'll find detailed descriptions of the activities of the Mobility and Robotic Systems Section, as well as related robotics efforts around the Jet Propulsion Laboratory. We are approximately 100 engineers working on all aspects of robotics for space exploration and related terrestrial applications. We write autonomy software that drives rovers on Mars, and operations software to monitor and control them from Earth. We do the same for their instrument-placement and sampling arms, and are developing new systems with many limbs for walking and climbing. To achieve mobility off the surface, we are creating prototypes of airships to fly through the atmospheres of Titan and Venus, and drills and probes to go underground on Mars and Europa.

To enable all of these robots to interact with their surroundings, we make them see with cameras and measure their environments with other sensors. Based on these measurements, the robots control themselves with algorithms also developed by our research teams. We capture the control-and-sensor-processing software in unifying frameworks, which enable reuse and transfer among our projects. In the course of developing this technology, we build real end-to-end systems as well as high-fidelity simulations of how the robots will work on worlds we are planning to visit.

Please use the menu at left to navigate to the view of our work that is most important to you. Our application domains are described in general terms, and then specifically in the context of flight projects and research tasks. Personnel are described in terms of the groups that constitute the section, as well as the people who constitute the groups. Most of our major robot systems are described, as are the laboratory facilities in which they are developed and exercised. For more detailed information, our publications may be accessed through a search engine, or more recent news may be browsed. Finally, to provide context to our current work, our charter is documented, the history of JPL robotics is described, and links to other related work are provided.