Introduction
Artificial eyes, ears, and noses for stronger, safer troops
The human nose contains more than 100 million receptors. A layer of mucus dissolves arriving scents and separates out the different odor molecules so that they reach the receptors at different speeds and times. The brain interprets this pattern to distinguish a diverse range of smells.
In contrast, an artificial nose consists of a much smaller array of chemical sensors, typically between six and twelve, connected to a computer or neural network capable of recognizing patterns of molecules.
A neural network is a collection of computer processors that function in a way similar to a simple animal brain. The nose has no specific receptor for the smell of roses; instead it detects a particular mixture of sweet, sour, and floral notes, which the brain recognizes as a rose. Similarly, the Tufts artificial nose has 16 fluorescent sensor strips, each sensitive to a different range of molecules, and a computer that interprets their response pattern to determine whether or not the device has sniffed a mine. While this method can be better at filtering out false alarms than the Fido approach, it may not be quite as sensitive to explosives-related chemicals.
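To make the pattern-matching idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the odor classes, the reference patterns (random vectors standing in for trained responses), and the nearest-centroid rule, which is a far simpler stand-in for the neural network described above; only the 16-strip array size comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: each odor class is represented by the mean response
# pattern of a 16-strip fluorescent sensor array (values are made up).
classes = ["mine_vapor", "diesel_exhaust", "clean_air"]
reference = {c: rng.random(16) for c in classes}

def classify(response: np.ndarray) -> str:
    """Nearest-centroid pattern matching: return the odor class whose
    reference pattern is closest (Euclidean distance) to the response."""
    return min(reference, key=lambda c: np.linalg.norm(response - reference[c]))

# A noisy observation of the "mine_vapor" pattern should still match it.
observed = reference["mine_vapor"] + 0.05 * rng.standard_normal(16)
print(classify(observed))  # -> "mine_vapor" (with high probability)
```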
Initially developed as laboratory instruments, electronic noses that mimic the human sense of smell are moving into food, beverage, medical, and environmental applications. Researchers and manufacturers alike have long envisioned devices that can 'smell' odors in many different applications. Thanks to recent advances in organic chemistry, sensor technology, electronics, and artificial intelligence, the measurement and characterization of aromas by electronic noses (or e-noses) is on the verge of becoming a commercial reality.
Friday, May 22, 2009
Research Topic: Distributed Detection for Smart Sensor Networks
Distributed Detection and Estimation
The literature on distributed detection and estimation is quite extensive, including the topic of multi-sensor data fusion. Initially, let us consider distributed detection; a good, relatively recent tutorial on the subject is given by Viswanathan and Varshney [1]. The basic idea is to have a number of independent sensors each make a local decision (typically a binary one) and then to combine these decisions at a fusion center to generate a global decision. The figure below illustrates the parallel fusion topology, which implements this processing. Either the Bayesian or the Neyman-Pearson criterion can be used. Assuming the Neyman-Pearson formulation where one assumes a bound on the global probability of false alarm, the goal is to determine the optimum local and global decision rules that minimize the global probability of miss (or equivalently maximize the global probability of detection). When the sensors are deployed so that their observations are conditionally independent, one can show that these decision rules are threshold rules based on likelihood ratios [2]. The problem now becomes one of determining the optimal threshold at each sensor, as well as at the fusion center. While this task is quite non-trivial, it can still be done for a reasonably small number of sensors using iterative techniques [3]. More importantly, by using soft, multi-bit decisions at each of the sensors, it is possible to increase the performance so that it is asymptotically close to the optimal centralized scheme [4].
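As a concrete illustration of this pipeline, the sketch below implements local likelihood-ratio threshold tests and the Chair-Varshney fusion rule for a toy Gaussian problem. The sensor count, mean shift mu, and threshold tau are assumptions chosen by hand, not the outputs of the iterative optimization cited in [3].

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of the parallel fusion topology under conditionally
# independent Gaussian observations: H0: x ~ N(0,1), H1: x ~ N(mu,1).
# For this model the local likelihood-ratio test reduces to thresholding
# x itself; tau is fixed by hand rather than optimized.
N_SENSORS, MU, TAU = 8, 1.0, 0.5

def q(x):  # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

PF, PD = q(TAU), q(TAU - MU)   # per-sensor false-alarm / detection rates

def local_decisions(x):
    """Each sensor's binary decision from its likelihood-ratio threshold."""
    return (x > TAU).astype(int)

def fuse(u):
    """Chair-Varshney fusion rule (equal priors): weighted vote vs. 0."""
    w1, w0 = math.log(PD / PF), math.log((1 - PD) / (1 - PF))
    return int(np.sum(u * w1 + (1 - u) * w0) > 0)

x_h1 = MU + rng.standard_normal(N_SENSORS)   # observations under H1
print(fuse(local_decisions(x_h1)))           # usually 1 (global detection)
```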
Depending on the sensor network topology, it may be more useful to implement the distributed detection or estimation using a tree structure. Tsitsiklis [3] shows that the optimal decision rules are still in the form of threshold tests. Tang et al. [5] consider the case where the local decisions made at a number of sensors are communicated to multiple root nodes for data fusion. With each sensor node characterized by a receiver operating characteristic (ROC) and assuming a Bayes risk criterion, they reformulate the problem as a nonlinear optimal control problem that can be solved numerically. Furthermore, they briefly examine communication and robustness issues for two types of tree structures: a functional hierarchy and a decentralized market. One conclusion is that if communication costs are a primary concern, then the functional hierarchy is preferred because it leads to less network traffic. However, if robustness is the primary issue, then the decentralized market structure may be a better choice.
In the cases discussed above, the information flows in one direction, from the sensors to either a single fusion center or a number of root nodes. Even in the decentralized market topology, where numerous sensors report to multiple intermediate nodes, the graph of the network is still acyclic. If the communication network is able to handle the increased load, performance can be improved through the use of decision feedback [6, 7]. Pados et al. [6] examine two distributed structures: (1) a network where the fusion center provides decision feedback connections to each of the sensor nodes, and (2) a set of sensors that are fully interconnected via decision feedback. The performance of the fully connected network is quantifiably better, but their initial system was not robust to variations in the statistical descriptions of the two hypotheses. Robust testing functions overcome this problem, and they show that robust networks tend to reject the feedback when operating with contaminated data. Alhakeem and Varshney [7] study a distributed detection system with feedback and memory; that is, each sensor uses not only its present input and the previous fed-back decision from the fusion center, but also its own previous inputs. They derive the optimal fusion rule and local decision rules, and they show that the probability of error in a Bayesian formulation goes to zero asymptotically. Additionally, they address the communication requirements by developing two data transmission protocols that reduce the number of messages sent among the nodes.
Swaszek and Willett propose a more extensive feedback approach that they call parleying [8]. The basic idea is that each sensor makes an initial binary decision that is then distributed to all the other sensors, with the goal of reaching a consensus on the given hypothesis through multiple iterations. They develop two versions of the algorithm: the first is a greedy approach that achieves fast convergence at the expense of performance; the nth-root approach constrains the consensus to be optimum, in that it matches the decision of a centralized processor with access to all the data. The main issue is the number of parleys (iterations) required to reach consensus.
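The sketch below gives a highly simplified, parley-style iteration for the same toy Gaussian problem used above. It illustrates the idea of consensus through repeated decision exchange; it is not the actual algorithm of [8].

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Simplified illustration (not the exact algorithm of [8]): each round,
# every sensor re-decides using its own Gaussian log-likelihood ratio
# plus the other sensors' previous binary decisions, treated as weighted
# log-likelihood evidence; it stops at a fixed point (consensus) or
# after a bounded number of parleys.
N, MU, TAU = 5, 0.8, 0.4

def q(x):  # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

PF, PD = q(TAU), q(TAU - MU)
W1, W0 = math.log(PD / PF), math.log((1 - PD) / (1 - PF))

def parley(x, max_rounds=10):
    own_llr = MU * x - MU**2 / 2            # LLR for N(mu,1) vs. N(0,1)
    u = (x > TAU).astype(int)               # initial local decisions
    for _ in range(max_rounds):
        w = u * W1 + (1 - u) * W0           # evidence carried by each u_i
        new_u = ((own_llr + w.sum() - w) > 0).astype(int)  # exclude self
        if np.array_equal(new_u, u):        # fixed point reached
            break
        u = new_u
    return u

print(parley(MU + rng.standard_normal(N)))  # typically all ones under H1
```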
Wireless Networks
The second component of a smart sensor network is the wireless communications network used to relay the sensor information. In essentially all of the work discussed above, the initialization, routing, and reconfiguration details of this network are not considered. The effect of the distributed algorithm on the use of networking resources is often not examined, and when it has been, the effects of lost or corrupted messages on the performance of the detection or estimation algorithm have typically been neglected. An exception is Tang et al. [5], who studied robustness to the loss of communication links. Also, Thomopoulos and Zhang [9] make some assumptions about networking delays and channel errors. Recent work in distributed estimation [10] assumes error-free communication channels with capacity constraints.
Since a real wireless network imposes channel errors, delays, packet losses, and power and topology constraints, it is essential that the overall sensor network design consider these factors. Typically, the first action for a sensor network after deployment is to determine its topology, because many traditional routing protocols require topological information for initialization. This is especially true for link-state routing, which forms the basis for the Open Shortest Path First (OSPF) algorithm used within autonomous systems in the Internet [11]. In order both to conserve battery power and to reduce the probability of detection by hostile forces, it is better to use a reactive routing protocol, that is, one that determines a route only when it is required.
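To illustrate what "determining a route only when required" looks like, here is a minimal on-demand route discovery sketch. It is not AODV itself: the node names and link set are invented, and the request flood is modeled as a breadth-first search over the current connectivity graph, run only when a report actually needs a route.

```python
from collections import deque

# Invented connectivity graph: sensor nodes s1..s4 and a fusion center.
links = {
    "s1": ["s2", "s3"], "s2": ["s1", "s4"],
    "s3": ["s1", "s4"], "s4": ["s2", "s3", "fusion"],
    "fusion": ["s4"],
}

def discover_route(src, dst):
    """Flood a route request; return the first (shortest-hop) path found."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in links[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # destination unreachable (e.g., node destroyed)

print(discover_route("s1", "fusion"))  # -> ['s1', 's2', 's4', 'fusion']
```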
Another design choice is whether the network has a flat or hierarchical architecture. Both have advantages and disadvantages. The former is more survivable since it does not have a single point of failure; it also allows multiple routes between nodes. The latter provides simpler network management, and can help further reduce transmissions.
In a mobile ad hoc network, the changing network topology requires that the network periodically reconfigure itself. Not only must the routing protocols handle this situation, but so must the medium access mechanism. Link parameters, such as the modulation type, amount of channel coding, and transmitter power, must adapt to the new configuration. While we initially assume that the sensors are stationary, the possibility of (deliberate) sensor destruction requires that the communications network be reconfigurable. Moreover, the distributed detection and estimation algorithms must also have this capability.
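A small sketch of the kind of link adaptation this paragraph describes: pick the modulation and code rate from the measured SNR after reconfiguration. The thresholds, modulation set, and rates below are invented for illustration and do not correspond to any particular standard.

```python
# Illustrative link-adaptation table: thresholds, modulations, and code
# rates are invented and do not correspond to any particular standard.
ADAPTATION_TABLE = [  # (min SNR in dB, modulation, code rate)
    (18.0, "64-QAM", 3 / 4),
    (12.0, "16-QAM", 1 / 2),
    (6.0, "QPSK", 1 / 2),
    (0.0, "BPSK", 1 / 2),
]

def select_link_params(snr_db: float):
    """Pick the fastest modulation/coding the measured SNR supports."""
    for min_snr, modulation, code_rate in ADAPTATION_TABLE:
        if snr_db >= min_snr:
            return modulation, code_rate
    return "BPSK", 1 / 3  # most robust fallback below all thresholds

print(select_link_params(14.2))  # -> ('16-QAM', 0.5)
```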
Summary of Research Issues
As discussed above, there has been a relatively long history of research in distributed detection and estimation in one research community, and wireless networking in another. However, there has been much less overlap between these two communities. The DARPA-sponsored SensIt program is also beginning to address this topic, with the focus primarily on collaborative signal processing, network routing protocols, and query procedures. The main contribution of our work is to combine the two disciplines of distributed detection/estimation and wireless networking so that realistic smart sensor network configurations are developed, evaluated, and optimized. Based on these designs, research is being conducted to answer the following inter-related questions:
* How do communication and networking effects (specifically routing, bandwidth, and power constraints) determine the quality of the distributed detection and/or estimation?
* What is the load on the communication network caused by the distributed processing? How many messages/second and how many system resources are required to achieve a desired quality of service?
* How robust is the resulting sensor network to lost nodes? What is the mechanism for reconfiguration that allows the network to adapt to such loss events?
Wireless/Mobile Networks
Wireless technology, which uses electromagnetic waves to communicate information from one point to another, can be applied to computers and other electronic devices. Although wireless technologies have been used in specific applications for decades, wireless networks have recently become much more widespread due to better technology and lower prices. Once the IEEE first defined wireless standards in the late 1990s, wireless networking became feasible for a wide range of business and personal applications. Wireless networking offers various advantages over wired connections, including mobility, connectivity, adaptability, and ease of use in locations where wiring is prohibited. Universities, airports, and major public places are currently taking advantage of wireless technology, and many businesses, health care facilities, and major cities are developing their own wireless networks. Since the cost of wireless networks has dropped dramatically in recent years, they are also becoming more popular in home computing.
The Special Topics list of highly cited wireless/mobile network papers from the past decade covers various aspects of wireless technology, but focuses on improving network performance. Some articles deal with improving wireless network speed through modifying transmission protocols. Attempts to increase performance when transmitting multimedia and video data are also present. Routing protocols, call admission schemes, and mobility management are examined for the purpose of alleviating network congestion and increasing overall performance. Other articles focus on energy concerns in wireless networks, including battery life and power-sensitive networks, while another article concentrates on security issues. The use of beamforming to exploit multiuser diversity for increased capacity emerges in two of the later articles.
The highly cited wireless/mobile network articles from the past two years cover diverse topics emerging in the wireless technology field. Improving performance remains a major issue, as shown in articles on relay channel signaling protocols and spectral efficiency, along with articles on improved models and metrics for assessing performance. Cooperation, in particular multiuser and spatial diversity, is explored for the purpose of increasing performance and capacity. Other topics include energy usage, security in location-based services, mobility management, and Bluetooth-based networks. Some specific wireless applications are studied, including wireless sensor networks and wireless devices used in elementary school classrooms. Since the FCC allocated bandwidth for commercial ultra-wideband (UWB) devices in 2002, UWB system design has also emerged as a wireless network topic.
Methodology
To construct this database, papers were extracted based on article-supplied keywords for Wireless/Mobile Networks. The keywords used were as follows:
wireless network*
OR
mobile network*
The baseline time span for this database is 1995 through December 31, 2005. The resulting database contained 3,249 papers (10-year period) and 1,449 papers (2-year period); 6,142 authors; 63 countries; 313 journals; and 1,511 institutions.
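For illustration, the keyword extraction can be read as a simple boolean wildcard filter; the sketch below mimics it in Python on made-up paper keywords.

```python
import re

# Illustrative version of the keyword filter: a paper is kept if any of
# its author-supplied keywords matches "wireless network*" OR
# "mobile network*" (trailing * read as a wildcard). Data is made up.
PATTERNS = [re.compile(r"^wireless network", re.I),
            re.compile(r"^mobile network", re.I)]

def matches(keywords):
    return any(p.match(k) for k in keywords for p in PATTERNS)

print(matches(["Mobile Networks", "handoff"]))   # True
print(matches(["optical fiber"]))                # False
```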
Rankings
Once the database was in place, it was used to generate the lists of top 20 papers (two- and ten-year periods), authors, journals, institutions, and nations, covering a time span of 1995-December 31, 2005 (sixth bimonthly, an 11-year period).
The top 20 papers are ranked according to total cites. Rankings for authors, journals, institutions, and countries are listed in three ways: by total cites, by total papers, and by total cites/paper. The paper thresholds used to determine the scientist, institution, country, and journal rankings by total cites/paper, and the corresponding percentages of each entity they represent, are as follows:
Entity:       Scientists   Institutions   Countries   Journals
Thresholds:   9            23             9           6
Percentage:   1%           2%             50%         20%
Wednesday, May 20, 2009
Convergence of Mobile Phones and Sensor Networks
Pervasive computing with mobile phones and sensor networks is an emerging area of research. In a partner project with the Technical University of Berlin, we are developing our own sensors, which can use a mobile phone as a data gateway or as a user interface for retrieving sensor data. The sensors communicate via the ISM (industrial, scientific, and medical) band and employ a family of 16-bit mixed-signal controllers. The sensor platform contains a large set of ready-to-use data processing functions. For ease of use, researchers and students may develop and program their own applications in C. The development tools are used in a sensor programming class at TU Berlin and will be used in a future programming class at AAU.
Mobile Emulab: A Robotic Wireless and Sensor Network Testbed
Simulation has been the dominant research methodology in wireless and sensor networking. When mobility is added, real-world experimentation is especially rare. However, it is becoming clear that simulation models do not sufficiently capture radio and sensor irregularity in a complex, real-world environment, especially indoors. Unfortunately, the high labor and equipment costs of truly mobile experimental infrastructure present high barriers to such experimentation.
We describe our experience in creating a testbed to lower those barriers. We have extended the Emulab network testbed software to provide the first remotely-accessible mobile wireless and sensor testbed. Robots carry motes and single board computers through a fixed indoor field of sensor-equipped motes, all running the user's selected software. In real-time, interactively or driven by a script, remote users can position the robots, control all the computers and network interfaces, run arbitrary programs, and log data. Our mobile testbed provides simple path planning, a vision-based tracking system accurate to 1 cm, live maps, and webcams. Precise positioning and automation allow quick and painless evaluation of location and mobility effects on wireless protocols, location algorithms, and sensor-driven applications. The system is robust enough that it is deployed for public use.
We present the design and implementation of our mobile testbed, evaluate key aspects of its performance, and describe a few experiments demonstrating its generality and power.
Robotic Face Prosthetics for Patients with Severe Paralysis
The technology is currently patent pending.
Over the past few years, there has been a great leap in the development of prosthetic limbs. Today, companies create prosthetics that feature "mechatronic" elements of the kind normally used in building robots. These elements turn simple prosthetics into functional substitutes for missing body parts; some of the latest designs even allow users to control their prosthetics with their brains.
However, internal prosthetics, like those used in the reconstruction of patients' injured faces, still do not include such advanced technologies, which is why they remain somewhat awkward and unrealistic looking. That may be about to change: surgeons Craig Senders and Travis Tollefson of the University of California, Davis, aim to apply artificial polymer muscles to restore the facial features of patients suffering from severe paralysis.
"The face is an area where natural-appearing active prosthetics would be particularly welcome," the surgeons write in a current patent application. The two experts hope that their latest invention in science will provide a solution. They reported that the tests carried out on cadavers proved to be successful, but they haven't had the chance to experience on live patients.
A complete example in the patent document explains how the artificial muscles could help patients who have suffered spinal injuries, or who have nervous disorders such as Bell's palsy, regain control over their eyelids. Losing control of the eyelids has a number of serious consequences: among other things, the eyes can become ulcerated, which can lead to blindness.
Senders and Tollefson describe a polymer muscle, attached to the skull, that pulls on cords hooked up to the upper and lower eyelids. When a person attempts to close their eyes, the attempt generates electrical activity in the muscles that would normally close the eyelids. The polymer muscle detects this activity and contracts, pulling on its cords to close the eyelids completely.
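A purely hypothetical sketch of this sense-then-actuate loop: the synthetic EMG signal, the threshold, and the contraction mapping below all stand in for real sensor and actuator hardware, which the patent does not specify at this level.

```python
# All values and functions here are hypothetical stand-ins for hardware I/O.
def synthetic_emg(t: float) -> float:
    """Stand-in for an EMG sample from the working eyelid muscles,
    normalized to [0, 1]; simulates a blink attempt every 4 seconds."""
    return 1.0 if (t % 4.0) < 0.2 else 0.05

def contraction_level(activity: float, threshold: float = 0.6) -> float:
    """Map sensed muscle activity to a polymer-muscle contraction level:
    contract fully on a detected blink attempt, otherwise relax."""
    return 1.0 if activity > threshold else 0.0

for t in [0.1, 1.0, 4.05]:
    print(t, contraction_level(synthetic_emg(t)))  # contracts at 0.1 and 4.05
```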
According to the surgeons, the polymer muscle could be used in other ways as well. For example, if a patient has lost control of one eye as the result of a stroke, the system can monitor the activity of the healthy eye and synchronize the actions of the damaged eye to match. The patent also notes that other sensors could be used to close the eyes in response to bright light or to an object moving close to the eye, and the surgeons suggest using timing systems to replicate natural blinking patterns.
The two surgeons believe their invention could also be used to revive other facial features, to build an artificial diaphragm that helps a patient breathe, or to substitute for fingers and hands.
Sunday, May 10, 2009
Revolutionary Robotic Technology--the da Vinci Surgical System
Hearing the name da Vinci inspires thoughts of a visionary, an inventor, and an extraordinary artist of epic proportion. The da Vinci Surgical System, used for a variety of complex medical procedures, pays homage to this great artist and inventor, both in terms of its advanced technological design and its precision.
Since early 2005, David Sowden, MD, a thoracic and cardiovascular surgeon with Fort Wayne Cardiovascular Surgeons, has been using this state-of-the-art equipment to perform select procedures at Parkview Hospital's Randallia campus. He has repaired atrial septal heart defects in adults and removed thymus glands, and he is also able to repair mitral valves in adult patients. In all, he has touched the lives of nearly two dozen patients so far, including one who traveled from northern Michigan to take advantage of this technology.
What makes the da Vinci system so unique is the fact that it is actually a robotic surgery system operated from a console. The surgeon looks into the console and uses master hand controls and foot pedals to operate any or all of the four robotic arms positioned over a patient. These arms hold the necessary instruments and mimic the surgeon's movements, which can actually be scaled down to address the most intricate surgical maneuvers. "The technology is unbelievable," says Dr. Sowden. "The instruments themselves are like wrists and attach to the robotic arms. And, although the controls follow the movements of the physician's hands, the system filters out any fine tremors that might occur."
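Two behaviors from this description, motion scaling and tremor filtering, can be sketched in a few lines. The scale factor and the exponential-moving-average filter below are assumptions for illustration; the actual da Vinci filtering and scaling algorithms are proprietary.

```python
# Minimal sketch: scale the surgeon's hand motion down and low-pass
# filter it to suppress fine tremor (the EMA filter is an assumed stand-in).
def make_instrument_controller(scale=0.2, smoothing=0.8):
    filtered = 0.0
    def step(hand_delta_mm: float) -> float:
        nonlocal filtered
        # Exponential moving average suppresses high-frequency tremor...
        filtered = smoothing * filtered + (1 - smoothing) * hand_delta_mm
        # ...and the result is scaled down for fine surgical maneuvers.
        return scale * filtered
    return step

controller = make_instrument_controller()
# A steady 5 mm/step hand motion with a small tremor superimposed:
for tremor in [0.4, -0.3, 0.5, -0.4]:
    print(round(controller(5.0 + tremor), 3))  # settles toward ~1.0 mm/step
```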
But this system doesn't simply duplicate procedures traditionally done by surgeons as they stand over the operating table.