{"id":1254,"date":"2016-08-09T13:25:19","date_gmt":"2016-08-09T16:25:19","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=1254"},"modified":"2016-08-09T13:25:19","modified_gmt":"2016-08-09T16:25:19","slug":"desarrollo-de-nuevos-equipos-para-ambientes-peligrosos-con-navegacion-inercial","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=1254","title":{"rendered":"Desarrollo de nuevos equipos para ambientes peligrosos con navegaci\u00f3n inercial"},"content":{"rendered":"<p>El sistema de iluminaci\u00f3n activa\u00a0integra:\u00a0 sensores de\u00a0im\u00e1genes ,\u00a0mediciones de\u00a0distancia con\u00a0lasers modulados\u00a0y navegadores\u00a0inerciales (IMU)\u00a0que se\u00a0reproducen en una tablet\u00a0. Los ensayos del prototipo\u00a0\u00a0demuestran que el sistema proporciona un buen posicionamiento y \u00a0mapeo en \u00a0ambientes\u00a0de \u00a0interior, \u00a0oscuros\u00a0\u00a0y con humos o nieblas\u00a0.\u00a0Brinda capacidades que\u00a0\u00a0pueden acelerar la b\u00fasqueda y rescate, ayuda a por ejemplo\u00a0\u00a0a soldados y\u00a0\u00a0bomberos a no perderse ,\u00a0\u00a0aumentan la seguridad del personal en entornos desconocidos y peligrosos, y tambi\u00e9n pueden facilitar el control remoto de veh\u00edculos terrestres no tripulados (UGV).<!--more--><\/p>\n<p>Positioning and mapping abilities for\u00a0indoor environments can speed search\u00a0and rescue, keep firefighters from getting\u00a0lost, and help a commander track soldiers\u00a0searching a building. Accurate results from\u00a0these environments increase personnel\u00a0safety in unknown, dangerous environments and can\u00a0also facilitate remote control of unmanned ground\u00a0vehicles (UGVs).<\/p>\n<p>A new iteration in the Chameleon family of positioning\u00a0systems that we have developed, the Tiger Chameleon,\u00a0combines an inertial measurement unit (IMU) with an\u00a0active camera that measures distances using modulated laser light. This type of camera provides dense and\u00a0accurate distance measurements, and has the added\u00a0advantage of working well in darkness.<\/p>\n<p>The main goal of the Chameleon systems is to provide\u00a0soldiers and first responders with position information\u00a0and approximate overview maps, preferably without\u00a0affecting their operating procedure. The Tiger Chameleon\u00a0does not require any infrastructure, such as visual markers\u00a0or radio beacons. Positioning and mapping results are\u00a0computed in real time based on data from the IMU\u00a0and the active camera, and wirelessly transmitted to a\u00a0visualization interface running on a tablet computer or\u00a0smartphone.<\/p>\n<h2>Sensors and Hardware<\/h2>\n<p>In initial experiments, the Tiger Chameleon\u2019s components\u00a0are enclosed in a wooden box which can be mounted on\u00a0a soldier\u2019s tactical vest. 
<strong><a href=\"http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections.jpg\">FIGURE 1<\/a>\u00a0<\/strong>gives a schematic overview.<\/p>\n<p><a href=\"http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections.jpg\"><img loading=\"lazy\" class=\"size-medium wp-image-47425\" src=\"http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections-300x141.jpg\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections-300x141.jpg 300w, http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections-250x117.jpg 250w, http:\/\/gpsworld.com\/wp-content\/uploads\/2016\/07\/fig_1_components_and_connections.jpg 700w\" alt=\"FIGURE 1. Schematic overview of how the components are connected.\" width=\"300\" height=\"141\" \/><\/a><\/p>\n<p><strong>Image Sensor.<\/strong> The image sensor includes two cameras:\u00a0one high-resolution visual camera and one lower-resolution\u00a0depth sensor. The latter uses a modulated near-infrared\u00a0(NIR) light source and a special type of sensor to measure\u00a0distance. Essentially, each pixel is divided into two halves,\u00a0one of which integrates incoming light when the light\u00a0source is turned on, while the other integrates light when\u00a0it is turned off. If the light hitting a pixel has bounced off\u00a0an object located close to the sensor, almost all light will\u00a0be integrated by the first half of the pixel. If the object is\u00a0moved further away, it will take longer for the light to\u00a0return, and hence more light will be integrated by the\u00a0second half of the pixel.<\/p>\n<p>In addition to measuring the depth of the scene, the\u00a0NIR camera provides an intensity image where the value\u00a0of each pixel is determined by the total amount of light\u00a0hitting the two pixel halves. The intensity image is similar\u00a0to an ordinary visual image. Unlike an image from a\u00a0passive camera, however, it is largely independent of\u00a0ambient light, since it mostly measures reflected light from\u00a0the illuminator. Hence, both the intensity and the depth\u00a0images are available even in completely dark or obscured\u00a0environments. Coming from the same sensor, the intensity\u00a0and depth images are perfectly registered to each other.\u00a0Images are produced at 30 Hz.<\/p>\n<p>The high-resolution visual camera is not used by this\u00a0prototype.<\/p>\n<p><strong>Inertial Sensor.<\/strong> The IMU provides calibrated\u00a0measurements of acceleration and angular velocity at 400 Hz.\u00a0The sensor itself does not perform any inertial navigation\u00a0or attitude estimation. Since we fuse the inertial data with\u00a0image-based features for navigation,\u00a0however, the basic acceleration and\u00a0angular velocity measurements are more\u00a0useful in our application than pre-filtered\u00a0position or orientation estimates. The\u00a0sensor measures acceleration up to 5 g,\u00a0and angular velocity up to 1,000 degrees\/\u00a0second, since relatively high-dynamic\u00a0motion is common in the intended\u00a0application. 
**Inertial Sensor.** The IMU provides calibrated measurements of acceleration and angular velocity at 400 Hz. The sensor itself does not perform any inertial navigation or attitude estimation. Since we fuse the inertial data with image-based features for navigation, however, the basic acceleration and angular velocity measurements are more useful in our application than pre-filtered position or orientation estimates. The sensor measures acceleration up to 5 g and angular velocity up to 1,000 degrees/second, since relatively high-dynamic motion is common in the intended application. The IMU also contains a magnetometer and a barometer, but these are not used, since air pressure and magnetic fields are not always reliable for indoor navigation.

## Algorithms

The positioning algorithm is based on EKF-SLAM: simultaneous localization and mapping (SLAM) implemented as an extended Kalman filter (EKF). The EKF fuses data from the IMU (three-dimensional accelerations and angular velocities) with image data, weighted according to the uncertainties of the different types of data. It tracks the system state, composed of the position, velocity and orientation of the system, the IMU biases (slowly varying offsets in the acceleration and angular velocity measurements), and the positions of a number of landmarks in the images.

The landmarks are points observed in the images which are used for navigation. These are chosen to be points that are recognizable, well-defined and stationary. This essentially means that it should be easy to recognize a landmark when it appears in a new image, and that the image coordinates of a landmark should be stable in both the horizontal and vertical directions. Thus, corner points are good candidates for landmarks, while line structures in an image are not. The world coordinates of a landmark should not change over time.

Theoretically, it would be possible to navigate using only IMU data. The system orientation would then be obtained by integrating the angular velocities, while the acceleration (after removing the effect of gravity) would be integrated to obtain the velocity, and double-integrated to obtain the position. Due to the high bias variability and noise of micro-electro-mechanical systems (MEMS) IMUs (the only type sufficiently small, lightweight and inexpensive for use by soldiers or firefighters), this only works for a few seconds before the accumulated error grows to an unacceptable level.

In theory, it would also be possible to navigate using only landmarks extracted from the image sequence. This, however, is also problematic in practice. If the system moves too rapidly, successive images may not share any landmarks at all. Additionally, no landmarks are found in featureless environments. This causes image-only navigation to fail in many realistic scenarios.

By fusing the inertial data with landmark observations, we alleviate most of these problems. While the IMU provides good short-term performance, the image data provides long-term stability. Hence, the IMU can overcome short periods with few or no landmarks, while the image data limits the error growth of the system (assuming that the periods without landmarks are not too long).

*FIGURE 2. Overview of the algorithm.*
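To see how quickly IMU-only navigation degrades, consider the double integration described above with a small uncorrected accelerometer bias. This is a self-contained numerical illustration; the 5 mg bias is an assumed, MEMS-typical value, not a specification of the prototype's IMU.

```cpp
#include <cstdio>

// Illustrative dead reckoning with a residual accelerometer bias (assumed
// value), showing why IMU-only navigation degrades within seconds: an
// uncorrected bias b produces a position error of roughly 0.5*b*t^2.
int main() {
    const double dt = 1.0 / 400.0;   // 400 Hz IMU rate, as in the prototype
    const double bias = 0.05;        // assumed residual bias, m/s^2 (~5 mg)
    double vel = 0.0, pos = 0.0;

    for (int step = 1; step <= 400 * 30; ++step) {
        double acc = 0.0 + bias;     // stationary system, biased measurement
        vel += acc * dt;             // integrate once: velocity
        pos += vel * dt;             // integrate twice: position
        if (step % (400 * 5) == 0)   // report every 5 seconds
            std::printf("t = %2d s  position error = %6.2f m\n",
                        step / 400, pos);
    }
    // After 30 s the error is ~22.5 m; the landmark updates in the EKF are
    // what keep this quadratic growth bounded in practice.
}
```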
**Algorithm Flow.** In the data-fusion algorithm (**FIGURE 2**), potential landmarks are found in an intensity image from the NIR camera by identifying points that can be well localized. The potential landmarks are found using an interest-point detector. The visual appearance of a point is represented using the SIFT descriptor. The distance to each potential landmark is determined using the depth image. Potential landmarks are matched to landmarks that are already tracked by the system, based on a combination of their visual appearance and their coordinates (the spatial distance between the observed points and the predicted image coordinates of the tracked points, and the difference between the predicted and the observed distance to the points).

IMU data is used to predict where tracked points should appear. Pure inertial navigation is relatively accurate in the time interval between successive frames (approximately 30 images per second are processed). Observed points which match tracked points are used to update the estimated state (position, velocity and so on). Tracking is started for observed points which do not match any tracked points, unless a predetermined number (30 in the current implementation) of points are already tracked. Points that are tracked but not observed in a number of consecutive image pairs are removed from the point tracker.

Each depth image corresponds to a local point cloud, where points lie on the surfaces of objects in the scene observed by the camera. The intensity of each point can be obtained from the corresponding intensity image. Since the position and orientation of the system are estimated by the SLAM algorithm, these local point clouds can be transformed into a common coordinate system, thereby creating a large point cloud representing the entire environment along the trajectory of the system. This point cloud can either be used as a three-dimensional model, or projected onto a horizontal plane for an overview map.
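A minimal sketch of this map-building step, assuming the estimated pose is available as a rotation matrix R and translation t; the function and type names are illustrative, not the prototype's actual interfaces. Each camera-frame point p maps to the world frame as p_world = R·p + t.

```cpp
#include <array>
#include <cstdio>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Transform a local (camera-frame) point cloud into the common world frame
// using an estimated pose: p_world = R * p_cam + t. In the prototype, R and
// t would come from the EKF-SLAM pose estimate for that frame.
std::vector<Vec3> to_world(const std::vector<Vec3>& cloud_cam,
                           const Mat3& R, const Vec3& t) {
    std::vector<Vec3> cloud_world;
    cloud_world.reserve(cloud_cam.size());
    for (const Vec3& p : cloud_cam) {
        Vec3 q;
        for (int i = 0; i < 3; ++i)
            q[i] = R[i][0]*p[0] + R[i][1]*p[1] + R[i][2]*p[2] + t[i];
        cloud_world.push_back(q);
    }
    return cloud_world;
}

int main() {
    Mat3 R = {1, 0, 0,  0, 1, 0,  0, 0, 1};  // identity rotation for the demo
    Vec3 t = {2.0, 0.0, 0.0};                // camera 2 m along the world x-axis
    std::vector<Vec3> cam_cloud;
    cam_cloud.push_back({0.0, 0.0, 1.0});    // one point 1 m in front of the camera
    std::vector<Vec3> world = to_world(cam_cloud, R, t);
    std::printf("world point: (%.1f, %.1f, %.1f)\n",
                world[0][0], world[0][1], world[0][2]);
}
```

Projecting the accumulated world-frame cloud onto the horizontal plane for the overview map then amounts to keeping only the x and y components of each point.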
## Implementation

To accurately predict where tracked landmarks will appear in a new image, it is important to synchronize the image stream and the inertial data. While the IMU handles this well (it can either trigger, or timestamp the trigger pulse from, a camera), synchronization is potentially difficult when working with consumer hardware such as the camera selected for this prototype. While the camera cannot be triggered externally, it does perform internal synchronization between the NIR and the visual cameras, which both run at 30 Hz. (It is unknown to us which camera triggers the other, or whether a third piece of hardware triggers both.) Using an oscilloscope, we found that a pulse train at 30 Hz can be accessed at a solder point inside the camera.

This 30-Hz signal is active whenever the camera is powered. It is therefore not possible to synchronize the camera to the IMU using only this signal, since it only indicates that an arbitrary image was acquired at the time of each pulse. To synchronize the sensors, we need to know exactly when each specific image was acquired. Thus, a synchronization pulse that is transmitted only when image acquisition is enabled is needed. In addition to the 30-Hz pulse train, a signal that is high only when the cameras are active can also be found inside the camera. We perform a logical AND to combine these signals into the desired synchronization pulse.

We obtain the timestamp of each image from the IMU by connecting the combined synchronization signal to its synchronization input. There is a small delay between the first pulse in the combined signal and the time when the first image is acquired. This was measured by recording inertial data and images while first keeping the system stationary, and then rotating it quickly. Manual inspection of the image sequence and the angular velocities reveals that the delay is approximately 8 inter-frame intervals (0.267 s).
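The resulting timestamping logic is simple enough to sketch. In the illustration below (the names and structure are assumptions, not the prototype's code), the IMU has timestamped each rising edge of the combined synchronization signal, and image k is assigned the time of pulse k plus the measured start-up delay of 8/30 s:

```cpp
#include <cstdio>
#include <vector>

// Assign IMU-clock timestamps to images from the IMU-registered sync pulses
// (illustrative sketch). Each image k gets the time of pulse k, shifted by
// the start-up delay of ~8 inter-frame intervals measured in the text.
std::vector<double> image_timestamps(const std::vector<double>& pulse_times_s,
                                     int num_images) {
    const double delay_s = 8.0 / 30.0;  // measured once: 0.267 s
    std::vector<double> stamps;
    for (int k = 0; k < num_images && k < (int)pulse_times_s.size(); ++k)
        stamps.push_back(pulse_times_s[k] + delay_s);
    return stamps;
}

int main() {
    std::vector<double> pulses = {0.000, 0.0333, 0.0667};  // IMU-stamped edges
    for (double t : image_timestamps(pulses, 3))
        std::printf("image at t = %.3f s\n", t);
}
```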
**Software.** The recording and analysis software is written in C++ and runs in real time under Linux. It makes heavy use of multi-threading in order to optimize the computational performance of the algorithms.

The software is divided into two subsystems: one that communicates with the sensors, and one that performs the image analysis and SLAM computations. These subsystems communicate using the Robot Operating System (ROS). There is also a subsystem (or "node" in ROS terminology) for playing back previously recorded data. The playback node publishes the same types of messages as the sensor node, and the two nodes can therefore be used interchangeably.

*FIGURE 3. Overview of the analysis node. Blue boxes represent threads, while red boxes are queues. Purple boxes are other software components, and the orange and the red/green boxes represent the sensors. Orange, red and green lines represent data from the respective sensors. Black lines represent data based on more than one sensor.*

**FIGURE 3** shows how data flows through the analysis node. Queues buffer data between the different threads; a sketch of such a queue follows the library list below. Incoming images from the recording or playback node are initially put in the image queues for feature extraction. After feature extraction (where the depth image is used to determine the distance to each observed landmark), the resulting image features are stored in the potential-landmarks queue. The features are then processed by the EKF-SLAM thread, which also reads from the IMU data queue. The resulting pose estimates (positions and orientations) are sent to the communication module, which transmits them to the visualization device. In parallel, images are also processed by the rectification and mapping threads, where they are paired with poses from the SLAM algorithm to create map data. The map data is also sent to the communication module.

A number of efficient libraries exist for low-level image processing, and also for the linear algebra computations needed by the EKF. The following libraries are used:

- VLFeat for finding landmarks and extracting SIFT features
- OpenCV for image rectification
- Armadillo for high-level math operations
- ROS for communication between the subsystems
- A modified version of IAI Kinect2 for ROS communication with the sensor
- A slightly modified version of Libfreenect for low-level communication with the sensor
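An inter-thread queue of the kind shown in Figure 3 can be as small as a mutex-protected deque with a condition variable. The following self-contained sketch illustrates the pattern; the prototype's actual queue implementation is not described in the article.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Minimal thread-safe queue of the kind that buffers images, features and
// IMU samples between the pipeline threads (illustrative sketch).
template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push_back(std::move(item));
        }
        not_empty_.notify_one();  // wake a consumer thread (e.g. EKF-SLAM)
    }

    T pop() {  // blocks until a producer (e.g. feature extraction) delivers
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop_front();
        return item;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_empty_;
    std::deque<T> items_;
};
```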
## Evaluation

The prototype has been evaluated in a number of experiments, in cooperation with soldiers and first responders. Three experiments are reported here. All were performed during soldier or smoke-diver training. Mapping results in the figures are presented along with estimated soldier and smoke-diver trajectories (blue-green lines) from the positioning system. The results are evaluated based on the mapping, since the ground-truth trajectories are unknown. Positioning is still evaluated implicitly, as inaccurate positioning would cause poor mapping performance, such as double or skewed walls, since the mapping is based on estimated camera positions and orientations.

**Searching for Biological Hazards.** Collection of evidence at a crime scene is commonly performed using a camera, taking numerous pictures of the scene from different positions at different angles and distances. Keeping track of all images and where they were captured is cumbersome work, which can easily be automated by integrating the camera with a positioning system.

*Photo: image captured by the prototype's NIR sensor during the search.*

Several variations of this experiment were performed, all in cooperation with soldiers from the Swedish Armed Forces National CBRN Defence Centre. The main task was to document different areas in the building, to be able to handle any encountered dangerous biological objects in a safe way, and to preserve evidence for later use. If a dangerous object is found, the mapping result from the system could be used to plan how the object should be taken care of, and what exit path the soldiers could use to minimize their exposure to a dangerous environment.

The photo above shows an image from the NIR sensor in the prototype system, captured in the building while two soldiers searched for biological hazards. A map estimated by the prototype system appears in **FIGURE 4**.

*FIGURE 4. Map estimated by the prototype positioning and mapping system while two soldiers searched for biological hazards. The trajectory is also shown. The grid spacing is two meters.*

For evaluation of the mapping performance, a part of this building was also measured using a highly accurate scanning laser system. **FIGURE 5** shows the point clouds from both the Tiger Chameleon (black) and the scanning laser (white). Locally (within a room), the errors are less than 5 centimeters. Over larger distances, heading drift causes larger errors, which are visible as slightly misaligned walls. In this case, however, there are no significant errors. (The structures that are only visible in the laser-scanner data are parts of the ceiling and its supporting beams, which were never observed by the positioning system.)

*FIGURE 5. Map produced by the Tiger Chameleon (black) overlaid on reference data (white).*
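A small heading error has little effect inside a room but grows with range: a heading offset $\delta\psi$ shifts a wall observed at distance $L$ sideways by roughly

$$\Delta \approx L \sin(\delta\psi).$$

As illustrative arithmetic (these are not figures from the experiments), a drift of 1 degree displaces a wall 10 meters away by about 0.17 m, while at within-room ranges of 2 to 3 meters the same drift contributes only a few centimeters, consistent with the local accuracy reported above.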
**Searching a Building.** A group of eight soldiers searched and cleared a part of a building. This required them to move more quickly than in the first experiment, since the building could not be assumed to be safe. Additionally, one or more soldiers often appeared in the field of view of the positioning system. We asked the soldier carrying the system to avoid walking too close behind another soldier, so that the entire field of view would not be covered. Apart from this, the soldiers were asked to act as they normally would during the exercise.

*FIGURE 6. Map estimated while eight soldiers cleared a building.*

Positioning and mapping results are shown in **FIGURE 6**. The map is slightly distorted due to errors in the estimated heading, but it is still good enough to understand the building layout. The distortion is visible as double walls in the top part of the figure. We believe that most of the errors were caused by the system not being entirely stationary during the sensor initialization at the start of the experiment. No reference map is available for this location. We also performed experiments where the soldiers fired their weapons near the positioning system. This affects the IMU, severely degrading the measurements of acceleration and angular velocity. Hence, the positioning system was not able to produce correct position or mapping estimates in these cases.

**Smoke-Diver Searching a Building.** The smoke-diver experiments indicated how the positioning system performs when used in a smoke-filled environment. During the experiments, the system was exposed to different levels of smoke.

*FIGURE 7. Map estimated while a smoke-diver searched a (partly) smoke-filled building. The trajectory is also shown.*

Positioning and mapping results from the experiment are shown in **FIGURE 7**, while **FIGURE 8** shows the result from the Tiger Chameleon (black) overlaid on reference data from the laser scanner (white). The estimated map is in good agreement with the reference map data, although a heading error affects some walls (bottom right).

All smoke densities affect the image quality from the NIR sensor, since the active illumination is reflected by the smoke particles. In light smoke, the main effect is that the smoke appears as spurious points in the point clouds, eventually ending up in the map, while positioning performance is not significantly affected. Most spurious points were removed by adding the constraint that points very close to the camera are not added to the map, although smoke still causes more interior points to be visible in Figure 7 than in the other maps. The effect on the positioning system increases with the smoke density. In thick smoke, most illumination is reflected, effectively rendering positioning impossible with this type of image sensor.
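That constraint is effectively a range gate applied to each depth frame before it is merged into the map. A minimal sketch follows, with an assumed 0.5-meter threshold; the prototype's actual value is not given in the article.

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y, z; };  // coordinates in the camera frame

// Range-gate a depth frame: drop returns closer than min_range_m, which in
// light smoke are dominated by back-scatter from smoke particles. The
// 0.5 m default is an assumed value for illustration.
std::vector<Point> drop_near_returns(const std::vector<Point>& frame,
                                     double min_range_m = 0.5) {
    std::vector<Point> kept;
    kept.reserve(frame.size());
    for (const Point& p : frame) {
        double range = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        if (range >= min_range_m)
            kept.push_back(p);  // plausible surface return, keep for the map
    }
    return kept;
}
```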
## Discussion

*FIGURE 8. Maps from the experiment in smoke. The map produced by the Tiger Chameleon (black) is overlaid on reference data (white).*

The current positioning algorithm works well, but over longer experiments its position estimate slowly drifts away from the true position. This is caused by both the inertial sensors and the landmark-based positioning updating the current position estimate only relative to recent estimates. Since landmarks are discarded after not being observed for a short time, no loop closures ever occur. Saving landmarks could solve this in specific scenarios where landmarks are re-observed after long times, but doing so would increase the computational complexity considerably. Additionally, for landmark-based loop closure to work well, the landmarks would need to be re-observed from approximately the same position, further limiting the scenarios where this could be expected to work well. Ongoing work instead aims at closing loops by recognizing the scene geometry, as represented by the point-cloud models.

When using active illumination, the mapping performance does not depend on the texture of walls and other surfaces. This is an advantage compared to passive stereo cameras. Further, active illumination enables positioning and mapping in darkness.

**Comparison to Other Systems.** The prototype system was not constructed with the purpose of creating high-accuracy models of small, detailed environments. Rather, the purpose was to enable soldiers to create approximate maps of entire buildings with minimal impact on their operating procedure. Detailed reconstruction is handled better by other systems, but these have the disadvantage of requiring the user to survey the environment systematically, adapting his or her work methods to the sensor.

**SWaP-C and Hardware.** Size, weight, power and cost (SWaP-C) are constant issues for soldier equipment.
Ideally, the equipment should be disposable: cheap enough to throw away after being used just once, or able to fit into some type of disposable container.

Since this is a prototype system built for research and demonstration purposes, components with convenient electrical and programming interfaces have been selected. The system is therefore considerably larger than a final end-user product would be. In such a product, sensors and computation hardware with similar performance, but considerably smaller size, lower weight and lower power consumption, would be selected instead. Such components are commercially available.

**Outdoor/Indoor Use.** Though this positioning system is primarily designed for indoor use, seamless transition between indoor and outdoor environments is desired. Ongoing work aims at integrating a GPS receiver to achieve this. Adding GPS would also enable positioning in a georeferenced coordinate system; currently, all results are presented in a local coordinate system defined by the start position and orientation. This requires an algorithm for making robust decisions about when GPS measurements should be considered reliable.

**Visualization Interface.** The visualization interface currently runs on a tablet computer, and is therefore most useful to a commander or group leader who remotely tracks the soldier or firefighter. By adapting the interface to the smaller screen of a smartphone, it would be possible to also give the user access to position and map information.

An important advantage of automatic mapping, according to several users, is the ability to detect hidden spaces in buildings. In certain types of operations, the ability to document a building before leaving it is also considered valuable.

**Real-Time Implementation.** An interesting aspect of performing all computations in real time is that it effectively precludes tuning of algorithm parameters to individual data sets. When analyzing data offline, such tuning is far too common, and it typically overestimates the performance of the system or algorithms. All experiments reported in this article were performed without any such parameter tuning.

**Autonomous or Remote-Controlled Platforms.** The system can also be used on an unmanned ground or aerial vehicle (UGV or UAV). This could be suitable for searching buildings too dangerous to enter. The system would continuously distribute its position and mapping estimates while traveling through the building. This could provide soldiers and rescue teams with a preview of the unknown environment and a possibility to plan their operations in a safer way. Many robotics projects use ROS, which makes integration of the Tiger Chameleon relatively straightforward.

**Firing of Weapons.** During some experiments, we discovered that the measurements of acceleration and angular velocity are affected by close-range gunfire. Relatively long segments of measurements are affected, which makes interpolation of missing values difficult.
During these periods, it may be possible to disregard the inertial measurements, resorting to image-based positioning only.

**Source:** *[http://gpsworld.com](http://gpsworld.com/inertial-for-soldiers-first-responders-in-dangerous-environments/)*