Feature Article: DOI No. 10.1109/MAES.2017.160176

Compressed Fusion of GNSS and Inertial Navigation With Simultaneous Localization and Mapping

Jonghyuk Kim, Australian National University, Canberra, Australia
Jiantong Cheng, Key Laboratory of Complex Aviation System Simulation, Beijing, China
Jose Guivant, University of New South Wales, Sydney, Australia
Juan Nieto, Autonomous Systems Lab, ETH Zurich, Switzerland

INTRODUCTION

Unmanned aerial vehicles (UAVs) have attracted significant attention from both civilian and defense industries over the last few decades. With the advances in low-cost inertial sensor technology and the global navigation satellite system (GNSS), the six degrees-of-freedom (6DOF) vehicle state can be estimated accurately by fusing this information, which has been a crucial step toward real-time guidance and flight control [1], [2]. GNSS and inertial integrated navigation has been extensively studied, with many commercial products in the market for mobile and aerial vehicles [3]–[6]. Depending on the GNSS information used in fusion, the integration architecture has been classified as loosely coupled, tightly coupled, or deep integration. The state-of-the-art architecture fuses the dynamic information from inertial sensors to aid code and/or carrier-phase tracking loops, thus enabling the fast acquisition of a signal in highly dynamic conditions [5], [7]. Although these integrations have been quite successful, precise navigation in cluttered environments is still challenging because of the use of low-quality and low-cost inertial measurement units (IMUs). If satellite signals are partially or fully blocked for an extended period, the system can provide reliable navigation solutions only for a limited time, typically in the range of a few minutes if the IMU is properly calibrated.
Authors' current addresses: J. Kim, College of Engineering and Computer Science, Australian National University, Canberra, ACT 0200, Australia, E-mail: jonghyuk.kim@anu.edu.au. J. Cheng, Key Laboratory of Complex Aviation System Simulation, Beijing, 100076, China. J. Guivant, School of Mechanical and Manufacturing Engineering, University of New South Wales, Sydney, NSW 2052, Australia. J. Nieto, Autonomous Systems Lab, ETH Zurich, Zürich 8092, Switzerland. Manuscript received August 15, 2016; revised January 28, 2017, and March 21, 2017; ready for publication May 3, 2017. Review handled by A. Dempster. 0885/8985/17/$26.00 © 2017 IEEE

To address this problem in the robotics community, simultaneous localization and mapping (SLAM) has been developed; it solves the mapping problem of unknown environments while simultaneously localizing the vehicle using the constructed map [8]–[12]. In contrast to the conventional navigation systems, such as GNSS and terrain and image matching systems, no infrastructure or a priori information about the environment is required, thus making it quite attractive for standalone or aided navigation in global positioning system (GPS)-denied environments. Based on the seminal work by [13] and [14], there have been two types of approaches: filtering and graph optimization. Because this article aims for a real-time solution, only the filtering-based approach is summarized here. Filtering-based SLAM solutions are mostly based on the extended Kalman filter (EKF) [15], unscented Kalman filter (UKF) [16], particle filter known as FastSLAM [17], and sparse extended information filter (SEIF) [18]. The EKF-based solutions have been extensively developed in the robotics community because of their ease of implementation and relatively low computational cost [14].
The nonlinearity involved in the vehicle dynamics and measurements, however, can cause degraded performance or even divergence of the filter because of the errors involved in the linearization process and Jacobian calculation. This issue is also encountered in the SEIF. In contrast, the sampling-based solutions utilize a set of samples to estimate the state and the uncertainty without explicit linearization, making them Jacobian-free, albeit at an added computational cost. For example, unscented SLAM deterministically draws samples and their weights for nonlinear prediction and update. Numerous methods have been investigated for unscented SLAM. For example, [19] presented a solution based on the square-root unscented filter for a visual mono-SLAM problem in which the square root of the covariance matrix is propagated; it has a cubic complexity of O(N^3), with N being the dimension of the state vector. To reduce the computational cost, [20] proposed a hybrid filtering strategy in which only the vehicle pose is heuristically sampled by the unscented filter, while the whole state vector is updated using the conventional EKF. Linearization, however, is still required to predict the cross-correlation between the vehicle and the map. To circumvent this linearization problem, [21] proposed a partial sampling strategy based on the linear regression model. This work showed that Jacobian matrices could be inferred from the propagated sigma points, subsequently achieving a quadratic O(N^2) complexity as in EKF SLAM. The equivalence of the partial sampling and the full sampling methods is formally proved by the authors in [22]. The computational cost of SLAM solutions is significantly affected by the number of landmark features, which typically grows as the vehicle explores more regions.
One of the effective approaches to dealing with this is the compressed form of SLAM [15], [22], in which a large map is partitioned into a local and a global map, and only the local map is updated with a quadratic complexity of O(N_L^2), with N_L being the size of the local map, typically N_L << N. At the same time, the local-to-global correlation information is accumulated, or compressed, and propagated to the global map only when the vehicle crosses the local boundary. Most SLAM work has been for land vehicles or stabilized drones in which the motion can be approximately planar. If the vehicle undergoes full 6DOF motion, inertial sensors are necessary to measure the high dynamics of the vehicle with high data rates. Because vision systems can also provide the full pose (the orientation and position with scale ambiguity in the case of a monocular system), their integration within the SLAM framework has also been actively investigated [23]–[29]. However, the fusion of full SLAM with the GNSS and inertial system has not been studied in depth [30]. Among the reasons are that the robotics community is more interested in the standalone capability of the SLAM system without relying on the GNSS information, as well as the high computational cost hindering the integration. However, the mapping capability in the GPS coordinates (for example, latitude and longitude) is becoming increasingly important for using geographic information system (GIS) databases and Google Maps. This article presents tightly coupled integration of SLAM and the GNSS and inertial system, extending previous work on loosely coupled integration [30] and compressed and unscented SLAM for 3 degrees-of-freedom vehicles [22]. Figure 1 illustrates the simulation output of the fusion system showing the observations of the GNSS pseudorange and pseudorange rates, as well as the down-looking visual feature measurements.
The key benefits of this approach are the continuous and reliable calibration of the receiver clock and IMU errors even with only one satellite in view, as well as offering the mapping in the GPS coordinates. The main contributions of this article are as follows:

- Integration of the GNSS pseudorange and pseudorange rate with the inertial SLAM with a reduced number of satellite vehicles
- Compressed implementation of unscented GNSS and inertial SLAM in a full 6DOF scenario, achieving O(N_L^2) complexity

The overall architecture of the fusion system is shown in Figure 2, in which a compressed-unscented filter is used with 17 inertial and receiver states and 3N map states, with N being the number of features. The raw GNSS pseudorange, the GNSS pseudorange rate, and the IMU measurement are directly fused in the filter, thus forming a tightly coupled architecture. A vision system provides the (triangulated) range, bearing, and elevation measurements to the fusion filter. The inertial navigation system utilizes a local-tangent frame with north, east, and down (NED) coordinates, and the GNSS system is based on the ellipsoidal coordinates with the World Geodetic System 1984 (WGS-84) Earth model. The map features are maintained within the fusion filter, as are the satellite ephemeris data.

Figure 1. Tightly coupled GNSS and SLAM navigation in a simulation environment. The vehicle undergoes 6DOF motion following two racehorse tracks after taking off and fuses GNSS pseudoranges (shown as three magenta lines toward the satellite vehicles) and ground features (shown as a red line down to a feature). The compressed filter partitions the area into a local region (a rectangular box beneath the vehicle, with map uncertainty ellipses in blue) and a global one (outside the box, with map uncertainty ellipses in red). The local region is redefined whenever the vehicle crosses the boundary, and because of the compressed approach, the computational time can be managed in a way suitable for real-time applications.

Figure 2. The integrated navigation system architecture. A compressed-unscented filter is used with 17 inertial and receiver states and 3N map states, with N being the number of features. The raw GPS pseudorange and the pseudorange rate, along with the IMU measurement, are directly fused in the filter, thus forming a tightly coupled architecture.

UNSCENTED GNSS AND INERTIAL SLAM

The stochastic nonlinear state transition and observation models are

x_{k+1} = f(x_k, u_k, w_k),    (1)

z_{k+1} = h(x_k, v_k),    (2)

where x_k and u_k are the state and the control vector at the current discrete time k, respectively, with w_k as the process noise, and z_k is the observation vector, with v_k as the observation noise. In this work, the state vector is partitioned into a local map and a global map, as shown in Figure 3. This limits the quadratic complexity within the local region and is called compressed filtering. This approach is discussed further in the following section. The estimates and covariance of the vehicle and local map states are defined as

\hat{x}_k = [\hat{x}^v_k; \hat{m}_k],  P_k = [P_{xx,k}, P_{xm,k}; P_{mx,k}, P_{mm,k}],    (3)

with \hat{x}^v_k being the estimate of the vehicle state. The state vector of unscented inertial SLAM consists of 17 inertial and GNSS receiver states (15 inertial navigation system states and 2 receiver clock states) and variable-size map states, with the control input from the IMU measurement:

x^v_k = [p^T, v^T, \psi^T, b_a^T, b_g^T, ct_b, ct_d]^T,    (4)

m_k = [m_1^T, m_2^T, ..., m_N^T]^T,    (5)

u_k = [f^T, \omega^T]^T.    (6)

The unscented filter in SLAM utilizes a partial sampling scheme, because the nonlinearity appears only in the vehicle state transition, whereas map states have linear dynamics.
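To make the bookkeeping concrete, the 17 vehicle/receiver states and the 3N map states can be laid out in a single vector with the matching block-partitioned covariance of eq. (3). The following is a minimal numpy sketch; the index layout and names are illustrative, not taken from the article:

```python
import numpy as np

# Vehicle/receiver block (17 states): position (3), velocity (3),
# Euler attitude (3), accelerometer bias (3), gyroscope bias (3),
# receiver clock bias ct_b (1), and receiver clock drift ct_d (1).
VEH_DIM = 17

def make_state(n_features):
    """Allocate the joint SLAM state vector and covariance of eq. (3)."""
    dim = VEH_DIM + 3 * n_features          # 17 + 3N
    x = np.zeros(dim)                       # [x_v; m_1; ...; m_N]
    P = np.eye(dim)                         # full joint covariance
    return x, P

def blocks(P):
    """Split P into the four blocks of eq. (3)."""
    Pxx = P[:VEH_DIM, :VEH_DIM]             # vehicle/receiver block
    Pxm = P[:VEH_DIM, VEH_DIM:]             # vehicle-to-map cross terms
    Pmx = P[VEH_DIM:, :VEH_DIM]
    Pmm = P[VEH_DIM:, VEH_DIM:]             # map-to-map block
    return Pxx, Pxm, Pmx, Pmm

x, P = make_state(n_features=4)
Pxx, Pxm, Pmx, Pmm = blocks(P)
print(x.shape, Pxx.shape, Pmm.shape)        # (29,) (17, 17) (12, 12)
```

The cross blocks Pxm and Pmx are what the compressed implementation, described later, accumulates rather than updating at every step.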
The equivalence of the partial sampling and full sampling schemes was provided in [22], showing that the performance of the partially sampled filter is not affected by the number of feature landmarks. The components of the state and control vectors in (4)–(6) are defined as follows:

- Vehicle position (in the NED navigation frame): p = [x, y, z]^T
- Vehicle velocity: v = [v_x, v_y, v_z]^T
- Vehicle attitude (Euler angles): \psi = [\phi, \theta, \psi]^T
- Accelerometer bias (in the roll-pitch-yaw body frame): b_a = [b^a_x, b^a_y, b^a_z]^T
- Gyroscope bias: b_g = [b^g_x, b^g_y, b^g_z]^T
- Receiver clock bias: ct_b, with c being the speed of light
- Receiver clock drift: ct_d
- Map position: m_i = [m_{ix}, m_{iy}, m_{iz}]^T
- Accelerometer measurement: f = [f_x, f_y, f_z]^T
- Gyroscope measurement: \omega = [\omega_x, \omega_y, \omega_z]^T

Figure 3. The large covariance matrix is partitioned into a local matrix P_LL and a global matrix P_GG. The local covariance consists of the inertial and receiver matrix and the local map matrix. The cross-covariance P_LG is accumulated, or compressed, during the local update cycle and only used during a global update cycle.

PREDICTION

The estimates are propagated using the state transition model, which can be decoupled into a nonlinear part and a linear part as

\hat{p}_{k+1} = \hat{p}_k + \hat{v}_k \Delta T
\hat{v}_{k+1} = \hat{v}_k + (C^n_{b,k} (f_k - \hat{b}_{a,k}) + g^n) \Delta T
\hat{\psi}_{k+1} = \hat{\psi}_k + E^n_{b,k} (\omega_k - \hat{b}_{g,k}) \Delta T
\hat{b}_{a,k+1} = \hat{b}_{a,k}
\hat{b}_{g,k+1} = \hat{b}_{g,k}
c\hat{t}_{b,k+1} = c\hat{t}_{b,k} + c\hat{t}_{d,k} \Delta T
c\hat{t}_{d,k+1} = c\hat{t}_{d,k}
\hat{m}_{k+1} = \hat{m}_k    (7)

with \Delta T being the prediction sampling time, C^n_{b,k} the body-to-navigation rotation matrix, E^n_{b,k} the body-rate-to-Euler-rate matrix, and g^n being the gravitational vector [0, 0, -9.8 m/s^2]^T. This is a simplified model of a strapdown inertial navigation system targeting low-cost inertial sensors with accelerometer biases on the order of 2 mg and gyroscope biases on the order of 100°/h. Although the developed simulator (presented later in this article) internally simulates Earth and the navigational frame rotation, they are not included in this model because of the low quality of the sensors used. In addition, the effect of Earth's curvature on the attitude compensation, called Schuler tuning, is ignored in the model, because we are considering the small flight coverage in the simulation. We use the UKF to predict the state by propagating the sigma points; for detailed implementation, refer to [21] and [31]. One key aspect is that the Jacobian matrix is inferred from the sigma points rather than linearized, as discussed in our previous work [22].

UPDATE

The measurement consists of either GNSS (pseudorange \rho^i and pseudorange rate \dot{\rho}^i) from an ith satellite vehicle or range r^j_k, bearing \phi^j_k, and elevation \theta^j_k from a jth landmark feature as follows:

z_k = [\rho^i_k, \dot{\rho}^i_k]^T, for an ith satellite vehicle, or
z_k = [r^j_k, \phi^j_k, \theta^j_k]^T, for a jth feature.

The observation model of the pseudorange and the pseudorange rate, respectively, is

\rho^i_k = \sqrt{(X^i - x)^2 + (Y^i - y)^2 + (Z^i - z)^2} + ct_b + \upsilon_\rho
\dot{\rho}^i_k = (V^i_x - v_x) l_x + (V^i_y - v_y) l_y + (V^i_z - v_z) l_z + ct_d + \upsilon_{\dot{\rho}},

where [X^i, Y^i, Z^i]^T and [V^i_x, V^i_y, V^i_z]^T are the ith satellite vehicle's position and velocity, respectively, defined in the local NED coordinate system, and [l^i_x, l^i_y, l^i_z] is the line-of-sight vector of the satellite vehicle from the user vehicle, with \upsilon_\rho and \upsilon_{\dot{\rho}} being the pseudorange and the pseudorange rate noise, respectively. In this observation model, the ionospheric and tropospheric delay errors, as well as any multipath error, are not simulated and thus are not included in the filter model. For the range, bearing, and elevation measurement, we first compute the relative position vector u^j = [u^j_x, u^j_y, u^j_z]^T of the jth feature from the vehicle and then transform it to the sensor-coordinate frame (refer to [32] for more details).
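As a concrete illustration, the prediction step of eq. (7) and the pseudorange/pseudorange-rate models above can be sketched in Python as follows. The DCM and Euler-rate helpers are standard strapdown textbook forms, all variable names are ours rather than the article's, noise terms are omitted, and the article's stated value g^n = [0, 0, -9.8]^T is kept:

```python
import numpy as np

G_N = np.array([0.0, 0.0, -9.8])   # gravitational vector as stated in the text

def dcm_body_to_nav(phi, theta, psi):
    """Body-to-NED rotation C_b^n for roll-pitch-yaw (standard ZYX convention)."""
    cr, sr = np.cos(phi), np.sin(phi)
    cp, sp = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(psi), np.sin(psi)
    return np.array([[cp*cy, -cr*sy + sr*sp*cy,  sr*sy + cr*sp*cy],
                     [cp*sy,  cr*cy + sr*sp*sy, -sr*cy + cr*sp*sy],
                     [-sp,    sr*cp,             cr*cp]])

def euler_rate_matrix(phi, theta):
    """E_b^n mapping body angular rates to Euler-angle rates (standard form)."""
    cr, sr, tp = np.cos(phi), np.sin(phi), np.tan(theta)
    return np.array([[1.0, sr*tp,            cr*tp],
                     [0.0, cr,               -sr],
                     [0.0, sr/np.cos(theta), cr/np.cos(theta)]])

def predict(p, v, att, ba, bg, ctb, ctd, f, w, dT):
    """One cycle of the simplified strapdown prediction of eq. (7)."""
    C = dcm_body_to_nav(*att)
    E = euler_rate_matrix(att[0], att[1])
    return (p + v*dT,                        # position
            v + (C @ (f - ba) + G_N)*dT,     # velocity
            att + E @ (w - bg)*dT,           # Euler attitude
            ba, bg,                          # biases modeled as random constants
            ctb + ctd*dT, ctd)               # clock bias driven by clock drift

def gnss_obs(sat_p, sat_v, p, v, ctb, ctd):
    """Noise-free pseudorange and pseudorange rate for one satellite (NED)."""
    d = sat_p - p
    rng = np.linalg.norm(d)
    los = d / rng                            # unit line of sight [lx, ly, lz]
    return rng + ctb, (sat_v - v) @ los + ctd

# Stationary sanity check: level attitude, accelerometer reading exactly -g^n.
z3 = np.zeros(3)
p, v, att, ba, bg, ctb, ctd = predict(z3, z3, z3, z3, z3, 0.0, 1e-6,
                                      f=-G_N, w=z3, dT=0.01)
rho, rho_dot = gnss_obs(np.array([0.0, 0.0, -20200e3]), z3, p, v, ctb, ctd)
print(np.allclose(v, 0.0), rho - 20200e3)    # True, and ~1e-8 (the clock bias)
```

In the filter itself these functions are evaluated at each sigma point rather than once at the mean, which is what makes the approach Jacobian-free.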
The Cartesian-to-polar coordinate transformation yields the observation

r^j_k = \sqrt{(u^j_x)^2 + (u^j_y)^2 + (u^j_z)^2} + \upsilon_r
\phi^j_k = \arctan(u^j_y / u^j_x) + \upsilon_\phi
\theta^j_k = \arctan(u^j_z / \sqrt{(u^j_x)^2 + (u^j_y)^2}) + \upsilon_\theta    (8)

with \upsilon_r, \upsilon_\phi, and \upsilon_\theta being the range, bearing, and elevation noise, respectively. At the update stage, a set of sigma points is selected from the current state (typically, the control input is not related to the measurement and thus is not included in the sigma points). The sigma points are then propagated through the nonlinear observation models to predict the observations, thus generating the innovation and the filter update. As in the prediction step, the Jacobian is inferred and used to compute the cross-correlation term, which is required to update the filter. If a new (m + 1)th landmark is detected, the sigma points are also used to predict the new feature position and the new cross-correlation between the feature and the state [22], [31].

Figure 4. The evolutions of the 172 feature uncertainties (only 1σ values on the north axis are shown for clarity), along with the number of satellite vehicles, which drops from seven to three and one (right axis). As each feature is added to the filter on the fly, its initial covariance monotonically decreases because of subsequent filter updates. The large uncertainties until 60 s and between 130 and 170 s are related to the new features added, or georeferenced, from each trajectory cycle. However, those uncertainties drop rapidly at the loop-closure events, which happen at 60 and 170 s.

COMPRESSED IMPLEMENTATION

As described in the previous section and Figure 3, the state estimates and the covariance matrix are partitioned into local and global parts to reduce the computational complexity of SLAM [15].
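The core saving can be illustrated with a toy numpy sketch (our own simplification of the compressed filtering idea in [15], not the article's code): during local cycles the local-map Jacobians are only accumulated into a single product, and the local-to-global cross-covariance is corrected once, when the vehicle leaves the region.

```python
import numpy as np
rng = np.random.default_rng(0)

n_local, n_global = 5, 8
P_LG = rng.standard_normal((n_local, n_global))   # local-to-global cross-covariance

# Naive approach: propagate the cross terms at every local step, each step
# costing O(n_local * n_global), i.e., growing with the global map size.
P_naive = P_LG.copy()
local_jacobians = [np.eye(n_local) + 0.1*rng.standard_normal((n_local, n_local))
                   for _ in range(10)]
for F_L in local_jacobians:
    P_naive = F_L @ P_naive

# Compressed approach: accumulate the local Jacobians only (the global
# columns are untouched), then apply one deferred correction at the
# boundary crossing.
Phi = np.eye(n_local)
for F_L in local_jacobians:
    Phi = F_L @ Phi          # O(n_local^2) per step, independent of map size
P_compressed = Phi @ P_LG    # single deferred global correction

print(np.allclose(P_naive, P_compressed))   # -> True
```

The two results agree exactly because matrix multiplication is associative; the compressed form simply reorders when the global columns are touched.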
The state vector and the covariance matrix are represented as

\hat{x} = [\hat{x}_L; \hat{m}_G],  with \hat{x}_L = [\hat{x}_v; \hat{m}_L],    (9)

P = [P_{LL}, P_{LG}; P_{GL}, P_{GG}],  with P_{LL} = [P_{xx}, P_{xm}; P_{mx}, P_{mm}],    (10)

where x_L includes the vehicle state x and the local landmarks m_L, while the other landmarks are stacked as m_G. The key aspect of the compressed implementation is in accumulating, or compressing, the cross-correlation terms: P_xm between the vehicle and the local map and P_LG between the local and the global maps. This is achieved by preserving the local Jacobian F_{L,k} and three auxiliary variables (for more details, refer to [15]). Whenever the cross-correlation term P_LG is required, a global Jacobian is formed as

F_{G,k} = [F_{L,k}, 0; 0, I_{N_G}],    (11)

with N_G being the dimension of the global map. Similarly, for the newly registered features, the global initialization Jacobian G_G is constructed from the local compressed Jacobian G_L:

G_{G,k} = [G_{L,k}, 0; 0, I_{N_G}].    (12)

Because this work adopts the unscented filtering framework, all Jacobians are induced from the sampled sigma points rather than direct linearization, thus reducing the effects of the nonlinearity in the system.

SIMULATION RESULTS

To validate the tightly coupled SLAM performance on a full 6DOF platform scenario, the high-fidelity simulator GPSoft is used to generate the flight trajectory, along with raw measurement data.

SIMULATION ENVIRONMENT

The simulation parameters are listed in Table 1 for the IMU, GPS, and range, bearing, and elevation sensors. The IMU has an accelerometer bias of 2 mg and a gyroscope bias of 100°/h, which are the typical performance of a low-cost, automotive-grade IMU. The GPS sensor utilizes the WGS-84 Earth model and has a pseudorange noise of 2 m (1σ) and a pseudorange rate noise of 1 m/s (1σ). The receiver clock bias and drift are simulated to have values of 0.1 m and 10^-6 m/s, respectively. The range, bearing, and elevation sensor has a 5-Hz output rate with a down-looking field of view (FOV) of 30°.

Table 1. The Simulation Parameters Used for IMU, GNSS, and the Range, Bearing, and Elevation (RBE) Sensors

IMU: sampling rate 100 Hz; accel bias 2 mg; gyro bias 100°/h; accel bias stability 0.02 g (1σ); gyro bias stability 100°/h (1σ); accel bias correlation time 300 s; gyro bias correlation time 300 s.
GPS: frame rate 1 Hz; receiver clock bias 0.1 m; receiver clock drift 10^-6 m/s; pseudorange noise 2 m (1σ); pseudorange rate noise 1 m/s (1σ); Earth model WGS-84.
RBE: frame rate 5 Hz; FOV angle 30°; range noise 1 m; bearing noise 1°; elevation noise 1.5°.

The flight trajectory generated is shown in Figure 5. It consists of two racehorse tracks after taking off, and each track has two loops at an altitude of 50 m. The flight time is 245 s, and the average flight speed is 20 m/s. MATLAB was used to implement the SLAM filter on a computer with an Intel i7 processor at 3.4 GHz and 8 GB of random access memory (RAM). For the data association between the feature observations and the map, the joint compatibility branch and bound (JCBB) method is implemented [33], in which a set of feature measurements is matched to multiple hypotheses of map features to improve the robustness of matching.

TIGHTLY COUPLED SLAM RESULTS

Figures 5 and 6 illustrate the three-dimensional trajectory and the evolution of the states in the tightly coupled SLAM and GPS system.
Figure 5 depicts the results during the first racehorse loop, initially fusing the pseudoranges from seven satellite vehicles (in magenta lines upward) and the ground feature measurements (in red). The rectangular box underneath the vehicle is the dynamic local region boundary with local map uncertainties shown in blue, while the global feature uncertainties are shown in red outside of the box. The number of satellite vehicles (SVs) drops to three while maintaining the navigation performance, and the first loop-closure event, revisiting the initial place, successfully reduces the map and vehicle uncertainties. Figure 6 shows the successive loops with a single SV measurement. The plots show that the uncertainties of the vehicle and map further decrease monotonically because of the successive loop closures. The bottom-right plot shows the top-down view of the final trajectory, showing the estimated features on the ground with uncertainty ellipsoids (10σ is used for clarity).

Recall that Figure 4 showed the evolutions of the feature uncertainties in the northward direction (172 registered out of 400 total features, and only the north axis is shown for clarity), along with the number of satellite vehicles dropping from seven to three at 50 s and to one at 100 s (the right axis of the plot). Because new features are added to the filter during the flight, their initial covariances monotonically decrease due to subsequent observations. Around 60 s, the map uncertainties decrease below 0.5 m, as well as around 170 s, which is the effect of the loop closure in each track; that is, the vehicle reobserves the previously registered map features and thus has a chance to estimate the accumulated errors during the loops. The large uncertainties that appeared from 130 s are related to the registration of new features along the second racehorse track.

Figure 5. Detailed views of tightly coupled SLAM and GPS during the first round. Top left: after taking off, the system initially fuses the pseudoranges from seven satellite vehicles. The line of sight of each of the SVs is shown in magenta. The ground features are shown in red. The rectangular box on the ground is the local region boundary with local map uncertainties (in blue). Map uncertainties outside the region are shown in red. The top-right and bottom-left panels present incremental map building, simultaneously correcting IMU errors. The number of SVs drops to three. The bottom-right panel shows the vehicle revisiting the initial place (loop closure) and the reduced map and vehicle uncertainties.

Figures 7 to 9 show 12 Monte-Carlo simulation results for the navigation errors and IMU bias estimations. The north and east position errors with 2σ uncertainties are shown in Figure 7, in which the errors are computed as the difference of the estimated and true positions. Because the number of satellites drops to three around 50 s, the position error starts growing until 60 s, when the vehicle closes the loop and thus improves map accuracy. When the vehicle exits the first circuit and registers new features from the second circuit, the position error increases again (the number of satellite vehicles is one during this interval) until the loop closure occurs at 170 s. However, those uncertainties from the two tracks approach a similar lower bound, which is determined by the sensing uncertainties. From these, the complementary benefits of tight integration of GNSS and inertial navigation with SLAM can be observed. SLAM can effectively aid the low-cost inertial navigation system under a lower number of visible GPS satellites (one satellite vehicle in this demonstration). Figure 8 provides 12 Monte-Carlo simulation results for roll and yaw angle errors, showing consistent convergence but with less sensitivity to the loop closures.
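The 2σ containment check behind plots of this kind can be sketched as follows, with purely synthetic data (none of the article's results are reproduced here); a consistent filter should keep roughly 95% of the Monte-Carlo error samples inside its reported 2σ bound:

```python
import numpy as np
rng = np.random.default_rng(1)

def two_sigma_containment(err_runs, sigma):
    """Fraction of Monte-Carlo error samples inside the filter's 2-sigma bound."""
    inside = np.abs(err_runs) <= 2.0 * sigma      # broadcasts over runs x time
    return inside.mean()

# 12 synthetic runs of a north-position error whose true standard deviation
# matches the filter-reported sigma, i.e., a consistent filter.
sigma = np.linspace(2.0, 0.5, 200)                # reported 1-sigma over time
err = rng.standard_normal((12, 200)) * sigma      # errors drawn at that stddev
print(round(two_sigma_containment(err, sigma), 2))  # close to 0.95
```

A containment fraction well above this (errors far inside the bound) is the signature of the conservative covariance noted later for the clock states.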
Figure 9(a) depicts the estimated bias from the accelerometer x-axis, showing the reliable convergence to the true bias (2.0 × 10^-2 m/s^2). The estimated gyroscope x-axis bias also converges to the truth (100°/h). One of the benefits of tightly coupled fusion is the capability of estimating the GPS receiver clock bias and drift. Figure 10 illustrates the convergence of the estimated receiver bias and drift to 0.1 m and 10^-6 m/s, respectively, thus enabling a precise timing output even when there is only one visible GPS satellite. Although the clock bias and drift converge to the true values, the filter covariance parameters seem too conservative relative to the actual errors. This implies that further refinement of the error models may be warranted.

Figure 6. Detailed views of tightly coupled SLAM and GPS during successive loops. The number of SVs drops to one for the rest of the flight. The plots show that the uncertainties of the vehicle and map further decrease monotonically because of successive reobservations and loop closures. The bottom-right panel shows the top-down view of the map and trajectory.

COMPUTATIONAL COMPLEXITY

The computational complexity of the filter is determined by the UKF sampling strategy and the size of the feature map (N). The complexity of the full-sampling UKF increases rapidly as O(N^3). Because the partial sampling strategy is adopted in this work, the UKF attains an O(N^2) complexity, which is comparable to that of the EKF. Figure 11(a) compares the UKF update times of an uncompressed version (in dash-dotted red lines) and a compressed version (in solid blue lines). The map size N of the uncompressed UKF increases as the vehicle explores the area, as does the filter update time. However, the compressed version maintains a constant local map size of N = N_L ≤ 12 during the simulation, making it suitable for real-time implementation.
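The gap between the two curves can be reproduced qualitatively with a toy benchmark (ours, not the article's filter): a covariance update touching the full 17 + 3N matrix versus one confined to the 17 + 3N_L local block, with a placeholder 3-row observation Jacobian.

```python
import time
import numpy as np

def update_cost(n, n_local, trials=20):
    """Rough wall-clock cost of one covariance update: full map vs. local block."""
    P = np.eye(17 + 3 * n)
    K = np.random.default_rng(2).standard_normal((P.shape[0], 3))  # gain (placeholder)
    H = K.T                                  # placeholder 3-row observation Jacobian

    t0 = time.perf_counter()
    for _ in range(trials):
        P_full = P - K @ (H @ P)             # touches the whole (17+3n)^2 matrix
    t_full = time.perf_counter() - t0

    nl = 17 + 3 * n_local
    t0 = time.perf_counter()
    for _ in range(trials):
        P[:nl, :nl] = P[:nl, :nl] - K[:nl] @ (H[:, :nl] @ P[:nl, :nl])  # local block only
    t_local = time.perf_counter() - t0
    return t_full, t_local

t_full, t_local = update_cost(n=400, n_local=12)
print(t_local < t_full)   # the local-block update is far cheaper
```

The local cost depends only on N_L, so it stays flat as the explored map grows, matching the compressed curve in Figure 11(a).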
Figure 11(b) compares the cumulative runtime, which includes the inertial navigation, prediction, and update, for the uncompressed and compressed versions of the EKF and UKF. The total flight time is 245 s, and the total running time of the UKF is 219 s, which is the slowest, being slightly slower than that of the EKF at 204 s. The total cumulative time of the compressed UKF (26 s) is comparable to that of the compressed EKF (20 s), and, considering the total flight time, both compressed filters seem to be efficient enough to run in real time.

Figure 7. (a) The estimated north and (b) east position errors with uncertainties. The errors are from 12 Monte Carlo simulations in which the error (in blue trace) is found by taking the difference between the estimates and the true values. The navigation accuracy is mostly dependent on the SLAM system, because even with a single GNSS SV, the accuracy improves from the loop closures, as can be seen around 60 and 170 s.

Figure 8. (a) The estimated roll error with uncertainty and (b) yaw error. Result of 12 Monte Carlo simulations.

Figure 9. (a) The estimated x-axis accelerometer bias with uncertainty and (b) x-axis gyroscope bias. Result of 12 Monte Carlo simulations.

Figure 10. (a) The estimated receiver clock bias with uncertainty and (b) clock drift, both converging to the true values shown as black lines (0.1 m and 10^-6 m/s, respectively).

Figure 11. The computational times of various filter configurations. (a) The computational time of filter updates for the UKF (in dash-dotted red lines) and the compressed UKF (in solid blue lines). The time includes both GPS and SLAM update cycles. The UKF update time increases whenever new features are added to the filter between 600 and 800 steps, while the compressed version is maintained near constant, although there exist frequent spikes during the global updates. (b) The cumulative time of the UKF (in a dash-dotted red line), EKF (in a dotted cyan line), compressed UKF (in a dashed red line), and compressed EKF (in a solid blue line).

CONCLUSIONS

In this article, we presented a computationally efficient and tightly coupled integration of GNSS and inertial navigation with SLAM, aiming for continuous 6DOF navigation solutions in cluttered environments. The unscented SLAM filter was developed to be suitable for IMU-based navigation and then extended to a compressed form by dividing the map into a small, manageable local map and a global map. The locally collected information was compressed and is only propagated to the global map whenever the vehicle crosses the local boundary, thus achieving O(N_L^2) computational complexity. The results from a high-fidelity simulator demonstrated the benefits of the tightly coupled integration of raw sensor data, providing accurate GNSS receiver and IMU calibrations even under a single GNSS satellite vehicle. The SLAM and GNSS filter has 17 integrated navigation states and 516 map states (172 features × 3 feature positions). The total processing time of the proposed filter was 26 s out of the total flight time of 245 s, yielding around a 0.1-s filter processing time per second and thus making it quite suitable for real-time applications. The use of real flight data is being investigated to verify the performance in real conditions. In addition, extending the concept of geometric dilution of precision (GDOP) is being investigated to incorporate the ground features in a way that can provide a generalized measure of the optimal sensing geometry.
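As background for that direction, classical GDOP is computed from the geometry matrix of unit line-of-sight vectors. The toy sketch below uses our own geometry (not the article's) and a simplified way of folding a ranging feature into the same geometry matrix, to show how a single down-looking observation can sharpen the vertical axis:

```python
import numpy as np

def los_ned(az_deg, el_deg):
    """Unit line-of-sight vector in NED for a given azimuth/elevation (degrees)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el)*np.cos(az), np.cos(el)*np.sin(az), -np.sin(el)])

def dops(G):
    """(GDOP, VDOP) from the geometry matrix G = [unit LOS | clock column]."""
    Q = np.linalg.inv(G.T @ G)
    return np.sqrt(np.trace(Q)), np.sqrt(Q[2, 2])

# Four satellites at mixed elevations (clock column = 1 for pseudoranges).
sats = np.array([np.r_[los_ned(az, el), 1.0]
                 for az, el in [(0, 30), (90, 60), (180, 45), (270, 70)]])
gdop_sat, vdop_sat = dops(sats)

# Append one down-looking ranging "feature" row: pure vertical information,
# no receiver-clock term (a range measurement is clock-free).
feature = np.array([[0.0, 0.0, 1.0, 0.0]])
gdop_aug, vdop_aug = dops(np.vstack([sats, feature]))

print(vdop_aug < vdop_sat)   # -> True: the ground feature improves VDOP
```

Because adding a measurement row adds a positive semidefinite term to G^T G, no DOP component can get worse, and a row with a strong vertical component strictly improves VDOP.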
For example, a reduced number of ground features can be selected to compensate the poor vertical dilution of precision (VDOP) from the GNSS system. ( ) REFERENCES [1] Parkinson, B. Origins, evolution, and future of satellite navigation. Journal of Guidance, Control, and Dynamics, Vol. 20, 1 (1997), 11–25. [2] Kim, J., Sukkarieh, S., and Wishart, S. Real-time navigation, guidance and control of a UAV using low-cost sensors. In Proceedings of the International Conference of Field and Service Robotics, Yamanashi, Japan, 2003, 95–100. [3] George, M., and Sukkarieh, S. Tightly coupled INS/GPS with bias estimation for UAV applications. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), 2005. [4] Li, Y., Wang, J., Rizos, C., Mumford, P., and Ding, W. Low-cost tightly coupled GPS/INS integration based on a nonlinear Kalman filtering design. In Proceedings of the ION National Technical Meeting, 2006, 18–20. [5] Sun, D., Petovello, M. G., and Cannon, M. E., “Ultratight GPS/ Reduced-IMU Integration for Land Vehicle Navigation,” in IEEE Transactions on Aerospace and Electronic Systems, Vol. 49, 3 (2013), 1781–1791. [6] Kennedy, S., and Rossi, J. Performance of a deeply coupled commercial grade GPS/INS system from KVH and Novatel Inc. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, 2008, 17–24. [7] Babu, R., Wang, J., and Dempster, A. Performance of code tracking loops in ultra-tight GPS/INS/PL integration. The European Navigation Conference, Munich, Pages 1–8, 2005. [8] Durrant-Whyte, H., and Bailey, T. Simultaneous localization and mapping: part I. IEEE Robotics and Automation Magazine, Vol. 13, 2 (2006), 99–110. [9] Bailey, T., and Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): part II. IEEE Robotics & Automation Magazine, Vol. 13, 3 (2006), 108–117. [10] Newman, P., Leonard, J., Tardos, J. D., and Neira, J. Explore and return: experimental validation of real-time concurrent mapping and localization. 
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Vol. 2, Washington, DC, May 2002, 1802–1809.
[11] Fairfield, N., Kantor, G., and Wettergreen, D. Real-time SLAM with octree evidence grids for exploration in underwater tunnels. Journal of Field Robotics, Vol. 24, 1–2 (Jan. 2007), 3–21.
[12] Ribas, D., Ridao, P., Tardós, J. D., and Neira, J. Underwater SLAM in manmade structured environments. Journal of Field Robotics, Vol. 25, 11–12 (Dec. 2008), 898–921.
[13] Thrun, S., Burgard, W., and Fox, D. A probabilistic approach to concurrent mapping and localization for mobile robots. Autonomous Robots, Vol. 5, 3–4 (1998), 253–271.
[14] Dissanayake, M. G., Newman, P., Clark, S., Durrant-Whyte, H. F., and Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, Vol. 17, 3 (Jun. 2001), 229–241.
[15] Guivant, J. E., and Nebot, E. M. Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Transactions on Robotics and Automation, Vol. 17, 3 (Jun. 2001), 242–257.
[16] Martinez-Cantin, R., and Castellanos, J. A. Unscented SLAM for large-scale outdoor environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Alberta, Canada, Aug. 2005, 3427–3432.
[17] Montemerlo, M. FastSLAM: a factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI International Conference on Artificial Intelligence, Edmonton, Canada, 2002, 593–598.
[18] Thrun, S., Liu, Y., Koller, D., Ng, A. Y., Ghahramani, Z., and Durrant-Whyte, H. Simultaneous localization and mapping with sparse extended information filters. International Journal of Robotics Research, Vol. 23, 7–8 (May 2004), 693–716.
[19] Holmes, S., Klein, G., and Murray, D. W. A square root unscented Kalman filter for visual monoSLAM.
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, May 2008, 3710–3716.
[20] Andrade-Cetto, J., Vidal-Calleja, T., and Sanfeliu, A. Unscented transformation of vehicle states in SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, Apr. 2005, 323–328.
[21] Huang, G. P., Mourikis, A. I., and Roumeliotis, S. I. A quadratic-complexity observability-constrained unscented Kalman filter for SLAM. IEEE Transactions on Robotics, Vol. 29, 5 (2013), 1226–1243.
[22] Cheng, J., Kim, J., Jiang, Z., and Yang, X. Compressed unscented Kalman filter-based SLAM. In Proceedings of the IEEE Conference on Robotics and Biomimetics (ROBIO), Bali, Dec. 2014, 1602–1607.
[23] Kim, J., and Sukkarieh, S. Real-time implementation of airborne inertial-SLAM. Robotics and Autonomous Systems, Vol. 55, 1 (Jan. 2007), 62–71.
[24] Mourikis, A. I., Trawny, N., Roumeliotis, S. I., Johnson, A. E., Ansar, A., and Matthies, L. Vision-aided inertial navigation for spacecraft entry, descent, and landing. IEEE Transactions on Robotics, Vol. 25, 2 (2009), 264–280.
[25] Nützi, G., Weiss, S., Scaramuzza, D., and Siegwart, R. Fusion of IMU and vision for absolute scale estimation in monocular SLAM. Journal of Intelligent and Robotic Systems, Vol. 61, 1 (2011), 287–299.
[26] Martinelli, A. Vision and IMU data fusion: closed-form solutions. IEEE Transactions on Robotics, Vol. 28, 1 (2012), 44–60.
[27] Hesch, J., Kottas, D., Bowman, S., and Roumeliotis, S. Camera-IMU-based localization: observability analysis and consistency improvement. International Journal of Robotics Research, Vol. 33, 1 (2014), 182–201.
[28] Nikolic, J., Burri, M., Gilitschenski, I., Nieto, J., and Siegwart, R. Nonparametric extrinsic and intrinsic calibration of visual-inertial sensor systems. IEEE Sensors Journal, Vol. 16, 13 (July 2016), 5433–5443.
[29] Burri, M., Oleynikova, H., Achtelik, M. W., and Siegwart, R. Real-time visual-inertial mapping, re-localization and planning onboard MAVs in unknown environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept. 2015, 1872–1878.
[30] Kim, J., and Sukkarieh, S. 6DOF SLAM aided GNSS/INS navigation in GNSS denied and unknown environments. Journal of Global Positioning Systems, Vol. 4, 1–2 (Dec. 2005), 120–128.
[31] Julier, S. J., and Uhlmann, J. K. Unscented filtering and nonlinear estimation. Proceedings of the IEEE, Vol. 92, 3 (2004), 401–422.
[32] Kim, J.-H., and Sukkarieh, S. Airborne simultaneous localisation and map building. In Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 1, 2003, 406–411.
[33] Neira, J., and Tardós, J. D. Data association in stochastic mapping using the joint compatibility test. IEEE Transactions on Robotics and Automation, Vol. 17, 6 (2001), 890–897.
