Loosely-Coupled Vision/INS Integrated Navigation System

  • Kim, Youngsun (Payload Electronics Team, Korea Aerospace Research Institute) ;
  • Hwang, Dong-Hwan (Department of Electronics Engineering, Chungnam National University)
  • Received : 2017.04.10
  • Accepted : 2017.05.17
  • Published : 2017.06.15

Abstract

Since GPS signals are vulnerable to interference and obstruction, many alternative aiding systems have been proposed for integration with an inertial navigation system. Among these, vision-aided methods have become attractive owing to their benefits in weight, cost, and power consumption. This paper proposes a loosely-coupled vision/INS integrated navigation method that can work in GPS-denied environments. The proposed method improves navigation accuracy by correcting INS navigation and sensor errors using the position and attitude outputs of a landmark-based vision navigation system. Furthermore, it has the advantage of providing a redundant navigation output independent of the INS output. Computer simulations and van tests were carried out to show the validity of the proposed method. The results show that the proposed method works well and provides reliable navigation outputs with improved performance.

Keywords

1. INTRODUCTION

An Inertial Navigation System (INS) provides navigation results using the acceleration and angular velocity measured by an Inertial Measurement Unit (IMU). An INS can perform navigation independently without external aids and can provide high-accuracy navigation results over short periods of time. In particular, its application fields have recently expanded with low-cost, small, low-power Micro Electro-Mechanical System (MEMS) Strap-Down INS (SDINS) (Gao et al. 2014). However, since INS navigation errors grow continuously with time, an INS is generally used together with a position-fixing navigation sensor. A typical example is the Global Positioning System (GPS)/INS (Groves 2008). However, GPS signals are vulnerable to external interference such as jamming and spoofing, and reliable navigation results cannot be expected in urban areas with many buildings or in forests, where signals are unavailable. Therefore, an integrated navigation system is needed to provide reliable navigation results in environments where GPS signals are not available (George & Sukkarieh 2007).

Vision navigation, which uses the image data of a camera, has the disadvantages of a large computational load and a slow output rate, but the advantages of low power consumption and low cost. With the development of digital image sensors and computer technology, real-time vision navigation has been used in various fields (Veth 2011). Vision navigation can be classified into map-based, landmark-based, and visual odometry methods, depending on how the navigation solution is determined (Groves 2013). In the map-based method, the navigation solution is estimated by comparing the image data measured by a vision sensor with a map from the system or a database. In the landmark-based method, the solution is estimated from known landmark information and the measurements of the landmarks on the focal plane. The visual odometry method, which performs navigation using consecutive images, is a dead-reckoning technique, unlike the two aforementioned methods. Among these, the landmark-based method is simple, and its error does not grow during long-term navigation.

In recent years, INS and vision navigation systems have been required to provide reliable navigation results for unmanned aircraft, automobiles, and mobile robots, and various integration methods have been studied. For example, velocities provided by a camera (Yue et al. 2007), direction angles (Wang & Wang 2008), and gimbal angles (Oh et al. 2006) have been used to generate the measurements of the integrated navigation filter, and a method that improves navigation performance in environments with insufficient landmarks by directly using the measurements on the camera focal plane has also been proposed (Kim & Hwang 2016). In addition, integrated navigation of an INS and a visual odometry system using stereo cameras, as well as integration methods that estimate the relative motion of a vehicle using images and INS data, have been investigated (Tardif et al. 2010).

This paper proposes a vision/INS integrated navigation method that can provide accurate and reliable navigation results in environments where GPS is weak or unavailable. The proposed system is based on a loosely-coupled method in which the navigation and sensor errors of the INS are estimated and corrected using the navigation information provided by landmark-based vision navigation. Since the vision navigation module and the INS operate independently, the operation of the navigation system can be easily monitored.

In Section 2, the determination of the navigation solution from the focal plane measurements of landmarks is explained; in Section 3, the proposed vision/INS integrated navigation system is described. In Section 4, the navigation performance of the proposed algorithm is evaluated through computer simulations and experiments; and in Section 5, conclusions are summarized and directions for future research are presented.

2. LANDMARK BASED VISION NAVIGATION

The landmark-based vision navigation system consists of a camera, landmarks, and a navigation computer, as shown in Fig. 1. The camera provides images of the landmarks, and the navigation computer identifies the landmarks and calculates the position and attitude of the vehicle.

Fig. 1. Landmark based vision navigation system.

2.1 Landmark Measurements

As shown in Fig. 2, the focal plane of the camera is located a distance \(f\) from the origin along the optical axis and is perpendicular to it. The \(x_c\)-axis of the camera coordinate system coincides with the optical axis of the camera, and the \(y_c\)-axis and \(z_c\)-axis are the horizontal and vertical directions of the focal plane, respectively. A landmark located at \(P_k^c\left(X_k^c, Y_k^c, Z_k^c\right)\) in the camera coordinate system is projected onto the two-dimensional point \(P_k^i\left(c_k^i, r_k^i\right)\) of the focal plane coordinate system (i.e., the image coordinate system), which can be converted into the three-dimensional coordinate value \(p_k^c\left(f, u_k, v_k\right)\) in the camera coordinate system. Eqs. (1) and (2) show the relationship between the focal plane measurements and the coordinates of the landmark in the camera coordinate system.

Fig. 2. Landmark measurements in vision navigation.

\(u_k=f\frac{Y_k^c}{X_k^c}\)                                                                                                          (1)

\(v_k=f\frac{Z_k^c}{X_k^c}\)                                                                                                           (2)

where \(f\) is the focal length of the camera; \(X_k^c\), \(Y_k^c\), and \(Z_k^c\) are the coordinates of the \(k\)-th landmark in the camera coordinate system; and \(u_k\) and \(v_k\) are the focal plane measurements of the landmark in the camera coordinate system.
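As a minimal illustration of Eqs. (1) and (2), the following Python sketch (not the implementation used in this paper) computes the focal plane measurement of a landmark given in the camera coordinate system.

```python
import numpy as np

def project_landmark(P_c, f):
    """Project a landmark P_c = (X^c, Y^c, Z^c) in the camera frame onto the
    focal plane as (u, v) according to Eqs. (1) and (2)."""
    X, Y, Z = P_c
    if X <= 0:
        return None          # landmark behind the camera: no measurement
    u = f * Y / X            # Eq. (1)
    v = f * Z / X            # Eq. (2)
    return u, v

# Example: landmark 40 m ahead, 3 m to the right, 1.5 m up, f = 25 mm
print(project_landmark(np.array([40.0, 3.0, 1.5]), 0.025))
```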

2.2 Landmark Detection and Identification

Detecting a landmark in image data through its key points is known to be robust to environmental changes (e.g., scale, lighting, and direction) (Panchal et al. 2013). Representative key-point detection methods include the Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Features from Accelerated Segment Test (FAST). SIFT has outstanding detection performance but a large computational load, whereas FAST has a small computational load but unsatisfactory performance. SURF reduces the computational load of SIFT while maintaining its excellent detection performance (Bay et al. 2008).

Fig. 3 shows the SURF-based landmark detection and identification process used in this paper. As shown in the figure, the key points of the landmark are detected from the acquired image using the SURF algorithm. The landmark is then identified by matching the detected key points with the landmark key points in the database using the Fast Library for Approximate Nearest Neighbors (FLANN). Finally, outliers are eliminated using RANdom SAmple Consensus (RANSAC), and the center coordinate of the landmark image is calculated.
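A minimal sketch of this pipeline with OpenCV is given below. It assumes an opencv-contrib build (SURF is provided by cv2.xfeatures2d); the file names, Hessian threshold, and ratio-test constant are illustrative placeholders rather than the values used in the paper, and error handling is omitted.

```python
import cv2
import numpy as np

# Placeholder file names; the landmark database in the paper is custom-built.
db_img = cv2.imread("landmark_db_001.png", cv2.IMREAD_GRAYSCALE)
cam_img = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# 1. SURF key points and descriptors (requires opencv-contrib).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_db, des_db = surf.detectAndCompute(db_img, None)
kp_cam, des_cam = surf.detectAndCompute(cam_img, None)

# 2. FLANN matching with Lowe's ratio test.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des_db, des_cam, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# 3. RANSAC homography rejects outlier matches, then the landmark centre
#    of the database image is mapped into the camera image.
src = np.float32([kp_db[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_cam[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = db_img.shape
centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
print("Landmark centre (c_k, r_k) in the camera image:", centre.ravel())
```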

Fig. 3. SURF-based landmark detection and identification.

2.3 Vision Navigation Solution

The navigation solution of the vision navigation system can be obtained from the center point positions and measurements of known landmarks. The navigation equation of the vision navigation system is given by Eq. (3).

\(P_k^n-\left(P_u^n+P_{su}^n\right)=r_kC_b^nC_c^bp_k^c\)                                                                                             (3)

where the superscripts n, b, and c denote the navigation, body, and camera coordinate systems, respectively. \(P_u^n\) is the position of the vehicle to be estimated in the navigation coordinate system, and \(P_{su}^n\) is the offset between the center of the vehicle and the installation position of the camera, expressed in the navigation coordinate system. \(P_k^n\) is the position of the \(k\)-th landmark in the navigation coordinate system, and \(p_k^c\) is the measurement of the landmark in the camera coordinate system. \(r_k\) is a scale factor, i.e., the ratio of the distance from the center of the vehicle to the landmark to the distance from the center of the vehicle to the landmark's projection on the camera focal plane. \(C_b^n\) and \(C_c^b\) are the direction cosine matrices from the body coordinate system to the navigation coordinate system and from the camera coordinate system to the body coordinate system, respectively. Since the relationship between the camera coordinate system and the body coordinate system is exactly known, \(P_{su}^n\) and \(C_c^b\) can be easily calculated. From the navigation equation Eq. (3), the three-dimensional position and attitude (i.e., six variables) can be calculated. Since one landmark provides two equations, at least three landmark measurements are required to calculate the navigation solution. When more than three landmark measurements are available on the focal plane, the navigation solution can be calculated using the least squares method.

Fig. 4. Determination of solution in vision navigation.

As shown in Fig. 4, the angle subtended at the origin of the camera coordinate system by a pair of landmarks is expressed by Eq. (4).

\(\phi_{kl}={\cos}^{-1}\left(\frac{f^2+u_ku_l+v_kv_l}{\sqrt{\left(f^2+u_k^2+v_k^2\right)\left(f^2+u_l^2+v_l^2\right)}}\right)\)                                                                                          (4)

where \(\phi_{kl}\) is the angle between the \(k\)-th and \(l\)-th landmarks. From the angles between landmarks obtained from Eq. (4), the distances from the origin of the camera coordinate system to each landmark are calculated. The relation between the inter-landmark angle and these distances is given by Eq. (5).

\(L_{kl}^2=L_k^2+L_l^2-2L_kL_l \cos\phi_{kl}\)                                                                                               (5)

where \(L_k\) and \(L_l\) are the distances from the origin of the camera coordinate system to the \(k\)-th and \(l\)-th landmarks, respectively, and \(L_{kl}\) is the distance between the \(k\)-th and \(l\)-th landmarks. Using the distances obtained from Eq. (5), the position of the vehicle \((X_u, Y_u, Z_u)\) can be calculated from Eq. (6).

\(\left(X_k-X_u\right)^2+\left(Y_k-Y_u\right)^2+\left(Z_k-Z_u\right)^2=L_k^2\)                                                                             (6)

Lastly, based on the position of the vehicle obtained from Eq. (6), the attitude of the vehicle (\(C_b^n\)) is calculated using Eq. (7).

\(P_k^n-P_u^n=r_kC_b^np_k^c\)                                                                                                   (7)
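The following sketch strings Eqs. (4)-(7) together using a generic nonlinear least-squares solver. For brevity it treats the camera frame as the vehicle body frame (identity \(C_c^b\), zero lever arm \(P_{su}^n\)) and solves Eq. (7) by an SVD fit of the matched direction vectors; these are illustrative simplifications, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def vision_solution(f, uv, P_n):
    """Vision navigation solution of Section 2.3.
    f   : focal length
    uv  : (n, 2) focal plane measurements (u_k, v_k), n >= 3
    P_n : (n, 3) known landmark positions in the navigation frame"""
    n = len(uv)
    p_c = np.column_stack([np.full(n, f), uv[:, 0], uv[:, 1]])   # p_k^c = (f, u_k, v_k)
    unit_c = p_c / np.linalg.norm(p_c, axis=1, keepdims=True)
    cos_phi = unit_c @ unit_c.T                                  # Eq. (4)

    # Eq. (5): solve for the ranges L_k from the known inter-landmark distances.
    def range_res(L):
        return [L[k]**2 + L[l]**2 - 2 * L[k] * L[l] * cos_phi[k, l]
                - np.sum((P_n[k] - P_n[l])**2)
                for k in range(n) for l in range(k + 1, n)]
    L = least_squares(range_res, x0=np.full(n, 100.0)).x

    # Eq. (6): trilateration for the vehicle position.
    def pos_res(P_u):
        return np.sum((P_n - P_u)**2, axis=1) - L**2
    P_u = least_squares(pos_res, x0=P_n.mean(axis=0)).x

    # Eq. (7): attitude from the matched direction vectors (SVD/Wahba fit).
    unit_n = (P_n - P_u) / np.linalg.norm(P_n - P_u, axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(unit_n.T @ unit_c)
    C_b_n = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return P_u, C_b_n
```

At least three non-collinear landmarks are needed; with more than three, the least squares fits average out the measurement noise.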

3. VISION/INS INTEGRATED NAVIGATION

Fig. 5 shows the proposed vision/INS integrated navigation method. The INS estimates the position (\(P_{INS}\)), velocity (\(V_{INS}\)), and attitude (\(\Psi_{INS}\)) of the vehicle from the IMU output. The vision navigation estimates the position (\(P_{VIS}\)) and attitude (\(\Psi_{VIS}\)) of the vehicle from the landmark coordinates (\(u_k, v_k\)) projected onto the focal plane. The Kalman filter estimates the INS navigation errors (\(\delta x\)) and sensor errors (\(\nabla, \varepsilon\)) from the differences between the position and attitude outputs of the INS and the vision navigation. When the vision navigation cannot provide a navigation solution, the Kalman filter performs only the time update.

Fig. 5. Proposed vision/INS integrated navigation method.

The proposed method is structurally very similar to the loosely-coupled GPS/INS integration method. Thus, the proposed method has a redundant navigation module: the vision navigation module can provide navigation results even when the INS cannot. Since the vision navigation provides a navigation solution independently of the INS, the reliability of the system is increased. In particular, when a fault occurs in the INS, it can be diagnosed from the vision navigation result, and navigation can continue using the vision navigation output.

3.1 Inertial Navigation System

The INS module estimates the attitude between the body coordinate system and the navigation coordinate system. Based on this attitude, the acceleration measured in the body coordinate system is transformed into the navigation coordinate system and the gravitational acceleration is added; the velocity and position are then obtained by integration. The INS navigation errors and IMU sensor errors estimated by the Kalman filter are fed back and corrected.
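A simplified strapdown update corresponding to this description is sketched below. Earth rotation and transport-rate terms are neglected, so it is only a short-duration, local-level (NED) approximation of the INS mechanization, not the full algorithm.

```python
import numpy as np

def strapdown_update(p_n, v_n, C_b_n, f_b, w_b, dt,
                     g_n=np.array([0.0, 0.0, 9.81])):
    """One simplified strapdown step in a local NED navigation frame."""
    # Attitude update: first-order integration of the body angular rate.
    dth = w_b * dt
    S = np.array([[0.0, -dth[2], dth[1]],
                  [dth[2], 0.0, -dth[0]],
                  [-dth[1], dth[0], 0.0]])
    C_b_n = C_b_n @ (np.eye(3) + S)

    # Velocity update: rotate the specific force into the navigation frame
    # and add gravity (Section 3.1).
    a_n = C_b_n @ f_b + g_n
    v_new = v_n + a_n * dt

    # Position update: trapezoidal integration of the velocity.
    p_new = p_n + 0.5 * (v_n + v_new) * dt
    return p_new, v_new, C_b_n
```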

3.2 Vision Navigation System

The vision navigation module provides the navigation results described in Section 2. First, the landmark is detected and identified using SURF from the processed image data of the camera. The navigation solution is then estimated using the landmark information of the database.

3.3 Kalman Filter

The error equations of the INS can be derived by perturbing the INS navigation equations (Goshen-Meskin & Bar-Itzhack 1992). Eq. (8) is the process model comprising the INS error equation and the sensor error equation.

\(\dot{\delta x}(t)=F(t)\delta x(t)+w(t),\ \ w(t) \sim N\left(0,\ \ Q(t)\right)\)                                                                           (8)

where \(w(t)\) is white Gaussian noise with covariance \(Q(t)\). Eq. (8) can be written in terms of the navigation error and the sensor error as Eq. (9).

\(\left[\begin{matrix}{\dot{\delta x}}_{nav}\\{\dot{\delta x}}_{sen}\\\end{matrix}\right]=\left[\begin{matrix}F_{11}&F_{12}\\0_{6\times9}&0_{6\times6}\\\end{matrix}\right]\left[\begin{matrix}\delta x_{nav}\\\delta x_{sen}\\\end{matrix}\right]+\left[\begin{matrix}w_{nav}\\w_{sen}\\\end{matrix}\right]\)                                                                           (9)

where \(w_{nav}\) and \(w_{sen}\) are the navigation error noise and the sensor error noise, respectively. The state variable \(\delta x\) consists of the navigation error (\({\delta x}_{nav}\)) and the sensor error (\({\delta x}_{sen}\)). \(0_{m\times n}\) represents the \(m\times n\) zero matrix. The navigation error (\({\delta x}_{nav}\)) consists of the position error, velocity error, and attitude error of INS as given in Eq. (10).

\({\delta x}_{nav}=\left[\begin{matrix}\delta P_N&\delta P_E&\delta P_D\\\end{matrix}\ \ \begin{matrix}\delta V_N&\delta V_E&\delta V_D\\\end{matrix}\ \begin{matrix}\varphi_N&\varphi_E&\varphi_D\\\end{matrix}\right]^T\)                                                            (10)

where \(\delta P\), \(\delta V\), and \(\varphi\) are the position error, the velocity error, and the attitude error expressed by a rotation vector, respectively. Subscripts \(N\), \(E\), and \(D\) represent the north, east, and vertical down directions in the navigation coordinate system, respectively. The sensor error (\({\delta x}_{sen}\)) consists of the accelerometer error and the gyro error, as shown in Eq. (11).

\({\delta x}_{sen}=\left[\begin{matrix}\mathrm{\nabla}_x&\mathrm{\nabla}_y&\mathrm{\nabla}_z&\varepsilon_x&\varepsilon_y&\varepsilon_z\\\end{matrix}\right]^T\)                                                                                   (11)

where \(\nabla\) and \(\varepsilon\) are the accelerometer error and the gyro error, respectively. The subscripts \(x\), \(y\), and \(z\) represent the roll, pitch, and yaw directions in the body coordinate system, respectively. The submatrix of Eq. (9), \(F_{11}\), is expressed by Eq. (12).

\( F_{11}=\left[\begin{matrix}-\Omega_{en}^n&I_{3\times3}&O_{3\times3}\\O_{3\times3}& \Omega_{ie}^n+\Omega_{in}^n&f^n\times\\O_{3\times3}&O_{3\times3}&{-\Omega}_{in}^n\\\end{matrix}\right]\)                                                                                  (12)

where \(\Omega_{en}^n\), \(\Omega_{ie}^n\), and \(\Omega_{in}^n\) represent the skew-symmetric matrices for the moving angular velocity of the vehicle, the Earth’s rotational angular velocity, and the angular velocity of the navigation coordinate system relative to the inertial coordinate system, expressed in the navigation coordinate system, respectively. \(f^n\) is the skew-symmetric matrix for the specific force expressed in the navigation coordinate system. The submatrix of Eq. (9), \(F_{12}\), is expressed by Eq. (13).

\(F_{12}=\left[\begin{matrix}O_{3\times3}&O_{3\times3}\\C_b^n&O_{3\times3}\\O_{3\times3}&-C_b^n\\\end{matrix}\right]\)                                                                                              (13)
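The continuous-time system matrix of Eq. (9) can be assembled from the blocks of Eqs. (12) and (13), as in the following sketch. It follows the block layout printed above; the angular rates and specific force are assumed to come from the INS mechanization.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def build_F(w_en_n, w_ie_n, w_in_n, f_n, C_b_n):
    """15x15 system matrix of Eq. (9) built from Eqs. (12) and (13)."""
    Z = np.zeros((3, 3))
    F11 = np.block([[-skew(w_en_n), np.eye(3),                   Z],
                    [Z,             skew(w_ie_n) + skew(w_in_n), skew(f_n)],
                    [Z,             Z,                           -skew(w_in_n)]])
    F12 = np.block([[Z,      Z],
                    [C_b_n,  Z],
                    [Z,     -C_b_n]])
    F = np.zeros((15, 15))
    F[:9, :9] = F11            # navigation error dynamics, Eq. (12)
    F[:9, 9:] = F12            # coupling to accelerometer and gyro errors, Eq. (13)
    return F                   # bottom 6 rows stay zero, per Eq. (9)
```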

The measurement equation of the Kalman filter is expressed by Eq. (14).

\(\delta z\left(t\right)=H\left(t\right)\delta x\left(t\right)+v\left(t\right),\ \ v(t) \sim N\left(0,\ \ R(t)\right)\)                                                                           (14)

where \(H(t)\) is the measurement matrix and \(v(t)\) is the measurement noise with covariance \(R(t)\). The measurement \(\delta z\) is the difference between the position and attitude estimated by the INS and by the vision navigation, and it is expressed by Eq. (15).

\(\delta z=\left[\begin{matrix}{\delta P}_N&{\delta P}_E&{\delta P}_D&\delta\alpha&\delta\beta&\delta\gamma\\\end{matrix}\right]^T\)                                                                             (15)

where \({\delta P}_N\), \({\delta P}_E\), and \({\delta P}_D\) are the position errors for each axis in the navigation coordinate system, respectively. \(\delta\alpha\), \(\delta\beta\), and \(\delta\gamma\) are the Euler angle attitude errors for each axis, respectively. The measurement matrix of Eq. (14), \(H\), is expressed by Eq. (16).

\(H=\left[\begin{matrix}I_{3\times3}&0_{3\times3}&0_{3\times1}&0_{3\times1}&0_{3\times1}&0_{3\times6}\\0_{1\times3}&0_{1\times3}&-\frac{\cos\gamma}{\cos\beta}&-\frac{\sin\gamma}{\cos\beta}&0&0_{1\times6}\\0_{1\times3}&0_{1\times3}&\sin\gamma&-\cos\gamma&0&0_{1\times6}\\0_{1\times3}&0_{1\times3}&-\tan\beta \cos\gamma&-\tan\beta \sin\gamma&0&0_{1\times6}\\\end{matrix}\right]\)                                                             (16)

where \(\alpha\), \(\beta\), and \(\gamma\) are the roll angle, the pitch angle, and the yaw angle, respectively.
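A sketch of the filter update using this measurement model is given below. It assumes the 15-state ordering of Eqs. (10) and (11) and a discretized transition matrix \(\Phi\) and process noise covariance \(Q_d\); as described for Fig. 5, the measurement update runs only when a vision solution is available.

```python
import numpy as np

def build_H(beta, gamma):
    """Measurement matrix of Eq. (16): 6 measurements (position and Euler-angle
    differences) versus 15 states (9 navigation errors, 6 sensor errors)."""
    H = np.zeros((6, 15))
    H[0:3, 0:3] = np.eye(3)                       # position-difference rows
    cb, sg, cg, tb = np.cos(beta), np.sin(gamma), np.cos(gamma), np.tan(beta)
    H[3, 6:9] = [-cg / cb, -sg / cb, 0.0]         # map rotation-vector error
    H[4, 6:9] = [sg, -cg, 0.0]                    # to Euler-angle differences
    H[5, 6:9] = [-tb * cg, -tb * sg, 0.0]
    return H

def kf_step(x, P, Phi, Qd, dz=None, H=None, R=None):
    """Discrete Kalman filter step: the time update always runs; the
    measurement update runs only when dz (Eq. 15) is available."""
    x = Phi @ x
    P = Phi @ P @ Phi.T + Qd
    if dz is not None:
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (dz - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```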

4. PERFORMANCE EVALUATION

To show the validity of the proposed integrated navigation method, computer simulations and van tests were conducted.

4.1 Computer Simulations

Fig. 6 shows the configuration of the computer simulation. A three-dimensional reference trajectory was first generated, and the IMU data and camera data were generated accordingly. The reference trajectory and IMU data were generated using MATLAB and the MATLAB toolbox for INS (GPSoft). For the camera measurements of the landmarks, true values were generated from Eqs. (1) and (2) and noise was added.
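One possible way to generate the noisy camera measurements, consistent with this description, is sketched below; the 1\(\sigma\) noise level and sensor dimensions are taken from Table 2, and landmarks behind the camera or outside the sensor area return no measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def camera_measurement(P_c, f=0.025, sigma=200e-6,
                       n_h=4000, n_v=3000, pitch=8e-6):
    """Noisy focal plane measurement of one landmark given in the camera frame:
    true (u, v) from Eqs. (1) and (2) plus Gaussian noise (Table 2 values)."""
    X, Y, Z = P_c
    if X <= 0:
        return None                              # behind the camera
    u = f * Y / X + rng.normal(0.0, sigma)
    v = f * Z / X + rng.normal(0.0, sigma)
    if abs(u) > n_h * pitch / 2 or abs(v) > n_v * pitch / 2:
        return None                              # outside the sensor area
    return u, v
```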

Fig. 6. Scheme of simulation.

Tables 1 and 2 summarize the specifications and error characteristics of the low- to mid-grade IMU and the low-cost camera used for the computer simulations. The camera has a wide field of view despite its short focal length, so that many landmarks can be included in one image. For the simulation, the update rate of the Kalman filter was set to 10 Hz, identical to the camera output rate.

Table 1. IMU specification and INS initial attitude error for simulation.

| Description | Value |
|---|---|
| Accelerometer bias | 5 mg |
| Accelerometer random walk | 0.1 m/s/\(\sqrt h\) |
| Gyro bias | 100\(^\circ\)/hour |
| Gyro random walk | 0.5\(^\circ\)/\(\sqrt h\) |
| Data rate | 100 Hz |
| Roll error | 0.1\(^\circ\) |
| Pitch error | 0.1\(^\circ\) |
| Yaw error | 5.0\(^\circ\) |

 

Table 2. Vision sensor specification for simulation.

| Description | Value |
|---|---|
| Focal length | 25 mm |
| No. of horizontal pixels | 4000 |
| No. of vertical pixels | 3000 |
| Field of view | 90\(^\circ\) |
| Horizontal pixel pitch | 8 um |
| Vertical pixel pitch | 8 um |
| Data rate | 10 Hz |
| Avg. of focal length error | 200 um |
| Avg. of horizontal optical axis coordinate error | 200 um |
| Avg. of vertical optical axis coordinate error | 200 um |
| Focal length error (1\(\sigma\)) | 200 um |
| Horizontal optical axis coordinate error (1\(\sigma\)) | 200 um |
| Vertical optical axis coordinate error (1\(\sigma\)) | 200 um |

 

As shown in Fig. 7, a trajectory for circular motion in a plane at a constant altitude was generated, and the landmarks were placed so that 0 to 10 of them could be captured by the camera depending on the position of the vehicle (Kim & Hwang 2016). A section in which the IMU provides no output was also included.

 

Fig. 7. Vehicle trajectory and landmarks in simulation.

Fig. 8. Navigation results in simulation.

In Fig. 8, the result of the proposed integrated navigation is compared with those of the INS and the vision navigation. As shown in the figure, the navigation result of INS diverges, but the vision navigation and the proposed integrated navigation track the true trajectory well.

Fig. 9 shows the position errors in the north, east, and down directions, and Fig. 10 shows the attitude errors in roll, pitch, and yaw. As shown in the figures, the position and attitude errors of the proposed method increase in the section where the IMU provides no output and decrease again when IMU outputs become available. Likewise, the navigation errors increase when the vision navigation cannot provide outputs and decrease when the vision navigation outputs become available again. Meanwhile, since the vision navigation operates independently of the INS, it still provides navigation results in the IMU outage section; the status of the INS can therefore be monitored from the vision navigation output.

Fig. 9. Position error in simulation.

Fig. 10. Attitude error in simulation.

Table 3 summarizes the RMS navigation errors of the simulations. The results show that the integrated navigation works well in the sections where the IMU cannot provide outputs or the vision navigation cannot obtain a navigation solution. In particular, when all the navigation systems operate normally, the performance of the proposed integrated navigation is significantly better than that of the INS.

Table 3. RMS navigation error of the simulation.

| Navigation error | | INS | Proposed method (excluding the path with no IMU data and no vision outputs) | Proposed method (whole path) |
|---|---|---|---|---|
| Position error (m) | N | 2667.51 | 1.91 | 4.55 |
| | E | 3757.37 | 1.60 | 12.76 |
| | D | 4090.66 | 1.09 | 4.01 |
| Velocity error (m/s) | N | 57.72 | 0.56 | 6.10 |
| | E | 103.27 | 0.50 | 31.77 |
| | D | 92.38 | 1.33 | 3.24 |
| Attitude error (\(^\circ\)) | Roll | 12.40 | 0.17 | 0.40 |
| | Pitch | 12.50 | 0.53 | 0.49 |
| | Yaw | 132.82 | 0.50 | 1.97 |

 

4.2 Van Test

 

Fig. 11 shows the van test setup. As shown in Tables 4 and 5, the navigation system consists of a low-cost MEMS IMU and a commercial camera. The frame rate of the camera was set to 1.4 Hz. Fig. 12 shows the trajectory of the van during the experiments. Data were collected in real time and post-processed to obtain the navigation results. The quantitative performance of the proposed integrated navigation was examined by comparing the post-processed navigation results with those of a high-performance INS/Carrier-phase Differential GPS (INS/CDGPS) integrated navigation system.

Fig. 11. Experimental setup and reference navigation system.

Fig. 12. Vehicle trajectory and environments in experiment.

Table 4. IMU specification and initial attitude error for experiment.

| Description | Value |
|---|---|
| Manufacturer | Crossbow Ltd. |
| Accelerometer bias | 10 mg |
| Accelerometer random walk | 0.1 m/s/\(\sqrt h\) |
| Accelerometer scaling factor error | 10000 ppm |
| Gyro bias | 3600\(^\circ\)/h |
| Gyro random walk | 1.0\(^\circ\)/\(\sqrt h\) |
| Gyro scaling factor error | 1000 ppm |
| Data rate | 135 Hz |
| Roll error | 0.61\(^\circ\) |
| Pitch error | 0.01\(^\circ\) |

 

Table 5. Vision sensor specification for experiment.

| | Description | Value |
|---|---|---|
| | Manufacturer | Axis Ltd. |
| Image sensor | Sensor type | CMOS-color |
| | No. of horizontal pixels | 1280 |
| | No. of vertical pixels | 800 |
| | Horizontal pixel pitch | 3 um |
| | Vertical pixel pitch | 3 um |
| Lens | Focal length | 1.7 mm |
| | Field of view | 99\(^\circ\) |
| | Data rate (frame rate) | Max 30 Hz (1.4 Hz used in experiment) |

 

A simple landmark database was constructed from the image information obtained through the van tests, a large-scale map of the region around the experimental trajectory, aerial photographs, and actually measured landmark heights. As mentioned earlier, SURF, FLANN, and RANSAC were used for landmark detection and identification. In this paper, the number of landmark key points used for detection was 20 to 30.

 

Figs. 13 and 14 show the navigation results when the output of the high-performance INS/Carrier-phase Differential GPS (INS/CDGPS) integrated navigation system is regarded as the true trajectory. The vision navigation tracks the true trajectory, but it cannot output navigation results when fewer than three landmarks are obtained from one image. Table 6 shows the RMS errors of the navigation results; the values in the VIS column were computed only over the periods when the vision navigation result was available. The experimental results indicate that the proposed integrated navigation gives significantly better navigation performance than the INS over the entire navigation section, even though vision navigation outputs are unavailable in some sections. In addition, in the sections where the vision navigation solution can be obtained, the vision navigation also shows good performance. The navigation performance in the experiment is poorer than in the simulation; this is attributed to the lower update rate, the larger IMU errors, the position errors of the landmarks, and the errors in the installed position and attitude of the camera on the vehicle.

Fig. 13. Navigation results of experiment.

Fig. 14. Position error in experiment.

Table 6. RMS navigation error of the experiment.

| Navigation error | | INS | VIS | Proposed method |
|---|---|---|---|---|
| Position error (m) | N | 2667.51 | 6.10 | 9.29 |
| | E | 3757.37 | 9.10 | 16.44 |
| | D | 4090.66 | 2.79 | 15.34 |
| Velocity error (m/s) | N | 57.72 | - | 2.58 |
| | E | 103.27 | - | 4.13 |
| | D | 92.38 | - | 7.83 |
| Attitude error (\(^\circ\)) | Roll | 12.40 | 3.10 | 4.04 |
| | Pitch | 12.50 | 4.29 | 3.53 |
| | Yaw | 132.82 | 4.11 | 5.61 |

 

5. CONCLUDING REMARKS AND FURTHER STUDIES

 

In this paper, a vision/INS integrated navigation method was proposed to replace GPS/INS integrated navigation in environments where GPS is not available. In the proposed method, the integrated navigation is constructed using the position and attitude outputs of a landmark-based vision navigation system, and an integration model for this was derived. The proposed method significantly improves navigation performance by correcting the INS navigation and sensor errors using the vision navigation information, and it provides reliable navigation results through the independent vision navigation module when the INS cannot provide navigation results. In addition, as the two navigation systems operate independently, the operation of each system can be easily monitored. Computer simulations and van tests were conducted to evaluate the performance of the proposed method, and the results show that the proposed method provides accurate and reliable navigation solutions.

Further studies include the effect of landmark detection and identification performance on the integrated navigation result, systematic construction of the landmark database, an integrated navigation method for cases where fewer than three landmarks are available, and estimation of the vehicle velocity using consecutive images.

ACKNOWLEDGMENTS

This work was supported by research fund of Chungnam National University.

References

  1. Bay, H., Ess, A., Tuytelaars, T., & Gool, L. V. 2008, Speeded-Up Robust Features, Journal of Computer Vision and Image Understanding, 110, 3, 346-359. https://doi.org/10.1016/j.cviu.2007.09.014
  2. Gao, W., Zhang, Y., & Wang, J. 2014, A Strap-down Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter, Sensors, 14, 1511-1527. https://doi.org/10.3390/s140101511
  3. George, M. & Sukkarieh, S. 2007, Camera aided inertial navigation in poor GPS environments, Proc. of Aerospace Conference IEEE 2007, Montana, USA, 3-10 Mar 2007, pp.1-12. https://doi.org/10.1109/AERO.2007.352849
  4. Goshen-Meskin, D. & Bar-Itzhack, I. Y. 1992, Unified Approach to Inertial Navigation System Error Modeling, Journal of Guidance, Control, and Dynamics, 15, 648-653. https://doi.org/10.2514/3.20887
  5. Groves, P. D. 2008, Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 1st ed. (Boston: Artech House)
  6. Groves, P. D. 2013, The PNT Boom: Future Trends in Integrated Navigation, Inside GNSS, 8, 44-49. http://discovery.ucl.ac.uk/id/eprint/1394442
  7. Kim, Y. & Hwang, D.-H. 2016, Vision/INS Integrated Navigation System for Poor Vision Navigation Environments, Sensors, 16, 1-14. https://doi.org/10.3390/s16101672
  8. Oh, S. J., Kim, W. H., Lee, J. G., Lee, H. K., & Park, C. G. 2006, Design of INS/Image Sensor Integrated Navigation System, Journal of Institute of Control, Robotics and Systems, 12, 982-988. https://doi.org/10.5302/J.ICROS.2006.12.10.982
  9. Panchal, P. M., Panchal, S. R., & Shah, S. K. 2013, A Comparison of SIFT and SURF, International Journal of Innovative Research in Computer and Communication Engineering, 1, 323-327
  10. Tardif, J. P., George, M., Laverne, M., Kelly, A., & Stentz, A. 2010, A New Approach to Vision-Aided Inertial Navigation, Proc. of 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18-22 Oct 2010, pp. 4161-4168. https://doi.org/10.1109/IROS.2010.5651059
  11. Veth, M. J. 2011, Navigation Using Images, A Survey of Techniques, Navigation, 58, 2, 127-139. https://doi.org/10.1002/j.2161-4296.2011.tb01796.x
  12. Wang, W. & Wang, D. 2008, Land vehicle navigation using odometry/INS/vision integrated system, Proc. of 2008 IEEE International Conference on Cybernetics Intelligent Systems, 21-24 Sep 2008, pp.754-759. https://doi.org/10.1109/ICCIS.2008.4670839
  13. Yue, D. X., Huang, X. S., & Tan, H. L. 2007, INS/VNS fusion based on unscented particle filter, Proc. Of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2-4 Nov 2007, pp.151-156. https://doi.org/10.1109/ICWAPR.2007.4420654