Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University) ;
  • Kim, Hee Sung (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University) ;
  • Lee, Je Young (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University) ;
  • Choi, Kwang Ho (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University) ;
  • Kang, Sung Jin (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University) ;
  • Chun, Sebum (Korea Aerospace Research Institute) ;
  • Lee, Hyung Keun (School of Electronics, Telecommunication & Computer Engineering, Korea Aerospace University)
  • Received : 2014.01.22
  • Accepted : 2014.02.07
  • Published : 2014.03.15

Abstract

Recently, monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues. Accidents involving such vehicles entail large costs to the national economy and society. To monitor and counteract accidents promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases. However, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by a monocular vision sensor. The possibility of using angular acceleration to determine the occurrence of accidents such as jackknifing and rollover is investigated. By an experiment based on actual measurements, the feasibility of the proposed method is evaluated.

Keywords

1. INTRODUCTION

In case of a hazardous material spill in an urban area due to an accident of a vehicle carrying hazardous materials, the secondary economic and social losses as well as the primary damage from the vehicle accident are enormous. Therefore, studies on the real-time detection and prevention of accidents of vehicles carrying hazardous materials have been carried out. Kim et al. (2013) proposed a detection technique for vehicles carrying hazardous materials using moving-reference-based Real-Time Kinematic (RTK) positioning by installing Global Positioning System (GPS) receivers at a tractor and a trailer, respectively, as shown in Fig. 1. Also, Lee et al. (2013b) analyzed the accident detection performance for vehicles carrying hazardous materials based on a Hardware-In-the-Loop Simulation (HILS) experiment. Jackknifing and rollover were defined as the representative accidents of vehicles carrying hazardous materials, and a relative navigation system that combines GPS and an Inertial Navigation System (INS) was used to detect these accidents. A method was proposed that detects jackknifing accidents using the yaw acceleration of the connection angle between the tractor and the semi-trailer, and rollover accidents using the roll acceleration of the tractor. A relative navigation system that combines GPS/INS can estimate the relative angular acceleration of two-body vehicles carrying hazardous materials, and thus can be used efficiently for monitoring accidents.

Fig. 1. Navigation system for vehicles carrying hazardous materials (Kim et al. 2013).

GPS can be used to estimate the information necessary for monitoring the accidents of vehicles carrying hazardous materials. However, the accuracy and continuity of GPS navigation solutions can deteriorate due to satellite signal blockage, depending on the surrounding environment (Soloviev & Venable 2010). To resolve these problems of GPS, studies on hybrid navigation systems have been actively performed. A hybrid navigation system is a methodology for estimating the position, velocity, and attitude of a vehicle through an efficient combination of various sensors such as GPS, INS, and vision sensors. In recent years, it has been actively studied for the urban navigation of robots and unmanned vehicles. For the relative navigation system suggested by Fosbury & Crassidis (2006), a method that uses the GPS navigation solution to maintain the accuracy of INS was proposed; Wang et al. (2008) proposed a hybrid navigation system for controlling an unmanned aerial vehicle, and Vu et al. (2012) proposed a hybrid navigation system for estimating the position of a vehicle in urban environments. Also, Kim et al. (2011) proposed a hybrid navigation system using GPS, INS, an odometer, and a polyhedral vision sensor to improve navigation performance in urban environments; it maintains the positioning accuracy of the vehicle, even in sections where GPS signals cannot be used, by estimating the attitude of the vehicle from vanishing points extracted from the polyhedral vision sensor data.

In this study, as basic research toward improving the performance of a GPS/INS-based relative navigation system used for monitoring the accidents of vehicles carrying hazardous materials, a method using a vision sensor is proposed and its feasibility is examined. A relative navigation system that maintains the accuracy of INS in urban areas using GPS suffers from diverging navigation solutions in sections where GPS signal blockage persists for a long period. Therefore, this study evaluates the feasibility of a technique that detects the accidents of vehicles carrying hazardous materials by estimating angular acceleration with a monocular vision sensor in environments where GPS signal visibility is not secured for a long period.

The contents of this study are as follows. First, to combine a vision sensor with GPS/INS systems, the relevant coordinate systems are defined and the distortion calibration of the vision sensor is explained. Then, the process of extracting relative angular acceleration from the changes in the relative angular velocities of the feature points extracted by the vision sensor is explained. In the experiment, to examine the validity of the proposed technique, performance evaluation is carried out by comparing the accident detection sections estimated using the vision sensor with the accident detection sections extracted from INS. Using an equipment set that simulates a vehicle carrying hazardous materials, the feasibility of the proposed method is evaluated.

2. RELEVANT COORDINATE SYSTEMS

For hybrid navigation that combines various sensors, a procedure that clearly defines the relation among the coordinate systems associated with each sensor should be preceded. In this study, five coordinate systems were used as summarized in Table 1 (Ligorio & Sabatini 2013).

Table 1. Coordinate system.

Sign | Frame | Coordinate
b1 | Master body frame | \(X^{b1}, Y^{b1}, Z^{b1}\)
b2 | Slave body frame | \(X^{b2}, Y^{b2}, Z^{b2}\)
s | Vision sensor frame | \(X^s, Y^s, Z^s\)
t | Landmark frame | \(x^t, y^t\)
p | Pixel frame | \(x^p, y^p\)

Fig. 2 shows the relation among the body frames of the tractor and the trailer carrying hazardous materials and the vision sensor frame. The centers of the body frames of the tractor and the trailer may vary relative to the position of the INS sensor, depending on the installation position. The moving direction of the vehicle was set as \(X^b\), the rightward direction of the moving direction was set as \(Y^b\), and the \(Z^b\) axis was set by applying a right-handed coordinate system. Also, the origin of the vision sensor frame was set at the center of the lens; the direction toward the landmark image was set as \(X^s\), the rightward direction of the viewing direction was set as \(Y^s\), and the \(Z^s\) axis was set by applying a right-handed coordinate system. In addition, \(h\) is the height of the installed camera, which represents the distance from the ground to the lens of the vision sensor, and it can be measured during the installation of the vision sensor.

Fig. 2. Illustration of Body and Vision frame.

Fig. 3 shows the relation between the three-dimensional vision sensor frame and the two-dimensional pixel frame. \(f\) represents the focal length, and the image coordinates of feature points are expressed as points on the image plane.

Fig. 3. Illustration of Vision sensor and Pixel frame.

Fig. 4 shows the landmark frame, which was used to extract feature points in this study. Based on the center point of the landmark, the upward direction was set as \(x^t\) and the rightward direction was set as \(y^t\). Also, regarding the geometric characteristics of the landmark feature points, the width of the image was expressed as \(w\) and the length as \(l\). Both \(w\) and \(l\) are measured before capturing images.

Fig. 4. Landmark frame.

The relation between the two-dimensional pixel frame and the three-dimensional vision sensor frame can be expressed as Eq. (1) (Lim et al. 2012, Tsai 1987).

\(TC_t^sX^t=TX^s=0\)                                                                                                           (1)

where

\(T=\left[\begin{matrix}x^p&-f&0\\y^p&0&f\\\end{matrix}\right],~~X^t=\left[\begin{matrix}x^t\\y^t\\h\\\end{matrix}\right],~h:\) height of vision sensor

\(C_t^s=\left[\begin{matrix}\cos{\theta}\cos{\psi}&\cos{\theta}\sin{\psi}&-\sin{\theta}\\\sin{\phi}\sin{\theta}\cos{\psi}-\cos{\phi}\sin{\psi}&\sin{\phi}\sin{\theta}\sin{\psi}+\cos{\phi}\cos{\psi}&\sin{\phi}\cos{\theta}\\\cos{\phi}\sin{\theta}\cos{\psi}+\sin{\phi}\sin{\psi}&\cos{\phi}\sin{\theta}\sin{\psi}-\sin{\phi}\cos{\psi}&\cos{\phi}\cos{\theta}\\\end{matrix}\right]\)
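To make Eq. (1) concrete, the short Python sketch below builds \(C_t^s\) from roll, pitch, and yaw, maps a landmark-frame point to pixel coordinates, and checks that \(TC_t^sX^t\) vanishes. The focal length, attitude angles, and point coordinates are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

def dcm_ts(phi, theta, psi):
    """Direction cosine matrix C_t^s of Eq. (1) (roll phi, pitch theta, yaw psi)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps,                   cth * sps,                   -sth],
        [sph * sth * cps - cph * sps, sph * sth * sps + cph * cps, sph * cth],
        [cph * sth * cps + sph * sps, cph * sth * sps - sph * cps, cph * cth],
    ])

def project(Xt, C_ts, f):
    """Pixel coordinates (x^p, y^p) of a landmark-frame point X^t = [x^t, y^t, h].

    From T X^s = 0 with T = [[x^p, -f, 0], [y^p, 0, f]] and X^s = C_t^s X^t:
    x^p = f * y^s / x^s and y^p = -f * z^s / x^s.
    """
    xs, ys, zs = C_ts @ np.asarray(Xt, dtype=float)
    return f * ys / xs, -f * zs / xs

# Illustrative values only (focal length in pixels, small downward tilt, h = 1.2 m).
f = 800.0
C_ts = dcm_ts(phi=0.0, theta=np.deg2rad(5.0), psi=0.0)
xp, yp = project([2.0, 0.3, 1.2], C_ts, f)

# Consistency check of Eq. (1): T C_t^s X^t should be numerically zero.
T = np.array([[xp, -f, 0.0], [yp, 0.0, f]])
print(T @ C_ts @ np.array([2.0, 0.3, 1.2]))   # ~[0. 0.]
```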

3. CALIBRATION OF THE VISION SENSOR

For an inexpensive vision sensor, it is essential to calibrate distortions that are formed during the manufacturing process. The radial distortion, which occurs due to the shape of a lens, and the tangential distortion, which is formed during the manufacturing process, need to be calibrated (Bradski & Kaehler 2008, Tsai 1987).

3.1 Radial Distortion

As shown in Fig. 5, barrel distortion occurs because a ray that passes through the outer part of a lens is refracted more than a ray that passes through the center of the lens. In practice, radial distortion is negligible at the center of the image (\(r = 0\)) and can be expressed using the first several terms of a Taylor series expansion about \(r = 0\), as shown in Eq. (2). For an inexpensive vision sensor, the distortion can be expressed using the first two terms. Since the radial distortion satisfies \(f(r) = 0\) at \(r = 0\), the condition \(a_0 = 0\) must hold. Also, the distortion function is symmetric, so the coefficients of the terms in which \(r\) is raised to an odd power must all be 0. Therefore, the characteristics of radial distortion are determined by the coefficients of the terms in which \(r\) is raised to an even power; the first is \(k_1\), and the second is \(k_2\).

\(x_{corrected}=x(1+k_1r^2+k_2r^4),~~y_{corrected}=y(1+k_1r^2+k_2r^4)\)                                                                               (2)

where

\(x,y:\) distorted pixel point

\(x_{corrected},y_{corrected}:\) undistorted pixel point

\(r=\sqrt{x^2+y^2}\)

Fig. 5. Radial distortion.

3.2 Tangential Distortion

Besides radial distortion, another common distortion that occurs in an image is the tangential distortion shown in Fig. 6, which is mostly formed during the manufacturing process of a vision sensor. Tangential distortion, which arises because the lens and the image plane are not completely parallel, can be calibrated using the tangential coefficients \(p_1, p_2\), as shown in Eq. (3) (Brown 1966).

\(x_{corrected}=x+\left[2p_1xy+p_2(r^2+2x^2)\right]\)

\(y_{corrected}=y+\left[p_1(r^2+2y^2)+2p_2xy\right]\)                                                                                   (3)

Fig. 6. Tangential distortion.
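For reference, the snippet below is a minimal sketch of the combined radial and tangential correction of Eqs. (2) and (3) applied to a normalized image point; the coefficient values in the example call are arbitrary placeholders, and in practice these corrections are applied through the OpenCV routines described in the next subsection.

```python
def correct_distortion(x, y, k1, k2, p1, p2):
    """Apply the radial correction of Eq. (2) and the tangential correction of
    Eq. (3) to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2                             # Eq. (2): 1 + k1*r^2 + k2*r^4
    x_c = x * radial + (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x))   # Eq. (3)
    y_c = y * radial + (p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y)   # Eq. (3)
    return x_c, y_c

# Placeholder coefficients; real values come from the calibration of Section 3.3.
print(correct_distortion(0.2, -0.1, k1=-0.25, k2=0.07, p1=1e-3, p2=-5e-4))
```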

3.3 Calibration

As explained earlier, to calibrate the distortion of an inexpensive vision sensor, four intrinsic camera parameters \((f_x, f_y, c_x, c_y)\) and four distortion parameters \((k_1, k_2, p_1, p_2)\) are required. The intrinsic parameters \(f_x, f_y\) represent the focal length of the vision sensor, and \(c_x, c_y\) represent the principal point, which is the actual center position of the vision sensor. In this study, to obtain these eight parameters, OpenCV functions were used (Bradski & Kaehler 2008). Fig. 7 shows the flow chart for obtaining undistorted images based on the camera calibration algorithm implemented using OpenCV functions. For distortion calibration, OpenCV calculates the distortion parameters of a planar object with a grid pattern from multi-view images. For a planar object whose geometric form and dimensions are known, the intrinsic parameters of the vision sensor were obtained through an indoor experiment (Lim 2013). Fig. 8a shows the input image used for calculating the camera parameters, and Fig. 8b shows the undistorted image obtained using the calculated parameters.

Fig. 7. Flow chart for camera calibration using OpenCV library.

Fig. 8. (a) Input image, (b) undistorted image.
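A minimal sketch of the OpenCV-based calibration flow of Fig. 7 is given below, assuming a chessboard-type planar grid; the image paths, grid size, and file names are placeholders rather than values from the experiment.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the planar grid (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):              # multi-view images of the grid (placeholder path)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimates (fx, fy, cx, cy) in K and the distortion coefficients (k1, k2, p1, p2, k3);
# the paper uses only the first four distortion terms.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Remove distortion from a captured frame (cf. Fig. 8).
frame = cv2.imread("input.png")
undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("undistorted.png", undistorted)
```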

4. ESTIMATION OF RELATIVE ANGULAR ACCELERATION

A vision sensor projects three-dimensional object information onto a two-dimensional image via dimension reduction. From another aspect, this indicates that three-dimensional position information is reduced to two-dimensional information, so a number of intermediate processes and assumptions are required to recover angular acceleration. In this study, to extract three-dimensional angular acceleration from two-dimensional position information, the following procedures are considered:

a) Conversion from two-dimensional image information to three-dimensional relative position information for a number of feature points.
b) Conversion from relative position information to relative angular velocity information for a number of feature points.
c) Conversion from relative angle information to relative angular acceleration information.

For the conversion of a), the length information of a known landmark is used; for the conversion of b), the assumption that each feature point has been extracted from a rigid body is used. Lastly, for the conversion of c), the time interval between the two images from which a relative angle has been extracted is required. First, the four feature points detected in an image can be expressed as coordinate values in the two-dimensional image coordinate system using Eq. (4).

\(g1=\left[\begin{matrix}x^{p1}\\y^{p1}\\\end{matrix}\right],~ g2=\left[\begin{matrix}x^{p2}\\y^{p2}\\\end{matrix}\right],~ g3=\left[\begin{matrix}x^{p3}\\y^{p3}\\\end{matrix}\right],~ g4=\left[\begin{matrix}x^{p4}\\y^{p4}\\\end{matrix}\right]\)                                                                             (4)

Also, when the same feature points are expressed as coordinate values in the three-dimensional landmark frame, they can be written as Eq. (5), where \(h\) represents the height of the feature points.

\(g1'=\left[\begin{matrix}x^t\\y^t\\h\\\end{matrix}\right],~ g2'=\left[\begin{matrix}x^t\\y^t+w\\h\\\end{matrix}\right],~ g3'=\left[\begin{matrix}x^t+l\\y^t\\h\\\end{matrix}\right],~ g4'=\left[\begin{matrix}x^t+l\\y^t+w\\h\\\end{matrix}\right]\)                                                                     (5)

If the perturbation technique is applied to Eqs. (1-5) based on the four feature points within the landmark image, Eqs. (6) and (7) can be obtained.

\(Z_1=T_1C_t^sX^t=h_{pos1}\left[\begin{matrix}\delta x^t\\\delta y^t\\\delta h\\\end{matrix}\right]+h_{ang1}\left[\begin{matrix}\delta\phi\\\delta\theta\\\delta\psi\\\end{matrix}\right]+v_1\)

\(Z_2=T_2C_t^s\left(X^t+\left[\begin{matrix}0\\w\\0\\\end{matrix}\right]\right)=h_{pos2}\left[\begin{matrix}\delta x^t\\\delta y^t\\\delta h\\\end{matrix}\right]+h_{ang2}\left[\begin{matrix}\delta\phi\\\delta\theta\\\delta\psi\\\end{matrix}\right]+v_2\)

\(Z_3=T_3C_t^s\left(X^t+\left[\begin{matrix}l\\0\\0\\\end{matrix}\right]\right)=h_{pos3}\left[\begin{matrix}\delta x^t\\\delta y^t\\\delta h\\\end{matrix}\right]+h_{ang3}\left[\begin{matrix}\delta\phi\\\delta\theta\\\delta\psi\\\end{matrix}\right]+v_3\)

\(Z_4=T_4C_t^s\left(X^t+\left[\begin{matrix}l\\w\\0\\\end{matrix}\right]\right)=h_{pos4}\left[\begin{matrix}\delta x^t\\\delta y^t\\\delta h\\\end{matrix}\right]+h_{ang4}\left[\begin{matrix}\delta\phi\\\delta\theta\\\delta\psi\\\end{matrix}\right]+v_4\)                                                                           (6)

\(Z=\left[\begin{matrix}Z_1\\Z_2\\Z_3\\Z_4\\\end{matrix}\right]=\left[\begin{matrix}h_{pos1}&h_{ang1}\\h_{pos2}&h_{ang2}\\h_{pos3}&h_{ang3}\\h_{pos4}&h_{ang4}\\\end{matrix}\right]\left[\begin{matrix}\delta x^t\\\delta y^t\\\delta h\\\delta\phi^t\\\delta\theta^t\\\delta\psi^t\\\end{matrix}\right]+v\)                                                                                         (7)

where

\(h_{pos}=TC_t^s,h_{ang}=T\left\langle X^s\right\rangle\)

\(\left\langle X^s\right\rangle\): skew-symmetric matrix generated by the vector \(X^s\)                                                                                        
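As a sketch of how the stacked model of Eq. (7) might be assembled and solved, the code below builds \(h_{pos}=TC_t^s\) and \(h_{ang}=T\langle X^s\rangle\) for the four feature points and estimates the perturbation state by ordinary least squares; the function names and the choice of a plain least-squares solve are illustrative assumptions, not necessarily the estimator used by the authors.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix <X^s> generated by the vector X^s."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

def stack_measurement(Ts, C_ts, Xt_list):
    """Build Z and H of Eq. (7) from the four landmark feature points.

    Ts      : four 2x3 matrices T_i of Eq. (1), one per detected feature point
    C_ts    : current estimate of C_t^s
    Xt_list : the four landmark-frame points g1'..g4' of Eq. (5)
    """
    rows_Z, rows_H = [], []
    for T, Xt in zip(Ts, Xt_list):
        Xs = C_ts @ np.asarray(Xt, dtype=float)
        rows_Z.append(T @ Xs)                                   # Z_i of Eq. (6)
        rows_H.append(np.hstack([T @ C_ts, T @ skew(Xs)]))      # [h_pos_i | h_ang_i]
    return np.concatenate(rows_Z), np.vstack(rows_H)

# Z, H = stack_measurement(Ts, C_ts, Xt_list)
# delta, *_ = np.linalg.lstsq(H, Z, rcond=None)   # [dx^t, dy^t, dh, dphi, dtheta, dpsi]
```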

Based on the relative position of each feature point estimated in Eq. (7), relative angular velocity can be estimated using Eqs. (8) and (9).

\(C_t^sX_k^t=X_k^s=\left[\begin{matrix}x_k^s\\y_k^s\\h\\\end{matrix}\right]=C_kX_{k-1}^s=C_k\left[\begin{matrix}x_{k-1}^s\\y_{k-1}^s\\h\\\end{matrix}\right]\)                                                                                         (8)

\({\dot{\phi}}_k={\tan}^{-1}{\left(\frac{C_k(3,2)}{C_k(3,3)}\right)} \)
\( {\dot{\theta}}_k={\tan}^{-1}{\left(\frac{-C_k(3,1)}{\sqrt{C_k(1,1)^2+C_k(2,1)^2}}\right)} \)
\( {\dot{\psi}}_k={\tan}^{-1}{\left(\frac{C_k(2,1)}{C_k(1,1)}\right)}\)                                                                                                             (9)

From the relative angular velocity obtained by Eq. (9), relative angular acceleration can be estimated using Eq. (10).

\( \Delta{\dot{\phi}}_k={\dot{\phi}}_k-{\dot{\phi}}_{k-1} \)                                                                                                               
\( \Delta{\dot{\theta}}_k={\dot{\theta}}_k-{\dot{\theta}}_{k-1} \)                                                                                                               
\( \Delta{\dot{\psi}}_k={\dot{\psi}}_k-{\dot{\psi}}_{k-1} \)                                                                                                         (10)

Relative angular acceleration can be estimated by the time differencing of relative angular velocity, and can be estimated based on the changes in the position of feature points in two images that are obtained sequentially.
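A compact sketch of Eqs. (9) and (10) follows: angles are extracted from each frame-to-frame rotation \(C_k\) and successive differences give the relative angular acceleration. The pitch term uses the common arcsine form of the extraction, and, as in Eq. (10), the outputs are per-frame increments at the fixed 10 Hz image rate rather than quantities divided by the time step.

```python
import numpy as np

def euler_from_dcm(C):
    """Roll, pitch, and yaw increments extracted from the frame-to-frame
    rotation C_k, following Eq. (9) (pitch via the equivalent arcsine form)."""
    phi = np.arctan2(C[2, 1], C[2, 2])
    theta = -np.arcsin(np.clip(C[2, 0], -1.0, 1.0))
    psi = np.arctan2(C[1, 0], C[0, 0])
    return np.array([phi, theta, psi])

def relative_angular_acceleration(dcm_sequence):
    """Eq. (10): difference of successive per-frame angle increments.
    dcm_sequence holds C_k for consecutive image pairs (10 Hz in the experiment)."""
    rates = np.array([euler_from_dcm(C) for C in dcm_sequence])
    return np.diff(rates, axis=0)
```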

5. PERFORMANCE EVALUATION

To evaluate the accuracy of the accident detection of vehicles carrying hazardous materials using the proposed technique, an indoor experiment simulating actual environments was performed, as shown in Fig. 9. As the reference for relative angular acceleration, data from an MTi (Xsens) IMU were utilized (Lee et al. 2013a). Table 2 summarizes the detailed specifications of the Inertial Measurement Unit (IMU) used in the experiment; the sampling rate of the IMU data was 100 Hz. For the vision sensor used to capture the landmark images, an HD Pro Webcam C920 (Logitech) was used, and the sampling rate of the image data was 10 Hz. An AEK-4T GPS receiver (u-blox) was used for the time synchronization between the 100 Hz IMU data and the 10 Hz image data (Lim 2013).

Fig. 9. Simulation environments.

Table 2. X-SENS MTi sensor performance.

Parameter | Rate of turn | Acceleration | Magnetic field
Dimensions | 3 axes | 3 axes | 3 axes
Full scale (standard) | ±300 deg/s | ±50 m/s² | ±750 mGauss
Linearity | 0.1% of FS | 0.2% of FS | 0.2% of FS
Bias stability (1σ) | 1 deg/s | 0.02 m/s² | 0.1 mGauss
Scale factor stability (1σ) | - | 0.03% | 0.5%
Noise | 0.05 deg/s/√Hz | 0.002 m/s²/√Hz | 0.5 mGauss (1σ)
Alignment error | 0.1 deg | 0.1 deg | 0.1 deg
Bandwidth | 40 Hz | 30 Hz | 10 Hz
Max update rate | 512 Hz | 512 Hz | 512 Hz

Fig. 10 shows the detection of the four feature points in an actual image as the viewing angle changes. As shown in Fig. 10, the four feature points could be successfully extracted regardless of the angle of the landmark image. This also demonstrates the possibility of extracting feature points when the landmark-based feature point extraction algorithm is applied to actual vehicles carrying hazardous materials.

Fig. 10. Detection of feature points.

Fig. 11a shows the relative angles calculated from the 100 Hz IMU data after 10 Hz time synchronization, Fig. 11b shows the relative angular acceleration calculated from the IMU, and Fig. 11c shows the relative angular acceleration estimated using the vision sensor. Table 3 summarizes the yaw and roll angular acceleration thresholds for detecting jackknifing and rollover in the experiment environment (Lee et al. 2013a). Fig. 12 shows the accident detection results obtained by applying the thresholds to the estimated angular acceleration; the dots marked “1” represent the accident detection sections. Figs. 12a and 12b show the results obtained by applying the thresholds in Table 3 to the data acquired from the IMU. Tables 4 and 5 summarize the GPS time for each section in which jackknifing and rollover accidents were detected. Using the IMU data, jackknifing accidents were detected in a total of five sections, and rollover accidents were detected in a total of three sections.

Fig. 11. Relative angle and angular acceleration.

Fig. 12. Detection of accidents.

Table 3. Simulation environment of angular acceleration threshold for IMU data (Lee et al. 2013a).

Accident | Friction coefficient of left road (∞-shape road) | Friction coefficient of right road (∞-shape road) | Variation of velocity (km/h) | Loading capacity (ton) | Angular acceleration (rad/s²)
Jackknifing | 0.85 | 0.1 | 50 → 70 | 8 | 0.349
Rollover | 1 | 1 | 90 | 16 | 0.226

Table 4. Detected jackknifing by IMU data.

Section | Detected GPS time
1 | 450000.1, 450000.2, 450001.1, 450001.2
2 | 450033.1, 450033.2
3 | 450071.1, 450071.2, 450072.1, 450072.2
4 | 450094.1, 450095.1, 450095.2, 450097.1, 450097.2
5 | 450116.1, 450116.2, 450117.1, 450117.2, 450118.1, 450118.2

Table 5. Detected rollover by IMU data.

Section | Detected GPS time
1 | 450094.1, 450094.2, 450094.3, 450095.1, 450095.2
2 | 450102.1, 450102.2, 450103.1, 450103.2, 450105.1, 450105.2
3 | 450113.1, 450113.2, 450114.1, 450114.2, 450116.1, 450117.0, 450117.1, 450117.2, 450118.2, 450119.1, 450119.2, 450120.0

The accident detection thresholds suggested by Lee et al. (2013a) were calculated from data obtained with an IMU, and thus are not directly applicable to the technique proposed in this study. Therefore, thresholds for accident detection using a vision sensor need to be studied. As shown in Figs. 11b and 11c, the angular acceleration obtained from the IMU and the angular acceleration estimated from the vision sensor were found to differ. This indicates that the three-dimensional angular acceleration estimated using the vision sensor is sensitive to changes in the angles of the two-dimensional feature points and therefore changes abruptly. In other words, for the accident detection of vehicles carrying hazardous materials using the vision sensor, new thresholds need to be established. In this study, to establish thresholds for the accident detection of vehicles carrying hazardous materials using the proposed vision sensor, the IMU accident detection values and accident sections were analyzed. Accident detection thresholds for the vision sensor data were experimentally estimated by analyzing the thresholds in Table 3 and the accident detection sections in Figs. 12a and 12b. The accident detection thresholds estimated from the vision sensor data were 0.04 rad/s² for the yaw acceleration and 0.17 rad/s² for the roll acceleration. Fig. 12c shows the results for the detection of jackknifing using the vision sensor thresholds, and Fig. 12d shows the results for the detection of rollover. Tables 6 and 7 summarize the GPS time for the accident detection sections obtained from the vision sensor data.
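The detection step itself reduces to a threshold test on the estimated relative angular acceleration, as sketched below with the vision-sensor thresholds reported above; whether the comparison uses absolute values or signed exceedance is an assumption of this sketch.

```python
import numpy as np

YAW_THRESHOLD = 0.04    # rad/s^2, jackknifing threshold estimated for the vision sensor
ROLL_THRESHOLD = 0.17   # rad/s^2, rollover threshold estimated for the vision sensor

def detect_accidents(yaw_acc, roll_acc, gps_time):
    """Flag epochs whose relative angular acceleration exceeds the thresholds,
    mirroring the '1' markers of Fig. 12. All inputs are equal-length arrays."""
    yaw_acc, roll_acc, gps_time = map(np.asarray, (yaw_acc, roll_acc, gps_time))
    jackknifing = np.abs(yaw_acc) > YAW_THRESHOLD
    rollover = np.abs(roll_acc) > ROLL_THRESHOLD
    return gps_time[jackknifing], gps_time[rollover]
```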

Table 6. Detected jackknifing by vision sensor data.

Section | Detected GPS time
1 | 449999.6, 450000.1, 450000.2, 450000.5
2 | 450033.1
3 | 450071.3, 450071.4, 450072.0
4 | 450094.2, 450094.3, 450095.0, 450096.1, 450096.2, 450097.0, 450097.4
5 | 450100.2, 450102.0, 450102.1, 450102.2, 450103.0, 450103.1, 450103.2, 450103.3, 450104.0, 450104.3
6 | 450114.0, 450116.1, 450116.2, 450116.3, 450116.4, 450117.0, 450117.1, 450117.3, 450118.1, 450118.2, 450119.0, 450120.0, 450120.1

Table 7. Detected rollover by Vision sensor data.

Section | Detected GPS time
1 | 450000.1, 450000.2
2 | 450071.3, 450071.4, 450072.0
3 | 450094.2, 450094.3
4 | 450103.1, 450103.2
5 | 450116.4, 450117.0

When the accident sections detected using the vision sensor data were compared with those from the IMU data, all five jackknifing accident sections were detected, with over-detection observed in one additional section (section 5 of Table 6). In the case of rollover, all three sections detected by the IMU were detected, with over-detection observed in two additional sections (sections 1 and 2 of Table 7). To summarize the results of the experiment, the jackknifing and rollover accident sections detected with the thresholds estimated from the vision sensor data were consistent with the accident sections detected using the IMU data, but showed a tendency toward over-detection. This is because the estimation of angular acceleration using the vision sensor is sensitive to changes in the angles of the two-dimensional feature points. This could be improved by applying a scale factor when estimating the three-dimensional position from the simplified two-dimensional position information.

6. CONCLUSIONS

In this study, to resolve the problems of existing systems that calibrate INS errors in urban areas using GPS, the feasibility of a technique that detects the accidents of vehicles carrying hazardous materials using only vision sensor data was examined. Based on the relative angular acceleration estimated with a vision sensor, the proposed technique experimentally estimates the jackknifing and rollover accident detection thresholds of a vehicle carrying hazardous materials and detects accidents. For combining the vision sensor with existing GPS/INS systems, five coordinate systems were defined, and the process for improving the accuracy of feature points via image distortion calibration was studied. Relative angular acceleration information estimated using the vision sensor was used in order to utilize the previously suggested thresholds, which are based on changes in relative angular acceleration. The performance of the proposed algorithm was evaluated by comparing the accident detection sections based on the changes in the relative angular acceleration of the feature points extracted by the vision sensor with the accident detection sections detected using the IMU data.

The accident detection thresholds calculated from the existing IMU data were not appropriate for the proposed algorithm, so vision sensor accident detection thresholds were presented based on actual experiment data. When the accidents detected with these vision-sensor-based thresholds in an indoor environment simulating a vehicle carrying hazardous materials were compared with the IMU results, all the accident detection sections were detected, but several over-detection cases were observed. These over-detections are thought to have occurred in the process of converting the simplified two-dimensional image information into three-dimensional information, and also because the experimental environment in which the IMU accident detection values had been estimated differed from the environment of this study.

The feasibility of the accident detection of vehicles carrying hazardous materials was examined through an experiment using only the data obtained from a vision sensor, and it was found that the calculated accident detection thresholds need to be analyzed with a much larger data set. To improve the reliability of the vision sensor accident detection sections, further studies on image processing algorithms robust to surrounding environments are required. The sensitivity of the vision sensor to changes in the brightness and luminosity of the surroundings could be reduced by further studies on adaptive feature point extraction algorithms that use the histograms and the changes in the average values of each image frame obtained from the vision sensor. Also, the accuracy of the accident detection of vehicles carrying hazardous materials could be improved by further studies on detection thresholds in order to reduce the over-detection tendency.

References

  1. Bradski, G. & Kaehler, A. 2008, Learning OpenCV (CA: O'Reilly Media, Inc.), pp.370-404.
  2. Brown, D. C. 1966, Decentering distortion of lenses, Photometric Engineering, 32, 444-462.
  3. Fosbury, A. M. & Crassidis, J. L. 2006, Kalman Filtering for Relative Inertial Navigation of Uninhabited Air Vehicles, AIAA Guidance, Navigation, and Control Conference, Keystone, CO, Aug 2006, AIAA, paper #2006-6544. http://dx.doi.org/10.2514/6.2006-6544
  4. Kim, H. S., Choi, K. H., Lee, J. Y., Lim, J. H., Chun, S. B., et al. 2013, Relative Positioning of Vehicles Carrying Hazardous Materials Using Real-Time Kinematic GPS, JKGS, 2, 23-31. http://dx.doi.org/10.11003/JKGS.2013.2.1.019
  5. Kim, S. B., Bazin, J. C., Lee, H. K., Choi, K. H., & Park, S. Y. 2011, Ground Vehicle Navigation in Harsh Urban Conditions by Integrating Inertial Navigation System, Global Positioning System, Odometer and Vision Data, IET Radar, Sonar and Navigation, 5, 814-823. http://dx.doi.org/10.1049/iet-rsn.2011.0100
  6. Lee, B. S., Chun, S. B., Lim, S., Heo, M. B., Nam, G. W., et al. 2013a, Performance Evaluation of Accident Detection using Hazard Material Transport Vehicle HILS, 2013 KGS Conference, Jeju, Korea, 06-08 Nov., 2013
  7. Lee, J. Y., Kim, H. S., Choi, K. H., Lim, J. H., Kang, S. J., et al. 2013b, Design of Adaptive Filtering Algorithm for Relative Navigation, Proceedings of the World Congress on Engineering and Computer Science, vol. 2, San Francisco, USA, 23-25 October
  8. Ligorio, G. & Sabatini, A. M. 2013, Extended Kalman Filter- Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis and Performance Evaluation. Sensors, 13, 1919-1941. https://doi.org/10.3390/s130201919
  9. Lim, J. H. 2013, Estimation for Displacement of Vehicle based on GPS and Monocular Vision sensor, Master Dissertation, Korea Aerospace University
  10. Lim, J. H., Choi, K. H., Kim, H. S., Lee, J. Y., & Lee, H. K. 2012, Estimation of Vehicle Trajectory in Urban Area by GPS and Vision Sensor, in The 14th IAIN Congress 2012, Cairo, Egypt, 01-03 Oct. 2012.
  11. Soloviev, A. & Venable, D. 2010, Integration of GPS and vision measurements for navigation in GPS challenged environments. In Position Location and Navigation Symposium (PLANS), May 2010, IEEE/ION, pp.826-833.
  12. Tsai, R. Y. 1987, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, Robotics and Automation, IEEE Journal, 3, 323-344. http://dx.doi.org/10.1109/JRA.1987.1087109
  13. Vu, A., Ramanandan, A., Chen, A., Farrell, J. A., & Barth, M. 2012, Real-time computer vision/DGPS-aided inertial navigation system for lane-level vehicle navigation. Intelligent Transportation Systems, IEEE Transactions on, 13, 899-913. http://dx.doi.org/10.1109/TITS.2012.2187641
  14. Wang, J., Garratt, M., Lambert, A., Wang, J.J., Han, S., et al. 2008, Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles, IAPRSSIS, 37(B1), 963-9