
Implementation of Vehicle Plate Recognition Using Depth Camera

  • Choi, Eun-seok (Dept. of Computer Software Engineering, Dong-eui University) ;
  • Kwon, Soon-kak (Dept. of Computer Software Engineering, Dong-eui University)
  • Received : 2019.09.06
  • Accepted : 2019.09.23
  • Published : 2019.09.30

Abstract

In this paper, a method of detecting vehicle plates from depth pictures is proposed. A vehicle plate can be recognized by detecting plane areas. First, the plane coefficients of each square block are calculated. Then, neighboring blocks are compared to determine whether they lie on similar planes, and similar blocks are grouped into the same plane area. The width and height of each detected plane area are obtained. If the height and width match those of an actual vehicle plate, the area is recognized as a vehicle plate. Simulation results show that the recognition rate of the proposed method is about 87.8%.

I. INTRODUCTION

Smart traffic has recently come into the spotlight, and technologies such as autonomous vehicles and smart traffic signals are becoming a foothold for building smart cities. Accordingly, traffic automation technologies need to be made compact and practical enough to be applied more and more widely in real life. Intelligent transportation systems are continuously being introduced, and the use of existing CCTV or black box systems to recognize vehicle numbers is also increasing. Research is being carried out on how to quickly detect license plates, and the results of these studies are being applied to actual products. Smart vehicle plate detection is an essential technology for obtaining the information of vehicles in operation. By detecting the vehicle plate area, the vehicle number in the plate area can be recognized, and thereby vehicle information and driving information can be acquired and utilized.

Methods of detecting features on the outside of the license plate in color images have mainly been used for recognizing a vehicle plate. R. Zunino [1] proposed a method of detecting a vehicle plate region using window segmentation and vector quantization. This method detects an area with a large change in brightness and divides the license plate area into blobs. The method has the disadvantage that it is difficult to construct a blob when the plate is at a long distance. A method of generating separate fuzzy maps for the edge and color components of the license plate area and detecting the license plate area through fuzzy inference was also studied [2]. A method of vehicle license plate detection using Haar-like features [3] constructed a chain of weak classifiers to learn the morphological arrangement of characters in the license plate. A method of detecting license plate areas by learning local binary patterns of license plates [4] was also proposed. However, these methods using color videos have the disadvantage that they are difficult to apply in real life due to phenomena such as rotation and perspective projection distortion accompanying a change in camera attitude. There were also computer performance issues, so the methods were hard to apply to real situations.

In this paper, we provide a method of detecting a plate in a depth image in order to overcome the weaknesses of plate detection using color images with respect to illumination change and camera attitude change. When a depth camera is used to recognize an object, the infrared sensor of the depth camera provides the measured distance between the object and the camera as information. Recently, methods of object recognition through depth videos [5-7] have also been studied.

Due to the structural characteristic that the plate is planar, an area composed of similar planes can be regarded as a license plate candidate area of a vehicle. After dividing the image into square blocks for detection of planar areas, the depth information in each block is used to model the plane that best fits the points in the block. If the in-block plane error is less than a certain value, the block is considered to be planar. Thereafter, the normal vectors of the planes of adjacent blocks are compared to measure their degree of similarity. If the degree of similarity between two blocks is high enough, the two blocks are considered to lie on the same plane. In this way, the plane areas in the image can be obtained. The actual size of each area is then measured using the depth information, and if the measured height and width of the area are similar to the size of an actual plate, the area is considered to be a plate area. For each labeled plane area, the actual size of the plane can be measured through the depth image. The width of the plane is obtained by converting the pixels at the left end and the right end of the labeled area into camera coordinates and measuring the distance between them. The height of the plane is obtained in the same way from the pixels at the top end and the bottom end of the area. If the measured width and height of the plane area match the width and height of an actual plate, the area is detected as the number plate. The method proposed in this paper can perform vehicle plate detection independently of the lighting environment.

 

II. Vehicle Plate Recognition Using Depth Camera

In the case of vehicle plate detection using RGB images, the contrast of the vehicle plate, the color difference information, and the morphological information of the edges are used, whereas the depth camera provides the actual information about the surface of the vehicle plate and the actual length of the vehicle plate as shown in Fig. 1. It is possible to easily obtain information on the surface and the actual length between two points by using the depth information, which indicates the distance to the camera.

A relationship between a pixel p≡(x, y) in the 2D image coordinate system and a point pc≡(xc, yc, zc) in the 3D camera coordinate system is represented as Eq. (1) [1].

\(x_{c}=\frac{d}{f} x, y_{c}=\frac{d}{f} y, z_{c}=d\)       (1)

where f means the focal length, which is one of the camera parameters, and d means the depth pixel value of p.

 


Fig. 1. Example of object recognition through a camera.

In order to detect a plane area using depth information, the 2D image coordinates of depth pixels should be transformed into the 3D camera coordinate system.
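As an illustration, the back-projection of Eq. (1) can be written as a short NumPy routine. This is a minimal sketch; the function name and the assumption that pixel coordinates are measured from the principal point are ours, not from the paper.

```python
import numpy as np

def pixel_to_camera(x, y, d, f):
    """Back-project an image pixel (x, y) with depth value d into the 3D camera
    coordinate system using Eq. (1): x_c = (d/f)x, y_c = (d/f)y, z_c = d.
    Pixel coordinates are assumed to be relative to the principal point."""
    return np.array([(d / f) * x, (d / f) * y, d], dtype=np.float64)
```

For example, with f = 423.5 (the focal length reported in Section III), a pixel at (10, -5) with a depth of 800 mm maps to roughly (18.9, -9.4, 800) in camera coordinates.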

A plane containing a point (xc, yc, zc) on the 3D camera coordinate system is represented as follows:

\(a_{p} x_{c}+b_{p} y_{c}-z_{c}+c_{p}=0\)       (2)

where ap, bp, cp are plane coefficients that determine the plane.

The following matrices are obtained by substituting points in an N×N block into Equation (2):

\(\begin{aligned} &\mathbf{A} \mathbf{R}=\mathbf{B}\\ &\mathbf{A}=\left[\begin{array}{ccc} {x_{c_{1}}} & {y_{c_{1}}} & {1} \\ {x_{c_{2}}} & {y_{c_{2}}} & {1} \\ {} & {\vdots} & {} \\ {x_{c_{n}}} & {y_{c_{n}}} & {1} \end{array}\right] \quad \mathbf{B}=\left[\begin{array}{c} {z_{c_{1}}} \\ {z_{c_{2}}} \\ {\vdots} \\ {z_{c_{n}}} \end{array}\right] \quad \mathbf{R}=\left[\begin{array}{c} {a_{p}} \\ {b_{p}} \\ {c_{p}} \end{array}\right] \end{aligned}\)       (3)

where (xci, yci, zci) means the coordinates of the i-th point in the block and n means the number of points in the block. If n is 3 or more, the plane that has the smallest distance from the given points can be obtained by finding the plane coefficients through the following equation:

\(\mathbf{R}=\mathbf{A}^{+} \mathbf{B}\)       (4)

where A+ means the pseudo-inverse matrix of A, which is calculated by the following equation:

\(\mathbf{A}^{+}=\left(\mathbf{A}^{\mathrm{T}} \mathbf{A}\right)^{-1} \mathbf{A}^{\mathrm{T}}\)       (5)
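A least-squares plane fit following Eqs. (2)-(5) can be sketched as below. The function name, the input layout, and the use of the mean absolute residual as the in-block plane error are illustrative assumptions.

```python
import numpy as np

def fit_block_plane(points):
    """Fit a plane a_p*x_c + b_p*y_c - z_c + c_p = 0 to the camera-coordinate
    points of one N x N block (Eqs. (2)-(5)).
    points: (n, 3) array of (x_c, y_c, z_c) values with n >= 3."""
    A = np.column_stack((points[:, 0], points[:, 1], np.ones(len(points))))
    B = points[:, 2]
    R = np.linalg.pinv(A) @ B           # R = A^+ B, Eqs. (4)-(5)
    a_p, b_p, c_p = R
    error = np.mean(np.abs(A @ R - B))  # residual used to judge planarity (assumed metric)
    return (a_p, b_p, c_p), error
```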

In Equation (2), the normal vector is (ap, bp, -1), and the distance between the origin of the camera coordinate system and the plane is cp, as shown in Fig. 2.

If two planes are similar, the normal vectors and the distances of the planes should be similar. Therefore, the plane similarity between adjacent blocks can be measured by comparing the coefficients of the modeled planes of the blocks.

 


Fig. 2. Normal vector and distance from origin of modeled plane.

In order to measure the plane similarity, the angle between the normal vectors and the distance difference between the planes are calculated by the following equations:

\(\delta_{12}=\cos \theta=\frac{\mathbf{n}_{1} \cdot \mathbf{n}_{2}}{\left|\mathbf{n}_{1}\right|\left|\mathbf{n}_{2}\right|}=\frac{a_{p_{1}} a_{p_{2}}+b_{p_{1}} b_{p_{2}}+1}{\sqrt{a_{p_{1}}^{2}+b_{p_{1}}^{2}+1} \sqrt{a_{p_{2}}^{2}+b_{p_{2}}^{2}+1}}\)       (6)

\(\varepsilon_{12}=\left|c_{p_{1}}-c_{p_{2}}\right|\)       (7)

If 𝛿12 and 𝜀12 satisfy Equation (8), the planes can be considered to be similar planes.

\(\delta_{12}>U, \quad \varepsilon_{12}<D\)       (8)

where U and D mean the thresholds on the angle between the normal vectors and on the distance difference, respectively. If a pair of modeled planes for two adjacent blocks satisfies Equation (8), the blocks are grouped into the same plane.
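A sketch of the similarity test in Eqs. (6)-(8) is shown below; the default thresholds follow the best-performing setting reported in Section III (U = 0.95, D = 50), and the function name is ours.

```python
import numpy as np

def planes_are_similar(plane1, plane2, U=0.95, D=50.0):
    """Decide whether two modeled planes (a_p, b_p, c_p) of adjacent blocks
    belong to the same plane according to Eqs. (6)-(8)."""
    n1 = np.array([plane1[0], plane1[1], -1.0])
    n2 = np.array([plane2[0], plane2[1], -1.0])
    delta_12 = n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))  # Eq. (6)
    eps_12 = abs(plane1[2] - plane2[2])                             # Eq. (7)
    return delta_12 > U and eps_12 < D                              # Eq. (8)
```

Adjacent blocks that pass this test can then be merged by a standard connected-component labeling over the block grid to obtain the grouped plane areas.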

The outline shape of each grouped plane area is extracted in order to find rectangular objects. If the shape of the outline is similar to a polygon with four corners, the area becomes a vehicle plate candidate as shown in Fig. 3.

 


Fig. 3. Detection of rectangle object with planar surface.
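The paper does not specify how the outline is compared with a four-cornered polygon; one common way, sketched here with OpenCV contour approximation, is to simplify the outline with cv2.approxPolyDP and count its vertices. The 2% arc-length tolerance is an assumed value.

```python
import cv2
import numpy as np

def is_four_corner_region(mask):
    """Check whether a grouped plane area (non-zero pixels of 'mask') has an
    outline close to a four-cornered polygon."""
    # OpenCV 4.x convention: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    outline = max(contours, key=cv2.contourArea)
    # Approximate the outline; 2% of the arc length is an assumed tolerance.
    approx = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    return len(approx) == 4
```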

The actual width and height of each vehicle plate candidate are measured through the depth information. The image coordinates of the 4 pixels at the top-most, bottom-most, left-most, and right-most positions of the candidate area are transformed to 3D camera coordinates through Equation (1). The width and height are measured through the following equation:

\(\begin{array}{l} {\text { height }^{2}=\left(x_{t}-x_{b}\right)^{2}+\left(y_{t}-y_{b}\right)^{2}+\left(z_{t}-z_{b}\right)^{2}} \\ {\text { width }^{2}=\left(x_{r}-x_{l}\right)^{2}+\left(y_{r}-y_{l}\right)^{2}+\left(z_{r}-z_{l}\right)^{2}} \end{array}\)       (9)

where (xt, yt, zt) means the 3D coordinates of the top-most pixel, as shown in Fig. 4. If the width and height match those of a vehicle plate, the area is detected as the vehicle plate area.

 


Fig. 4. Calculating width and height of plane area.
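The size check of Eq. (9) can be sketched as follows; the plate dimensions would be taken from Table 1, and the 20% tolerance shown here is an assumed value, not one stated in the paper.

```python
import numpy as np

def matches_plate_size(top, bottom, left, right, plate_width, plate_height, tol=0.2):
    """Measure the real height and width of a candidate plane area with Eq. (9)
    and compare them to a known plate size.  top/bottom/left/right are the 3D
    camera coordinates (from Eq. (1)) of the extreme pixels of the area."""
    height = np.linalg.norm(np.asarray(top) - np.asarray(bottom))
    width = np.linalg.norm(np.asarray(right) - np.asarray(left))
    return (abs(width - plate_width) <= tol * plate_width and
            abs(height - plate_height) <= tol * plate_height)
```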

 

III. Simulation Result

In order to measure the performance of vehicle plate recognition through the proposed method, we use a manufactured car model with a vehicle plate. Results of the vehicle plate recognition for the manufactured car model are shown in Fig. 5.

 


Fig. 5. Results of vehicle plate recognition for the manufactured car model. (a) RGB picture of the manufactured car model, (b) plane surface detection, and (c) recognized vehicle plate area.

In order to measure the performance of the proposed method, 20 depth pictures including vehicles with Korean vehicle plates are captured for the simulation, as shown in Fig. 6. The depth pictures are captured by an Intel RealSense D435 whose resolution is 320x240, and the focal length f used in Equation (1) is 423.5. The width and height specifications of Korean vehicle plates are shown in Table 1.

 

Table 1. Specification of Korean vehicle plate.


 


Fig. 6. Example pictures of vehicles used in simulations. (a) produced before 2006 and (b) produced after 2006.

Table 2 shows the successful recognition rates of vehicle plate recognition according to D and U, as shown in Fig. 7. When D and U are 50 and 0.95, respectively, the recognition rates are the best. The recognition rate is more accurate for vehicle plates produced after 2006.

 


Fig. 7. Results of license plate recognition simulation.

 

Table 2. Simulation results of vehicle plate detection.


 

IV. Conclusion

In this paper, we proposed a method of recognizing vehicle plates from captured depth pictures. Existing methods of recognizing plates through color pictures had the problem that they were affected by the lighting environment, but the proposed method can recognize vehicle plates more accurately. We expect that the proposed method will make it possible to improve intelligent traffic systems by recognizing vehicles and license plates more quickly and accurately.

 

Acknowledgement

This research was supported by the BB21+ Project in 2019.

References

  1. C. N. E. Anagnostopoulos, I. E. Anagnostopoulos, V. Loumos, and E. Kayafas, "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications," IEEE Transactions on Intelligent Transportation Systems, 7(3), 377-392.
  2. S. Chang, L. Chen, Y. Chung, and S. Chen, "Automatic License Plate Recognition," IEEE Transactions on Intelligent Transportation Systems, 5(1), 42-53.
  3. H. Zhang, W. Jia, X. He, and Q. Wu, "Learning Based License Plate Detection Using Global and Local Features," Proceedings of the 18th International Conference on Pattern Recognition, 1102-1105.
  4. S. Du, M. Ibrahim, M. Shehata, and W. Badawy, "Automatic License Plate Recognition (ALPR): A State-of-the-art Review," IEEE Transactions on Circuits and Systems for Video Technology, 23(2), 311-325. https://doi.org/10.1109/TCSVT.2012.2203741
  5. D. S. Lee and S. K. Kwon, "Improvement of Depth Video Coding by Plane Modeling," Journal of the Korea Industrial Information Society, 21(5), 11-17.
  6. D. S. Lee and S. K. Kwon, "Correction of Perspective Distortion Image Using Depth Information," Journal of Multimedia, 18(2), 106-112.
  7. D. M. Kim, H. J. Wi, J. H. Kim, and H. C. Shin, "Drowsy Driving Detection Using Facial Recognition System," Proceedings of the Conference of the Korea Information Science Society, 2007-2009, 2015.