
Analogical Face Generation based on Feature Points

  • Received : 2019.03.15
  • Reviewed : 2019.03.21
  • Published : 2019.03.31

Abstract

There are many ways to perform face recognition. Its first step is face detection; if no face is found at this stage, recognition fails. Face detection research faces many difficulties because the face varies with size, left-right and up-down rotation, side versus frontal view, facial expression, and lighting conditions. In this study, facial features are extracted and geometrically reconstructed in order to improve the recognition rate within the extracted face region. The reconstructed facial feature vectors are then used to adjust the face angle and to improve the recognition rate at each angle. In recognition attempts using the geometrically reconstructed results, recognition performance improved for both the up-down and left-right facial angles.


I. INTRODUCTION

The face recognition method determines the presence or absence of a face in an image and, if a face exists, finds its position and size. Existing face detection methods can be classified into knowledge-based, feature-based, template-matching, and appearance-based methods [1].

The knowledge-based method assumes that a person's face consists of two eyes, a nose, and a mouth, with each facial element at a certain distance and position, and detects faces by considering the relations between these elements. Feature-based methods infer face size and position using face-specific features such as facial components, color, shape, and size [2][3]; they detect faces through the inferred data, the distances between facial elements, their positions on the face, and so on. The template-matching method first creates a basic template for the face, then analyzes input face images to build a standard template; the standard template and the input face are compared for detection. The appearance-based method detects a face using a model learned from a set of training images; it uses statistical values to detect face parts in complex images.

The goal of this study is to improve the face recognition rate, one of the most important performance factors. Many factors degrade it: changes in skin color due to illumination or strong contrast, changes in facial expression, attachments such as glasses, and the angle of the face all greatly affect recognition. In this paper, we propose a method to prevent the recognition rate from degrading as the face angle changes, that is, to improve the recognition rate for faces turned up, down, left, and right.

 

II. BASIC MODEL CONFIGURATION

 

2.1 Face recognition model

The scope of this paper is to prevent deterioration of the recognition rate even as the face rotation angle increases. Therefore, the face recognition model shown in Fig. 1 was created under the assumption that the face is turned up, down, left, or right. The target rotation angles are 15 degrees up and down and 30 degrees left and right.

 

Fig. 1. Face recognition model.

The face source generates a three-dimensional face shape from a flat face photograph. The 3D face shape is created using information from the 2D photograph, such as brightness, contrast, the facial feature points, the distances between facial elements, and the curvature inferred from the face rotation angle.

The most commonly used algorithms are the kNN algorithm and the blend shape algorithm. The inference based on the face recognition model is done through complicated calculations.

Therefore, even for the same person, different face recognition models can be created depending on the type of photograph. When the algorithm is executed, the face recognition model is accumulated in several layers. In addition, the above-mentioned contrast, feature points, and rotation angle differ from element to element. The facial surface, especially protruding parts such as the cheekbones, was not considered in this study.

The advantages of this face recognition model are as follows. First, it is not necessary to store all faces, so storage space and recognition time can be drastically reduced. It can also be powerful in surveillance systems: a single camera can take the role of dozens because it can simultaneously recognize the faces of many people at different face angles [4].

Second, it can solve the problem of the same person being mistakenly recognized as another person, or failing recognition, depending on facial expression or angle. For example, although the face model in Fig. 1 shows the same person, an existing system may recognize it as a different person [4].

The feature of this method is that it can internally generate and recognize faces at various angles from only one frontal face, without having to store several faces of the same person. The internally generated faces at various angles are stored as angle vectors.

When a new face image is searched, the coordinates of the feature points of the eyes, nose, and mouth are first extracted, and the feature point of the left eye is connected to that of the right mouth corner. Then, the direction and angle of the face are determined by also connecting the feature points of the right eye and the left mouth corner and the feature points of the nose, as sketched below.
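To make this geometric step concrete, the following minimal Python sketch (our illustration, not the paper's exact formula; the landmark names, the arctangent scaling, and the sign conventions are assumptions) crosses the two diagonals and reads the nose-tip offset from the crossing point as a direction cue:

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (2D points)."""
    a1, b1 = p2[1] - p1[1], p1[0] - p2[0]
    c1 = a1 * p1[0] + b1 * p1[1]
    a2, b2 = p4[1] - p3[1], p3[0] - p4[0]
    c2 = a2 * p3[0] + b2 * p3[1]
    det = a1 * b2 - a2 * b1                    # zero if the lines are parallel
    return np.array([(b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det])

def face_direction(left_eye, right_eye, left_mouth, right_mouth, nose_tip):
    """Cross the diagonals (left eye to right mouth corner, right eye to
    left mouth corner); the nose-tip offset from the crossing point hints
    at the face direction and angle."""
    center = line_intersection(left_eye, right_mouth, right_eye, left_mouth)
    dx, dy = nose_tip - center
    scale = np.linalg.norm(right_eye - left_eye)  # normalize by eye distance
    yaw = np.degrees(np.arctan2(dx, scale))       # left-right turn cue
    pitch = np.degrees(np.arctan2(dy, scale))     # up-down turn cue
    return yaw, pitch
```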

 

2.2 Structural features of face

Feature points on the face are relatively easy to distinguish, and their arrangement differs from person to person. To compare feature points, the correlation values of the eyes, nose, and mouth and the linear relations of the chin, mouth, and face area are used.

Fig. 2 schematically shows each feature point of the face and the structural position value of each point. The structural position value of each component is obtained from the coordinates of the candidate area of each extracted feature point [8].

A is the distance between the eyes, B is the distance between the eye and the nose, and C is the distance between the nose and mouth. In addition, the distance between the eyebrow and the eye, and the distance between the two eyebrows can be obtained.

In this study, the error of the ratios was reduced by using the averages of the positions of the eyes, nose, and mouth and the minimum/maximum values between them. The ratios of the distances between feature points are calculated with the Euclidean distance; recognition by Euclidean distance is normalized to each person's standard deviation.
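As an illustration, a minimal sketch of the distance and ratio computation of Fig. 2, assuming 2D landmarks given as NumPy arrays (the landmark names and the choice of A as the normalizer are our assumptions):

```python
import numpy as np

def feature_ratios(lm):
    """lm: dict of 2D landmark points. Returns the distances A (eye-eye),
    B (eyes-nose), C (nose-mouth) of Fig. 2 and their scale-free ratios."""
    eye_mid = (lm["left_eye"] + lm["right_eye"]) / 2.0
    A = np.linalg.norm(lm["right_eye"] - lm["left_eye"])
    B = np.linalg.norm(lm["nose"] - eye_mid)
    C = np.linalg.norm(lm["mouth"] - lm["nose"])
    return {"A": A, "B": B, "C": C, "B/A": B / A, "C/A": C / A}
```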

The face height of Korean males is about 199 mm and that of females about 190 mm, so the male face is about 9 mm larger. Each person's face has different distances between elements, and when the face is divided into three parts, the height ratios of the upper, middle, and lower face also differ. As shown in Table 1, the heights of the upper, middle, and lower face accounted for about 40%, 37%, and 21% in males and about 42%, 37%, and 21% in females [14]. More simply, the ratio of upper : middle : lower face height was about 1.0 : 1.0 : 0.6 for males and 1.0 : 0.9 : 0.5 for females.

 

Fig. 2. Structural position value of facial feature points.

 

Table 1. Measurement data matched to total face height. Male samples were selected within the female mean±SD of total face height (n=84). Statistically significant differences between the genders (p<0.05).


 

Table 2. Ratios among face heights.


 

Fig. 3. Ratio of lower and upper face height.

As shown in Table 2, the ratio of the lower face height to the upper face height was about 56% for males and about 50% for females [14]. Classified by facial height structure, males and females showed different patterns. The middle face type, in which the lower face is 50-60% of the upper face, accounted for slightly more than half of the male faces (54.1%); the upper face type and the low face type were similar at 24.3% and 21.6%, respectively. In females, however, the low face type accounted for more than half (53.5%), the middle face type for 40.8%, and the upper face type for 5.7%. Fig. 3 shows the various face ratios of Koreans [14].

 

III. EXTRACTING FEATURE POINTS

 

3.1 Extracting depth data of 2D face

In this section, a method of extracting depth information from a 2D face is described. Generally, 2D face recognition performs detection using brightness and contrast feature values; however, the absence of depth information is an obvious limitation.

Therefore, in order to recognize faces at various angles from a 2D face, it is necessary to compute the depth information that a 2D face lacks. The Point Signature method is used to extract depth information. Point Signature expresses distance information as a one-dimensional spatial signal over angle, and Chua and Ho [6][7] introduced the concept of Point Signature to face recognition [9].

This method extracts the depth information of the face corresponding to each feature point, based on a single reference point, and extracts the curvature information of the facial structure. The procedure of the Point Signature method is as follows (a minimal sketch follows the steps).

(1) Set the angle θ and the radius r for depth data extraction.

(2) Generate the normal vector after setting the reference point.

(3) Project a circle of radius r centered on the normal vector onto the face shape.

(4) Extract the depth data corresponding to each angle θ.
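A minimal sketch of these steps, assuming a hypothetical callback depth_fn that returns the 3D surface point above a given (x, y) image position (the sampling step and sign convention are illustrative):

```python
import numpy as np

def point_signature(depth_fn, center, normal, r=30.0, step_deg=10.0):
    """Record the signed depth of the face surface along a circle of
    radius r around `center`, as a 1D signal over the angle theta."""
    normal = normal / np.linalg.norm(normal)
    u = np.cross(normal, [0.0, 0.0, 1.0])      # span the circle's plane
    if np.linalg.norm(u) < 1e-6:               # normal was parallel to z
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    signature = []
    for theta in np.arange(0.0, 360.0, step_deg):
        rad = np.radians(theta)
        p = center + r * (np.cos(rad) * u + np.sin(rad) * v)
        surface_pt = depth_fn(p[:2])           # face point above (x, y)
        signature.append(np.dot(surface_pt - p, normal))
    return np.array(signature)                 # depth data d(theta)
```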

 

3.2 3D Face generation

The causes of performance degradation in the 2D recognition model are recognition errors due to changes in the external environment, that is, data loss caused by changes in the facial expression of the recognized face and by illumination changes.

These problems often cause false detections in face recognition. To overcome them, 3D face recognition should be performed using the depth information of the face shape, which does not exist in a 2D face.

A certain amount of face depth information can be obtained from 2D face information by analyzing the reflection pattern according to the brightness and contrast ratio of the photograph. Once the face depth information is acquired, it becomes the basic data for building a 3D face shape. The obtained 3D face shape describes the face in the form of a point cloud. It is also robust to illumination changes because it has depth information [6][9].

Moreover, poses can be compensated freely, which makes up for the disadvantages of 2D face recognition. Since the generated 3D face is imperfect, the face shape is reconstructed by performing pose compensation as a preprocessing step.

In order to perform face recognition, the coordinates of the generated face shape must be normalized through reference points. 2D faces differ in the size and position of the acquired facial features according to the environmental factors at the time of shooting and personal facial characteristics [10].

It is therefore difficult to extract consistent facial feature data without the normalization process, so pose compensation to the frontal view is performed to extract accurate feature data.

In 2D face recognition, a rotated face loses data in the rotated part. The 3D face shape, however, stores each point with X, Y, Z coordinates in a point cloud, so pose compensation is possible about every rotation axis.
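In the point-cloud setting, this pose compensation reduces to an ordinary 3D rotation about a reference point. A minimal sketch under that reading (the axis order and the nose tip as origin are our assumptions):

```python
import numpy as np

def rotation_matrix(yaw_deg, pitch_deg):
    """Rotation about the vertical axis (yaw), then the horizontal axis (pitch)."""
    y, p = np.radians([yaw_deg, pitch_deg])
    Ry = np.array([[ np.cos(y), 0.0, np.sin(y)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(y), 0.0, np.cos(y)]])
    Rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p),  np.cos(p)]])
    return Ry @ Rx

def compensate_pose(points, yaw_deg, pitch_deg, origin):
    """Rotate an (N, 3) point cloud by the inverse of the estimated pose,
    about a reference point such as the nose tip, to face it frontally."""
    R = rotation_matrix(-yaw_deg, -pitch_deg)
    return (points - origin) @ R.T + origin
```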

 

3.3 kNN Algorithm

The kNN algorithm is very useful for extracting the feature points for face recognition. It rationally classifies feature points that are difficult to classify, using the nearest neighbors.

kNN predicts new data from the information of the k nearest neighbors in the existing data. As shown in Fig. 4, the category of the spotted ball can be inferred from its neighbors: if k is 1 it is classified as white, and if k is 3 it is classified as black. For a regression problem, the average of the neighbors' dependent variable is the predicted value.

In fact, kNN has no procedure that could be called "learning": when new data arrives, it finds neighbors from the distances to the existing data. Some therefore call it a lazy model, meaning that no model is built for kNN separately; it is also called instance-based learning. This contrasts with model-based learning, where a model is created from the data to perform tasks. kNN performs tasks such as classification and regression using only the instances, without generating a separate model [15].

Thus, the minimum-distance classification rule following this basic principle is called the kNN (k-Nearest Neighbor) classification rule. To apply it, standard patterns must be selected in advance for each class. That is, among the k nearest neighbors of an arbitrary pattern x, the class to which the largest number of neighbors belongs is determined as the class of x.

 

Fig. 4. Concept of the 3-Nearest Neighbors method.

The order of execution of the kNN algorithm will be briefly described below with reference to Fig. 4.

In the first step, the k nearest neighbors of the given unknown data are searched. In the second step, the class of the unknown data is determined by voting among the k nearest neighbors [15].

In addition, several forms can be used depending on the value of p in the distance metric of Equation 1.

\(\boldsymbol{d}_{\boldsymbol{p}}(\boldsymbol{x})=\left(\sum_{i}\left|\boldsymbol{x}_{i}\right|^{p}\right)^{1 / \boldsymbol{p}}\)       (1)

When p = 1, this is the Manhattan distance, given as Equation 2.

\(\boldsymbol{d}_{1}(\boldsymbol{x})=\sum_{i}\left|\boldsymbol{x}_{i}\right|\)       (2)

When p = 2, this is the Euclidean distance, presented as Equation 3.

\(\boldsymbol{d}_{2}(\boldsymbol{x})=\sqrt{\sum_{i}\left|\boldsymbol{x}_{i}\right|^{2}}\)       (3)

When p = ∞, this is the maximum distance metric, presented as Equation 4.

\(\boldsymbol{d}_{\infty}(\mathbf{x})=\max _{i}\left|x_{i}\right|\)       (4)

The kNN algorithm in this study uses the Euclidean distance with p = 2, as in the sketch below.
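A minimal sketch of the classification rule with the metric family of Equations 1-4 (illustrative only; real feature vectors would come from the facial feature extraction described later):

```python
import numpy as np
from collections import Counter

def minkowski(a, b, p=2):
    """d_p of Equation 1: p=1 Manhattan, p=2 Euclidean, p=inf maximum."""
    if np.isinf(p):
        return np.max(np.abs(a - b))
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def knn_classify(x, X_train, y_train, k=3, p=2):
    """Vote among the k nearest training samples (Euclidean when p=2)."""
    dists = [minkowski(x, xi, p) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```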

 

3.4 Blend Shape Algorithm

Blend shapes are a near-standard way of expressing facial expressions in computer animation; they are used in 3D authoring tools such as Maya and in movies such as The Lord of the Rings, King Kong, and Final Fantasy.

The blend shape is represented as a linear combination of basis vectors, each representing a facial expression. When a blend shape with n vertices is expressed as a 3n × 1 vector, it is written as Equation 5 [2][3].

\(\boldsymbol{f}=\sum_{\boldsymbol{k}=0}^{\boldsymbol{n}} \boldsymbol{w}_{\boldsymbol{k}} \boldsymbol{b}_{\boldsymbol{k}}\)        (5)

where b0 represents the neutral expression. Commercial programs such as Maya use the delta blend shape, which can be expressed as Equation 6; in this case, the weights are normalized between 0 and 1.

\(\boldsymbol{f}=\boldsymbol{b}_{0}+\sum_{k=1}^{n} \boldsymbol{w}_{k}\left(\boldsymbol{b}_{k}-\boldsymbol{b}_{0}\right)\)       (6)
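Equation 6 is a one-line computation once the basis is stored as deltas; a minimal sketch, assuming the shapes are flattened to 3n-vectors:

```python
import numpy as np

def delta_blendshape(b0, deltas, weights):
    """Equation 6: f = b0 + sum_k w_k (b_k - b0).
    b0: (3n,) neutral shape; deltas: (K, 3n), row k holding (b_k - b0);
    weights: (K,), clipped to [0, 1] as in the normalized form."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    return b0 + w @ deltas
```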

In addition, there are intermediate blend shapes and combination blend shapes. In CG movie production, more than 100 blend shapes are made, so combination blend shapes are often used.

Blend shapes have advantages over other facial modeling methods. The biggest is that each blend shape has its own meaning, and by adjusting the weights of these blend shapes, the desired look can be created directly [11].

This contrasts with a similar method, PCA (Principal Component Analysis), whose basis vectors produce facial expressions that are not intuitively understood by humans [13]. Blend shapes, however, also have disadvantages; the biggest is that a significant number of blend shapes is required to create a natural look.

 

IV. EXPERIMENT

 

4.1 Extracting geometric feature information at 2D

The geometric feature information of a face collectively refers to the sizes of the eyes, nose, mouth, and eyebrows, their relative positions, the distance between the ears, and the shape of the chin line.

In this study, candidate regions for the eyes, nose, and mouth in a face image are determined in advance, and then multiple pieces of feature information are extracted by projecting onto the X-axis and Y-axis. Relative distances are then obtained from this feature information.
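A minimal sketch of such axis projection, assuming a grayscale face crop (treating dark rows as eye/nose/mouth candidates is a common heuristic, not necessarily the paper's exact procedure):

```python
import numpy as np

def projection_profiles(gray_face):
    """Project a grayscale face crop onto the axes: one mean intensity
    per row (Y profile) and per column (X profile)."""
    y_profile = gray_face.mean(axis=1)
    x_profile = gray_face.mean(axis=0)
    return x_profile, y_profile

def candidate_rows(y_profile, n=3):
    """Rows of the n darkest valleys: rough eye/nose/mouth candidates."""
    return np.sort(np.argsort(y_profile)[:n])
```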

For face recognition based on geometric feature extraction, an FMG (face model graph) was generated from training face image samples [5][6].

Then, the feature points of a face were searched by finding the FG (face graph) of an arbitrary facial image. The FG was obtained from face photographs, the positions of the facial feature points were found automatically, and Gabor and LBP features were extracted at each feature point [5].

The face has a fixed spatial arrangement of eyes, nose, and mouth; for example, the positions of the eyes and nose do not change. From the top, the face is arranged in the order of two eyes, nose, and mouth. Since this spatial arrangement is fixed, the expected candidate regions of the eyes, nose, and mouth can be found easily.

After finding the face regions in which the facial elements are located, Gabor and LBP features are extracted at each feature point of the FG. Each of the Gabor and LBP features is then represented as a set, and similarity is calculated by comparing these facial images with the input face image.

Because of the variety of faces, the FMG is generated to represent facial images in one representative form. FMGs by sex, race, and age can be obtained by collecting facial images reflecting various sexes, ages, and races, marking the feature points by hand, and then finding the average position of each feature. Fig. 5 shows the concept of feature extraction [5].

 

Fig. 5. FMG and feature point extraction.

 

Fig. 6. Comparison of extracted FMGs.

The FMGs for the pictures in Fig. 5 are shown in Fig. 6. Depending on the face angle in the photo, the FMG is generated slightly differently, and accordingly the feature points also have different vector values.

If feature points are not found, or a face area carries no features, the information is treated as noise. These results show that facial feature extraction obtains much more information from the facial elements than from the skin. Based on these results, we applied the kNN algorithm and the blend shape algorithm to the FMG to generate a 3D rotated face.

Fig. 7 shows the feature points extracted from the 2D image. Because they were extracted from a frontal face, both sides are distributed symmetrically around the nose. In Fig. 7 (a), the FMG is constructed by connecting the extracted feature points with wires; Fig. 7 (b) shows the extracted feature points.

Fig. 8 shows the face depth computed from the distances between the feature points when the frontal face is turned to the left. Through the kNN algorithm, feature points can be moved, or erased when they cannot be used. The depth of the face was altered by excessive calculation. Fig. 8 (a) is the 2D image and (b) shows the feature points of the face turned 30 degrees to the left. In Fig. 9 (b), the FMG is constructed by connecting the feature points of the rotated face with wires; the facial form appears relatively intact, but the facial depth of the side is excessive.

 

Fig. 7. Feature points of the 2D image.

 

Fig. 8. Rotated 3D image creation.

 

Fig. 9. Obtained FMG.

 

4.2 Face Tracking method in Video

The Particle Filter (PF) has been widely used to track objects in video images, and it holds a unique position alongside the Kalman filter in similarity processing.

The PF can predict the dynamic state of an object through the Monte Carlo method in the Bayesian framework. The posterior probability is estimated from the prior probability and the measured likelihood function by iterating Equation 7 [12].

The construction of a general PF has the following four steps, repeated to track the object [13] (one cycle is sketched after the steps).

(1) Selection: select new samples in proportion to the weights of the samples obtained in the previous step.

(2) Propagation: propagate the selected samples to new locations.

(3) Observation: measure and assign a weight to each sample.

(4) Estimation: estimate the object as the weighted average of the sample positions.
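One cycle of these four steps, as a minimal sketch (the propagate and likelihood callbacks are hypothetical, problem-specific functions; a Gaussian propagation as in Equation 8 is shown in the trailing comment):

```python
import numpy as np

def particle_filter_step(particles, weights, propagate, likelihood):
    """One Selection-Propagation-Observation-Estimation cycle.
    particles: (N, d) states; weights: (N,), summing to one."""
    N = len(particles)
    idx = np.random.choice(N, size=N, p=weights)   # Selection: resample
    particles = propagate(particles[idx])          # Propagation
    weights = np.array([likelihood(s) for s in particles])  # Observation
    weights /= weights.sum()
    estimate = weights @ particles                 # Estimation: weighted mean
    return particles, weights, estimate

# e.g. Gaussian propagation as in Equation 8:
# propagate = lambda P: P + np.random.normal(0.0, 2.0, P.shape)
```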

The color-based PF tracking method is a typical tracking algorithm: at each sample position it calculates the similarity of the color distribution to the tracked object, measures the probability that the object exists there, and tracks the object. However, when the pose of the tracked face changes, the shape and color distribution of the tracked object change with the pose.

In this case, the IVT tracking algorithm can mitigate the performance degradation caused by changes in the target's shape. The IVT algorithm extracts features from the previous images using PCA and tracks them using the PF technique, so it copes effectively with objects whose shape changes. In IVT, the following Bayesian probability is calculated to track the object [13].

\(p\left(X_{t} | Z_{1: t}\right) \propto p\left(Z_{t} | X_{t}\right) \int p\left(X_{t} | X_{t-1}\right) p\left(X_{t-1} | Z_{1: t-1}\right) d X_{t-1}\)       (7)

where Xt and Zt are the state of the object and the input image frame at time t, respectively, and the initial state X0 is assumed known. The parameters of the sample state Xt include the object's center coordinates, its size relative to the reference image, and its rotation angle about the horizontal axis.

As in the conventional face-tracking PF technique, IVT propagates samples from Xt-1 to Xt using a simple Gaussian distribution model instead of a complex dynamic model.

In other words,

\(p\left(X_{t} | X_{t-1}\right)=N\left(X_{t} ; X_{t-1}, \Sigma\right)\)       (8)

where ∑ is a suitably chosen diagonal covariance matrix. In the Observation step, IVT performs PCA on the previously tracked images to evaluate the similarity p(Zt | Xt) of the image Zt obtained from the state Xt of each sample. The current image Zt can then be represented in the subspace given by the obtained mean (μ) and eigenvectors.

 

Fig. 10. Tracking in video.

This probability is inversely proportional to the distance between the image Zt and the center point of the subspace, so two distances are used: when Zt is projected onto the subspace, the similarity is calculated from the distance from Zt to its projection and the distance from the projection to the center point of the subspace, as in the sketch below.
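A minimal sketch of this two-distance similarity, assuming a PCA mean and an orthonormal eigenvector matrix U are already available (the Gaussian form and sigma2 are our assumptions):

```python
import numpy as np

def subspace_similarity(z, mean, U, sigma2=1.0):
    """Distance-based likelihood for the Observation step. U: (D, m)
    orthonormal PCA eigenvectors; mean: (D,); z: (D,) candidate image."""
    c = U.T @ (z - mean)                        # projection coefficients
    d_out = np.linalg.norm((z - mean) - U @ c)  # z to its projection
    d_in = np.linalg.norm(c)                    # projection to the center
    return np.exp(-(d_out**2 + d_in**2) / (2.0 * sigma2))
```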

Fig. 10 shows feature points extracted from video images; the feature points adapt to face movements and changes. Despite face changes, occlusions, and scene changes, IVT and the PF are interlocked by continuously tracking the changes in the face.

The detector and tracker operate independently in every frame, and failures of detection or tracking can be compensated by exchanging information between them. If the similarity value of a detection result is lower than a specified threshold, it is determined that the target has not been detected correctly, and the detection is considered a failure.

 

V. CONCLUSIONS

The subject of this paper is to extract feature points from the face and rotate them in 3D to generate side faces; in this process, the kNN algorithm and the blend shape algorithm are applied. As the angle of the face increases, the accuracy of the generated face deteriorates. Creating a whole 3D face through computation from a 2D face is a near-impossible task at present; however, if the contrast ratio of the face photograph is large and the sharpness is excellent, the result can be upgraded considerably.

In addition, keeping the face location information of the extracted feature points improves face recognition performance. When the division method using the ASM (Active Shape Model), a method for handling the background component, is applied, face recognition performance also improves relatively.

Extracting the features of the facial components, extracting the components from the facial feature points, or both, can contribute to enhancing face recognition.

In this paper, we created left- and right-rotated faces, but not up- and down-rotated faces. Generating the latter requires a more difficult algorithm, and the results are inferior to the left-right case; in practice, the vertical angle is therefore difficult to extend beyond 15 degrees.

Face recognition technology will play a central role in modern biometrics. Although there are various biometric technologies, it is difficult to make them universal because each has fatal weaknesses.

Face recognition, comparatively, has no fatal weakness, so it is expected to lead the biometrics market for a while.

Therefore, further study is needed in many areas, such as 3D face generation and facial expression generation through algorithms such as blend shape, kNN, and PF.

REFERENCES

  1. M. Akhil Jabbar, B. L. Deekshatulu, and Priti Chandra, "Classification of Heart Disease Using K-Nearest Neighbor and Genetic Algorithm," Procedia Technology, Vol. 10, pp. 85-94, 2013. https://doi.org/10.1016/j.protcy.2013.12.340
  2. Mohammed Hazim Alkawaz, Dzulkifli Mohamad, Ahmad Hoirul Basori, and Tanzila Saba, "Blend Shape Interpolation and FACS for Realistic Avatar," 3D Research, Vol. 6, No. 6, 2015.
  3. Henry A. Rowley, Shumeet Baluja, and Takeo Kanade, "Neural Network-Based Face Detection," Computer Vision and Pattern Recognition, Carnegie Mellon University, 1996.
  4. Wooky Lee, Jongtae Baek, Hwaki Lee, and Young-Mo Kim, "Face Recognition Model based on Angled Feature Vectors," Journal of KIISE, Vol. 39, No. 11, pp. 871-875, 2012.
  5. Jin-Ho Kim, "Face Recognition by Fiducial Points Based Gabor and LBP Features," The Journal of the Korea Contents Association, Vol. 13, No. 1, pp. 1-8, 2013. https://doi.org/10.5392/JKCA.2013.13.01.001
  6. A. Kaveh, "Particle Swarm Optimization," Advances in Metaheuristic Algorithms for Optimal Design of Structures, Springer International Publishing, pp. 9-40, Mar. 2014.
  7. Dervis Karaboga, et al., "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, Vol. 42, No. 1, pp. 21-57, June 2014. https://doi.org/10.1007/s10462-012-9328-0
  8. Young-Il, Eung-Joo Lee, "Real-Time Automatic Human Face Detection and Recognition System Using Skin Colors of Face, Face Feature Vectors and Facial Angle Informations," The KIPS Transactions: Software and Data Engineering, pp. 491-500, 2002.
  9. Chan-Jun Park, Sung-Kwun Oh, and Jin-Yul Kim, "A Study on Three-dimensional Optimized Face Recognition Model: Comparative Studies and Analysis of Model Architectures," The Transactions of the Korean Institute of Electrical Engineers, Vol. 64, No. 6, pp. 900-911, 2015. https://doi.org/10.5370/KIEE.2015.64.6.900
  10. Hae Min Moon and Sung Bum Pan, "Long Distance Face Recognition System using the Automatic Face Image Creation by Distance," Journal of The Institute of Electronics and Information Engineers, Vol. 51, No. 11, November 2014.
  11. Jae Mo Chung, Hyeon Bae, and Sung Shin Kim, "Realtime Face Recognition by Analysis of Feature Information," Journal of Fuzzy Logic and Intelligent Systems, Vol. 11, No. 9, pp. 822-826, 2001.
  12. Jeany Son, Ilchae Jung, and Bohyung Han, "Real-Time Human Tracking with Detection Feedback," Journal of KIISE: Software and Applications, Vol. 40, No. 12, pp. 859-868, 2013.
  13. Jin Yul Kim and Yong-Seok Kim, "Face Tracking and Recognition in Video with PCA-based Pose-Classification and (2D)^2 PCA Recognition Algorithm," Journal of Korean Institute of Intelligent Systems, Vol. 23, No. 5, pp. 423-430, October 2013. https://doi.org/10.5391/JKIIS.2013.23.5.423
  14. Woo-Cheol Song, Sung-Ho Kim, and Ki-Seok Ko, "High-Set or Low-Set of Korean Face," Korean Journal of Physical Anthropology, Vol. 30, No. 1, pp. 1-6, 2017. https://doi.org/10.11637/kjpa.2017.30.1.1
  15. Moon Kyu Park, "Face Recognition Using Principal Component Analysis and KNN Method," M.S. Thesis, Chungbuk National University, 2003.