• Title/Summary/Keyword: Latent Structural SVM

Sentiment Analysis using Latent Structural SVM (잠재 구조적 SVM을 활용한 감성 분석기)

  • Yang, Seung-Won;Lee, Changki
    • KIISE Transactions on Computing Practices, v.22 no.5, pp.240-245, 2016
  • In this study, comments on restaurants, movies, and mobile devices, as well as domain-independent tweet messages, were analyzed for sentiment information. We proposed a system that extracts objects (or aspects) and opinion words from each sentence and then evaluates them. For the sentiment analysis, we compared the Structural SVM algorithm with the Latent Structural SVM; the latter showed better performance and was able to extract objects/aspects and opinion words from the VP/NP phrases produced by a dependency parser. Lastly, we also developed and evaluated a sentiment detection model for use in practical services.
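
For context, the Latent Structural SVM referenced here is typically trained with the objective of Yu & Joachims (2009). A minimal sketch of that standard objective follows; mapping the latent variable h to the choice of opinion-bearing VP/NP is my assumption about this paper's setup, not something the abstract states:

```latex
% Standard Latent Structural SVM objective (Yu & Joachims, 2009).
% x_i: input sentence, y_i: gold sentiment structure,
% h: latent variable (assumed here: which dependency-parsed VP/NP
% carries the aspect/opinion pairing).
\min_{w} \; \frac{1}{2}\lVert w \rVert^{2}
  + C \sum_{i=1}^{n} \Big[
      \max_{(\hat{y},\hat{h})} \big( w^{\top}\Phi(x_i,\hat{y},\hat{h}) + \Delta(y_i,\hat{y},\hat{h}) \big)
      - \max_{h} \, w^{\top}\Phi(x_i, y_i, h)
    \Big]
```

The second max term makes the objective non-convex, so training usually alternates between imputing the latent variables and solving the resulting convex structural SVM (a CCCP-style procedure).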

Jointly Learning Model using modified Latent Structural SVM (Latent Structural SVM을 확장한 결합 학습 모델)

  • Lee, Changki
    • Annual Conference on Human and Language Technology, 2013.10a, pp.70-73, 2013
  • In natural language processing, many modules are connected in a pipeline. This has two drawbacks: errors from earlier stages accumulate in later stages, and earlier stages cannot use information from later stages. To address these pipeline problems, this paper extends the conventional joint learning approach and proposes a joint learning model, built by extending the Latent Structural SVM, that can train not only on data tagged for both tasks simultaneously but also on data tagged for only one task. Experimental results show that while an existing joint model for Korean word spacing and part-of-speech (POS) tagging achieved a POS tagging accuracy of 96.99%, the proposed joint learning model, additionally trained on a large corpus of Korean word-spacing data, improved POS tagging accuracy to 97.20%.
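
A plausible reading of how single-task data enters training (my inference from the abstract, not the paper's published equations): for a sentence annotated only with word spacing, the unobserved POS sequence plays the role of the latent variable, so the example still contributes a Latent Structural SVM loss term:

```latex
% Sketch under the stated assumption: x_j has gold spacing s_j but no
% gold POS tags, so the POS sequence t is treated as latent.
\sum_{j} \Big[
    \max_{(\hat{s},\hat{t})} \big( w^{\top}\Phi(x_j,\hat{s},\hat{t}) + \Delta(s_j,\hat{s}) \big)
    - \max_{t} \, w^{\top}\Phi(x_j, s_j, t)
  \Big]
```

Fully tagged examples would use the ordinary structural SVM loss over both label sequences, and the two sums can be minimized jointly over one weight vector w.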

Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.2, pp.483-503, 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activity and construct a motion history image (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities, and then extract different action descriptors from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, average values in hierarchical blocks, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate the descriptors with KNN, SVM with linear and RBF kernels, SRC, and CRC models on the DHA dataset, a well-known benchmark for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform the state-of-the-art methods. In addition, we investigate the descriptors further by combining them on the DHA dataset, and observe that the combined descriptors perform much better than any single descriptor alone. With these multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter out the stationary parts of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and the neighboring gradient information and pyramid layers are very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, value ranges, and dimensions of the different features; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
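
As background, the MHI encoding mentioned above follows a well-known update rule (Bobick & Davis): pixels where motion is detected are set to a maximum duration tau, and all other pixels decay toward zero, so the image records both where and how recently motion occurred. Below is a minimal NumPy sketch of that rule; the differencing threshold and tau values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def motion_history_image(frames, tau=30, diff_thresh=25):
    """Compute a motion history image (MHI) over grayscale frames.

    Minimal sketch of the Bobick & Davis update rule, not this paper's
    exact pipeline; `tau` and `diff_thresh` are illustrative assumptions.
    frames: iterable of 2D uint8 arrays of identical shape.
    """
    frames = [f.astype(np.int16) for f in frames]
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        # Binary motion mask from simple frame differencing.
        moving = np.abs(curr - prev) > diff_thresh
        # Where motion occurs, reset history to tau; elsewhere decay by 1.
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    # Normalize to [0, 1]: recent motion is bright, older motion darker.
    return mhi / tau

# Usage with synthetic data: a bright block moving across the frame.
if __name__ == "__main__":
    seq = []
    for t in range(10):
        frame = np.zeros((64, 64), dtype=np.uint8)
        frame[20:30, 5 + 4 * t : 15 + 4 * t] = 255
        seq.append(frame)
    print(motion_history_image(seq, tau=10).max())  # 1.0 at newest motion
```

Descriptors such as GIST or pyramid HOG would then be computed on the resulting MHI rather than on raw frames, which is what lets the encoding suppress the stationary parts of the body.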