• Title/Summary/Keyword: Semi-CRF


Semi-CRF or Linear-chain CRF? A Comparative Study of Joint Models for Korean Morphological Analysis and POS Tagging

  • Na, Seung-Hoon; Kim, Chang-Hyun; Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2013.10a / pp.9-12 / 2013
  • In this paper, we present initial comparative experiments on Semi-CRF and linear-chain CRF as joint models for Korean morpheme segmentation and POS tagging. The linear-chain approach builds the joint model by fusing segmentation information and POS tags into composite output labels (a sketch of this encoding follows below), whereas the Semi-CRF represents segmentation and tagging information together in the output structure, so that decoding performs segmentation and tagging simultaneously. In a comparison on the Sejong POS-tagged corpus, the linear-chain method outperformed the Semi-CRF method.

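The composite-label trick the linear-chain approach uses is easy to make concrete. Below is a minimal sketch, not the authors' code: it assumes a BIO-style scheme over syllables and Sejong-style POS tags (NNG, JKB); the function names and the example morphemes are illustrative.

```python
# Encode (morpheme, POS) segments as per-syllable composite labels so that a
# single linear-chain tagging pass yields both segmentation and POS tags.

def encode_composite_labels(morphemes):
    """morphemes: list of (surface, pos) pairs, e.g. [("학교", "NNG"), ("에", "JKB")].
    Returns per-syllable labels like ["B-NNG", "I-NNG", "B-JKB"]."""
    labels = []
    for surface, pos in morphemes:
        for i, _ in enumerate(surface):
            prefix = "B" if i == 0 else "I"
            labels.append(f"{prefix}-{pos}")
    return labels

def decode_composite_labels(syllables, labels):
    """Inverse mapping: recover (morpheme, pos) segments from syllable labels."""
    segments, current, current_pos = [], "", None
    for ch, label in zip(syllables, labels):
        prefix, pos = label.split("-", 1)
        if prefix == "B":
            if current:
                segments.append((current, current_pos))
            current, current_pos = ch, pos
        else:
            current += ch
    if current:
        segments.append((current, current_pos))
    return segments

if __name__ == "__main__":
    morphs = [("학교", "NNG"), ("에", "JKB")]
    labels = encode_composite_labels(morphs)
    print(labels)                                   # ['B-NNG', 'I-NNG', 'B-JKB']
    print(decode_composite_labels("학교에", labels))  # [('학교', 'NNG'), ('에', 'JKB')]
```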

Sign Language Spotting Based on Semi-Markov Conditional Random Field

  • Cho, Seong-Sik; Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.1034-1037 / 2009
  • Sign language spotting is the task of detecting the start and end points of signs in continuous data and recognizing the detected signs against a predefined vocabulary. The difficulty is that instances of signs vary in both motion and shape; moreover, signs vary in both trajectory and length. Variable sign lengths in particular cause problems when spotting signs in a video sequence, because short signs carry less information and fewer changes than long signs. In this paper, we propose a method for spotting variable-length signs based on a semi-CRF (semi-Markov Conditional Random Field). We performed experiments on ASL (American Sign Language) and KSL (Korean Sign Language) datasets of continuous sign sentences to demonstrate the efficiency of the proposed method. Experimental results show that the proposed method outperforms both HMM and CRF. (A decoding sketch follows below.)
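The advantage claimed for the semi-CRF comes from decoding over whole variable-length segments rather than single frames, so a short sign and a long sign are each scored as one unit. Below is a minimal, library-free sketch of semi-Markov Viterbi decoding under assumed generic scoring functions; it is not the paper's implementation, and the toy scores are invented for illustration.

```python
def semi_markov_viterbi(n, labels, max_len, seg_score, trans_score):
    """Best segmentation of positions [0, n) into labeled variable-length segments.

    seg_score(start, end, label): log-score for labeling frames [start, end).
    trans_score(prev_label, label): log-score for the label transition
    (prev_label is None for the first segment).
    Returns a list of (start, end, label) segments."""
    NEG = float("-inf")
    # dp[(pos, lab)]: best score of a segmentation of [0, pos) ending in label lab
    dp = {(0, None): 0.0}
    back = {}
    for end in range(1, n + 1):
        for lab in labels:
            best, arg = NEG, None
            for start in range(max(0, end - max_len), end):
                for prev in ((None,) if start == 0 else labels):
                    base = dp.get((start, prev), NEG)
                    if base == NEG:
                        continue
                    s = base + seg_score(start, end, lab) + trans_score(prev, lab)
                    if s > best:
                        best, arg = s, (start, prev)
            if arg is not None:
                dp[(end, lab)] = best
                back[(end, lab)] = arg
    # Trace back from the best final label.
    lab = max(labels, key=lambda l: dp.get((n, l), NEG))
    segs, pos = [], n
    while pos > 0:
        start, prev = back[(pos, lab)]
        segs.append((start, pos, lab))
        pos, lab = start, prev
    return list(reversed(segs))

if __name__ == "__main__":
    seq = "aaabb"  # toy 'video': two runs that should become two segments
    def seg(s, e, lab):  # reward homogeneous segments that match their label
        return float(e - s) if all(c.upper() == lab for c in seq[s:e]) else -10.0
    def trans(prev, lab):  # small per-segment cost discourages over-segmentation
        return -0.1
    print(semi_markov_viterbi(len(seq), ["A", "B"], 5, seg, trans))
    # -> [(0, 3, 'A'), (3, 5, 'B')]
```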

Semi-automatic Construction of Training Data using Active Learning

  • Lee, Chang-Ki; Hur, Jeong; Wang, Ji-Hyun; Lee, Chung-Hee; Oh, Hyo-Jung; Jang, Myung-Gil; Lee, Young-Jik
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1252-1255 / 2006
  • This paper describes a system and method for semi-automatically constructing the training data required by statistical approaches to information retrieval, information extraction, machine translation, and other natural language processing tasks. Active learning is used to reduce the amount of training data that must be built by hand, and, to use the recently prominent Conditional Random Fields (CRF) within active learning, a CRF-based confidence measure is defined. (A sketch of such a loop follows below.)

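The loop the abstract describes can be sketched generically, as below. The confidence measure shown (length-normalized probability of the model's best path) is a common choice and only an assumption here, since the paper defines its own CRF-based measure; `train_crf`, `viterbi_with_prob`, and `human_annotate` are hypothetical stand-ins for a CRF toolkit and an annotation interface.

```python
def active_learning_round(labeled, pool, train_crf, viterbi_with_prob,
                          human_annotate, batch_size=50):
    """One round of uncertainty sampling for semi-automatic corpus construction."""
    model = train_crf(labeled)

    def confidence(sentence):
        tags, prob = viterbi_with_prob(model, sentence)  # best path and its probability
        return prob ** (1.0 / max(len(sentence), 1))     # per-token geometric mean

    # Hand-annotate only the batch the model is least sure about.
    ranked = sorted(pool, key=confidence)
    to_annotate, rest = ranked[:batch_size], ranked[batch_size:]
    newly_labeled = [(s, human_annotate(s)) for s in to_annotate]
    return labeled + newly_labeled, rest

if __name__ == "__main__":
    # Dummy stand-ins so the loop runs end-to-end; replace with a real CRF toolkit.
    train = lambda labeled: None
    fake_viterbi = lambda model, s: (["O"] * len(s), 0.5 ** len(s))
    oracle = lambda s: ["O"] * len(s)
    labeled, pool = [(["a"], ["O"])], [["b", "c"], ["d"]]
    labeled, pool = active_learning_round(labeled, pool, train, fake_viterbi,
                                          oracle, batch_size=1)
    print(len(labeled), len(pool))  # 2 1
```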

CDISC Transformer: a metadata-based transformation tool for clinical trial and research data into CDISC standards

  • Park, Yu-Rang; Kim, Hye-Hyeon; Seo, Hwa-Jeong; Kim, Ju-Han
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.10 / pp.1830-1840 / 2011
  • CDISC (Clinical Data Interchange Standards Consortium) standards support the acquisition, exchange, submission, and archival of clinical trial and research data. The SDTM (Study Data Tabulation Model) for Case Report Forms (CRFs) has been recommended for U.S. Food and Drug Administration (FDA) regulatory submissions since 2004. Although the SDTM Implementation Guide provides a standardized, predefined collection of submission-metadata 'domains' with extensive variable collections, transforming CRFs into SDTM files for FDA submission remains a hard and time-consuming task. To address this issue, we developed metadata-based SDTM mapping rules and, using these rules, a semi-automatic tool named CDISC Transformer for transforming clinical trial data into CDISC-compliant data. The performance of CDISC Transformer with and without MDR support was evaluated using the CDISC blank CRF as the gold standard. Both MDR- and user-inquiry-supported transformation substantially improved the accuracy of our transformation rules. CDISC Transformer will greatly reduce workloads and enhance standardized data entry and integration for clinical trials and research across healthcare domains. (A sketch of the metadata-driven mapping idea follows below.)
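The metadata-driven idea behind such a transformer can be sketched as rules that pair CRF item names with SDTM (domain, variable) targets plus optional value conversions, with unmapped fields routed to user inquiry. The rules and field names below are illustrative assumptions, not the actual CDISC Transformer rule set; only the SDTM variable names (USUBJID, SEX, BRTHDTC, VSORRES) are standard.

```python
from collections import defaultdict

# Rule: CRF item name -> (SDTM domain, SDTM variable, value transform)
MAPPING_RULES = {
    "subject_id":  ("DM", "USUBJID", str),
    "sex":         ("DM", "SEX",     lambda v: {"male": "M", "female": "F"}[v.lower()]),
    "birth_date":  ("DM", "BRTHDTC", lambda v: v.replace("/", "-")),  # to ISO 8601
    "systolic_bp": ("VS", "VSORRES", str),
}

def transform_crf_record(record):
    """Apply the mapping rules to one CRF record; returns {domain: {variable: value}}.
    Unmapped fields are collected for manual review, mirroring the 'user inquiry'
    step the paper describes."""
    domains, unmapped = defaultdict(dict), []
    for field, value in record.items():
        rule = MAPPING_RULES.get(field)
        if rule is None:
            unmapped.append(field)
            continue
        domain, variable, convert = rule
        domains[domain][variable] = convert(value)
    return dict(domains), unmapped

if __name__ == "__main__":
    crf = {"subject_id": "S-001", "sex": "Female", "birth_date": "1980/02/17",
           "visit_notes": "n/a"}
    sdtm, review = transform_crf_record(crf)
    print(sdtm)    # {'DM': {'USUBJID': 'S-001', 'SEX': 'F', 'BRTHDTC': '1980-02-17'}}
    print(review)  # ['visit_notes'] -> flagged for user inquiry
```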

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Artificial intelligence technology has been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems, such as learning and problem solving, related to human intelligence, and thanks to recent interest and work on a wide range of algorithms the field has advanced further than ever. Knowledge-based systems are a sub-domain of artificial intelligence that aim to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is now often combined with statistical AI such as machine learning. Recently, knowledge bases have also been used to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing across AI applications such as the question answering systems of smart speakers. Building a useful knowledge base, however, is time-consuming and still demands a great deal of expert effort. Much recent knowledge-based AI research relies on DBpedia, one of the largest knowledge bases, which extracts structured content from Wikipedia. DBpedia contains information drawn from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's key facts. This knowledge is produced by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework; because it is generated from semi-structured, user-created infobox data, it can be expected to be highly accurate. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia is limited in knowledge scalability. This paper proposes a machine-learning method for extracting knowledge from text documents according to an ontology schema. To demonstrate its appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. The model consists of three steps: classifying a document into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infobox structures are defined by infobox templates, which provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into an infobox category, that is, an ontology class; we then classify its sentences according to the attributes belonging to that class; finally, we extract knowledge from the sentences classified as suitable and convert it into triples.

To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. We also ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained from text documents according to the ontology schema, and the methodology can significantly reduce the effort experts spend constructing instances against that schema. (A triple-conversion sketch follows below.)
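The last step of the proposed pipeline, turning BIO-tagged sentences into RDF triples, can be sketched as follows. The tag names, the dbo:/dbr: prefixes, and the example sentence are illustrative assumptions; the paper's actual tag set spans about 200 classes and 2,500 relations.

```python
def bio_spans(tokens, tags):
    """Collect (relation, surface_value) spans from BIO tags like 'B-birthPlace'."""
    spans, current_rel, current_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_rel:
                spans.append((current_rel, " ".join(current_toks)))
            current_rel, current_toks = tag[2:], [tok]
        elif tag.startswith("I-") and current_rel == tag[2:]:
            current_toks.append(tok)
        else:  # 'O' or an inconsistent I- tag closes the open span
            if current_rel:
                spans.append((current_rel, " ".join(current_toks)))
            current_rel, current_toks = None, []
    if current_rel:
        spans.append((current_rel, " ".join(current_toks)))
    return spans

def to_triples(subject_uri, tokens, tags):
    """Convert extracted spans into (subject, predicate, object) triples."""
    return [(subject_uri, f"dbo:{rel}", value) for rel, value in bio_spans(tokens, tags)]

if __name__ == "__main__":
    tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
    tags   = ["O", "O", "O", "O", "O", "B-birthPlace", "O"]
    print(to_triples("dbr:Yi_Sun-sin", tokens, tags))
    # [('dbr:Yi_Sun-sin', 'dbo:birthPlace', 'Hanseong')]
```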