• Title/Abstract/Keywords: multi-task learning

Search results: 132 items (processing time: 0.027 s)

A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Wu, Shengbin;Wang, Yibai
    • Journal of Information Processing Systems, Vol. 17, No. 3, pp. 441-452, 2021
  • Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, the goal of multi-task learning is to simultaneously build models for multiple related tasks and to obtain their underlying shared structure. However, traditional federated multi-task learning models not only place strict requirements on the data distribution, but also demand large amounts of computation and converge slowly, which has hindered their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust task-similarity learning according to the distribution of the federated nodes' data. The proposed model has a general framework for finding optimal solutions and can handle various data types. Experiments show that our model achieves the best results on different datasets; notably, it still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a local optimal solution within a limited number of training iterations.
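The federated training loop described above can be illustrated with a minimal FedAvg-style sketch. This is a generic illustration with invented one-dimensional data, not the paper's adaptive rank-constrained method:

```python
def local_step(w, data, lr=0.1):
    """One gradient step of least-squares y = w*x on a node's local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(local_weights):
    """Server-side aggregation: plain average of the node weights."""
    return sum(local_weights) / len(local_weights)

# Two nodes holding differently distributed samples of the same relation.
node_a = [(1.0, 2.0), (2.0, 4.0)]   # consistent with w = 2
node_b = [(1.0, 2.2), (3.0, 6.1)]   # noisy observations of w close to 2

w_global = 0.0
for _ in range(50):                  # communication rounds
    w_a = local_step(w_global, node_a)
    w_b = local_step(w_global, node_b)
    w_global = fed_avg([w_a, w_b])
```

Each round performs one local step per node before the server averages the weights; the paper's contribution replaces this plain averaging with a rank constraint that adapts to inter-task similarity.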

A study on the accuracy of multi-task learning structure artificial neural network applicable to multi-quality prediction in injection molding process

  • 이준한;김종선
    • Design & Manufacturing, Vol. 16, No. 3, pp. 1-8, 2022
  • In this study, an artificial neural network (ANN) was constructed to establish the relationship between process condition parameters and the qualities of the injection-molded product in the injection molding process. Six process parameters were set as input parameters for the ANN: melt temperature, mold temperature, injection speed, packing pressure, packing time, and cooling time. The mass, nominal diameter, and height of the injection-molded product were set as output parameters. Two learning structures were applied to the ANN: a single-task learning structure, in which all output parameters are learned in correlation with each other, and a multi-task learning structure, in which each output parameter is learned individually according to its characteristics. After constructing an ANN with each learning structure and evaluating the prediction performance, it was confirmed that the predictions of the ANN with the multi-task learning structure had a lower RMSE than those of the single-task learning structure. In addition, when comparing the quality specifications of the injection-molded products with the ANN's predictions, it was confirmed that the ANN with the multi-task learning structure satisfied the quality specifications for all of mass, diameter, and height.
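The RMSE comparison used to evaluate the two structures can be sketched as follows; the quality values below are invented for illustration, not the study's measurements:

```python
import math

def rmse(predictions, targets):
    """Root-mean-square error between predicted and measured quality values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets))
                     / len(targets))

# Measured mass (g) of five molded parts and two hypothetical prediction sets.
measured    = [10.02, 10.05, 9.98, 10.01, 10.04]
single_task = [10.10, 9.95, 10.08, 9.93, 10.12]   # larger errors
multi_task  = [10.03, 10.04, 9.99, 10.00, 10.05]  # smaller errors

assert rmse(multi_task, measured) < rmse(single_task, measured)
```

A lower RMSE on each output (mass, diameter, height) is what the study reports for the multi-task structure.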

Exploring the Relationship between Shared Space and Performance in Multi-Task Learning

  • 성수진;박성재;정인규;차정원
    • Proceedings of the 30th Annual Conference on Human and Language Technology (Hangul and Korean Information Processing), pp. 305-309, 2018
  • In deep learning, multi-task learning, which shares layers to exploit task-invariant information, has been used successfully on a variety of natural language processing problems. To the best of our knowledge, however, no study has investigated the relationship between the state of the shared space and performance. In this work, we explore that relationship by examining how the number of features in the shared and task-dependent spaces, and their degree of contamination, affect performance. These results clarify the role of the shared space in multi-task experiments and should help improve system performance.

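Hard parameter sharing, the layer-sharing setup the abstract refers to, can be sketched as a shared layer feeding two task-specific heads. The weights and input here are arbitrary illustrative values:

```python
def linear(weights, x):
    """Apply a weight matrix (list of rows) to an input vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Shared layer: both tasks read from the same 2-d shared space.
W_shared = [[0.5, 0.5], [1.0, -1.0]]
# Task-specific heads: each task keeps its own private parameters.
W_task_a = [[1.0, 0.0]]
W_task_b = [[0.0, 1.0]]

x = [2.0, 4.0]
h = linear(W_shared, x)      # shared representation used by both tasks
out_a = linear(W_task_a, h)  # task A's output from the shared space
out_b = linear(W_task_b, h)  # task B's output from the shared space
```

The paper's question is how the capacity of `h` (the shared space) and the leakage of task-specific signal into it affect each task's performance.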

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 11, pp. 5546-5559, 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, showing that our multi-task, multi-label model achieves good performance on both tasks.
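A common way to train such a network is to sum a multi-label binary cross-entropy for AU detection with a single-label cross-entropy for view detection. This is a sketch of that standard combined objective, with invented probabilities, not necessarily the paper's exact loss:

```python
import math

def bce(probs, labels):
    """Binary cross-entropy summed over independent AU labels (multi-label)."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels))

def ce(probs, label):
    """Cross-entropy for the single facial-view label (multi-class)."""
    return -math.log(probs[label])

au_probs  = [0.9, 0.2, 0.7]   # predicted probabilities for three AUs
au_labels = [1, 0, 1]         # several AUs can be active at once
view_probs = [0.1, 0.8, 0.1]  # softmax over candidate views
view_label = 1                # exactly one view is correct

total_loss = bce(au_probs, au_labels) + ce(view_probs, view_label)
```

The two terms make the multi-label and single-label natures of the tasks explicit: each AU gets an independent sigmoid, while the views compete in one softmax.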

Applying Coarse-to-Fine Curriculum Learning Mechanism to the multi-label classification task

  • 공희산;박재훈;김광수
    • Proceedings of the 66th Summer Conference of the Korean Society of Computer Information, Vol. 30, No. 2, pp. 29-30, 2022
  • Curriculum learning improves deep learning performance by training a model with a kind of 'curriculum', analogous to how humans learn. Most research has focused on training models progressively based on the difficulty of individual training samples. The coarse-to-fine mechanism, however, argues that the similarity of the classes used in training matters more than sample difficulty, and proposes learning a sequence of auxiliary tasks of varying difficulty. Because this method builds auxiliary tasks by judging class similarity from a confusion matrix, it is difficult to apply to multi-label classification. In this paper, we therefore propose a way to generate multi-class and binary tasks in a multi-label setting, enabling the coarse-to-fine mechanism to be applied, and analyze the results.

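One way to realize the proposed task generation is to derive per-label binary tasks and a coarse grouped task from multi-label data. This sketch uses a hypothetical label set and grouping, not the paper's data:

```python
# Hypothetical fine labels and a coarse grouping of similar labels.
LABELS = ["cat", "dog", "car", "truck"]
GROUPS = {"cat": "animal", "dog": "animal", "car": "vehicle", "truck": "vehicle"}

# Multi-label samples: (sample id, set of fine labels).
samples = [("img1", {"cat", "dog"}), ("img2", {"car"}), ("img3", {"dog", "truck"})]

def binary_tasks(samples, labels):
    """One binary auxiliary task per fine label: is this label present?"""
    return {lab: [(x, int(lab in ys)) for x, ys in samples] for lab in labels}

def coarse_labels(samples, groups):
    """Coarse auxiliary task: replace each fine label set by its group set."""
    return [(x, {groups[y] for y in ys}) for x, ys in samples]

fine_to_binary = binary_tasks(samples, LABELS)
coarse = coarse_labels(samples, GROUPS)
```

Training could then proceed from the coarse grouped task to the per-label binary tasks and finally the full fine-grained multi-label task.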

Automatic assessment of post-earthquake buildings based on multi-task deep learning with auxiliary tasks

  • Zhihang Li;Huamei Zhu;Mengqi Huang;Pengxuan Ji;Hongyu Huang;Qianbing Zhang
    • Smart Structures and Systems, Vol. 31, No. 4, pp. 383-392, 2023
  • Post-earthquake building condition assessment is crucial for subsequent rescue and remediation and can be automated by emerging computer vision and deep learning technologies. This study is based on an entry in the 2nd International Competition of Structural Health Monitoring (IC-SHM 2021). The task package includes five image segmentation objectives: three defect types (crack, spall, and rebar exposure), structural component, and damage state. The structural component and damage state tasks are identified as the priorities that can inform actionable decisions. A multi-task Convolutional Neural Network (CNN) is proposed to conduct these two major tasks simultaneously, with the three defect sub-tasks incorporated as auxiliary tasks. By synchronously learning the defect information, the multi-task CNN model outperforms the counterpart single-task models in recognizing structural components and estimating damage states. In particular, pixel-level damage state estimation shows an mIoU (mean intersection over union) improvement from 0.5855 to 0.6374. Among the defect detection tasks, rebar exposure is omitted due to its extremely biased sample distribution. The segmentation of crack and spall is automated by single-task U-Nets, with extra effort to resample the provided data. The segmentation of these small objects benefits from the resampling method, with a substantial IoU increment of nearly 10%.
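The mIoU metric cited above is computed per class as the intersection over union of predicted and ground-truth pixels, then averaged over classes. A minimal sketch on a toy label map:

```python
def miou(pred, truth, classes):
    """Mean intersection-over-union over the given class ids."""
    ious = []
    for c in classes:
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Flattened pixel labels for a tiny image: 0 = background, 1 = damage.
truth = [0, 0, 1, 1, 1, 0]
pred  = [0, 1, 1, 1, 0, 0]
```

Here `miou(pred, truth, [0, 1])` evaluates to 0.5; the paper reports this metric rising from 0.5855 to 0.6374 for damage state estimation under multi-task training.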

Korean Dependency Parsing using Pointer Networks

  • 박천음;이창기
    • Journal of KIISE, Vol. 44, No. 8, pp. 822-831, 2017
  • In this paper, we propose a Korean dependency parsing model that uses pointer networks based on multi-task learning. Multi-task learning improves performance by training two or more problems simultaneously; our model uses a pointer network based on this approach to obtain the dependency relations between eojeols (word units) and the dependency labels at the same time. To run the morpheme-based multi-task pointer network on eojeol-based dependency parsing, we define five input representations and apply fine-tuning to improve performance. Experiments show that the proposed model outperforms previous Korean dependency parsing studies, with a UAS of 91.79% and an LAS of 89.48%.
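The pointing step at the core of such a parser can be sketched as attention over encoder states, with the argmax position chosen as the head. The vectors below are toy values, not the paper's trained representations:

```python
import math

def softmax(scores):
    """Normalize attention scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def point(query, encoder_states):
    """Dot-product attention over input positions; the argmax is the pointer."""
    scores = [sum(q * h for q, h in zip(query, state)) for state in encoder_states]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs

# Encoder states for three eojeols and a decoder query vector (toy values).
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [0.5, 1.0]

head_index, attn = point(query, states)  # selects the third eojeol as head
```

In the multi-task setup, a second output layer would predict the dependency label for the selected head at the same step.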

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal, Vol. 46, No. 1, pp. 82-95, 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because earlier methods were limited in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, our DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in its holistic score. Compared with single-task learning (STL) models, ours demonstrated a 3.6% enhancement in average multi-trait performance on the ASAP++ dataset.

Multi-task learning with contextual hierarchical attention for Korean coreference resolution

  • Cheoneum Park
    • ETRI Journal, Vol. 45, No. 1, pp. 93-104, 2023
  • Coreference resolution is a discourse analysis task that links the headwords referring to the same entity in a document. We propose pointer-network-based coreference resolution for Korean using multi-task learning (MTL) with an attention mechanism over a hierarchical structure. As Korean is a head-final language, the head can easily be found. Our model learns the distribution by referring to the same entity position and uses a pointer network to conduct coreference resolution depending on the input headword. Because the input is a document, the input sequence is very long; the core idea is therefore to learn the word- and sentence-level distributions in parallel with MTL, using a shared representation to address the long-sequence problem. Word representations for Korean are generated from contextual information using pre-trained Korean language models. Under the same experimental conditions, our model performed roughly 1.8% better in CoNLL F1 than previous research without a hierarchical structure.

Korean morphological analysis and phrase structure parsing using multi-task sequence-to-sequence learning

  • 황현선;이창기
    • Proceedings of the 29th Annual Conference on Human and Language Technology (Hangul and Korean Information Processing), pp. 103-107, 2017
  • Korean morphological analysis and phrase structure parsing are among the more difficult tasks in Korean natural language processing. Recently, end-to-end approaches have reformulated them as output sequence generation problems and solved them with sequence-to-sequence models. When both tasks are cast as output sequence generation, their outputs can be merged into a single sequence. In this paper, we propose a model that performs Korean morphological analysis and phrase structure parsing simultaneously using a sequence-to-sequence model. Experiments confirm that when the two tasks are processed together, morphological analysis influences phrase structure parsing and vice versa, so the two tasks can benefit from each other.

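Merging the two tasks' outputs into a single decoder target, as the abstract describes, can be sketched with a separator token. The morpheme and parse sequences below are a hypothetical example, not the paper's actual tag scheme:

```python
# Hypothetical joint target for a short Korean sentence:
morph_seq = ["나/NP", "는/JX", "학교/NNG", "에/JKB", "가/VV", "았/EP", "다/EF"]
parse_seq = ["(S", "(NP", ")", "(VP", ")", ")"]

SEP = "<sep>"

def join_targets(morph, parse):
    """Concatenate the two target sequences into one decoder output."""
    return morph + [SEP] + parse

def split_targets(joint):
    """Recover the two task outputs from the joint sequence."""
    i = joint.index(SEP)
    return joint[:i], joint[i + 1:]

joint = join_targets(morph_seq, parse_seq)
assert split_targets(joint) == (morph_seq, parse_seq)
```

A single sequence-to-sequence decoder trained on such joint targets receives a gradient signal from both tasks, which is the mutual-influence effect the paper reports.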