• Title/Summary/Keyword: overlay text

Search results: 6

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee; Park, Hansung; Ahn, Jungil; On, Youngsang; Jo, Kanghyun
    • Journal of Broadcast Engineering / v.18 no.4 / pp.559-571 / 2013
  • This paper presents the problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the remaining video with holes, and each part is processed separately. The paper focuses on detecting and extracting the overlay text graphic region, which is the first step of the proposed scenario. To decide whether a frame contains overlay text, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid method that combines the color and motion information of the region. Experiments show the detection and extraction results on video sequences of several genres.
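
A minimal sketch of the corner-density decision step, assuming OpenCV and NumPy; the window sizes and thresholds below are illustrative assumptions, not the parameters used in the paper.

```python
import cv2
import numpy as np

def corner_density_map(frame_bgr, block=2, ksize=3, k=0.04,
                       win=32, corner_thresh=0.01):
    """Per-pixel corner-density map built from the Harris corner response."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    response = cv2.cornerHarris(gray, block, ksize, k)
    # Binary corner mask: keep responses above a fraction of the maximum.
    corners = (response > corner_thresh * response.max()).astype(np.float32)
    # Local density: fraction of corner pixels inside each win x win window.
    return cv2.boxFilter(corners, ddepth=-1, ksize=(win, win))

def has_overlay_text(frame_bgr, density_thresh=0.05):
    """Illustrative decision rule: a frame is flagged as containing overlay
    text if its peak local corner density exceeds a threshold."""
    return float(corner_density_map(frame_bgr).max()) > density_thresh
```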

Comparison of Text Beginning Frame Detection Methods in News Video Sequences (뉴스 비디오 시퀀스에서 텍스트 시작 프레임 검출 방법의 비교)

  • Lee, Sanghee; Ahn, Jungil; Jo, Kanghyun
    • Journal of Broadcast Engineering / v.21 no.3 / pp.307-318 / 2016
  • Overlay text in video frames provides information that supplements the audio and visual content. In news video in particular, this text gives a concise and direct description of the video content, so it is the most reliable clue for building a news video indexing system. Detecting and recognizing the text is therefore essential for constructing an indexing system for television news programs. This paper proposes identifying the overlay text beginning frame, which helps to detect and recognize overlay text in news video. Because not every frame of a video sequence contains overlay text, extracting overlay text from every frame is unnecessary and time-consuming. By focusing only on the frames that contain overlay text, the accuracy of overlay text detection can be improved. Comparative experiments on text beginning frame detection methods are conducted on news videos, and an appropriate processing method is proposed.
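
A minimal sketch of locating text beginning frames, assuming a per-frame text-likeness score is already available (for example, a corner-density measure as in the previous entry); the paper compares several detection methods rather than prescribing this particular rule.

```python
def find_text_beginning_frames(scores, on_thresh=0.05):
    """Return frame indices where the text-likeness score first crosses the
    threshold upward, i.e., candidate overlay-text beginning frames."""
    begins = []
    prev_on = False
    for i, score in enumerate(scores):
        now_on = score > on_thresh
        if now_on and not prev_on:
            begins.append(i)
        prev_on = now_on
    return begins
```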

Automatic Name Line Detection for Person Indexing Based on Overlay Text

  • Lee, Sanghee; Ahn, Jungil; Jo, Kanghyun
    • Journal of Multimedia Information System / v.2 no.1 / pp.163-170 / 2015
  • Many overlay texts are artificially superimposed on broadcast videos by editors. These texts provide additional information about the audiovisual content. In particular, the overlay text in news videos contains a concise and direct description of the content, so it is the most reliable clue for constructing a news video indexing system. To enable automatic person indexing of interview video in TV news programs, this paper proposes a method that detects only the name text line among all overlay texts in a frame. Experimental results on Korean television news videos show that the proposed framework efficiently detects the overlaid name text line.
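
An illustrative filter for picking a name-caption line from a set of detected overlay text boxes; the geometric rules and thresholds below are assumptions for illustration and are not the criteria defined in the paper.

```python
def select_name_line(text_boxes, frame_h, frame_w):
    """text_boxes: list of (x, y, w, h) overlay text line boxes in one frame.
    Returns the box most likely to be the interviewee name line, or None."""
    candidates = []
    for (x, y, w, h) in text_boxes:
        in_lower_third = y > 0.6 * frame_h   # name captions sit low in the frame
        short_line = w < 0.5 * frame_w       # names are short lines
        thin_line = h < 0.08 * frame_h       # a single row of text
        if in_lower_third and short_line and thin_line:
            candidates.append((x, y, w, h))
    # If several lines qualify, prefer the lowest one.
    return max(candidates, key=lambda box: box[1]) if candidates else None
```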

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun; Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.544-553 / 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers better visual understanding. Since the content of the scene or the editor's intention can be well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information, but they have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to localize overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated. Candidate regions are then extracted using the transition map, and overlay text is finally determined based on the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of overlay text, and it is language-independent. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
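
A simplified sketch of a transition map and candidate extraction, assuming OpenCV and NumPy; the published method uses a more elaborate, saturation-weighted measure of intensity change, so the plain thresholded horizontal difference here is an assumption for illustration.

```python
import cv2
import numpy as np

def transition_map(frame_bgr, change_thresh=40):
    """Mark pixels whose horizontal intensity change exceeds a threshold."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    dx = np.abs(np.diff(gray, axis=1))          # change between neighbors
    tmap = np.zeros(gray.shape, dtype=np.uint8)
    tmap[:, 1:] = (dx > change_thresh).astype(np.uint8)
    return tmap

def candidate_text_regions(tmap, min_area=200):
    """Group transition pixels into connected components as text candidates."""
    num, _, stats, _ = cv2.connectedComponentsWithStats(tmap, connectivity=8)
    boxes = []
    for i in range(1, num):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```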

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young; Yang, Won-Keun; Cho, Ju-Hee; Lim, Ye-Eun; Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11C / pp.929-939 / 2010
  • The growth of multimedia technology calls for video detection methods for large-database management and illegal-copy detection. To meet this demand, this paper proposes a fast video detection method that can be applied to a large database. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form a video signature from the extracted spatial and temporal features and carry out a stepwise matching method. Performance was evaluated in terms of accuracy, extraction and matching time, and signature size, using original videos and modified versions such as brightness change, lossy compression, and text/logo overlay. We show empirical parameter selection and experimental results for a simple matching method that uses only the spatial feature, and compare the results with existing algorithms. According to the experimental results, the proposed method performs well in accuracy, processing time, and signature size, so the proposed fast detection algorithm is suitable for video detection over a large database.
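
A minimal sketch of a gray-distribution spatial feature, a successive-frame temporal similarity signature, and a coarse-then-fine matching step, assuming OpenCV and NumPy; the feature layout, similarity measure, and thresholds are illustrative assumptions rather than the paper's exact design.

```python
import cv2
import numpy as np

def spatial_feature(frame_bgr, bins=16):
    """Normalized gray-level histogram of one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-9)

def temporal_signature(features):
    """Histogram-intersection similarity between successive spatial features."""
    feats = np.asarray(features)
    return np.minimum(feats[:-1], feats[1:]).sum(axis=1)

def stepwise_match(sig_query, sig_ref, coarse_tol=0.1):
    """Cheap coarse check on the signature mean first; the full per-frame
    distance is computed only for candidates that pass the coarse step."""
    if abs(sig_query.mean() - sig_ref.mean()) > coarse_tol:
        return False
    n = min(len(sig_query), len(sig_ref))
    return np.linalg.norm(sig_query[:n] - sig_ref[:n]) / max(n, 1) < coarse_tol
```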

The Development of Real-time Video Associated Data Service System for T-DMB (T-DMB 실시간 비디오 부가데이터 서비스 시스템 개발)

  • Kim, Sang-Hun; Kwak, Chun-Sub; Kim, Man-Sik
    • Journal of Broadcast Engineering / v.10 no.4 s.29 / pp.474-487 / 2005
  • T-DMB (Terrestrial Digital Multimedia Broadcasting) adopted the MPEG-4 BIFS (Binary Format for Scenes) Core2D scene description profile and graphics profile as the standard for video associated data services. Using BIFS, objects such as text, still images, circles, and polygons can be overlaid on the main display of the receiver according to properties designated on the broadcasting side, and clickable buttons and website links can be attached to desired objects. A variety of interactive data services can therefore be provided through BIFS. In this paper, we implement a real-time video associated data service system for T-DMB. The developed system places emphasis on real-time data service driven by user operation and on interworking and stability with our previously developed video encoder. The system consists of a BIFS Real-time System, an Automatic Stream Control System, and a Receiving Monitoring System. Its basic functions are designed to reflect T-DMB programs and the characteristics of the program production environment as a top priority. The developed system was used in a BIFS trial service via KBS T-DMB and is expected to be used in the T-DMB main service after improvements such as strengthening system stability.
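
An illustrative sketch, not actual BIFS, of the kind of overlay composition a BIFS-capable receiver performs: drawing a caption and a button-like region on the main display and associating a clickable hotspot with a website link. The coordinates, styling, and hotspot structure are assumptions for illustration; OpenCV is assumed only as a convenient drawing backend.

```python
import cv2

def compose_overlay(frame_bgr, caption, link_url):
    """Draw a caption and a clickable button region on one video frame and
    return the frame together with the hotspot metadata for the button."""
    h, w = frame_bgr.shape[:2]
    # Caption text near the bottom of the frame.
    cv2.putText(frame_bgr, caption, (20, h - 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    # Button-like rectangle in the lower-right corner.
    x, y, bw, bh = w - 180, h - 70, 160, 50
    cv2.rectangle(frame_bgr, (x, y), (x + bw, y + bh), (0, 200, 255), 2)
    # Hotspot metadata the receiver would associate with the button.
    hotspot = {"rect": (x, y, bw, bh), "url": link_url}
    return frame_bgr, hotspot
```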