• Title/Summary/Keyword: Multi-Focus


LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim;Ga-Bin Nam;Young-Seop Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.149-154
    • /
    • 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only fuses a set of multi-focus images into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.

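The SPP module mentioned in the abstract produces a fixed-length descriptor from feature maps of any spatial size. The sketch below is a minimal NumPy illustration of spatial pyramid pooling, not the authors' implementation; the pyramid levels `(1, 2, 4)` and the use of max-pooling are assumptions:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a 2-D feature map into a fixed-length vector.

    For each pyramid level n, the map is divided into an n x n grid
    and max-pooled per cell, so inputs of any size yield the same
    output length (the sum of n*n over all levels).
    Assumes the map is at least as large as the finest grid.
    """
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Cell boundaries cover the whole map even when h and w
        # are not divisible by n.
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max())
    return np.array(pooled)
```

With levels `(1, 2, 4)` the output always has 1 + 4 + 16 = 21 entries, which is what lets a network accept images of various scales ahead of a fixed-size fusion stage.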

Multi-focus 3D Display (다초점 3차원 영상 표시 장치)

  • Kim, Seong-Gyu;Kim, Dong-Uk;Gwon, Yong-Mu;Son, Jeong-Yeong
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2008.07a
    • /
    • pp.119-120
    • /
    • 2008
  • An HMD-type multi-focus 3D display system is developed, and its ability to satisfy eye accommodation is verified. Four LEDs (Light Emitting Diodes) and a DMD are used to generate four parallax images for a single eye, and the system contains no mechanical moving parts. Here, multi-focus means providing the monocular depth cue at multiple depth levels. By achieving this multi-focus function, we developed a 3D display system for a single eye that can satisfy accommodation to displayed virtual objects within a defined depth range. Focus adjustment was possible at five sequential depth steps within a 2 m range for one eye. Additionally, the degree of blurring as a function of focusing depth was examined using photographs and video recordings with several subjects. The HMD-type multi-focus 3D display can also be applied to monocular 3D displays and monocular AR 3D displays.


Multi-Focus Image Fusion Using Transformation Techniques: A Comparative Analysis

  • Ali Alferaidi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.4
    • /
    • pp.39-47
    • /
    • 2023
  • This study compares various transformation techniques for multi-focus image fusion. Multi-focus image fusion is a procedure of merging multiple images captured at different focus distances to produce a single composite image with improved sharpness and clarity. The purpose of this research is to compare popular frequency-domain approaches for multi-focus image fusion, such as Discrete Wavelet Transforms (DWT), Stationary Wavelet Transforms (SWT), the DCT-based Laplacian Pyramid (DCT-LP), the Discrete Cosine Harmonic Wavelet Transform (DC-HWT), and the Dual-Tree Complex Wavelet Transform (DT-CWT). The objective is to increase the understanding of these transformation techniques and how they can be utilized in conjunction with one another. The analysis evaluates the 10 most crucial parameters and highlights the unique features of each method. The results help determine which transformation technique is best for multi-focus image fusion applications. Based on the visual and statistical analysis, the DCT-LP is suggested as the most appropriate technique, and the results also provide valuable insights into choosing the right approach.

Head Fixed Type Multi-Focus Display System Using Galvano-Scanner and DMD(Digital Micro-Mirror Device) (갈바노 스캐너와 DMD(Digital Micro-mirror Device)를 이용한 두부 고정형 다초점 디스플레이 시스템)

  • Kim, Dong-Wook;Kwon, Yong-Moo;Kim, Sung-Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.10B
    • /
    • pp.1117-1123
    • /
    • 2009
  • A head-fixed multi-focus display system using a Galvano scanner and a DMD (Digital Micro-mirror Device), which supports full accommodation, can relieve the eye fatigue caused in stereoscopic displays by the conflict between convergence eye movement and accommodation. The system supports accommodation by forming a convergence point for each viewpoint and presenting it in front of the observer's pupil using a laser scanning method. In this paper, we analyze the laser scanning method of this multi-focus display system. A multi-focus display system based on this analysis was built, and focus adjustment was confirmed through a video camera. As a result, the viewpoint-formation principle of the multi-focus system based on laser scanning was verified.

FUSESHARP: A MULTI-IMAGE FOCUS FUSION METHOD USING DISCRETE WAVELET TRANSFORM AND UNSHARP MASKING

  • GARGI TRIVEDI;RAJESH SANGHAVI
    • Journal of applied mathematics & informatics
    • /
    • v.41 no.5
    • /
    • pp.1115-1128
    • /
    • 2023
  • In this paper, a novel hybrid method for multi-focus image fusion is proposed. The method combines the advantages of wavelet-transform-based methods and focus-measure-based methods to achieve an improved fusion result. The input images are first decomposed into different frequency sub-bands using the discrete wavelet transform (DWT). The focus measure of each sub-band is then calculated using the Laplacian of Gaussian (LoG) operator, and the sub-band with the highest focus measure is selected as the focused sub-band. The focused sub-band is sharpened using an unsharp masking filter to preserve the details in the focused part of the image. Finally, the sharpened focused sub-bands from all input images are fused using the maximum-intensity fusion method to preserve the important information from all source images. The proposed method has been evaluated on standard multi-focus image fusion datasets and has shown promising results compared to existing methods.
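The two building blocks named above, a LoG-style focus measure and unsharp masking, can be sketched in a few lines. This is an illustrative NumPy approximation, not the paper's implementation: a box blur stands in for the Gaussian, the Laplacian is a 4-neighbour stencil with wrap-around edges, and the window size and amount are assumed:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean blur with edge padding (stand-in for a Gaussian)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def log_focus_measure(img):
    """Energy of a smoothed Laplacian response; higher means sharper.
    (True LoG smooths first; smoothing after the Laplacian is an
    equivalent-in-spirit shortcut for this sketch.)"""
    lap = 4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0) \
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1)
    return float(np.sum(box_blur(lap) ** 2))

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the high-frequency residual."""
    return img + amount * (img - box_blur(img))
```

A defocused region scores lower on `log_focus_measure` than the same region in focus, which is what lets the method pick the focused sub-band before sharpening it.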

Focus Control for Multi-Focal Projection onto Nonplanar Surface (곡면 전초점 투사를 위한 멀티 프로젝터 초점제어)

  • Shim, Jae-Young;Park, Han-Hoon;Park, Jong-Il
Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.1081-1086
    • /
    • 2006
  • Because a projector generally has a limited depth of field, some regions go out of focus when the screen is a curved surface. Information in these out-of-focus regions is blurred and therefore cannot be delivered accurately to the user. When multiple projectors are used, each projector has a different in-focus region, so the influence of out-of-focus pixels can be removed by classifying each projector's pixels as in-focus or out-of-focus and projecting only the in-focus pixels. However, if the in-focus regions of the projectors largely coincide, the remaining out-of-focus regions stay out of focus. Therefore, a method is needed that flexibly adjusts the focus of each projector while making the combined in-focus region as large as possible. In this paper, we propose a method that adaptively adjusts the focus of each projector, processes the captured images to determine the in-focus regions, and combines the per-projector in-focus regions to maximize the total in-focus area. To verify the usefulness of the proposed method, we implemented a system that visualizes each projector's in-focus region with a distinct color and, using this information, adaptively adjusts each projector's focus to produce an all-in-focus projection.

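The per-pixel in-focus/out-of-focus decision described above can be sketched as follows. This is a hedged toy version, not the paper's method: local intensity variance of camera captures stands in for the real focus test, and the window size is an assumption:

```python
import numpy as np

def local_focus(img, k=3):
    """Local focus score: variance of intensities in a k x k window
    (edge-padded), computed per pixel."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    win = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(k) for dx in range(k)])
    return win.var(axis=0)

def assign_in_focus(captures):
    """Given camera captures of the same test pattern projected by each
    projector, return a map giving, per pixel, the index of the
    sharpest projector; only that projector's pixel would be shown."""
    scores = np.stack([local_focus(c) for c in captures])
    return scores.argmax(axis=0)
```

On a curved screen, the resulting index map partitions the surface among the projectors, which is exactly the combination step whose total in-focus area the paper then maximizes by re-adjusting each projector's focus.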

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2253-2272
    • /
    • 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or in a transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model has the advantages of efficiency and robustness, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance under both subjective visual perception and objective assessment metrics.
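The extract-fuse-reconstruct pipeline of a two-stream network can be mimicked without a deep learning framework. The sketch below is only a structural analogue under loud assumptions: a hand-crafted gradient feature replaces each learned stream, and a soft weighted blend replaces the learned fusion and deconvolution layers:

```python
import numpy as np

def grad_energy(img):
    """Hand-crafted stand-in for a learned feature stream:
    per-pixel gradient magnitude (sharp regions score high)."""
    gy = np.abs(np.diff(img, axis=0, append=img[-1:]))
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    return gx + gy

def two_stream_fuse(img_a, img_b, k=5):
    """Weight each source by its smoothed feature response and blend,
    playing the role of the fusion + reconstruction layers."""
    p = k // 2

    def smooth(x):
        padded = np.pad(x, p, mode="edge")
        out = np.zeros_like(x, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return out / (k * k)

    wa = smooth(grad_energy(img_a))
    wb = smooth(grad_energy(img_b))
    return (wa * img_a + wb * img_b) / (wa + wb + 1e-12)
```

The learned network replaces both the feature choice and the blending rule with trained convolutions, which is where its robustness advantage over such fixed rules comes from.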

PATN: Polarized Attention based Transformer Network for Multi-focus image fusion

  • Pan Wu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.4
    • /
    • pp.1234-1257
    • /
    • 2023
  • In this paper, we propose a framework for multi-focus image fusion called PATN. By aggregating deep features extracted with a U-shaped Transformer mechanism and shallow features extracted with the PSA module, PATN captures both long-range texture information and local detail information of the image. Meanwhile, the edge-preserving quality of the fused image is enhanced using a dense residual block containing the Sobel gradient operator, and three loss functions are introduced to retain more source-image texture information. PATN is compared with 17 advanced MFIF (multi-focus image fusion) methods on three datasets to verify its effectiveness and robustness.
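The Sobel gradient operator used in the edge-preserving branch is standard and easy to make concrete. The sketch below shows the 3x3 Sobel kernels and an L1 edge-map difference of the kind such losses use; it is an illustration, not PATN's actual loss (padding mode and the L1 choice are assumptions):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (edge padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            patch = p[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)

def edge_loss(fused, source):
    """L1 distance between Sobel edge maps: small when the fused
    image preserves the source's edges."""
    return float(np.mean(np.abs(sobel_magnitude(fused)
                                - sobel_magnitude(source))))
```

Penalizing this distance during training pushes the network to keep edges from the in-focus source, which is the stated purpose of the Sobel-equipped residual block.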

Multi-focus 3D display of see-through Head-Mounted Display type (투시형 두부 장착형 디스플레이방식의 다초점 3차원 디스플레이)

  • Kim, Dong-Wook;Yoon, Seon-Kyu;Kim, Sung-Kyu
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.441-447
    • /
    • 2006
  • A see-through HMD-type 3D display offers the advantage of viewing virtual 3D content on a stereoscopic display simultaneously with real objects (MR, Mixed Reality). However, when a user views a stereoscopic display for a long time, not only does eye fatigue occur, but the virtual content also appears defocused because its focal point is fixed. Unsatisfied eye accommodation can be considered the main cause of these phenomena. In this paper, we propose applying multi-focus to a see-through HMD as a solution to this problem. As a result, we confirmed that, under monocular viewing, multi-focus allows the focus of real-world objects and the virtual content to coincide.

Research on the Multi-Focus Image Fusion Method Based on the Lifting Stationary Wavelet Transform

  • Hu, Kaiqun;Feng, Xin
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1293-1300
    • /
    • 2018
  • To address the disadvantages of multi-scale geometric analysis methods in image fusion, such as loss of definition and complex rule selection, an improved multi-focus image fusion method is proposed. First, an initial fused image is quickly obtained with the lifting stationary wavelet transform, and a simple normalized cut is performed on it to obtain segmented regions. Then, the original images are subjected to the NSCT transform, and the absolute values of the high-frequency coefficients in each segmented region are calculated. Finally, the region with the largest absolute value is selected as the fused region, and the fused multi-focus image is obtained by traversing each segmented region. Numerical experiments show that the proposed algorithm not only simplifies the selection of fusion rules but also avoids loss of definition.
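The stationary (undecimated) wavelet transform differs from the ordinary DWT in that it skips downsampling, so bands stay the same size as the image and the result is shift-invariant. A minimal one-level Haar-style sketch of that idea, with a max-abs fusion rule, is shown below; it is an illustration of the general principle, not the paper's lifting implementation or its normalized-cut stage:

```python
import numpy as np

def swt_haar(img):
    """One undecimated (stationary) Haar-style level: a lowpass band and
    a detail band, both the same size as the input, so no shift
    sensitivity is introduced by downsampling (wrap-around edges)."""
    low = (img
           + np.roll(img, -1, 0)
           + np.roll(img, -1, 1)
           + np.roll(np.roll(img, -1, 0), -1, 1)) / 4.0
    return low, img - low

def fuse_swt(img_a, img_b):
    """Average the lowpass bands; take the larger-magnitude detail
    coefficient per pixel, then recombine."""
    la, da = swt_haar(img_a)
    lb, db = swt_haar(img_b)
    detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return (la + lb) / 2.0 + detail
```

Because every band is full-resolution, the per-region coefficient comparison the paper performs after segmentation can be done directly on these maps without realigning coordinates.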