• Title/Abstract/Keyword: Depth


Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo; Seo, Kiyoung; Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 7 / pp. 2434-2448 / 2014
  • This research proposed a domain-division depth map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). The technique applies per-pixel depth quantization according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate image generation. The experiment evaluated three quantization methods on computer-generated 3D scenes of varying scene complexity and background while varying the depth resolution. The results showed that the proposed domain-division depth quantization outperformed the linear method on 7-bit or lower depth maps, especially in scenes containing large objects.
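
As a rough illustration of the quantization idea described above (a sketch, not the authors' exact scheme; the domain boundaries and bit shares below are hypothetical), the following contrasts linear depth quantization with a domain-division variant in which each portion of the depth range receives a chosen share of the available quantization codes:

```python
import numpy as np

def linear_quantize(depth, d_min, d_max, bits):
    """Map depth values uniformly onto 2**bits levels."""
    levels = 2 ** bits
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    return np.round(t * (levels - 1)).astype(np.int32)

def domain_quantize(depth, d_min, d_max, bits, domains):
    """Domain-division quantization (illustrative only).

    `domains` is a list of (range_share, code_share) pairs: each domain covers
    `range_share` of [d_min, d_max] but receives `code_share` of the 2**bits
    codes, so e.g. the near range can be quantized more finely than the far range.
    """
    levels = 2 ** bits
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    q = np.zeros_like(t)
    range_start, code_start = 0.0, 0.0
    for i, (range_share, code_share) in enumerate(domains):
        lo, hi = range_start, range_start + range_share
        mask = (t >= lo) if i == len(domains) - 1 else (t >= lo) & (t < hi)
        local = np.clip((t[mask] - lo) / range_share, 0.0, 1.0)  # position inside the domain
        q[mask] = code_start + local * code_share                # remap to this domain's code budget
        range_start, code_start = hi, code_start + code_share
    return np.round(q * (levels - 1)).astype(np.int32)

# Hypothetical example: the nearest 30% of the depth range gets 60% of the 7-bit codes.
depth = np.random.uniform(1.0, 10.0, size=(480, 640))
q_lin = linear_quantize(depth, 1.0, 10.0, bits=7)
q_dom = domain_quantize(depth, 1.0, 10.0, bits=7, domains=[(0.3, 0.6), (0.7, 0.4)])
```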

DEPTH AND STANLEY DEPTH OF TWO SPECIAL CLASSES OF MONOMIAL IDEALS

  • Xiaoqi Wei
    • 대한수학회보 / Vol. 61, No. 1 / pp. 147-160 / 2024
  • In this paper, we define two new classes of monomial ideals, I_{l,d} and J_{k,d}. When d ≥ 2k + 1 and l ≤ d − k − 1, we give exact formulas for the depth and Stanley depth of the quotient rings S/I_{l,d}^t for all t ≥ 1. When d = 2k = 2l, we compute the depth and Stanley depth of the quotient ring S/I_{l,d}. When d ≥ 2k, we also compute the depth and Stanley depth of the quotient ring S/J_{k,d}.
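
For readers unfamiliar with the invariants involved, the standard definitions are given below as general background (these are not the paper's formulas for S/I_{l,d}^t or S/J_{k,d}):

```latex
% S = K[x_1,\dots,x_n], \mathfrak{m} = (x_1,\dots,x_n), and M a finitely generated
% \mathbb{Z}^n-graded S-module.
\operatorname{depth}(M) = \max\{\, r : \text{there is an } M\text{-regular sequence } f_1,\dots,f_r \in \mathfrak{m} \,\}.
% A Stanley decomposition writes M as a finite direct sum of K-vector spaces
% M = \bigoplus_i u_i\,K[Z_i], with u_i homogeneous and Z_i \subseteq \{x_1,\dots,x_n\};
% the Stanley depth maximizes the smallest number of free variables over all decompositions:
\operatorname{sdepth}(M) = \max_{M = \bigoplus_i u_i K[Z_i]} \; \min_i \lvert Z_i \rvert .
```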

AdaMM-DepthNet: Unsupervised Adaptive Depth Estimation Guided by Min and Max Depth Priors for Monocular Images

  • ;김문철
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020 Fall Conference / pp. 252-255 / 2020
  • Unsupervised deep learning methods have shown impressive results on the challenging monocular depth estimation task, a field of study that has gained attention in recent years. A common approach is to train a deep convolutional neural network (DCNN) via an image-synthesis sub-task, where additional views are utilized during training to minimize a photometric reconstruction error. Previous unsupervised depth estimation networks are trained over a fixed depth range, irrespective of the plausible range for a given image, leading to suboptimal estimates. To overcome this limitation, we first propose an unsupervised adaptive depth estimation method guided by minimum and maximum (min-max) depth priors for a given input image. Incorporating min-max depth priors can drastically reduce the depth estimation complexity and produce depth estimates with higher accuracy. Moreover, we propose a novel network architecture for adaptive depth estimation, called AdaMM-DepthNet, which adopts the min-max depth estimation in its front side. Intensive experimental results demonstrate that adaptive depth estimation can significantly boost accuracy with fewer parameters compared to conventional approaches with a fixed minimum and maximum depth range.
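
A common way to bound a network's output to a per-image depth range, sketched here as an assumption about how such min-max priors might be applied rather than the paper's exact formulation, is to squash the raw prediction and rescale it into [d_min, d_max]:

```python
import torch

def scale_depth(raw_logits: torch.Tensor,
                d_min: torch.Tensor, d_max: torch.Tensor) -> torch.Tensor:
    """Map unconstrained predictions into a per-image depth range.

    raw_logits: (B, 1, H, W) unconstrained network outputs.
    d_min, d_max: (B, 1, 1, 1) per-image depth priors (hypothetical inputs).
    With a fixed global range the sigmoid must resolve all scenes with the same
    resolution; per-image priors shrink the interval it has to cover.
    """
    return d_min + torch.sigmoid(raw_logits) * (d_max - d_min)
```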


Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / Vol. 6, No. 3 / pp. 175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. Then, the depth information from the depth sensor is warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching based on belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
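
The initial disparity obtained by warping the measured ToF depth typically follows the standard pinhole stereo relation; the sketch below uses hypothetical calibration values and omits the reprojection of ToF samples into each color view that a full warping step would also require:

```python
import numpy as np

def depth_to_disparity(depth_m: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert metric depth Z into disparity d = f * B / Z (pinhole stereo model)."""
    eps = 1e-6  # avoid division by zero where the ToF sensor returned no depth
    return focal_px * baseline_m / np.maximum(depth_m, eps)

# Hypothetical rig: 1200 px focal length, 6.5 cm baseline, a flat scene 2 m away.
init_disp = depth_to_disparity(np.full((480, 640), 2.0), focal_px=1200.0, baseline_m=0.065)
```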

건축공간(建築空間) 구성(構成)에 있어서 시각적(視覺的) 깊이의 활용(活用)에 관(關)한 연구(硏究) (A Study on the Application of Visual Depth In Aspects of the Spatial Organization of Architecture)

  • 백민석
    • 한국디지털건축인테리어학회논문집 / Vol. 3, No. 1 / pp. 1-8 / 2003
  • Perceiving the depth of space in the spatial organization of architecture means perceiving space not only in three dimensions but also in the fourth dimension, time. Physical depth in architectural space differs from perceptual depth not only in dimension but also in its perceptual effects. In this study, perceptual depth is defined as visual depth, and physical depth as the depth of space. The purposes of this study are to classify the perceptual effects of visual depth (visual access, sense of variety, dynamic and cubic effects, etc.) and the methods of spatial composition that produce visual depth in architectural space.


Depth Profiling에서 Sputtering Rate의 영향 (The influence of sputtering rate during depth profiling)

  • 김주광; 성인복; 김태준; 오상훈; 강석태
    • 한국진공학회지 / Vol. 12, No. 3 / pp. 162-167 / 2003
  • To determine the concentration distribution of ions implanted in a sample as a function of depth, the implanted ions ejected while sputtering the sample surface are measured by depth profiling. The effect of a changing sputtering rate, which affects the depth scale during depth-profiling measurements, was calculated using SRIM simulation. When ions are implanted into a sample, the atomic density of the sample increases slightly, and as a result the sputtering yield changes. This change, in turn, alters the sputtering rate, which can affect the depth scale in depth-profile measurements. Using the SRIM (Stopping and Range of Ions in Matter) Monte Carlo simulation code, the sputtering yield corresponding to the change in atomic density caused by ion implantation was obtained and the sputtering rate was calculated, confirming that this difference can affect the depth distribution in depth-profiling measurements.
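
The depth-scale correction described above amounts to integrating an erosion rate that changes as implantation modifies the atomic density. A minimal sketch, using the usual relation dz/dt = J·Y/N (J: ion flux, Y: sputtering yield, N: atomic density) with hypothetical numbers and a constant flux, is:

```python
import numpy as np

def eroded_depth(time_s, ion_flux, sputter_yield, atomic_density):
    """Cumulative eroded depth (cm) from dz/dt = J * Y / N with time-varying Y and N.

    time_s         : sample times in seconds
    ion_flux       : ion flux J in ions/cm^2/s (assumed constant here)
    sputter_yield  : sputtering yield Y(t) in atoms/ion, e.g. from SRIM runs
    atomic_density : atomic density N(t) in atoms/cm^3 (rises slightly with implantation)
    """
    t = np.asarray(time_s, dtype=float)
    rate = ion_flux * np.asarray(sputter_yield) / np.asarray(atomic_density)  # cm/s
    dt = np.diff(t)
    # trapezoidal accumulation of the erosion rate over time
    return np.concatenate([[0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * dt)])

# Hypothetical run: density up 2% and yield down 3% over 10 minutes of sputtering.
t = np.linspace(0.0, 600.0, 61)
Y = 2.0 * (1 - 0.03 * t / t[-1])        # atoms/ion
N = 5.0e22 * (1 + 0.02 * t / t[-1])     # atoms/cm^3
z = eroded_depth(t, ion_flux=1.0e15, sputter_yield=Y, atomic_density=N)
```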

깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법 (Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images)

  • 엄기문; 안충현; 이수인; 김강연; 이관행
    • 방송공학회논문지 / Vol. 9, No. 3 / pp. 185-195 / 2004
  • This paper proposes a multi-depth map fusion technique for accurate 3D scene reconstruction. The proposed technique fuses depth maps obtained from stereo matching, a passive 3D acquisition method, and from a depth camera, an active 3D acquisition method. Conventional stereo matching, which estimates disparity between two stereo images, produces many disparity errors in occluded and low-texture regions. A depth map from a depth camera provides relatively accurate depth information, but it contains considerable noise and its measurable depth range is limited. To overcome the drawbacks of the two methods and let them complement each other, this paper proposes a depth map fusion technique that appropriately selects the disparity or depth value from the multiple depth maps obtained by the two methods. From three-view images, two disparity maps are obtained for the left and right views with respect to the center view; after pre-processing to align positions and depth values with the depth maps obtained from the depth camera installed at the center-view camera, an appropriate depth value is selected based on the texture information and depth-map distribution at each pixel position. Computer simulation results of the proposed technique showed that the accuracy of the depth map was improved in some background regions.
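
A minimal sketch of the per-pixel selection idea (the paper's criteria also use the depth-map distribution; the texture measure and threshold below are hypothetical):

```python
import numpy as np

def texture_strength(gray):
    """Simple texture measure: local gradient magnitude (illustrative only)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def fuse_disparity(stereo_disp, tof_disp, gray, tex_thresh=10.0):
    """Per pixel, trust stereo matching in textured regions and fall back to the
    (pre-aligned) depth-camera disparity where texture is too weak for matching."""
    use_stereo = texture_strength(gray) > tex_thresh
    return np.where(use_stereo, stereo_disp, tof_disp)
```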

동해 심해 생태계의 수심별 종조성 및 계절변동 (Seasonal variation of species composition by depths in deep sea ecosystem of the East Sea of Korea)

  • 손명호; 이해원; 홍병규; 전영열
    • 수산해양기술연구 / Vol. 46, No. 4 / pp. 376-391 / 2010
  • To investigate seasonal variation and species composition by depth layer in the deep-sea ecosystem of the East Sea of Korea, a bottom trawl survey was conducted at four depth layers during spring and autumn from 2007 to 2009. A total of 47 species were collected, comprising 23 fish species, 9 crustaceans, 6 cephalopods and 9 gastropods. The main dominant species at each depth layer were Chionoecetes opilio at 300 m, Berryteuthis magister at 500 m, and Chionoecetes japonicus at 700 m and 900 m. In spring, the richness index (R) showed a low value of 2.01 at the 500 m depth and a high value of 2.16 at the 300 m depth; the diversity index (H') showed a low value of 1.53 at 300 m and a high value of 2.09 at 700 m; the dominance index (D) showed a low value of 0.15 at 700 m and a high value of 0.31 at 300 m. In autumn, the richness index showed a low value of 1.48 at 900 m and a high value of 2.69 at 300 m; the diversity index (H') showed a low value of 1.13 at 300 m and a high value of 2.23 at 700 m; the dominance index (D) showed a low value of 0.14 at 700 m and a high value of 0.54 at 300 m. In spring, similarity analysis across depth layers showed that the 900 m layer differed from the other depth layers, whereas the 500 m and 700 m layers were similar. In autumn, the 700 m layer differed from the other depth layers, whereas the 300 m and 500 m layers were similar.
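
The community indices reported above are conventionally computed as below; which exact variants the survey used is not restated in the abstract, so the Margalef richness, Shannon-Wiener diversity, and Simpson dominance shown here are assumptions:

```python
import numpy as np

def community_indices(counts):
    """Richness (Margalef R), diversity (Shannon-Wiener H'), dominance (Simpson D)
    for one depth layer, given the number of individuals collected per species."""
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]
    N, S = n.sum(), len(n)
    p = n / N
    richness = (S - 1) / np.log(N)        # Margalef R
    diversity = -np.sum(p * np.log(p))    # Shannon-Wiener H'
    dominance = np.sum(p ** 2)            # Simpson D
    return richness, diversity, dominance
```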

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao; Ke Wang; Jinjing Zhang; Jialong Zhang; Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp. 2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image-guided DMSR method based on iterative depth feature enhancement. Considering the feature difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features, without using the LF color features. Before each HF/LF depth feature decomposition, the LF component of the previous depth decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, we combine all the depth LF features with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are input into the multistage depth map fusion reconstruction block, in which a cross-enhancement module is introduced to fully mine the spatial correlation of the depth map by interleaving various features between different convolution groups. Experimental results show that, in terms of two objective metrics, root mean square error and mean absolute deviation, the proposed method is superior to many recent DMSR methods.
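
One simple way to realize the HF/LF split described above is sketched below, assuming the low-frequency part is taken with an average-pooling blur and the high-frequency part is the residual; the paper's actual decomposition, recursion, and cross-enhancement modules are more elaborate:

```python
import torch
import torch.nn.functional as F

def split_hf_lf(feat: torch.Tensor, k: int = 5):
    """Split features into low-frequency (blurred) and high-frequency (residual) parts."""
    lf = F.avg_pool2d(feat, kernel_size=k, stride=1, padding=k // 2, count_include_pad=False)
    return feat - lf, lf

def enhance_depth_features(depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
    """Enhance only the depth HF component with the color HF component, leaving the
    depth LF component untouched (illustrative fusion by simple addition)."""
    d_hf, d_lf = split_hf_lf(depth_feat)
    c_hf, _ = split_hf_lf(color_feat)
    return d_lf + d_hf + c_hf
```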

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young; Cho, Yong-Joo; Choo, Hyon-Gon; Kim, Jin-Woong; Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 5, No. 11 / pp. 2175-2190 / 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied the pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that affords acceptable image quality for generating DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which multiview images captured directly from the scene are compared to multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed no significant effect of depth map quantization from 16-bit down to 7-bit (more specifically, 96 levels) on DIBR. Hence, a depth map of 7 bits or above is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
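
For reference, reducing a 16-bit depth map to such lower resolutions can be done with a uniform requantization; in this sketch "96-scale" simply means keeping 96 uniform levels rather than a power of two:

```python
import numpy as np

def requantize_depth(depth16: np.ndarray, levels: int) -> np.ndarray:
    """Requantize a 16-bit depth map to `levels` uniform steps, returned in the
    16-bit range so it can be fed to the same DIBR pipeline."""
    step = 65536.0 / levels
    idx = np.minimum((depth16.astype(np.float64) / step).astype(np.int64), levels - 1)
    return ((idx + 0.5) * step).astype(np.uint16)

depth16 = np.random.randint(0, 65536, (480, 640), dtype=np.uint16)
d_7bit = requantize_depth(depth16, 128)   # 7-bit
d_96   = requantize_depth(depth16, 96)    # 96 levels
```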