
Virtual portraits from rotating selfies

  • Yongsik Lee (Telecommunications & Media Research, Electronics and Telecommunications Research Institute) ;
  • Jinhyuk Jang (Artificial Intelligence Research, Electronics and Telecommunications Research Institute) ;
  • Seungjoon Yang (Department of Electrical Engineering, Ulsan National Institute of Science and Technology)
  • Received : 2021.11.10
  • Accepted : 2022.05.23
  • Published : 2023.04.20

Abstract

Selfies are a popular form of photography, but physical constraints, chiefly that the camera can be held no farther away than arm's length, limit their compositions. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies taken at the same location while the user spins around. The scene is analyzed across the multiple selfies to determine the locations of the camera, subject, and background, and a view from a virtual camera is then synthesized. We present two use cases. First, after rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length, simulating the changes in perspective and lens characteristics that the new composition and focal length introduce. Second, we render a virtual panoramic view with a larger field of view, placing the user's image at a preferred location. In our experiments, virtual portraits spanning a wide range of focal lengths were obtained using a device equipped with a single fixed-focal-length lens, and the rendered portraits included compositions that would otherwise require dedicated lenses. The proposed algorithms enable new use cases in which selfie composition is limited by neither the camera's focal length nor the subject's distance from the camera.
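The first use case rests on standard pinhole-camera geometry: an object of height H at distance d projects to an image height of roughly f·H/d for focal length f, so a longer virtual lens pulled back in proportion keeps the subject the same size on the sensor while magnifying the background relative to it (the familiar telephoto "compression," or dolly-zoom, effect). The Python sketch below illustrates only this geometric relation; the function names, distances, and focal lengths are illustrative assumptions, not the paper's parameters or implementation.

    # Pinhole-model sketch of the "longer virtual lens" use case.
    # Assumption: all names and numbers here are illustrative, not the paper's code.

    def image_height_mm(obj_height_m: float, distance_m: float, focal_mm: float) -> float:
        # Pinhole projection: image height = f * H / d (H and d in the same units).
        return focal_mm * obj_height_m / distance_m

    def virtual_long_lens(d_subject, d_background, f_real, f_virtual):
        # Pull the virtual camera back so f/d stays constant for the subject,
        # keeping its image size fixed; the background recedes by the same
        # translation, so its size relative to the subject changes.
        d_subject_new = d_subject * f_virtual / f_real
        d_background_new = d_background + (d_subject_new - d_subject)
        bg_scale_before = d_subject / d_background
        bg_scale_after = d_subject_new / d_background_new
        return d_subject_new, bg_scale_after / bg_scale_before

    if __name__ == "__main__":
        # A selfie at arm's length (0.5 m, 26-mm-equivalent lens), re-rendered
        # as if shot with an 85-mm portrait lens from farther away.
        d_new, bg_gain = virtual_long_lens(0.5, 5.0, 26.0, 85.0)
        # The subject's projected size is unchanged by construction.
        assert abs(image_height_mm(0.25, 0.5, 26.0)
                   - image_height_mm(0.25, d_new, 85.0)) < 1e-9
        print(f"virtual camera distance: {d_new:.2f} m")        # ~1.63 m
        print(f"background {bg_gain:.2f}x larger vs. subject")  # ~2.67x

For this 26 mm to 85 mm example, the virtual camera moves back to about 1.63 m and the background appears roughly 2.7 times larger relative to the subject, which is the compressed, telephoto look the abstract describes.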

Acknowledgement

This work was supported by the U-K BRAND Research Fund of the Ulsan National Institute of Science & Technology (UNIST) (1.210040.01, contribution rate: 34%) and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIP) (No. 2018-0-00198, contribution rate: 33%). This research was also supported by the 2022 Cultural Heritage Smart Preservation & Utilization R&D Program of the Cultural Heritage Administration, National Research Institute of Cultural Heritage (project name: Development of AI-based CAD conversion technology for traditional architecture drawing images; project number: 2022A02P03-001; contribution rate: 33%).
