## Current Issue

Volume 5 No. 2
05 June 2019

Ali Borji, Ming-Ming Cheng, Qibin Hou, Huaizu Jiang, Jia Li

2019, 5(2): 117-150.   doi:10.1007/s41095-019-0149-9

Detecting and segmenting salient objects from natural scenes, often referred to as salient object detection, has attracted great interest in computer vision. While many models have been proposed and several applications have emerged, a deep understanding of achievements and issues remains lacking. We aim to provide a comprehensive review of recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal ge...

Hirokazu Sakai, Kosuke Nabata, Shinya Yasuaki, Kei Iwasaki

2019, 5(2): 151-160.   doi:10.1007/s41095-019-0137-0

In many-light rendering, a variety of visual and illumination effects, including anti-aliasing, depth of field, volumetric scattering, and subsurface scattering, are expressed by generating a large number of virtual point lights (VPLs), in order to simplify computation of the resulting illumination. Naive approaches that sum the direct illumination from many VPLs are computationally expensive; scalable methods can be computed more efficiently by clustering VPLs, and then estimating their su...
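The contrast between the naive sum and the clustered estimate described in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the grid-bucket clustering, inverse-square falloff, and all function names here are assumptions for illustration only.

```python
# Toy sketch of many-light rendering (NOT the paper's method): shading a
# point by summing all virtual point lights (VPLs) versus shading one
# representative per VPL cluster, which trades accuracy for scalability.

def shade_naive(point, vpls):
    """Sum contributions of every VPL (O(N) per shading point)."""
    total = 0.0
    for pos, intensity in vpls:
        d2 = sum((a - b) ** 2 for a, b in zip(point, pos)) + 1e-6
        total += intensity / d2          # assumed inverse-square falloff
    return total

def cluster_vpls(vpls, cell=1.0):
    """Assumed toy clustering: bucket VPLs on a grid, keep one
    representative position per bucket with the bucket's total energy."""
    buckets = {}
    for pos, intensity in vpls:
        key = tuple(int(c // cell) for c in pos)
        rep_pos, total = buckets.get(key, (pos, 0.0))
        buckets[key] = (rep_pos, total + intensity)
    return list(buckets.values())

def shade_clustered(point, clusters):
    """Sum one representative per cluster, scaled by cluster energy."""
    total = 0.0
    for rep_pos, cluster_intensity in clusters:
        d2 = sum((a - b) ** 2 for a, b in zip(point, rep_pos)) + 1e-6
        total += cluster_intensity / d2
    return total
```

With few clusters the second estimator is far cheaper per shading point; the error it introduces is what scalable many-light methods control more carefully.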

Martin Ritz, Simon Breitfelder, Pedro Santos, Arjan Kuijper, Dieter W. Fellner

2019, 5(2): 161-170.   doi:10.1007/s41095-019-0141-4

We show how to overcome the single weakness of an existing fully automatic system for acquisition of spatially varying optical material behavior of real object surfaces. While the expression of spatially varying material behavior with spherical dependence on incoming light as a 4D texture (an ABTF material model) allows flexible mapping onto arbitrary 3D geometry, with photo-realistic rendering and interaction in real time, this very method of texture-like representation exposes it to common ...

Richard Roberts, J. P. Lewis, Ken Anjyo, Jaewoo Seo, Yeongho Seol

2019, 5(2): 171-191.   doi:10.1007/s41095-019-0138-z

Motion capture is increasingly used in games and movies, but the captured motion often requires editing before it can be used, for many reasons. The motion may need to be adjusted to interact correctly with virtual objects, or to fix problems caused by mapping the motion to a character of a different size; beyond such technical requirements, directors may also request stylistic changes. Unfortunately, editing is laborious because of the low-level representation of the data. While existing motion editing meth...

Xiaochuan Wang, Xiaohui Liang, Bailin Yang, Frederick W. B. Li

2019, 5(2): 193-208.   doi:10.1007/s41095-019-0131-6

Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object disocclusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive...
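The disocclusion distortion this abstract refers to can be seen in a minimal 1-D sketch of DIBR, assuming a simple forward warp: each source pixel shifts horizontally by a disparity proportional to inverse depth, and target positions that no source pixel reaches become holes. The baseline/focal constant and hole marker are illustrative assumptions, not from the paper.

```python
# Minimal 1-D sketch of depth-image-based rendering (DIBR), assumed for
# illustration: forward-warp one scanline to a virtual view, resolving
# conflicts with a z-buffer; unfilled positions are disocclusion holes.

def dibr_warp_scanline(colors, depths, baseline_focal=10.0, hole=-1):
    """Return the warped scanline; `hole` marks disoccluded pixels."""
    width = len(colors)
    out = [hole] * width
    out_depth = [float("inf")] * width
    for x, (c, z) in enumerate(zip(colors, depths)):
        disparity = int(round(baseline_focal / z))   # nearer pixels move more
        xt = x + disparity
        if 0 <= xt < width and z < out_depth[xt]:    # z-buffer: keep nearest
            out[xt] = c
            out_depth[xt] = z
    return out
```

The `hole` entries are exactly the regions whose in-painting introduces the geometric distortions that specialized quality metrics for synthetic views try to measure.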

Salma Alqazzaz, Xianfang Sun, Xin Yang, Len Nokes

2019, 5(2): 209-219.   doi:10.1007/s41095-019-0139-y

Accurate and fully automatic algorithms for brain tumor segmentation hold the potential to improve disease detection and treatment planning. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on pathologists’ experience. In this paper, we tackle this problem by applying a fully convolutional neural network SegNe...

Liang Han, Pin Tao, Ralph R. Martin

2019, 5(2): 221-228.   doi:10.1007/s41095-019-0132-5

In order to accurately count the number of animals grazing on grassland, we present a livestock detection algorithm using modified versions of U-net and Google Inception-v4 net. This method works well to detect dense and touching instances. We also introduce a dataset for livestock detection in aerial images, consisting of 89 aerial images collected by quadcopter. Each image has a resolution of about ...
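The counting step behind such a detector can be sketched independently of the networks themselves. Assuming (this is not from the paper) that the segmentation stage outputs a binary mask, instances can be counted as connected components; the sketch also shows why touching animals are hard, since they merge into a single component:

```python
# Toy sketch of the counting step only (assumed, not the paper's pipeline):
# count animal instances in a binary segmentation mask as 4-connected
# components. Touching instances collapse into one component, which is the
# failure mode dense-instance detectors must address.

def count_instances(mask):
    """Count 4-connected components of 1s in a 2-D binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                       # found a new component
                stack = [(i, j)]
                seen[i][j] = True
                while stack:                     # flood fill it
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```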