Top Read Articles
Evaluation of modified adaptive $k$-means segmentation algorithm
Taye Girma Debelee, Friedhelm Schwenker, Samuel Rahimeto, Dereje Yohannes
Computational Visual Media, 2019, 5(4): 347-361. DOI: 10.1007/s41095-019-0151-2
Segmentation partitions an image into regions by creating boundaries between them. $k$-means is the simplest and most prevalent approach to image segmentation, but the segmentation quality is contingent on the initial parameters: the cluster centers and their number. In this paper, a convolution-based modified adaptive $k$-means (MAKM) approach is proposed and evaluated on images collected from different sources (MATLAB, the Berkeley image database, VOC2012, BGH, MIAS, and MRI). The evaluation shows that the proposed algorithm is superior to the $k$-means++, fuzzy $c$-means, histogram-based $k$-means, and subtractive $k$-means algorithms in terms of image segmentation quality ($Q$-value), computational cost, and RMSE. The proposed algorithm was also compared to state-of-the-art learning-based methods in terms of IoU and MIoU; it achieved a higher MIoU value.
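The abstract's point that plain $k$-means depends on its initial parameters can be seen in a minimal intensity-based sketch. This is only an illustration of the baseline: the paper's MAKM additionally derives the cluster centers (and their number) adaptively via convolution, which is not reproduced here; `kmeans_segment` and its arguments are hypothetical names chosen for the example.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster pixel intensities into k regions with plain k-means.

    A minimal baseline sketch: the result depends on the randomly
    chosen initial centers and on k, which is exactly the sensitivity
    the paper's adaptive initialization is designed to remove.
    """
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(np.float64)
    # Initialize centers from k distinct random pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.abs(pixels - centers.T)          # shape (N, k)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its members
        # (keep the old center if a cluster went empty).
        new_centers = np.array([
            pixels[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels.reshape(image.shape), centers.ravel()

# Segment a toy image with one dark and one bright region.
img = np.array([[10.0, 12.0, 200.0],
                [11.0, 198.0, 202.0]])
labels, centers = kmeans_segment(img, k=2)
```

On this well-separated toy image any initialization converges to the dark/bright split; on real images with poorly chosen `k` or unlucky centers, the partition degrades, motivating the adaptive scheme.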
SpinNet: Spinning convolutional network for lane boundary detection
Ruochen Fan, Xuanrun Wang, Qibin Hou, Hanchao Liu, Tai-Jiang Mu
Computational Visual Media, 2019, 5(4): 417-428. DOI: 10.1007/s41095-019-0152-1
In this paper, we propose a simple but effective framework for lane boundary detection, called SpinNet. Because cars and pedestrians often occlude lane boundaries, and because the local features of lane boundaries are not distinctive, analyzing and collecting global context information is crucial for lane boundary detection. To this end, we design a novel spinning convolution layer and a brand-new lane parameterization branch in our network to detect lane boundaries from a global perspective. To extract features in narrow strip-shaped fields, the spinning convolution layer adopts strip-shaped convolutions with kernels of shape $1\times n$ or $n\times 1$. To tackle the problem that straight strip-shaped convolutions can only extract features in the vertical or horizontal direction, we introduce feature map rotation, which allows the convolutions to be applied in multiple directions so that more information can be collected concerning a whole lane boundary. Moreover, unlike most existing lane boundary detectors, which extract lane boundaries from segmentation masks, our lane parameterization branch predicts a curve expression for the lane boundary at each pixel of the output feature map, and the network uses this information to predict the weights of the curve and better form the final lane boundaries. Our framework is easy to implement and end-to-end trainable. Experiments show that our proposed SpinNet outperforms state-of-the-art methods.
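The spinning idea described above can be sketched as: rotate the feature map, apply one $1\times n$ strip kernel, and rotate the response back, so a single horizontal kernel gathers information along several directions. This is a rough numpy illustration using 90° steps via `np.rot90`; the actual SpinNet layer operates on learned CNN feature maps with finer rotation angles, and `strip_conv` / `spinning_strip_conv` are hypothetical names for the example.

```python
import numpy as np

def strip_conv(feat, kernel):
    """Same-size 2-D cross-correlation with a strip-shaped (1×n or n×1) kernel."""
    kh, kw = kernel.shape
    padded = np.pad(feat, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(feat, dtype=np.float64)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def spinning_strip_conv(feat, kernel, angles=(0, 1, 2, 3)):
    """Apply the same strip kernel along several directions: rotate the
    feature map (in 90° steps here), convolve, and rotate the response
    back so all responses are aligned with the input."""
    responses = []
    for a in angles:
        rotated = np.rot90(feat, a)
        resp = strip_conv(rotated, kernel)
        responses.append(np.rot90(resp, -a))
    return responses

feat = np.arange(25, dtype=np.float64).reshape(5, 5)
kernel = np.ones((1, 3)) / 3.0          # 1×3 averaging strip
responses = spinning_strip_conv(feat, kernel)
```

With angle index 0 the kernel averages along rows; with angle index 1 the same kernel, applied to the rotated map, effectively averages along columns, which is how one strip kernel covers multiple orientations of a lane boundary.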