Salient Region Detection Pixel Evaluation Approach Using Medical Images


R. Poonkuzhali
K. Gayathiri
D. Deepak Raj
P. Iragul

Keywords

Salient region detection, Superpixels, Segmentation, Saliency detection, Hexagonal image processing

Abstract

Detecting visually salient regions in images is a fundamental problem, and saliency detection has attracted considerable interest in image processing over the past few years. Salient object regions can be viewed as a soft decomposition of an image into foreground and background elements, and the goal is to recover these regions in the form of a saliency map. The approach rests on the observation that, in human perception, salient regions often have colors distinct from the background, yet human perception itself is complex and highly nonlinear. The saliency map is therefore produced as a linear combination of colors in a high-dimensional color space: the low-dimensional red, green, and blue values of each pixel are mapped to a feature vector in the high-dimensional space, and the optimal linear combination of color coefficients is estimated to yield an accurate composite saliency map. To further improve performance, the relative location and color contrast among superpixels are used as features, and a learning-based algorithm refines the saliency estimation from a trimap. These additional local features augment the global estimate obtained from the high-dimensional color transform. While many saliency models exist, this formulation accommodates a wide range of saliency detection methods while also improving overall estimation quality. Experimental results on three benchmark datasets show that the method is efficient and compares favorably with previous state-of-the-art saliency estimation techniques.
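The core pipeline described above, mapping each pixel's RGB values into a high-dimensional color feature space and then learning a linear combination of coefficients from the known foreground and background pixels of a trimap, can be illustrated with a minimal sketch. Note the simplifications: the feature set below (raw channels plus their pairwise products and squares) and the plain least-squares fit are stand-ins for the paper's actual multi-color-space transform and learning algorithm, and all function names here are hypothetical.

```python
import numpy as np

def hdc_features(img):
    """Map each RGB pixel to a higher-dimensional color feature vector.

    A simplified stand-in for the high-dimensional color transform:
    raw channels, pairwise channel products, and channel squares.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    feats = [r, g, b, r * g, g * b, r * b, r ** 2, g ** 2, b ** 2]
    return np.stack(feats, axis=-1)

def estimate_saliency(img, trimap):
    """Estimate a saliency map from an image and a trimap.

    Trimap convention (assumed here): 1 = definite foreground,
    0 = definite background, -1 = unknown. A linear combination of
    the color coefficients is fit on the known pixels by least
    squares, then applied to every pixel.
    """
    h, w = img.shape[:2]
    X = hdc_features(img).reshape(-1, 9)
    y = trimap.reshape(-1).astype(float)
    known = y >= 0
    # Solve for weights alpha so that X[known] @ alpha ~ y[known].
    alpha, *_ = np.linalg.lstsq(X[known], y[known], rcond=None)
    sal = X @ alpha
    # Normalize the map into [0, 1].
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    return sal.reshape(h, w)

# Toy example: a red square on a blue background, with a few
# foreground and background pixels marked in the trimap.
img = np.zeros((8, 8, 3))
img[..., 2] = 1.0                 # blue background
img[2:6, 2:6] = [1.0, 0.0, 0.0]   # red foreground square
trimap = -np.ones((8, 8))
trimap[3:5, 3:5] = 1              # known foreground
trimap[0, :] = 0                  # known background
saliency = estimate_saliency(img, trimap)
```

In the toy example, the learned weights generalize from the few labeled pixels to the unlabeled ones, so the whole red square receives high saliency and the blue background receives low saliency. The full method would additionally refine this global estimate with superpixel-level relative-location and color-contrast features, which are omitted here.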

