A visual approach for user-guided feature fusion

  • Gladys M. Hilasaca, University of São Paulo (USP)
  • Fernando V. Paulovich, University of São Paulo (USP)

Abstract


Dimensionality reduction transforms data from a high-dimensional space into a visual space while preserving the existing relationships. This abstract representation of complex data enables the exploration of data similarities, but it poses analysis and interpretation challenges when there is a mismatch between users' expectations and the visual representation. One way to model these expectations is through different feature extractors, since each extractor encodes data characteristics in its own way. Because no single feature extractor is perfect, the combination of multiple feature sets has been explored through a process called feature fusion. Feature fusion can be readily performed when machine learning or data mining algorithms provide a cost function; when no such function exists, user support must be provided, otherwise the process becomes impractical. In this project, we present a novel feature fusion approach that employs data samples and visualization to allow users not only to effortlessly control the combination of different feature sets but also to understand the attained results. The effectiveness of our approach is confirmed by a comprehensive set of qualitative and quantitative experiments, opening up different possibilities for user-guided analytical scenarios. The ability of our approach to provide real-time feedback for feature fusion is exploited in the context of unsupervised clustering techniques, where users can perform an exploratory process to discover the combination of features that best reflects their individual perceptions of similarity. A traditional way to visualize data similarities is via scatter plots; however, these suffer from overlap, which hides data distributions and makes relationships among data instances difficult to observe, hampering data exploration. To tackle this issue, we developed a technique called Distance-preserving Grid (DGrid).
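The user-controlled combination of feature sets can be pictured as a weighted concatenation of per-set normalized feature matrices. The sketch below is a minimal illustration of that idea only, not the paper's exact formulation; the function name, the normalization choice, and the weights are ours:

```python
import numpy as np

def fuse_features(feature_sets, weights):
    """Concatenate several feature matrices (same rows = same instances),
    normalizing each row per set and scaling the set by a user weight,
    so larger weights make that set dominate fused distances."""
    blocks = []
    for X, w in zip(feature_sets, weights):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        norms[norms == 0] = 1.0          # avoid dividing zero rows by zero
        blocks.append(w * X / norms)     # unit rows scaled by the set weight
    return np.hstack(blocks)

# Two hypothetical feature sets for 3 instances, weighted 0.7 / 0.3.
X1 = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 4.0]])
X2 = np.array([[1.0, 1.0, 1.0, 1.0], [2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 5.0]])
fused = fuse_features([X1, X2], [0.7, 0.3])
```

Adjusting the weight vector and re-projecting the fused matrix is what makes an interactive, user-guided exploration loop possible.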
DGrid employs a binary space partitioning process, in combination with the dimensionality reduction output, to create orthogonal regular grid layouts. DGrid guarantees non-overlapping instances because each data instance is assigned to exactly one grid cell. Our results show that DGrid outperforms existing state-of-the-art techniques while requiring only a fraction of the running time and computational resources, rendering it a very attractive method for large datasets.
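The binary space partitioning step can be sketched as follows: the grid is recursively split in half along its longer axis, and the projected points, sorted by the corresponding coordinate, are divided proportionally between the two halves until each cell holds exactly one instance. This is a simplified illustration under the assumption that the number of points equals the number of cells, not the published DGrid algorithm; names are ours:

```python
def dgrid(items, rows, cols, row0=0, col0=0):
    """Assign each (id, x, y) item to a unique (row, col) cell.
    Assumes len(items) == rows * cols."""
    if len(items) == 1:
        return {items[0][0]: (row0, col0)}
    out = {}
    if cols >= rows:
        # Split into left/right halves: sort by x so the leftmost
        # points fill the left sub-grid.
        ordered = sorted(items, key=lambda t: t[1])
        left_cols = cols // 2
        n_left = left_cols * rows
        out.update(dgrid(ordered[:n_left], rows, left_cols, row0, col0))
        out.update(dgrid(ordered[n_left:], rows, cols - left_cols,
                         row0, col0 + left_cols))
    else:
        # Split into top/bottom halves: sort by y.
        ordered = sorted(items, key=lambda t: t[2])
        top_rows = rows // 2
        n_top = top_rows * cols
        out.update(dgrid(ordered[:n_top], top_rows, cols, row0, col0))
        out.update(dgrid(ordered[n_top:], rows - top_rows, cols,
                         row0 + top_rows, col0))
    return out

# Four projected points on a 2x2 grid: each lands in its own cell,
# preserving their relative spatial arrangement.
cells = dgrid([('a', 0.0, 0.0), ('b', 1.0, 0.0),
               ('c', 0.0, 1.0), ('d', 1.0, 1.0)], 2, 2)
```

Because each recursion level only sorts and slices, the assignment stays cheap, which is consistent with the running-time advantage reported above.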

Published
28/10/2019
How to Cite

HILASACA, Gladys M.; PAULOVICH, Fernando V. A visual approach for user-guided feature fusion. In: WORKSHOP DE TESES E DISSERTAÇÕES - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 32., 2019, Rio de Janeiro. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2019. p. 133-139. DOI: https://doi.org/10.5753/sibgrapi.est.2019.8313.