Fast feature selection using fractal dimension

Authors

  • Christos Faloutsos CMU
  • Leejay Wu CMU
  • Agma Traina USP-ICMC
  • Caetano Traina Jr. USP-ICMC

DOI:

https://doi.org/10.5753/jidm.2010.936

Keywords:

Dimensionality reduction, Data mining, Machine learning, Clustering

Abstract

The dimensionality curse and dimensionality reduction are two key issues that have attracted sustained interest in data mining, machine learning, multimedia indexing, and clustering.  In this paper we present a fast, scalable algorithm to quickly select the most important attributes (dimensions) for a given set of n-dimensional vectors.  In contrast to older methods, our method has the following desirable properties: (a) it does not rotate the attributes, which makes the resulting attributes easy to interpret; (b) it can spot attributes that have either linear or nonlinear correlations; (c) it requires a constant number of passes over the dataset; (d) it gives a good estimate of how many attributes should be kept. The idea is to use the 'fractal' dimension of a dataset as a good approximation of its intrinsic dimension, and to drop attributes that do not affect it.  We applied our method on real and synthetic datasets, where it gave fast and correct results.
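The idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' scalable constant-pass implementation: it estimates the correlation fractal dimension D2 by box counting (sum of squared cell occupancies across grid resolutions, slope on a log-log plot) and then greedily drops the attribute whose removal changes D2 the least. Function names, the grid resolutions, and the backward-elimination loop are our own choices for the sketch.

```python
import numpy as np

def fractal_dimension(points, grid_exponents=range(1, 6)):
    """Estimate the correlation fractal dimension D2 by box counting:
    at each grid side r = 2**-e, count points per cell, sum the squared
    occupancies S(r), and fit the slope of log S(r) versus log r."""
    points = np.asarray(points, dtype=float)
    # Normalize each attribute into [0, 1) so one grid covers the data.
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0
    unit = (points - mins) / (spans * (1 + 1e-9))
    log_r, log_s = [], []
    for e in grid_exponents:
        cells = np.floor(unit * 2 ** e).astype(np.int64)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        log_r.append(np.log(2.0 ** -e))
        log_s.append(np.log(np.sum(counts.astype(float) ** 2)))
    slope, _ = np.polyfit(log_r, log_s, 1)  # slope approximates D2
    return slope

def backward_select(points, keep):
    """Greedy backward elimination: repeatedly drop the attribute whose
    removal changes the estimated fractal dimension the least, i.e. the
    attribute that is (nearly) determined by the remaining ones."""
    points = np.asarray(points, dtype=float)
    attrs = list(range(points.shape[1]))
    while len(attrs) > keep:
        d_cur = fractal_dimension(points[:, attrs])
        drop = min(attrs, key=lambda a: abs(
            d_cur - fractal_dimension(
                points[:, [x for x in attrs if x != a]])))
        attrs.remove(drop)
    return attrs
```

For example, on a dataset whose third attribute is an exact copy of the first, the intrinsic dimension stays near 2, and the redundant copy (or its twin) is the first attribute eliminated. The paper's contribution beyond this sketch is doing the counting scalably, in a constant number of passes, and using the D2 curve to decide how many attributes to keep.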

Published

2010-05-27 (updated 2021-01-13)

How to Cite

Faloutsos, C., Wu, L., Traina, A., & Traina Jr., C. (2021). Fast feature selection using fractal dimension. Journal of Information and Data Management, 1(1), 3. https://doi.org/10.5753/jidm.2010.936 (Original work published May 27, 2010)