A Scalable Parallel Deduplication Algorithm
Abstract
The identification of replicas in a database is fundamental to improving the quality of its information. Deduplication is the task of identifying records in a database that refer to the same real-world entity. This process is not always trivial, because data may be corrupted during gathering, storage, or even manipulation. Problems such as misspelled names, data truncation, input in a wrong format, lack of conventions (such as how to abbreviate a name), missing data, or even fraud may lead to the insertion of replicas in a database. The deduplication process may be very hard, if not impossible, to perform manually, since real databases may have hundreds of millions of records. In this paper, we present our parallel deduplication algorithm, called FERAPARDA. Using probabilistic record linkage, we were able to successfully detect replicas in synthetic datasets with more than 1 million records in about 7 minutes on a 20-computer cluster, achieving an almost linear speedup. We believe that our results have no parallel in the literature with respect to the size of the dataset and the processing time.
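To make the probabilistic record-linkage idea concrete, the following is a minimal sketch: each pair of records gets a score combining per-field string similarities, and pairs above a threshold are flagged as replicas. The field names, weights, and threshold here are illustrative assumptions, not the actual FERAPARDA implementation described in the paper.

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1] (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec1: dict, rec2: dict, weights: dict) -> float:
    """Weighted sum of per-field similarities -- a simplified
    record-linkage score; the paper's weighting scheme may differ."""
    return sum(w * field_similarity(rec1[f], rec2[f])
               for f, w in weights.items())

def is_replica(rec1: dict, rec2: dict, weights: dict,
               threshold: float = 0.85) -> bool:
    """Flag a pair as a replica when its score passes the threshold.
    The threshold value is a hypothetical choice for illustration."""
    return match_score(rec1, rec2, weights) >= threshold

# Example: a misspelled name still links to the same entity.
a = {"name": "John Smith", "city": "Belo Horizonte"}
b = {"name": "Jon Smith",  "city": "Belo Horizonte"}
c = {"name": "Maria Souza", "city": "Gramado"}
weights = {"name": 0.6, "city": 0.4}
print(is_replica(a, b, weights))  # the misspelled pair matches
print(is_replica(a, c, weights))  # a distinct entity does not
```

In practice, comparing all record pairs is quadratic, which is why scalable systems such as the one in the paper partition the comparison work across cluster nodes.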
Keywords:
Couplings, Computer science, Scalability, Computer architecture, High performance computing, Demography, Clustering algorithms, Deductive databases, Software libraries, Erbium
Published
24/10/2007
How to Cite
SANTOS, Walter; TEIXEIRA, Thiago; MACHADO, Carla; MEIRA JR., Wagner; FERREIRA, Renato; GUEDES, Dorgival; SILVA, Altigran S. da. A Scalable Parallel Deduplication Algorithm. In: INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD), 19., 2007, Gramado/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2007. p. 79-86.
