A Recommender for Choosing Data Systems based on Application Profiling and Benchmarking
Abstract
In our data-driven society, hundreds of data systems are available on the market, each with a wide range of configuration parameters, making it very hard for enterprises and users to choose the most suitable one. There is a lack of representative empirical evidence to help users make an informed decision. Using benchmark results is a widely adopted practice, but just as there are many data systems, there are also many benchmarks. This ongoing work presents the architecture and methods of a system that supports recommending the most suitable data system for an application. We also illustrate how the recommendation would work in a fictitious scenario.
Keywords:
data system advisor, database recommendation, database benchmarking
Published
04/10/2021
How to Cite
SOARES, Elton Figueiredo de Souza; SOUZA, Renan; THIAGO, Raphael Melo; MACHADO, Marcelo de Oliveira Costa; AZEVEDO, Leonardo Guerreiro. A Recommender for Choosing Data Systems based on Application Profiling and Benchmarking. In: SIMPÓSIO BRASILEIRO DE BANCO DE DADOS (SBBD), 36., 2021, Rio de Janeiro. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 265-270. ISSN 2763-8979. DOI: https://doi.org/10.5753/sbbd.2021.17883.