An Empirical Method for Assessing the Sensitivity of Real-Time Task Execution Times to Input Data

  • Karila Palma Silva UFSC
  • Luís Fernando Arcaro UFSC
  • Rômulo Silva de Oliveira UFSC

Abstract

In this work we perform an empirical analysis of the execution times of real-time tasks with respect to their input data. The analysis aims to (1) verify whether task execution times are sensitive to the input data used, and (2) quantitatively assess the impact of input data on the resulting execution times. For (1) we apply statistical tests that check whether different input data produce different execution-time distributions, which allows us to conclude whether there is statistical evidence that the task's execution time is sensitive to its input data. For (2) we use a genetic algorithm to find input data that maximize (MAX) or minimize (MIN) the task's execution time, and compute the ratio between the medians of the MAX and MIN measurements. The purpose of the analysis is to give software testers a method for assessing the impact of input data on task execution times, and hence the importance of identifying worst-case input data, with respect to execution time, to be used when testing tasks in Real-Time Systems.
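Step (1) can be illustrated with a minimal sketch, assuming Python with scipy and using the k-sample Anderson-Darling test as one possible choice of statistical test; the two samples below are synthetic stand-ins for execution times measured for the same task under two different input data sets:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical execution-time samples (microseconds) for the same task
# measured under two different input data sets.
times_input_a = rng.normal(loc=100.0, scale=2.0, size=200)
times_input_b = rng.normal(loc=110.0, scale=2.0, size=200)

# k-sample Anderson-Darling test: a small significance level is evidence
# that the samples do not come from the same distribution, i.e. that the
# execution time is sensitive to the input data.
result = stats.anderson_ksamp([times_input_a, times_input_b])
sensitive = result.significance_level < 0.05
```

In a real application the samples would come from repeated measurements of the task on the target hardware, one sample per candidate input.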

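Step (2) can be sketched with a toy genetic algorithm; the bit-string input encoding, the GA parameters, and the surrogate `measured_time` function below are illustrative assumptions, not the paper's actual setup, where `measured_time` would instead time the task under test on the target platform:

```python
import random

# Surrogate for a measured task: "time" grows with the number of set bits,
# plus a branch penalty. A real setup would time the task on real inputs.
def measured_time(bits):
    return 1.0 + sum(bits) + (5.0 if bits[0] and bits[1] else 0.0)

def evolve(maximize, generations=40, pop_size=20, n_bits=16, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    sign = 1 if maximize else -1
    for _ in range(generations):
        # Keep the better half (higher time for MAX, lower time for MIN).
        pop.sort(key=lambda ind: sign * measured_time(ind), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]      # one-point crossover
            i = rng.randrange(n_bits)
            child[i] ^= 1                  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: sign * measured_time(ind))

best_max = evolve(maximize=True)   # input data maximizing execution time
best_min = evolve(maximize=False)  # input data minimizing execution time

# With repeated measurements per input one would take medians; the ratio
# median(MAX) / median(MIN) quantifies the input-data impact.
ratio = measured_time(best_max) / measured_time(best_min)
```

A ratio close to 1 suggests the task's execution time is largely insensitive to its inputs, while a large ratio indicates that choosing worst-case input data matters for timing tests.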
Published
2018-11-06
How to Cite
SILVA, Karila Palma; ARCARO, Luís Fernando; DE OLIVEIRA, Rômulo Silva. Método Empírico para Avaliar a Sensibilidade do Tempo de Execução de Tarefas de Tempo Real aos Dados de Entrada. Anais Estendidos do Simpósio Brasileiro de Engenharia de Sistemas Computacionais (SBESC), [S.l.], Nov. 2018. ISSN 2763-9002. Available at: <https://sol.sbc.org.br/index.php/sbesc_estendido/article/view/11006>. Accessed: May 18, 2024.