Research article · DOI: 10.1145/2382636.2382644

An architecture for multidimensional computer adaptive test with educational purposes

Published: 15 October 2012

ABSTRACT

Given an item bank, a Multidimensional Computer Adaptive Test (MCAT) selects items according to the student's estimated abilities, producing an individualized test. An MCAT seeks to maximize the test's accuracy by evaluating multiple abilities simultaneously (unlike a Computer Adaptive Test, or CAT, which evaluates a single ability), using the sequence of items previously answered. Although MCATs have been thoroughly studied from a statistical point of view, no computational system covers all the steps required for their appropriate use: a calibrated item bank, initial and stopping criteria for the test, criteria for estimating the examinee's abilities, and criteria for selecting items. The purpose of this paper is twofold: (i) to present an innovative architecture for an MCAT aimed at real users, delivered as a Web application, and (ii) to discuss the theoretical and methodological development of such an MCAT through a new approach named here Computer-based Multidimensional Adaptive Testing (CBMAT). As a proof of concept of CBMAT, we implemented the Multidimensional Adaptive Test System for Educational Purposes (MADEPT). In simulations, MADEPT proved to be suitable for applications with real users: secure, accurate, and portable.
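The adaptive loop the abstract describes — select an item based on the current ability estimate, record the response, re-estimate, and check a stopping rule — can be sketched as follows. This is an illustrative unidimensional sketch under a 2PL item response model, using maximum-Fisher-information item selection, a grid-search maximum-likelihood ability estimate, and a standard-error stopping rule; it is not the multidimensional CBMAT/MADEPT implementation, and all function and parameter names here are hypothetical.

```python
import math

def prob_correct(theta, a, b):
    """2PL item response function: P(correct answer | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by a 2PL item at ability theta."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def run_cat(bank, answer_fn, max_items=10, se_target=0.4):
    """Adaptive loop: pick the most informative unused item, record the
    response, re-estimate ability, stop when the standard error of the
    estimate drops below se_target or the item limit is reached.

    bank      -- list of (a, b) item parameters (a calibrated item bank)
    answer_fn -- callable mapping an item to True (correct) / False
    """
    theta = 0.0                      # initial criterion: start at average ability
    used, responses = [], []
    for _ in range(max_items):
        # item selection criterion: maximum information at current theta
        idx = max((i for i in range(len(bank)) if i not in used),
                  key=lambda i: item_information(theta, *bank[i]))
        used.append(idx)
        responses.append(answer_fn(bank[idx]))

        # ability estimation criterion: grid-search MLE over theta in [-4, 4]
        def log_lik(t):
            ll = 0.0
            for i, r in zip(used, responses):
                p = prob_correct(t, *bank[i])
                ll += math.log(p if r else 1.0 - p)
            return ll
        theta = max((g / 10.0 for g in range(-40, 41)), key=log_lik)

        # stopping criterion: SE = 1/sqrt(test information) below target
        info = sum(item_information(theta, *bank[i]) for i in used)
        if info > 0 and 1.0 / math.sqrt(info) < se_target:
            break
    return theta, used
```

A multidimensional test replaces the scalar `theta` with an ability vector and the scalar information with an information matrix, but the control flow — select, respond, estimate, test the stopping rule — is the same.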


Published in:
WebMedia '12: Proceedings of the 18th Brazilian Symposium on Multimedia and the Web
October 2012, 426 pages
ISBN: 9781450317061
DOI: 10.1145/2382636
Copyright © 2012 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Overall Acceptance Rate: 270 of 873 submissions, 31%
