Characterizing Monte Carlo Dropout to Quantify Uncertainty in Trained Neural Networks
Abstract
Uncertainty quantification is crucial in deep learning applications where reliable decisions depend not only on accuracy but also on predictive confidence. This work investigates Monte Carlo Dropout (MC Dropout), an efficient method for estimating epistemic uncertainty in neural networks. In particular, the influence of its uncertainty parameter, the dropout rate applied at inference time, is characterized on a regression task using the California Housing dataset. Empirical results reveal that the estimated uncertainty is highly sensitive to this parameter, which affects both the sample mean and the standard deviation of the predictions. Interestingly, the model's error (RMSE) is not monotonic in the uncertainty parameter, suggesting the existence of an optimal value. These findings underscore the need for careful selection of the dropout rate during inference and motivate future research on model calibration.
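To make the procedure described in the abstract concrete, the sketch below illustrates MC Dropout inference on the California Housing regression task: dropout is kept active at test time, T stochastic forward passes are collected, and their per-input mean and standard deviation serve as the prediction and the epistemic-uncertainty estimate. This is a minimal PyTorch sketch, not the authors' implementation; the network architecture, the dropout rate p, the number of passes T, and the helper name mc_dropout_predict are all illustrative assumptions.

```python
# Minimal MC Dropout sketch (illustrative; not the paper's code).
import torch
import torch.nn as nn
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_test_t = torch.tensor(X_test, dtype=torch.float32)
y_test_t = torch.tensor(y_test, dtype=torch.float32)

p = 0.2  # assumed dropout rate: the uncertainty parameter being characterized
model = nn.Sequential(  # assumed small fully connected regressor (8 input features)
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p),
    nn.Linear(64, 1),
)
# ... train `model` on (X_train, y_train) as usual; training loop elided ...

def mc_dropout_predict(model, x, T=100):
    """Run T stochastic forward passes with dropout active at inference time."""
    model.train()  # keeps nn.Dropout layers sampling instead of scaling
    with torch.no_grad():
        samples = torch.stack([model(x).squeeze(-1) for _ in range(T)])
    # Sample mean is the prediction; sample std is the epistemic-uncertainty estimate.
    return samples.mean(dim=0), samples.std(dim=0)

mean, std = mc_dropout_predict(model, X_test_t)
rmse = torch.sqrt(((mean - y_test_t) ** 2).mean())
print(f"p={p}: RMSE={rmse:.3f}, mean predictive std={std.mean():.3f}")
```

Sweeping p over a grid and recording both the RMSE and the mean predictive standard deviation at each value reproduces the kind of characterization the abstract describes, including the non-monotonic behavior of the error.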
Published
2025-09-29
How to Cite
LUNA, Rafael de S.; LUNA, Rodrigo de S.; FIGUEIREDO, Daniel R. Characterizing Monte Carlo Dropout to Quantify Uncertainty in Trained Neural Networks. In: NATIONAL MEETING ON ARTIFICIAL AND COMPUTATIONAL INTELLIGENCE (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 1972-1983. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.14366.
