Detection of Pornographic Media on Resource-Constrained Devices for Parental Control

  • Jhonatan Geremias (PUCPR)
  • Eduardo K. Viegas (PUCPR)
  • Altair O. Santin (PUCPR)
  • Jackson Mallmann (PUCPR / IFC)

Abstract

Mobile devices are currently widely used by minors. Such devices have Internet access, which also allows them to be used to view pornographic content. In this context, this paper proposes a new context-based approach for the real-time detection of pornographic videos for parental control. From the sequence of frames of a video, motion descriptors extract information to feed a CNN model, which in turn provides input to a shallow classifier. Experimental results show that the proposed approach achieved 93.62% accuracy while running on a resource-constrained device.
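The pipeline summarized above (motion descriptors feeding a CNN, whose features drive a shallow classifier) can be sketched roughly as follows. This is a minimal illustration under assumptions of ours, not the authors' implementation: the dense Farnebäck optical flow, the MobileNet backbone, the Gaussian Naive Bayes classifier, and all file names and parameters are placeholders chosen for the sketch.

```python
# Minimal sketch: motion descriptor -> CNN features -> shallow classifier.
# All model choices, file names, and parameters below are illustrative assumptions.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input

def motion_descriptor(video_path, size=(224, 224)):
    """Accumulate per-pixel optical-flow magnitudes over consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    _, prev = cap.read()
    prev_gray = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
    acc = np.zeros(size[::-1], dtype=np.float32)  # (rows, cols)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)  # motion magnitude per pixel
        prev_gray = gray
    cap.release()
    # Replicate the motion map into 3 channels so it matches the CNN input shape.
    desc = np.repeat(acc[..., None], 3, axis=2)
    return preprocess_input(cv2.normalize(desc, None, 0, 255, cv2.NORM_MINMAX))

# Lightweight CNN used as a fixed feature extractor (mobile-friendly backbone).
cnn = MobileNet(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))

def features(video_paths):
    maps = np.stack([motion_descriptor(p) for p in video_paths])
    return cnn.predict(maps, verbose=0)

# Hypothetical labelled data: video file paths and binary labels (placeholders).
train_videos, train_labels = ["benign_01.mp4", "porn_01.mp4"], [0, 1]
test_videos = ["unknown_clip.mp4"]

# Shallow classifier trained on the CNN features.
clf = GaussianNB()
clf.fit(features(train_videos), train_labels)
print(clf.predict(features(test_videos)))  # 1 = pornographic, 0 = benign
```

A model intended for on-device execution would likely be further compressed or converted to a mobile runtime, but that step is outside the scope of this sketch.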

Published
16/09/2024
GEREMIAS, Jhonatan; VIEGAS, Eduardo K.; SANTIN, Altair O.; MALLMANN, Jackson. Detecção de Mídias Pornográficas em Dispositivos com Recursos Limitados para Controle Parental. In: SIMPÓSIO BRASILEIRO DE SEGURANÇA DA INFORMAÇÃO E DE SISTEMAS COMPUTACIONAIS (SBSEG), 24., 2024, São José dos Campos/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 256-270. DOI: https://doi.org/10.5753/sbseg.2024.241486.
