Reinforcement Learning on Mobile Devices: Context-Aware Configuration Control
Abstract
Automatic configuration on mobile devices struggles to adapt to real user preferences, and generic adjustments to brightness, volume, and notifications often frustrate users. This paper proposes an embedded reinforcement learning solution capable of dynamically adjusting screen brightness and media volume according to context. The system collects sensor data such as ambient light, location, and meeting status to feed a Q-Learning agent. Actions are refined through manual feedback interpreted as supervised reinforcement, and for unseen states preferences are inferred through contextual similarity. All modules operate offline, with no reliance on cloud services or external connectivity. Results show that the agent learned policies consistent with user behavior, remained stable in recurring contexts, responded appropriately to new scenarios, and reduced the need for manual intervention. The proposed architecture demonstrates the feasibility of embedded autonomous agents that deliver intelligent personalization directly on Android devices.
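To make the approach concrete, the sketch below outlines a minimal tabular Q-Learning loop of the kind the abstract describes: discretized context states built from ambient light, location, and meeting status; brightness/volume adjustment actions; epsilon-greedy exploration; and manual user corrections folded into the reward. All names, thresholds, and hyperparameter values here are illustrative assumptions, not the paper's implementation.

    # Illustrative sketch (assumed names and values), not the authors' code.
    import random
    from collections import defaultdict

    ACTIONS = ["decrease", "keep", "increase"]   # hypothetical adjustment actions
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # assumed hyperparameters

    Q = defaultdict(float)  # Q[(state, action)] -> value; unseen pairs start at 0.0

    def discretize(ambient_light_lux, location, in_meeting):
        # Map raw sensor readings to a small discrete context state.
        light = "dark" if ambient_light_lux < 50 else "bright"
        return (light, location, in_meeting)

    def choose_action(state):
        # Epsilon-greedy selection over the tabular Q-values.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Standard Q-Learning update; a manual user correction can be folded
        # into `reward` as a penalty, acting as the supervised reinforcement signal.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

In the paper's design, states not yet visited are handled by inferring preferences from similar contexts rather than relying solely on a zero-initialized table as in this sketch.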
Published
2025-09-29
How to Cite
BATISTA, Alcilene; SOUZA, Elian; BARRETO, Raimundo. Reinforcement Learning on Mobile Devices: Context-Aware Configuration Control. In: NATIONAL MEETING ON ARTIFICIAL AND COMPUTATIONAL INTELLIGENCE (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 664-675. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.13983.
