TY - JOUR
AU - Nóbrega, Caio B.
AU - Marinho, Leandro B.
PY - 2014/07/18
Y2 - 2024/02/21
TI - Predicting the Learning Rate of Gradient Descent for Accelerating Matrix Factorization
JF - Journal of Information and Data Management
JA - JIDM
VL - 5
IS - 1
SE - KDMiLe 2013
DO - 10.5753/jidm.2014.1523
UR - https://sol.sbc.org.br/journals/index.php/jidm/article/view/1523
SP - 94
AB - Matrix Factorization (MF) has become the predominant technique in recommender systems. The model parameters are usually learned by means of numerical methods, such as gradient descent. The learning rate of gradient descent is typically set to lower values in order to ensure that the algorithm will not miss a local optimum. As a consequence, the algorithm may take several iterations to converge. Ideally, one wants to find the learning rate that will lead to a local optimum in the first iterations, but that is very difficult to achieve given the high complexity of the search space. Starting with an exploratory analysis on several recommender systems datasets, we observed that there is an overall linear relationship between the learning rate and the number of iterations needed until convergence. Another key observation is that this relationship holds across the different recommender datasets chosen. From this, we propose to use simple linear regression models for predicting, for an unknown dataset, a good learning rate to start with. The idea is to estimate a learning rate that will get us as close as possible to a local optimum in the first iteration, without overshooting it. We evaluate our approach on 8 real-world recommender datasets and compare it against the standard learning algorithm, which uses a fixed learning rate, and adaptive learning rate strategies from the literature. We show that, for some datasets, we can reduce the number of iterations by up to 40% when compared to the standard approach.
ER -