\begin{abstract}{\color{Abstract_color}
We consider the projection onto the $\ell_{1,\infty}$ norm ball for solving group sparse optimization problems arising in multi-task learning.
Building on the primal-dual gradient (PDG) method, we present a novel primal-dual Newton method, which updates the primal and dual iterates incrementally with Newton steps.
We exploit the problem-inherent structure so that a closed form for the primal-dual Newton steps can be derived easily.
Compared with existing algorithms, our approach neither needs to maintain primal feasibility nor requires solving $\ell_1$ ball projection subproblems.
We prove that our algorithm terminates after a finite number of iterations.
Numerical simulations on synthetic and real-world data show that our proposed method is faster than the previous state-of-the-art.
}\end{abstract}