Abstract
This work presents a parallel variant of the algorithm introduced in [Acceleration of block coordinate descent methods with identification strategies, Comput. Optim. Appl. 72(3):609–640, 2019] for minimizing the sum of a partially separable smooth convex function and a possibly nonsmooth block-separable convex function under simple constraints. The proposed method gains efficiency from a strategy that identifies the nonzero coordinates of the iterates, which allows the computational effort to be concentrated by selecting blocks according to a nonuniform probability distribution. Parallelization is achieved by extending theoretical results from [Richtárik and Takáč, Parallel coordinate descent methods for big data optimization, Math. Prog. Ser. A 156:433–484, 2016]. We present convergence results and comparative numerical experiments on regularized regression problems using both synthetic and real datasets.
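To give a flavor of the identification-based nonuniform sampling idea described above, the following is a minimal toy sketch, not the paper's algorithm: a serial randomized proximal coordinate descent for L1-regularized least squares in which coordinates currently estimated to be in the support receive a larger share of the sampling probability (the `bias` parameter and the support-estimation rule are illustrative assumptions).

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def nonuniform_prox_cd(A, b, lam, n_iters=5000, bias=0.9, seed=0):
    """Toy randomized proximal coordinate descent for
    (1/2)||Ax - b||^2 + lam * ||x||_1, sampling coordinates from a
    nonuniform distribution biased toward the currently nonzero
    coordinates (a stand-in for an identification strategy).
    Illustrative only -- not the method analyzed in the paper."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0)       # coordinate-wise Lipschitz constants
    resid = A @ x - b              # maintained residual A x - b
    for _ in range(n_iters):
        support = x != 0
        p = np.full(n, 1.0 / n)
        if support.any() and not support.all():
            # Place `bias` of the probability mass on the estimated
            # support; keep every coordinate's probability positive.
            p[support] = bias / support.sum()
            p[~support] = (1.0 - bias) / (~support).sum()
        i = rng.choice(n, p=p)
        g = A[:, i] @ resid        # partial gradient w.r.t. coordinate i
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        resid += A[:, i] * (x_new - x[i])
        x[i] = x_new
    return x
```

Because every coordinate keeps a positive sampling probability, the iteration remains a valid randomized coordinate descent; the bias simply concentrates updates on coordinates expected to be nonzero at the solution.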