Better and simpler error analysis of the Sinkhorn-Knopp algorithm for matrix scaling
Type:
Article
Authors:
Chakrabarty, Deeparnab; Khanna, Sanjeev
Affiliations:
Dartmouth College; University of Pennsylvania
Journal:
MATHEMATICAL PROGRAMMING
ISSN/ISBN:
0025-5610
DOI:
10.1007/s10107-020-01503-3
Publication date:
2021
Pages:
395-407
Keywords:
diagonal equivalence
nonnegative matrices
convergence
minimization
complexity
Ratcliff
Darroch
row
Abstract:
Given a non-negative n × m real matrix A, the matrix scaling problem is to determine whether it is possible to scale the rows and columns so that each row and each column sums to a specified positive target value. The Sinkhorn-Knopp algorithm is a simple and classic procedure that alternately scales all rows and all columns to meet these targets. The focus of this paper is the worst-case theoretical analysis of this algorithm. We present an elementary convergence analysis that improves upon the previous best bound. In a nutshell, our approach is (i) to show a simple bound on the number of iterations needed for the KL-divergence between the current row sums and the target row sums to drop below a specified threshold δ, and (ii) to show that for a suitable choice of δ, whenever the KL-divergence is below δ, the ℓ1-error or the ℓ2-error is below ε. The well-known Pinsker's inequality immediately allows us to translate a bound on the KL-divergence into a bound on the ℓ1-error. To bound the ℓ2-error in terms of the KL-divergence, we establish a new inequality, referred to as (KL vs ℓ1/ℓ2). This inequality is a strengthening of Pinsker's inequality and may be of independent interest.
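The alternating row/column scaling procedure described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's analysis; the function name, the stopping criterion (ℓ1-error of the row sums), and the default tolerances are assumptions for the sake of the example.

```python
def sinkhorn_knopp(A, r, c, eps=1e-6, max_iters=1000):
    """Alternately scale rows then columns of a non-negative matrix A
    (a list of lists) toward target row sums r and column sums c.
    Illustrative sketch only; stopping rule and defaults are assumptions."""
    n, m = len(A), len(A[0])
    B = [row[:] for row in A]
    for _ in range(max_iters):
        # Scale each row i so that its sum equals r[i].
        for i in range(n):
            s = sum(B[i])
            if s > 0:
                B[i] = [r[i] * x / s for x in B[i]]
        # Scale each column j so that its sum equals c[j].
        for j in range(m):
            s = sum(B[i][j] for i in range(n))
            if s > 0:
                for i in range(n):
                    B[i][j] = c[j] * B[i][j] / s
        # Stop once the l1-error of the row sums drops below eps;
        # after the column step, the column sums are met exactly.
        err = sum(abs(sum(B[i]) - r[i]) for i in range(n))
        if err < eps:
            break
    return B
```

With r = c = (1, …, 1), this scales a positive square matrix toward a doubly stochastic one, the classical setting of Sinkhorn and Knopp.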
Source URL: