Proportionate Adaptive Filtering Algorithms Derived Using an Iterative Reweighting Framework.

Ching-Hua Lee, Bhaskar D. Rao, Harinath Garudadri
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing (2020)
In this paper, based on sparsity-promoting regularization techniques from the sparse signal recovery (SSR) area, least mean square (LMS)-type sparse adaptive filtering algorithms are derived. The approach mimics the iterative reweighted ℓ2 and ℓ1 SSR methods that majorize the regularized objective function during the optimization process. We show that introducing the majorizers leads to the same algorithm as simply using the gradient update of the regularized objective function, as is done in existing approaches. Different from past works, the reweighting formulation naturally leads to an affine scaling transformation (AST) strategy, which effectively introduces a diagonal weighting on the gradient, giving rise to new algorithms that demonstrate improved convergence properties. Interestingly, setting the regularization coefficient to zero in the proposed AST-based framework leads to the Sparsity-promoting LMS (SLMS) and Sparsity-promoting Normalized LMS (SNLMS) algorithms, which exploit but do not strictly enforce the sparsity of the system response if it already exists. The SLMS and SNLMS realize proportionate adaptation for convergence speedup should sparsity be present in the underlying system response. In this manner, we develop a new way of rigorously deriving a large class of proportionate algorithms, and also explain why they are useful in applications where the underlying systems admit certain sparsity, e.g., in acoustic echo and feedback cancellation.
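To make the diagonal-weighting idea concrete, below is a minimal sketch of a proportionate, NLMS-style update in which each coefficient receives a step size proportional to its current magnitude. This is an illustrative, assumed form of proportionate adaptation (the function name, the specific weighting rule, and the parameters mu, eps, and delta are choices made here for the example), not the exact SLMS/SNLMS update derived in the paper.

```python
import numpy as np

def proportionate_nlms_sketch(x, d, num_taps, mu=0.5, eps=1e-3, delta=1e-6):
    """Illustrative NLMS-style filter with a diagonal (proportionate) gradient
    weighting built from the current coefficient magnitudes.

    NOTE: a simplified sketch of the idea in the abstract, not the paper's
    exact SLMS/SNLMS algorithm.
    """
    w = np.zeros(num_taps)            # adaptive filter coefficients
    N = len(x)
    y = np.zeros(N)                   # filter output
    e = np.zeros(N)                   # error signal
    for n in range(num_taps, N):
        u = x[n - num_taps:n][::-1]   # regressor (most recent sample first)
        y[n] = w @ u
        e[n] = d[n] - y[n]
        # Diagonal weighting: larger-magnitude taps get proportionally larger
        # step sizes (assumed proportionate rule for illustration).
        g = np.abs(w) + eps
        g = g / g.sum() * num_taps    # normalize so the weights average to 1
        gu = g * u
        # NLMS-style normalization by the weighted regressor energy.
        w += mu * e[n] * gu / (u @ gu + delta)
    return w, y, e
```

In this sketch the weighting vector g plays the role of the diagonal matrix introduced by the AST strategy: when the underlying system response is sparse, the few large taps adapt quickly while the near-zero taps receive small updates, which is the convergence-speedup behavior the abstract attributes to proportionate adaptation.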