
Computing Generalized Convolutions Faster Than Brute Force

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, Karol Węgrzycki
Published in: Algorithmica (2023)
In this paper, we consider a general notion of convolution. Let $D$ be a finite domain and let $D^n$ be the set of $n$-length vectors (tuples) over $D$. Let $f : D \times D \to D$ be a function and let $\oplus_f$ be the coordinate-wise application of $f$. The $f$-Convolution of two functions $g, h : D^n \to \{-M, \ldots, M\}$ is defined, for every $v \in D^n$, as $(g \circledast_f h)(v) := \sum_{v_g, v_h \in D^n \,:\, v = v_g \oplus_f v_h} g(v_g) \cdot h(v_h)$. This problem generalizes many fundamental convolutions, such as Subset Convolution, XOR Product, Covering Product, and Packing Product. For an arbitrary function $f$ and domain $D$, the $f$-Convolution can be computed via brute-force enumeration in $\widetilde{O}(|D|^{2n} \cdot \mathrm{polylog}(M))$ time. Our main result is an improvement over this naive algorithm. We show that $f$-Convolution can be computed exactly in $\widetilde{O}((c \cdot |D|^2)^n \cdot \mathrm{polylog}(M))$ time for the constant $c := 3/4$ when $D$ has even cardinality. Our main observation is that a cyclic partition of a function $f : D \times D \to D$ can be used to speed up the computation of $f$-Convolution, and we show that an appropriate cyclic partition exists for every $f$. Furthermore, we demonstrate that a single entry of the $f$-Convolution can be computed more efficiently. In this variant, we are given two functions $g, h : D^n \to \{-M, \ldots, M\}$ together with a vector $v \in D^n$, and the task of the $f$-Query problem is to compute the integer $(g \circledast_f h)(v)$. This is a generalization of the well-known Orthogonal Vectors problem. We show that $f$-Query can be computed in $\widetilde{O}(|D|^{\frac{\omega}{2} n} \cdot \mathrm{polylog}(M))$ time, where $\omega \in [2, 2.372)$ is the exponent of the currently fastest matrix multiplication algorithm.
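For concreteness, the brute-force baseline from the abstract can be sketched in a few lines of Python. The function name `f_convolution` and the dictionary representation of the tables $g$ and $h$ are our own illustration, not code from the paper; this is the naive $\widetilde{O}(|D|^{2n})$ enumeration that the paper's cyclic-partition technique improves upon.

```python
from itertools import product

def f_convolution(f, g, h, D, n):
    """Brute-force f-Convolution in O(|D|^(2n)) table operations:
    enumerate all pairs (v_g, v_h) in D^n x D^n and add
    g(v_g) * h(v_h) to the entry for v = v_g (+)_f v_h."""
    result = {v: 0 for v in product(D, repeat=n)}
    for v_g in product(D, repeat=n):
        for v_h in product(D, repeat=n):
            v = tuple(f(a, b) for a, b in zip(v_g, v_h))  # coordinate-wise f
            result[v] += g[v_g] * h[v_h]
    return result

# Example: D = {0, 1} with f = OR recovers the Covering Product.
D, n = (0, 1), 2
g = {v: 1 for v in product(D, repeat=n)}  # all-ones input tables
h = {v: 1 for v in product(D, repeat=n)}
conv = f_convolution(lambda a, b: a | b, g, h, D, n)
print(conv[(1, 1)])  # 9: each coordinate has 3 OR-decompositions of 1
```

Evaluating only a single output entry in this way (the $f$-Query variant) still takes $|D|^{2n}$ pairs by brute force; the paper's matrix-multiplication-based algorithm reduces this to $\widetilde{O}(|D|^{\frac{\omega}{2} n})$ time.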