3/26/2021

Matrix Multiplication Inches Closer to Mythic Goal

Kevin Hartnett, Matrix Multiplication Inches Closer to Mythic Goal, Quanta Magazine, March 23, 2021.

“Exponent two” refers to the ideal speed — in terms of number of steps required — of performing one of the most fundamental operations in math: matrix multiplication. If exponent two is achievable, then it’s possible to carry out matrix multiplication as fast as physically possible. If it’s not, then we’re stuck in a world misfit to our dreams.
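To make the step count concrete, here is the schoolbook algorithm, which uses n^3 scalar multiplications for two n-by-n matrices; "exponent two" would mean getting the same answer in roughly n^2 steps. (A minimal Python sketch; the function name is mine, not from the article.)

```python
import numpy as np

def naive_multiply(A, B):
    """Schoolbook matrix multiplication: three nested loops,
    n * n * n = n^3 scalar multiplications in total."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):          # row of the result
        for j in range(n):      # column of the result
            for k in range(n):  # inner dimension
                C[i, j] += A[i, k] * B[k, j]
    return C
```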


Reduction:
Say, for example, you want to multiply a pair of eight-by-eight matrices. One way to do it is to break each big matrix into four four-by-four blocks. Because a matrix can have entries that are themselves matrices, you can then think of each original matrix as a two-by-two matrix whose four entries are each four-by-four matrices. Through the same manipulation, those four-by-four matrices can themselves each be broken into four two-by-two matrices.

The point of this reduction — of repeatedly breaking bigger matrices into smaller matrices — is that it’s possible to apply Strassen’s algorithm over and over again to the smaller matrices, reaping the savings of his method at each step. Altogether, Strassen’s algorithm improved the speed of matrix multiplication from n^3 to n^2.81 multiplicative steps.
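The seven-products trick is easiest to see in code. Below is a minimal Python sketch of Strassen's algorithm, assuming the matrices are square with power-of-two size; the names M1 through M7 follow the usual textbook presentation.

```python
import numpy as np

def strassen(A, B):
    """Strassen's algorithm: multiply two n-by-n matrices (n a power
    of two) using 7 recursive block multiplications instead of 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # Split each matrix into four half-size blocks.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven block products (each recurses on half-size matrices).
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four blocks of the product from the seven M's.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

Seven multiplications per halving step instead of eight is exactly where the exponent log_2 7 ≈ 2.81 comes from.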
Laser method via tensors:
The next big improvement took place in the late 1970s, with a fundamentally new way to approach the problem. It involves translating matrix multiplication into a different computational problem in linear algebra involving objects called tensors. The particular tensors used in this problem are three-dimensional arrays of numbers composed of many different parts, each of which looks like a small matrix multiplication problem.
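For a sense of what these objects look like, here is a sketch of the smallest one, the two-by-two matrix multiplication tensor: a 4×4×4 array of zeros and ones whose 1-entries record which products A[i,j]·B[j,k] feed into which entries of the result. (The trace-form index convention used below is one standard choice, not something spelled out in the article.)

```python
import numpy as np

n = 2
# The <2,2,2> matrix multiplication tensor: the three axes index the
# entries of A, B, and C. An entry of T is 1 exactly when A[i,j] * B[j,k]
# contributes to entry (i,k) of the product; the third axis is indexed
# by (k,i), the trace-form convention.
T = np.zeros((n * n, n * n, n * n), dtype=int)
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, k * n + i] = 1
```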

Matrix multiplication and this problem involving tensors are equivalent to each other in a sense, yet researchers already had faster procedures for solving the latter one. This left them with the task of determining the exchange rate between the two: How big are the matrices you can multiply for the same computational cost that it takes to solve the tensor problem?
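One standard form of that exchange rate (a known fact, not a quote from the article): if the tensor for n-by-n matrix multiplication can be written as a sum of r rank-one pieces, then the exponent satisfies ω ≤ log_n r. Strassen's identity gives rank 7 for n = 2:

```python
import math

# Rank-7 decomposition of the 2x2 tensor  =>  omega <= log_2(7).
print(math.log(7, 2))  # ~2.807, the exponent behind n^2.81
```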

“This is a very common paradigm in theoretical computer science, of reducing between problems to show they’re as easy or as hard as each other,” Alman said.

In 1981 Arnold Schönhage used this approach to prove that it’s possible to perform matrix multiplication in n^2.522 steps. Strassen later called this approach the “laser method.”

Josh Alman and Virginia Vassilevska Williams, A Refined Laser Method and Faster Matrix Multiplication, Proceedings of the 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2021).

The main result of this paper is a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors; applied to the Coppersmith-Winograd family of tensors, it yields the record bound ω < 2.37286 on the matrix multiplication exponent.
