Even if the matrix is not sparse, this iterative method is the way to go.
Computing a matrix-vector product A.dot(q) has complexity O(N^2), while a matrix-matrix product A.dot(A^i) has complexity O(N^3).
The fact that q is sparse (indeed much more sparse than A) may help.
For the first iteration, A*q can be computed directly as A[q_hot_index,:].T.
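To illustrate that shortcut, here is a small sketch; it assumes q is one-hot at q_hot_index and A is symmetric (e.g. an undirected adjacency matrix), so the row slice equals the column slice (for a general A, the column A[:, q_hot_index] is the slice that equals A @ q):

```python
import numpy as np

N = 1_000
rng = np.random.default_rng(0)
A = (rng.random((N, N)) < 0.1).astype(float)  # ~10% dense 0/1 matrix
A = np.triu(A, 1) + np.triu(A, 1).T           # symmetrize so the row slice equals the column slice

q_hot_index = 42                              # assumed: q is one-hot at this index
q = np.zeros(N)
q[q_hot_index] = 1.0

# For a one-hot q the first product is just a slice, no O(N^2) matvec needed.
first = A[q_hot_index, :].T                   # == A[:, q_hot_index] here since A is symmetric
assert np.allclose(first, A @ q)
```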
For the second iteration, the input A @ q has the same expected density as A (about 10%), so it is still worth doing the product sparsely. From the third iteration onwards, A^i @ q will be dense.
Since you are accumulating the result, it is good that your r is not sparse; it avoids the index manipulation a sparse accumulator would require.
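Putting the pieces together, here is a minimal sketch of the iterative evaluation; it assumes the polynomial coefficients are stored in an array coeffs, with coeffs[i] multiplying A^i (the names and test data are illustrative, not taken from your code):

```python
import numpy as np

def poly_A_times_q(A, q, coeffs):
    """Evaluate P(A) @ q = sum_i coeffs[i] * A^i @ q using only
    matrix-vector products: O(D * N^2) instead of O(D * N^3)."""
    r = coeffs[0] * q        # dense accumulator, no sparse index bookkeeping
    v = q.copy()
    for c in coeffs[1:]:
        v = A @ v            # one O(N^2) matvec per degree
        r += c * v
    return r

# Illustrative usage with a ~10% dense matrix and a one-hot q.
N = 500
rng = np.random.default_rng(0)
A = (rng.random((N, N)) < 0.1).astype(float)
q = np.zeros(N)
q[3] = 1.0
coeffs = np.array([1.0, 0.5, 0.25, 0.125])   # P(x) = 1 + x/2 + x^2/4 + x^3/8
r = poly_A_times_q(A, q, coeffs)
```

If A is stored as a scipy.sparse matrix, the same loop should work unchanged, since A @ v returns a dense ndarray.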
Eigen decomposition is good when you need to compute P(A) itself; for computing P(A)*q it becomes advantageous only when the degree of P(A) is on the order of the size of A. The eigen decomposition has complexity O(N^3) and a matrix-vector product has complexity O(N^2), while evaluating a polynomial P(A) of degree D using the eigen decomposition can be achieved in O(N^3 + N*D).
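For comparison, a sketch of the eigen-decomposition route, assuming A is symmetric so np.linalg.eigh applies (for a general diagonalizable A you would use np.linalg.eig and the inverse of the eigenvector matrix). The O(N^3) decomposition is paid once; after that, the polynomial is evaluated on the eigenvalues in O(N*D) and applied to q with two O(N^2) matrix-vector products:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def poly_A_times_q_eig(A, q, coeffs):
    """Evaluate P(A) @ q via A = V diag(w) V.T (symmetric A):
    P(A) @ q = V @ (P(w) * (V.T @ q))."""
    w, V = np.linalg.eigh(A)            # O(N^3), done once
    Pw = P.polyval(w, coeffs)           # P evaluated at every eigenvalue, O(N * D)
    return V @ (Pw * (V.T @ q))         # two O(N^2) matvecs

# Sanity check on a small symmetric matrix.
N = 200
rng = np.random.default_rng(1)
M = (rng.random((N, N)) < 0.1).astype(float)
A = (M + M.T) / 2                        # symmetrize for eigh
q = np.zeros(N)
q[0] = 1.0
coeffs = np.array([1.0, 0.5, 0.25])      # P(x) = 1 + x/2 + x^2/4
expected = coeffs[0] * q + coeffs[1] * (A @ q) + coeffs[2] * (A @ (A @ q))
assert np.allclose(poly_A_times_q_eig(A, q, coeffs), expected)
```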