I have two NumPy arrays and a function that takes those arrays as input, does some NumPy calculation, and returns the result. It works as is, but it's slow, and I think we can use multiprocessing to make it a bit faster.
Anyway, here's my code:
import numpy as np

A = ...  # big 4-dimensional NumPy array
B = ...  # 2-dimensional NumPy array (the "ij" subscript in the einsum needs two axes)

def function(A, B):
    P = np.einsum("ijkl,ij->kl", A, B)
    return P.astype(np.uint8)

result = function(A, B)
I'm still quite new to this multiprocessing stuff, but I think we could put arrays A and B into shared memory (maybe using multiprocessing.Array()?) and then spawn multiple processes to compute function(A, B). I still can't quite figure out how to put all of that together in code.
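Here is one sketch of how that could look using multiprocessing.shared_memory (Python 3.8+), which tends to be a better fit for NumPy arrays than multiprocessing.Array(). Everything in it is illustrative, not from the question: the names _worker and parallel_einsum, the float64 dtype, and the choice to split A along its first axis and let each process write a partial sum into its own slot of a shared output buffer.

```python
import numpy as np
from multiprocessing import Process, shared_memory

def _worker(a_name, a_shape, b_name, b_shape, out_name, n_slots, i0, i1, slot):
    """Attach to the parent's shared blocks by name and write one partial sum."""
    a_shm = shared_memory.SharedMemory(name=a_name)
    b_shm = shared_memory.SharedMemory(name=b_name)
    o_shm = shared_memory.SharedMemory(name=out_name)
    A = np.ndarray(a_shape, dtype=np.float64, buffer=a_shm.buf)
    B = np.ndarray(b_shape, dtype=np.float64, buffer=b_shm.buf)
    out = np.ndarray((n_slots,) + a_shape[2:], dtype=np.float64, buffer=o_shm.buf)
    # Partial einsum over a slice of the first axis; each worker writes to
    # its own slot, so the processes never race on the same memory.
    out[slot] = np.einsum("ijkl,ij->kl", A[i0:i1], B[i0:i1])
    del A, B, out  # drop the views before closing, or close() raises BufferError
    for shm in (a_shm, b_shm, o_shm):
        shm.close()

def parallel_einsum(A, B, n_chunks=2):
    """Shared-memory version of function(A, B); splits A/B along axis 0."""
    A = np.ascontiguousarray(A, dtype=np.float64)
    B = np.ascontiguousarray(B, dtype=np.float64)
    K, L = A.shape[2], A.shape[3]
    a_shm = shared_memory.SharedMemory(create=True, size=A.nbytes)
    b_shm = shared_memory.SharedMemory(create=True, size=B.nbytes)
    o_shm = shared_memory.SharedMemory(create=True, size=n_chunks * K * L * 8)
    # Copy the inputs into the shared blocks once; workers attach, not copy.
    np.ndarray(A.shape, dtype=np.float64, buffer=a_shm.buf)[:] = A
    np.ndarray(B.shape, dtype=np.float64, buffer=b_shm.buf)[:] = B
    bounds = np.linspace(0, A.shape[0], n_chunks + 1, dtype=int)
    procs = [Process(target=_worker,
                     args=(a_shm.name, A.shape, b_shm.name, B.shape,
                           o_shm.name, n_chunks, bounds[s], bounds[s + 1], s))
             for s in range(n_chunks)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    out = np.ndarray((n_chunks, K, L), dtype=np.float64, buffer=o_shm.buf)
    # Summing the per-process partial results reproduces the full einsum.
    result = out.sum(axis=0).astype(np.uint8)
    del out
    for shm in (a_shm, b_shm, o_shm):
        shm.close()
        shm.unlink()
    return result
```

Calling parallel_einsum(A, B, n_chunks=4) should match function(A, B) up to floating-point summation order. One caveat: on Windows and macOS (spawn start method), the worker must live in an importable module and the launching code needs an `if __name__ == "__main__":` guard; on Linux (fork) the sketch works as shown.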
EDIT:
Alright, so it seems the approach above doesn't work, but let's try another case. Now say the length of array A is 120, and I want Process No. 1 to use 3/4 of array A, from index 0 to 89, together with all of array B; and Process No. 2 to use 3/4 of array A, from index 30 to 119, again with all of array B. Will that help? Of course, I could make array A even larger so that its parts get computed by even more processes, but the thing is: will this concept work?
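The chunking idea can work, with two adjustments the question's indices would need (both are my reading, not from the question): the slices should not overlap, because the einsum sums over the first axis, so the overlap 30-89 would be counted twice; and since the "ij" subscript contracts the first two axes of both arrays, B has to be sliced along with A rather than used whole. A minimal sketch with multiprocessing.Pool, using illustrative names like chunked_einsum (note that Pool copies each chunk to the worker by pickling, unlike the shared-memory approach):

```python
import numpy as np
from multiprocessing import Pool

def _partial_einsum(args):
    # Each worker gets one slice of A and the matching slice of B.
    A_chunk, B_chunk = args
    return np.einsum("ijkl,ij->kl", A_chunk, B_chunk)

def chunked_einsum(A, B, n_procs=3):
    # Split along the first axis into NON-overlapping chunks; overlapping
    # slices such as 0-89 and 30-119 would sum the shared part twice.
    bounds = np.linspace(0, A.shape[0], n_procs + 1, dtype=int)
    chunks = [(A[i0:i1], B[i0:i1]) for i0, i1 in zip(bounds[:-1], bounds[1:])]
    with Pool(n_procs) as pool:
        partials = pool.map(_partial_einsum, chunks)
    # Adding the partial (k, l) results reproduces the full contraction.
    return np.sum(partials, axis=0).astype(np.uint8)
```

As with any multiprocessing code, on Windows/macOS the call to chunked_einsum must sit under an `if __name__ == "__main__":` guard.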
question from:
https://stackoverflow.com/questions/65917489/python-multiprocess-shared-memory-with-python-numpy-array