Time-efficient matrix elements grouping and summing

I'm interested in finding the quickest way of grouping the elements of a large matrix into sub-groups of NxM elements and then summing them together.
To be completely clear, I'm not actually interested in the "regrouped" matrix, but only in the final result where the elements are summed.



"Standard matrix" case



I'll show you an example below:



Say I have the following 8x9 matrix:



test = Array[Subscript[a, ##] &, {8, 9}]







I regroup it into NxM sub-matrices, in this example 2x3 (2 rows by 3 columns):



subtest = Partition[test, {2, 3}]







and then I sum the elements of each sub-matrix (as suggested in the comments):



out = MapAt[Total[#, -1] &, subtest, {All, All}];







I could use other ways of summing the subgroups, for example:



 out = Total /@ Flatten /@ # & /@ subtest;


Or using two nested tables, or for loops, etc.
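For concreteness, here is one way the nested-Table variant could look (an illustrative sketch added by the editor, not code from the original post; the block sizes and the 8x9 dimensions are hard-coded for this example):

(* sum each 2x3 block of the 8x9 matrix test with two nested Table iterators *)
out = Table[
   Total[test[[2 i - 1 ;; 2 i, 3 j - 2 ;; 3 j]], 2],
   {i, 4}, {j, 3}];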



My question is: what is the fastest method for doing this? I need to do it on a 48k x 48k matrix, so I really need something reasonably quick.
Should I look into compiling nested For loops in C (I'm not sure, I haven't ever tried)?



Something worth mentioning is that the entries of the matrix are all integers greater than or equal to 0.



EDIT: as pointed out in the comments below, it's important to consider that most of the entries of the matrix (>99%) are zeroes. This might encourage a sparse array approach.



I'll add a (redundant) example with numeric values, which can however be modified for larger matrices:



test = RandomInteger[1, {8, 9}];



{{0, 0, 1, 0, 1, 1, 0, 0, 0},
 {1, 1, 0, 1, 0, 0, 1, 0, 0},
 {0, 0, 1, 1, 1, 1, 0, 0, 0},
 {0, 0, 1, 0, 0, 1, 1, 0, 1},
 {0, 0, 1, 1, 0, 0, 1, 1, 1},
 {1, 0, 0, 1, 0, 1, 1, 1, 1},
 {1, 0, 1, 1, 1, 0, 1, 1, 0},
 {0, 0, 1, 1, 1, 0, 1, 0, 1}}




m = 3
n = 2
out = MapAt[Total[#, -1] &, Partition[test, {n, m}], {All, All}]



{{3, 3, 1}, {2, 4, 2}, {2, 3, 6}, {3, 4, 4}}




Sparse array case



EDIT: In light of the very useful discussion below, I'd like to add a "second question" (which is not really a different question):
how can the same procedure described above be done when the input matrix is instead a sparse array?



Here is sample code for testing with a small sparse array:



test = SparseArray[{{5, 5} -> 1, {2, 2} -> 2, {3, 3} -> 3, {5, 3} -> 4}, {8, 9}];


and sample code for testing with an nxn matrix where 99% of the entries are 0:



n = 100;
entries = {#[[1]], #[[2]]} -> #[[3]] & /@ RandomInteger[{1, n}, {Ceiling[n*0.01], 3}];
SparseArray[Flatten@entries, {n, n}] // MatrixForm
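Editor's note: a minimal sketch (not part of the original question) of how the block sums can be obtained for this sparse sample without unpacking it, using the sparse "aggregator" matrices developed in the first answer below; rowAgg and colAgg are names introduced here for illustration.

test = SparseArray[{{5, 5} -> 1, {2, 2} -> 2, {3, 3} -> 3, {5, 3} -> 4}, {8, 9}];

(* rowAgg (4x8) sums pairs of rows; colAgg (9x3) sums triples of columns *)
rowAgg = KroneckerProduct[IdentityMatrix[4, SparseArray], SparseArray@ConstantArray[1, {1, 2}]];
colAgg = KroneckerProduct[IdentityMatrix[3, SparseArray], SparseArray@ConstantArray[1, {3, 1}]];

blockSums = rowAgg.test.colAgg;
Normal[blockSums] == Total[Partition[Normal[test], {2, 3}], {3, 4}]
(* True *)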









list-manipulation matrix

asked Feb 25 at 17:26, edited Feb 26 at 10:17 – Fraccalo
  • No need to map if you use the second argument of Total: Total[Partition[test, {2, 3}], {3, 4}]. – J. M. is slightly pensive, Feb 25 at 17:36






  • @ukar: Wrong. When the integers do not exceed the bounds for machine integers (64-bit integers), Mathematica has a chance to use packed arrays and machine-integer computations. And indeed, the matrices generated by RandomInteger[1, {n, n}] are packed, which can be checked with Developer`PackedArrayQ[test]. – Henrik Schumacher, Feb 25 at 19:02







  • The SVD of a sparse array is not guaranteed to be sparse, so you may need to be more clever than usual if your matrices are large enough to stress your machine's memory. As for file formats that can be handled by Mathematica, look up Harwell-Boeing or Matrix Market. – J. M. is slightly pensive, Feb 26 at 9:28







  • I thought this question might be a nice opportunity to try BlockMap, but after some superficial tests I have to conclude that it really isn't very fast... – Sjoerd Smit, Feb 26 at 12:39






  • @J.M.iscomputer-less the .mtx does exactly what I needed, thx! Sparse arrays can be exported from python as .mtx and imported with MMA with basically zero effort! – Fraccalo, Feb 27 at 8:46
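Editor's note: a minimal sketch of the .mtx workflow mentioned in the last comment (not from the thread; "matrix.mtx" is a placeholder file name, and the SciPy export happens on the Python side):

(* a Matrix Market file, e.g. written from Python with scipy.io.mmwrite,
   imports directly and typically comes back as a SparseArray *)
sparse = Import["matrix.mtx", "MTX"];

(* a SparseArray can be written back out the same way *)
Export["matrix.mtx", SparseArray[{{5, 5} -> 1, {2, 2} -> 2}, {8, 9}], "MTX"];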















2 Answers

Partition[test, {2, 3}] is quite slow in this case because it has to rearrange the elements in the data vector that represents the entries of a packed array in the backend:



Flatten[test] == Flatten[Partition[test, {2, 3}]]



False




Using Span (;;) as follows employs 6 monotonically increasing read operations; in this specific case, these operations are faster than using Partition:



n = 24000;
test = RandomInteger[1, {n, n}];

a = Total[Partition[test, {2, 3}], {3, 4}]; // AbsoluteTiming // First
b = Sum[test[[i ;; ;; 2, j ;; ;; 3]], {i, 1, 2}, {j, 1, 3}]; // AbsoluteTiming // First
a == b



245.89



117.943



True
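Editor's note: a small-scale check (not part of the original answer) of why the Span version gives the block sums: each of the six strided slices test[[i ;; ;; 2, j ;; ;; 3]] picks exactly one element out of every 2x3 block, so adding the six slices reproduces the Total/Partition result.

small = RandomInteger[10, {8, 9}];
Sum[small[[i ;; ;; 2, j ;; ;; 3]], {i, 1, 2}, {j, 1, 3}] ==
 Total[Partition[small, {2, 3}], {3, 4}]
(* True *)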




However, this performance advantage seems to decay when the matrix test becomes bigger (so that swapping is required). E.g., for $n = 4800$, method b is about ten times faster than a; for larger sizes it's only a factor of 4.6, and here, at $n = 24000$, it has degraded to a factor of 2 or so...



SparseArray method



Have I said already that I love SparseArrays?



AbsoluteTiming[
  c = Dot[
    KroneckerProduct[
     IdentityMatrix[n/2, SparseArray],
     ConstantArray[1, {1, 2}]
     ],
    Dot[
     test,
     KroneckerProduct[
      IdentityMatrix[n/3, SparseArray],
      ConstantArray[1, {3, 1}]
      ]
     ]
    ]
  ][[1]]
a == c



76.3822



True
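Editor's note (not part of the original answer): the two KroneckerProduct factors are just sparse aggregator matrices; the left one adds pairs of rows (applied as L.test), the right one adds triples of columns (test.R). For a tiny n = 6 they look like this:

Normal@KroneckerProduct[IdentityMatrix[3, SparseArray], ConstantArray[1, {1, 2}]]
(* {{1, 1, 0, 0, 0, 0}, {0, 0, 1, 1, 0, 0}, {0, 0, 0, 0, 1, 1}} *)

Normal@KroneckerProduct[IdentityMatrix[2, SparseArray], ConstantArray[1, {3, 1}]]
(* {{1, 0}, {1, 0}, {1, 0}, {0, 1}, {0, 1}, {0, 1}} *)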




The story goes on...



A combination of the SparseArray method from above with a CompiledFunction:



cf = Compile[{{x, _Integer, 1}, {k, _Integer}},
   Table[
    Sum[Compile`GetElement[x, i + j], {j, 1, k}],
    {i, 0, Length[x] - 1, k}],
   CompilationTarget -> "C",
   RuntimeAttributes -> {Listable},
   Parallelization -> True,
   RuntimeOptions -> "Speed"
   ];
d = KroneckerProduct[
     IdentityMatrix[n/2, SparseArray],
     ConstantArray[1, {1, 2}]
     ].cf[test, 3]; // AbsoluteTiming // First
a == d



33.5677



True
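Editor's note: a quick check (not from the original answer) of what cf computes. Applied to a single integer vector it sums consecutive groups of k elements; thanks to RuntimeAttributes -> {Listable} and Parallelization -> True, cf[test, 3] does this to every row of test in parallel, and the remaining sparse left factor then adds pairs of rows.

cf[{1, 2, 3, 4, 5, 6}, 3]
(* {6, 15} *)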







answered Feb 25 at 18:43, edited Feb 25 at 19:44 – Henrik Schumacher

  • Thanks! The sparse array will need some studying on my side to fully understand what's going on there, but it looks quite promising in terms of speed-up! – Fraccalo, Feb 25 at 18:57











  • @Fraccalo Have a look at my latest edits. I seem to have found a method that is twice as fast. – Henrik Schumacher, Feb 25 at 19:46






  • This looks amazing, thank you so much @Henrik Schumacher! I'll go through the details first thing tomorrow morning; tons of stuff to learn (I'm not very familiar with the Compile function, and haven't used sparse arrays more than a couple of times so far, so I really have a lot of new things to learn here :) ) – Fraccalo, Feb 25 at 22:18










  • Very nice use of KroneckerProduct! – J. M. is slightly pensive, Feb 26 at 8:24










  • Yeah, thank you @J.M.! – Henrik Schumacher, Feb 26 at 8:42



















These two aren't faster, but I found them of interest in that they pose the problem in a different way. The Downsample command is easy to describe and use, but slower than the direct ;; command as I am building the matrices.



From above, for comparison:



n = 6000;
test = RandomInteger[100, {n, n}];

a = Total[Partition[test, {2, 3}], {3, 4}]; // AbsoluteTiming // First
b = Sum[test[[i ;; ;; 2, j ;; ;; 3]], {i, 1, 2}, {j, 1, 3}]; // AbsoluteTiming // First
a == b



1.72742



0.402294



True




New methods



(c = Sum[Downsample[test, {2, 3}, {i, j}], {i, 2}, {j, 3}]) // AbsoluteTiming // First
a == c



2.12463



True




An experiment with ListConvolve. If I could get it to "bound" through the target matrix, it could be pretty fast, as I am throwing out 5/6 of the effort below. I know ListConvolve does take advantage of sparse matrices. Not sure how to exploit that.



kernel = {{1, 1, 1}, {1, 1, 1}};
d = Downsample[ListCorrelate[kernel, test], {2, 3}]; // AbsoluteTiming // First
a == d



3.21



True
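Editor's note: a small sanity check of this chain (not from the original answer). ListCorrelate computes the 2x3 window sum at every position, and Downsample then keeps only every 2nd row and every 3rd column, which is exactly the block-sum matrix:

small = RandomInteger[10, {8, 9}];
Downsample[ListCorrelate[{{1, 1, 1}, {1, 1, 1}}, small], {2, 3}] ==
 Total[Partition[small, {2, 3}], {3, 4}]
(* True *)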







answered Feb 25 at 22:37 – MikeY

  • To elaborate on the ListConvolve method: you could use ConvolutionLayer with a "Stride" option instead. Something like conv = NetReplacePart[NetInitialize[ConvolutionLayer[1, {2, 3}, "Input" -> {1, n, n}, "Stride" -> {2, 3}]], {"Weights" -> ConstantArray[1, {1, 1, 2, 3}], "Biases" -> None}] and then invoke it with Round @ First @ conv[{test}, TargetDevice -> "GPU"]. It's not perfect, but it works. – Sjoerd Smit, Feb 26 at 12:58










  • Thanks! Turns out to be pokey on my computer. Wish the ListConvolve command had a Stride option. – MikeY, Feb 28 at 20:35









