Performing a “mean blur”

I perform a mean blur on a number array (called grid in the code): I iterate over the array and calculate, for each cell, the average of the values in adjacent "cells".
The kernel size defines how many adjacent values are used for the average calculation.



/**
 * Performs a mean blur with a given kernel size.
 * @param {number[][]} grid - Data grid holding values to blur.
 * @param {number} kernelSize - Describes the size of the "blur"-cut for each cell.
 */
static MeanBlur(grid, kernelSize) {
    // Result
    let newGrid = [];

    // Iterate original grid
    for (let row = 0; row < grid.length; row++) {
        let newRow = [];
        for (let col = 0; col < grid[row].length; col++) {

            let adjacentValues = [];

            // Get all adjacent values
            for (let i = 1; i <= kernelSize; i++) {
                adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid));
            }

            // Calculate average value
            const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length;

            // Add average value to the current row
            newRow.push(average);
        }
        newGrid.push(newRow);
    }
    return newGrid;
}

/**
 * Returns all adjacent values of a cell in a `number`-array.
 * The number of adjacent values depends on the kernel size.
 * @param {number} row - Describes the cell's row position.
 * @param {number} col - Describes the cell's column position.
 * @param {number} kernelSize - The kernel size.
 * @param {number[][]} grid - Original data grid.
 */
static _getAdjacentValues(row, col, kernelSize, grid) {
    if (kernelSize < 1) {
        throw "Kernel size value should be at least 1.";
    }

    let adjacentValues = [];

    // north
    if (row - kernelSize >= 0)
        adjacentValues.push(grid[row - kernelSize][col]);

    // north-east
    if (row - kernelSize >= 0 && col + kernelSize < grid[row].length)
        adjacentValues.push(grid[row - kernelSize][col + kernelSize]);

    // east
    if (col + kernelSize < grid[row].length)
        adjacentValues.push(grid[row][col + kernelSize]);

    // south-east
    if (row + kernelSize < grid.length && col + kernelSize < grid[row].length)
        adjacentValues.push(grid[row + kernelSize][col + kernelSize]);

    // south
    if (row + kernelSize < grid.length)
        adjacentValues.push(grid[row + kernelSize][col]);

    // south-west
    if (row + kernelSize < grid.length && col - kernelSize >= 0)
        adjacentValues.push(grid[row + kernelSize][col - kernelSize]);

    // west
    if (col - kernelSize >= 0)
        adjacentValues.push(grid[row][col - kernelSize]);

    // north-west
    if (row - kernelSize >= 0 && col - kernelSize >= 0)
        adjacentValues.push(grid[row - kernelSize][col - kernelSize]);

    return adjacentValues;
}
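
For reference, a minimal usage sketch; it assumes the two statics above live on a class, named GridUtils here purely for illustration:

// 3 x 3 grid, kernel size 1: the centre cell becomes the mean of its
// eight compass neighbours, (1 + 2 + 3 + 4 + 6 + 7 + 8 + 9) / 8 = 5.
const blurred = GridUtils.MeanBlur([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
], 1);
console.log(blurred[1][1]); // 5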



What I am most concerned with is _getAdjacentValues. I think there has to be a more convenient way to collect adjacent values without having to guard against out-of-range indices.



[Image: example data grid]



What I mean is that, at grid position (0, 0) for example, even with a kernel size of 1, a plain loop would automatically check positions like (-1, -1).



Edit:
I was asked for clarification on how the kernel size is actually supposed to work.
Assume I want to calculate the value for the current position (the black cell). With a kernel size of 1, I would include all green cells in the average calculation. With a kernel size of 2, I would include all green and blue cell values. And with a kernel size of 3, all orange, blue and green cell values would be used for the average calculation.



[Image: kernel-size illustration: green cells for size 1, green + blue for size 2, green + blue + orange for size 3, around the black centre cell]
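
Note that, as implemented, _getAdjacentValues samples only the 8 compass neighbours (N, NE, E, SE, S, SW, W, NW) at distance i, so an interior cell's mean uses 8 * kernelSize values; a tiny sketch of that count:

// Each ring i (1 <= i <= kernelSize) contributes the 8 compass
// neighbours at distance i, so an interior cell averages
// 8 * kernelSize values in total.
for (const kernelSize of [1, 2, 3]) {
    console.log(`kernel size ${kernelSize}: ${8 * kernelSize} values at an interior cell`);
}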










javascript array signal-processing

asked Sep 16 at 16:12, edited Sep 17 at 16:33 – チーズパン


  • Came here via HNQ expecting not just a blur, but a "really mean blur" from the title
    – Falco
    Sep 17 at 12:37

  • Could you clarify how kernelSize should affect the results? In your implementation of _getAdjacentValues a kernel size of 2 would include cells 2 steps away but not the adjacent cells. This is not what I would expect for mean blurring.
    – Marc Rohloff
    Sep 17 at 15:03

  • @Falco If I was wrong assuming that this is "mean blur" I can change the title. Blindman67 is suggesting that this is a "convolution filter", is that right?
    – チーズパン
    Sep 17 at 16:21

  • @MarcRohloff I added a more detailed explanation on how the kernel size is supposed to work.
    – チーズパン
    Sep 17 at 16:34

  • I updated my code based on your comments. I assumed that you don't want to include the current cell in your mean, which may or may not be true. You could use a convolution filter to solve this, but I would consider it excessive unless you are already importing some sort of matrix library that does it for you.
    – Marc Rohloff
    Sep 17 at 20:19
4 Answers
Accepted answer (score 7) – Blindman67, answered Sep 17 at 3:06, edited Sep 17 at 3:18

Convolution Filter



In CG this type of processing is called a convolution filter, and there are many strategies for handling edges.



As the previous answer points out, for performance you are best off using typed arrays and avoiding the creation of new arrays for each cell you process.



In your example




for (let i = 1; i <= kernelSize; i++) {
    adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid));
}

const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length;



You can do the reduce inside the loop that builds the sub-array, which avoids the 9 new arrays you currently create and the need to iterate over them again.



Example: substitute a value at the edges.



const edgeVal = 0;
var val, sum = 0;
for (let i = 0; i < 9; i++) {
    // Map i onto the 3 x 3 neighbourhood; use edgeVal outside the grid.
    const gridRow = grid[row + ((i / 3) | 0) - 1];
    val = gridRow !== undefined ? gridRow[col + (i % 3) - 1] : undefined;
    sum += val !== undefined ? val : edgeVal;
}
const average = sum / 9;


Or ignore edges



var val;
var sum = 0;
var count = 0;
for (let i = 0; i < 9; i++) {
    const gridRow = grid[row + ((i / 3) | 0) - 1];
    val = gridRow !== undefined ? gridRow[col + (i % 3) - 1] : undefined;
    if (val !== undefined) { sum += val; count += 1; }
}
const average = sum / count;


Better yet, work on flattened arrays to avoid the double indexing of 2D arrays, and create the result array at the start rather than pushing to it for each cell.
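
A minimal sketch of the flattened layout (the names width, src and idx are illustrative assumptions):

// Flattened layout: cell (row, col) of a width-wide grid lives at
// index row * width + col, so each access is one lookup, not two.
const width = 4;
const src = new Float64Array(width * 3); // 3 rows, preallocated up front

function idx(row, col) {
    return row * width + col;
}

src[idx(1, 2)] = 5;
console.log(src[idx(1, 2)]); // 5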



Workers



Convolution filters are ideally suited for parallel processing, with an almost p-fold performance increase (n is the number of cells, p is the number of processing nodes, and each node processes only n / p cells).



Using web workers on an i7 with 4 cores (8 virtual cores) can give you an 8-times performance increase. (On Chrome, and maybe other browsers, you can read the core count from window.clientInformation.hardwareConcurrency.)



Attempting to use more workers than available cores will result in slower processing.
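
A rough sketch of partitioning rows across workers, capped at the reported core count (navigator.hardwareConcurrency is the standard spelling of the property mentioned above; the slice bookkeeping is illustrative only):

// Partition n rows into one contiguous slice per worker, capping the
// worker count at the number of cores the browser reports.
const n = 1024;
const cores = navigator.hardwareConcurrency || 4;
const chunk = Math.ceil(n / cores);
const slices = [];
for (let w = 0; w < cores; w++) {
    slices.push([w * chunk, Math.min((w + 1) * chunk, n)]); // [start, end)
}
console.log(slices); // each slice would be sent to one Worker via postMessage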



GPU



For the ultimate performance the GPU, via WebGL, is available and will provide realtime processing of very large data sets (16M cells and above). A common non-CG use is processing layers in a convolutional neural network.



An example of a convolution filter using WebGL (about halfway down the linked page) can easily be adapted to most data types; however, doubles are only available on a small number of hardware setups.






  • I should point out that using substitute values is not a good solution for this problem as it will affect the results of the mean calculation.
    – Marc Rohloff
    Sep 17 at 15:05










  • @MarcRohloff That will depend on the needs of the filter; I showed two alternative solutions and linked to several more. I cannot guess the needs of the OP.
    – Blindman67
    Sep 17 at 16:23

















Answer (score 5) – aventurin, answered Sep 16 at 22:14, edited Sep 16 at 22:24

Since you not only need the adjacent values but also the number of adjacent cells, I do not think there is a way to work around conditions that cover the corner cases. However, you can differentiate between border cells and inner cells and use two separate code paths: one with and one without the if conditions.
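
A minimal sketch of the border test that would choose between the two code paths (the helper name isInner is an assumption):

// A cell is "inner" iff the whole kernel window fits inside the grid;
// the inner code path can then skip every bounds check.
function isInner(row, col, kernelSize, height, width) {
    return row >= kernelSize && row < height - kernelSize &&
           col >= kernelSize && col < width - kernelSize;
}

console.log(isInner(0, 0, 1, 5, 5)); // false -> border path, with checks
console.log(isInner(2, 2, 1, 5, 5)); // true  -> fast path, no checks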



If accuracy at the borders is less important, you could take the grid indices modulo the width and height respectively, effectively calculating the blur over a torus. This eliminates the if conditions.
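
A sketch of the wrap-around indexing; the double modulo is needed because JavaScript's % keeps the sign of the dividend:

// Wrap an index onto [0, n): -1 maps to n - 1 and n maps to 0, so every
// neighbour lookup stays in bounds and the grid behaves like a torus.
function wrap(i, n) {
    return ((i % n) + n) % n;
}

console.log(wrap(-1, 5)); // 4
console.log(wrap(5, 5));  // 0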



If performance is an issue, I would consider using preallocated one-dimensional TypedArrays for input and output. The resulting average should then be calculated directly from the input array and written to the output array, without intermediate lists, concat, or fancy reduce functions.
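
A minimal sketch of that direct-write approach for a 3 x 3 grid with kernel size 1 (dimensions and values are illustrative; the centre cell is excluded, as in the question's code):

// Preallocated 1-D views of the grid; each cell's mean is summed directly
// from `input` and written to `output`, with no intermediate arrays.
const width = 3, height = 3;
const input = Float64Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const output = new Float64Array(width * height);

for (let row = 0; row < height; row++) {
    for (let col = 0; col < width; col++) {
        let sum = 0, count = 0;
        for (let dr = -1; dr <= 1; dr++) {
            for (let dc = -1; dc <= 1; dc++) {
                const r = row + dr, c = col + dc;
                if ((dr || dc) && r >= 0 && r < height && c >= 0 && c < width) {
                    sum += input[r * width + c];
                    count++;
                }
            }
        }
        output[row * width + col] = sum / count;
    }
}
console.log(output); // e.g. output[0] === (2 + 4 + 5) / 3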






Answer (score 4) – user1118321, answered Sep 17 at 3:58

    This is essentially a box blur. A box blur is separable. That means that you can significantly reduce the amount of work that you perform by first doing a horizontal-only pass with a kernel of all 1s (so for a 3x3, it would be just [1 1 1]), and then do a vertical-only pass on the result of the horizontal-only pass with the kernel rotated 90°. This website explains it pretty well:




    Filtering an M-by-N image with a P-by-Q filter kernel requires roughly MNPQ multiplies and adds (assuming we aren't using an implementation based on the FFT). If the kernel is separable, you can filter in two steps. The first step requires about MNP multiplies and adds. The second requires about MNQ multiplies and adds, for a total of MN(P + Q).



The computational advantage of separable convolution versus nonseparable convolution is therefore MNPQ / (MN(P + Q)) = PQ / (P + Q).



    For a 9-by-9 filter kernel, that's a theoretical speed-up of 4.5.
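
For concreteness, a minimal sketch of the two-pass idea for a 3-wide box kernel (the helper boxBlur3 is hypothetical; edge samples are simply dropped, and unlike the question's code the centre cell is included, as is standard for a box blur):

// Two-pass (separable) box blur: average each row with a 1 x 3 window,
// then average each column of that intermediate with a 3 x 1 window.
function boxBlur3(grid) {
    const horiz = grid.map(row =>
        row.map((_, c) => {
            const vals = [row[c - 1], row[c], row[c + 1]].filter(v => v !== undefined);
            return vals.reduce((a, b) => a + b, 0) / vals.length;
        })
    );
    return horiz.map((row, r) =>
        row.map((_, c) => {
            const vals = [horiz[r - 1], horiz[r], horiz[r + 1]]
                .filter(rw => rw !== undefined)
                .map(rw => rw[c]);
            return vals.reduce((a, b) => a + b, 0) / vals.length;
        })
    );
}

console.log(boxBlur3([[1, 2, 3], [4, 5, 6], [7, 8, 9]])[1][1]); // 5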




If you're going to work on the CPU, there are further techniques you can use for speed-ups as well. Another example is to use a summed area table. In this technique, you iterate over the input image, and the output image is the sum of all pixels above and to the left of the current pixel. It's very fast to calculate. Once you have that, you can find the mean of any n x n area around a given pixel with 2 subtractions and 1 addition (plus the final division by the area).
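
A minimal sketch of the summed-area-table technique (the helper names buildSat and rectSum are assumptions):

// Summed-area table: sat[r][c] = sum of grid[0..r][0..c]. Any rectangle
// sum then needs 4 lookups: D - B - C + A (2 subtractions, 1 addition).
function buildSat(grid) {
    const h = grid.length, w = grid[0].length;
    const sat = Array.from({ length: h }, () => new Float64Array(w));
    for (let r = 0; r < h; r++) {
        for (let c = 0; c < w; c++) {
            sat[r][c] = grid[r][c]
                + (r ? sat[r - 1][c] : 0)
                + (c ? sat[r][c - 1] : 0)
                - (r && c ? sat[r - 1][c - 1] : 0);
        }
    }
    return sat;
}

// Sum of grid[r0..r1][c0..c1] via the inclusion-exclusion corners.
function rectSum(sat, r0, c0, r1, c1) {
    return sat[r1][c1]
        - (r0 ? sat[r0 - 1][c1] : 0)
        - (c0 ? sat[r1][c0 - 1] : 0)
        + (r0 && c0 ? sat[r0 - 1][c0 - 1] : 0);
}

const sat = buildSat([[1, 2], [3, 4]]);
console.log(rectSum(sat, 0, 0, 1, 1) / 4); // mean of the whole 2 x 2 grid: 2.5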






Answer (score 3) – Marc Rohloff, answered Sep 17 at 15:22, edited Sep 17 at 20:18

I would generally solve this by using logic that returns undefined for the edge cases (for example, grid[-1] and grid[999] return undefined) and then filtering those out. Something like:



static _getAdjacentMean(row, col, kernelSize, grid) {
    let values = [];
    for (let iRow = -kernelSize; iRow <= kernelSize; iRow++) {
        // A missing row becomes an empty array, so lookups yield undefined.
        const gridRow = grid[row + iRow] || [];
        for (let iCol = -kernelSize; iCol <= kernelSize; iCol++) {
            if (iRow !== 0 || iCol !== 0) { // skip the centre cell itself
                values.push(gridRow[col + iCol]);
            }
        }
    }

    values = values.filter(v => v !== undefined);

    const sum = values.reduce((a, b) => a + b, 0);
    const count = values.length;
    return sum / count;
}



      or it might be more efficient just to skip the intermediate values array entirely.



static _getAdjacentMean(row, col, kernelSize, grid) {
    let sum = 0, count = 0;
    for (let iRow = -kernelSize; iRow <= kernelSize; iRow++) {
        const gridRow = grid[row + iRow] || [];
        for (let iCol = -kernelSize; iCol <= kernelSize; iCol++) {
            const val = (iRow || iCol) ? gridRow[col + iCol] : undefined;
            if (val !== undefined) { sum += val; count++; }
        }
    }
    return sum / count;
}
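
A quick sanity check on a tiny grid (assuming the static lives on some class, called Example here purely for illustration):

// Centre of a 3 x 3 grid, kernel size 1: mean of the eight neighbours,
// with the centre cell itself skipped.
const grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
console.log(Example._getAdjacentMean(1, 1, 1, grid)); // (1+2+3+4+6+7+8+9) / 8 = 5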





