Why do the properties of determinants (used to calculate determinants from multiple matrices) apply not only to rows, but to columns as well?

A set of rules in my textbook is as follows:



a. If $A$ has a zero row (column), then $\det A = 0$

b. If $B$ is obtained by interchanging two rows (columns) of $A$, then $\det B = -\det A$

c. If $A$ has two identical rows (columns), then $\det A = 0$

d. If $B$ is obtained by multiplying a row (column) of $A$ by $k$, then $\det B = k\cdot\det A$

e. If $A$, $B$, and $C$ are identical except that the $i$-th row (column) of $C$ is the sum of the $i$-th rows (columns) of $A$ and $B$, then $\det C = \det B + \det A$

f. If $B$ is obtained by adding a multiple of one row (column) of $A$ to another row (column), then $\det B = \det A$



I don't understand why "column" appears in parentheses after every instance of "row". Is that the same as saying, for example with rule (a): "If $A$ has a zero row or a zero column, then $\det A = 0$"? That is, can "row" and "column" be used interchangeably in each statement, because the statement holds true either way?



If that interpretation is correct, then how? Why would these statements, which apply to rows, also apply to columns? The only situation where I can see it working is if the matrix is symmetric, but the question doesn't specify that; it only says that the matrix is square.
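For concreteness, here is the kind of column statement I have in mind (a small $2\times2$ example of my own, not from the textbook), using rule (b):
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad B=\begin{pmatrix}2&1\\4&3\end{pmatrix}\ \text{(the columns of $A$ interchanged)},$$
$$\det A=1\cdot 4-2\cdot 3=-2,\qquad \det B=2\cdot 3-1\cdot 4=2=-\det A,$$
so interchanging two columns flips the sign, exactly as rule (b) states for rows.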



Any help is appreciated.










linear-algebra matrices determinant

asked Jan 1 at 20:41 by Nest Doberman, edited Jan 1 at 20:54 by amWhy
  • Yes, the same rules apply to columns too. This startled me as well when I was taught it. See J. G.'s answer. And welcome to the site. – timtfj, Jan 1 at 22:40

  • $\det(M^T)=\det(M)$. – amd, Jan 2 at 4:41
2 Answers

The trick is that $\det A^T=\det A$. Therefore, if there's a zero column in $A$, there's a zero row in $A^T$, so $\det A=\det A^T=0$. You can check all the other copy-paste-to-column results in that way.

– J.G., answered Jan 1 at 20:43
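As an illustration (my own write-up, just spelling out that argument for rule (b)): if $B$ is obtained by interchanging two columns of $A$, then $B^T$ is obtained by interchanging the corresponding two rows of $A^T$, so the row version of rule (b) together with $\det M^T=\det M$ gives
$$\det B=\det B^T=-\det A^T=-\det A,$$
which is the column version of rule (b). The same transpose, apply-the-row-rule, transpose-back pattern handles rules (a) and (c)–(f).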








  • And, if you define determinants by the formula using signs of permutations, $\det A^T=\det A$ just amounts to the fact that the sign of a permutation is the same as the sign of its inverse. – Eric Wofsey, Jan 1 at 20:59

  • @EricWofsey Which, since signs multiply on composition of permutations, amounts to the primary-school fact that $(\pm 1)^2=1$. (Or, if we work with parities, $1+1=2$ does it.) – J.G., Jan 1 at 21:08

  • @EricWofsey: I definitely prefer the definition by permutations, because it immediately gives all the properties of determinants. I don't know why so many people even bother with the cofactor definition, which is really quite useless in my opinion. – user21820, Jan 2 at 5:34
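A minimal sketch of the permutation argument mentioned in these comments (my own summary, using the Leibniz formula): for an $n\times n$ matrix $A$,
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)},\qquad
\det A^T=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{\sigma(i),i}.$$
Reindexing the second sum by $\tau=\sigma^{-1}$ and using $\operatorname{sgn}(\sigma^{-1})=\operatorname{sgn}(\sigma)$ turns it into the first sum, so $\det A^T=\det A$.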



















That's because your textbook isn't really explaining determinants at all to give you any intuition. It's just listing a bunch of their properties.

The key thing to remember is that $\det A$ is the volumetric scaling factor of the matrix $A$ (i.e., $A$ scales the unit cube's volume by $\det A$).

If you've seen eigenvalues, this means that $\det A$ is the product of $A$'s eigenvalues. (This can be a little mind-blowing the first time you realize it, if you've already been learning linear algebra for some time.)

If you start from this, then it's far easier to see why rows and columns don't make a difference:

  • The unit cube is represented by its orthogonal edges via $I$, the identity matrix.

  • Right-multiplying by $I$ (an operation that linearly combines $A$'s columns) gives the same result ($A$) as left-multiplying by $I$ (an operation that linearly combines $A$'s rows).

The fact that we get the same output solid regardless of whether we operate on the rows or the columns of $A$ means that the scaling factor is the same, and hence the determinant is the same.

– Mehrdad, answered Jan 2 at 2:10
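To make the scaling-factor picture concrete (a small example of my own, not part of the original answer): take
$$A=\begin{pmatrix}2&1\\0&3\end{pmatrix}.$$
Its eigenvalues are $2$ and $3$, and $\det A=2\cdot 3-1\cdot 0=6$, the product of the eigenvalues; $A$ sends the unit square to a parallelogram of area $6$. The transpose $A^T=\begin{pmatrix}2&0\\1&3\end{pmatrix}$ is triangular with the same diagonal, so it has the same eigenvalues $2,3$ and the same determinant $6$: operating on rows or on columns describes the same scaling of volume.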








