What does calculating the inverse of a matrix mean?
Assume I have 3 equations, $x+2y+z=2$, $3x+8y+z=12$, $4y+z=2$, which can be represented in matrix form ($Ax = b$) like this:
$$\begin{pmatrix}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{pmatrix}
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix} =
\begin{pmatrix}
2\\
12\\
2
\end{pmatrix}$$
Then the inverse of $A$, $A^{-1}$, would be:
$$\begin{pmatrix}
2/5 & 1/5 & -3/5\\
-3/10 & 1/10 & 1/5\\
6/5 & -2/5 & 1/5
\end{pmatrix}.$$
So, my question is: what does this even mean? We know that $A$ is a coefficient matrix that represents the 3 equations above, so what does $A^{-1}$ mean with respect to these 3 equations? What exactly I have done to the 3 equations is my question.
Please note that I understand very well how to find the inverse of a matrix; I just don't understand the intuition behind what's happening, or the meaning of the manipulations I am applying to the equations when they are in matrix form.
linear-algebra
asked 3 hours ago by Eyad H.
If $Ax=b$ then $A^{-1}Ax=A^{-1}b$ $\Leftrightarrow$ $Ix=A^{-1}b$ $\Leftrightarrow$ $x=A^{-1}b$.
– A.Γ.
3 hours ago
@A.Γ. that's not what I am asking.
– Eyad H.
3 hours ago
To understand $A^{-1}$, forget about the system of equations for a moment and just think about $A$. The inverse of a square matrix $A$ is the matrix (denoted $A^{-1}$) which has the property that $A A^{-1} = I$, where $I$ is the identity matrix. This is analogous to the fact that the inverse of a number $a$ is the number (denoted $a^{-1}$) such that $a a^{-1} = 1$.
– littleO
1 hour ago
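For readers who want to see the two comments above concretely, here is a minimal NumPy sketch (added for illustration, not part of the original thread) that checks $AA^{-1}=I$ and computes $x=A^{-1}b$ for the question's matrix:

```python
import numpy as np

# The coefficient matrix and right-hand side from the question.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

A_inv = np.linalg.inv(A)

# A * A^{-1} should be the identity (littleO's comment) ...
print(np.allclose(A @ A_inv, np.eye(3)))   # True

# ... and x = A^{-1} b solves the system (the first comment).
x = A_inv @ b
print(x)                                   # the solution vector (x, y, z)
print(np.allclose(A @ x, b))               # True
```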
5 Answers
Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,\dots,X_n$, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,\dots,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ is the coefficient matrix of the original system of equations after substituting the new variables in. If you work this out for the case $n=2$, it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).
Now, what all this tells you is that if you have $A$ and you found that $B=A^{-1}$ is its inverse, then if you introduce new variables $Y_1,\dots,Y_n$ and express the $X_i$ in terms of them by reading the coefficients off the inverse matrix $B$, then substituting these variables into the original system results in a very, very simple system. Namely, the coefficients after substituting are the coefficients of $AB=I$. This is the simplest system in the world. So, finding the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which makes the system of equations particularly nice.
Again, this holds true for all systems, not just $n\times n$.
answered 3 hours ago by Ittay Weiss
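As an illustration of this answer (my own sketch, assuming NumPy, not something the answerer provided): take $B=A^{-1}$ for the question's system. The substituted coefficient matrix $AB$ is the identity, so the new system is simply $Y=b$, and mapping back through $X=BY$ recovers the solution.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

# Change of variables: express the old unknowns X in terms of new ones Y
# via X = B Y, with B = A^{-1}.
B = np.linalg.inv(A)

# The substituted system A (B Y) = b has coefficient matrix A B = I ...
print(np.allclose(A @ B, np.eye(3)))   # True

# ... so the transformed system reads simply Y = b.
Y = b
X = B @ Y                              # map back to the original variables
print(X)                               # solution of the original system
print(np.allclose(A @ X, b))           # True
```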
Let us organize our equations in this way:
$$x\begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix}
+ y\begin{bmatrix} 2 \\ 8 \\ 4 \end{bmatrix}
+ z\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 2 \\ 12 \\ 2 \end{bmatrix}$$
After applying the inverse we get:
$$2\begin{bmatrix} 2/5 \\ -3/10 \\ 6/5 \end{bmatrix}
+ 12\begin{bmatrix} 1/5 \\ 1/10 \\ -2/5 \end{bmatrix}
+ 2\begin{bmatrix} -3/5 \\ 1/5 \\ 1/5 \end{bmatrix}
= \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
Initially, on the right side we had $[2\ 12\ 2]^T$. Using the inverse, we place $[x\ y\ z]^T$ on the right side instead.
Another way to think of it: $[2\ 12\ 2]^T$ is a point represented using the three vectors $[1\ 3\ 0]^T$, $[2\ 8\ 4]^T$, $[1\ 1\ 1]^T$, with $x, y, z$ as the scaling factors. Correspondingly, $[x\ y\ z]^T$ is represented using the column vectors of $A^{-1}$, scaled by the entries of the right-hand side.
answered 1 hour ago by nature1729
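Here is a short sketch of this column picture (added for illustration, not from the original answer), checking both combinations numerically for the question's system:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

A_inv = np.linalg.inv(A)
x, y, z = np.linalg.solve(A, b)

# Column picture of A x = b: b is a combination of the columns of A,
# weighted by the unknowns x, y, z.
lhs = x * A[:, 0] + y * A[:, 1] + z * A[:, 2]
print(np.allclose(lhs, b))                      # True

# Column picture of x = A^{-1} b: the solution vector is a combination of
# the columns of A^{-1}, weighted by the entries of b (here 2, 12, 2).
sol = 2 * A_inv[:, 0] + 12 * A_inv[:, 1] + 2 * A_inv[:, 2]
print(sol)                                      # equals (x, y, z)
print(np.allclose(sol, np.array([x, y, z])))    # True
```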
If you think of the matrix as a function from $\mathbb{R}^3$ to itself, maybe that will help? The inverse can then be thought of as the inverse of that function, in the same manner that you would invert, say, $f(x)=x^3+5$. The question you're answering when multiplying the right-hand side of the equation by the inverse is "what vector do I put into my function $x \mapsto Ax$ so that I get out the right-hand side?"
answered 3 hours ago by edo
While this perspective is true, it does not quite answer the OP's question, since it shifts the focus away from the equations.
– Ittay Weiss
2 hours ago
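For completeness, a tiny sketch of this function view (my own illustration, assuming NumPy): treat $v \mapsto Av$ as a function and $w \mapsto A^{-1}w$ as its inverse, and check that they undo each other.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
A_inv = np.linalg.inv(A)

def f(v):
    """The matrix A viewed as a function from R^3 to R^3."""
    return A @ v

def f_inv(w):
    """The inverse function, given by the inverse matrix."""
    return A_inv @ w

v = np.array([1.0, -2.0, 3.0])        # an arbitrary test vector
print(np.allclose(f_inv(f(v)), v))    # True: f_inv undoes f
print(np.allclose(f(f_inv(v)), v))    # True: f undoes f_inv

# "What vector do I feed into f to get out the right-hand side?"
b = np.array([2.0, 12.0, 2.0])
print(f_inv(b))                       # the solution of A x = b
```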
A matrix is just an array filled with numbers. But you have learnt how to "multiply" two matrices in order to get a third one. Finding the inverse of a matrix $A$ is just finding another matrix, named $A^{-1}$, such that $$A^{-1}A=\mathrm{id}.$$
Now it happens that this multiplication rule, which may seem abstract and arbitrary, is defined precisely so that it respects solving systems of equations. Namely, if you write your system as you did, $A\bar x=\bar y$, then it is equivalent to any system $BA\bar x=B\bar y$ for any invertible matrix $B$, and choosing $B=A^{-1}$ gives you the system $\bar x=A^{-1}\bar y$, because of the (good) way multiplication is defined (associative, etc.). This is precisely what you wanted to know: $\bar x$ expressed in terms of $\bar y$.
answered 2 hours ago by Drike, edited 1 hour ago
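A short sketch of this equivalence (added for illustration; the random matrix `B` below is just a stand-in for "any invertible matrix", not something from the answer):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])
x = np.linalg.solve(A, b)             # the solution of A x = b

# Multiplying both sides by any invertible B gives an equivalent system:
# the same x still satisfies (B A) x = B b.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))       # a generic (hence invertible) matrix
print(np.allclose((B @ A) @ x, B @ b))        # True

# Choosing B = A^{-1} makes the left side just x, i.e. x = A^{-1} b.
A_inv = np.linalg.inv(A)
print(np.allclose((A_inv @ A) @ x, x))        # True: A^{-1} A = id
print(np.allclose(A_inv @ b, x))              # True: x = A^{-1} b
```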
It depends on the method you use to calculate the inverse. For instance, suppose you use the LU decomposition. Then, given the matrix $A$, we have
$$ \underbrace{L_{m-1} \cdots L_2 L_1}_{L^{-1}} A = U, \tag{1} $$
that is, $A = LU$ is the product of a lower triangular and an upper triangular matrix. The lower triangular factor $L$ is a record that keeps track of the row operations used to eliminate entries and produce the $U$ matrix (sometimes the reduced echelon form); its entries are simply the row multipliers. When you take the inverse of either factor, you end up with a lower or upper triangular matrix again. In either event,
$$ A = LU \implies A^{-1}A = (LU)^{-1}(LU) = U^{-1}L^{-1}LU = I. \tag{2}$$
Technically, if you are trying to find the solution vector, you use two steps: forward substitution and back substitution.
For the $Ax=b$ problem we have
$$ LUx=b, \tag{3}$$
which gives two sub-problems. Forward substitution solves
$$ Ly = b \tag{4}$$
via
$$ y_i = \frac{1}{l_{ii}} \bigg( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \bigg), \tag{5} $$
and back substitution solves
$$ Ux = y \tag{6} $$
via
$$ x_i = \frac{1}{u_{ii}} \bigg( y_i - \sum_{j=i+1}^{N} u_{ij} x_j \bigg). \tag{7} $$
The intuition depends on the matrix decomposition.
answered 1 hour ago by Ryan Howe, edited 12 secs ago
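As a rough companion to this answer, here is a sketch (my own, not the answerer's code) of an LU factorization without pivoting together with the forward and back substitution formulas (5) and (7). It assumes no row swaps are needed, which happens to hold for the question's matrix.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle factorization A = L U, assuming no pivoting is required."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # row multiplier recorded in L
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate the entry below the pivot
    return L, U

def forward_sub(L, b):
    """Solve L y = b, eq. (4), using eq. (5)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y, eq. (6), using eq. (7)."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))      # True: A = L U
y = forward_sub(L, b)             # first sub-problem, eq. (4)
x = back_sub(U, y)                # second sub-problem, eq. (6)
print(x)                          # solution of A x = b
print(np.allclose(A @ x, b))      # True
```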