What is a “linear function” in the context of multivariable calculus?
On Wikipedia, it says
> When $f$ is a function from an open subset of $\mathbb{R}^n$ to $\mathbb{R}^m$, then the directional derivative of $f$ in a chosen direction is the best linear approximation to $f$ at that point and in that direction.
I just want to check: are linear functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ defined as functions of the form $f(x) = ax+b$, where $a$ is a scalar and $b$ is a vector?
Also, it seems like functions of the form above just enlarge/shrink and shift. Is this correct? I thought that if anything was going to be a counterexample, it would be an off-center circle; under the transformation $x \mapsto 2x$, I thought an off-center circle might map to an ellipse, but this doesn't seem to be the case. For example, if $(x, y)$ satisfies $(x-2)^2 + (y-2)^2 = 1$, then multiplying both sides by $2^2$ gives $(2x-4)^2 + (2y-4)^2 = 4$; so $(X, Y) = (2x, 2y)$ satisfies $(X-4)^2 + (Y-4)^2 = 4$, which is still a circle, now centered at $(4, 4)$, as expected.
real-analysis analysis multivariable-calculus
asked Dec 14 at 21:41
Ovi
What you write is an affine function. A linear function can be expressed by a matrix.
– orange
Dec 14 at 21:51
The term “linear” is overloaded. See this question, this one and others.
– amd
Dec 15 at 1:10
5 Answers
A linear function in this context is a map $f: \mathbb{R}^n \to \mathbb{R}^m$ such that the following conditions hold:
- $f(x+y)=f(x)+f(y)$ for every $x,y \in \mathbb{R}^n$
- $f(\lambda x)=\lambda f(x)$ for every $x \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$.
It can be shown that every such function has the form $f(x)=Ax$, where $A \in \mathbb{R}^{m \times n}$ is an $m \times n$ matrix. If $f$ has the form $f(x)=Ax + b$ for some $b \in \mathbb{R}^m$, then it is called an affine linear function.
This generalises the notion of a map $f: \mathbb{R} \to \mathbb{R}$ of the form $f(x)=ax+b$, where $a,b$ are real numbers, which is probably what you had in mind. An affine linear map is a linear map if and only if $b=0$. Note that your example is a special affine linear map from $\mathbb{R}^n$ to $\mathbb{R}^n$ (the dimensions have to match).
An example of a linear function from $\mathbb{R}^3$ to $\mathbb{R}^3$ would be
$$f(x,y,z) = \begin{pmatrix} 1 & 2 & 7\\ 5 & 3 & 7\\ 3 & 8 & 2 \end{pmatrix} \begin{pmatrix} x\\ y\\ z \end{pmatrix}.$$
Your example in the case of $\mathbb{R}^3$ is of the form
$$f(x,y,z) = \begin{pmatrix} a & 0 & 0\\ 0 & a & 0\\ 0 & 0 & a \end{pmatrix} \begin{pmatrix} x\\ y\\ z \end{pmatrix} + \begin{pmatrix} b_x\\ b_y\\ b_z \end{pmatrix},$$
for some $a \in \mathbb{R}$ and $(b_x, b_y, b_z) \in \mathbb{R}^3$.
In the case of a function $f: \mathbb{R}^m \to \mathbb{R}^n$ that is differentiable at a point $x_0 \in \mathbb{R}^m$, we want to approximate the function by an affine linear map; that is, locally around $x_0$ we have $f(x) \approx A(x-x_0) + f(x_0)$, where $A \in \mathbb{R}^{n \times m}$. The offset $f(x_0)$ ensures that the approximation takes the value $f(x_0)$ at the point $x_0$, and the matrix $A$ describes how the function changes linearly around $x_0$. The idea is that linear maps are really easy to handle using the tools of linear algebra.
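Here is a minimal numerical sketch of that affine approximation, assuming NumPy; the map $f(x,y) = (x^2 + y,\ \sin x)$, the base point, and the hand-written Jacobian are illustrative choices of mine, not anything taken from this answer.

```
import numpy as np

def f(p):
    # Illustrative map f: R^2 -> R^2 (an assumption, not from the answer)
    x, y = p
    return np.array([x**2 + y, np.sin(x)])

def jacobian(p):
    # Hand-computed Jacobian of f at p; it plays the role of the matrix A
    x, y = p
    return np.array([[2.0 * x, 1.0],
                     [np.cos(x), 0.0]])

x0 = np.array([1.0, 2.0])
A = jacobian(x0)

def approx(p):
    # Affine approximation f(x) ~ A (x - x0) + f(x0), valid near x0
    return A @ (p - x0) + f(x0)

x = x0 + np.array([0.01, -0.02])   # a point close to x0
print(f(x))        # exact value
print(approx(x))   # affine approximation; agrees to first order
```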
Ah THANK YOU for that edit; so we are after all looking for a function with non-zero "$y$-intercept", but my mistake was making the coefficient of $x$ a scalar, not a matrix.
– Ovi
Dec 14 at 22:09
@Ovi Yeah, but usually the matrix $A$ is called the linear approximation (or derivative) of the function.
– Jannik Pitt
Dec 14 at 22:12
This is in general not the form of a linear function. A function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is linear if the following two equalities hold for all $\alpha \in \mathbb{R}$ and $x, y \in \mathbb{R}^n$:
$i)$ $f(x + y) = f(x) + f(y)$
$ii)$ $f(\alpha x) = \alpha f(x)$.
It turns out that all such functions are of the form $f(x) = Ax$ for some matrix $A \in \mathbb{R}^{m \times n}$ (that is, a matrix with $m$ rows and $n$ columns).
One key difference from your proposed form is that linear functions always go through the origin, that is, $f(0) = 0$, where $0$ is the zero vector (rather than the scalar). This is not the case if $b \neq 0$ in your proposed form. For $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ you should think of a plane through the origin as the graph, rather than a line.
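As a small aside (a sketch of my own, not part of the answer), the two defining equalities and the $f(0)=0$ property can be checked numerically; the matrix $A$ and the offset $b$ below are arbitrary stand-ins.

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))    # arbitrary matrix in R^{3x2}
b = np.array([1.0, -2.0, 0.5])     # nonzero offset

f = lambda x: A @ x                # linear
g = lambda x: A @ x + b            # affine, not linear

x, y, alpha = rng.standard_normal(2), rng.standard_normal(2), 3.7

print(np.allclose(f(x + y), f(x) + f(y)))        # True: additivity
print(np.allclose(f(alpha * x), alpha * f(x)))   # True: homogeneity
print(np.allclose(f(np.zeros(2)), 0.0))          # True: f(0) = 0
print(np.allclose(g(np.zeros(2)), 0.0))          # False: g(0) = b != 0
```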
I'm aware that this is the definition of a linear function in linear algebra. But when we talk in terms of linear approximations, don't we want non-zero "$y$-intercepts"? I can't picture higher dimensions, but in functions from $\mathbb{R}$ to $\mathbb{R}$, when we talk of linear approximations, we talk of tangent lines, which are generally of the form $ax+b$, not just $ax$.
– Ovi
Dec 14 at 21:55
The way I think of it is that for the differential we translate the point to the origin, and then we have a linear approximation through the origin. Note also that in higher dimensions, for example $\mathbb{R}^2 \rightarrow \mathbb{R}$, we have a tangent plane rather than a single tangent line. It is this plane that is the linear approximation.
– Dasherman
Dec 14 at 22:00
As @Jannik Pitt notes, we can also view it as an affine linear approximation, which is just a translated linear function (or in this case, translated linear approximation), so that instead of passing through the origin, it passes through the point $(x, f(x))$, $x$ being the point at which we calculate the differential.
– Dasherman
Dec 14 at 22:03
Thanks for the responses; I can't fully understand your second comment because I haven't done any examples or even looked at definitions, so I don't exactly know at what point we translate to the origin. But I'll actually read my book now and check back after I internalize the definitions.
– Ovi
Dec 14 at 22:10
> I just want to check that linear functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ are defined as functions of the form $f(x)=ax+b$ where $a$ is a scalar and $b$ is a vector?
No. In fact, a linear function is one with the property that $f(ax + y) = af(x) + f(y)$ for any $x, y$ in whatever vector space it's defined on and any $a$ in the scalar field of that vector space. For maps from $\mathbb{R}^n$ to $\mathbb{R}^m$, these are precisely the functions of the form $f(x) = Ax$ for some matrix $A$.
> Also, it seems like functions of the form above just enlarge/shrink and shift. Is this correct?
No, because of the above. For an example involving a circle, take $n = 2$, $m = 2$ and $A = \begin{pmatrix} 2 & 0\\ 0 & 1 \end{pmatrix}$. This turns the unit circle into an ellipse. More generally, note that $n$ and $m$ do not have to be the same. For example, there's the linear map
\begin{align*} f &\colon \mathbb{R}^3 \to \mathbb{R}\\ &\colon \begin{pmatrix} x\\ y\\ z \end{pmatrix} \mapsto x+y+z, \end{align*} which collapses everything down to a diagonal line (but not in the most "natural" way).
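A quick numerical check of the circle-to-ellipse claim (my own sketch, assuming NumPy): apply $A$ to points on the unit circle and verify that the images satisfy the ellipse equation $(x/2)^2 + y^2 = 1$.

```
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])   # unit-circle points, shape (2, 200)

image = A @ circle                                  # images under the linear map x -> Ax
x, y = image

# Every image point satisfies (x/2)^2 + y^2 = 1: an ellipse with semi-axes 2 and 1
print(np.allclose((x / 2.0) ** 2 + y ** 2, 1.0))    # True
```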
But when we talk in terms of linear approximations, don't we want non-zero "$y$-intercepts"? I can't picture higher dimensions, but in functions from $\mathbb{R}$ to $\mathbb{R}$, when we talk of linear approximations, we talk of tangent lines, which are generally of the form $ax+b$, not just $ax$.
– Ovi
Dec 14 at 21:57
This is a matter of terminology. In the terminology of Wikipedia, the directional derivative is the matrix in question (or, rather, the associated linear map), which is actually linear. In the $\mathbb{R} \to \mathbb{R}$ case, that's the $a$ in your question (or the map $x \mapsto ax$).
– user3482749
Dec 15 at 10:33
In single variable calculus the best linear approximation to a function $f$ at a point $p$ is
$$
g(x) = f(p) + f'(p)(x-p).
$$
You can see why that's close to $f(x)$ when $x$ is close to $p$ by looking at the definition of the derivative, and thinking about the tangent line.
In several variables $p$ and $x$ will be vectors. That formula will still be correct if you change "$f'(p)$" to "the directional derivative of $f$ at $p$ in the direction from $p$ to $x$".
As the other answers say, most of what you "want to check" isn't right.
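For a concrete feel (a sketch of my own; the choices $f(x)=\sqrt{x}$ and $p=4$ are just illustrations, not from this answer), the tangent-line approximation $g(x) = f(p) + f'(p)(x-p)$ can be compared with $f$ numerically:

```
import math

f = lambda x: math.sqrt(x)            # example function (an assumption)
fprime = lambda x: 0.5 / math.sqrt(x) # its derivative

p = 4.0
g = lambda x: f(p) + fprime(p) * (x - p)   # best linear (affine) approximation at p

for x in (4.1, 4.5, 6.0):
    print(x, f(x), g(x))   # g(x) tracks f(x) closely near p, less so farther away
```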
In general, the derivative is the best local linear approximation to a function at a point. A function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ that is differentiable at $x=x_0$ is locally approximated by a vector space homomorphism $Df_{x_0} \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$, and it is in this sense that you must understand "linear".
In the direction $v \in \mathbb{R}^n$, the directional derivative is simply $Df_{x_0}(v)$, because the derivative contains all information about all local rates of change in all directions.
Basically what happens is that you attach a copy of $\mathbb{R}^{m+n}$ to $x_0$, and you approximate the curvy graph of $f$ by the flat (linear) graph of $Df_{x_0}$. This is called the tangent space to the graph of $f$ at $x=x_0$. If you balance a piece of cardboard on a beach ball, you have a good model for this. The origin is where the cardboard touches the ball, which is why you don't get an additive constant.
If you draw a line on your piece of cardboard through the point where it touches, you get a model for the directional derivative in the direction of your point. Rotate your cardboard tangent plane around that point, and you get different directional derivatives.
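To tie this to computation, here is a hedged sketch (the map $f$, the point $x_0$ and the direction $v$ are made-up examples): the directional derivative $Df_{x_0}(v)$ is just the Jacobian matrix applied to $v$, and it agrees with a finite-difference quotient along $v$.

```
import numpy as np

def f(p):
    # Example map f: R^2 -> R^2 (illustrative choice, not from the answer)
    x, y = p
    return np.array([x * y, x + y**2])

def jacobian(p):
    # Hand-computed Jacobian of f at p
    x, y = p
    return np.array([[y, x],
                     [1.0, 2.0 * y]])

x0 = np.array([1.0, 2.0])
v = np.array([0.6, -0.8])           # a chosen direction

directional = jacobian(x0) @ v      # Df_{x0}(v): the Jacobian applied to v

h = 1e-6
finite_diff = (f(x0 + h * v) - f(x0)) / h   # difference quotient along v

print(directional, finite_diff)     # the two agree up to O(h)
```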