How to display very small numbers in Mathematica?
$begingroup$
I am trying to evaluate the function
$$f(x) = \cos(x) - \mathrm{e}^{-2.7 x}$$
at $x = 1.7 \times 10^{-25}$, and Mathematica keeps returning '0.'
How do I evaluate the expression in a better way?
precision-and-accuracy
$endgroup$
$begingroup$
Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Take the tour! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign!
$endgroup$
– Michael E2
Jan 20 at 23:35
edited Jan 20 at 7:24 by Henrik Schumacher
asked Jan 20 at 4:05 by Ray_56
7 Answers
$begingroup$
You can also use exact numbers.
f[x_] = Cos[x] - E^(-27 x/10);
f[17 10^-26]//N[#,50]&
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
Another way is to increase the floating point digits.
f[x_] = Cos[x] - E^(-2.7`50 x)
f[1.7`50 10^-25]
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
These methods pretty much work for any calculation that needs extra accuracy over what machine precision provides.
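For readers outside Mathematica, the same "use exact numbers" idea can be sketched in Python with fractions.Fraction. This is my illustration, not part of the answer, and the Taylor truncation order below is my choice:

```python
from fractions import Fraction

# Exact rational versions of the inputs, as in f[17 10^-26] above:
x = Fraction(17, 10**26)   # 1.7*10^-25
c = Fraction(27, 10)       # 2.7

# Truncated Taylor series evaluated in exact rational arithmetic
# (truncation order chosen for illustration only):
#   cos(x)    ~ 1 - x^2/2
#   e^(-c x)  ~ 1 - c*x + (c*x)^2/2
diff = (1 - x**2 / 2) - (1 - c * x + (c * x)**2 / 2)

# The leading behaviour is c*x = 4.59*10^-25, with tiny corrections:
print(float(diff * 10**25))  # ~4.59
```

Because every intermediate is an exact rational, no cancellation error can occur; precision is only lost at the final conversion to a float for display.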
$endgroup$
answered Jan 20 at 6:41, edited Jan 20 at 21:10 by Bill Watts
$begingroup$
First convert the expression to trigonometric form:
y = Cos[x] - Exp[-2.7*x] // ExpToTrig
Cos[x] - Cosh[2.7 x] + Sinh[2.7 x]
y /. x -> 1.7*10^-25
4.59*10^-25
This gives the expected value (see the comments below on whether it is truly "exact").
Note: Building on the interesting comments on exactness, it is worth noting that there is a simple relationship between the three numbers 1.7, 2.7 and 4.59: namely $1.7 \times 2.7 = 4.59$, which is exactly the leading-order term $2.7\,x$. That is why the result displays as 4.59*10^-25.
Also, the computed value approaches 4.59*10^-25 symmetrically from above and below, for example:
y /. x -> 1.69999*10^-25
4.58997*10^-25
y /. x -> 1.70001*10^-25
4.59003*10^-25
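The benefit of the regrouping survives in plain double precision outside Mathematica too. This Python sketch (my illustration, not part of the answer) shows the naive form collapsing to 0.0 while the trig/hyperbolic form, evaluated left to right, keeps the answer:

```python
import math

x = 1.7e-25

# Naive form: both terms round to exactly 1.0 in double precision.
naive = math.cos(x) - math.exp(-2.7 * x)

# Regrouped form, evaluated left to right as Cos - Cosh + Sinh:
# cos and cosh (both exactly 1.0 here) cancel first, leaving sinh.
regrouped = math.cos(x) - math.cosh(2.7 * x) + math.sinh(2.7 * x)

print(naive)      # 0.0
print(regrouped)  # ~4.59e-25
```

Note that the order matters: if the sinh term were combined with cosh first, the tiny sinh value would be absorbed and the sum would again be 0.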
$endgroup$
answered Jan 20 at 6:20, edited Jan 21 at 20:27 by Vixillator
$begingroup$
why ExpToTrig?
$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. When evaluating so close to zero, this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's merely an approximation, albeit a good one.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ at the rational number $x = 1.7 \times 10^{-25}$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $\cos x$ and $\exp(-2.7 x)$ differ by exactly the rational number $4.59 \times 10^{-25}$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly, though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
$begingroup$
Do a series expansion:
Series[Cos[x] - Exp[-2.7 x], {x, 0, 1}]
(*
SeriesData[x, 0, {2.7}, 1, 2, 1]
*)
Then plug in $x = 1.7 \times 10^{-25}$ to get:
$$4.59 \times 10^{-25}$$
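Since $f(0) = 0$ and $f'(0) = 2.7$, the series approach amounts to evaluating $2.7\,x$. A one-line Python check of that leading-order value (my illustration, not part of the answer):

```python
# Leading-order term of f(x) = cos(x) - exp(-2.7 x) near 0 is f'(0) x = 2.7 x.
def f_series(x, c=2.7):
    return c * x  # first-order truncation, valid for |x| << 1

print(f_series(1.7e-25))  # ~4.59e-25
```

The neglected terms ($-x^2/2$ from the cosine, $-(cx)^2/2$ from the exponential) are of order $10^{-49}$ here, some 24 orders of magnitude below the answer.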
$endgroup$
answered Jan 20 at 5:09 by David G. Stork
$begingroup$
The simplest methods are usually the best. Try this code
N[Cos[x] - Exp[-x 27/10] /. x -> 17*^-26, 15] // InputForm
or a minor variation
With[{x = 17*^-26}, N[Cos[x] - Exp[-x 27/10], 15]] // InputForm
or define a function first
f = Cos[#] - Exp[-# 27/10] &; N[f[17*^-26], 15] // InputForm
All of these return the result
4.589999999999999999999998802094999`15.*^-25
You can get more digits by increasing the 15-digit precision.
For example, with 34-digit precision the result returned is
4.5899999999999999999999988020950000000000000000001613`34.*^-25
All those digits may be spurious, since the given numbers $2.7$ and $1.7 \times 10^{-25}$ seem to have only 2 digits of precision. In that case, the answer using 2 digits of precision is $4.6 \times 10^{-25}$.
Note: In this particular case, given that $x$ is small, $|x| \ll 1$, we get
$$\cos(x) \approx 1 - \frac{x^2}{2}, \quad e^{-c\,x} \approx 1 - c\,x, \quad \cos(x) - e^{-c\,x} \approx c\,x.$$
The simple answer is thus $2.7 \cdot 1.7 \times 10^{-25}$.
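As a cross-check of the extended-precision figures, the same computation can be done with Python's decimal module. This sketch is mine, not part of the answer; it leans on the small-$x$ note above to replace cos(x) by its truncated series, which is safe far beyond 50 digits at this argument:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # working precision, like N[..., 50]

x = Decimal("1.7e-25")
c = Decimal("2.7")

# For |x| << 1, cos(x) = 1 - x^2/2 with error ~ x^4/24 ~ 1e-101,
# negligible at 50 digits, so the truncation is safe here:
cos_x = 1 - x * x / 2
exp_term = (-c * x).exp()       # Decimal.exp rounds to the context precision

result = cos_x - exp_term
print(result)  # agrees with the answers above to roughly 24 digits
```

As in Mathematica, the 24 cancelled leading digits eat into the 50 working digits, so only about 25 digits of the printed result are trustworthy.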
$endgroup$
add a comment |
$begingroup$
This is another machine-precision solution. In @Vixillator's excellent answer, we were lucky that Mathematica put the Sinh term last. I say lucky, because the order was determined alphabetically, not for numerical reasons. If the Sinh term is first or second, the sum is 0 (see below†).
The difficulty of computing $1 + u$ for small $u$ is one reason we have expm1(x) = exp(x) - 1 and log1p(x) = log(1 + x). These, or their equivalents, are available in Mathematica through the undocumented functions:
Internal`Expm1[x]
Internal`Logp1[x]
To solve a problem like the OP's, the basic goal is to rewrite a function f[x] for which f[0] == 0 in terms of functions that vanish at x == 0. We can use the identities below to rewrite the OP's function:
Cos[z] == 1 - 2 Sin[z/2]^2
Exp[z] == 1 + Internal`Expm1[z]
The constant terms introduced in this process will cancel out since f[0] == 0.
toVanishingFns = # /. { (* rewrite Cos and Exp *)
     Cos[z_] -> 1 - 2 Sin[z/2]^2,
     Power[E, z_] :> Internal`Expm1[z] + 1
     } &;
With[{cleanupRule = {0. -> 0, 1. -> 1, -1. -> -1}},
 cleanup = # /. cleanupRule &]; (* cleans up trivial floating point coefficients *)
Block[{x},
 ff[x_] = Cos[x] - Exp[-2.7 x] // toVanishingFns // cleanup
 ]
(* -Internal`Expm1[-2.7 x] - 2 Sin[x/2]^2 *)
ff[1.7*^-25]
(* 4.59*10^-25 *)
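The same rewrite works verbatim in any language with an expm1 function. Here is a Python transcription of ff (my port, using math.expm1 in place of Internal`Expm1):

```python
import math

# f(x) = cos(x) - exp(-2.7 x), rewritten in terms that vanish at x == 0:
#   cos(x)   -> 1 - 2 sin(x/2)^2
#   exp(-cx) -> 1 + expm1(-cx)
# The two constant 1s cancel symbolically, so no digits are lost numerically.
def ff(x, c=2.7):
    return -math.expm1(-c * x) - 2.0 * math.sin(x / 2.0) ** 2

print(ff(1.7e-25))                                    # ~4.59e-25
print(math.cos(1.7e-25) - math.exp(-2.7 * 1.7e-25))   # 0.0, the naive form
```

Both expm1 and sin return their tiny arguments essentially unchanged near zero, so the rewritten form retains full relative precision where the naive form loses everything.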
†In these orders, the Cosh and Cos terms, which are exactly 1.`, don't cancel out first, leaving the Sinh term; instead, the Cosh and Sinh terms first sum to exactly -1.`, since the Sinh term is less than $MachineEpsilon.
Sinh[2.7` x] - Cosh[2.7` x] + Cos[x] // Hold;
% /. x -> 1.7*^-25 // ReleaseHold
-Cosh[2.7` x] + Sinh[2.7` x] + Cos[x] // Hold;
% /. x -> 1.7*^-25 // ReleaseHold
(*
0.
0.
*)
$endgroup$
add a comment |
$begingroup$
This answer is a little more complicated than necessary, but I'm documenting my attempt here, nonetheless.
I have constructed a function RationalizedN which attempts to rationalize all inexact numbers in the expression before numerical evaluation occurs, and then computes the value with the requested precision using N:
ClearAll@RationalizedN;
SetAttributes[RationalizedN, HoldFirst];
RationalizedN[expr_, n_: $MachinePrecision] :=
N[FixedPoint[
ReleaseHold@*ReplaceAll[
x_ :> RuleCondition[
Hold@Evaluate@Rationalize[x, 0],
InexactNumberQ@Unevaluated@x]],
Hold@expr], n];
This essentially automates the changes @BillWatts performed in his answer. Keeping parts of expressions unevaluated while the expression is being modified is somewhat delicate, and involves undocumented RuleCondition usage.
Now we can define f normally and evaluate it using this function:
ClearAll@f;
f[x_] := Cos[x] - Exp[-2.7 x];
RationalizedN[f[1.7*^-25], 50]
4.5900000000000000217190516840950001027706331277536*10^-25
The minor difference from the other results stems from the fact that inexact numbers such as 1.7*^-n, when represented in binary floating point, are usually not exactly equal to the "intuitive" rational form $17/10^{n}$, and are not necessarily Rationalized to the expected form.
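The representation issue described above is easy to see in Python (my illustration, not part of the answer): the exact rational value of a stored binary double is not the decimal-looking fraction one might expect.

```python
from decimal import Decimal
from fractions import Fraction

intended = Fraction(17, 10**26)   # the "intuitive" rational behind 1.7*^-25
as_double = Fraction(1.7e-25)     # exact rational value of the stored double

print(as_double == intended)                     # False: the double differs
print(Fraction(Decimal("1.7e-25")) == intended)  # True: a decimal literal is exact
```

Since $17/10^{26}$ has a factor of 5 in its denominator, it cannot be a dyadic rational, so no binary float equals it exactly; rationalizing the float therefore need not recover the intended fraction.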
We can see what's actually going on by placing Echo in the right places of the FixedPoint function argument:
$\text{Hold}[f(\text{1.7*^-25})]$
$\rightarrow$
$\text{Hold}\left[f\left(\text{Hold}\left[\frac{1}{5882352941176470560401060}\right]\right)\right]$
$\rightarrow$
$\cos\left(\text{Hold}\left[\frac{1}{5882352941176470560401060}\right]\right) - e^{-2.7\,\text{Hold}\left[\frac{1}{5882352941176470560401060}\right]}$
$\rightarrow$
$\cos\left(\text{Hold}\left[\frac{1}{5882352941176470560401060}\right]\right) - e^{\text{Hold}\left[-\frac{27}{10}\right]\,\text{Hold}\left[\frac{1}{5882352941176470560401060}\right]}$
$\rightarrow$
$\cos\left(\frac{1}{5882352941176470560401060}\right) - \frac{1}{e^{27/58823529411764705604010600}}$
$\rightarrow$
4.5900000000000000217190516840950001027706331277536*10^-25
$endgroup$
add a comment |
$begingroup$
Your constants 1.7 and 2.7 have too little precision for the intermediates generated during the computation to have adequate final precision.
Precision[1.7 * 10^-25]
(* MachinePrecision *)
Precision[1.7] (* The exponent doesn't matter here. *)
(* MachinePrecision *)
N[MachinePrecision]
(* 15.9546 *)
Your Mathematica instance may have a slightly different value of MachinePrecision.
First, let's get the correct answer so we can compare with it. We do this by eliminating floating point. (That is, we switch our number representation from one that implicitly represents intervals to one that represents exact numbers.)
f[x_] := Cos[x] - Exp[-27/10 x]
f[17/10 *10^-25]
(* -E^(-459/1000000000000000000000000000) + Cos[17/100000000000000000000000000] *)
By considering their power series, we expect both of these terms to have decimal representations which are runs of 0s or 9s separating small islands of other digits. We expect the runs in the exponential to be a little shorter than the denominators. In the cosine, we expect the first run to be about twice as long as the denominator. Let's see.
N[-(1/E^(459/1000000000000000000000000000)), 100]
N[Cos[17/100000000000000000000000000], 100]
(* -0.99999999999999999999999954100000000000000000000010534049999999999999999998388290350000000000000000185 *)
(* 0.99999999999999999999999999999999999999999999999998555000000000000000000000000000000000000000000000003 *)
So those meet expectations. Then we can do the subtraction, getting catastrophic cancellation.
N[f[17/10*10^-25], 100]
(* 4.589999999999999999999998802095000000000000000000161170964999999999999999981853635932916666666666668*10^-25 *)
This catastrophic cancellation of the leading 24 digits is our problem. Since 24 is greater than MachinePrecision, when Mathematica does the subtraction, the MachinePrecision leading digits cancel, leaving 0., a floating point number representing the interval $\left[-\frac{1}{2} \times 10^{-\text{MachinePrecision}}, \frac{1}{2} \times 10^{-\text{MachinePrecision}}\right]$ (possibly excluding either or both endpoints, depending on implementation details of floating point representations of intervals straddling zero). The true answer is in that interval, so the printed result is accurate.
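The digit-counting argument can be made concrete with a small Python check (my illustration), using the fact that $f(x) \approx 2.7\,x$:

```python
import math

x = 1.7e-25

# f(x) ~ 2.7 x, so about -log10(2.7 x) ~ 24.3 leading digits cancel in
# the subtraction, while a double carries only log10(2^53) ~ 15.95 digits.
cancelled_digits = -math.log10(2.7 * x)
machine_digits = 53 * math.log10(2)   # Mathematica's MachinePrecision

print(round(cancelled_digits, 2), round(machine_digits, 2))  # 24.34 15.95
print(cancelled_digits > machine_digits)                     # True
```

Because more digits cancel than a machine double carries, every significant digit is lost and only the zero interval remains.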
Now we know that we should get $4.589\dots \times 10^{-25}$. Let's see what we can do to make that happen.
We can replace the floating point numbers in the definition and the argument to the function.
Clear[f];
f[x_] := Cos[x] - E^(-27/10 x)
f[17/10*10^-25]
N[f[17/10*10^-25]]
N[f[17/10*10^-25], 2]
N[f[17/10*10^-25], 24]
N[f[17/10*10^-25], 25]
(* -(1/E^(459/1000000000000000000000000000)) + Cos[17/100000000000000000000000000] *)
(* 0. *)
(* 4.6*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
Here, we see N experience catastrophic cancellation when we allow MachinePrecision for intermediates but do not specify a precision goal for the result. When we explicitly set a precision goal for the result, N detects that the result is not zero and gives us the requested precision. If our requested precision doesn't reach to the next island, then we get the rounded result $4.590\dots \times 10^{-25}$.
If we set the precision on the constant in f, there is no improvement.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`100 x)
f[1.7*10^-25]
(* 0. *)
If we set the precision on the argument, there is no improvement.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
f[1.7`100*10^-25]
(* 0. *)
If we set the precision of both,
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
f[1.7`24*10^-25]
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
f[1.7`25*10^-25]
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
we get precision limited by our specifications.
So maybe we wonder: Is there something I can do that leaves the definition of f unaltered, but allows me to improve the precision of evaluation when the argument produces catastrophic cancellation? No. We know that 100 digits of intermediate precision is sufficient to get a result different from zero.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 0. *)
(* 0. *)
The precision of 2.7 is too low. We have to improve the quality of the constant in the definition of $f$ and then, to preserve those gains, we have to improve the quality of the constant in the argument.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
... And if you don't want it rounded to these trailing zeroes, both have to be precise enough.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.589999999999999999999999*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
$endgroup$
add a comment |
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "387"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmathematica.stackexchange.com%2fquestions%2f189857%2fhow-to-display-very-small-numbers-in-mathematica%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Can also use exact numbers.
f[x_] = Cos[x] - E^(-27 x/10);
f[17 10^-26]//N[#,50]&
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
Another way is to increase the floating point digits.
f[x_] = Cos[x] - E^(-2.7`50 x)
f[1.7`50 10^-25]
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
These methods pretty much work for any calculation that needs extra accuracy over what machine precision provides.
$endgroup$
add a comment |
$begingroup$
Can also use exact numbers.
f[x_] = Cos[x] - E^(-27 x/10);
f[17 10^-26]//N[#,50]&
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
Another way is to increase the floating point digits.
f[x_] = Cos[x] - E^(-2.7`50 x)
f[1.7`50 10^-25]
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
These methods pretty much work for any calculation that needs extra accuracy over what machine precision provides.
$endgroup$
add a comment |
$begingroup$
Can also use exact numbers.
f[x_] = Cos[x] - E^(-27 x/10);
f[17 10^-26]//N[#,50]&
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
Another way is to increase the floating point digits.
f[x_] = Cos[x] - E^(-2.7`50 x)
f[1.7`50 10^-25]
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
These methods pretty much work for any calculation that needs extra accuracy over what machine precision provides.
$endgroup$
Can also use exact numbers.
f[x_] = Cos[x] - E^(-27 x/10);
f[17 10^-26]//N[#,50]&
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
Another way is to increase the floating point digits.
f[x_] = Cos[x] - E^(-2.7`50 x)
f[1.7`50 10^-25]
(*4.5899999999999999999999988020950000000000000000002*10^-25*)
These methods pretty much work for any calculation that needs extra accuracy over what machine precision provides.
edited Jan 20 at 21:10
answered Jan 20 at 6:41
Bill WattsBill Watts
3,3861620
3,3861620
add a comment |
add a comment |
$begingroup$
First convert the expression to trigonometric form:
y = Cos[x] - Exp[-2.7*x] // ExpToTrig
Cos[x] - Cosh[2.7 x] + Sinh[2.7 x]
y /. x -> 1.7*10^-25
4.59*10^-25
This is the exact solution.
Note: Building on the interesting comments on whether the above is the exact solution, it is worth noting that there is an exact relationship between the three numbers: 1.7, 2.7 and 4.59. Actually 1.7x2.7=4.59. That is why 4.59*10^-25 is the exact solution.
Also, the solution approaches 4.59*10^-25 exactly (and symmetrically) from above and below, for example:
y /. x -> 1.69999*10^-25
4.58997*10^-25
y /. x -> 1.70001*10^-25
4.59003*10^-25
$endgroup$
1
$begingroup$
whyExpToTrig
?
$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
|
show 1 more comment
$begingroup$
First convert the expression to trigonometric form:
y = Cos[x] - Exp[-2.7*x] // ExpToTrig
Cos[x] - Cosh[2.7 x] + Sinh[2.7 x]
y /. x -> 1.7*10^-25
4.59*10^-25
This is the exact solution.
Note: Building on the interesting comments on whether the above is the exact solution, it is worth noting that there is an exact relationship between the three numbers: 1.7, 2.7 and 4.59. Actually 1.7x2.7=4.59. That is why 4.59*10^-25 is the exact solution.
Also, the solution approaches 4.59*10^-25 exactly (and symmetrically) from above and below, for example:
y /. x -> 1.69999*10^-25
4.58997*10^-25
y /. x -> 1.70001*10^-25
4.59003*10^-25
$endgroup$
1
$begingroup$
whyExpToTrig
?
$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
|
show 1 more comment
$begingroup$
First convert the expression to trigonometric form:
y = Cos[x] - Exp[-2.7*x] // ExpToTrig
Cos[x] - Cosh[2.7 x] + Sinh[2.7 x]
y /. x -> 1.7*10^-25
4.59*10^-25
This is the exact solution.
Note: Building on the interesting comments on whether the above is the exact solution, it is worth noting that there is an exact relationship between the three numbers: 1.7, 2.7 and 4.59. Actually 1.7x2.7=4.59. That is why 4.59*10^-25 is the exact solution.
Also, the solution approaches 4.59*10^-25 exactly (and symmetrically) from above and below, for example:
y /. x -> 1.69999*10^-25
4.58997*10^-25
y /. x -> 1.70001*10^-25
4.59003*10^-25
$endgroup$
First convert the expression to trigonometric form:
y = Cos[x] - Exp[-2.7*x] // ExpToTrig
Cos[x] - Cosh[2.7 x] + Sinh[2.7 x]
y /. x -> 1.7*10^-25
4.59*10^-25
This is the exact solution.
Note: Building on the interesting comments on whether the above is the exact solution, it is worth noting that there is an exact relationship between the three numbers: 1.7, 2.7 and 4.59. Actually 1.7x2.7=4.59. That is why 4.59*10^-25 is the exact solution.
Also, the solution approaches 4.59*10^-25 exactly (and symmetrically) from above and below, for example:
y /. x -> 1.69999*10^-25
4.58997*10^-25
y /. x -> 1.70001*10^-25
4.59003*10^-25
edited Jan 21 at 20:27
answered Jan 20 at 6:20
VixillatorVixillator
6197
6197
1
$begingroup$
whyExpToTrig
?
$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
|
show 1 more comment
1
$begingroup$
whyExpToTrig
?
$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
1
1
$begingroup$
why
ExpToTrig
?$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
why
ExpToTrig
?$endgroup$
– Jerry
Jan 20 at 10:48
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
It is a useful tool derived from Euler's formula: en.wikipedia.org/wiki/Euler's_formula Ref: mathworld.wolfram.com/HyperbolicFunctions.html
$endgroup$
– Vixillator
Jan 20 at 11:18
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
Note that this is simply multiplication by the derivative at zero, which is 2.7. when evaluating so close to zero this is a quick way to get the answer yourself.
$endgroup$
– user2520938
Jan 20 at 11:19
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
This is not the exact solution. It's mere an approximation, albeit good enough.
$endgroup$
– infinitezero
Jan 21 at 9:18
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
$begingroup$
It would be quite surprising to me if $f(x)$ on the rational number $x = 1.7 times 10^-25$ turned out to be a rational number. I suppose it's possible that the transcendental numbers $cos x$ and $exp(-2.7 x)$ differ by exactly the rational number $4.59 times 10^-25$. Mathematica disagrees, though. Possibly I'm not interpreting your meaning of "exact solution" correctly though. I'm assuming "exact" implies we treat $f(x)$ as an exact mathematical function.
$endgroup$
– Michael E2
Jan 21 at 20:47
|
show 1 more comment
$begingroup$
Do a series expansion:
Series[Cos[x] - Exp[-2.7 x], x, 0, 1]
(*
SeriesData[x, 0, 2.7, 1, 2, 1]
*)
Then plug in $x = 1.7 times 10^-25$ to get:
$$4.59 times 10^-25$$
$endgroup$
add a comment |
$begingroup$
Do a series expansion:
Series[Cos[x] - Exp[-2.7 x], x, 0, 1]
(*
SeriesData[x, 0, 2.7, 1, 2, 1]
*)
Then plug in $x = 1.7 times 10^-25$ to get:
$$4.59 times 10^-25$$
$endgroup$
add a comment |
$begingroup$
Do a series expansion:
Series[Cos[x] - Exp[-2.7 x], x, 0, 1]
(*
SeriesData[x, 0, 2.7, 1, 2, 1]
*)
Then plug in $x = 1.7 times 10^-25$ to get:
$$4.59 times 10^-25$$
$endgroup$
Do a series expansion:
Series[Cos[x] - Exp[-2.7 x], x, 0, 1]
(*
SeriesData[x, 0, 2.7, 1, 2, 1]
*)
Then plug in $x = 1.7 times 10^-25$ to get:
$$4.59 times 10^-25$$
answered Jan 20 at 5:09
David G. StorkDavid G. Stork
24.4k22153
24.4k22153
add a comment |
add a comment |
$begingroup$
The simplest methods are usually the best. Try this code
N[Cos[x] - Exp[-x 27/10] /. x -> 17*^-26, 15] // InputForm
or a minor variation
With[x = 17*^-26, N[Cos[x] - Exp[-x 27/10], 15]] // InputForm
or define a function first
f = Cos[#] - Exp[-# 27/10] &; N[f[17*^-26], 15] // InputForm
All of these return the result
4.589999999999999999999998802094999`15.*^-25
You can get more digits by increasing the 15 digits precision.
For example, with 34 digits precision the result returned is
4.5899999999999999999999988020950000000000000000001613`34.*^-25
All those digits may be spurious since the given numbers
$2.7$ and $1.7times 10^-25$ seem to have only 2 digits of precision. In that case, the answer using 2 digits of precision is $4.6times 10^-25$.
Note: In this particular case, given that $x$ is small, $,|x|<<1,,$
then we get
$$cos(x) approx 1 - fracx^22, quad e^-c,x approx 1 - cx,quad cos(x) - e^-c,x approx c,x.$$ The simple answer is thus $2.7 cdot 1.7times 10^-25$.
$endgroup$
edited Jan 21 at 0:19
answered Jan 20 at 6:52
Somos
89219
add a comment |
$begingroup$
This is another machine-precision solution. In @Vixllator's excellent answer, we were lucky that Mathematica put the Sinh
term last. I say lucky, because the order was determined by alphabetical order, not by numerical reasons. If the Sinh
term is first or second, the sum is 0 (see below†).
The difficulty of computing $1 + u$ for small $u$ is one reason we have expm1(x) = exp(x)-1
and log1p(x) = log(1+x)
. These, or their equivalents, are available in Mathematica through the undocumented functions:
Internal`Expm1[x]
Internal`Logp1[x]
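A minimal illustration of the cancellation these functions avoid (a sketch; remember that Internal` functions are undocumented and may change between versions):
Exp[1.*^-20] - 1
(* 0. *)
Internal`Expm1[1.*^-20]
(* 1.*10^-20 *)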
To solve a problem like the OP's, the basic goal is to rewrite a function f[x]
for which f[0] == 0
in terms of functions that vanish at x == 0
. We can use the identities below to rewrite the OP's function:
Cos[z] == 1 - 2 Sin[z/2]^2
Exp[z] == 1 + Internal`Expm1[z]
The constant terms introduced in this process will cancel out since f[0] == 0
.
toVanishingFns = # /. { (* rewrite Cos and Exp *)
Cos[z_] -> 1 - 2 Sin[z/2]^2,
Power[E, z_] :> Internal`Expm1[z] + 1
} &;
With[{cleanupRule = {0. -> 0, 1. -> 1, -1. -> -1}},
cleanup = # /. cleanupRule &]; (* cleans up trivial floating-point coefficients *)
Block[{x},
ff[x_] = Cos[x] - Exp[-2.7 x] // toVanishingFns // cleanup
]
(* -Internal`Expm1[-2.7 x] - 2 Sin[x/2]^2 *)
ff[1.7*^-25]
(* 4.59*10^-25 *)
†In these orders, the Cosh
and Cos
terms, which are 1.`
exactly, don't cancel out first, leaving the Sinh
term; instead, the Cosh
and Sinh
terms first sum to -1.`
exactly, since the Sinh
term is less than $MachineEpsilon
.
Sinh[2.7` x] - Cosh[2.7` x] + Cos[x] // Hold;
% /. x -> 1.7*^-25 // ReleaseHold
-Cosh[2.7` x] + Sinh[2.7` x] + Cos[x] // Hold;
% /. x -> 1.7*^-25 // ReleaseHold
(*
0.
0.
*)
$endgroup$
edited Jan 20 at 23:37
answered Jan 20 at 20:30
Michael E2
147k12197471
add a comment |
$begingroup$
This answer is a little more complicated than necessary, but I'm documenting my attempt here, nonetheless.
I have constructed a function RationalizedN
which attempts to rationalize all inexact numbers in the expression evaluation before numerical calculations occur, and then compute the value with requested precision using N
:
ClearAll@RationalizedN;
SetAttributes[RationalizedN, HoldFirst];
RationalizedN[expr_, n_: $MachinePrecision] :=
N[FixedPoint[
ReleaseHold@*ReplaceAll[
x_ :> RuleCondition[
Hold@Evaluate@Rationalize[x, 0],
InexactNumberQ@Unevaluated@x]],
Hold@expr], n];
This essentially provides automation for the changes @BillWatts performed in his answer.
Keeping parts of expressions unevaluated while the expression is being modified is somewhat delicate, and includes undocumented RuleCondition
usage.
Now we can define f
normally and evaluate it using this function:
ClearAll@f;
f[x_] := Cos[x] - Exp[-2.7 x];
RationalizedN[f[1.7*^-25], 50]
4.5900000000000000217190516840950001027706331277536*10^-25
The minor difference in result from the others stems from the fact that inexact numbers such as 1.7*^-n
, when represented in binary floating-point form in computers, are usually not exactly the same as the "intuitive" rational form such as $17/10^n$, and are not necessarily Rationalized to the expected form.
We can see what's actually going on by inserting Echo
in the right places of the FixedPoint
function argument:
Hold[f[1.7*^-25]]
$\rightarrow$
Hold[f[Hold[1/5882352941176470560401060]]]
$\rightarrow$
Cos[Hold[1/5882352941176470560401060]] - E^(-2.7 Hold[1/5882352941176470560401060])
$\rightarrow$
Cos[Hold[1/5882352941176470560401060]] - E^(Hold[-27/10] Hold[1/5882352941176470560401060])
$\rightarrow$
Cos[1/5882352941176470560401060] - 1/E^(27/58823529411764705604010600)
$\rightarrow$
4.5900000000000000217190516840950001027706331277536*10^-25
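To see the binary-float effect in isolation (a sketch), compare the rationalized machine number with the "intuitive" rational:
Rationalize[1.7*^-25, 0] == 17/10^26
(* False *)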
$endgroup$
edited Jan 21 at 4:40
answered Jan 20 at 10:42
kirma
10.1k13058
add a comment |
$begingroup$
Your constants 1.7
and 2.7
have too little precision for the intermediates generated during the computation to have adequate final precision.
Precision[1.7 * 10^-25]
(* MachinePrecision *)
Precision[1.7] (* The exponent doesn't matter here. *)
(* MachinePrecision *)
N[MachinePrecision]
(* 15.9546 *)
Your Mathematica instance may have a slightly different value of MachinePrecision.
First, let's get the correct answer so we can compare with it. We do this by eliminating floating point. (That is, we switch our number representation from one that implicitly represents intervals to one that represents exact numbers.)
f[x_] := Cos[x] - Exp[-27/10 x]
f[17/10 *10^-25]
(* -E^(-459/1000000000000000000000000000) + Cos[17/100000000000000000000000000] *)
By considering their power series, we expect both of these terms to have decimal representations which are runs of 0s or 9s separating small islands of other digits. We expect the runs in the exponential to be a little shorter than the denominators. In the cosine, we expect the first run to be about twice as long as the denominator. Let's see.
N[-(1/E^(459/1000000000000000000000000000)), 100]
N[Cos[17/100000000000000000000000000], 100]
(* -0.99999999999999999999999954100000000000000000000010534049999999999999999998388290350000000000000000185 *)
(* 0.99999999999999999999999999999999999999999999999998555000000000000000000000000000000000000000000000003 *)
So those meet expectations. Then we can do the subtraction, getting catastrophic cancellation.
N[f[17/10*10^-25], 100]
(* 4.589999999999999999999998802095000000000000000000161170964999999999999999981853635932916666666666668*10^-25 *)
This catastrophic cancellation of the leading 24 digits is our problem. Since 24 is greater than MachinePrecision, when Mathematica does the subtraction, the leading MachinePrecision digits cancel, leaving 0.
, a floating-point number representing the interval $\left[ \frac{-1}{2} \times 10^{-\text{MachinePrecision}}, \frac{1}{2} \times 10^{-\text{MachinePrecision}} \right]$ (possibly excluding either or both endpoints, depending on implementation details of floating-point representations of intervals straddling zero). The true answer is in that interval, so the printed result is accurate.
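The cancellation is easy to reproduce in isolation (a sketch): subtracting two machine numbers that agree in all their leading MachinePrecision digits leaves nothing.
1. - (1. - 4.59*^-25)
(* 0. *)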
Now we know that we should get $4.589\dots \times 10^{-25}$. Let's see what we can do to make that happen.
We can replace the floating point numbers in the definition and the argument to the function.
Clear[f];
f[x_] := Cos[x] - E^(-27/10 x)
f[17/10*10^-25]
N[f[17/10*10^-25]]
N[f[17/10*10^-25], 2]
N[f[17/10*10^-25], 24]
N[f[17/10*10^-25], 25]
(* -(1/E^(459/1000000000000000000000000000)) + Cos[17/100000000000000000000000000] *)
(* 0. *)
(* 4.6*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)

Here, we see N experience catastrophic cancellation when we allow MachinePrecision for intermediates but do not specify a precision goal for the result. When we explicitly set a precision goal for the result, N detects that the result is not zero and gives us the requested precision. If our requested precision doesn't reach the next island, then we get the rounded result "$4.590\dots \times 10^{-25}$".

If we set the precision on the constant in f, there is no improvement.

Clear[f];
f[x_] := Cos[x] - E^(-2.7`100 x)
f[1.7*10^-25]
(* 0. *)

If we set the precision on the argument, there is no improvement.

Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
f[1.7`100*10^-25]
(* 0. *)

If we set the precision of both,

Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
f[1.7`24*10^-25]
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
f[1.7`25*10^-25]
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)

we get precision limited by our specifications.
So maybe we wonder: Is there something I can do that leaves the definition of f unaltered, but allows me to improve the precision of evaluation when the argument produces catastrophic cancellation? No. We know that 100 digits of intermediate precision is sufficient to get a result different from zero.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 0. *)
(* 0. *)
The precision of 2.7
is too low. We have to improve the quality of the constant in the definition of $f$ and then, to preserve those gains, we have to improve the quality of the constant in the argument.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
... And if you don't want it rounded to these trailing zeroes, both have to be precise enough.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.589999999999999999999999*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
$endgroup$
add a comment |
$begingroup$
Your constants 1.7
and 2.7
have too little precision for the intermediates generated during the computation to have adequate final precision.
Precision[1.7 * 10^-25]
(* MachinePrecision *)
Precision[1.7] (* The exponent doesn't matter here. *)
(* MachinePrecision *)
N[MachinePrecision]
(* 15.9546 *)
Your Mathematica instance may have slightly different value of MachinePrecision
.
First, let's get the correct answer so we can compare with it. We do this by eliminating floating point. (That is, we switch our number representation from one that implicitly represents intervals to one that represents exact numbers.)
f[x_] := Cos[x] - Exp[-27/10 x]
f[17/10 *10^-25]
(* -E^(-459/1000000000000000000000000000) + Cos[17/100000000000000000000000000] *)
By considering their power series, we expect both of these terms to have decimal representations which are runs of 0
s or 9
s separating small islands of other digits. We expect the runs in the exponential to be a little shorter than the denominators. In the cosine, we expect the first run to be about twice as long as the denominator. Let's see.
N[-(1/E^(459/1000000000000000000000000000)), 100]
N[Cos[17/100000000000000000000000000], 100]
(* -0.99999999999999999999999954100000000000000000000010534049999999999999999998388290350000000000000000185 *)
(* 0.99999999999999999999999999999999999999999999999998555000000000000000000000000000000000000000000000003 *)
So those meet expectations. Then we can do the subtraction, getting catastrophic cancellation.
N[f[17/10*10^-25], 100]
(* 4.589999999999999999999998802095000000000000000000161170964999999999999999981853635932916666666666668*10^-25 *)
This catastrophic cancellation of the leading 24 digits is our problem. Since 24 is greater than MachinePrecision
, when Mathematica does the subtraction, the Machine Precision
leading digits cancel, leaving 0.
,a floating point number representing the interval $left[ frac-12 * 10^textMachinePrecision, frac12 * 10^textMachinePrecision right]$ (possibly excluding either or both endpoints, depending on implementation details of floating point representations of intervals straddling zero). The true answer is in that interval, so the printed result is accurate.
Now we know that we should get $4.589dots times 10^-25$. Let's see what we can do to make that happen.
We can replace the floating point numbers in the definition and the argument to the function.
Clear[f];
f[x_] := Cos[x] - E^(-27/10 x)
f[17/10*10^-25]
N[f[17/10*10^-25]]
N[f[17/10*10^-25], 2]
N[f[17/10*10^-25], 24]
N[f[17/10*10^-25], 25]
(* -(1/E^(459/1000000000000000000000000000)) + Cos[17/100000000000000000000000000] *)
(* 0. *)
(* 4.6*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)Here, we see
N
experience catastrophic cancellation when we allowMachinePrecision
for intermediates but do not specify a precision goal for the result. When we explicitly set a precision goal for the result,N
detects that the result is not zero and gives us the requested precision. If our requested precision doesn't reach to the next island, then we get the rounded result "$4.590dots times 10^-25$".If we set the precision on the constant in
f
, there is no improvement.Clear[f];
f[x_] := Cos[x] - E^(-2.7`100 x)
f[1.7*10^-25]
(* 0. *)If we set the precision on the argument, there is no improvement.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
f[1.7`100*10^-25]
(* 0. *)If we set the precision of both,
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
f[1.7`24*10^-25]
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
f[1.7`25*10^-25]
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)we get precision limited by our specifications.
So maybe we wonder: Is there something I can do that leaves the definition of f
unaltered, but allows me to improve the precision of evaluation when the argument produces catastrophic cancellation? No. We know that 100
digits of intermediate precision is sufficient to get z result different from zero.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 0. *)
(* 0. *)
The precision of 2.7
is too low. We have to improve the quality of the constant in the definition of $f$ and then, to preserve those gains, we have to improve the quality of the constant in the argument.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
... And if you don't want it rounded to these trailing zeroes, both have to be precise enough.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0.
(* 4.589999999999999999999999*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
$endgroup$
add a comment |
$begingroup$
Your constants 1.7
and 2.7
have too little precision for the intermediates generated during the computation to have adequate final precision.
Precision[1.7 * 10^-25]
(* MachinePrecision *)
Precision[1.7] (* The exponent doesn't matter here. *)
(* MachinePrecision *)
N[MachinePrecision]
(* 15.9546 *)
Your Mathematica instance may have slightly different value of MachinePrecision
.
First, let's get the correct answer so we can compare with it. We do this by eliminating floating point. (That is, we switch our number representation from one that implicitly represents intervals to one that represents exact numbers.)
f[x_] := Cos[x] - Exp[-27/10 x]
f[17/10 *10^-25]
(* -E^(-459/1000000000000000000000000000) + Cos[17/100000000000000000000000000] *)
By considering their power series, we expect both of these terms to have decimal representations which are runs of 0
s or 9
s separating small islands of other digits. We expect the runs in the exponential to be a little shorter than the denominators. In the cosine, we expect the first run to be about twice as long as the denominator. Let's see.
N[-(1/E^(459/1000000000000000000000000000)), 100]
N[Cos[17/100000000000000000000000000], 100]
(* -0.99999999999999999999999954100000000000000000000010534049999999999999999998388290350000000000000000185 *)
(* 0.99999999999999999999999999999999999999999999999998555000000000000000000000000000000000000000000000003 *)
So those meet expectations. Then we can do the subtraction, getting catastrophic cancellation.
N[f[17/10*10^-25], 100]
(* 4.589999999999999999999998802095000000000000000000161170964999999999999999981853635932916666666666668*10^-25 *)
This catastrophic cancellation of the leading 24 digits is our problem. Since 24 is greater than MachinePrecision
, when Mathematica does the subtraction, the Machine Precision
leading digits cancel, leaving 0.
,a floating point number representing the interval $left[ frac-12 * 10^textMachinePrecision, frac12 * 10^textMachinePrecision right]$ (possibly excluding either or both endpoints, depending on implementation details of floating point representations of intervals straddling zero). The true answer is in that interval, so the printed result is accurate.
Now we know that we should get $4.589dots times 10^-25$. Let's see what we can do to make that happen.
Your constants 1.7 and 2.7 have too little precision for the intermediates generated during the computation to have adequate final precision.
Precision[1.7 * 10^-25]
(* MachinePrecision *)
Precision[1.7] (* The exponent doesn't matter here. *)
(* MachinePrecision *)
N[MachinePrecision]
(* 15.9546 *)
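The figure 15.9546 is not specific to Mathematica: it is the decimal-digit equivalent of an IEEE binary64 significand. As a quick cross-check outside Mathematica (a Python sketch, not Wolfram Language):

```python
import math
import sys

# IEEE binary64 (the usual machine double) carries a 53-bit significand.
# 53 * log10(2) ~ 15.9546 decimal digits -- Mathematica's MachinePrecision.
digits = sys.float_info.mant_dig * math.log10(2)
print(round(digits, 4))  # 15.9546
```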
Your Mathematica instance may report a slightly different value of MachinePrecision.
First, let's get the correct answer so we can compare with it. We do this by eliminating floating point. (That is, we switch our number representation from one that implicitly represents intervals to one that represents exact numbers.)
f[x_] := Cos[x] - Exp[-27/10 x]
f[17/10 *10^-25]
(* -E^(-459/1000000000000000000000000000) + Cos[17/100000000000000000000000000] *)
By considering their power series, we expect both of these terms to have decimal representations that are runs of 0s or 9s separating small islands of other digits. We expect the runs in the exponential to be a little shorter than the denominators. In the cosine, we expect the first run to be about twice as long as the denominator. Let's see.
N[-(1/E^(459/1000000000000000000000000000)), 100]
N[Cos[17/100000000000000000000000000], 100]
(* -0.99999999999999999999999954100000000000000000000010534049999999999999999998388290350000000000000000185 *)
(* 0.99999999999999999999999999999999999999999999999998555000000000000000000000000000000000000000000000003 *)
So those meet expectations. Then we can do the subtraction, getting catastrophic cancellation.
N[f[17/10*10^-25], 100]
(* 4.589999999999999999999998802095000000000000000000161170964999999999999999981853635932916666666666668*10^-25 *)
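The same exact-arithmetic route can be cross-checked outside Mathematica. The sketch below (Python; the helper names and the number of series terms are my choices) sums a few Taylor terms of each piece with exact rationals and recovers the leading $4.59 \times 10^{-25}$:

```python
from fractions import Fraction
from math import factorial

def cos_partial(x, terms=5):
    # Exact rational partial sum of cos(x) = sum_k (-1)^k x^(2k) / (2k)!
    return sum(Fraction((-1) ** k) * x ** (2 * k) / factorial(2 * k)
               for k in range(terms))

def exp_partial(x, terms=5):
    # Exact rational partial sum of exp(x) = sum_k x^k / k!
    return sum(x ** k / Fraction(factorial(k)) for k in range(terms))

x = Fraction(17, 10 ** 26)            # 1.7 * 10^-25, exactly
f = cos_partial(x) - exp_partial(Fraction(-27, 10) * x)

# The difference is dominated by its leading term 2.7 x = 4.59 * 10^-25.
print(float(f * 10 ** 25))  # ~ 4.59
```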
This catastrophic cancellation of the leading 24 digits is our problem. Since 24 is greater than MachinePrecision, when Mathematica does the subtraction, the MachinePrecision leading digits cancel, leaving 0., a floating point number representing the interval $\left[ \frac{-1}{2 \times 10^{\text{MachinePrecision}}}, \frac{1}{2 \times 10^{\text{MachinePrecision}}} \right]$ (possibly excluding either or both endpoints, depending on implementation details of floating point representations of intervals straddling zero). The true answer is in that interval, so the printed result is accurate.
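Nothing about this cancellation is special to Mathematica; any fixed-precision binary float behaves the same way. As a sketch in Python (the coefficient 2.7 comes from expanding $\cos(x) - e^{-2.7x} \approx 2.7x$ for small $x$, not from any Wolfram function):

```python
import math

x = 1.7e-25

# Both terms round to exactly 1.0 in IEEE double precision, so the
# subtraction cancels every significant digit and yields exactly 0.0.
direct = math.cos(x) - math.exp(-2.7 * x)
print(direct)  # 0.0

# Rewriting via the Taylor expansion avoids subtracting nearly equal
# quantities: cos(x) - exp(-2.7 x) = 2.7 x + O(x^2) for small x.
series = 2.7 * x
print(series)  # ~ 4.59e-25
```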
Now we know that we should get $4.589\dots \times 10^{-25}$. Let's see what we can do to make that happen.
We can replace the floating point numbers in the definition and in the argument to the function with exact rationals.
Clear[f];
f[x_] := Cos[x] - E^(-27/10 x)
f[17/10*10^-25]
N[f[17/10*10^-25]]
N[f[17/10*10^-25], 2]
N[f[17/10*10^-25], 24]
N[f[17/10*10^-25], 25]
(* -(1/E^(459/1000000000000000000000000000)) + Cos[17/100000000000000000000000000] *)
(* 0. *)
(* 4.6*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
Here, we see N experience catastrophic cancellation when we allow MachinePrecision for intermediates but do not specify a precision goal for the result. When we explicitly set a precision goal for the result, N detects that the result is not zero and gives us the requested precision. If our requested precision doesn't reach to the next island, then we get the rounded result "$4.590\dots \times 10^{-25}$".
If we set the precision on the constant in f, there is no improvement.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`100 x)
f[1.7*10^-25]
(* 0. *)
If we set the precision on the argument, there is no improvement.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
f[1.7`100*10^-25]
(* 0. *)
If we set the precision of both,
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
f[1.7`24*10^-25]
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
f[1.7`25*10^-25]
(* 4.59000000000000000000000*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
we get precision limited by our specifications.
So maybe we wonder: is there something I can do that leaves the definition of f unaltered but lets me improve the precision of evaluation when the argument produces catastrophic cancellation? No. We know that 100 digits of intermediate precision is sufficient to get a result different from zero, and yet every attempt below still returns 0.
Clear[f];
f[x_] := Cos[x] - E^(-2.7 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 0. *)
(* 0. *)
The precision of 2.7 is too low. We have to improve the quality of the constant in the definition of $f$ and then, to preserve those gains, we have to improve the quality of the constant in the argument.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`24 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.59000000000000000000000*10^-25 *)
(* 4.59000000000000000000000*10^-25 *)
... And if you don't want the result rounded to those trailing zeroes, both constants have to be precise enough.
Clear[f];
f[x_] := Cos[x] - E^(-2.7`25 x)
N[f[1.7*10^-25], 100]
N[f[1.7*10^-25], 100, 100]
N[f[1.7`100*10^-25], 100]
N[f[17/10*10^-25], 100]
(* 0. *)
(* 0. *)
(* 4.589999999999999999999999*10^-25 *)
(* 4.589999999999999999999999*10^-25 *)
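For what it's worth, the same cure (more working precision in every constant and every intermediate) can be sketched outside Mathematica with Python's decimal module at 50 digits. decimal has exp but no cosine, so cos is summed from its Taylor series here; the term count is my choice:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits for all intermediates

def cos_dec(x, terms=10):
    # Taylor series for cos(x), evaluated at the current Decimal precision.
    s, term = Decimal(1), Decimal(1)
    for k in range(1, terms):
        term *= -x * x / ((2 * k - 1) * (2 * k))
        s += term
    return s

x = Decimal("1.7e-25")                       # exact decimal constant
f = cos_dec(x) - (Decimal("-2.7") * x).exp()
print(f)  # ~ 4.59e-25: the cancellation eats about 25 of the 50 digits
```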
edited Jan 21 at 3:39
answered Jan 20 at 21:04
Eric Towers
2,336613