Why aren't primality tests easily linear in time complexity?
Why don't we consider them linear? I don't understand: you only have to check for divisors up to $\sqrt n$, so it's even faster than linear.
I assume it's not linear only if we count the number of operations relative to the size of the input in binary representation. But why would we do that?
That seems wrong to me. The growth in computation should be measured against the number itself. Why do we compare it to the binary representation?
algorithm-analysis time-complexity runtime-analysis
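The trial-division algorithm the question describes can be sketched as follows (a minimal Python sketch for illustration; it is not the asker's actual code):

```python
import math

def is_prime(m: int) -> bool:
    """Trial division: test divisors up to floor(sqrt(m)).

    Performs about sqrt(m) divisibility checks -- sublinear in the
    *value* m, but exponential in the *bit length* of m.
    """
    if m < 2:
        return False
    for d in range(2, math.isqrt(m) + 1):
        if m % d == 0:
            return False
    return True

print(is_prime(97), is_prime(91))  # → True False
```

The disagreement in the comments and answers below is entirely about which of those two measures, the value $m$ or its bit length, the loop count should be compared against.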
You're assuming that a single divisibility check is a constant-time operation, which is definitely not true if you let the numbers get arbitrarily large. In fact it's more than O(n).
– hobbs
Dec 28 '18 at 2:38
So this is the answer, not that we care about the binary representation. You are basically giving a different reasoning, is that correct?
– bilanush
Dec 28 '18 at 2:44
No, just pointing out an additional problem.
– hobbs
Dec 28 '18 at 2:47
OK, but it doesn't even say that testing divisibility takes more than linear time, only division itself.
– bilanush
Dec 28 '18 at 2:48
@bilanush, sorry if I appear blunt, but you seem to insist on one perspective while rejecting others. Please note that even if you have a solid perspective, different or even competing perspectives can be solid as well. The discussion between you and the others reads like a debating contest in which you emphasize your point of view, that you care only about the numeric value rather than the bit or decimal length of the input number, which is, by all means, your choice and not wrong. But the other side is likely to have an equal footing. Possibly ten times better.
– Apass.Jack
Dec 28 '18 at 6:05
edited Dec 28 '18 at 3:23 by David Richerby
asked Dec 27 '18 at 23:26 by bilanush
3 Answers
It seems that the main sticking point of the question here is: why express runtime in terms of the size of the input, rather than the numeric value that the input represents? Indeed, in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $N \times N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells; the meaning is the same, just with different conventions.
So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value:
- It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear."
- It provides more useful time complexities in context. For instance, a common use for primality testing is cryptography. When we're doing cryptography, we might use, say, a 512-bit number as a key. We would like algorithms that scale in proportion to the length of the number (512), rather than its numeric value (about $2^{512}$), since $2^{512}$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish.
- It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^{64}$, so that all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself, we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits, so the number of bits is a good reference point for describing the performance.
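To put the cryptography point in concrete numbers: even at an optimistic one divisibility check per nanosecond, the $\sqrt m$ trial-division scan over a 512-bit key is hopeless (a rough back-of-the-envelope sketch, not part of the original answer):

```python
import math

BITS = 512
checks = math.isqrt(2**BITS)          # ~2^256 trial divisions
seconds = checks / 10**9              # at 10^9 checks per second
years = seconds / (3600 * 24 * 365)
print(f"~10^{int(math.log10(years))} years")  # → ~10^60 years
```

A polynomial-in-length algorithm, by contrast, only has to cope with the number 512 itself.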
As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input.
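That thought experiment can be made concrete. Schoolbook addition touches each digit once while propagating a carry, so its cost is linear in the length of the numbers (a sketch assuming base-10 digit lists with the least significant digit first):

```python
def add_digits(a: list[int], b: list[int]) -> list[int]:
    """Add two numbers given as base-10 digit lists (least significant first).

    One pass with a carry: O(max(len(a), len(b))) digit operations,
    i.e. linear in the length of the input, logarithmic in its value.
    """
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

# 479 + 58 = 537, written least-significant-digit first
print(add_digits([9, 7, 4], [8, 5]))  # → [7, 3, 5]
```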
Thank you. It seems as if there is something I am missing, since I don't really get the logic of referring to the size of the input at all. As to your reasoning, please note that I too am counting the number of operations performed on the input; in this we are doing the same. The only difference is that I want to compare it to the magnitude of the number, because this is what we are dealing with. We are asking about 'primes', meaning about the magnitude of the numbers themselves. Referring to the size of the input is a bit deceiving, since what we care about are the numbers.
– bilanush
Dec 28 '18 at 19:26
Basically, what's the point of asking how complex the code becomes as we add another zero to the end of the input? What we really care about is how difficult the code gets as I move even just from 100 to 101 to 102, and so on. And this isn't taken into account in your way of calculating. What's more, you get a different result: in my way it's sublinear, and you get a far worse complexity, which isn't true.
– bilanush
Dec 28 '18 at 19:32
Because if I ask how much worse I do on a primality test for 100 compared to 10, the real answer is that it's linear or even sublinear: $\sqrt{2^7/2^4}$. There is something I am really missing here.
– bilanush
Dec 28 '18 at 19:32
It's just a convention that happens to be convenient, that is all. It's not a matter of true or false, just different ways of writing the same thing, like using base 10 vs. base 2. Describing it in terms of the numeric value is fine, in fact some sources do so. But just keep in mind that there are in fact exponentially faster primality testing algorithms than the one you described above (see the linked page), and for primes of the sort used in cryptography (512+ bits) your "sublinear" algorithm will be too slow.
– Flight Odyssey
Dec 29 '18 at 0:02
Thanks. What I understand by now is that we really don't care, because there's a one-to-one correspondence. This looks fine to me. The only thing that's still annoying me is referring to this as an inefficient algorithm. That looks wrong to me, because in terms of the real prime number it's still sublinear. So why do we choose to look at the size of the input and then say it's inefficient? (I know there are faster ways; I am purposely talking about this algorithm, which checks each number for divisibility.)
– bilanush
Dec 29 '18 at 15:53
There are two sensible ways to define a variable that can be used in the runtime complexity.
$m$ is the value of the input number (your definition).
$n$ is the number of bits required to represent the input (the input size).
Neither is better than the other, because there's a 1-to-1 correspondence between the two:
$n = O(\log m)$, or equivalently, $m = O(2^n)$.
Most scientists use $n$. Using that definition, sorting is e.g. $O(n \log n)$ and there is no known $O(n)$ algorithm for primality. You showed that there is an $O(\sqrt m)$ algorithm (sublinear in $m$), which is $O(\sqrt{2^n})$ (exponential in $n$).
Using the definition of $m$, we're not looking for a (sub)linear algorithm for primality; we're looking for a polylogarithmic algorithm, $O((\log m)^c)$ for some constant $c$.
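The correspondence between the two measures is easy to see numerically: each extra bit roughly doubles $\sqrt m$, so the $O(\sqrt m)$ trial-division count is $O(2^{n/2})$ in the input size (a small illustrative sketch, not part of the answer itself):

```python
import math

# Trial divisions needed for the largest n-bit number: ~sqrt(2^n) = 2^(n/2)
for n in (8, 16, 24, 32):
    m = 2**n - 1
    print(f"n = {n:2d} bits  m = {m:<10d}  ~{math.isqrt(m)} divisions")
```

Each additional 8 bits multiplies the division count by about $2^4 = 16$: exponential in $n$, even though it is $\sqrt m$ in the value.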
Simple. When you give the number one trillion as input to your algorithm, do you give it as 1'000'000'000'000, or as a terabyte-sized string of ones?
And by all means, you are free to choose whichever representation you feel comfortable with.
We analyze the runtime as a function of the size of the input, not of the magnitude of the number that the input happens to represent.
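The gap between the representations mentioned above can be quantified (a quick sketch; the choice of one trillion follows the answer's example):

```python
m = 10**12                   # one trillion
print(m.bit_length())        # → 40   (bits in binary)
print(len(str(m)))           # → 13   (decimal digits)
# In unary, the same input would be 10**12 symbols -- about a terabyte.
```

Binary and decimal lengths differ only by a constant factor, while unary is exponentially longer, which is why "size of the input" conventionally means the length of a positional (binary or decimal) representation.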
Sorry, but this is exactly what I don't understand. Why do we care about the binary representation? It just looks like a manipulation, looking at the binary input and asking about its complexity. Why don't you look at the number itself, which is what we truly care about? The whole point of complexity is to assess how fast the time grows as we move to greater numbers. So the most logical way of looking at it is by assessing, for example, the growth when we move from 10 to 100: it's $2^4$ compared to $2^7$, and the growth is exactly linearly proportional to the input number.
– bilanush
Dec 28 '18 at 2:18
We as humans deal with the number itself, not with its binary form. Why on Earth would the complexity relative to the binary input matter? We are only trying to figure out how complicated the algorithm gets as we go to greater numbers. The way the computer chooses to write the number doesn't matter at all. Bottom line, we should care only about the number of operations the computer performs, as well as the input. But of course we should care about the magnitude of the number, because this is what we have in our heads: the rate at which the complexity grows as we go to greater numbers.
– bilanush
Dec 28 '18 at 2:27
6
@bilanush no, we as humans are incapable of doing math on "the number itself". Usually we use the base-10 representation. Which is the same size as the base-2 representation, within a constant factor.
– hobbs
Dec 28 '18 at 2:34
I agree, but think about it: why should we care about the size of the input? That is a very cosmetic factor here. What we really care about is how fast the complexity grows as we supply greater numbers. What's the point of knowing the size of the input? It tells me nothing. When I am talking about primes, what I really care about is the rate of change in going from 1000 to 100000. I really do care about the magnitude of the number. Why should I care about the binary input, or even the base-10 representation for that matter?
– bilanush
Dec 28 '18 at 2:39
In complexity we are always talking about the magnitude of the number, which is what we are dealing with. We are always interested in knowing how complex it would become as I move to greater numbers.
– bilanush
Dec 28 '18 at 2:42
It seems that the main sticking point of the question here is: Why express runtime in terms of the size of the input, rather than the numeric value that the input represents? And indeed in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $Ntimes N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells, and the meaning of these is the same, just with different conventions.
So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value:
- It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear."
- It provides more useful time complexities in context. For instance, a common use for primality testing is for cryptography. When we're doing cryptography, we might use say a 512-bit number as a key. We would like to have algorithms that scale proportional to the length of the number (512), rather than its numeric value (about $2^512$), since $2^512$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish.
It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^64$, and thus all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits so the number of bits is a good reference point to use for describing the performance.
As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input.
Thank you. It seems as if there is something I am missing. Since I don't really get the logic for reffering to the size of input at all. As to your reasoning, please note, that I am too calculating the number of operations that performed on the input size. This we are doing the same. The only difference is, that I want to compare it to the magnitude of the number because this is what we are dealing with. We are asking about 'primes' meaning about the magnitude of the numbers themselves. Reffering to the size of input is a bit deceving since what we care about are the numbers.
– bilanush
Dec 28 '18 at 19:26
Basically, what's the point of asking how complex the code become as we are adding another zero to the end of input? What we really care is how difficult the code gets as I move even just from 100, to 101 to 102 ... So on. And this doesn't get into account here in your way of calculating. More so, you are getting different result. In my way it's sublinear and you are getting a far worse complexity which isn't true.
– bilanush
Dec 28 '18 at 19:32
Because if I ask how worse do I get when I do a prime test on 10 compared to 100 the real answer is that it's linear or even sublinear sqrt( 2^7/2^4). There is something I am really missing here.
– bilanush
Dec 28 '18 at 19:32
It's just a convention that happens to be convenient, that is all. It's not a matter of true or false, just different ways of writing the same thing, like using base 10 vs. base 2. Describing it in terms of the numeric value is fine, in fact some sources do so. But just keep in mind that there are in fact exponentially faster primality testing algorithms than the one you described above (see the linked page), and for primes of the sort used in cryptography (512+ bits) your "sublinear" algorithm will be too slow.
– Flight Odyssey
Dec 29 '18 at 0:02
Thanks. What I understand by now is, that we really don't care because there's a one to one corresponding. This looks fine to me. The only thing that's still annoying me , is when reffering to this as an inefficient algorithm. This looks to me wrong. Because still in terms of the the real prime number it's still sub linear. So why do we choose to look at the size of the input and then say it's inefficient. (What you say that there are faster ways , I know it. I am purposely talking about this algorithm which check each number for divisibility)
– bilanush
Dec 29 '18 at 15:53
|
show 3 more comments
It seems that the main sticking point of the question here is: Why express runtime in terms of the size of the input, rather than the numeric value that the input represents? And indeed in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $Ntimes N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells, and the meaning of these is the same, just with different conventions.
So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value:
- It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear."
- It provides more useful time complexities in context. For instance, a common use for primality testing is for cryptography. When we're doing cryptography, we might use say a 512-bit number as a key. We would like to have algorithms that scale proportional to the length of the number (512), rather than its numeric value (about $2^512$), since $2^512$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish.
It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^64$, and thus all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits so the number of bits is a good reference point to use for describing the performance.
As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input.
Thank you. It seems as if there is something I am missing. Since I don't really get the logic for reffering to the size of input at all. As to your reasoning, please note, that I am too calculating the number of operations that performed on the input size. This we are doing the same. The only difference is, that I want to compare it to the magnitude of the number because this is what we are dealing with. We are asking about 'primes' meaning about the magnitude of the numbers themselves. Reffering to the size of input is a bit deceving since what we care about are the numbers.
– bilanush
Dec 28 '18 at 19:26
Basically, what's the point of asking how complex the code become as we are adding another zero to the end of input? What we really care is how difficult the code gets as I move even just from 100, to 101 to 102 ... So on. And this doesn't get into account here in your way of calculating. More so, you are getting different result. In my way it's sublinear and you are getting a far worse complexity which isn't true.
– bilanush
Dec 28 '18 at 19:32
Because if I ask how worse do I get when I do a prime test on 10 compared to 100 the real answer is that it's linear or even sublinear sqrt( 2^7/2^4). There is something I am really missing here.
– bilanush
Dec 28 '18 at 19:32
It's just a convention that happens to be convenient, that is all. It's not a matter of true or false, just different ways of writing the same thing, like using base 10 vs. base 2. Describing it in terms of the numeric value is fine, in fact some sources do so. But just keep in mind that there are in fact exponentially faster primality testing algorithms than the one you described above (see the linked page), and for primes of the sort used in cryptography (512+ bits) your "sublinear" algorithm will be too slow.
– Flight Odyssey
Dec 29 '18 at 0:02
Thanks. What I understand by now is, that we really don't care because there's a one to one corresponding. This looks fine to me. The only thing that's still annoying me , is when reffering to this as an inefficient algorithm. This looks to me wrong. Because still in terms of the the real prime number it's still sub linear. So why do we choose to look at the size of the input and then say it's inefficient. (What you say that there are faster ways , I know it. I am purposely talking about this algorithm which check each number for divisibility)
– bilanush
Dec 29 '18 at 15:53
|
show 3 more comments
It seems that the main sticking point of the question here is: Why express runtime in terms of the size of the input, rather than the numeric value that the input represents? And indeed in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $Ntimes N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells, and the meaning of these is the same, just with different conventions.
So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value:
- It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear."
- It provides more useful time complexities in context. For instance, a common use for primality testing is for cryptography. When we're doing cryptography, we might use say a 512-bit number as a key. We would like to have algorithms that scale proportional to the length of the number (512), rather than its numeric value (about $2^512$), since $2^512$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish.
It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^64$, and thus all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits so the number of bits is a good reference point to use for describing the performance.
As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input.
It seems that the main sticking point of the question here is: Why express runtime in terms of the size of the input, rather than the numeric value that the input represents? And indeed in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $Ntimes N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells, and the meaning of these is the same, just with different conventions.
So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value:
- It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear."
- It provides more useful time complexities in context. For instance, a common use for primality testing is cryptography. When we're doing cryptography, we might use, say, a 512-bit number as a key. We would like to have algorithms that scale proportionally to the length of the number (512), rather than its numeric value (about $2^{512}$), since $2^{512}$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish.
- It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^{64}$, and thus all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself, we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits, so the number of bits is a good reference point to use for describing the performance.
As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input.
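The thought experiment above can be sketched in a few lines. This is a minimal illustration (not any particular library's implementation) of grade-school addition on arbitrarily large numbers stored as lists of decimal digits, least significant digit first; the function name `add_digits` is made up for the example:

```python
def add_digits(a, b):
    # One loop iteration per digit position, carrying as necessary,
    # so the cost is linear in the *length* of the numbers,
    # i.e. O(log m) in their numeric value m.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

# 999 + 1 = 1000, digits stored least-significant-first
print(add_digits([9, 9, 9], [1]))  # [0, 0, 0, 1]
```

Doubling the numeric value adds only one bit (or at most one decimal digit) of work, which is exactly the "logarithmic in the value, linear in the length" distinction.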
edited Dec 29 '18 at 3:20 by xskxzr
answered Dec 28 '18 at 5:25 by Flight Odyssey
Thank you. It seems there is something I am missing, since I don't really get the logic of referring to the size of the input at all. As to your reasoning, please note that I too am counting the number of operations performed on the input; on that we are doing the same thing. The only difference is that I want to compare it to the magnitude of the number, because that is what we are dealing with. We are asking about 'primes', meaning about the magnitude of the numbers themselves. Referring to the size of the input is a bit deceiving, since what we care about are the numbers.
– bilanush
Dec 28 '18 at 19:26
Basically, what's the point of asking how complex the code becomes as we add another zero to the end of the input? What we really care about is how difficult the code gets as I move even just from 100 to 101 to 102, and so on. And that isn't taken into account in your way of calculating. What's more, you are getting a different result: my way it's sublinear, and you are getting a far worse complexity, which isn't true.
– bilanush
Dec 28 '18 at 19:32
Because if I ask how much worse it gets when I do a primality test on 10 compared to 100, the real answer is that it's linear or even sublinear, sqrt(2^7/2^4). There is something I am really missing here.
– bilanush
Dec 28 '18 at 19:32
It's just a convention that happens to be convenient, that is all. It's not a matter of true or false, just different ways of writing the same thing, like using base 10 vs. base 2. Describing it in terms of the numeric value is fine, in fact some sources do so. But just keep in mind that there are in fact exponentially faster primality testing algorithms than the one you described above (see the linked page), and for primes of the sort used in cryptography (512+ bits) your "sublinear" algorithm will be too slow.
– Flight Odyssey
Dec 29 '18 at 0:02
Thanks. What I understand by now is that we really don't care, because there's a one-to-one correspondence. That looks fine to me. The only thing still annoying me is referring to this as an inefficient algorithm. That looks wrong to me, because in terms of the real prime number it's still sublinear. So why do we choose to look at the size of the input and then say it's inefficient? (What you say about there being faster ways, I know; I am purposely talking about this algorithm, which checks each number for divisibility.)
– bilanush
Dec 29 '18 at 15:53
|
show 3 more comments
There are two sensible ways to define a variable that can be used in the runtime complexity.
$m$ is the value of the input number (your definition).
$n$ is the number of bits required to represent the input (the input size).
Neither is better than the other, because there's a 1-to-1 correspondence between the two:
$n = O(\log m)$, or equivalently $m = O(2^n)$
Most scientists use $n$. Using that definition, sorting is e.g. $O(n \log n)$ and there's no known $O(n)$ algorithm for primality. You showed that there is an $O(\sqrt{m})$ algorithm (sublinear in $m$), which is $O(\sqrt{2^n})$ (exponential in $n$).
Using the definition of $m$, we're not looking for a (sub)linear algorithm for primality; we're looking for a polylogarithmic algorithm, $O((\log m)^c)$ for some constant $c$.
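The two ways of measuring the same algorithm can be made concrete. Below is a sketch of the trial-division test the question describes (`is_prime_trial` is a hypothetical name, not from any library), together with both measures of the input:

```python
from math import isqrt

def is_prime_trial(m):
    # Trial division up to sqrt(m): about sqrt(m) divisions in the worst
    # case, i.e. O(sqrt(m)) in the value m, but O(sqrt(2^n)) = O(2^(n/2))
    # in the bit length n -- exponential in the input size.
    if m < 2:
        return False
    for d in range(2, isqrt(m) + 1):
        if m % d == 0:
            return False
    return True

m = 1_000_003            # a prime
n_bits = m.bit_length()  # n = 20 bits
print(is_prime_trial(m), n_bits)  # True 20
```

For this $m$, roughly $\sqrt{m} \approx 1000 \approx 2^{n/2}$ divisions suffice; doubling the bit length to 40 bits squares that count to about a million, which is the exponential-in-$n$ behavior.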
add a comment |
edited Dec 29 '18 at 0:12
answered Dec 28 '18 at 13:43 by Albert Hendriks
add a comment |
Simple. When you give the number one trillion as input to your algorithm, do you give it as 1'000'000'000'000, or as a terabyte large string of ones?
And by all means, you are free to choose whichever representation you feel comfortable with.
We analyze the runtime as a function of the size of the input, not of the magnitude of the number the input represents.
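The trillion example above is easy to check directly; here is a small illustration (variable names are just for the example) of how tiny positional representations are compared to the value they encode:

```python
m = 10**12  # one trillion

# Size of the input under different representations:
print(m.bit_length())  # 40 -- bits in the binary representation
print(len(str(m)))     # 13 -- decimal digits

# A unary representation ("a string of ones") would need 10**12 symbols:
# a terabyte of input for a number that fits in 40 bits.
```

Decimal and binary lengths differ only by a constant factor (log2(10) ≈ 3.32), which is why it doesn't matter which base we pick when stating complexities in terms of the input length.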
Sorry, but this is exactly what I don't understand. Why do we care about the binary representation? It just looks like a manipulation: looking at its binary input and asking about the complexity of that. Why don't you look at the number itself, which is what we truly care about? The whole point of complexity is to assess how fast the time grows as we move to greater numbers. So the most logical way of looking at it is by assessing, for example, how fast it grows when we move from 10 to 100. That's 2^4 compared to 2^7: the growth is exactly linearly proportional to the input number.
– bilanush
Dec 28 '18 at 2:18
We as humans deal with the number itself, not with its binary shape. Why on Earth would the complexity relative to the binary input matter? We are only trying to figure out how complicated the algorithm gets as we go to greater numbers. The way the computer chooses to write it doesn't matter at all. Bottom line, we should care only about the number of operations the computer does, and about the input. But of course we should care about the magnitude of the number, because that is what we have in our heads: what is the rate at which the complexity grows as we go to great numbers.
– bilanush
Dec 28 '18 at 2:27
6
@bilanush no, we as humans are incapable of doing math on "the number itself". Usually we use the base-10 representation, which is the same size as the base-2 representation, within a constant factor.
– hobbs
Dec 28 '18 at 2:34
I agree. But think about it: why should we care about the size of the input? That is a very cosmetic factor here. What we really care about is how fast the complexity grows as we bring greater numbers. What's the point of needing the size of the input? It tells me nothing. When I am talking about primes, what I really care about is the rate of change going from 1000 to 100000. I really do care about the magnitude of the number. Why should I care about the binary input, or even the base-10 representation for that matter?
– bilanush
Dec 28 '18 at 2:39
In complexity we are always talking about the magnitude of the number, which is what we are dealing with. We are always interested in knowing how complex it would become as I move to greater numbers.
– bilanush
Dec 28 '18 at 2:42
|
show 4 more comments
answered Dec 27 '18 at 23:41 by Pål GD
Thanks for contributing an answer to Computer Science Stack Exchange!