Is it 40% or 0.4%? [closed]

A variable that should contain percents also contains some "ratio" values, for example:



0.61
41
54
.4
.39
20
52
0.7
12
70
82


The true distribution parameters are unknown, but I suspect it is unimodal, with most (say, over 70% of) values occurring between 50% and 80%; very low values (e.g., 0.1%) are also possible.



Are there any formal or systematic approaches to determine the likely format in which each value is recorded (i.e., ratio or percent), assuming no other variables are available?










data-cleaning

edited Mar 4 at 21:31
asked Mar 4 at 21:15
Orion

closed as off-topic by Sycorax, Nick Cox, mdewey, Martijn Weterings, user158565 Mar 6 at 16:09



  • This question does not appear to be about statistics within the scope defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.











  • 4

    I'm voting to close this question as off-topic because it is impossible to definitively answer. If you don't know what the data mean, how will strangers on the internet know?
    – Sycorax
    Mar 4 at 21:17






  • 2

    What the data mean != what is the (data) mean.
    – Nick Cox
    Mar 4 at 21:27






  • 1

    You have three options: your big numbers are falsely big and need a decimal point in front; your small numbers are falsely small and need a 100x multiplier; or your data are just fine. Why don't you plot the qqnorm of all three options? (A sketch of this check appears after these comments.)
    – EngrStudent
    Mar 4 at 22:07






  • 2

    There are plenty of potentially efficient ways to approach this. The choice depends on how many values are 1.0 or less and how many values exceed 1.0. Could you tell us these quantities for the problem(s) you have to deal with? @EngrStudent The interest lies in (hypothetical) situations where some of the very low values actually are percents. That can lead to exponentially many options (as a function of the dataset size) rather than just three (actually two; two of your options lead to the same solution).
    – whuber
    Mar 4 at 22:46







  • 7

    I'm guessing that "ask the people who collected the data" isn't a valid option here?
    – nick012000
    Mar 5 at 2:45
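
A quick way to try EngrStudent's suggestion is sketched below in Python, with scipy.stats.probplot standing in for R's qqnorm; the values are the example data from the question, and the three readings mirror the three options in the comment.

```python
# Compare normal Q-Q plots of the three candidate readings of the data.
import matplotlib.pyplot as plt
from scipy import stats

values = [0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82]  # from the question

readings = {
    "as recorded": values,
    "big values / 100": [v / 100 if v > 1 else v for v in values],
    "small values * 100": [v * 100 if v <= 1 else v for v in values],
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, x) in zip(axes, readings.items()):
    stats.probplot(x, dist="norm", plot=ax)  # normal Q-Q plot of this reading
    ax.set_title(name)
plt.tight_layout()
plt.show()
```

Whichever panel comes closest to a straight line is the most plausible reading, under the (admittedly strong) assumption that the underlying distribution is roughly normal.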















2 Answers

Assuming

  • The only data you have are the percents/ratios (no other related explanatory variables)

  • Your percents come from a unimodal distribution $P$ and the ratios come from the same unimodal distribution $P$ but squished by $100$ (call it $P_{100}$)

  • The percents/ratios are all between $0$ and $100$

Then there is a single cutoff point $K$ (with $K < 1.0$, obviously) such that everything under $K$ is more likely to have been sampled from $P_{100}$ and everything over $K$ is more likely to have been sampled from $P$.

You should be able to set up a maximum-likelihood function with a binary parameter for each data point, plus any parameters of your chosen $P$.

Afterwards, find $K$, the point where $P$ and $P_{100}$ intersect, and use it to clean your data.

In practice, just split your data into 0-1 and 1-100, fit and plot both histograms, and fiddle around with what you think $K$ is.
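
Here is a minimal sketch of that recipe in Python. It adds an assumption not made above, namely that $P$, rescaled to $[0, 1]$, is roughly Beta-shaped, and instead of the full maximum-likelihood setup with per-observation indicators it simply fits $P$ to the unambiguous values (those above 1) and then compares, for each ambiguous value, the density under the two readings (implicitly giving the two formats equal prior odds). The comparison is equivalent to checking which side of the crossover point $K$ the value falls on.

```python
# Fit a Beta distribution to the unambiguous percents (values > 1, rescaled
# to (0, 1)), then classify each ambiguous value x <= 1 by comparing its
# density read as a "small percent" with its density read as a ratio.
# If the ratio scale X/100 ~ Beta(a, b), then a ratio r has density
# f_Beta(r), while a percent x has density f_Beta(x / 100) / 100
# (change of variables).
import numpy as np
from scipy import stats

values = np.array([0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82])

unambiguous = values[values > 1] / 100.0              # these must be percents
a, b, _, _ = stats.beta.fit(unambiguous, floc=0, fscale=1)

def density_if_percent(x):
    # x read as a percent on the 0-100 scale
    return stats.beta.pdf(x / 100.0, a, b) / 100.0

def density_if_ratio(x):
    # x read as a ratio on the 0-1 scale
    return stats.beta.pdf(x, a, b)

for x in values[values <= 1]:
    label = "ratio" if density_if_ratio(x) > density_if_percent(x) else "percent"
    print(f"{x:4.2f} -> more likely recorded as a {label}")
```

On the example data, every ambiguous entry should come out labelled as a ratio, since a genuine percent below 1% would sit far out in the tail of a distribution whose bulk is believed to lie between 50% and 80%.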






edited Mar 5 at 16:41
Nick Cox

answered Mar 4 at 22:27
djma

  • I don't think that this addresses the question. This approach establishes two intervals, $(0.0, K]$ and $(K, 1.0]$, where one is proposed to be multiplied by 100 and the other left as-is. The OP is asking how to determine which values should be multiplied by 100; based on the description in the question, the "squashed" values can appear anywhere in $(0.0, 1.0]$, not solely on one side of $K$ or the other.
    – Sycorax
    Mar 6 at 16:38











  • @Sycorax Indeed they can appear anywhere, but without any additional information this is the best we can do. The hope is that the output of this exercise is better than doing nothing for whatever purpose the OP had in mind. For example, if the OP needs an estimate of the mean of that dataset, they would be better off using the "$K$ adjustment" than not doing so (see the toy example below).
    – djma
    Mar 7 at 20:54
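
A toy illustration of that last point, using the example data from the question and a hand-picked (not fitted) cutoff:

```python
# Estimate the mean of the example data with and without the "K adjustment".
# K = 0.9 is chosen purely for illustration; any cutoff between the plausible
# "small percents" and 1.0 behaves the same way on this data.
values = [0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82]
K = 0.9

naive_mean = sum(values) / len(values)
adjusted = [v * 100 if v <= K else v for v in values]  # treat small values as ratios
adjusted_mean = sum(adjusted) / len(adjusted)

print(f"mean, values taken at face value: {naive_mean:.1f}")
print(f"mean, after the K adjustment:     {adjusted_mean:.1f}")
```

Given the asker's belief that most values lie between 50% and 80%, the adjusted figure is arguably the more credible of the two.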




















Here's one method of determining whether your data are percents or proportions: if there are out-of-bounds values for a proportion (e.g., 52, 70, 82, 41, 54, to name a few), then they must be percents.



Therefore, your data must be percents. You're welcome.
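
The check itself is a one-liner; a sketch on the question's data:

```python
# Any value greater than 1 is out of bounds for a proportion.
values = [0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82]
out_of_bounds = [v for v in values if v > 1]
print(out_of_bounds)  # non-empty, so at least these values are percents
```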






answered Mar 4 at 21:26
beta1_equals_beta2

  • 3

    The issue is that the two are mixed together. It’s not all percents or all ratios/proportions. 49 is a percentage, but 0.49 could be either.
    – The Laconic
    Mar 4 at 21:29






  • 3

    If you can't assume there is a unified format for all of the rows, then the question is obviously unanswerable. In the absence of any other information, it's anyone's guess whether the 0.4 is a proportion or a percentage. I chose to answer the only possible answerable interpretation of the question.
    – beta1_equals_beta2
    Mar 4 at 21:31
















