Does anyone recognize this inequality?



























In some paper the authors make use of the following inequality without further explanation: let $x\in\mathbb{R}^n$ with $x_1\le\cdots\le x_n$ and $\alpha\in[0,1]^n$ with $\sum_{i=1}^n \alpha_i=N\in\{1,2,\ldots,n\}$. Then $$\sum_{i=1}^n\alpha_i x_i\ge\sum_{i=1}^N x_i.$$



While I have already found a (quite lengthy) bare-hands proof, I wonder whether this inequality is (a variant of) some commonly known inequality that I am simply unaware of. Any hints?
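As a quick numerical sanity check of the claimed inequality (a throwaway sketch added here, not from the paper; the sampling scheme and function names are ad hoc), one can draw random sorted $x$ and random feasible $\alpha$ and confirm the bound holds:

```python
import random

def random_alpha(n, N):
    """Sample alpha in [0,1]^n with sum exactly N by projecting a random
    point of the cube onto the slice {sum = N} (an ad-hoc scheme)."""
    a = [random.random() for _ in range(n)]
    s = sum(a)
    if s >= N:
        # Shrink toward 0: each coordinate stays in [0, a_i], hence in [0, 1].
        return [ai * N / s for ai in a]
    # Push toward 1: 1 - (1 - a_i)*(n - N)/(n - s) stays in [0, 1] and sums to N.
    return [1 - (1 - ai) * (n - N) / (n - s) for ai in a]

def check_inequality(n, trials=2000):
    """Test sum_i alpha_i x_i >= sum_{i<=N} x_i on random instances."""
    for _ in range(trials):
        x = sorted(random.uniform(-10, 10) for _ in range(n))
        N = random.randint(1, n)
        alpha = random_alpha(n, N)
        lhs = sum(ai * xi for ai, xi in zip(alpha, x))
        rhs = sum(x[:N])
        if lhs < rhs - 1e-9:
            return False
    return True
```

No counterexample turns up, as expected.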

















  • If it helps, using Approach0 I was able to find these posts on Mathematics Stack Exchange: Proof of an inequality that seems intuitive and Proving the inequality $\sum_{i=1}^n q_i r_i \leq \sum_{i=1}^k r_i$.
    – Martin Sleziak, Feb 18 at 12:27

  • Thanks for introducing me to Approach0; I see this inequality has been considered on Math Stack Exchange multiple times before. However, the given proofs are essentially the same as the one I found.
    – Robert Rauch, Feb 18 at 13:02

  • There are quite a few short proofs of this, e.g.: 1. A linear function attains its minimum over a polytope at an extreme point of the set (this seems like the fastest approach). 2. Lagrange multipliers.
    – Beni Bogosel, Feb 18 at 21:41















reference-request real-analysis inequalities elementary-proofs














asked Feb 18 at 12:15 by Robert Rauch; edited Feb 18 at 16:58 by Iosif Pinelis

















3 Answers

























(In a way this is a rephrasing of Iosif's answer, but from a different perspective. My main intention is to point out the connection to matroids.)



The inequality follows from:



Fact. The polytope $K = \bigl\{\alpha \in [0,1]^n : \sum_{i=1}^n \alpha_i = N\bigr\}$ is the convex hull of the indicator vectors of the subsets of $[n] = \{1, \ldots, n\}$ of size $N$.



Assuming this fact, and because any linear function over a polytope achieves its minimum at a vertex, we have that $\sum_{i=1}^n \alpha_i x_i \ge \sum_{i \in S} x_i$ for some set $S$ of size $N$. The right-hand side is then clearly minimized for $S = \{1, \ldots, N\}$ if $x_1 \le x_2 \le \ldots \le x_n$.



To show the Fact, we need to show that every extreme point $\alpha$ of $K$ is an indicator vector of a set of size $N$. This is clearly true if $\alpha \in \{0,1\}^n$, so assume, towards a contradiction, that for some $i$ we have $0 < \alpha_i < 1$. Then there must be at least one other $j \neq i$ such that $0 < \alpha_j < 1$, and we can write $\alpha$ as a convex combination of two vectors in $K$: one in which we add a tiny $h$ to $\alpha_i$ and subtract $h$ from $\alpha_j$, and one in which we reverse the signs. This means that $\alpha$ is not an extreme point, a contradiction. You can see that this argument is essentially the same as Iosif's.



Another way to state the Fact is that $K$ is the base polytope of the uniform matroid over $[n]$ of rank $N$. A vast generalization is the characterization of the facets of the base polytope of any matroid, due to Edmonds: see, e.g., Section 10.7 in these notes.
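As a concrete illustration of the vertex argument (a small pure-Python sketch; the helper name is mine), one can enumerate the indicator vectors of the $N$-subsets — by the Fact, exactly the vertices of $K$ — and confirm the minimum of the linear functional over them is attained at $S=\{1,\ldots,N\}$:

```python
from itertools import combinations

def min_over_vertices(x, N):
    """Minimum of sum_{i in S} x_i over all N-element subsets S of
    {0,...,n-1}; by the Fact these index the vertices of K."""
    return min(sum(x[i] for i in S) for S in combinations(range(len(x)), N))

x = sorted([3.0, -1.0, 4.0, 1.5, -2.0])   # x_1 <= ... <= x_n
for N in range(1, len(x) + 1):
    # The minimizing subset consists of the N smallest entries.
    assert min_over_vertices(x, N) == sum(x[:N])
```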






answered Feb 18 at 20:57 by Sasho Nikolov; edited Feb 19 at 18:06








  • Ah, indeed. I think the key word is en.wikipedia.org/wiki/Polymatroid
    – Fedor Petrov, Feb 18 at 21:02

  • @FedorPetrov it's a good keyword :). Polymatroids are technically a little more general.
    – Sasho Nikolov, Feb 18 at 21:07

  • @SashoNikolov There is a minor typo: in the OP we were looking for the minimum of $\sum_{i=1}^n\alpha_i x_i$ for $\alpha\in K$, which is then attained at the indicator vector of $S=\{1,\ldots,N\}$, because we have assumed $x_1\le\cdots\le x_n$.
    – Robert Rauch, Feb 19 at 9:54

  • @RobertRauch Thank you, I believe I fixed it. I had flipped the whole problem in my head somehow.
    – Sasho Nikolov, Feb 19 at 18:06



















Use the Abel transform: write $x_i=y_1+y_2+\dots+y_i$; then $$\sum \alpha_i x_i=\sum y_i (\alpha_i+\alpha_{i+1}+\dots+\alpha_n).$$
We have $\alpha_i+\alpha_{i+1}+\dots+\alpha_n=N-(\alpha_1+\dots+\alpha_{i-1})\geqslant N-i+1$ for $i=1,\dots,N$. Therefore
$$
\sum \alpha_i x_i\geqslant Ny_1+(N-1)y_2+\dots+y_N=x_1+\dots+x_N.
$$
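The summation-by-parts identity above is easy to check numerically (a throwaway sketch; the function name is mine): with $y_1=x_1$ and $y_i=x_i-x_{i-1}$ for $i\ge2$, both sides agree.

```python
def abel_check(alpha, x):
    """Verify sum_i alpha_i x_i == sum_i y_i * (alpha_i + ... + alpha_n),
    where y_1 = x_1 and y_i = x_i - x_{i-1} for i >= 2."""
    n = len(x)
    y = [x[0]] + [x[i] - x[i - 1] for i in range(1, n)]
    tails = [sum(alpha[i:]) for i in range(n)]   # alpha_i + ... + alpha_n
    lhs = sum(a * xi for a, xi in zip(alpha, x))
    rhs = sum(yi * ti for yi, ti in zip(y, tails))
    return abs(lhs - rhs) < 1e-9

assert abel_check([0.5, 1.0, 0.5], [-1.0, 2.0, 3.0])
```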






answered Feb 18 at 14:30 by Fedor Petrov












  • I think this approach (summation by parts) was also used in the answer by wj32 at the link math.stackexchange.com/questions/1582996/…, provided in the comment by Martin Sleziak.
    – Iosif Pinelis, Feb 18 at 16:40

  • @IosifPinelis you are correct, of course.
    – Fedor Petrov, Feb 18 at 19:36



















The following rather intuitive proof is by induction on $n$. The case $n=1$ is trivial. Suppose now that $n\ge2$. By continuity, without loss of generality $x_1<\dots<x_n$. The minimum of $\sum_1^n\alpha_i x_i$ over all $\alpha=(\alpha_1,\dots,\alpha_n)$ as in the OP is attained; let $\alpha=(\alpha_1,\dots,\alpha_n)$ be a point of such attainment.

To obtain a contradiction, suppose that $\alpha_1<1$. Then the condition $\sum_1^n\alpha_i=N\ge1$ implies that $\alpha_j>0$ for some $j\in\{2,\dots,n\}$. Replacing $\alpha_1$ and $\alpha_j$ respectively by $\alpha_1+h$ and $\alpha_j-h$ for a small enough $h>0$, we get a smaller value of $\sum_1^n\alpha_i x_i$ (because $x_1<x_j$). This contradicts the assumption that $\alpha$ is a point of minimum of $\sum_1^n\alpha_i x_i$.

So $\alpha_1=1$, and your inequality reduces to $\sum_2^n\alpha_i x_i\ge\sum_2^N x_i$ given that $\alpha_i\in[0,1]$ for all $i$ and $\sum_2^n\alpha_i=N-1$; the latter inequality is true by induction.
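The mass-shifting step of this exchange argument can be simulated in a minimal sketch (names are mine, not from the answer): moving $h$ from some $\alpha_j>0$ to $\alpha_1<1$ preserves feasibility and cannot increase the objective.

```python
def exchange_step(alpha, x):
    """One exchange: move mass h from the first alpha_j > 0 (j >= 2)
    to alpha_1 < 1; since x[0] <= x[j], the objective cannot increase."""
    j = next(i for i in range(1, len(alpha)) if alpha[i] > 0)
    h = min(1 - alpha[0], alpha[j])
    new = list(alpha)
    new[0] += h
    new[j] -= h
    return new

def objective(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

x = [-2.0, -1.0, 1.5, 3.0]        # sorted
alpha = [0.5, 0.5, 0.5, 0.5]      # feasible, sum = N = 2
new = exchange_step(alpha, x)
assert objective(new, x) <= objective(alpha, x)   # objective did not increase
assert abs(sum(new) - sum(alpha)) < 1e-12         # feasibility preserved
```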






answered by Iosif Pinelis








  • Thanks for this interesting approach. However, the OP does not ask for (more) elementary proofs of this inequality ;-)
    – Robert Rauch, Feb 18 at 13:51

  • @RobertRauch, aren't you the OP …?
    – LSpice, Feb 18 at 15:27

  • This can almost be phrased as two applications of the rearrangement inequality. Since $\sum_{i=1}^n \alpha_i x_i$ is minimized, under rearrangement of the $\alpha_i$, when the $\alpha_i$ are decreasing, we may assume this is the case. (Note this doesn't change $\sum_{i=1}^n \alpha_i = N$.) The minimum varying the $\alpha$ (while keeping their sum at $N$) is now at $\alpha = (1,\ldots, 1, 0, \ldots, 0)$, since if $\alpha_k \not=0$ for some $k > N$ we can reduce $\sum_{i=1}^n \alpha_i x_i$ by increasing $\alpha_1 < 1$ and decreasing $\alpha_k$, exactly as in the proof above.
    – Mark Wildon, Feb 18 at 16:17

  • @RobertRauch : I have added the tag "reference-request" to your post. Perhaps this will help you get a reference to a known more general inequality.
    – Iosif Pinelis, Feb 18 at 17:01

  • I imagine Robert is the original poster. However, OP can stand for Original Post as well. Gerhard "Am I What I Write?" Paseman, 2019.02.18.
    – Gerhard Paseman, Feb 18 at 17:05













So, $al_1=1$, and your inequality reduces to $sum_2^nal_i x_igesum_2^N x_i$ given that $al_iin[0,1]$ for all $i$ and $sum_2^nal_i=N-1$, and the latter inequality is true by induction.






share|cite|improve this answer









$endgroup$
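(Editorial note, not part of the original thread: the minimization argument above is easy to sanity-check numerically. The sketch below draws random sorted $x$ and random $\alpha\in[0,1]^n$ with $\sum_i\alpha_i=N$, and verifies $\sum_i\alpha_i x_i\ge x_1+\dots+x_N$ up to floating-point slack; all helper names are mine.)

```python
import random

def random_alpha(n, N):
    """Draw alpha in [0,1]^n with sum(alpha) == N, by a scale-and-clip
    projection: add equal mass to the unsaturated coordinates until the
    target sum is reached (each pass either hits N exactly or saturates
    at least one coordinate, so it terminates in at most n+1 passes)."""
    alpha = [random.random() for _ in range(n)]
    for _ in range(n + 2):
        s = sum(alpha)
        if abs(s - N) < 1e-12:
            break
        if s < N:
            free = [i for i in range(n) if alpha[i] < 1.0]
        else:
            free = [i for i in range(n) if alpha[i] > 0.0]
        adj = (N - s) / len(free)
        for i in free:
            alpha[i] = min(1.0, max(0.0, alpha[i] + adj))
    return alpha

def check_inequality(n, trials=2000):
    """Randomized check that sum_i alpha_i x_i >= x_1 + ... + x_N
    for sorted x, alpha in [0,1]^n, and sum(alpha) = N."""
    for _ in range(trials):
        x = sorted(random.uniform(-10.0, 10.0) for _ in range(n))
        N = random.randint(1, n)
        alpha = random_alpha(n, N)
        lhs = sum(a * xi for a, xi in zip(alpha, x))
        rhs = sum(x[:N])  # sum of the N smallest entries
        if lhs < rhs - 1e-8:
            return False  # counterexample found
    return True

print(all(check_inequality(n) for n in range(1, 8)))  # expected: True
```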








  • 1




    $begingroup$
    Thanks for this interesting approach. However, the OP does not ask for (more) elementary proofs of this inequality ;-)
    $endgroup$
    – Robert Rauch
    Feb 18 at 13:51






  • 5




    $begingroup$
    @RobertRauch, aren't you the OP …?
    $endgroup$
    – LSpice
    Feb 18 at 15:27






  • 1




    $begingroup$
This can almost be phrased as two applications of the rearrangement inequality. Since $\sum_{i=1}^n \alpha_i x_i$ is minimized, under rearrangement of the $\alpha_i$, when the $\alpha_i$ are decreasing, we may assume this is the case. (Note this doesn't change $\sum_{i=1}^n \alpha_i = N$.) The minimum varying the $\alpha$ (while keeping their sum as $N$) is now at $\alpha = (1,\ldots, 1, 0, \ldots, 0)$ since if $\alpha_k \not=0$ for some $k > N$ we can reduce $\sum_{i=1}^n \alpha_i x_i$ by increasing $\alpha_1 < 1$ and decreasing $\alpha_k$, exactly as the proof above.
    $endgroup$
    – Mark Wildon
    Feb 18 at 16:17







  • 1




    $begingroup$
    @RobertRauch : I have added the tag "reference-request" to your post. Perhaps this will help you to get a reference to a known more general inequality.
    $endgroup$
    – Iosif Pinelis
    Feb 18 at 17:01






  • 5




    $begingroup$
    I imagine Robert is the original poster. However, OP can stand for Original Post as well. Gerhard "Am I What I Write?" Paseman, 2019.02.18.
    $endgroup$
    – Gerhard Paseman
    Feb 18 at 17:05















answered Feb 18 at 13:31









Iosif Pinelis























