Is the uniqueness of the additive neutral element sufficient to prove x+z=x implies z=0?













15














The following was originally stated for n-tuples of elements from a scalar field, so most of the properties of "vectors" are easily established from the properties of the underlying scalar field. But the authors seem to want their development to be "self-reliant". For this reason I have replaced "n-tuple" with "vector".



The equality relation for vectors has been established, as have the
associative and commutative laws of vector addition. The next property
of vector addition to be introduced is the neutral element:



There exists a vector $\mathfrak{0}$ such that $\mathfrak{x}+\mathfrak{0}=\mathfrak{x}$ for every $\mathfrak{x}$. It follows there can be only one neutral element, for if $\mathfrak{0}$ and $\mathfrak{0}^\prime$ were
two such elements we would have $\mathfrak{0}^\prime+\mathfrak{0}=\mathfrak{0}^\prime$ and $\mathfrak{0}+\mathfrak{0}^\prime=\mathfrak{0},$ so that by the commutative law of vector addition and the transitivity of vector equality we would have $\mathfrak{0}=\mathfrak{0}^\prime.$
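Written out as a single chain of equalities, that uniqueness argument is just

$$\mathfrak{0}^\prime \;=\; \mathfrak{0}^\prime+\mathfrak{0} \;=\; \mathfrak{0}+\mathfrak{0}^\prime \;=\; \mathfrak{0}.$$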



Now suppose that for some $\mathfrak{x}$ we have $\mathfrak{x}+\mathfrak{z}=\mathfrak{x}.$
Do we have enough to prove that $\mathfrak{z}=\mathfrak{0}?$



I note in particular that the proof of the uniqueness of $\mathfrak{0}$
relies on the assumption that $\mathfrak{x}+\mathfrak{0}^\prime=\mathfrak{x}$ holds for all vectors, and thereby for $\mathfrak{x}=\mathfrak{0}$. That assumption comes from the definition of $\mathfrak{0}$ satisfying $\mathfrak{x}+\mathfrak{0}=\mathfrak{x}$
for every vector, and the assumption that $\mathfrak{0}^\prime$ is also 'such an element'.



Also note that the additive inverses have not yet been introduced.










group-theory proof-verification vector-spaces






asked Jan 24 at 7:19









Steven Hatton











  • Add the additive inverse of $\mathfrak{x}$ to both sides and you obtain $\mathfrak{z} = 0$.
    – Paul K, Jan 24 at 7:25

  • The last line of the question is: "Also note that the additive inverses have not yet been introduced."
    – Steven Hatton, Jan 24 at 7:26

  • In that case I would say that you can't prove it. Look for example at the real numbers with multiplication. You have a unique neutral element there ($1$). But we have $0 \cdot x = 0$ for all $x$, in particular for $x \neq 1$.
    – Paul K, Jan 24 at 7:29

  • I'm curious, why are you using mathfrak? Is it specific to the subject?
    – Teleporting Goat, Jan 25 at 9:04

  • @TeleportingGoat One answer is: because I like it. But in this specific circumstance I can reinforce that with: because that's what the book uses. It's pretty common in the German literature that I've seen to use fraktur for vectorish things.
    – Steven Hatton, Jan 25 at 9:47
















3 Answers
























36













No, this cannot be proved from just associativity, commutativity, and existence of a neutral element. For instance, consider the set $[0,1]$ with the binary operation $a*b=\min(a,b)$. This operation is associative and commutative and $1$ is a neutral element. But for any $x,y$ with $x\leq y$, we have $x*y=x$, and $y$ is not necessarily the neutral element $1$.
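To make the counterexample concrete, here is a small Python sketch (my illustration, not part of the original answer) that spot-checks the monoid axioms for $\min$ on a few sample points of $[0,1]$ and exhibits $x*y=x$ with $y\neq 1$:

    # Spot-check that ([0,1], min) is commutative, associative, with neutral element 1,
    # yet x*y == x does not force y == 1.
    from itertools import product

    sample = [0.0, 0.25, 0.5, 0.75, 1.0]   # a few points of the interval [0, 1]
    op = min                                # the binary operation a*b = min(a, b)

    for a, b, c in product(sample, repeat=3):
        assert op(op(a, b), c) == op(a, op(b, c))   # associativity
        assert op(a, b) == op(b, a)                 # commutativity
    assert all(op(a, 1.0) == a for a in sample)     # 1 is neutral on the sample

    x, y = 0.25, 0.75
    print(op(x, y) == x, y != 1.0)   # True True: x*y == x although y is not 1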






answered Jan 24 at 7:40, edited Jan 24 at 21:29
Eric Wofsey

  • At first I thought you meant the two-element set $\{0,1\}$ before I realized that was closed-interval notation for $\{x \in \mathbb{R}, 0 \le x \le 1\}$.
    – chepner, Jan 24 at 18:46

  • The example works with $\{0,1\}$ too, though! We still have $0*0=0$ but $0$ is not the neutral element. (Actually, more generally it works for any totally ordered set with a greatest element and at least one other element.)
    – Eric Wofsey, Jan 24 at 21:31


















30













For an example with a more additive flavor, let's extend the operation $+$ to a new element $\infty$ with the rule that $x+\infty=\infty+x=\infty$ for all $x$. You can check that $+$ is still associative and commutative, and $0$ is still its identity element. However, we have $\infty+7=\infty$ and $7\neq 0$.
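As a quick illustration (mine, not the answer's), Python's float('inf') behaves like this adjoined absorbing element:

    # float('inf') absorbs addition, so x + z == x no longer forces z == 0.
    inf = float('inf')
    x, z = inf, 7
    print(x + z == x)   # True
    print(z != 0)       # True: z = 7 satisfies x + z == x, yet z is not 0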






answered Jan 24 at 7:41
Chris Culter




















    0













Since you are asking merely about the uniqueness of the identity of an abstract operation, and are not using any other structure on the space, we can posit that the operation "+" is isomorphic to "*" in some ring. In a ring, the property $x*y=x$ implies that $x*(y-1)=0$. Thus, insufficiency follows from the existence of rings with zero divisors. So, for instance, if we treat $[a_1,b_1]+[a_2,b_2]$ as being equal to $[a_1a_2,b_1b_2]$, then taking $x=[1,0]$, $y=[1,2]$ gives $x+y=x$ even though $y$ is not the neutral element $[1,1]$.

If further properties of the vector space are introduced that make the $+$ operation incompatible with the multiplicative operation of a ring with zero divisors (such as the existence of additive inverses), then those properties, in conjunction with the uniqueness of the identity, may be sufficient to establish the proposition in question.

Another viewpoint is treating $x+y$ as some function indexed by $y$ applied to $x$. That is, "$x+y$" represents y.add(x). That there is some object $0$ such that $x+0=x$ for all $x$ simply means that there is some $0$ such that 0.add is the identity function, lambda x: x. We can easily have $x$ and $y$ such that y.add(x) is equal to $x$, yet $y \neq 0$.
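A short Python sketch of this last viewpoint (my illustration; the helper name add_by and the reuse of the min example from the other answer are my own choices, not the answer's):

    # Model "x + y" as a function indexed by y applied to x, using min on [0, 1]
    # as a concrete associative, commutative operation with neutral element 1.
    def add_by(y):
        """Return the function x -> x + y (here: x -> min(x, y))."""
        return lambda x: min(x, y)

    identity = add_by(1.0)                   # the neutral element's function is the identity map
    assert all(identity(x) == x for x in [0.0, 0.3, 0.7, 1.0])

    y = 0.7                                  # not the neutral element...
    print(add_by(y)(0.3) == 0.3, y != 1.0)   # True True: y fixes x = 0.3 although y != 1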






answered Jan 24 at 17:40
Acccumulation











