What is the difference between Ethernet types, considering bandwidth?

Different types of Ethernet (standard Ethernet, Fast Ethernet, Gigabit Ethernet) have different data-transfer speeds (10 Mbit/s, 100 Mbit/s, 1 Gbit/s). Why is that?



On the physical layer, the speed of electrical signals (voltage changes) stays the same. The cable (twisted pair) is also the same (I think). So why does the maximum bandwidth differ? If the reason is that, in the past, we didn't have much processing power in network cards, why doesn't the protocol allow for bandwidth negotiation that would suit both communicating devices?





























  • It's not just about how fast the signals travel.

    – immibis
    Jan 18 at 4:06















ethernet layer1 cable bandwidth utp














edited Jan 11 at 19:44 by Ron Maupin

asked Jan 11 at 19:14 by Martin Heralecký

















4 Answers

































Different types of Ethernet (standard Ethernet, Fast Ethernet, Gigabit Ethernet) have different speed of data transfer (10 Mbit, 100 Mbit, 1 Gbit). Why is that?




Progress - technology advances.



Including obsolete and brand new physical layers, Ethernet ranges from 1 Mbit/s to 400 Gbit/s.




On the physical layer, the speed of electrical signals (voltage change) stays the same.




The wave propagation speed depends on the medium, not on the signaling rate. Cat-3 is slightly slower than Cat-5 and fiber, while coax (from ancient 10BASE5) and Cat-7/8 are the fastest - but they're all very close, at 60% to 80% of the speed of light.



However, the frequencies that pass over the medium differ considerably - from 10 MHz for 10BASEx to ca. 1.6 GHz for 40GBASE-T. Of course, higher frequency means faster voltage change. On fiber, the current generation runs 50 Gbit/s per lane.
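To see that propagation speed and data rate are independent, here's a rough back-of-the-envelope sketch (the velocity factor and the exact figures are illustrative assumptions, not values from any standard):

```python
# The signal travels at (roughly) the same speed on every cable generation,
# but the time allotted to each bit shrinks as the data rate grows.

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7    # assumed typical for twisted pair (~60-80% of c)
CABLE_LENGTH_M = 100     # usual maximum Ethernet run over copper

# Time for the leading edge of a signal to cross the whole cable:
propagation_delay_s = CABLE_LENGTH_M / (C * VELOCITY_FACTOR)

for name, rate_bps in [("10BASE-T", 10e6), ("100BASE-TX", 100e6),
                       ("1000BASE-T", 1e9), ("10GBASE-T", 10e9)]:
    bit_time_s = 1 / rate_bps
    # How many bits are "in flight" on the wire at once:
    bits_on_wire = propagation_delay_s / bit_time_s
    print(f"{name:11s} bit time {bit_time_s * 1e9:9.2f} ns, "
          f"~{bits_on_wire:7.0f} bits in flight over {CABLE_LENGTH_M} m")
```

The propagation delay (~477 ns here) is the same in every line; only the bit time changes, which is exactly why faster standards need higher frequencies on the wire rather than faster electrons.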




The cable (twisted pair) is also the same (I think).




No. Cat-3 only supports 10 Mbit/s, Cat-5e up to 1 Gbit/s, 10 Gbit/s requires Cat-6A, and 25/40 Gbit/s require Cat-8 cable rated at 2 GHz. The cable is only the same (Cat-5e) for 100BASE-TX and 1000BASE-T due to a leap in encoding technology.




So why does the maximum bandwidth differ? If the reason is that in the past, we didn't have so much processing power in network cards, why doesn't the protocol allow for bandwidth negotiation (that would suit both devices that communicate)?




Yes. Faster data rates mean faster processing and more elaborate encoding using more silicon (=transistors). Also, back in the 80s and early 90s, computers lacked the internal bandwidth to make use of faster data rates - they even struggled with 10 Mbit/s for a while. Literally nobody would have had a use for 10+ Gbit/s back then.



Ethernet does allow for speed negotiation on twisted pair - faster (and some intermediate) rates have been added over the years:



  • 1 Mbit/s (1987)

  • 10 Mbit/s (1990)

  • 100 Mbit/s (1995)

  • 1000 Mbit/s (1999)

  • 2500 Mbit/s (2016)

  • 5 Gbit/s (2016)

  • 10 Gbit/s (2006)

  • 25 Gbit/s (2016)

  • 40 Gbit/s (2016, most likely the final speed for twisted pair)

  • fiber currently adds 50, 100, 200, and 400 Gbit/s (2017)
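The negotiation itself is simple in principle: each end advertises the modes it supports, and the link settles on the best mode both have in common. A simplified sketch (the real mechanism, IEEE 802.3 Clause 28, exchanges link code words in link pulses; the priority list here is an illustrative subset):

```python
# Simplified model of Ethernet autonegotiation: take the intersection of
# the advertised modes and pick the highest-priority common one.

# Highest priority first (illustrative subset of the real priority table).
PRIORITY = ["10GBASE-T", "1000BASE-T full", "1000BASE-T half",
            "100BASE-TX full", "100BASE-TX half",
            "10BASE-T full", "10BASE-T half"]

def resolve(local_modes, peer_modes):
    """Return the best mode both ends advertise, or None if none match."""
    common = set(local_modes) & set(peer_modes)
    for mode in PRIORITY:          # walk from best to worst
        if mode in common:
            return mode
    return None

# A gigabit NIC talking to an older Fast Ethernet switch port:
print(resolve(
    ["1000BASE-T full", "100BASE-TX full", "10BASE-T full"],
    ["100BASE-TX full", "100BASE-TX half", "10BASE-T full"],
))  # -> 100BASE-TX full
```

This is also why new rates could be added over time: an old device simply never advertises a mode it doesn't know about.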





answered Jan 11 at 22:04 by Zac67 (edited Jan 18 at 18:05)














































    Well, most devices that support 1000Base-T will also negotiate to 100Base-TX and 10Base-T.



    Older devices that are 10Base-T cannot negotiate to 100Base-TX or 1000Base-T, nor can devices that are 100Base-TX do 1000Base-T (although they can probably do 10Base-T), because those standards did not exist when the older devices were built.



    There is a lot more to this than just the speed. For example, the encoding used by each standard is different. It is fairly simple to build in backwards compatibility, but how would you propose to build in support for a standard that does not yet exist?




    The cable (twisted pair) is also the same (I think).




    There are actually different cable categories. The currently registered categories are 3, 5e, 6, 6a, and 8 (new). Each category can handle up to a certain frequency, which determines the bandwidth you can get for Ethernet.



    • Category-3 cable will work for 10Base-T, but faster speeds will not
      work.

    • Category 5e will work for 10Base-T, 100Base-TX, and 1000Base-T.

    • Category-6 will work for 10Base-T, 100Base-TX, and 1000Base-T. It will
      also handle 10GBase-T for a shorter distance.

    • Category-6a will work for 10Base-T, 100Base-TX, 1000Base-T, and
      10GBase-T.

    • Category-8 is newly registered for 25GBase-T and 40GBase-T, but it
      has limitations, and the maximum distance is less than a third of
      the usual run (30 meters vs. 100 meters).
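The category-to-standard mapping above can be condensed into a small lookup (values paraphrase this answer; the authoritative frequency ratings come from the TIA/ISO cabling specifications):

```python
# Fastest Base-T standard each cable category supports, per the list above.
MAX_STANDARD = {
    "Cat-3":  "10Base-T",
    "Cat-5e": "1000Base-T",
    "Cat-6":  "1000Base-T (10GBase-T only over shorter runs)",
    "Cat-6a": "10GBase-T",
    "Cat-8":  "25G/40GBase-T (30 m instead of 100 m)",
}

def fastest_for(category):
    """Look up the fastest supported standard for a cable category."""
    return MAX_STANDARD.get(category, "unknown category")

print(fastest_for("Cat-5e"))   # -> 1000Base-T
```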

























    • Not to mention, cabling that supports higher speeds wasn't available (or was very expensive) when 10Base-T started out.

      – Ron Trunk
      Jan 11 at 19:29











    • Or even installed incorrectly (split pairs), so that 1000Base-T would not work.

      – Ron Maupin
      Jan 11 at 19:31











    • The cable requirements are there to ensure that you are guaranteed a link for all devices at the maximum cable length (typ. 100m) and all receivers/transmitters and noise at their thresholds. For shorter connections you may well get away with poorer cables in some situations.

      – crasic
      Jan 18 at 14:16
































    Processing power and hardware capabilities are exactly the reason.



    10 Mbit/s standards are from the early 1980s. At that time, it would have been:



    • very, very expensive to build circuitry able to do anything intelligent with serialized digital data at 100 Mbit/s - even with the actual clock being around 32 MHz, you would likely have needed low-integration S-TTL circuitry, or even worse, ECL circuitry. That costs a lot, consumes power like hell, and in the case of ECL, is tricky to adapt to the 5V TTL signalling prevalent in computer systems back then. High-integration, fast CMOS was only beginning to find a foothold; the 33 MHz 80386 CPU was considered bleeding-edge technology in 1989.


    • unusual to find a computer that could usefully serve or consume data at more than a megabyte per second - an ESDI hard drive capable of delivering 16 Mbit/s was considered pretty fast back then.


    There is auto-negotiation alright, under the condition that you are:



    • using twisted-pair ethernet (at any speed), or one of the fibre-based styles that can (not all can!).

    The coaxial systems had no concept of an established link at layers 1 and 2 - the network didn't know something was on it until it started talking or responding. Also, while the coaxial cable used for 10base2 and 10base5 would have electrically supported far faster speeds if used as a point-to-point link (bet you could push 10GBit through an RG58 if you optimized your transceivers for it!), these systems abused it as a bridge-tapped common circuit that could never have supported 100 Mbit/s links anyway, leaving no reason to implement an autonegotiation facility. Also, exactly that bridge-tapped, high-power, often sloppily installed circuit would probably have become an EMI nightmare if you had attempted to put any signal on it at the electrical bandwidth required for 100 Mbit/s (you are in shortwave territory here; shortwave noise propagates nastily far, and most installations you saw back then would have made an awesome shortwave antenna).



    • You actually allow your devices to autonegotiate - with many devices (e.g. switches), a port preset to a certain link speed and duplex style will no longer cooperate with the autonegotiation protocols, and will only work properly talking to a port preset exactly the same way.


















































      Different types of Ethernet (standard Ethernet, Fast Ethernet, Gigabit Ethernet) have different speed of data transfer (10 Mbit, 100 Mbit, 1 Gbit). Why is that?




      They are incrementally defined by formal specifications that progressed through those speed barriers as technology and customer needs advanced.



      1. 1Mbit proto-ethernet 802.3e (StarLAN 1mbit 1BASE5) - 1987

      2. 10Mbit ethernet 802.3i (10BASE-T) - 1990

      3. 100Mbit ethernet 802.3u (100BASE-TX) - 1995

      4. 1000Mbit ethernet 802.3ab (1000BASE-T) - 1999

      5. 10Gbit ethernet 802.3an (10GBASE-T) - 2006


      6. Many sibling standards for alternate cables and signals (1987-today)

      Each incremental step involved networking equipment manufacturers, customers, end users, network admins, IT, and the hardware state of the art moving in lockstep to implement, manufacture, and deploy the incremental standards. They are largely, but not completely, backwards compatible. They largely, but not entirely, follow the development of computing power and data-handling needs.




      On the physical layer, the speed of electrical signals (voltage change) stays the same. The cable (twisted pair) is also the same (I think). So why does the maximum bandwidth differ?




      This is not quite true: both the transmission rate (frequency) and the number of bits carried per symbol differ. At the physical layer in higher-speed Ethernet, multiple bits are transmitted at once using special encoding into an analog signal that has more than two states. More states mean less noise tolerance, and more hertz means more care about cable requirements, so the standard dictates the cable as well.



      1. 10BASE-T: 10 MHz, Cat 3, 1 bit per Hz - 2 pairs

      2. 100BASE-TX: 31.25 MHz, Cat 5, 3.25 bits per Hz, NRZI encoding - 2 pairs

      3. 1000BASE-T: 62.5 MHz, Cat 5e, 4 bits per Hz, PAM5 encoding - 4 pairs

      4. 10GBASE-T: 400 MHz, Cat 6, 6.25 bits per Hz, PAM16 encoding - 4 pairs
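The arithmetic behind that list can be checked as data rate ≈ symbol rate × bits per hertz × pairs carrying data. A quick sketch (figures copied from the list above; 10BASE-T is counted as one pair per direction, and 100BASE-TX is omitted because its 4B5B line coding doesn't reduce to this simple product):

```python
# Nominal data rate reconstructed from symbol rate, bits per hertz, and
# the number of pairs carrying data simultaneously (per the list above).
VARIANTS = {
    #  name:       (symbol rate Hz, bits per Hz, data-carrying pairs)
    "10BASE-T":    (10e6,   1.0,  1),  # one pair per direction
    "1000BASE-T":  (62.5e6, 4.0,  4),  # all four pairs, PAM5
    "10GBASE-T":   (400e6,  6.25, 4),  # all four pairs, PAM16
}

for name, (hz, bits_per_hz, pairs) in VARIANTS.items():
    rate_bps = hz * bits_per_hz * pairs
    print(f"{name:11s} ~ {rate_bps / 1e6:7.0f} Mbit/s")
```

The point is visible in the numbers: going from 1000BASE-T to 10GBASE-T multiplies both the symbol rate (62.5 MHz to 400 MHz) and the bits per symbol, which is why each step demands a better cable.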

      On the NIC side, multiple different standards have progressed for the PHY/MAC bridge (Layer "1.5") aka, Media Independent Interface



      1. MII - 25 MHz clock (2.5 MHz for 10Mbit), 18 data and control signals, 100/10Mbit

      2. RMII - 50 MHz clock, 9 signals, 100/10Mbit

      3. GMII - 125 MHz, 26 signals, 1000/100/10Mbit

      4. RGMII - 125 MHz DDR, 14 signals, 1000/100/10Mbit

      Note that Ethernet networking is a self-contained set of specifications. There are other networking specs (e.g. high-speed coax, i.e. cable internet) that are just as rigorously defined by their own sets of standards and are fundamentally incompatible on a physical level (requiring "modems", bridges, and edge interfaces). For example, DOCSIS 3.1, the current state-of-the-art cable internet standard, specifies links up to 10 Gbit/s over coax, using a much more complex signaling scheme than any Ethernet physical-layer standard.



















































                3














                Well, most devices that support 1000Base-T will also negotiate to 100Base-TX and 10Base-T.



                Older devices that are 10Base-T cannot negotiate to 100Base-TX or 1000Base-T, nor can devices that are 100Base-TX do 1000Base-T (although they can probably do 10Base-T), because those standards did not exist when the older devices were built.



                There is a lot more to this than just the speed. For example, the encoding used by each is different. It is fairly simple to put in backwards compatibility, but how would you propose to build in for a standard that does not yet exist?




                The cable (twisted pair) is also the same (I think).




                There are actually different cable categories. The currently registered cable categories are 3, 5e, 6, 6a and 8 (new). Each category can handle up to a certain frequency that determines the bandwidth you can get for ethernet.



                • Category-3 cable will work for 10Base-T, but faster speeds will not
                  work.

                • Category 5e will work for 10Base-T, 100Base-TX, and 1000Base-T.

                • Category-6 will work for 10Base-T, 100Base-TX, and 1000Base-T. It will
                  also handle 10GBase-T for a shorter distance.

                • Category-6a will work for 10Base-T, 100Base-TX, 1000Base-T, and
                  10GBase-T.

                • Category-8 is newly registered for 25GBase-T and 40GBase-T, but it
                  has limitations, and the distance is less than 1/3 of the distance
                  (30 meters vs. 100 meters).





                share|improve this answer




















                • 2





                  Not to mention, cabling that supports higher speeds wasn't available (or very expensive) when 10bT started out.

                  – Ron Trunk
                  Jan 11 at 19:29











                • Or, even used incorrectly (split) so that 1000Base-T would not work.

                  – Ron Maupin
                  Jan 11 at 19:31











                • The cable requirements are there to ensure that you are guaranteed a link for all devices at the maximum cable length (typ. 100m) and all receivers/transmitters and noise at their thresholds. For shorter connections you may well get away with poorer cables in some situations.

                  – crasic
                  Jan 18 at 14:16















                3














                Well, most devices that support 1000Base-T will also negotiate to 100Base-TX and 10Base-T.



                Older devices that are 10Base-T cannot negotiate to 100Base-TX or 1000Base-T, nor can devices that are 100Base-TX do 1000Base-T (although they can probably do 10Base-T), because those standards did not exist when the older devices were built.



                There is a lot more to this than just the speed. For example, the encoding used by each is different. It is fairly simple to put in backwards compatibility, but how would you propose to build in for a standard that does not yet exist?




                The cable (twisted pair) is also the same (I think).




                There are actually different cable categories. The currently registered cable categories are 3, 5e, 6, 6a and 8 (new). Each category can handle up to a certain frequency that determines the bandwidth you can get for ethernet.



                • Category-3 cable will work for 10Base-T, but faster speeds will not
                  work.

                • Category 5e will work for 10Base-T, 100Base-TX, and 1000Base-T.

                • Category-6 will work for 10Base-T, 100Base-TX, and 1000Base-T. It will
                  also handle 10GBase-T for a shorter distance.

                • Category-6a will work for 10Base-T, 100Base-TX, 1000Base-T, and
                  10GBase-T.

                • Category-8 is newly registered for 25GBase-T and 40GBase-T, but it
                  has limitations, and the distance is less than 1/3 of the distance
                  (30 meters vs. 100 meters).





                share|improve this answer




















                • 2





                  Not to mention, cabling that supports higher speeds wasn't available (or very expensive) when 10bT started out.

                  – Ron Trunk
                  Jan 11 at 19:29











                • Or, even used incorrectly (split) so that 1000Base-T would not work.

                  – Ron Maupin
                  Jan 11 at 19:31











                • The cable requirements are there to ensure that you are guaranteed a link for all devices at the maximum cable length (typ. 100m) and all receivers/transmitters and noise at their thresholds. For shorter connections you may well get away with poorer cables in some situations.

                  – crasic
                  Jan 18 at 14:16














                answered Jan 11 at 19:24









                Ron Maupin










Processing power and hardware capabilities are exactly the reason.

The 10 Mbit standards are from the early 1980s. At that time, it would have been:

• very, very expensive to build circuitry able to do anything intelligent with serialized digital data at 100 Mbit/s - even with the actual clock being around 32 MHz, you would likely have needed low-integration S-TTL circuitry, or even worse, ECL circuitry. This costs a lot, consumes power like hell, and in the case of ECL, is tricky to adapt to the 5V TTL signalling prevalent in computer systems back then. High-integration, fast CMOS was only beginning to find a foothold; the 33 MHz 80386 CPU was considered bleeding-edge technology in 1989.

• unusual to find a computer that could usefully serve or consume data at more than a megabyte a second - an ESDI hard drive capable of delivering 16 Mbit/s was considered pretty fast back then.

There is auto-negotiation all right, under the condition that you are:

• using twisted-pair Ethernet (at any speed), or one of the fibre-based styles that can (not all can!).

The coaxial systems had no concept of an established link at layers 1 and 2 - the network didn't know something was on it until it started talking or responding. Also, while the coaxial cable used for 10BASE2 and 10BASE5 would have electrically supported far faster speeds (bet you could push 10 Gbit through an RG58 if you optimized your transceivers for it!) if used as a point-to-point link, these systems abused it as a bridge-tapped common circuit that could never have supported 100 Mbit links anyway, leaving no reason to implement an auto-negotiation facility. Also, exactly that bridge-tapped, high-power, often sloppily installed circuit would probably have become an EMI nightmare if you attempted to put any signal at the electrical bandwidth required for 100 Mbit on it (you are in shortwave territory here; shortwave noise propagates a long way, and most installations you saw back then would have made an awesome shortwave antenna).

• You actually allow your devices to autonegotiate - with many devices (e.g. switches), a port preset to a certain link speed and duplex style will no longer cooperate with the auto-negotiation protocols, and will only work properly talking to a port preset exactly the same way.
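The negotiation itself boils down to each end advertising its abilities and both sides picking the highest-priority mode they have in common. A minimal sketch of that resolution step (a simplified model, not the real protocol: actual 802.3 auto-negotiation exchanges link code words as electrical pulses on the wire, and the mode strings here are just labels):

```python
# Simplified model of auto-negotiation's priority-resolution step.
# Modes ordered from most to least preferred, per 802.3 priority order.
PRIORITY = [
    "1000BASE-T full",
    "1000BASE-T half",
    "100BASE-TX full",
    "100BASE-TX half",
    "10BASE-T full",
    "10BASE-T half",
]

def resolve(local_abilities, peer_abilities):
    """Return the best mode both ends advertise, or None if no overlap."""
    common = set(local_abilities) & set(peer_abilities)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None  # no common ability -> no link established

# A gigabit NIC advertising everything, talking to an old 10BASE-T hub:
nic = PRIORITY
old_hub = ["10BASE-T half"]
print(resolve(nic, old_hub))  # -> 10BASE-T half
```

This also shows why a port hard-set to one mode breaks things: a preset port stops advertising, so the other side has no ability list to intersect with and must fall back to guessing.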





                    answered Jan 12 at 1:11









                    rackandboneman





































Different types of Ethernet (standard Ethernet, Fast Ethernet, Gigabit Ethernet) have different speed of data transfer (10 Mbit, 100 Mbit, 1 Gbit). Why is that?

They are incrementally defined by formal specifications that progressed through those speed barriers as technology and customer needs advanced.

1. 1 Mbit proto-Ethernet 802.3e (StarLAN, 1BASE5) - 1987

2. 10 Mbit Ethernet 802.3i (10BASE-T) - 1990

3. 100 Mbit Ethernet 802.3u (100BASE-TX) - 1995

4. 1000 Mbit Ethernet 802.3ab (1000BASE-T) - 1999

5. 10 Gbit Ethernet 802.3an (10GBASE-T) - 2006

6. Many sibling standards for alternate cables and signals (1987-today)

Each incremental step involved networking equipment manufacturers, customers, end users, network admins, IT, and the hardware state of the art moving in lockstep to implement, manufacture, and deploy the incremental standards. They are largely, but not completely, backwards compatible. They largely, but not entirely, follow the development of computing power and data-handling needs.

On the physical layer, the speed of electrical signals (voltage change) stays the same. The cable (twisted pair) is also the same (I think). So why does the maximum bandwidth differ?

This is not quite true: both the transmission rate (frequency) and the number of bits carried per symbol differ. At the physical layer, higher-speed Ethernet transmits multiple bits at once, using special encoding into an analog signal that has more than two states. More states mean less noise tolerance, and more hertz means stricter cable requirements, so the standard dictates the cable as well.

1. 10BASE-T: 10 MHz, Cat 3, 1 bit per Hz - 2 pairs

2. 100BASE-TX: 31.25 MHz, Cat 5, 3.25 bits per Hz, NRZI encoding - 2 pairs

3. 1000BASE-T: 62.5 MHz, Cat 5e, 4 bits per Hz, PAM5 encoding - 4 pairs

4. 10GBASE-T: 400 MHz, Cat 6, 6.25 bits per Hz, PAM16 encoding - 4 pairs
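These figures multiply out to the nominal link rates: signal bandwidth × bits per Hz × pairs carrying data in one direction. A quick sanity check (a sketch, not normative: 100BASE-TX is taken as an effective 3.2 bits per Hz after 4B5B coding, and the 10/100 Mbit standards use only one of their two pairs per direction):

```python
# Link rate = signal bandwidth x effective bits per Hz x pairs per direction.
# 10BASE-T and 100BASE-TX use one pair to transmit and one to receive;
# 1000BASE-T and 10GBASE-T use all four pairs in both directions at once.
standards = {
    # name: (MHz, effective bits per Hz, pairs per direction)
    "10BASE-T":   (10.0,  1.0,  1),
    "100BASE-TX": (31.25, 3.2,  1),  # 4B5B coding: 3.2 effective bits per Hz
    "1000BASE-T": (62.5,  4.0,  4),
    "10GBASE-T":  (400.0, 6.25, 4),
}

for name, (mhz, bits_per_hz, pairs) in standards.items():
    mbit = mhz * bits_per_hz * pairs
    print(f"{name}: {mhz:g} MHz x {bits_per_hz:g} b/Hz x {pairs} = {mbit:g} Mbit/s")
```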

On the NIC side, multiple standards have progressed for the PHY/MAC bridge (layer "1.5"), a.k.a. the Media-Independent Interface:

1. MII - 25 MHz clock (2.5 MHz for 10 Mbit), 18 data and control signals, 100/10 Mbit

2. RMII - 50 MHz clock, 9 signals, 100/10 Mbit

3. GMII - 125 MHz, 26 signals, 1000/100/10 Mbit

4. RGMII - 125 MHz DDR, 14 signals, 1000/100/10 Mbit

Note that Ethernet networking is a self-contained set of specifications. There are other networking specs (e.g. high-speed coax, i.e. cable internet) that are just as rigorously defined by their own standards and fundamentally incompatible at the physical level (requiring "modems", bridges, and edge interfaces). For example, DOCSIS 3.1, the current state-of-the-art cable internet standard, specifies links up to 10 Gbit over coax, using a much more complex signaling scheme than any Ethernet physical-layer standard.






                            answered Jan 18 at 3:47









                            crasic



























