Drop packets with PREROUTING in iptables

The filter table is the best place to drop packets, agreed.



But, out of the box, Docker bypasses the INPUT filter rules: a PREROUTING nat rule steers traffic to Docker's own FORWARD rules, making Docker containers world-accessible. Inserting my own rules into the DOCKER-named filter chains (INPUT/FORWARD) fails because when Docker is restarted those chains are deleted and re-inserted (not appended), so my rules disappear.
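
For reference, the nat rule Docker installs is roughly the following (it may differ slightly between versions):

iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER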



My best attempt is to insert my own chain into PREROUTING ahead of Docker's rule and route unwanted packets arriving on eth0 (WAN) to a black hole, 0.0.0.1, because you cannot DROP or REJECT in the nat table anymore.



# Route anything but TCP 80,443 and ICMP to an IPv4 black hole
iptables -t nat -N BLACKHOLE
iptables -t nat -A BLACKHOLE ! -i eth0 -j RETURN
iptables -t nat -A BLACKHOLE -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
iptables -t nat -A BLACKHOLE -p tcp --dport 80 -j RETURN
iptables -t nat -A BLACKHOLE -p tcp --dport 443 -j RETURN
iptables -t nat -A BLACKHOLE -p icmp -j RETURN
iptables -t nat -A BLACKHOLE -p all -j DNAT --to 0.0.0.1
iptables -t nat -I PREROUTING -m addrtype --dst-type LOCAL -j BLACKHOLE
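
To double-check the ordering afterwards, listing PREROUTING with rule numbers should show the BLACKHOLE jump above the DOCKER jump (assuming nothing else has inserted rules since):

# List the nat PREROUTING chain with rule positions
iptables -t nat -L PREROUTING -n -v --line-numbers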


Here is what the nat table looks like with Docker and one container running:



[screenshot: nat table rules]



This seems to work well, but is there a way to explicitly reject packets before they reach Docker's PREROUTING rule?



(Alpine Linux 3.6.2, Docker v17.05.0-ce)







asked Nov 17 '17 at 22:31 by Drakes




















1 Answer
I had a similar problem: I needed to harden network traffic even if somebody deployed a container that bound its application to every address (0.0.0.0:port).



Docker provides a DOCKER-USER filter chain, but it looks like all the magic happens in the DOCKER nat chain referenced from PREROUTING.



So there is no way around it: nat happens before filtering, and I don't want to touch the Docker rules too much.



I don't like the idea of having to rewrite the packet yet again, so I came up with a scheme that RETURNs everything by default and jumps to another chain in PREROUTING before DOCKER is reached.



I then selectively jump back to DOCKER when I consider the traffic good.



Here's the code:



iptables -t nat -N DOCKER-BLOCK
# Inserted first, so it ends up below DOCKER-BLOCK: unmatched traffic
# returns here and never reaches Docker's DOCKER chain
iptables -t nat -I PREROUTING -m addrtype --dst-type LOCAL -j RETURN
# Inserted last, so it ends up at the top of PREROUTING
iptables -t nat -I PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-BLOCK


That's all. By default, inbound traffic that isn't matched in DOCKER-BLOCK is never DNATed, so it ends up in the filter table, where I have a catchall that drops everything.
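
A catchall of that kind might look roughly like this (only a sketch; the details depend on what the host itself needs to accept):

# Sketch of a default-deny INPUT policy: packets that were not DNATed stay
# addressed to the host and traverse INPUT, where they are dropped
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT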



If I want to enable a port:

iptables -t nat -I DOCKER-BLOCK -p tcp -m tcp --dport 1234 -j DOCKER


The nice thing about this is that you never have to touch the PREROUTING chain again; if you want to flush, flush DOCKER-BLOCK directly.
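
For example, to reset the set of allowed ports without disturbing anything else:

iptables -t nat -F DOCKER-BLOCK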






answered Apr 3 at 1:47 by tehmoon
• Can you please explain "by default everything coming from egress will end up in the filter table where I do have a catchall that drops everything"? – Ram, Aug 31 at 2:14









