Whenever Ansible makes changes to sshd on CentOS 7, a random future play cannot connect

This problem has been irritating enough that I thought I would finally ask the community at large for a possible solution. It's even more irritating that I seem to be the only one experiencing it.



Essentially, on CentOS 7.x, any time the sshd configuration (or any part of sshd) is modified and the daemon is restarted or reloaded, then at some "random point" in the next 3 minutes all ssh connections reset and the server becomes unreachable via ssh for a few seconds.



This is especially a problem for Ansible, because it sometimes needs to make these changes to sshd itself and then reload it (for instance during new CentOS 7.x server builds). In later plays it then randomly fails to connect over ssh, which blows up the rest of the playbook/plays for the host that could not be contacted. This is especially bad with a large host pattern: a few hosts will happen to complete, but the others fail at various stages of the playbook after sshd is manipulated. Notably, nothing of the sort occurs on CentOS 5.x, 6.x, or even on Solaris.



The best I can do to avoid this is to insert a 90-second wait after any change to sshd, and even that isn't totally foolproof. It also makes those playbooks take 20+ minutes to run if the wait is invoked 7-8 times.



Here are some facts on this environment:



  • All new installs are from official ISO DVDs.
  • Every server is a Hyper-V 2012 guest.
  • Every server which has this problem is CentOS 7.x.



Here is some actual output of the problem, along with my hacky workarounds:



The failure:



fatal: [voltron]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"_ansible_item_result": true, "item": ["rsync", "iotop", "bind-utils", "sysstat.x86_64", "lsof"], "msg": "Failed to connect to the host via ssh: Shared connection to voltron closed.\r\n", "unreachable": true}]}


Example of one of the changes to sshd:



- name: Configure sshd to disallow root logins for security purposes on CentOS and Redhat 7x servers.
  lineinfile:
    backup: yes
    dest: /etc/ssh/sshd_config
    regexp: '^(#PermitRootLogin)'
    line: "PermitRootLogin no"
    state: present
  when: (ansible_distribution == "CentOS" or "RedHat") and (ansible_distribution_major_version == "7")
  notify: sshd reload Linux 7x
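
For what it's worth, lineinfile can also validate the edited file before it is copied into place, which catches a broken sshd_config before any handler restarts the daemon. This is only a sketch layered on the task above; the validate option and the /usr/sbin/sshd path are assumptions added here, not part of the original playbook:

- name: Disallow root logins, validating sshd_config before it is installed
  lineinfile:
    backup: yes
    dest: /etc/ssh/sshd_config
    regexp: '^(#PermitRootLogin)'
    line: "PermitRootLogin no"
    state: present
    validate: '/usr/sbin/sshd -t -f %s'   # sshd test mode; the task fails instead of breaking ssh
  notify: sshd reload Linux 7x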


The following handler:



- name: sshd reload Linux 7x
  systemd:
    state: restarted
    daemon_reload: yes
    name: sshd
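
Despite its name, that handler restarts sshd rather than reloading it. A gentler variant, assuming the stock CentOS 7 sshd.service (whose ExecReload sends SIGHUP), would re-read the configuration without tearing down the listening socket; this is a sketch, not the playbook's actual handler:

- name: sshd reload Linux 7x (reload rather than restart)
  systemd:
    name: sshd
    state: reloaded    # SIGHUP via ExecReload; the listener and existing sessions stay up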


Finally, my crude workaround to try to account for this problem:



- name: Wait a bit on CentOS/Redhat 7x servers to ensure changes don't mess up ssh and screw up further plays.
  pause:
    seconds: 90
  when: (ansible_distribution == "CentOS" or "RedHat") and (ansible_distribution_major_version == "7")


There has to be a better solution than what I came up with, and it's hard to believe that everyone else encounters this and simply puts up with it. Is there something I need to configure on CentOS 7.x servers to prevent this? Is there something needed on the Ansible side to deal with it, such as multiple ssh connection attempts per play on the first failure?
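
On the Ansible side, the ssh connection plugin does have a retry setting that covers the "try again on first failure" idea. A minimal ansible.cfg sketch, assuming an Ansible 2.x release that honours retries under [ssh_connection] (the value 5 is only an illustration):

[ssh_connection]
# Retry the ssh connection this many times before marking the host UNREACHABLE
retries = 5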



Thanks in advance!










centos scripting sshd ansible

asked Jun 1 '17 at 18:08 by Viscosity; edited Jun 1 '17 at 18:09 by Jeff Schaller

  • Are you sure you've seen it reset existing ssh connections? Normally, restarting ssh is not supposed to affect existing connections, so this might be some sort of clue.

    – sourcejedi
    Jun 4 '17 at 15:15











  • Please specify the exact ansible version you're using (e.g. if there is a bug in the systemd module, people will be interested what version it was in).

    – sourcejedi
    Jun 4 '17 at 15:24











  • @sourcejedi ansible --version reports: ansible 2.2.0.0, config file = /etc/ansible/ansible.cfg, configured module search path = Default w/o overrides. Well, I mean it "could" be a bug, but if so, why am I the only one experiencing it? Unless there is no one else out there using CentOS 7.x with Ansible.... You're right, however, that a service refresh shouldn't affect existing connections. Indeed, on my CentOS 6.x servers everything works flawlessly with the same playbook.

    – Viscosity
    Jun 9 '17 at 15:39












  • When you say it is restarted - in the system log, is that all you get? Or does systemd report that sshd exited, and was restarted according to Restart=on-failure? If so, what was the exit status? And did sshd not log any error message?

    – sourcejedi
    Jun 9 '17 at 16:33












  • This isn't an Ansible problem, but either an SSH or some network problem. Restarting SSH doesn't affect current SSH connections, so something else is at play here. Have you tried connecting over SSH from a terminal, restarting sshd, and seeing what happens to your connection? Also, are you using SSH ControlMaster with Ansible? You can enable it in ansible.cfg with ssh_args = -o ControlMaster=auto -o ControlPersist=60s.

    – Strahinja Kustudic
    Jul 18 '17 at 21:30
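
For reference, the ControlMaster setting mentioned in the last comment lives in the [ssh_connection] section of ansible.cfg; a minimal sketch of that snippet, using the 60-second persistence value suggested above:

[ssh_connection]
# Reuse one multiplexed ssh connection per host and keep it alive briefly between tasks
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
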
2 Answers
Rather than using the systemd module, try the service module:



- name: Restart secure shell daemon post configuration
  service:
    name: sshd
    state: restarted





answered Jun 1 '17 at 18:28 by DopeGhoti

  • Interesting, I will try that and get back to this page to let people know. But doesn't the service module just manipulate the "service" binary, which really just redirects through systemctl? Well, I'll give it a shot.

    – Viscosity
    Jun 1 '17 at 18:48











  • DopeGhoti, sadly your suggestion did not work. I get exactly the same issue as before, and it doesn't appear to be module-dependent between the service and systemd modules. Does anyone else have any suggestions?

    – Viscosity
    Jun 9 '17 at 15:35


This seems to be a common problem. See: Patch for Ansible ssh retries from 2016.



A better solution might be to wait for sshd to be ready to accept connections; the original thread proposes this Ansible code:



[VM creation tasks...]



  - name: Wait for the Kickstart install to complete and the VM to reboot
    local_action: wait_for host={{ vm_hostname }} port=22 delay=30 timeout=1200 state=started



  - name: Now configure the VM...
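
A related sketch that applies the same wait_for idea to the restart race in the question: after the handler restarts sshd, wait from the control machine until the SSH banner is served again before the next play runs. The search_regex check and the delegate_to/ansible_host details are illustrative assumptions, not part of the original answer:

  - name: Wait for sshd to come back after the restart
    wait_for:
      host: "{{ ansible_host | default(inventory_hostname) }}"
      port: 22
      delay: 3
      timeout: 60
      search_regex: OpenSSH   # succeed only once the SSH banner is actually being served
    delegate_to: localhost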






answered Jul 27 '17 at 10:05 by Nils