Airflow Worker Daemon exits for no visible reason

I have Airflow 1.9 running inside a virtual environment, set up with Celery and Redis, and it works well. However, I wanted to daemonize the setup and followed the instructions here. It works for the Webserver, Scheduler and Flower, but fails for the Worker, which is of course the core of it all. My airflow-worker.service file looks like this:



[Unit]
Description=Airflow celery worker daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service

[Service]
EnvironmentFile=/etc/default/airflow
User=root
Group=root
Type=simple
ExecStart=/bin/bash -c 'source /home/codingincircles/airflow-master/bin/activate ; airflow worker'
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target
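For reference, the EnvironmentFile the unit loads is the plain KEY=value file from Airflow's systemd scripts; mine is roughly the sketch below (the values are placeholders, not my exact paths):

# /etc/default/airflow -- loaded by every airflow-* unit
# (illustrative values; adjust to your installation)
AIRFLOW_CONFIG=/home/codingincircles/airflow/airflow.cfg
AIRFLOW_HOME=/home/codingincircles/airflow

After editing the unit file I reload and restart the service with:

sudo systemctl daemon-reload
sudo systemctl enable airflow-worker.service
sudo systemctl start airflow-worker.service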


Curiously, if I run the ExecStart command directly on the CLI, it runs perfectly: tasks execute and everything is glorious. However, when I run sudo service airflow-worker start, it takes a while to return to the prompt and nothing shows up in the Flower UI. When I run journalctl -u airflow-worker.service -e, this is what I see:



systemd[1]: Started Airflow celery worker daemon.
bash[12392]: [2018-04-09 21:52:41,202] driver.py:120 INFO - Generating grammar tables from /usr/lib/python3.5/lib2to3/Grammar.txt
bash[12392]: [2018-04-09 21:52:41,252] driver.py:120 INFO - Generating grammar tables from /usr/lib/python3.5/lib2to3/PatternGrammar.txt
bash[12392]: [2018-04-09 21:52:41,578] configuration.py:206 WARNING - section/key [celery/celery_ssl_active] not found in config
bash[12392]: [2018-04-09 21:52:41,578] default_celery.py:41 WARNING - Celery Executor will run without SSL
bash[12392]: [2018-04-09 21:52:41,579] __init__.py:45 INFO - Using executor CeleryExecutor
systemd[1]: airflow-worker.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: airflow-worker.service: Unit entered failed state.
systemd[1]: airflow-worker.service: Failed with result 'exit-code'.
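Since the journal cuts off right after "Using executor CeleryExecutor" with no traceback, here is how the unit's environment can be reproduced interactively to surface the real error (a debugging sketch; it assumes /etc/default/airflow contains plain KEY=value lines, which set -a exports the way systemd's EnvironmentFile directive would):

sudo bash -c '
  set -a                        # export everything sourced below
  source /etc/default/airflow   # same file the unit loads via EnvironmentFile
  set +a
  source /home/codingincircles/airflow-master/bin/activate
  airflow worker
'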


What am I doing wrong? Every other way of running Airflow works; it only fails when I try to daemonize it. Even using the -D flag (as in airflow worker -D) works, but I'm not sure that is the right/safe/recommended way to run it in production, and I would rather manage it as a service. Please help.
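One difference between my CLI runs and the service is the user: the unit runs the worker as root, which may differ from my interactive shell. If that matters (an assumption on my part; Celery workers may refuse to start as root unless C_FORCE_ROOT is set in the environment), a non-root variant of the [Service] section would look like this, where the airflow user and group are placeholders:

[Service]
EnvironmentFile=/etc/default/airflow
# Run as an unprivileged account; "airflow" is a placeholder user/group.
User=airflow
Group=airflow
Type=simple
# exec replaces bash with the worker so systemd tracks the right PID.
ExecStart=/bin/bash -c 'source /home/codingincircles/airflow-master/bin/activate ; exec airflow worker'
Restart=on-failure
RestartSec=10s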







asked Apr 10 at 14:58 by CodingInCircles