Getting systemd service logs faster from my service

I have a systemd service, defined as follows, that works fine:



[Unit]
Description=my service
After=network.target

[Service]
User=myuser
Group=mygroup
WorkingDirectory=/home/myuser/myapp
Environment="PATH=/home/myuser/myapp/.venv/bin"
ExecStart=/home/myuser/myapp/.venv/bin/python3 /home/myuser/myapp/run.py
Restart=on-failure

[Install]
WantedBy=multi-user.target


This is a Python web application based on the Flask framework. Normally I can see incoming requests "live" in the application's stdout, i.e. when I run the app directly with python run.py.



Now, after starting the service, I'd like to follow the app's logs, so I run:



sudo journalctl -f -u my_app.service


and incoming logs are awfully slow: sometimes it takes minutes or more for them to appear. They do carry proper timestamps afterwards, though, so it's not that they're disappearing; they show up, just after a long time.



What I've tried:




  • redirecting the systemd service output to files:



    StandardOutput=file:/var/log/my_app/output.log



    StandardError=file:/var/log/my_app/error.log



    with no luck: the files are written fine, but with the same delay



  • lowering SyncIntervalSec in journald.conf from the default 5m to 5s, so that journal logs are flushed to disk faster - that didn't help either
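(For reference, the change above corresponds to this fragment of /etc/systemd/journald.conf, followed by restarting systemd-journald:)

```ini
[Journal]
SyncIntervalSec=5s
```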


Is there any way to get these logs from my application to journald faster? I don't have this trouble with other services, such as the system authentication ones: their records appear immediately.
My journald.conf has default parameters (except the one above), my systemd version is 237, and I'm running Ubuntu 18.04.










  • Can you check if .../python3 -u /home/myuser/myapp/run.py fixes this? It looks like Python's stdout buffering is what's preventing your log lines from being flushed right away... Buffering is typically turned off when writing to a terminal (so when you run it manually), but needs an option when writing to a file or pipe.

    – filbranden
    Mar 8 at 16:11






  • Oh look! It's the Python buffering question again. See unix.stackexchange.com/questions/285419, unix.stackexchange.com/questions/164987, unix.stackexchange.com/questions/68059, and unix.stackexchange.com/questions/206224.

    – JdeBP
    Mar 8 at 16:58












  • Thank you very much, and sorry for the repeated question - I spent a lot of time searching, but it seems I used the wrong keywords. @filbranden, if you post your comment as an answer, I'll be happy to accept it.

    – adamczi
    Mar 8 at 17:28











  • @adamczi Totally understandable. It's hard to suspect that the system producing the logs is the one at fault, especially when it works fine when you run it in a terminal... Once you've been using Unix/Linux for 20+ years this becomes "obvious", but I agree it's counterintuitive for a beginner. I'll go ahead and post my comment as an answer. Thanks for confirming that the -u setting worked for you!

    – filbranden
    Mar 8 at 17:39

















Tags: systemd, logs, systemd-journald, journalctl






asked Mar 8 at 14:39 by adamczi





1 Answer


















The problem is actually with buffering in the Flask application, not with how systemd or journald ingest those logs.



This can be counter-intuitive since, as you mentioned, running python3 run.py directly on the command line shows the logs right away, and the timestamps in the journal do end up looking correct.



The former happens because Unix/Linux will typically set up stdout to be unbuffered (or line-buffered) when connected to a terminal, since it expects interaction with a user, but block-buffered when connected to a file (the case of StandardOutput=file:...) or to a pipe (the case when you're logging to the journal, which is the default).
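This buffering difference is easy to demonstrate from Python itself. The sketch below spawns a child process that writes a line and then exits via os._exit(), which skips the interpreter's normal exit-time flush, so whatever is still sitting in the stdout buffer is lost:

```python
import subprocess
import sys

# A child that writes one line and exits without flushing its buffers.
CHILD = "import os, sys; sys.stdout.write('request handled\\n'); os._exit(0)"

# Default: stdout to a pipe is block-buffered, so the line never leaves
# the child's buffer before os._exit() discards it.
buffered = subprocess.run([sys.executable, "-c", CHILD],
                          capture_output=True).stdout

# With -u: stdout is unbuffered, so the write reaches the pipe immediately.
unbuffered = subprocess.run([sys.executable, "-u", "-c", CHILD],
                            capture_output=True).stdout

print(repr(buffered))    # b'' -- the buffered line was lost
print(repr(unbuffered))  # b'request handled\n'
```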



The latter is because the Python/Flask logger adds the timestamps itself, so even though the output is buffered, when it is finally flushed to the logs all the timestamps are already there.



Some applications know this is a common issue and set up stdout buffering appropriately when using it for logs, but that doesn't seem to be the case with this particular Python/Flask setup.



In Python, it's fairly easy to switch stdout to unbuffered mode globally, in either of two ways:



  1. Passing the -u flag to python3 in your command.

  2. Setting PYTHONUNBUFFERED=1 in the environment (which you can do in the systemd service unit with an additional Environment=PYTHONUNBUFFERED=1 line).

You confirmed this worked for your specific case, so that's great!
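Applied to the unit file from the question, either fix is a one-line change (a sketch, using the paths from the question):

```ini
[Service]
# Option 1: pass -u to the interpreter
ExecStart=/home/myuser/myapp/.venv/bin/python3 -u /home/myuser/myapp/run.py

# Option 2: keep ExecStart as-is and disable buffering via the environment
Environment=PYTHONUNBUFFERED=1
```

After editing the unit, run systemctl daemon-reload and restart the service for the change to take effect.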



For non-Python applications suffering from similar issues, command-line tools such as unbuffer and stdbuf can often solve the same problem.
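As a quick illustration of stdbuf (from GNU coreutils): it preloads a shim that changes the stdio buffering of programs using the C library, so -oL forces line-buffered stdout and -o0 makes it unbuffered. Note that it does not affect Python, which manages its own buffering, which is why -u / PYTHONUNBUFFERED is needed there:

```shell
# grep's stdout is block-buffered when writing to a pipe; stdbuf -oL
# switches it to line buffering so each match is emitted immediately.
printf 'one\ntwo\n' | stdbuf -oL grep one
```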



Solutions are usually specific to the kind of application, which is somewhat unfortunate, but once you know buffering is the issue, searching the web or Stack Exchange will usually lead you to a useful suggestion.






answered Mar 8 at 17:53 by filbranden


























