How to get all the log lines between two dates ranges in Linux

How do I get all the log lines between two dates in Linux? I have tried commands like the following.



1) awk '$0>=from && $0<=to' from="Wed 21 Mar 14:52:08" to="Wed 21 Mar 14:53:08" /home/db2inst1/logs/tracestart.log


but it gives me only the lines that contain those exact dates.



2) sed -n '/Wed 21 Mar 14:52:00/,/Wed 21 Mar 14:53:08/p' /home/db2inst1/logs/tracestart.log /home/db2inst1/logs/traceend.log


This one gives me the correct data, but the start date (Wed 21 Mar 14:52:00) has to appear in the file exactly; otherwise there is no output, not even for the nearest time. For example, if the nearest logged time is Wed 21 Mar 14:52:01, nothing is printed at all.



Log file sample:



2018-04-04 11:40:46 INFO RestAssuredService:184 - some thing.......
2018-04-04 11:40:48 INFO RestAssuredService:199 - some thing.......
2018-04-04 11:40:48 INFO RestAssuredService:177 -
*********invokeService is
2018-04-04 11:40:48 INFO ProductInfoTest:57 - Response Map:::::
RESPONSE_TYPE=application/json, EXPECTED_RESPONSE=
"products": [

"id": 23001,
"type": "SHIRT",
"description": "Mens Wear Dresses",
"price": 850,
"brand": "PETER_ENGLAND"
,

"id": 23002,
"type": "KURTI",
"description": "Womens Wear Dresses",
"price": 899,
"brand": "ALLEND_SOLEY"

] ,
ACTUAL_RESPONSE=com.jayway.restassured.internal.RestAssuredResponseImpl@7d48651a
2018-04-04 11:40:48 INFO ProductValidator:47 - EXPECTED_RESPONSE::::
"products": [

"id": 23001,
"type": "SHIRT",
"description": "Mens Wear Dresses",
"price": 850,
"brand": "PETER_ENGLAND"
,

"id": 23002,
"type": "KURTI",
"description": "Womens Wear Dresses",
"price": 899,
"brand": "ALLEND_SOLEY"

]
2018-04-04 11:40:48 ERROR ProductInfoTest:65 - Exception occured::: null
2018-04-04 11:40:48 INFO ProductInfoStepDefinations:27 - addProductDetailsApiTest Starting::::
2018-04-04 11:40:48 INFO ProductInfoTest:53 - getAllProductsInfo starting
2018-04-04 11:40:48 INFO RestAssuredService:170 -
*********invokeService is starting*********
2018-04-04 11:40:48 INFO RestAssuredService:247 - Final uri:::::: rest/market/item/info
2018-04-04 11:40:48 INFO RestAssuredService:258 - HeaderParametersMap :::::: {Accept=application/json, Content-Type=application/json
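
What I am really after is something like this rough sketch (assuming the "YYYY-MM-DD HH:MM:SS" format of the sample above), where the boundary times do not have to appear verbatim in the log and multi-line entries stay attached to their timestamped line:

awk -v from="2018-04-04 11:40:46" -v to="2018-04-04 11:40:48" '
    /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] / { ts = $1 " " $2 }  # remember the last timestamp seen
    ts >= from && ts <= to                                             # continuation lines inherit it
' /home/db2inst1/logs/tracestart.log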






    If any of the answers solved your problem, please accept it by clicking the checkmark next to it. Thank you!
    – Jeff Schaller
    May 7 at 11:14














3 Answers













If your system is using systemd, then journalctl has options to limit its log output to a time and date range.



From man journalctl:




-S, --since=, -U, --until=
    Start showing entries on or newer than the specified date, or on or older than
    the specified date, respectively. Date specifications should be of the format
    "2012-10-30 18:17:16". If the time part is omitted, "00:00:00" is assumed. If
    only the seconds component is omitted, ":00" is assumed. If the date component
    is omitted, the current day is assumed. Alternatively the strings "yesterday",
    "today", "tomorrow" are understood, which refer to 00:00:00 of the day before
    the current day, the current day, or the day after the current day,
    respectively. "now" refers to the current time. Finally, relative times may be
    specified, prefixed with "-" or "+", referring to times before or after the
    current time, respectively. For complete time and date specification, see
    systemd.time(7). Note that --output=short-full prints timestamps that follow
    precisely this format.




Combine this with the --user option and some grep to filter out system messages to cut down on the clutter.
If your system does not use systemd, or your program's messages are not caught by journald, then you may need something else.
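
For example (a sketch; myapp.service is only a placeholder for whatever unit actually carries the messages):

# all entries between two timestamps, from the current user's journal
journalctl --user --since "2018-04-04 11:40:45" --until "2018-04-04 11:40:48"

# restrict to one unit and grep away unrelated noise
journalctl -u myapp.service --since "yesterday" --until "now" | grep -i 'invokeservice'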



















    Assuming a simplistic "time" environment (no timezone conversions, no daylight savings changes), you could tell awk your date ranges in seconds-since-the-epoch, then have awk convert each date to seconds-since-the-epoch and print only lines in that range:



    awk -v from=$(date -d "2018-04-04 11:40:45" +%s) \
        -v to=$(date -d "2018-04-04 11:40:47" +%s) \
        '{ "date -d \"" $1 " " $2 "\" +%s" | getline t; if (t >= from && t <= to) print }' < input
    2018-04-04 11:40:46 INFO RestAssuredService:184 - some thing.......


    It's not particularly efficient, as it calls date for every line; it could be enhanced to cache lookups, if that becomes a concern.
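
    A possible refinement (a sketch, not part of the original suggestion): with GNU awk, mktime() can do the epoch conversion in-process instead of spawning date for every line, assuming the "YYYY-MM-DD HH:MM:SS" format of the sample log:

    gawk -v from="2018-04-04 11:40:45" -v to="2018-04-04 11:40:47" '
    BEGIN {
        # mktime() wants "YYYY MM DD HH MM SS", so turn the separators into spaces
        gsub(/[-:]/, " ", from); f = mktime(from)
        gsub(/[-:]/, " ", to);   t = mktime(to)
    }
    {
        ts = $1 " " $2; gsub(/[-:]/, " ", ts)
        s = mktime(ts)          # -1 for continuation lines without a leading timestamp
        if (s >= f && s <= t) print
    }' input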



















      Another approach I use frequently for log files from Java applications is to find the line numbers of the first occurrence of each timestamp:



      FROM_DATE="Wed 21 Mar 14:52:08"
      TO_DATE="Wed 21 Mar 14:53:08"

      FROM_LINE=$(grep -n -m 1 "$FROM_DATE" "$FILE" | cut -d ":" -f 1)
      TO_LINE=$(grep -n -m 1 "$TO_DATE" "$FILE" | cut -d ":" -f 1)


      and then print out the information in between, e.g.



      tail -n +"$FROM_LINE" "$FILE" | head -n $(expr $TO_LINE - $FROM_LINE + 1)


      or via



      sed -n -e "$FROM_LINE,$TO_LINE p" -e "$TO_LINE q" "$FILE"


      This also captures stack traces, REST API content, JSON structures, etc.
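
      A hypothetical end-to-end run against the sample log above (the path and dates are only examples), with a small guard in case one of the timestamps does not occur verbatim in the file:

      FILE=/home/db2inst1/logs/tracestart.log
      FROM_DATE="2018-04-04 11:40:46"
      TO_DATE="2018-04-04 11:40:48"

      FROM_LINE=$(grep -n -m 1 "$FROM_DATE" "$FILE" | cut -d ":" -f 1)
      TO_LINE=$(grep -n "$TO_DATE" "$FILE" | tail -n 1 | cut -d ":" -f 1)   # last hit, so the whole end second is included

      if [ -n "$FROM_LINE" ] && [ -n "$TO_LINE" ]; then
          sed -n "$FROM_LINE,$TO_LINE p" "$FILE"
      else
          echo "timestamp not found verbatim in $FILE" >&2
      fi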



      For certain applications, such as those from the Hadoop framework, I have dedicated scripts for working with their log files. The date-based approach Jeff mentioned is what I use to calculate the time between two events.



      For further information (and reference):



      • cat line X to line Y on a huge file

      • Subtract two variables in Bash




