Log file rotation and compression interval
We have the logrotate config for a service as below:
Even though compress is not mentioned in the config, the log files are gzipped after each rotation. I believe this is because the compress line is uncommented in the /etc/logrotate.conf file, thereby enabling it globally. The questions are:
Is there a time delay or interval between when the log file is rotated (from debug.log to debug.log-20190315) and when it is compressed (from debug.log-20190315 to debug.log-20190315.gz)?
If there is a delay, would mentioning compress in the service's own logrotate config file compress that log file immediately after it is rotated from debug.log to debug.log-20190315?
I do not see delaycompress mentioned in any of the logrotate config files.
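For illustration, a per-service config with compress set explicitly might look like the sketch below (the service name, path, and option values are generic placeholders, not our actual file):

```conf
# /etc/logrotate.d/myservice  (illustrative sketch)
/var/log/myservice/debug.log {
    daily
    rotate 14
    dateext
    compress        # gzip the rotated file within the same logrotate run
    # delaycompress # would instead postpone compression to the next rotation
    missingok
    notifempty
}
```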
(Background: Our Splunk indexer seems to be indexing a debug.log-2019xxxx file from this service. We have blacklisted *.gz$ and debug.log$ from going to Splunk, but the file debug.log-2019xxxx apparently exists for a few seconds or minutes, during which it matches neither blacklisted regex (*.gz$ or debug.log$) and so gets forwarded to Splunk. I know that I can fix this by adding debug.log-[0-9]* to the blacklist, but I would like to know what causes the temporary existence of debug.log-20190315.)
rhel logrotate
asked Mar 15 at 9:53 by Sree
1 Answer
The gzip compression step takes a finite amount of time, so it's possible that you're seeing the original file while the compression is still in progress. If the original file is large enough, it can exist for quite some time before the compression completes. The time taken will vary with the settings passed to gzip (speed and level of compression) as well as the compressibility of the file. gzip removes the original file only after the compression process is complete.
As a small test for verification, I created one file of size 1 GiB from /dev/urandom and another file of size 1 GiB from /dev/zero, and timed their compression.
The file containing random data took about 2 minutes and 23 seconds:
[root@testvm1 ~]# time gzip testfile-random.txt
real 2m27.417s
user 2m22.172s
sys 0m2.839s
And the zero file took about 29 seconds:
[root@testvm1 ~]# time gzip testfile-zero.txt
real 0m28.930s
user 0m27.453s
sys 0m0.989s
While the compression was taking place, the original file was visible in both cases:
[root@testvm1 ~]# ls -lh testfile-random.txt*
-rw-r--r--. 1 root root 1.0G Mar 15 17:49 testfile-random.txt
-rw-------. 1 root root 75M Mar 15 17:59 testfile-random.txt.gz
[root@testvm1 ~]# ls -lh testfile-zero.txt*
-rw-r--r--. 1 root root 1.0G Mar 15 18:04 testfile-zero.txt
-rw-------. 1 root root 992K Mar 15 18:05 testfile-zero.txt.gz
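The same effect can be reproduced in miniature with Python's gzip module. This is only a sketch: the 32 MiB size and file names are arbitrary, and it mimics gzip(1) by deleting the source only after the .gz is fully written.

```python
import gzip
import os
import shutil
import tempfile
import time

def compress_and_time(data, name):
    """Write `data` to a file, gzip it the way gzip(1) would gzip a
    rotated log, and return the elapsed wall-clock time in seconds."""
    src = os.path.join(tempfile.mkdtemp(), name)
    with open(src, "wb") as f:
        f.write(data)
    start = time.monotonic()
    with open(src, "rb") as fin, gzip.open(src + ".gz", "wb") as fout:
        shutil.copyfileobj(fin, fout)
    # Like gzip(1), remove the original only once compression has
    # finished; until this point both files coexist on disk.
    os.remove(src)
    return time.monotonic() - start

size = 32 * 1024 * 1024  # 32 MiB keeps the demo quick
t_random = compress_and_time(os.urandom(size), "testfile-random.txt")
t_zero = compress_and_time(b"\x00" * size, "testfile-zero.txt")
print(f"random: {t_random:.2f}s, zeros: {t_zero:.2f}s")
# Random bytes are incompressible, so t_random exceeds t_zero by a wide
# margin, matching the 1 GiB shell measurements above.
```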
That seems to be the case here too. On the machine where I see this problem, I gzipped a 3 GiB file (created from /dev/zero) and it took 40 seconds. So this has to be the problem. I'll check and update you. I have upvoted your answer.
– Sree
Mar 15 at 18:09
answered Mar 15 at 12:50 by Haxiel