Write output file, collating groups of up to 7 input lines

I have this code that reads a file and, after processing a few lines at a time, writes the output to a second file:



num_reads = 7
with open('data.txt') as read_file:
    with open('new_data.txt', 'w') as write_file:

        while True:
            lines = []
            try:  # expect errors if the number of lines in the file is not a multiple of num_reads
                for i in range(num_reads):
                    lines.append(next(read_file))  # when the file finishes, an exception occurs here

                # do stuff with the lines (exactly num_reads lines)
                processed = " ".join(list(map(lambda x: x.replace("\n", ''), lines)))
                write_file.write(processed + '\n')

            except StopIteration:  # here we process the (possibly) insufficient last lines
                # do stuff with the lines (fewer than num_reads lines)
                processed = " ".join(list(map(lambda x: x.replace("\n", ''), lines)))
                write_file.write(processed + '\n')
                break


Here is the input file (data.txt):



line1
line2
line3
line4
line5
line7
line8
line9


And this is the desired output file (new_data.txt):



line1 line2 line3 line4 line5 line7
line8 line9


This works correctly, but since I want to do the same processing and writing in both cases (when there are exactly 7 lines and when the file ends and the exception is raised), I think the code above violates the DRY principle, even if I define a new function and call it once in the try block and once in the except block before the break (sketched below). Any other ordering I could come up with either caused an infinite loop or lost the final lines.
I would appreciate any comments on handling this issue, as it is not limited to this case; I have faced it in other situations as well.
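For illustration, the refactoring mentioned above would look roughly like this (a sketch only; process_and_write is just an illustrative name, not part of the actual code):

def process_and_write(lines, write_file):
    # the shared processing from above, factored into one helper
    processed = " ".join(line.replace("\n", '') for line in lines)
    write_file.write(processed + '\n')

# ... inside the while loop shown above ...
try:
    for i in range(num_reads):
        lines.append(next(read_file))
    process_and_write(lines, write_file)   # call #1
except StopIteration:
    process_and_write(lines, write_file)   # call #2 -- still repeated, which is what bothers me
    break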

Tags: python, file






edited Jan 18 at 6:37 by 200_success
asked Jan 18 at 3:58 by Farzad Vertigo

  • @200_success done! :) – Farzad Vertigo, Jan 18 at 5:23

  • (Welcome to Code Review!) – greybeard, Jan 18 at 7:49
2 Answers
Disclaimer: This question belongs to Stack Overflow, and I voted to migrate it. Therefore, the answer is not a review.



Keep in mind that principles are there to guide you. They should be treated like guard rails, rather than roadblocks.



I would argue that



 while (....)
     foo(7);

 foo(3);


does not violate DRY. Your situation is pretty much the same.



That said, your idea of defining a function is valid. You are just factoring out the wrong code. Factor out the reading. Consider:



 def read_n_lines(infile, n):
     lines = []
     try:
         for _ in range(n):
             lines.append(next(infile))
     except StopIteration:
         pass
     return lines


and use it as



 while True:
     lines = read_n_lines(infile, 7)
     if len(lines) == 0:
         break
     process_lines(lines)
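Here process_lines() is left as a placeholder; a minimal version matching the question's processing might look like the following sketch (it assumes write_file is the already-open output file in the enclosing scope):

 def process_lines(lines):
     # join the group into one space-separated output line, as the question does
     processed = " ".join(line.rstrip('\n') for line in lines)
     write_file.write(processed + '\n')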





– vnp, answered Jan 18 at 6:00

  • Thank you very much. Beautiful idea. I appreciate it. – Farzad Vertigo, Jan 18 at 6:05

You should avoid writing code with exception-handling altogether. Usually, when you want to write a fancy loop in Python, the itertools module is your friend. In this case, I would take advantage of itertools.groupby() to form groups of lines, assisted by itertools.count() to provide the line numbers.



import itertools

def chunks(iterable, n):
    i = itertools.count()
    for _, group in itertools.groupby(iterable, lambda _: next(i) // n):
        yield group

with open('data.txt') as read_f, open('new_data.txt', 'w') as write_f:
    for group in chunks(read_f, 7):
        print(' '.join(line.rstrip() for line in group), file=write_f)
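The key function numbers the incoming lines 0, 1, 2, … via itertools.count() and integer-divides by n, so the first n items share key 0, the next n share key 1, and groupby() starts a new group whenever that key changes. A quick sanity check of chunks() on a small in-memory list (not part of the original answer, just an illustration):

sample = list('abcdefghij')            # ten items, n = 7
for group in chunks(sample, 7):
    print(list(group))
# ['a', 'b', 'c', 'd', 'e', 'f', 'g']
# ['h', 'i', 'j']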


A few other minor changes:



  • You only need one with block to open both files.


  • line.rstrip() is more convenient than lambda x: x.replace("\n", '').


  • print(…, file=write_file) is slightly more elegant than write_file.write(… + '\n').





– 200_success, answered Jan 18 at 7:13

  • Isn't the grouper recipe more appropriate to make fixed-length chunks? Or did you purposefully avoid it to avoid dealing with the fill values at the end of the iteration? – Mathias Ettinger, Jan 18 at 8:10

  • @MathiasEttinger The grouper() recipe works best for complete groups; you would have to specify a fillvalue, then strip out that padding. – 200_success, Jan 18 at 8:19

  • @Graipher I don't see any reason to copy a recipe that doesn't do what we want, then work around the unwanted behavior by stripping off junk. – 200_success, Jan 18 at 15:06

  • @200_success: I agree now that it is too cumbersome. We should probably clean up the comments. – Graipher, Jan 18 at 15:07
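For comparison, here is a sketch of the grouper()-based alternative discussed in the comments above: the zip_longest recipe from the itertools documentation plus the padding removal that 200_success mentions. This is a reconstruction for illustration, not code from either commenter.

from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    # itertools-docs recipe: collect data into fixed-length chunks
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

with open('data.txt') as read_f, open('new_data.txt', 'w') as write_f:
    for group in grouper(read_f, 7):
        # the final group is padded with None, which has to be filtered back out
        lines = [line.rstrip() for line in group if line is not None]
        print(' '.join(lines), file=write_f)

The extra fillvalue bookkeeping is exactly the cumbersomeness the comments refer to.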










