Print lines if comma-separated fields match in another line [duplicate]
This question already has an answer here:
How do I print all lines of a file with duplicate values in a certain column (7 answers)
Input:
1,1,10,1
2,1,10,3
3,0,10,1
Expected Output:
1,1,10,1
2,1,10,3
How can I print lines whose fields 2 and 3 are repeated in another line?
text-processing awk sed
marked as duplicate by Sundeep, msp9011, Isaac, slm… Aug 23 at 21:38
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
Will those be adjacent lines, or scattered all over the place? – RudiC, Aug 23 at 16:24
edited Aug 23 at 13:55
asked Aug 23 at 13:49
ñÃÂñýàñüÃÂÃÂùcñ÷
417419
1 Answer
accepted · score 1
Quick'n'dirty method (requiring two passes over the file: the first to count occurrences of $2,$3, the second to print whenever the field combination is non-unique):
$ awk -F, 'NR==FNR {a[$2 FS $3]++; next} a[$2 FS $3] > 1' file file
1,1,10,1
2,1,10,3
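For reference, a minimal, self-contained reproduction of the two-pass approach using the sample input from the question (the file name `file` is a placeholder):

```shell
# Recreate the sample input from the question
cat > file <<'EOF'
1,1,10,1
2,1,10,3
3,0,10,1
EOF

# First pass (NR==FNR is true only while reading the first copy of the file):
# count each $2,$3 combination. Second pass: print lines whose combination
# occurred more than once.
awk -F, 'NR==FNR {a[$2 FS $3]++; next} a[$2 FS $3] > 1' file file
# prints:
# 1,1,10,1
# 2,1,10,3
```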
@Sundeep thanks, good catch – steeldriver, Aug 23 at 15:37
The only downside is that on a big file the duplicate lines don't end up next to each other, but it did the job! Can I sort on the second and third fields so they appear together? – ñÃÂñýàñüÃÂÃÂùcñ÷, Aug 23 at 15:41
I've done it by adding sort -t, -k2. Thanks a lot. – ñÃÂñýàñüÃÂÃÂùcñ÷, Aug 23 at 15:43
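The commenter's fix can be sketched as a pipeline. Note that a plain `-k2` key sorts from field 2 through the end of each line; `-k2,2 -k3,3` (used below) restricts the keys to exactly fields 2 and 3, which matches the stated intent. The file name `file` is a placeholder:

```shell
# Sample input from the question
printf '%s\n' '1,1,10,1' '2,1,10,3' '3,0,10,1' > file

# Filter lines whose $2,$3 combination is non-unique, then group them by
# sorting on fields 2 and 3 only.
awk -F, 'NR==FNR {a[$2 FS $3]++; next} a[$2 FS $3] > 1' file file |
  sort -t, -k2,2 -k3,3
# prints:
# 1,1,10,1
# 2,1,10,3
```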
edited Aug 23 at 15:37
answered Aug 23 at 15:28
steeldriver
32.2k34979