Make every 2nd line bold

I have the following bash command that returns headline and URL pairs over 2 lines.



curl -s https://uk.reuters.com/assets/jsonWireNews |
awk '/"url":|"headline":/' |
cut -d'"' -f4 |
awk 'NR % 2 == 0 {sub(/^/,"https://uk.reuters.com")} {print}'


For the first 3 headlines, this outputs:



'Hamilton' takes centre stage in London's West End
https://uk.reuters.com/article/uk-britain-theatre-hamilton/hamilton-takes-centre-stage-in-londons-west-end-idUKKBN1EG02I
IAG among bidders chosen for Austrian airline Niki - sources
https://uk.reuters.com/article/uk-air-berlin-niki/iag-among-bidders-chosen-for-austrian-airline-niki-sources-idUKKBN1EG1BM
Oil eases from highs but OPEC cuts still support market
https://uk.reuters.com/article/uk-global-oil/oil-eases-from-highs-but-opec-cuts-still-support-market-idUKKBN1EG06G


I want to make the headlines, i.e. every other line starting from the first, bold:




'Hamilton' takes centre stage in London's West End
https://uk.reuters.com/article/uk-britain-theatre-hamilton/hamilton-takes-centre-stage-in-londons-west-end-idUKKBN1EG02I
IAG among bidders chosen for Austrian airline Niki - sources
https://uk.reuters.com/article/uk-air-berlin-niki/iag-among-bidders-chosen-for-austrian-airline-niki-sources-idUKKBN1EG1BM
Oil eases from highs but OPEC cuts still support market
https://uk.reuters.com/article/uk-global-oil/oil-eases-from-highs-but-opec-cuts-still-support-market-idUKKBN1EG06G






asked Dec 22 '17 at 16:05 by Imran, edited Dec 22 '17 at 16:24
  • You might want to include a small example input along with the corresponding desired output and actual output.
    – igal
    Dec 22 '17 at 16:10














5 Answers






Try this:



#!/bin/bash

curl -s https://uk.reuters.com/assets/jsonWireNews |
awk '/"url":|"headline":/' |
cut -d'"' -f4 |
awk '/^\// {print "\033[0mhttps://uk.reuters.com" $0; next} {print "\033[1m" $0}'


If a line matches the pattern ^/ (i.e. it starts with a slash, so it is a URL path), print the ANSI escape for not-bold followed by the site prefix and the line, then skip to the next record.
The default action prefixes every other line (the headlines) with the ANSI escape for bold.
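
As a quick way to see those escape sequences in isolation, here is a made-up example (not part of the pipeline above) that prints them directly with printf:

# \033[1m switches the terminal to bold, \033[0m resets the attributes again
printf '\033[1mA bold headline\033[0m\nhttps://uk.reuters.com/a-normal-url\n'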






answered Dec 22 '17 at 16:40 by John, edited Dec 22 '17 at 16:51 (accepted answer)






















  • This works, thanks. Is there a way of doing it without using a temp file?
    – Imran
    Dec 22 '17 at 16:50











  • Yes, sorry, the temp file was to save network calls when testing.
    – John
    Dec 22 '17 at 16:52










  • I cleaned up the temp file reference.
    – John
    Dec 22 '17 at 16:53






























You had the right idea in the first version of the question; the problem is just how to get the control codes printed by tput into awk so it can print them.



Variables and command substitutions aren't expanded within single quotes (''), so we'd need to use double quotes. But using them with awk code may be awkward (no pun intended) since there might be other characters that need to be escaped. We could close the single quotes and start a double quoted string for the duration of the part we want expanded:




$ bold="$(tput bold)"
$ normal="$(tput sgr0)"
$ echo -e 'foo\nbar\ndoo' | awk '{if (NR % 2) print "'"$bold"'" $0 "'"$normal"'"; else print;}'
foo
bar
doo


(In "'"$bold"'", the first " is literal, part of the awk code, the ' ends the single quoted string, " starts a double quoted string, and the other "'" sequence is the same in reverse.)



That's a bit ugly. The alternative is to pass the control codes to awk as variables:




$ echo -e 'foo\nbar\ndoo' | awk -vbold="$bold" -vnormal="$normal" '{if (NR % 2) print bold $0 normal; else print;}'
foo
bar
doo


(Of course we could pass them through the environment.)
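
For example, a minimal sketch of the environment route (awk reads exported variables through its built-in ENVIRON array; bold and normal are the same tput variables as above):

$ export bold normal
$ echo -e 'foo\nbar\ndoo' | awk '{if (NR % 2) print ENVIRON["bold"] $0 ENVIRON["normal"]; else print}'
foo
bar
doo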






answered Dec 22 '17 at 16:42 by ilkkachu, edited Dec 22 '17 at 16:47










































    After a quick look at man tput I tried:



    $ bold=`tput smso` 
    $ normal=`tput rmso`
    $ echo "${bold}Please type in your name: ${normal}\c"


    And it appeared to work... So that should give you enough to go on, yes?
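
    Applied to the pipeline from the question, that could look roughly like the sketch below. It is untested against the live feed, and simply passes the tput output into awk as variables (b and n are just names chosen here):

    bold=$(tput smso)
    normal=$(tput rmso)

    curl -s https://uk.reuters.com/assets/jsonWireNews |
    awk '/"url":|"headline":/' |
    cut -d'"' -f4 |
    awk -v b="$bold" -v n="$normal" 'NR % 2 {print b $0 n; next} {print "https://uk.reuters.com" $0}'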






    answered Dec 22 '17 at 16:27 by Ubuntourist








































      Remember that <esc>[1m will make text bold. So you can use sed to replace every second line, starting from the first, with itself but with <esc>[1m prepended and <esc>[m appended (to reset the formatting). Pipe it to



      sed 's/.*/<esc>[1m&<esc>[m/;N'


      where <esc> is the literal escape character, 0x1b.



      sed works line by line. First it reads the first line and performs the substitution s/.*/<esc>[1m&<esc>[m/. Then it performs the N command, which appends the next line to the pattern space (separated by a newline). On the next cycle, sed skips the second line, because it has already been joined to the first one, and repeats the same process on the third line.
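
      If typing a literal escape character into the command is inconvenient, one alternative (a sketch assuming bash or another shell with command substitution) is to store it in a variable and let the shell expand it inside double quotes:

      esc=$(printf '\033')      # the real ESC character, code point 0x1b
      ... | sed "s/.*/${esc}[1m&${esc}[m/;N"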






      answered Dec 22 '17 at 16:28 by Cows quack, edited Dec 22 '17 at 16:44






















      • This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 {sub(/^/,"https://uk.reuters.com")} {print}' | sed 's/.*/0x1b[1m&0x1b[m/;N'
        – Imran
        Dec 22 '17 at 16:34











      • Did you use the text 0x1b or the character with its code point at 0x1b?
        – Cows quack
        Dec 22 '17 at 16:39










      • It works for me i.stack.imgur.com/cmV1U.png
        – Cows quack
        Dec 22 '17 at 16:41










      • I used the text 0x1b. I don't have gsed.
        – Imran
        Dec 22 '17 at 16:48










      • (gsed is macOS's equivalent of sed.) Don't use the text 0x1b; it represents the escape character, which can be typed using Ctrl+V Esc in your terminal.
        – Cows quack
        Dec 22 '17 at 17:03






























      Here's one way to do it with perl, using the LWP, JSON, and Term::ANSIColor modules. Term::ANSIColor is a core perl module, but LWP and JSON are CPAN modules. They're very commonly used, so they're probably available pre-packaged for your distro (e.g. on debian and derivatives: apt-get install libjson-perl libwww-perl).



      #!/usr/bin/perl

      use strict;
      use LWP::UserAgent;
      use JSON;
      use Term::ANSIColor;

      my $bold  = color('bold');
      my $reset = color('reset');

      my $base = 'https://uk.reuters.com';

      foreach my $url (@ARGV) {
          my $ua  = LWP::UserAgent->new;
          my $req = HTTP::Request->new(GET => $url);
          my $res = $ua->request($req);
          if ($res->is_success) {
              foreach my $h ( @{ decode_json($res->content)->{headlines} } ) {
                  print $bold, $h->{headline}, $reset, "\n", $base, $h->{url}, "\n\n";
              }
          }
          else {
              die "Error processing '$url': ", $res->status_line, "\n";
          }
      }




      This doesn't need curl or wget, multiple invocations of awk and/or cut, or anything else (that ugliness was what motivated me to write an answer; as a general rule, if you're piping grep or awk into themselves then you're doing it wrong, and the same goes for piping cut or grep into awk: awk can do everything those two can do and more, as can perl).
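
      As an illustration of that point, here is a rough sketch of the question's pipeline collapsed into a single awk invocation (untested against the live feed; it assumes the same pretty-printed jsonWireNews layout shown further down):

      curl -s https://uk.reuters.com/assets/jsonWireNews |
      awk -F'"' '/"headline":/ {print "\033[1m" $4 "\033[0m"}
                 /"url":/      {print "https://uk.reuters.com" $4}'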



      Save it as, e.g. ./bold-2nd.pl, make it executable with chmod, and run it like this:




      $ ./bold-2nd.pl https://uk.reuters.com/assets/jsonWireNews
      RBS to pay $125 million to settle California mortgage bond claims
      https://uk.reuters.com/article/uk-rbs-settlement/rbs-to-pay-125-million-to-settle-california-mortgage-bond-claims-idUKKBN1EH053

      Driver charged with attempted murder over Australian vehicle attack
      https://uk.reuters.com/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044

      EasyJet says other airlines interested in feeder flights from Tegel
      https://uk.reuters.com/article/uk-air-berlin-m-a-easyjet/easyjet-says-other-airlines-interested-in-feeder-flights-from-tegel-idUKKBN1EH04W

      [...]


      This version of the script can handle multiple URLs on the command line (of course, they all need to return the same json-formatted data...or at least extremely similar with both a headline and a url field).



      btw, I've made it print a blank line between each article. I find that to be more readable.



      If you want to use curl to do the fetching rather than the perl LWP module, the script would be quite a bit simpler:



      #!/usr/bin/perl

      use strict;
      use JSON;
      use Term::ANSIColor;

      my $bold  = color('bold');
      my $reset = color('reset');

      my $base = 'https://uk.reuters.com';

      undef $/;          # enable slurp mode
      my $json = <>;     # slurp in entire stdin

      foreach my $h ( @{ decode_json($json)->{headlines} } ) {
          print $bold, $h->{headline}, $reset, "\n", $base, $h->{url}, "\n\n";
      }


      Run this version as:



      $ curl -s https://uk.reuters.com/assets/jsonWireNews | ./bold-2nd.pl


      Note that both versions of the bolding script use a json parser to actually parse the json data, rather than relying on regular expressions to search for lines matching particular patterns. As has been noted many times before, parsing json, html, xml, or any similarly structured data format with regular expressions is unreliable and fragile. In simple cases it can be made to work, but even minor changes in the input format can break the script (e.g. if Reuters stops outputting pretty-printed json with line feeds between each data element and record, and starts printing just a single line of json, any line-based regexp pattern matcher will break).



      Finally, the json data fetched by curl (or LWP) looks like this:

      "headlines": [
        {
          "id": "UKKBN1EH044",
          "headline": "Driver charged with attempted murder over Australian vehicle attack",
          "dateMillis": "1514003249000",
          "formattedDate": "3m ago",
          "url": "/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044",
          "mainPicUrl": "https://s4.reutersmedia.net/resources/r/?m=02&d=20171223&t=2&i=1216634499&w=116&fh=&fw=&ll=&pl=&sq=&r=LYNXMPEDBM04W"
        },
      ]


      So id, dateMillis, formattedDate, and mainPicUrl are also available in the perl $h hashref for printing or other use, in addition to the headline and url that we're printing.


























        Your Answer







        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "106"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        convertImagesToLinks: false,
        noModals: false,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );








         

        draft saved


        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f412545%2fmake-every-2nd-line-bold%23new-answer', 'question_page');

        );

        Post as a guest






























        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        1
        down vote



        accepted










        Try this



        #!/bin/bash

        curl -s https://uk.reuters.com/assets/jsonWireNews |
        awk '/"url":|"headline":/' |
        cut -d'"' -f4 |
        awk '/^// print "33[0mhttps://uk.reuters.com:" $0; next print "33[1m" $0 '


        if match start of "^/" then print the bash escape for not-bold and then go to the next line.
        default print prefixes each line with bash escape for bold.






        share|improve this answer






















        • This work thanks. Is there a way of doing it without using a temp file?
          – Imran
          Dec 22 '17 at 16:50











        • yes, sorry temp file was to save network calls when testing.
          – John
          Dec 22 '17 at 16:52










        • I cleaned up the temp file reference.
          – John
          Dec 22 '17 at 16:53














        up vote
        1
        down vote



        accepted










        Try this



        #!/bin/bash

        curl -s https://uk.reuters.com/assets/jsonWireNews |
        awk '/"url":|"headline":/' |
        cut -d'"' -f4 |
        awk '/^// print "33[0mhttps://uk.reuters.com:" $0; next print "33[1m" $0 '


        if match start of "^/" then print the bash escape for not-bold and then go to the next line.
        default print prefixes each line with bash escape for bold.






        share|improve this answer






















        • This work thanks. Is there a way of doing it without using a temp file?
          – Imran
          Dec 22 '17 at 16:50











        • yes, sorry temp file was to save network calls when testing.
          – John
          Dec 22 '17 at 16:52










        • I cleaned up the temp file reference.
          – John
          Dec 22 '17 at 16:53












        up vote
        1
        down vote



        accepted







        up vote
        1
        down vote



        accepted






        Try this



        #!/bin/bash

        curl -s https://uk.reuters.com/assets/jsonWireNews |
        awk '/"url":|"headline":/' |
        cut -d'"' -f4 |
        awk '/^// print "33[0mhttps://uk.reuters.com:" $0; next print "33[1m" $0 '


        if match start of "^/" then print the bash escape for not-bold and then go to the next line.
        default print prefixes each line with bash escape for bold.






        share|improve this answer














        Try this



        #!/bin/bash

        curl -s https://uk.reuters.com/assets/jsonWireNews |
        awk '/"url":|"headline":/' |
        cut -d'"' -f4 |
        awk '/^// print "33[0mhttps://uk.reuters.com:" $0; next print "33[1m" $0 '


        if match start of "^/" then print the bash escape for not-bold and then go to the next line.
        default print prefixes each line with bash escape for bold.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Dec 22 '17 at 16:51

























        answered Dec 22 '17 at 16:40









        John

        419211




        419211











        • This work thanks. Is there a way of doing it without using a temp file?
          – Imran
          Dec 22 '17 at 16:50











        • yes, sorry temp file was to save network calls when testing.
          – John
          Dec 22 '17 at 16:52










        • I cleaned up the temp file reference.
          – John
          Dec 22 '17 at 16:53
















        • This work thanks. Is there a way of doing it without using a temp file?
          – Imran
          Dec 22 '17 at 16:50











        • yes, sorry temp file was to save network calls when testing.
          – John
          Dec 22 '17 at 16:52










        • I cleaned up the temp file reference.
          – John
          Dec 22 '17 at 16:53















        This work thanks. Is there a way of doing it without using a temp file?
        – Imran
        Dec 22 '17 at 16:50





        This work thanks. Is there a way of doing it without using a temp file?
        – Imran
        Dec 22 '17 at 16:50













        yes, sorry temp file was to save network calls when testing.
        – John
        Dec 22 '17 at 16:52




        yes, sorry temp file was to save network calls when testing.
        – John
        Dec 22 '17 at 16:52












        I cleaned up the temp file reference.
        – John
        Dec 22 '17 at 16:53




        I cleaned up the temp file reference.
        – John
        Dec 22 '17 at 16:53












        up vote
        1
        down vote













        You had the right idea in the first version of the question, the problem just is how to get the control codes printed by tput to awk so it can print them.



        Variables and command substitutions aren't expanded within single quotes (''), so we'd need to use double quotes. But using them with awk code may be awkward (no pun intended) since there might be other characters that need to be escaped. We could close the single quotes and start a double quoted string for the duration of the part we want expanded:




        $ bold="$(tput bold)"
        $ normal="$(tput sgr0)"
        $ echo -e 'foonbarndoo' | awk 'if (NR % 2) print "'"$bold"'" $0 "'"$normal"'"; else print;'
        foo
        bar
        doo


        (In "'"$bold"'", the first " is literal, part of the awk code, the ' ends the single quoted string, " starts a double quoted string, and the other "'" sequence is the same in reverse.)



        That's a bit ugly. The alternative is to pass the control codes to awk as variables:




        $ echo -e 'foonbarndoo' | awk -vbold="$bold" -vnormal="$normal" 'if (NR % 2) print bold $0 normal; else print;'
        foo
        bar
        doo


        (Of course we could pass them through the environment.)






        share|improve this answer


























          up vote
          1
          down vote













          You had the right idea in the first version of the question, the problem just is how to get the control codes printed by tput to awk so it can print them.



          Variables and command substitutions aren't expanded within single quotes (''), so we'd need to use double quotes. But using them with awk code may be awkward (no pun intended) since there might be other characters that need to be escaped. We could close the single quotes and start a double quoted string for the duration of the part we want expanded:




          $ bold="$(tput bold)"
          $ normal="$(tput sgr0)"
          $ echo -e 'foonbarndoo' | awk 'if (NR % 2) print "'"$bold"'" $0 "'"$normal"'"; else print;'
          foo
          bar
          doo


          (In "'"$bold"'", the first " is literal, part of the awk code, the ' ends the single quoted string, " starts a double quoted string, and the other "'" sequence is the same in reverse.)



          That's a bit ugly. The alternative is to pass the control codes to awk as variables:




          $ echo -e 'foonbarndoo' | awk -vbold="$bold" -vnormal="$normal" 'if (NR % 2) print bold $0 normal; else print;'
          foo
          bar
          doo


          (Of course we could pass them through the environment.)






          share|improve this answer
























            up vote
            1
            down vote










            up vote
            1
            down vote









            You had the right idea in the first version of the question, the problem just is how to get the control codes printed by tput to awk so it can print them.



            Variables and command substitutions aren't expanded within single quotes (''), so we'd need to use double quotes. But using them with awk code may be awkward (no pun intended) since there might be other characters that need to be escaped. We could close the single quotes and start a double quoted string for the duration of the part we want expanded:




            $ bold="$(tput bold)"
            $ normal="$(tput sgr0)"
            $ echo -e 'foonbarndoo' | awk 'if (NR % 2) print "'"$bold"'" $0 "'"$normal"'"; else print;'
            foo
            bar
            doo


            (In "'"$bold"'", the first " is literal, part of the awk code, the ' ends the single quoted string, " starts a double quoted string, and the other "'" sequence is the same in reverse.)



            That's a bit ugly. The alternative is to pass the control codes to awk as variables:




            $ echo -e 'foonbarndoo' | awk -vbold="$bold" -vnormal="$normal" 'if (NR % 2) print bold $0 normal; else print;'
            foo
            bar
            doo


            (Of course we could pass them through the environment.)






            share|improve this answer














            You had the right idea in the first version of the question, the problem just is how to get the control codes printed by tput to awk so it can print them.



            Variables and command substitutions aren't expanded within single quotes (''), so we'd need to use double quotes. But using them with awk code may be awkward (no pun intended) since there might be other characters that need to be escaped. We could close the single quotes and start a double quoted string for the duration of the part we want expanded:




            $ bold="$(tput bold)"
            $ normal="$(tput sgr0)"
            $ echo -e 'foonbarndoo' | awk 'if (NR % 2) print "'"$bold"'" $0 "'"$normal"'"; else print;'
            foo
            bar
            doo


            (In "'"$bold"'", the first " is literal, part of the awk code, the ' ends the single quoted string, " starts a double quoted string, and the other "'" sequence is the same in reverse.)



            That's a bit ugly. The alternative is to pass the control codes to awk as variables:




            $ echo -e 'foonbarndoo' | awk -vbold="$bold" -vnormal="$normal" 'if (NR % 2) print bold $0 normal; else print;'
            foo
            bar
            doo


            (Of course we could pass them through the environment.)







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Dec 22 '17 at 16:47

























            answered Dec 22 '17 at 16:42









            ilkkachu

            49.9k674137




            49.9k674137




















                up vote
                0
                down vote













                After a quick look at man tput I tried:



                $ bold=`tput smso` 
                $ normal=`tput rmso`
                $ echo "$boldPlease type in your name: $normalc"


                And it appeared to work... So that should give you enough to go on, yes?






                share|improve this answer
























                  up vote
                  0
                  down vote













                  After a quick look at man tput I tried:



                  $ bold=`tput smso` 
                  $ normal=`tput rmso`
                  $ echo "$boldPlease type in your name: $normalc"


                  And it appeared to work... So that should give you enough to go on, yes?






                  share|improve this answer






















                    up vote
                    0
                    down vote










                    up vote
                    0
                    down vote









                    After a quick look at man tput I tried:



                    $ bold=`tput smso` 
                    $ normal=`tput rmso`
                    $ echo "$boldPlease type in your name: $normalc"


                    And it appeared to work... So that should give you enough to go on, yes?






                    share|improve this answer












                    After a quick look at man tput I tried:



                    $ bold=`tput smso` 
                    $ normal=`tput rmso`
                    $ echo "$boldPlease type in your name: $normalc"


                    And it appeared to work... So that should give you enough to go on, yes?







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered Dec 22 '17 at 16:27









                    Ubuntourist

                    1065




                    1065




















                        up vote
                        0
                        down vote













                        Remember that <esc>[1m will make text bold. So you can use sed to replace every second line starting from the first with itself, but <esc>[1m prepended and <esc>[m appended (to reset the formatting). Pipe it to



                        sed 's/.*/<esc>[1m&<esc>[m/;N'


                        where <esc> is 0x1b.



                        Sed works line-by-line, operating on them one by one. First, sed encounters the first line, and performs the substitution s/.*/<esc>[1m&<esc>[m/. Then it performs the N commands, which joins the next line with this line (separated by a linefeed). On the iteration of the next input, sed skips the second line because it was joined to the first one, and proceeds to repeat the same process to the third line.






                        share|improve this answer






















                        • This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                          – Imran
                          Dec 22 '17 at 16:34











                        • Did you use the text 0x1b or the character with its code point at 0x1b?
                          – Cows quack
                          Dec 22 '17 at 16:39










                        • It works for me i.stack.imgur.com/cmV1U.png
                          – Cows quack
                          Dec 22 '17 at 16:41










                        • I used the text 0x1b. I don't have gsed.
                          – Imran
                          Dec 22 '17 at 16:48










                        • (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                          – Cows quack
                          Dec 22 '17 at 17:03














                        up vote
                        0
                        down vote













                        Remember that <esc>[1m will make text bold. So you can use sed to replace every second line starting from the first with itself, but <esc>[1m prepended and <esc>[m appended (to reset the formatting). Pipe it to



                        sed 's/.*/<esc>[1m&<esc>[m/;N'


                        where <esc> is 0x1b.



                        Sed works line-by-line, operating on them one by one. First, sed encounters the first line, and performs the substitution s/.*/<esc>[1m&<esc>[m/. Then it performs the N commands, which joins the next line with this line (separated by a linefeed). On the iteration of the next input, sed skips the second line because it was joined to the first one, and proceeds to repeat the same process to the third line.






                        share|improve this answer






















                        • This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                          – Imran
                          Dec 22 '17 at 16:34











                        • Did you use the text 0x1b or the character with its code point at 0x1b?
                          – Cows quack
                          Dec 22 '17 at 16:39










                        • It works for me i.stack.imgur.com/cmV1U.png
                          – Cows quack
                          Dec 22 '17 at 16:41










                        • I used the text 0x1b. I don't have gsed.
                          – Imran
                          Dec 22 '17 at 16:48










                        • (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                          – Cows quack
                          Dec 22 '17 at 17:03












                        up vote
                        0
                        down vote










                        up vote
                        0
                        down vote









                        Remember that <esc>[1m will make text bold. So you can use sed to replace every second line starting from the first with itself, but <esc>[1m prepended and <esc>[m appended (to reset the formatting). Pipe it to



                        sed 's/.*/<esc>[1m&<esc>[m/;N'


                        where <esc> is 0x1b.



                        Sed works line-by-line, operating on them one by one. First, sed encounters the first line, and performs the substitution s/.*/<esc>[1m&<esc>[m/. Then it performs the N commands, which joins the next line with this line (separated by a linefeed). On the iteration of the next input, sed skips the second line because it was joined to the first one, and proceeds to repeat the same process to the third line.






                        share|improve this answer














                        Remember that <esc>[1m will make text bold. So you can use sed to replace every second line starting from the first with itself, but <esc>[1m prepended and <esc>[m appended (to reset the formatting). Pipe it to



                        sed 's/.*/<esc>[1m&<esc>[m/;N'


                        where <esc> is 0x1b.



                        Sed works line-by-line, operating on them one by one. First, sed encounters the first line, and performs the substitution s/.*/<esc>[1m&<esc>[m/. Then it performs the N commands, which joins the next line with this line (separated by a linefeed). On the iteration of the next input, sed skips the second line because it was joined to the first one, and proceeds to repeat the same process to the third line.







                        share|improve this answer














                        share|improve this answer



                        share|improve this answer








                        edited Dec 22 '17 at 16:44

























                        answered Dec 22 '17 at 16:28









                        Cows quack

                        1115




                        1115











                        • This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                          – Imran
                          Dec 22 '17 at 16:34











                        • Did you use the text 0x1b or the character with its code point at 0x1b?
                          – Cows quack
                          Dec 22 '17 at 16:39










                        • It works for me i.stack.imgur.com/cmV1U.png
                          – Cows quack
                          Dec 22 '17 at 16:41










                        • I used the text 0x1b. I don't have gsed.
                          – Imran
                          Dec 22 '17 at 16:48










                        • (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                          – Cows quack
                          Dec 22 '17 at 17:03
















                        • This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                          – Imran
                          Dec 22 '17 at 16:34











                        • Did you use the text 0x1b or the character with its code point at 0x1b?
                          – Cows quack
                          Dec 22 '17 at 16:39










                        • It works for me i.stack.imgur.com/cmV1U.png
                          – Cows quack
                          Dec 22 '17 at 16:41










                        • I used the text 0x1b. I don't have gsed.
                          – Imran
                          Dec 22 '17 at 16:48










                        • (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                          – Cows quack
                          Dec 22 '17 at 17:03















                        This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                        – Imran
                        Dec 22 '17 at 16:34





                        This is what I tried and it doesn't work: curl -s https://uk.reuters.com/assets/jsonWireNews | awk '/"url":|"headline":/' | cut -d'"' -f4 | awk 'NR % 2 == 0 sub(/^/,"https://uk.reuters.com") print' | sed 's/.*/0x1b[1m&0x1b[m/;N'
                        – Imran
                        Dec 22 '17 at 16:34













                        Did you use the text 0x1b or the character with its code point at 0x1b?
                        – Cows quack
                        Dec 22 '17 at 16:39




                        Did you use the text 0x1b or the character with its code point at 0x1b?
                        – Cows quack
                        Dec 22 '17 at 16:39












                        It works for me i.stack.imgur.com/cmV1U.png
                        – Cows quack
                        Dec 22 '17 at 16:41




                        It works for me i.stack.imgur.com/cmV1U.png
                        – Cows quack
                        Dec 22 '17 at 16:41












                        I used the text 0x1b. I don't have gsed.
                        – Imran
                        Dec 22 '17 at 16:48




                        I used the text 0x1b. I don't have gsed.
                        – Imran
                        Dec 22 '17 at 16:48












                        (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                        – Cows quack
                        Dec 22 '17 at 17:03




                        (gsed is macOS's equivalent of sed) Don't use the text 0x1b, this represents the escape character, this can be typed using Ctrl+V Esc in your terminal.
                        – Cows quack
                        Dec 22 '17 at 17:03










                        up vote
                        0
                        down vote













                        Here's one way to do it with perl using the LWP, JSON, and Term::ANSIColor modules. Term::ANSIColor is a core perl module, but both LWP and JSON are CPAN modules. They're very commonly used modules so are probably available pre-packaged for your distro (e.g. on debian etc, apt-get install libjson-perl libwwww-perl)



                        #!/usr/bin/perl

                        use strict;
                        use LWP::UserAgent;
                        use JSON;
                        use Term::ANSIColor;

                        my $bold = color('bold');
                        my $reset = color('reset');

                        my $base='https://uk.reuters.com'

                        foreach my $url (@ARGV)
                        my $ua = LWP::UserAgent->new;
                        my $req = HTTP::Request->new(GET => $url);
                        my $res = $ua->request($req);
                        if ($res->is_success)
                        foreach my $h ( @ decode_json($res->content)->headlines )
                        print $bold, $h->headline, $reset, "n", $base, $h->url, "nn";
                        ;
                        else
                        die "Error processing '$url': ", $res->status_line, "n";




                        This doesn't need curl or wget or multiple invocations of awk and/or cut (that ugliness was what motivated me to write an answer - as a general rule, if you're piping grep or awk into themselves then you're doing it wrong. ditto for piping cut or grep into awk - awk can do everything that those two can do and more. As can perl), or anything else.



                        Save it as, e.g. ./bold-2nd.pl, make it executable with chmod, and run it like this:




                        $ ./bold-2nd.pl https://uk.reuters.com/assets/jsonWireNews
                        RBS to pay $125 million to settle California mortgage bond claims
                        https://uk.reuters.com/article/uk-rbs-settlement/rbs-to-pay-125-million-to-settle-california-mortgage-bond-claims-idUKKBN1EH053

                        Driver charged with attempted murder over Australian vehicle attack
                        https://uk.reuters.com/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044

                        EasyJet says other airlines interested in feeder flights from Tegel
                        https://uk.reuters.com/article/uk-air-berlin-m-a-easyjet/easyjet-says-other-airlines-interested-in-feeder-flights-from-tegel-idUKKBN1EH04W

                        [...]


                        This version of the script can handle multiple URLs on the command line (of course, they all need to return the same json-formatted data...or at least extremely similar with both a headline and a url field).



                        btw, I've made it print a blank line between each article. I find that to be more readable.



                        If you want to use curl to do the fetching rather than the perl LWP module, the script would be quite a bit simpler:



                        #!/usr/bin/perl

                        use strict;
                        use JSON;
                        use Term::ANSIColor;

                        my $bold = color('bold');
                        my $reset = color('reset');

                        my $base='https://uk.reuters.com'

                        undef $/;
                        my $json = <>; # slurp in entire stdin

                        foreach my $h ( @ decode_json($json)->headlines )
                        print $bold, $h->headline, $reset, "n", $base, $h->url, "nn";
                        ;


                        Run this version as:



                        $ curl -s https://uk.reuters.com/assets/jsonWireNew | ./bold-2nd.pl


                        Note that both versions of the bolding script use a json parser to actually parse the json data, rather than relying on regular expressions to search for lines matching particular patterns. As has been noted many times before, parsing json, or html, or xml, or any similar structured data format with regular expressions is unreliable and fragile. In the simple case, it can be made to work but even minor changes in the input format can break the script (e.g. if Reuters stops outputting pretty-printed json with line feeds between each data element and record, and starts printing just a single line of json, any line-based regexp pattern matcher will break)



                        Finally, the json data fetched by curl (or LWP) looks like this:



                         "headlines": [

                        "id": "UKKBN1EH044",
                        "headline": "Driver charged with attempted murder over Australian vehicle attack",
                        "dateMillis": "1514003249000",
                        "formattedDate": "3m ago",
                        "url": "/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044",
                        "mainPicUrl": "https://s4.reutersmedia.net/resources/r/?m=02&d=20171223&t=2&i=1216634499&w=116&fh=&fw=&ll=&pl=&sq=&r=LYNXMPEDBM04W"
                        ,
                        ]


                        so, id, dateMillis, formattedDate, and mainPicURL are also available for printing or other use in the perl $h hashref variable, as well as the headline and url that we're printing.






                        share|improve this answer
























                          up vote
                          0
                          down vote













                          Here's one way to do it with perl using the LWP, JSON, and Term::ANSIColor modules. Term::ANSIColor is a core perl module, but both LWP and JSON are CPAN modules. They're very commonly used modules so are probably available pre-packaged for your distro (e.g. on debian etc, apt-get install libjson-perl libwwww-perl)



                          #!/usr/bin/perl

                          use strict;
                          use LWP::UserAgent;
                          use JSON;
                          use Term::ANSIColor;

                          my $bold = color('bold');
                          my $reset = color('reset');

                          my $base='https://uk.reuters.com'

                          foreach my $url (@ARGV)
                          my $ua = LWP::UserAgent->new;
                          my $req = HTTP::Request->new(GET => $url);
                          my $res = $ua->request($req);
                          if ($res->is_success)
                          foreach my $h ( @ decode_json($res->content)->headlines )
                          print $bold, $h->headline, $reset, "n", $base, $h->url, "nn";
                          ;
                          else
                          die "Error processing '$url': ", $res->status_line, "n";




                          This doesn't need curl or wget or multiple invocations of awk and/or cut (that ugliness was what motivated me to write an answer - as a general rule, if you're piping grep or awk into themselves then you're doing it wrong. ditto for piping cut or grep into awk - awk can do everything that those two can do and more. As can perl), or anything else.



                          Save it as, e.g. ./bold-2nd.pl, make it executable with chmod, and run it like this:




                          $ ./bold-2nd.pl https://uk.reuters.com/assets/jsonWireNews
                          RBS to pay $125 million to settle California mortgage bond claims
                          https://uk.reuters.com/article/uk-rbs-settlement/rbs-to-pay-125-million-to-settle-california-mortgage-bond-claims-idUKKBN1EH053

                          Driver charged with attempted murder over Australian vehicle attack
                          https://uk.reuters.com/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044

                          EasyJet says other airlines interested in feeder flights from Tegel
                          https://uk.reuters.com/article/uk-air-berlin-m-a-easyjet/easyjet-says-other-airlines-interested-in-feeder-flights-from-tegel-idUKKBN1EH04W

                          [...]


                          This version of the script can handle multiple URLs on the command line (of course, they all need to return the same json-formatted data...or at least extremely similar with both a headline and a url field).



                          btw, I've made it print a blank line between each article. I find that to be more readable.



                          If you want to use curl to do the fetching rather than the perl LWP module, the script would be quite a bit simpler:



                          #!/usr/bin/perl

                          use strict;
                          use JSON;
                          use Term::ANSIColor;

                          my $bold = color('bold');
                          my $reset = color('reset');

                          my $base='https://uk.reuters.com'

                          undef $/;
                          my $json = <>; # slurp in entire stdin

                          foreach my $h ( @ decode_json($json)->headlines )
                          print $bold, $h->headline, $reset, "n", $base, $h->url, "nn";
                          ;


                          Run this version as:



                          $ curl -s https://uk.reuters.com/assets/jsonWireNew | ./bold-2nd.pl


                          Note that both versions of the bolding script use a json parser to actually parse the json data, rather than relying on regular expressions to search for lines matching particular patterns. As has been noted many times before, parsing json, or html, or xml, or any similar structured data format with regular expressions is unreliable and fragile. In the simple case, it can be made to work but even minor changes in the input format can break the script (e.g. if Reuters stops outputting pretty-printed json with line feeds between each data element and record, and starts printing just a single line of json, any line-based regexp pattern matcher will break)



                          Finally, the json data fetched by curl (or LWP) looks like this:



                           "headlines": [

                          "id": "UKKBN1EH044",
                          "headline": "Driver charged with attempted murder over Australian vehicle attack",
                          "dateMillis": "1514003249000",
                          "formattedDate": "3m ago",
                          "url": "/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044",
                          "mainPicUrl": "https://s4.reutersmedia.net/resources/r/?m=02&d=20171223&t=2&i=1216634499&w=116&fh=&fw=&ll=&pl=&sq=&r=LYNXMPEDBM04W"
                          ,
                          ]


                          so, id, dateMillis, formattedDate, and mainPicURL are also available for printing or other use in the perl $h hashref variable, as well as the headline and url that we're printing.






                          share|improve this answer






















                            up vote
                            0
                            down vote










                            up vote
                            0
                            down vote









                            Here's one way to do it with perl using the LWP, JSON, and Term::ANSIColor modules. Term::ANSIColor is a core perl module, but both LWP and JSON are CPAN modules. They're very commonly used modules so are probably available pre-packaged for your distro (e.g. on debian etc, apt-get install libjson-perl libwwww-perl)



                            #!/usr/bin/perl

                            use strict;
                            use LWP::UserAgent;
                            use JSON;
                            use Term::ANSIColor;

                            my $bold = color('bold');
                            my $reset = color('reset');

                            my $base='https://uk.reuters.com'

                            foreach my $url (@ARGV)
                            my $ua = LWP::UserAgent->new;
                            my $req = HTTP::Request->new(GET => $url);
                            my $res = $ua->request($req);
                            if ($res->is_success)
                            foreach my $h ( @ decode_json($res->content)->headlines )
                            print $bold, $h->headline, $reset, "n", $base, $h->url, "nn";
                            ;
                            else
                            die "Error processing '$url': ", $res->status_line, "n";




                            This doesn't need curl or wget or multiple invocations of awk and/or cut (that ugliness was what motivated me to write an answer - as a general rule, if you're piping grep or awk into themselves then you're doing it wrong. ditto for piping cut or grep into awk - awk can do everything that those two can do and more. As can perl), or anything else.



                            Save it as, e.g. ./bold-2nd.pl, make it executable with chmod, and run it like this:




                            $ ./bold-2nd.pl https://uk.reuters.com/assets/jsonWireNews
                            RBS to pay $125 million to settle California mortgage bond claims
                            https://uk.reuters.com/article/uk-rbs-settlement/rbs-to-pay-125-million-to-settle-california-mortgage-bond-claims-idUKKBN1EH053

                            Driver charged with attempted murder over Australian vehicle attack
                            https://uk.reuters.com/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044

                            EasyJet says other airlines interested in feeder flights from Tegel
                            https://uk.reuters.com/article/uk-air-berlin-m-a-easyjet/easyjet-says-other-airlines-interested-in-feeder-flights-from-tegel-idUKKBN1EH04W

                            [...]


                            This version of the script can handle multiple URLs on the command line (of course, they all need to return the same json-formatted data...or at least extremely similar with both a headline and a url field).



                            btw, I've made it print a blank line between each article. I find that to be more readable.



                            If you want to use curl to do the fetching rather than the perl LWP module, the script would be quite a bit simpler:



#!/usr/bin/perl

use strict;
use JSON;
use Term::ANSIColor;

my $bold  = color('bold');
my $reset = color('reset');

my $base = 'https://uk.reuters.com';

undef $/;       # disable the input record separator...
my $json = <>;  # ...so this slurps the whole of stdin in one go

foreach my $h ( @{ decode_json($json)->{headlines} } ) {
    print $bold, $h->{headline}, $reset, "\n", $base, $h->{url}, "\n\n";
}


                            Run this version as:



$ curl -s https://uk.reuters.com/assets/jsonWireNews | ./bold-2nd.pl


Note that both versions of the bolding script use a json parser to actually parse the json data, rather than relying on regular expressions to find lines matching particular patterns. As has been noted many times before, parsing json, html, xml, or any similar structured data format with regular expressions is unreliable and fragile. It can be made to work in simple cases, but even a minor change in the input format will break it (e.g. if Reuters stopped pretty-printing the json with a line feed between each data element and record, and started emitting it all on one line, any line-based regexp matcher would break).
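
To make that concrete, here's a tiny stand-alone sketch (with made-up sample data) showing that decode_json() doesn't care whether the input is pretty-printed or crammed onto one line, whereas a line-oriented awk/grep pipeline would only ever see one line to match against:

#!/usr/bin/perl
use strict;
use JSON;

# the same structure as the Reuters feed, but compacted onto a single line
# (hypothetical sample data, purely for demonstration)
my $compact = '{"headlines":[{"headline":"Example headline","url":"/article/example-id"}]}';

foreach my $h ( @{ decode_json($compact)->{headlines} } ) {
    print $h->{headline}, " => ", $h->{url}, "\n";
}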



                            Finally, the json data fetched by curl (or LWP) looks like this:



                             "headlines": [

                            "id": "UKKBN1EH044",
                            "headline": "Driver charged with attempted murder over Australian vehicle attack",
                            "dateMillis": "1514003249000",
                            "formattedDate": "3m ago",
                            "url": "/article/uk-australia-attack/driver-charged-with-attempted-murder-over-australian-vehicle-attack-idUKKBN1EH044",
                            "mainPicUrl": "https://s4.reutersmedia.net/resources/r/?m=02&d=20171223&t=2&i=1216634499&w=116&fh=&fw=&ll=&pl=&sq=&r=LYNXMPEDBM04W"
                            ,
                            ]


so id, dateMillis, formattedDate, and mainPicUrl are also available in the perl $h hashref (as $h->{id} and so on) for printing or other use, as well as the headline and url that we're printing.
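
For example, an untested variation of the print statement inside the loop would also show the feed's relative timestamp; only the print line changes:

print $bold, $h->{headline}, $reset,
      '  (', $h->{formattedDate}, ')', "\n",
      $base, $h->{url}, "\n\n";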






answered Dec 23 '17 at 5:49
cas