How can I do a stack trace using grep and regex?

I have a stack trace like this:



17/04/26 15:29:03 INFO HttpMethodDirector: Retrying request
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection refused (Connection refused)
17/04/26 15:29:03 INFO HttpMethodDirector: Retrying request
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 INFO JDBCRDD: closed connection
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 9.0 (TID 4)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 7)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more
17/04/26 15:29:03 INFO CoarseGrainedExecutorBackend: Got assigned task 12
17/04/26 15:29:03 INFO Executor: Running task 0.1 in stage 0.0 (TID 12)
17/04/26 15:29:03 INFO CoarseGrainedExecutorBackend: Got assigned task 13
17/04/26 15:29:03 INFO TorrentBroadcast: Started reading broadcast variable 0
17/04/26 15:29:03 INFO Executor: Running task 0.1 in stage 2.0 (TID 13)


I want to extract the relevant lines so that the output looks like this:



17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR NetworkClient: Node [192.168.5.5:9200] failed (Connection refused (Connection refused)); no other nodes left - aborting...
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 9.0 (TID 4)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more
17/04/26 15:29:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 7)
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[192.168.5.5:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:424)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:154)
at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:609)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
... 10 more


How can I get the above output (grepping all ERROR lines together with their stack-trace details)?

Tags: text-processing awk grep regular-expression

asked Apr 26 '17 at 10:37 by xyz_scala; edited Apr 26 '17 at 10:42 by terdon

2 Answers

It looks as if it should be enough to filter out the INFO messages from the input:

$ grep -v '[0-9] INFO ' file.in

I added [0-9] and the correct spacing around INFO just to be sure not to match any of the ERROR-related lines (in case a random string with INFO in it turns up there).

If you have a number of logfiles in a directory:

$ grep -v '[0-9] INFO ' *.log

where *.log is a filename pattern that matches the logfiles.
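
If you would rather keep one filtered copy per logfile than one combined stream, a minimal sketch along the same lines (the logs directory and the .errors suffix are only placeholders) could be:

$ for f in logs/*.log; do grep -v '[0-9] INFO ' "$f" > "$f.errors"; done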






          • If I want to grep a whole folder rather than a single file, how can I do that? I want to check all the files for errors and pull out only the ERROR-related data.

            – xyz_scala
            Apr 26 '17 at 14:12











          • grep -vR '[0-9] INFO ' /my/dir/location/*.log && man grep

            – schaiba
            Apr 26 '17 at 14:33
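
          If you want to search a whole directory tree rather than rely on the shell glob, GNU grep's -r together with --include is one option (a sketch, reusing the directory from the comment above):

          $ grep -rv --include='*.log' '[0-9] INFO ' /my/dir/location/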

































          I ran into exactly the same problem.



          grep can show context lines with the -A flag, but the number of context lines is fixed, so a long trace may be cut short. Instead, you could try awk.
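
          To illustrate the fixed-window limitation of -A (a sketch; test.log stands for your log file, and 15 is an arbitrary window size):

          $ grep -A 15 ' ERROR ' test.log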



          Here is a snippet I have used before (https://gist.github.com/maoshuai/33113ac457aca7869171942c696f46d3), saved as fullgrep.sh:



          #!/bin/sh
          # fullgrep.sh: print every line that matches a keyword, plus the
          # continuation lines that follow it, until the next log record begins.

          full_grep()
          {
              # First argument is the keyword. A new log record is assumed to start
              # with a yy/mm/dd timestamp; adjust newLinePattern to your log format.
              keyword="$1"
              newLinePattern="^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9] "

              awk '
              BEGIN {
                  isFound = "no"
              }
              {
                  # if the line matches the keyword, print it and raise the flag
                  if ($0 ~ /'"$keyword"'/) {
                      print $0
                      isFound = "yes"
                  }
                  # if a new log record begins, reset the flag
                  else if ($0 ~ /'"$newLinePattern"'/) {
                      isFound = "no"
                  }
                  # while the flag is set, keep printing continuation lines
                  else if (isFound == "yes") {
                      print $0
                  }
              }
              '
          }

          full_grep "$@"


          Then run cat test.log | ./fullgrep.sh ERROR and you should get the desired output.
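
          For example, assuming the script above is saved as fullgrep.sh in the current directory and your log is in test.log:

          $ chmod +x fullgrep.sh
          $ ./fullgrep.sh ERROR < test.log

          Note that the keyword is spliced into an awk regular expression, so stick to simple keywords such as ERROR (in particular, avoid / characters).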






          answered Jan 23 at 13:34 by mao shell, edited Jan 23 at 13:41























