How can I convert a PNG to a PDF in high quality so it's not blurry or fuzzy?
There are a lot of questions out there about how to convert a PDF file to a PNG image, but I'm looking to take a nice sharp PNG file and basically wrap or embed it in a PDF file without it ending up blurry or fuzzy.
I realize that with ImageMagick installed I can do a simple conversion like:
convert sample.png sample.pdf
I've also tried a lot of the switches to set the depth, as well as the quality setting:
convert -quality 100 sample.png sample.pdf
However, the PDF still comes out looking blurry/fuzzy.
Here's a sample image:
http://img406.imageshack.us/img406/6461/picture3mu.png
As a PNG it's crisp and clean. When I convert it to a PDF, even at the same size, it looks blurry:
http://img803.imageshack.us/img803/9969/picture4at.png
How can I convert a PNG to a PDF in high quality?
imagemagick image-manipulation
I hope you find a good answer to your question, but I think it is just a given property of PDF to store images in JPG format. PNG, like the one you show us, has much better quality than JPG.
– jippie, Jul 12 '12 at 6:32
@cwd: Did you try to put it in a .tex file, and then generate the PDF? With \usepackage[pdftex, final]{graphicx} and \includegraphics[width=516px]{calendar.png}, for example.
– Emanuel Berg, Jul 13 '12 at 8:17
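The LaTeX route from the comment above can be sketched as a minimal document (a sketch, not the commenter's exact file: the braces are restored by assumption, calendar.png is a placeholder name, and pdflatex is assumed to be installed):

```latex
% wrap.tex -- compile with: pdflatex wrap.tex
% Embeds calendar.png on an otherwise empty page; pdflatex stores
% PNG bitmap data with lossless (Flate) compression.
\documentclass{article}
\usepackage[pdftex, final]{graphicx}
\pagestyle{empty}
\begin{document}
\includegraphics[width=516px]{calendar.png}
\end{document}
```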
@jippie: No, PDF can store bitmaps losslessly. The link gives a list of compression algorithms rather than formats, because the bitmap data inside a PDF can't be extracted and viewed directly as a JPEG or TIFF, but you wouldn't go far wrong saying that PDF images are either JPEG (lossy), JPEG 2000 (also lossy), or any of several TIFF variants (lossless). What is true, however, is that a given PDF distiller may default to translating bitmaps into DCT (a.k.a. JPEG) form, and has to be told to use a lossless form instead.
– Warren Young, Jul 13 '12 at 9:38
@cwd Have you thought about accepting some answer? I think user32208's answer is rather good: unix.stackexchange.com/a/64495/16920
– Léo Léopold Hertz 준영, Jun 25 '16 at 7:35
asked Jul 12 '12 at 0:01 by cwd; edited Jun 26 '14 at 16:32 by Braiam
6 Answers
Try using the -density option. The default resolution is 72 dots per inch, so try something like -density 300.
For reference, see -density in the ImageMagick command-line options documentation.
answered Feb 12 '13 at 5:54 by user32208
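One way to pick a density value, sketched below, is to derive it from the image's pixel width and the physical width you want on the page (the 1032-pixel width and the 8.6-inch target are hypothetical numbers, and the final convert call assumes ImageMagick is installed):

```shell
# Choose -density so the PNG prints at a chosen physical width:
# density (dots per inch) = pixel width / target width in inches.
# Widths are kept in tenths of an inch so the math stays in integers.
pixels=1032        # hypothetical image width in pixels
tenths=86          # hypothetical target width: 8.6 inches
density=$(( pixels * 10 / tenths ))
echo "$density"    # prints 120
# With ImageMagick installed, the actual conversion would be:
#   convert -density "$density" -units PixelsPerInch sample.png sample.pdf
```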
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z, May 2 '15 at 4:18
How do you find the best density option? How much data is lost with -density 300 with any example picture? I think the result depends on the input. A new thread about it here: unix.stackexchange.com/q/292025/16920
– Léo Léopold Hertz 준영, Jun 25 '16 at 7:34
It can be very complicated to get good PDF output from convert. Try img2pdf instead. From the readme:
Lossless conversion of images to PDF without unnecessarily re-encoding JPEG and JPEG2000 files. Thus, no loss of quality and no unnecessarily large output file.
To clarify: PDF can embed lossless JPEG 2000 images (and most readers appear to support them). So this conversion is completely lossless:
convert sample.png -quality 0 sample.jp2
img2pdf -o sample.pdf sample.jp2
(Assuming the JP2 delegate is available, of course: check identify -list format | grep JP2.)
answered Jun 25 '15 at 12:29 by Brian Z
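The two commands above can be wrapped in a small guarded script so nothing runs when either tool is missing (a sketch; sample.png is a placeholder name):

```shell
# Derive the intermediate JP2 and final PDF names from the input PNG.
in=sample.png
jp2="${in%.png}.jp2"   # sample.jp2
pdf="${in%.png}.pdf"   # sample.pdf
echo "$jp2 $pdf"       # prints: sample.jp2 sample.pdf
# Run the lossless pipeline only if both tools are installed.
if command -v convert >/dev/null 2>&1 && command -v img2pdf >/dev/null 2>&1; then
    convert "$in" -quality 0 "$jp2"   # -quality 0 selects lossless JPEG 2000
    img2pdf -o "$pdf" "$jp2"
fi
```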
This is IMHO the best answer here, but you should explain your point better, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like: convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2 (assuming the JP2 delegate is available, of course: check identify -list format | grep JP2).
– Nemo, Oct 8 '15 at 20:15
If you want to stick to standard repositories on Ubuntu 14, convert to TIFF and then tiff2pdf.
– Camille Goudeseune, Nov 4 '15 at 17:24
This is a great answer, thank you very much.
– Lyubomyr Shaydariv, Jan 26 '17 at 0:39
I am almost certain that what you perceive as a loss of quality in the PDF is just an effect of your PDF viewer's anti-aliasing feature.
If you use evince to view the PDF, you can see the anti-aliasing feature automatically switched off at a certain zoom (300% in my quick test). You can see that vividly when you keep zooming in: at some point, pixels suddenly become clearly visible. That is the point where anti-aliasing must have been switched off to allow precise image inspection.
answered Jul 12 '12 at 10:59 by rozcietrzewiacz
Hmm - that makes sense, but I guess I was hoping to be able to somehow set the image and the "initial view" to 100% so that it looks crisper.
– cwd, Jul 12 '12 at 20:10
@cwd Don't mistake the zoom (a way of examining the file) for the actual cause of image smoothing: anti-aliasing. The image is stored properly; it is the PDF viewer that fools you. It could even work the other way around: with other viewing applications, or with different settings, you could see a sharp image in a PDF file and a smoothed PNG file in an image viewer.
– rozcietrzewiacz, Jul 13 '12 at 6:37
I think that for PNG-to-PDF the -density parameter should be small rather than large. You could try something like convert -quality 100 -density 50.
Nope, higher density is definitely better; I just did a test: 50 results in lots of very visible pixels, while 300 is nice and crisp looking.
– shaunhusain, Sep 28 '14 at 23:13
This has been confusing me, but I think a higher density results in output with lower resolution. That means that if the output is fuzzy (over-aliased, like the example in the original question) then lower density is what you want. But if a PDF is pixelated then indeed you need to convert with a higher density.
– Brian Z, Apr 22 '15 at 7:32
PDF is a vector format (i.e., the file contains a description of lines to draw), while other formats (JPG, PNG) are raster formats (the file describes what color to paint each pixel). If you blow a PDF up, it is still just sharp lines; JPG and PNG show the pixelation.
(OK, OK, I lied. A PDF can also contain a raster.)
Brian Z above provided the following, which is the correct, fully reversible, and lossless (assuming the convert step is in fact lossless, which I think it is, or at least ought to be) way to put PNGs into a PDF. You are required to convert from PNG to lossless JP2 in order to be compliant with the PDF structure / readers (I think).
$ convert sample.png -quality 0 sample.jp2
$ img2pdf -o sample.pdf sample.jp2
However, it is worth noting that you can supply the .png files themselves directly to img2pdf, like:
$ img2pdf -o sample.pdf sample-page1.png <sample-page2.png ...>
This will produce the smallest PDF file, and will insert the PNGs' raw hex into objects within the PDF losslessly*.
*The drawback is that this process is not reversible, unlike using JP2. The header/footer and chunk header/footer data have been stripped (which actually makes the files even smaller!) from the PNG that is inserted, leaving only the raw picture data (the metadata that was deleted is integrated into the PDF structure), presumably to "hack" the PDF into displaying raw PNG, which is technically non-compliant with the container. It displays fine in Firefox, and may display fine in all modern readers, but if PNG is non-compliant with the container then strict readers may not render the raw PNG data (as they should not expect it / process it correctly).
Here you can find an extremely raw bash script that worked for me to extract and reconstruct the PNG files with hashes matching the ones used as input into the PDF: https://github.com/jack4455667788/RebuildRawPNGExtractedFromPDF
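The multi-page call above can be sketched as a short script that collects the page files and hands them to img2pdf in one invocation (a sketch; the sample-page*.png names are placeholders, and img2pdf is assumed to be installed):

```shell
# Collect the page images, in order, as positional parameters.
set -- sample-page1.png sample-page2.png sample-page3.png   # placeholder names
echo "pages: $#"    # prints: pages: 3
# One PDF page per input PNG; the PNG pixel data is embedded as-is.
if command -v img2pdf >/dev/null 2>&1; then
    img2pdf -o sample.pdf "$@"
fi
```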
Try using the -density
option. The default resolution is 72 dots per inch. So try something like -density 300
.
For reference see -density
in the ImageMagick command-line options documentation.
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
How to find the best density option? How much is data lostwith density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920
– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
add a comment |
Try using the -density
option. The default resolution is 72 dots per inch. So try something like -density 300
.
For reference see -density
in the ImageMagick command-line options documentation.
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
How to find the best density option? How much is data lostwith density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920
– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
add a comment |
Try using the -density
option. The default resolution is 72 dots per inch. So try something like -density 300
.
For reference see -density
in the ImageMagick command-line options documentation.
Try using the -density
option. The default resolution is 72 dots per inch. So try something like -density 300
.
For reference see -density
in the ImageMagick command-line options documentation.
edited Feb 12 '13 at 7:08
manatwork
22k38385
22k38385
answered Feb 12 '13 at 5:54
user32208user32208
33132
33132
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
How to find the best density option? How much is data lostwith density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920
– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
add a comment |
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
How to find the best density option? How much is data lostwith density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920
– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
Density definitely seems to be the key. Scale and density seem to be inversely related, so I have been playing with both settings to get an optimum result in terms of both appearance and file size... If there is a set formula, I wish I knew it.
– Brian Z
May 2 '15 at 4:18
How to find the best density option? How much is data lost
with density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
How to find the best density option? How much is data lost
with density 300
with any example picture? I think the result depends on the input. A new thread about it here unix.stackexchange.com/q/292025/16920– Léo Léopold Hertz 준영
Jun 25 '16 at 7:34
add a comment |
It can be very complicated to get good pdf output from convert
. Try img2pdf
instead. From the readme:
Lossless conversion of images to PDF without unnecessarily re-encoding JPEG and JPEG2000 files. Thus, no loss of quality and no unnecessary large output file.
To clarify: PDF can embed lossless JPEG 2000 images (and most readers appear to support them). So this conversion is completely lossless:
convert sample.png -quality 0 sample.jp2
img2pdf -o sample.pdf sample.jp2
(Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)
3
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: checkidentify -list format | grep JP2
.)
– Nemo
Oct 8 '15 at 20:15
1
If you want to stick to standard repositories on Ubuntu 14,convert
to tiff and thentiff2pdf
.
– Camille Goudeseune
Nov 4 '15 at 17:24
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
add a comment |
It can be very complicated to get good pdf output from convert
. Try img2pdf
instead. From the readme:
Lossless conversion of images to PDF without unnecessarily re-encoding JPEG and JPEG2000 files. Thus, no loss of quality and no unnecessary large output file.
To clarify: PDF can embed lossless JPEG 2000 images (and most readers appear to support them). So this conversion is completely lossless:
convert sample.png -quality 0 sample.jp2
img2pdf -o sample.pdf sample.jp2
(Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)
3
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: checkidentify -list format | grep JP2
.)
– Nemo
Oct 8 '15 at 20:15
1
If you want to stick to standard repositories on Ubuntu 14,convert
to tiff and thentiff2pdf
.
– Camille Goudeseune
Nov 4 '15 at 17:24
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
add a comment |
It can be very complicated to get good pdf output from convert
. Try img2pdf
instead. From the readme:
Lossless conversion of images to PDF without unnecessarily re-encoding JPEG and JPEG2000 files. Thus, no loss of quality and no unnecessary large output file.
To clarify: PDF can embed lossless JPEG 2000 images (and most readers appear to support them). So this conversion is completely lossless:
convert sample.png -quality 0 sample.jp2
img2pdf -o sample.pdf sample.jp2
(Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)
It can be very complicated to get good pdf output from convert
. Try img2pdf
instead. From the readme:
Lossless conversion of images to PDF without unnecessarily re-encoding JPEG and JPEG2000 files. Thus, no loss of quality and no unnecessary large output file.
To clarify: PDF can embed lossless JPEG 2000 images (and most readers appear to support them). So this conversion is completely lossless:
convert sample.png -quality 0 sample.jp2
img2pdf -o sample.pdf sample.jp2
(Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)
edited Apr 13 '17 at 12:50
Community♦
1
1
answered Jun 25 '15 at 12:29
Brian ZBrian Z
31126
31126
3
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: checkidentify -list format | grep JP2
.)
– Nemo
Oct 8 '15 at 20:15
1
If you want to stick to standard repositories on Ubuntu 14,convert
to tiff and thentiff2pdf
.
– Camille Goudeseune
Nov 4 '15 at 17:24
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
add a comment |
3
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: checkidentify -list format | grep JP2
.)
– Nemo
Oct 8 '15 at 20:15
1
If you want to stick to standard repositories on Ubuntu 14,convert
to tiff and thentiff2pdf
.
– Camille Goudeseune
Nov 4 '15 at 17:24
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
3
3
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:
convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)– Nemo
Oct 8 '15 at 20:15
This is IMHO the best answer here, but you should explain better your point, i.e. that PDF can embed lossless JPEG 2000 images. So the full command for the OP would be something like:
convert sample.png -quality 0 sample.jp2; img2pdf -o sample.pdf sample.jp2
. (Assuming the JP2 delegate is available of course: check identify -list format | grep JP2
.)– Nemo
Oct 8 '15 at 20:15
1
1
If you want to stick to standard repositories on Ubuntu 14,
convert
to tiff and then tiff2pdf
.– Camille Goudeseune
Nov 4 '15 at 17:24
If you want to stick to standard repositories on Ubuntu 14,
convert
to tiff and then tiff2pdf
.– Camille Goudeseune
Nov 4 '15 at 17:24
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
This is a great answer, thank you very and very much.
– Lyubomyr Shaydariv
Jan 26 '17 at 0:39
add a comment |
I am almost certain that what you perceive as a loss of quality in the PDF, is just an effect of your PDF viewer's anti-aliasing feature.
If you use evince
to view the PDF, you can see the anti-aliasing feature automatically switched off at a certain zoom (300% in my quick test). You can see that vividly when you keep zooming in - you will notice that at some point, pixels become suddenly clearly visible. That is the point when anti-aliasing must have been switched off to allow precise image inspection.
Hmm - that makes sense but I guess I was hoping to be able to somehow set the image and the "initial view" to 100% so that it looks crisper.
– cwd
Jul 12 '12 at 20:10
1
@cwd Don't mistake the zoom (a way of examining the file) with the actual cause of image smoothing: anti-aliasing. The image is stored properly. It is the PDF viewer that fools you. But it could be the other way around even - if you'd take some other viewing applications or change their settings. You could then see a sharp image in a PDF file and a smoothed PNG file in an image viewer.
– rozcietrzewiacz
Jul 13 '12 at 6:37
add a comment |
I am almost certain that what you perceive as a loss of quality in the PDF, is just an effect of your PDF viewer's anti-aliasing feature.
If you use evince
to view the PDF, you can see the anti-aliasing feature automatically switched off at a certain zoom (300% in my quick test). You can see that vividly when you keep zooming in - you will notice that at some point, pixels become suddenly clearly visible. That is the point when anti-aliasing must have been switched off to allow precise image inspection.
Hmm - that makes sense but I guess I was hoping to be able to somehow set the image and the "initial view" to 100% so that it looks crisper.
– cwd
Jul 12 '12 at 20:10
1
@cwd Don't mistake the zoom (a way of examining the file) with the actual cause of image smoothing: anti-aliasing. The image is stored properly. It is the PDF viewer that fools you. But it could be the other way around even - if you'd take some other viewing applications or change their settings. You could then see a sharp image in a PDF file and a smoothed PNG file in an image viewer.
– rozcietrzewiacz
Jul 13 '12 at 6:37
add a comment |
I am almost certain that what you perceive as a loss of quality in the PDF, is just an effect of your PDF viewer's anti-aliasing feature.
If you use evince
to view the PDF, you can see the anti-aliasing feature automatically switched off at a certain zoom (300% in my quick test). You can see that vividly when you keep zooming in - you will notice that at some point, pixels become suddenly clearly visible. That is the point when anti-aliasing must have been switched off to allow precise image inspection.
I am almost certain that what you perceive as a loss of quality in the PDF, is just an effect of your PDF viewer's anti-aliasing feature.
If you use evince
to view the PDF, you can see the anti-aliasing feature automatically switched off at a certain zoom (300% in my quick test). You can see that vividly when you keep zooming in - you will notice that at some point, pixels become suddenly clearly visible. That is the point when anti-aliasing must have been switched off to allow precise image inspection.
answered Jul 12 '12 at 10:59
rozcietrzewiaczrozcietrzewiacz
29.3k47392
29.3k47392
Hmm - that makes sense but I guess I was hoping to be able to somehow set the image and the "initial view" to 100% so that it looks crisper.
– cwd
Jul 12 '12 at 20:10
1
@cwd Don't mistake the zoom (a way of examining the file) with the actual cause of image smoothing: anti-aliasing. The image is stored properly. It is the PDF viewer that fools you. But it could be the other way around even - if you'd take some other viewing applications or change their settings. You could then see a sharp image in a PDF file and a smoothed PNG file in an image viewer.
– rozcietrzewiacz
Jul 13 '12 at 6:37
I think that for PNG-to-PDF conversion the -density parameter should be small rather than large. You could try something like convert -quality 100 -density 50 sample.png sample.pdf
Nope higher density is definitely better, just did a test, 50 results in lots of very visible pixels, 300 is nice and crisp looking.
– shaunhusain
Sep 28 '14 at 23:13
This has been confusing me, but I think a higher density results in output with lower resolution. That means that if the output is fuzzy (over-aliased, like the example in the original question) then lower density is what you want. But if a PDF is pixelated then indeed you need to convert with a higher density.
– Brian Z
Apr 22 '15 at 7:32
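Brian Z's point about density and output size can be checked with a little arithmetic. With `convert -density D`, ImageMagick (as I understand it) maps the image's pixel grid onto the page at D pixels per inch, so a lower density yields a physically larger page from the same pixels, and a viewer at 100% zoom shows each image pixel spread over more screen pixels. A minimal sketch, with a made-up 600x400 px screenshot:

```python
def pdf_page_size_inches(width_px, height_px, density_dpi):
    """Page size a raster image gets when wrapped into a PDF at a given
    density: the pixel grid is mapped onto the page at density_dpi
    pixels per inch, so each side measures pixels / density inches."""
    return width_px / density_dpi, height_px / density_dpi

# Hypothetical 600x400 px screenshot:
print(pdf_page_size_inches(600, 400, 50))    # -> (12.0, 8.0), a large page
print(pdf_page_size_inches(600, 400, 300))   # a small page, roughly 2 x 1.33 in
```

This is why both observations above can be true at once: the pixel data is identical either way, but the density decides how big the page is and therefore how much the viewer has to scale (and smooth) the image at any given zoom.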
answered May 17 '13 at 13:33
user39384
PDF is a vector format (i.e., the file contains a description of lines to draw), while other formats (JPG, PNG) are raster formats (the file describes what color to paint each pixel). If you blow a PDF up, it is still just sharp lines; JPG and PNG show the pixelation.
(OK, OK, I lied. A PDF can also contain raster images.)
answered Jan 16 '13 at 1:14
vonbrand
Brian Z above provided the following, which is the correct, fully reversible, and lossless way (assuming the convert step is in fact lossless, which I think it is, or at least ought to be) to put PNGs into a PDF. You are required to convert from PNG to lossless JP2 in order to be compliant with PDF structure / readers (I think).
$ convert sample.png -quality 0 sample.jp2
$ img2pdf -o sample.pdf sample.jp2
However it is worth noting that you can supply the .png files themselves directly to img2pdf like:
$ img2pdf -o sample.pdf sample-page1.png <sample-page2.png ...>
This will produce the smallest pdf file, and will insert the png's raw hex into objects within the pdf losslessly*.
*The drawback is that this process is not reversible, unlike using JP2. The header/footer and chunk header/footer data has been stripped (which actually makes the files even smaller!) from the PNG that is inserted, leaving only the raw picture data (the metadata that was deleted is integrated into the PDF structure), presumably to "hack" the PDF into displaying raw PNG, which is technically non-compliant with the container. It displays fine in Firefox, and may display fine in all modern readers, but if PNG is non-compliant with the container then strict readers may not render the raw PNG data (as they should not expect it / process it correctly).
Here you can find an extremely raw bash script that worked for me to extract and reconstruct the png files with matching hashes to the ones used for input into the pdf. https://github.com/jack4455667788/RebuildRawPNGExtractedFromPDF
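As a rough, stdlib-only illustration of the chunk framing being discussed, the sketch below walks a PNG's chunks. The 8-byte signature and the per-chunk length/type/CRC framing are the kind of structural data a raw-embedding scheme strips out, leaving only the compressed picture payloads. This is purely illustrative (the tiny 1x1 test image is constructed inline), not how img2pdf is actually implemented:

```python
import struct
import zlib

def png_chunks(data):
    """Yield (type, payload) for each chunk in a PNG byte string.

    The signature and chunk framing read here are what a raw-embedding
    scheme discards, keeping only the payloads (notably IDAT)."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        yield ctype.decode("ascii"), payload
        pos += 8 + length + 4  # skip the 4-byte CRC

def chunk(ctype, payload):
    """Assemble one PNG chunk: length + type + payload + CRC."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# Build a minimal 1x1 red RGB PNG entirely in memory.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)   # 1x1, 8-bit, RGB
idat = zlib.compress(b"\x00\xff\x00\x00")             # filter byte + red pixel
png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

print([t for t, _ in png_chunks(png)])  # -> ['IHDR', 'IDAT', 'IEND']
```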
edited Feb 10 at 3:20
answered Feb 9 at 22:08
Jack Hadley
@cwd: Did you try to put it in a .tex file, and then generate the PDF? With \usepackage[pdftex, final]{graphicx} and \includegraphics[width=516px]{calendar.png}, for example.
– Emanuel Berg
Jul 13 '12 at 8:17
@jippie: No, PDF can store bitmaps losslessly. The link gives a list of compression algorithms rather than formats, because the bitmap data inside a PDF can't be extracted and viewed directly as a JPEG or TIFF, but you wouldn't go far wrong saying that PDF images are either JPEG (lossy), JPEG 2000 (also lossy) or any of several TIFF variants (lossless). What is true, however, is that a given PDF distiller may default to translating bitmaps into DCT (a.k.a. JPEG) form, and have to be told to use a lossless form instead.
– Warren Young
Jul 13 '12 at 9:38
@cwd Have you thought about accepting some answer? I think user32208's answer is rather good unix.stackexchange.com/a/64495/16920
– Léo Léopold Hertz 준영
Jun 25 '16 at 7:35