This is the first guest post ever on Down the Road. Reader P wrote such an excellent comment on how to understand dots per inch and pixels per inch in scanning and printing that, with his permission and a little editing, I’ve turned it into this post. Thanks, P, for demystifying PPI and DPI!
Pixels per inch (PPI) and dots per inch (DPI) are challenging to understand in large part because the Internet is littered with outright wrong information about what these things are. Also, plenty of people use the terms improperly.
For the sake of brevity, parts of the following discussion are oversimplified. The purpose is not to explain everything, but to eliminate much of the confusion surrounding these terms by establishing a foundational understanding of what PPI and DPI are, and what they are not. A deep dive into every technical aspect of every technology, in every possible situation where these terms might be used, in pursuit of 100% accuracy would defeat the purpose entirely: it would add to the confusion instead of alleviating it.
PPI has to do with screens: monitors, televisions, cell phones, tablets, and so on. It is merely a measurement of how densely packed the physical pixels are on a display — the pixel density. This in turn tells you how much physical screen real estate a given digital image will take up when viewed at 100%. PPI is simply the ratio of the screen’s native resolution, a×b pixels, to the screen’s linear physical dimensions, x×y inches.
Horizontal PPI is a pixels/x inches = a/x PPI. Vertical PPI is b pixels/y inches = b/y PPI. These days most screens use square pixels — pixel height and width are the same, so there’s no need for separate horizontal and vertical PPI values. We just say a monitor or screen is such-and-such PPI, a single value, because it’s the same horizontally and vertically.
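As a quick sketch of that ratio, here's how you might compute the PPI of a hypothetical 27-inch 2560×1440 monitor (the monitor specs are illustrative, not from the post; square pixels are assumed, as noted above):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density from native resolution and diagonal screen size,
    assuming square pixels (the norm on modern displays)."""
    # Diagonal length in pixels, by the Pythagorean theorem
    diagonal_px = math.sqrt(width_px**2 + height_px**2)
    return diagonal_px / diagonal_in

# A hypothetical 27-inch 2560x1440 monitor:
print(round(ppi(2560, 1440, 27), 1))  # 108.8
```

Manufacturers usually quote the diagonal size, which is why the calculation goes through the diagonal rather than dividing width by width directly; with square pixels, both give the same answer.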
The closer you are to a screen, the greater the PPI needs to be to provide an image of acceptable quality. For example, a big-screen LCD TV offers far fewer PPI than your cell phone.
Understanding DPI in print
Screen PPI and print DPI are similar in concept, but they are not the same thing. However, people use them interchangeably and it causes confusion. Instead of being the density of pixels on a display, DPI is the density of dots laid down on a physical medium such as paper to form a physical image.
The closer you are to a print, the greater the print DPI needs to be to provide an image of acceptable quality. For example, the DPI of a billboard advertisement is far less than that of your 4×6 vacation photos.
Understanding DPI in optical scanning
In scanning, DPI measures the scanner’s resolution. Look at it as the number of dots per inch the scanner can allegedly resolve when scanning a given piece of film. Let’s say a scanner has a maximum optical scanning resolution rating of 3600 DPI. This means that for each linear inch of film, the scanner is capable of resolving 3600 dots of information — allegedly, as the true, effective resolution will be less, a topic outside the scope of this discussion. These individual dots of information captured become individual pixels in the output digital image, the “scan.”
For square medium format negatives (1:1 aspect ratio), which are 56mm square, the calculation is:
3600 dots/inch × 56 mm × (1 inch / 25.4 mm) = 7937 dots
In other words, you get a scan of 7937×7937 pixels.
For 35mm negatives (3:2 aspect ratio), which are 36×24mm, the calculations are:
Horizontal: 3600 dots/inch × 36 mm × (1 inch / 25.4 mm) = 5102 dots
Vertical: 3600 dots/inch × 24 mm × (1 inch / 25.4 mm) = 3402 dots
That’s a scan of 5102×3402 pixels.
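The arithmetic above can be wrapped in a small helper. This is just a sketch of the same unit conversion, using the post's 3600 DPI example:

```python
MM_PER_INCH = 25.4

def scan_pixels(dpi: int, length_mm: float) -> int:
    """Pixels resolved along one dimension of film at a given scan DPI."""
    return round(dpi * length_mm / MM_PER_INCH)

# 3600 DPI, as in the examples above:
print(scan_pixels(3600, 56))  # 7937  (6x6 medium format)
print(scan_pixels(3600, 36))  # 5102  (35mm, long edge)
print(scan_pixels(3600, 24))  # 3402  (35mm, short edge)
```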
Going from scanner DPI, to screen PPI, to print DPI
In print, what matters is dot density combined with the pixel resolution of the digital image being printed, along with the image's overall quality, meaning the amount of information and detail it contains. Pixel density, which is not the same thing as the pixel resolution of a digital image, applies only to screens. Likewise, DPI as the optical resolution of a scanner and DPI as the density of dots on a physical print are not the same thing. The former measures how much information the scanner can resolve while digitizing a piece of film; the latter measures how densely packed the dots are that form the image in a print.
That said, when printing from a digital image, the number of dots per inch (DPI) the printer lays down is related to the pixel resolution (a×b pixels) of that image. In simplified terms (printer technologies differ in how they lay down dots on a physical substrate such as paper, which is also beyond the scope of this discussion), if the image doesn't contain at least one pixel for every dot to be printed at a given DPI and physical print size, it will have to be upscaled to a higher pixel resolution to match. This is a problem: interpolation will occur, artifacts are likely to appear, and image quality will suffer. If an image is already lacking in pixel resolution or resolved detail, nothing can salvage it. You can't create detail that didn't exist in the first place.
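The 1:1 pixel-to-dot requirement is easy to check. Here's a minimal sketch; the 8×10-inch print and the 300 DPI figure are illustrative assumptions, not values from the post:

```python
def pixels_needed(print_in: float, dpi: int) -> int:
    """Minimum pixels along one dimension to print at the given DPI
    without upscaling (a 1:1 pixel-to-dot simplification)."""
    return int(print_in * dpi)

# A hypothetical 8x10-inch print at 300 DPI:
print(pixels_needed(8, 300))   # 2400
print(pixels_needed(10, 300))  # 3000
```

An image smaller than 2400×3000 pixels would have to be interpolated up to fill that print, with the quality penalties described above.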
Advice for scanning
As stated previously, a scanner’s actual effective optical resolution is less than the rated value. For flatbeds, it’s much, much less.
For a typical flatbed scanner, I'd scan everything at a DPI that yields scans with at least 4 times the pixel resolution of my desired final output (2 times the width and height, which works out to 4 times the total area).
For square, medium-format images, if I want my final files to be 2500×2500 pixels, I'd scan them at no less than 2400 DPI as 16-bits-per-channel TIFFs (i.e., 48 bits per pixel for color images and 16 bits per pixel for greyscale). I would then do all of my levels editing, dust/scratch spotting, cropping, and so on at that original large resolution, but not yet sharpen anything. I'd save these as my "master" images, again as 16-bpc TIFFs. Then, I'd resize them to 2500×2500 pixels using the Lanczos-3 method, and finally use unsharp masking to sharpen them to my liking. That would be my final image for output, for sharing online, which I'd save as JPEGs.
I’d follow the same practice for 35mm. So, if I want my final 35mm images to be 3000×2000 pixels, I’d scan at no less than 4400 DPI, and then follow the same procedure as I did for medium format, cropping 3:2 instead of 1:1.
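The rule of thumb above can be sketched as a calculation: pick a target output resolution, double it along each dimension, and convert that back to a scan DPI over the film's physical size. The helper below is an illustration of that logic, not a formula from the post:

```python
MM_PER_INCH = 25.4

def min_scan_dpi(target_px: int, film_mm: float,
                 linear_factor: float = 2.0) -> float:
    """Scan DPI needed so the raw scan is `linear_factor` times the
    target output along each dimension (4x the area at the default 2.0)."""
    return target_px * linear_factor * MM_PER_INCH / film_mm

# Square medium format (56mm) aiming for 2500x2500 final files:
print(round(min_scan_dpi(2500, 56)))  # 2268 -> scan at 2400 or more
# 35mm (36mm long edge) aiming for 3000x2000 final files:
print(round(min_scan_dpi(3000, 36)))  # 4233 -> scan at 4400 or more
```

Rounding up to the next resolution the scanner actually offers (2400, 4400, and so on) gives the figures quoted above.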