Dots per inch (DPI) and pixels per inch (PPI) in scanning negatives and printing images

This is the first guest post ever on Down the Road. Reader P wrote such an excellent comment on how to understand dots per inch and pixels per inch in scanning and printing that, with his permission and a little editing, I’ve turned it into this post. Thanks, P, for demystifying PPI and DPI!

By P

Pixels per inch (PPI) and dots per inch (DPI) are challenging to understand in large part because the Internet is littered with outright wrong information about what these things are. Also, plenty of people use the terms improperly.

For the sake of brevity, parts of the following discussion are oversimplified. The purpose is not to explain everything, but to eliminate much of the confusion surrounding these terms by establishing a foundational understanding of what PPI and DPI are, and what they are not. A deep dive into every technical aspect of every technology, in every situation where these terms might be used, just to be 100% accurate and technically correct in every possible way, would defeat the purpose entirely: it would add to the confusion instead of alleviating it.

Understanding PPI

PPI has to do with screens: monitors, televisions, cell phones, tablets, and so on. It is merely a measurement of how densely packed the physical pixels are on a display — the pixel density. This in turn tells you how much physical screen real estate a given digital image will take up when viewed at 100%. PPI is simply the ratio of the screen’s native resolution, a×b pixels, to the screen’s linear physical dimensions, x×y inches.

(Photo: polka dots, captioned “A 1 DPI pattern on this cup”)

Horizontal PPI is a pixels/x inches = a/x PPI. Vertical PPI is b pixels/y inches = b/y PPI. These days most screens use square pixels — pixel height and width are the same, so there’s no need for separate horizontal and vertical PPI values. We just say a monitor or screen is such-and-such PPI, a single value, because it’s the same horizontally and vertically.
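If it helps to see the arithmetic run, here’s a minimal Python sketch; the monitor dimensions below are made up purely for illustration:

```python
# Pixel density along one axis: pixels divided by physical inches.
def ppi(pixels: int, inches: float) -> float:
    return pixels / inches

# A hypothetical 27-inch 2560x1440 monitor, with a panel
# roughly 23.5 inches wide by 13.2 inches tall:
horizontal_ppi = ppi(2560, 23.5)  # ~109 PPI
vertical_ppi = ppi(1440, 13.2)    # ~109 PPI: square pixels, same both ways
print(f"{horizontal_ppi:.0f} x {vertical_ppi:.0f} PPI")
```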

The closer you are to a screen, the greater the PPI needs to be to provide an image of acceptable quality. For example, a big-screen LCD TV offers far fewer PPI than your cell phone.

Understanding DPI in print

Screen PPI and print DPI are similar in concept, but they are not the same thing, and using the terms interchangeably, as many people do, causes confusion. Instead of being the density of pixels on a display, DPI is the density of dots laid down on a physical medium, such as paper, to form a physical image.

The closer you are to a print, the greater the print DPI needs to be to provide an image of acceptable quality. For example, the DPI of a billboard advertisement is far less than that of your 4×6 vacation photos.

Understanding DPI in optical scanning

In scanning, DPI measures the scanner’s resolution. Look at it as the number of dots per inch the scanner can allegedly resolve when scanning a given piece of film. Let’s say a scanner has a maximum optical scanning resolution rating of 3600 DPI. This means that for each linear inch of film, the scanner is capable of resolving 3600 dots of information — allegedly, as the true, effective resolution will be less, a topic outside the scope of this discussion. These individual dots of information captured become individual pixels in the output digital image, the “scan.”

For square medium format negatives (1:1 aspect ratio), which are 56mm square, the calculation is:

3600 dots/inch × 56 mm × (1 inch / 25.4 mm) = 7937 dots

In other words, you get a scan of 7937×7937 pixels.

For 35mm negatives (3:2 aspect ratio), which are 36×24 mm, the calculations are:

Horizontal: 3600 dots/inch × 36 mm × (1 inch / 25.4 mm) = 5102 dots
Vertical: 3600 dots/inch × 24 mm × (1 inch / 25.4 mm) = 3402 dots

That’s a scan of 5102×3402 pixels.
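The same calculations, as a small Python sketch:

```python
MM_PER_INCH = 25.4

# Pixels resolved along one axis: scanner DPI times the film size in inches.
def scan_pixels(dpi: int, film_mm: float) -> int:
    return round(dpi * film_mm / MM_PER_INCH)

# Square medium format (56 x 56 mm) at 3600 DPI:
print(scan_pixels(3600, 56), "x", scan_pixels(3600, 56))  # 7937 x 7937

# 35mm (36 x 24 mm) at 3600 DPI:
print(scan_pixels(3600, 36), "x", scan_pixels(3600, 24))  # 5102 x 3402
```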

Going from scanner DPI, to screen PPI, to print DPI

(Photo: a polka-dotted chair, captioned “Dots per yard (DPY)?”)

In print, what matters is dot density combined with the pixel resolution of the digital image being printed, along with the image’s overall quality, meaning the amount of information and detail it contains. Pixel density, which is not the same thing as the pixel resolution of a digital image, applies only to screens. DPI as it relates to the optical resolution of a scanner and DPI as it relates to the density of dots on a physical print are not the same thing: the former measures how much information the scanner can resolve while digitizing a piece of film, while the latter measures how densely packed the dots are that form the image in a print.

That said, when printing from a digital image, the number of dots per inch (DPI) the printer lays down is related to the pixel resolution (a×b pixels) of that digital image. In simplified terms, setting aside differences in printer technologies and how each one lays down dots on a physical substrate such as paper (also beyond the scope of this discussion): if there aren’t enough pixels in the original image to match at least 1:1 the number of dots to be printed at a given combination of DPI and physical print size, then the digital image must be upscaled to a higher pixel resolution. This is a problem, because upscaling means interpolation, artifacts are likely to appear, and image quality will be greatly diminished. If an image is already lacking in pixel resolution or resolved image detail, you can’t do anything to salvage it. You can’t create detail that didn’t exist in the first place.
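Here’s a minimal sketch of that 1:1 check; the 8×10-inch print size and the 300 DPI figure are hypothetical, chosen only for illustration:

```python
# Pixels required along one axis to print 1:1 at a given DPI.
def pixels_needed(print_inches: float, print_dpi: int) -> int:
    return round(print_inches * print_dpi)

image_w, image_h = 3000, 2000    # pixel resolution of the digital image
need_w = pixels_needed(10, 300)  # 3000 dots across a 10-inch edge
need_h = pixels_needed(8, 300)   # 2400 dots across an 8-inch edge

if image_w < need_w or image_h < need_h:
    print("Not enough pixels: upscaling (interpolation) would be required")
else:
    print("At least a 1:1 pixel-to-dot match; no upscaling needed")
```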

Advice for scanning

As stated previously, a scanner’s actual effective optical resolution is less than the rated value. For flatbeds, it’s much, much less.

For a typical flatbed scanner, I’d scan everything at a DPI that produces scans of at least 4 times the pixel resolution of my intended final output: 2 times the length and 2 times the width, which works out to 4 times the total area.

For square, medium-format images, if I want my final files to be 2500×2500 pixels, I’d scan them at no less than 2400 DPI as 16-bits-per-color/channel TIFFs (i.e. 48 bits per pixel for color images, and 16 bits per pixel for greyscale). I would then do all of my levels editing, dust/scratch spotting, cropping, and so on at that original large resolution, but not yet sharpen anything. I’d save these as my “master” images, again as 16-bpc TIFFs. Then I’d resize them to 2500×2500 pixels using the Lanczos-3 method, and finally use unsharp masking to sharpen them to my liking. That would be my final image for output and for sharing online, which I’d save as a JPEG.

I’d follow the same practice for 35mm. So, if I want my final 35mm images to be 3000×2000 pixels, I’d scan at no less than 4400 DPI, and then follow the same procedure as I did for medium format, cropping 3:2 instead of 1:1.
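As a rough sketch, that rule of thumb works out like this in Python, rounding up to whatever resolution settings your scanner actually offers:

```python
MM_PER_INCH = 25.4

# Scan DPI needed so the raw scan is `linear_factor` times the target
# resolution in each dimension (4 times the total pixel area by default).
def min_scan_dpi(target_pixels: int, film_mm: float,
                 linear_factor: float = 2.0) -> float:
    return target_pixels * linear_factor / (film_mm / MM_PER_INCH)

# Square medium format (56 mm) with a 2500x2500-pixel target:
print(min_scan_dpi(2500, 56))  # ~2268, hence scanning at no less than 2400 DPI

# 35mm (36 mm on the long edge) with a 3000x2000-pixel target:
print(min_scan_dpi(3000, 36))  # ~4233, hence scanning at no less than 4400 DPI
```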



Comments

14 responses to “Dots per inch (DPI) and pixels per inch (PPI) in scanning negatives and printing images”

  1. Andy Umbo

    Why 16 bit color when most scanners can do 24 bit color? I can understand when the final usage will be “knocked down” for computer use to 8 bit on the jpeg, but 90% of the time I’m getting a transparency scan it’s going to be used for print output, or print reproduction in a magazine, so TIFF is a given, but do you not think there is a see-able difference between 16 and 24 bit color (I read somewhere in a tech publication that the difference is over 2 million tone/color steps)? Hence the “master scan”, to be “knocked down” later for other uses, would be highest rez, highest color and tone fidelity?

    I ask this question because at the “dawn” of digital, I was managing a large multi-photographer retail photo studio, and was bombarded with manufacturers wanting us to buy their early digital equipment. I saw a lot of experimental equipment, but one of the things that made a real impression was seeing a camera system that had 24 bit color, vs. the 16 bit or even 14 bit color most pro cameras shoot today. On print proof stage, the difference was visible and the 24 bit looked “film-like” in every sense of the word! I think about this so much, I wonder why there isn’t a reasonably priced pro camera that shoots native 24 bit (and even now, most but the top of the line don’t shoot native TIFF as well).

    I know why, based on sensor design, that 24 bit color fidelity is hard to obtain, but in a scanner that offers it, I would think that would be a given when converting film, unless there’s a reason not to?

    1. brineb58

      I am pretty sure he meant 16 bits per channel, which gives you a 48-bit image.

      1. P

        BRINEB58 — Yes, precisely. Thanks.

    2. P

      Hi, Andy.

      Re-read the second to last paragraph above. You’ll see that a clear and deliberate distinction is made between bits-per-channel/color (bpc) and bits-per-pixel (bpp).

      For color images (RGB, YPbPr/YUV, etc.), 16-bits-per-channel (bpc) is equivalent to 48 bits overall, or 48-bits-per-pixel (bpp), as there are three color channels. For a grey level image, there is only one channel, so a 16-bit grey level TIFF is merely 16 bits. Likewise, an 8-bit grey level TIFF is merely 8 bits. For grey level encoded images, whether one uses bpc or bpp as a descriptor is irrelevant, because it’s going to be the same. Again, there’s only one channel. But it’s not the same thing for color images as they contain multiple channels (typically three, but as always there are exceptions — for example, images that contain alpha channels).

      An 8-bpc (24-bpp) color image contains ~16.78 million (2^24) possible color values. A 16-bpc (48-bpp) color image contains a whopping ~281 trillion (2^48) possible color values, which is ~16.78 million times more than an 8-bpc image!

      (2^48) / (2^24) = 16.78E6

      Side note: Of course, doing this calculation isn’t really necessary as basic binary math dictates that any time you double the number of available bits, going from x-bits (smaller) to y-bits (where y is two-times larger than x, or 2·x), the factor by which the number of possible values increases is simply 2^x, where x is the smaller bit value, as stated. This is because (2^y) / (2^x) = 2^(y-x) = 2^((2·x)-x) = 2^x.

      In terms of grey level images, an 8-bpc image contains 256 (2^8) possible values and a 16-bpc image contains over 65.5 thousand (2^16) possible values. So, a 16-bit grey level image contains 256 times more information (possible values) than its 8-bit counterpart. Again, (2^16) / (2^8) = 256.

      This is a massive improvement when working with greyscale film scans (i.e. black and white photography).
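      The same math, as a quick Python check:

      ```python
      # Number of possible values for a given bit depth.
      def values(bits_per_channel: int, channels: int = 1) -> int:
          return 2 ** (bits_per_channel * channels)

      print(values(8, 3))   # 16777216 (~16.78 million), 8-bpc/24-bpp color
      print(values(16, 3))  # 281474976710656 (~281 trillion), 16-bpc/48-bpp color
      print(values(16, 3) // values(8, 3))  # 16777216: ~16.78 million times more
      print(values(16) // values(8))        # 256 times more, for grey level
      ```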

      All standard JPEGs (i.e. not JPEG2000, etc.) are 8-bpc, or 24-bpp (for color), or 8 bits (either bpc or bpp) for those with grey level encoding. Very seldom do people talk about JPEGs as being “24 bit images.” If anything, they generally refer to them as being “8 bit,” with the understanding that that means “per channel.” But it’s so well understood that standard color JPEGs are 8-bpc/24-bpp that they’re rarely even described using such terms at all. Likewise, very seldom do people talk about 16-bpc color TIFFs as being “48 bit.” They typically just say they’re “16 bit TIFFs,” again with the assumption that most people understand they’re talking about bits-per-channel (bpc), not the overall bits-per-pixel count (bpp). I like to be explicit, and I think it’s important to be in order to avoid confusion. Of course, there’s a lot more regarding digital images that we’re not going to get into here, such as the color space used (sRGB, Adobe RGB, D65 greyscale, etc.). These are important discussions, but it’s just too much to get into.

      “Standard” TIFFs are generally up to 16-bpc/48-bpp. Once you exceed 16-bpc you start getting into TIFFs that are fairly “non-standard,” and the vast majority of editing software will have a difficult time with them. In fact, most image editors probably won’t be able to open them at all. That said, the TIFF standard is so flexible that if one has a need or desire to generate higher bit depth images, such as 32-bpc/96-bpp color images, then it can certainly be done. But again, most commercial image editors will not get along with them very well, if at all. Photoshop would probably be an exception, and would likely have no issues opening them, assuming they were generated according to the TIFF spec.

      Also, once you get past 16-bpc images it really is an issue of diminishing returns. In real-world terms, there is generally very little practical purpose or gain to be had working with, say, 32-bpc images instead of 16-bpc. Now, using 16-bpc versus 8-bpc? Yes, the gain is worth it and there is a perceivable improvement in quality. But using 32-bpc versus 16-bpc? In general, no, there isn’t a point, as no perceivable improvement in quality is likely to be seen, whether the final output is to be viewed digitally or in print. As an aside, keep in mind that almost all computer monitors in use by your “everyday Joe” are 24-bit “true color” displays. That’s 8-bpc.

      Regarding modern digital cameras not shooting natively in TIFF, that’s because there’s really no need. All they need to be able to do is create a lossless file that contains 100% of the information captured by the sensor. That’s precisely what proprietary RAW files are/do, and they typically take up quite a bit less space than their lossless TIFF counterparts would (even if the TIFFs are compressed). From those RAW files, one can easily produce a lossless DNG (an open Adobe standard directly based on the TIFF standard) or a lossless TIFF itself to work with in one’s choice of software. Many digital cameras do natively create DNGs, versus the various closed/proprietary RAW formats. Additionally, working with RAW files in some of the modern, so-called “RAW editors” does offer some distinct advantages over using traditional photo editing software.

      Regarding how many “bits” scanners are advertised as having, you have to be very careful, as manufacturers will often say whatever they can (without explicitly lying) in order to sell their product. In other words, it’s oftentimes more marketing propaganda baloney than anything else. In terms of “bits,” the thing that really matters with scanners is the quality of the hardware ADCs (analog-to-digital converters). Over time, chip designs have become smaller and cheaper, allowing the ADCs used in commercial scanners to go from being 8-bit in the very early days to being 16-bit today. In between there were scanners that employed 10-bit, 12-bit, and 14-bit ADCs.

      You said “most scanners can do 24 bit color.” If you’re talking about them outputting 24-bit-per-pixel (i.e. 8-bit-per-channel) images, yes, of course they can. This is “JPEG quality.” But if you’re saying they can natively generate 24-bpc (i.e. 72-bpp) images, then, no, I don’t think so. I could be wrong, but to my knowledge there is no commercial scanner that exists, or has ever existed, which uses anything greater than 16-bit ADCs. This means the highest bit depth images they can natively produce, without adding padded or interpolated data to the digital file, is 16-bpc/48-bpp. This is why the advertised “color depth” of virtually any recently manufactured consumer scanner is “48-bit.” That’s 16-bpc/48-bpp for color film, and 16-bit (period) for B&W film scanned in “B&W mode” to produce a single-channel grey level file (of course, you can scan B&W film using a “color mode,” if desired, and get 48-bpp output files — this is useful in some situations).

      And, once again, to go any higher really does become another “diminishing returns” situation. Furthermore, let’s pretend there was a scanner that utilized 24-bit ADCs and thus could natively output 24-bpc/72-bpp images. Such files would be very much “non-standard.” I know of no image editor that would even open such a file. If such a file was generated, and was a TIFF (it would almost have to be), maybe Photoshop could open it. Maybe. But again, no such scanner exists to my knowledge.

      Hopefully that was helpful, and didn’t add to the confusion of all these topics.

      Take care, Andy.

  2. Andy Umbo

    As far as I know, and as far as I’ve been able to research, it always refers to “bits per channel,” so a 24-bit scanner is 72 bits across three channels.

    1. P

      See my response to you above.

      1. Andy Umbo

        P, I think you gloss over the RAW vs. TIFF settings for professional photographers. As a 40-year film photographer, I lit and exposed transparency film for reproduction, and delivered that item to the ad agency. You make the assumption that I want to spend time working on a RAW image, which, at least in the fly-over, I’m not getting paid for, and is not my area of expertise anyway! Most professional photographers, at least in my era, did not get into photography to spend hours looking at a computer screen!

        When working in large urban centers, like when I work in SFO, it is easy to use a “color service” to work on files to get them prepped for an ad agency, if you decide to work that way, and the fee is picked up by the agency. The work flow usually means you’re using a two card slot pro camera, shooting jpeg to one, and RAW to the other. The ad agency gets the jpegs and decides which images they want processed through from RAW to TIFF, which is carried out by the color service, or the “digital wrist” at your studio.

        In the “fly-over”, I’ve tested the output of my camera at different settings for contrast and color settings, and know what it can do. Then I shoot TIFF output from the camera for direct delivery to the ad agency, just like shooting transparencies of old. No intermediate steps I’m not getting paid for! If this wasn’t a “thing”, then top of the line pro cameras wouldn’t still have a TIFF setting (which they mostly all do), and at the dawn of digital, most all the cameras had a TIFF setting, until they realized the target market only shot jpeg anyway.

        I’m perfectly willing to admit that I do NOT understand the “bit” color file setting. Every time I read an article, people are going back and forth arguing about the terminology of saying something like “I was shooting 16 bit”: whether that was a 16-bit-per-color-channel file, or a total 16-bit color file? I’ve always assumed that in camera, it’s a 16-bit total color file (and in many cases 14-bit anyway!). Again, as far as I’m concerned, it’s the “pre-press” world and not mine!

        This from The Adobe Community site:

        I would never even consider saving master files as anything less than 48-bit PSD/TIFF. That’s just a given. For exactly twice the file size, you get 16384 individual levels per channel to work with, instead of a mere 256. That’s a bargain if you ask me.

        Jpeg is not a working format. It is a final delivery format where bandwidth is a consideration, such as for web and email. A jpeg should never be resaved if avoidable.

        My master file archive (mainly PSD) is by now around 20 000 files and several terabytes. Plus about 60 000 raw files. Yes, I’ve had to change drives and migrate the whole thing several times, but that’s just the way it is. On the good side I’ve never seen a hard drive failure – I don’t keep them that long.

        1. P

          Andy — Sorry, I didn’t intend to gloss over anything.

          I understand your frustration. But let me ask you this: Did you develop your own slides back in the day before delivering them to the ad agencies? My guess is no, probably not. Either the ad agencies had to develop your rolls/sheets of film on-site themselves upon receiving them from you (unprocessed), or they had to be processed by an outside lab before delivery was made to the agencies. My guess is the latter. Right? Either way, someone somewhere had to process those undeveloped rolls/sheets of film into slides before the ad agency could evaluate them.

          In the digital age, I see the requirement of processing RAW files as being not much different. If you don’t process the images yourself, that’s fine, but someone somewhere will have to before the ad agency can evaluate your shots. It correlates directly with film processing, albeit admittedly far more irritating. You state as much yourself by saying you give RAW files to a “color service” to perform such work at times. From the standpoint of being an intermediate step, how is that really any different than a lab developing your slides?

          I’d say shooting TIFFs and delivering those directly to the ad agency is really nothing like shooting transparencies, though, for the precise reason that there isn’t an intermediate step. Again, the rolls/sheets of film had to be processed before delivery, or by the agency’s on-site lab (if they had one). Only then could they be evaluated. Finished, processed slides don’t “pop out” of film cameras like TIFFs do on digital cameras that support such (although that would be awesome!). You seem to be overlooking that film has to be developed before it can be viewed, too, just like RAW files. An intermediate step was always required in the days of film (excluding Polaroid).

          That said, I do think the film workflow of yesterday was a simpler and better one. I have no doubt about that. I don’t like digital, and I don’t enjoy staring at computer screens either. I get it. And I do wish all digital cameras across the board would output TIFFs directly, if for no other reason than I don’t like proprietary file formats.

          Out of curiosity, have you tried asking the ad agencies you typically work for today if they’ll accept slides instead of digital? You’d think if the film was processed and scanned by a pro lab, and quality TIFF scans were supplied to the agency alongside the physical slides themselves, that they’d be more than happy to have you shoot actual film for them. At least that’s what I’d think, but I’m in no way familiar with that world.

  3. Mark

    If I’m reading you right, your article seems to be at odds with what The Darkroom lab says; they scan at 72dpi, always! This has been a controversy before.
    From their FAQ: “Our scans are 72 dpi. It sounds low, but it’s because it’s the pixel dimensions that matter, not the dpi by itself. If you get your scans and are still concerned, convert them to 300dpi in PhotoShop. Make sure you uncheck the “Resample Image” checkbox. You’ll see the image stays the same. For example, an image that is 5″ x 7″ at 300dpi is the same resolution as a photo that is 29″ x 21″ at 72dpi”.

    1. Andy Umbo

      Hi Mark….just an aside, I used to use a scan back on a 4X5 view camera at a large photo studio. We were always told, before the final scan, to crop the image on the computer screen to match the layout requirements, then set the size for 200% of the layout final size, and set the dpi for 300 dpi! Then fire off the final scan. It had something to do with not creating moiré patterns with the line/screen used for printing the catalog.

      Not to editorialize, but this was in an era between film and digital, when nobody knew anything about anything! I’ve always maintained that all this is a function of “pre-press”, and not anything any photographer ever knew anything about. The compression of multiple skills into one job function is a product of late 1990’s “corporate think”, where we started to see job requirements that expected photographers to know the job of pre-press scanners, and retouchers, as well as highly skilled lighting and composition elements; which in most cases, even today, means someone is going to be bad at two of those things!

      BTW, 72 dpi is what I was always told was screen resolution for internet uses. If a lab is making you 72 dpi scans, they’re assuming your usage will be for the internet. I know a number of labs making 300 dpi as normal, but their scans ain’t cheap!

    2. P

      Mark, it’s not at odds with what The Darkroom says at all. But they don’t “scan” at 72 DPI, as stated. This is part of the verbiage that creates so much confusion, and it’s precisely why I wrote this to begin with. If they “scanned” a piece of film at 72 DPI, say a standard 35mm frame, which is 36×24 mm in size, then the resulting output scan would be roughly 102×68 pixels. How’s that for laughable?

      The 72 DPI they’re referring to is simply a piece of metadata embedded in their scan files (which are JPEGs) upon the creation of said files. It could be anything. It could be 72, 100, 120, 300, 600, or even 1234 DPI, and it wouldn’t matter. It has no bearing whatsoever on the quality of the digital image itself — “the scan.” None. Regardless of the DPI value they choose to embed (or in this case that the Noritsu software embeds by default) in the file’s metadata, the pixel resolution itself does not change. It’s fixed based on the optical resolution setting of the scanner (which, yes, is measured in DPI, as discussed above). The actual DPI value that they “scan at” has nothing to do with the aforementioned embedded metadata. Absolutely nothing.

      All that the DPI value embedded in the scan file’s metadata tells you is what size print (i.e. the physical dimensions) that digital image will produce if it’s printed at that embedded value (which it doesn’t have to be), where one pixel in the digital image correlates to one dot in the physical print. That’s its only significance. For example, if you have a 3000×2000 pixel image with an embedded DPI value of 300, whatever image editor you’re using will tell you that a print from that image, if physically printed at the embedded value of 300 DPI, will be 10×6.67 inches. If you change the embedded DPI value to 600, leaving the digital image data itself alone (i.e. without re-sampling it), then your software will tell you that the physical print size will be 5×3.33 inches. If the embedded DPI metadata value is 72, then your software will tell you the physical print size will be approximately 42×28 inches. And so on and so forth.

      In other words, a 3000×2000 pixel image with a 72 DPI embedded metadata value is the same quality as, and will have an identical file size to, any of the following (which are the same image):

      3000×2000 pixels (300 DPI embedded value), or
      3000×2000 pixels (600 DPI embedded value), or
      3000×2000 pixels (1234 DPI embedded value), or
      on and on and on…

      They’re ALL still 3000×2000 pixels and contain identical image data. The embedded DPI value is absolutely irrelevant.
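      A tiny Python sketch of the one thing the embedded value does change, the computed print size:

      ```python
      # Physical print size if the image is printed 1:1 at the embedded DPI.
      def print_size(pixels_w: int, pixels_h: int, embedded_dpi: int):
          return pixels_w / embedded_dpi, pixels_h / embedded_dpi

      for dpi in (300, 600, 72):
          w, h = print_size(3000, 2000, dpi)
          print(f"{dpi} DPI -> {w:.2f} x {h:.2f} inches")
      # 300 DPI -> 10.00 x 6.67 inches
      # 600 DPI -> 5.00 x 3.33 inches
      # 72 DPI -> 41.67 x 27.78 inches
      ```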

      Factually, the DPI that they “scan at” is the optical resolution of the scanner itself, as discussed above. In the case of The Darkroom’s “Enhanced” scans, which have a pixel resolution of 3072×2048 pixels, this is a bit shy of 2200 DPI. The maximum pixel resolution that their Noritsu scanner can produce from a standard 35mm film frame is 6774×4492 pixels (their “Super” scans). This equates to the maximum optical scanning resolution of that scanner (a Noritsu HS-1800, if I’m not mistaken) being roughly 4800 DPI, which again has absolutely nothing to do with the arbitrary DPI value embedded in the output scan file’s metadata. I don’t know how much more I can stress that, or how many more ways it can be said.
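      And going the other direction, the optical DPI implied by a scan’s pixel count falls straight out of the film dimensions:

      ```python
      MM_PER_INCH = 25.4

      # Optical scanning DPI implied by a scan's pixel count and film size.
      def implied_scan_dpi(pixels: int, film_mm: float) -> float:
          return pixels / (film_mm / MM_PER_INCH)

      print(implied_scan_dpi(3072, 36))  # ~2167, The Darkroom's "Enhanced" scans
      print(implied_scan_dpi(6774, 36))  # ~4779, their "Super" scans (~4800 DPI)
      ```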

      As a side note — If the embedded DPI (i.e. the arbitrary print “output” value) in a digital image (which is a film scan) is the same as the optical resolution at which the film frame was scanned (i.e. the optical “input” value), then the size of “a print” made at the embedded DPI value, as indicated by your image editing/viewing software, will indeed be the actual physical dimensions of the piece of film from which that digital scan was made. Honestly, in this day and age, it would probably make far more sense if the optical DPI used to scan a piece of film were the same DPI value embedded in the file’s metadata. I think it would alleviate a ton of confusion.

      1. Andrew Inglis

        Hi P, I’ve only just found this post, but this is a great explanation – my local scanning lab says the same thing (regarding ‘scanning at 72’) but your explanation is very clear and helpful, thanks!
        Could I ask a follow up question (if this thread is still live anyway!) I now want to send a jpeg for printing at my local print shop. I will be ordering a 6×9 print and my file dimensions are 2400×3600, so I know I have enough pixel density for a nice print (>300ppi). But my question is, do I need to adjust the dpi metadata of the file, or can I just ignore and leave it alone at 72?
        And on a related note, when using the Lightroom print module, should I set the ppi to 300 for printing? Or again can I ignore it?

        Thanks!
        Andy

  4. Scott Bennett

    My plasma TV has rectangular pixels. It would occasionally give me fits trying to get the aspect ratio correct, especially displaying from a computer.

    1. Jim Grey

      I’m trying to think if I’ve ever knowingly owned a screen with rectangular pixels! I can’t think of a time.
