On the retention pond

Yashica-12
Fujifilm Velvia 50

You are forgiven if you think I went out into the country and found some old swimming hole to make this photograph. It’s actually the retention pond behind my house. Directly beyond it is I-65 — the drone of all the trucks makes this anything but a peaceful place.

I sent this film to Fulltone Photo for processing and scanning. They did a fine job with the processing, but I was disappointed that the scans were only 1024×1024 pixels, at 72 dots (or pixels) per inch. That resolution makes good snapshot prints, but any larger than that and things start looking pixelated.

Many labs offer enhanced scans with much larger pixel dimensions at that same 72 dpi. I haven’t been able to figure out how to make my flatbed scanner do that; instead, I adjust dpi to get the pixel dimensions I want, because for my online work pixel dimensions are everything. I recently shot a roll of Kodak Tri-X in the Yashica-12 and scanned the negatives at 2400 dpi. I got images a whopping 5192 pixels square. That’s more like it — I can crop deeply if I want and still have an image with lots of surface area to share online.

I have a lot to learn yet about scanning and the interplay between dots/pixels per inch and raw pixel dimensions.

If you’d like to get more of my photography in your inbox or reader, click here to subscribe.

Film Photography

single frame: On the retention pond

The retention pond behind my house, on colorful Fujifilm Velvia 50.


16 thoughts on “single frame: On the retention pond”

  1. Reminds me of the Gilligan’s Island lagoon, which was a small pond on the CBS Studio Center lot not far from a busy freeway and the hustle and bustle of the movie lot. LOL

  2. P says:

    Jim,

    In terms of digital resolution, PPI (pixels per inch) and DPI (dots per inch) are largely meaningless values.

    Effectively, PPI is just the pixel density for a given monitor or screen (obviously based on its native pixel resolution, pixel shape, and the physical panel dimensions), while DPI just specifies what physical size a print made from a given digital image will be *if* using the DPI information included in the image’s metadata at 1:1. The more dots laid down per inch for a print, the better the quality (obviously), but DPI doesn’t change the quality of the digital image.

    There is far too much confusion out there on the internet regarding this topic, due to people having no clue what they’re talking about.

    As an example, a 3000×3000 pixel digital image could have an encoded DPI of 300 (for a smaller, fine quality print) or 72 (for a much larger, low quality print).

    A 3000×3000 pixel digital image at 300 DPI will make a physical print that’s 10×10 inches.

    The same 3000×3000 pixel image at 72 DPI will make a physical print that’s nearly 42×42 inches.

    At 600 DPI, the physical print would be 5×5 inches, and be of super-fine quality (assuming the digital image itself is of excellent quality).

    But, this is the important thing, the digital resolution of the source image is still 3000×3000 pixels. Whether its metadata states it’s 72, 300, 600, or some other DPI has no bearing on the quality of the digital file itself. None.

    The same 3000×3000 pixel image encoded @ 72 DPI, or 300 DPI, or 600 DPI, are *all still* 3000×3000 pixels (9 MP). Their quality as a digital image is absolutely identical.
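
    The arithmetic behind those print sizes is simple enough to sketch in a few lines of Python (the function name is my own illustration, not from this comment):

```python
def print_size_inches(pixels: int, dpi: int) -> float:
    """Physical print size, in inches, of one side of an image at a given DPI.

    The DPI value only sets how large the print comes out; it never
    changes the pixels in the digital file itself.
    """
    return pixels / dpi

# The same 3000x3000 pixel file, printed at three different DPI values:
for dpi in (300, 72, 600):
    side = print_size_inches(3000, dpi)
    print(f"{dpi} DPI -> {side:.1f} x {side:.1f} inches")
```

    Run it and the three sizes above fall out: 10×10, about 41.7×41.7, and 5×5 inches, all from one identical 9 MP file.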

    So, when film labs state their scanner’s output resolution is AxB pixels at such and such DPI, the DPI is entirely meaningless in that context. The pixel dimensions are all that matter. Now, if they’re talking about the optical resolution of the scanner in use, then that’s an entirely different conversation. But the latter is virtually never discussed, especially by labs.

    I’m convinced most labs have no clue what they’re talking about, or doing. They just use the auto settings on their scanning systems without any real understanding of what’s taking place under the hood with the hardware or software, and they’re too lazy to even learn the most basic of concepts regarding how their systems function. This is a serious problem, which leads to poor quality scans for no reason. It’s just total incompetence, and is inexcusable for film labs.

    Hopefully this helped clear a few things up for you, and whoever else happens to stumble across it.

    Take care!

    • P says:

      One more thing:

      When you scanned your square medium format negatives with your flatbed at 2400 DPI (optical/input scanning resolution — NOT the same as the output DPI contained in a digital image’s metadata, as discussed above), which have an actual frame size of roughly 56×56 mm, you should have gotten images that were 5291×5291 pixels. The fact yours were slightly smaller at 5192×5192 pixels means your scanner and/or software are cropping a bit of the frame area. That’s okay, but I just thought I’d point it out. If it were me, I wouldn’t want to lose that, no matter how minute it is.

      Furthermore, the *actual* optical resolution of most consumer flatbed film scanners is much, much less than the stated value (marketing propaganda). The true, effective optical scanning resolution (what it’s genuinely capable of resolving) is probably somewhere between one-half and one-fourth what Canon/Epson claim these scanners can achieve. To determine exactly what it is would require test charts, looking at line-pair measurements, and so on. But for flatbeds it’s always substantially less than what’s claimed. This means you could probably scale your scans by 50% (both sides) and still have 99%+ of the image information contained in the original (bloated) file, if scaled correctly. Additionally, applying conservative USM sharpening *after* scaling the image to its final output resolution (again, 50% is probably a good value to try out) would likely vastly improve the appearance of your images on computer monitors and device screens.

      Your 5192×5192 pixel images (scanned at 2400 DPI, as you stated), scaled to 50% would yield images that are 2596×2596 pixels. For your scans, I’d probably recommend using the Lanczos-3 scaling algorithm, versus bicubic or nearest neighbor (or any other). Then, I’d play around with USM sharpening until you’re pleased. For web sharing, I think that’s a much better approach than trying to sharpen the huge pixel resolution image natively output by the scanner that’s already inherently lacking in what was *actually* resolved in terms of image detail in the first place.

    • Thanks for helping me understand the relationship between pixel dimensions and pixel density better. I don’t know why it’s so challenging for me to get my head wrapped around it, but it is.

      So for digital display it’s all about pixel dimensions. It’s only in print that pixel density matters.

      When I scan medium-format negatives, I use VueScan and I hand-draw the selection box around each frame individually. I do this because VueScan is terrible at detecting frames. So I get some variance from frame to frame.

      Interesting thought, to scale my scans by 50% and then apply sharpening. I’ll try that next time I have MF negs to scan. I’m reluctant to scale my 35mm scans similarly because then they’d be quite small. Maybe if I scanned those at 3600 or 4800 dpi first?

      • P says:

        Jim,

        It’s not your fault it’s challenging to wrap your head around. It’s because way too many people have littered the internet with outright wrong information about what these things are, or at the very least they incessantly use the terms improperly. Therefore, it is genuinely confusing, albeit for no good reason.

        PPI has to do with monitors, televisions, cell phone screens, tablet screens, and so on. It has nothing to do with digital image files, other than how much physical screen real estate a given digital image will take up when viewed on a given panel at 100% (1:1), due to how many total pixels that image contains. PPI is simply the ratio of what the native resolution of a panel is, a·b pixels, to the linear physical dimensions, x·y inches, of the panel.

        The horizontal PPI is a_pixels/x_inches = a/x pixels/inch = a/x pixels per inch = a/x PPI.

        The vertical PPI is b_pixels/y_inches = b/y pixels/inch = b/y pixels per inch = b/y PPI.

        These days most display technologies employ pixels that are square and as such the pixel height and width are the same, so there’s no need for two separate horizontal and vertical PPI values. Thus, we just say a monitor or screen is such and such PPI, a single value, because it’s the same horizontally and vertically.

        The closer you are to a screen, the greater the PPI needs to be to provide an image of acceptable quality. For example, the PPI of a big screen LCD TV is far less than that of your cell phone.
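
        The per-axis PPI formula above can be sketched in Python. The panel dimensions below are illustrative guesses for a 65″ TV and a ~6.4″ phone, not measured values:

```python
def panel_ppi(width_px: int, height_px: int, width_in: float, height_in: float):
    """Horizontal and vertical pixel density (PPI) of a display panel:
    native pixel count divided by physical panel dimension."""
    return width_px / width_in, height_px / height_in

# Illustrative (not measured) panel dimensions:
# roughly a 65" 16:9 4K TV, and roughly a 6.4" 16:9 1080p phone.
tv_h, tv_v = panel_ppi(3840, 2160, 56.7, 31.9)
ph_h, ph_v = panel_ppi(1920, 1080, 5.6, 3.15)
print(f"TV: ~{tv_h:.0f} PPI, phone: ~{ph_h:.0f} PPI")
```

        With square pixels the horizontal and vertical values match, and the phone (viewed up close) comes out around five times denser than the big TV.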

        Print DPI (different than optical scanning DPI, which I’ll discuss shortly) is the exact same concept as PPI, but instead of it being the density of pixels on a display, it’s the density of the dots laid down on paper (or whatever physical medium you’re working with) to form a physical image, a print.

        The closer you are to a print, the greater the print DPI needs to be to provide an image of acceptable quality. For example, the DPI of a billboard advertisement is far less than that of your 4×6 vacation photos.

        The PPI of a panel is analogous to the DPI of a print, but they are not the same thing, nor should they be used interchangeably as it just causes confusion, although they frequently are (hence the confusion).

        Now, let’s talk quickly about the *other* DPI, the one that has to do with scanning hardware. In simplistic terms (it’s a bit more involved than the following discussion, but what follows is sufficient as a means to wrap your head around it), this DPI value is a measurement of the optical scanning resolution of the scanner itself. Look at it as the number of dots per inch the scanner can *allegedly* resolve when scanning a given piece of film. So, let’s say a scanner has a maximum optical scanning resolution rating of 3600 DPI. This means that for each linear inch of film, the scanner is capable of resolving 3600 dots of information (again, allegedly — the true, effective resolution will be less). Think about these individual dots of information captured becoming individual pixels once digitized for output in a standard digital image file format (i.e. “the scan”).

        So, for square medium format negatives (1:1 aspect ratio), which are 56×56 mm, this would yield the following:

        (3600 dots / inch) · (56 mm) · (1 inch / 25.4 mm) = 7937 dots.

        For the output digital image (i.e. “the scan”), this obviously equates to a pixel resolution of 7937×7937 pixels. Since the frame width and height are the same, only one calculation was needed.

        And for 35mm negatives (3:2 aspect ratio), which are 36x24mm, it would yield:

        (3600 dots / inch) · (36 mm) · (1 inch / 25.4 mm) = 5102 dots.

        and

        (3600 dots / inch) · (24 mm) · (1 inch / 25.4 mm) = 3402 dots. Of course, we could have just multiplied the width, calculated above, by two-thirds to get this (5102 · 2/3 = 3402), but I just thought I’d do the whole thing for the sake of clarity.

        For the output digital image (i.e. the scan), this obviously equates to a pixel resolution of 5102×3402 pixels. Since the frame width and height are different, two calculations were needed.
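
        That frame-size arithmetic fits in a small Python helper (the function name is my own illustration):

```python
MM_PER_INCH = 25.4

def scan_pixels(dpi: int, film_mm: float) -> int:
    """Pixels along one film dimension when scanning at a given optical DPI."""
    return round(dpi * film_mm / MM_PER_INCH)

print(scan_pixels(3600, 56))                        # 7937 (6x6 medium format side)
print(scan_pixels(3600, 36), scan_pixels(3600, 24)) # 5102 3402 (35mm frame)
print(scan_pixels(2400, 56))                        # 5291 (the 2400 DPI scans above)
```

        The last line reproduces the 5291×5291 pixel figure given earlier for a full 56×56 mm frame scanned at 2400 DPI.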

        As stated multiple times previously, however, the actual/effective optical resolution of the scanner will be less than the rated value. For flatbeds, it’s much, much less.

        Anyways, I hope that adds additional clarity.

        If it were me using your scanner, I’d probably scan everything at a DPI that provides scans that are at least 4x the pixel resolution (total area, not just a linear measurement of one side; 2x the length and width = 4x total area) of what I desired my final output resolution to be.

        So, for square, medium format images, if I desired my final files to be 2500×2500 pixels, I’d scan them at 2400 DPI, at least, as 16-bpp TIFFs. I would then do all of my levels editing, dust/scratch spotting, cropping to a 1:1 ratio (if necessary), and so on at that original large resolution, but NOT yet sharpen anything. I’d save these as my “master” images, again as 16-bpp TIFFs. Then, I’d resize them to 2500×2500 pixels using the Lanczos-3 method, and finally sharpen them to my liking as the final step using USM. That would be my final image for output, for sharing online, which I’d save as JPEGs. I’d follow the same practice for 35mm. So, if I want my final 35mm images to be 2000×3000 pixels, I’d scan at 4400 DPI, at least, and then follow the same procedure as I did for medium format (cropping 3:2, though, instead of 1:1, obviously).

        To summarize, you said, “So for digital display it’s all about pixel dimensions. It’s only in print that pixel density matters.”

        Kind of, but not really. This confusion is caused by people, even professional film/print labs, misusing these terms. Hopefully, after what I wrote above, you understand that now.

        In print, it’s *dot* density that matters combined with the pixel resolution of the digital image that’s being printed (along with its overall quality, or the amount of image information and detail it contains), not *pixel* density. *Pixel* density, which is NOT the same thing as the pixel resolution (i.e. AxB pixels) of a digital image, applies ONLY to physical displays (e.g. monitors, TV’s, cell phone screens, etc.). That said, it needs to be made clear that when printing from a digital image, the number of *dots* per inch (DPI) the printer is laying down is related to the pixel count (pixel resolution, AxB pixels) of that digital image because if there aren’t enough pixels in the original image to match 1:1 (at least) the number of dots that need to be laid down on the paper when printing at a given DPI and physical print size combination, then the digital image will have to be upscaled to a higher resolution (pixel count) in order to match the DPI/print dimensions the printer is operating at. This is a problem, as it means interpolation will occur, artifacts are likely to present themselves, and image quality will be greatly diminished. If an image is already lacking in pixel resolution *OR* resolved image detail to begin with, there’s nothing that can be done to salvage it. You can’t create detail that didn’t exist in the first place.

        Let me know if you have any other questions. I’m happy to help if I can.

        • This is all incredibly useful information. Would you grant me permission to combine these comments and edit them into a good flow, and make a post out of them? I’d credit you (as “P”).

        • P says:

          Sure, Jim! You’re more than welcome to take this information, organize and compile it however you’d like, and use it in a future post. That’s totally fine!

        • P says:

          There’s one more thing I thought about that I should probably mention. Really, there’s a lot more, but in the present context (i.e. scanning film and working with film scans in the digital domain, with the ultimate destination being web viewing) most of it wouldn’t be helpful. It would likely just cause unnecessary confusion. So I’m going to stick to trying to describe things in simplified terms, as I have been, even though in reality there is a whole lot more to these topics. Hopefully nobody will come along and start a quarrel with me for my simplified explanations. To those that would, I’m fully aware I’ve left out A LOT of details and minutiae.

          Some software, including Photoshop, uses PPI (not DPI), along with physical dimensions, in its dialogs as a means to specify the eventual physical print size and print quality of an image upon creating a new one, or resizing an existing one. This is done by specifying the PPI value and the dimensions (in inches) of an image that’s destined to go to print, without specifying, or even caring about, the pixel resolution/count of the image being created. Once given those two variables, Photoshop (and other imaging software that works the same way) will calculate *for you*, and automatically generate, an image of the appropriate pixel resolution/count to work with in the digital domain.

          For example, if someone wants to create a *new* graphic in Photoshop, and the sole intention in creating that graphic is to ultimately print it (not super common these days, but it used to be), then upon creating a new image they can specify PPI (higher value = better quality) and the desired physical print dimensions in order to automatically create an image of adequate pixel resolution/count for their eventual printing purposes. This means they need not have known the pixel resolution/count in advance, or *ever* for that matter. In this context, PPI is a measurement of how many pixels in the *digital* domain will correspond to a linear inch in the *physical* domain when printing a digital image.

          I’ve never cared for this notation, seen in Photoshop and other pieces of software, as it tends to cause confusion; it’s an impossibility to put a pixel on a piece of paper. Still, it is fine, and even useful, for certain work, particularly for graphic designers making graphics from scratch with the sole intention of taking them to print. In their world, they have no need to focus on a digital image’s pixel resolution/count. That’s very different from what most of us are doing these days, especially us amateur photographers working with film scans or digital camera images that we intend to share only by digital means via a blog, on Flickr, etc.

          So, let’s say a graphic designer is about to make a new image from scratch, for the intention of printing it, and they know they’re going to be printing it at 9×12 inches, and the printer they’re working with requests an image that’s 300 PPI/DPI to produce a good quality print (typical). Then, upon creating that image in Photoshop, they can specify physical dimensions of 9×12 inches and a PPI of 300, and Photoshop will then automatically create an image with a pixel resolution of 2700×3600 pixels for them. Due to their purpose in creating the image for print, the graphic designer may never look at or care about the pixel resolution/count as they do their work. It really doesn’t matter for what they’re doing (i.e. creating a graphic for print).

          I wanted to bring this up because I know you use Photoshop. However, I can’t stress enough that in the context of film scans or photos from a digital camera that are to be viewed only on a monitor/screen, this usage is likely to only cause confusion for most people. Once again, when you’re working exclusively in the digital domain, on *already existing* digital photos, the embedded PPI/DPI (yes, sadly, the two terms are used entirely interchangeably these days, in this context) are effectively irrelevant, as I stated previously.

          In the digital domain, pixel resolution/count is *all* that matters. The DPI (or in the context above, PPI) value embedded in the digital photo’s metadata has absolutely no bearing on the quality of the image when viewed on a monitor or screen. None. An image that’s 3000×3000 pixels is still 3000×3000 pixels, regardless of whatever DPI/PPI value happens to be embedded in its metadata. That’s the important takeaway from all this.

          As far as I’m concerned, any time it can be helped, terminology that describes information in the digital domain should *stay* in the digital domain, and likewise terminology that describes information in the physical domain should *stay* in the physical domain. Using them interchangeably, or crossing the boundary between the two domains when using multiple terms (as above), is just asking for trouble, in my opinion. Historically, a concerted effort was made to keep them separate as much as humanly possible. If you read old books on graphic design, or computer/monitor/printer technologies in general, you’ll see that. But over time, it’s all been blurred together, and has had incredibly negative consequences. Modern books and references, as well as what’s floating around the internet, are mostly confusing fiascos.

          At the end of the day, pixel density (expressed as PPI) absolutely CANNOT apply to a piece of paper (or any other print medium) in real-world terms, regardless of its usage by certain software packages. After all, a printer cannot lay down pixels onto a physical substrate. No, a printer lays down dots. The only physically existing item that can actually, truthfully be described in terms of its pixel density (PPI), is a computer monitor, or other type of screen/display that’s used to render digital images. If you always keep that in mind, it’ll help you wade through all the confusion on the internet regarding PPI, DPI, pixel density, dot density, pixel resolution, pixel count, pixel dimensions, physical dimensions, print resolution, and so on and so forth.

          Anyways, sorry if I caused more confusion than clarity on these topics. I hope I didn’t. Also, I apologize if I was overly repetitive in my attempt to avoid further confusion.

          Take care, Jim!
