Medium Format, 3 GigaPixel Camera Puts It All On The Line (Sensor)

It’s a bit of a truism that bigger sensors lead to better pictures when it comes to photography. Of course, everyone who isn’t a photographer knows that moar megapixels is moar better. So, when [Gigawipf], aka [Yannick Richter], wanted to make a camera, he knew he had to go big or go home. And big he went: a medium format camera with a whopping 3.2 gigapixel resolution.

Now, getting hold of a sensor like that is not easy, and [Yannick] didn’t even try. The hack starts by tearing down a couple of recent-model Epson scanners from eBay to get at those sweet CCD line sensors. Yes, this is that classic hack: the scanner camera. Then it’s off to the oscilloscope and the datasheet for some serious reverse engineering to figure out how to talk to these things. Protocol analysis starts about four minutes into the embedded video and is worth watching even if you have no interest in photography.
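To give a flavor of what that kind of protocol analysis looks like: once you know which lines carry the pixel clock and which carry the parallel data bus, turning a logic-analyzer capture into pixel values is mostly a matter of latching the bus on each clock edge. The channel assignments and bus width below are assumptions for illustration, not details of [Yannick]’s scanner board.

```python
import numpy as np

def decode_parallel_bus(samples, clk_bit, data_bits):
    """samples: 1-D array of logic-analyzer words, one word per sample.
    Latches the data lines on each rising edge of the pixel clock."""
    clk = (samples >> clk_bit) & 1
    rising = np.where((clk[1:] == 1) & (clk[:-1] == 0))[0] + 1   # rising-edge indices
    words = samples[rising]
    # Reassemble the bus, least-significant data line first.
    return sum(((words >> bit) & 1) << weight for weight, bit in enumerate(data_bits))

# Synthetic capture: a 12-bit ramp on D0..D11 with the pixel clock on D15.
ramp = np.arange(0, 4096, 64, dtype=np.uint32)
capture = np.stack([ramp, ramp | (1 << 15)], axis=1).ravel()     # clock low, then high
decoded = decode_parallel_bus(capture, clk_bit=15, data_bits=range(12))
assert np.array_equal(decoded, ramp)
print(decoded[:8])
```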

As for what the line sensor will be talking to, why, it’s nothing other than a Raspberry Pi 5, interfacing through a custom PCB that also holds the stepper driver. Remember, this is a line-sensor camera: the sensor needs to be scanned across the image plane inside the camera, line by line, just as it is in the scanner. He’s using off-the-shelf linear rails to do that job. Technically, we suppose you could use a mirror to optically scan the image across a fixed sensor, but scanner cameras have traditionally done it this way, and [Yannick] is keeping with tradition. Why not? It works.
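The scan loop itself is conceptually simple: home the carriage, then alternately read a line and advance the stepper until the whole image plane has been covered. Here is a minimal sketch of that idea; `Stepper`, `read_ccd_line()`, and the dimensions are hypothetical stand-ins, not [Yannick]’s actual firmware.

```python
import numpy as np

LINE_PIXELS = 10_000   # samples per CCD line (illustrative, not the real count)
SCAN_LINES = 12_000    # stepper positions across the image plane (illustrative)

class Stepper:
    """Placeholder for the stepper driven from the custom PCB on the Pi."""
    def home(self) -> None: ...
    def step(self, n: int = 1) -> None: ...

def read_ccd_line() -> np.ndarray:
    """Placeholder for one line of 16-bit samples read out from the CCD board."""
    return np.zeros(LINE_PIXELS, dtype=np.uint16)

def scan_frame(stepper: Stepper) -> np.ndarray:
    stepper.home()
    frame = np.empty((SCAN_LINES, LINE_PIXELS), dtype=np.uint16)
    for row in range(SCAN_LINES):
        frame[row] = read_ccd_line()   # expose and read one line at this position
        stepper.step()                 # move the sensor to the next line position
    return frame

if __name__ == "__main__":
    image = scan_frame(Stepper())
    print(image.shape)   # (12000, 10000): 120 Mpixel with these toy numbers, far short of the real 3.2 GP
```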

Since these images are going to be huge, an SD card in the Pi doesn’t cut it, so this is perhaps the only camera out there with an NVMe SSD. The raw data would be 19 GB per image, and though he’s post-processing on the fly to PNG, they’re still big pictures. There probably aren’t too many cameras sporting 8″ touchscreens out there, either, but since the back of the thing is so large, why not? There’s still a CSI camera inside, too, but in this case it’s being used as a digital viewfinder. (Most of us would have made that the camera.) The scanner cam is, of course, far too slow to generate its own previews. The preview camera goes onto the same 3D-printed mount as the line sensor, putting it in the same focal plane, so the real-time preview is what’s used to focus the camera.
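As a back-of-the-envelope check on that 19 GB figure, the numbers work out if you assume three color channels at 16 bits per sample (an assumption on our part, not a confirmed spec of the build):

```python
pixels = 3.2e9                          # 3.2 gigapixels
bytes_per_pixel = 3 * 2                 # RGB x 2 bytes per sample
print(pixels * bytes_per_pixel / 1e9)   # ~19.2 GB, in line with the figure above
```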

In many ways, this is the nicest scanner camera we’ve ever featured, but that’s perhaps to be expected: there have been a lot of innovations to facilitate this kind of build since scanner cams were common. Even the 3D-printed and aluminum case looks professional. Of course, a big sensor needs a big lens, and after deciding projector lenses weren’t going to cut it, [Yannick] sprang for Pentax 6×7 system lenses, which are made for medium format cameras like this one. Well, not exactly like this one: these lenses were first made for film cameras back in the ’60s. Still, they offer a huge image, high-quality optics, and manual focus and aperture controls in a format that was easy to 3D print a mount for.

Is it the most practical camera? Maybe not. Is it an impressive hack? Yes. We’ve always had a soft spot for scanner cameras, and in a recent double-CCD camera hack, we were lamenting in the comments that nobody was doing it anymore. So we’re very grateful to [Manawyrm] for sending in the tip.

17 thoughts on “Medium Format, 3 GigaPixel Camera Puts It All On The Line (Sensor)”

  1. hmm. i tried wading through the hackaday.io page, but i could not find the way he converted the analog signal from the ccd array to digital. that has always been the bottleneck in making an open source flatbed or film scanner.

    i hope that will be his next project, as i lack the knowledge to do that.

    1. I got served the video before I saw the article: it’s not just the sensor but the full CCD assembly from the scanner head. It’s got all the power and ADC guts already on it, so it’s just digital data over that flat flex. He did reverse engineer the comms enough to read that data out, though, along with the rest of the system.

      Part of the motivation for that was that most remotely recent scanners do a check/calibration with their own light source before scanning. You can’t easily shuck a recent scanner for the sensor and all its electronics without tricking it or trying to bypass that step.

      Agreed that an ADC with a good data rate is a limiting factor when trying to DIY a scanner; I’ve spent a decent chunk of time looking into it myself for a smaller-scale scanning camera. A lot of components have gotten a lot better, but really high-rate CCD drive/readout is still up near the realm of ASICs.

    2. I was waiting for the OP to weigh in.

      Based on the schematic and comments, it looks like it’s parallel digital out right from the CCD chip, and DMA into the Pi. No ADC. Or, rather, the ADC is in the sensor.

      Looks like it’s suffering from a paucity of bits though. You have to make up in noise dither what you don’t have in bit depth.
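For illustration, here’s a toy model of that dither-averaging trick (not anything from the camera’s actual pipeline): if the analog noise dithers the signal across quantization steps, averaging many coarse readouts recovers sub-LSB resolution.

```python
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.3                     # "analog" level in LSB units, between two codes
noise_lsb = 1.0                        # ~1 LSB of RMS noise acts as dither

for n_samples in (1, 16, 256):
    reads = np.round(true_level + rng.normal(0, noise_lsb, n_samples))  # quantized reads
    print(f"{n_samples:4d} samples -> mean code {reads.mean():.3f}")
# With enough averaged reads the mean converges toward 100.3, i.e. fractional-LSB
# information that a single low-bit-depth readout cannot carry.
```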

    3. Reading between the lines on this, it looks like he’s using the original S7R77S14F00A100 from the scanner as the AFE/ADC and clock generator for the sensor. Given the timings and speeds, that’s probably the most sensible (easiest?) route to take – those CCD sensors aren’t the easiest things to drive or digitise the output from.

      1. Yes, exactly, this was the approach. The whole original CCD board was used, along with the original ADC/clock chip.
        That allowed me to sniff the relevant settings from the original scanner in all resolution presets and to guess the relevant parameter registers for adjusting gain and offsets.
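A sketch of that register-guessing step, purely for illustration: log the register writes the original scanner firmware makes in each resolution preset, then diff the logs to see which addresses plausibly hold gain, offset, or timing settings. The addresses and values below are made up, not from the real scanner.

```python
def diff_registers(preset_a: dict[int, int], preset_b: dict[int, int]) -> None:
    """Print every register address whose value differs between two presets."""
    for addr in sorted(set(preset_a) & set(preset_b)):
        if preset_a[addr] != preset_b[addr]:
            print(f"reg 0x{addr:02X}: 0x{preset_a[addr]:02X} -> 0x{preset_b[addr]:02X}  (candidate)")

# Sniffed writes captured in two presets (fabricated example values).
writes_600dpi  = {0x01: 0x10, 0x05: 0x3A, 0x0A: 0x80, 0x0B: 0x22}
writes_4800dpi = {0x01: 0x10, 0x05: 0x9C, 0x0A: 0x80, 0x0B: 0x41}
diff_registers(writes_600dpi, writes_4800dpi)   # flags 0x05 and 0x0B as the ones that change
```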

  2. If you go back to the time of large format cameras, you are also going back to slower film, so a long exposure/scan time is not unrealistic. Portraits made with natural light often relied on bracing to help people hold still long enough. The artifacts that you would get if you moved during the scanned exposure would be quite different from simple motion blur, of course.

    All unsupported speculation, but: the mirror-scan proposal might create challenges in maintaining the same focal length across the entire frame. Remember that large format traditionally also means a longer focal length lens to get roughly the same perspective, which means shallower depth of field; part of the characteristic look of large format comes from using that narrow band of focus to isolate the subject from its surroundings. I’m not sure whether that would make it more sensitive, or less sensitive, to focal plane variation. It might be possible to compensate for that with a tilt/shift mount for the lens.

    (I’ve been doing an initial rehab on my grandfather’s 3×4 (quarter-plate) Graflex. Seems to be working well enough to justify professional tuning.)

  3. This is a very impressive project! And the resulting images are stunning. Congrats!

    I noticed that when he zooms into the scanned image around 27:50, there’s a kind of “echo” of the black text toward the bottom left for “LOW DISTORTION GENERATOR” on the left, and less or none by the time he reaches “HM-8030-4” on the right. Could it be some kind of internal reflection between the back of the lens and the CCD’s cover glass? Or something else? Getting rid of this would make the pictures even clearer and sharper.

    Keep up the good work, [Gigawipf]!!!

    Also a question: is there some kind of database available somewhere on the net listing which sensor is in which model of scanner?

    1. This might be due to the lens itself.
      I noticed the older 75mm one is a little softer than the newer 200mm lens, and it might have some internal reflections, or possibly it’s the IR filter.

      I am quite sure that this sensor is used at least in the V200, V30/V300, V37/V370 models according to the datasheets.

    1. Letting the earth do the scanning for you works; a friend of mine did exactly this in 1985.

      It doesn’t work very well though: The integration (“exposure”) time is too short.

      The earth spins at 15 arcseconds per second: if you want a (rather lackluster) resolution of 1 arcsecond, the exposure time is only 1/15th of a second. Uselessly fast for deep sky objects. Even for dimmer planets this is still too short. For the Moon it would work.

      There are “time delay integration” (TDI) chips, or so-called “pushbroom” imagers, that have dozens or hundreds of lines, and move the charge from one line to the next at the same rate that the image sweeps across the sensor, accumulating signal for a much longer integration time. These are often used for earth-observation satellites, sweeping a continuous swath of the earth below as the satellite zips by at 7-8 km/s. Ordinary single-line or snapshot images would need a 1/8000 s shutter speed to achieve 1-meter resolution. A TDI chip can integrate a meter-size pixel over multiple lines to get a more reasonable 1/125 s exposure time without motion blurring, even though the satellite moves 60 meters in that time.
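Working through the numbers in that comment, using round nominal values (an assumption; real orbits and sensors vary):

```python
earth_rate = 15.0             # arcseconds of sky drift per second of earth rotation
print(1 / earth_rate)         # ~0.067 s: max exposure for 1-arcsecond resolution

v_sat = 7500.0                # m/s, typical low-earth-orbit ground speed
pixel = 1.0                   # m, desired ground resolution
print(pixel / v_sat)          # ~1/7500 s single-line dwell time, hence the "1/8000 s" figure

tdi_lines = 60                # charge shifted through this many lines (assumed)
t_int = tdi_lines * pixel / v_sat
print(t_int, v_sat * t_int)   # ~1/125 s integration, ~60 m of ground motion in that time
```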
