
Wednesday, October 14, 2015

A Device

A refinement of the previous remarks.

Envision a device, call it an L32. It's a medium-sized tablet, maybe a bit on the thick side. Tripod socket on the long side and another on the short side. Good quality display. Camera... stuff on the back. The details don't matter, it's computational photography stuff.

Here's how you use it:

Take a bunch of pictures. There's depth information natively in the pictures, so selecting objects in the scene is as simple as tapping them. Composite together a picture out of parts in a few taps and swipes. Set the focus point with a tap and select depth of field. Here's the subject, here's the background, focus on the eyes. Shallow DoF... no, a little more. There.
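One plausible way the "tap to select" interaction could work on a device with per-pixel depth is a flood fill over the depth map: grow the selection outward from the tapped pixel to neighbors at a similar depth. A minimal sketch in plain Python (the function name, tolerance, and toy depth map are all mine, not anything Light has described):

```python
from collections import deque

def select_object(depth, tap, tolerance=0.1):
    """Grow a selection mask outward from a tapped pixel, adding
    4-connected neighbors whose depth is within `tolerance` of the
    tapped depth. `depth` is a row-major list of lists of floats
    (meters); returns a same-shaped boolean mask."""
    rows, cols = len(depth), len(depth[0])
    r0, c0 = tap
    target = depth[r0][c0]
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if mask[r][c] or abs(depth[r][c] - target) > tolerance:
            continue
        mask[r][c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# A toy 4x4 depth map: a near subject (~1.0 m) against a far wall (3.0 m).
depth = [[3.0, 3.0, 3.0, 3.0],
         [3.0, 1.0, 1.1, 3.0],
         [3.0, 1.0, 1.0, 3.0],
         [3.0, 3.0, 3.0, 3.0]]
mask = select_object(depth, tap=(1, 1), tolerance=0.2)
# The mask covers the four near pixels and none of the wall.
```

The real thing would segment on more than raw depth (color, texture, learned priors), which is where the "that's skin, not chrome" corrections come in.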

Pick a lighting setup from your library or start fresh. Adjust positions, color, and diffusion to taste to relight your composite. Five lights. Move the hair light back, main up, up, up, there. Dim the background light a touch, move the fill in a little. Dark card over there to control that. Now warm the color up a little... Yep.
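The simplest model consistent with "move the lights around in post" is Lambertian shading against surface normals recovered from the depth data: each virtual light contributes in proportion to how squarely it faces the surface. A toy single-point sketch (all names and numbers are illustrative, and a real renderer would add falloff, shadows, and specularity):

```python
import math

def relight(normal, albedo, lights):
    """Lambertian shading at one surface point. `normal` is a unit
    surface normal; `lights` is a list of (direction, intensity)
    pairs, with direction pointing from the surface toward the light."""
    shade = 0.0
    for direction, intensity in lights:
        norm = math.sqrt(sum(d * d for d in direction))
        d = [x / norm for x in direction]
        # Clamp at zero: a light behind the surface contributes nothing.
        shade += intensity * max(0.0, sum(n * x for n, x in zip(normal, d)))
    return albedo * shade

# Surface facing the camera (+z); a key light up and to the left, a dim fill.
key  = ((-1.0, 1.0, 1.0), 1.0)
fill = ((1.0, 0.0, 1.0), 0.3)
value = relight(normal=(0.0, 0.0, 1.0), albedo=0.8, lights=[key, fill])
```

"Move the fill in a little" then just means editing a direction or intensity tuple and re-shading every pixel.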

Save it as a 16-bit TIFF to the SD card. With, because why not, a complete set of mask layers for the various objects in the frame. Off to Photoshop we go for retouching.

Is that worth something to you? If the Light guys are to be believed, we can build this now. The first edition would be slow, battery life would be awful, and sometimes it just wouldn't do that great a job. You might have to do some poking at surfaces in the picture and cueing the software "that's skin, dummy, not chrome!" here and there. But the second model would be better.

5 comments:

  1. That this kinda thing is going to happen has been obvious for a long time. Its consequences, however...

    But that's not what I'm really interested in. I think we're on the cusp of the birth of a whole new art form, what with the improvements in holographic tech and VR headsets, along with cameras that capture not just slices of reality but whole blocks of it.

    Something that video games have been the heralds of, in a way.

    Remember the fuss kicked up about that Call of Duty game where you shot up an airport? Now, imagine that, with real guns, real reproductions of people, accurate to the last hair follicle... Even a still experience of that would be incredibly powerful, even dangerous.

    Anyone who came close to mastering this medium would be on par with a god.

    ReplyDelete
  2. The focus / depth of focus part can already be done today with the Lytro camera, and I would think that the rest could also be done with the Lytro camera and suitable software. The Lytro Illum, for example, makes focus stacking automatic and should therefore be of interest for product photography.

    Now, the crux of the matter is that the Lytro camera is not a commercial success. Neither was 3D photography in its various incarnations in the 20th century, BTW.

    For some reason, the masses have never been interested in this kind of device. All they want are devices capable of producing flat, roughly postcard-sized reproductions of life events, sharp front to back.

    I suggest you try reading about all the failed improvements on photography in the 20th century; there have been quite a few (at a patent office near you, for example). It has nothing to do with internet forum users' horizons being limited by the standard SLR and Photoshop. It has everything to do with the general public's only interest being flat, roughly postcard-sized reproductions of life events, sharp front to back. Something they can hold in their hand and show to their friends to illustrate a conversation.

    I don't think the Light L16 camera will be a success.

    ReplyDelete
  3. Nobody wants 3D output. Nobody ever has, except as a curiosity. It's possible this will change, but it's not something I am personally smelling in the wind.

    The attentive reader will have noted that I'm not talking about 3D output. The Device uses 3D information to perform digital lighting in post, to produce 2D output (TIFF, to be precise).

    ReplyDelete
  4. Indeed nobody wants 3D output. But what I was saying is that nobody wants to record 3D either. The reason is that people would not know what to do with the data. In your idea of changing lighting in post:
    - the majority of people are not good at visualising in 3D, so they would not be able to use the software to move the lights around easily
    - the majority of people are not interested in changing the lights in their pictures, but in cheap and easy testimonials.

    (Actually, it is even worse than that: the majority of people do not perceive the quality of light, check how they light their own homes...).

    I don't see a mass market for this device. I may be wrong, that is always the risk with predictions.

    ReplyDelete
    Replies
    1. I don't see a mass market for this device either. This piece is, let me gently note, a follow-on to the previous piece, which may provide some useful context.

      Delete