
Random question, but why don't RGB-UV CMOS sensors exist for VFX?

Snadegg

In the 1960s, Disney used the sodium vapor process to make a traveling matte by splitting off a specific wavelength of light onto its own film strip. At the time, this was a much better alternative to bluescreens.

 

Modern CMOS sensors allocate 50% of their photodiodes to green, 25% to blue, and 25% to red, arranged in a checkerboard-like Bayer pattern. This is great for green screens, but the problem is that displays aren't made the same way so these images have to go through a debayering process. What I'm wondering is why nobody (at least nobody I can seem to find) has thought to make a sensor with red, green, blue, and UV (or other wavelength) photodiodes for the purpose of creating an easy matte with a specific wavelength of light.
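As a rough illustration of that allocation, here's a minimal NumPy sketch of an RGGB mosaic (a generic Bayer layout, not any particular sensor's):

```python
import numpy as np

# Build an RGGB Bayer mosaic mask for a tiny 4x4 sensor and confirm
# the photodiode allocation described above: 50% green, 25% red, 25% blue.
H, W = 4, 4
bayer = np.empty((H, W), dtype="<U1")
bayer[0::2, 0::2] = "R"   # red sites (top-left of each 2x2 cell)
bayer[0::2, 1::2] = "G"   # green sites (top-right of each 2x2 cell)
bayer[1::2, 0::2] = "G"   # green sites (bottom-left of each 2x2 cell)
bayer[1::2, 1::2] = "B"   # blue sites (bottom-right of each 2x2 cell)

total = bayer.size
print((bayer == "G").sum() / total)  # 0.5
print((bayer == "R").sum() / total)  # 0.25
print((bayer == "B").sum() / total)  # 0.25
```

Debayering exists precisely because each site records only one of those channels, and the other two have to be interpolated from neighbors.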

 

There already exist codecs that support 4-channel color, and since the fourth channel would be used for creating a matte, the UV channel could just be recorded as the alpha channel. Since the sensor would no longer use a Bayer pattern, the relatively computationally expensive debayering step wouldn't be necessary anymore (I'm assuming VFX artists want to work with footage as close to uncompressed as possible, so something like ProRes 4444; I'm aware EXR is the standard). Just map RGB-UV to RGBA. For VFX artists, I would think this would be heaven: essentially pre-keyed footage straight out of the camera.
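A minimal sketch of what that mapping could look like, assuming such a hypothetical RGB-UV sensor hands you a 4-channel float frame with UV last, and that a bright UV reading marks the backing:

```python
import numpy as np

def rgbuv_to_rgba(frame, uv_threshold=0.5):
    """frame: float array (H, W, 4) in [0, 1], channels R, G, B, UV.

    Threshold the UV plane into a matte and store it as alpha --
    no debayering, no chroma key. The threshold value is made up.
    """
    rgba = frame.copy()
    # Wherever the UV backing light is strong, the pixel is background:
    # alpha 0 there, alpha 1 on the (UV-dark) foreground subject.
    rgba[..., 3] = (frame[..., 3] < uv_threshold).astype(frame.dtype)
    return rgba

# Toy 1x2 frame: left pixel lit by the UV backing, right pixel a subject.
frame = np.array([[[0.2, 0.3, 0.4, 0.9],
                   [0.5, 0.4, 0.3, 0.1]]])
out = rgbuv_to_rgba(frame)
print(out[0, 0, 3], out[0, 1, 3])  # 0.0 1.0
```

A real keyer would still want soft edges and spill suppression; a hard threshold like this is just the simplest possible illustration of "pre-keyed out of the camera."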

 

I understand that IR and UV are typically filtered out before light hits the sensor since the RGB channels are relatively sensitive to those wavelengths, but is there no wavelength of light usable in this scenario?

 

Just curious why this isn't really a thing and I can't seem to find anything relating to this online.

Not certain what real-world application this could be used for, but if nothing exists, then it's time to create it yourself.

 



On 10/28/2022 at 1:24 AM, Snadegg said:

Modern CMOS sensors allocate 50% of their photodiodes to green, 25% to blue, and 25% to red, arranged in a checkerboard-like Bayer pattern.

Going for green makes sense, since the human eye is most sensitive to that colour, if memory serves. Ideally, your sensor would imitate the human eye if that's who the footage is intended for.

On 10/28/2022 at 1:24 AM, Snadegg said:

What I'm wondering is why nobody (at least nobody I can seem to find) has thought to make a sensor with red, green, blue, and UV (or other wavelength) photodiodes for the purpose of creating an easy matte with a specific wavelength of light.

But as to why the UV thing probably ain't happening, I have an educated guess: the passivation layers on image sensors say nay to this idea. Silicon dioxide blocks UV; the transmittance drops off spectacularly once you go below 400 nm: https://www.researchgate.net/figure/Transmission-spectrum-of-silicon-oxide-film-produced-at-RF-power-of-200-Watt-and-O-2-to_fig5_239918953 Other popular options (polyimide, nitride, etc.) all have a similar issue. And while you can make a silicon diode quite sensitive to UV and switch to more exotic passivation strategies, the infrared sensitivity stays high, so I'd suspect the UV signal would still be drowned out by all the infrared light hitting the sensor unless you have a pretty aggressive optical filter.


Another issue is that 400 nm is problematic: it's photometrically dim to our eyes but not to sensors. It wouldn't be fun to balance.
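That photometric gap can be made concrete with the CIE luminosity function; the two V(λ) values below are approximate entries from the standard photopic table:

```python
# CIE 1924 photopic luminosity function V(lambda), approximate values
# from the standard tables (these two entries are all we need here):
V = {400: 0.000396, 555: 1.0}

# A 400 nm source needs on the order of 2500x the radiant power of a
# 555 nm source to look equally bright to a human -- while a silicon
# sensor registers both wavelengths without that penalty.
ratio = V[555] / V[400]
print(round(ratio))  # ~2525
```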

 

Also, realize camera sensors aren't bandpass filters in any real sense. If you look at sensitivity charts for CMOS and CCD sensors (Canon vs. Sony), the RGB windows are really large. If you make the sensitivity really narrow for each channel, you get more accuracy but less sensitivity. Foveon/Sigma was the worst: no Bayer to deal with, but huge accuracy/sensitivity problems.
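A back-of-envelope sketch of that accuracy-vs-sensitivity trade-off: model a channel's spectral response as a Gaussian (a made-up stand-in for a real filter curve) and integrate it against flat, equal-energy illumination. Halving the response width roughly halves the light collected, so a more selective channel is also a dimmer one.

```python
import numpy as np

wavelengths = np.linspace(380, 700, 3201)  # visible range in nm, 0.1 nm steps
dw = wavelengths[1] - wavelengths[0]

def collected_signal(center_nm, width_nm):
    # Gaussian spectral response integrated over flat illumination.
    response = np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)
    return response.sum() * dw  # simple Riemann integral

wide = collected_signal(530, 50)    # broad green channel
narrow = collected_signal(530, 25)  # narrower, more selective version
print(narrow / wide)  # roughly 0.5
```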

 

A bigger question here is why the legacy RGB model has stuck around for so long. CCD and CMOS sensors need at least four sensor channels to cover the visible spectrum properly. Red is a big problem, especially with CMOS. 48-bit capture can address this somewhat, but conventional RGB sensors can't handle high luminance saturation levels without compromises.

