Posts: 6,453 | Thanked: 20,983 times | Joined on Sep 2012 @ UK
#494
Originally Posted by endsormeans
A glance at the colour yellow ...
That rectangle is not yellow

Sorry, endso, but Wiki is right. It does sound like pseudoscience.

[tl;dr]
Molecules do not "vibrate" to perceive light. Light is a wave, that much is true. But we do not perceive each individual crest and trough of that wave (the same is true of sound, BTW, and the frequencies involved there are roughly a trillion times lower). Our light perception rests on the fact that light is a particle (a "photon") as well as a wave, and that a photon hitting an electron delivers a discrete packet (a "quantum") of energy to that electron, which, in certain materials, can be harvested as an electric impulse. Light sensors in digital cameras work on exactly the same principle, only with different materials.
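To put a number on that "quantum": the energy per photon is E = h·f, with f = c / wavelength. A quick sketch (standard physical constants; the 580 nm figure for "yellow" and the silicon band gap are my illustrative assumptions, not from the post above):

```python
# Illustration of the per-photon energy jolt described above: E = h*f.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    f = C / (wavelength_nm * 1e-9)  # frequency in Hz
    return H * f / EV               # energy in electronvolts

# Yellow light (~580 nm) carries about 2.1 eV per photon -- comfortably
# more than the ~1.1 eV band gap of silicon, which is why silicon camera
# sensors can harvest it as an electric impulse.
print(round(photon_energy_ev(580), 2))  # -> 2.14
```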

Even the electrons do not "vibrate" in rhythm with the frequency of the incoming light. They merely receive a quantum of energy proportional to that frequency. You might say that they "vibrate" in rhythm with the number of incoming photons: the more photons per second, the more electric impulses are generated, which we perceive as light intensity. But even then we do not perceive each individual impulse, only their average gathered from a huge number of electrons.
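The "average over a huge number" point can be sketched numerically: model each photosite as counting random photon arrivals and compare a single site's noisy count with the average over many. All the numbers here (rate of 100 photons per exposure, 10,000 sites) are made up for illustration:

```python
import random

# Model photon arrivals at one photosite as a Poisson process via
# exponential inter-arrival times, over a unit exposure.
def photon_counts(rate_per_exposure, n_sites, rng):
    counts = []
    for _ in range(n_sites):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(rate_per_exposure)
            if t > 1.0:
                break
            k += 1
        counts.append(k)
    return counts

rng = random.Random(42)
counts = photon_counts(rate_per_exposure=100.0, n_sites=10000, rng=rng)
avg = sum(counts) / len(counts)

# Individual sites fluctuate by roughly sqrt(100) = 10 counts (shot
# noise), but the average over many sites lands very close to the true
# rate of 100 -- the averaging the paragraph above describes.
print(min(counts), max(counts), round(avg, 1))
```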

Which brings us neatly back to phone cameras and the megapixel war. More pixels per square mm means smaller pixels. Smaller pixels mean a smaller collection area with fewer electrons available to receive photons. Fewer electrons mean less chance of catching a rare photon (= lower sensitivity in low light) and less chance of any free electrons remaining when photons arrive in a large torrent (= a lower cap on maximum light intensity, resulting in overexposed or "burnt out" areas).

To preempt Dave's next question, no, there is no way around it. These are physical limits that no science fiction can overcome. Ever. You can have either more megapixels or more dynamic range (the range between the darkest and lightest areas of the image). Choose one.
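The megapixels-vs-dynamic-range trade-off can be put in rough numbers. A pixel's full-well capacity (how many electrons it can hold before saturating) scales roughly with its area, while read noise stays about constant, so shrinking pixels shrinks dynamic range. The electron counts below are rounded, illustrative values, not any real sensor's specs:

```python
import math

# Engineering dynamic range in photographic stops (factors of two):
# how many doublings fit between the noise floor and saturation.
def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    return math.log2(full_well_electrons / read_noise_electrons)

# Big pixel (e.g. a dedicated camera) vs small pixel (high-MP phone);
# an 8x smaller full well costs exactly log2(8) = 3 stops.
large = dynamic_range_stops(full_well_electrons=40000, read_noise_electrons=4)
small = dynamic_range_stops(full_well_electrons=5000, read_noise_electrons=4)
print(f"large pixel: {large:.1f} stops, small pixel: {small:.1f} stops")
```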

Come to think of it, I have just talked myself into supporting multiple light sensors, one for each portion of the dynamic range. In the basic configuration you could have two: a primary with many small pixels for the middle of the dynamic range and a secondary with fewer, larger, more sensitive pixels for the extremes. The post-processing could find any dark or overexposed pixels in the primary and replace them with pixels from the secondary. I assume this is what manufacturers mean by "the other sensor is for low-light images" - a rare case of marketing being (almost) right. It is a trick, since the extremes end up at a lower resolution, but most people won't notice anyway.
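The merge step described above can be sketched in a few lines. Everything here is a toy assumption - 8-bit brightness values, the thresholds, and a secondary image already interpolated up to the primary's resolution - not any manufacturer's actual pipeline:

```python
PRIMARY_MAX = 255  # saturation level of the primary sensor
DARK_FLOOR = 8     # below this, primary data is mostly noise

def merge(primary, secondary_upscaled):
    # Both lists hold per-pixel brightness at the primary's resolution;
    # the secondary is assumed already upscaled to match.
    out = []
    for p, s in zip(primary, secondary_upscaled):
        if p >= PRIMARY_MAX or p <= DARK_FLOOR:
            out.append(s)  # extremes: trust the more sensitive secondary
        else:
            out.append(p)  # midtones: keep the sharper primary
    return out

primary = [0, 5, 120, 200, 255, 255]
secondary = [2, 6, 118, 199, 230, 240]  # pretend it kept highlight detail
print(merge(primary, secondary))  # -> [2, 6, 120, 200, 230, 240]
```

The burnt-out 255s and the near-black pixels get replaced; everything in between keeps the primary's full resolution - which is exactly why the trick mostly goes unnoticed.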
[/tl;dr]
__________________
Русский военный корабль, иди нахуй! (Russian warship, go fuck yourself!)
 
