The technology, which marries Meta’s smart Ray-Ban glasses with the facial recognition service PimEyes and some other tools, lets someone automatically go from a face to a name, a phone number, and a home address.
For any scenario short of studio lighting, there is objectively much less information in the captured image.
You’re also dramatically underestimating how truly fucking awful phone camera sensors actually are without the crazy amount of processing phones do to make them functional.
No. I have worked with phone camera sensors quite a bit (see above regarding evaluating facial recognition software…).
Yes, the computation is a Thing. A bigger Thing is just accessing the databases to match the faces. That is why this gets offloaded to a server farm somewhere.
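To make the offloading point concrete, here is a minimal sketch of the usual split: a compact embedding model can run near the camera, while the expensive nearest-neighbour search over every enrolled face is what demands the server farm. Everything below (`embed`, `server_side_match`, the 128-dimension size) is illustrative, not any real service’s API.

```python
import numpy as np

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding network: returns a unit vector."""
    v = np.resize(face_pixels.astype(float).ravel(), 128)
    n = np.linalg.norm(v)
    return v / n if n else v

def server_side_match(query: np.ndarray, database: np.ndarray) -> int:
    """Cosine-similarity nearest neighbour over every enrolled face:
    O(N * d) work per query against a database of millions, which is
    why it lives on a server farm and not on the glasses."""
    return int(np.argmax(database @ query))
```

The embedding step is cheap and fixed-cost; the matching step scales with the size of the database, so that is the part that gets shipped off-device.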
But the actual computer vision and source image? You can get more than enough contours and features from dark skin no matter how much you desperately try to talk about how “difficult” black skin is without dropping an n-word. You just have to put a bit of effort in to actually check for those rather than do what a bunch of white grad students did twenty years ago (or just do what a bunch of multicultural grad students did five or six years ago but…).
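For what “a bit of effort” can look like in code, here is a toy per-tile contrast normalisation in the spirit of CLAHE-style preprocessing: low-contrast detail that a single global stretch or threshold would flatten survives a local stretch. The function name and tile size are made up for illustration; no production pipeline is being quoted.

```python
import numpy as np

def local_contrast_normalize(img: np.ndarray, tile: int = 8) -> np.ndarray:
    """Stretch each tile to the full [0, 1] range so that locally
    low-contrast detail (e.g. features in shadow) is preserved rather
    than crushed by a single global normalisation."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = out[y:y + tile, x:x + tile]
            lo, hi = patch.min(), patch.max()
            if hi > lo:
                out[y:y + tile, x:x + tile] = (patch - lo) / (hi - lo)
    return out
```

A patch whose values span only a narrow band comes out spanning the full range, so downstream feature extraction has contours to work with.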
It’s exactly the same reason phone cameras do terribly in low light unless they use obscenely long exposures (which can’t resolve detail in anything moving). The information is not captured at sufficient resolution.
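The exposure trade-off can be put in back-of-the-envelope terms: photon arrival is Poisson, so shot-noise-limited SNR goes as the square root of the collected signal, and a subject reflecting k× fewer photons needs roughly k× the exposure time to reach the same SNR. The photon rates below are placeholders for illustration, not measured values.

```python
import math

def snr(photons: float) -> float:
    """Shot-noise-limited SNR of a Poisson signal: sqrt(N)."""
    return math.sqrt(photons)

def exposure_for_target_snr(photon_rate: float, target_snr: float) -> float:
    """Exposure time (seconds) so that rate * t yields the target SNR."""
    return target_snr ** 2 / photon_rate

bright_rate = 40_000.0  # photons/s/pixel reflected, placeholder number
dark_rate = 10_000.0    # 4x fewer photons reflected, placeholder number
```

With these placeholder rates, hitting the same target SNR takes 4× the exposure for the darker subject, which is exactly the moving-subject problem described above.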
Rhetorical question (because we clearly can infer the answer) but… have you ever seen a black person?
A bit of melanin does not make you into some giant void that breaks all cameras. Black folk aren’t doing long exposure shots for selfies or group photos. Believe it or not, RDCWorld doesn’t need to use night-vision cameras to film a skit.
You can keep hand-waving away the statement of fact that lower precision input is lower precision input.
And yes, for actual photography (where people are deliberately still long enough to offset the longer exposure required), you do actually need different lighting and different camera settings to get the same quality of results. But real cameras are also capable of capturing far more dynamic range without leaning heavily on postprocessing guesswork.
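A tiny sketch of the dynamic-range point, assuming an idealised linear sensor: once highlights clip at the sensor’s ceiling, no amount of postprocessing can tell the clipped values apart again. The luminance values and sensor ceilings below are illustrative only, not specs of any particular camera.

```python
def capture(scene_luminance: float, sensor_max: float) -> float:
    """Idealised linear sensor: everything above sensor_max clips to it."""
    return min(scene_luminance, sensor_max)

# A scene with highlights at 50x and 100x the midtone (placeholder values):
scene = (1.0, 50.0, 100.0)
small_sensor = [capture(v, 10.0) for v in scene]   # narrow dynamic range
large_sensor = [capture(v, 200.0) for v in scene]  # wide dynamic range
```

The narrow-range sensor records both highlights as the same clipped value, so the distinction between them is simply gone; the wide-range sensor keeps it without any guessing.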
It’s not racist to understand physics.