I’m nearsighted and have worn eyeglasses since I was eight years old (except for a short time when I tried, but didn’t really take to, contact lenses).
It’s amazing that slices of transparent material can be precisely shaped to correct the vision of people like me. One wonders how people with blurry vision got by before corrective lenses were invented. The development of eyeglasses isn’t very old in historical terms: It dates to 13th-century Italy, where eyeglasses were devised as a reading aid for monks and scholars, who were among the few literate people before modern times. Eyeglasses didn’t become a mass phenomenon until the inability to read became a major handicap for the average (literate) person.
Of course, vision has always been useful for many tasks besides reading, such as recognizing faces and avoiding head-on collisions. Those are tasks for which modern technologies -- in the form of deep-learning algorithms -- are proving extraordinarily useful, though they haven’t yet been embedded in standard eyeglasses.
Deep learning is the new lens for a world that needs help bringing an overabundance of visual stimulation into focus. As I noted here, computer-vision applications powered by deep-learning algorithms are becoming common in a growing range of real-world scenarios. They will become ubiquitous in smartphones; process streaming body-cam feeds in the cloud; accelerate facial recognition and other cognitive functions via wearable prosthetics; serve as the eyes that drive autonomous vehicles; and even handle the delicate task of distinguishing the beautiful from the banal.
More of us are going to let the devices do our seeing for us. And if you consider all of those trends, it’s very likely that the blind will be the biggest beneficiaries. Deep-learning-powered computer vision has the potential to become the next major step in vision correction.
I’m not talking about brain implants that stimulate the visual cortex directly, though they are certainly an exciting new technique. I’m talking about a far less radical or invasive possibility: wearables that leverage augmented reality, haptics, and deep-learning technologies to help the blind function as if they could see. This is essentially what Intel is working on, as described in this recent article.
This got me thinking: If deep-learning-powered computer vision can keep autonomous vehicles from smashing into things, why can’t it perform the same service for human beings walking around?
Let’s say a visually impaired person wears body cams that take in a 360-degree view of the immediate environment. Then let’s imagine that those cameras feed algorithms for real-time collision avoidance, geospatial navigation, and situational awareness. Finally, imagine that the individual is wearing wristbands or other haptic, tactile, or auditory devices that, guided by those algorithms, tell the visually impaired person exactly which direction to walk, when to stop and for how long, when to turn left vs. right, how high to lift a hand to grab a doorknob, and so on.
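To make the idea a bit more concrete, here’s a minimal sketch in Python of the decision layer that might sit between the vision algorithms and the haptic wristbands. Everything here is hypothetical: the `Obstacle` type, the distance thresholds, and the cue names are invented for illustration, not drawn from any real product.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """A single detection from the wearer's cameras (hypothetical format)."""
    bearing_deg: float  # angle relative to the wearer's heading; 0 = straight ahead
    distance_m: float   # estimated distance to the obstacle

# Illustrative thresholds -- a real system would tune these carefully.
STOP_DISTANCE_M = 1.0
CAUTION_DISTANCE_M = 3.0
PATH_HALF_WIDTH_DEG = 30.0

def haptic_cue(obstacles):
    """Map a list of detections to one cue: 'stop', 'veer_left', 'veer_right', or 'proceed'."""
    # Consider only obstacles roughly in the walking path.
    in_path = [o for o in obstacles if abs(o.bearing_deg) <= PATH_HALF_WIDTH_DEG]
    if not in_path:
        return "proceed"
    nearest = min(in_path, key=lambda o: o.distance_m)
    if nearest.distance_m <= STOP_DISTANCE_M:
        return "stop"
    if nearest.distance_m <= CAUTION_DISTANCE_M:
        # Steer away from the side the obstacle is on.
        return "veer_left" if nearest.bearing_deg > 0 else "veer_right"
    return "proceed"
```

The point of the sketch is the division of labor: the deep-learning models produce detections, and a simple, auditable rule layer turns them into a small vocabulary of cues the wristbands can render as vibrations.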
Could these technologies make seeing-eye dogs obsolete? I don’t want to sound a sour note, but the service animals of the world might want to start looking for other employment.