Imagine a situation several years from now: you are hiking through a forest and point your high-resolution camera phone at some foliage. Almost instantly, the smartphone lists every species in view, from the bushes to the caterpillars eating the leaves, and even pulls up images of the butterflies those caterpillars will grow into.
This would be powered by deep learning in the cloud, taking advantage of the enormous computing power of GPUs. Such a feat is far beyond the image-recognition capabilities of the average human, and challenging even for some wildlife experts.
We already have pattern-recognition AIs on our phones, such as the music-identification app Shazam and Warblr, an app that identifies bird species from the sounds they make.
The technology used in Google’s driverless cars can identify many of the things the car might encounter, such as pedestrians, other cars and cyclists. By imaging its surroundings in real time, a driverless car can recognise a danger and react accordingly to save lives, even in situations where a human driver might not be paying full attention.
Many species of insects and other animals are thought to remain undiscovered in remote parts of the world. Explorers could take such an app into the jungle with them: when they spot an insect, they could photograph it, and the AI system could cross-check the image against a large database and tell them instantly whether it is likely to be a new species.
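At its core, this kind of cross-check could work by comparing an embedding of the photograph against a database of embeddings for known species, and flagging low-similarity queries as possible discoveries. The sketch below is a minimal, hypothetical illustration of that idea: in a real system the embeddings would come from a deep neural network run on the image, whereas here they are small random vectors so the example is self-contained, and all species names and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "database": one 128-dimensional embedding per known species.
# In practice these would be produced by a CNN trained on labelled photos.
known_species = ["monarch", "swallowtail", "cabbage_white", "red_admiral", "peacock"]
db = rng.normal(size=(5, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)  # unit-normalise each row

def match_species(query, threshold=0.5):
    """Return the best-matching known species, or flag a possible new one.

    query: embedding of the photographed insect (1-D array).
    threshold: minimum cosine similarity to count as a confident match
               (an illustrative value, not a tuned one).
    """
    q = query / np.linalg.norm(query)
    sims = db @ q                    # cosine similarities against the database
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return known_species[best], float(sims[best])
    return "possible new species", float(sims[best])

# A query near the "monarch" entry should match it confidently.
noisy_monarch = db[0] + 0.05 * rng.normal(size=128)
print(match_species(noisy_monarch))

# A random embedding is very unlikely to match anything in the database.
print(match_species(rng.normal(size=128)))
```

The design choice here is the similarity threshold: anything the database cannot explain well enough is surfaced to the explorer as a candidate new species rather than being forced into the nearest known label.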
There is also huge potential, and many foreseeable use cases, for computer vision in medicine, a field that is progressing rapidly. Computers will get better and better at identifying medical problems in scans or histological samples, including problems that even an experienced pathologist might overlook.
We are not talking about superhuman general artificial intelligence here, only superhuman narrow AI in the area of visual understanding.
What use cases for this technology can you think of?