Google on Wednesday announced the launch of the Cloud Vision application programming interface (API).
As a tool for developers, the API can be used to add machine learning and image recognition to applications. Ultimately, the API could be applied to a range of devices — from robots to appliances — giving them the ability to see and understand the context of images.
In other words, many future gadgets may one day be able to identify your face when you walk into a room and react according to your expression.
With the Cloud Vision API, images can be classified into thousands of categories, faces can be detected along with their associated emotions, and printed words can be recognized in multiple languages. The REST API can analyze images stored anywhere, or integrate with images stored in Google Cloud Storage, Google said.
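As a rough illustration of how such a REST API is typically called, the sketch below builds a JSON annotation request asking for labels, faces, and text in one image. The endpoint URL, feature names, and request shape are based on the generally available v1 `images:annotate` interface; the limited-preview version described here may differ, so treat this as an assumption rather than the documented contract.

```python
import base64
import json

# Assumed v1 endpoint; the limited-preview endpoint may differ.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_annotate_request(image_bytes, feature_types, max_results=10):
    """Build the JSON body for a batch annotation request.

    `feature_types` is a list of strings such as "LABEL_DETECTION"
    (classify into categories), "FACE_DETECTION" (faces plus emotion
    likelihoods), or "TEXT_DETECTION" (printed words).
    """
    return {
        "requests": [
            {
                # Image bytes are sent inline as base64-encoded text.
                "image": {
                    "content": base64.b64encode(image_bytes).decode("ascii")
                },
                "features": [
                    {"type": t, "maxResults": max_results}
                    for t in feature_types
                ],
            }
        ]
    }

# Example: request the three capabilities the article describes.
body = build_annotate_request(
    b"\x89PNG...",  # placeholder image bytes
    ["LABEL_DETECTION", "FACE_DETECTION", "TEXT_DETECTION"],
)
print(json.dumps(body, indent=2))
```

In practice this body would be POSTed to the endpoint with an API key or OAuth credentials, and the response would contain one annotation result per requested feature.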
The API is available now in limited preview through the Google Cloud Platform, product manager Ram Ramanathan wrote in a blog post.
“Advances in machine learning, powered by platforms like TensorFlow, have enabled models that can learn and predict the content of an image,” Ramanathan wrote. “Our limited preview of Cloud Vision API encapsulates these sophisticated models as an easy-to-use REST API.”
Last month Google open-sourced its artificial intelligence engine TensorFlow in a bid to broaden adoption of its machine learning system. TensorFlow is used in Google applications such as Google Photos and Google Translate, as well as features such as smart reply and search. Google also uses TensorFlow to train its neural networks faster and improve products.