How can you tell whether an emerging technology such as artificial intelligence is worth investing time in when there is so much hype published daily? We're all enamored with some of the amazing results: AlphaGo beating the champion Go player, advances in autonomous vehicles, voice recognition by Alexa and Cortana, and image recognition by Google Photos, Amazon Rekognition, and other photo-sharing applications.
When big, technically strong companies like Google, Amazon, Microsoft, IBM, and Apple show success with a new technology and the media glorifies it, businesses often believe the technology is ready for their own use. But is that true? And if so, where is it true?
These are the types of questions CIOs consider every time a new technology starts becoming mainstream:
- Is this a technology we need to invest in, research, pay attention to, or ignore? How do we explain to our business leaders where it applies to the business and whether it represents a competitive opportunity or a potential threat?
- For our more inquisitive employees, how do we explain what the technology does in understandable terms and separate the hype from today's reality and its future potential?
- When certain employees show interest in exploring these technologies, should we support them, which problems should we steer them toward, and which aspects of the technology should they invest time in learning?
- When vendors show up marketing the fact that their capabilities are driven by the emerging technology and that expert PhDs on their staff support the product's development, how do we distinguish what has real business potential from services that are too early to leverage and from offerings that are hype rather than substance?