Using AI to Make Knowledge Workers More Effective


New AI capabilities that can recognize context, concepts, and meaning are opening up surprising new pathways for collaboration between knowledge workers and machines. Experts can now provide more of their own input for training, quality control, and fine-tuning of AI outcomes. Machines can augment the expertise of their human collaborators and sometimes help create new experts. Because these systems more closely mimic human intelligence, they are proving more robust than the big-data-driven systems that came before them. And they could profoundly affect the 48% of the US workforce who are knowledge workers, and the more than 230 million knowledge-worker roles globally. But to take full advantage of the possibilities of this smarter AI, companies will need to redesign knowledge-work processes and jobs.

Knowledge workers (people who reason, create, decide, and apply insight in non-routine cognitive processes) largely agree. Of more than 150 such experts drawn from a larger global survey on AI in the enterprise, almost 60% say their old job descriptions are rapidly becoming obsolete in light of their new collaborations with AI. Some 70% say they will need training, reskilling, and on-the-job learning to meet the new requirements of working with AI. And 85% agree that C-suite executives must get involved in the overall effort of redesigning knowledge-work roles and processes. As those executives embark on the job of reimagining how to better leverage knowledge work through AI, here are some principles they can apply:

Let human experts tell AI what they care about. Consider medical diagnosis, where AI is likely to become pervasive. Often, when AI offers a diagnosis, the algorithm's reasoning isn't obvious to the doctor, who ultimately must offer an explanation to a patient. This is the black box problem. But now Google Brain has developed a system that opens up the black box and provides a translator for humans. For instance, a doctor considering an AI diagnosis of cancer might want to know to what extent the model considered various factors she deems important: the patient's age, whether the patient has previously had chemotherapy, and more.

The Google tool also allows medical experts to enter concepts they deem important into the system and to test their own hypotheses. For example, an expert might want to see whether considering a factor the system had not previously taken into account, like the condition of certain cells, changes the diagnosis. Says Been Kim, who is helping develop the system, “A lot of times in high-stakes applications, domain experts already have a list of concepts that they care about. We see this repeat over and over again in our medical applications at Google Brain. They don’t want to be given a set of concepts — they want to tell the model the concepts that they are interested in.”
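Kim's published research in this area, TCAV (Testing with Concept Activation Vectors), gives a concrete sense of the mechanics. The sketch below illustrates the core idea under simplifying assumptions: that you can extract a model's internal activations for a set of examples, along with the gradient of the class score with respect to those activations. All names and the toy data here are illustrative, not Google Brain's actual API.

```python
# A minimal sketch of concept testing in the spirit of TCAV. Illustrative
# only: the names below are not Google Brain's actual API, and the data
# are random stand-ins for real activations and gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Learn a direction in activation space separating examples of a
    human-defined concept (e.g. 'has had chemotherapy') from random ones."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]
    return v / np.linalg.norm(v)  # unit normal to the separating boundary

def concept_sensitivity(gradients, cav):
    """Fraction of examples whose class score rises when activations move
    in the concept direction: a rough measure of how much the concept matters."""
    return float(np.mean(gradients @ cav > 0))

# Toy usage: 16-dimensional activations from some hidden layer.
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 16))  # concept examples
random_acts = rng.normal(0.0, 1.0, size=(50, 16))   # random counterexamples
grads = rng.normal(size=(200, 16))  # d(cancer score)/d(activations), per patient

cav = concept_activation_vector(concept_acts, random_acts)
print(f"Concept sensitivity: {concept_sensitivity(grads, cav):.2f}")
```

A score near 1.0 would tell the doctor that the model's prediction is consistently sensitive to the concept she cares about; a score near 0.5 suggests the concept plays little role in the diagnosis.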

Make models amenable to common sense. As cyber security concerns have mounted, organizations have increasingly instrumented their networks to collect data at various points and analyze threats. However, many of these data-driven techniques do not integrate data from multiple sources. Nor do they incorporate the common-sense knowledge of cyber security experts, who know the range of attackers and their diverse motives, understand typical internal and external threats, and can gauge the degree of risk to the enterprise.
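One way to incorporate that expert knowledge is to layer hand-authored rules over a data-driven detector's score. The sketch below is hypothetical: the event fields, rules, and weights are invented for illustration rather than drawn from any particular security product.

```python
# A hypothetical sketch of combining a data-driven anomaly score with
# expert common-sense rules. All fields, thresholds, and weights are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    anomaly_score: float     # 0..1, from a data-driven detector
    source: str              # sensor that observed the event
    actor_is_internal: bool  # insider vs. external actor
    asset_criticality: int   # expert-assigned, 1 (low) to 5 (crown jewels)

def expert_adjusted_risk(event: NetworkEvent) -> float:
    """Blend the statistical score with expert judgment about the enterprise."""
    risk = event.anomaly_score
    # Expert rule: insiders touching critical assets deserve scrutiny even
    # when the statistical model sees nothing unusual.
    if event.actor_is_internal and event.asset_criticality >= 4:
        risk = max(risk, 0.6)
    # Expert rule: scale risk by how much the asset matters to the business.
    return min(1.0, risk * (1 + 0.1 * event.asset_criticality))

# Toy usage: a quiet-looking event on a critical internal system.
event = NetworkEvent(anomaly_score=0.2, source="hr-database",
                     actor_is_internal=True, asset_criticality=5)
print(f"Adjusted risk: {expert_adjusted_risk(event):.2f}")
```

The design point is that expert knowledge, such as which assets are crown jewels and which actors merit extra scrutiny, can raise or reweight a score the statistical model would otherwise report as benign.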

Article Credit: HBR
