Building AI systems that work is still hard

Even with the support of AI frameworks like TensorFlow or OpenAI, building artificial intelligence systems still requires far deeper knowledge and understanding than mainstream web development. If you have built a working prototype, you are probably the smartest person in the room. Congratulations, you are a member of a very exclusive club.

With Kaggle you can even earn decent money by solving real-world problems. All in all, it is an excellent position to be in, but is it enough to build a business? You cannot change market mechanics, after all. From a business perspective, AI is just another implementation for existing problems. Customers do not care about implementations; they care about results. That means you are not set apart just by using AI. When the honeymoon is over, you have to deliver value. In the long term, only customers count.

And while your customers might not care about AI, VCs do. The press does. A lot. That difference in attention can create a dangerous reality distortion field for startups. But don't be fooled: unless you create universal, multipurpose AI, there is no free lunch. Even if you are the VC's darling, you have to go the last mile for your customers. So let's get into the driver's seat and look at how we can prepare for future scenarios.

The mainstream AI train

AI seems to be different from other megatrends like blockchain, IoT, or FinTech. Sure, its future is highly unpredictable, but that is true for almost any technology. The difference is that it is not only other businesses whose value proposition seems to be in danger, but our own value proposition as human beings. Our value as decision-makers and creatives is under review. That evokes an emotional response. We don't know how to position ourselves.

There is a very limited number of basic technologies, most of which fall under the umbrella term "deep learning", that form the basis of almost every application out there: convolutional and recurrent neural networks, LSTMs, auto-encoders, random forests, gradient boosting, and a few others.

AI offers many other approaches, but these core mechanisms have proven overwhelmingly successful lately. A majority of researchers believe that progress in AI will come from improvements to these technologies, not from some fundamentally different approach. Let's call this "mainstream AI research" for that reason.

Any real-world solution consists of these core algorithms and a non-AI shell that prepares and processes data (data preparation, feature engineering, world modelling, and so on). Improvements to the AI part tend to make the non-AI part unnecessary. That is in the very nature of AI and almost its definition: making problem-specific efforts obsolete. Yet exactly this non-AI part is often the real value proposition of AI-driven companies. It's their secret sauce.
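
To make that split concrete, here is a minimal, hypothetical sketch in Python with scikit-learn (not code from the article): almost all of the code is the non-AI shell that hand-crafts features from raw records, while the core algorithm is a generic, off-the-shelf learner invoked in a line or two. The feature functions and toy data are illustrative assumptions only.

```python
# A minimal sketch of the typical split: a hand-built, problem-specific
# "non-AI shell" that turns raw records into features, and a generic core
# learner at the end. The feature functions are hypothetical placeholders
# for the kind of domain knowledge the text calls the "secret sauce".

from sklearn.ensemble import GradientBoostingClassifier

def extract_features(raw_record):
    """Non-AI shell: problem-specific data preparation and feature engineering."""
    text = raw_record["text"].lower()
    return [
        len(text),                       # hand-picked feature 1: message length
        text.count("!"),                 # hand-picked feature 2: exclamation marks
        sum(c.isdigit() for c in text),  # hand-picked feature 3: digit count
    ]

# Toy data standing in for a real labelled dataset.
raw_records = [
    {"text": "WIN $$$ NOW!!! Call 555-0100"}, {"text": "Meeting moved to 3pm"},
    {"text": "FREE prize! Reply 1 to claim!!!"}, {"text": "See you at lunch"},
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Core algorithm: a generic, off-the-shelf learner -- only a couple of lines.
X = [extract_features(r) for r in raw_records]
model = GradientBoostingClassifier().fit(X, labels)
print(model.predict([extract_features({"text": "Claim your FREE reward!!!"})]))
```

In this sketch, the defensible part is the feature code, not the learner; a better end-to-end model can make exactly that code redundant.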

Every improvement in AI makes it more likely that this competitive advantage gets open-sourced and becomes available to everyone, with disastrous consequences. As Frederick Jelinek once said: "Every time I fire a linguist, the performance of the speech recognizer goes up."

Machine learning has essentially introduced the next phase of redundancy reduction: code is reduced to data. Almost all model-based, probability-based, and rule-based recognition technologies were washed out by deep learning algorithms in the 2010s.

Domain expertise, feature modeling, and hundreds of thousands of lines of code can now be beaten with a few hundred lines of scripting (plus a decent amount of data). As mentioned above, that means proprietary code is no longer a defensible asset when it sits in the path of the mainstream AI train.
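
As a hedged illustration of that claim (again, not code from the article), the sketch below shows what such scripting can look like in practice: a few dozen lines of TensorFlow/Keras train a small image classifier end to end on a standard benchmark dataset, with no hand-written feature extraction at all. The dataset and model choices are assumptions made purely for the example.

```python
# A sketch of the "few hundred lines of scripting" claim: an end-to-end
# deep model (here a tiny convolutional network in Keras) learns its own
# features directly from pixels, so no hand-written feature extraction or
# rule system is needed -- the data does that work instead.

import tensorflow as tf

# A standard benchmark dataset stands in for "a decent amount of data".
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # scale pixels to [0, 1], add channel dim
x_test = x_test[..., None] / 255.0

# The entire "recognition technology" is a handful of generic layers.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Learned features replace domain-specific code: train and evaluate.
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```

The point is not this particular model but the shape of the code: generic layers plus data, where a hand-crafted recognition pipeline used to be.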

Read More Here

Article Credit: TechCrunch
