Hadoop and big data platforms were originally known for scale, not speed. But the arrival of high-performance compute engines such as Spark, along with dedicated streaming engines, has cleared the way for bringing batch and real-time processing together.
But what happens when your appetite for big data extends to dozens of sources, at volumes approaching the traffic of a busy public website? Or to the world of IoT? The existing utilities of the Hadoop project, such as Flume, were designed to ingest streams one at a time, with HDFS as the common target.
LinkedIn faced this issue back in 2009, when it wanted a more granular, real-time way to track user behavior on its website. The problem was that existing open source messaging alternatives such as RabbitMQ and ActiveMQ simply didn't scale. Instead, LinkedIn turned to a new twist on an established technology pattern: publish/subscribe (PubSub) messaging.
PubSub messaging systems, which date back to the early nineties, were the glue that let enterprises connect new front-end systems to immovable legacy backbones such as financial or transaction systems. They were typically considered operationally simpler than more elaborate enterprise application integration schemes. PubSub is the technology around which Tibco was built.
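The core idea of PubSub is decoupling: publishers emit messages to a named topic without knowing who consumes them, and any number of subscribers receive those messages independently. A minimal in-process sketch of the pattern (illustrative only; the `Broker` class and topic names here are hypothetical, not LinkedIn's or Tibco's implementation) might look like this:

```python
from collections import defaultdict


class Broker:
    """Minimal in-process publish/subscribe broker (illustrative sketch)."""

    def __init__(self):
        # Maps a topic name to the list of subscriber callbacks for it.
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message published to topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver message to all subscribers of topic.

        Publishers and subscribers never reference each other directly;
        the broker is the only point of contact between them.
        """
        for callback in self.subscribers[topic]:
            callback(message)


# Usage: two independent consumers of the same event stream.
broker = Broker()
received = []
broker.subscribe("page_views", lambda msg: received.append(("analytics", msg)))
broker.subscribe("page_views", lambda msg: received.append(("audit", msg)))
broker.publish("page_views", {"user": "alice", "url": "/home"})
```

Both subscribers see the same event without the publisher knowing either exists; real systems like Kafka add persistence, partitioning, and distribution on top of this basic contract.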