Finance innovation relies on looking at data through a different lens.
SVB Financial Group has $40 billion in assets and offices in the U.S., U.K., Israel, and China, and is the holding company of Silicon Valley Bank. Its analytics unit, SVB Analytics, supports SVB and its clients, which include half the venture-backed technology and life science companies in the U.S. It offers 409A compliance services that help private companies value employee stock options, and provides strategic advisory services that help companies and their investors benchmark key operational and valuation statistics.
Steve Allan joined the company in 2008 to run SVB Analytics. He spoke with MIT Sloan Management Review contributing editor Michael Fitzgerald.
What has changed about analytics since you started at SVB?
Analytics used to be a competitive advantage, and now it’s becoming table stakes. It’s something you just need to have to execute on the business competitively. We’ve gone from experimenting with some analytics tools to deploying one visualization tool across the entire enterprise so every person has access to data reports and the ability to look at the data from the exact viewpoint they would like. If you had told me two years ago I was going to shift that tool out from a small group of people to all 1,400 customer-facing workers, I would have said, “I highly doubt it.”
Take me back to expanding the tool beyond the small group of people to the whole company.
As we started to deploy those insights across the organization, invaluable qualitative interpretation started occurring at the front line: the folks who are operating with the VCs, the angels, and the entrepreneurs on a day-to-day basis. We had a lot of “aha moments” and a better understanding of anomalies. The frontline team was sometimes able to give context as to why an anomaly occurred, which then allowed for further testing.
For example, we did an analysis on molecular diagnostics, basically genome sequencing for prediction of disease, and found that it cost almost two times what was projected to actually get through a product development cycle — a difference of tens of millions of dollars. Obviously, if you’re fundraising, you give up more of your company if you spend more money to achieve the end goal than you anticipated. We thought we could use this as a warning, to really help entrepreneurs understand how expensive it is to get there.
I presented this, and a client banker said, “That’s very interesting. Where is the money being spent?” And I said, “Let’s break it down into how much is spent on the actual R&D science of it versus how much is spent on the purchase of samples you get so you can run your test.” As I was going through that, the banker was surprised that the cost of samples to test and develop an algorithm for an average company was around $12 million.
He told us about a company that had spent less than half a million dollars on their samples. My reaction was, “Well, don’t trust your tester — they probably just ran some different types of computer algorithms to get predictive results, but didn’t actually run it across the real sample set. They just took a shortcut to raise the money.”
But the banker said, “No, they had actually run it across the samples.” He had a relationship with that particular CEO, and was convinced the company hadn’t spent $12 million on the samples.
So, we sat down and interviewed the CEO, who it turns out had struck deals to receive the samples from research institutions by only paying shipping and handling, in exchange for open-sourcing the data from their analysis back to those research institutions.
So here was this incredibly creative way to save a lot of money that nobody else had done. Saving that $12 million when developing your algorithm while fundraising an initial round of $50 million can really increase the pre-money value. Today, there are many more of these relationships between researchers and research institutions that exchange samples for open-source data.
That’s where you can think about [how], as you deploy your tool out to the universe, a banker could keep analyzing and see that outlier, know what that company is, and then be able to bring that insight to us. It allows us to ask if we need to look at the data a different way.
Do you have a sense of what the value was of that kind of discovery? Is there a monetary value to that, or a reputation value?
At that time, about 106 molecular diagnostic companies were in SVB’s portfolio. We had identified at that point that there were probably about 160 molecular diagnostic companies that had been professionally funded. So the idea was to reach out to those folks, especially the earlier-stage companies. One, we were able to deploy an idea for them to save tremendous amounts of money. But two, it actually also helped on the research side, so it helped to spawn more companies, because academic entrepreneurs who maybe are cash-strapped and don’t have the ability to really run these broader tests now can.
Have we tried to quantify exactly what that would have been? No. But I believe the amount being spent on samples in any given year was in the hundreds of millions of dollars. So, that particular insight might have been worth over $100 million.
Was that one incident enough to cause SVB to say, “Hey, let’s move this tool out more broadly”?
It definitely wasn’t a singular tipping point. But in any given year, there are usually 10 to 12 stories that really stand out. Out of those 10 to 12 stories, data enablement and interpretation started to represent two stories, then three, then four, and then half of the stories dealt with some sort of data enablement that leveraged the collective brain power, intuition, and knowledge dispersed throughout the organization. The question the executive sponsor asked was, how do we enable more collisions?
More collisions? Can you expand that a bit?
That banker in Seattle and I: that was a collision. It wasn’t intended that way, it just happened, and it led to a good, fruitful ending. Not every collision is going to produce a successful outcome, but increasing the number of collisions increases the probability that you’re going to derive more insights from the information.
How much did it cost to deploy the visualization tool across the enterprise?
I can’t release anything, but the shift to an enterprise deployment was not even close to prohibitively expensive. Having some folks who are more what I would call “R&D access-oriented” was not really an additional expense.
How did you handle training?
We had actually decided to do training sessions across our different geographies. But, we got lucky. In our third training session, people started saying they were disappointed by the training. A handful of folks said they were able to use the tool because of online videos, not because of the training. The people who came to classes knew less about how to use the tools than people who had just watched the video online. With the video, people would hit pause and then they’d go play with data for a little while and say, “Oh, okay, I see how that works.” So then we said, “We’re just going to use the online videos as the training mechanism.”
What are the challenges that you found in getting performance broadly out of your analytics efforts?
We probably experienced struggles very similar to what other folks do. The first step is finding out what data you have, cleaning it, and structuring it so you can begin to analyze it. It’s a little bit like building a building. It may take you three years to build the building: you spend two years digging a ditch, then nine months building a foundation, and then in the final three months everything else starts to happen.
We also have to work within the parameters that the data itself is private, confidential, non-public information. We’ve been trusted with the data and are stewards of the data, and really need to make sure that we’re leveraging it for the benefit of the ecosystem, while protecting all of the companies that produced this data by operating within the innovation ecosystem. We’ve always been excessively careful about how and when we use the data — which in some cases has definitely created an impediment to increasing ROI.
Clearly, there are people who are not using it. The bank has some people who are avid, avid users, and avid users clearly bring back the most impact. It probably follows the 80/20 rule, so 20% of those endpoints are avid users. But look at it the other way: 80% haven’t even bothered to try to use it. So then you start to make using the tool part of people’s goals.
This is a long game for us. As long as it continues to accelerate the findings, and those findings get deployed into the ecosystem more readily and more quickly, we’re doing well by it.
Any other advice?
The cheapest thing you can do is just celebrate the win. We have what we call our Monday Morning TV, an internal broadcast every Monday that covers any key information that needs to be shared across the organization. We leverage it to celebrate some of these findings as they occur. It creates an almost-gamification of those findings and shows how an insight can be useful and helpful to the ecosystem.