Who’s made the mistake of buying apps or sexy analytics software just based on appearance?
Go on, own up. I’m sure at one time or other, we have all succumbed to those impulse purchases.
It’s the same with book sales. Although it should make no difference to the reading experience, an attractive cover does increase sales. But if that is the way you approach your IT spend, you’re heading for trouble.
Now you may be thinking: hold on, that's what my IT department is there to protect against. That may be the case in your business, but, as Gartner has predicted, by 2017 the majority of IT spend in companies is expected to be made by the CMO, not the CIO.
There are advantages to that change. Software will need to be more accessible to business users and configurable without IT help, and the purchasers are likely to be closer to understanding the real business requirements. But, as Insight teams increase their budgets, there are also risks, and this post explores some of the pitfalls I've seen business decision-makers fall into. Given our focus as a blog, I'll be concentrating on the purchase of analytics software on the basis of appearance.
1. The lure of automation & de-skilling:
Ever since the rise of BI tools in the nineties, vendors have looked for ways to differentiate their MI or analytics software from the many others on the market. Some concentrated on "drag & drop" front-ends, some on the number of algorithms supported, some on ease of connectivity to databases, & a number began to develop more & more automation. This led to a few products (I'll avoid naming names) creating what were basically "black box" solutions that you were meant to trust to do all the statistics for you. They became a genre of "trust us, look, the models work" solutions.
Such solutions can be very tempting for marketing or analytics leaders struggling to recruit or retain the analysts/data scientists they need. Automated model production, without the need for analysts, seems like a real cost saving. But if you look more deeply there are a number of problems. Firstly, auto-fitted models rarely last as long as 'hand-crafted' versions; they tend to degrade faster because it is much harder to avoid overfitting the data provided. Related to this, such an approach does not benefit from real understanding of the domain being modelled (which is also a pitfall of outsourced analysts). Robust models benefit from variable & algorithm selection that is appropriate to the business problem & informed by the meaning of the data items, as well as any likely future changes. Lastly, such an approach almost always excludes meaningful 'exploratory data analysis', which is a huge missed opportunity, as that stage more often than not adds to knowledge of the data & often provides insights in itself. There is not yet a real alternative to a trained statistical eye during the analytics & model-building process.
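To make the overfitting point concrete, here is a toy sketch (using scikit-learn and simulated data, not any vendor's actual auto-modelling product): an unconstrained "black box" fit memorises the noise in the training sample and looks impressive there, while the simpler model an analyst who knew the domain might choose holds up better on unseen data.

```python
# Toy illustration: an over-flexible automatic fit chases noise in the
# training data, while a simpler, hand-chosen model generalises better.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(300, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 3, size=300)  # simple linear signal plus noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Automated" fit: an unconstrained tree happily memorises the noise
auto_model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# "Hand-crafted" fit: an analyst picks a simple linear model that matches the domain
crafted_model = LinearRegression().fit(X_train, y_train)

for name, model in [("auto-fitted tree", auto_model), ("hand-crafted linear", crafted_model)]:
    print(f"{name}: train R2 = {r2_score(y_train, model.predict(X_train)):.2f}, "
          f"test R2 = {r2_score(y_test, model.predict(X_test)):.2f}")
```

Run it and the auto-fitted tree scores near-perfectly on the training data but noticeably worse on the held-out data; the plainer model gives similar performance on both, which is what you want a production model to do.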
2. The quick fix of local installation:
Unlike all the work involved in designing a data architecture & an appropriate data warehouse/staging/connectivity solution, analytics software is too often portrayed as a simple matter of install & run. This can be illusory. It is not just the front-end that matters with analytics software. Yes, you need that to be easy to navigate & intuitive to work with (but that is becoming a hygiene factor these days). But there is more to consider round the back-end. Even if the supplier emphasises its ease of connectivity with a wide range of powerful database platforms. Even if you know the investment has gone into making sure your data warehouse is powerful enough to handle all those queries. None of that will protect you from a lack of analytics grunt.
The problem, all too often, is that business users are initially offered a surprisingly cheap solution that will just run locally on their PC or Mac. That is very convenient & mobile if you simply want to crunch low volumes of data from spreadsheets or files on your laptop. But the problem comes when you want to use larger data sources & have a whole analytics team trying to do so with only local installations of the same analytics software (probably paid for per install/user). Too many of the current generation of cheaper analytics solutions will in that case be limited to the processing power of the PC or Mac. Business users are not warned of the need to consider client-server solutions, both for collaboration & to have a performant analytics infrastructure (especially if you also want to score data for live systems). That can lead to wasted initial spend, as a costly server & reconfiguration, or even new software, are needed in the end.
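As an illustration of the difference this makes, here is a minimal Python/pandas sketch (the connection string, table and column names are all hypothetical): the first pattern drags every row back to the laptop and aggregates it locally, the second pushes the heavy lifting to the server and brings back only the summary.

```python
# Minimal sketch of local crunching vs pushing work to a server-side database.
# The connection string, "transactions" table and its columns are hypothetical.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://analyst@warehouse/retail")  # hypothetical warehouse

# Local-install pattern: pull every row across the network, then aggregate
# on the laptop -- fine for small extracts, painful at scale.
all_rows = pd.read_sql("SELECT customer_id, spend FROM transactions", engine)
local_summary = all_rows.groupby("customer_id")["spend"].sum()

# Client-server pattern: let the warehouse do the aggregation and return
# only the (much smaller) result set.
pushed_summary = pd.read_sql(
    "SELECT customer_id, SUM(spend) AS total_spend "
    "FROM transactions GROUP BY customer_id",
    engine,
)
```

The point is not the code itself but where the work happens: software that can only do the first pattern is capped by the laptop under the analyst's desk.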
3. The drug of cloud-based solutions:
With any product, it's a sound consumer maxim to beware of anything that looks too easy or too cheap. Surely such alarm bells should have rung earlier in the ears of many a marketing director who has ended up being stung by the large final 'cost of ownership' of their cloud-based CRM solution. Akin to the lure of the quick-fix local installation, cloud-based analytics solutions promise something even better: no installation at all. Barring the odd firewall change needed to access the solution, it offers the business leader the ultimate way to avoid those pesky IT folk. No wonder licences have sold.
But anyone familiar with the history of the market leaders in cloud-based solutions (and even the 'big boys' who have jumped on the bandwagon in recent years) will know it's not that easy. Like providing free or cheap drugs at first in order to create an addict, cloud-based analytics solutions have a sting in the tail. Check out the licensing agreement & what you will need to scale. As use of your solution becomes more embedded in an organisation, especially if it becomes the de facto way to access a cloud-based data solution, your users & thus licence costs will gather momentum. Now, I'm not saying they aren't a viable solution for some businesses. They are. But beware of the stealth sales model that is implicit.
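A quick back-of-the-envelope sketch shows how that momentum plays out (the £80 per user per month list price here is entirely hypothetical):

```python
# Back-of-the-envelope illustration (all figures hypothetical) of how
# per-user cloud licensing gathers momentum as adoption spreads.
price_per_user_per_month = 80  # hypothetical list price, in GBP

for users in (10, 50, 300):
    annual_cost = users * price_per_user_per_month * 12
    print(f"{users:>4} users -> £{annual_cost:,} per year")
# The pilot looks like pocket change; the embedded,
# organisation-wide deployment does not.
```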
4. Oh abstraction, where are you now that I need you more than ever?
Back in the nineties, the original Business Objects product created the idea of a "layer of abstraction", or what they called a "universe". This was configurable by the business (probably by an experienced power user or insight analyst who knew the data), but more often than not benefited from the involvement of a DBA from IT. It looked like a visual representation of a database schema diagram and basically defined not just all the data items the analytics software could use, but also the allowed joins between tables etc. Beginning to sound rather too techie? Yes, obviously software vendors thought so too. Such a definition has gone the way of metadata: perceived as a 'nice to have' that is in reality avoided via flashy-looking workarounds.
The most worrying recent cases I have seen of this missing layer of abstraction are today's most popular data visualisation tools. These support a wide range of visualisations & appear to make it as easy as 'drag & drop' to create any you want from the databases to which you point the software (using more mouse action). So far, so good. Regular readers will know I'm a data visualisation evangelist. The problem is that without any defined (or controlled, to use that unpopular term) definition of data access & optimal joins, the analytics queries can run amok. I've seen too many business users end up confused & suffering very slow response times, basically because the software has abdicated this responsibility. Come on vendors: in a day when Hadoop et al are making the data being accessed ever more complex, there is a need for more protection, not less!
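For anyone who has never seen one, here is a minimal sketch of what such a layer of abstraction does (the table, column and item names are hypothetical, and this is not any vendor's actual metadata format): the business user asks for items by their business names, and the layer supplies the underlying tables and only the joins that have been explicitly defined.

```python
# Minimal sketch of a Business Objects-style "universe": business-friendly
# item names map to tables/columns, and only pre-defined joins are allowed.
ALLOWED_ITEMS = {
    "Customer Name": ("customers", "name"),
    "Order Value":   ("orders", "value"),
}

ALLOWED_JOINS = {
    ("customers", "orders"): "customers.id = orders.customer_id",
}

def build_query(items):
    """Turn business-friendly item names into SQL, refusing undefined joins."""
    columns, tables = [], []
    for item in items:
        table, column = ALLOWED_ITEMS[item]  # raises KeyError for unknown items
        columns.append(f"{table}.{column}")
        if table not in tables:
            tables.append(table)

    sql = f"SELECT {', '.join(columns)} FROM {tables[0]}"
    for table in tables[1:]:
        key = tuple(sorted((tables[0], table)))
        if key not in ALLOWED_JOINS:
            raise ValueError(f"No defined join between {key[0]} and {key[1]}")
        sql += f" JOIN {table} ON {ALLOWED_JOINS[key]}"
    return sql

print(build_query(["Customer Name", "Order Value"]))
# SELECT customers.name, orders.value FROM customers
#   JOIN orders ON customers.id = orders.customer_id
```

That refusal step is the whole point: a query the universe cannot express safely simply does not run, rather than grinding the warehouse to a halt or quietly joining the wrong tables.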
Well, I hope those observations have been useful. If they protect you from an impulse purchase made without a pre-planned analytics architecture, then my time was worthwhile.
If not, well I’m old enough to enjoy a good grumble anyway. Keep safe!