As companies invest in data science, productivity seems low on the agenda. Yet iterative, incremental gains against unwavering KPIs are more likely to deliver team success.

I’m not saying data scientists are a lazy bunch, nor a disorganised one (though I’ve met some marketers who start by assuming their Business Intelligence team is both), but I have yet to find a consensus on what ‘productive’ looks like in a data science team. Lately I’ve been trying to learn more about how these teams are organised and how they measure their productivity.

To that end, I’ve been speaking to a lot of marketers (the people I assume, perhaps unwisely, are the internal ‘clients’) and asking irritating questions about their data-driven marketing, for example: Is data science open or opaque? Is there a common collaboration tool that lets the parts of the organisation that need customer insights see and understand what their colleagues are working on? How do you know what the priorities are and how the data scientists are creating value for the business?

Many organisations have built dashboards for visibility, but I think that only goes so far in creating ‘openness’. I want to know about the productivity of the disciplines that inform ‘test and learn’: how do you measure a great data science team against a good one? Analysts have said to me, “I get reviewed on objectives; I don’t get measured on savings or revenues.” I continued, “How does the business decide what you do and which tasks will create more value for the business? Does it change direction quickly? Is that a good thing?”

The thinking behind my questions is that we have recently been working with a fintech start-up: well funded, a great attitude, a dynamic team, and a co-founder who is a strong proponent of agile methodology – the perfect breeding ground to establish world-class habits right from the beginning.

We introduced a testing culture in the marketing team which included the analysts. We started by running a weekly metrics review meeting in which it is a ‘crime’ to turn up without knowing the latest numbers and having personally thought about what the trends might suggest. You need to tell your colleagues what the outcome of last week’s test was, and whether you will continue with it and/or start a parallel one elsewhere. The review is not for endless discussion; it’s a vital opportunity to share what everyone is testing, and the longer conversations can happen in scheduled break-out discussions with channel optimisers. It absolutely requires the scrum master to keep it on schedule!

In the early days we would have dynamic conversations, which left us a little perplexed by the lack of focus. It was only when we insisted that each colleague open their report with the line “This week we will improve [this] KPI by testing …” that the meetings snapped into focus. It turns out that, even when you start with a clean sheet, you have to focus on the KPIs in a razor-sharp way, not pay lip service to them.
It’s harder to move to these kinds of habits when you are weighed down by legacy systems. Even though you have more data about your customers available to you, the struggle can be getting distracted by the housekeeping rather than focusing on what you can do with the perceived ‘disorganised’ or unstructured data already available.

We’ve been talking to a national charity recently, where the BI team are implementing a build solution to assemble disparate data sources, designed to de-dupe and stitch together their multiple contact points with donors and fundraisers. A noticeable absence of KPI focus here is replaced by a so-called ‘big IT’ solution: a time-consuming engineering project that will deliver the promised elixir only after much time has passed, or perhaps not at all. It has been a while in the making already, and I can’t help wondering whether, if there were pressure from the C-suite to uncover incremental benefit, a more organic, iterative approach towards the holy grail would throw off gains along the way. For example, I believe their key challenge doesn’t necessarily require a single assembled structure of data (useful though that would be); significant incremental gains could be achieved from insight derived from comparing two disparate data sources they already have and focusing on how to improve ONE of their KPIs. Sometimes ‘getting there ugly’, but in less time, promotes further progress.
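To make that concrete, here is a minimal sketch of the kind of ‘two sources, one KPI’ comparison I have in mind. Everything in it is hypothetical: the file names, the column names and the choice of repeat-donation rate as the KPI are my assumptions, not the charity’s actual data. The point is simply that a light-touch join on an identifier you already hold can surface a testable insight long before the grand stitched database arrives.

```python
# Hypothetical sketch: compare two existing data sources (donation history
# and event attendance) to inform ONE assumed KPI, the repeat-donation rate.
import pandas as pd

# Assumed file and column names, purely illustrative.
donations = pd.read_csv("donations.csv")       # columns: email, donation_date, amount
events = pd.read_csv("event_attendance.csv")   # columns: email, event_name

# Light-touch normalisation instead of a full de-dupe project.
donations["email"] = donations["email"].str.strip().str.lower()
events["email"] = events["email"].str.strip().str.lower()

# KPI: share of donors who have given more than once.
gifts_per_donor = donations.groupby("email").size()
repeat_donors = set(gifts_per_donor[gifts_per_donor > 1].index)

attendees = set(events["email"])
all_donors = set(gifts_per_donor.index)

def repeat_rate(donor_emails):
    """Repeat-donation rate for a group of donor email addresses."""
    donor_emails = list(donor_emails)
    if not donor_emails:
        return float("nan")
    return sum(e in repeat_donors for e in donor_emails) / len(donor_emails)

print("Repeat rate, event attendees:", repeat_rate(all_donors & attendees))
print("Repeat rate, non-attendees:  ", repeat_rate(all_donors - attendees))
```

Crude, certainly, and it sidesteps the hard matching problems, but it gives the team a number they could try to move this quarter rather than a platform to wait for.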
I got a CMO’s perspective from one of the retail banks which has done a good job of ‘changing its stars’. She described how they benefited from a strong CEO who laid down just 13 KPIs for his management team to focus upon during the turnaround. The CMO told me she selected a team of 11 analysts, and they had a defined period (a quarter) to build models, create scenario predictions and test against them in order to win ANY further budget. The tests were focused on proving the models, which in turn were focused on one or more of the 13 KPIs. They mercilessly assembled data from whatever sources were available, built the models, tested them successfully and earned the right to invest heavily, with the savings culled from elsewhere in the organisation.

Agility, speed and rapid testing to make incremental gains are going to be the key differentiators for those who succeed in this space. Surely there should be a tool out there to help our community manage that, right?