When CambridgeData was effectively a team housed inside a digital marketing agency, we were the go-to guys for mapping social conversations and key influencers using old-fashioned network analysis tools – the kind of software police use to discover links between criminals. We also did a lot of demographic profiling, segmenting email databases and overlaying them onto CRM data to improve our understanding of our clients' customers. We would relish the chance to delve into any data we could lay our hands on.

Our colleagues and partner agencies would ask a wide range of questions they wanted answered by data analysis. These ranged from the business-critical, such as 'is the lifetime value of our customers going up or down?', to the nice-to-know, such as 'does the demographic of our Twitter following correlate with our target customer profile?'

These are usually significant questions that need breaking down into discrete data investigations if they are to drive any real action. So we adopted a policy: address every marketing or consumer behaviour question put to us by breaking it down, forming hypotheses and assembling insight, without prejudice about the type of model required to generate that insight.

Very soon, lots of questions were being asked, and we quickly had to work out a way of managing the volume of questions, and follow-up questions, we were being challenged with.

Early on, we recorded clients' challenges and questions in collaboration tools like Evernote and Basecamp, then reconciled them regularly in our weekly scrum meetings to make sure we were dealing with the most business-valuable questions first. We wrote them all down, played them back to each other and asked: is this 'business critical' or 'nice to know'? If we solve question X, is it worth more to our client than solving question Y? This wasn't the most effective process, but it served its purpose initially.
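For readers who like to see the idea in code, here is a minimal sketch of that ranking exercise. It is an illustration only: the field names, weights and scores are invented for this post, not taken from any tooling we actually used at the time.

```python
# A minimal sketch of the ranking we ran by hand in those early scrums.
# All names, weights and numbers here are illustrative, not real tooling.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    business_critical: bool   # 'business critical' vs 'nice to know'
    estimated_value: float    # rough value to the client if answered
    estimated_effort: float   # rough analyst-days needed to answer it

def priority(q: Question) -> float:
    """Crude value-over-effort score, boosted for business-critical items."""
    weight = 2.0 if q.business_critical else 1.0
    return weight * q.estimated_value / max(q.estimated_effort, 0.5)

backlog = [
    Question("Is customer lifetime value going up or down?", True, 9.0, 5.0),
    Question("Does our Twitter demographic match our target profile?", False, 4.0, 2.0),
]
for q in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(q):5.2f}  {q.text}")
```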

When we spun out of the digital marketing agency, we became a dedicated marketing data consultancy. We had already developed expertise in a number of sectors, each with its own legacy models and reporting conventions. For instance, FMCG (or CPG) typically relies on an infrequent, econometrics-led reporting ecosystem. Similarly, many online or customer-database-driven businesses want to understand attribution modelling, and some markets are dominated by comparison-engine thinking, which calls for a commodity-market mindset.

These different conventions needed catering for in our consultancy management process, and now that we were no longer beholden to the typical agency 'job-bag' system, we wanted to inject greater productivity into the new operation. We decided to run our internal operations in an agile way: an iterative approach to prioritisation, driven by 'scrum' meetings.

Eventually, after many uncontrolled versions and ever-expanding Google spreadsheets, we built our own ticketing and project management system for data analysis, with a broad methodology embedded in each ticket or question. One question = one ticket. We incorporated the background, or metadata, of our investigations: data sources, data flow maps, the hints, tips and notes the data engineer picked up as they prepared the data, the typical points of instability in the data flow, and so on. We called each ticket a 'card'.
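To make the shape of a card concrete, here is a simplified sketch of the kind of structure involved. The field names are illustrative assumptions for this post, not the actual TestBoard schema; note that the card also carries the original brief, which we come back to below.

```python
# A simplified, illustrative sketch of a 'card' - not TestBoard's real schema.
from dataclasses import dataclass, field

@dataclass
class Card:
    question: str                 # one question = one ticket
    brief: str                    # the original brief travels with the card
    data_sources: list[str] = field(default_factory=list)
    data_flow_map: str = ""       # e.g. a link to a flow diagram
    engineer_notes: list[str] = field(default_factory=list)  # hints, tips, gotchas
    unstable_points: list[str] = field(default_factory=list) # fragile steps in the flow

card = Card(
    question="Is the lifetime value of our customers going up or down?",
    brief="Client wants a quarterly LTV trend, split by acquisition channel.",
    data_sources=["CRM export", "email database", "web analytics"],
    unstable_points=["CRM export schema changes without notice"],
)
```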

We also recognised through experience that a critical slice of extra productivity comes from the collaboration effect of transparency among the stakeholders in any investigation, and from the exchange of ideas and perspectives. You can develop a model more quickly by giving everyone using the tool a great deal of visibility, and by making it a 'crime' not to use that transparency to contribute to the evolution of a data investigation. For these reasons, we built a comments section into each card to encourage specific exchange between the data scientist, business analyst and marketer.
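A comment thread of this sort can be sketched very simply. Again, the names and roles below are invented examples rather than TestBoard's real data model.

```python
# Illustrative sketch of a card's comment thread, tagging each comment with
# the commenter's role so the three-way exchange stays visible.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    author: str
    role: str        # e.g. "data scientist", "business analyst", "marketer"
    body: str
    posted_at: datetime

thread: list[Comment] = [
    Comment("Asha", "business analyst",
            "Exclude one-off promo customers from the LTV base.",
            datetime(2016, 3, 1, 9, 30)),
    Comment("Ben", "data scientist",
            "Done - promo-only customers now filtered in the prep step.",
            datetime(2016, 3, 1, 11, 5)),
]
for c in thread:
    print(f"[{c.role}] {c.author}: {c.body}")
```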

It seems obvious, but by storing the original brief within the card, the analysts could remind themselves of the purpose behind their current investigation, and the business analyst could add considerations and perspectives to bear in mind during preliminary work. This has proved useful in those moments when you find yourself in a 'rabbit hole' line of enquiry and have to reflect on whether you are using your time wisely.

There are a number of other features we built in, like the ability to embed your Python scratches from your GitHub account, or to allocate a timesheet entry to each card. The list of requested features to keep adding is long!
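Both of those features amount to attaching small records to a card. A rough sketch follows, with a hypothetical gist URL and invented field names; this is not the real GitHub integration, just the general idea.

```python
# Illustrative sketch only: attaching a GitHub-hosted script to a card and
# logging a timesheet entry against it. The URL and names are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Attachment:
    label: str
    url: str   # e.g. a gist or repo file holding the analyst's Python scratch

@dataclass
class TimesheetEntry:
    analyst: str
    day: date
    hours: float

attachments = [Attachment("LTV exploration notebook",
                          "https://gist.github.com/example/abc123")]
timesheet = [TimesheetEntry("Asha", date.today(), 2.5)]
```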

As soon as we built the MVP, our own analysts started using it intuitively and making lists of additional features they would like to see within the cards. We gave it a name: TestBoard, because that's exactly what it is: a way of prioritising the investigations your analysts are doing. This seemed like a good sign, and we began a process of product iteration to stabilise the product and the service our analysts were giving our consultancy clients.

We now have a roadmap of product features planned, and we intend to set it free… Do join the beta list at testboard.com.