Three years into the Watson Project, IBM has invested some $1bn to help translate natural-language questions into data-driven answers. What the Watson project highlights is how often humans rely on their instincts, either to frame a question or to judge the value of an analysis.
The standard algorithmic schema is a slow-thinking method: evidence-based processing and screening of potential answers. Alongside this standard process runs a fast-thinking method, in which Watson references previous calculations to reach a self-taught conclusion.
The standard schema:
– Question analysis
– Hypothesis generation
– Filtering of candidate answers
– Producing supporting evidence
– Merging and ranking answers
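The stages above can be sketched as a simple pipeline. This is a toy illustration only: the function names, scoring and example corpus are hypothetical, not Watson's actual API or ranking method.

```python
# Illustrative sketch of the schema above: each stage is one function.
# All names and the keyword-overlap "evidence" score are hypothetical.

def analyse_question(question):
    """Question analysis: extract keywords as a crude question focus."""
    return [w.lower().strip("?") for w in question.split()]

def generate_hypotheses(corpus):
    """Hypothesis generation: treat every corpus entry as a candidate."""
    return list(corpus)

def filter_candidates(candidates, keywords, corpus):
    """Filtering: keep candidates whose evidence shares at least one keyword."""
    return [c for c in candidates
            if any(k in corpus[c].lower() for k in keywords)]

def score_evidence(candidate, keywords, corpus):
    """Supporting evidence: count keyword overlap as a toy evidence score."""
    return sum(k in corpus[candidate].lower() for k in keywords)

def merge_and_rank(candidates, keywords, corpus):
    """Merging and ranking: order candidates by their evidence score."""
    return sorted(candidates,
                  key=lambda c: score_evidence(c, keywords, corpus),
                  reverse=True)

corpus = {
    "Paris": "paris is the capital of france",
    "Lyon": "a city in france",
}
keywords = analyse_question("What is the capital of France?")
candidates = filter_candidates(generate_hypotheses(corpus), keywords, corpus)
ranked = merge_and_rank(candidates, keywords, corpus)  # "Paris" ranks first
```

Each stage narrows or reorders the candidate set, so the final answer carries its supporting evidence with it, which is the property the article returns to below.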
Like any good data analysis, Watson brings the two calculation methodologies together to rank and prioritise each candidate before reaching a conclusion and deciding whether to suggest an answer. In the fast method, the hypotheses, candidate answers and supporting evidence can all be produced more quickly because of previous experience.
In real data analysis, without a $1bn computer to call upon, we need a balance of fast and slow thinking. Fast, instinctive thinking is hard-coded into every executive's brain, which defines and ranks experiences to draw on later. The problem with fast thinking is that it can only call on previous learning, so it is slow to recognise change. Slow thinking constructs an evidence-based answer from the available data. The challenge with slow thinking is both its speed and its reliance on the quality of the underlying data.
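The trade-off above can be shown in a few lines: "fast" answers come from a cache of previous conclusions, "slow" answers are recomputed from the current data. The `Analyst` class and its names are purely illustrative assumptions, not anything from the article or from Watson.

```python
# Toy model of the fast/slow trade-off described above.
# Fast thinking reuses cached conclusions; slow thinking rereads the data.

def slow_answer(question, data):
    """Slow thinking: evidence-based, reads the current data every time."""
    return data.get(question)

class Analyst:
    def __init__(self):
        self.experience = {}  # previous conclusions: fast thinking's memory

    def answer(self, question, data, trust_instinct=True):
        if trust_instinct and question in self.experience:
            return self.experience[question]  # fast: instant, but may be stale
        result = slow_answer(question, data)  # slow: slower, data-quality dependent
        self.experience[question] = result    # learning for next time
        return result

data = {"best channel": "email"}
analyst = Analyst()
analyst.answer("best channel", data)      # slow path, caches "email"
data["best channel"] = "social"           # the world changes
fast = analyst.answer("best channel", data)                        # stale "email"
fresh = analyst.answer("best channel", data, trust_instinct=False) # "social"
```

The cached answer is wrong precisely when the world has changed, which is the failure mode of fast thinking the paragraph describes; the slow path is correct but only as good as the data it reads.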
The algorithmic schema of Watson is displayed in our office to demonstrate what we as humans taught the supercomputer, and to highlight that mixing fast and slow thinking well is itself a process. The crucial element is always to show your analytical workings: to articulate how each individual data insight was created. This is perhaps the most powerful way to demonstrate the quality of business instinct and to expose any weakness in the underlying data.