Time will tell.
The dwindling demand for the expertise of yellow cab drivers may be an indication.
Accurate ✔️, cheap ✔️, but understandable?

Predictions from machines, now and in the future, are increasingly accurate and increasingly cheap, but there is still a problem.
The predictions are usually not interpretable or human understandable.
Humans have traditionally made predictions and decisions simultaneously.
Indeed, separating the two is considered the very hallmark of a rational being, and work in cognitive psychology has highlighted the dangers of not doing so.
It is not surprising that monarchs in the past would have advisers whose sole job was to offer predictions and bias checks.
Nevertheless, humans left to their own devices tend not to make the most rational choices (we are naturally poor at statistics).
This is why statisticians (and other experts) are trained to be aware of their own biases and to help decision makers overcome this shortcoming.
However, with complex, non-linear (but highly accurate) prediction techniques like deep neural networks, the traditional expert is becoming less useful.
In fact, even the developers of these systems (like me!) consider them to be ‘black-box’ — we know that the system does very well, but we don’t know exactly what it is doing, or how.
Role of the new data scientist

This is where the newly minted field of data science can shine, perhaps.
The job description for a data scientist, depending on the size and the pocket depth of a company, is advertised to include some or all of:

1. Collect data (instrumentation, logging, sensors, user content)
2. Move/store (ETL, data flow, pipelines, horizontal and vertical scaling)
3. Explore (clean, anonymize, detect anomalies, cluster, sample, prepare)
4. Aggregate (analytics, metrics, segments, engineer useful features)
5. Learn/optimize (A/B testing, experiments, simple machine learning, deep learning)

What company executives are looking for is someone who can ‘wrangle’ the data and present it, usually in the form of visualizations such as the one below.
This is so that they can make more informed decisions.
The hope is that as insights such as this are cheaper to produce, the quantity and quality of business decisions will improve.
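Much of the ‘Learn/Optimize’ work above boils down to simple experiments. As a minimal sketch (the conversion counts below are invented for illustration), here is a two-proportion z-test for an A/B test in plain Python:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. A's 100/1000
z, p = two_proportion_ztest(100, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the difference is not significant at the usual 5% level, which is exactly the kind of check that keeps a decision maker from over-reading a pretty chart.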
Data Visualization Example

Visualizations such as this and many others can now be produced quickly and efficiently thanks to the data visualization ecosystem.
Moreover, with drag-and-drop software like Tableau, they can also be produced with minimal coding (to a point).
However, there are some caveats.
Almost all real-world data has many factors (the figure above has just one: number of tweets over time), which makes visualizations less intuitive.
Further, the relationships are almost never cleanly linear, i.e. many factors combine to produce a result.
This is why deep learning, which combines these factors non-linearly and hierarchically, usually brings performance gains but leaves us with questions like:

- How should we explain the prediction process of these complex models to decision makers when the developers themselves are not quite sure?
- How do we take a deep neural network with 21 million knobs and come up with a reasonable explanation that it learned the ‘right things’?
- How do we analyze whether, had the data been different in a certain way, the results would have been different in a particular way (the counterfactual)?

For that, I add a new task here for the data scientist of today:

6. Explanation: explain predictions truthfully and in a user-understandable way

Visualization of layer activations

The above figure is a visualization of one of the layers (a hierarchical representation of input features), giving an idea of what the neural network is ‘seeing’ when it looks at an image of a dog and a cat.
Visualizations such as this aren’t merely about wrangling some data and using the latest plotting library.
They provide a window into the workings of a neural network and its learning mechanism and take us further into answering some of the questions above.
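To make concrete what an ‘activation’ is, here is a toy sketch of the computation an early convolutional layer performs: convolve a tiny image with one filter and apply a ReLU. The 6×6 image and the vertical-edge filter are invented for illustration, not weights from a trained network:

```python
def conv2d_relu(image, kernel):
    """Slide a small kernel over a 2-D image and apply a ReLU."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(k) for dj in range(k))
            row.append(max(0.0, s))          # ReLU non-linearity
        out.append(row)
    return out

image = [[0, 0, 0, 1, 1, 1]] * 6             # dark left half, bright right half
kernel = [[-1, 0, 1],                        # classic vertical-edge detector
          [-1, 0, 1],
          [-1, 0, 1]]

activation = conv2d_relu(image, kernel)
for row in activation:                       # the 'feature map' the layer sees
    print(" ".join(f"{v:.0f}" for v in row))
```

The activation fires only along the edge in the middle of the image: visualizing maps like this, layer by layer, is what the figure above does for a real network.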
In this previous article, I talked a bit about explanations.
In later articles I will write about what it means for explanations to be truthful and human understandable.
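The counterfactual question raised earlier can at least be made concrete for simple models. As a toy sketch (the model, weights, and applicant numbers are all invented for illustration), take a fixed linear scoring model and ask how the prediction would change if one input had been different:

```python
# Toy counterfactual probe on a hypothetical linear credit-scoring model.

def score(features, weights, bias):
    return bias + sum(f * w for f, w in zip(features, weights))

weights = [0.6, -0.3, 0.1]        # invented weights: income, debt, tenure
bias = -0.2

applicant = [0.5, 0.8, 0.4]       # factual inputs (normalised)
counterfactual = [0.5, 0.4, 0.4]  # same applicant, but with half the debt

factual_score = score(applicant, weights, bias)
cf_score = score(counterfactual, weights, bias)
print(f"factual: {factual_score:.2f}, counterfactual: {cf_score:.2f}")
print(f"lowering debt changes the score by {cf_score - factual_score:+.2f}")
```

For a linear model the answer falls straight out of the weights; for a 21-million-knob network, producing an equally crisp answer is exactly the open problem.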
Rebooting Strategy: More Automation

I mentioned earlier that the feedback from outcomes can be tracked and fed back to make the prediction process better; this is the essence of what is called active learning.
The same feedback (if plentiful) can also be used to improve the decision making itself.
The permission to make mistakes and then to learn from these mistakes is the essence of reinforcement learning.
This is probably what we mean when we say a certain decision maker has experience.
If the successes in applying reinforcement learning to complex processes (AlphaZero, complex games and applications) are any indication, there is no reason why AI in the near future cannot be used to learn better decision-making strategies as well.
Doing this will not only affect the quality and types of decisions being made, but will also fundamentally change the strategy and structure of businesses — all that for a later post.
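The act–observe–update loop described above can be sketched with an epsilon-greedy bandit, one of the simplest reinforcement-learning setups. The three ‘strategies’ and their success rates below are invented for illustration:

```python
import random

def epsilon_greedy_bandit(true_probs, steps=5000, epsilon=0.1, seed=0):
    """Learn which decision pays off best purely from outcome feedback."""
    rng = random.Random(seed)
    counts = [0] * len(true_probs)       # times each option was tried
    values = [0.0] * len(true_probs)     # running estimate of each option's reward
    for _ in range(steps):
        if rng.random() < epsilon:                       # explore: permit mistakes
            arm = rng.randrange(len(true_probs))
        else:                                            # exploit the best estimate
            arm = max(range(len(true_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
    return values

# Three hypothetical strategies with unknown success rates
estimates = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print("learned values:", [round(v, 2) for v in estimates])
```

The loop is given permission to make mistakes (the exploration step), and from that feedback alone it converges on the best strategy, which is the ‘experience’ a seasoned decision maker accumulates.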
Comments? Suggestions?