The executive of the future is soft.
Soft skills, that is. A growing body of quantitative research on productivity and collaboration shows that soft skills (emotional intelligence, or EQ) are more predictive of group and team success than IQ. More and more technology leaders are embracing this viewpoint.
Having spent last week at the gathering of world leaders in Davos, I heard a recurring theme: artificial intelligence and humans can work symbiotically, rather than automation simply replacing people.
Some companies are putting their money where their mouths are: Accenture North American CEO Julie Sweet stated that she reinvested 60% of her cost savings from automation into retraining and reskilling, rather than laying off large numbers of people. These are people who know the corporate culture, understand the industry, and can be reallocated to higher-order tasks that involve more creative thinking once the machines automate simpler activities.
Malcolm Frank of Cognizant talked about the importance of empathy as a core job skill in the future, “which historically industry has been bad at training.”
These sentiments echo a larger theme I’m starting to explore, which I call “Responsible Innovation.” It’s not enough to simply adopt new ideas into your company, or to invent new technologies and get people to use them. We need to think about the impacts of these new technologies, and about our role as humans in the new society we create.
Failure to innovate responsibly creates peril for society, as unemployed voters (and those fearful of becoming unemployed) in the US and the UK proved susceptible to AI-fueled fake news. Facebook, in turn, disclaimed responsibility, claiming it was “just a platform” (I thought we discredited that line of reasoning years ago). Its efforts to fix the fake news problem have been halfhearted. I ran into similar Silicon Valley naïveté last week when a blockchain executive told me that the reason women didn’t work at his company was that they didn’t like staring at computer screens and weren’t interested in programming. The expression “Often wrong, but never in doubt” comes to mind.
This brings us back to my premise: responsible technologists and responsible business leaders can practice responsible innovation, if they are sensitized to the broader impacts of their decisions and to their respective obligations to consider how new technologies are deployed. Artificial intelligence isn’t intrinsically evil any more than nuclear fission is (cancer cures are as much a product of the nuclear age as Hiroshima). The ethics of the tool lie in how it is used.
The views in this column are my own, and do not necessarily reflect those of the University of Oxford, the Massachusetts Institute of Technology, or other organizations with which I am affiliated.