Why responsible AI is good for your business

Bhumin Vadalia
3 min read · Aug 24, 2021

As companies adopt artificial intelligence (AI) en masse, a new imperative is emerging: AI must be “responsible,” “ethical,” or even “trustworthy.” Questions about bias in data, transparency, and the robustness of algorithms, largely ignored three or four years ago, are increasingly in the spotlight. They have given rise to more than 60 ethical codes and lists of recommendations from intergovernmental organizations, states, research groups, and private actors.

Many companies have realized this and have started to communicate on the subject, for example by appointing a “Chief AI Officer” or by creating an internal working group. But this is not enough. Adopting a genuinely responsible AI approach means, above all, taking concrete actions: ensuring that the teams and managers who deal with AI on a daily basis have the tools they need to make the right decisions, but also verifying that AI systems have been sufficiently tested before they go into production, and that the engineers who develop them are not the same ones who validate them. The transparency of algorithms is also fundamental and requires putting audit and control tools in place.

Are companies using AI fully aware of this? How many have started to implement concrete actions? To find out, BCG GAMMA has just conducted a global survey of the leaders of 1,000 large organizations (*). The objective was to determine their level of maturity across six dimensions considered the pillars of responsible AI: data and privacy governance; security and robustness of algorithms; transparency and explainability; accountability; impartiality and fairness; and social and environmental impact. We added a seventh dimension, less often mentioned but just as crucial: the collaboration between artificial intelligence and humans.

A distorted view of maturity in responsible AI

Based on concrete questions about these seven criteria, we established a score out of 100 and then classified the companies into four levels of responsible AI maturity:

  • 14% of them are “lagging.”
  • The majority are at a “developing” (34%) or “advanced” (31%) stage.
  • 21% stand out as “leaders.”

Internationally, European and American companies are the best placed, with maturity scores of 66.8 and 66.3 respectively, compared with 62 for Asia and 60.7 for the Middle East.

Beyond the scores themselves, this study reveals two essential findings. The first is that most executives overestimate their progress in responsible AI: 55% of companies give themselves a higher score than they deserve. This holds true regardless of their level of maturity: even among companies that believe they have fully implemented a responsible AI program, barely 46% have an accurate view of their situation.

Direct benefits

The other major lesson concerns companies’ motivations for switching to responsible AI. The majority primarily expect direct benefits: 42% of the executives surveyed hope for benefits to their business, and 20% want to meet consumer expectations. Risk reduction (16%), regulatory compliance (14%), and social responsibility (6%) come far behind. And the more advanced companies are in their responsible AI approach, the more they expect concrete benefits. They do not do it for moral or legal reasons, but because it is a winning investment on multiple levels: reducing bias in the data, making algorithms transparent, and running pre-production checks all improve system performance; protecting personal data and ensuring explainability strengthen the confidence of customers and partners; and so on.

Ultimately, establishing a culture of responsible innovation around artificial intelligence sends a strong signal to attract and retain the best talent, at a time marked by a shortage of data scientists and by younger generations’ search for meaning in their work.

We are convinced that trust in artificial intelligence tools will become a business imperative and a competitive advantage. As a result, the distinction between companies that can demonstrate they are implementing responsible AI and those that cannot will become increasingly important.

Sylvain Duranton, Global Director of BCG GAMMA

(*) The survey was conducted among executives of 1,034 organizations (at least 2,000 employees and $500 million in revenue) across six regions and nine industry sectors.


Bhumin Vadalia

Bhumin is a tech enthusiast. As an occasional blogger, he loves to share knowledge about technological advancements in web and mobile app development.