Getting humans to trust faceless machines remains a challenge for tech giants

With AI tools fast improving, many more human functions will be taken over by machines.

Summary

  • Tech companies must draw on key insights into human behaviour to generate trust in technology. They should work to craft a personality for what they have on offer.

The Meteorological Department was monitoring the development of Cyclone Tauktae. On 14 May 2021, more than 90 vessels in areas that the cyclone could impact were advised to move to safe locations. While almost all of them moved as advised, the captain of barge P-305 chose to move it just 200 metres from its initial location. Despite the Met department’s warning, his assessment was that the vessel was safe, since the maximum predicted wind speed was only 40 knots and his location was 120 nautical miles from the eye of the tropical storm.

But early on 17 May, Cyclone Tauktae intensified into an extremely severe cyclonic storm, reaching its peak intensity soon afterwards. P-305’s anchors came undone; the barge drifted away and hit a well-head platform. The captain, who trusted his intuition more than the warnings of machines, lost his life along with 75 other people aboard the barge.

There is no industry that trains its personnel to abide by the decisions of machines more than global civil aviation. Airline pilots are trained to go by the decisions of a machine and not by their intuition, especially when the two conflict. So on 1 June 2009, when the pilots of Air France Flight 447 from Rio de Janeiro to Paris were bombarded with confusing messages and alarms from the aircraft’s computer system, they were not mentally prepared for it. Unfamiliar with the situation, they made several errors of judgement, and in conjunction with severe weather conditions, this led to a complex ‘error chain’ that ended in a crash and the loss of 228 lives. The captain’s last known words, captured on the flight recorder, were: “We’re going to crash... this can’t be true. But what’s happening?” A case of a machine letting down humans.

So what do we trust more: the decisions of a machine or the intuition of a human expert? Given the huge improvements in the availability of data and its analytics, increasing computational power and recent advances in artificial intelligence (AI), machines are often in a far better position than humans to analyse past data and arrive at accurate forecasts. Machines are still not perfect, but in many fields they will continue to be better than humans at predicting future events. With AI tools fast improving, many more human functions will be taken over by machines. Generating trust in machines is a prerequisite for good human-machine team performance. But this is easier said than done.

There are many humans like the captain of the ill-fated barge P-305 whose professional identities are built on analysing various inputs of data and taking decisions accordingly. For these professionals, abiding by the output of a machine instead of their intuition, built over years of experience, is tantamount to abdicating their professional expertise. Few will be happy to lay down their expertise before a machine. So any attempt to position the superiority of a new technology by comparing it with human expertise will always strike a discordant note with established experts.

In trying to build a relationship of trust with a machine, it is important to understand the finer nuances of human-machine relations. The book How Humans Judge Machines, by Cesar A. Hidalgo and others, describes an interesting experiment. Over 6,000 respondents were asked to react to a scenario in which a car swerves to avoid a falling tree and runs over a pedestrian in doing so. Do people judge this action differently if they believe it was a self-driving car than if they believe a human was behind the wheel? It was found that people judge the car’s action as more harmful and immoral if they think it was self-driven, even though the action performed and its end result are the same as when a human decides to swerve.

Machines and humans are judged by very different yardsticks. People judge humans by their intentions and machines by their outcomes. So a machine will always be expected to be 100% accurate in what it does. But just delivering a fully accurate outcome is not enough to build human trust in a machine. Several other factors need to be in place to build a trusting relationship between a machine and its users. Humans are more comfortable interacting with fellow humans than with non-humans. So making the interaction between the machine and its users as human-like as possible is important. In this regard, one could learn from existing use-cases like the success of bank automated teller machines (ATMs). Attempts to anthropomorphize our interactions with machines as much as possible should get a further boost from recent developments in multi-modal technology.

Most technologies will remain faceless. But the leaders of technology companies could provide a much-needed ‘brand personality’ to their technology. Apple’s co-founder Steve Jobs (1955-2011) fulfilled that responsibility very well. His own personality rubbed off on the Apple brand. Today’s tech-sector leaders must understand that their actions and words, as well as developments in their boardrooms, will have a bearing on how much trust users place in their technology.

Creating a powerful technology product is only the first step. To generate human trust in that technology, tech companies will have to go beyond the world of technology and enter the domains of human behaviour and adaptive design.
