It pays to use AI on the sly at work

At the start of last month, the 1,500 staff at a British law firm called Shoosmiths received some unexpected news.

The firm had created a £1mn bonus pot that would be shared out among them, as long as they collectively used Microsoft Copilot, the firm’s chosen generative AI tool, at least 1mn times this financial year.

In other words, they had 12 months to rack up enough Copilot prompts between themselves to unlock the £1mn.

David Jackson, their chief executive, did not think this would be too hard.

As he pointed out to colleagues, the 1mn target would be easily reached if everyone used Copilot just four times each working day.
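The arithmetic behind Jackson’s claim can be checked in a few lines. A sketch follows, assuming roughly 250 working days in a year (my assumption; the article does not give a figure):

```python
# Back-of-the-envelope check of the Shoosmiths 1mn-prompt target.
# The staff count (1,500) and four-prompts-a-day figure come from the
# article; the ~250 working days per year is an assumption.
staff = 1_500
prompts_per_day = 4
working_days = 250

total_prompts = staff * prompts_per_day * working_days
print(total_prompts)  # 1,500,000 — comfortably above the 1mn target

# Minimum average daily prompts per person actually needed to hit 1mn:
needed_per_day = 1_000_000 / (staff * working_days)
print(round(needed_per_day, 2))  # about 2.67
```

On these assumptions, four prompts a day would overshoot the target by half, and fewer than three a day would suffice.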

To help, the firm would track and publicly report prompt numbers each month, all the better to boost the use of what Jackson called the “powerful enabler” of AI.

I did not hear about Shoosmiths’ move from Shoosmiths, but from two academics at the HEC Paris business school, Cathy Yang and David Restrepo Amariles.

They spotted it as they prepared to publish some relevant and eye-opening research, on the very human ways in which Copilot, ChatGPT and other generative AI products are being used in the office.

Their work shows something that makes perfect sense when you think about it, but is nonetheless unnerving. It is possible to get ahead at work if you use AI — as long as you don’t tell your boss. And your boss, furthermore, is unlikely to know if you have used AI or not.

The researchers discovered this after they decided to look at why so many businesses have been so hesitant to roll out AI, despite the apparent productivity gains it offers.

In an experiment, they asked 130 mid-level managers at a large, unnamed consulting firm to assess a series of briefs two junior analysts had compiled. These were typical of the briefs prepared for potential clients seeking consultants for a project.

Some documents were done with the help of ChatGPT and some were not. The managers turned out to be entirely clueless about which was which. 

Although 77 per cent of their assessments correctly said ChatGPT had been used, this was barely better than the 73 per cent that incorrectly said ChatGPT had been used when it had not — the managers, in other words, tended to suspect AI regardless.

Also, even when the managers were told AI had definitely not been used, 44 per cent of them still thought it had.

The finding that has stayed with me is this: the rating that managers gave to briefs done with ChatGPT was nearly 10 per cent higher than for those done by mere humans.

When the managers learnt of the AI use, they downgraded their rating, perhaps assuming the analysts had taken less time to do their work.

This suggests that, unless you work for an organisation that encourages the transparent use of AI, you may be strongly motivated to use it on the sly. And the trouble with this “shadow adoption”, as the researchers call undisclosed AI use at work, is that it exposes the organisation to serious risks, such as security breaches.

A number of companies have at times curbed access to AI tools amid fears that staff could inadvertently leak sensitive data by feeding information into the platforms that then finds its way to outside users.

There is also the problem of staff placing too much faith in generative AI tools that produce biased results or invent “hallucinations”. And monitoring employees to see who is or isn’t using AI risks setting off complaints about intrusive surveillance.

To avoid all this, the HEC researchers think employers should draw up AI use guidelines that encourage employees to use AI openly.

Since their study shows staff are likely to be downgraded for owning up to AI help, they also recommend some form of enticement to encourage disclosure — like the Shoosmiths law firm’s £1mn prompt bonus.

“It’s a very smart incentive, because it means people have to report the prompts,” says Restrepo Amariles.

Shoosmiths says the bonus was actually created because the firm believes AI is fundamental to its future competitiveness and wants to boost its use. So far, Copilot prompts are “broadly on track” towards the 1mn target, says Tony Randle, the partner in charge of client-facing technology.

“We’ve got one partner that has used it 800 times in the last month,” he says, sounding pleased. “AI won’t replace the legal profession, but lawyers who use AI will replace lawyers who don’t.”
