Should we take office politics personally?

Are AI systems team players?

Artificial intelligence has already made itself felt as a tool, as a colleague and as a manager. Each of these roles brings its own set of problems. It's high time to think about the future relationship between humans and machines.

We are being overwhelmed with predictions of AI systems taking people's jobs, in numbers, percentages and trillions of dollars. But all of my previous jobs have been in organizations: in teams, with colleagues and structures, social dynamics and office politics. The question we should be asking instead is: what role will AI systems play in companies? Will they be managers? Or colleagues? Or just better tools?

AI as a manager

AI systems already fulfill all possible management functions today.

If you think you've never met anyone whose boss is an AI system, think again. Every Uber driver's work is assigned by an algorithm; drivers have only 15 seconds to accept or decline a ride request, without knowing the destination or the fare. Their performance is checked automatically, their pay is determined by the system, and if their ratings drop too low, the same system will stop them from working. An app is even used to keep up motivation; here, help comes from a service employee in a distant country rather than from an HR manager.

In human resources, AI systems pre-select applicants. In fact, AI systems are already performing all of the traditional management functions¹ in companies around the world. But today's AI managers are creating dehumanized systems in which workers are treated without respect or dignity. Ninety percent of Amazon's Madrid logistics workers stopped work on Black Friday 2018, there were protests at five locations across the UK, and in Staten Island, near New York, workers are trying to establish the first union among Amazon employees in the USA. Employees managed by AI protest against their working conditions with banners reading "We are not robots". Ironically, it's their bosses who are the robots.

Constant monitoring, gamified attempts at motivation and the micromanagement of individual tasks would not necessarily be seen as positive qualities in a modern manager, but that's exactly how today's automated AI systems behave in leadership roles. We clearly have work to do² before we can make AI management a success.

AI as a tool

The easiest way to use an AI system in a company is as a tool that complements human work. The AI system built into Gmail, which completes your sentences as you type, is a clever AI tool. It helps you work, and it can take the initiative and make suggestions as you go, but it doesn't act autonomously: it won't automatically reply to all of your unread messages.
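The "suggest, but never act" boundary that separates a tool from an autonomous agent can be made concrete. Here is a minimal sketch — not Gmail's actual implementation; the class and function names are invented for illustration — of an assistant that may propose a completion at any time but can never change the draft unless the human explicitly accepts:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Suggestion:
    text: str            # the proposed completion
    accepted: bool = False


class SuggestOnlyAssistant:
    """Proposes completions but never acts on its own:
    the human must explicitly accept before anything is used."""

    def __init__(self, complete: Callable[[str], str]):
        self._complete = complete   # any completion model

    def suggest(self, draft: str) -> Suggestion:
        return Suggestion(text=self._complete(draft))

    def apply(self, draft: str, s: Suggestion) -> str:
        # Only an accepted suggestion is ever merged into the draft.
        if not s.accepted:
            return draft
        return draft + s.text


# A toy completion model standing in for the real thing.
assistant = SuggestOnlyAssistant(lambda d: " soon." if d.endswith("you") else "")
draft = "Talk to you"
s = assistant.suggest(draft)
print(assistant.apply(draft, s))        # not accepted → "Talk to you"
s.accepted = True
print(assistant.apply(draft, s))        # accepted → "Talk to you soon."
```

The design choice is the point: autonomy lives entirely on the human side of the `accepted` flag, which is what keeps such a system a tool rather than a colleague.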

Even better AI tools will have a massive impact on companies. They will help people become more efficient, or even create new jobs. Or will they?

"I think if you work as a radiologist, you're like Wile E. Coyote in the cartoon," Hinton told me. "You're already over the edge of the cliff, but you haven't looked down yet. There's no ground underneath." Deep learning systems for radiology and cardiology have already been developed commercially. "It's just very obvious that in five years, deep learning systems will be better than radiologists," he continued. "Maybe it'll be ten years. I said this at a hospital. It didn't go down too well." From "A.I. versus M.D.", in which Siddhartha Mukherjee interviews leading AI researcher Geoff Hinton, New Yorker, April 2017.

You will often hear predictions that AI tools will replace certain professions like radiology. The opposite view is that developing a better tool will only make radiologists better at their jobs, as better and more accurate scanning technologies have done. In fact, since the first medical X-ray in 1896, the field of radiology has grown along with technological developments in new imaging methods such as MRI, ultrasound, PET and CT. Doctors have constantly developed and used better tools and will continue to do so. AI tools will soon be just another part of their toolkit.

Tools and jobs will evolve together.

Another example comes from the field of design. "Generative" machine-learning techniques produce novel designs when an AI system is trained on sample data and a set of constraints. Initial trials show that AI systems can help designers with all kinds of products, from shoes to fonts, and manufacturing companies are already developing improved parts, such as the seat-belt bracket shown here. Just as photographers and illustrators have adopted tools like Photoshop, we can expect designers of all kinds to use generative AI to speed up their workflows or improve their designs.

I believe that tools and workplaces will evolve together, as they always have. With smarter AI tools, we're getting new kinds of designers and artists, lawyers and accountants, writers and editors, engineers and architects³.

AI as a colleague

So if we're worried about AI managers and excited about AI tools, what about AI systems as colleagues? For an AI system to earn the title of "colleague", it must demonstrate independence and be able to take on responsibility. We need to be able to hand it some degree of control.

An AI room thermostat has control and autonomy, but its job certainly isn't enough to qualify it as an employee. A robot vacuum cleaner? Still not enough. A self-driving tractor that works alongside a farmer? An automated trading system? Now we're getting closer. But the idea of an autonomous automated helper is not new. We have had autopilots in airplanes for a hundred years, and they can give us valuable insights into how to deal with AI coworkers in our organizations.

In order for an AI system to earn the title of “colleague”, it must demonstrate independence and be able to take on responsibility.

The problem with autopilots is that they are too good. Pilots rely on them for most of a flight, so when an autopilot fails and human pilots suddenly have to take control, accidents happen. When an Asiana Airlines plane crashed in 2013 after the autopilot-controlled aircraft came in too slowly on approach, none of the four pilots on board noticed. In the recent crashes of the Boeing 737 Max, the pilots could not regain control quickly enough. An employee who suddenly gives up in an emergency is not a safe team member.

Designers of autonomous cars have thought about this and defined five levels of autonomy. At the moment we are mostly at level 2, with functions such as lane keeping or automated parking. At level 4 we will have full autonomy, but with restrictions: within a well-mapped area, in light traffic, in good weather. At level 3, the driver must be ready to take control of the vehicle at any time. The question is: can level 3 ever be safe if the car must be able to return control to the driver at short notice? After all, people are slow, inattentive and easily distracted and, as we know from experience with autopilots, hardly the most reliable fallback system in an emergency. Most automakers hope to skip level 3.
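The level-3 dilemma can be sketched in a few lines: once the system requests a handover, either the human takes over within some time budget, or the vehicle must fall back to a minimal-risk manoeuvre on its own. The levels below loosely follow the SAE classification; the ten-second budget is an invented illustration, not a standard value.

```python
from enum import IntEnum


class Autonomy(IntEnum):
    ASSISTED = 1       # driver does everything, system assists
    PARTIAL = 2        # lane keeping, automated parking
    CONDITIONAL = 3    # system drives, human must take over on request
    HIGH = 4           # full autonomy within a restricted domain
    FULL = 5           # full autonomy everywhere


HANDOVER_DEADLINE_S = 10.0   # assumed takeover budget, for illustration


def level3_step(human_responded: bool, seconds_since_request: float) -> str:
    """One decision step after a level-3 system has requested a handover:
    either the human is in control, or the clock is running, or the car
    must execute a minimal-risk manoeuvre (e.g. stop safely) by itself."""
    if human_responded:
        return "human in control"
    if seconds_since_request > HANDOVER_DEADLINE_S:
        return "minimal-risk manoeuvre"   # no reliable human fallback
    return "autonomous, awaiting takeover"
```

The uncomfortable branch is the last-but-one: if humans routinely miss the deadline, the system needs a full minimal-risk capability anyway, which is roughly the argument for skipping level 3 altogether.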

An employee who suddenly gives up when there is an emergency is not a safe team member.

The issue of handing over control is not only relevant to cars and planes; the question will arise in every field. Mental-health chatbots rely on a human therapist in difficult situations and hand over control of the conversation. AI-based diagnostic systems in medicine that triage patients act as autonomous coworkers and transfer control to human staff through various channels.
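The handover pattern these systems share can be sketched in a few lines. This is an invented illustration, not any vendor's actual logic: each turn of a conversation is scored for risk, and above a threshold the system must either hand over to a human or stop acting on its own.

```python
RISK_THRESHOLD = 0.7   # assumed cutoff, for illustration only


def route_turn(risk_score: float, human_available: bool) -> str:
    """Decide who handles the next turn of a conversation.

    risk_score: 0.0 (routine) to 1.0 (critical), e.g. from a classifier.
    """
    if risk_score < RISK_THRESHOLD:
        return "bot"            # routine turn: the AI system continues
    if human_available:
        return "human"          # hand over control of the conversation
    return "safe-fallback"      # no human reachable: stop, show resources


# The crucial property: above the threshold, the bot never continues alone.
assert route_turn(0.9, human_available=False) != "bot"
```

The interesting design work is everything this sketch hides: how the risk score is computed, and what a trustworthy "safe-fallback" looks like in each domain.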

Some legislation insists on transparency about where that control lies: California, for example, now requires AI systems to identify themselves as "not a natural person". Google's hugely impressive Duplex AI assistant, which can make phone calls to book appointments on behalf of its user, will now identify itself as a computer system, after being accused of deceptive behavior. We need clarity about authorship and liability. AI systems will be given responsibility for work, but will they ever be held accountable?

"Do you realize," Ng said to Darcy, "that Woebot has spoken to more people today than a human therapist could in a lifetime?" Andrew Ng (noted AI researcher) in conversation with Alison Darcy (Woebot creator and clinical psychologist), as quoted in "May A.I. Help You?", New York Times, November 2018.

We shouldn't assume that an AI employee will be a single entity like a human teammate. Not only can any number of them be assigned to work in parallel around the clock, they can all learn as a single instance. As Elon Musk said of the Tesla fleet, "When a vehicle learns something, every vehicle learns."
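The "learn as a single instance" idea is easy to sketch: every agent holds a reference to one shared model, so an update from any agent is immediately visible to all of them. A toy illustration — the classes are invented, and a real fleet would aggregate model updates rather than plain facts:

```python
class SharedModel:
    """One model instance shared by the whole fleet: when any agent
    learns something, every agent benefits immediately."""

    def __init__(self):
        self.knowledge: set[str] = set()

    def learn(self, fact: str) -> None:
        self.knowledge.add(fact)


class Agent:
    def __init__(self, model: SharedModel):
        self.model = model   # all agents point at the same instance

    def observe(self, fact: str) -> None:
        self.model.learn(fact)

    def knows(self, fact: str) -> bool:
        return fact in self.model.knowledge


fleet_model = SharedModel()
fleet = [Agent(fleet_model) for _ in range(3)]
fleet[0].observe("pothole on 5th Ave")
print(all(a.knows("pothole on 5th Ave") for a in fleet))  # → True
```

One agent's observation instantly becomes fleet-wide knowledge, which is what makes an AI "colleague" so different from a human one: there is no per-individual training cost.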

Woebot is a therapy chatbot designed to help people suffering from depression or anxiety. In its first week it spoke to 500,000 people, giving it the opportunity to learn from more interactions than a human therapist ever could. There will be strong economic incentives to deploy such systems in companies. To make those deployments successful, we need to address the issues of control and autonomy.

Today's debates about AI and jobs are too simplistic. Not many professions will disappear completely. The automation of our working environment within organizations will be complex, chaotic and full of unintended consequences. We are already seeing them.


I've tried to frame the problems differently, by thinking about how we will interact with AI systems in companies. AI tools open up new markets and new professions. AI systems that control human work are full of ethical concerns and pose the greatest risks. And if we can create effective new interaction patterns for transferring control and building trust between humans and machines, AI systems that work autonomously as colleagues are the most fascinating area.

Ultimately, these AI systems will succeed in companies if they are good team players and can collaborate with their human colleagues. At first, though, our AI systems will be opinionated colleagues who think they are always right. They take things very literally and occasionally, like the magic broom in Disney's Fantasia, slip beyond our control. Constructs like the five levels of autonomy will shape development debates and, over time, good interaction patterns will emerge. But for AI systems to become team players, we need a new discipline of organizational AI design.


¹ What does a manager actually do? The classic definition by Harold Koontz and Cyril O'Donnell from 1955 includes planning (goal setting), organizing (structure building, resource allocation), staffing (selection, evaluation, development), directing (influencing, guiding, supervising) and controlling (monitoring). There are AI systems on the market for each of these activities. Anaplan uses machine learning to propose new business plans to customers like Del Monte, so they can quickly replan in the face of big changes like El Niño. Kronos software, used by large companies such as Starbucks, automatically schedules employees' shifts to minimize costs, mostly at the expense of workers who can no longer predict their working hours or organize childcare. There is a market for AI systems that pre-select candidates in human resources, with vendors like Pymetrics, Entelo and HiredScore. Unilever uses video interviews with AI facial and behavioral analysis to identify candidates. Humu, a new Google spin-off, analyzes data on thousands of employees to "nudge" them with messages that lead to more effective behavior. Percolata, a provider of algorithmic management software for the retail sector, ranks employees by their sales success and evaluates the performance of each individual. Deliveroo's algorithm automatically compares actual and predicted delivery times to evaluate couriers. AI companies target every aspect of management behavior.

² Frederick Winslow Taylor developed the concept of "scientific management" in the late 19th century. By recording every factory activity and its timing, his method sought to maximize productivity by "establishing many rules, laws, and formulas to replace the judgment of the individual worker". The ideas of "Taylorism" persist, but we've come a long way since then: a modern factory expects workers to propose and implement improvements themselves rather than blindly follow instructions. Algorithmic management is a more recent development and has indeed been called "digital Taylorism". The social and legal constructs around it will take time to catch up. UNI Global Union (which represents 20 million workers worldwide) has published its principles for workers' data privacy, sparking a wider debate about workplace surveillance and how personal information should be used. The Fairwork Foundation is a project to "certify online work platforms, using the leverage of workers, consumers and platforms to improve the wellbeing and job quality of digital workers". Some argue that future automated systems could eliminate bias, reduce discrimination and optimize for fairer, more humane working hours.

³ Just as radiology has evolved as a discipline alongside new instruments like MRI, new creative industries have emerged alongside the tools that enable them. AI is just the newest addition. Musicians have adopted inventions such as the electric guitar (1932), tape recording (1935), and synthesizers and the Hammond organ (1938). The music industry today relies on technology and tools, from creation to production to distribution. Genres like electronic dance music are worth billions of dollars. Spotify has around £5 billion in revenue and relies on AI tools to recommend personalized weekly playlists to 190 million users. The animation, special-effects and games market is valued at over £200 billion and is largely built on tools that have evolved steadily since the 1970s and now incorporate AI techniques. We are at the very first stages of augmented-reality adoption, where we can see the same pattern: a new set of AI-based technical skills and authoring tools is enabling the birth of a new creative economy.

Final remark

Anyone who's worked in AI for a while will have been saying since the beginning of this article: "Wait a minute! You're not just talking about AI systems, you're talking about any kind of computer automation!" Quite right. The impact these systems have on our lives doesn't really depend on whether they are hand-engineered algorithms or ones a computer scientist would classify as AI or machine learning, even if such distinctions were widely understood. But it is undeniable that the power of AI has brought these problems to the surface in recent years. I believe this is really a debate about automation, not AI, and I have brazenly used the term "AI" to get attention. I would love to hear your thoughts on whether I've strayed too far!


Thanks to Kevin Marks, Peter Bloomfield, and others at Digital Catapult for their suggestions. Earlier versions of this article were presented during lectures at the Digital Catapult and the King’s Fund in early 2019.

About the author

Marko Balabanovic is responsible for innovation and AI at Medopad and a non-executive director at NHS Digital, the digital division of the UK's national health service.