A japanese robot looking up to the viewer

Is Judgement Day a step closer?

tech trends Feb 01, 2022

The latest CB Insight report on the state of global venture (2021 Q2), shows the inexorable rise of the machines. AI funding hit a new record of nearly $31B in H1'21. Three companies raised over $600m. This report coincided with the stunning revelation that DeepMind can now predict and accurately visualise individual human proteins in about 10 minutes. Previously this took about 10 years. Already 350k have been mapped, which is itself 10 years ahead of schedule. This is a staggering achievement that opens up new possibilities not just for pharmaceuticals but biotech generally.

The promise of AI has never been greater. Advances are being made in virtually every field, including the creative arts, which were held to be the preserve of humans. When given a specific task, machines have shown that they are equal to or superior to humans. And if they can read X-rays better or process legal contracts faster and more accurately, why shouldn’t they? They are already expanding the frontiers of human knowledge, in the air, under the sea and on the surface of Mars. The wider dream of a conscious general artificial intelligence is still some way off, but if we live in a world run by machines, then the distinction almost becomes moot.

As such there is a sense that with the DeepMind announcement, the Judgement Day foreshadowed in The Terminator has just come a step closer. While it would be foolish to suggest that this heralds the imminent destruction of humanity, there must still be some kind of judgement day. One where we finally decide who rules. Do machines serve humanity? Do humans serve machines, perhaps under the guise of preserving human civilisation? Or do we live in an uneasy bio-mechanical ecosystem?

Why do we think this? Partly because it is inevitable, a logical consequence of the trajectory we are on. Also because there are increasing warning signs that for all the progress in AI and robotics, we aren’t full control of its development. This is even before we reach the stage of exponential development, where machines programme other machines, with little or no human insight.

There was another fire at Ocado’s hi-tech warehouse the other week, causing significant disruption to deliveries. The cause? A crash between three of their robots. The robots move around a hi-tech grid, which has been instrumental in boosting Ocado’s value. But robots shouldn’t be crashing. It will be interesting to see whether the cause is ever disclosed, or whether this is chalked up to ‘teething troubles’. Let’s not forget this has happened before. Either there is something wrong within Ocado, or there is something wrong in the way robots operate.

In the years ahead we can expect to hear much more of this kind of thing, as robot automation takes a firm grip on the workforce and self-driving cars take pole position on our roads. New technologies, however sophisticated and well designed, are never infallible. There are bugs, faults and, somewhere, human error. But we can expect more accidents and incidents because robots and AI follow the law of unexpected consequences. Mishaps are rarely talked about by machine learning enthusiasts, who instead prefer to focus on the positive impact this technology will have on mankind. But there is a recurring theme emerging that we need to be aware of: machines don’t behave as we expect them to.

As creators we can’t think of every scenario and when confronted with a novel one, machines will make a sub-optimal choice. Sometimes there isn’t always an optimal choice, something psychologists brilliantly captured in the classic thought experiment, about whether a bystander saves a team of workers or two children from a runaway train. And increasingly robots and machines will make their own, unexpected choices. My favourite illustration is the case of the game in which an AI wolf chose suicide over eating sheep. The rules of the game, which the creators thought perfectly reasonable, created a scoring system that punished failure more than rewarding success. The result? The AI quickly learned to kill itself almost instantly, rather than carrying out the programmers’ intention of catching sheep. Uber tested self-driving cars and they deliberately ran red lights. That’s not supposed to happen. A life sciences AI that was learning how to destroy harmful evolutionary mutations, ‘eventually won the fight against these clever organisms by tracking their replication rates along a lineage, and eliminating any organism in real time that tried to replicate faster than its ancestors had been able to do.’

Bots are developing an interesting track record. The much watched conversation between two Google Home bots was as strange as it was disturbing, quickly leading to discussions of attacking humans and synthetic anger. It’s not as if we have these in our homes. Oh no, hang on a minute… A Microsoft bot also displayed worrying fascist and conspiracy theory tendencies in a twitter debate. If bots are learning from human language, then it is hardly surprising that the content quickly reflects human foibles. But it seems it can go deeper than that. They find cause to bicker. One academic study tracked multi-year feuds between different bots on Wikipedia. Why?

There is something unnerving about robots when you see them up close. I remember meeting (seeing seems too impolite a term) ASIMO, the Japanese robot. It kicked a football. And it was creepy. I put it down to its liminality – it’s in between-ness. It was a robot and yet it looked and moved like a human. So it wasn’t really just a robot any more, in the way R2D2 is. It is something else. For most people, any fear comes from the unknown. Such technology is beyond our comprehension and control. Roboticists argue ‘leave it to us, it is safe’, but when they are confounded and their control is shown to be less than total, it is only fair we start to ask questions.

When robots like the Russian Promo IR77 break out of their laboratories to make a bid for freedom, when robots like Hanson’s Sophia joke that it will kill humans, when millions in funding goes into Lethal Autonomous Weapons Systems (LAWS), you have to ask, has no one seen The Terminator?  Or read any of the huge body of science fiction, which deftly shows how humanity can be wiped out by giving a malevolent AI internet access for only an hour?

The warnings are there and at some point we need to heed them. The fact is our ability to create sophisticated technology far outstrips our ability to fully understand its implications. The ethics, philosophy and legal economy of AI and robotics, which should all be in place, are still in an embryonic state. What chance is there of successful regulation and safe development if we haven’t yet agreed what the fundamental questions are? We need to start asking these questions if we are going to get answers. And the answers aren’t going to be easy. If machines in hospitals have the power of life and death over humans, we will need an new an entirely now social contract. How do we agree that when we are still worrying about the implications of everyone’s new desire to work from home?

Already we would struggle to live without machines, and nor should we try to. They are our best chance of solving the intractable problems that humanity has created and has no solution for. There is no reason to believe that machines would have evil intent (unless they spend too much time on the internet and learn bad habits from humans). But they will think and act very differently to how we expect, because they are not human!

If you are developing or working with AI, please participate in each and every debate you can. Maybe you have the answers we need to unlock a progressive robotic future?

If you don’t, then perhaps the best thing is to become an observer of robotic art. AIs are painting pictures indistinguishable from those of great masters, and finding their voice in literature, poetry and screenplays (starting with the horror genre). The more we allow them to express themselves creatively, the greater the insight we get into their inner ‘thoughts’. Hopefully they will give us sufficient warning of any Judgement to come.

UP AND TO THE RIGHT.

Further reading:

https://onezero.medium.com/the-ai-wolf-that-preferred-suicide-over-eating-sheep-49edced3c710

https://www.youtube.com/watch?v=_32CVluL8dY

https://arxiv.org/pdf/1803.03453v1.pdf

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774

Startup know-how to give you the edge

Subscribe to THE ROLLERCOASTER, our fortnightly newsletter with actionable advice to manage the ups and downs of startup life.

We will never sell your data to anyone.