
The Extinction of Humans

by Reiner Kraft
April 25th 2017

Is the rise of AI and robotics the beginning of the end of civilization as we know it?

“The main business of humanity is to do a good job of being human beings.” Kurt Vonnegut, Player Piano (1952).

Vonnegut’s post-war novel Player Piano depicts a dystopia in which the majority of humans are gracelessly shouldered out of the way by robots that are faster, cheaper and, grimly, less likely to get ‘tangled up in the machinery’. Vonnegut isn’t alone in imagining the negative impact automation can have on our societies; Hollywood makes millions of dollars from the notion, and Elon Musk and Stephen Hawking have voiced their concerns in no uncertain terms.

Should society be so troubled by the growing involvement of AI and robotics in our lives? Only by examining the current impacts of deep tech, strategizing for the future, and considering the larger ethical questions can we better understand our relationship with automation and dispel the visceral fear of extinction at the hands of our own creations.

Deep Tech Impact
At the end of last year, the Japanese daily newspaper Mainichi reported that Fukoku Mutual Life Insurance, a Tokyo-based firm, would replace thirty-four of its insurance claim staff with IBM’s Watson Explorer from January 2017. Until then, much of the mainstream dialogue about the impact of AI and robotics on society had centred on blue-collar workers such as truckers and logistics staff. It seems fitting that the anxieties prompted by the Fukoku story had already been so exquisitely characterised by 2001: A Space Odyssey’s HAL; Kubrick’s dark vision was not about the mechanisation of labour, but of intellect.
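To make the mechanisation of intellect concrete, here is a minimal, hypothetical sketch of the kind of text-classification pipeline that automated claim processing builds on. Watson Explorer’s internals are proprietary, so this is emphatically not IBM’s method; the claim notes, labels and model choice are all invented for illustration.

```python
# Illustrative only: a toy claim-triage classifier, not IBM's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: free-text claim notes and past decisions.
claims = [
    "hospital stay of three nights following surgery",
    "routine annual check-up, no treatment given",
    "surgical procedure with two-week recovery period",
    "outpatient consultation only, no procedure performed",
]
decisions = ["approve", "review", "approve", "review"]

# TF-IDF turns each note into a weighted word vector; logistic
# regression then learns which wording predicts which decision.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, decisions)

print(model.predict(["overnight hospital admission after minor surgery"]))
```

The point is not the model but the economics: once historical decisions exist as training data, routine assessments become automatable, and the thirty-four roles in the Fukoku story sit squarely in that band of routine.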

There are two ways to interpret and prepare for the inevitable shift in workplace norms. IBM’s Watson Explorer is reported to increase productivity by 30% and to save Fukoku around ¥140m in its first year after installation; gains like these are hard for company leaders to ignore. The first response, then, is to identify the areas in which AI or robots will excel. Five years ago, a data scientist and engineer colleague of mine asked: “Will there be a need for data scientists ten years from now?” He foresaw that trends in automation would soon reach his own job, and accepted he would have to do something else if he wanted to stay ahead of the curve. Office jobs are as much up for debate as farm work. The disruption spans every industry, which leads me on to my second point: what, then, is a safe choice of job?

Creativity will remain a safe option for the foreseeable future, despite the rumblings surrounding content-creation bots. The same goes for jobs that require coaching, care-giving, and that most human quality: error-making. Deep tech researchers are focused on improving and optimising, not on mimicking the errors that make the arts and invention so innovative. A machine might make an error that happens to raise clickthrough rates, but its response would still be data-driven. Human errors – random inspirations – create interesting outcomes and help people grow. Consciousness is central to our apartness from AI: no matter how fast a machine can compute, or how successful it becomes at Go, consciousness can’t be copied, a disadvantage that places significant limitations on how far AI capabilities can potentially advance.

We, Robot
Augmented humanity is already here. It has been since Roger Bacon’s invention of the magnifying glass in 1250. As with any new technology, there are negatives: sometimes obvious, more often ambiguous. Our reliance on ‘digital memory’ is a regular feature in tech-wary news streams. University of Birmingham cognitive neuroscientist Dr Maria Wimber speaks of “the risk that the constant recording of information on digital devices makes us less likely to commit this information to long-term memory, and might even distract us from properly encoding an event as it happens.” Similarly, while Spotify’s and Netflix’s recommendation algorithms are delightfully sophisticated and efficient, when people no longer have to find what they want for themselves, agency is diminished and a homogeneous comfort zone is synthesised. I shouldn’t have to stress the risks inherent therein.
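To illustrate that comfort-zone mechanic, here is a minimal sketch of item-based collaborative filtering, one common family of recommendation techniques. It is not Spotify’s or Netflix’s actual system (those are proprietary), and the ratings matrix is invented.

```python
import numpy as np

# Invented toy ratings matrix: rows are users, columns are titles.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 4],
    [1, 0, 4, 5, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two item-rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def recommend(user, k=2):
    """Score each unseen title by its similarity to titles the user liked."""
    scores = {}
    for cand in range(ratings.shape[1]):
        if ratings[user, cand] > 0:
            continue  # already watched/listened
        scores[cand] = sum(
            ratings[user, liked] * cosine_similarity(ratings[:, cand], ratings[:, liked])
            for liked in range(ratings.shape[1])
            if ratings[user, liked] > 0
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))  # titles most like what user 0 already consumes
```

Because every candidate is scored purely by its resemblance to past behaviour, the loop narrows by construction; serendipity has to be engineered in deliberately.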

However, what doomsday think-pieces often miss is the remarkable and natural progress technology enables. Comparisons to science fiction are unavoidable, but the combination of organic and artificial already has concrete precedents in entirely routine procedures. “If you will power,” says metahumanist philosopher Dr Stefan Sorgner, “then it is in your interest to enhance yourself.” Augmentations will become commodities, with consumers taking advantage of them as a matter of course: just as we did with eyeglasses, prosthetics, pacemakers and hip replacements.
Visual and audio augmentation form part of the transhuman vanguard, but DNA enrichment and brain-machine interfaces are just around the corner, as Elon Musk so contentiously opined at this year’s World Government Summit. Specialised implants will allow for enhanced computational capabilities: the first iterations trembling and uncertain, but growing exponentially more capable as the technology goes mainstream. The question then is where human development will go: not the extinction of humans, but self-determined evolution. Whatever is possible will happen; invention is unstoppable.

Fear and Coding in the Future
According to Moore’s Law, computing power doubles roughly every eighteen months to two years. Cloud-based GPU infrastructure is making ever more computation available to build ever larger and more sophisticated models. There is no doubt that the line between human and machine will continue to blur over time. Eventually, artificial intelligence will pass the Turing Test with ease – perhaps not in all areas, but under certain test conditions this is a certainty. Researchers have unprecedented access to deep learning, just one click away: they can experiment with new models and new data, which in turn leads to new applications or improvements to existing ones. The prospects for business and for society are fiercely exciting: existing applications that do not yet leverage AI will start doing so, giving us better functionality and quality, while new types of applications that are not currently possible will emerge within ever-expanding borders.
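As a back-of-the-envelope illustration of what that doubling compounds to (a popular simplification; Moore’s original observation concerned transistor counts, and the doubling period here is an assumption):

```python
# Compute grows by a factor of 2 ** (years / doubling_period).
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"{years:>2} years -> ~{growth_factor(years):,.0f}x the compute")
# At an eighteen-month doubling period, ten years is ~100x instead of ~32x.
```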

Are these anxieties somewhat justified? Certainly. When the Luddites revolted to save their livelihoods, they did so from a very real fear of being replaced. What followed the initial economic fallout of the industrial revolution, however, was the creation of hundreds of new occupations and an uptick in the quality of life. It is a pattern that could repeat itself, with basic living wages and increased free time both on the discussion table. Even as Stephen Hawking voiced his fears about AI, he also spoke of its incredible possibilities. What remains to be debated, therefore, are the philosophical questions at hand.

Famously, Silicon Valley operates as autonomously as possible from politicians and lawmakers, so the tussle over who gets to govern AI will be interesting. Huge companies such as Alphabet/Google wield significant influence, as evidenced last winter when then president-elect Donald Trump invited tech leaders to a summit in New York. Regardless of who’s in charge, in the short term the human capacity for intuition and ingenuity remains the ace in our deck, while transparency and a well-educated population mitigate the potential for matters getting wildly out of hand. Vonnegut’s world of the player piano remains a fantasy and we’re not quite at the Singularity yet, but we can still indulge in a little imagination. It’s what makes us human, after all.