Next year it will be two hundred years since Mary Shelley’s Frankenstein first appeared in print: if not the very first science fiction novel, it is certainly one of the first. Ever since, we as a society have not only been fascinated with the monsters that the science fiction genre has imagined into being, but have also understood (or at least intuited) that guidelines would have to be placed on technological advances: not to limit or stifle scientific inquiry, but to avoid the monsters that can be created from it. Indeed, the novel’s subtitle, The Modern Prometheus, is very revealing. For like Prometheus in Greek mythology, who gave mankind the gift of fire – which gives light and warmth, but can also burn or destroy – the applications of science can be a double-edged sword. Ethical responsibility and caution, then, must be the watchwords when pushing the boundaries of science and developing new technologies on the back of it. Or has the genie already escaped from the bottle? …
His genius in theoretical physics aside, Stephen Hawking is on record as saying that artificial intelligence (AI) ‘will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.’ His is not the only voice of concern. No less than Bill Gates, Steve Wozniak and Elon Musk have expressed their reservations about where this rapidly evolving robotics and AI technology is heading – Hawking’s vision being the most apocalyptic of them. As science fiction gradually becomes science fact, it seems that AI is destined to play an increasing role in our day-to-day lives over the next few decades. Alongside the prospect of eradicating poverty and disease, and even giving us control over climate change, Hawking warns us: ‘AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy; and in the future, AI could develop a will of its own, a will that is in conflict with ours.’
In his disquieting book Our Final Century, Martin Rees tells us that not only is modern science the genie which long ago broke free of our control, it’s also far too clever to be tricked back into its bottle. Nuclear ‘megaterrorism’ is a major concern, he writes. But threats posed by biotechnology or nanotechnology are far greater. Entire populations could be erased by engineered airborne viruses. Self-replicating nanobots may swiftly spiral out of control and devour the biosphere, reducing it to dust in a matter of days. We are in an age, writes Rees, ‘when a single person can, by one clandestine act, cause millions of deaths or render a city uninhabitable for years, and when a malfunction in cyberspace can cause havoc globally … Indeed, disaster could be caused by someone who is merely incompetent rather than malign.’ Now whether it is down to bioterror or bioerror, or to atom-smashing particle accelerators that produce concentrations of energy intense enough to create a black hole that sucks in our entire planet, Rees, one of the world’s leading astrophysicists and space scientists, asserts with a straight face: ‘[T]he odds are no better than 50-50 that our present civilisation on Earth will survive to the end of the present century.’
At the end of his bold book, Sapiens: A Brief History of Humankind, Yuval Noah Harari concludes his grand sweep of history with some spooky stuff. Currently in labs across the world, he says, genetic engineering is allowing scientists to transcend the laws of nature, replacing them with the laws of intelligent human design. Science is not only cloning sheep, manipulating organic tissue to grow a human ear on the back of a mouse, or redesigning a white rabbit at the cellular level to make it fluorescent green. It is also looking to implant the reconstructed DNA of a mammoth into the womb of an elephant, or the DNA of a Neanderthal into a woman’s womb – thus producing the first mammoth to be born in the last 5,000 years, and the first Neanderthal child in the last 30,000! But the change is not only biological and technical. Science, especially genetic computer programming, is gearing up to alter human consciousness and identity too. It is devising ways in which computers and human brains could fully interface with one another, each being able to retrieve data from, and send data to, the other. There are also attempts afoot to recreate a complete human brain inside a computer, with electronic circuits in the computer emulating the neural networks of the brain. Such transformations could be so fundamental that they will call into question the very notions of human memory, human consciousness and human identity; or what it would even mean to be human. Science and technology are turning things upside down and inside out like never before. And many of these developments are happening at breakneck speed. Little wonder, then, that Harari called this last chapter, in which he explores all these projects, ‘The End of Homo Sapiens’.
I spent all of my teen years growing up on a council estate in East London, constantly surrounded by the sounds of ‘conscious lyrics’: in this case, reggae music that spoke of the tribulations, injustices and desperations of life in a ‘concrete jungle’. So when rap and hip-hop came along at the end of the 70s, it wasn’t really my cup of tea. Early hip-hop was anything but conscious. Women, wild parties, boasting, lusting and craving material things were its usual concerns. But in mid-1982, Grandmaster Flash and the Furious Five’s The Message broke this material mould. It caught my attention because of its conscious lyrics; its social commentary on the hardships and anxieties of urban life at the time. But it is another set of verses from the Furious Five which first set me thinking about where we could be heading with all this (then) new public access to computer technology. In 1984, Beat Street voiced these edgy vexations:
‘Peoples in terror, the leaders made an error
And now they can’t even look in the mirror.
Cause we gotta suffer, while things get rougher
And that’s the reason why we got to get tougher.
So learn from the past and work for the future
And don’t be a slave to no computer.
Cause the children of Man inherit the land
And the future of the world is in your hands.’
Between this and Hazel O’Connor’s earlier haunting Eighth Day, computers have both delighted and disturbed me. Of course, 2001: A Space Odyssey and The Terminator didn’t exactly help endear the notion of completely autonomous AI to my generation. Indeed, the prophets of doom have long envisioned a harrowing future for mankind, where AI systems become superintelligent and threaten the very survival, or normalcy, of humanity. The Prophets of God, by contrast, foretell an ultimately better, more humane future – either due in significant part to science and technology, or in spite of them. The monotheistic teachings of God’s Prophets reveal that this future will come about as we work for immediate and sincere human welfare, under a compassionate God. That entails putting ethical imperatives before all else, and objectively weighing up risks; especially in terms of experiments in science with a conceivable ‘Doomsday downside’. Never before in the long history of our planet have these words been quite so alarmingly literal: the future of the world is in our hands.