Theoretical physicist Stephen Hawking died recently at the age of 76.
He had a significant influence on how we view science today, noted for his work with Sir Roger Penrose on singularities at the origin and future of the universe – from the Big Bang to black holes. His work had significant implications for the search for a unified theory linking Einstein’s general relativity with quantum mechanics, and discussions that originated from it continue to reverberate in theoretical physics.
Beyond doing an excellent job of raising the public profile of black holes, Hawking also wrote and spoke publicly on issues beyond his research. He expressed concerns about the possible impacts of artificial intelligence, and the questionable wisdom of attracting alien visitors.
Was he presenting new concerns? Or were these ideas already deeply rooted in prior science, or envisaged in fiction? The answer lies in the complex relationship between science and science fiction.
A brief history of fictional science
There was a time when science fiction writers may have imagined they were exploring the frontiers of the future. When the science caught up with the fiction, and in many cases exceeded it, this relationship turned on its head. Enduring themes of science fiction, which survived the impact of this scientific apocalypse, include interests expressed by Stephen Hawking – putting ourselves at the mercy of machines, communicating with non-human life and phenomena that are so grandly cosmic that they defy normal comprehension: sentient machines, alien visitors and black holes.
Science fiction authors used to make mileage out of technological speculation. From the 1930s through to the 1950s, video telephones, atomic bombs and thinking machines were wonderful things to speculate about, and no one knew for certain what was out there in the rest of the universe.
Robert Heinlein talked about bases on the Moon run by free-wheeling libertarians and Isaac Asimov wrote of future star-spanning, galactic-scale human empires. Alien visitors were common – whether for good or bad – and ravening beams of destruction had been tearing through the black emptiness of space since the mid-1930s for E.E. ‘Doc’ Smith. You could even make cities fly.
Science overtakes science fiction
In 1957 the Russians launched the first orbital satellite – Sputnik – and perhaps this was the beginning of the end for scientific fantasy.
It is strange to think today that when the meticulous director Stanley Kubrick was working on 2001: A Space Odyssey – released in mid-1968, and now celebrating its 50th birthday – no-one even knew for certain what the surface of the Moon was like.
Kubrick had access to in-depth technical support from NASA and other space technology experts, and this strongly influenced his designs. But even NASA didn’t know whether the lunar landscape was rocky or smooth, or exactly how Earthrise on the Moon might appear.
The first pictures of Earth from space had been taken in 1946, but it was not until Christmas Eve 1968 that a high-quality colour image of the Earth rising over the Moon was taken by the crew of Apollo 8. Despite Kubrick’s access to the best information available, you can see the differences between his imagery and the real thing.
But Kubrick’s 2001: A Space Odyssey has elements of realism that are not found in modern science fiction films – the silence of space being perhaps the most striking. What people remember about 2001, however, more than the realism, is HAL – the sentient machine who goes haywire.
2001: A Space Odyssey touched on subjects that were significant to Hawking – artificial intelligence, alien contact, and even wormholes in space-time, or whatever it is that happens when Bowman goes through the stargate. These were still being presented on the basis of well-informed guesswork, however – and it might be argued that the release of this movie, which attempted to portray space travel and technology as realistically as possible, marked a point of crisis for science fiction.
The Apollo missions revealed Earth to be a blue marble and, as Jean Baudrillard has suggested, when you have seen people go to the Moon and come back again in a “two-room apartment with kitchen and bath”, the magic and wonder may have evaporated. Astronauts might indeed just be “spam in a can…”, as the legendary test pilot Chuck Yeager cynically suggested.
The future now
After this, science fiction had two choices. Choice one: do realistic science, and get the science right so people couldn’t criticise it (which has even inspired an academic paper on the work of author Greg Bear). Or choice two: go beyond it. Create science so speculative and conjectural that it could not be categorically denied.
The future has become now, as British New-Wave science fiction author J.G. Ballard observed, and our fears about the future are that it will simply be more of the same, and boring. For his part, Ballard explored the “inner space” of human psychology in extraordinarily ordinary environments and alternate universes, approaches which enable some writers to evade criticisms based on scientific credibility.
Science fiction has to build a vision of the future that is not just more of the same. As human knowledge, and the application of that knowledge through technology advances, it becomes harder to find scientific subjects that are truly inspiring.
These days, 2001: A Space Odyssey has appeared at number 12 on a list of “the most boring films ever”.
Artificial intelligence at the level of sophistication and consciousness portrayed in science fiction, with the potential to cause the concerns raised by Hawking, is a long way away. But Larry Tesler – former Chief Scientist at Apple – has suggested this will always be the way people think about it because “intelligence is whatever machines haven’t done yet.”
Hawking was not alone in prophesying the end of humanity as the logical endpoint of successfully building a sentient machine. We may think of this concern about what machines may do to us as recent, but as long ago as 1863 Samuel Butler encouraged us to rise up against the machines before we became their servants. He predicted that our increasing reliance on technology would end with us serving it rather than it serving us: the more science and technology progressed, the more dependent we would become, until it was indispensable. Butler’s proposal was immortalised in science fiction as the inspiration for the “Butlerian Jihad” in Frank Herbert’s seminal 1965 novel Dune, with the edict:
Thou shalt not make a machine in the likeness of a human mind.
The signs of this dependence on machines are around us now, and subtly pervasive – most of us have smart phones, and many other devices too.
Artificial intelligence is frightening for several good reasons. Perhaps the least threatening is that sentient machines could do our jobs as well as, or better than, we can – making us redundant. Robots have already done this with many manufacturing jobs. But robots who think could conceivably make human minds as unnecessary as our manual labour.
These, however, are not the “holy grail” of artificial intelligence – these examples are better described as “expert systems” that simulate human capabilities, like your fridge ordering some more milk because it has realised there’s none left.
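To make the distinction concrete, here is a minimal sketch of what such an “expert system” amounts to – a hypothetical smart fridge (the item names and thresholds are invented for illustration), following fixed if-then rules rather than anything resembling thought:

```python
# A minimal, purely illustrative rule-based "expert system":
# a hypothetical smart fridge that reorders items when stock runs low.
# It applies fixed hand-written rules; there is no learning or understanding.

def fridge_decision(stock: dict) -> list:
    """Return a shopping list based on simple hand-written rules."""
    rules = {
        "milk": 1,  # reorder when fewer than 1 carton remains
        "eggs": 6,  # reorder when fewer than 6 eggs remain
    }
    return [item for item, threshold in rules.items()
            if stock.get(item, 0) < threshold]

print(fridge_decision({"milk": 0, "eggs": 12}))  # → ['milk']
```

However sophisticated the rules become, the system only ever simulates a narrow slice of human capability – which is exactly why such examples fall short of the “holy grail” of artificial intelligence.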
A more disturbing recent development is the ability of algorithms and expert systems aided by humans to influence public opinion, and voter intentions. When machines can play poker better than humans, it demands we consider how else they might out-think us.
What people tend to think of as true artificial intelligence, and the type that appears most often in science fiction, and in the fears of people like Stephen Hawking, is the achievement of “general intelligence” – human level abilities. With the addition of consciousness, this is known as “strong AI”.
Strong AI is the stuff of science fiction nightmares – such as HAL in 2001, Ava in Ex Machina and, apparently more benevolent but by implication no less disturbing, the self-actualising virtual companion in Her.
Perhaps our biggest issue with artificial intelligence is the ethics of it – not whether it is ethical to build one, but whether an AI could ever be part of a human ethical environment that relies on communal concepts of moral accountability.
Would an AI have any feelings of responsibility towards humans, regardless of how we feel about it? What is to stop an AI with sufficient access to resources from exterminating all human life because it finds it convenient to do something that will incidentally cause us harm, as the philosopher Nick Bostrom has suggested? Or would it stick to fixing elections in its favour?
AI researchers suggest that there is quite a lot that can be done to stop this, not least including a hardware off-switch, and not being silly enough to give an AI autonomous control of anything particularly important.
There are also suggestions that we could program an AI to be ethical in a human sense – and not just Asimov’s Three Laws of Robotics, whose flexibility and loop-holes were the basis of the majority of Asimov’s robot stories.
Regardless of how carefully we program an AI to “do the right thing” by us, there is always the possibility of it finding internal exceptions, as Gödel’s theorem implies. Determinism and complexity theories also suggest that any belief we might program such a sophisticated machine to respond unequivocally to our orders may be doomed to failure. As Stephen Hawking would remind us, failure is not an option.
Alien real-estate agents
Hawking’s other words of warning were on the subject of contacting aliens – the logical premise being that any aliens who could both (a) pick up our communications, and (b) pop over for a visit, would be in possession of powers to transform space-time that are simply inconceivable to us. Our theoretical approaches to faster-than-light travel have some serious obstacles to overcome.
Theoretical approaches include the Alcubierre drive, which requires the creation of “exotic” matter at the limits of, or beyond, our very concepts of physics.
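For the curious, the shape of the problem can be seen in Miguel Alcubierre’s 1994 “warp drive” metric (a standard result, sketched here in units where c = 1, not detail from the original article):

```latex
ds^2 = -dt^2 + \left[\, dx - v_s(t)\, f(r_s)\, dt \,\right]^2 + dy^2 + dz^2
```

Here $v_s$ is the speed of the “warp bubble” and $f(r_s)$ is a shaping function equal to 1 inside the bubble and falling to 0 far from it. Solving Einstein’s field equations for this geometry requires a negative energy density – hence the need for “exotic” matter that no known physics can supply in the quantities required.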
Again, the question of ethics arises – why would an advanced alien civilisation be interested in, or feel any responsibility towards humans? Cautionary tales abound in science fiction about the possibilities. A particularly gruesome example is The Screwfly Solution – a story by James Tiptree Jr. that won a Nebula Award in 1977. Spoiler alert: in the story, we discover that the horrific genocide committed on humanity may just be the result of some alien real-estate agents tidying up the back yard before putting the “house” on the market.
Science fiction writers and directors are fond of the trope of the alien menace. In Alien: Covenant, director Ridley Scott imagined the awful consequences of an AI believing an alien species is more deserving of survival than the human one.
Is there any reason to believe that visiting aliens would have any more noble or less disruptive intentions than colonists reaching the Americas, or Pacific islands? Perhaps they might consider Earth a good place to send convicts, like Botany Bay in Australia. It might not bode well for the indigenous Earth people.
Stephen Hawking’s most significant contributions to science have been on the nature and characteristics of black holes. These were already imagined in physics and in science fiction, becoming more topical for science fiction writers towards the end of the 1960s when Hawking’s work was emerging.
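For context (a standard result, not detail from the article itself), Hawking’s best-known discovery is that black holes are not entirely black: they radiate, with a temperature inversely proportional to their mass:

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```

The more massive the black hole, the colder it is – a stellar-mass black hole would be far colder than the cosmic microwave background, which is why Hawking radiation has never been observed directly.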
Probably the most popular book to deal with the concept of black holes was Hawking’s A Brief History of Time, published in 1988. Black holes had appeared in popular media before, even in a Disney film in 1979, but realism had not been a strong point.
It is testament to increasing public knowledge of, and fascination with, these phenomena that faults in the portrayal of the black hole Gargantua in Interstellar – despite the film being well researched – were considered interesting enough to warrant critical attention in mass-media news reporting. They also inspired a detailed explanation in the academic literature of how a black hole might actually appear.
To infinity and beyond
Did Hawking and other scientists discover things that had a significant influence on science fiction, or were they publicists of things that authors and specialists already knew?
The answer may be a bit of both – certainly the public comprehension of “grand science” has made it possible to create science fiction that is more readily comprehended, and discussed, by the non-expert. This, along with scientific progress, has changed the nature of science fiction – writers and film-makers can no longer produce “lazy” work, but they can sidestep criticism by presenting the unknowable, as Kubrick did at the end of 2001: A Space Odyssey.
The history of debates about and representations of artificial intelligence, aliens and even black holes pre-dates Hawking, even though he, and his contemporaries, have raised public awareness of these outside of a science fiction audience.
One thing is certain, however: even though science has rendered the premises of much historic science fiction obsolete, the relationship between science and science fiction is just as strong today as it has ever been.