Investigation, Extrapolation, Frustration
The joy of predicting a fictional future diminishes with increasing years (3/3).
This is Part 3 of my three-part ‘Predicting the Future’ series. Jump to Part 1 or Part 2.
Over the last two weeks I’ve dug into the creative process behind inventing fictional technologies, using two examples from my own writing as illustration.
This week I want to pivot to the trickier task of trying to align fiction with fact.
‘Why bother?’ you might ask. After all, isn’t fiction just that? True. But I see three issues to be aware of when attempting to generate, via mere words, a made-up world inside another’s mind:
The first is consistency. Introducing random tech in a scattergun fashion is generally a bad idea. Genius-level authors like Douglas Adams are rare exceptions. Just because something is shiny, cool, useful or even amusing doesn’t automatically mean it will be a good fit for a novel.
The second issue, related to the first, is suspension of disbelief. A fictional technology, by definition, doesn’t (yet) exist. A reader must conjure it into their brain, helped or hindered by prior knowledge and experience, familiarity with similar constructs, and a willingness to accept author-given assumptions. They have to feel comfortable with a tech’s supposed purpose and the logic ascribed to its existence. If they don’t, this unbridged gap will only yawn wider and the reader will eventually fall into a chasm of disbelief. This is rarely their fault.
The third issue is a problem which often besets cultural or corporate institutions: l’art pour l’art – ‘art for art’s sake’. Just because a technology can be introduced, doesn’t mean it should. I don’t mean on ethical or environmental grounds (like nuclear weapons, face recognition or even carbon-burning automobiles). Fictional devices can sit quietly in the background, referred to only in passing; or obstruct a character’s progression; even cause trauma or prevent war. But they should always have some part to play in fleshing out a character or moving a plot forward.
With the above understood, it still remains to effectively align the present with the future to create an accessible and believable speculative story. This means asking some difficult questions.
What’s the best way to predict the future?
Unfortunately, there’s no definitive answer to this question, otherwise economists would be billionaires instead of erudite gamblers for overdressed emperors. (Actually, they wouldn’t, but that’s a whole other topic.) But there are some methods which seem to improve futurists’ predictive accuracy – and reputation – when assessing current trends, scientific discoveries, and technological innovations:
A simple rule of thumb might help. The ‘Lindy Effect’ states that the longer something has already existed, the longer it is likely to survive into the future (a short illustrative sketch follows below). You can apply this principle across domains such as technology, culture, and politics.
A research project – Philip Tetlock’s Good Judgment Project – has identified a group of “super-forecasters” who can accurately predict the outcomes of world events. It also reveals some of the traits and techniques that make them so good at forecasting, such as being open-minded, curious, and humble.
Simply looking at cycles and sequences of events might help, although timing and probability are important. If you have an American-oriented bent towards individual endeavour, then you might even attempt to take charge of your own future in the pursuit of some worthy life-goals.
Finally, some commentators argue that the key to predicting the future is to understand people, their motivations, and their actions. They also advise being aware of your own biases and assumptions, and seeking feedback and diverse perspectives.
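To make the first of those methods concrete, here is a minimal sketch of the Lindy heuristic, assuming the common reading that a non-perishable idea or technology’s expected remaining lifespan grows roughly in proportion to its current age. The items and the proportionality constant are purely illustrative.

```python
# A rough sketch of the Lindy heuristic, assuming the common reading that a
# non-perishable idea or technology's expected remaining lifespan is roughly
# proportional to its current age. Items and constant are purely illustrative.

def lindy_remaining_years(current_age_years: float, proportionality: float = 1.0) -> float:
    """Expected further survival time as a multiple of how long it has already survived."""
    return proportionality * current_age_years

for name, age in [("the printed book", 570), ("a five-year-old smartphone app", 5)]:
    print(f"{name}: around for {age} years, "
          f"Lindy-expected remaining ~{lindy_remaining_years(age):.0f} years")
```

It is a heuristic, not a law: it says nothing about perishable things such as people or fresh produce, whose remaining lifespan shrinks rather than grows with age.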
Those who study and predict the future often widely publicise their personal vision of how they think the world will change over the coming decades or centuries. Some dare to make specific and quantifiable predictions, whilst others are more visionary, depicting futures which are not necessarily wholly realistic or desirable, but are still instructive and help focus debate.
Famous futurists include inventor Ray Kurzweil, physicist Michio Kaku and transhumanist Aubrey de Grey. I also want to include author William Gibson, who helped pioneer the gritty cyberpunk subgenre and projected cyberspace’s virtual constructs into my head almost four decades ago.
However, if science and rational reasoning don’t appeal for creating future scenarios, you could instead pay more attention to your emotions and intuition. This seems to work for those carving out a virtual existence within social media. You never know, mass visualisation of positive events might mean welcoming a utopia into our future, instead of a doom-laden post-apocalyptic dystopia. Just try to forget about the causes of major historical conflagrations when doing this.
How accurate are predictions about the future?
Predicting the future is not an exact science, and even the best forecasters can be wrong or surprised by unforeseen events. It’s easy to say that today will be much like yesterday, with tomorrow similar to today. This principle, on average, works very well for weather forecasting, road traffic jams and crowd control, but less so for mini-tornadoes, car accidents and riots. Also, the further into the future you go, the more likely it is that a Black Swan event will disrupt a knowable trend.
Prediction accuracy varies depending on the methods used, the assumptions made, and how much feedback is received and taken into account. Generally speaking, the more specific, quantifiable, and testable a prediction is, the more likely it is to be accurate. The converse is also the case: vague, qualitative, and untestable predictions are more likely to be wrong or unfalsifiable – the latter effect is still utilised to sell snake oil and promulgate religious belief.
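One standard way to make ‘specific, quantifiable, and testable’ concrete is to score probabilistic forecasts after the fact. The Brier score – the mean squared difference between the probability you assigned and what actually happened – is widely used in forecasting research. Below is a minimal sketch with invented forecasts and outcomes, comparing a sharp forecaster against one who only ever says ‘maybe’.

```python
# A minimal sketch of scoring quantifiable predictions with the Brier score:
# the mean squared difference between forecast probability and the 0/1 outcome.
# The forecasts and outcomes below are invented for illustration only.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Lower is better: 0.0 is perfect; a constant 50% forecast always scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

confident_forecaster = [0.9, 0.8, 0.1, 0.95]   # probabilities assigned to four events
vague_equivalent     = [0.5, 0.5, 0.5, 0.5]    # "it might happen", expressed numerically
what_happened        = [1, 1, 0, 1]            # 1 = event occurred, 0 = it did not

print(brier_score(confident_forecaster, what_happened))  # ~0.016
print(brier_score(vague_equivalent, what_happened))      # 0.25
```

A forecaster who hedges everything at fifty-fifty can never be proved badly wrong, but can never earn a better score than chance either.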
Below is a summary of factors which can help or hinder predictive accuracy:
Factors contributing to accurate predictions include:
Being open-minded, curious and humble.
Seeking diverse perspectives and information sources.
Updating beliefs and expectations based on new evidence (a short sketch of such an update follows these lists).
Using probabilistic reasoning and scenarios.
Applying simple rules and heuristics that have proven effective over time.
Factors hindering accuracy include:
Being overconfident, dogmatic, or ideological.
Relying on a few big ideas or theories that explain everything.
Ignoring or dismissing feedback and criticism.
Using vague or ambiguous language that can be interpreted in multiple ways.
Failing to account for uncertainty, complexity and randomness.
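To make the ‘updating beliefs on new evidence’ point above a little more concrete, here is a minimal sketch of a single Bayes’ rule update; the prior and likelihood numbers are invented purely for illustration.

```python
# A minimal sketch of updating a belief on new evidence via Bayes' rule.
# The prior and likelihoods below are invented for illustration only.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of a claim after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start fairly sceptical of a forecast claim, then see moderately supportive evidence twice.
belief = 0.2
for _ in range(2):
    belief = bayes_update(belief, p_evidence_if_true=0.7, p_evidence_if_false=0.3)
print(f"updated belief: {belief:.2f}")  # ~0.58
```

The point is not the arithmetic but the habit: each new piece of evidence should move a belief a little, rather than flipping it all or nothing.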
Some predictions of my own
As with my second futurism post in this short series, I’ve put some of the above principles and methods into practice by offering a few of my own predictions. These centre mostly on the theme of climate change, extrapolating the consequent geophysical and environmental changes and the societal reactions that follow. To make it more interesting, I’ve phrased each forecast event as a question based on my particular assumptions about the future’s trajectory.
Here they are:
How soon will Canada be invaded from the south?
Which large country will first lay claim to Greenland?
Is pollination a prerequisite for feeding people?
When will the Arabian Peninsula become vacant territory?
Can the Rhine and Ganges be canals after the glaciers disappear?
Will genetics always matter more than ethics?
Who will Europe use as mercenaries in the First Climate War?
Note, firstly, that I’ve postulated reasonably foreseeable events of large magnitude and potential consequence – events which an actuary might be comfortable with when calculating insurance premiums (even if most governments continue to be negligent in mitigating their predictable consequences).
Secondly, unpredictable events (so-called ‘unknown unknowns’) still claim a more dominant place in human history. Such extreme outliers tend to loom larger in our collective consciousness and are often rationalised in hindsight.
We will continue to remain the authors of our collective future.
What about AI?
Overhyped, and included here more for completeness as a topic du jour than for any greater predictive accuracy, machine learning models are not magic bullets able to solve (or eliminate) our problems. They are limited by the same data sources and quality constraints as other methods, by inherited assumptions and biases, and by the uncertainty and complexity of the real world. They are, more simply, limited by their creators. In the same way that we continue to place more trust in the decisions of doctors and pilots than in the machines that assist them, people will continue to be the final arbiters on ethical, legal, and social questions – for better or worse.
But such tools can assist in improving forecasts and decisions by identifying unseen trends and better qualifying risks and opportunities. They can also help politicians explore the consequences of their decisions under different future scenarios, as well as evaluate potential solutions to the challenges we continue to unwittingly create for ourselves. They can also help monitor developing social movements and nudge citizen behaviour towards a desired goal. Elections have already attracted such nefarious use, and coercive control of populations remains a possibility.
However, the fictional field of psychohistory, created by science fiction author Isaac Asimov in the 1940s, is not yet upon us. Asimov’s imagined discipline combined history, sociology and mathematical statistics to make general predictions about the behavioural dynamics of large human populations.
Therefore, rather than relying on AI to predict the future for us, we should use it as a tool to augment our intelligence, creativity and judgment. We should also be aware of the limitations and risks of AI, and ensure that it is used responsibly, transparently and fairly.
In conclusion...
I believe the future is not predetermined and humanity’s outcomes are not fixed. They instead depend on the choices and actions we make now, without us being overly deferential to the past. Rather than meekly accepting or vociferously rejecting predictions, we should use the possibilities they portray to challenge our assumptions, stimulate our imaginations, and inspire our creativity.