100 Days Blog

Day 100 - The Singularity

Submitted by Sam on 28 August, 2011 - 23:47

The trend towards increasingly rapid technological advancement is set to profoundly change what it means to be human, and within a very short span of time. Some futurologists see the exponential pace of advancement in information-based technologies as inexorably leading to a technological singularity, a period of technological growth so fast and so far-reaching that human life will be transformed forever. Following co-dependent, mutually reinforcing advances in fields such as artificial intelligence, molecular nanotechnology and genetic engineering, the singularity would represent the intersection of our natural biological intelligence with our designed technologies, a merger that will dissolve the distinctions between the biological and the mechanical, the virtual and the real. After a singularity of this kind, perhaps the only identifiable trait shared between biological humanity and its non-biologically enhanced descendants will be the inherent drive towards self-improvement, the irresistible desire to extend and expand physical and mental reach, a quality unique to us amongst all living species.

The singularity may begin with the emergence of non-biological intelligence based on the architecture of the human brain. By default, such a being would be at least three million times faster at processing and transmitting signals than its biological model, simply because electronic signals can propagate at close to the speed of light whilst electrochemical signals in mammalian brains are limited to around a hundred metres per second. A network of these machines could share knowledge and processing resources at extremely high speed, orders of magnitude faster than human-to-human knowledge transmission through language. Linked to the internet, non-biological intelligence could digest and master all of the knowledge of the human-machine world, creating entities with the knowledge, intelligence and raw processing power to iteratively re-engineer themselves, improving their own abilities to think (and to self-improve again) at an exponentially faster rate.
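
That "three million times" figure is simply the ratio of the two signal speeds. A quick back-of-the-envelope check in Python (the round numbers are assumptions, not measurements):

# Rough comparison of signal propagation speeds (order-of-magnitude sketch only).
speed_of_light = 3.0e8        # m/s: approximate ceiling for electronic or optical signalling
neural_conduction = 100.0     # m/s: roughly the fastest myelinated axons in a mammalian brain

ratio = speed_of_light / neural_conduction
print(f"Electronic signalling is roughly {ratio:,.0f} times faster")   # prints ~3,000,000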

If nanotechnology brings non-biological neural prostheses into the biological human brain, the exponential doubling of processing power could rapidly lead to the non-biological portion of human intelligence dominating the fixed capacity of the biological portion, until only non-biological intelligence remains. Through a continuum of upgrades and replacements, the personality, memories, talents, hopes and dreams of the biological human would be preserved, yet would form only a tiny fraction of an intelligence trillions upon trillions of times more powerful than its non-enhanced counterpart. Compared to biological humans, post-human subjective time would expand indefinitely: thoughts travel millions of times faster, intelligence capacity is unbounded, and every hour yields many years of human technological progress as measured in today's terms, enhancing the entity's ability to compute and to think faster still.

Eventually, the singularity could spread throughout the observable universe, reclaiming dumb matter and energy for use in an ever-expanding optimally-engineered mind, growing at the fastest speed at which information can travel. A post-singularity race could saturate the entire universe with intelligence, and would seem indistinguishable from magic – or God – to another non-transcendent intelligence.

Day 099 - Staying around to see it

Submitted by Sam on 28 August, 2011 - 02:57

The future is a very exciting place. The exponential progression of technology points to a revolution that will not only improve living standards around the world, but reduce and then eliminate global poverty. Nanotechnology will provide clean drinking water where before there was none, enable targeted drug delivery and early-warning detection of and defence against disease, and allow for the cheap and efficient manufacture of any product or material that can possibly be built, using only the most basic of raw materials. Life expectancy will grow as regenerative medicine matures, and human intelligence, creativity and memory will be enhanced far beyond their natural biological limits. These tantalizing glimpses into human (and post-human) potential are logical extensions of the trends we see today, but we cannot accurately predict when they will become manifest. We can, however, confidently predict that many millions of people will die of age-related illness before a longevity breakthrough extends human life indefinitely, that many millions will die of hunger and inadequate sanitation before nanotechnology decontaminates water and brings agricultural crops to the desert, and that many, many millions will die of disease and accident before minds and memories can be uploaded from neural architecture to non-biological substrates.

But perhaps death today need not be so tragically and irreversibly final. We have technology today that can preserve a body for the technologies of the future to revive. With the expectation that a future medical technology based on advanced nanotechnology will be able to restore health in all but the most extreme of cases, we can use cryonics to store bodies in liquid nitrogen, where they will be preserved without deterioration for centuries, awaiting revival by an as-yet unrealized medical technology. Cryonics rests on the observation that the medical definition of death has historically changed as medicine has advanced, and that what constitutes death today may be reversible by the medical technology of tomorrow.

At present, only tissues and small organs have been reversibly cryopreserved, and the whole premise of human cryopreservation relies on the theoretical possibility that the personality and memories encoded in brain structures will persist long enough after legal death to be preserved by the process. As the molecular nanotechnology researcher Ralph Merkle has pointed out, however, we can evaluate cryonics in only one of two ways: we can either sign up, and die if it doesn't work and live if it does, or we can ignore it entirely and die regardless of whether the process works or not. Given that the cryopreservation procedure can only legally begin after the pronouncement of death, there seems to be nothing but money to lose in taking the cryopreservation gamble, and the chance to cheat death and experience an indescribably advanced future to be gained if it pays off.
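
Merkle's point is really a two-by-two decision matrix. A tiny sketch (the outcome wording below is mine, not Merkle's):

# Merkle's cryonics wager: your choice crossed with whether revival ever becomes possible.
outcomes = {
    ("sign up",    "revival works"): "live again in the future",
    ("sign up",    "revival fails"): "die (and lose the fees)",
    ("do nothing", "revival works"): "die anyway",
    ("do nothing", "revival fails"): "die anyway",
}

for (choice, world), result in outcomes.items():
    print(f"{choice:10s} | {world:14s} -> {result}")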

Day 098 - Harnessing the sun

Submitted by Sam on 27 August, 2011 - 01:42

The laws of physics impose very definite constraints on the information processing capabilities of intelligent civilizations, placing limits on how much heat can be dissipated away from information processing systems, how fast computers can run, how much information can be stored, and what size and density storage and computational devices can reach. Barring technological regression caused by anti-technological movements, laws or accident, the information processing needs of a civilization will grow exponentially until the limits set by physical law are met. In order to service this expansion to the very limits of possibility, a technologically advancing civilization will need to harvest ever more energy with ever greater efficiency, a logical escalation which will eventually lead to a Kardashev Type II civilization, one which captures most or all of the energy output of its local star.

If a civilization survives long enough to demand the total energy output of its star, it will have to embark on a megastructural engineering project, creating a network of orbiting solar collectors around the star to intercept as much of its output as possible. These hypothetical structures are referred to as Dyson spheres, after the physicist Freeman Dyson, who first formalized the concept of a shell-like system of orbital solar collectors as a means of achieving maximal solar energy capture. In its most feasible form, a Dyson sphere is more strictly a series of Dyson rings: huge energy-gathering satellites deployed in rings orbiting the sun, sharing a common axis of rotation, with an orbital radius approximately equal to the earth-sun distance. A similar variant is the Dyson swarm, in which solar-collecting satellites and space habitats are arranged in denser and more complex patterns to intercept more of the star's output, at the risk of orbital perturbations and instability as the independent orbits of the cloud of collectors interact with each other. Additional variants of the Dyson sphere include non-orbital collectors, held aloft by light sails so as to remain stationary relative to the star.

The idealized Dyson sphere is the Dyson shell, a solid shell of matter that would completely enclose a star and thus intercept all of its emissions. It would, however, likely require more matter to construct than is available in our entire solar system, even if all of the planets, from the rocky worlds to the gas giants, were broken down into building materials. Furthermore, no existing or theoretical material could withstand the compressive stress that a static, rigid sphere large enough to encompass the sun would exert on itself.

Practical Dyson spheres will therefore never be 100% efficient: gaps between collectors allow energy to escape, and the collectors themselves absorb and then re-radiate a portion of the total available energy. Theoretically, the wavelengths of this re-radiated energy would be sufficiently different from the wavelengths expected for the star's spectral type that they could be detected at interstellar distances. If an alien civilization has constructed a Dyson sphere using the kind of materials available in our solar system, then the infrared radiation re-emitted by the solar collectors should be detectable from earth. If there are Kardashev Type II civilizations out there, we should be able to see them through the altered light of their star systems.
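
To make the detection idea concrete, Wien's displacement law gives the wavelength at which a body's thermal emission peaks: a sun-like star peaks in the visible, while warm collectors re-radiate in the mid-infrared, which is the tell-tale excess a search would look for. A minimal sketch (the 300 K collector temperature is my assumption, not a figure from the post):

import math  # not strictly needed here, but handy if you extend the sketch

WIEN_B = 2.898e-3                      # m*K, Wien's displacement constant

def peak_wavelength_m(temperature_k):
    """Wavelength of peak blackbody emission at the given temperature."""
    return WIEN_B / temperature_k

sun_surface = 5778.0                   # K: the Sun's photosphere
collector   = 300.0                    # K: assumed waste-heat temperature of a collector

print(f"Star peaks at  {peak_wavelength_m(sun_surface)*1e6:.2f} micrometres")   # ~0.50, visible light
print(f"Shell peaks at {peak_wavelength_m(collector)*1e6:.2f} micrometres")     # ~9.66, mid-infrared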

Day 097 - Biophilic universes

Submitted by Sam on 26 August, 2011 - 01:44

It is almost tautologically obvious that the conditions of our universe are very finely tuned for life. The fundamental physical constants which underlie all of time and space fall within a remarkably narrow range of values that conspires to produce a universe conducive to life; had any of them been even slightly different, that would not be the case. The smallest difference in the electric charge of the electron, for instance, or in the ratio between the masses of the proton and the electron, or in the strength of the interactions between nucleons in an atom, would produce a radically different universe, with stars governed by drastically different physics, unable to produce the diversity of elements necessary for the building blocks of life, and where the pathway to biological evolution would be forever blocked.

These facts have been compiled into the philosophical argument called the 'strong anthropic principle', which states that the universe must necessarily take on values consistent with the conditions necessary for life, otherwise there would be no intelligent observers to observe them. In an attempt to construct a plausible scientific rationale for this effect, the complexity theorist James Gardner has synthesized the startling 'selfish biocosm' hypothesis, an explanatory paradigm that accounts for the life-friendly qualities described by the anthropic principle by treating the emergence of life and intelligence not as secondary, accidental adjuncts to the primary physical phenomena of the universe, but as crucial parts of creation itself.

In the selfish biocosm theory, the fundamental laws and constants of nature function to encode the emergence of life and intelligence at the most fundamental level, like cosmic-scale DNA. All physical laws are therefore a developmental code, providing a blueprint prescribing how the universe will grow and how it will produce life. In this scheme, life's developmental destiny is to evolve into high intelligences with a command over the physical laws of the universe themselves, able to engineer them to give birth to new, intelligent universes. The universe itself is a replicator, guided to spawning other universes with similar life-generating properties by the intelligent life it encodes. Through experimentation and engineering, intelligent life will create a directed universe-scale evolution where the laws and constants that are conducive to life are selected and optimized in each newly created universe. The universe creates life to spawn more universes to create more life.

Day 096 - Expanding inwards

Submitted by Sam on 24 August, 2011 - 23:03

Self-replicating autonomous space probes are one of the fastest theoretical ways for a civilization to colonize space. They allow an exponentially fast expansion which would rapidly move a civilization through the three stages of the Kardashev scale – a measure of the total amount of energy harnessed by a civilization. The scale relates the amount of energy a civilization has at its disposal to its level of space colonization, with Type I civilizations commanding the resources of their home planet, Type II those of their star system and Type III those of their galaxy. The Kardashev scale underpins a widely accepted expansion hypothesis, which predicts that some fraction of advanced civilizations will tend to meet their increasing energy demands through physical expansion, space travel and colonization.
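
Carl Sagan later proposed a continuous version of the scale that interpolates between the types from a civilization's total power use in watts. A small sketch of that formula (the present-day figure for humanity is an approximation of mine):

import math

def kardashev(power_watts):
    """Sagan's interpolation: Type I ~1e16 W, Type II ~1e26 W, Type III ~1e36 W."""
    return (math.log10(power_watts) - 6) / 10

print(f"Humanity today (~2e13 W): Type {kardashev(2e13):.2f}")        # ~0.73
print(f"Full solar output (~3.8e26 W): Type {kardashev(3.8e26):.2f}") # ~2.06, a Dyson-sphere builder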

An alternative hypothesis for the fate of intelligent civilizations has been proposed by the futurist John Smart, who predicts that the drive towards expansion will be satisfied not by space colonization but by “transcension” into inner space. Contrary to expansion hypotheses, the transcension hypothesis states that all sufficiently advanced civilizations will advance using progressively less space, time, energy and matter in each step of their evolutionary development, becoming ever more miniaturized and doing more and more with less and less. The hypothesis is based on the observation that complex systems frequently grow with accelerating speed through increases in both the efficiency and the density of their physical resource inputs, using fewer finite resources to encode ever more information.

The transcension hypothesis predicts that civilizations will be driven, with few exceptions, towards technologies which increase in both spatial locality and time density, as systems become smaller and faster. Computational devices will be developed beyond the nanoscale to produce femtotechnology, where information is encoded and computed by subatomic particles, perhaps using a combination of single-electron transistors, photonics or spintronics. All sufficiently advanced civilizations will shrink their intelligent systems down through the roughly 25 orders of magnitude that separate the scale of atoms from the Planck length (the smallest physically meaningful length), a vast inner space available for engineering, almost as broad as the 30 orders of magnitude inhabited by terrestrial biological life.
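
The "25 orders of magnitude" figure follows directly from the two length scales involved. A quick check (the atomic diameter is a round-number assumption):

import math

atomic_scale  = 1.0e-10    # m: roughly the diameter of a hydrogen atom
planck_length = 1.6e-35    # m: the Planck length

orders_of_magnitude = math.log10(atomic_scale / planck_length)
print(f"~{orders_of_magnitude:.0f} orders of magnitude of 'inner space'")   # prints ~25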

Once the physical limit of computational miniaturization is reached, the only way to continue exponential growth is to approach a black-hole-like singularity of energy density. So much computational circuitry would be packed into so small a space that it would collapse in on itself to form a black hole, so dense that not even light could escape its gravity. According to some theories, only processed information would be able to pass back out from beyond the black hole's event horizon, encoded by entangled high-energy particles released as matter falls through it. With the right manipulation, a black hole would be the ultimate computer, and it may be the destiny of all civilizations that exhaust the computational capacities of their local resources.

Day 095 - Colonizing the Galaxy

Submitted by Sam on 24 August, 2011 - 02:49

If we do ever encounter extraterrestrial intelligence first hand, then it seems most likely to be machine intelligence rather than a biological intelligence like us. Biological beings are simply not robust enough (being too huge and too squishy) in comparison to purpose-built engineered solutions for crossing interstellar distances. It seems likely that any space-faring civilization, including our own, will eventually reach a level of sophistication where it becomes more effective and more economical to send intelligent machines to colonize and explore the universe than it would be to send spacecraft built to sustain bulky, vulnerable biological life.

A civilization in possession of advanced nanotechnology and equipped with nano-scale universal constructors could in theory create self-replicating robot spacecraft that would be highly efficient at galactic exploration, moving from planetary system to planetary system to source the materials to build copies of themselves. If a planetary system were anything like our own solar system, the replicators would have a wealth of raw material to work with, harvesting asteroids, comets, dust and planets as appropriate. This type of machine is known as a Von Neumann probe, named after the mathematician John von Neumann, who was one of the first to develop a mathematical theory of how machines could make replicas of themselves.

Equipped with a basic propulsion system and carrying a payload as small as a nanoscale self-replicating constructor, these probes could colonize the entire Galaxy in only 4 million years. Directed by on-board programming, perhaps even an artificial intelligence powered by nanocomputers, the probes would replicate in each new system, sending some copies further out into the Galaxy and instructing others to explore the current system and make scientific observations to be transmitted back to the home planet. Equipped with a universal constructor, some probes could be programmed to terraform suitable planets, creating life-sustaining environments to be 'seeded' with a biological payload (either synthesized from the molecular level up using genomic instructions stored on board, or derived from stored frozen embryos) to create a living colony of biological pioneers tended by artificial sentinels. This process would repeat with exponential rapidity, creating swarms of trillions of self-replicating probes and allowing a race to explore and colonize a vast area of space in an astonishingly short amount of time.
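
A toy wavefront model shows how the exponential spread collapses into a simple crossing-time estimate once the frontier of probes is saturated. With the assumed parameters below (all mine, not from any published study) the answer comes out in the low millions of years, the same order of magnitude as the figure quoted above:

# Toy model of a colonization wavefront expanding star by star (all parameters are assumptions).
galaxy_diameter_ly = 100_000     # light-years, roughly the Milky Way's disc
hop_distance_ly    = 10          # typical distance to the next target star
probe_speed_c      = 0.1         # cruise speed as a fraction of the speed of light
replication_years  = 100         # time to mine materials and build the next generation of probes

years_per_hop  = hop_distance_ly / probe_speed_c + replication_years
expansion_rate = hop_distance_ly / years_per_hop          # light-years per year
crossing_time  = galaxy_diameter_ly / expansion_rate

print(f"Wavefront crosses the Galaxy in ~{crossing_time/1e6:.1f} million years")  # ~2.0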

As with the risk of a replicator-induced, earth-bound grey-goo scenario, Von Neumann probes carry the threat of dramatic misfirings which could consume entire planetary systems in obedience to their directive to reproduce. Whilst artificially intelligent probes would be resistant to error, there is always the risk of mutation in the replication process, just as there is mutation in biological reproduction. A cosmic ray could cause a misalignment in the atomic architecture of a probe during its construction, creating a mutation that would eventually evolve a new “species” of probe, potentially with a different interpretation of its programming. It would take only one of the trillions of probes to malfunction for the Galaxy to be threatened by a technological cancer.

Day 094 - Rare Earth

Submitted by Sam on 23 August, 2011 - 00:11

Most arguments for the existence of extraterrestrial life rest on the 'principle of mediocrity', which states that the properties and evolution of our solar system, including the processes that led to life on Earth, are not unusual in any important way and could be common throughout the Universe. The principle of mediocrity's counter-argument is the 'rare earth hypothesis', which concludes that all of the conditions which conspire to permit complex life on Earth are exceptionally rare. The rare earth hypothesis has been presented as a plausible solution to the Fermi paradox, and an application of it concludes Stephen Webb's book Where is Everybody?.

In order to support life as we know it, a star system needs to be located in a very specific region of the galaxy. It must be close enough to the galactic centre that its star contains a certain proportion of heavy elements, which are crucial for the formation of rocky planets bearing the molecular components for life's building blocks. At the same time, the planetary system needs to be far enough away from the dangerous galactic centre, with its high levels of radiation and its supermassive black hole, for carbon-based life to develop. This region is known as the galactic habitable zone, and it may encompass only around 20% of the stars in our galaxy; the other 80% are exposed to conditions that make the evolution of complex lifeforms unviable.

Of the stars in the galactic habitable zone, perhaps only about 5% are like our sun – stable, not too bright and not too dim – and so we need only consider planets orbiting sun-like stars in our search for complex life in the form that we already know. Of these planets, we then need only consider those with near-circular orbits which allow them to remain in their star's continuously habitable zone for billions of years at a time – that is, planets without erratic orbits, close enough to their star to maintain liquid water on their surfaces for long, continuous periods. Because a star grows hotter and brighter as it ages, its habitable zone moves outward over time, so a planet needs a very particular orbit to avoid either permanently icing over or burning up during the billions of years required for complex life to evolve. Perhaps, as Webb suggests, as few as 0.1% of the remaining planets orbit within this continuously habitable zone.

Now, with the conditions on these remaining planets set to support the kind of carbon-based life that we know here on Earth, we have to contend with the probability that life will evolve. In itself, this could be an exceptionally rare event, or it could be a probable occurrence whenever conditions permit. We don't know, but if we say that the chance of life evolving now that everything is prepared is 0.05, then there are half a million planets in our galaxy supporting life.

Whittling down the chance that these planets will evolve intelligent life are the disasters and mass extinctions from which life cannot recover. Asteroid impacts, global glaciation and supervolcanoes are potential culprits, and may account for up to 20% of all life-bearing planets permanently losing their inhabitants, or at least never forming complex multicellular life. Indeed, the evolution of eukaryotic cells (those containing complex structures like mitochondria or chloroplasts) from prokaryotic cells (those without a nucleus or other complex structures) took many millions of years on earth, and is by no means inevitable. It may not happen on many of these worlds at all, and they may never see multicellular life. Webb estimates that one in forty life-viable planets might have the conditions that conspire to permit multicellular life to evolve from single-celled ancestors, but he cautions that this (like all of the other variables) is merely a guess.

Finally, in order for these planets to bear the kind of advanced, abstractly intelligent civilizations that could have developed the technologies to contact or visit us, Webb believes that the remaining planets need the conditions necessary for complex language to develop – the crucial enabling step which allowed our species to truly master technology. This final step is perhaps so unlikely that it has only ever happened once, here on Earth. In this scheme, there are still planets in our galaxy where life is common, but it is usually only unicellular, rarely multicellular, and intelligent perhaps only once. This model of the universe leaves us with many fabulously diverse extraterrestrial ecosystems to explore, but ultimately alone as an advanced intelligence. It neatly explains why we have never heard from nor seen an alien.
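
Webb's whittling-down argument is essentially a chain of multiplications. Here is a sketch that reproduces the numbers quoted above; note that the starting star count of 10^12 is an assumption chosen so that the chain lands on the half-million figure, and every factor is, as Webb himself stresses, a guess:

# Rare Earth whittling as a chain of fractions (all factors are guesses from the post).
stars_in_galaxy        = 1e12    # assumed starting population, chosen to match the text's arithmetic
galactic_habitable     = 0.20    # fraction inside the galactic habitable zone
sunlike                = 0.05    # fraction of those that are stable, sun-like stars
continuously_habitable = 0.001   # fraction of planets in long-lived, near-circular habitable orbits
life_arises            = 0.05    # chance that life gets started where conditions allow

living_planets = (stars_in_galaxy * galactic_habitable * sunlike
                  * continuously_habitable * life_arises)
print(f"Planets supporting life: ~{living_planets:,.0f}")                        # ~500,000

survive_catastrophe = 0.80       # not sterilized by impacts, glaciation or supervolcanoes
multicellular       = 1 / 40     # single-celled life goes on to complex multicellular life
print(f"Planets with complex life: ~{living_planets * survive_catastrophe * multicellular:,.0f}")  # ~10,000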

Day 093 - Nested realities and the afterlife

Submitted by Sam on 22 August, 2011 - 00:07

What are the chances that we're not in a simulation? We would have to assume that no civilization in the history of the universe – no individual or group from any civilization at any time – would ever have the resources or the inclination to run a simulation of our own universe. As it breaks no fundamental laws of physics to grant that a civilization with a steady rate of technological progress will one day attain the computational power necessary to run such a simulation (in fact, barring a mass extinction and given sufficient time, it seems something of an inevitability), we would have to assume that either no civilization in the history of the universe ever attains this level of computational resource, or no civilization ever achieves the ability to programme such a simulation, or that all sufficiently endowed civilizations choose not to.

Speculating about what a civilization trillions of times more advanced than ours would want to do is unavoidably moot, but at a minimum we can safely say that it would, once more, break no physical laws for such a simulation to be programmed; it is perfectly feasible in theory that, given astronomical intelligence and a sufficient amount of time, a perfect simulation of our perception of our universe could be programmed and run. And so we are left in the position that either no civilizations run human-simulations because they can't, or none do because they have all converged to reliably prevent their computational resources from being used in this way. Perhaps all advanced civilizations throughout the universe happen to enforce laws or codes of ethics or other prohibitions which successfully forbid the running of human-universe simulations. Or perhaps they all lose interest in doing so, having moved on to loftier goals with more scientific or aesthetic value.

If these conditions don't hold, then it is likely that we're living in a simulated universe, and likely too that we will one day develop the technology to simulate a universe ourselves. Perhaps our simulation will be terminated before we succeed, in which case long-term planning is futile. But perhaps we will be permitted to run our own nested simulation, much as a virtual computer can run inside a real computer today (and then a virtual computer inside that...). If this possibility became a reality, we would have to conclude that the probability that we are living within a simulation is high, and furthermore we would be encouraged to suspect that our simulators' universe is itself a simulation as well, and so on.

The theological overtones of the simulation argument have not escaped notice. Each simulator would in many ways stand in a Godlike relationship to their simulated humans: they would be omnipotent, in that they could pause, modify and re-run the simulation; they would be omniscient, in that they could monitor everything within it; and they would necessarily be the sole creator and destroyer of the whole simulated universe.

In fact, in a world of nested simulators, simulants can reasonably infer the possibility of an afterlife, and that their simulators have the power (but perhaps not the inclination) to judge, reward and punish actions by some ethical standard. Perhaps each person's life is recorded by the computer running the universe, and can be restored whole into a new simulation at will after death. Further, simulators could theoretically upload your consciousness into an artificial body to allow you to interact with their own universe (whether simulated or otherwise). To increase the chance of being resurrected by simulators, a reasonable strategy would be to make yourself as interesting as possible, with the aim of catching the eye of the simulators and so being preserved. Recurrently catching the eye of ever higher hierarchies of simulators could perhaps result in eventually being born into the “basement-level”, the real universe.

Day 092 - Is it real?

Submitted by Sam on 21 August, 2011 - 02:01

If an advanced alien civilization has enough computing power to create a virtual reality sufficiently diverting to negate its need or desire for galactic colonisation, it follows that such a civilization would have the computational capacity (and coding skill) to create a simulation for and of us. Oxford University professor Nick Bostrom has estimated that a computer with the mass of a planet, operating at the efficiency we already know to be theoretically achievable with nanotechnology (which probably does not even approach the physical limits of optimal computation), would be able to perform 10^42 operations per second, and would thus be able to simulate the entire 'mental history' (every thought, feeling and memory) of humankind in less than one millionth of a second. And a sufficiently transcendently advanced civilization may have a huge number of computers of this scale.
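
The "less than one millionth of a second" claim is just a division of two of Bostrom's estimates; taking the 10^36 operations figure for humanity's entire mental history (the same figure used later in this post):

# Bostrom's arithmetic for a planet-mass computer simulating all human thought.
planet_computer_ops_per_s = 1e42    # estimated operations per second of a planet-mass nanotech computer
human_mental_history_ops  = 1e36    # upper estimate of operations needed for humankind's mental history

seconds_needed = human_mental_history_ops / planet_computer_ops_per_s
print(f"Entire human mental history in ~{seconds_needed:.0e} seconds")   # ~1e-06, a microsecond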

In fact, as Bostrom has argued, we don't need to postulate superintelligent extraterrestrials in order to suspect that we might be living in a computer simulation – all we need to concede is that at some point in the future our own descendants may reach a level of technological sophistication that permits such a complex virtual-reality simulation to be programmed and run. And our descendants, unlike our hypothetical extraterrestrials, may have a clear motive for simulating us: later generations of humans may wish to turn their phenomenal supercomputers to detailed simulations of their own past, to understand more about their forebears. With this goal, they could set about creating fine-grained simulations of the physical world and of historic human brains, building a believable virtual universe peopled by conscious simulants (us?) unaware of their artificiality.

But how highly resolved does this virtual reality have to be? Certainly, our experience of the world seems unfathomably rich, covering a spectrum where our experiments into quantum physics permit us to look at the very small, and our space programmes permit us to look at the very distant and the very large. The beauty of a simulation, though, is that it can cheat. It doesn't need to persistently model the microscopic structure of the entire observable universe, but merely those parts necessary to ensure that the simulated consciousnesses don't perceive any irregularities. Whenever a human looks into microscopic phenomena with an electron microscope, or peers into the vastness of space with a radio telescope, the observed details can be filled in by the simulation on an ad-hoc basis, rendering only what is necessary for each human to remain unsuspicious, whilst leaving the unobserved universe unresolved.

In a completely unrelated (and incidentally unverified) demonstration of this principle of on-the-fly rendering to conserve computing requirements, an Australian software company called Euclideon Pty Ltd has claimed to have developed a new kind of computer graphics technology that allows a graphical environment to be created from an essentially unlimited number of virtual 'atoms', rather than a (very) finite number of traditional polygons, by using a search algorithm to work out which of those atoms need to be rendered to generate any given scene from the viewer's perspective at any given time. The rest of the millions of atoms go unrendered in the background, and are only called into view when they are needed. This technology, if it is verified, would allow graphics of effectively unlimited detail, constrained only by the granularity to which the artist and modeller work.
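
The principle at work is essentially lazy, memoized generation of detail: compute a region the first time someone looks at it, and never touch regions no-one observes. A minimal sketch of that general idea (this has nothing to do with Euclideon's actual, unpublished algorithm; the function here is a stand-in):

import functools

@functools.lru_cache(maxsize=None)                 # remember detail once it has been generated
def detail_at(x, y, z, depth):
    """Generate (and cache) the fine-grained detail for one observed region."""
    print(f"  rendering region ({x},{y},{z}) at depth {depth}")
    return hash((x, y, z, depth))                  # stand-in for expensive world generation

# Only regions that are actually observed ever get computed.
detail_at(1, 2, 3, depth=8)    # first look: computed
detail_at(1, 2, 3, depth=8)    # second look: served from the cache, no recomputation
# Everything never looked at simply stays unresolved.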

And so extending this 'show only what is needed' principle, a hugely advanced human or alien civilization could cut a lot of computational corners and still create an utterly compelling (and indistinguishably realistic) simulation of all of human experience, requiring 'only' around 10^36 operations to simulate the brains of 100 billion humans, with the environmental rendering as an additional cost. With an excess of computational power and programming ability, a simulation could keep track of the brain-states of every conscious human, and modify them and/or the simulated environment whenever needed to maintain the integrity of the illusion – for instance, filling in worldly detail whenever a human were about to make a microscopic observation. Should any errors ever occur, the software of the simulation could be paused, modified and re-run, and no-one need be any the wiser.

If a civilization possessed planetary-scale computers, they would have enough computational power to run such simulations many, many times over, while only using a fraction of their total computational resources. Bostrom realized that if there is a substantial chance of a civilization ever developing such a capability, be it our children making an ancestor-simulation or an alien civilization making a virtual human zoo, then the number of simulated humans ever to have existed is likely to outweigh the number of biological humans ever to have existed, making it more probable that we are living in a simulation than living as the 'original' biological template for one.

Day 091 - Goodbye universe, hello virtual reality

Submitted by Sam on 20 August, 2011 - 00:10

One of the most interesting partial solutions to the Fermi paradox hypothesizes that we haven't been visited by alien civilizations – and nor should we expect to hear from them – because they do not travel, colonize or have very much engagement with the physical universe at all, having engineered Matrix-esque virtual realities for themselves that are far more compelling and fulfilling than the real thing.

This scenario returns to an earlier topic of this blog, and imagines that the universe is indeed described by a small set of laws, and that sufficiently advanced civilizations will tend to discover them, eventually decoding rules for all phenomena in the universe (including themselves). At such a peak of understanding, such a society would essentially find that its science was complete; its physicists would have found the theory of everything, and there would be nothing left that could not be explained. To be clear: they would have discovered the origins of life, understanding the exact chemical conditions that gave birth to their ancestors, and the full range of alternative conditions that could give birth to all other lifeforms throughout the universe. All of their astronomers' observations about the universe at large would have been arranged into an infallible body of knowledge, and they would have models explaining the exact origins and ends of time and space, and everything in between.

Having unlocked the secrets of this universe, perhaps without even leaving their own solar system, such a race could readily determine that further space exploration is unnecessary, and turn instead to a self-serving and insular virtual reality. With the immense computing power available to such a highly developed civilization, such a simulation would be utterly compelling. Their artificial realities could offer them universes richer, more sensorily stimulating and more complex than our own.

This speculation, again, cannot solve the Fermi paradox unless it is a sociological condition that applies to every advanced civilization, but it does seem to me to have a rather compelling logic. Even without a retreat into virtual reality, an extraterrestrial civilization in possession of a workable theory of everything could very easily conclude that interstellar travel is too costly, too difficult or ultimately too pointless to merit the effort, which would explain why we have never heard from anyone else out there. Perhaps it is every civilization's destiny to ultimately slake its own curiosity so fully that continued exploration of, and participation in, the physical universe becomes at best unnecessary, and at worst too predictable to contemplate.
