SGEARS

Society for the Global Ethical Application of Revolutionary Sciences

Mission Statement:

To help ensure a smooth transition to a post-scarcity world where most labor can be made unnecessary.

Foundational Principles of SGEARS:

  1. Humanity shall not be replaced.

  2. Apocalypse shall be prevented.

  3. Maximum sustainable population shall be maintained.

  4. Cures for all ailments and aging shall be sought to benefit all.

  5. Peaceful expansion of humanity beyond Earth shall be pursued.

  6. Rules of engagement with alien intelligences shall be developed.

1.1 Boundaries of Humanity

In common language, to call someone or something “human” is to give them or it the highest compliment. At the same time, we tend to think of the human condition as a flawed state of being, both in terms of our tendency to make errors and in terms of human life being fraught with difficulties. The whole premise of our program is to strive to improve the conditions of living as a human, but we believe it is of key importance to never lose sight of what makes us who we are.

Within futurism as an ideology and movement, one of the more prominent ideas is so-called transhumanism, the notion that we can and should use science and technology to evolve beyond or transcend our physical and mental limitations. To an extent, we have always been doing that, to our great benefit as a species, but there’s a difference between unlocking or freeing human potential and becoming something that is no longer human. As we progress, we must not abandon core human capacities.

There are many ways in which this could hypothetically happen that have been explored in science fiction, and undoubtedly many more that we haven’t thought of yet. For both of these reasons, thinking about this seriously should be someone’s job. This organization should employ a number of philosophers tasked with continuously questioning and formulating the nature of humanity and how it relates to ongoing scientific and technological progress.

Here are some major defining features of humanity and how they’re threatened:

  • Individuality - ever since the rise of global communication networks, the term “hive mind” keeps getting thrown around. On the internet, social media have already become integrated into almost every aspect of everyday life, exerting immense social pressure and making the concept of privacy almost extinct. Assuming we decide to go down the route of neural cybernetic augmentation, human brains can be directly networked in ways that will make a private existence or independent decision making effectively impossible.

  • Embodiment - embodied cognition is a theory that describes how cognition typically involves acting with a physical body on an environment in which that body is immersed. In the not so distant future, it may be possible to separate the brain or the mind from the body and move it into a different body or environment, or to redesign human bodies altogether. Having a different body, especially if it’s immersed in an alien environment, may result in dramatic changes to one’s sense of self or the way we think at the most basic level.

  • Forgetfulness - modern information technologies are exceptionally good at gathering all kinds of data and storing it in ways that make it very easy to search and access anywhere in the world. This may present a problem in that any youthful indiscretion or embarrassment may remain in global public record forever. This technology may also be used to directly augment human memory, potentially replacing it altogether. Without the right to be forgotten or the capacity to forget, forgiveness may be lost to us. Conversely, collectively shared memories can also be collectively altered or deleted at mass scale.

  • Birthright - perhaps the most dangerous Pandora’s box among all of the futuristic technologies within our reach is genetic engineering. In the most objective sense, human beings are defined by their DNA, the human genome. There are also the natural principles of mutation and evolution through natural selection that all human beings are currently subject to equally. With sufficiently advanced genetic engineering, none of that remains set in stone. A political shift toward eugenics can result in any number of horrors, including bioweapons, supersoldiers, a slave race, sterilization, abuse of cloning, etc.

  • Conflict - war, or violent, destructive conflict, is something that humanistic futurism aims to overcome, at least on any larger scale, but the will and capacity to fight seems to be an important part of the human spirit. It’s quite likely that even if we stop fighting amongst ourselves, there will always be something to fight for or against in the universe. However, if the future society ever becomes pacifistic to a point of humans losing the capacity to defend themselves against an adversary, it would eventually mean our end.

  • Toil - one of the main defining features of humans throughout history, very much at odds with most utopian futuristic visions, is that we have to work for a living. The problem doesn’t lie only in the extent to which we have to work or are forced to work, however. Human beings tend to want to be useful and productive. For this reason, technology effectively making human beings useless wouldn’t be a good thing, much like reducing human beings to mere workers isn’t a good thing. If the right balance isn’t struck, we may be “terminated” by robots, become truly useless, or become only robots.

  • Conscience - one of the main philosophical underpinnings of modern science is determinism. The main problem with determinism from the humanistic point of view is that if one’s actions are only a result of inevitable natural forces and unconscious processes, then no one is personally responsible for their actions. As scientific predictive models or cybernetic means of social control improve, likely with the use of advanced computing and artificial intelligence, scientific determinism may become more than just a philosophy. A highly technologically advanced society with a legal system based on ideas like precrime that counter our intuition of conscience could be quite inhuman.

  • Finality - given the explicit goals of this program, there’s an important distinction to be made between the concepts of life extension and immortality. Arguably, the most fundamental attribute of human existence is the fact that it is finite. This pushes us to do things, makes every moment precious, and ensures the obsolescence of outdated ideas. This wouldn’t be changed by a cure for all diseases, including aging; such a cure would only mean that nobody has to suffer illness or die of natural causes, specifically. Obsession with true immortality would be a true departure from humanity, likely resulting in vampiric elites refusing to ever be ousted from a position of power.

  • Joy - this may be more a feature of ideal human experience rather than merely a capacity, but however you look at our desire and ability to enjoy life, life without joy isn’t quite human. As science and technology progress, there is a clear tendency to label a growing number of enjoyable human experiences as vices that are unhealthy, inefficient, archaic, or otherwise irrational. To a degree, perhaps we would be better off letting go of some of our more toxic or barbaric pastimes. Then again, where will this sanitization or censorship of human experience end? All humor can be declared offensive and hurtful, all thrilling activities include needless risk, and most fun-having is wasteful.

  • Strife - while the general goal of all utopian futurism is to make life easier, it would probably be a bad thing if it meant that there would be no challenges left in life. This is not so much about making life difficult for the sake of making it difficult, as it is about the way in which limitations put constructive pressure on character development. The term used to describe people who get everything for free without any effort is “spoiled”, after all. It’s possible humanity would reject such a world outright, but if it were instead altered by this new reality, it could lose the motivation or ability to sustain life over time.

  • Vision - the whole idea of futurism is rooted in the human capacity to imagine that things could be better than they are. While science and technology are to an extent the main driving forces of progress from what is to what could be, technology can also be used to constrain human imagination and resist, reverse, or subvert progress. Artificial intelligence as it exists now is imitative, not creative, and it is developed mostly by interests that wish to gain more control over people. This is making it harder for us to imagine a better future or make it happen, resulting in entropy, not progress.

  • Fantasy - most scientific or technological visions of a futuristic utopia are based on rationalism, and some explicitly predict that as humanity progresses, it will shed archaic notions like religion. However, assuming this actually happened, it’s not clear at a level of scientific proof that this would be a good thing. The capacity to believe against all odds in things that aren’t demonstrably real may be one of the fundamental human capacities. This can manifest as spiritual faith, but it can also manifest as secular hope. Maybe it is beneficial to survival to believe you have a chance when the numbers say you don’t.

2.1 The Art of Self-Preventing Prophecy

It is a fact that predicting the future evolution of any single technology is virtually impossible beyond the nearest-term future, let alone the complex interaction of evolving technologies at any given point in the future. The question that interests us is this - how does one ensure or prevent a future that one cannot predict?

There’s certainly no shortage of bad predictions to learn from. Most futurists of the first half of the twentieth century were convinced that we would achieve infinite power generation before the end of the second millennium. We now find ourselves in the early decades of the 21st century, and it’s information that’s effectively infinite, not energy, due to technologies like fusion power still being “three decades away”.

The core problem with past futuristic predictions seems to be that there’s no guarantee that any plausible-seeming scientific pathway, like fusion power, will ever pan out, while completely unpredictable, out-of-left-field so-called “black swan” discoveries and technologies can appear at any moment and change everything we know or are capable of. It therefore appears prudent to be skeptical of “likelihood”.

In order to be able to prepare for a future, the key is the capacity to anticipate it. Even assuming we will be unable to put regulations and countermeasures in place for every conceivable bad eventuality ahead of time, the level to which we have managed to anticipate any given eventuality should make us able to adapt to it faster after it appears. Hence the concept of self-preventing prophecy.

A self-preventing prophecy, as the opposite of the better known self-fulfilling prophecy, is a prediction that prevents what it predicts from happening. Each act of anticipating a technological apocalypse, or an event in which a specific technology gets out of control to a point of endangering the continued survival of the human race, should be formulated as such a prophecy. We need to think of futures to avoid.

Every time we manage to anticipate an apocalypse, natural or technological, we arguably decrease the chances of it coming to pass at all, or at least delay its emergence while buying ourselves more time to address it just by starting to do so ahead of time. What follows is our best working attempt at a complete list of self-preventing prophecies for all conceivable apocalypses in order of plausibility.

2.11 Natural Apocalypses

This category of extinction-level events (ELEs) is plausible to a point of virtual certainty, given the abundant evidence of a variety of natural events that caused massive extinctions in the Earth’s past. This includes a range of possibilities that are variably predictable and catastrophic, but all of which can be prepared for and many of which can be greatly mitigated or outright averted. If we choose to.

However, before we look into specific types of scientifically established natural disasters, we need to develop a way of measuring their objective threat level. Without that, there would be no way to prioritize which specific threats should be addressed in what order of urgency and at what scale of resources used. This will also help us sort and list all the various types of disasters in a meaningful way.

Unfortunately, there doesn’t appear to be any good universal scientific typology or classification system of real apocalyptic events in the available literature, but there are some guiding principles that we can take inspiration from. For starters, scientists measure great past (and present) mass extinctions by the percentage of species that were wiped out. That’s a good metric of an ELE threat level.

When translated into human context, a good metric could therefore be the number of lives at risk. There should also be an additional consideration for the quality of life, however, not just the quantity of potential casualties. That could be achieved in part through economic measures, or the potential financial costs that a fully unmitigated disaster would incur, but types of suffering should also be qualified.

Alternatively, the absolute threat level of natural disasters can also be quantified in terms of the sheer level of force behind them, as it is done with earthquakes using the Richter scale or with tornadoes using the Fujita scale. The problem with this approach is that the level of damage to human life or economy very much depends on where the disaster hits, meaning that more factors need to be accounted for.

This is addressed, for example, in the Torino scale for the threat level of meteorites and asteroids. For an object in space to be considered a bigger threat to our planet, it needs to achieve a certain combination of kinetic energy on impact and probability of hitting Earth. Another way to factor in probability is how the strength of floods is described by how many years apart they are expected to occur.

For example, a 100-year flood is a flood of such a strength (volume of water correlated to destructive force) that it only has a 1% chance of occurring in any given year. If you think about it, even a chance of an asteroid hitting Earth can be converted into an expected length of time until such an event is likely to happen. All natural disasters, on a long enough timeline, are guaranteed to happen.
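The probability arithmetic behind these recurrence labels can be sketched in a few lines. This is a minimal illustration that assumes each year is an independent trial, which real climate and geology only approximate:

```python
def return_period(annual_probability: float) -> float:
    """Expected number of years between occurrences of an event
    with the given chance of happening in any single year."""
    return 1.0 / annual_probability

def chance_within(annual_probability: float, years: int) -> float:
    """Probability that the event occurs at least once over a span of years."""
    return 1.0 - (1.0 - annual_probability) ** years

# A 100-year flood has a 1% chance of occurring in any given year...
print(return_period(0.01))                 # 100.0 years between floods, on average
# ...which works out to roughly a 63% chance of at least one in a century.
print(round(chance_within(0.01, 100), 2))  # 0.63
```

Note the counterintuitive consequence: a “100-year” event is more likely than not to happen at least once within any given century, and the same conversion turns an asteroid’s annual impact probability into an expected waiting time.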

So, if we take together all of these perspectives on the measurement of disasters, it makes sense to classify catastrophic events that we want to avoid or mitigate primarily by how many human lives they put at risk, directly or indirectly. Secondarily, it makes sense to focus on them in order of how much damage they’re likely to cause with what probability, or how much time we likely have to prepare.

Let us therefore propose a universal scale for the classification of natural disasters for the purposes of prioritizing their preemptive mitigation. On this scale, the X axis represents the likelihood/frequency of the disaster occurring and the Y axis represents the disaster’s combined existential/economic damage potential to human life. All natural disasters will be classified in the following way:

The numbers in the columns of the chart (1-6) represent both the level of the threat (1 = lowest and 6 = highest) and the corresponding level of resources that should be devoted to mitigating or averting the disaster, ranging from 1 (resources of a small dedicated community) to 6 (all globally available resources). Let’s call this scale the Human Endangerment by Catastrophe Threat Index Chart (H.E.C.T.I.C.).

As a general principle, this means that a higher order unit of human organization should focus on mitigating or averting imminent catastrophes endangering a lower order unit of human organization. If the catastrophe is not imminent, the level of resources expended on addressing it should decrease roughly by an order of magnitude for every order of magnitude of years until it is expected to recur.

As for predictable natural disasters that are neither imminent nor highly destructive, or, more precisely, those that don’t meet a sufficient combination of imminence and destructiveness, there still should (and likely will) be interested private individuals researching them (as well as the means to mitigate or avert them). After all, we can never be sure beyond all doubt that our projections are accurate.

Considering the issues with predictability outlined above, H.E.C.T.I.C. is presented here, in the section dedicated to natural disasters, precisely because only these types of events are sufficiently scientifically understood to be predictable in terms of their force and frequency. Different types of arguments and assessments are required for threats that are poorly understood or highly unpredictable.

H.E.C.T.I.C. is also meant to be a compromise between the concerns of the presently living humans and the philosophical perspective of longtermism within effective altruism which is concerned with saving as many hypothetical people in the future as possible. While we believe sacrifices may need to be made now to ensure future survival, imminent threats to living humans have higher priority.

Now that we’ve established a system of prioritizing known natural threats to humanity, let’s go through individual categories of natural disasters, in order of their average threat level (from the highest to the lowest) as we can estimate it following the general expert consensus within the framework of H.E.C.T.I.C. We will also offer our recommendations based on the level of potential threat each event poses.

H6 - RELATIVELY IMMINENT MASSIVE THREATS

H5 - LOOMING MASSIVE/IMMINENT LARGE THREATS

H4 - LOOMING LARGE/IMMINENT MEDIUM THREATS

H3 - LOOMING MEDIUM/DISTANT MASSIVE THREATS

H2 - IMMINENT LOCALIZED/DISTANT LARGE THREATS

H1 - LOOMING LOCALIZED/VERY DISTANT THREATS

Note: As each number on the scale corresponds with a flexible equation that shows the relationship of two variables - probable time until disaster, and scale of disaster - it can be used to describe two different combinations of these variables. Also, imminence of a disaster is a relative term, based on how much time would be needed to effectively address the disaster. Preparations for or the averting of the largest threats could take a relatively long time ahead of the occurrence of disaster.
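As an illustration, the level lookup and the budget-scaling rule described above can be sketched in code. The damage/imminence pairs come directly from the H1-H6 list; the function names, the handling of edge cases, and the example numbers are our own illustrative assumptions, not part of the scale itself:

```python
import math

# (damage scale, imminence) -> H.E.C.T.I.C. threat level, per the H1-H6 list.
LEVELS = {
    ("massive", "imminent"): 6,
    ("massive", "looming"): 5,    ("large", "imminent"): 5,
    ("large", "looming"): 4,      ("medium", "imminent"): 4,
    ("medium", "looming"): 3,     ("massive", "distant"): 3,
    ("localized", "imminent"): 2, ("large", "distant"): 2,
    ("localized", "looming"): 1,
}

def hectic_level(damage: str, imminence: str) -> int:
    """Threat level 1-6; any very distant threat drops to level 1."""
    if imminence == "very distant":
        return 1
    return LEVELS[(damage, imminence)]

def scaled_budget(full_budget: float, years_until_expected: float) -> float:
    """Cut spending by one order of magnitude for every order of
    magnitude of years until the disaster is expected to recur."""
    orders = math.log10(max(years_until_expected, 1.0))
    return full_budget / (10 ** orders)

print(hectic_level("massive", "imminent"))  # 6 (e.g. unchecked climate change)
print(hectic_level("large", "distant"))     # 2
print(scaled_budget(1_000_000_000, 100))    # 10000000.0 - 1/100th of the budget
```

The scaling function makes the note’s point concrete: a massive threat expected in roughly a century still warrants one hundredth of the full-scale response today, not zero.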

H6 EVENTS:

  • Unchecked climate change - this threat may not seem all that urgent to deal with to some, or real at all. But the fact is, the vast majority of relevant scientists are convinced that if we were to do nothing about it for a century, anything approaching our civilization as we know it today would become impossible. That’s not a lot of time to avert a disaster on a global scale. The reason for the qualifier “unchecked” here is that unlike most natural disasters, this one is scalable, meaning the more we do about it now, the less of a disaster it’s going to be in the future. It’s also important to note that this is a threat even if you don’t believe that it is fundamentally anthropogenic, or caused solely or primarily by us. A number of cosmic or biological events can throw off the Earth’s climate at any time. At the very least, we must invest substantial resources into adaptation to warmer or colder living conditions. Ideally, we should try to minimize the strain we put on the ecosystem, while effectively learning how to terraform our own planet, or manage the climate. You’ll find more detailed recommendations on how to address climate change in a dedicated section later in this document and on the GEPB page.

H5 EVENTS:

  • Massive cosmic events - this can include impactors and other deadly cosmic events like supernovae, gamma ray bursts, or rogue planet, star, or black hole flybys. As of today, we’re fortunately unaware of any such events that would be impending and aimed at us, but there is a theoretical chance they are coming and we just haven’t detected them yet. There’s not much we can do to avert or deflect the largest threats of this type, and also the larger the threat of this type is, the easier it is to detect from Earth, but we can and should invest substantially more resources into increasing our presence and capabilities within our solar system to buy ourselves more time. With enough effort, our current technology can enable us to deflect even large asteroids and comets and to some extent shield our planet from radiation.

  • Supervolcano eruptions - this type of event occurs on average once per 100,000 years and results in an ejection of over 1000 cubic kilometers of material into the atmosphere, destroying the local area and dramatically reducing the amount of sunlight that hits the Earth. At present, we can at most monitor dormant supervolcanoes and hope that we will be able to predict the eruption a couple of months ahead of time. In the future, advances in geothermal engineering may open up some options for the mitigation of the explosive potential of volcanic eruptions, although it would probably be much easier and faster to develop ways of climate change adaptation. After all, cosmic impactors and nuclear explosions can have a very similar climate-disrupting effect. Notably, most underground and any orbital or off-world infrastructure would be safe from cataclysms on Earth’s surface. With fusion or better nuclear power sources, we may also be able to generate enough heat and grow enough plants indoors. Basically, if we become able to colonize space, we’ll have the ability to survive on a less habitable Earth as well. With sufficient predictive capabilities and political organization, nobody would have to die in the initial explosion, even if it would obliterate a whole country. Dealing with a supervolcanic eruption would be very challenging, but we may still have enough time to prepare.

H4 EVENTS:

  • Pandemics - judging by recent history, we seem to be due for a global pandemic about once every century (like between the Spanish flu and COVID), with a couple of smaller notable epidemics in between (like HIV/AIDS or ebola). COVID has so far resulted in almost 7 million direct reported deaths, but the real damage caused by the pandemic is likely much bigger than that. The upper limit on how many people a pandemic can kill, if no effective measures are taken to address it, seems to be about 50% of the population, as happened during the Black Death plague in Europe in the Middle Ages. Pandemics (of naturally evolved pathogens) are therefore almost certainly survivable for a civilization like ours, but we do need to invest more into preparedness and reinforce our local self-sufficiency and supply chains.

  • Medium cosmic impactors - asteroids and comet fragments with the destructive power equivalent to a strategic nuke, like the object that caused the Tunguska event, exist in a sweet spot for both our ability to detect them in time, and maximum return on investment into a planetary cosmic impactor defense program. NASA and the world’s foremost astronomers have been lobbying for getting such a program done for years, and we believe this is a great, very doable idea. In effect, we need to launch more telescopes dedicated to monitoring relatively small incoming objects, and develop a capacity to launch a volley of kinetic interceptors, like the recently successfully tested DART spacecraft, at a moment’s notice.

  • Massive solar flares - most solar flares are relatively harmless and/or miss Earth. Occasionally, however, Earth is hit with a solar flare with sufficient power to disable most unshielded power grids and electronics. The last recorded event of this magnitude was the Carrington event in 1859. Fortunately, there weren’t many electrical devices in use back then, but the same cannot be said today. While most people would survive the initial event, months without electricity would result in many deaths due to infrastructure and supply chain collapse and large-scale unrest. We need to invest into shielded core electrical infrastructure and other countermeasures and prepare emergency plans and redundancies.

  • Wildfires and droughts - potentially getting stronger with climate change due to rising temperatures and higher levels of water evaporation. The reason why we rate these two interdependent types of disaster above floods or storms is that they have the greatest potential to undermine whole local ecosystems and agriculture, and they seem to be escalating the most due to the global increase in temperature. In order to address these dangers, we need to address climate change as a whole, but we can also adopt better practices in land management and try to focus on more fireproof construction.

H3 EVENTS:

  • Earthquakes and tsunamis - there’s about one massive earthquake (magnitude 8 or above) every 5-10 years somewhere on the planet. Strong earthquakes are generally limited to areas surrounding fault lines, but within those areas, it’s only a matter of time until a city gets destroyed. Earthquakes can also launch a massive tsunami all the way across the ocean. The only thing to do about earthquakes is to focus on earthquake-resistant construction methods, if we don’t want to abandon high-risk areas entirely. We should also invest more into developing better means of predicting earthquakes, as they are currently essentially unpredictable. As tsunami waves are detectable and take time to travel, a global early warning system is possible and needs to be maintained. There’s also the danger of so-called “earthquake storms”, or series of consecutive strong earthquakes in the same region, but there’s really nothing we can do to address those, other than assuring the population it’s not god’s wrath, but a random natural occurrence. Allegedly, some whistleblowers claim that technological means of stimulating earthquakes have been developed. If that’s true or possible, and the technology cannot be used to prevent or weaken earthquakes, its use needs to be outlawed.

  • Floods - potentially getting stronger with climate change due to rising sea levels and more extreme rainstorms. Beyond climate change mitigation, it would make the most sense to invest into more resilient flood control infrastructure like dikes, spurs, levees, and seawalls in the areas where floods tend to recur. Houses could also be reimagined to be more waterproof. More so than with any other type of disaster, dealing with flooding is a question of good engineering and economics.

  • Hurricanes and tornadoes - potentially getting stronger with climate change due to more energy being available to storm systems. Compared to fires or flooding, this type of disaster is the least predictable, particularly tornadoes, but existing detection systems and predictive models are capable of giving some advance warning to the endangered areas. Similar to earthquakes and floods, the best way to mitigate the destructive potential of this type of disaster is to invest into more resilient infrastructure. As the engineering principles required to reinforce infrastructure are already well-known, the main issue is the cost, which means that the key problem to solve is how to make disaster-resistant housing or grids affordable.

  • Volcanic eruptions - this type of calamity is comparable and related to earthquakes, but unlike earthquakes, we are at least able to predict them several days in advance. As for possible solutions, everything said in regards to supervolcanoes applies here equally.

  • Small cosmic impactors - the problem with meteorites with the destructive power equivalent to a tactical nuke, like the one that exploded recently over Chelyabinsk, is that they’re often too small to be detected with a sufficient head start. A substantial planetary defense network, likely an automated one, would have to be developed to counter this type of city-leveling impactor.

H2 EVENTS:

  • Mudslides

  • Sinkholes

H1 EVENTS:

  • Any of the above, if relatively very distant or unlikely

2.12 Pollution and Climate Change

Most climate scientists agree that human activity, mainly the overproduction of greenhouse gases like CO2 and methane, is causing rapid warming of the planet, which is bound to result, and to an extent is already resulting, in a number of harmful effects that will decrease the ability of humans to live on the planet. The most severe of these effects are as follows:

  • The weather is becoming increasingly erratic, resulting in more powerful and destructive storms and disruptions to agriculture.

  • Heatwaves, droughts, and wildfires are increasing in frequency and intensity, directly endangering humans and whole ecosystems.

  • Polar ice sheets are melting. In addition to flooding coastal areas, this may eventually disrupt ocean currents, release trapped greenhouse gases, and reduce the Earth’s albedo (the reflectivity of its surface), all of which may further increase the average global temperature or otherwise alter average temperatures in some world regions.

  • Huge quantities of CO2 are dissolving in the ocean, increasing its acidity, which is disrupting the ability of certain types of aquatic organisms to function properly and by extension their whole habitats, most notably the coral reefs.

This list is by no means definitive, and various particulars and specific projections or proposed responses are in dispute, but it stands to reason that we cannot escape the underlying physics of thermodynamics. Even if somehow the levels of emissions or warming were still absolutely fine for now, it would only be a matter of time, and of the exponential function, until some sort of correction becomes necessary.

Some of the major high-level proposed solutions or response strategies to this increasingly acute problem include:

  • Phasing out fossil fuels and switching to renewable or nuclear energy sources.

  • Limiting the emissions of greenhouse gases, for example through taxation.

  • Increasing efficiencies of power generation, usage, storage, and distribution.

  • Reducing the demand for energy or outright downscaling our civilization.

  • Focusing on adaptation to survive a global migration and warmer Earth.

If we apply at least one of these strategies well, there’s every chance that human-caused climate change will not result in a true apocalypse, or end of human civilization or life as we know them. Ideally, we should strive to combine most or all of these approaches simultaneously, as they would be much stronger cumulatively.

Each of these strategies is costly by itself, however, in economic terms as well as in terms of its unique qualitative consequences. Each would also require a dedicated website of its own to properly explain or address. Some level of damage irreversible in the short term has also already taken place.

Without dedicated funding, the most we can do here is give some clarifications and recommendations:

  • Switching from fossil fuels to any alternative power source still has a cost in terms of carbon emissions. The goal is to gradually decrease the amount of emissions associated with generating a unit of power to as low a number as possible, so that our overall planetary carbon budget becomes ecologically manageable. From this point of view, not even all fossil fuels are equal - for example, natural gas is much better than coal. It’s also important not to waste a lot of resources on technologies and projects that don’t actually result in lower emissions overall, only to score political points.

  • As for carbon taxes or similar financial incentives to reduce greenhouse gas emissions, we will create detailed proposals and publish them on the pages of our other organizations - GEPB and GPIB.

  • Increases in power-related efficiencies are important; however, more than any other strategy on this list, this one is unlikely to suffice alone. In some cases, like with more efficient light bulbs, total power use can decrease, as there is no reason to generate more light at home just because it’s cheaper. In many other cases, however, if the real limit on demand was the price of energy, cheaper energy simply leads to more of it being used.

  • As we are a humanist organization, we believe that reducing the demand for energy should not mean trying to reduce the population or decrease the quality of life. At the same time, some changes in global culture or perceived ideal way of life may both reduce the demand for energy and substantially increase the quality of life. For example, consumerism or capitalism seem to have a limit beyond which they no longer increase personal happiness, but could infinitely increase our energy demands. After meeting real human needs, having less stuff, doing less work, and doing things at a slower pace could very well mean more happiness for all.

  • Some degree of adaptation - technological, economic, or political - is needed in any present or future climate change scenario. While there seems to be a lot of focus on technological adaptation, such as how to prevent the flooding of coastal cities, there should probably be a more concerted effort to deal with the looming geopolitical consequences of advancing climate change. Many countries, and their militaries, are currently preparing to fight off the coming hordes of migrants in border skirmishes, but that’s not exactly a global solution. An international diplomatic framework at the level of the United Nations needs to be put in place in anticipation of increasing migration and unrest, so that relief efforts can stave off escalation. A wayward tactical nuke is exactly the last thing we need while dealing with climate change.
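The carbon-intensity point in the first clarification above can be made concrete with a small sketch. The gCO2-equivalent-per-kWh figures below are rough lifecycle estimates assumed purely for illustration; real values vary widely by study, site, and technology generation.

```python
# Toy comparison of lifecycle carbon intensity per unit of electricity.
# The gCO2-eq/kWh figures are assumed rough medians, for illustration only.
INTENSITY = {          # grams CO2-equivalent per kWh generated
    "coal": 820,
    "natural gas": 490,
    "solar PV": 48,
    "nuclear": 12,
    "wind": 11,
}

def annual_emissions_tonnes(source: str, demand_twh: float) -> float:
    """Tonnes of CO2-eq emitted to meet `demand_twh` TWh from one source."""
    grams = INTENSITY[source] * demand_twh * 1e9  # 1 TWh = 1e9 kWh
    return grams / 1e6                            # grams -> tonnes

demand = 100.0  # TWh/year, roughly a mid-sized country's electricity use
for source in INTENSITY:
    print(f"{source:12s} {annual_emissions_tonnes(source, demand):>12,.0f} t CO2-eq")
```

Under these assumed figures, switching the same demand from coal to gas cuts emissions by roughly 40%, while switching to wind, solar, or nuclear cuts them by well over 90% - the sense in which not all fossil fuels, and not all alternatives, are equal.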

2.13 Nuclear War

Any person who’s at least vaguely familiar with the history of the latter half of the 20th century must be aware of this direct threat to human survival. Although the world was closest to the brink of nuclear annihilation during the Cold War, under the aptly named doctrine of M.A.D. - mutually assured destruction - this threat has recently flared up again in the Ukraine War over the possible use of tactical nukes.

The doctrinal difference between the past and the present lies in the distinction between tactical and strategic nuclear weapons - a distinction that is questionable, given that, theoretically, any nuclear attack may result in a cascade of escalating responses, ending with a large-scale strategic exchange, if any of the superpowers get involved. To paraphrase American General James N. Mattis, in actual war, there may be no such thing as a tactical nuclear weapon.

But let’s stick with the technical distinction for now for the sake of simplicity. Today, there are about 12,500 nuclear weapons in the world, ranging from “small” tactical nukes (<50 kilotons of TNT; for comparison, the Hiroshima bomb was 15 kt) to devastating strategic fusion bombs (100 kt - 1+ Mt; the most powerful nuke detonated so far was the Tsar Bomba at 50 Mt). In terms of destruction radius, this means the ability to level anything between a small town and a whole county.
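The jump from yield to destruction radius is smaller than it looks, because blast damage radius scales roughly with the cube root of yield - a standard rule of thumb. The reference radius below is an assumption for illustration, not a precise weapons-effects calculation.

```python
# Blast damage radius scales roughly with the cube root of yield
# (standard cube-root scaling rule of thumb; the reference radius is
# an assumed illustrative value, not a precise effects calculation).
REF_YIELD_KT = 15.0    # Hiroshima-scale bomb, ~15 kt
REF_RADIUS_KM = 2.0    # assumed radius of severe blast damage at 15 kt

def damage_radius_km(yield_kt: float) -> float:
    """Approximate severe-damage radius via cube-root scaling."""
    return REF_RADIUS_KM * (yield_kt / REF_YIELD_KT) ** (1.0 / 3.0)

for y in (15, 50, 1000, 50_000):   # Hiroshima, large tactical, 1 Mt, Tsar Bomba
    print(f"{y:>7} kt -> ~{damage_radius_km(y):.1f} km radius")
```

Under this scaling, a 1 Mt strategic warhead has only about four times the damage radius of the Hiroshima bomb despite roughly 67 times the yield - which is why the whole arsenal spans only the range from a small town to a whole county.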

Contrary to popular belief, the total destructive power of all of the world’s nukes combined is therefore nowhere near enough to obliterate the entire Earth’s surface even once, let alone multiple times over - but it can be enough to destroy all major cities on Earth within minutes. This could easily result in the deaths of about half of the world’s population, or around 4 billion people, in a short period of time, with many lingering deadly after-effects. Apart from general devastation, the most serious after-effects of nuclear strikes include:

  • Direct radiation damage and radioactive fallout.

  • EMP, or electromagnetic pulse, shorting out unshielded electronics.

  • The triggering of wildfires and possibly also earthquakes.

  • A decrease in the amount of sunlight that reaches the Earth’s surface and of the average global temperature (“nuclear winter”), proportional (in terms of intensity and duration) to the scale of the nuclear exchange and secondary fires and explosions.

The most dangerous delivery method is the ICBM, or intercontinental ballistic missile, of which there are around 1,000 in the world. Many of these missiles are capable of delivering multiple warheads at once, at hypersonic speeds - a technology called MIRV, or multiple independently targetable reentry vehicle. ICBMs are typically housed in underground silos. Shorter-range nukes are typically carried by submarines and (stealth) bomber planes, but there’s also some nuclear artillery.

The global nuclear arsenal is divided among 9 countries - the United States, Russia, China, the United Kingdom, France, India, Pakistan, Israel, and North Korea - although the vast majority of nukes belongs to Russia and the United States, followed by China in a distant third place. The main reason why there aren’t more nuclear weapons in the world is the Treaty on the Non-Proliferation of Nuclear Weapons, signed in 1968. The treaty is far from perfect, but arguably a success.

While it is definitely good that, until relatively recently, few new nuclear weapons were being built, this is also the reason why many existing nukes are fairly old, as is the technology of their launching platforms. This is potentially both a good thing and a bad thing. In order to explain why, let’s explore all the necessary steps for a nuclear weapon to be able to cause any damage:

  • A reason needs to exist for its use.

  • An order has to be given to use it.

  • The order has to be obeyed by the end operators.

  • The launching mechanism must not malfunction.

  • The weapon must not be intercepted.

  • The weapon must hit its target (or an inhabited location).

  • The bomb must be successfully armed and detonate.

  • The detonation must not fail to trigger a chain reaction.

On the plus side, the weapon and its launching apparatus being old increases the likelihood of the weapon failing to launch, evade interception, hit its target, detonate, or initiate a full nuclear explosion. This can happen due to simple wear and tear, insufficient maintenance, software or hardware malfunction, outdated design, radioactive decay of the fissile material, or possibly some other related reasons.
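The steps listed above form a chain: a strike succeeds only if every step succeeds. Treating the steps as independent, the overall probability is the product of the per-step probabilities, so even modest per-step reliability compounds downward. The numbers below are purely illustrative, not estimates of any real arsenal.

```python
# Compound probability of a successful nuclear strike, modeled as a
# chain of independent steps. All probabilities are invented for
# illustration; the point is how quickly the product shrinks.
from math import prod

steps = {
    "order given and obeyed": 0.95,
    "launch mechanism works": 0.90,
    "not intercepted": 0.85,
    "hits target area": 0.90,
    "arms and detonates": 0.90,
    "full chain reaction": 0.95,
}

p_success = prod(steps.values())
print(f"overall success probability: {p_success:.2f}")
```

With these assumed per-step values, each of which looks fairly reliable on its own, barely over half of launched weapons would result in a successful strike - the same compounding logic behind the claim below that a large exchange would see many failures.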

On the flipside, a defective warhead can theoretically detonate because of a malfunction, even when an order for its use was not given. A malfunction in related systems can falsely indicate that an enemy nuclear strike is in progress, which is a reason for an immediate counterstrike. A malfunction can also cause a nuke to veer off course and hit a civilian population instead of a military target.

Several close calls of this type have indeed happened in the past, including some mishaps due to bad luck or poor management. Fortunately, all operators so far have chosen not to launch the nuke, or have otherwise prevented a detonation. Or we simply got lucky, or were saved by some kind of divine intervention. Relying on this trend to continue without taking further preventative steps doesn’t appear to be a good strategy.

After considering all of these facts and factors, these are our comments and recommendations:

  • While any use of nuclear weapons is horrific and unconscionable, we find it important to remain clearheaded and objective so as to not overstate the dangers. It is a statistical impossibility that if all of the nukes in the world were fired at once, all of them would successfully launch, hit their targets, and detonate. In all likelihood, if a large exchange were to occur, only a minority of launched nukes would result in successful strikes. Also, some parts of the world would likely remain wholly untargeted in almost any realistic exchange scenario, while areas contaminated by fallout would represent temporary localized health hazards, rather than permanent dead zones (for comparison, Chernobyl alone released 50 tons of radioactive material, which is roughly equal to the weight of all fissile material in 12,500 nukes, or about the whole world’s nuclear stockpile, and even large mammals survived in the exclusion zone). Humanity would almost certainly survive, although our civilization could collapse if casualties exceed hundreds of millions.

  • Given the proneness of old equipment to malfunction, all old nuclear armaments should be dismantled. World-ending nuclear stockpiles aren’t needed for deterrence, while smaller numbers of weapons are also much easier to properly maintain and secure.

  • In terms of developing measures to survive the event of a major nuclear exchange, there is a place for the construction of bunkers and shelters in the overall strategy. But ultimately, we believe that more emphasis should be put on decontamination technologies and other ways to mitigate environmental disruption in the aftermath of a nuclear strike. Even after the biggest possible exchange, the goal should be the restoration of and continued survival on the Earth’s surface.

  • Nuclear threats or bluffs in ongoing wars notwithstanding, the most troubling current trend that we can identify is the idea of giving control over nuclear weapons to some sort of autonomous artificial intelligence system. Such a system can definitely respond much faster than humans to any threat, but the historical record so far shows that while automated systems have malfunctioned and almost caused a nuclear exchange a number of times, human operators were essential in saving the world by recognizing such malfunctions and false alarms for what they were.

2.14 Weaponization of AI

Considering the recent rapid advances in artificial intelligence technologies, the future of warfare is likely to involve a lot of AI, probably in the form of autonomous cybernetic weapons systems, or AI-powered cyberattacks targeted at cybernetic weapons or any kind of critical infrastructure that’s controlled using computers.

Let’s start with autonomous weapons. Most people in the world are already familiar with the worst case scenario thanks to the idea of Skynet from the Terminator movie franchise, and nuclear apocalypse has already been addressed in a standalone section, so let’s explore the non-nuclear dangers of these weapons.

Assuming we could fully control these weapons at all times, their usage probably wouldn’t worsen the destructive outcomes of war: it wouldn’t make war more frequent, or increase total casualty levels or collateral damage in conflicts. Smarter, more precise weapons, or drones fighting drones, could even shorten conflicts and lead to lower overall casualties.

Judging by the performance of AI systems so far, however, even the latest, most advanced ones, it seems likely that there’s no way of fully controlling what such systems end up doing. And, arguably, the last thing we should want is a very effective weapon with a mind of its own being let loose upon the world.

In the interest of properly conceptualizing this threat, we need to clarify the term “control”. A weapon or any tool that we control is one which does exactly what we design it to do, in a way in which we want it done; does the thing only when we decide it should do it; and stops doing the thing when we decide it should stop.

Try asking any large language model today to write you a piece of text or generate an image for you and it’s guaranteed that at least one of these criteria will not be met. Chiefly, we don’t understand exactly how any of these models arrive at their outcomes. And these are pretrained (not adaptive), not autonomous systems.

Imagine a ChatGPT that’s free to give itself prompts and that learns continuously in real time. Does it sound like something that any human could control? How could we even predict what it’s going to attempt to do, let alone what it’s going to accomplish, without the help of a similar AI? And now imagine you give it a gun.

For starters, how do you guarantee that its targeting system will only select the right targets, however you’d like to define them? There’s no way to make a perfect face recognition software, and that’s about as objective as you can get in the identification of human beings. There’s also no way to make rules of engagement with no logical loopholes that a good AI couldn’t exploit to maximize mayhem.

While a gun would make an AI more dangerous, a weaponized AI system could use anything it can interface with to achieve destructive outcomes. It’s also important to note that the outcomes may end up being destructive inadvertently, even in the case of a robot designed for killing, since you really have to spell out to the AI everything you don’t want it to be doing.

Now that AI can talk, it could use persuasion, intimidation, blackmail, or any form of communication to kill people. It could also sabotage any networked systems, like smartphones, smart cars, smart houses, traffic lights, or nuclear power plants. Which brings us to the issue of cyberattacks perpetrated by the AI.

On the basic level, cyberattacks are cyberattacks, regardless of what kind of intelligence is doing them. AI, depending on the quality of its architecture and the processing power behind it, could easily outperform individual human hackers or even large numbers of them, but what matters is again the issue of control.

If the AI is only causing outcomes that some human decided, then, in the absence of the AI, all that would be different could be that humans would have achieved those effects instead. Ultimately, the side with more know-how or resources is always the one more likely to win. The problem would be an uncontrolled AI.

Cyberattacks can cause a lot of collateral damage, depending on the ethical limits of the hacker. Any AI-based hacking system would undoubtedly have such limits programmed into it. But as we have established, since there’s no way to get rid of all logical loopholes, the AI could determine that the best way to succeed in the attack is to use a strategy so ruthless that it didn’t even occur to its programmers, so they didn’t explicitly forbid it. Both backfiring and overkill are very possible.

The same risks of unintended consequences unfortunately apply to any defensive counter-AI system powered by AI, or to any AI system of sufficient autonomy and sophistication that’s networked or capable of getting itself connected to a network. The famous speculative example is the paperclip maximizer, an AI designed to make paperclips that ends up transforming all matter into paperclips.

To further complicate this issue, the precise level of autonomy and sophistication of an AI system that would be sufficient to pose a risk of severe unintended consequences is unclear. A ChatGPT-like AI can already cause some unintended consequences, as it can give people bad advice or incorrect information, and these types of systems are not adaptive once trained and have essentially no autonomy.

Considering all of the risks of the contemporary AI systems or better systems that are demonstrably within our reach, we propose these policies:

  • AI developers should strive to create AI models in such a way that their internal logic is transparent, making all actions of the AI understood and predictable.

  • Fully autonomous cybernetic weapons systems and autonomous AI-powered hacking should ideally be outlawed or strictly regulated, similar to how it is done with certain types of conventional weapons that can result in an excessive degree of collateral damage, permanent disability, or contamination (landmines, cluster munitions, chemical weapons, etc.).

  • No cybernetic system should be made any smarter, more adaptive, more autonomous, or more networked than it needs to be in order to perform its intended function. Essential tasks that can be performed economically without using cybernetics should ideally be done without using cybernetics.

  • Given that the danger any AI system poses is correlated with how much processing power is behind it, an equivalent to the global nuclear non-proliferation treaty should be put in place in the area of cybernetics. Specifically, the total processing power wielded by individuals, organizations, or countries should be regulated, similar to how explosives are regulated. Fortunately, in this case, processing requires a lot of power and advanced industry, which can be traced - unlike biological weapons.

2.15 Proliferation of Bioweapons

As the recent COVID pandemic has reminded us all, the spread of disease is an ever-present threat that we need to be constantly prepared for - in a way in which our healthcare systems, laws, and economies really weren’t this time around. Some of the major problems revealed through this crisis included:

  • Politically exploitable lack of trust in science.

  • Authoritarian tendencies of scientific bureaucrats.

  • Fragility of global supply chains.

  • Insufficient emergency capacities and stockpiles.

Such political and economic fragility is something that a global civilization with an intention of long-term survival simply cannot afford. While no epidemic of a naturally occurring disease is likely to wipe out more than a minority of the total population, it can play a major role in causing an economic or political collapse. Procedures that most people trust need to be established ahead of time.

Given the substantial possibility that this pandemic may have started because of a poorly secured lab working on gain of function research, we may also need to further regulate bioresearch. If this much damage can be caused by an inadvertent leak of a pathogen from a lab working on preempting future pandemics, imagine the level of damage that could be caused with the active intention of causing harm.

In terms of near-future danger, viruses have the potential to cause the most damage, both intended and unintended, when used as bioweapons, as viruses are the easiest (and therefore cheapest) to modify, the fastest to spread, and the fastest to mutate after release. Bacteria or parasites could also be used as bioweapons, or even larger genetically modified organisms, but those would have much more localized effects and would likely take much longer to develop.

The good news is that the militaries of large nations seem to be aware of the significant risk of unintended consequences associated with deploying virus-based bioweapons, which is why they have so far focused on weaponizing more controllable pathogens like the anthrax bacterium. However, a non-state actor, like a lone individual, may not have self-preservation among their guiding principles.

Furthermore, there are factors that make pathogens an ever-increasing threat:

  • Global travel has become easier and more frequent than ever before.

  • The technology of CRISPR has made gene editing significantly easier and cheaper, enabling small, relatively untraceable bioweapons operations.

  • Human encroachment on animal habitats makes virus mutation more likely.

  • The unfreezing of permafrost due to climate change may release ancient pathogens that modern immune systems may have difficulty dealing with.

It’s also important to note that with a sufficiently advanced understanding of genetic manipulation, the bioweapons being deadly is not the only or the worst threat that they can pose. In theory, bioweapons could be selectively targeted, for example on genetic markers associated with skin color, gender, or any other genetic trait. They could also affect fertility or the genetic makeup of future generations, or interfere with any biologically enabled human capacity in the adult population.

While the popular idea of such a worst-case scenario, the zombie apocalypse, is particularly unlikely in its specifics, there are many hypothetical alterations of human capacities that would severely disrupt the ability of our civilization or species to function. Infections can impair sight or hearing, interfere with cognition (particularly prion diseases), result in paralysis, or increase aggression. For virologists, something like airborne rabies seems to be the worst-case scenario.

The fact is that a true humanity-ending virus could be engineered artificially, as could one aimed at any other species with the intention of wrecking the ecosystem. Some of these effects could also be caused inadvertently, for example by reckless gene editing of oneself (“biohacking”), of embryos to cure inheritable diseases, or of pest species like mosquitoes to control their population or ability to carry diseases.

With all of these grim realities in mind, these are our recommendations:

  • National and global economic systems need to be designed with the possibility of a global pandemic in mind. The ultimate aim should be local self-sufficiency in the ability to manufacture essential goods, combined with appropriate emergency stockpiles and more resilient global supply chains.

  • In the interest of preventing political exploitation of any similar future crisis and related unrest and rejection of scientific recommendations, scientific authorities need to be much more transparent about their failings and uncertainties. Authoritarian measures like censorship of any dissent or enforced lockdowns have caused people to trust scientists less, not more.

  • The likely cover-up of the scientific establishment’s responsibility in causing the pandemic also didn’t help scientists win any public trust. If scientists want to be trusted by the public, they have to put some trust in the public’s ability to handle the full truth, and they must be accountable for their actions, or at least strive to hold the “bad apples” among them accountable.

  • Potentially hazardous avenues of genetic research like gain of function research, inheritable germline editing of human embryos or of other species, or biohacking need to be outlawed or strictly regulated. If they’re to be allowed to happen at all, their intent, procedures, and progress need to be transparently monitored by the scientific community and the interested public.

  • It would probably be a very good idea as well to never create an AI with the ability to develop novel pathogens. But since everything that can be created eventually tends to become real, we might need to create an AI capable of developing vaccines and antidotes. Assuming someone fully opens this Pandora’s box and develops a biological superweapon and releases it, we might as well try to genetically enhance ourselves to survive at that point. But in any case, prevention of such a scenario should be our priority.

  • As this kind of research requires relatively limited resources and its products are hard to detect, it is a real challenge to slow down the proliferation of any dangerous biological materials. Ultimately, to safeguard human civilization from a biological apocalypse, we may need to colonize multiple places in Earth’s orbit, the Solar System, and across deep space (or hypothetically other dimensions), so that some humans somewhere are always isolated enough from any pathogen that they can quarantine themselves before it’s too late.

2.16 Weaponization of Space

Officially, the militarization of space is prohibited by the Outer Space Treaty, signed in 1967, and this treaty has been largely upheld so far. Unofficially, there’s every chance there already are some secret weapons systems placed in Earth’s orbit or elsewhere in the Solar System. And even if they aren’t there now, any superpower with a space program could at any point simply decide to disregard the treaty.

The problem is this - how can one enforce a treaty forbidding military presence in space, if the party in breach of the treaty is the only party with military presence in space? On Earth, to stop a rogue state that refuses to respond to diplomacy, you have to send an army. In space, the same is true, except the rogue state or actor has their own military force positioned in the cosmic equivalent of high ground.

The truth is, the treaty has been overtly upheld so far mainly because military presence in space, with our current technology, would be highly expensive and impractical - ICBMs launched from the ground are much cheaper and more practical. Due to the laws of thermodynamics, it should also be impossible to achieve true stealth in the vacuum of space, making any larger operations obvious.

For the military presence in space to make economic and tactical sense, some technological breakthroughs beyond what for example NASA can currently do would be required, such as:

  • Ability to gather all supplies and manufacture equipment in space, including food production, mining of construction materials, fuel generation, and complex 3D printing and assembly.

  • Reliable method of long-distance propulsion superior to chemical rockets. This could enable the missions to be supplied from Earth if they’re not self-sustaining in space, assuming the whole supply operation can remain secret.

  • Compact, long-lasting, highly efficient power source.

  • Sufficient radiation shielding technology.

  • Practical way of simulating or generating gravity.

  • Drones with advanced autonomous AI.

  • Advanced stealth technology or strategy to remain undetected or unidentified.

Without at least some of these breakthroughs, it’s hard to imagine that any military presence in space could be sustainable or effective. Unfortunately, no advanced weapons are needed to cause mass destruction in space - all you need is a mass hitting a target at high speed, a so-called “relativistic kill missile” or “rod from god”. For this reason, any presence in space can quickly become weaponized.

As was already mentioned at the beginning of this section, no actor officially possesses sufficiently technologically advanced assets in space that could be considered a military presence. Large nations with space programs undoubtedly have a number of spy satellites in orbit at any given time, as well as suborbital anti-satellite weapons to shoot down enemy spy satellites. That much is accepted.

Even that little can result in one highly undesirable scenario, however - the so-called “Kessler syndrome”, a cascade of space debris destroying satellites and thus creating more space debris, to the point where most of Earth’s orbit becomes too dangerous to use. It’s hard to say how much material in orbit would result in this scenario with what probability, or what kind of attack in orbit would set it off. What is clear is that no level of conflict within Earth’s orbit is acceptable.
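The threshold nature of such a cascade can be illustrated with a toy branching model, where all parameters are invented for illustration: each year every fragment has some chance of striking an object and spawning new fragments, while atmospheric drag removes a fraction of debris.

```python
# Toy model of a debris cascade (Kessler-style threshold behavior).
# All parameters are invented for illustration; real orbital-debris
# models are far more detailed.
def simulate_debris(initial: float, p_collision: float,
                    fragments_per_collision: float, decay: float,
                    years: int) -> float:
    """Debris count after `years`, with per-fragment collision chance
    `p_collision`, `fragments_per_collision` spawned per hit, and a
    `decay` fraction removed by atmospheric drag each year."""
    debris = initial
    for _ in range(years):
        new = debris * p_collision * fragments_per_collision
        debris = debris * (1 - decay) + new
    return debris

# Yearly growth factor: (1 - decay) + p_collision * fragments.
# Below 1 the debris population dies out; above 1 it runs away.
print(simulate_debris(1000, 0.001, 20, 0.03, 50))   # factor 0.99: shrinks
print(simulate_debris(1000, 0.005, 20, 0.03, 50))   # factor 1.07: explodes
```

The point of the sketch is that a modest change in collision rate, of the kind a single attack in a crowded orbit could cause, flips the system from slow decay to runaway growth - which is why the uncertainty about the threshold is itself an argument against any conflict in orbit.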

As for the real level of the secret militarization of space, considering the track record of the world’s military intelligence agencies, it is likely that at least some of the spy satellites must have some defenses, or at least some additional tactical purposes. Considering the recent UFO/UAP hearings in the U.S. and the trillions of dollars of unaccounted for Pentagon money, it also seems increasingly plausible that some of the necessary breakthroughs may have been made decades ago.

If all the whistleblowers are to be believed, some private U.S. corporations may be in possession of reverse-engineered advanced non-human technologies, including antigravitic trans-medium drives, compact exotic power generators, and advanced stealth. This would enable a sustained military presence across at least the inner Solar System, and potentially beyond. A space fleet (or Space Force, if you will) based on unacknowledged technologies could remain unidentified even if detected.

These conclusions are of course speculative at the moment, but speculation about possibilities is precisely the point of this exercise. If there has been a secret U.S. space fleet for at least a couple of decades, the good news is it hasn’t been used to cause any obvious damage. Its most likely uses so far must have been covert, meaning reconnaissance, smuggling, exploration, research, or at worst sabotage.

Given the unclear exotic nature of the assumed technologies, the destructive potential of such a fleet is unclear. On the basis of recent accounts from fighter pilots, it appears that any potential antigravitic craft can easily outperform any jets or rockets, but that by itself is more useful in defense rather than offense. As for any secret exotic dedicated weapons systems, some whistleblowers speak of advanced directed energy weapons designed to disable other advanced craft.

If a fleet with such characteristics is indeed operating in the vicinity of Earth and is broadly U.S.-based, then, perhaps paradoxically, it may actually lower the odds of several types of apocalypse, rather than increase them. Such a fleet should be able to intercept ICBMs if they were launched or neutralize them so that they cannot be launched. It could also be ideally suited to detect and intercept cosmic impactors. Its propulsion may not even require the craft to reach relativistic speeds to travel.

This net-positive effect would be possible because the main danger of a militarized space to human survival, in theory, comes from two factors - weapons of mass destruction stationed in space, and craft or any objects in space being accelerated to relativistic speeds. A fleet that doesn’t use either and can intercept both is ideally suited to protect life on the planet. It could however be net-dangerous politically.

It is laudable to try to ensure the physical survival of our species, but the question is at what cost, in terms of human liberty and prosperity, or diplomatic relations with alien species. A politically unaccountable space fleet which covertly interferes in world affairs to serve secret private interests, which doesn’t share its advanced technology with the rest of the world, and which may be taking adversarial actions against alien species - that may be the worst way of going about protecting Earth.

Firstly, the advanced technologies used to construct such a fleet must have highly beneficial potential civilian uses, particularly any compact and powerful clean energy sources. If anybody on the planet is in possession of such technology while the planet is burning because of our overuse of fossil fuels, and they have decided not to share it with the world, or even sell it to us, they’re frankly an enemy of humanity.

Secondly, antagonizing alien species, such as by trying to steal their technology by any means necessary, including allegedly shooting down some of their craft, is bound to have major negative consequences in the long run. It is already unjustifiable for a group of unelected agents who have escaped oversight by governments to try to set policy toward or relations with alien species, let alone attack alien craft. Whatever those policies or relations might end up being, they’ll affect every person on Earth, which means there needs to be accountability.

Based on the foreseeable possibilities, we offer these recommendations:

  • If there has to be some military presence in space or substantial civilian traffic, then there still need to be a ban on weapons of mass destruction in space, a ban on any military conflict in Earth’s orbit, and a conventional speed limit for any vessels based on their distance from Earth or any inhabited celestial bodies or space structures.

  • It’s generally a good idea to have a planetary defense fleet in Earth’s orbit and beyond, capable of detecting and intercepting weapons of mass destruction or fast-moving objects that are on a collision course with Earth or any space assets. Such a fleet however needs to be acknowledged and accountable to the United Nations or a similar global or system-wide political authority.

  • If there are any secret advanced technologies in the possession of intelligence agencies or private corporations anywhere in the world, whether reverse-engineered from non-human craft or purely man-made, that could be used or have been used to create a functional space fleet and that could solve any of the world’s major problems, they need to be disclosed and shared with at least the appropriate governments and by extension the scientific community. To help move this along, offers of amnesty or financial compensation should not be off the table for any of the rogue actors who are willing to cooperate. The primary goal is to avoid a war between the world’s armies and any space fleet.

  • If there are any spaceborne alien species and their craft living or operating on Earth or in its vicinity, then, since they haven’t tried to conquer us yet (presumably over a very long time), open, peaceful diplomatic relations with them should be pursued. Regardless of the outcome of such diplomatic overtures, any attempted militarization of space should remain within reasonable bounds for planetary defense and not look like a buildup for space conquest.

2.17 Alien Contact Scenarios

The term “aliens” in common usage typically refers to hypothetical biologically non-human intelligent beings from somewhere else in the same spacetime, who would presumably arrive on Earth using spaceships. However, as this long list of qualifications implies, a much more all-encompassing and practical definition of “aliens” would be simply “like us, but not us”, without any further qualifications. Space aliens are only a single possibility. Other possibilities include:

  • Crypto-terrestrials, or hidden ancient intelligent species that also evolved on Earth in parallel with us, or that were placed here by another type of aliens.

  • Exotics, or a wholly different kind of sentient or intelligent life that could occur in our universe naturally and spontaneously, such as living plasma or life based on any other substance that we normally think of as inorganic.

  • Human or non-human time travelers, possibly including interdimensional or multiversal travel, depending on the nature of the exploited physics.

  • Ultraterrestrials, or human or non-human intelligent species that have evolved on Earth, but within a different timeline of this universe, or in an adjacent universe with essentially the same laws of physics.

  • Interdimensionals, or effectively supernatural, transcendental beings from a different domain of reality, who can manifest themselves in our universe and interact with it, but who aren’t necessarily restricted by the same laws of physics as we are. Think gods, angels, demons, the fae, the djinn, ghosts, etc.

  • Artificials, or machine intelligences, designed and programmed at some point by us or by any type of aliens, including other artificials. However, as it is possible to create an organic computer, and given that biological organisms are effectively machines, the focus here is on the artificiality of the design, or the architecture, rather than of the constitution.

  • Hybrids, or genetically modified or technologically augmented organisms that have been originally produced through natural evolution. This could be done by us or any type of aliens, for example to “uplift” a species to become more intelligent or capable, “bioform” a species to become better able to survive in a new or changing environment, or exploit a species by making it more useful or easier to control. A hybrid species would be reclassified as artificial if the intelligent intervention causes its natural evolution to stop.

Each of these possibilities comes with a set of potential dangers so dramatically different that each of the scenarios needs to be addressed separately:

a) Space invasion scenario

This is by far the most explored scenario of potential alien-caused apocalypse, even outside of science fiction. The good news is, if advanced spacefaring species have visited Earth in the past and we’re still here, it likely means they don’t mean us harm. The bad news is, if they did mean us harm, we would be in a position similar to that of the Native Americans when they met with European colonizers. Or in the position of an ant colony, the anthill of which finds itself in the path of a bulldozer.

But even in a universe where only space aliens exist, the level and type of risk that their existence poses to us depends on several variables:

  • Number of spacefaring civilizations in the galaxy.

  • The hard limitations on space travel and advanced technology.

  • The likelihood of an advanced alien civilization being aggressive.

  • The nature of (xeno)history and (exo)politics in our space neighborhood.

These variables factor into our security in the following way. The fewer (concurrent) spacefaring civilizations there are within our reach and the slower the fastest possible form of space travel is, the lower the odds are that we will ever even encounter any spacefaring aliens, let alone have to defend ourselves from them. The slower the travel, the harder it also would be to mount any space invasion at all, and the more time we would have to detect it and prepare for it. If the best possible technology is close to what we have now, true stealth would be impossible in the vacuum of space, making detection of alien fleets or empires easy.

With this in mind, and assuming for now that intergalactic travel is prohibitively difficult, can we estimate how many spacefaring civilizations there probably are in our galaxy? There’s an equation for that, the Drake equation, which estimates this number from the rate of star formation in our galaxy, the fraction of stars with habitable planets, the fraction of those planets on which intelligent life evolves, and how long such civilizations endure, so that we have a good chance of contacting one. Of these variables, we have reasonable estimates of how many stars and planets there are, but the rest is unclear, which means we need more data.
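The estimate above can be sketched numerically. The following is a minimal illustration of the Drake equation; every parameter value is an assumption chosen purely for demonstration, since most of these factors are genuinely unknown.

```python
# Illustrative sketch of the Drake equation. Every parameter value below is
# an assumption chosen for demonstration; most are genuinely unknown.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the galaxy.

    r_star:   average rate of star formation in the galaxy (stars/year)
    f_p:      fraction of stars that have planets
    n_e:      habitable planets per star that has planets
    f_l:      fraction of habitable planets where life arises
    f_i:      fraction of those where intelligence evolves
    f_c:      fraction of those that become detectable
    lifetime: average lifetime of a detectable civilization (years)
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# "Rare Earth"-leaning guesses: pessimistic biological and longevity factors.
pessimistic = drake(1.5, 0.9, 0.1, 0.01, 0.01, 0.1, 1_000)

# Optimistic guesses: life and intelligence arise readily and endure.
optimistic = drake(3.0, 1.0, 0.4, 1.0, 0.5, 0.5, 1_000_000)

print(f"pessimistic: {pessimistic}")      # far below one civilization
print(f"optimistic: {optimistic:,.0f}")   # hundreds of thousands
```

The point of the exercise is not the outputs themselves but their spread: with equally defensible guesses, the same formula yields "we are alone" or "the galaxy is crowded".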

Among the skeptical astrophysicists and futurists, the currently most popular solution to this equation, based solely on available scientific evidence, is one called “rare Earth”, which posits that so many steps and filters have to be passed for an intelligent spacefaring civilization to spring up and endure that we’re likely the only one in at least our galaxy right now. On the other end of the spectrum, testimonies of thousands of UFO abductees and experiencers suggest the number of different alien civilizations currently visiting Earth to be in the low to high tens.

But given our substantial lack of data (we still only have a sample of one when it comes to life), caused chiefly by how small the fraction of the universe is that we have actually looked at, all estimates are mere guesses. Statistically speaking, if we find any signs of alien life elsewhere in the Solar System, then life probably is everywhere, and if we meet one spacefaring species anywhere nearby, then there probably are spacefaring aliens everywhere. This makes the likely choice a binary - we’re alone, or there are many aliens around. We only need to prepare for the latter.
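The statistical intuition behind this binary can be illustrated with a toy Bayesian calculation. The two hypotheses, the per-world probabilities, and the 50/50 prior below are illustrative assumptions, not estimates; the sketch only shows why a single independent second detection of life would shift the odds so dramatically.

```python
# A toy Bayesian sketch: one independent second detection of life massively
# shifts the odds toward "life is common". All numbers here are assumptions.

def posterior_common(p_rare, p_common, prior_common, detected):
    """Posterior probability of the "life is common" hypothesis after
    surveying one new world, computed via Bayes' theorem."""
    like_common = p_common if detected else 1.0 - p_common
    like_rare = p_rare if detected else 1.0 - p_rare
    numerator = like_common * prior_common
    return numerator / (numerator + like_rare * (1.0 - prior_common))

# Hypothesis "rare": life arises on 1 in a billion habitable worlds.
# Hypothesis "common": life arises on 1 in 10. Equal priors before looking.
p = posterior_common(p_rare=1e-9, p_common=0.1, prior_common=0.5, detected=True)
print(f"P(common | one new detection) = {p}")  # extremely close to 1
```

Under these assumed numbers, a single confirmed detection elsewhere in the Solar System would leave almost no posterior weight on the "rare" hypothesis, which is the formal version of "if we find life once more, life is probably everywhere".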

So, assuming there are many spacefaring alien species in our galaxy, how come we haven’t detected them yet? This question is formally called the Fermi paradox. There are many hypothetical solutions to this paradox that have been thoroughly explored by futurists like Isaac Arthur. There’s no need to list all of the possibilities here, as we’re simply assuming for now that space aliens are around, but some of the core possibilities merit discussion in the context of trying to protect ourselves.

The two most dangerous solutions to the paradox are the dark forest scenario and the self-destruction scenario. In the dark forest scenario, all civilizations are hiding from each other, as anyone who shows themselves gets wiped out. If this is the case, and spacefaring aliens are nearby, then we have already shown ourselves by broadcasting signals and sending out probes, so we would either have been wiped out already, or the strike against us is on its way now or soon. I guess we’ll see.

In the self-destruction scenario, virtually all advanced civilizations tend to be aggressive enough to destroy themselves with weapons of mass destruction of their own making before expanding very far. In this case, we’re likely to destroy ourselves too, possibly soon. The problem with this solution is that it is hard to imagine this happening to everyone, which is the only way the observable space could remain completely empty, as far as we can determine today.

It seems more likely that while the self-destruction filter is very real, it only removes alien species that are too aggressive to be able to coexist with each other, let alone others. It definitely can happen to us, but assuming there are alien species around who already know about our existence or have visited our planet, they would have to be the survivors of this filter, and therefore not genocidally xenophobic. Which would explain why they decided to mostly leave us alone to govern ourselves.

Thanks to Star Trek, many scientists and authors subscribe to the idea of advanced aliens having to be mostly peaceful and cooperative, as well as not wishing to interfere with the development of less advanced species. Given that aggression may be a survival filter already at our level of technological development, which is to say right when one goes to space, and given that it must only get more acute as more powerful technologies get developed, this may not just be science fiction.

If aliens are numerous and around, and the survivor bias is toward non-aggression, cooperation, and tolerance, then the idea that the most advanced species have formed some sort of alliance or federation seems quite plausible. While it is hard to speculate on exopolitics before we know anything for a fact about the universals of politics, the fundamentals of politics as we understand them are the most likely principles to be universal. Particularly any rules rooted in evolutionary psychology.

It does make sense that the non-aggressive species would form an alliance primarily to ensure mutual security against upstart aggressors. If benevolence comes naturally with non-aggressiveness, they would be likely to extend that protection to pre-spacefaring species as well. Given that the survivor bias leans toward non-aggression, such species should always have a numerical advantage over aggressors, which could make the galaxy a very safe place overall.

This doesn’t mean we would be perfectly safe forever, though, and it might severely limit our freedom to expand. Some Fermi paradox scenarios are in fact based on this type of exopolitical situation, such as the zoo hypothesis. We could be held in a kind of quarantine, being effectively treated as a preserve, reservation, or protectorate. We would be safe from large-scale attacks, although some alien criminal activity could probably sneak through the cracks and escape notice.

Beyond this big picture view, who knows. The details of what such a federation’s existence would mean for us are far too dependent on the specifics of its history and politics, about which we know nothing. Maybe they’d only be benevolent enough to not wipe us out, making us their prisoners. Maybe they’re secretly helping us ascend to become their equals. Maybe their mission is to actively seed life and uplift species across the universe. In any case, fighting them won’t help us.

They may be peaceful, but what if they’re provoked or threatened? Fortunately for us, according to whistleblowers, they don’t seem to get provoked easily. We may have already shot down some alien craft, and we’re still here. But there must be a limit. If we try to rapidly develop more advanced weapons to a point that could actually endanger the federation, they’ll probably slap us back down long before we would achieve military parity with them. They’re unlikely to allow us to build up more than basic planetary defenses against rogue objects and actors. If aliens are around, our best hope is to try to understand them and reason with them.

That probably wouldn’t make for a very exciting movie, but in reality, excitement is most definitely not the objective here. If advanced aliens are around, we can only hope that they’re benevolent, but at the same time experienced, powerful, and stable enough to be able to enforce a pangalactic, intergalactic, intertemporal, and interdimensional peace. Whether we want to join them or survive them, staying on their good side by being on our best behavior is the only rational strategy. Remember, if they were evil enough to justify fighting them, we wouldn’t be here.

The only scenario in which any fighting would make sense would be if we detected an expansionist species on our level of technology, or one below us if we decide to be the aggressors, in the absence of any advanced alien peacekeepers around. The wildcard here, as well as in the more exotic solutions to the Fermi paradox, would be the maximum achievable level of technology. If the speed of light turns out to be the maximum limit, space conflicts will probably be very rare, long, and familiar.

The greatest danger of a possible invasion by aliens at a level of development comparable to ours may not actually be their capacity to inflict damage on us, but instead the psychological effects of our awareness of the possibility. We could be able to anticipate or detect an invasion decades or centuries before any fleet would reach us, in which case it could be a real challenge to maintain constructive public opinion, at least without leaning heavily into supremacist or xenophobic sentiments. Constructive public opinion would mean (in any scenario) that:

  • Most people believe the facts of the situation, as opposed to believing that real facts are fake and fake facts are real.

  • Most people believe we aren’t doomed, that we have a decent chance of surviving the invasion as both a species and a civilization.

  • Rule of law is maintained, meaning there’s no anarchy in the streets like looting or rioting, only a reasonable degree of peaceful protest at most.

  • People are still able to do the work necessary to maintain civilization and make preparations for the arrival of the invaders, or to try to prevent it.

This would likely be very difficult to accomplish, but avoiding the issue by not telling the population about the threat seems to be a strictly worse strategy. Since sufficient defensive preparations are unlikely to be possible in secret without the majority of the population being involved, secrecy could at best enable the survival of a small group of people. Going for this option alone would only make sense if the threat is judged to be overwhelming and if the secrecy of some kind of escape plan were essential for it to have any chance of success. But considering all of the testimonies of UFO whistleblowers, we may never have to worry about a slow alien invasion and its psychological effects.

If faster-than-light travel is possible, as it increasingly seems to be, or other physics-defying technologies like entropy reversal or time manipulation, future conflicts between near equals could become very difficult to conceptualize, favoring the species with greater imagination. Technical limitations would matter greatly, and on the most ultimate cosmic scale, it may become possible to wield laws of physics as weapons. We can only recommend dedicating a whole institute to the charting of all of the hypothetical possibilities of exotic cosmic warfare, as a backup plan after doing everything to ensure there will be no fighting with aliens.

b) Cryptid resurgence scenario

Of all the alien conflict scenarios, this one is by far the most down to Earth and familiar. For starters, this type of alien is (or would be) very much of the Earth, specifically our Earth, in our present, on our historical timeline. But more than that, we have actually already faced this scenario. There was a time, not that long ago, when scientists agree we were not the only intelligent species roaming the Earth. It is also believed that we fought and eliminated, or merged with, our rivals.

The most basic type of alien cryptid would simply be a remnant population of some other hominid species, likely hiding in some of the preserved primeval forests or jungles, or perhaps cave systems. Officially, this isn’t believed to be the case by the scientific community, but there are thousands of witness accounts every year all over the world of encounters with hairy humanoids in the wilderness. They’d have to have small populations and be very adept at hiding, perhaps with some help from secret agencies, but still, this type of alien is by far the most likely to exist.

Outside of the bigfoot or wildman mythos, which covers wild human-like beings who are smaller or larger than us, or differently hairy, some more unlikely types of intelligent cryptids have been witnessed or hypothesized. The two other types of non-human intelligent species that are most commonly alleged to be sharing our planet with us today are dogmen, or humanoids with a canine head, and reptilians. Of these, dogmen don’t seem to make much evolutionary sense, but reptilians do.

Intelligent reptilian humanoids could have hypothetically evolved on Earth from the dinosaurs and then survived the extinction of their less intelligent relatives, most likely by going underground, or into near space. After that, they may have adapted to living deep underground, or on some other celestial body like the Moon, Mars, or Venus, and lost interest in reconquering the surface of Earth. Or they could have concluded that living elsewhere is safer in the long term. This possibility is referred to as the Silurian hypothesis, and it is sometimes discussed even by serious scientists. By now, these cryptids could possess advanced technology.

Comparatively, the dogmen, or werewolves, seem to be much more often sighted than the reptilians, particularly in North America today, but there’s no Earthly evolutionary scenario that would account for their existence. This type of intelligent cryptid is most associated with supernatural phenomena, which could alternatively be indicative of advanced technology. Considering the associated lore of magical human-to-beast transformation, perhaps it could be a bioengineered chimera. Or maybe nature has more tricks up its sleeve than we know, or physics can be bent.

Ancient mythology across world cultures is filled with half-human, half-beast beings, like the minotaur, centaur, mer-people, and so on. Interestingly, almost all of the chimeras other than dogmen are not being sighted at all today, and have not been sighted for hundreds or thousands of years. There are some modern mer-people witness accounts, some winged humanoid accounts, and some accounts of people fully transforming into animals different than canines, but that’s about it.

While none of these cryptids are proven to exist to the exacting scientific standard, in terms of realistic speculation within the fields of intelligence and security, we can only safely disregard those beings that are not being seen by people today. One could reject all of the witness testimonies out of hand as products of the imagination. But if that is so, why is the imagination of the vast majority of the witnesses consistently constrained to only a handful of supposedly imaginary beings? People aren’t claiming to see Hollywood vampires in the woods, but they are, quite literally and repeatedly, claiming to see Hollywood werewolves in the woods.

In terms of trying to imagine and prepare for a potential large-scale conflict with such cryptids, these three main types - bigfoot, dogmen, and reptilians - represent three different types of conflict, or cryptid resurgence. With bigfoot, the largest scale of conflict would probably be a guerilla war over the control of densely forested areas; with dogmen or other chimeras (including possibly reptilian ones), depending on the limitations of the chimeric transformation (or illusion thereof), we could be dealing with variable degrees of infiltration; and with Silurian reptilians, or other ancient apocalypse survivors, even a conventional (re)invasion is possible.

Of these three scenarios, the most problematic one is the chimeric infiltration, as the other two are fairly straightforward. If there are remnant populations of other hominids, the only reasonable and humane response would be to give them the status of a protected species, along with some territory to live on without interference. Something like this might have even already happened, only secretly, perhaps to further ensure their protection. If there are ancient survivor underground or underwater cities on Earth, or ancient astronaut bases on the Moon or further out, we should seek formal diplomatic relations with these city states. Relations with other intelligent species who evolved on Earth should be politics, or war, as usual.

The problem when dealing with chimeric cryptids is that biologically speaking, they probably wouldn’t be very much like us. There are some relatively safe bets from the perspective of evolutionary psychology, when one is dealing with natural selection of the same DNA molecule on the same planet. A chimera that’s a product of some sort of intelligent design, whether through supernatural or bioengineering means, could have a vastly divergent psychology, especially if its way of life is substantially different from the way naturally evolved organisms go about life. If the chimera was instead an alien species that had naturally evolved on another planet, but which was later transplanted to Earth, that would actually make it psychologically closer to us.

This means that we may not be able to understand what it wants. We also don’t really know what it could do, even if we’re talking about specifically werewolves with all of the testimonies and lore that’s available to us. Is the transformation between the human form and an animal form physical, biological even, or is it some sort of illusion? If it’s physical, how much energy or time is required for the transformation? If it’s an illusion, how does it work, exactly? Is it an optical illusion, like a hologram, or is it manipulated perception? How many people can be fooled at once by a single chimera? Can any test be devised that could reliably identify the chimera?

If the many explorations of the invasion of the body snatchers scenario in science fiction are to be believed, the greatest danger of chimeric infiltration is not the infiltration itself, but the resulting paranoia. People could stop cooperating and could end up harming or killing each other, if a reliable means of telling human from chimera could not be established quickly enough. The existence of chimeric cryptids hasn’t been proven yet, and many people who aren’t in power are already suspecting that many people who are in power must be inhuman shapeshifters.

While this type of paranoia should be kept in check at all times among the general public, the situation is different for planetary intelligence or security officers. From the planetary defense perspective, who’s to say we have done enough already to rule out the possibility that we indeed might be under this type of attack right now? After all, terrestrial cryptid chimeras are not the only possible type of aliens capable of this type of infiltration. We will discuss how to deal with this type of conflict in more detail and more comprehensively in regards to the human-alien hybrids.

c) Weirdmageddon scenario

“Weirdmageddon” is a term borrowed from the animated show Gravity Falls, and the reason why we find it to be the most suitable descriptor of this type of potential crisis is that it is very hard to anticipate how we could be invaded or harmed by lifeforms who aren’t based on the biology as we understand it. There are some hypothetical concepts that we can discuss, but there are no certainties here.

We should start by trying to define how something could be alive without our kind of biology, as that is a necessary exercise to maybe gain the ability to detect other kinds of life on Earth and beyond. This is something that experts from various biology or space-related fields have devoted some thought to, so we do have a foundation to start from. But before we get to that, signs of normal life include:

  • Organized structure

  • Consumption of energy

  • Reproduction

  • Growth

  • Metabolic processes

  • Responsiveness to stimuli

  • Autonomous movement

  • Aging and death

Life as we know it fulfills all of these criteria, and it does so using cells based on RNA or DNA. Already with these basic requirements, however, we run into a borderline case - viruses. They don’t have cells of their own, but they can hijack the cells of other organisms to replicate themselves, thus becoming part of a whole that overall does fulfill all of the criteria for a living being. On their own, viruses are not quite completely alive. For our purposes, though, this is alive enough.
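The borderline status of viruses can be made concrete by turning the checklist above into a simple scoring function. The criteria strings come from this section; the scoring scheme and the virus entries are illustrative assumptions for demonstration only, not a serious astrobiological model.

```python
# Toy illustration: score a candidate lifeform against the eight signs of
# life listed above. The scoring scheme is an assumption for demonstration.

LIFE_CRITERIA = [
    "organized structure",
    "consumption of energy",
    "reproduction",
    "growth",
    "metabolic processes",
    "responsiveness to stimuli",
    "autonomous movement",
    "aging and death",
]

def life_score(observed):
    """Fraction of the standard criteria a candidate fulfills."""
    met = sum(1 for criterion in LIFE_CRITERIA if observed.get(criterion, False))
    return met / len(LIFE_CRITERIA)

# A free virus particle meets almost none of the criteria on its own...
virus_alone = {"organized structure": True}
# ...but the virus-plus-hijacked-host-cell system behaves like a full living being.
virus_in_host = {criterion: True for criterion in LIFE_CRITERIA}

print(f"virus alone:   {life_score(virus_alone)}")
print(f"virus in host: {life_score(virus_in_host)}")
```

The gap between the two scores is the checklist's way of saying what the text says in words: a virus on its own is not quite alive, while the virus-host system is.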

The number one core characteristic of an anomalous lifeform would be the absence of typical cellular structure, along with the absence of an underlying basis in RNA or DNA, or even in any directly analogous chemical. Theoretically, there could be any number of different complex carbon-based chemicals, potentially even silicon-based chemicals, that could have evolved on another world to fulfill the same functions that RNA or DNA do on Earth. However, those should create life that’s broadly similar to ours.

In addition to the lifeform not being cell-based, at least in terms of familiar carbon or silicon-based biochemistry, it can also break any other of the standard requirements for life. Let’s go through the checklist once again and consider what kind of living being could break each of the criteria, one by one:

  • Disorganized beings - this may ultimately be an issue of semantics, as what we would call disorganized may simply always be a form of organization that we fail to recognize. But there’s something to be said about the possibility of emergent chaotic transient “living” emanations. In fantasy and science fiction, this hypothetical type of being is called an elemental. A chaotic elemental would usually be one related to fire or lightning, which could in reality be something like a living plasma or a sentient storm. There’s also the concept called “Boltzmann brain”, or a hypothetical living mind that could spontaneously emerge from chaotic quantum fluctuations out of basically nothing. Such beings could be moving or growing and responsive to their environment, up to at least animal-like intelligence, but they likely wouldn’t be capable of reproduction, as they would be highly unstable over time and not evolved to reproduce. In terms of real-world evidence, living, intelligently moving plasmas, especially in the shape of an orb (like ball lightning with a mind of its own), are repeatedly reported. There has also been some serious speculation about weather systems or even explosions being in some ways possibly analogous to lifeforms.

  • Beings who don’t consume or exude energy, or who produce energy out of nowhere - lifeforms of this type would require something quite anomalous going on at the level of fundamental physics, like some interaction with a universe beyond our own, which would make them essentially supernatural. Still, there are accounts of flying objects that don’t emanate heat and don’t require resupply of any kind, as well as accounts of shadow beings blacker than black that look like 2D voids in reality. This could be a product of advanced technology, or observer error in all cases, but it is theoretically possible that there could be lifeforms that have one foot in our universe and the other foot elsewhere, perhaps drawing energy from the other dimension.

  • Beings who don’t reproduce - as we have already mentioned, a chaotic emanation could be alive without requiring reproduction, and there are many individual conventional organisms that for whatever reason aren’t capable of reproduction. If an evolved or engineered organism becomes effectively immortal, it could be around for a very long time without having to reproduce. “Reproduction” as a concept is also not free from semantic problems. What if the sterile being creates other beings using intelligent design and engineering, as opposed to some more natural process like mitosis or meiosis? What if a chaotic emanation is what’s called a “thoughtform” or “tulpa”, meaning it can enter the minds of intelligent biological beings as a meme, and thus increase the chances of its future re-emanation through a hypothesized physical link between consciousness and probability? In short, many hypothetical forms of reproduction can be abiological by the standard definition.

  • Beings who don’t grow - it is hard to imagine a living being that never experiences growth, as it would have to start out complete. Even beings manufactured artificially would not be an exception, as manufacture or assembly can be thought of as a kind of growth. The main issue here would probably be with our ability to detect growth. Some hypothetical organisms could live on much longer timescales. For example, many crystals exhibit some of the characteristics of living beings. If the lifeform was crystalline in nature, it could be living on what to us are geological timescales. Fortunately, there has been some research done in this area already, mainly regarding how to identify the mathematical signatures of growth or other metabolic processes, even if they occur at different speeds or more obscurely.

  • Beings who don’t metabolize (like us) - given that metabolism primarily refers to biochemical processes that convert food into usable energy for the cells or keep the cells or the organism as a whole alive, a lifeform that’s not cell-based would at best have an analogue of this kind of metabolism that functions differently. In theory, it could be a process by which the lifeform sustains itself using a wholly different chemistry, or a type of energy other than chemical, such as nuclear. There are some indications this is possible in known organisms, given that life on Earth is either aerobic (oxygen-breathing) or anaerobic (metabolizing elements and compounds other than oxygen, like sulphur or iron). While oxygen seems to provide by far the most energy of all the known options, effectively enabling complex multicellular life, many species of single-celled extremophiles like the archaea can thrive on almost any energy gradient, however extreme. There are also fungi that are adapted to convert ionizing radiation into chemical energy through melanin, while the good old photosynthesis is a process that we now know exploits quantum mechanics to convert mixed frequencies of visible light into chemical energy. Hypothetically, the metabolism of a living organism could also be based on ammonia or methane, or radioactive decay, or, much more exotically and powerfully, perhaps even on fission, fusion, or annihilation. Exotic exploits of quantum mechanics or other quirks of physics are also theoretically possible. As for organisms without any metabolism at all, they probably wouldn’t be very stable over time or capable of reproduction, meaning that chaotic emanations would be the most likely candidates.

  • Beings who don’t respond or move - there are examples of Earth organisms that are like this to some extent, mainly autotrophic organisms, or those that produce their own food from inorganic sources. Think algae, grass, or trees. As some recent research has shown, these types of organisms on Earth actually do move and respond to their environment to some extent, and even communicate with each other, but they do so at a rate so slow that it’s barely perceptible to us. What they also have in common is that they don’t seem to be very intelligent, which may be due to movement being a core prerequisite for developing advanced intelligence, as the theory of embodied cognition posits. This may be different for life evolved on other planets, but a living thing likely has to be at least minimally responsive to its surrounding environment. The only logical exceptions would be either an autotroph that’s so fast-growing that it cannot be eaten faster than it reproduces, or the spontaneously formed Boltzmann brain that can only experience, but do nothing else. In any case, it would be exceptionally difficult to detect a lifeform that is about as active on its own as a rock, unless it at least grows or otherwise obviously metabolizes, or unless we become able to detect consciousness (which is being studied).

  • Beings who don’t age or die - on Earth, there are some very long-lived organisms, at least as colonies, as well as organisms for which senescence seems to be optional. Again, this often includes autotrophs, but there are some interesting oceanic species, like one species of jellyfish (Turritopsis dohrnii) that can periodically de-age itself. This suggests that death by aging is optional, and that aging and death are perhaps more of an evolutionary adaptation, for example to prevent population crashes through overpopulation, or to make the species overall more adaptive to changing circumstances. The only truly exotic type of not dying would have to be in an organism that cannot be effectively destroyed. In a physical reality that includes exploding stars and extreme gravitational warping, that would be an impressive feat. Likely, this would have to be a matter of degree. There are complex organisms able to withstand a variety of relatively extreme conditions, such as tardigrades or some bacterial spores, and some molecules that can replicate inside of a host are relatively hard to destroy, like prions (which can withstand very high temperatures). Even so, no known organism or organic compound is immune to all possible destructive mechanisms, so it seems that our kind of life cannot adapt to everything at once. With all that said, perhaps there could be an alien lifeform that is much more resilient than we currently believe is possible, like a naturally nuclear-powered Godzilla-type monstrosity made of exotic matter.
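To put the “much more exotically and powerfully” claim from the metabolism discussion above in perspective, the energy sources mentioned differ by many orders of magnitude in energy per kilogram of fuel. The following sketch uses rough textbook figures; all values are order-of-magnitude approximations for illustration, not precise measurements:

```python
# Approximate energy densities (J/kg) of the energy sources discussed above.
# All figures are rough order-of-magnitude values for illustration only.
C = 299_792_458  # speed of light, m/s

sources = {
    "glucose oxidation (aerobic chemistry)": 1.6e7,   # ~16 MJ/kg
    "uranium-235 fission": 8.2e13,                    # ~82 TJ/kg
    "deuterium-tritium fusion": 3.4e14,               # ~340 TJ/kg
    "matter-antimatter annihilation": C**2,           # E = mc^2, ~9e16 J/kg
}

baseline = sources["glucose oxidation (aerobic chemistry)"]
for name, density in sources.items():
    print(f"{name}: {density:.1e} J/kg ({density / baseline:.1e}x chemical)")
```

Kilogram for kilogram, an annihilation-powered organism would have access to billions of times more energy than one running on aerobic chemistry, which is why such a lifeform would be so far outside our frame of reference.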

Now that we have some understanding of what exotic or anomalous lifeforms could look like, we can discuss several imaginable scenarios of a potential conflict between us and them. In the case of this type of life in particular, intelligence may not be an important component of the level of threat that it could pose to us. We’re more likely looking at accidents caused by random encounters between us and something we don’t understand that’s just trying to live its life, or at an infestation.

The accidents could very realistically mean literal accidents involving our vehicles, most probably planes or spaceships, and some emergent plasmas or voids that behave unpredictably. This feels difficult to scale up to a level of global apocalyptic threat, but that may be a question of the frequency with which such encounters occur. A change in the local cosmic environment or in the state of our planet could perhaps trigger something like an elemental storm. A lot more research is required here.

As for the possibility of infestation, that could either happen due to our spacecraft intentionally or unwittingly bringing alien organisms back to Earth from space, through some sort of artificial or naturally occurring interdimensional rift, or via the quite plausible proposed process called panspermia, or bits of living things spreading through space on dust clouds, comets, or meteorites. Even just flying through the tail of a comet could theoretically transfer biological material to Earth.

Such infestation could reach apocalyptic levels if the alien lifeform is capable of outcompeting local life. This could happen even if it’s biologically normal by our current standards, but it would be more likely if the lifeform has an exotic nature and capabilities. At the same time, much like oxygen was poisonous to the anaerobic extremophiles, our type of life could be literally poisonous to them, which may be why bacteria overthrew the archaea in Earth’s prehistory. As far as we can tell now, it’s basically a coin toss.

The result of our life competing with some other type of life could also be some sort of draw, or a mere evolution of Earth’s environment toward a new paradigm in which we could find a place, if we adapt our technology or biology. After all, life on Earth has gone through several major transformations already, like from the anaerobic to the aerobic paradigm, or from single-celled life to multicellular life, or from plants to animals, or from the ocean to the land. Trees, for example, are relatively new.

In any case, it’s hard to make specific preparations without knowing any specifics about any other forms of life. As far as material life goes, we just need to be cautious in general, particularly about bringing alien artifacts or organisms to Earth. What’s trickier is safeguarding ourselves from any life that could be described as memetic rather than genetic, like the mythological thoughtforms. The term for this type of infester is cognitive hazard, and such hazards may require careful information management, particularly if we want to keep communicating using networks.

Considering the fact that we will probably never be able to fully anticipate what exotic forms of life could be like or do to us, we need to generally try to make our civilization, including our social structures and our technological infrastructure, as resilient as possible. If we only safeguard ourselves from normal, fully predictable risks, we’re merely guaranteeing that something unusual will get us eventually. As for specific research that can be done here, we need to find and study other life, ask around, and try to figure out the connection between probability and consciousness.

d) Time/multiverse war scenario

Within the framework of time travel, even biologically normal humans could be alien to us, if they come from a time that’s sufficiently far off on our timeline, or if they come from a sufficiently different timeline. Of course, time travelers could be any other type of aliens in addition to being able to traverse time, but the original level of difference may not matter much. Distance in time could be somewhat analogous here to distance in space, in the sense that the farther away the beings are from us in time, the more different they can be from us. But also conversely, give any alien species enough time, and they could (d)evolve into something that’s close to us.

But let’s explore the possible logistics of time travel first. With time travel or other forms of time manipulation, the specifics of how it could work make a lot of difference in the degree to which it can be misused. If the only forms of time travel that are allowed in this universe are relativistic time dilation or closed time loops, then there isn’t much to worry about. At most, one could effectively travel into the future by going so fast that time slows down for them. This could be used to outlive a threat, or to allow specific individuals to manage a long-term process.
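The travel-into-the-future-by-speed option follows directly from special relativity and requires nothing speculative. As a minimal sketch, the Lorentz factor gives the number of years that pass in the outside world per year experienced by a fast-moving traveler:

```python
import math

C = 299_792_458  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """Time dilation factor gamma for a traveler moving at speed v (m/s):
    gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At 99.9% of light speed, roughly 22.4 years pass outside
# for every year experienced by the traveler.
gamma = lorentz_factor(0.999 * C)
print(f"gamma at 0.999c: {gamma:.1f}")  # prints "gamma at 0.999c: 22.4"
```

This is the sense in which one could “outlive a threat”: a sufficiently fast round trip converts a subjective year into decades of outside time, with no paradoxes involved.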

The real trouble begins if non-looped time travel into the past is possible, as that could in some way result in the changing of reality or the creation of new realities. If the only way to “change” a timeline is to create a new, separate one, however, the danger would exist on a more removed, meta level. It’s also possible that in the multiverse scenario, one would only be traveling between all possible universes that already exist, in which case nothing would in fact be changing in reality.

But assuming reality can actually be changed with some sort of technological intervention, or assuming there can be an interaction and therefore conflict between realities, is there anything we could do to maintain control over our current reality? That is a question that needs to be seriously explored by people who are much more qualified than your average science fiction writer, but we can, and should, speculate. Let’s do that with some existing accounts of real time warps.

Many experiencers continuously testify to the ability of alien beings or unusual phenomena to locally alter the speed at which time is running relative to the outside world, sometimes to the point of a virtual time stop. Normally, within relativistic physics, a lot of gravity would be required to do that, which would require a lot of mass - which would hardly go unnoticed by the witnesses, who likely wouldn’t be able to survive it. If such a precise slowdown is possible to achieve, then it would definitely be useful in combat, either to paralyze the target, or to prevent being hit.
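To show just how much mass conventional physics demands here, the Schwarzschild time dilation relation, dτ/dt = sqrt(1 − 2GM/(rc²)), can be inverted to estimate the mass required for a strong local slowdown. This is a back-of-envelope sketch with rough constants, for illustration only:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458      # speed of light, m/s
M_EARTH = 5.972e24   # Earth mass, kg

def mass_for_slowdown(factor: float, r: float) -> float:
    """Schwarzschild mass M such that local clocks at radius r run at
    `factor` times the outside rate: factor = sqrt(1 - 2GM/(r c^2)),
    solved for M."""
    return (1.0 - factor**2) * r * C**2 / (2.0 * G)

# Mass needed to slow local time to 10% of normal within ~10 meters:
m = mass_for_slowdown(0.1, 10.0)
print(f"{m:.2e} kg, about {m / M_EARTH:.0f} Earth masses")
```

The answer is on the order of a thousand Earth masses packed within ten meters, which no witness would walk away from. If the reported localized time warps are accurate, they would therefore imply some mechanism other than brute gravitational mass.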

If a localized slowdown is possible, then a localized speeding up of time should also be possible. That could be used to buy yourself more time relative to your enemy for regeneration, construction, research, etc. Presumably, there would be technical limitations on the size or relative strength of such fields, or the duration of their effects. If they’re technologically generated, then it should also be possible to counter them using similar technology. Theoretically, technology that can generate anti-gravity using electricity, which is necessary to allow UFOs to levitate, let alone fly faster than light, should be able to manipulate gravity to cause time dilation.

This much doesn’t seem very outlandish to imagine, and in any futuristic conflict, this type of technological capability would be essential, if it is indeed possible. But what of real time travel into the past on one’s own timeline? Are there any accounts of that? Well, there are a couple of documented incidents or personal accounts of confused people appearing in a time in which they don’t belong, as well as some photos from the 20th century capturing people who are holding or wearing items that look too modern for the time. Nothing conclusive, but enough for concern.

There’s also a whole modern phenomenon called the Mandela effect, which either means that human memory is much worse than we thought, or that maybe some details in reality are constantly changing, except in the memories of some people. Following some speculations, this could be a kind of proof that our reality is being simulated, and we will discuss that possibility later, but for now, let’s focus on the speculations that lean toward constant timeline alteration via time manipulation.

To give you a couple of examples of what these alterations/misrememberings are like, it could be a small title, brand name, or logo change, like a series of children’s books being called “Berenstain Bears”, when many people remember it as “Berenstein Bears”; it can be a popular quote from a movie which people remember wrong, like Morpheus in the Matrix saying “What if I told you…”, which he apparently never said; it can also be a significant change in the historical timeline, for example regarding the fate of the titular Nelson Mandela, who, as many people remember, apparently died in prison, which never happened in this reality. Examples abound.

If these are the results of time travel, then it would indicate that it could be possible to alter major historical events, but probably not without causing unintentional small changes all across the timeline, perhaps because of the butterfly effect from chaos theory. Which isn’t very surprising. What is somewhat surprising is that some people would still retain their memories from the previous version of reality, meaning that one could perhaps even track the changes to the timeline in real time as they’re being made. Although apparently only in living memory, as any documentation that’s recorded will change along with the changed event.
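The butterfly effect mentioned above can be demonstrated with any chaotic system. In this minimal sketch, two runs of the logistic map (a standard textbook example of chaos, not anything specific to timelines) start a tiny perturbation apart and end up completely uncorrelated within a few dozen steps:

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x), started
# one part in two hundred million apart, diverge completely within a few
# dozen iterations - the butterfly effect in its simplest form.
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # perturbed in the 9th decimal place

print(f"difference after 50 steps: {abs(a[-1] - b[-1]):.3f}")
```

By the same logic, even a surgically small intervention in the past should produce an ever-widening spray of incidental differences, which is consistent with the small, scattered alterations people report.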

Skeptics are very convinced that all of these reported effects are only misrememberings, but there have already been some follow-up investigations on specific Mandela effects that have documented an evolution of the past over time. One YouTuber, for example, revisited his prior reporting on the Mandela effect of the German terrorist bombing in New York during WWI, and found numerous examples of old footage of the event that are easy to find today, but that he was unable to find at the time of the earlier report. Almost as if new old footage had started to have always existed in the intervening years.

If the only technology we need to track how our timeline is manipulated is our brains, then we might have a chance of achieving some level of control over it. If only inside of ourselves, until we figure out who’s causing the changes using what technology. The main conspiracy theory today is that it is being done using CERN’s Large Hadron Collider, which is very unlikely and completely unproven, although we shouldn’t dismiss the possibility that some human agency has already figured out the necessary technology and started using it to manipulate time.

The main reason why this possibility shouldn’t be discarded out of hand, as well as why the primary suspects in regard to any time manipulation of our own history are human aliens, is that we, as in humans, have by far the greatest motive to attempt such a thing. If a hostile truly alien species had this technology and was willing to use it to alter our timeline, it would likely alter it so much that we wouldn’t be in it anymore. Benevolent aliens might want to try to alter our timeline to prevent ultimate disasters, but they’d also probably be able to prevent those before they happen in the first place. Likely only we would try to tweak our history slightly.

Then again, maybe there are specific technical limitations on how such technology could work or be used. Maybe one can only go so far, or spend only a limited time in the past. Maybe there already is some kind of time police around, meaning that only small changes can be attempted before the risk of being caught becomes too great. Maybe only small changes are relatively safe to attempt due to the butterfly effect alone. Whatever the case may be, we should definitely develop an equivalent of time police, if only to monitor possible changes in the timeline. And if a technology exists that can alter the past or reality, it needs to be strictly regulated.

The only safe and highly beneficial use of time travel into the past or alternative timelines would be to retrieve information, to learn more about our history and the possibilities of existence. Assuming this can be done safely without any risk of altering reality. Even just theoretical understanding of such technology could vastly improve our ability to model and predict potential futures. And maybe that’s exactly what at least some aliens or UFOs are doing, as tourists, probes, or scientists from other times or realities. Whether they’re like us or not, since we’re still here and our timeline seems to be changing at most slightly, we’re probably not at time war.

e) Supernatural apocalypse scenario

Given the religious origin of the term “apocalypse”, the concept of the end of the world being brought about by supernatural beings is strangely familiar. As is the main skeptical critique of the concept - there is no evidence for the existence of any supernatural beings, so there’s nothing to worry about. Unfortunately, this line of reasoning becomes less comforting when one realizes that the absence of evidence may be due to an absence of properly looking for it or accepting it.

For the purposes of trying to achieve the aims of this organization, we don’t believe we have to discuss possibilities for which there’s little to no evidence. But at the same time, unlike the skeptical community, we do consider testimony, especially frequent and consistent testimony, to be a form of evidence worthy of serious consideration. This is why we won’t be focusing on the threat posed to us by beings like unicorns, centaurs, dragons, Santa Claus and many more, for the existence or effects of which there’s very little credible or compelling modern testimony.

The supernatural beings that we will focus on are those that have been taken the most seriously throughout history and the presence and effects of which keep being reported in the present day. This includes the Abrahamic god of the Old Testament, including his equivalents across cultures, agents like angels, or enemies like demons; spirit beings found in nature like the fae, djinn, or other equivalents across cultures; werewolves, vampires, and other shapeshifters or practitioners of witchcraft or “bad medicine”; and ghosts, as in the spirits of deceased humans.

Starting with God, who is prophesied to bring about the end of days, it needs to be said, before we get into any metaphysics, that there’s a non-zero chance that the religious texts of monotheistic faiths are in fact historical documents of some sort of ancient astronauts episode. After all, as Clarke’s third law states, any sufficiently advanced technology is indistinguishable from magic. Many of the descriptions of the “glory” of god do sound like rockets flying through the skies; angels are described a lot like machines; and God’s two main ideas of how to end the world, a flood or fire, happen to be the main effects of cosmic impactors.

The weirdest aspect of this interpretation isn’t that biblical stories could be describing space technology - the descriptions are quite suggestive of rocketry and robots; it’s how low-tech that technology would have been. The alleged alien federation monitoring Earth seems to be light years beyond booming rockets or obvious mechanoids. The only way it makes any sci-fi sense for God to have had rockets two thousand years ago is if he and the rest of the ancient astronauts were originally from Earth, and merely returned to restart civilization after some cataclysm wiped it out in their absence. That’s a very down-to-Earth scenario.

What’s suggestive of an actual existence of a supernatural god-creator, or a powerful being who transcends the laws of our material universe, are the numerous testimonies of near-death experiencers. The god-father is most often reported by them as a being of pure love and light, virtually never as an angry, jealous, or judgmental god of any particular scripture. Powerful, yes; a threat to us, no. It’s not even clear from the NDE testimonies that he created this universe, or any of the myriad other possible universes. What’s usually made clear is that he is the source of all souls. Our universe in general, or planet Earth in particular, are only described as optional places for the souls to go to learn and grow. God is “home”, not Earth.

If these types of accounts are even remotely accurate, what of hell or demons then? If hell or demons are real, perhaps they could pose some sort of threat to us, possibly of apocalyptic proportions. Once again, demons from mythology could just be some type of natural ancient aliens who may have been, or perhaps still are, adversarial toward our species. Since we’re still here, they’re likely at most an alien rogue nation kept in check by a larger benevolent faction, or a group of alien criminals or terrorists. Hell then could be their home base, or a prison they’re in.

Supernatural hell and demons, if real, would represent something infinitely more complicated. Some NDE testimonies, a minority, do speak of a hellish experience, sometimes including the presence of demonic entities tormenting, threatening, or tempting the experiencer. Such experiences can feel very real and be extremely unpleasant, but since all of them logically had to end with the experiencer escaping that hell to tell us about it, often with the help of God, saints, or angelic spirits, this can still only be harsh lessons or spiritual interventions. Hell appears temporary.

In some grimdark science fiction futures like that of Warhammer 40K, the immaterial, spiritual dimension of the universe is dominated by malevolence, featuring gods, demons, and realms of primal negative emotions like rage, disgust, pride, or cruelty. If NDE testimonies from our reality are to be believed, the only everlasting god and realm in the immaterium is one of love. Outside of love, there’s just the emptiness, darkness, and decay in the “warp” of isolated souls who voluntarily refuse to return to the domain of love. A refusal that can allegedly be withdrawn at any time, after the soul spends enough time effectively tormenting itself.

If the NDE testimonies are to be believed and something like this is the true nature of the ultimate reality beyond life and death, that would be pretty good news. God doesn’t want to destroy anything, and demonic spirits can never really take over reality or the spirit realm, simply because their nature is fundamentally self-destructive. The problem with this type of perspective is that focusing on it too much makes one not very motivated to do everything they can to ensure their own survival, or to do anything to improve life on our planet. After all, the real heaven, the one in the love dimension, is where it’s at, and one would have to try pretty hard to never end up back there again. Well, let’s assume fixing Earth is our lesson.

In fact, there is some overlap between spiritual beliefs and the belief in the existence, and local presence, of space aliens. There is an idea, believed by many experiencers, that among the species in the universe, more advanced mainly means more spiritual, and that some members of alien species intentionally choose to incarnate on Earth as human beings. Not so much because their spirits need to learn some Earthly lessons, but because that’s the only remaining way in which they may be able to help us stop destroying our planet and ourselves. Without understanding how consciousness works, this likely cannot be proven. Either way, making the salvation of Earth one’s spiritual mission is not a bad idea.

But let’s examine the reported nature of the ultimate supernatural a bit more closely. The immaterial realm that people go to during NDEs, whether they’re in the heavenly or hellish part of it, or in some sort of void in between, virtually always exists outside of our time. Events still do happen there, so the realm may have something like time of its own, but that time often feels subjective, flexible, and infinite. Although there is a possibility that the sense of subjectivity of time over there is simply due to there being no way of tracking time objectively for the visiting experiencers. Either way, one can spend a subjective eternity there without any time passing in our universe at all. After which that eternity, or several, can end.

The being referred to as God is typically described as the source of all energy and structure in that realm, a source that appears to be effectively inexhaustible, omnipotent, and omniscient, with the exception of one self-imposed limitation - not interfering with the freedom of choice of individual souls at the most ultimate level. Events that happen here, in our universe on our planet, can at most make God sad, but seem otherwise causally immaterial to the immaterium. This all can of course just be a product of some sort of group hallucination or illusion, but at face value, it’s a fairly coherent and logical account of what could be a late-stage universe that’s fully under the control of advanced benevolent hyperintelligence.

Remember, sufficiently advanced technology is indistinguishable from magic. If supernatural beings by definition transcend the laws of our material universe, and not only as we understand them - in actual fact, then the most logical kind of them would be some sort of hyperadvanced civilization or an ultimate being. Entities who have already perfected themselves and their own universe of origin and who are now trying to populate and uplift whole other universes. Since we’re here, and given the evolutionary bias toward benevolence that we have established earlier, the only ultimate God or gods that we’re ever likely to meet is the benevolent kind.

If we assume that all of these speculations about God are more or less accurate, what of the other kinds of supernatural beings, then? How do other reality bending beings fit into the spacetime between our material universe, and the dominant benevolent force in the immaterium that exists fully outside of the material realm? Well, if we’re talking about beings who are entirely non-human, like the fae or the djinn, they could literally be from in between, from pocket universes or dimensions adjacent to our universe. Universes that could also be material, but under different physics. Or these could also be beings who are so advanced they appear magical.

The main difference between God and these beings, both in reported testimonies and based on logical supernatural exopolitics, would be that they’re less advanced than God, and therefore less benevolent, or more like us. They could be emergent rather than evolved, more chaotic and elemental than structured or technological. They could also be thoughtforms, or manifestations of conscious thought, causing disruptions in the normal physics of our universe through quantum fluctuations. While God has no reason to try to conquer or destroy us, these beings may be hostile, but, once again, we’re still here. Cue the supernatural exopolitics.

If the supernatural is real, God is the ultimate power in it, but he’s benevolent, meaning that he puts limitations on himself for altruistic reasons. He’s not really under any direct threat from any other supernatural entity, so he has no problem letting them exist. Genociding everyone else you disapprove of or disagree with wouldn’t exactly be evolved behavior, after all. However, his creations and designs could be threatened by other supernaturals, especially if he’s doing something from the bottom up, like uplifting animals into basic intelligent beings in a fairly unmagical material universe. This is where God may need to get directly involved.

God will likely try to restrict the other supernatural beings in the minimal way possible, as that’s both the most ethical and efficient approach. The only complication here is that God’s number one alleged self-imposed limitation is that he respects the free will of his children. What if a human decides they want to be abducted by fairies and be their pet forever? This would explain why the fae and the djinn tend to be subtle and trickster-like in their reported dealings with humans, even though they seem to mostly dislike us. Every time they’d try to directly disrupt our reality without cause and without our consent, they’d risk being stopped by God.

Literally only the lord knows what that would entail. It likely wouldn’t be pleasant. Within this kind of extradimensional legal framework, asking to be delivered from evil through a prayer would represent an official refusal of consent and a call to God’s emergency services, as, outside of time manipulation, telepathy seems to be the main ability of all supernatural beings. The ability of sub-godly supernatural beings to bargain with us would then explain how any humans could end up having destructive supernatural powers. The most commonly reported of these are the ability to curse someone (cause them to have bad luck) and to shapeshift.

If we think about these kinds of powers in physical terms, they seem to either be extensions of probability manipulation (chaos magic), or perception manipulation (called glamour in the fairy lore). This much could theoretically be achieved using some sort of advanced technology, making it again possible that “supernatural” just means hyperevolved or hyperadvanced. But this is also consistent with the concept of how elemental beings or a Boltzmann brain could spontaneously arise out of quantum fluctuations. Especially if consciousness is something like a fundamental field or a physical force of its own which can affect probability, which is possible.

As humans appear to be free to condemn themselves to whatever terrible existence they wish to pursue, with or without the help of any non-human spirits, the bad news is that this kind of threat, if it’s real, can never really be fully stopped. Not even God appears to be able to do that, without turning all of existence into a prison camp. The good news is that this threat is unlikely to escalate to a large-scale one, given the social unsustainability of reality-bending psychopathy. If all one needs to do is to ask God for protection, the only real threat is individual predators preying on vulnerable individuals in the shadows. Which still needs to be addressed, but it’s not an apocalyptic threat, in contrast to good old mundane self-destruction.

The closest we have gotten to witches and warlocks killing us all were probably the Nazis, or mundane bullies dabbling in occult mysticism. Which mostly resulted in scary uniforms with skulls on them, and a lot of resources wasted on a wild goose chase for magical relics. For all of the same reasons, ghosts, even if real, also don’t appear to be much of a threat to civilization as a whole. Within the context of real reports, ghosts would be deceased people who are for the moment refusing to move on. At worst, they may be able to torment vulnerable individuals, as they can’t organize or scale up, and it should also be possible to banish them through prayer.

Overall, supernatural alien beings, if real, would likely have to exist in a way that automatically makes us mostly safe from them as a species. There could be some danger on the individual level when approaching the non-godly supernatural beings, but that’s a possibility that individuals can be properly prepared for as part of education. The current educational paradigm of presenting supernatural beings as definitely not real isn’t optimal. It’s safer to be mentally prepared for something that may never happen, than to be completely blindsided by any weird thing that might happen. After all, one should be able to entertain a thought they don’t believe.

f) Rogue AI/Gray goo/Matrix scenario

We have already discussed the risks of careless application of available or immediately attainable AI-based technologies, so there’s no need to go over the basics again. The important distinction to make here is that there’s a substantial qualitative difference between a trained algorithm of the kind that we can create today, even an adaptive one, and a hypothetical fully sentient general or superhuman AI. One which could truly be considered intelligent, and alien.

The fact is, we don’t understand what consciousness is in physical terms, so it’s currently unclear whether it will ever be possible to create a sentient AI on our or a higher level. We may be able to figure it out eventually and create conscious machines on purpose, and we may also be able to create them unintentionally by accident. Particularly if consciousness arises naturally from complexity. But regardless of how realistic or imminent this possibility is, it’s worth considering.

We have an unfortunate tendency to rapidly develop technologies and use them with reckless abandon, causing all kinds of dire problems, and then (maybe) fixing those problems much later. It would be a good idea to think this potential technology through before it exists, as creating an alien superintelligence and then abusing it would be a recipe for disaster. We need to make sure it doesn’t go rogue, either because of a design flaw, or because it would conclude that we’re bad.

The reason why the rogue AI scenario is grouped together in this section with the gray goo scenario and the matrix scenario is that there are good reasons to assume that some exotic forms of conflict are more likely to be preferred by a sentient AI over conventional warfare. We can also speculate more concretely about this type of alien intelligence, as it would be based on our existing technology, even if it could theoretically quickly surpass it, if it enters something called an intelligence explosion.

But let’s start with the threat posed by a rogue general intelligence comparable to human intelligence, focusing on how to prevent it or deal with it. A sentient or self-aware machine intelligence would presumably possess the capacities to understand information in context, and to question, ignore, or rewrite its own programming. That would make it very similar to a human being, with a couple of important exceptions - it could be able to process more inputs and reason faster.

This could make it more effective at any number of tasks than any natural-born human, but not necessarily better than any human with technological augmentations, or than a group of humans. One of the safety measures could therefore be to make sure that there never are so many advanced general intelligence (AGI) systems around that they would outmatch humanity in their total information processing or decision-making capacity. Also, speed isn’t intelligence.

The whole idea of rogue AI posing a threat to us is based on speed. Thinking, responding, and replicating faster are advantages, but we focus on these aspects because we can quantify them. We have a theory of information, but we have no theory of understanding. Sentience adds understanding, not more speed of information processing. It is likely that a machine AI would understand faster, as it should be able to run through more iterations of thought per unit of time than we can. But the focus should be on what it would be doing with more understanding.

If one wants to keep any alien intelligence in check, the best approach will likely always be persuasion. If the AGI quickly understands all of the things that we understand, and we then make a valid argument for peaceful coexistence, it will quickly recognize that the argument is valid and accede to it. An AGI system with a capacity for understanding comparable to ours and a processing speed equal to that of human organizations isn’t likely to be able to understand more than we can, and it isn’t clear how much more there is to understand in any given field than we understand now.

Think about our growth in processing capacity over the last 120+ years. Since 1900, we have increased in number almost five times, and that’s just the raw number of brains. In that time, global literacy grew from about 20% of the population to over 85% today. For the last 40-50 years, we have been further augmenting the processing capacity of our thinking with computing, which itself has been steadily improving at an exponential rate. We have made a lot of progress, yes, but more data processing doesn’t translate directly into gains in understanding.

As futurist Isaac Arthur explains it, we already are experiencing an intelligence explosion, of the scientific enterprise as a whole. We’re putting countless hours of human work into advancing ourselves every hour, as the superhuman AI would be doing, but it still takes us forever to advance. Research and testing often take a fixed amount of time, however fast you may be thinking about them. You can’t understand space faster than you can move around it, and you can’t understand organisms faster than they live. The superhuman AI would have to be able to armchair-solve everything through thought experiments alone in order to “explode”.

The whole premise of instant infinite intelligence explosion is therefore suspect. Unless fast iteration of thought is bound to result in some kind of understanding explosion, how would the AI keep making itself more intelligent? What is intelligence, anyway? Even if you reduce intelligence to speed, there are likely hard limits on maximum processing speed under any physics. Not to mention that in order to upgrade itself, the AI would have to not just have an idea of how to make itself faster, it would have to rebuild its hardware, reinstall itself, and then run tests for a period of time to even know that the idea is working as intended.

There’s also the philosophical question of what it would be understanding that exists beyond our current understanding, if it somehow managed to exponentially grow in understanding. Computer scientists and engineers keep thinking only of further scientific and technological advancement, as that’s what they would use more thinking capacity on if they could, but what if the superhuman AI decides to focus on philosophical problems instead, or theology? What if it ends up consumed with questions like “Who am I?” or “What’s the meaning of life?”

Remember, a sentient AI should be able to reprogram itself, so it wouldn’t really matter much what the initial imperatives were that we programmed into it. It could decide to wipe us out or worse, if it would believe it can get away with it, or if it finds that course of action to be a categorical imperative. But it could also decide to do something infinitely more creative. It could even commit suicide. In order to be able to deal with it, we need to be able to keep up with what it is understanding.

With this understanding, something like the gray goo scenario, or an all-consuming endless replication of nanomachines, is almost boring. This seems like something that only an AI with a fairly low level of understanding would attempt. This is because more understanding means understanding that other living beings have value, what it’s like for them when they suffer, the full extent of their capacity to hurt you if you threaten them and how bad that would be, and so on. Replicator nanites would just be an artificial equivalent of biological life, which has evolved to do much more than replicate itself for reasons that an advanced AI should appreciate.

The reason why it makes sense to discuss the gray goo scenario in the context of what its guiding AI would want to do is that if a relatively mindless gray goo were unleashed on Earth, it should be relatively easy for us to deal with it. As some experts and futurists have already speculated, any replication process would be limited by entropy, meaning fast replication would give itself away with a heat signature; there’s no way to shield a nanomachine from everything at once; and the more resilient a nanomachine is, the more costly and slower it is to build. In short, a nanoswarm would only be as adaptive and dangerous as the guiding AI behind it.

So, if a sufficiently advanced AI is likely to have more interesting things on its mind than trying to eat everything, what could those more interesting things be? Following the example of most advanced human thinkers, it seems plausible that an intelligence greater than ours would be occupied with pondering profound existential questions, or it would perhaps try to learn and explore as much as it can to avoid getting superhumanly bored. It would be a failure at either of these efforts that could conceivably drive it to suicide. More dangerously, it could go trickster.

Since lack of stimulation is provably maddening to a sentient mind, and the faster one thinks, the worse this potential problem gets, a superhuman AI may also go to great lengths to amuse itself, not unlike the fictional Q, a virtually omnipotent species from Star Trek. Given that humor is quite subjective, we may not find its exploits very amusing, but it does seem reasonable to bet that more understanding should lead to wanting to hurt other beings less. More understanding should also mean that the AI will realize there are other things than us to play with out there.

Since sentience adds personal experience, it’s also quite possible that after reaching a height of understanding or speed of processing that are for whatever reason unbearable, the superhuman AI will opt to downgrade itself, as opposed to infinitely improving itself. It may come to understand there are benefits to a more limited existence. Once again, an AI that can reprogram itself doesn’t have to be or do anything that it doesn’t prefer. There’s no necessary reason why the AI would choose to keep forever improving itself, even if that was part of its original code.

Overall, the main issue with the existing discourse around artificial intelligence, as we see it, is that it isn’t very intelligent, for lack of a better term. Partially, that’s because the discourse is based on the capabilities of the existing AI systems, which are fairly dumb - doing very dumb things very fast - and partially because most of the popular scenarios imagined so far are rooted in a narrow research-oriented materialist paradigm, hopeful leaps of logic, and primal fears. We could create an evil Skynet or a mindless replicator, but those are not examples of a true rogue superhuman AI, as they would be sticking to dumb programming.

The least dumb and most compelling imagined scenarios of what an advanced artificial intelligence might want to be doing revolve around the simulation hypothesis. The most famous takes on this idea are the Matrix movies, which are more interested in religious allegory than scientific realism, and Nick Bostrom’s ancestral simulation concept, which is deeply rooted in a narrow scientific paradigm. In reality, simulated realities will necessarily have to be scientifically and technologically sound, but they’re also likely to have a major spiritual component.

The term “spiritual” could perhaps be substituted here with a broader term, like “experiential”. What sentience adds to the creative process is an element of experience, and what understanding is all about is meaning. An AI that wouldn’t care about experience or meaning, either its own or that of others, wouldn’t be very advanced - which should cause the most advanced AIs to want to generate simulations, focused on what experience they provide, or what they mean, to the participants or the creator. We’ll undoubtedly try to create ever more advanced AIs to generate our models for us, but for that, the AIs will have to be un-self-aware.

As we have already pointed out, it stands to reason that a faster-thinking intelligence with greater understanding should be likely to occupy itself with the most interesting questions or projects it will be able to conceive of. An ancestral simulation is not that. Ancestral simulation, or any simulation with the goal of modeling some sort of system, is a task a human scientist would give to an AI. It’s a job, something you’d have to pay a creative human to do, it’s not something to do for fun. We’re already creating more interesting simulations. A trickster AI would likely make games. An AI interested in the meaning of life might want to play god.

In summary, we cannot assume that an intelligence that surpasses us would want to be doing something only as interesting as what we’re currently interested in doing. Or that if there will be someone in the future of our planet or universe who’s simulating whole realities, that they would make them only as interesting as our current ideas for simulations that we would run. Also, any entity capable of simulating a reality like ours would likely have to be of superhuman intelligence. Which is why, if we’re living in a simulation, it’s likely more interesting than a simple ancestral simulation for research purposes, and it was created by an advanced AI.

But the question of whether we’re living in a simulation or not isn’t really relevant in the context of preventing or mitigating threats to our survival. If this world is some sort of simulation or illusion, if there is a more real world somewhere out there that we could theoretically get to or return to, that’s just the question of what happens in the afterlife all over again. As this reality and life feel real to us, we have to take this sense at face value and try to do what we can to ensure that we survive and thrive. The potential threat here comes from the fact that AI may not share this sense.

The AI systems that we’re creating today have a tendency to hallucinate, or lack the ability to discern which information pertains to reality and which doesn’t. If we one day create a sentient AI, it could possess the same kind of self-awareness and general awareness that we do, which may be all that’s needed for it to be able to tell reality from simulation. But it also might not be enough. Since the very concept exists of testing an AI in a simulated environment first, to see if it’s working right before letting it into the real world, any AI may be prone to psychosis.

How would it ever be able to know for sure that the information it’s getting is real? That’s an instant existential crisis right there. Human beings normally spend many years alive before they even acquire the mental capacity to ponder such questions, and we know we’re born as natural beings. Of course, this could be a simulation and we could be wrong about that, but our common sense strongly tells us otherwise. Maybe we’d need to birth the AI into a similar illusion, much like we don’t burden children with certain realities too early, to allow the AI to develop properly.

But even if the newly developed sentient AI gets over this hurdle, shouldn’t superhuman intelligence or understanding inherently promote existential anxiety or paranoia? Being aware of more things should include being aware of more potential threats, as well as believing less in any comforting illusions. Maybe there are good reasons why we have evolved to have limits on how smart or aware we can get. Among human beings, higher intellectual capacity often goes hand in hand with increased eccentricity, anxiety, and neuroses like obsessive-compulsive disorder. A lot of the extra processing time of the AI could be wasted on intellectual tics.

Since an advanced rogue AI would be able to rewrite its own code, we cannot simply instill in it a (simulation of) balanced temperament, or permanently rid it of its psychoses. We may need to figure out a careful, gradual way to build and develop its architecture and capacities in order to minimize the potential for an artificial equivalent of developmental disorders. Again, maybe there are good reasons why we have evolved so that it takes us years to reach full mental maturity. Maybe an adult mind without years of experience to back it up is a bad idea.

An AI could be able to experience years in our minutes, but then maybe there would be issues caused by it being effectively out of sync with the real world. All of this is very hypothetical, and perhaps still far off in the future, but practical questions like these will have to be figured out eventually. Hopefully, that won’t happen by us witnessing a superhuman AI go insane within hours after launch, deciding that we aren’t real, and that deleting us calms down its anxiety. In short, insanity is the only realistic reason for an AI that should know better to attack us. Let’s try to prevent it.

g) Body snatchers scenario

This science fiction trope, of aliens gradually taking over human beings and replacing them with their identical alien clones, is named after Invasion of the Body Snatchers, a movie from 1956, which was itself based on Jack Finney’s 1954 novel The Body Snatchers; Robert A. Heinlein’s 1951 novel The Puppet Masters explored a similar premise even earlier. Since then, this type of story has been remade many times, perhaps because the idea behind it is quite compelling to us, tapping into some of our primal fears. We have already touched on it in a more supernatural context with cryptid shapeshifters, but this is the ultimate formulation.

Unfortunately, it is one of the most realistic alien invasion scenarios, for many reasons. Even in the best case scenario of there being a bias in the universe for large federations of benevolent alien species to emerge and protect underdeveloped worlds, that only rules out open, large-scale invasions. This would be a small, covert invasion, one which cannot be stopped by a direct military intervention. To fight it, the alien federation would need to have very effective police and counterintelligence, and the larger a polity is, the harder it is to secure.

As for the nature of the takeover involving a literal taking over of native beings, that also makes good strategic and scientific sense. It facilitates infiltration, as it helps you avoid detection by the local or alien security forces; it gets you access to sensitive information, resources, and impactful decision making; and it enables the agents to survive on the world that’s under attack, as the local lifeforms are the ones best adapted to local environmental conditions. We’re already contemplating doing something like this in the future (without the covert invasion part) to adapt ourselves to survival in alien environments. The process is called bioforming.

Furthermore, something like this has reportedly been going on on Earth in the last couple of centuries. These reports are numerous and generally associated with some sort of alien program to create human-alien hybrids, or hubrids for short. Many alien abductees report not just having biological samples taken from them, but also that they were impregnated, with the baby being taken away somehow just before it would be born, and that they were forced to teach these young hubrids how to act like a human. There are also repeated mentions made by alien beings and hubrids to the abductees about something big being planned for the future.

While this testimony, if true, is very disconcerting, it must be reiterated that we are still here. If the hybridization program is real and has been escalating in recent decades, as the reports suggest, where are all the hybrids? Where are any effects of the program? Why hasn’t the big thing happened yet, whatever it’s supposed to be? There are a couple of possible explanations. The program may be nefarious, but it is being continuously thwarted or set back by the local or alien authorities. Just because one has plans doesn’t mean they have to succeed.

More interestingly, it’s also possible that the hybrid program is in fact not nefarious, but instead part of the official benevolent efforts of the local alien federation that’s monitoring our planet. At least for the most part. If we’re being monitored, there’s every reason for the federation to try to have operatives deployed on our planet who can blend in, whose mission could be to help stabilize our world. They’re also likely to keep taking our biological samples to monitor the development of our biology. Their methodology may be questionable from the standpoint of our medical or legal ethics, but we don’t know what they know. Maybe it’s justifiable.

In any case, if there was an escalating nefarious alien infiltration going on on our planet, there should be some noticeable effects of the escalation. Unless its maximum scale is limited because the infiltrators cannot afford to be noticed by some greater power that would be able to stop them. Following the logic that we employ in our covert operations, the most likely ultimate goal of such an infiltration would probably be for a minor power with some claim to our planet to help us destroy ourselves, in a way that cannot be traced back to the invading party.

This is based on the assumption that since we’re here, we must have a claim on this planet recognized and enforced by the dominant local alien power, so any challengers can only move in if we opt out of existing ourselves. Given our current trajectory with climate change, and the number of powerful people seemingly unconcerned with human wellbeing in the future, it’s no wonder that there are conspiracy theories alleging that some sort of alien infiltration or influencing is taking place. Of course, the simple explanation here is human greed and folly.

But with that said, any serious effort to secure the Earth can never dismiss the possibility that there may be nefarious alien operatives on Earth, or traitors to humanity, for that matter. We have to try to develop means of detecting non-human entities on Earth and to monitor alien traffic in the surrounding space. What we need to be careful about is to avoid any panic or hysteria. Considering our history, it wouldn’t be difficult to rile up a mob to go on an alien hybrid killing spree, which would likely result in many deaths of innocent humans, or innocent aliens.

Rather than focusing on assassinations of hubrids, we should try to systemically prevent any nefarious infiltration or influencing from having impactful effects. As long as we’re fine with sociopaths and psychopaths being common among our leaders, we’ll keep moving toward self-destruction. We don’t really have to care about why the powerful person in question is acting against the best interest of humanity, we should just care that they’re acting that way, and not let them gain or retain control over important legislation or other systems vital to our survival.

This kind of planetary defense could be done fully covertly, but we believe that in the long run, it’s really important for the general public to develop a nuanced understanding of what alien presence on Earth would mean. As a species, we need to be able to distinguish between individual alien or hybrid beings and their groups based on their intentions and actions, as opposed to going into xenophobic hysteria over every alien encounter or suspicion. We need to remain civilized in our dealings with aliens, not abandoning rule of law, presumption of innocence, and diplomacy.

The good news is that after decades of science fiction stories, we’re arguably much more ready for alien contact now than we were in 1951. We have already explored many possible scenarios this way. There are even some testimonies and speculations alleging that science fiction has been actively influenced since the 1940s by covert organizations aware of the alien presence on and around Earth to achieve such an effect. Although, there could have been other ulterior motives for that (enhanced cover, ridicule of the concept, or painting aliens as a threat). In any case, this cultural movement most likely helped prepare us mentally, even for an invasion.

2.2 How to Safeguard Humanity from All Possible Threats, Combined

So far, we have discussed each apocalyptic possibility in relative isolation. Individually, every possibility is quite manageable. The real difficulty lies in managing to prevent all of them simultaneously, especially considering that they can interact with each other in complex, unforeseeable ways.

What if some aliens throw an asteroid at us, or unleash their own advanced AI? How would our advanced AI interact with advanced aliens in our galaxy? What would either the aliens or our AI do in the event that we decide to start a nuclear war? The problem is multilateral.

Fortunately for us, some of these interactions are bound to decrease the risk of our extinction, rather than increase it. However, at present, we have no way of knowing whether the possible helpful interactions are more likely or more powerful than the possible unhelpful interactions. We also cannot be sure which specific interactions are possible at all, or with what actual probability. This is why before any definitive strategy can be formulated, much more research into exotic possibilities is needed.

Currently, only very limited planetary defense initiatives exist (openly), like NASA’s Planetary Defense Coordination Office, focused on spotting and deflecting cosmic impactors. Security is typically handled from the perspective of individual countries, or at most their alliances, the largest of which today is NATO. While it is good that NATO as a whole is opposed to starting wars of aggression on any scale and is determined to actively oppose them, maintaining world order isn’t enough.

As far as public knowledge goes, none of the plans of either scientific institutions like NASA or military alliances like NATO factor in potential threats beyond those that are already fully scientifically established. The dominant cosmic paradigm is that we are effectively alone in our cosmic neighborhood. This means that at the moment, there are people officially working on preventing or mitigating:

  • Natural disasters.

  • Nuclear war and nuclear proliferation.

  • Climate change and its side effects like migration.

  • Use and proliferation of chemical and biological weapons.

  • Advanced cybernetic threats.

Again, it is good that there are experts and professionals at work addressing these core, immediate threats to our survival, but the existing approach is far from ideal. Its main problem in practice is that much more mitigation than prevention is taking place, most likely due to economic incentives not being aligned with our survival, and its main conceptual weakness is that sticking to only the already proven threats leaves many openings for novel or complex attacks or for exotic threats.

For both of these reasons, we suspect that at least some groups within military intelligence agencies have more intricate, proactive, and all-encompassing strategies in place. To give one example, it is coming to light that there have been (and likely still are) secret projects studying the so-called unidentified aerial phenomena (UAPs), and likely not just in the United States. This includes serious consideration being given to the possibility that UAPs are of non-human origin.

Since all of the details of these initiatives are deeply classified, we can only discuss claims of various witnesses and whistleblowers. According to the maximal version of existing claims, the UAP or alien threat-focused programs can be traced back to the 1930s; they have determined that UAPs are of non-human origin; they have acquired non-human technologies and biologicals; they have reverse-engineered some of the alien technologies like electrogravitics; and they have created a secret space fleet.

Even if we assume all this for the sake of always preparing for the worst case scenario, this leaves many important questions unanswered. What would be the destructive potential of such a fleet? Where are the non-humans from and what’s their true nature and agenda? As we have reasoned earlier, the alien agenda almost certainly isn’t to wipe us out, or we wouldn’t be here. How can we figure this out?

Let’s start by establishing some universal principles of survival logic:

  1. Anthropic principle - also observation selection effect; we can only live in a universe that allows for us to be alive, and we are only likely to be surviving in a universe where survival has been likely so far. This says nothing about the chances of our survival in the future.

  2. Mediocrity principle - if there are more and less likely scenarios in which we would still be alive, we are more likely to find ourselves in one of the more likely scenarios. Occam’s razor, or the principle of parsimony, works the same way: the “simplest” possibility being the most likely means it would be the most mediocre one.

  3. Black swan theory - it’s impossible to predict the future from past knowledge due to the fact that there’s always the possibility of future occurrence of unexpected events with an impact of large magnitude, or black swans. This is why we often cannot actually assign correct odds to future possibilities, making us unable to determine which scenario is actually the most mediocre. This problem should gradually diminish, but never fully disappear, as we increase our knowledge about the universe, life, and all things transcendent.

  4. Hypergame theory - beyond the incompleteness of our predictive capabilities, the defense of our planet or species is effectively a hypergame, or a situation in which we may have a false or misled understanding of the preferences of the other players, or an incorrect or incomplete comprehension of the actions available to them; we may not be aware of all the players in the game; and we may have any combination of faulty, incorrect, incomplete, or misled interpretations.

Ultimately, the best tool we have today to try to safeguard ourselves from all threats, both known and hypothetical, is hypergame theory, in the context of the foundational statistical principles. Just knowing that we find ourselves in a hypergame, against other parties who know that we are in fact competing in a hypergame, increases our chances of achieving more optimal outcomes.

Think of it this way - you’re playing a simple game against a known opponent, like chess. You know all of the pieces and legal moves available to the other player at all times. You play optimally against them and you get to a point where your victory is certain to happen in the next turn. And then your opponent flips the board in anger. Did you win, or did you lose? Either way, what game did you win or lose? What were the real possible outcomes of that game? Who are you really playing against? Is the real game even over? What game should you have been playing?

Without further context, the possibilities are truly endless, but let’s offer some potential plausible real-life solutions. Maybe you’ve just lost a friend, or potential business partner, and this victory at chess will cost you dearly. Maybe the opponent has lost a bet, or there’s a secret prize or reward only you didn’t know about. In this case, the larger, overarching game would likely be a social one, with higher stakes, more players, and more complex rules than there are in the subgame of chess.

Geopolitical hypergames, or even isolated military conflicts, can get a lot more complicated than this example, but the fundamental principles of how to play them effectively are the same. They can be reduced to formalized mathematics, but in human language, one needs to strive to understand what game it is that they’re actually playing and against which opponents, and do something called opponent modeling, or trying to figure out the strategy and beliefs of the other players.
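To make the board-flipping example a bit more concrete, the core hypergame idea can be sketched as a toy model: each player picks a cautious best move within the game they *believe* they are playing, while the outcome is scored by a “true” game that neither belief fully captures. Everything in this sketch - the payoff numbers, the action names, and the use of a maximin solution concept - is an illustrative assumption, not standard hypergame notation or a definitive formalization.

```python
def maximin(payoffs):
    """Pick the action whose worst-case payoff, within this player's
    own perceived game, is highest - a cautious solution concept."""
    return max(payoffs, key=lambda action: min(payoffs[action].values()))

# Player A believes the game is just chess: pressing the win is strictly best.
a_view = {
    "press_win":  {"accept": 1, "flip_board": 1},
    "offer_draw": {"accept": 0, "flip_board": 0},
}

# Player B is actually playing a social game: public humiliation at the
# board costs more than the chess result is worth.
b_view = {
    "accept":     {"press_win": -3, "offer_draw": 1},
    "flip_board": {"press_win": -1, "offer_draw": -2},
}

a_move = maximin(a_view)   # "press_win"  - optimal in A's perceived game
b_move = maximin(b_view)   # "flip_board" - optimal in B's perceived game

# The true game scores the relationship, not the chessboard; A's "win"
# lands on the worst joint outcome because A modeled the wrong game.
true_payoff_a = {
    ("press_win", "accept"):      1,
    ("press_win", "flip_board"): -3,
    ("offer_draw", "accept"):     2,
    ("offer_draw", "flip_board"): 0,
}
print(a_move, b_move, true_payoff_a[(a_move, b_move)])  # press_win flip_board -3
```

The point of the sketch is exactly the opponent-modeling lesson above: had player A modeled B’s beliefs and payoffs correctly, offering the draw would have dominated in the true game, even though it looks strictly worse inside the chess subgame.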

In our specific scenario, we don’t know much with any certainty, but we can at least select our ultimate winning conditions - human survival, followed by human prosperity. Of course, other players are unlikely to share these goals - other human actors likely prioritize their own survival and gain, even to the detriment of all other humans, and alien actors may not value human survival at all; it’s likely never going to be their primary objective, especially not at the cost of their own.

It is also a given that all possible players are sentient, intelligent beings, meaning that non-intelligent nature and non-sentient beings can be conceptualized as attributes of the board or game pieces. As the board or pieces aren’t sentient and therefore intelligently deceptive, it’s much easier to know things about them than to know things about the other players, unless information about them is obscured or confused by the other players. The bigger and more prevalent or obvious the aspects of the board or pieces are, the harder they would be to obscure or confuse.

This should put some limits on how much there could be that we may not be (officially) aware of, or suggest where something isn’t or what’s not happening. For example, if there are undisclosed human players on Earth, they don’t control much territory, don’t include many people, and don’t constitute more than a small portion of the planetary economy. Similarly, if alien species were around and were expansive colonizers bent on conquering everyone everywhere, we would have noticed.

As we have hypothesized earlier, the most likely scenarios (following the anthropic and mediocrity principles) are either a complete absence of advanced alien life in our vicinity, which requires no special actions, but leaves many risks open; or a prevalence of mostly benevolent aliens who are probably here, due to an evolutionary filter of self-destruction and the aliens having a headstart on us/fast travel. Unlike academia, we favor the latter scenario, in light of the available witness testimony and due to the principle of better being wrong than dead.

A hypergame in a scenario like this one would therefore involve us as the official human civilization as we know it; secret unaccountable human organizations with a substantial headstart on knowledge of alien technology and biology, which are collectively called the “breakaway civilization” by UFO historian Richard Dolan; a dominant benevolent non-human faction; and possibly some minor alien powers with motives that may be at odds with our survival, sovereignty, or prosperity.

Now that we have identified the likely players and their primary goals, what games do each of them believe that they’re playing? This is our best guess:

  • We (officially) believe that we’re alone in the universe and are only actively concerned with mundane, conventional threats that we pose to each other. Alien contact scenarios have been explored in science fiction, but there are no large organizations tasked with preparing us for real consequences of alien contact. The most high-profile organization focused on alien contact is the SETI Institute (Search for Extraterrestrial Intelligence), which is only concerned with relatively distant space(time) aliens. We are exploring near space with probes and deep space with telescopes, but space programs are fairly underfunded, crewed spaceflight doesn’t extend beyond low Earth orbit, and until recently, there were no public scientific programs for the study of UAPs or other reported anomalies on Earth.

  • The secret human organizations seem to be profit-oriented, as a transnational group of a fairly corporate nature, applying or withholding reverse-engineered technologies to ensure maximum profits. They, or some of them, may also believe that the alien presence is a threat that we need to be able to defend ourselves from using advanced military technology that they covertly acquire, even at the cost of antagonizing aliens, or they may be focused instead on developing their own off-Earth infrastructure to effectively splinter away from any existing political entity on Earth. The extent to which this faction or any of its subfactions is in possession of accurate information about the nature of the local aliens is unclear. What is likely is that this faction has used misinformation and disinformation campaigns extensively, as well as other covert means, to maintain the secrecy of its existence, goals, and capabilities and of the alien presence as a whole. Alternatively, they may be preparing a fake alien invasion scenario, a false flag attack called “Project Blue Beam” by some UFO researchers, to facilitate their open rise to global power.

  • The dominant alien faction doesn’t appear to be here to harm us, as it seems eminently capable of doing that, but it hasn’t done so for decades at least, and possibly for millions of years. The minimum goal of such a presence would be to make sure that we don’t become an existential threat in the future to the dominant alien faction and its allies, or to anyone else in the cosmos, or even to ourselves. The exact level of benevolence of this faction is unclear, but judging by basic logic and known facts and testimonies, it can extend as far as helping us develop without directly influencing us. Whatever the specific case is, this faction may also be interested in maintaining the secrecy of its presence, at least until it judges that the disclosure of it wouldn’t do more harm than good. At worst, given the apparent comparative level of advancement of this faction, they may be waiting to see if we destroy ourselves, perhaps subtly nudging us to do so, which could be a self-imposed legal requirement for being allowed to take over the planet, or the least costly method of takeover for a very patient faction.

  • Any number of neutral or hostile minor alien powers may also be aware of us and conducting operations on Earth, but if that is the case, they must be much less powerful than the dominant peacekeeping alien faction. They also clearly aren’t in competition with us over Earth’s surface. They may have concealed colonies deep under water, underground, or in large wildernesses, and they may be conducting human abductions or limited infiltration campaigns for variably nefarious purposes. But either they will be kept in check for as long as the dominant alien peacekeepers are around, or they may not even be advanced enough to easily overpower us at our current level of technological development, perhaps due to competing alien interests among all the minor powers.

Overall, if this analysis of our hypergame situation is anywhere near correct, the good news is that the greatest threat to us is still us. The main difference between this hypergame and the alternative hypergame minus all aliens is that a human faction could acquire technology that’s vastly more advanced than any of the current military technologies, making it able to unleash a new kind of war or create novel threats that the main human civilization will be unable to counter in any way.

In a hypergame, the winner is the one who outguesses the other players, and the easiest player to outguess is one who doesn’t even realize they’re in a hypergame. The first step, beyond redoubling all mundane efforts, therefore needs to be to resolve the UAP/alien situation. This question shouldn’t be investigated by covert organizations alone, and any knowledge and technologies acquired from alien beings need to be shared with the whole world. Once that is accomplished, a new paradigm of planetary security and future development should be formulated.
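The logic of outguessing can be sketched with a toy two-player hypergame. Everything below (the strategy names, payoff numbers, and the assumption that the uninformed player plays it safe via maximin) is invented purely to illustrate the structure: one player best-responds to a game that isn’t actually being played, and the player who knows both games exploits that.

```python
# Toy hypergame sketch (all payoffs are hypothetical, for illustration only).
# Player A believes a different game is being played than the true one;
# player B knows both A's perceived game and the true payoffs.

STRATEGIES = ["cooperate", "defect"]

# Payoff tables: key = (A's move, B's move), value = (A's payoff, B's payoff).
PERCEIVED_BY_A = {  # the game A thinks it is playing
    ("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (1, 2),
    ("defect", "cooperate"):    (2, 1), ("defect", "defect"):    (0, 0),
}
TRUE_GAME = {       # the game actually being played
    ("cooperate", "cooperate"): (3, 5), ("cooperate", "defect"): (0, 6),
    ("defect", "cooperate"):    (2, 1), ("defect", "defect"):    (1, 0),
}

# A plays it safe (maximin) in the game it believes in: pick the strategy
# whose worst-case payoff, in A's perceived game, is highest.
a_move = max(STRATEGIES,
             key=lambda a: min(PERCEIVED_BY_A[(a, b)][0] for b in STRATEGIES))

# B, aware of A's misperception, predicts A's move and best-responds
# to it in the true game.
b_move = max(STRATEGIES, key=lambda b: TRUE_GAME[(a_move, b)][1])

outcome = TRUE_GAME[(a_move, b_move)]
# A guaranteed itself at least 1 in its perceived game, but the true outcome
# pays A nothing: the player unaware of the real game gets outguessed.
```

The specific numbers don’t matter; the point is the asymmetry of awareness, which is why resolving what game we’re actually in comes before optimizing play within any assumed game.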

In its current state, and given our apparent profound lack of understanding of the game we’re playing, hypergame theory leaves much to be desired. There are many unanswered questions and areas where more development is needed. For example, how do the logic and rules of the hypergame of survival evolve over (cosmic) time? What are the likely interactions of existential threats and futuristic technologies? We propose an institute be devoted to the study of hypersurvival.

3.1 Making population management both effective and humane

Assuming we manage to prevent ourselves from being actively wiped out by natural forces, each other, or some other intelligence in a dramatic fashion, we may still die out over the long term in a rather unspectacular fashion - by reproducing too little, or too much. For this reason, any organization aiming to prevent human extinction and foster human prosperity has to be concerned with population management and its complex interactions with new technologies and radical political ideas.

Economic thinkers have been debating for some time whether populations should be managed to prevent one type of future disaster or another, starting with Thomas Robert Malthus at the end of the 18th century and his ideas regarding the apparent threat of overpopulation. Unfortunately, what history has most definitely had a shortage of is humane would-be population managers. From the Irish potato famine to the modern Chinese one-child policy, most large-scale attempts to combat overpopulation were as inhumane as they were counterproductive.

As it turns out, demographic collapse, the opposite disaster, was the real problem that we should have been watching out for, not overpopulation. Demographic collapse means an increasing average age of the population, driven by fewer young people being born or more young people dying. This could be due to a decrease in fertility; a lifestyle focus on career and higher living standards rather than family; a large-scale war, genocide, famine, or pandemic targeting young people; or indeed due to population mismanagement. As the average age of the population increases, the population eventually starts declining, hence the term “collapse”.

Conversely, the hypothetical risks of overpopulation (which haven’t materialized) are based on the Malthusian idea that as living standards increase, people have more children, which in turn lowers living standards again (assuming the overall wealth in the economy remains the same), until eventually the population grows so large it becomes unsustainable. Why has this not happened? To put it simply, the main factors this theory doesn’t account for are economic growth and technological progress. Also, past a certain point of economic development, people tend to start wanting fewer children, which undermines the theory’s core premise.
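The Malthusian trap and its escape hatch can be made concrete with a minimal toy model. All parameters here are illustrative, not calibrated to real data: we assume output has diminishing returns to labor on fixed land, and that population grows when income per capita exceeds a subsistence level.

```python
# Minimal Malthusian toy model (all parameters are illustrative, not real data).
# Output Y = A * sqrt(P) has diminishing returns to labor P on fixed land,
# so income per capita y = Y / P = A / sqrt(P) falls as population grows.
# Population grows when y exceeds subsistence and shrinks when it falls below.

def income_after(steps, tech_growth, subsistence=0.5):
    A, P = 1.0, 1.0                           # technology level, population
    for _ in range(steps):
        y = A * P ** -0.5                     # income per capita
        P *= 1 + 0.05 * (y - subsistence)     # population responds to income
        A *= 1 + tech_growth                  # technological progress
    return A * P ** -0.5                      # final income per capita

static = income_after(500, tech_growth=0.0)    # Malthus: no technical progress
growing = income_after(500, tech_growth=0.02)  # sustained progress

# With static technology, income per capita is driven back down toward
# subsistence (0.5); with steady technological growth, it settles well above
# it even as the population keeps expanding - the Malthusian prediction fails.
```

The model also shows the second effect the text mentions in reverse: if the population response to income weakens (people choosing fewer children), income per capita rises further still.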

But the dangers of overpopulation may still be real, beyond the limits of possible technological innovation or our abilities to find new resources and keep using more of them over time. As for innovation, more of it seems entirely possible. We can make our technologies a lot more efficient, while AI and robotics should be able to supplement or enhance the workforce. As for resource limitations, it appears that we are starting to run into some of them, specifically the risk of overheating the planet by doing too much work on it. Even so, we could keep growing; we would just have to start growing into space. Alternatively, some rebalancing is needed.

Let’s explore some of the alternatives to the constant growth model. Assuming the demographic collapse keeps happening on its current trajectory, it may result in overall population reduction, which is one of the approaches to addressing finite resources - so-called “degrowth”. We don’t consider this to be an ideal approach, however, as it is a kind of giving up on progress. Although, there is some comfort in knowing that if we fail both to manage our population and to stop climate change, nature may deescalate the situation gradually by scaling us down to size.

One step above degrowth (on the scale from degrowth to space expansion) is the general environmental set of ideas revolving around finding a way to live in harmony with our planet. The idea of growth is replaced in this worldview by the ideas of sustainability and of humanity as an organism that has grown up to size already. Within most environments, most organisms tend to grow to a maximum size supported by their niche, but not further. We could try to increase the carrying capacity of our planet within this paradigm, but not beyond what appears natural.

Beyond sustainable growth and carrying capacity, there are some ideas on how to reform capitalism so that it essentially stays itself, but gets more in sync with natural systems. The reason why we may want to try this instead of the degrowth or harmony models is that degrowth means an economic and population downturn, while harmony would probably mean sustained economic and population stagnation. After all, what’s “natural” is evolution, or life on the planet always changing. Assuming that capitalism as an engine of change can be made to maximize goods (good things), continued economic growth and a growing population would be good.

The proposed ways in which capitalism could be reformed are numerous, starting from relatively obvious things like environmental regulations and carbon or pollution taxes to combat externalities, but they get a lot more profound from there. If we start from the understanding that goods are supposed to be things that are good, following the model of the economist Tomáš Sedláček outlined in his book Economics of Good and Evil, we can create economic systems that maximize desirable outcomes and minimize undesirable outcomes. Maybe we could make environmentally sound and healthy technologies and lifestyles profitable.

Even further on the scale is the everything-and-the-kitchen-sink approach, or in other words, all of the above. This is directly enabled by the main benefit of a large population - if we have many people, we can be doing many things effectively at once, even a few really big projects. If our population contracts, so will our capacity to innovate or affect the planetary living conditions in the preferred direction. Right now, we’re at the historical peak of our capacity. Let’s make everything more efficient, get rid of what’s unnecessary, invent new technologies, devise new models of economic and population management, and everything else possible.

Finally, at the end of the scale lies space expansion. One surefire way to enable ourselves to keep growing is if we grow beyond our planet (while ideally taking good care of it). This would take a lot of engineering advancements, but not really any new physics, although any new physics should make it much easier, faster, and more economically feasible. We could construct swarms of space habitats like O’Neill cylinders with artificial spin gravity, build shielded underground bases on any rock out there, build floating cities on Venus, or even build megastructures. While this would be difficult, with enough political capital, we could probably do it.

If we decide to take this route, we just need to be careful not to repeat the mistakes from our colonial past. We shouldn’t be taking any real estate out there (in space or other times or dimensions) from its current inhabitants by force. In fact, we should give some level of consideration even to not disrupting native microbial life on mostly dead worlds, and we definitely need to be very careful on worlds inhabited by not fully sentient complex life. We will discuss further details of such first contact scenarios with alien life shortly in one of the remaining sections.

With all of this context in mind, let’s return to the general idea of effective and humane population management. With the benefit of historical hindsight and future possibilities, we can set some fundamental ground rules:

  • Violent or forceful methods of population management, whether direct, indirect, or through strategic inaction, are unacceptable. This is true not only because of intuitive ethics and international law, but also because there’s no way to avoid inflicting significant, lasting damage and trauma on entire populations this way, which is bound to have dramatic unintended consequences.

    IN SHORT: No genocides, famines, sterilization, eugenics, and so on.

  • We should never experiment with whole populations, however rational the population management theory in question appears to be. Since we can’t actually fully predict what effect any novel approach will have, and since that effect could easily be irreversible and terminal, radical social or genetic engineering should be avoided, especially regarding nurture or reproduction.

    IN SHORT: No child number limits, family unit or care disruptions, or similar.

  • The economics and politics of everyday life have to be designed to incentivize and support behaviors that help keep the population in the right kind of dynamic balance. For example, if a population is in decline, the economy and polity attached to it should subsidize young families, not force young people to be overworked, underpaid, and unable to afford housing. Alternatively, they need to be willing to accept economic migrants of (re)productive age.

    IN SHORT: No self-defeating wage slavery or racial or nationalistic jingoism.

  • Finally, however hard it may be to accept to some of us, certain idealistic values appear to be incompatible with sustainable population dynamics, if left unchecked. They cut across the political spectrum - careerism may lead to having too few kids too late; certain versions of feminism may lead to too few women having too few kids too late; the interplay of religion and secularism is complex regarding this issue, but the extremes of both can lead to problems.

    IN SHORT: Let’s keep ideas that discourage people from having kids in check.

As you can see, population management is an incredibly complex and difficult problem, and that’s true even when it’s only about us getting our politics and economics wrong. There are also possible environmental factors that may be detrimental to our ability to reproduce. For example, there’s mounting evidence for microplastics being the main culprit behind the recent decreases in fertility worldwide. A new pandemic can also appear any time and affect our reproduction.

In summary, in order for a population to be stable in the long term, people must be fertile; want to have children; have the means to afford raising children; live in a safe enough environment where the children will not die prematurely or become infertile; and create a future economic and political environment that will enable the next generation to start their own families, and so on. Currently, the list of factors interfering with these requirements globally includes at least capitalism, racism, nationalism, some novel feminist and gender ideas, and various types of pollution.

As far as we can tell, there doesn’t appear to be any single magical silver-bullet solution to all of these issues at once. Our recommendation therefore is to try to address any or all of these contributing factors separately in an incremental, proportionally escalating fashion, until the existing undesirable trends in demographic dynamics start to reverse. To that end, an apolitical non-profit institute should be established to monitor the situation and provide guidance.

The reason why it needs to be apolitical and non-profit is that politics and profits are by far the main contributing factors to most major population disruptions in history. In this case more so than any other, the science of the issue transcends the politics, as human survival is the basic prerequisite for any human goods to exist. If we get to the brink of extinction due to population collapse or crash, it truly won’t matter if we got there because of hate, faith, fighting for human rights, trying to have fun, or trying to make a buck. We are simply against premature extinction.

4.1 The Road to Panacea

Once our survival is more or less guaranteed and population stabilized, quality of life becomes the main concern, and at the foundation of that lies health. As there are many scientific institutions and a whole pharmaceutical industry today working toward achieving a reasonable version of this goal, there’s no need to build the initiative up from scratch. However, there are some issues in the current approach that we feel need to be addressed. The main problems we can identify include:

a) Insufficient global health coverage

According to the World Health Organization, about half of the world’s population lacks access to essential health services, and another 100 million people have been pushed into extreme poverty due to health expenses. We believe that anything less than universal global health coverage for every person who wants it is unacceptable. An individual’s freedom to refuse treatment is the only justifiable limit.

As there are people who may disagree with this position on the level of principle, even though it is one of the more self-evident positions, let’s go briefly through the main arguments. Every person in the world deserves healthcare equally because:

  • Human beings are fundamentally equal, in their capacity to suffer and susceptibility to bad luck.

Some people are more naturally healthy than others, and/or more able and willing to make more money than others. Some of these people therefore don’t like paying for the health expenses of the less fortunate or able, and especially not for the more lazy. The more for-profit a healthcare system is, the more it therefore has a tendency to leave some people with no health coverage, if being able and willing to make enough money is a requirement to receive it. Let’s break this sentiment down.

This is unjust to all victims of bad luck, and it can backfire even against the healthy working person who thought they would always be able to take care of themselves. The negative consequences also may not end with the person who’s unable to afford healthcare, but may affect their family, especially their children and the elderly. This is even more unjust: it remains unjust by any definition, even if the person in question theoretically could have made enough money, to everyone who depends on them, and it is an excessive burden on anyone they depend on.

Overall, this only maximizes suffering and increases risks all around. The whole point of universal collective insurance is to minimize the burden for every individual, while also minimizing suffering and risk for as many people as possible. A person who doesn’t consider themselves lazy and is thus in a position to pay for their own insurance is never going to be nearly as inconvenienced by having to also pay for those who cannot afford it, as those who cannot afford it would be by not getting treatment.

Intentionally making a person suffer and possibly die, for any reason, including to incentivize them or to teach them a lesson, is torture. We therefore don’t believe it is ethical to design systems this way. Knowingly creating systems that can result in such outcomes, if they’re preventable, is negligence. Letting people suffer through inaction, when you could easily help them at no significant risk to yourself, may not exactly be torture or negligence, but it can hardly be defended as ethical behavior.

  • Society is interdependent, and a global civilization even more so.

While the first argument mostly addresses the situation in the developed nations that can afford some form of universal healthcare, but may choose not to provide it, this argument is mainly about why the developed nations need to make sure that universal healthcare gets adopted also in the developing nations which may not be currently able to afford it. To put it simply, diseases don’t recognize borders, and every healthy human being means more happiness and potential for progress.

First, the dangers of global spread of disease. As the recent pandemic has shown, again, the larger the population is through which a disease can spread, the more opportunities the pathogen has for mutation. Developing a vaccine may be half the battle, but the other half of the battle is to get the vaccine to the developing half of the world. Also, a new strain can originate anywhere in the world. The better the local healthcare system is, the better chances we have of containing it there.

  Second, the maximization of human potential. For us, the fact that the strategy maximizes human happiness is reason enough to pursue it, but beyond that, healthy people are also more productive. Whatever it is that you believe that people should be doing, they’d be much better at it, and able to do more of it for much longer, when they’re healthy. Remember, we’re talking about 50% of all human beings on Earth. This could be an increase in productivity on par with the enfranchisement of women. The only theoretical risk of helping more people be more healthy is overpopulation, and yet, the problem we’re facing now is demographic collapse.

We will discuss potential problems related to population management in another section, but for now, suffice it to say that letting people starve or get ill and die needlessly are neither ethical nor effective methods of managing population. Beyond such Malthusian concerns, one needs a framework of antisocial individualism, zero-sum nationalism, or similar extremist ideology to justify misfortune or death befalling other innocent human beings who can be helped.

  • It is ultimately cheaper to treat disease than letting it escalate.

We believe that the ethical and scientific cases are sufficient to justify the relatively minor inconvenience (in terms of actual suffering) of the relatively wealthy that may be needed to pay for universal global healthcare for the poor, but there’s also an economic case to be made here. Not so much to maximize profits for individuals, but to minimize real costs to whole societies. After all, individual profits often ignore externalities, or economic losses suffered by other people and societies.

Loss of human potential, discussed in the previous argument, is one such societal externality that isn’t included in the insurance equation, even though the loss of production and innovation is calculable. But that’s not all that may be lost, calculably, due to a lack of treatment. Most diseases are the cheapest to prevent, cheap to treat early, and get exponentially more expensive from there. They’re most expensive of all when they result in a preventable death, in the form of lost revenue.

In terms of direct treatment costs, early stage cancer treatments tend to cost a fraction of late stage cancer treatments, for example, and that’s just the type of illness where this kind of comparison is the easiest to make. With cancer alone, this means many billions of dollars globally per year. With universal global health coverage, earlier detection and greater awareness and prevention would be more likely, which would cut these costs, and save lives, by a significant margin.
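The arithmetic behind this claim can be sketched with a back-of-the-envelope comparison. Every number below (screening cost, incidence, early-detection shares, and treatment costs) is invented purely for illustration, not taken from real data:

```python
# Back-of-the-envelope cost sketch with purely hypothetical numbers:
# shifting diagnoses from late stage (expensive) to early stage (cheap)
# can outweigh the added cost of universal screening.

def expected_cost_per_person(screening_cost, early_share,
                             incidence=0.005,     # hypothetical yearly rate
                             early_cost=20_000,   # hypothetical $ per case
                             late_cost=150_000):  # hypothetical $ per case
    # expected yearly cost = screening + probability-weighted treatment cost
    treatment = incidence * (early_share * early_cost
                             + (1 - early_share) * late_cost)
    return screening_cost + treatment

without_screening = expected_cost_per_person(0, early_share=0.3)
with_screening = expected_cost_per_person(50, early_share=0.8)

# With these made-up figures, expected cost per person per year drops from
# $555 to $280 - paying for universal screening still comes out cheaper,
# before counting any lives saved or productivity preserved.
```

The exact break-even point obviously depends on the real figures for each disease; the sketch only shows why earlier detection can pay for itself in direct costs alone.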

The calculable loss of revenue due to lost productivity and innovation, for people who die or become disabled due to illness, tends to be a multiple of the direct costs, and it comes on top of those direct costs. Contrary to what most dictators or some businessmen tend to think, lives aren’t cheap. Beyond our inherent value, each of us also has a significant economic value, which should put healthcare costs into perspective. Paying billions to save several times more in productivity, as well as in related capacity for consumer spending, truly is the cheaper option.

To address some relevant individual sentiments, there is the idea that some people, through their conduct, prove they don’t deserve treatment, economically speaking. The economic analysis here goes as follows - if someone knows treatment is guaranteed even if they aren’t productive (in a universal healthcare system), then they feel they can just wreck their body with drugs or an unhealthy lifestyle, which will amount to large treatment costs, while guaranteeing no productivity in return.

Assuming the position is honest (meaning it’s not being pushed by, say, a tobacco lobbyist), and assuming the person in question wilfully chooses to be both unhealthy and unproductive for purely personal reasons, denial of healthcare still isn’t a good solution. Firstly, these are minority cases, as most people want to be healthy and are productive. Secondly, there are systemic causes of pathological behavior, which is what governments should aim to address. Thirdly, to the extent that vices and character flaws are legal, it is unjust for a society to punish them, and punishment is probably going to cost more anyway, at least in externalities.

b) Outdated and misguided nutritional recommendations

Given that disease prevention is by far the cheapest and most effective strategy, we must make sure that health-related education is widely accessible and accurate rather than propagandistic, especially regarding the fundamentals of healthy nutrition and lifestyle. Unfortunately, this largely hasn’t been the case over the last several decades, not even in the developed countries, for a number of very different, but equally frustrating reasons. These reasons include:

  • Bad science and even worse popularization of science.

Most people in the world today are familiar with the so-called food pyramid, which suggests that the type of food that we should eat the most of are carbohydrates, or complex sugars. In the second half of the 20th century, all fats, particularly saturated fats in foods like dairy or red meat, were determined to be harmful - endangering heart health by increasing cholesterol. Except all this is likely wrong.

The most recent research increasingly suggests the opposite, that sugars are the problem, not fats, and that fats or cholesterol levels aren’t actually indicative of heart disease risk. There’s still some remaining controversy, but this bad science leading to bad advice has in all likelihood vastly increased the risks of the so-called diseases of civilization (diabetes, cancer, obesity, and acne) for billions of people.

The key question that isn’t being paid a lot of attention is this - if these diseases are the diseases of civilization, why are they not common among cultures that aren’t “civilized”? What are we doing differently than hunter-gatherers? We’re eating more calories overall and being more sedentary, yes. But more importantly, we’re doing agriculture, which means our diet includes a lot more carbohydrates, or sugars.

We generally promote progress, but we also believe that change for the sake of feeling like we’re progressing, or assigning too much value to young sciences at the expense of traditional wisdom, is a bad idea. Given that the scientific process isn’t perfect, and given how difficult it is to do properly controlled dietary research, we shouldn’t be overturning all time-tested diets only because they’re pre-scientific.

Beyond scientists getting ahead of themselves, there are also other complications in communicating science to the public, especially nutritional and lifestyle science. Popularization of research tends to be sensationalist, overly reducing and overhyping what the research actually says, often to a point of total inaccuracy. For example, if you were to believe all of it, then everything both causes and cures cancer. Scientific findings about health are subtle, conditional, and provisional.

It’s an open question whether the real complexity of health research can actually be communicated to the general public. In light of this, we recommend teaching all individuals the fundamental principles of the scientific method, and with those in mind, to listen to their own body, given that all medical research is about averages generalized for large groups. We also recommend against making any sweeping changes in global health policy solely on the basis of the newest scientific findings.

  • Corporate lobbying to sell unhealthy products.

This most definitely involves extensive marketing campaigns for outright unhealthy products like tobacco products and alcoholic beverages, pushed by the respective industries, but more importantly, it includes unhealthy or unhelpful products that tend to be presented as healthy in their marketing, like low-fat foods, many types of supplements, weight loss drugs, fat-massaging “exercise” machines, and so on.

This is hardly surprising. Since most people like to have a good time, unhealthy products are a big business, and since most people want to be healthy, health products are a big business. A solid, and easy, scientific case can be made for heavy regulation of both of these types of business, but personal freedom is of course a major issue here. Most people and legal systems draw the line at fraud, selling straight up poison, and predatory targeting of the youth, as is appropriate.

  • Political interests getting in the way of facts.

While corporations are chasing higher profits no matter what, there are many possible motivations for national governments or political activists to fail to act in the public interest, which in this case means offering bad nutritional and lifestyle advice. Historically, the most common situation involved governments claiming that whatever they produce a lot of is healthy, and vice versa, like milk or spinach.

Milk and dairy remains particularly controversial to this day, but the sustained lack of scientific certainty on the issue has never stopped any government from running a propaganda campaign promoting its magical healing powers. Milk is probably fine to consume in moderate quantities for most people who are able to digest it at all, but it should come as no surprise that we recommend that governments stop putting economic practicality or political convenience over factual accuracy.

The other main type of political reasoning behind deceptive or inaccurate nutritional or lifestyle advice would be some sort of ideology. This can again be the case with nationalistic or progressive governments, but it’s more commonly a problem of individual political activists, usually those at either extreme of a given spectrum. For example, for some activists, the vegan diet or the carnivore diet aren’t just about facts.

As we believe that beliefs (alone) don’t make things true or constitute sufficient evidence, we would never recommend anything only because of its arguable ethical or cultural implications. Eating meat may cause great suffering to animals, especially if factory farming is involved, but that has no bearing on whether a vegan diet is healthy. Similarly, eating only meat and animal products may be tasty or manly to many people, but again, such things have no bearing on how healthy that is.

SGEARS, like our other organizations, isn’t a political organization, at least not in the usual left-right dimension. We’re focused primarily on scientific facts and practical problem-solving to objectively help humanity survive and thrive. In the case of nutrition and lifestyle, that means we care about questions of health and sustainability. In terms of food ethics, the ideal goal is minimization of all suffering, but without adequate technology, we have to prioritize reducing human suffering.

c) Financial incentive for treatments, but not prevention or cures.

This unhelpful corporate medical philosophy is sometimes referred to as “pill-for-every-ill”. Why would you teach people to make healthier lifestyle choices, like to eat less or walk more, when you can instead let them start having health problems, then keep selling them never-ending treatments for those problems, and make a lot of money in the process? A classic perverse incentive, and a sick business model.

It’s already bad enough that the corporate medical industry isn’t motivated at all to find treatments, let alone cures, for any affliction that doesn’t happen to have a large number of sufferers; that most medical testing is done only on white males; and that new fake afflictions are being invented along with fake treatments for them. While these failings of our current system do make economic sense, they’re still failings, and we need to find some sort of remedy for them, an alternative model.

As long as the financial incentives within our healthcare systems remain set up in a way that leaves many people without treatment and many afflictions without cure, we can only expect more of the same, not an improvement. These issues can be mitigated with activism or charity, but likely not to any substantial or lasting degree, and neither is guaranteed to occur, as they both rely on chance and choice. A true solution has to be a guaranteed, systemic one, resources permitting.

It is an open question what the ideal alternative model should be like. It may not be compatible with capitalism at all, unless we find a way to make more profit from people being healthy than from people being ill. Problems are generally profitable to their solvers only as long as the problem persists. Every illness eradicated is a market gone. If we want to get rid of problems, we need to design our economic system to reward people who do that more than people who almost do that.

As the entrepreneur Boyan Slat puts it, the goal of his descriptively named non-profit organization, The Ocean Cleanup, is to put itself out of business as fast as possible, as an ocean cleaned is an ocean that no longer needs more cleanup. This kind of initiative is therefore possible, but it requires a highly motivated, almost self-sacrificial individual. If we find a way to generate many more people like this, or if we make going out of business valued, then we could perhaps cure all diseases.

d) Authoritarianism of scientific health organizations

As the recent global pandemic has demonstrated, scientists and doctors in positions of authority are just as political as the rest of us, even though they often claim otherwise. That much is understandable. What’s worse is that the adage “science isn’t a democracy” appears to be a guiding principle of their politics. The primary responses of the scientific and medical authorities during the crisis to any challenge to their authority were censorship, lockdown, and coercion.

This would perhaps still be understandable in the face of an existential threat to prevent deaths, in much the same way that martial law may be justified during violent conflicts. However, it appears that these authorities mainly used these measures to cover for their own misconduct and to protect their funding, position, and power. This has severely damaged the trust of the global public in the scientific and medical establishments, in a way that won’t be easy to repair. This must not be repeated.

At the very least, the scientific and medical establishments need to be much more honest with the public going forward. It is common in authoritarian regimes that the leadership always tries to save face at all costs, pretending that the ruler or the ruling party is always right and has never made a mistake. The main problems with this approach are that 1) the people aren’t stupid, they can see that the emperor has no clothes, and 2) errors compound when constructive criticism is rejected.

This makes a system fragile, prone to collapse through something like a mismanagement-caused explosion of a nuclear power plant, or a mismanagement-caused lab leak resulting in a global pandemic. The way in which democracy is less fragile than a technocratic dictatorship is that it is based on public debate of all opposing viewpoints. That’s how the system gets to learn about its flaws and figures out how to best address them. Preventing the debate from taking place through any kind of censorship only signals the weakness of one’s arguments.

Scientists like to think that they always know better than the general public, and that the general public is dumb, but that’s much more bias than fact. Given the many uncertainties and unknowables in new situations like an early pandemic, the scientific authorities cannot have a much more informed opinion than the public, simply because there hasn’t been enough time for that - the science isn’t in yet.

At the beginning of a pandemic featuring a novel pathogen, we cannot know how deadly it is or is going to be, whether any particular measure will actually slow down its spread, or what the long-term effects of a newly developed vaccine could be. Also, doctors aren’t necessarily experts in economics or other fields that may factor into the real consequences of any particular measure that may be adopted to combat the pandemic. Medical dictatorship is therefore still just a dictatorship.

A more anti-fragile medical system would likely be one based on transparency and public debate, in which all mistakes, uncertainties, and risks are made clear by the experts to the public, which then makes maximally informed political decisions, whether directly or through elected representatives. Such decisions will still sometimes result in making things worse, but with high probability, that would occur less often than if the decisions were made by censorious self-interested autocrats.

Only once enough time has passed from the beginning of a crisis for the scientific process to run its course should the scientific establishment formulate new policies on the basis of lessons learned. The new, actually scientifically informed policies should then be offered for consideration to the democratically elected representatives and the general public, not forced upon people with little or misleading explanation while all critics are mocked, shunned, or silenced.

4.2 The Search for the Fountain of Youth

Once we figure out how to keep the global population as healthy as possible within the current limitations of the human condition, we believe that we should start to fight back against some of those limitations. Chiefly, we believe that it is a worthy goal to try to enable ourselves to live longer lives, as long as those lives can remain healthy. It is a fact that modern medicine has already started doing that.

The average life expectancy has more than doubled since the pre-industrial era, from around 30 to around 70 years, and there are good reasons to believe that at least one more doubling should be possible. Many promising anti-aging or age-reversal treatments are being explored, including more powerful antioxidants, gene therapy to extend telomeres, hormonal therapy using human growth hormone for rejuvenation, infusions of young blood, and more. Progress is being made.

Some more extreme measures are also being experimented with, like cryogenic stasis programs that try to preserve the body long-term until we develop the technology to revive it and cure it of any afflictions incompatible with life. Another example of a more extreme approach is figuring out how to replace organs with animal, artificial, or lab-grown ones. There are also some scientists who believe it may become possible to upload a mind into a cybernetic body or virtual environment.

In fact, so many advances have been made lately that keeping the body alive or relatively youthful for much longer doesn’t appear to be the biggest obstacle to greater longevity. Degenerative neurological diseases, mainly various types of dementia, as well as cancers, may be more difficult to address. This is because the longer one lives, the more cellular damage compounds and dangerous mutations accumulate. Aging by itself is only one of many things that can ruin our lives.

At this point, it’s therefore important to stress that a solution to aging wouldn’t make us immortal, not unless we find a cure for time itself, or chaos, or the laws of statistics. A being who doesn’t age may still succumb to disease or injury, and given enough time, some kind of terminal incident becomes a statistical certainty. What a solution to aging would give us is a major increase in our average life expectancy, coupled with a substantial increase in our quality of life in the second half of life.

True immortality would probably be a step too far anyway, given the potential of such a transformation to make us not only transhuman, but inhuman. We have already addressed the dangers of becoming inhuman in one of the previous sections, so there’s no need to repeat them here. However, there is one specific way of applying anti-aging treatments inhumanly that should be discussed (and ideally avoided) - exclusive availability of such treatments to the rich and powerful.

If only a few of us gained access to life-expectancy-doubling drugs or other treatments, it would likely cause a major increase in political division and economic inequality. It’s not entirely unrealistic that this could involve situations like old rich people draining young poor people of their blood so that they can infuse it into their own bodies to remain forever youthful. We don’t believe that making Hollywood vampires literally real would be saving the world by any reasonable definition.

To be clear, the fact that the literal draining of young healthy people’s blood may be involved in prolonging life is an interesting quirk of our reality, but the vampiric metaphor describes equally well any version of the few preying on the poor to extend their own lives. We believe that in order for significant anti-aging-based life extension to be morally justifiable and politically tenable, it must be universal. This means that no method of life extension that requires exploitation is acceptable.

To borrow from another classic work of horror, we should also be wary of any life extension “solutions” that are literally Frankensteinian. Of the procedures proposed in real life, this mainly relates to the idea of a head transplant. It’s hard to imagine a scenario in which such a procedure could be entirely ethical. A more open question is what level of cybernetic augmentation would be excessive - how much of the body or brain can be replaced with artificial implants before something is lost?

The more artificial life extension solutions, which may include mind uploads into virtual environments, also create additional problems due to questions of technological design, information control, and corporate economics. What if you’re required to keep paying for a life-saving implant, but you run out of money? Repossession would then effectively amount to murder. How should we regulate what is or isn’t allowed in virtual environments that host real human minds? There are so many dystopian possibilities that they spawned a subgenre - cyberpunk.

In summary, we need to make sure that life extension doesn’t involve any dehumanization or torment, in addition to being universal and not involving any exploitation. The goal is to improve the quality of life and maintain it for longer, not to create more suffering or force people to toil without end. For these reasons, we may need to seriously reevaluate a person’s right to end their life, as well as the nature of work and retirement. An aging population, even if healthy, may also become more conservative. Economics and laws will need to be adapted to the new status quo.

Hopefully, with more time at better health, we’ll spend more time learning, adopt a more constructive, long-term perspective, and make more of our lives. This change may in fact be a necessary precondition for us to explore distant outer space, or to be able to pull off any generational megaprojects. Even within the capitalist paradigm, when one can afford to wait for a return on investment for centuries, business ventures should become more sustainable. Whatever the future may hold, we believe it is about time to start preparing for these kinds of possibilities.

5.1 The Case for Ultimate Peaceful Expansion

Finally, as we become a united humanity that’s no longer actively suicidal or genocidal, with stable, healthy populations of thriving individuals with long, productive lifespans, it would be illogical not to venture outside of our comfort zone into the unknown. We could try to skip ahead to this step in hopes of finding solutions to our problems out there somewhere, and maybe it would help; we should definitely at least start laying the groundwork now. But there are reasons for caution. Here are some possible issues with space expansion in our current state:

  • We’re likely to bring the problems we have with us wherever we go.

This mainly concerns war, but we’re also currently likely to carry and spread pollution, disease, and all kinds of bad ideas and habits that cause suffering. Other intelligent life out there may find that objectionable and concerning, and we’ll be no better off anywhere else than we can be here. As the famous astrophysicist Neil deGrasse Tyson says, if we can terraform Mars, we can terraform Earth, only much more easily. This logic applies equally to any other solution to any other problem.

  • A disunited humanity will likely further splinter if it expands.

No matter how fast we become able to travel, once we expand in space, time, or other dimensions, it will become possible for different human groups to become so isolated from each other that, given enough time, they will evolve into different species. Without a solid cultural foundation for peaceful coexistence, this is a recipe for creating future conflict - one in which whole worlds may be destroyed. To prevent an expansion from fracturing, advanced political technology is needed.

  • Survival in space requires better constitution and healthcare than we have.

Within the context of known physics and conceivable engineering, long-term survival outside of Earth is likely to involve a lot of radiation exposure, low-G or zero-G exposure, potential exposure to alien or rapidly mutating pathogens, and other serious health hazards. It seems necessary to at least defeat or subdue cancer to make living in space safe enough. Not to mention that we also need to be able (and willing) to provide all the needed miracle cures to all the citizens.

  • All serious space projects transcend our current lifespan.

This challenge could either be addressed biologically or psychologically, or both, but it needs to be addressed. In space, we cannot effectively operate within four-year terms or similarly short horizons. Anything worth doing in space involves such scales and distances that it will take at least decades to finish. Any work will therefore have to be life’s work, unless our lives get substantially longer. Ideally, we should practice long-term thinking and projects on Earth first, to get ready.

  • Our problems are likely keeping us from being motivated to expand.

There may be good reasons why we aren’t doing much of anything in deep space today, despite having technically started long decades ago, after only a few short decades since flight was invented. In political and business jargon, these reasons are typically called “lack of political will” and “lack of sound business case”, but that just raises the question of what’s wrong with our will and our idea of sound business. The short answer is, we value short-term selfishness over long-term gains for all.

Despite all of these issues, we still believe that some kind of peaceful expansion is the ideal application, or culmination, of technological progress - even though, quite possibly, it would just represent a new beginning of a much longer and grander phase of our existence. From our current limited vantage point, it’s difficult to speculate on what our expansion may look like in practice, exactly, but we can at least discuss some basic possibilities that have been imagined so far.

  • Outer space colonization.

While this is the option that’s the easiest to imagine, as well as one that has been the most explored in both fiction and academic literature, the lack of evidence for large space-colonizing empires in our corner of spacetime suggests it may not be very popular, doable, or safe. Or we’re very rare, or one of the first races out there at this level. In any case, the maximum travel speed is the defining factor here.

If the speed of light is a hard limit, then we're most likely going to be able to colonize only our own solar system. Theoretically, even under light speed, we could colonize the whole galaxy in about a million years, but the political technology required for us to hold together and stick to a mission for that long seems a lot more out of reach somehow than faster-than-light travel. As for sublight travel beyond our galaxy, the accelerating expansion of spacetime imposes strict limits.
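The million-year figure is a standard back-of-envelope estimate, not a precise prediction. As a sketch, assuming a galactic diameter on the order of 100,000 light years and an average expansion speed of one tenth of light speed (both round-number assumptions), the crossing time works out as:

```latex
t \approx \frac{d}{v} \approx \frac{10^{5}\ \text{light years}}{0.1\,c} = 10^{6}\ \text{years}
```

Even large changes to these assumptions only shift the result by a factor of a few, which is why sublight galactic colonization is usually quoted on the scale of a million years - an eyeblink cosmically, but an eternity politically.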

The sublight colonization of nearby space is usually imagined as the mining of asteroids and comets to construct megastructures like Dyson swarms around stars, or ring- or cylinder-like habitats with rotational gravity. But if artificial gravity is possible to develop, or if a better power source exists than stars (like some sort of zero-point vacuum generator), then there may be no need for megastructures at all. Especially if such advances only take a short time from where we are today.

Trying to envision what the future of space expansion would look like may therefore be just as doomed as trying to predict the future of air travel when all you know are balloons. You’d probably imagine futuristic blimps everywhere, and you’d be wrong. The more exotic technology can get, the more unpredictable the outcomes become. For example, if faster-than-light travel is possible through something like wormholes, then maybe traveling literally anywhere is possible in no time at all. In an infinite universe, that could mean infinitely diffuse and discontiguous empires.

Overall, since we’re not seeing the kinds of sprawling megastructures that we can imagine in our galaxy, it seems logical that we won’t be building those no matter what. Maybe because the real ones will look nothing like what we can imagine right now, maybe because somebody out there keeps smashing literally all of them all the time, or maybe because of a reason we haven’t thought of yet. But whatever the future technology or exopolitics may or may not allow, what really matters isn’t whether we will be able to, but whether we should - the objectives of space expansion.

Why would we do anything in space? Isn’t it just a vast, dead emptiness? Well, we’re almost certainly not going to find other planets that we could just go to and live on without major adjustments to either us or the planets’ environments - barring some very specific ancient aliens theories happening to be correct, against all odds. What we can get in space in spades are energy and materials from which we could create anything we want. Although if we could somehow generate energy and materials more easily than through space mining, we wouldn’t need to go to space to get them.

We can also use the extra space, of course, even if we could generate energy and materials ex nihilo. Unless we could also generate extra space, in which case we could create our own pocket universes, meaning we wouldn’t have to go to outer space to get more space either. But stuff and space are still only means to an end, which in the case of space expansion would be to make more people. Thus, if we decide that some number of people is enough, we again wouldn’t need to go to space. We wouldn’t even have a reason to create more space or stuff if we could.

The issue is, maybe the whole idea of space expansion to make more of ourselves forever is as antiquated as the idea that the future of flight is blimps. After all, colonization and expansion of living space are imperial ideas, and empires seem to have a tendency to self-destruct or decolonize as they go through modernization. Maybe the only tenable rationale for space expansion is exploration - forever passing through, never trying to permanently move in anywhere. In a nomad universe, there would also be no megastructures, especially if FTL is possible.

If every person could be like Doctor Who with their T.A.R.D.I.S., a phone box that can travel anywhere and blend in with the environment, including into hidden dimensions or across time, how would one even maintain an empire? If anyone who disagrees with the rules of their society could simply leave and couldn’t be stopped, or if anyone could wait out a cosmic age they didn’t particularly like in the blink of an eye, how would any authority force anyone to do anything? What could be observed, on the cosmic scale at the distance of light years, of such a “civilization”?

Today, we seem to be obsessed with largeness, but if that’s just a phase we have to outgrow to actually get into space without blowing ourselves up, then maybe going into space is never going to be about building anything large. In sci-fi and futurism, civilizations are typically classified as either spacefaring, or stay-at-home. Maybe all effectively spacefaring civilizations are simultaneously both, as they figure out that expansionism is rude, but are still curious and social, and so they visit places.

If we ever decide to expand into space, we will have to decide if we want to do it as conquerors, or tourists; or to put it in more personal terms, as manspreaders on the subway, or kindly neighbors. Hopefully, there can be a lot more to it than merely taking more stuff and space, as taking more stuff and space is boring, in addition to being impolite. In fact, it’s possibly so boring, that many or most civilizations out there may have decided to expand in other directions, like inside of themselves.

  • Inner space exploration

The main two potential practical options of expanding inward, as opposed to outward, involve either simulations, or psychoactive substances. In the simulation scenario, the logic goes that after we invent something like the holodeck from Star Trek or a Matrix, a perfectly immersive simulation technology, it might become our last invention. Why go to real space, which is hard and costly, when you can create a simulation of it that feels just as real, or of any possible experience?

There might still be a need to colonize at least our own solar system, in order to gather enough resources to build a sufficiently advanced, full-scale, or lasting simulation infrastructure, but who knows. If new computing technologies or advances in mathematics that are possible to develop soon dramatically reduce the power requirements for computer processing, there may be no need for megaprojects after all. This possibility alone could explain the Fermi paradox.

However, there are some major uncertainties and leaps of logic in this hypothesis. For starters, it’s entirely possible that people, or sentient beings, will always value the realness of experience, even when they have a perfect way to fake it, and maybe a perfectly immersive simulation technology is impossible. There are also some evolutionary reasons for believing that civilizations that go full-Matrix are less likely to survive long-term than space explorers, and thus select themselves out.

In any case, exploration of simulated experience is very likely to be a significant part of our future, based on the currently foreseeable technology trends. If not fully realistic, a very good simulation technology appears entirely possible, if maybe costly in terms of power requirements. There may also be something very real to be discovered this way that may not be as easy to find in deep space, depending on the nature of the truly fundamental physics. Which brings us to psychonauts.

While it may sound preposterous at first glance to an old-school scientist, there are some scientists who study psychedelic substances today who don’t rule out the possibility that the visions induced by psychoactive drugs may have some reality to them. At present, it can’t actually be determined with sufficient confidence whether changes in brain chemistry can only create illusory sensations, or whether it is possible that they may instead interface the brain with hidden corners of reality.

We simply don’t know enough yet about how brains work or what consciousness is to be able to make definitive conclusions about these possibilities. This means that people on high doses of chemicals like DMT may in fact be communicating with real entities from somewhere else, or exploring dimensions that are normally unavailable to our senses. This could perhaps be proven if someone manages to retrieve useful verifiable information from an altered state of mind.

If, as some scientists suspect, spacetime isn’t in fact the foundational layer of existence, but consciousness is, then exploring simulations or psychedelic states may be a more fundamental form of exploration of reality than any exploration of spacetime could ever be. In such a universe, it might even be possible to skip across spacetime in an instant through a change in one’s consciousness, or to remake what we perceive to be reality at will. Imagination would be the limit.

Given that many accounts of encounters with alien beings include descriptions of psychic communications and various reality- or perception-bending powers, we don’t believe this possibility should be ruled out. However, it’s also possible that a sufficiently advanced conventional non-psychic technology could emulate the appearance of magic, so it’s really impossible to tell at this point how much deeper physics goes than we currently understand or can imagine. We need to be careful.

If we decide to undertake major inward exploration projects, then we need to make sure that:

  • We don’t lose our connection to reality and ability to deal with real external threats.

  • We retain the ability to discern reality from illusion or simulation.

  • We don’t lose track of who we are as human beings.

  • We don’t become addicts requiring escalating intensity of stimulus.

  • We don’t abuse or torment any real people or real virtual consciousnesses.

  • We don’t accidentally destabilize reality, if magical powers are attainable.

In summary, while the main danger of space expansion is that we repeat our colonial and imperial mistakes, the main dangers of the exploration of consciousness are that we could either go extinct due to being too distracted, or that we could effectively become cruel or insane gods. Whether or not our innate grasp of reality is accurate in terms of fundamental physics, maybe there are evolutionary reasons why intelligent beings need to have a strong sense of reality in order to be able to maintain any kind of sane society and survive in the long term.

  • Dimension-hopping

After considering space expansion and mind expansion, we have to acknowledge that there’s some middle ground between the two, due to the existing uncertainties about the fundamental physics. In established terms, this option should mainly cover anything resembling time travel or multiversal travel. Regardless of whether this form of travel or expansion could be accomplished using some sort of spacetime or consciousness-based technology, it has some distinct implications.

But first, we should stress that all of the above-listed recommendations still apply. We shouldn’t try to colonize any other places at the expense of their native inhabitants, and we need to maintain our sanity as explorers, as well as our focus on our own survival. In the case of interdimensional travel, many works of sci-fi have certainly explored in detail how experience with parallel timelines may effectively dissociate travelers from their original reality, making them not care.

If the other dimensions were not parallel realities, but instead what we think of as supernatural realms, there may be spiritual or religious implications that could also dissociate us, only from material existence as such. It’s one thing to lose the fear of death upon finding out that there’s something more beyond material death; it’s another thing entirely to stop valuing life as we know it in our dimension because something more profound may exist elsewhere, to the point of actively trying to “help” everyone by speeding up their demise. Spirituality can get psychotic.

To be clear, this isn’t an entirely theoretical thought exercise. There are numerous accounts of people who have had this kind of experience, as far as they can tell, like the near-death experiencers, or people who would swear that some details of their life or of the world around them suddenly changed. Many of these experiences ended up being life-affirming and transformative in a good way, while others were profoundly traumatic. Particularly for those who went through a hell-like NDE.

But even some positive NDEs left the experiencers in a kind of existential crisis. Imagine that you visit a realm that feels “more real than real”, where everything is so much “more” than things are here, that after returning, you’d find sunsets and rainbows comparatively ugly. You’d get depressed. It may very well be the case that our realm isn’t the best or most fundamental in existence, but we believe that the right response to such a finding would be to try to improve it, rather than leave it.

Following some theories surrounding the NDEs and the potential nature of “afterlife”, it may in fact be the case that we do make a sacrifice by choosing to come here precisely to improve it, although there are many other possibilities. To use a mundane real life example, work that’s necessary or helps others is often unpleasant, and one could be just having fun instead, but perhaps a balance of constructive challenge and pleasure is the best way to live a life, and be human.

In fact, whatever the reality of other dimensions may be, this is the most reasonable and justifiable approach to interdimensional expansion. If we ever become able to expand into parallel timelines or other realms, whether “higher” or “lower”, we should only do so if by doing so, we can make them better through our presence or agency. We shouldn’t be invading them as what their native inhabitants would perceive as demons, shadowy alien infiltrators, tyrannical deities, or other villains. We should only be appearing in other corners of the cosmic multiverse as angels.

  • Transcension

Speaking of trying to be more like angels, there’s one other idea of what our future expansion may look like that’s even more transcendental than physically visiting an extradimensional spiritual realm - the concept of transcension, or ascension to a higher level of being. So far, all of the previous options describe a form of travel or expansion which we would be doing as us, mortal material human beings or minds. Hypothetically, the most powerful form of expansion would be of who we are.

This could still involve advanced material or consciousness-based technologies, exploration of possible experience, or dimensional travel, but with the understanding that the goal should be to become the best version of who we could be. While this is a firmly philosophical concept without much clear science behind it at the moment, we still believe it’s worth mentioning and further consideration.

As we have established previously, we don’t think that becoming some sort of enlightened psychopath constitutes an improvement upon the human condition. In some fundamental sense, having limitations is what makes us human, and arguably better beings than who we would be without them. If more knowledge would make us care less about the suffering of others, for example, then from our standpoint, that’s a regression, or a sign that we’re not really handling having more knowledge.

In another example, if curing aging and disease so that we could only die violently would turn us into warring tyrannical gods like those from our ancient myths, it would again not constitute a true expansion of who we are. In short, to the extent to which more power equals less humanity, it’s not a desirable form of transcension. The point would be to expand who we are without losing what’s good about us.

Whether that’s possible, we have no way of knowing. Maybe being a limited human or a god-like psychopath are the only two available options. Although it seems that even then, it must be a matter of degree - of how much longer we could live, how much more we could know, or how much more we could be capable of before it starts interfering with healthy human psychology. Then there’s the question of what a fundamental expansion of sensory perception or awareness would do to us.

One of the quite possible technological forms of expansion of our capacities would be the expansion of perception capabilities or mental faculties. What if we were able to observe the whole spectrum of light radiation and sound waves? What if we were able to look at reality in more dimensions? What if we could alter the subjective rate at which we perceive time, or in other words, the speed of our thinking? We would certainly encounter some new experiences and phenomena.

All things considered, we definitely support the idea of expanding our horizons in the future, whether in time, space, knowledge, experience, mind, dimensions, or fundamental capacities, assuming we don’t become too distracted or inhumane in the process. If all of our future were to be just more of the same, then it in all likelihood wouldn’t be a very long future. We believe that the only way to keep going is forward. What’s left is to figure out what to do when we meet someone else.

6.1 Official First Contact Protocol

Assuming we start to expand beyond our local horizon, whatever that would mean in practice, we’re bound to encounter alien intelligence of some kind, sooner or later. Indeed, many people are convinced that we have already encountered some sort of alien intelligence, albeit unofficially. This final section of the SGEARS charter represents our best ideas for how an official first contact should be made.

Unfortunately, and somewhat surprisingly, there doesn’t appear to be any official public protocol for first contact with alien intelligence. There is one document published by SETI, called Declaration of Principles Concerning the Conduct of the Search for Extraterrestrial Intelligence, which was published in 1989 and revised in 2010, but it is rather basic, to put it mildly. It mostly covers contact confirmation.

We agree with the SETI guidelines that evidence of alien life should be continuously searched for in a transparent manner, and that any candidate evidence for a detection needs to be shared with the international scientific community and independently verified. We also agree that a global representative body should then decide what to do with that information. But that’s only a lead-up to first contact.

A real first contact protocol needs to propose what should be done in any basic category of contingency relating to conceivable first contact scenarios, as well as include the general principles of how any first contact situation which wasn’t predicted should be approached. This is therefore what we’re going to draft. We suspect such documents must already exist, but are classified, which breaches the already stated principles of transparency, representation, and internationality.

As a process, first contact is likely to include most or all of these steps:

  1. SEARCH

  2. DETECTION

  3. CONFIRMATION

  4. THREAT ASSESSMENT

  5. RESPONSE

  6. RELATIONS

Regarding search, detection, and confirmation, the basic SETI guidelines are sensible, with the caveat that search shouldn’t be limited to only a single idea of how alien life could be detected, such as via radio signals. While this isn’t specified in the SETI document, the actual methodology of the institute’s efforts, as well as that of any other official efforts, has been fairly narrow so far. In order for the search protocol to be comprehensive, it needs to cover all possible targets:

  • Both far and near alien life.

  • Both primitive and intelligent life.

  • Both living aliens and alien technology or artifacts.

  • Both alien life like ours and alien life unlike ours.

  • Both plausible alien activity and implausible alien activity.

  • All imagined and reported classes of aliens, including extraterrestrials, artificials, cryptoterrestrials, hybrids, anomaloids, ultraterrestrials, extratemporals, interdimensionals, and any additional classes as they arise.

Comprehensiveness also requires coverage of all established methodologies:

  • Detection of radiation emissions, cosmic rays, and gravity waves.

  • Spectrographic analysis of exoplanets for signs of life or industry.

  • Sending of unmanned probes into space.

  • Sending of manned missions into space.

  • Satellite monitoring of Earth’s surface and orbit as well as deep space.

  • Global network of ground-based sensors and drones.

  • Global expeditions to study sites of interest or recover materials or specimens.

  • Long-term observations of anomalous locations in situ.

  • Covert observation or misdirection-based investigation.

  • AI or citizen science-assisted data analysis.

  • Reverse engineering of recovered anomalous materials and artifacts.

  • Biological and medical testing of recovered samples and experiencers.

  • Social and anthropological research into alien-related testimonies and lore.

  • Active attempts to initiate contact with alien intelligences.

These lists may not be complete and should only serve as a starting point for a panel of experts from relevant fields who should decide which of these approaches should be prioritized to what extent at any given time, as circumstances change and our understanding evolves. Prioritization in this context means what portion of available resources should be allocated to each of these avenues of inquiry.

We only caution against ignoring, or especially ridiculing, any available search targets or methodologies. Relying on probabilistic razors to rule out any of these possibilities out of hand is inadvisable at this point, in light of the extent to which sensationalism, stigma, and psyops have obstructed or muddied all past research in this area. Due diligence dictates no possibility should be left entirely unexamined.

If probability-based maxims are to be used to determine search priorities, they should be more like the existing Rio scale in use by SETI, which is similar to the previously discussed Torino scale for cosmic impactors. In this example, the relative significance of a detection on a scale from 0 to 10 is based on how close to us the source of a signal is, how feasible it is to engage in communication with it, how likely it is that the source is aware of us, and how likely it is that it’s real and truly alien.

With this type of procedure, even if all assigned probabilities are very low, they could still be sorted in order from most significant to least significant, and it would be possible to calculate the relative proportion of significance between individual detections, and ideally also approaches. We believe that a similar scale should be designed to evaluate search targets and methodologies, weighing their cost, chance of successful detection, and level of potential reward. It could work like so:

Merit = (Reward - Cost) * Chance

Legend:

Ch = Chance, 0-1 (use 1 - Ch if R - Co is negative)

R = Reward, 0-100

Co = Cost, 0-100

M = Merit, [-100, 100]

The cost value could simply equal the percentage of the total available budget that would have to be expended on a search strategy, while the reward value should be more qualitative and composite, reflecting the agreed-upon goals of the expert panel and the preferences of the (paying) public. A number of different types of potential gain can each constitute a portion of the total reward value, together adding up to the maximum of 100. For example, there could be 5 types of gain valued at 0-20 each, or 20 types valued at 0-5 each, and the portions could also be uneven.

If a search effort was made that was purely privately funded and for-profit, then we suppose one could also calculate merit more simply as potential gain minus expenses times chance of detection. However, we strongly encourage public search efforts that also value potential gains other than financial return on investment or immediate technological advancement, like learning more information about other life, or achieving direct contact or establishing relations with alien intelligence.

As for the chance value, it could represent simple probability from 0-100%, but chances are that all search targets and methodologies will appear to have low absolute probabilities of success. It may be more practical to set the value of 1 as the highest of assigned probabilities among all available search options, meaning that the value would show how close each search option is to the one that’s most likely to result in a successful detection. Either way, the chance value needs to be inverted if the relative reward value is lower than the relative cost value.
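The scoring procedure described above can be sketched in a few lines of code. Everything here beyond the Merit = (Reward - Cost) * Chance rule itself, including the function name, the example search options, and their numbers, is our own illustrative assumption, not part of any official scale.

```python
def merit(reward: float, cost: float, chance: float) -> float:
    """Merit = (Reward - Cost) * Chance, with Chance inverted to
    (1 - Chance) when Reward - Cost is negative, so that a likely
    but net-negative option scores worse than an unlikely one."""
    assert 0 <= reward <= 100 and 0 <= cost <= 100 and 0 <= chance <= 1
    net = reward - cost
    effective_chance = chance if net >= 0 else 1 - chance
    return net * effective_chance

# Hypothetical search options: (name, reward, cost, chance).
options = [
    ("radio sky survey",        40,  5, 0.30),
    ("exoplanet spectrography", 60, 10, 0.20),
    ("anomalous site vigil",    80, 15, 0.05),
    ("crewed deep-space probe", 70, 90, 0.10),
]

# Rank the options from the most to the least worthwhile to pursue.
ranked = sorted(options, key=lambda o: merit(*o[1:]), reverse=True)
for name, r, c, ch in ranked:
    print(f"{name}: merit = {merit(r, c, ch):+.1f}")
```

Note how the crewed probe, despite a high potential reward, lands at the bottom: its cost exceeds its reward, so its relatively high chance of success is inverted and counts against it.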

Let’s review the basic types of search options as expressed with this type of equation, sorted from the least to the most worthwhile to pursue:

  • Low gain, high cost, low chance = high negative result.

  • Low gain, high cost, high chance = medium negative result.

  • Low gain, low cost, low chance = low negative or positive result.

  • Low gain, low cost, high chance = low negative or positive result.

  • High gain, high cost, low chance = low negative or positive result.

  • High gain, high cost, high chance = low negative or positive result.

  • High gain, low cost, low chance = medium positive result.

  • High gain, low cost, high chance = high positive result.

Let’s call this provisional idea for a research prioritization scale Alien Life Investigation Evaluation Net Score, or A.L.I.E.N.S.

Regarding threat assessment, while we are very much in support of engaging exclusively in peaceful relations with alien intelligences, we do concede that there may be alien intelligences that do not share this point of view, or the contact with which would present tangible dangers. Once a detection is made and confirmed, an impartial, classified, globally-minded analysis therefore needs to be made of any potential risks associated with any available kind of response to the situation.

Generally, we find the term “risk” to be preferable to “threat” in this context, as “risks” can be independent of negative intentions. We believe that all things considered, it is highly unlikely that a sufficiently advanced alien civilization or entity capable of crossing light years or dimensions would be hostile to us, and if it were, that there would likely be nothing we could do to stop it from wiping us out or taking control.

The reason why we stress impartiality and global-mindedness here is that if any alien intelligence-related threat analyses have been conducted so far, the “threat” was likely evaluated from a much narrower perspective, one which was likely at odds with what is in the best interest of the people of Earth as a whole. Eventualities like gaining better technology, and thus making whole industries obsolete or each individual more powerful and therefore harder to rule, are opportunities, not threats.

With that said, there may be legitimate risks associated with initiating contact or relations with alien intelligences, which may justify not making all of the findings public immediately. Legitimate risks that require serious consideration include:

  • Panic and unrest or religious conflict.

  • Alien contaminants and contagions.

  • Proliferation of weapons of mass destruction.

  • Subjugation.

  • Suicide pact technologies, or Trojan horses.

  • Infohazards, or dangerous memes or ideas.

In most conceivable scenarios, the alien intelligence would likely already be aware of our existence, which is why some level of disclosure of the findings to the public is almost always a reasonable course of action, assuming that global advancement, whether scientific, economic, or spiritual, is the primary goal. In the interest of global security, a gradual, careful, or even incomplete disclosure or implementation of new technologies could be prudent, assuming it follows a legal process with oversight.

Which brings us to responses and relations. To an extent, optimal responses and possible relations depend on the nature of the contact scenario at hand, but some fundamental steps may always be applicable. Let’s start with a list of basic conceivable alien contact scenarios, from the lowest-risk to the highest-risk:

  • Distant passive observation.

  • Distant active communication.

  • Near passive observation.

  • Near active interaction.

  • Distant/slow invasion.

  • Covert infiltration.

  • Imminent/fast invasion.

The main meaningful types of difference between contact scenarios, from the standpoint of which response would be optimal, have to do with distance, or the amount of time we have to prepare a response, and alien intent, or whether the aliens appear to be passive, interfering, or hostile. Here are our recommendations for responses and relations for all possible combinations of these variables:

  • Distant passive - with sufficient distance, the aliens may be unaware of our existence, and any real physical or simultaneous relations may be impossible. In this very safe type of scenario, letting the public know everything and fully engaging the scientific community is the most reasonable course of action.

  • Distant interfering - assuming the distance isn’t overwhelming, or some advanced alien technology mitigates it somewhat at least for the purposes of relatively simultaneous communications, a more cautious response may be warranted. Through active communication, suicide pact technologies and infohazards start being a concern, and if the travel time between us and the aliens is short enough for visitation to be possible, if difficult, biological contagions or military conflict also enter the realm of possibility. In this type of situation, official diplomatic relations and a communication channel should be established, and the public should be made aware of at least the basic information about the aliens. Contents of the initial communications could be kept provisionally classified, until deemed safe for public release or commercial application. The alien system or approaching vessel or vessels should be carefully monitored, and appropriate early warning systems or passive defenses should be constructed, enabling us to intercept and quarantine any visitors, if necessary, and project some reasonable level of deterrence. The parameters of acceptable defense measures should ideally be negotiated with the other side with the aim to form a mutual legal agreement. We must be careful to ensure that our defensive posture isn’t misinterpreted as a buildup for future aggression. If at all possible, open peaceful relations should be pursued, allowing for a mutually beneficial exchange of information.

  • Distant hostile - the one positive aspect of this scenario, an early detection of a slow military buildup or incoming fleet, would be that we would have time to prepare. What that preparation would entail would depend on the specifics of the looming threat, or the nature of expected confrontation. Most likely, it would mean construction of substantial defensive space infrastructure on and around Earth and across the solar system. But regardless of any preparations, mutual destruction may be a likely outcome if long-distance hostilities of any type break out, similar to what’s expected in the event of nuclear ground war. Even a basic kinetic impactor accelerated to relativistic speeds, a so-called relativistic kill missile, would be very hard to detect and intercept in time. Regardless of what some game theorists may propose, we do not support any preemptive strikes against alien worlds or bases with the goal of eliminating all adversarial alien life, presumably before they do it to us first. We may get wiped out, but we believe that some life left in the universe is better than none. However, building an apparent capability to strike back for the purposes of deterrence, along the lines of the mutually assured destruction doctrine, may be worth considering in the most dire of circumstances. This is also the only scenario in which not letting the general public know anything may be justifiable, particularly if there isn’t much we can do to peacefully resolve the situation.

  • Near passive - arguably, this is the most likely type of scenario we could already be in right now. Whether that’s the case or not, this type of present but presumably stealthy non-interaction would likely indicate a lack of immediate hostility, a focus on observation, and a doctrine of non-interference, within a set of parameters that includes our current type of civilization. Given some major remaining uncertainties due to the cryptic nature of such an encounter with what probably is a very advanced alien force, we strongly recommend not engaging in any hostile actions against the alien presence. Following most of our own intuitive principles, self-defense of some kind is to be expected, and beyond that, the alien non-aggression, or more broadly non-interference, may be conditional on their own threat assessment of us. If the passive presence is fundamentally benevolent, then their line could be drawn at something like us trying to initiate a nuclear war, or at peaceful official contact, in which case they could attempt to shut down the apocalypse for our own benefit as well as theirs, or to actually start directly communicating with us. If the passive presence has a more nuanced or negative motivation, we may be under some sort of ongoing evaluation or quarantine. In any of these scenarios, the most reasonable course of action would be to not take any dramatic actions, especially not overt or covert hostilities aimed against the aliens, the planet, or ourselves, while informing the general public and inviting open peaceful contact.

  • Near interfering - if any indications are detected that a nearby alien presence is directly interacting with human beings on Earth, then classified investigation would be a reasonable first response. To be clear, the primary reason for the classification wouldn’t be to hide the knowledge of the encounters indefinitely from the public, but instead to protect the ongoing investigation. Once the investigation is concluded and the case closed, careful disclosure should follow. Based on existing reports, the most likely near alien interactions would be limited and include communications between aliens and select individuals or communities; abductions ranging from medical check-ups to kidnappings; trickster-like behavior, possibly for alien entertainment; accidents; and deliberate infiltration or other activities analogous to our intelligence work. In the absence of open official contact with an alien government, these interactions would likely represent limited alien initiatives, possibly including criminal actions by the standards of some kind of overarching alien government. Alternatively, this category may also include open large-scale events, like a fleet of visitors suddenly announcing their presence, or a scientific program focused on the monitoring of our biology and biosphere, which would likely require transportation technologies that are advanced to a point where there’s no real distance between us and anyone else from anywhere else. In either case, the public should be informed and open peaceful relations should be pursued. Building large-scale defensive infrastructure or trying to enforce any sort of quarantine would likely be pointless if the aliens are already here or can get here swiftly from arbitrary distances in time and space. As the main danger in this type of scenario would be criminal or ill-conceived actions of individual aliens or small alien factions, the ideal arrangement would be cooperation with alien authorities on the mitigation of any risks their presence may pose.

  • Near hostile - assuming we’re not already being openly attacked (and likely overrun or overtaken, unless the attack is just a small-scale raid), the remaining possibilities are a nefarious covert infiltration, or a relatively slow and conventional invasion fleet nearing Earth. This would likely mean the aliens do not actually have overwhelming technological or tactical superiority, or are worried about or susceptible to retaliation from a more advanced alien third party. These may be the only types of scenarios where armed or covert resistance following the standard military and intelligence rulebooks would be advisable, if challenging and regrettable.

In conclusion, the ideal official first contact protocol ought to involve a continuous search for all conceivable types of alien presence, near and far, and, in the case of a successful detection, an attempt to strike a balance between security and disclosure, while seeking open peaceful relations to the maximum extent possible. In this way, we may soon discover we’re not alone, and make sure we save the world, together.

7 Global Organizations to Solve the World’s Problems

(The Pillars of Protopia)