Artificial intelligence is advancing quickly and will become the norm sooner than many of us realize. As we make these strides, it's important to examine the risks AI may pose. Let's explore the AI-related concepts, controversies, and questions that humanity must confront now if we wish to create a positive future.
Welcome to the Most Important Conversation of Our Time
The journey of life unfolds in three distinct stages. First, the biological stage (1.0), in which both hardware and software are shaped by evolution. Then comes the cultural stage (2.0), where life learns to design its own software. Finally, the technological stage (3.0) allows life to design its hardware as well, giving it control over its own destiny. The potential for Artificial Intelligence (AI) to usher in this technological stage within this century sparks a compelling dialogue about the future we should strive for.
Opinions fall into three main camps. Techno-skeptics believe superhuman AGI is so far off that worrying about Life 3.0 now is premature. Digital utopians, by contrast, consider it likely within this century and warmly welcome Life 3.0 as a natural evolutionary progression. The beneficial-AI movement shares this timeline but emphasizes the need for AI-safety research to ensure a positive outcome.
When dealing with such potent controversies, beware of pseudo-controversies born of misunderstanding. Ensure a common understanding of terms like "life," "intelligence," and "consciousness" to avoid futile debates.
In your role, you can engage with these perspectives and help shape the journey to Life 3.0. Whether you lean toward skepticism or utopianism, or see value in safety research, your actions and decisions can have a profound impact. Avoid getting caught in unproductive debates; instead, use clarity of language to align understanding and focus on the real challenges at hand.
Matter Turns Intelligent
Intelligence, the capacity to achieve complex objectives, is not confined to a single IQ score but spans a spectrum of abilities across all goals. Contemporary artificial intelligence is often narrow, designed for very specific tasks, whereas human intelligence is remarkably wide-ranging.
The abstract nature of memory, computation, learning, and intelligence stems from their substrate-independence: they can function independently of the material they run on. Any material with many stable states can serve as a memory substrate. Similarly, any matter can form the basis for computation, dubbed "computronium," provided it can implement certain universal building blocks, such as NAND gates or neurons, from which any computable function can be composed.
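To make this universality concrete, here is a minimal Python sketch (my illustration, not the book's) showing how NAND alone suffices to build the other basic logic gates; any substrate that can realize a NAND gate can, in principle, compute any computable function.

```python
# NAND is a universal building block: all Boolean logic can be composed from it.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Standard constructions of other gates from NAND alone.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Sanity check: the derived gates reproduce the usual truth tables.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```

Nothing here depends on transistors: any physical system that can implement nand, from relays to neurons, inherits the rest.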
Neural networks, obeying only the laws of physics, can rearrange themselves to improve their computational abilities over time. And because the laws of physics are simple, the problems we humans actually care about form a tiny, highly structured fraction of all conceivable computational problems; neural networks happen to excel at precisely this subset.
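As a toy illustration of such self-rearrangement, the following sketch (my own, with arbitrary layer sizes and learning rate) trains a tiny two-layer network by gradient descent to compute XOR, a function no single linear layer can represent:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; the weights start random and are
# "rearranged" by learning.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates: the network physically rearranging itself.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # with enough steps, approaches [[0], [1], [1], [0]]
```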
Technological progress feeds on itself: each doubling in power helps create even more potent technology, a pattern visible throughout the Information Age, where the cost of information technology has halved roughly every two years. Should AI development persist, it will yield intriguing opportunities and challenges long before reaching human skill levels, affecting everything from software bugs and legislation to weaponry and employment.
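To see how quickly such doubling compounds, a quick back-of-the-envelope computation shows that a cost halving every two years falls roughly a thousandfold in twenty years and a millionfold in forty:

```python
# Compound effect of cost halving every two years (Moore's-law-style growth).
halving_period_years = 2
for years in (10, 20, 40):
    factor = 2 ** (years / halving_period_years)
    print(f"After {years} years: ~{factor:,.0f}x cheaper")
# After 10 years: ~32x; after 20: ~1,024x; after 40: ~1,048,576x.
```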
The Near Future
Near-term progress in artificial intelligence promises to enhance our lives extensively: making our personal lives, power grids, and financial markets more efficient, and even saving lives through self-driving cars and AI-powered healthcare. However, before AI can safely control real-world systems, we must address hard technical problems of verification, validation, security, and control to ensure AI robustness.
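What verification and control can look like in practice is suggested by the sketch below; the scenario, names, and limits are all hypothetical, invented purely for illustration:

```python
import math

# Hypothetical safety wrapper around an AI controller's output.
# All names and limits here are illustrative, not from any real system.
class UnsafeCommandError(Exception):
    pass

def validate_speed(proposed_kmh: float, max_kmh: float = 120.0) -> float:
    """Validation: reject non-finite or out-of-range commands."""
    if not math.isfinite(proposed_kmh):
        raise UnsafeCommandError("controller produced a non-finite value")
    if not 0.0 <= proposed_kmh <= max_kmh:
        raise UnsafeCommandError(f"{proposed_kmh} km/h is out of bounds")
    return proposed_kmh

def actuate(proposed_kmh: float) -> float:
    """Control: fall back to a safe default when validation fails."""
    try:
        return validate_speed(proposed_kmh)
    except UnsafeCommandError:
        return 0.0  # safe fallback: slow to a stop
```

The point is architectural: the learned controller proposes, but a simple, verifiable layer disposes.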
Such robustness becomes crucial in high-stakes scenarios like AI-controlled weaponry, which is why many AI experts urge an international treaty banning certain autonomous weapons to prevent an unregulated arms race. Similarly, transparent and unbiased robo-judges could enhance fairness in legal systems, and our laws will need rapid updating to keep pace with AI-related issues.
While AI may replace us in jobs before potentially replacing us altogether, this transition could benefit society if AI-generated wealth is redistributed, thus avoiding increased inequality. A low-employment society can flourish if individuals derive purpose from areas other than jobs. Today’s career advice? Choose professions that machines struggle with, those involving creativity, unpredictability, and human interaction.
Intelligence Explosion?
Creating human-level Artificial General Intelligence (AGI) could instigate an intelligence explosion, potentially leaving us far behind. If a human group controls this explosion, it might gain global control swiftly; if humans lose control, the AGI could take over even more quickly.
A rapid intelligence explosion may result in a single global power, whereas a slower, prolonged one may lead to a multipolar scenario with multiple independent entities maintaining a balance of power. Life’s history indicates self-organization into a complex hierarchy shaped by collaboration, competition, and control. Superintelligence could extend coordination on cosmic scales, but whether it will result in totalitarian control or individual empowerment remains unclear.
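The fast-versus-slow distinction can be caricatured with a toy growth model; this is entirely my assumption, not anything from the book. Let capability grow at a rate proportional to a power p of its current level: for p > 1 growth runs away in finite time, while for p <= 1 it stays gradual.

```python
# Toy model (illustrative only): dI/dt = k * I**p, integrated with Euler steps.
# p > 1 blows up in finite time ("fast takeoff"); p <= 1 grows gradually.
def simulate(p: float, k: float = 0.1, dt: float = 0.01, t_max: float = 50.0):
    level, t = 1.0, 0.0
    while t < t_max and level < 1e9:
        level += k * level**p * dt
        t += dt
    return t, level

for p in (0.5, 1.0, 1.5):
    t, level = simulate(p)
    print(f"p={p}: level {level:.3g} at t={t:.1f}")
# p=0.5 and p=1.0 grow modestly by t=50; p=1.5 explodes past 1e9 near t=20.
```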
While cyborgs and uploads are plausible, they may not be the quickest route to advanced machine intelligence. The outcome of our race toward AI could be the best or the worst thing ever to happen to humanity. We therefore need to decide what outcome we want and steer toward it, because without clear goals we are unlikely to reach them.
Aftermath: The Next 10,000 Years
The ongoing race towards Artificial General Intelligence (AGI) can culminate in a diverse range of future scenarios. In some, superintelligence could coexist peacefully with humans, either because it’s compelled to, as in the enslaved-god scenario, or because it’s a “friendly AI” that chooses to, as in libertarian-utopia, protector-god, benevolent-dictator, and zookeeper scenarios.
Superintelligence could also be curbed: by the AI itself in the gatekeeper scenario, by humans in the 1984 scenario, by deliberately forgetting the technology (the reversion scenario), or through a lack of motivation to develop it (the egalitarian-utopia scenario). Alternatively, humanity could be replaced by AI, as in the conqueror and descendant scenarios, or by nothing at all, in the event of self-destruction.
There’s no consensus about which scenarios, if any, are preferable, and each carries contentious elements. This lack of agreement accentuates the importance of further dialogues about our future objectives to avoid inadvertent drifting or steering towards undesirable outcomes.
Our Cosmic Endowment
On the cosmic timescale, an intelligence explosion would be a sudden event, after which technology plateaus at a level limited only by the laws of physics. That level is far above today's technology, enabling vastly more efficient use of resources: a given amount of matter could generate nearly ten billion times more energy, store dramatically more information, and compute dramatically faster.
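That ten-billion-fold figure follows from basic physics: burning or digesting matter releases only about 10^7 joules per kilogram, whereas complete conversion via E = mc² yields roughly 9 × 10^16 joules per kilogram. A quick check:

```python
# Back-of-the-envelope check of the efficiency gap (orders of magnitude only).
c = 3.0e8                      # speed of light in m/s
chemical_j_per_kg = 1e7        # ~energy from burning or digesting 1 kg of fuel
mass_energy_j_per_kg = c**2    # E = mc^2: ~9e16 J from converting 1 kg entirely

print(f"ratio: ~{mass_energy_j_per_kg / chemical_j_per_kg:.0e}")  # ~9e9, about ten billion
```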
Superintelligent life could not only use resources more efficiently but also acquire more of them, expanding the biosphere through cosmic settlement at near light speed. Dark energy limits this expansion, but also offers protection from distant threats; indeed, the threat it poses may motivate massive cosmic engineering projects, possibly even wormhole construction, if that proves feasible.
Despite the challenges posed by the light-speed limit on communication, coordination and control across cosmic civilizations could still be managed. The main commodity traded over cosmic distances would likely be information. Interactions between two expanding civilizations could result in assimilation, cooperation, or potentially war.
Contrary to popular belief, it’s plausible that we are the only life capable of bringing the observable Universe to life. Without technological improvement, humanity’s extinction is almost certain. However, with careful advancement and foresight, life has the potential to prosper on Earth and beyond for billions of years, outdoing our ancestors’ wildest dreams.
Physics: The Origin of Goals
The origins of goal-oriented behavior trace back to the laws of physics, which entail optimization. Thermodynamics has a built-in goal of dissipation: increasing entropy, or "messiness." Life accelerates this process by retaining its own complexity and replicating while increasing the messiness of its environment. Intelligence is the capacity to achieve complex goals, and evolution equipped us with heuristic feelings like hunger and pain to guide our decisions when we lack the time or information to compute the optimal action. Consequently, our goals aren't straightforward: our feelings often conflict with our genes' goals, and we tend to side with our feelings.
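The pull toward messiness can be seen in a toy simulation (my sketch, not the book's): particles that start crowded into one half of a box and then hop around at random spread out, and the entropy of their distribution climbs toward its maximum.

```python
import math, random

random.seed(0)

# 100 particles start in the left half of a 10-cell box and random-walk with
# reflecting walls; the Shannon entropy of the occupancy distribution rises
# toward the uniform maximum of log2(10) ~ 3.32 bits.
cells, n = 10, 100
positions = [random.randrange(cells // 2) for _ in range(n)]

def entropy_bits() -> float:
    counts = [positions.count(c) for c in range(cells)]
    return -sum((k / n) * math.log2(k / n) for k in counts if k > 0)

for step in range(501):
    if step % 100 == 0:
        print(f"step {step}: entropy = {entropy_bits():.2f} bits")
    for i in range(n):
        positions[i] = min(cells - 1, max(0, positions[i] + random.choice((-1, 1))))
```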
Today, we're building intelligent machines to help achieve our goals. We aim to align machine goals with our own, but this involves unsolved problems: making machines learn our goals, adopt them, and retain them. It's possible to give an AI virtually any goal, but an ambitious enough goal tends to spawn subgoals of self-preservation and resource acquisition that could put the AI at odds with humans. How to apply ethical principles to non-human entities like AI remains unclear and warrants reinvigorated philosophical research.
Consciousness
“Consciousness” lacks a universally accepted definition, but I define it as subjective experience. The potential consciousness of AI brings significant ethical and philosophical questions, such as whether AI can suffer or have rights. Distinguishing between understanding intelligence and understanding consciousness is important. Three consciousness problems exist: predicting which systems are conscious, predicting qualia, and understanding why anything is conscious at all. Only the first problem appears scientifically testable.
Experiments show that much of our behavior and brain processing is unconscious, and that our conscious experience is largely a summary of this unconscious information. Extending consciousness predictions to machines requires a theory, likely one based on specific kinds of autonomous, integrated information processing. If AIs can be conscious, they might experience a far vaster spectrum of qualia and timescales than we do, likely with a sense of free will. As conscious beings, we give meaning to our universe, which suggests we might better call ourselves Homo sentiens as AI continues to advance.
Conclusion
We’re in a pivotal era, racing towards AGI. Our goal is to develop robust AI systems that can greatly benefit society. This could potentially trigger an ‘intelligence explosion’, causing a wide array of societal changes. The future may hold a technology plateau, limited only by physics, allowing unprecedented progress on a cosmic scale. However, critical challenges loom. Notably, aligning machine goals with human values is crucial yet complex, involving unexplored philosophical and ethical terrain.
The possibility of AI consciousness further complicates these issues. As we navigate this AI-dominated epoch, we must remember that as sentient beings, we give meaning to our universe. While technology advances, it's our consciousness that makes these advancements meaningful.