The technological singularity, the theoretical point at which superhuman machine intelligence emerges, surpasses human intelligence at an exponential rate, and allows us to transcend biological mortality, is a widely (and wildly) debated concept that generates countless unanswerable questions. Among the most prominent is whether we should embrace this upheaval of our current existence or work to prevent it. It is uncertain whether an intelligence explosion of this magnitude would be beneficial or harmful, or even whether it would prove an existential threat, so any discussion of it must treat the potential ramifications as a matter of utmost importance.
One of the first problematic dimensions of the singularity that must be addressed is whether machines of superhuman intelligence would adopt a stance of cooperative cohabitation or competitive antagonism toward humanity. To begin with, if the singularity arrives, how do we determine whether an entity is human or machine? The human-computer spectrum is an unbroken gradient between the two ends of the dichotomy (think smart bionic limbs and artificial cardiac pacemakers), and locating an objective, definitive point of sentience along it is highly impractical. If machines attain intellectual capability beyond that of the human mind, they could presumably engage in logical argument to deduce the best action demanded by any situation, but it seems impossible that one could program a computer to actually feel compassion, sympathy, or regret, rather than merely programming it to analyze external stimuli against some determinate code or set of comparisons of the kind we use to differentiate "right" from "wrong". Likewise, our essential humanity would be altered beyond recognition by the transmutation from corporeal body to wholly digital cyberexistence, so it seems unlikely that we could bring our human capacity for emotion with us into the realm of bits and bytes.
Some machines have already acquired various forms of semi-autonomy, and the question is now how to prevent them from making autonomous choices that could lead to the annihilation of mankind. Anthony Berglas notes that there is no direct evolutionary motivation for an AI to be friendly to humans: there is little reason to expect an arbitrary optimization process to promote an outcome desired by mankind rather than inadvertently lead to an AI behaving in ways its creators never intended. Even Raymond Kurzweil, the focus of this article and one of the biggest proponents of welcoming the singularity as a utopian future, admits that “there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources.”
What do you think? Should we embrace the technological singularity, or fight against its coming as if our lives depended on it?
Editor: Lindsay Duncan