A Bigger Threat than Unfriendly AI

By John Pellman

If you sample the discourse of futurists and transhumanists, you’ll quickly discover a common talking point: if humanity develops a form of artificial intelligence that equals (via strong AI) or even exceeds (via a technological singularity) the intelligence endowed to us by natural selection, what prevents that artificial intelligence from being actively hostile towards humanity? Such a so-called “unfriendly AI” would strive to blow us all to dust, much as Skynet does in the Terminator film franchise.

I’d like to take a moment to dispel such speculative fears and replace them with some very real, contemporary ones. To provide some context, it’s helpful to go back to the years after World War II, when artificial intelligence research experienced one of its many frenzied booms. During these years, two parallel, non-contradictory schools of thought developed regarding computing. In one camp you might find Marvin Minsky and Seymour Papert (proponents of symbolic AI, which has largely been abandoned in favor of neural nets at this point, but remains influential via the legacy of LISP machines) and Frank Rosenblatt (the creator of the original perceptron). Complementary to this camp, however, was a sizable contingent of researchers discussing intelligence amplification.

These researchers (most notably J.C.R. Licklider, sometimes called a father of the internet) did not preclude the possibility of general artificial intelligence, but they did not expect it to arrive for quite some time (an assumption that likely still holds today). Instead, they viewed computers as adjuncts or co-processors to the human brain that would greatly increase its efficiency when engaging in goal-directed behavior. Their perspective has had tangible results that have served humanity greatly, and it is largely why an individual today can achieve in seconds (through scripting, automation, or even just by engaging with online interfaces) tasks that once took weeks. In more concrete terms, this group accurately forecast my ability to find a recipe for oat milk in under a minute. More importantly, the intelligence amplification provided by services such as Google has allowed me to find items that I never would have been able to find by practical pre-internet means, such as velociraptor pin-ups.

This brings me to my next point: general intelligence is not strictly related to the belief systems held by an individual. The fact that one human mind might be better at processing sensory input and making logical connections (i.e., have a higher IQ) does not automatically guarantee that the same mind will reach accurate conclusions about reality. Even the most intelligent humans among us cling to erroneous beliefs- it doesn’t take long to find a Nobel Laureate who has succumbed to Nobel disease or an engineer who has caught engineer’s syndrome. One of the highest-IQ humans alive (Christopher Langan) appears to be applying his exceptional mental abilities to generate a philosophy that is largely indistinguishable from Time Cube (or similar ravings of madmen). More broadly, it should be noted that individuals at all levels of intelligence are susceptible to cognitive biases.

To use a computer analogy, general intelligence is like hardware, and belief systems are like the operating system or a program. You could build a powerful desktop with an Intel i9, an NVIDIA RTX 2080, and 64 GB of RAM to run a protein-folding simulation, but if the code you’re running doesn’t use floating point arithmetic, the results of the simulation will be wildly inaccurate. Conversely, you could try running a very advanced program such as a raytracer on a 386 with 16 MB of RAM. Assuming Turing completeness (and a lot of painstaking debugging and porting work, perhaps with software implementations of modern hardware features), the 386 could theoretically complete such a program- it would just take substantially longer to finish. In the same way, a high-IQ individual (the powerful desktop) could be running a belief system that is glitchy or lacking important features, while a low-IQ individual (the 386) could have a mental model that is closer to reality. The latter individual would stumble and sputter slowly through existence, but ultimately their beliefs would be more accurate, even if they never did come to any definitive conclusions about reality during their lifetime.
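To make the hardware-versus-program distinction concrete, here is a minimal toy sketch in Python (my own illustrative example, not an actual protein-folding kernel): two loops run on the same machine and see the same measurements, but the one that throws away floating point precision arrives at a wildly inaccurate conclusion.

    # Toy illustration: the same "hardware" running two different "programs."
    # Both loops estimate the average of the same measurements, but one
    # truncates every value to an integer (a glitchy belief system), while
    # the other preserves floating point precision.

    measurements = [0.9, 1.1, 0.95, 1.05, 1.0, 0.98, 1.02]

    # Integer-only version: every intermediate value is truncated.
    int_total = 0
    for m in measurements:
        int_total += int(m)          # 0.9 -> 0, 1.1 -> 1, 0.95 -> 0, ...
    int_average = int_total // len(measurements)

    # Floating point version: intermediate precision is preserved.
    float_total = 0.0
    for m in measurements:
        float_total += m
    float_average = float_total / len(measurements)

    print(int_average)    # 0    (a wildly inaccurate conclusion)
    print(float_average)  # ~1.0 (close to reality)

The hardware executes both programs flawlessly; only the program determines whether the answer tracks reality.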

Given that even the most intelligent among us are imperfect, and that the applications of our intelligence do not always lead to accurate conclusions, it stands to reason that intelligence amplification might not only be increasing our efficiency at common tasks- it might also, right now, be increasing the efficiency with which we arrive at erroneous conclusions. These erroneous conclusions can be strengthened by phenomena well described in psychology, such as the “echo chambers” of groupthink or the tendency of group members to conform under social pressure. We can see erroneous beliefs being amplified today in the many mobs of internet trolls that lurk out there, and in the active misinformation campaigns orchestrated by foreign powers such as Russia. In short, the internet- through its ability to form collective intelligences on platforms such as Twitter and Facebook- might actually constitute a bigger threat to humanity than any hypothetical synthetic life form.