Human minds are great at a lot of things, but one thing we struggle with is exponential change. We evolved to be linear thinkers, which means we see cause and effect as a straightforward process: action, reaction. In fact, we rely on this habit of mind so much that we often conflate correlation with causation; because two things happen in sequence, our brains assume one must have caused the other. So if we already screw up linear thinking…how are we supposed to deal with exponential change? Where we aren’t talking about a century of time to figure things out…but years, and maybe even months.
Consider the rate at which technology has advanced in a short period of time. From light bulbs and cars in the early 20th century, to the internet and smart everything today. Think about the advances in biology, and how we went from using leeches to heal a person to developing a vaccine in a year to battle a pandemic. It’s difficult to really comprehend how we are actually at the beginning of this exponential curve, not the end.
As we have developed more and more advanced machines, we have been able to automate processes that significantly increase our productivity and output. A global supply chain now gives anyone with enough capital, or a good enough idea to raise capital, the opportunity to develop almost any product. And let’s not forget Google and Wikipedia, which more or less gave us access to all human knowledge and have served us well in speeding up this process.
The idea here is that intelligence is all about information processing. We have built systems that let us process information ever faster and ever more accurately, so is there any reason to believe this progress will not continue along an exponential curve?
Take quantum computing, for example, which exploits certain interesting phenomena in quantum physics to dramatically increase computing power and speed. We haven’t perfected it yet, but many believe it’s achievable, and we know what happens when humans see a possibility within reach, even one as audacious as traveling to the moon in 1969, a feat that almost feels mundane now that we reach for Mars.
According to ChatGPT, “It's worth noting that the compute required to train a language model is often measured in terms of FLOPS (floating-point operations per second), which is a measure of the speed at which a computer can perform arithmetic operations. The FLOPS required to train a language model can be in the hundreds of petaflops, which is equivalent to trillions of arithmetic operations per second.” That says nothing of the compute required to continually run the model for millions of users. So what happens when these two fields converge, and suddenly the limiting factor is no longer how many advanced chips you can produce or how many server farms you can build? Even if you argue that progress has been relatively linear so far, can you at least see the potential for it to speed up at an alarming rate with even one parallel innovation? Or what if AI leads us to that innovation?
One of the primary worries AI experts have about this intelligence explosion is our inability to manage it effectively. Sam Harris, in his TED talk, considers an example of how this could occur. Let’s say we develop a self-improving artificial intelligence for research purposes, tasked with finding a cure for cancer. This is a problem researchers and companies have been trying to solve ever since we identified and categorized the disease in the 18th century (although apparently the Egyptians get first rights of discovery, back around 3000 BC).
According to ChatGPT (so take it with a grain of salt), “The PubMed database, which indexes biomedical literature, there are currently over 2.5 million articles on cancer published in scientific journals. This includes studies on various aspects of cancer, including its causes, prevention, diagnosis, and treatment.” Among those, I’m sure, are meta-analyses, which combine studies to look for reliable patterns and statistical evidence, a fairly new technique all things considered, and one that has propelled many fields by consolidating large bodies of research. Now imagine an AI researcher gets hold of that data with the ability to analyze it just 10x faster than a group of human researchers, and is also able to act as its own peer reviewer. Let’s say, just for argument’s sake, that the top researchers put out 5 papers a year, and the top drug companies put out a drug every 5 years. (By the way, these numbers and the 10x are significantly conservative estimates of how quickly these analyses could be done with AI, but let’s keep things at an understandable, tangible range.) This would mean the AI produces 50 papers a year and a successful drug every half year.
Putting aside all the researchers who are now largely irrelevant beyond coordinating data collection…I just want you to imagine how this process scales. AI gains knowledge, improves itself, gains more knowledge faster. Rinse, repeat. This recursive loop of knowledge gain and self-improvement is the promise of every self-help book…except here it will actually work, and at a rate that outpaces humanity.
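If you like, here is the back-of-the-envelope arithmetic as a few lines of Python. The 5 papers a year and the 10x speedup are the toy numbers from the example above; the 50%-per-year self-improvement factor is purely an illustrative assumption I’ve added to show what compounding does, not a forecast:

```python
# Toy sketch only: numbers are from the blog's example, except the
# self-improvement factor, which is an assumption for illustration.
HUMAN_PAPERS_PER_YEAR = 5    # a top human researcher (from the example)
AI_SPEEDUP = 10              # AI starts out 10x faster (from the example)
IMPROVEMENT_PER_YEAR = 1.5   # assumed: the AI gets 50% faster each year

def cumulative_papers(years: int) -> tuple[int, float]:
    """Compare cumulative output of one human vs one self-improving AI."""
    human_total = HUMAN_PAPERS_PER_YEAR * years
    ai_rate = HUMAN_PAPERS_PER_YEAR * AI_SPEEDUP  # 50 papers in year one
    ai_total = 0.0
    for _ in range(years):
        ai_total += ai_rate
        ai_rate *= IMPROVEMENT_PER_YEAR           # recursive self-improvement
    return human_total, ai_total

human, ai = cumulative_papers(5)
print(f"Human after 5 years: {human} papers")  # 25
print(f"AI after 5 years:    {ai:.0f} papers") # 50 + 75 + 112.5 + ... ≈ 659
```

Even with these deliberately tame numbers, the gap after five years isn’t 10x anymore; it’s over 26x, and it keeps widening every year. That is the shape of the curve our linear brains keep underestimating.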
And yes, I know, you contrarians out there are probably thinking, “Yeah, but drug trials take time, and there is FDA approval, and all these things need to happen.” Again, I will tell you: pay less attention to the specific example, and more to the idea behind it. AI has the capability to accelerate science at a mind-numbing speed, to the point that our own best and brightest would not be able to comprehend it, let alone replicate it. In our rush to harness these advances, we would use them before fully understanding them, and they would likely appear as miracles to us. I wouldn’t be surprised if a religious group pops up, touting AI as our new ambivalent techno-god, and we pray to it by feeding it prompts…I mean prayers.
And you say, “Hell yeah! My grandmother, whom I loved, died of cancer, and what I wouldn’t give to have had a cure back then to save her.”
And Moloch nods along and says, “And think about all the other problems you could solve, all the people you could save. But be careful, people can also use this to do wrong. Create weapons, influence you to do what they want. You see the speed at which this technology grows, and you must be faster than the opposition in order to protect yourself. You are good, yes? So should you not be the one to wield this power, and lead humanity into the golden age?”
Let me remind you that you can sign the open letter to put a pause on AI.
Unlike many of the blogs on our website about AI education, which are co-written with ChatGPT doing most of the heavy lifting, these blogs are written by Joseph Rosenbaum, who will actively cite the use of ChatGPT whenever it has been used in an article, for transparency.