
Moloch and Generative AI's Race to the Bottom Pt. 2: An Exponential Leap Forward


Moloch Pt. 1

Transformers! More than meets the eye!

I’m not sure about you, but I was completely caught flat-footed by ChatGPT and the similar generative AIs that have exploded onto the market in the last year, such as the art generation tool Midjourney. I feel like I have whiplash from the speed of advancement in this field and the tools it has produced. At first I thought this was an aberration, but when you look back to 2017 it’s a little easier to see this change coming, thanks to the advent of a deviously simple architecture called the transformer. It’s what the “T” stands for in GPT (Generative Pre-trained Transformer). You may have heard it referred to under the broader banners of deep learning or neural networks. There are nuances of course, but the important takeaway here is to understand the impact of this technology on the field of AI, and how, within a few years of its introduction, it has created an exponential curve in progress.

Prior to transformers, the field of AI was fairly siloed. You had your natural language processors over here, your speech synthesizers over there, and your art generators doing their own thing. Because the research was so isolated, each subcategory of the technology made incremental progress in its own niche rather than contributing to the greater whole, which resulted in slow advancement.

Then transformers came along and completely flipped the paradigm. Think of it as swapping out the engine that powered each of these fields for one that is far more generalizable. Whether you are working on art generation, code generation, or anything else, it’s all just a type of “language” that can be learned with the proper inputs and incentive structure.

Robots in disguise!

Here’s how it works. You set up what you want the AI to pay attention to, some expectations, and a reward system for accomplishing a task correctly. The task is often one of prediction based on data fed to the model. Take ChatGPT, for example: you feed it an almost unfathomable collection of text, then give it incomplete sentences and ask it to complete them. If the output is accurate and it correctly predicts the next set of words, it gets a thumbs up and is rewarded; if not, it gets a thumbs down and tries again.
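To make that concrete, here is a minimal, hypothetical sketch of next-token-prediction training in PyTorch, using a toy vocabulary and a single transformer block. This is not OpenAI’s actual training code; real systems run the same basic loop with vastly more data, parameters, and engineering.

```python
# Toy sketch of next-token-prediction pre-training (hypothetical; not production code).
import torch
import torch.nn as nn

# Toy "corpus": each integer stands in for a word in a tiny vocabulary.
vocab_size, embed_dim, context_len = 100, 32, 8
corpus = torch.randint(0, vocab_size, (1000,))

class TinyLM(nn.Module):
    """A single transformer block standing in for a full GPT-style stack."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.block = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(embed_dim, vocab_size)  # scores for the next token

    def forward(self, tokens):
        # Causal mask: the model may only look at earlier tokens when predicting.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.block(self.embed(tokens), mask=mask)
        return self.head(h)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # the "thumbs up / thumbs down" signal

for step in range(100):
    # Grab a random slice of text; the target is the same slice shifted by one word.
    i = torch.randint(0, len(corpus) - context_len - 1, (1,)).item()
    inputs = corpus[i : i + context_len].unsqueeze(0)
    targets = corpus[i + 1 : i + context_len + 1].unsqueeze(0)

    logits = model(inputs)
    loss = loss_fn(logits.view(-1, vocab_size), targets.view(-1))
    optimizer.zero_grad()
    loss.backward()   # nudge the weights toward better predictions
    optimizer.step()
```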

This process creates a raw pre-trained model, which can then be fine-tuned by something called Reinforcement Learning from Human Feedback (RLHF). You probably remember the story of Pavlov’s dogs, or Skinner boxes from your undergrad psychology class, and the idea is fairly similar for RLHF. You have humans rate and evaluate outputs from an AI system like ChatGPT, so the system can better predict the type of response the user is looking for. Like clicker training for AI. 
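As a rough illustration, here is a hypothetical sketch of the “clicker training” step: a tiny reward model trained on human preference pairs, again in PyTorch. In real RLHF, that reward model is then used to further fine-tune the language model with reinforcement learning, a stage omitted here.

```python
# Toy sketch of the human-feedback step (hypothetical; real RLHF trains a reward
# model like this, then fine-tunes the language model against it with RL).
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 32

# Tiny stand-in reward model: maps a response's features to a single score.
reward_model = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend these are embeddings of two responses to the same prompt, where human
# raters preferred the first ("chosen") over the second ("rejected").
chosen = torch.randn(16, embed_dim)
rejected = torch.randn(16, embed_dim)

for step in range(100):
    # Pairwise ranking loss: push the chosen response's score above the rejected one's.
    score_chosen = reward_model(chosen)
    score_rejected = reward_model(rejected)
    loss = -F.logsigmoid(score_chosen - score_rejected).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model then acts as the "clicker": during fine-tuning,
# responses it scores higher are reinforced.
```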

To summarize, transformer technology enables a simple process of pre-training a model by asking it to predict the next item in a sequence (a word, a musical note, a patch of an image) until it gets good enough, and then further improving it with human feedback.

Lost in a Digital Delta Stream

Why is this so transformative (pardon the pun) for the industry? Because when a computer can treat anything as a language, it can learn simply by getting guidance on predicting what comes next. I hope you can start to see how this scales across different modalities, especially considering ChatGPT became the fastest-growing consumer app in history…which means SIGNIFICANT real-world RLHF. Every time you use ChatGPT, your interaction goes into the system. Your identity might be secure, but your information is not. You probably heard the story about Samsung employees accidentally “leaking” proprietary information by pasting it into ChatGPT. I’m sure they are not the only ones who have suffered from this, because the wild-west nature of the application lets us get so much more done, helps us think through tasks, and acts as a co-pilot. But all that data goes into the black box of ChatGPT so it can get ever better at predicting the next word. At this point there is no retrieving what you input to ChatGPT, which is why Italy temporarily banned it in response to the “data breach involving user conversations and payment information.”

OpenAI quickly rolled out an option that lets users turn off data sharing with the model, and has announced enterprise options that likewise keep conversations from feeding the larger model. They also added an age verifier for Italy because of that country’s specific laws regarding children’s data. This is a great example of the iterative game OpenAI is playing, and a big part of their philosophy: release things in a deliberate sequence, test for holes, patch them up, and test again. I would also 100% bet they are using ChatGPT to help them generate and test ideas at a blinding rate.

It’s important to mention that Europe approaches privacy law differently than the US, through the General Data Protection Regulation (GDPR). It puts much more control of data in the hands of individuals, and requires organizations that collect data to give people the right to delete it or to see what has been collected about them. Except OpenAI is having difficulty doing that, because there is no reliable way to retrieve or delete specific information once it has been folded into a complex neural network with something like a trillion parameters. It takes the needle in a haystack to a whole other level.

Press Pause

I hope this gives you pause. Let me say this again: we currently have no way to retrieve the data you input to these models, and as they are updated, anything private you have shared could potentially be spit out from someone else’s account by prompting it correctly once an updated version is released.

This accelerating process is a compounding one, and the Generative Large Language Multimodal Models, the Golems, have been engaged on multiple fronts in a decentralized fashion, many of us using them for fun or to be more productive. You’ve likely tried these technologies and recognized their revolutionary nature. It’s almost addictive, right? Novel in some fundamental way.

And Moloch stands behind us, barely a shadow, whispering in our ears “Isn’t this amazing? Think of all the good it will do for you and others. Think of all the money you can make…just keep using it. Just one more prompt…What harm is there?”

And we listen because on an individual level, and in the short term, we have difficulty seeing what we’re sacrificing.