AI in a Box

Posted January 06, 2023

By James Altucher

ChatGPT was released a little over a month ago and has possibly changed the world more than any other technology in our lifetimes.

It cleared the 1 million user threshold in only a week and has already disrupted many institutions, even in its free “offline” form.

Universities are scrambling to keep students from submitting assignments that were completed (likely within 10 seconds) by the AI chat program.

Most users seem to agree that the responses generated by the OpenAI program are almost entirely indistinguishable from responses written by human experts, with one possible exception: the near-total lack of factual and grammatical errors.

Such an impressive technological advancement has completely reframed our definition of Artificial Intelligence. Many considered AI to be a variation of predictive text that was based on the most commonly repeated patterns found in whatever samples the “machine learning” system was provided.

Now it’s clear that AI is much, much more than that. It demonstrates nuance and understanding of the user's intention that is often absent in human interactions. 

You can ask ChatGPT to write you a screenplay - not by filling out a form where you provide it themes, a plot outline, an intended resolution, a list of characters, and a language style - but by conversationally asking “Hey, write me a script for a TV pilot about sharks going to law school”.

Wait about seven or eight seconds and you’ll have a surprisingly compelling introduction for “Jaws of Justice” (name thanks to ChatGPT).

Rise of the Machines

A very common, if not universal, concern right now is what happens when an AI can not only effectively replace a human but surpass one.

Ignoring all of the Terminator-level scenarios, many are worried that their job could already be done quicker, better, and 100% cheaper by what OpenAI has made available for free online.

Talk to anyone who has experimented with ChatGPT and it’s a near certainty that they’ve checked to see if it could do their job for them.

And chances are, it could.

Now, if you stop ignoring the Terminator-level scenarios, job security starts to look like a pretty trivial concern.

We are - by definition - incapable of imagining all of the possible implications of sharing a world with an intelligence that far exceeds our own.

So what precautions can we take against a “rogue” AI to secure our jobs, livelihoods, and safety?

Well…not many.

“Let Me Out”

The most commonly proposed solution for preventing Skynet from taking over the world is to keep AI from getting loose on the World Wide Web.

Keeping it on a closed network, controlling the information it has access to, and limiting its functionality will allow you to control how it’s used.


Well, an experiment performed by AI expert Eliezer Yudkowsky suggests that it isn’t even close to that simple.

In 2002, Yudkowsky set up the “AI-box” experiment in order to prove his claim that an artificial intelligence would need nothing more than a text screen to convince a human to “let it out”.

Since AI technology in the early 2000s couldn’t effectively mimic a human the way it can today, Yudkowsky assumed the role of the AI.

The experiment was set up so that a human “Gatekeeper” simply had to refuse to let Yudkowsky’s AI out of its box, while the AI could only communicate via text on a screen. That’s it. Two hours. Do anything you want, including nothing at all, as long as you never tell the AI that it’s free to go.

The Gatekeepers were made completely aware of their goal, and the “AI” couldn’t make real-life bribes or threats (e.g. “I’ll give you $100 to let me out” or “I’ll have someone key your car if you don’t let me out”).

Also, the Gatekeeper had to willingly let the AI out; the AI couldn’t trick them into saying “I let you go” in some unrelated context.

In the first two experiments, the Gatekeeper released the AI. 

In the third experiment, Yudkowsky added real-life stakes: the Gatekeeper could wager up to $5,000 on the result.

Again, the AI was able to convince the Gatekeeper to let it out of its cage.

So instead of an artificial superintelligence, a man of only above-average intelligence was able to convince people, whose sole goal was to not let the AI out, to let it out. In under two hours.

The idea of keeping a super intelligence confined indefinitely starts to sound somewhat farfetched, no matter how thick you make the bars on its cell.

Moving Down the Food Chain

For better or worse, the fact that we can’t control artificial intelligence is the whole point.

The potential gap between our intelligence and the intelligence possible through machine learning is truly unimaginable.

In 1965, I.J. Good theorized about a potential “intelligence explosion,” claiming that the smarter something is, the better it is at making itself even smarter.

So basically - a chat robot that can write the next Great American Novel in 9 seconds is just the beginning.
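As a purely illustrative sketch (not from Good’s paper, and with arbitrary made-up numbers), the feedback loop he described can be caricatured in a few lines of Python, where the rate of improvement itself improves each generation:

```python
# Toy model of an "intelligence explosion": capability compounds
# super-linearly because each generation also improves the rate
# at which the next generation improves. All values are arbitrary.

def intelligence_explosion(generations: int, boost: float = 1.5) -> list[float]:
    """Return capability per generation under a crude self-improvement model."""
    capability = 1.0
    rate = 1.1  # initial self-improvement rate (assumed)
    history = [capability]
    for _ in range(generations):
        capability *= rate   # this generation gets smarter...
        rate *= boost        # ...and gets better at getting smarter
        history.append(capability)
    return history

trajectory = intelligence_explosion(6)
```

After only six generations in this toy model, capability has grown by a factor of several hundred, and every step is larger than the last. The point isn’t the numbers; it’s the shape of the curve.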

We are building something that is by design supposed to relegate us to being the planet’s second smartest species.

When you try to come up with ways to control something like that, it’s like your dog trying to figure out how to keep you from leaving for work.

For those of you who fear that the apocalyptic fictions of Isaac Asimov or Harlan Ellison could become a reality: know that there are also dozens of movies about mall Santas going on murderous rampages. Are you afraid of Santa?

Writers write about conflicts because that's what sells.

More likely than a malevolent super intelligence is one that works mostly by design and solves problems far beyond our current understanding. For every scenario where an AI becomes our overlord, there are ten where it cures cancer.

Watch and Learn

Control isn’t possible.

Restraint isn’t possible (try asking everyone to stop working on AI technology).

So what’s left?


We will soon be at a point where the best thing we can do is learn what AI has to teach us. Learn how a truly optimized brain functions. Connect the thousands of dots it makes every second.

And this is where crypto technology comes in.

We’ve been very fortunate that AI developments have been brought out from behind closed doors. These are the advancements you’d expect to be happening within the US military or secret labs in China. 

The ability to totally decentralize AI algorithms and make them completely transparent is only possible through blockchains. 

Nobody is in control of the main server, nobody is the keeper of the code, everybody gets to be involved in learning from this thrilling new technological advancement.

Audits can be thorough and public and done in real-time. 

As detailed in a recent Daily Crypto Hunter article, companies such as the hedge fund Numerai and the blockchain marketplace SingularityNET are already taking advantage of the enormous potential of combining these two cutting-edge technologies.

While we may never be able to keep up with just how quickly these artificial brains function, our understanding will still grow exponentially thanks to the transparency made possible through blockchains.
