I Stopped Worrying About AGI, So Long As We All Step Up

Published by Dan on

Woman in dark suit alone at a stark white desk making a phone call in a bright antiseptic room with AI neural network patterns on the wall, evoking the 1964 film Fail Safe and the urgency of AGI safety decisions. Image generated by ChatGPT.

I am excited about artificial intelligence. It is an ever-present thinking partner I use to identify opportunities and deliver incredible value in every aspect of my life.

And I am also genuinely concerned, almost to the point of crippling existential dread, about what might happen if we don’t get Artificial General Intelligence, or AGI, right.

There is a very real possibility we won’t get this right. A company, a government, or a single bad actor could unleash a technology that wreaks incalculable damage on humanity.

But, although I have no direct control over what is going to happen with AGI, after watching the new movie, “The AI Doc: Or How I Became an Apocaloptimist,” I have hope.

Honest. I do.

Even though there are more than a few reasons not to.

Why Care About AGI

Told from the perspective of a young man about to become a father for the first time, “The AI Doc” explores the approach of AGI through interviews with some of the smartest people working in and around the technology.

After explaining what AI is, director Daniel Roher leads experts in the field through discussions of both why AGI could go terribly wrong and why everything will be fine. Considering that the scientists who build the newest AI models don’t fully understand what they’re capable of, I was almost gasping for breath as the pendulum swung from doom to delight.

The key takeaway for me, though, was why I should even care about AGI. Technological advances are not new. Why be concerned about this one?

As explained by Tristan Harris, cofounder of Center for Humane Technology, the reason is simple:

AGI is an inflection point because it means you can accelerate all other intellectual fields all at the same time…. That’s why AI dwarfs the power of all other technologies combined.

That got my attention. But it didn’t necessarily equate to optimism.

For that, I needed some historical context.

Reasons for Hope

I’m optimistic because, time and again, humanity has faced Big Hairy Monsters and made it through. Not unchanged, but we survived.

I think about the Cuban Missile Crisis, the Cold War, and the COVID pandemic. Each time, we worked through our differences and came out the other side. In other words, we figured it out because we had no other choice.

We’re faced with a similar situation now.

How do I know?

Anthropic and OpenAI both decided not to publicly release their latest models because they’re so powerful. In the case of Anthropic, it launched Project Glasswing so selected partners could “secure the world’s most critical software.”

As with the other crises I mentioned, we’re now approaching a possible AGI inflection point. But even with AGI looming on the horizon, I believe we will find a way forward.

We have no other choice.

Making it there, though, won’t come without you and me taking a few actions ourselves to help get us over humanity’s latest hurdle.

What We Need To Do

If we want to see AGI work out well for humanity, we must each get involved. Sitting back and letting others figure this out won’t fly.

Public pressure is needed to get governments involved in regulation. Some of the foundation labs are advocating for this, but they’re going to need help to make it happen.

Some governments are cooperating with AI foundation labs. Both sides need to be pressured to make sure this continues.

And the labs also must put skin in this game. Independent, objective third parties need to verify and analyze AI models, and we need a system to hold AI companies legally liable for the AI systems they produce.

None of these will be easy. Regardless, here are three simple ways you and I can get started:

  1. Contact an elected representative directly. Call or write to your U.S. Representative and both U.S. Senators. Be specific. Reference AI safety, model transparency, and liability frameworks. Calls are more effective than emails. You can find contact info at congress.gov. Five minutes, three calls once a quarter, is more impactful than it sounds.
  2. Support and signal-boost organizations doing the work. Groups like the Center for AI Safety, the Center for Humane Technology, the AI Now Institute, and the Future of Life Institute are actively working on exactly these policy levers. A modest recurring contribution and occasionally sharing research helps sustain the pressure.
  3. Sign and circulate petitions from credible organizations working on AI governance. The Future of Life Institute, Center for AI Safety, and similar groups periodically publish open letters and petitions directed at governments and labs. Signing takes two minutes. Sharing it with a brief personal note in a group chat, email thread, or online community adds social proof that moves people who trust you but haven’t paid attention to this issue.

I’m starting with the calls to elected representatives during a lunch break this week.

Join me, please, because, as Tristan Harris said in “The AI Doc”:

“What matters is that the forces that are working towards solutions start to exceed the forces that are working against solutions.”

And educate yourself by watching “The AI Doc.” It’s in theaters now and streaming on multiple platforms.
