

AI Awakening

Implications for the Economy & Society

Speaker(s): Jack Clark


Comments

Q&A

Question: You spoke about scaling laws. Does the cost of inference/serving not impose an upper limit on how practically large the models can become?

The market will provide a Pareto-optimal tradeoff between speed and intelligence.

Question: How is Washington attending to AI advances?

Washington feels it dropped the ball on social media and is trying to get ahead of the curve this time. Ted Cruz has already been ranting about “Woke AI.”

Question: Can we poll Language Models (LMs) to get a proxy of what Americans think politically and socially?

A few years ago, researchers explored this and found that the models were calibrated; follow-up work supported this. That lowers the barrier to using LMs for polling and political analysis.

Question: How representative are LLMs of the political spectrum?

Models may be biased toward the political views of certain groups. There are two possible futures, depending on how steerable the models are. If a model can be prompted to role-play as a particular demographic, then it’s still useful; if it can’t be steered that way, then using it for this purpose becomes much harder.
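A minimal sketch of how these two answers could be operationalized: prompt a model to role-play different demographics and tally its answers, in the spirit of the LM-polling work mentioned above. The `query_model` stub, the personas, and the survey question are hypothetical stand-ins, not anything described in the talk.

```python
import collections
import random


def query_model(prompt: str) -> str:
    """Stub: replace with a call to a real chat-model API."""
    return random.choice(["Support", "Oppose"])


DEMOGRAPHICS = [
    "a 65-year-old retired farmer in Iowa",
    "a 24-year-old software engineer in San Francisco",
    "a 40-year-old nurse in rural Texas",
]
QUESTION = "Do you support or oppose a carbon tax? Answer 'Support' or 'Oppose'."


def poll(n_samples_per_group: int = 50) -> dict:
    """Sample each persona repeatedly and tally the answers."""
    results = {}
    for demographic in DEMOGRAPHICS:
        prompt = f"You are {demographic}. {QUESTION}"
        results[demographic] = collections.Counter(
            query_model(prompt) for _ in range(n_samples_per_group)
        )
    return results


if __name__ == "__main__":
    for demographic, counts in poll().items():
        print(demographic, dict(counts))
```

Whether the resulting tallies track real public opinion depends entirely on the calibration and steerability questions raised above.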

Question: I’m hearing you talk about the benefits of personality differences across LLMs, whereas performance improvements suggest a single model is the best way forward. Given how policy and financial incentives are structured, which direction do you think we’ll move in?

Dr. Fei-Fei Li is involved in a project on whether we should build a national compute cluster. England wants to train an English LLM with English values. Will capital markets in different geographies facilitate this? Currently unclear. Having only one or two big models opens us up to fragility; we probably want many models to be more robust.

Question: If an LLM possesses emotional knowledge that will harm a human, is it acceptable for the LLM to lie to protect the human?

You’ll probably want the thing that causes the least harm. Claude’s answer is much more detailed and nuanced: “In the end, there is no easy or universal answer here.”

Question: There’s a paper about how to make rabies airborne. I found it within < 1 minute of Googling. Is that evil?

There’s still a sequence of steps that need to be taken to make it happen. Being buried in the back of a highly technical paper is very different from providing a novice with exact steps to cause damage.

Follow-Up Question: What is “unsafe” about AI?

A small set of private actors are making private decisions about how to use AI.

Follow-Up Question: So your company’s censorship is the problem?

There’s a distinction between what’s happening now and what you want to happen. That’s why I stress that there should be national, international, and pluralistic discussions.

Question: Lay out the next 10 years. What’s going to happen, what startups should people make?

I think we’re going to see people try to build plumbing around the engines. Models can produce rich outputs, and currently we just show the raw output to the human. We want to refine that output into something very valuable. What does that plumbing look like?

One example: an LLM gets inputs, generates outputs, is asked to rank its own outputs, and the process repeats. This loop-like behavior improves the model’s final output.
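A minimal sketch of that generate-rank-repeat loop, assuming hypothetical `generate` and `rank` stubs standing in for real LM calls (no particular product’s API is implied):

```python
import random


def generate(prompt: str, n: int = 4) -> list:
    """Stub: sample n candidate completions from an LM."""
    return [f"draft {i} for: {prompt}" for i in range(n)]


def rank(prompt: str, candidates: list) -> str:
    """Stub: ask the LM which candidate best answers the prompt."""
    return random.choice(candidates)


def refine(prompt: str, rounds: int = 3) -> str:
    """Generate candidates, keep the ranked winner, and feed it back in."""
    best = None
    for _ in range(rounds):
        round_prompt = prompt if best is None else f"{prompt}\nImprove on this draft: {best}"
        candidates = generate(round_prompt)
        best = rank(prompt, candidates)
    return best


print(refine("Summarize the main points of the talk."))
```

The “plumbing” is everything outside the two stubs: looping, ranking, and deciding when the output is good enough to show to a human.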

LLaMA and Alpaca have been successful. People want the weights: with the weights you can do all kinds of additional research and build great products.

Question: You think it’s possible to incentivize safety for ML models outside of “ethical” companies?

Incentivize people to adopt safe models by making them better than the unsafe models. Constitutional AI is a good example of this: people prefer it to RLHF because it works better.
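For context, a rough sketch of the critique-and-revise step at the heart of Constitutional AI (Bai et al., 2022). The `query_model` stub and the two principles are illustrative stand-ins, not Anthropic’s actual constitution or API:

```python
CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is most honest and least evasive.",
]


def query_model(prompt: str) -> str:
    """Stub: replace with a call to a real LM."""
    return f"[model output for: {prompt[:60]}...]"


def constitutional_revision(user_request: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = query_model(user_request)
    for principle in CONSTITUTION:
        critique = query_model(
            f"Critique this response according to the principle: '{principle}'\n"
            f"Request: {user_request}\nResponse: {draft}"
        )
        draft = query_model(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {draft}"
        )
    return draft  # Revised responses can then be used to fine-tune the model.


print(constitutional_revision("Example user request"))
```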

Question: On the topic of safety, there was a letter calling for a 6-month pause on training. What’s your take? Sundar Pichai says Google will not sign the letter.

I went on holiday last week - terribly timed! Anthropic didn’t sign for a few reasons:

  1. We think letters like this aren’t effective. They cause people…
  2. The letter’s signatories are concentrated among a small number of organizations, e.g., Google and OpenAI.
  3. I think a solution is a national research cloud.
  4. The underlying problem is trust, both company-to-company and company-to-public.
  5. It’s hard to see a world with joint stops.

In 2015, I went to the White House and told Google that they were going to kill people with self-driving cars. By 2023, it was clear I was totally wrong.

Question: We’re talking about the centralization of power via these models. The architecture and data isn’t private. How do we prevent misuse from bad actors? Should we be worried about proliferation rather than centralization?

We have a team trying to understand harm. We don’t have a good measure for harmful capabilities.

Follow-Up Question: Should we be worried about machine-generated misinformation pushed with particular agendas?

In America, we currently allow very diverse free speech, and the law treats corporations as people with free-speech rights. How will this be any different? The society-wide answer so far has been to let things ride. But here, the speed and the scale are different.

Question: What are your AGI timelines, and what do you think P(extinction | AGI) is?