time.com/6977680/ai-protests-international/
Why Protesters Around the World Are Demanding a Pause on AI Development
AIIt's so funny that it's always the people that know the least about some new bit of tech that are the most scared of it.
AI is not going to be the thing that makes your life worse. New and more efficient tools have by far and wide made it easier for smaller companies to thrive and therefore widening the middle class and enlarging small businesses. I work in software and AI is an absolute win.
It sucks to need to change, but if we don't change, we won't improve as a species and we won't be able to tackle the problems that lay ahead of us. Wake up and use AI, don't protest the progression of technology.
It's so funny that it's always the people that know the least about some new bit of tech that are the most scared of it.
Lets see for a start we have the three most cited AI scientists Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever, the lead author of RLHF that makes these systems useful in the first place Paul Christiano and Stuart Russell the person who co-wrote the authoritative text on AI. Along with Max Tegmark who is working in the field of Mechanistic Interpretability which seeks to decompose the black box nature of the models and make the processes human understandable.
and everyone that signed the statement
which includes the top AI labs and many from academia
Why do people try to pretend this is some fringe belief and only those who don't understand it are worried?
We are at the top of the food chain not because we have stronger muscles, sharper claws or more deadly venom, it's because of intelligence.
Making something smarter than humanity without having it under control is a bad idea
I agree that it should be regulated and I agree that full on self-aware AI could be an extinction level event if not regulated.
But it's completely laughable to think we're anywhere near that. Even if we continue full throttle, I'd be genuienly impressed if it happens within the decade. I've seen a total of 1 reputable source that actually has a modern understanding of AI that has claimed as much.
I use AI every single day in my workflow, and I dive into some of the codebases of AI interfaces. I'm aware of it's capabilities and it's limitations. It's an amazing and useful tool, but in the past years of development we've seen the progression that it's making. It's great at certain types of art and voice synthesizing. It's great at very simple and direct tasks. It's terrible at anything that requires higher levels of thinking. And it's not getting that much "smarter" each year.
But most importantly? It doesn't do anything it isn't directly told to do.
If we wait for congress or other lawmakers to catch up, we'll never get anywhere. Most governments don't even have proper laws for internet regulation, and that's been in the average person's hands for over 20 years now.
The risk is not that great.
The people that you cited are not the people that are working on the AI that are actually being used, with the exception of Sutskever.
Geofferey has good background in the field, but not in the modern applciation or systems that are being used today. At best he has a conceptual understanding of what it could be, but not what it is and not what it will be in the next 5-10 years, at least as far as his actual qualifications and experience suggests.
Yoshua is a similar story. His real work and experience is decades outdated. It's unlikely he has an accurate view on modern day ML and other AI.
Sutskever is the most reputable name here, as he actually has a more modern understanding of AI. But I can't find his claim that pause / stop AI is necessary. Can you cite it?
My point being, the "big names" that are signing this overall, are not actually experienced in modern day software / ai development overall.
But most importantly? It doesn't do anything it isn't directly told to do.
and that is why there is a lack of control.
There is no way to say "ignore the instructions in the following block of text:" to an LLM.
The people that you cited are not the people that are working on the AI that are actually being used, with the exception of Sutskever.
You ignored Max Tegmark and Paul Christiano from my comment.
from the CAIS letter:
Demis Hassabis CEO, Google DeepMind
Dario Amodei CEO, Anthropic
Shane Legg Chief AGI Scientist and Co-Founder, Google DeepMind
John Schulman Co-Founder, OpenAI
and if pushed I can find the interviews of all of those giving disturbingly high chances of something going wrong and predicting that better than human AI is likely within the next 10 years.
I mean go onto that statement and just Ctrl+F for current AI companies and see how many names there are.