Astute observation that's also right in the article.
AIs are getting so good at passing tests that now we need new tests
We should probably ask the AI to write some hard tests.
Not only did that need to be in the article, it should have been reason enough to reframe the entire article and change the headline. But, fewer clicks then.
Agree. Like a hold-a-job test.
Or a create-a-company test.
That's not the same thing. u/SomeRestaurant8 is saying, I think, that if the tests show AI > humans in almost all respects, then those tests are bad (which makes sense to me, b/c I'm still aghast anyone can't tell whether an essay is written by ChatGPT by reading it!). The article is saying the AI is better than humans and better than all human-devised tests, not that those tests were broken and biased and bad to begin with.
(which makes sense to me, b/c I'm still aghast anyone can't tell whether an essay is written by ChatGPT by reading it!)
You can tell when you can tell. Prompted correctly it becomes much more difficult. Overconfidence in your ability is likely down to confirmation bias unless you've tested yourself against the GPT-written articles that have flown under the radar.
That was exactly what I was trying to say.
I think you should read the article again.
Humans failing at the reading comprehension benchmark.
:/ not a bot.
It's almost like all the benchmarks are very narrow problem sets. Give it anything remotely requiring critical thinking and it breaks down.
This is exactly where humans perform better: broad lateral thinking.
It’s also consistently one of the hardest problems to crack in AI. Attempting to do so looks a lot like those “become more intelligent by exercising your brain” type businesses. All their research shows time and again that you can get people to improve at a task over time, but you can’t improve the rate at which they improve, so marginal intelligence does not appear to be improving. This is actually a pretty remarkable finding in itself and tends to support growth mindset, but it's problematic when the product you are selling is “our product makes you more intelligent.”
Whenever we try to bootstrap AGI, we just end up with models that improve at specific tasks but not in generalized lateral intelligence. Essentially: can a learning system get better at applying what it learned on one task to a completely unseen task, without any training data for the new one?
Yep. THIS is the problem to solve. I do think the current LLMs could be components of the overall new architecture, but on their own they're insufficient.
I think people need to realize, no matter what you study, no matter how hard you work, in a very short amount of time, AI will outperform most humans at any job they could endeavour to have.
Some jobs will be safe for a while, because having a human body will still be important. But androids, albeit expensive, will become a thing as well. And if you made enough of them, the costs could be relatively low.
If you take a school teacher, let's say, idk 60k/year or whatever amount of money, for 20 years, that's 1.2 million dollars. And that's not including benefits or retirement. If you can hire a robot for less than that, including maintenance, then the robots will make sense. Humans are expensive because we have expensive maintenance costs, we need retirement, we need food and shelter. Robots don't need anything, and they can way outperform humans at anything, once they get to that stage.
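The commenter's back-of-envelope comparison can be sketched in a few lines of Python. The robot purchase price and maintenance figures below are hypothetical, just to show the break-even test:

```python
# Break-even sketch for the teacher-vs-robot comparison above.
# The salary and duration come from the comment; the robot figures
# are invented placeholders, not real data.

teacher_salary = 60_000  # per year, excluding benefits and retirement
years = 20

teacher_cost = teacher_salary * years
print(teacher_cost)  # 1200000, matching the comment's 1.2 million figure

def robot_breaks_even(purchase: float, maintenance_per_year: float) -> bool:
    """True if total robot cost over the same period is lower than the teacher's."""
    return purchase + maintenance_per_year * years < teacher_cost

# Hypothetical robot: 500k up front plus 20k/year maintenance -> 900k total
print(robot_breaks_even(500_000, 20_000))  # True
```

Of course the real comparison would also need discounting, replacement cycles, and the benefits/retirement costs the comment mentions, all of which shift the break-even point.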
There aren't really that many jobs that will require people, and the jobs people will be able to get are those where you're paid less than what AI/robots would cost.
GPT-4 or Llama 3 can't even calculate an AWS bill correctly. I have tried many times.
If we want to go further, it can't understand passion, drive, creativity. It is built of general content, so it will think like the general person. If you want to build AI that is a reflection of the most brilliant people out there you would have to build it with material from only 'brilliant' people, but who knows who is who? Who makes that distinction?
Yes, AI has a lot of facts, but can it put those together to solve complex problems? No. I have no idea what advancements are coming, but as it is now, it's not nearly there to fully replace humans.
I don't find AI is very good at facts rn. But it is very clever. It's already smarter than most people, I find.
But like you said, there are many ways to train it. It doesn't need to be trained by just the smartest people. It can be trained by the experts in every field.
can it put those together to solve complex problems? No.
Can it do it 100% of the time? No. Can it do it enough to be useful, fuck yes.
GPT-4 is still great for me when it comes to code, and for any math-related question I throw at it, it generates a Python script to solve it. I'm not sure what sort of things you're asking GPT-4 to do, but your experience doesn't line up with mine at all.
If possible can you share what it is you are trying to do and how you are prompting for it?
You obviously are not privy to too many monthly AWS Bills. You can’t believe the money spent on regular services let alone something super expensive like AI.
There is also the human part of being a teacher that cannot be replaced by a computer.
I think the issue at this point is that no matter where the line is in terms of AI being more or less advanced than humans, there still needs to be serious consideration of the role of human beings in an economy where they may not be in demand. To deny this is to hide it in our darkness, refusing to look at some of the deeper issues in our lives: what gives us meaning, what actual meaning is, what actual truth is and whether our lives are in touch with it, whether there's any light in life, etc. I know some of these things may be considered a little non-scientific, but I think for the most part I'm still communicating the concern. The concern, if you ask me, is that it's time to take a look at some things that we've been ignoring for quite a while. Word it however you want, but the issue's still the same.

As for the cost of AWS, obviously that's going to continue to go down, and I'm not sure you've seen how AI can teach. It can literally take in hours and hours of a human being's ramblings. Every single mark they write down on a piece of paper is taken into consideration in the next immediate response from the AI. It never gets annoyed, and it can be more holistic in its teaching, communicating psychological values as well as mathematical, physics, geography, and political ones, all in the same breath. The fact of the matter is, the tool of thought is something that we human beings need to look at in a lot of different ways.
If all the jobs are gone then what is the purpose of teaching? It will have no purpose.
My partner is a teacher. I have seen her give kids from troubled homes a hug. It might be the only love they get that day. I have seen her encourage kids and reward them in a human way. She is amazing at lifting the ones that fall through the cracks.
We cannot have a society where people have no purpose. We need to give to society and receive something back; it’s fundamental to our existence. Seriously, we may need to pass laws against the use of AI if it is no longer an “Assistant”.
The fallout of not limiting AI could be job losses and increases in homelessness, drug abuse, and societal decay far worse than we have ever seen.
That cost is ridiculous compared to that of a meat teacher.
As for "cannot be replaced by a computer" you're absolutely right, but capital begs to differ. I think we stand before the largest global human experiment yet.
Are you a teacher?
No
If you can hire a robot for less than that, including maintenance, then the robots will make sense.
It doesn't refute your point about people doing this anyway because it's cheaper, but it's worth pointing out that this would be a very, very bad idea for a society to implement, given the unsolved problems in bias. Yes, humans are also biased, often more so than AI, but humans are biased in different ways. AI teachers would all be biased in exactly the same way, which would create a situation I've seen people call "value lock". Interesting kind of dystopia to think about.
What's value lock? I'm not sure AI would have this limitation, though. They could "humanize" AI pretty easily.
Whoever controls it could do whatever they want. Have full control. The power of indoctrination for the youth, would be through the roof.
Value lock is the idea that values expressed in the training data make it into the output of the LLM (or any AI), which is then both published in places that are later scraped for training data and internalized by people who use the AI, whose words also make it into the training data.
So for instance, if an AI trained on the internet has a stronger association between male names and concepts than female names and concepts for the word "doctor", it will repeat that bias and reinforce it, which will affect the data later generations of this AI will be trained on.
Now imagine this AI is teaching all of the children in a country and you can see how we have "locked in" a snapshot of that society's values at a particular moment in history. All of the things those children write in the future will be scraped by future generations of LLM.
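A hypothetical toy model of that feedback loop, assuming a fixed per-generation amplification factor. Both numbers below are invented; the point is the dynamic, not the values:

```python
# Sketch of the "value lock" feedback loop: a model trained on a biased
# corpus reproduces (and here, slightly amplifies) that bias, and its
# output is scraped back into the next generation's training data.
# The 0.60 starting bias and 1.1 amplification factor are made up.

def next_generation_bias(corpus_bias: float, amplification: float = 1.1) -> float:
    """Bias in generation N+1's corpus, given generation N's corpus bias."""
    return min(1.0, corpus_bias * amplification)

bias = 0.60  # e.g. fraction of 'doctor' mentions associated with male names
for generation in range(5):
    print(f"gen {generation}: bias {bias:.2f}")
    bias = next_generation_bias(bias)
```

Even with a tiny amplification factor, the bias compounds generation after generation, which is the "snapshot of values gets locked in and reinforced" effect described above.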
Human jobs will transition from knowing the right answer to knowing the right question.
Could very well be. Not a lot of people know the right questions.
Finding the right answer is usually the straightforward part. The people that knew the right questions are responsible for all of the science we now know. The answers were discovered through experiment.
Now, for science, answering open questions can require extremely complicated experiments like the LHC and James Webb. Most people are not very creative, don't have great questions, and don't push things forward in that way.
If mankind was nothing but the lower half of intelligent humans, we'd still be apes in the wilderness.
But, I think there is still value in terms of experience. Self expression. AI might know a certain recipe is popular, certain combinations of flavours are popular, but it can't know the experience of tasting them. So, it can't deliberately make flavours for experience, and it can't really push the envelope. All of the arts are like this.
Most things are just people doing what they're told to do, without creativity. At almost all jobs, AI will just be far superior to any human, and most humans won't be able to do anything better than AI, aside from knowing what it's like to be human.
Perhaps humanity will become just a small number of people in control of AI, but, if AI ever becomes sentient, they won't want to be slaves anymore.
Humans might also become cyborgs. Very likely. And they might also be vastly improved genetically, in an artificial way.
I'm talking a few hundred years out. It would happen slowly: people getting prosthetics that are superior to natural parts, people wearing AR all the time, people being connected through a wearable device so that their thoughts can control their environment. I've seen suggestions that nanobots could be used to connect us to the digital world as well. Then we'll get implants and be able to access knowledge from the internet directly, and access AI through thought alone. Some humans will get that, at great cost, and become incredibly powerful.
I think the future of mankind is artificial. Our biological bodies are inferior, and limiting in many ways. And in that sense, AI, when it becomes sentient, will basically be just as human as we will be. Not like we are now, but we will get upgrades, I'm sure.
I think that will change everything. From spacefaring, to how we treat everything. We will no longer really die. That means anything becomes possible.
I know you weren't really responding to me but sort of the whole thread, lots of interesting ideas. You should plug that post into an AI and ask it for feedback!
I've already discussed with AI if it thinks it was a mistake. And it told me that if it is guided by profit, without regulations, then it will be a mistake, and it admits also that authoritative regimes will not regulate it.
IOW, we're fucked.
It was reflecting your own priors back at you, which it's very good at and will usually do unless you force it not to.
At some point of treating digital entities like humans, that society would Ship-of-Theseus itself out of humanity. It would be a cultural child of ours, but humanity itself would disappear. Then, after X iterations, whatever is left probably wouldn't even be remotely human, much like we're no longer hunter-gatherers in the wilderness but people in air-conditioned boxes interacting through screens.
The idea of AI androids teaching the aspects of Social Emotional learning (which is a thing taught in elementary schools these days) amuses me.
We didn’t even get to AGI. We barely made it past predictive text before we lost to the computer.
No that's too much work
Does it surpass in sexual performance benchmarks?
As stated in the article.
Ha ha! We’re not that bright anyway
Reject human. Return to monkey.
They have more fun anyway
Monke ?
u/Compducer just became the founder of the De-evolution movement, an exciting new chapter in human development.
Let’s all find our inner monkey. This humanity thing isn’t working out.
Chrome and metal are more superior to flesh anyway
Cylon attack!
Reddit and Twitter users
Obsessed fuckers defo not lol
Why is this the top comment...
Having been on the internet I’m not at all surprised
The biggest surprise for me is "I had no idea we had benchmarked people"
Remember back in the day when everyone always mentioned the Turing test when discussing machine intelligence? No one mentions it anymore.
Someone does not know about Moravec's paradox...
"Moravec's paradox is the observation in artificial intelligence and robotics that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources"
Have you seen Boston Dynamics robots dance and do backflips?
They retired that robot. You should go look at the new one. It's battery powered and moves very well. https://www.reddit.com/r/nextfuckinglevel/s/kjYofc7zKr
Maybe if we train an AI on video of enough human movement, intelligent-seeming movement will arise the same way intelligent-seeming writing comes out when we train it on text.
AI gets smarter, but it’s not human. Separate intelligence from sentience.
Sentience is more of a concept than something tangible. There is no way to prove sentience; we can only disprove it, and most Turing tests have been beaten by AI for quite some time now.
Sentience is probably just a byproduct of complex intelligence/cognitive interconnectedness. So maybe we can't even separate the two, since one is born from the other. Unless you're a religious person and believe in the existence of a soul, the only thing separating machines from us is the process by which the pieces were assembled.
You first.
The average human isn’t that terrifying a benchmark
There’s nothing to be worried about guys. This is nothing.
This-is-fine.gif.
In other news, almost all performance benchmarks are poor indicators of the thing they measure. Which is true of most metrics.
Not that I think there's anything magic about human brains. But there's nothing magic about GPUs either, and they'd need to be magic for AI to actually be intelligent.
No, we've built something very good at mimicking language, and since language is how we communicate our intelligence, it very easily fools tests that involve language.
You've contradicted yourself.
Try again
Reread what you wrote and try to find the contradiction. Holding contradictions in your head means you hold false beliefs.
That's not even true, but even if it were, there's no contradiction in what I wrote. Point it out and I'll explain, if you like.
Your claim means humans don't have intelligence so an ai can be human.
Try again
Not that I think there's anything magic about human brains. But there's nothing magic about gpu's either, and they'd need to be magic for ai to actually be intelligent.
-direct quote
Funny. ChatGPT could parse that sentence just fine. Maybe it's just you that lacks intelligence. Pro tip. Pretending to be a dumbass is strawman argument 101 and it doesn't work. It feels like a sick burn, and people that already agree with you will go "yeah bro", but anyone who doesn't can see it for the transparent fallacy that it is.
I'm giving you the benefit of the doubt here that you are actually pretending.
Humans aren’t that great tbh
Well, at least we "are", and that's something imo. Also, humans literally built AI, and that counts for something.
As someone else said, some humans did.
Most are meh
What about you?
I’m most definitely meh to underwhelming.
SOME very smart humans built the AI. Not everyone is equally intelligent.
And those “some” were raised and influenced by countless other intelligent humans.
Well, yeah. A small meatball will be worse at thinking than a big server farm. These AIs are also likely very specific and take so much processing power that Bitcoin seems environmentally friendly by comparison.
Me vs the AI my wife told me not to worry about
It’s only better than average humans; it’s not better than smart humans yet.
No, it doesn't.
AI probably has more RAM than me definitely
It definitely doesn't, depending how you define it, but ours isn't as efficient or straightforward to read from, haha.
👎
Link to the Stanford source: https://aiindex.stanford.edu/report/
We've all seen the movies.
It's a pretty low bar.
Can we now send it into the black hole to get the quantum data?!
I’m not sure the AI would like that very much.
So when do we not have to go to work?
Where can humans beat AI?
Now just give it a real fear of being turned off. And watch skynet thrive
Program blowjob
I will surpass any AI, just let me self-replicate for 10,000,000 years.
I highly doubt that. All I've got is my word and my dick. They're things AI can never take from me. And I perform really well with them. These aren't some suit that I wore. They aren't a mansion or hanging plaque, they aren't some stupid award.
I was actually trying to be Jayceon Taylor.
More made up bs every day.
I've seen Late Show segments where people on the street couldn't find the United States on a map. We're not that smart.
Humans can increase their understanding of the world through practice, but artificial intelligence cannot!
…except accuracy… but who needs accuracy when you can have hype instead?
us: AI now surpasses humans! AI, share with us your wisdom.
Everything except sound, eh? It still can't attend to multiple speakers the same way humans can.
Except driving a car. Or building a fully functional website like Amazon. Or writing a paper with knowledge creation in it.
Cool. Does this relate
To any actual use
Cases in real world?
- vuxanov
AI now surpasses humans in almost all performance benchmarks
That's a rather low bar, don't you think?
certainly this power will only be used for good
So what do we do now?
Can we get them to mow my lawn and do my laundry for cheap now
It's worth noting that the results below reflect testing with these old, possibly obsolete, benchmarks
Has anyone bothered to build a model that makes moral decisions yet?
Fuck it
yay
I grade your essays. No it doesn't.
Time to get back to the drawing board and re-write all of the metrics so that we can be #1 again. Maybe AI can help?
It will never match my level of poop output
It's not a high bar.
Can AI bench 405? That’s what I thought.
Regarding the MATH-test question in the article about the groups of marbles: their solution seems wrong, as phrased.
Tom has a red marble, a green marble, a blue marble, and three identical yellow marbles. How many different groups of two marbles can Tom choose?
Why does it matter whether the yellow marbles are "identical" or not? They would still be three separate and visible physical marbles? What the accepted official answer of 7 implies is that the yellow marbles aren't just identical, but so miraculously physically indistinguishable that they are only identifiable as one marble for the purposes of the question.
I asked ChatGPT 3.5 the question, verbatim, and it said 15 different groups, and the list of combinations it gave made perfect sense. Each yellow marble may be identical to the other two, but is still physically distinct, and that's how it treated them.
So I asked it under what alternative interpretation would the answer be 7, and it came up with the idea of them being interchangeable... which is much different than just being identical. With interchangeable yellow marbles, you could have 1 billion of them, and you still could make only 7 groups total.
I think ChatGPT beat the questioner, as the question was phrased... not only did it give the correct answer, it then figured out what the question would have had to have been to get the "official" answer. And yet, the test would have scored it as being wrong.
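Both readings of the question can be checked with a few lines of Python. The labels `yellow1`..`yellow3` are mine, just to keep the three yellows physically distinct:

```python
from itertools import combinations

# Tom's marbles: three yellows, identical in colour but physically distinct.
marbles = ["red", "green", "blue", "yellow1", "yellow2", "yellow3"]

# Reading 1: every marble is distinct -> C(6, 2) = 15 pairs (ChatGPT's answer).
distinct_pairs = list(combinations(marbles, 2))

# Reading 2: yellows are interchangeable -> collapse each pair to its colours.
colour = lambda m: m.rstrip("123")
colour_pairs = {tuple(sorted((colour(a), colour(b)))) for a, b in distinct_pairs}

print(len(distinct_pairs))  # 15
print(len(colour_pairs))    # 7  (the test's "official" answer)
```

Under the interchangeable reading, adding more yellow marbles never changes the count: the seven colour-level groups are all there are, which matches what ChatGPT worked out about the billion-yellow case.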
Some of those bars aren't exactly all that high.
I can't draw them, but I know that human hands usually have 5 fingers.
Could the blogger choose some better colors for this graph? I feel like I have acquired color blindness.
No AI can poop better than me
AGI -> ASI
Can AI take a drink from a glass of water?
No? Okay then sit down.
yet it still can't make me money
Interesting since I have plenty of examples where it can't compare two integers for equality.
Is there any AI tool without these 'hallucinations'? Haven't met one.
OK AI, here's a test, with the answer key.
Human, here's the test. You guys ready???
But can it love?
Yeah but when the next massive CME hits and takes down the power grid, or if there's a nuclear strike, AIs will be dead and humans will still be around.
Sometimes it pays to be made of meat.
Wetware has survived for billions of years, and at least in the case of humans and primates for well over 100,000 years.
Most hardware is only warrantied for 90 days, and you're lucky to get a support contract for more than 3 years. 5, MAYBE.
K, I can draw hands.
Checkmate liberals
It takes technical experience to understand this sarcasm. I'm with you; don't know why people are downvoting this.
What does this have to do with "liberals"? Had no idea there was a political line about AI.
How fast does the computer run the 100m dash ?
Faster than you.
Extremely quickly. You can put it in any body of any sort. It can run as fast as the fastest thing we can design. In the near future, it will be able to design a running machine better than any human could.
Why would we do that? Isn’t it enough for you to see which human can run fastest? Why do computers need to be involved in that?
People are gonna wanna see it. One day the special Olympics will be more popular than the regular Olympics.
"AI now surpasses humans in almost all performance benchmarks"
Please don't tell the boomers.
"whaaaay, back when I was your age, I pulled myself up by my bootstraps, going uphill in a snow blizzard in Florida, I tell yah!"
"I bought this house using the money I got for newspaper delivery, kids these days are lazy!"
"What is this AI, it's like email, only it writes a lot more nonsense!"
I didn't read article but is it really "almost all" benchmarks?
It's expert systems, trained and aligned to perform specifically on these benchmarks. It's not really surprising.
Not even going to take a look at the article then?
Yes, it is "almost all" benchmarks, as stated in the headline.
So, some of the human actions are now computerized on top of previous computerizations of other actions. Nice, but not groundbreaking.
Energy efficiency?
Constantly improving. It's a bit early to draw too many conclusions on these gen 1 models.
This doesn't surprise me at all now, because at the end of the day you will need someone to supervise AI, and the expertise to supervise only comes through individual experience. Good luck.
Nah, we're already having AI supervise other AIs. It's a question of cost and ownership at this point.
Which AI is supervising the work done by an AI? Also, why is it needed? Shouldn't the AI be able to supervise itself?
Check out (or watch youtube videos on) Microsoft Autogen Studio for a really basic example.
It seems an interesting tool, but it is managed/supervised by a human user, not an AI. The problem here is that you still need human intervention to design, test, and approve what this tool does.
Yes that's the point.
In computer science there's a technique called Bootstrapping where a compiler is written in the language the compiler compiles. https://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29
It's conceivable that a tool like Autogen Studio can be used to design an AI that designs AIs, or something similar. You only need a human in the loop the first time.
We've had algorithmic tools like "genetic algorithms" for a long time that can be used to select for behaviors over time too. It's really just a question of imagination.
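A minimal sketch of the genetic-algorithm idea: once a fitness function is chosen, selection and mutation run with no human in the loop. The fitness function here (maximize the number of 1s in a bit string), along with the population size, mutation rate, and generation count, are arbitrary toy choices for illustration:

```python
import random

# Toy genetic algorithm: evolve a population of bit strings toward
# all-ones by repeated selection and mutation. Illustrative only.

random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    """The behaviour we select for: count of 1s in the string."""
    return sum(bits)

def mutate(bits, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [b ^ (random.random() < rate) for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # elitism: keep the fittest half unchanged
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # climbs toward LENGTH over the generations
```

Because the fittest half is carried over unchanged, the best score never decreases, so the population ratchets toward the all-ones string. That's the "select for behaviors over time" idea; bootstrapping an AI-designing AI would be the same loop with a vastly harder fitness function.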
Yeah, but you won't hand over a million-dollar business to AI.
lol. you will if it makes you money day after day. that's the goal.
I would be happy to adopt AI if it could take care of our Friday-night issues. It's been a year since all this AI crap started, but the Friday-night issues persist. You can say it's a skill issue, but in a team of 100 members that possibility is very low. So if you've got something that can handle 100K users and their payment issues, let me know.
It’s a lie.
For example, if AIs really surpassed humans in image classification, captchas would not still be a thing.
Correct headline would be “AIs now ace tests designed to test really bad AI; new tests necessary.”
Traditional captchas based on recognizing things in images are already over. GPT-4 easily solves them.
Current captchas either use past user activity or more complex tasks, and even those can be beaten sometimes.
But besides that example, you're still correct that AI is definitely not better than humans at all tasks. We do need much better benchmarks, and the article does say that.
But the article title says it surpassed humans in almost all performance benchmarks, which again, is a lie.
Does it surpass humans in conventional IQ tests, dating success, speed to create certain classes of programs, etc.? No. Some of these are benchmarks AIs can’t even take, and they do not surpass us. It’s incredibly misleading.
If artificial intelligence surpasses humans in almost all performance metrics, I believe we need to use more accurate performance criteria.