100% agree

Ray Kurzweil spends almost half a chapter in his new book explaining why the impact of IT advancements cannot be, and has not been, captured by traditional metrics. For example, $1,000 today buys far more computing power than the same amount did in 1999, but that simply isn't reflected in those metrics.

And the thing is, that also affects our lives outside of work. For example, we can do so much on our phones that would have been nearly impossible, or hopelessly inefficient, in the 90s or even 20 years ago. So again, that's not captured by traditional metrics.

And a lot of us forget that, even those of us who have lived long enough to see those changes.

With Perplexity Pro you can switch between GPT-4o and Sonnet 3.5, but the context window is capped at 32k tokens. I pay for it and find it great for search, but not so great for RAG with large or many files. For that, I use Gemini 1.5 Pro with its 2M-token context window, free on Google AI Studio.

It depends on the research design. Logistic regression, for example, requires a much larger sample size than a t-test. For my own purposes I was also working with hierarchical and mixed-effects models, for which I believe power can only be estimated via simulation, though it's been a while since I looked into it.
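
For what it's worth, the simulation approach is roughly this. Here's a minimal sketch in Python with statsmodels, using a plain logistic regression to keep it short (a mixed-effects fit would follow the same loop); the effect size, alpha, and sample size are placeholder assumptions, not recommendations:

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(n, beta=0.5, alpha=0.05, n_sims=1000, seed=0):
    """Monte Carlo power estimate for the slope in a logistic regression."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-beta * x))      # assumed true model (placeholder effect)
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < alpha       # did this simulated study detect the slope?
    return hits / n_sims

# Power at a given sample size; sweep n to find the smallest n that reaches ~0.80
print(simulated_power(n=200))
```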

Can it calculate the power of an experiment, or a required minimum sample size? It wasn't able to a few months ago. I was really hoping to do it in pplx because my employer won't let me install G*Power.
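
(If Python is allowed where G*Power isn't, statsmodels can do this kind of calculation. A minimal sketch, assuming an independent-samples t-test with placeholder values of d = 0.5, alpha = 0.05, and target power = 0.80:

```python
from statsmodels.stats.power import TTestIndPower

# Minimum per-group n for a two-sided independent-samples t-test,
# assuming d = 0.5, alpha = 0.05, target power = 0.80 (placeholders)
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(round(n_per_group))  # about 64 per group
```
)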

We are already developing rudimentary BCIs. And Ray Kurzweil predicts that we will achieve AGI by 2029. The kind of brain tech you're talking about, he predicts, will be achieved with nanobots starting in the 2030s or 2040s. However, I don't recall what he says about ASI specifically.

In 1953, the concept of genome sequencing didn't even exist. But then 50 years later...

Likewise for longevity research: no one took it seriously a few decades ago either, and now here we are.

Text-to-image generation was laughable just three years ago, and now we have text-to-video models like Sora.

So is LEV in fifty years really that far-fetched?

50 years? Merging our minds with AI by extending our neocortex into the cloud via nanobots, and then mind uploading.

Actually he addresses that in the third chapter of his recent book. He says that "panprotopsychism" is a middle ground between physicalism and dualism

Copy and pasting my comment to save you time:

In his book he defines intelligence as the computational processing capacity of the brain, which he says we will start extending into the cloud via nanobots in the 2030s, a technology that will eventually become ubiquitous. And because digital neurons will be faster and more efficient than our biological neurons, the extended brain's total computational capacity will become millions of times greater.

Note that this definition of intelligence is far more specific and unambiguous than any other definition accepted by academics or laypeople.

You should read the first two chapters of his recent book. Less than 100 pages. Lays it all out

Do you mean AI is going to expand its own intelligence a million fold? Yes, I think most people think that will happen before the 2040s.

Or do you mean AI is going to expand human intelligence a million fold? Then also yes, that's basically been his entire thesis for decades

Ultimately? In his own words: "computronium" in the context of "panprotopsychism"

Probably the craziest ideas you will read about today

As I've mentioned in other comments in this thread, it's not meaningless, because he defines intelligence as the number of computations per second the human brain can perform, which will be augmented when we start extending our neocortex into the cloud in the 2030s, a technology that will then become ubiquitous. And that definition of intelligence is far more specific and unambiguous than any other definition accepted by academics or laypeople.

You should read the first two chapters of his recent book. Less than 100 pages. Lays it all out

Yeah he talks about that too in his recent book. I'm not entirely convinced, but he makes some good points. You should read it

Again, it's not meaningless if you define intelligence as the number of computations per second that the brain can perform with all of its biological and artificial neurons.

The concept of intelligence is actually very vague and poorly defined, both by laypeople and by the academics who study it. Even IQ doesn't capture everything and is subject to cultural and linguistic biases. Computational processing, while not how most people would use the term, is probably the most specific and unambiguous definition of intelligence that one can find.

😆 yeah, Ray's not as cynical as most of Reddit is. I'm sure people will continue to be stupid. They will just be a million fold stupider with AI 🤣