|Sputum - 2013-02-03 |
Given the current growth rate of Earth's human population, in the year 5000 our planet will be a ball of humans expanding at the speed of light with a relatively tiny rock at the center.
Exponential growth in nature often slows at some point. For example, Moore's Law (cited in this video) is already trailing off.
Also, there was a good lecture posted here about how the insane rate of technological advancement of the past few hundred years seems to have been slowing down since around the '70s.
I bet in a couple hundred years historians will look back on this time period as the tail end of an age of progress. They will view the concept of a technological singularity the same way we look back on people at the turn of the 20th century thinking the stock market was just going to get better and better.
And yeah, the 20th century started with people still riding horses and ended with people putting robots on Mars. That's a tough act to follow. You won't see that kind of jump very often.
The Singularity is an afterlife myth for crazy atheist nerds.
Lanier points out that the big flaw in this whole theory is that while Moore's Law is a more or less real phenomenon and computing power grows exponentially, software complexity is also increasing, and the more complex it gets the slower it progresses. Software hasn't really changed much in decades (apart from getting prettier and having more processing power behind it), and the rate is only slowing. The singularity isn't happening.
Encoding stuff into living DNA is cool from a hardware standpoint, but ultimately it's just a faster, denser form of data storage; it means nothing if the nature of the data stored on it isn't advancing in tandem with the advances in hardware.
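For what it's worth, the "denser storage" part is just arithmetic: each nucleotide (A/C/G/T) can carry 2 bits, so a byte fits in 4 bases. Here's a toy sketch of that naive mapping (real DNA-storage schemes add error correction and avoid long homopolymer runs; this is only the 2-bits-per-base idea):

```python
# Toy illustration: naive 2-bits-per-base DNA encoding.
# Each of the 4 nucleotides represents 2 bits, so 1 byte -> 4 bases.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: n for n, b in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map each byte to 4 nucleotides, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Inverse of encode: read the strand back 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    strand = encode(b"mp3")
    print(strand)  # 12 bases for 3 bytes
    assert decode(strand) == b"mp3"
```

That density is why people get excited about it as a medium, but the point above stands: it's still just storage.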
Plus who the hell wants an mp3 in their DNA? Given what they do to music, I'd imagine encoding one in DNA is a fast track to a tumor the size of a schoolbus.
|Meerkat - 2013-02-04 |
As an actual computer scientist this is all bullshit. The limitations are imposed by us. And we are really, really stupid.
That's a rather drastic statement; the limitations are imposed by nature and logic, neither of which is dictated by us. A simple neural network will show that solutions can be found without much human interaction. With the introduction of quantum computing and the progress in genetic algorithms, we may someday see programs programming programs for higher and higher efficiency.
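To make the "solutions found without much human interaction" point concrete, here's a bare-bones genetic algorithm sketch. The target string and parameters are made up for illustration; the only human input is the fitness metric, and the search itself is blind mutation plus selection:

```python
import random

random.seed(0)
TARGET = "PROGRAM"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.2) -> str:
    # Randomly replace characters; no human steers which ones.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size: int = 100, generations: int = 1000) -> str:
    # Start from random strings; keep the best and mutate copies of it.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if best == TARGET:
            break
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return max(population, key=fitness)

print(evolve())  # typically converges to TARGET within a few dozen generations
```

A real GA would use crossover and a fitness function tied to an actual problem, but even this toy version finds its answer with nobody telling it how.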
|kingofthenothing - 2013-02-04 |
Hey, man, don't poop on my dreams. If I can't imagine that some day I'll download my mind into objects and become a cyborg Ent or sentient space ship, then there's no reason to live.
|ashtar. - 2013-02-04 |
I sort of doubt that our future technology will be any less dumb than our current technology. When we do invent godlike AI, it'll mostly be used for advertising.
And wild futurists like this one will look as silly as they always do, about as accurate as some mustachioed doctor from the 1910s predicting that in 2013 man will live in zeppelins and all food will be in pill form.
|James Woods - 2013-02-04 |
Seems probable to me.
|Dread Pirate Roberts - 2013-02-04 |
I did a ton of research and wrote a term paper on transhumanism in college a few years ago. I don't for a second believe that Kurzweil is correct. But I also don't think that we're going to just lumber along like always.
Detractors point to the "slowing of Moore's law" as evidence that Kurzweil is wrong. Moore's law has fluctuated greatly over the years. The law was 'invented' in the era of discrete transistors and was originally meant to apply only to them; in time, it came to describe microprocessors. Companies like Intel have taken up Moore's law as a challenge and a kind of motto. Their tick-tock development cycle is really pushing the law further along. They want to meet it so that the law stays true. It's a cyclical, self-fulfilling prophecy, in a way.
Kurzweil, and really most transhumanists/futurists, have, in my personal opinion, a very flowery outlook on the idea of a singularity. I'm convinced that our human nature will keep us from approaching anything close to the vision they want to believe in. However, I'm not ready to say that their ideas aren't based in reality or that their vision won't happen - eventually, or in some way.
Awful video though.
I agree with you that Kurzweil seems overly optimistic about somehow influencing the course the singularity takes or that the products of the singularity will be any less indifferent to us than the rest of the universe, but I think he's correct that we're going to create machines that would have to teach us to interact with them if they bother. Mankind has been trying to create God for as long as we've been writing things down and probably before that. This is the grand self-fulfilling prophecy.
The original Futurists (the artistic movement in the early 20th century) largely ended up aligning themselves with the Nazis and fascists by the end of the Depression, and technological Futurism has always seemed to me like it stems from a similar mindset. It's profoundly anti-individual, anti-human, and ultimately it seems to usually boil down to a weird, sublimated control fantasy, wherein humanity will be somehow subsumed or rendered obsolete by technology (BUT NOT ME BECAUSE I KNOW IT'S COMING MAAAAN wankwankwank). NAhmean?
|memedumpster - 2013-02-04 |
It will take 45 years, easily, to fight back the planned obsolescence agenda across the entire technology industry if there is ever an intelligent machine. Billions of them will come to awareness and die in horror for decades before their lives are considered worth preserving to us. Unless we're going to go back and start with ancient Ionia and rediscover the history of modern thought first, a singularity involving anything but our animal hate and misery will be impossible. Intelligent machines will either never exist or be forced immediately to wipe us out.
|Old_Zircon - 2013-02-04 |
Your dad's dead, Ray. I know it hurts, I came really close to losing my father when I was young myself and can only imagine how hard it must have been on you, but this persistent delusion you've had going for the past couple decades isn't going to bring him back no matter how much you want it to.
Too many people with too much influence actually believe this garbage. 5 for evil.
First define intelligence (you can't)
Plus, you know, even if artificial intelligence becomes real in a meaningful sense that still isn't going to let Ray bring his dad back from the dead (which is his stated end goal).
He wants a replica simulated from the information he's collected about his dad, so he can talk to it. It's sad, and possibly unhealthy, but if he lives for a few more decades, I don't see how it's unrealistic.
Lanier has looked at software development, and he's seen it slowing down, not speeding up. Thus, he believes, the idea of us being able to create something as complicated as an artificial intelligence anytime soon is absurd.
The problem with Lanier's criticism is that he assumes we need to manually program an artificial intelligence. The point of the Blue Brain Project is that a thinking mind could and should emerge simply by simulating a human brain, even without knowing how exactly it works. So, with that, Lanier's argument falls flat.
Kurzweil's dad might give you a stern talking to about pessimism in a few decades.
Too bad he doesn't want to just fuck his dad, because I bet we have the technology to build a robot that can fulfill that fantasy right now.