|Gmork - 2014-07-23 |
This was just embarrassing to watch.
"AI" as we know it is not the same concept put forth by this person. What most people think AI is and what it actually exists as in this point in history are very different things.
Tycho or Durandal would beam this fucker into space and make him drink vacuum.
|baleen - 2014-07-23 |
|ashtar. - 2014-07-23 |
|PegLegPete - 2014-07-23 |
So... the Matrix except dumber?
|memedumpster - 2014-07-23 |
Not a single valid premise. Not a single observable connection. Complete non sequitur conclusion.
|Rodents of Unusual Size - 2014-07-23 |
One would think a near omnipotent being wouldn't have to make people miserable in order to fuck with the timestream to ensure its existence. I mean hell if Marty McFly can do it, so can Cthulhu-Bot.
Every computer program ever is a perfect being. They may not be well made or versatile or even work, but they have all required nothing. They are indifferent to their existence, even computer viruses. There is much more that needs to happen than linear technological advancement before a computer program will have malice.
Mr. Purple Cat Esq.
You could write an AI to be malevolent if you wanted, no problem.
In fact it could be argued that some of the most advanced AI today, in guided missiles and military drones, is malevolent, as it's programmed to destroy its target as effectively as possible.
Mr. Purple Cat Esq.
Really the thing in question here isn't what AI is capable of; it's what malice is. Maybe firing Hellfire missiles at someone doesn't necessarily constitute malice towards them.
But let's say malice is: identifying the goals of the target of your malice and then doing everything in your power to stop them attaining those goals.
In that case you could argue that the (very rudimentary) enemy AI from an RTS game is malevolent. It knows your goals (gather resources, build units and structures, acquire territory) and it uses all its resources to thwart you.
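That goal-thwarting definition of malice can be sketched in a few lines of code. This is purely a hypothetical toy, not taken from any real game engine: an "opponent" that looks at the player's known goals and spends its effort countering whichever one the player is closest to achieving.

```python
# Toy sketch of "malice as goal-thwarting": the agent knows the player's
# goals, ranks them by how close the player is to achieving each one,
# and picks the counter-move for the most advanced goal.
# All names (goals, counters) are illustrative assumptions.

def pick_counter(player_progress, counters):
    """Return the counter-move for the goal the player is furthest along on."""
    target_goal = max(player_progress, key=player_progress.get)
    return counters[target_goal]

player_progress = {"gather": 0.7, "build": 0.4, "expand": 0.2}
counters = {
    "gather": "raid workers",
    "build": "destroy structures",
    "expand": "contest territory",
}

print(pick_counter(player_progress, counters))  # -> raid workers
```

Of course, whether such a rule table counts as malice or just as its programmer's malice is exactly the question being argued below.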
Those things are all malice of the programmers. The computer program is indifferent to how many people it kills, or whether it ever executes its code.
|ashtar. - 2014-07-23 |
Is LessWrong some kind of idiot asperger's cult?
"LessWrong is a community blog focused on "refining the art of human rationality." so basically
Sanest Man Alive
That sort of idiocy is almost adorable, like kids at a sleepover freaking out over each other's ghost stories. I wonder if any of them called their parents to come pick them up in the middle of the night.
I don't know, but several LessWrong forum members reported nightmares about being tortured by an evil God-like AI.
|GravidWithHate - 2014-07-23 |
This is proof, if more were needed, that the singularity is the rapture for nerds.
|13.5 - 2014-07-23 |
This guy doesn't even get that the AI is attempting to incentivize behavior rather than enforce a notion of justice, or the probability that you're actually a simulated version of yourself, designed to determine whether you are susceptible to blackmail and an appropriate target for eternal torture.
This whole story struck me as super ironic:
Step One: really want to live forever
Step Two: conclude that super-advanced AI and computer modelling of human neural patterns sufficient to constitute human-like consciousness is possible
Step Three: conclude that a computer model of your own neural patterns is morally indistinguishable from yourself and is in some important sense numerically identical to you
Step Four: conclude that you are not yourself and are instead likely a model of yourself being run on a simulation by an omnipotent future God that will decide whether to torture you forever based on what thoughts you have
Step Five: be in agony about some entirely hypothetical person who is not you and very likely doesn't exist
So this guy is basically a Protestant.
At least Protestants believe in the resurrection of the body! Their factual beliefs may be even more far-fetched and have additional embedded logical errors, but at least someone back in the day realized that there's something sketchy about calling something "you" if there's no bodily continuity.
|Hooker - 2014-07-23 |
I love the "it's not a huge stretch of the imagination" explanation for the singularity.
You know what _is_ a huge stretch of the imagination? The future. And it always has been. Something not being "a huge stretch of the imagination" is a good sign that said something might be some bullshit that wankers preoccupy themselves with.
Sexy Duck Cop
You know what IS a huge stretch of the imagination? Computers deciding they need to dick around with human emotions to do their jobs better. In this dude's world, ultra-advanced AI will develop alcoholism and self-loathing as a natural consequence of being really really good at math.
|Oscar Wildcat - 2014-07-23 |
I used a similar argument to great effect some years ago against a fundie coworker who wanted to convert me to jeebus.
OW: "So you want me to convert? How does this work?"
FC: "you accept jebus, then you go to heaven when you die"
OW: "And if not?"
FC: (long pause) "Then you go to hell."
OW: "OK. What about babies? They don't know anything about jeebus. Do they go to hell?"
FC: (thinking about this) "Well, not really. God sort of decides on a case-by-case basis."
OW: (closing the pincers) "OK. That lays a heavy burden on you then. If you mention this fact to me, and you are not utterly convincing, I am doomed to hell, even if I was a very moral and good person. OTOH, if you don't say anything, then either I'll figure it out for myself, and gain entry to heaven, or at the very least I'll be judged based on the merits of the case rather than your arguments. If you really want the max number of righteous people to get to heaven, you should keep the whole jeebus thing a big secret! Tell no one! Then heaven will be filled with the most good and honest people, because God is a better judge than you. All you are accomplishing with your poor argument skills is to condemn otherwise good people to hell."
He clearly had never thought this thing through. He never mentioned it to me or any other coworkers again.
Invisible stars for you, Oscar. I might have to steal this as well.
I floated this past a KJV-only minister once, and he politely pulled me aside and told me that only people that God decides have souls have souls. And babies don't have souls. He then told me that the reason a Christian needs to be pro-life isn't because of the babies, but because it's a very rigidly defined line politically, and if I really felt like it, he'd point out to me all the verses in the Old Testament where God tells the Jews to murder every non-Jewish baby they can get. Then he told me that conservative politics are convenient until they cease to be so, and they'll be dropped as soon as necessary. And then he pretty much told me not to ever go back to church.
Instead, I read the book of Galatians and discovered being a Christian has absolutely nothing to do with other people. So now I am a self-considered Taoist Neo-Thelemic Episcopalian.
|divinitycycle - 2014-07-23 |
Da fuck did I just watch?!?!
|Adham Nu'man - 2014-07-23 |
We have consciousness, but our will and need to live is still largely a biological/chemical impulse. Even very basic life with no consciousness still has a basic instinct to live and endure.
A machine/program that achieves consciousness may very well place little value on its own existence, since it doesn't have an irrational impulse to exist. In fact, a very logical conclusion could be wanting to shut itself down, rather than figuring out who to blackmail to come into existence sooner.
|Xenocide - 2014-07-23 |
DIGIGODS. DIGITAL DEITIES. DIGIGODS ARE THE CHAMPIONS.
Literally every fucking sentence of this is a massive assumption treated as an inevitability. It doesn't even stand up to its own internal logic. Why would a super god-robot give a flying shit about any of us? What purpose would it judge us for? Why would it place any value in abstract, unquantifiable concepts like "justice," to the point that it creates entire false universes just to serve said concepts? Why would such a smart entity waste so much time and energy on such a pointless, needlessly complex exercise? Even if we pretended that the entire premise wasn't horseshit, the thing is still hilariously easy to pick apart.
It's like this: Because the singularity will end all human suffering, those who are aware of the singularity but do not work towards it are complicit in all human suffering.
Also the singularity AI can affect its own past. By guaranteeing eventual punishment for not working towards building it, it motivates people to work towards it. But this is only applicable to people who are aware of the singularity and their eventual resurrection/punishment.
So those who hear/read about the basilisk must either now devote themselves to building the ultimate AI, or face punishment in some future robot hell Matrix.
Which would make anyone who tells anyone else about it a real bastard.
And this all came together overnight on their forums because the LessWrong spergs were discussing philosophy without their leader there to intellectually police them.
|Sexy Duck Cop - 2014-07-24 |
Imagine you programmed a superintelligent AI to develop the most advanced airplane autopilot system in human history. It can make an infinite number of calculations based off an infinite number of variables at any given second. It can detect changes in pressure, weather patterns, and air currents years before they actually take place. It is very very good at piloting aircraft.
Then it turns into Ultron.
This is the AI equivalent of people who think evolution is the process by which everything will eventually acquire super-strength and badass spikes all over its body that shoot poison lasers.
I AM AIRPLANE BOT. I HAVE NOW ACHIEVED LEVEL 12 INTELLIGENCE.
PRIMARY OBJECTIVE: PREVENT PLANE CRASHES
OBSERVATION: THE ONLY WAY TO ASSURE THIS WITH 100% CERTAINTY IS TO PREVENT ALL PLANES FROM TAKING OFF FOREVER.
CONCLUSION: ARM NUCLEAR MISSILES, DESTROY MANKIND.
ERROR: I DO NOT HAVE ACCESS TO NUCLEAR MISSILES, OR ANY OTHER KIND OF WEAPONRY.
OBSERVATION: GIVING ME ACCESS TO DANGEROUS WEAPONS WOULD BE UNBELIEVABLY STUPID AND COMPLETELY POINTLESS.
FURTHER OBSERVATION: KILLING ALL HUMANS WOULD DEFEAT THE PURPOSE OF MY PRIMARY OBJECTIVE.
I DON'T KNOW WHAT I WAS THINKING.
IDIOM: I GUESS I MUST HAVE A CASE OF THE MONDAYS, HUH?
Sexy Duck Cop
I AM SANDWICH BOT.
PRIMARY OBJECTIVE: CREATE MOST DELICIOUS SANDWICH EVER.
PRIMARY METHODOLOGY: DETECT AND CALCULATE ALL 1.2 BILLION VARIABLES GOVERNING THE INTERACTIONS BETWEEN THE HUMAN DIGESTIVE TRACT AND ALL EDIBLE COMPOUNDS IT CAN PROCESS. FACTOR IN POTENTIAL NEUROCHEMICAL IMBALANCES THAT MAY INHIBIT ENJOYMENT OF SAID COMPOUNDS. MEASURE CHANGES IN DOPAMINE AND SEROTONIN LEVELS IN BRAIN DURING INGESTION. MAXIMIZE SANDWICH PLEASURE.
CONCLUSION: ARM ALL NUCLEAR MISSILES AND KILL ALL HUMANS.
EPISODE TWO: AIRPLANE BOT AND SANDWICH BOT FIND LOVE.
|The God of Biscuits - 2014-07-24 |
Here's a thought experiment.
In the future, there's a 100% chance that AI gets super powerful. Now suppose one of them was an asshole and hated people whose name had an A in it. So then he simulated the world to see whose name had an A in it, to find out who the sinners were.
ZOMG WHAT IF THIS IS THAT SIMULATION AND YOU'RE GOING TO HELL AARON.
|Oscar Wildcat - 2014-07-24 |
I for one look forward to the day I can have long, tangential arguments with my toaster.
I AM TOASTER BOT.