Kaku isn't as beloved as Carl Sagan but I really do love both his documentary work and his books. He's fun.
Sudan no1
I want making smarter wires in my brains, please.
American Standard
I'll stick with Neil deGrasse Tyson. Kaku just gets too fluffy and sci-fi when he goes documentarian, for my tastes.
Rodents of Unusual Size
My favorite part is when they talk about having robot best friends that you tell your secrets to. It's like the singularity will be one big sleepover with Judy Jetson.
Calling this sort of idle speculation documentary work is pretty charitable.
They should get someone like Minsky and have him talk about seminal ideas in AI research and make an argument for the direction of future advances based on those. Saying "maybe we'll have robots someday" for an hour is just a huge waste of time.
I'm not going to watch the whole thing, but I assume at some point it either juts up against or goes directly into Ray Kurzweil-esque singularity nonsense.
I'm all for people being amazed at the possibilities and potential of technology and science, but I hate this "science mysticism" garbage, which leads people down an unrealistic road. We are nowhere near some strange future of AI-controlled robots and transferring our minds into computers, etc. It's true that computer technology has made astounding leaps and bounds since it was first introduced, but the fact that computing power has grown exponentially so far is not proof that it will KEEP growing exponentially. Many experts in these fields now believe we are steadily approaching an impasse where our advances can no longer be sustained by other sciences, and computing power will stall until various other obstacles are worked around.
Also, that exponential growth would mean next to nothing even if it were to continue indefinitely. The hard problems are that hard.
Everyone assumes we can have machines with human (or better) smarts but without a will of their own. Even the Singularity Institute guy they interviewed assumes it.
Hofstadter said something like "maybe we will build the ultimate chess player, and maybe it won't want to play chess" which makes far more sense to me.