11/23/2023 Ais transcripts

As I'm now 63 years old, our world today is "the future" that science fiction and futurism promised me decades ago. Yes, that's in part due to my unrealistic hopes, but it is also due to our hesitation and fear. Our world would look very different if we hadn't so greatly restrained so many promising techs, like nuclear energy, genetic engineering, and creative financial assets. Yes, we might have had a few more accidents, but overall it would be a better world.

Thankfully, we didn't much fear computers, and so haven't much restrained them. And not coincidentally, computers are where we've seen the most progress. As someone who was a professional AI researcher from 1984 to 1993, I am proud of our huge AI advances over subsequent decades, and of our many exciting AI jumps in just the last year.

Alas, many now push for strong regulation of AI. Some fear villains using AI to increase their powers, and some seek to control what humans who hear AIs might believe. But the most dramatic "AI doomers" say that AIs are likely to kill us all. For example, a recent petition demanded a six-month moratorium on certain kinds of AI research. Many luminaries also declared: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Lead AI-doomer rationalist Eliezer Yudkowsky even calls for a complete and indefinite global "shut down" of AI research, because "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." Mainstream media is now full of supportive articles quoting such doomers far more often than they do their critics.

AI-doomers often suggest that their fears arise from special technical calculations. But in fact, their main argument is just the mere logical possibility of a huge sudden AI breakthrough, combined with a suddenly murderous AI inclination. However, we have no concrete reason to expect that. Humans have been improving automation for centuries, and software for 75 years. And as innovation is mostly made of many small gains, rates of overall economic and tech growth have remained relatively steady and predictable. For example, we predicted when computers would beat humans at chess decades in advance, and current AI abilities are not very far from what we should have expected given long-run trends.

Furthermore, not only are AIs still at least decades from being able to replace humans on most of today's job tasks; AIs are much further from the far greater abilities that would be required for one of them to kill everyone, by overwhelming all of humanity plus all other AIs active at the time.

In addition, AIs are also now quite far from being inclined to kill us, even if they could do so. Most AIs are just tools that do particular tasks when so instructed. Some are more general agents, for whom it makes more sense to talk about desires. But such AI agents are typically monitored and tested frequently and in great detail to check for satisfactory behaviour. So it would be quite a sudden radical change for AIs to, in effect, try to kill all humans.

Thus, neither long-term trends nor fundamental theory give us reason to expect to see AIs capable of, or inclined to, kill us all anytime soon. However, AI-doomers insist on the logical possibility that such expectations could be wrong. An AI might suddenly and without warning explode in abilities, and just as fast change its priorities to become murderously indifferent to us. (And then kill us when we get in its way.) As you can't prove otherwise, they say, we must only allow AIs that are totally "aligned", by which they mean totally eternally enslaved or mind-controlled. Until we figure out how to do that, they say, we must stop improving AIs.

As a once-AI-researcher turned economist and futurist, I can tell you that this current conversation feels quite different from how we used to talk about AI. What changed? My guess: recent dramatic advances have made the once abstract possibility of human-level AI seem much more real. And this has triggered our primal instinctive fears of the "other": many now seriously compare future AIs to an invasion by a hostile advanced alien civilization.