Is artificial intelligence an existential threat to humanity?
Poll
Question: ?
#1 Yes
#2 No
Total Voters: 56

Author Topic: Is artificial intelligence an existential threat to humanity?  (Read 3518 times)
Aurelius
Cody
YaBB God
Posts: 4,170
United States

Political Matrix
E: 3.35, S: 0.35
« on: June 12, 2022, 11:35:27 AM »

Yes, very urgently so.

GregTheGreat657
Junior Chimp
Posts: 7,928
United States

Political Matrix
E: 0.77, S: -1.04
« Reply #1 on: June 12, 2022, 02:03:21 PM »

It's not yet developed enough to be an existential threat to humanity.

Ferguson97
Atlas Star
Posts: 28,106
United States
« Reply #2 on: June 12, 2022, 08:33:57 PM »

No, The Terminator and The Matrix are just movies.

Boobs
HCP
Sr. Member
Posts: 2,526
« Reply #3 on: June 12, 2022, 08:36:45 PM »

I think you meant to write “climate change”?

Since I'm the mad scientist proclaimed by myself
omegascarlet
Junior Chimp
Posts: 7,031
« Reply #4 on: June 12, 2022, 11:04:38 PM »

Artificial intelligence has the potential to both severely damage our world and make it much better, depending on how we play our cards.

Middle-aged Europe
Old Europe
Atlas Icon
Posts: 17,217
Ukraine
« Reply #5 on: June 13, 2022, 07:25:28 AM »

We must stop Skynet from becoming operational... again.

John Dule
Atlas Icon
Posts: 18,421
United States

Political Matrix
E: 6.57, S: -7.50
« Reply #6 on: June 13, 2022, 09:59:34 AM »

Only in the sense that it will automate away many jobs (including mine).

Aurelius
Cody
YaBB God
Posts: 4,170
United States

Political Matrix
E: 3.35, S: 0.35
« Reply #7 on: June 13, 2022, 03:36:48 PM »
« Edited: June 13, 2022, 04:00:42 PM by Kill All Robots »

I think you meant to write “climate change”?



"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.

Aurelius
Cody
YaBB God
Posts: 4,170
United States

Political Matrix
E: 3.35, S: 0.35
« Reply #8 on: June 13, 2022, 03:45:10 PM »
« Edited: June 13, 2022, 04:03:31 PM by Kill All Robots »

Of these statements:

1. AI capability is advancing at a very fast rate.
2. As AIs get smarter, they will be able to iterate and get even smarter faster and faster. Once they get smarter than a human, this will happen exponentially faster.
3. The alignment problem - given an AI that is assigned a task, how do you get it to consider only solutions to that task that fit within human moral standards? - is extremely difficult, and despite a tremendous amount of research it remains completely unsolved.
4. As the number of AIs increases (probably exponentially, as with basically every other technological advance), even if the probability of any one AI going rogue is extremely low, the probability of some AI somewhere going rogue increases exponentially.
5. Protein folding is a solved problem. This means a sufficiently computationally powerful AI can bootstrap a nanofactory very straightforwardly.

Which one is false or a non-sequitur and why?
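
To put rough numbers on #4 (purely illustrative, and assuming each system fails independently): even if any single system has only a one-in-a-million chance of going rogue, the chance that at least one out of n systems does is 1 - (1 - p)^n, which creeps toward certainty as n grows. A quick sketch, with made-up numbers:

Code:
def p_at_least_one(p, n):
    # chance that at least one of n independent systems "goes rogue"
    return 1 - (1 - p) ** n

for n in (100, 10_000, 1_000_000):
    print(n, p_at_least_one(1e-6, n))
# prints roughly: 100 -> 0.0001, 10,000 -> 0.01, 1,000,000 -> 0.63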

----

In all seriousness, those of us who are firmly convinced of the existential threat of runaway AI need to start working on accessible explanations of this, in a way that is understandable to normal people. 99.9% of people will tune out when you tell them "read these 50 blog posts on lesswrong.com, and then read these other 100 blog posts explaining the jargon in everything you just read," and justifiably so.

Pedocon Theory is not a theory
CalamityBlue
Jr. Member
Posts: 839

Political Matrix
E: -7.94, S: -8.61
« Reply #9 on: June 13, 2022, 04:16:21 PM »

Of these statements:

1. AI capability is advancing at a very fast rate.
2. As AIs get smarter, they will be able to iterate and get even smarter faster and faster. Once they get smarter than a human, this will happen exponentially faster.
3. The alignment problem - given an AI that is assigned a task, how do you get it to consider only solutions to that task that fit within human moral standards? - is extremely difficult, and despite a tremendous amount of research it remains completely unsolved.
4. As the number of AIs increases (probably exponentially, as with basically every other technological advance), even if the probability of any one AI going rogue is extremely low, the probability of some AI somewhere going rogue increases exponentially.
5. Protein folding is a solved problem. This means a sufficiently computationally powerful AI can bootstrap a nanofactory very straightforwardly.

Which one is false or a non-sequitur and why?

----

In all seriousness, those of us who are firmly convinced of the existential threat of runaway AI need to start working on accessible explanations of this, in a way that is understandable to normal people. 99.9% of people will tune out when you tell them "read these 50 blog posts on lesswrong.com, and then read these other 100 blog posts explaining the jargon in everything you just read," and justifiably so.

All five.

Anyone going ballistic over the existential threat 'posed' by today's AI has been deeply deluded about much of modern computing, particularly regarding the likelihood of death via the mass paperclip-ification of humanity.

Please try to read some other science fiction, preferably something that doesn't involve paperclips.

Dr. MB
MB
Atlas Politician
Atlas Icon
Posts: 15,860
Libyan Arab Jamahiriya
« Reply #10 on: June 13, 2022, 04:16:39 PM »

Depends. At its current level of capability, no, but you can't predict the future. Who knows?

Boobs
HCP
Sr. Member
Posts: 2,526
« Reply #11 on: June 13, 2022, 04:27:16 PM »

I think you meant to write “climate change”?

"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.

First of all, thank you. Thank you endlessly for giving me the gift of laughter. I haven't laughed this hard at an atlas post in a good while.

Second, the only thing funnier than your post is that you wrote it believing that you are the one coming off as smart, reasonable, and generally not-a-lunatic.

💜 💜

Santander
Atlas Star
Posts: 27,931
United Kingdom

Political Matrix
E: 4.00, S: 2.61
« Reply #12 on: June 13, 2022, 04:34:12 PM »

Human stupidity is a far more acute existential threat.

beaver2.0
YaBB God
Posts: 4,777

Political Matrix
E: -2.45, S: -0.52
« Reply #13 on: June 13, 2022, 04:34:29 PM »

Not right now, but I could see it happening. Probably by the time any of us know, it will be too late to do anything.

John Dule
Atlas Icon
Posts: 18,421
United States

Political Matrix
E: 6.57, S: -7.50
« Reply #14 on: June 13, 2022, 04:43:50 PM »

I think you meant to write “climate change”?

"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.

First of all, thank you. Thank you endlessly for giving me the gift of laughter. I haven't laughed this hard at an atlas post in a good while.

Second, the only thing funnier than your post is that you wrote it believing that you are the one coming off as smart, reasonable, and generally not-a-lunatic.

💜 💜

This is one of those instances where the line between parody and sincerity is completely obscured by the internet.

Devout Centrist
Atlas Icon
Posts: 10,127
United States

Political Matrix
E: -99.99, S: -99.99
« Reply #15 on: June 13, 2022, 04:47:04 PM »

I think you meant to write “climate change”?

"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.
lmao haha get fu.cked, I yell at my broken iphone as I lay dying from dehydration

Aurelius
Cody
YaBB God
Posts: 4,170
United States

Political Matrix
E: 3.35, S: 0.35
« Reply #16 on: June 13, 2022, 04:53:34 PM »

I think you meant to write “climate change”?

"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.

First of all, thank you. Thank you endlessly for giving me the gift of laughter. I haven't laughed this hard at an atlas post in a good while.

Second, the only thing funnier than your post is that you wrote it believing that you are the one coming off as smart, reasonable, and generally not-a-lunatic.

💜 💜

...you're welcome. Upon re-reading my post, I do realize that it sounds almost Time-Cube levels of insane to most people. This is often how people who've spent a lot of time thinking about AI dangers sound when talking about it to others, and a lesson to me on spouting narrow technical jargon to someone unfamiliar with it.

I have a computer science background, and rogue AI is something I've been thinking and worrying about for years. Until the last couple days, this wasn't something I was worrying too much about, because I assumed that sufficiently capable AI was 40-50 years away. The recent news out of Google made me realize that I haven't been paying close enough attention to the progress of AI in the last year or two while I've been doing other stuff, and that current AI capabilities are at least 2-3 years ahead of where I thought they were. This means the rate of development is much faster than I had thought, and it's forced me to revise my estimate of the arrival of sufficiently capable AI to 20-30 years from now, if not less. This has made the problem much more urgent in my mind, and is forcing me to think about the risks more intensely than before, hence my freaking out on here over the past couple days.

The "paperclip maximizer" is the go-to example, among those who talk about AI risks, of an AI programmed to do one specific thing, that overly narrowly focuses on that one thing to the exclusion of literally everyone else. So yes, talk of being turned into paperclips is commonplace in those circles. But yes, it definitely comes across as bad scifi to the uninitiated, now that I think about it.

Probably by the time any of us know, it will be too late to do anything.

This is exactly the concern. As AIs get smarter, they will not only accelerate in their capability improvement rate (as I mentioned above), they will also get more adept at lying to their human handlers if they decide that serves their goal.

Generally, AIs work by doing millions and millions of matrix multiplications over some search space to find an output that best satisfies some mathematical function based on the input. Currently, this process is incredibly opaque and we don't really understand what the numbers in these matrices "mean". In other words, we are unable to track the AI's "thought process" in terms of how it gets from input to output. This obviously has enormous implications in terms of our ability to detect whether AI systems are lying to human operators, or whether they are on the verge of doing something incredibly dangerous.
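
A toy sketch of the kind of computation I mean (NumPy, with made-up sizes and random weights standing in for learned ones): the model's entire "thought process" is just piles of numbers being multiplied together, and nothing in those numbers is labeled in any way a human can read.

Code:
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))   # stand-in for learned weights, layer 1
w2 = rng.normal(size=(8, 2))   # stand-in for learned weights, layer 2

def forward(x):
    hidden = np.maximum(0.0, x @ w1)   # matrix multiply, then ReLU
    return hidden @ w2                 # matrix multiply -> output scores

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
# The output is just the result of these multiplications; nothing in
# w1 or w2 says *why* it comes out the way it does.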

Kleine Scheiße
PeteHam
Sr. Member
Posts: 2,778
United States

Political Matrix
E: -9.16, S: -1.74
« Reply #17 on: June 13, 2022, 04:54:05 PM »

Elon Musk needs to buy Google. He has to step in.

Kleine Scheiße
PeteHam
Sr. Member
Posts: 2,778
United States

Political Matrix
E: -9.16, S: -1.74
« Reply #18 on: June 13, 2022, 04:56:29 PM »


That settles it. Boobs pwned epic style

Boobs
HCP
Sr. Member
Posts: 2,526
« Reply #19 on: June 13, 2022, 04:58:41 PM »

"At least I'm not dying of climate change," the dying man says, as a swarm of nanobots vacuums the iron out of his body to assemble paperclips, shortly before dying of hypoxia due to total loss of hemoglobin.

a lesson to me on spouting narrow technical jargon to someone unfamiliar with it.

Virginiá
Virginia
Administratrix
Atlas Icon
Posts: 18,890
Ukraine

Political Matrix
E: -6.97, S: -5.91
« Reply #20 on: June 13, 2022, 05:00:57 PM »

What worries me about AI is that so many nations, corporations and other groups are racing to develop one that we're rushing into something we neither fully understand nor will be able to control.

But we're a long ways away from a superintelligent or even general AI, and the technology to let an AI manifest massive nanobot swarms to suck the iron out of our blood doesn't even exist yet. There will be many more major steps before we create an AI that can threaten humanity.

Farmlands
Jr. Member
Posts: 1,201
Portugal

Political Matrix
E: 0.77, S: -0.14
« Reply #21 on: June 13, 2022, 05:59:29 PM »

100 percent. I really don't understand how one can think climate change, a long process which no scientist has predicted could possibly wipe out human life, is an existential threat, but exponential and generally overlooked AI evolution is of no concern. In the future, yes, but some more oversight is definitely needed.

Antonio the Sixth
Antonio V
Atlas Institution
Posts: 58,137
United States

Political Matrix
E: -7.87, S: -3.83
« Reply #22 on: June 13, 2022, 07:02:41 PM »

100 percent. I really don't understand how one can think climate change, a long process which no scientist has predicted could possibly wipe out human life, is an existential threat, but exponential and generally overlooked AI evolution is of no concern. In the future, yes, but some more oversight is definitely needed.

One is an actual, proven, and at least partially inevitable fact of the world. The other is a series of conjectures resting on conjectures resting on conjectures.

There's definitely much to be potentially worried about if we create conscious AI, but to jump from that to "OMG LITERALLY TERMINATOR" is a symptom of severe pop-culture brainrot. Most likely, such an AI will become a tool (as well as potentially a victim) of the same oppressive systems that shackle humanity today. Men with machines enslaving other men, as Frank Herbert envisioned. I mean, that's clearly the direction the tech industry is going right now.

PSOL
Atlas Icon
Posts: 19,191
« Reply #23 on: June 13, 2022, 09:54:59 PM »

AI that is both sentient and can actually feel like a person, as is claimed of the new Google AI, is where I draw the red line and say that under no circumstances should this exist. By all accounts this is a bad idea, both for the abuse of these new persons and for the damage to an already tough labor market where wages are immensely depressed and there aren't enough non-temp positions most people can live well on. Dealing with new people entering the job market like this, and the chance for immense abuse that will drag down every laborer just as free labor did during times of slavery, is a hellscape in the making.

Person Man
Angry_Weasel
Atlas Superstar
Posts: 36,689
United States
« Reply #24 on: June 15, 2022, 08:32:17 AM »
« Edited: June 21, 2022, 12:06:14 PM by Person Man »

We don't know. It might never become a bad thing even when the "singularity" is well in the rearview mirror, or it could be a serious and immediate problem without advancing much more than it already has.