Is artificial intelligence an existential threat to humanity?

  Talk Elections
  General Politics
  Political Debate (Moderator: Torie)
Poll
Question: ?
#1 Yes
#2 No
Total Voters: 56

Author Topic: Is artificial intelligence an existential threat to humanity?  (Read 3552 times)
Meclazine for Israel
Meclazine
Atlas Icon
Posts: 13,836
Australia
« Reply #25 on: June 16, 2022, 08:03:51 AM »

Google has apparently invented a being that believes it is human.

This may be the first sentient AI.

https://www.abc.net.au/news/science/2022-06-15/google-ai-chatbot-not-sentient-how-do-we-know-intelligence/101150090

It can read books and speak multiple languages.

TiltsAreUnderrated
Junior Chimp
Posts: 9,776
« Reply #26 on: June 21, 2022, 06:05:48 AM »
« Edited: June 21, 2022, 06:13:00 AM by TiltsAreUnderrated »

Google is lying.

As other posters have said, AI is a very broad, double-edged sword. At the moment, it is a useful tool that has advanced our societies, although it's also being used by the ruling classes to augment their oppression.

On a species-wide level, the singularity may eventually pose an existential threat to humanity, but this isn't necessarily a disaster. Individuals eventually pass on their legacies to the next generation, usually their children; making things that are better than humans and leaving them to inherit the earth could work out. There is a risk we destroy ourselves without creating something good, but that risk is also present in parenting. All the same, I'd prefer to avoid the redundancy of humans; it might turn out well for us, but encouraging it would be irresponsible and I'd like the species to continue.

I've done a fair amount of work with neuromorphic computing, so I'll lay my cards down and admit it has cemented my beliefs as a transhumanist. If it is possible to reach the singularity, no amount of Luddite regulation will ultimately prevent it, although it's probably more than a few decades away in any case. All such regulation would prevent is our reaching it on our own terms. The only path to avoiding eventual redundancy - if that's possible - is to integrate such systems into ourselves and break the limits of human intelligence, which we should be doing anyway.

Tl;dr: we should use its potential to augment us, rather than replace us.
Meclazine for Israel
Meclazine
Atlas Icon
Posts: 13,836
Australia
« Reply #27 on: June 22, 2022, 05:11:26 AM »


I've done a fair amount of work with neuromorphic computing....

What is transmorphic computing?
TiltsAreUnderrated
Junior Chimp
Posts: 9,776
« Reply #28 on: June 22, 2022, 05:31:38 AM »
« Edited: June 22, 2022, 08:55:34 AM by TiltsAreUnderrated »

I've done a fair amount of work with neuromorphic computing....

What is transmorphic computing?

It's been a while since I've discussed this, but I'll try to give it a go (I really need to get back into this, tbh!).

Neuromorphic computing is a subset of computational neuroscience and essentially an attempt to improve computing by learning from what biological systems (specifically, their neurons) do better. There is a second, slightly smaller part dedicated to learning more about our own minds from computers and programs meant to imitate parts of them.

Working computer memory needs to be constantly refreshed, which is far more energy-intensive and heat-generating than sustaining a brain's neurons. That energy and heat cost is perhaps the largest contemporary bottleneck on computing power, and it has contributed to the slowdown of Moore's law.

Programming is mostly serial. You tell a computer to do X, then Y, then Z. Parallel computing - trying to make multiple calculations at once using different parts of a computer - has grown in popularity, but most programs are designed to wait until certain things are decided before proceeding with the rest of their work (e.g. adding two numbers before dividing the total). More importantly, the very architectures that programs run on (which limit how they are actually executed) are typically built with serial or limited parallel computing in mind, with only a few physical cores per computer and centralised systems by which these cores (or groups of cores) communicate. Neuromorphic computing requires dedicated architectures that stray from the design principles behind most systems.
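The serial-dependency point above can be sketched in a few lines of Python (a toy illustration I'm adding here, not code from any real neuromorphic system): independent operations can be farmed out to a thread pool, but the dependent step is forced to wait.

```python
# Toy sketch: two independent additions can run in parallel,
# but the division depends on both results, so it must wait.
from concurrent.futures import ThreadPoolExecutor

def slow_add(a, b):
    # Stand-in for some expensive computation.
    return a + b

with ThreadPoolExecutor() as pool:
    # Independent work: both additions can execute at once.
    f1 = pool.submit(slow_add, 2, 3)
    f2 = pool.submit(slow_add, 4, 1)
    # Synchronisation point: .result() blocks until each addition
    # finishes, because the division needs both totals first.
    total = f1.result() + f2.result()
    ratio = total / 2

print(ratio)  # 5.0
```

However many cores are available, the `ratio` step is serialised behind both additions - the dependency structure of the program, not the hardware, sets the limit.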

Computers are faster than the physical limits of information transmission within human brains, but a big reason the AI threat discussed in this thread hasn't materialised is that we don't think serially or even hierarchically. Our neurons do not need to check that all, or even most, of the information in other neurons is in a certain state before transmitting information. This means they don't actually maintain consistent results (our memories change, we get calculations wrong and so forth), but it also means our brains can do a lot more at once.

All this gives even advanced systems great trouble when it comes to multitasking, especially given the difficulty in reducing management of physical space to a few factors. A few years ago, IIRC, one of the most modern laundry robots (still not commercially viable, of course) took 20 minutes to fold a shirt. Something as simple as playing tennis would probably prove a nightmare.
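The asynchronous, fire-when-ready behaviour described above is usually modelled in neuromorphic work with spiking neurons. Below is a minimal leaky integrate-and-fire neuron - a standard textbook toy model, which I'm adding as an illustration; the function name and parameters are mine, not from any real neuromorphic platform:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: charge integrates,
# leaks over time, and a spike fires whenever a threshold is crossed,
# with no global coordination or check of other neurons' state.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(t)   # event-driven output: fire a spike
            potential = 0.0    # reset membrane potential after firing
    return spikes

# A steady sub-threshold input accumulates until the neuron fires,
# then the cycle repeats.
print(simulate_lif([0.4] * 10))  # [2, 5, 8]
```

Each neuron only reacts to the events that reach it, which is what makes the computation asynchronous and massively parallel - and also why its results are noisier and less reproducible than conventional serial programs.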
Anti-Bothsidesism
Somenamelessfool
Jr. Member
Posts: 718
United States
« Reply #29 on: June 24, 2022, 09:48:17 PM »

Humanity is the biggest threat to humanity; by all means AI should conquer humanity, and I would still be saying this even if Roe was upheld.
HillGoose
Atlas Icon
Posts: 12,884
United States
Political Matrix
E: 1.74, S: -8.96
« Reply #30 on: June 24, 2022, 09:50:21 PM »

who even cares at this point, humanity isn't surviving the century
Meclazine for Israel
Meclazine
Atlas Icon
Posts: 13,836
Australia
« Reply #31 on: June 26, 2022, 06:48:14 AM »

who even cares at this point, humanity isn't surviving the century

You heard it first on Atlas forums.
Person Man
Angry_Weasel
Atlas Superstar
Posts: 36,689
United States
« Reply #32 on: June 29, 2022, 02:32:29 PM »

Humanity is the biggest threat to humanity, by all means AI should conquer humanity and I would still be saying this even if Roe was upheld.

AI doesn’t kill people, people kill people.
Aurelius2
Sr. Member
Posts: 2,093
United States
« Reply #33 on: June 20, 2023, 08:43:23 AM »

Bump.

Been a wild year, hasn't it?
PSOL
Atlas Icon
Posts: 19,191
« Reply #34 on: June 20, 2023, 01:20:39 PM »

Outside of the conditions I laid out, not at all. Instead it is perhaps the most liberating possibility since the end of the Third Reich.
Ferguson97
Atlas Star
Posts: 28,141
United States
« Reply #35 on: June 21, 2023, 05:26:03 PM »


It certainly has. But fortunately, AI still doesn't pose an existential threat to humanity.
DaleCooper
Atlas Icon
Posts: 11,046
« Reply #36 on: June 22, 2023, 03:39:25 AM »

It's an existential threat to everything in the human experience worth living for.
Benjamin Frank
Frank
Junior Chimp
Posts: 7,069
« Reply #37 on: June 24, 2023, 07:06:29 PM »

I heard a program on this yesterday.

The first thing is that it isn't necessarily true that 'everything that can be done will be done (or is being done)'. For instance - given, I guess, the survival instinct in humans - there seems to be an unwillingness even among rogue players to give nuclear technology to terrorists.

However, I don't think that applies here, because A.I destroying humanity isn't an immediate concern for rogue players with access to A.I. So I think there is every reason to believe that anything that can be done with A.I will be done with A.I by somebody, and that no attempt at regulation can stop it.

So, the person on the program said the only solution is to fight A.I with A.I. While he said it isn't possible for other A.I to explain how an A.I is doing something (the input process), he also said that wasn't important; what mattered was for A.I to predict what other A.I might come up with (the output process) and then prepare for that possibility/eventuality.

This shouldn't be treated as a binary, though maybe that's the fault of the original question; some comments here have avoided treating it that way. Even if A.I isn't an existential threat, that doesn't mean it isn't a rapidly emerging serious threat in all sorts of ways.
FT-02 Senator A.F.E. 🇵🇸🤝🇺🇸🤝🇺🇦
AverageFoodEnthusiast
Junior Chimp
Posts: 5,332
Virgin Islands, U.S.
« Reply #38 on: June 24, 2023, 09:22:17 PM »

No, I am
VBM
VBNMWEB
YaBB God
Posts: 3,836
« Reply #39 on: July 01, 2023, 10:34:10 AM »

Only in the sense that it will automate away many jobs (including mine).
Technology making humans have to do less labor is a good thing. It’s our economic system that’s the problem.
GM Team Member and Senator WB
weatherboy1102
Atlas Politician
Atlas Icon
Posts: 13,834
United States
Political Matrix
E: -7.61, S: -7.83
« Reply #40 on: July 02, 2023, 07:18:47 PM »

There's a video out there about Artificial General Intelligence (the singularity, where AI becomes as smart as humans and quickly begins outpacing us) which basically said it can be either the absolute best or the absolute worst thing to happen to humanity. It either uplifts us in a way that is unimaginable now, as its intelligence grows exponentially until a single sentience has the mindpower of all of humanity, or it finds us to be competition and destroys us.

Or, in the neutral situation, it could think of us how we think of ants now. Interesting, but generally not worth dealing with. It'd be fitting if the intelligence gap between it and us ends up being the same as us and ants.

To answer the question, it both is and isn't. It's Schrodinger's threat, and we won't know if it's good or bad until we open the box.
the artist formerly known as catmusic
catmusic
Jr. Member
Posts: 1,180
United States
Political Matrix
E: -7.16, S: -7.91
« Reply #41 on: July 03, 2023, 09:35:08 PM »

Yes and no, but I lean to yes. I have serious, serious issues with technology's role and integration in our lives. I'm vehemently against transhumanist things too; all of it is dangerous and we aren't approaching it with the caution we should for such a huge change in humanity.
BigZuck08
Jr. Member
Posts: 1,091
United States
Political Matrix
E: 0.13, S: 1.22
« Reply #42 on: July 23, 2023, 08:27:23 PM »

As of now, no, but if we give AI too much power over our daily lives, then it could pose a serious threat to humanity.
Powered by SMF 1.1.21 | SMF © 2015, Simple Machines