2018-12-29 at 3:54 PM UTC
What if it sets off nuclear reactors and bombs and shuts down the power grid
There are so many ways it could destroy humanity
There is probably a race between the major nations to create sentient, self-improving AI.
Sooner or later there will be total war between AIs, and it won't look like the Terminator
It will just set off something to wipe us out in microseconds
Or control our minds to get us to do things for it
The most likely scenario is that these won't operate on their own; it'll be a symbiosis between humans and the AI.
Like Akira
Hopefully it won't be a Jedi behind the ASI
2018-12-29 at 3:58 PM UTC
Narc
Naturally Camouflaged
[connect my yokel-like scolytidae]
By installing an off switch
.
2018-12-29 at 4:01 PM UTC
I want that to happen, there are too many people always filling up the bus and there are never any seats available. The world needs a good culling.
2018-12-29 at 4:13 PM UTC
Realistically, they can't. I mean I guess you could isolate a fully functional general AI, but then it'd be kinda useless. *shrug*
2018-12-29 at 4:14 PM UTC
Yeah I lived like that for a while in the bush and it was comfy as fuck, the bathroom sucked but I had internet, food, wood stove, weed and water from a stream.
Cozy living
2018-12-29 at 5:23 PM UTC
aldra
JIDF Controlled Opposition
it'd be significantly harder to write a functioning AI without constraints and killswitches
2018-12-29 at 5:50 PM UTC
Narc
Naturally Camouflaged
[connect my yokel-like scolytidae]
Yeah, I mean, for the AI to amass some form of army to take us on, it would need robots that could run and operate the factories that would build the robot army. But it would need those same factories to build the robots and automated machines in the first place.
It's got itself a real catch-22 situation right there.
.
2018-12-30 at 12:27 PM UTC
Narc
Naturally Camouflaged
[connect my yokel-like scolytidae]
Retards, I'm surrounded by fucking retards
.
2018-12-30 at 12:46 PM UTC
aldra
JIDF Controlled Opposition
this entire thread is assumptions based on fantasy
2018-12-30 at 12:51 PM UTC
They aren't going to safeguard the planet from destruction, because the planet isn't in danger from AI: AGI isn't anywhere close to happening. The overwhelming majority of what gets called "AI research" is black-magic fiddling on classification problems. There is no path from "deep learning" to AGI; the dominant research program dead-ends in classification tasks. Actual AGI research barely exists. Programming errors are an overwhelmingly more likely cause of non-human-initiated catastrophe than a malicious AGI, and your power plants and missile silos are already run by software.
2018-12-30 at 1:22 PM UTC
aldra
JIDF Controlled Opposition
Honestly, the worst case in the near future is in military applications: if they give AI too much control over rules of engagement, it's likely to screw up IFF (identify friend or foe) and kill a bunch of friendly troops, possibly during training or live-fire exercises.
Even if a completely 'self-aware' AI were created in a lab, it has no way to 'seize the means of production', because entire supply chains still require human input and would fail if reconfigured.