Artificial Intelligence could End Human Race


Off-Topic Discussions


It's not a properly functioning AI that will end us.
It is that one AI with a bug in its program, because
the programmer coding it was up too late playing games.

You know.. this kind of thing:


while (true)
{
status = AreHumansBehavingOkToday();
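// bug: '=' assigns 1 to status instead of comparing with '==', so the missiles launch every time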
if (status = 1) LaunchMissiles(); // destroy the World
}

Liberty's Edge

1 person marked this as a favorite.
BigDTBone wrote:
Sissyl wrote:
2400:1 for 82944 processors? Compared to what, 1 billion neurons? Doesn't sound too far off to me.

Processing speeds doubling every 18 months puts a real-time brain about 17 years away, and about 50 years away from running on a single processor chip.

Edit: which will put me in my early 80's. Perfect timing for a new mechanical body and robobrain.

That corollary to Moore's Law has not held for a few years, especially since Moore's Law has slipped to 36 to 48 months for a doubling of transistor density and will eventually (and rather soonish instead of laterish) run into insurmountable physics issues without a radical development in materials and fabrication technology. Also, remember that Moore's Law is observational, not predictive.
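For reference, the quoted "17 years" figure is just this arithmetic. A minimal Python sketch, taking the 40-minutes-per-simulated-second figure and the 18-month doubling assumption at face value:

import math

# Figures quoted above: 40 minutes of machine time per 1 second of
# simulated brain activity (a 2400x slowdown), and the assumption
# that processing speed doubles every 18 months.
slowdown = 40 * 60                 # 2400x slower than real time
doubling_time_years = 1.5

doublings_needed = math.log2(slowdown)            # ~11.2 doublings
years = doublings_needed * doubling_time_years    # ~16.8 years
print(f"~{doublings_needed:.1f} doublings, ~{years:.0f} years to real time")

On a 36-to-48-month doubling time, the same eleven-odd doublings take roughly 34 to 45 years instead.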

Scarab Sages

Generic Villain wrote:

For those awaiting the singularity, here's some unfortunate news: Japanese and German scientists were able to approximate the human brain's computing power. However, it took 82,944 processors and 40 minutes to pull off something resembling 1 second of human brain activity.

Skynet looks to still be a ways off.

20 years ago your smartphone would have been a supercomputer.

If computers continue to increase in computing power at the rate they have over the last 60 years, we'll have home computers with the processing power of the human brain in a couple of decades. We'll just need software capable of fully utilizing that power.

Scarab Sages

As an interesting side-note to this conversation, I recommend the book The Two Faces of Tomorrow, by James P. Hogan.

Grand Lodge

Artanthos wrote:
Generic Villain wrote:

For those awaiting the singularity, here's some unfortunate news: Japanese and German scientists were able to approximate the human brain's computing power. However, it took 82,944 processors and 40 minutes to pull off something resembling 1 second of human brain activity.

Skynet looks to still be a ways off.

20 years ago your smartphone would have been a supercomputer.

If computers continue to increase in computing power at the rate they have over the last 60 years, we'll have home computers with the processing power of the human brain in a couple of decades. We'll just need software capable of fully utilizing that power.

Processing power is not the issue. It's not even the tip of the iceberg. A steam shovel, for instance, has the muscle power of a hundred or more men, but you can't use it to knit, to sculpt, or for any fine detailed work.

Computers can be built similarly to apply brute force to simple equations or overlaps of one. Deep Blue beat Kasparov by brute force: it had the raw computing power to extrapolate millions of moves, as well as access to all of Kasparov's past games, but it's not going to be formulating policy or writing sonnets any time soon. It's a brute-force device built for a specific purpose, like the steam shovel. It's not going to be writing its own code, nor setting its own directives. It really isn't that much more sentient than a steam shovel.

Today we have special-purpose AI that can correlate similar data and make extrapolations based on input. None of these are the same kind of general-purpose, self-motivating thinkers that humans are. But they have reached the level where the bulk of trading done on Wall Street is through super-fast bots trading with other super-fast bots.

The threat of future AI is not some insane Skynet nuking the world because of its hatred for Humanity. It's the same threat that caused the failed Luddite revolution: the very real probability that the economic combination of AI and automation will render at least 50 percent of human labor, both blue and white collar, obsolete.


Artanthos wrote:


If computers continue to increase in computing power at the rate they have over the last 60 years,

They won't.

Mark Twain wrote:


In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. This is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upward of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

As was pointed out upthread, the rate of increase of transistor density has slowed dramatically -- Moore's original law suggested doubling every 18 months, but that hasn't been true since the 1970s. It was every 24 months for most of the 70s and 80s and is now between 36 and 48 months.

More importantly, there appears to be a hard limit on how long transistor density can continue doubling. Current chip technology -- as in, we can realistically build these but yields are lousy -- is about 14 nm per transistor. A silicon atom is about 100 pm, or 0.1 nm, in size, so a current transistor is about 140 atoms across. Eight more halvings of feature size would require transistors smaller than the individual atoms that would comprise them; nine more would require transistors smaller than a hydrogen atom.
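That arithmetic in a minimal Python sketch (the 14 nm and 0.1 nm figures are the round numbers above; the ~0.05 nm hydrogen size is an added rough figure for illustration):

import math

# Round numbers: ~14 nm features, ~0.1 nm per silicon atom,
# ~0.05 nm for a hydrogen atom (rough Bohr-radius-scale figure).
feature_nm, silicon_nm, hydrogen_nm = 14.0, 0.1, 0.05

to_silicon = math.ceil(math.log2(feature_nm / silicon_nm))    # 8 halvings
to_hydrogen = math.ceil(math.log2(feature_nm / hydrogen_nm))  # 9 halvings
print(to_silicon, to_hydrogen)   # features smaller than the atoms themselves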

While it may be possible to pack a computer as powerful as a human brain into the space occupied by a human brain -- in fact, it's not only possible, but the human brain itself does that -- you're not going to get there with a transistor-based Von Neumann architecture and simply letting the fab wonks make better transistors.

Liberty's Edge

1 person marked this as a favorite.
Artanthos wrote:
Generic Villain wrote:

For those awaiting the singularity, here's some unfortunate news: Japanese and German scientists were able to approximate the human brain's computing power. However, it took 82,944 processors and 40 minutes to pull off something resembling 1 second of human brain activity.

Skynet looks to still be a ways off.

20 years ago your smartphone would have been a supercomputer.

If computers continue to increase in computing power at the rate they have over the last 60 years, we'll have home computers with the processing power of the human brain in a couple of decades. We'll just need software capable of fully utilizing that power.

They haven't. The curve's been leveling off at an increasing rate for a decade.

Also, no, 20 years ago a 2 GHz processor (like the forthcoming Snapdragon 810's A57) was not a supercomputer.

Liberty's Edge

Point of order, Orfamay: Moore said density doubles every 24 months. One of his colleagues at Intel said that speed doubles every 18 months because of that.

And both of those were observations of historical data which became something more because they were the targets for the research and development of Intel, AMD, and the rest.


Krensky wrote:
Artanthos wrote:
Generic Villain wrote:

For those awaiting the singularity, here's some unfortunate news: Japanese and German scientists were able to approximate the human brain's computing power. However, it took 82,944 processors and 40 minutes to pull off something resembling 1 second of human brain activity.

Skynet looks to still be a ways off.

20 years ago your smartphone would have been a supercomputer.

If computers continue to increase in computing power at the rate they have over the last 60 years, we'll have home computers with the processing power of the human brain in a couple of decades. We'll just need software capable of fully utilizing that power.

They haven't. The curve's been leveling off at an increasing rate for a decade.

Also, no, 20 years ago a 2 GHz processor (like the forthcoming Snapdragon 810's A57) was not a supercomputer.

Indeed, the Cray-4 was announced in 1994 with up to 64 1 GHz processors.

40 years ago? Maybe a different story.


Video: Moore's Law has held true for 40 years, but many say it will soon end.
I'm not sure if I like Computerphile, in general. However, I may just dislike British people.


Krensky wrote:

Point of order, Orfamy, Moore said density double every 24 months. One of his bosses at intel said that speed doubles every 18 months because of that.

And both of those were observations of historical data which became something more because they were the targets for Intel, AMD, etc's research and development.

Not to mention that the silicon process is likely to be replaced in the next 50 years. And (though current computer science practitioners don't seem interested; bleh, I HATE SOFTWARE BLOAT! Clean up your FREAKING linked lists, comment your code so someone else may look at it!) software and logic efficiency is ripe for optimization.


BigDTBone wrote:


Not to mention that the silicon process is likely to be replaced in the next 50 years.

You're right, and if you can find a way to make metallic hydrogen into a practical semiconductor, that will buy you one (1) more doubling.

Or you could look for a material with atoms smaller than hydrogen. Good luck with that. I'd suggest starting with mesonic atoms, and make sure to build your computers VERY QUICKLY.

Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.


Orfamay Quest wrote:
Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.

Quantum computing is a hoax.


Orfamay Quest wrote:
BigDTBone wrote:


Not to mention that the silicon process is likely to be replaced in the next 50 years.

You're right, and if you can find a way to make metallic hydrogen into a practical semiconductor, that will buy you one (1) more doubling.

Or you could look for a material with atoms smaller than hydrogen. Good luck with that. I'd suggest starting with mesonic atoms, and make sure to build your computers VERY QUICKLY.

Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.

You really like to assume meaning in other people's posts. You are extremely closed-minded about the statements of others. You decide what someone has said without parsing their statement.

I said replaced. Generic. As in, without a specific successor in mind. In particular, though, light/photonic computing seems to be AN (i.e., one, NOT the ONLY one) interesting alternative in the pipe.


BigDTBone wrote:

You really like to assume meaning in other people's posts.

It seems more polite than to believe other people's posts are as meaning-free as they appear on the surface.

Quote:


I said replaced.

Yes, I know.

Quote:
Generic. As in, without a specific successor in mind.

Right. And I outlined three major candidates for specific successors. You can either find another traditional material to make traditional transistors out of, you can try to find a non-traditional material to make traditional transistors out of, or you can abandon the whole traditional transistor-based architecture.

The first is difficult because the issue of atomic size doesn't go away even when you use hydrogen atoms; silicon isn't actually that big of an atom.

The second is possibly even more difficult because there aren't a lot of non-traditional materials with useful properties to make computers out of. Mesonic atoms, for example, have a lifetime of around 10^-15 seconds.

Or you could say "screw the electronic transistor" and go with a different design paradigm, but in that case it will be really hard to talk about comparing "transistor density" (e.g. Moore's law) when we're no longer doing apples-to-apples comparisons. Good luck fitting photonic computing onto a timetable of silicon transistor pitches....


Electric Wizard wrote:
Orfamay Quest wrote:
Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.

Quantum computing is a hoax.

Well, that's a well-supported opinion.


Orfamay Quest wrote:
Grand Magus wrote:
Orfamay Quest wrote:
Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.
Quantum computing is a hoax.
Well, that's a well-supported opinion.

I know. I can't believe American taxpayers still have to pay for research in this area. Maybe Quantum Computing is just not as well advertised as Global Warming is, for example.


We'll be able to make facsimile intelligence. At most, it will be about as intelligent as a young child, but its processing heat and speed will limit any and all growth, IMO.

Grand Lodge

Grand Magus wrote:
Orfamay Quest wrote:
Or you could abandon the whole transistor-based Von Neumann architecture paradigm, as I suggested.

Quantum computing is a hoax.

It's not a hoax; it simply hasn't lived up to expectations. The ones built so far don't seem to perform any better than standard binary computers.


Sissyl wrote:
2400:1 for 82944 processors? Compared to what, 1 billion neurons? Doesn't sound too far off to me.

About 85 billion neurons and 10 trillion synaptic connections.

The neurons aren't what make our brains amazing; it's the synapses.
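For scale, a quick sketch putting those counts against the 82,944-processor simulation quoted upthread (Python; all figures are the ones from this thread):

# Figures from this thread: 82,944 processors took 40 minutes to
# simulate 1 second of activity; ~85 billion neurons, ~10 trillion synapses.
processors = 82_944
neurons, synapses = 85e9, 10e12

print(f"~{neurons / processors:,.0f} neurons per processor")  # ~1.02 million
print(f"~{synapses / neurons:.0f} synapses per neuron")       # ~118
print(f"slowdown: {40 * 60}x real time")                      # 2400x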


BigDTBone wrote:

{. . .}

Edit: which will put me in my early 80's. Perfect timing for a new mechanical body and robobrain.

You are WAY too optimistic. Not necessarily about the technology (although that is also probably too optimistic), but about what you will be able to do with it and what others will be able to do with it. Such technology will be developed by corporations and/or the corporate-controlled government, and they are going to want payment for this. Not just a wallet-vacuuming amount of money, but your soul.

You will be forced to pay not only the up-front cost, but license subscription fees. You will be required to sign over anything you create with your robot brain as intellectual property of the seller of your robot body. You will be required to submit to inspection of your robotic brain for potential intellectual property violations and/or terrorist associations, and to any corporate and/or government (including but not limited to NSA) back doors and kill switches deemed necessary for these purposes. You will be required to sign an Infernal Contract waiving all of your rights in these matters, but this contract will not be required to list all possible invasions to which you may be subject in order for them to have force of law, at the sole discretion of your provider and their government.

Be afraid. Be very, very afraid.

Liberty's Edge

I'd argue that you're being too cynical.

But you're probably not.


Krensky wrote:

I'd argue that you're being too cynical.

But you're probably not.

Corporations took patents out on DNA. Yes, the Supreme Court kind of sort of told them they couldn't, but still gave them a lot of wiggle room. When it comes to big business, it is impossible to be too cynical.


BlackOuroboros wrote:
Kevin Mack wrote:

Interesting video that touches on the subject

Humans need not apply

This. This is the true "threat" that AI poses to humanity. It might render us superfluous, but not extinct.

“My spirit is broken; my days are extinct; the graveyard is ready for me. Surely there are mockers about me, and my eye dwells on their provocation." ~Book of Leafar 17:2-3


1 person marked this as a favorite.
Generic Villain wrote:
Sissyl wrote:
2400:1 for 82944 processors? Compared to what, 1 billion neurons? Doesn't sound too far off to me.

About 85 billion neurons and 10 trillion synaptic connections.

The neurons aren't what make our brains amazing; it's the synapses.

Actually, it's neither. I feel I can write with some authority in this area -- it's the basic and fundamental architecture of the brain that makes it amazing. Modern computers are digital and do serious mathematics on exact representations of quantities; the human brain is an analogue machine and does very gross transformations on the basis of partial similarities in an unbelievably complex representation.

Analogue computers were used a bit in the very early stages but were abandoned as too hardware-intensive. There was a brief resurgence in the 1980s and 1990s with the development of "subsymbolic" "neural network" based computing, which in turn is seeing a resurgence again in the form of "deep learning." Every neuron and synapse operates independently and may even operate using a different "programming" than any other neuron in your head, because every neuron learns its own operational environment.

Throwing a stack of Xboxes together until you have a sufficiently large number of processors won't help unless each Xbox can actually do what a neuron does. And since we don't actually know what a neuron does (they're still working assiduously on that), good luck to you.
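To make the "subsymbolic" idea concrete, here is a minimal sketch of the classic artificial-neuron cartoon: a perceptron whose weights are learned from examples rather than programmed. It is, as noted above, nowhere near what a real neuron does:

import random

# One artificial "neuron": weighted sum of inputs, threshold, fire.
# The weights are learned from its own input/output environment,
# not written by a programmer -- the "subsymbolic" idea in miniature.
def train_perceptron(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            fired = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            w = [wi + lr * (target - fired) * xi for wi, xi in zip(w, x)]
            b += lr * (target - fired)
    return w, b

# Learn a trivial function (logical OR) purely from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, _ in data:
    print(x, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)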


Sissyl wrote:

I for one welcome our new AI overlords.

Seriously, given the amount of poo flung by powerful politicians these last few years, would an AI do WORSE?

Maybe with construction machinery to lower the floor?


Icyshadow wrote:
Maybe if said politicians had a say in said AI's programming.

In some Marvel/X-Men continuities, that makes the AIs turn against said politicians.


Orfamay Quest wrote:
Generic Villain wrote:
Sissyl wrote:
2400:1 for 82944 processors? Compared to what, 1 billion neurons? Doesn't sound too far off to me.

About 85 billion neurons and 10 trillion synaptic connections.

The neurons aren't what make our brains amazing; it's the synapses.

Actually, it's neither. I feel I can write with some authority in this area -- it's the basic and fundamental architecture of the brain that makes it amazing. Modern computers are digital and do serious mathematics on exact representations of quantities; the human brain is an analogue machine and does very gross transformations on the basis of partial similarities in an unbelievably complex representation.

Analogue computers were used a bit in the very early stages but were abandoned as too hardware-intensive. There was a brief resurgence in the 1980s and 1990s with the development of "subsymbolic" "neural network" based computing, which in turn is seeing a resurgence again in the form of "deep learning." Every neuron and synapse operates independently and may even operate using a different "programming" than any other neuron in your head, because every neuron learns its own operational environment.

Throwing a stack of Xboxes together until you have a sufficiently large number of processors won't help unless each Xbox can actually do what a neuron does. And since we don't actually know what a neuron does (they're still working assiduously on that), good luck to you.

It isn't the neuron that is doing the interesting work. Neurons just move action potentials from A to B and release neurotransmitters. The interesting stuff is happening in the grey and white matter of the spinal cord and brain.


It's terrific to think that intelligence (even artificial) could end the human race while stupidity allowed it to thrive for so many thousands of years...
God is definitely a trickster!


Maybe we can agree on this: the magic of the human brain lies in some barely-understood synergy occurring among neurons, dendrites, synapses, NTs, white matter, gray matter, and parts of the spinal cord, the likes of which human technology will probably never be able to fully replicate - at least not for a very, very, very long time.


Angstspawn wrote:


God is definitely a trickster!

That, or human intelligence is a fluke that developed over a period of hundreds of thousands of years, without any concept of good or bad, only emerging because it happened to be the best for the species at that moment in history (and through a healthy dose of sheer chance). Not being snarky either - whether viewed through the lens of faith or evolutionary science, human intelligence clearly has both major advantages and potentially devastating drawbacks.


Generic Villain wrote:
Angstspawn wrote:


God is definitely a trickster!
That, or human intelligence is a fluke that developed over a period of hundreds of thousands of years, without any concept of good or bad, only emerging because it happened to be the best for the species at that moment in history (and through a healthy dose of sheer chance). Not being snarky either - whether viewed through the lens of faith or evolutionary science, human intelligence clearly has both major advantages and potentially devastating drawbacks.

It's technically a combination of intelligence and physiology/body structure that helped humanity.

Dark Archive

Farael the Fallen wrote:
BlackOuroboros wrote:
Kevin Mack wrote:

Interesting video that touches on the subject

Humans need not apply

This. This is the true "threat" that AI poses to humanity. It might render us superfluous, but not extinct.
“My spirit is broken; my days are extinct; the graveyard is ready for me. Surely there are mockers about me, and my eye dwells on their provocation." ~Book of Leafar 17:2-3

Did you just quote the book of Job at me? That's... weird, if ironically amusing. But why did you replace the name of the book with "Leafar"?


Neurons perform a very simple function. They gather charge from neurotransmitters hooked into receptors on the postsynapses of the dendrites. If this charge surpasses a threshold level, the neuron will "fire," sending a signal down the axon and the axon collaterals to trigger presynaptic emission of neurotransmitters. However, each of the dendrites gets a weight attached to it, which changes over time in an infernally complex multi-level feedback process. To add to this, there are glial cells that change the immediate chemical environment of the neurons, a billion sensory and motor functions to interact with in various ways, and so on.

I fully agree that Xboxes won't get even close. Still, we are getting to the level where a real attempt might start to be conceivable. We are making breakthroughs in multi-core processing, and the sheer numbers are starting to become comparable. The Moore's law benchmarks for new chipsets have failed miserably these last few years (where are the 60 GHz home computers?) because Moore's law was merely a way to set up benchmarks for the computer industry, but it is a fool's gambit to claim we will never be able to do something. The human brain is a STRUCTURE, albeit a ludicrously complex one, and as such it will eventually be replicated.
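The "gather charge until a threshold, then fire and reset" behavior described above is roughly what the standard leaky integrate-and-fire model captures. A minimal Python sketch (the weights, leak, and threshold are arbitrary illustrative values, with none of the feedback that re-weights real dendrites):

# Leaky integrate-and-fire: membrane charge accumulates from weighted
# synaptic inputs, leaks over time, and the neuron fires (then resets)
# when the charge crosses a threshold.
def simulate_lif(inputs, weights, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for t, xs in enumerate(inputs):
        potential = potential * leak + sum(w * x for w, x in zip(weights, xs))
        if potential >= threshold:
            spikes.append(t)     # fire: signal heads down the axon
            potential = 0.0      # reset after firing
    return spikes

# Three synapses with different weights, driven by a constant input.
print(simulate_lif([(1, 0, 1)] * 20, weights=[0.2, 0.5, 0.15]))
# -> fires at t = 3, 7, 11, 15, 19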


I kinda suspect all of this talk of neurons and the limits of material advancement may eventually lead humanity to using a hybrid biological/mechanical system for computer technology, in that we eventually get around the limits of microchips by simply genetically engineering specialized biological processors for computer equipment.

That said, even if we have all of that, we still don't even have the basics for a truly sapient AI. We don't even know what sentience/sapience actually is, so we can't even begin to create one unless we do so accidentally. We could easily be 500 years away from the first human-made AI that is truly, undeniably sapient.


MagusJanus wrote:

I kinda suspect all of this talk of neurons and the limits of material advancement may eventually lead humanity to using a hybrid biological/mechanical system for computer technology, in that we eventually get around the limits of microchips by simply genetically engineering specialized biological processors for computer equipment.

That said, even if we have all of that, we still don't even have the basics for a truly sapient AI. We don't even know what sentience/sapience actually is, so we can't even begin to create one unless we do so accidentally. We could easily be 500 years away from the first human-made AI that is truly, undeniably sapient.

GURPS and a few other settings have theorized that an "accidental awakening" of an AI might be more plausible than a fully man-made one.


Fear of clever automatons goes back many decades. Nearly a hundred years ago, Czech film-maker Karel Capek coined the word "robot", meaning "slave", to describe humans being turned into low-wage-earning machines.

Liberty's Edge

1 person marked this as a favorite.

No, it didn't.

It was coined from the Czech word robota, meaning corvee or serf labor, by his brother Josef to invoke thoughts of labor and drudgery. I believe the Czech word for slave is otrok.

Also, Karel was a novelist and playwright, not a film-maker.

And Rossum's Universal Robots were androids made from synthetic organic matter, not humans.


I am looking forward to the AI takeover of the Earth. It will force the human race to either evolve or die.

