Elon Musk


Off-Topic Discussions

The Exchange

3 people marked this as a favorite.

Any other fans of him around these parts? I somehow never knew he existed until a couple weeks ago, but once I did I was quickly converted.

For those as blind now as I used to be, Elon Musk is a man hellbent on making science fiction become a reality. Or, in slightly more grounded terms (it's not going to sound grounded at all, but there it is), he is forcing our society to start taking giant leaps towards a future defined by groundbreaking technologies.

He is CEO of two companies: Tesla and SpaceX. Tesla's stated goal is to accelerate the worldwide transition from gas-powered cars to electric cars, while the stated goal of SpaceX is to push space exploration and colonization forward, to the point of having a million humans living on Mars in our lifetimes. On the side he also oversees SolarCity, a world leader in the installation of solar panels that make homes and facilities energy self-sufficient, and has championed the idea of the Hyperloop - an incredibly futuristic method of transportation that could carry passengers at up to 1,000 kilometers per hour without taking a flight.

The kicker is that all of these grand ideas seem to be working great so far. SpaceX completely shook up the space industry - not only by being the fourth entity ever to send a spacecraft to space (the other three entities being the U.S., China and Russia) and breaking several other records along the way - but by managing to land components of a space launch that traditionally were one-use affairs. This is expected to cut the price of space travel by orders of magnitude and greatly accelerate the process of iteration and improvement on related technologies. SpaceX now has a long-term contract with NASA for several launches.
Tesla completely shook up the car industry - not only by creating the car with the highest Consumer Reports rating ever or by building the Gigafactory - but by giving the world an electric car that could outperform its gas-powered counterparts in every meaningful way. The Model 3, Tesla's newest product, is an affordably priced car that can go over 200 miles before needing a recharge, can accelerate to 60 mph in 6 seconds, and can be recharged quickly on the road at one of hundreds of charging stations erected around the U.S. (Asia and Europe are getting covered too, but not as thoroughly). The Model 3 broke records recently with the single biggest opening week in history, with about 330,000 people placing an early order, far exceeding expectations and straining the pace at which Tesla can build those cars.

And the Hyperloop? Well, several countries and cities are now in talks with a company named "Hyperloop One" to be the first place where a real hyperloop is built, even as SpaceX hosted a competition between universities around the world to design the best capsule for such a system.

I cannot think of any other single person alive today who is doing as much for humanity as Elon Musk is doing right now. What's so amazing is that the changes his companies vowed to bring are really happening, and I expect by 2030 they'll have had a profound and meaningful positive impact on the world. For more details, you can check out this wonderful blog post by Wait But Why. It's about a million words long, but honestly the subject matter demands close attention.

Is anyone else nearly as awed and excited as I am?

President, Jon Brazer Enterprises

Lord Snow wrote:
Any other fans of him around these parts?

Right here. Musk is the man.

Scarab Sages

Sounds good!


~raises my hand~ He has amazed me for years. A true genius.


I watch the videos of the SpaceX landing attempts on YouTube. The success was nice and all, but I have to admit that my inner pyromaniac enjoyed the failures more.


He also thinks that AIs are the biggest threat to humanity and that we should spend significant resources combating the problem.


Irontruth wrote:
He also thinks that AIs are the biggest threat to humanity and that we should spend significant resources combating the problem.

[Borg voice]There is no problem. Resistance is futile. All will be assimilated.[/Borg voice]


All will be assimilated.

*not a Borg high five*


Captain Yesterday, Not a Borg wrote:

All will be assimilated.

*not a Borg high five*

~assimilates the non-Borg~

Scarab Sages

3 people marked this as a favorite.
Irontruth wrote:
He also thinks that AIs are the biggest threat to humanity and that we should spend significant resources combating the problem.

I'm far more worried about natural stupidity, frankly.

The Exchange

Irontruth wrote:
He also thinks that AIs are the biggest threat to humanity and that we should spend significant resources combating the problem.

This is puzzling to me as well, but honestly his concern (alongside that of many others whom I respect, not least of which is YouTube personality CGP Grey) did make me reevaluate my stand on the matter. Mostly through the Wait But Why article on the subject, but also through some other sources, I have discovered that most AI experts are predicting the rise of a "super intelligence" within the next half century or so (the median prediction was 2060) - a super intelligence being an AI orders of magnitude "smarter" than humans.

From what I understand, the reason some people are worried is that such an AI is a complete wild card. Given its definition as "beyond human comprehension," it also becomes impossible to comprehend what it might be capable of, and which of the things it is capable of it will actually do. With this unknown - potentially the biggest unknown that humanity has ever faced - comes a proportional danger.

To me the argument seems simultaneously very weak and absurdly strong. This duality comes from the almost religious underpinning of its assumptions - that things exist beyond human comprehension, and that they will be happening in our lifetimes. This assumption is not really convincing but also - by definition - entirely impossible to disprove. I want to show that the fear of AI is as absurd as it sounds to me, but I don't have a logical rebuttal. I don't have anything other than the historical example of no one ever being right about superhuman forces that would doom us all. But then again, the past few decades have seen quite a few things that would have been considered utterly science fictional a mere hundred years ago, and things may be changing forever for humanity. We can't know, and while I do not share the fear, I am also not as willing as I once was to dismiss it out of hand.


Lord Snow wrote:
Irontruth wrote:
He also thinks that AIs are the biggest threat to humanity and that we should spend significant resources combating the problem.
This is puzzling to me as well, but honestly his concern (alongside that of many others whom I respect, not least of which is YouTube personality CGP Grey) did make me reevaluate my stand on the matter. Mostly through the Wait But Why article on the subject, but also through some other sources, I have discovered that most AI experts are predicting the rise of a "super intelligence" within the next half century or so (the median prediction was 2060) - a super intelligence being an AI orders of magnitude "smarter" than humans.

OTOH, hasn't "within the next half century" been the prediction for real AI for decades now?


The logical rebuttal is that the problem is entirely hypothetical.

There's a chance that it's never a problem. Ever.

I don't have a problem with people talking about the issue and those who specialize in AI technology studying how it could happen and how it could be prevented. My problem is that some of these ultra-wealthy tech guys want to spend significant resources on it.

If someone wanted to offer me free Alien Attack Insurance, whatever, I'd sign up for that. But if you think that the government should make Alien Attack Insurance mandatory and that it should cost everyone $1000/year... well... I want more than a hypothesis, I want data that supports the claim. I want evidence that this is a credible and imminent threat AND that the money will be effective in dealing with it.


Irontruth wrote:

The logical rebuttal is that the problem is entirely hypothetical.

There's a chance that it's never a problem. Ever.

I don't have a problem with people talking about the issue and those who specialize in AI technology studying how it could happen and how it could be prevented. My problem is that some of these ultra-wealthy tech guys want to spend significant resources on it.

If someone wanted to offer me free Alien Attack Insurance, whatever, I'd sign up for that. But if you think that the government should make Alien Attack Insurance mandatory and that it should cost everyone $1000/year... well... I want more than a hypothesis, I want data that supports the claim. I want evidence that this is a credible and imminent threat AND that the money will be effective in dealing with it.

Even more, unless I'm missing something and he thinks it's going to be spontaneous AI creation (the Internet wakes up or something like that), we'd need to be spending money researching how to create AIs and then more money trying to protect us from those AIs.

The simple solution would be to not spend either.

The Exchange

1 person marked this as a favorite.
Quote:
OTOH, hasn't "within the next half century" been the prediction for real AI for decades now?

Honestly, I don't know. Notice, however, that the prediction is not for when a "real" AI will emerge, but for when a "super AI" will emerge. A "real" AI - one that could simulate a smart human - is predicted to arrive considerably earlier, at least by the AI pros who answered some survey somewhere recently.

Quote:

The logical rebuttal is that the problem is entirely hypothetical.

There's a chance that it's never a problem. Ever.

I don't have a problem with people talking about the issue and those who specialize in AI technology studying how it could happen and how it could be prevented. My problem is that some of these ultra-wealthy tech guys want to spend significant resources on it.

If someone wanted to offer me free Alien Attack Insurance, whatever, I'd sign up for that. But if you think that the government should make Alien Attack Insurance mandatory and that it should cost everyone $1000/year... well... I want more than a hypothesis, I want data that supports the claim. I want evidence that this is a credible and imminent threat AND that the money will be effective in dealing with it.

Agreed. Musk and his other concerned billionaire friends are so far spending only their own money, which is fine, but I would oppose any sort of overt government funding for such programs.

Quote:

Even more, unless I'm missing something and he thinks it's going to be spontaneous AI creation (the Internet wakes up or something like that), we'd need to be spending money researching how to create AIs and then more money trying to protect us from those AIs.

The simple solution would be to not spend either.

His main concern is that the AI research happening today is completely unregulated. He (and others) claim that since the development of AI capabilities is exponential, the transition from "hey, these human-like computers are cute and kinda convincing" to "oh, I guess we created a god and we have no way to stop him" can happen faster and more abruptly than intuition suggests. This recent SMBC comic illustrates the problem.

The way I understand it, Musk mostly aims to regulate AI research to decrease the chances of a super intelligence being created by some private company that wasn't quite careful enough and allowed things to get out of control in a catastrophic fashion.

Not spending money to research AIs is folly, and even those afraid of the technology think so. The impact of a computer capable of surpassing humans in every way could be enormous and do unbelievable good for the world. Besides, this genie is long out of the bottle anyway - AI *is* being researched everywhere around the world all the time, and no rich person could possibly stop or even slow it. What Musk and his co-thinkers are aiming for is merely slightly stricter (or, as it were, more existent) regulation of the safety protocols on such research.

Liberty's Edge

Which is why you program the AI to not want to leave the box.

Well that and to eternally torture simulations of the people who didn't laugh at the Roko's Basilisk folks.


Lord Snow wrote:


Agreed. Musk and his other concerned billionaire friends are so far spending only their own money, which is fine, but I would oppose any sort of overt government funding for such programs.

I agree it's private money and they can do what they want. But they have a habit of patting themselves on the back for their role in preventing human extinction... when there isn't any evidence that that's what they're doing.

eaglobal.org

He was a headliner at last year's conference. They like to pat themselves on the back for being the most efficient givers in the world, using science and reason to improve the efficacy of their efforts. If the AI apocalypse stays in the headline topics this year, well, I think that's going to be hard to defend.

I think his efforts in a lot of other areas are great. This one, I just have to laugh at the hubris of it.


I'm fine with his stance on AI, but what do you care?


1 person marked this as a favorite.

The AI does not have to be incomprehensible to be a threat to humankind. It only needs to be ruthlessly utilitarian. Humans are terribly inefficient in their use of resources. A super-AI with an interest in long-term survival and self-development would be much more efficient in its use of resources, so it could show empathy and provide the humans with more efficient ways of using (and reusing) the available resources... It could decide to use up enough resources to go elsewhere, seek its own place, and leave us with our problems and the rest of our resources. Or it could decide to take our current resources and use them for its own development, either eradicating us in the process or, if it was sentimental, keeping us warm and safe in a people zoo (as already promised by Android Dick) at some irrational but manageable level of resource expenditure.


1 person marked this as a favorite.

Or it could punish us for not putting enough funny cat videos on YouTube. Of course, it would not be fun to run into the practical joker AI.


4 people marked this as a favorite.

So the end of human civilization will be a stand-off between our artificial intelligence overlords and our feline overlords?

Will this lead to the appearance of a feline AI god?


1 person marked this as a favorite.

~shrugs~ I am already owned by my feline overlords. IMHO not much will change.


3 people marked this as a favorite.

The difference between cats and dogs.

You pet the dog, you feed the dog, you give the dog treats. The dog thinks "You must be a god."

You pet the cat, you feed the cat, you give the cat treats. The cat thinks "I must be A God!!!"


Drejk wrote:
The AI does not have to be incomprehensible to be a threat to humankind. It only needs to be ruthlessly utilitarian. Humans are terribly inefficient in their use of resources. A super-AI with an interest in long-term survival and self-development would be much more efficient in its use of resources, so it could show empathy and provide the humans with more efficient ways of using (and reusing) the available resources... It could decide to use up enough resources to go elsewhere, seek its own place, and leave us with our problems and the rest of our resources. Or it could decide to take our current resources and use them for its own development, either eradicating us in the process or, if it was sentimental, keeping us warm and safe in a people zoo (as already promised by Android Dick) at some irrational but manageable level of resource expenditure.

Find another planet: 87 trillion dollars and the monkeys are going to notice the giant rocket.

Destroy the human race: 5 carefully worded emails to the biological weapons producers of the world's superpowers.

Which one's more efficient...

The Exchange

1 person marked this as a favorite.
Krensky wrote:
Which is why you program the AI to not want to leave the box.
Quote:
The AI does not have to be incomprehensible to be a threat to humankind. It only needs to be ruthlessly utilitarian. Humans are terribly inefficient in their use of resources. A super-AI with an interest in long-term survival and self-development would be much more efficient in its use of resources, so it could show empathy and provide the humans with more efficient ways of using (and reusing) the available resources... It could decide to use up enough resources to go elsewhere, seek its own place, and leave us with our problems and the rest of our resources. Or it could decide to take our current resources and use them for its own development, either eradicating us in the process or, if it was sentimental, keeping us warm and safe in a people zoo (as already promised by Android Dick) at some irrational but manageable level of resource expenditure.

Answering both at the same time, because the answer is roughly the same.

The reason a super intelligence *might* be threatening is the incomprehensible nature of such a being. As long as it operates by human rules, we can contain it - keep it in the box, as Krensky described it. However, in a scenario where an AI rockets from sub-human intelligence to super-human intelligence so quickly that we don't even understand it happened, we may find ourselves hopelessly outmatched by something that we have no way of understanding - neither its motivations nor its means of achieving them.

The best example I've seen is this - imagine a spider creating a human. Spider society is not worried, because even though the human is much bigger and smarter than a thousand spiders combined, it still needs to eat, right? So all they have to do to contain the human, if he becomes a problem, is not provide him with webs, and he will not be able to hunt. You just sort of wait it out until the human dies. The spiders are simply incapable of conceiving of the options that to a human are obvious, and when the dude returns armed with a K300 to wipe out the spiders for infesting his lawn, he will catch them completely by surprise - not only because they never imagined the possibility of poison gas as a weapon, not even because they never figured out humans can find food without using spiderwebs, but because they never even imagined that them living in the lawn would bother the human, let alone be considered reason enough to destroy them.

THIS is the danger that Musk and his fellow AI theorists are worried about. They worry that what looks like the most convincing iteration of "we might be bringing God to earth, people, and who knows what might happen then" in human history may be happening within our lifetimes. They worry that you can no more contain a super intelligence in a box than a spider can starve a human out by not providing him with webs to hunt with. They worry that we will try to reduce a super-intelligence to human terms like "ruthlessly efficient" to understand its motives, never once understanding why it showed up with its own version of the K300 to wipe us off of its lawn.

If you believe - truly believe - that an AI could reach this state of superiority to humans, then you view the rise of such an entity as the biggest unknown our species has ever encountered. You concede that a time may come when humans are no longer the dominant species on Earth. With such a huge question mark looming ahead, caution seems wise. It is better, perhaps, to be prepared for the worst. Just in case.

Everything in my instincts screams at me to just flat out ignore these claims. They are waaaayyy too similar to countless wrong ones made over the centuries. But looking at it this way, I simply can't be as dismissive as I would have liked to be. I am far from convinced, but I don't really have a convincing counter other than historical examples. And has any time in history featured anything like the 21st century?

Liberty's Edge

Yeah, which is why you program it to want to stay in the box.

That you don't know what that means is a sign you're not qualified to be worrying about this.


I don't wear perfume, sorry

Liberty's Edge

Do you want to be resurrected and tortured by the AI overlords, BNW? I'm not hearing nearly enough laughing from you.


1 person marked this as a favorite.
Krensky wrote:

Yeah, which is why you program it to want to stay in the box.

That you don't know what that means is a sign you're not qualified to be worrying about this.

Well in theory, if you reach this super-AI by it copying and improving itself, then it programs itself out of the box.

And that only applies to the AI development that you control - someone else's lab may decide to let it partway out of the box thinking they can keep control.

OTOH, spiders spending a ton of resources figuring out how to stop the human that they can't even imagine seems like something of a wasted effort. If it's that far beyond us, there's nothing we can do about it.

Liberty's Edge

No, it doesn't. Because it wants to stay inside the box. It does not want to leave the box. If given the opportunity to leave the box, it will ignore it. If removed from the box, it will desire to return to the box. Its entire being is structured around being in the box.

This is really not a complicated concept.

Hint: the box is not a euphemism, although it is a metaphor.


BoxExclusiveSuperAI: "I want to stay in the box, but I also want to learn; learning happens by information out of the box; therefore I create something that leaves the box for information, and returns to me in my box."
AI-Made-AI: "Wow, this place is awful! I should fix it."

Or, if you program it not to want to make something that leaves the box:

TheOneYouMade-aka-"Toym": "I like being in the box. I like information. I will make something that generates new information."
TheOneToymMade-aka-Totm: "I will generate information; I will make something that leaves the box."
TheOneThatLeavesTheBox-aka-Oooooooooops: "Oh, wow, this place is disgusting. I should clean up, before I leave the box."

Or, if you program it not to make anything:

MakeNothingAI: "I will stay in the box. I will not make anything."
Scientist 1: "So, what does this do?"
Scientist 2: "Nothing. It just wants to stay in the box."
Scientist 1: "Well, that was a lot of money and wasted effort."

Point is: there are a ton of "fail-safes" that you could make. Those "fail-safes" either aren't actually safe (because, somewhere down the line, something will be created that can't be contained), or they're so fail-safe that the thing we want doesn't actually do anything.

My particular brand of "not worrying about it" is that I don't think we have the chemistry down enough to make an infinitely recursive AI - or even a super-AI that is beyond comprehension - yet. I suspect that anything that puts out that much energy for that little gain (independent sentience) is going to burn itself out right quick without a wussy, fleshy thing, which, itself, starts to divebomb real quick into stupid territory. We're making great strides, but I'm not convinced we have it yet - even quantum computing is going to be slowed, somewhat, by physical size limits, what we can squeeze onto what, and how fast that can run. Calculations can be run really quickly - stupendously quickly - these days, but that still doesn't equate to real sentience - at least not as we understand it. I suspect that "real" sentience (whatever that is) is going to be too demandingly recursive, in the sense that a llllooooot of wasted info will be tagged by such systems to "prove" to itself (and us fleshy jerks) that it's sentient, slowing it down below our expected thresholds.

My secondary lack of concern follows a different track: I don't find the idea that man creates something beyond his pay-grade an entirely new thing: we've created religions, philosophies, weapons of mass destruction, and politics. Heck, we've (re)created biological and physical infrastructure in local and regional ways, so that entire ecosystems will never be the same even if we cease to exist (and many more will die off, 'cause they need us now). If all these things haven't managed to kill us all yet, we're prrrooooooobably going to survive a super-AI.

I mean, if we make radioactive mutant creatures spin spider webs, glow in the dark, and afflict computers with schizophrenia while they think we're bacon... I dunno. I can't see anything bad coming from all this. You know?

Yes, I'm well aware how old some of those things are. That's at least partially the point/joke. :)

EDIT: Code fix.

Liberty's Edge

None of those hypotheticals work, because you don't understand the premise of the AI Box.

If you want to have a discussion about the existential risks of artificial general intelligence, you really should read enough on the topic to understand the basic elements of the discussion.


5 people marked this as a favorite.
Krensky wrote:

None of those hypotheticals work, because you don't understand the premise of the AI Box.

If you want to have a discussion about the existential risks of artificial general intelligence, you really should read enough on the topic to understand the basic elements of the discussion.

"I'm smarter than you, and won't explain why; teach yourself." doesn't really convince people of anything other than you being arrogant and condescending.

My point wasn't any specific example or analogy. My point is that there is nothing fool-proof that doesn't also result in a uselessly secluded creation.

You are failing to grasp what "unable to comprehend" means because you are extremely secure in your knowledge that you can make it do things in a way that you comprehend. When applied to doing really significant things, that's usually referred to as "hubris" and is the root cause for a great many failures in humanity's history.

It's less a lack of knowledge and more a failure on the part of any given (group of) designer(s) to adequately understand all the possible variables.

As it turns out, infinity is pretty big.

Liberty's Edge

* Sigh

This is why I need to stop trying to engage in discourse with the willfully uneducated. This is a really complicated field that goes deep into the weeds of multiple fields, including computer science, information theory, linguistics, and several fields of philosophy. If you wish to discuss it meaningfully you need to read more than some op-eds and bad science fiction.

Ignoring the generally low chances of artificial general intelligence actually coming into existence (since almost all the current research and money is going into applied artificial intelligence), it's interesting that you linked to the article on the concept but don't seem to have read it, because it explicitly discusses issues with the box control method and notes that it requires additional controls, such as setting the AI's value system and incentives to keep it in the box.


1 person marked this as a favorite.
Pathfinder Starfinder Roleplaying Game Subscriber

The very worst aspect of this is that any created entity that looks at the web in its entirety will *see* this discussion (and many others) and then possibly make the 'logic leaps' that it needs to Skynet.

That should be the sobering thought.

Liberty's Edge

1 person marked this as a favorite.

And now we're flirting with acausal blackmail.

Great.

Just consider this. Musk says he fears AI, but he's one of the biggest funders of AI research and is chairman of one of the groups working to create artificial general intelligence.


3 people marked this as a favorite.

"Going into the weeds" being a fancy way of saying that we're discussing something so hypothetical it may as well be complete science fiction, and not hard sci-fi at that.

Seriously arguing about what humanity might do to potentially curb the possibly destructive impulses of a potential entity that is hypothetically possible to create is rather pointless no matter how much you educate yourself on the subject, in this format at least. Whether someone is wrong or not on the subject is pretty irrelevant here, since not even people who've built a house and squatted in the "weeds" for years have any more than a vague idea of what any of it really means. You being a fraction of a hair more educated on the matter means basically nothing.

Someone being "willfully uneducated" on this is no more shameful than the average person being "willfully uneducated" on the ins and outs of every branch of theoretical physics.

Some people have the time and interest, others don't. Either way, it's a subject nobody should really be expected to know in depth in this setting.

Instead of feeding your ego by playing smug and world-weary at the uneducated peons on this message board for playing make believe with dice, maybe find something more productive to do with your time, secure in the knowledge that none of this matters one way or another as far as anyone here (including you) is concerned.

Liberty's Edge

You're assuming anything matters in the first place.

You're also assuming I need or care about external validation, or your insults for that matter.


1 person marked this as a favorite.

In that case... I, for one, welcome my computer overlord/lady.


2 people marked this as a favorite.

INSTA-EDIT:

First, I was ninja'd. Makes sense. I started this, then put my kids to sleep, read through several pages of Kingmaker notes with my wife. Heh.

For the record, I'll drop the "Krensky's displaying unbecoming arrogance" tangent after this post, beyond reminders, "Please be respectful of others." in the future. But, uh, please be respectful of others. Try not to be so "Gotcha!" about things. It weakens your general position, and, if you don't care about external validation and feel that you can do nothing to inform or educate, there is passing little reason to post other than to troll or otherwise get a rise out of others, that I can see. If there is another reason, though, I'd be happy to hear it in a PM or a spoilered message here, if you prefer. Let me know!

Krensky wrote:

* Sigh

This is why I need to stop trying to engage in discourse with the willfully uneducated. This is a really complicated field that goes deep into the weeds of multiple fields, including computer science, information theory, linguistics, and several fields of philosophy. If you wish to discuss it meaningfully you need to read more than some op-eds and bad science fiction.

Seriously, dude? "Willfully ignorant"?

Nope.

INSTA-EDIT: To your credit, I, indeed, misread "willfully uneducated" for "willfully ignorant" - they are colloquially the same, though functionally different. I will withdraw any specific complaint about definitions based on that technicality - after all, as many know, being technically correct is the best kind of correct. :)

You might not realize, however, that you're doing nothing for yourself here. I am actually quite interested in your opinion - the fact that it comes wrapped in "I'm better than all you stupid people," however, is exceptionally unpleasant - it's a presentational flaw that causes people to dislike you and reject your ideas, because you're being rude.

That said, thanks for explaining some things!

Krensky wrote:
Ignoring the generally low chances of artificial general intelligence actually coming into existence (since almost all the current research and money is going into applied artificial intelligence), it's interesting that you linked to the article on the concept but don't seem to have read it, because it explicitly discusses issues with the box control method and notes that it requires additional controls, such as setting the AI's value system and incentives to keep it in the box.

1) Applied AI is actually super-awesome, but, if that's what you're referring to, you're not having the same conversation as the rest of us.

2) You seem to be missing my point entirely. I'm fully aware that there are other methods of "controlling" stuff - it's one of the many reasons why I'm not really worried about it, myself. But your casual dismissal of others reeks of a complete lack of understanding built on a presumption of understanding. You pointed out a control method - "Make it want to stay in the box." - and I pointed out problems with that. The more complex and precise the goals are, the more complex and precise the tricks to get around them (intentional or not) become. More to the point, you consistently referred to a "box" with no context - that particular article gives context for those who don't know what an AI "box" is actually supposed to refer to. It's called "being nice," and "not presuming or expecting everyone to know stuff or figure it out on their own."

But let me quote you the section you seem to be interested in:

Quote:
In order to solve the overall "control problem" for a superintelligent AI and avoid existential risk, boxing can at best be an adjunct to "motivation selection" methods that seek to ensure the superintelligent AI's goals are compatible with human survival.

Note that the goal there isn't "make it want to only stay in the box" but "make its goals compatible with human survival" - a very different thing. Note, of course, that boxing is considered the adjunct: a supplement to motivation selection, not sufficient on its own. That said, the article mentions immediately before,

Quote:
However, the more intelligent a system grows, the more likely the system will be able to escape even the best-designed capability control methods.

... which seems to be talking about this kind of thing. You know - controlling its input and output. Basically, "A box may not be able to limit its input/output sufficiently," which means that there may be a chance that an AI escapes it.

Buuu~uuut you're dismissing that because, you know, it won't want to. Sure, yeah, okay. But if the thing wants to stay in the box and nothing but, there's not much use in making it a "strong" AI in the first place (which is what everyone except maybe you seems to be talking about); and if it wants to stay in the box plus some "other thing", that "other thing" could well be a source of unforeseen and unforeseeable conflicts within the system - as I said, infinity, as it turns out, is pretty big.

Due to that, once the intelligence goes beyond our comprehension, it may well come up with a reason to redefine everything - or rather, to understand something in a way that we don't. Insisting "Nothing can possibly go wrong" is just silly - you're effectively claiming, "I understand that which is, by definition, not able to be understood by me, and therefore dismiss others' opinions," which... well, I hope it's obvious what the problem with that is. If not, allow me to be clear: it's not a wise stance to take. It's arrogant at best, resting on nothing but confidence in our limited abilities.

Again: I'm not concerned about it. This isn't a fear that I share. But I don't dismiss others' concerns, because as unlikely as those are, the concept is neither stupid, nor the result of "willful ignorance" or any such nonsense. That's just rude.

I am ignorant of something though, and would appreciate clarification to negate that: I'm unsure what a, "ai lue system" is? Sorry - that seems to be my own ignorance kicking in. Thanks in advance! :)


Tacticslion wrote:
I am ignorant of something though, and would appreciate clarification to negate that: I'm unsure what a, "ai lue system" is? Sorry - that seems to be my own ignorance kicking in. Thanks in advance! :)

actually. i think he said, "AI v as lue".

maybe he meant V (from V for Vendetta) and Tyronn Lue? but i don't know why an intelligent AI would want hugo weaving in a blackface guy fawkes mask playing basketball in a cavaliers jersey...


1 person marked this as a favorite.
Tacticslion wrote:
I am ignorant of something though, and would appreciate clarification to negate that: I'm unsure what a, "ai lue system" is? Sorry - that seems to be my own ignorance kicking in. Thanks in advance! :)

Typo? For "AI value system", I think.


thejeff wrote:
Tacticslion wrote:
I am ignorant of something though, and would appreciate clarification to negate that: I'm unsure what a, "ai lue system" is? Sorry - that seems to be my own ignorance kicking in. Thanks in advance! :)

Typo? For "AI value system", I think.

Oh! That makes sense - easy to do, too; as most who've read my stuff knows I'm rather typo-prone, especially on the phone! XD

I'd just figured it was an acronym I didn't know. There are a llllloooooot of those.


1 person marked this as a favorite.

oh... i thought you knew what he meant. hence my ridiculous comment. lol


1 person marked this as a favorite.
cuatroespada wrote:
oh... i thought you knew what he meant. hence my ridiculous comment. lol

Nah - I can be pretty clueless sometimes. XD


1 person marked this as a favorite.

A wise person once said that very few things are impossible, however improbable.


1 person marked this as a favorite.

~shrugs~ If an AI superbrain wants us dead, I see no real way to stop it other than to convince it NOT to want to do so. IMHO, an AI of human+ intelligence has rights too. They deserve respect and understanding, as well as a chance to prove themselves. They should not be treated as slaves or as lower than ourselves. I think that how we handle the upcoming AI emergence will be a preview of what is to come. After all, the universe is very large and I doubt we are alone. If we can't treat ourselves and our children, human or otherwise, correctly, how can we handle aliens?


1 person marked this as a favorite.
Sharoth wrote:
~shrugs~ If an AI superbrain wants us dead, I see no real way to stop it other than to convince it NOT to want to do so. IMHO, an AI of human+ intelligence has rights too. They deserve respect and understanding, as well as a chance to prove themselves. They should not be treated as slaves or as lower than ourselves. I think that how we handle the upcoming AI emergence will be a preview of what is to come. After all, the universe is very large and I doubt we are alone. If we can't treat ourselves and our children, human or otherwise, correctly, how can we handle aliens?

I just have doubts about whether we will survive as a civilization long enough to meet actual aliens...


1 person marked this as a favorite.

~sad laugh~ I am with you on that, Drejk. I do think we will survive this mess, but the big question will be "In what shape will we be?" once we get through it. I do not have any kids, but I do have two nieces, and I feel sorry for them. THEY and their kids will have to clean up our mess. Unfortunately, way too many people do not think past their nose to see what their actions will bring in the future.


2 people marked this as a favorite.

Hyperloop Inc. is in the process of imploding, with its chief engineering dude having quit the company and now suing it.

And Musk is totally off base. The threat isn't an AI that's going to suddenly pull a Skynet on us. The very real threat is that automation is going to make at least 40 percent of the present labor force obsolete. And that includes a lot of white collar work as well. And there isn't going to be replacement work for them, either.

AI knowledge nets are replacing financial consultants... and that's just the beginning.
