
Will we be able to convert robots to Christianity?

GK4Herd

Moderator
Aug 5, 2001
Before you dismiss this article as silly, give it a chance. There has been a rash of articles recently quoting Elon Musk's and Stephen Hawking's laments and fears about the coming "singularity," a hypothesized time when superintelligent computers will surpass the capabilities of man. This has always confused me, and I'm still not sure I totally understand their fears.

But this article does a good job of explaining the issue and takes a very interesting path in tying it to religion, and to something that separates man from all the other living creatures on our planet...the ability to contemplate one's own existence.

Anyone have any insight? I truly don't understand the hoopla, but I respect men like Musk and Hawking for their intelligence. Putting superintelligence in weaponry that gives a machine the ability to make wartime or battle decisions, I get. Stuff can go wrong. But why the fear? Don't we still control the power source of computers and robots?

Anyway...something for everyone here, given that science and religion are the most heavily debated topics.



http://www.dailydot.com/lifestyle/superintelligence-meets-religion/?tw=dd
 
I didn't read the article. Do the robots have souls? No. Then they can't be converted.
 
I didn't read the article. Do the robots have souls? No. Then they can't be converted.

Of course. But I think this article is using religion as a device to ask: if computers ever advance to the level of the human brain, which is what this whole "singularity" business is about, will they be able to contemplate basic human emotions and ponder existence? Although there are some interesting religious parallels being drawn (the Tower of Babel reference, for example) that make for interesting discussion aside from the artificial intelligence issues.

But I tend to believe that although computers and AI are advancing rapidly, and this singularity is predicted by 2045, computers are only inanimate machinery carrying out the processes they are programmed to perform, without emotions or awareness of existence...a soul, if you will. I just don't see the need for concern. But I have to say that Hawking and Musk are among the greatest minds in the world, so they are better equipped to understand this than I am. By a lot.

So I started this mainly to seek insight from anyone who understands this better than I do. (Ok, I admit I was being a little provocative with the religious side.) I find the whole convert-robots-to-Christianity idea ridiculous.
 
Musk and Hawking are worried because they know that our brains are effectively just extremely complex computers themselves, and that there is no physical reason that you couldn't make a computer that did the exact same thing. We aren't close, and we aren't even close to being close, but it could happen someday.

So could you convert one to a religion? It's conceivable that you could, yes. I have a feeling they'll blow past the point we can convince them of anything long before that though.
 
I think the mistake being made is thinking a brain thinks. Or that the brain decides.
 
Musk and Hawking are worried because they know that our brains are effectively just extremely complex computers themselves, and that there is no physical reason that you couldn't make a computer that did the exact same thing. We aren't close, and we aren't even close to being close, but it could happen someday.


Actually, we are much closer than you think. I saw some highlights from a conference on this very topic. Some theories put us there within 20 years. Musk has actually pulled much of his financial backing from several of his earlier investments in this development. He said he simply did not like where he thought it was going and the pace at which it was progressing. He also said there are many unanswered ethical questions and scenarios that are intentionally being ignored simply for the sake of accomplishing this event. The person conducting the interview seemed puzzled by such a decision. His reply to her continued bewilderment? "We just better hope they like us."
 
computers are only inanimate machinery carrying out the processes they are programmed to perform, without emotions or awareness of existence.

I actually believe this is a part of the issue Musk and Hawking may have with AI. As computers continue to "learn" their tasks and expand their ability to perform them without further human programming... what stops them from taking a task to extreme ends without a conscience? Sounds crazy, but it's much like the Terminator scenario. Interestingly enough, I saw somewhere that robot soldiers are being researched and developed.
 
I actually believe this is a part of the issue Musk and Hawking may have with AI. As computers continue to "learn" their tasks and expand their ability to perform them without further human programming... what stops them from taking a task to extreme ends without a conscience? Sounds crazy, but it's much like the Terminator scenario. Interestingly enough, I saw somewhere that robot soldiers are being researched and developed.

This is exactly the part where it gets fuzzy for me. This is AI crossing the line from a machine to an independently thinking entity, which is obviously the fear for Musk and Hawking. Your mention of wartime applications is what I was referring to when I mentioned understanding the fear. If we program AI to make wartime decisions, those decisions are only as good as the parameters set by the programmer. If AI steps outside of those parameters by reaching independent decisions, are ethical choices going to be made, or are the choices going to be based on the statistical probabilities calculated at that moment?

The movie WarGames with Matthew Broderick alluded to the potential problems of computers calling the shots all the way back in 1983. From WarGames...



Stephen Falken: The whole point was to find a way to practice nuclear war without destroying ourselves. To get the computers to learn from mistakes we couldn't afford to make. Except, I never could get Joshua to learn the most important lesson.

David Lightman: What's that?

Stephen Falken: Futility. That there's a time when you should just give up.

Jennifer: What kind of a lesson is that?

Stephen Falken: Did you ever play tic-tac-toe?

Jennifer: Yeah, of course.

Stephen Falken: But you don't anymore.

Jennifer: No.

Stephen Falken: Why?

Jennifer: Because it's a boring game. It's always a tie.

Stephen Falken: Exactly. There's no way to win. The game itself is pointless! But back at the war room, they believe you can win a nuclear war. That there can be "acceptable losses."
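Falken's tic-tac-toe point is actually something you can verify mechanically. A minimal minimax search (my own sketch, not anything from the movie or the article) confirms that with perfect play from both sides, the game is always a tie:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
             (0, 4, 8), (2, 4, 6)]             # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: a draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '
    # X maximizes its score, O minimizes it
    return max(scores) if player == 'X' else min(scores)

result = minimax([' '] * 9, 'X')
print(result)  # 0: neither side can force a win -- "the game itself is pointless"
```

The search brute-forces the full game tree (a few hundred thousand positions), which is exactly the kind of exhaustive "practice" Joshua was built to do at nuclear scale.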
 
Google had to program their self-driving cars to hesitate if they're the first car to arrive at a stoplight & it turns green. Humans are notorious for illegally blowing through yellow/red lights & without that hesitation, it would seriously up the chances of them getting t-boned by human drivers. We're already programming robots not to trust us.

Speaking of, I've been playing around with Google's Deep Dream software & I've been throwing in my own photos. If this is how robots see us & the world, it's no wonder a robot war is coming.

 
They are not going to put a chip in my head. That is the mark of the beast stuff.
 
It's somewhat related, but I read an article a while back about a programmer using a process akin to evolution to program a chip to tell the difference between two tones. He started with a random program, had it randomly change its programming 100 (or so, I don't remember the specifics) different ways, and then picked whichever one did the best. That program was then modified the same number of ways, and the process was repeated over and over. If I remember right, it was slow going at first, but eventually he wound up with a program far simpler than anything a person would ever have designed, one that took advantage of tiny things like eddy currents in the wires to work better. I'll look for it tonight; it was crazy interesting stuff.

I think it's related because, if a computer could do that to itself, it would start to get more efficient exponentially. That's the singularity.
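The mutate-then-keep-the-best loop described above is a basic genetic algorithm. Here's a toy sketch of the same idea; the target string, alphabet, and mutation scheme are illustrative stand-ins (the chip in the article evolved a tone discriminator, not text):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = "hello"  # stand-in goal for the evolved program's fitness test
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Score a candidate: how many characters match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one character -- the 'random programming change'."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random "program", spawn 100 mutants per generation,
# keep the best, and repeat -- the loop the article described.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(best) < len(TARGET):
    mutants = [mutate(best) for _ in range(100)]
    best = max(mutants + [best], key=fitness)

print(best)  # -> "hello"
```

Because the current best candidate is always kept in the pool, fitness never decreases, and the loop terminates once every character matches. The unsettling part of the real experiment is that nothing in this loop requires the result to be human-readable, which is how the chip ended up exploiting eddy currents no designer would have thought to use.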
 