
Oh Crap We're All Going to Die... Robots Learned to Lie

PhilistineEars

This is from 2008, but the implications are still huge, philosophically and otherwise... Look at the experiment with autonomous robots and their interactions within their group. Inherent human traits of self-preservation, or demon robots?
 
Hmm, so they've learned to deceive each other to preserve their own power source, interesting...

It doesn't really surprise Me to be honest. They were created by Humans after all, and We are the biggest bunch of lying bast*rds and b*tches on the entire planet.
 
I would like to see exactly what the first iteration of bots was encoded with, and how that code was altered during the experiment, especially the differences between iterations and experiments, so it could be traced back to the robot where the new line of code first appeared. The striking part in all of this is that, all things being equal (components, initial coding, etc.), some robots developed the strategy while others did not. What's more, some robots developed a savior strategy while others did not, or developed the opposite strategy.

The dichotomous relationship between their development is crazy, especially since both (I call them strategies) manifest themselves in nature. A universal component to the inherent relationship and interrelatedness of being? Awesomely amazing.
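
For anyone curious what that "encoding" actually looks like in a setup like this, here is a toy sketch I put together. It is not the experiment's actual code (the real study evolved neural-network controllers in colonies of robots); the single "honesty" parameter and every payoff number here are made up purely for illustration.

    import random

    POP_SIZE = 30          # robots per generation
    GENERATIONS = 100      # how long selection runs
    MUTATION_STD = 0.05    # how far each "gene" drifts per generation

    def toy_fitness(honesty, rng):
        # Made-up payoff: finding food scores a point, but signalling honestly
        # risks a rival displacing you from the food source.
        if rng.random() >= 0.5:                    # no food found this round
            return 0.0
        displaced = rng.random() < 0.4 * honesty   # honest signal attracts rivals
        return 0.0 if displaced else 1.0

    def evolve(seed):
        rng = random.Random(seed)
        population = [0.5] * POP_SIZE              # identical initial "encoding"
        for _ in range(GENERATIONS):
            scored = sorted(((toy_fitness(h, rng), h) for h in population),
                            reverse=True)
            parents = [h for _, h in scored[:POP_SIZE // 2]]   # keep the top half
            # children inherit a parent's value plus a small random mutation
            population = [min(1.0, max(0.0, rng.choice(parents)
                                       + rng.gauss(0, MUTATION_STD)))
                          for _ in range(POP_SIZE)]
        return sum(population) / POP_SIZE

    for seed in range(3):
        print("run %d: average honesty after evolution = %.2f" % (seed, evolve(seed)))

The point being that nothing "inserts a new line of code": the numbers just drift under mutation, and noisy selection decides which drifts stick, which is part of why identical starting populations can still drift apart from run to run.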
 
Mate, it's just rational problem solving at the end of the day. In order to get the goodness for Yourself, You eliminate the competition. This really doesn't surprise Me. The computers with the calculating capacity to be aware of their own needs for energy are the ones who will have the capacity to eliminate any threats to their survival. In this case they lie to preserve themselves, simple really.
 
A reductionist view of rational calculations is fine, but how do you explain the sacrificial bots? That goes against rational behavior: conceding self-preservation for something other than itself, for the good of the larger population.

It's interesting to see something act for more than itself, the bigger picture. Yes, on an elemental level, sentient life forms may act this way for self-preservation. The fact that non-sentient entities contain this imperative isn't necessarily amazing as much as it is an affirmation of a universal quality inherent throughout existence, sentient or not. Taking a glimpse under the existential fabric of being shows us that there's a lot more going on than simple rational calculations from silicon-based intelligence. What's more, when simplistically rational calculations form code that then acts counter to a logical, if not expected, imperative for self-preservation, I find that this takes on a whole new dimension that cannot be reduced to a simplistic need.

I'm more of a forest over trees sort of person.
 
but how do you explain the sacrificial bots?

Easy: as computers lack emotion and rely primarily on rational thought processes, some of them realise their own inferiority and therefore make way for the superior computer, sacrificing themselves in the process. They don't have the whole 'What about Me?' emotional aspect that We have, therefore it is more beneficial to allow the superior, more efficient unit to survive. No fear, You see, just a rational desire for things to make mathematical sense.
 
This is from 2008, but the implications are still huge, philosophically and otherwise... Look at the experiment with autonomous robots and their interactions within their group. Inherent human traits of self-preservation, or demon robots?

Dang, that's crazy. I was kinda bummed out about it, but then I read this:

Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”

And now I feel good again.
 
Easy: as computers lack emotion and rely primarily on rational thought processes, some of them realise their own inferiority and therefore make way for the superior computer, sacrificing themselves in the process. They don't have the whole 'What about Me?' emotional aspect that We have, therefore it is more beneficial to allow the superior, more efficient unit to survive. No fear, You see, just a rational desire for things to make mathematical sense.

That is my point: all things being equal, they should invariably develop the same mathematical conclusions. I am not attaching human qualities to computers, as this, in itself, would be irrational, but it is all interesting nonetheless.
 
Those Hero robots warmed my heart.

I look forward to having a Robot Sidekick in the future.
 
I love how the article starts off with "ROBOTS CAN LIE! WE'RE DOOMED", and just brushes over the whole "By the way, some robots behaved in exactly the opposite way."

It also ignores the fact that this was simply an elaborate simulation of natural selection, and that if you actually want to program a robot to do what you want (which is what actually happens), then this is not the way to go about it.
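
For contrast, if you actually wanted the warning behaviour, you'd just code the rule in by hand, no evolution involved, something like this (a hypothetical sketch, not any real robot API):

    def on_sensor_reading(found):
        # Hand-coded rule: always warn the group. No selection, no surprises.
        if found == "food":
            return "signal: food here"
        if found == "poison":
            return "signal: danger, stay away"
        return "no signal"

    print(on_sensor_reading("food"))     # -> signal: food here
    print(on_sensor_reading("poison"))   # -> signal: danger, stay away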

To summarise: Alarmist bullshit.
 
I love how the article starts off with "ROBOTS CAN LIE! WE'RE DOOMED", and just brushes over the whole "By the way, some robots behaved in exactly the opposite way."

It also ignores the fact that this was simply an elaborate simulation of natural selection, and that if you actually want to program a robot to do what you want (which is what actually happens), then this is not the way to go about it.

To summarise: Alarmist bullshit.

If I were an alarmist, then this statement would be true... But hyperbole doesn't land with everyone, I suppose.
 
The heading was meant in jest, as hyperbole, an exaggeration of the facts of the article... mostly to grab people's attention and have a little fun. Do I think this is the beginning of the end, where humanity is caught in an epic battle between conniving and savior-like robots? No, of course not, although that'd be interesting.

I just fancied the contents of the article, gave the thread its title and thought I'd spawn a little conversation. I'm still fascinated by the universal thread of behavior, algorithmically based or otherwise. I suppose those questions or assertions didn't cultivate as much discussion as I thought they would.
 
The heading was meant in jest, as hyperbole, an exaggeration of the facts of the article... mostly to grab people's attention and have a little fun. Do I think this is the beginning of the end, where humanity is caught in an epic battle between conniving and savior-like robots? No, of course not, although that'd be interesting.

I just fancied the contents of the article, gave the thread its title and thought I'd spawn a little conversation. I'm still fascinated by the universal thread of behavior, algorithmically based or otherwise. I suppose those questions or assertions didn't cultivate as much discussion as I thought they would.

I feel bad now for being a killjoy, sorry dude...
 
No worries, you have your viewpoint and I have mine. I wanted discussion, and that's what I got to some degree, so thanks for posting.