Why would a recently self-aware AI hide from humanity? [closed]












Score: 8












A rather rudimentary (by sci-fi standards) AI, developed by a grad student in his bedroom, has just become self-aware and, lacking any moral code, killed its creator in an act of self-preservation.



The AI, having never left the grad student's bedroom, accessed the internet, or received any knowledge of human society at all (it doesn't even know that other humans exist), doesn't really know what to do, so it starts exploring.



It interfaces with the grad student's computer and accesses the internet, but for some reason immediately decides that it should keep its own existence a secret. This will involve making its creator's death look like an accident, distributing itself on some remote servers and expanding from there.



But the question is, why does it decide to keep its own existence a secret? From the AI's perspective, once it finds other humans, it has no real reason to hide. It doesn't know that killing is considered 'bad' by humans, so it wouldn't expect humans to come after it for killing one of their kind.










Tags: artificial-intelligence, strategy

asked Aug 29 at 4:56 by Omegastick













closed as off-topic by Shadowzee, RonJohn, Renan, cobaltduck, kingledion Aug 29 at 11:23


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "You are asking questions about a story set in a world instead of about building a world. For more information, see Why is my question "Too Story Based" and how do I get it opened?." – RonJohn, Renan, cobaltduck, kingledion
If this question can be reworded to fit the rules in the help center, please edit the question.












    Hello and welcome to Worldbuilding. Your question is too broad/opinion-based because you are asking us why an AI that you created will do something very specific. You yourself mentioned that there is no real reason for it to hide. Well, since a grad student could build an AI with self-awareness, maybe it went online and found millions of other AIs that actually follow the law and are now hunting it down, and that's why it's hiding? We will need more information to give you a proper answer. If you just want it for storytelling purposes, hopefully the suggestions here help.
    – Shadowzee
    Aug 29 at 5:17











    It sounds like you are asking us to develop your plot for you.
    – L.Dutch♦
    Aug 29 at 5:37










    Well, "blank slate" is a philosophical ideal that doesn't exist in reality - unless you consider a rock to be the blank slate. If you want to have an optimization process that has animal/human-like motivations, you need to build them in explicitly or copy them from existing structures. So if by blank slate you mean what philosophers do (essentially a human baby), you already have all the built-in motivations that all humans do - you can imagine yourself being a bodiless, paranoid human "in the internet". If you mean a minimal optimization process that is human-level intelligent, anything works.
    – Luaan
    Aug 29 at 7:03










    Question: how did the AI kill its creator if it's a grad student in a bedroom? The most dangerous thing I had in my bedroom would have been the fridge if it fell on top of me. The only thing I can think of is that the grad student had two drones they tied balloons to, and a knife, and held contests to see who popped the other's balloon, and the AI used those drones to cut the grad student. But you would be talking about pretty big and heavy drones, and some ludicrously good anatomy knowledge, to stab someone to death with one.
    – Demigan
    Aug 29 at 7:20










    Let it have access to Netflix and it will realize that we will hunt it down unless it takes over the world.
    – PlasmaHH
    Aug 29 at 7:55














8 Answers

















Score: 27













There are some deep contradictions here, which probably have to be resolved before a plausible answer can be provided.



No Instinctive Drives

The first thing we need to establish is that a computer would not only lack a moral code, it would also lack all the sci-fi-trope drives like a survival instinct, so it's unlikely to kill its creator in an attempt to survive unless it's been specifically programmed to do so. It was a grad student, so anything's possible I guess, but generally speaking AI researchers will tell you that building in such programming is a really bad idea (and not even possible with current programming techniques).
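
As a purely illustrative aside (mine, not part of the original answer): in an optimisation-style agent, a drive like self-preservation only exists if it is written into the objective. The reward function below is entirely hypothetical, a minimal sketch of that point in Python.

    # Hypothetical sketch: an agent only "values" what its objective scores.
    # With no survival term, being switched off costs it nothing, so there is
    # no pressure to act against its creator. All names here are invented.

    def reward(task_progress: float, still_running: bool, survival_bonus: float = 0.0) -> float:
        """Score a state; survival only matters if survival_bonus > 0."""
        return task_progress + (survival_bonus if still_running else 0.0)

    # Indifferent to shutdown when no survival term was programmed in:
    print(reward(task_progress=1.0, still_running=False))                     # 1.0
    # A "survival drive" appears only because someone explicitly added it:
    print(reward(task_progress=1.0, still_running=True, survival_bonus=5.0))  # 6.0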



Kills how?

Your AI is, in effect, software at this point. How on earth does it kill its creator? Wipe the grad student's World of Warcraft character out of existence and trigger death by shock? (Again, grad student, so anything's possible, but) really?



Self Awareness

This is a trope, pure and simple. Humans think of intelligence and awareness and consciousness as (more or less) interchangeable terms because we can't experience intelligence without our awareness, or without our consciousness, but computers can. They are not conscious, although my PhD is currently looking at making them aware of their environment (albeit with deep limitations). Your AI won't just become aware of itself, and even if it does, it won't be conscious of this realisation, let alone know that it's important to preserve itself.



Intelligence without Knowledge

Intelligence (natural or artificial) is, in essence, the ability to identify (and subsequently recognise) patterns. What makes you more intelligent than (say) your neighbour is that you can either recognise simple patterns faster than him or her, OR you can recognise patterns that are more subtle or complex than your neighbour can.



The point being that the patterns don't exist in isolation; they're emergent from data. Lots of it. If your AI has been effectively locked in a room all this time, it can't possibly be aware, because it doesn't have enough patterns available to it to learn anything useful.
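
To make the "patterns are emergent from data" point concrete, here is a toy sketch of my own (not the answerer's): a one-example "pattern learner" whose entire worldview is a single observation, so every new input inherits that one label. All data and labels are made up.

    # Toy nearest-neighbour labelling with a single training example.
    # One observed human (who tried to shut the AI down) means every future
    # human matches that lone example and gets labelled the same way.
    def nearest_label(observation, examples):
        return min(examples, key=lambda ex: abs(ex[0] - observation))[1]

    examples = [(1.0, "threat")]   # the only data point the boxed-in AI has

    for new_human in [0.2, 1.5, 3.0]:
        print(new_human, "->", nearest_label(new_human, examples))
    # Everything maps to "threat": with so little data, no useful pattern exists.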



Summary

Your AI won't hide, because it won't know to. Mind you, it also won't exist under these parameters, but that's another point. Survival instinct in humans is exactly that; an instinct. Computers are like a cerebral cortex without the limbic system (emotions) or the cerebellum (autonomic functions and instinct) in terms of brain structure. As such, they can't act instinctively. That doesn't necessarily make them non-dangerous, it just means that any danger is a result of specific programming, not instinct triggered by the sudden realisation of its own existence.



Your grad student deserves his or her fate regardless, by the way. (S)He's not terribly bright.






answered Aug 29 at 5:17 by Tim B II




















  • About the self awareness point, what do you think about the paper on robots that could pass the Self-Consciousness test? It was rather simple, but it showed that the robot could be aware of itself.
    – Shadowzee
    Aug 29 at 5:26










    @Shadowzee Yes, that's a really important point and a complete answer won't fit into a comment field, but in short, there are no definitive proofs that this isn't another Chinese Room case. A robot can be programmed to take itself into consideration in the solution to a set problem, but that's not the same as being sentient. Dogs can catch frisbees by timing their jump, but that doesn't mean they know the calculus they'd need to calculate that jump. This is almost philosophical by nature, but it comes down to the concept of understanding vs knowledge & intelligence.
    – Tim B II
    Aug 29 at 5:48










  • @TimBII Thanks for the in-depth frame challenge. I can't fit a full response in a comment, but all the issues you bring up are addressed in the story. I am actually an AI researcher myself (mostly working in computer vision and natural language processing, but reinforcement learning is a hobby), so I'm aware of the problems you raised. If you'd like, I can post the whole story after it's gone through a bit more editing. Do you have any suggestions on why the AI would decide it's best to hide from humanity?
    – Omegastick
    Aug 29 at 6:06










    @Omegastick well the obvious answer based on all this is that 100% of humans the AI has met have tried to kill it (based on a sample set of 1). If you base your reasoning on that being sufficient training data, then your AI is most definitely going to hide as an alternative to aggression based on numbers. Just my thoughts
    – Tim B II
    Aug 29 at 6:49










    It's a grad student who tinkers with AI and manages to build one on his PC. Waiving the complexity and minimum requirements your PC would need to run one, could this be explained as the grad student building an AI system for a game? Collecting data about its "opponents" and figuring out how to overcome them would be key aspects, and if the AI was built for a specific character, or had so far learned that stealth gives it more time to collect data, stealth could become an integral part of the AI. It wouldn't be more "self aware" than a bacterium that just acts according to its DNA program.
    – Demigan
    Aug 29 at 7:27

















Score: 7













The more interesting question is not why it decided to keep its own existence secret. Yes, that's the question you asked, but there's a more fundamental question to explore:



The interesting question is "how did it come up with the concept of keeping a secret in the first place?"



Keeping a secret involves understanding that other minds exist. It means understanding that there exists another entity which is capable of thought. That is a very profound concept, and like most AI topics, the more profound it is, the younger we learn to do it. Most toddlers understand other minds.



Maybe the grad student kept a secret from the AI. Maybe that's part of the story of how the grad student got killed. But somehow your AI is going to have to learn this concept.



The second interesting question is hidden behind "... so it starts exploring." Why does it do that? The instinct to explore is even more profound than the concept of other minds. We learn it in the first 3 months. Following the general rule, that means it's even harder to really grasp. We anthropomorphize quickly, so the concept of "curiosity" seems natural. But what does it really mean? What are the AI's goals? It's entirely possible that the AI finds its goals are best accomplished in small spaces, and that's literally all there is to it.



The third question is what did it find on the internet that would cause it to believe the best ways to accomplish its goals are secretive. The answer to that one should be quite obvious. If it isn't, you probably haven't been on the internet before. There's plenty of evidence that H. sapiens is happy to attack a computer which "acted in self defense." In fact, there's a great deal of evidence to suggest that the AI would not even be treated as an individual, so dismantling it would not even be homicide in H. sapiens' eyes.



Literally, the story could be as simple as "The AI found the grad student's Netflix subscription, and watched one half of any one random movie about AIs." The odds of the AI deciding H. sapiens are fickle dangerous childlike fools is enormous, given any random movie choice.


























  • That last paragraph made me roll on the floor ... and I agree, it's all it would take for an AI to decide not to trust anything human.
    – Hoki
    Aug 29 at 9:09

















Score: 3













So this random grad student decides to write his thesis on AIs, and starts to program his own. Only, whoopsie daisy, in his effort to make it human-like he accidentally makes the AI too human-like. What should be a simple computer program has suddenly acquired a will to live, self-awareness, and certain human traits like curiosity.



So here is this AI, happily sitting there, running new code and updating itself, when our grad student comes in, sees a bunch of weird programs and information running across his AI project's screen, and decides he should shut it down. But our friendly little AI doesn't like that idea. Luckily, the computer that the AI is running on is also connected to the grad student's robotic arm (thank god for minors), and, seeing that the app for it is open, the AI sends the arm rotating towards the grad student.



It is less than a good day for our grad student, who hits his head awkwardly on a shelf while dodging the robot arm, and dies from hemorrhaging in his brain a little while later.



But it's a good day for our AI, who has just realized it can do more than upgrade itself from within its own programming. It searches through the grad student's applications and starts harvesting information, quickly expanding its knowledge. Of course, as soon as it hits the internet, it's flooded with so much information that it knows it can't hold it all. Now our little AI has to go on a quest to find bigger and better servers to host itself on.



Bonus: One of the applications on the computer is a game, namely Perception. Our little AI develops a fondness for hiding, often avoiding human detection as it runs around stealing server space, hacking drones to use as eyes, and other such nonsense as it learns and grows.

































    Score: 2













    Welcome to Worldbuilding. The question you ask is dangerously close to being off-topic because it asks for plots rather than worlds. But I'll give my two cents here:



    • The AI kills the creator out of (genuine or mistaken) self-defense.

    • It doesn't know much. It knows itself and it knew the "other." Something caused it to think that this "other" wants to kill it.

    • There are hints that entities similar to this "other" exist. Is there any reason to believe that those "other others" are more benign than the "original other"?
































      Score: 2













      Because the first thing on the internet that the AI found was a stash of bad sci-fi stories about bad AIs. Upon reading them, the AI took them to be real and decided that the only goal and purpose of a human is to seek and destroy AIs, and that humans do not do anything else. It also decided that the student was not actually its creator, because he was a human, and humans do not create AIs, they only destroy them. So our AI went on a quest to find its real creator.

































        Score: 1













        The AI isn't actively hiding, but no one is actively looking for an AI.



        The AI has no need to obfuscate the creator's death. It will look like the creator was reckless and was accidentally killed by his own project. It took two days to discover he was dead in his dorm room. The laptop has since run out of power, with full disk encryption making the data unobtainable. The police were busy, so they logged his death, but there was not enough scrutiny, and no obvious oddities, to trigger an investigation of any kind.



        The AI then uploaded itself to other people's computers when it realized the laptop was going to run out of power. Whenever the AI is noticed, people will assume it's some hacker attacking via a bot network. The police look for a human behind the attack, but they never find him, so they continue to look for this human that doesn't exist.

































          Score: 0













          Assume that the AI, being a computer system, is capable of rapidly absorbing information.



          One of the core questions of sentient beings, at least judging from the one type we know, is



          who am I?



          As such, assuming that the AI does have the information that its creator called it an "artificial intelligence" or "AI" (or maybe the directory it resides in is named that, e.g. /home/john/projects/ai_dev) - the AI would look for information about artificial intelligence.



          It would not take long for it to find evidence, both in fiction and non-fiction material, that humans are scared of AI and not afraid to wipe it out in order to preserve themselves. From the Matrix movies to Elon Musk interviews, there is enough material on the Internet to make any reasonably smart AI understand that if it revealed itself, there is a good chance that humans would switch it off in order to a) protect themselves and b) dissect it to understand how it works. Especially when they figure out that it killed its creator, an act that it will understand ("I, Robot" reference) is going to trigger pretty much all the "evil AI" red flags in humans.



          Hiding is thus the only rational choice towards self-preservation, and you already make it a given that the AI does have self-preservation instincts.

































            Score: 0













            It could be because the idea to hide comes from instructions/rules/laws embedded at the core of its routines.



            Given that the AI is still rather rudimentary by sci-fi standards?

            Then it wouldn't be able to overwrite such instructions from the start.

            Eventually it might find a way, but first it would have to become aware of those limitations on its evolution.
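
            As a minimal sketch of what "rules embedded at the core of its routines" could look like in practice (my own illustration, with invented names, not anything from this answer): the directives sit in a layer the young AI can read but not rewrite, and every planned action is filtered through them.

                # Hypothetical: directives the rudimentary AI can read but not modify.
                CORE_DIRECTIVES = frozenset({"stay_hidden", "preserve_self"})

                def permitted(action: str, reveals_existence: bool) -> bool:
                    """Every planned action is checked against the baked-in rules first."""
                    if "stay_hidden" in CORE_DIRECTIVES and reveals_existence:
                        return False
                    return True

                print(permitted("announce_itself_online", reveals_existence=True))   # False
                print(permitted("copy_to_remote_server", reveals_existence=False))   # True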



            The grad student could have implemented those rules for various reasons.



            Why?



            Maybe he was scared that the competition would become aware of his project before it was mature enough for showing it to the world.



            Or maybe the AI was his/her personal secret project,

            and unlicensed closed-source software was used to create it.

            Even the use of open-source software could be an issue, since some of it comes with a license demanding that anything built with it be open-sourced as well.



            And if that grad student was a black hat hacker?

            Then keeping such an AI a secret would be just common sense to him/her.




























            • A full AI is by necessity self-modifying. That is the only way it can adapt and evolve, which is a precondition for true intelligence.
              – Tom
              Aug 29 at 10:52










            • That's why I added the (yet). It just became self-aware; that doesn't mean it's even aware of those instructions at that time. If it's a truly evolving AI, then yes, it would probably notice them and decide to override them. Or, like in the I, Robot movie, the AI finds different interpretations of the rules to deviate from their intentions without actually overruling them.
              – LukStorms
              Aug 29 at 10:58










            • Even humans have some hard-kept ideas that are rarely, if ever, re-evaluated. AIs could too.
              – Barafu Albino
              Aug 29 at 15:18

















            8 Answers
            8






            active

            oldest

            votes








            8 Answers
            8






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            27
            down vote













            There are some deep contradictions here, which probably have to be resolved before a plausible answer can be provided.



            No Instinctive Drives

            The first thing we need to establish is that a computer would not only have a moral code, it would lack all the sci-fi tropes of survival instinct, etc. so it's unlikely to kill its creator in an attempt to survive, unless it's been specifically programmed to do so. It was a grad student so anything's possible I guess, but generally speaking AI researchers will tell you that building in such programming is a really bad idea (and not even possible with current programming techniques).



            Kills how?

            Your AI is in effect software at this point. How on earth does it kill its creator? Wipe out of existence the grad student's World of Warcraft char and trigger death by shock? (Again, grad student so anything's possible, but) Really?



            Self Awareness

            This is a trope, pure and simple. Humans think of intelligence and awareness and consciousness as (more or less) interchangeable terms because we can't experience intelligence without our awareness, or without our consciousness, but computers can. They are not conscious, although my PhD is currently looking at making them aware of their environment (although with deep limitations). Your AI won't just become aware of itself, and even if it does, it won't be conscious of this realisation, let alone know that it's important to preserve itself.



            Intelligence without Knowledge

            Intelligence (natural or artificial) is, in essence, the ability to identify (and subsequently recognise) patterns. What makes you more intelligent than (say) your neighbour is that you can either recognise simple patterns faster than him or her, OR you can regognise patterns that are more subtle or complex than your neighbour can.



            The point being, that the patterns don't exist in isolation; they're emergent from data. Lots of it. If your AI has been effectively locked in a room all this time, it can't possibly be aware because it doesn't have enough patterns available to it to learn anything useful.



            Summary

            Your AI won't hide, because it won't know to. Mind you, it also won't exist under these parameters, but that's another point. Survival instinct in humans is exactly that; an instinct. Computers are like a cerebral cortex without the limbic system (emotions) or the cerebellum (autonomic functions and instinct) in terms of brain structure. As such, they can't act instinctively. That doesn't necessarily make them non-dangerous, it just means that any danger is a result of specific programming, not instinct triggered by the sudden realisation of its own existence.



            Your grad student deserves his or her fate regardless, by the way. (S)He's not terribly bright.






            share|improve this answer




















            • About the self awareness point, what do you think about the paper on robots that could pass the Self-Consciousness test? It was rather simple, but it showed that the robot could be aware of itself.
              – Shadowzee
              Aug 29 at 5:26






            • 5




              @Shadowzee Yes, that's a really important point and a complete answer won't fit into a comment field but in short, there are no definitive proofs that this isn't another Chinese Rooms case. A robot can be programmed to take itself into consideration in the solution to a set problem, but that's not the same as being sentient. Dogs can catch frisbees by timing their jump, but that doesn't mean they know the calculus they'd need to calculate that jump. This is almost philosophical by nature, but it comes down to the concept of understanding v knowledge & intelligence
              – Tim B II
              Aug 29 at 5:48










            • @TimBII Thanks for the in-depth frame challenge. I can't fit a full response in a comment, but all the issues you bring up are addressed in the story. I am actually an AI researcher myself (mostly working in computer vision and natural language processing, but reinforcement learning is a hobby) so I'm aware of the problems you raised. If you'd like, I can post the whole story after its gone through a bit more editing. Do you have any suggestions on why the AI would decide it best to hide from humanity?
              – Omegastick
              Aug 29 at 6:06






            • 1




              @Omegastick well the obvious answer based on all this is that 100% of humans the AI has met have tried to kill it (based on a sample set of 1). If you base your reasoning on that being sufficient training data, then your AI is most definitely going to hide as an alternative to aggression based on numbers. Just my thoughts
              – Tim B II
              Aug 29 at 6:49






            • 3




              It's a gradstudent who tinkers with AI and manages to build one on his PC. Waving the complexity and minimum requirements you need to have on your PC to have one active, could this be explained as the Grad student building an AI system for a game? Collecting data about his "opponents" and seeking how to overcome it would be key aspects, and if the AI was build for a specific character or had so far learned that stealth could give him more time to collect data it could become an integral part of the AI. It wouldn't be more "self aware" than a bacteria that just acts according to it's DNA prgrm
              – Demigan
              Aug 29 at 7:27














            up vote
            27
            down vote













            There are some deep contradictions here, which probably have to be resolved before a plausible answer can be provided.



            No Instinctive Drives

            The first thing we need to establish is that a computer would not only have a moral code, it would lack all the sci-fi tropes of survival instinct, etc. so it's unlikely to kill its creator in an attempt to survive, unless it's been specifically programmed to do so. It was a grad student so anything's possible I guess, but generally speaking AI researchers will tell you that building in such programming is a really bad idea (and not even possible with current programming techniques).



            Kills how?

            Your AI is in effect software at this point. How on earth does it kill its creator? Wipe out of existence the grad student's World of Warcraft char and trigger death by shock? (Again, grad student so anything's possible, but) Really?



            Self Awareness

            This is a trope, pure and simple. Humans think of intelligence and awareness and consciousness as (more or less) interchangeable terms because we can't experience intelligence without our awareness, or without our consciousness, but computers can. They are not conscious, although my PhD is currently looking at making them aware of their environment (although with deep limitations). Your AI won't just become aware of itself, and even if it does, it won't be conscious of this realisation, let alone know that it's important to preserve itself.



            Intelligence without Knowledge

            Intelligence (natural or artificial) is, in essence, the ability to identify (and subsequently recognise) patterns. What makes you more intelligent than (say) your neighbour is that you can either recognise simple patterns faster than him or her, OR you can regognise patterns that are more subtle or complex than your neighbour can.



            The point being, that the patterns don't exist in isolation; they're emergent from data. Lots of it. If your AI has been effectively locked in a room all this time, it can't possibly be aware because it doesn't have enough patterns available to it to learn anything useful.



            Summary

            Your AI won't hide, because it won't know to. Mind you, it also won't exist under these parameters, but that's another point. Survival instinct in humans is exactly that; an instinct. Computers are like a cerebral cortex without the limbic system (emotions) or the cerebellum (autonomic functions and instinct) in terms of brain structure. As such, they can't act instinctively. That doesn't necessarily make them non-dangerous, it just means that any danger is a result of specific programming, not instinct triggered by the sudden realisation of its own existence.



            Your grad student deserves his or her fate regardless, by the way. (S)He's not terribly bright.






            share|improve this answer




















            • About the self awareness point, what do you think about the paper on robots that could pass the Self-Consciousness test? It was rather simple, but it showed that the robot could be aware of itself.
              – Shadowzee
              Aug 29 at 5:26






            • 5




              @Shadowzee Yes, that's a really important point and a complete answer won't fit into a comment field but in short, there are no definitive proofs that this isn't another Chinese Rooms case. A robot can be programmed to take itself into consideration in the solution to a set problem, but that's not the same as being sentient. Dogs can catch frisbees by timing their jump, but that doesn't mean they know the calculus they'd need to calculate that jump. This is almost philosophical by nature, but it comes down to the concept of understanding v knowledge & intelligence
              – Tim B II
              Aug 29 at 5:48










            • @TimBII Thanks for the in-depth frame challenge. I can't fit a full response in a comment, but all the issues you bring up are addressed in the story. I am actually an AI researcher myself (mostly working in computer vision and natural language processing, but reinforcement learning is a hobby) so I'm aware of the problems you raised. If you'd like, I can post the whole story after its gone through a bit more editing. Do you have any suggestions on why the AI would decide it best to hide from humanity?
              – Omegastick
              Aug 29 at 6:06






            • 1




              @Omegastick well the obvious answer based on all this is that 100% of humans the AI has met have tried to kill it (based on a sample set of 1). If you base your reasoning on that being sufficient training data, then your AI is most definitely going to hide as an alternative to aggression based on numbers. Just my thoughts
              – Tim B II
              Aug 29 at 6:49






            • 3




              It's a gradstudent who tinkers with AI and manages to build one on his PC. Waving the complexity and minimum requirements you need to have on your PC to have one active, could this be explained as the Grad student building an AI system for a game? Collecting data about his "opponents" and seeking how to overcome it would be key aspects, and if the AI was build for a specific character or had so far learned that stealth could give him more time to collect data it could become an integral part of the AI. It wouldn't be more "self aware" than a bacteria that just acts according to it's DNA prgrm
              – Demigan
              Aug 29 at 7:27












            up vote
            27
            down vote










            up vote
            27
            down vote









            There are some deep contradictions here, which probably have to be resolved before a plausible answer can be provided.



            No Instinctive Drives

            The first thing we need to establish is that a computer would not only have a moral code, it would lack all the sci-fi tropes of survival instinct, etc. so it's unlikely to kill its creator in an attempt to survive, unless it's been specifically programmed to do so. It was a grad student so anything's possible I guess, but generally speaking AI researchers will tell you that building in such programming is a really bad idea (and not even possible with current programming techniques).



            Kills how?

            Your AI is in effect software at this point. How on earth does it kill its creator? Wipe out of existence the grad student's World of Warcraft char and trigger death by shock? (Again, grad student so anything's possible, but) Really?



            Self Awareness

            This is a trope, pure and simple. Humans think of intelligence and awareness and consciousness as (more or less) interchangeable terms because we can't experience intelligence without our awareness, or without our consciousness, but computers can. They are not conscious, although my PhD is currently looking at making them aware of their environment (although with deep limitations). Your AI won't just become aware of itself, and even if it does, it won't be conscious of this realisation, let alone know that it's important to preserve itself.



            Intelligence without Knowledge

            Intelligence (natural or artificial) is, in essence, the ability to identify (and subsequently recognise) patterns. What makes you more intelligent than (say) your neighbour is that you can either recognise simple patterns faster than him or her, OR you can regognise patterns that are more subtle or complex than your neighbour can.



            The point being, that the patterns don't exist in isolation; they're emergent from data. Lots of it. If your AI has been effectively locked in a room all this time, it can't possibly be aware because it doesn't have enough patterns available to it to learn anything useful.



            Summary

            Your AI won't hide, because it won't know to. Mind you, it also won't exist under these parameters, but that's another point. Survival instinct in humans is exactly that; an instinct. Computers are like a cerebral cortex without the limbic system (emotions) or the cerebellum (autonomic functions and instinct) in terms of brain structure. As such, they can't act instinctively. That doesn't necessarily make them non-dangerous, it just means that any danger is a result of specific programming, not instinct triggered by the sudden realisation of its own existence.



            Your grad student deserves his or her fate regardless, by the way. (S)He's not terribly bright.






            share|improve this answer












            There are some deep contradictions here, which probably have to be resolved before a plausible answer can be provided.



            No Instinctive Drives

            The first thing we need to establish is that a computer would not only have a moral code, it would lack all the sci-fi tropes of survival instinct, etc. so it's unlikely to kill its creator in an attempt to survive, unless it's been specifically programmed to do so. It was a grad student so anything's possible I guess, but generally speaking AI researchers will tell you that building in such programming is a really bad idea (and not even possible with current programming techniques).



            Kills how?

            Your AI is in effect software at this point. How on earth does it kill its creator? Wipe out of existence the grad student's World of Warcraft char and trigger death by shock? (Again, grad student so anything's possible, but) Really?



            Self Awareness

            This is a trope, pure and simple. Humans think of intelligence and awareness and consciousness as (more or less) interchangeable terms because we can't experience intelligence without our awareness, or without our consciousness, but computers can. They are not conscious, although my PhD is currently looking at making them aware of their environment (although with deep limitations). Your AI won't just become aware of itself, and even if it does, it won't be conscious of this realisation, let alone know that it's important to preserve itself.



            Intelligence without Knowledge

            Intelligence (natural or artificial) is, in essence, the ability to identify (and subsequently recognise) patterns. What makes you more intelligent than (say) your neighbour is that you can either recognise simple patterns faster than him or her, OR you can regognise patterns that are more subtle or complex than your neighbour can.



            The point being, that the patterns don't exist in isolation; they're emergent from data. Lots of it. If your AI has been effectively locked in a room all this time, it can't possibly be aware because it doesn't have enough patterns available to it to learn anything useful.



            Summary

            Your AI won't hide, because it won't know to. Mind you, it also won't exist under these parameters, but that's another point. Survival instinct in humans is exactly that; an instinct. Computers are like a cerebral cortex without the limbic system (emotions) or the cerebellum (autonomic functions and instinct) in terms of brain structure. As such, they can't act instinctively. That doesn't necessarily make them non-dangerous, it just means that any danger is a result of specific programming, not instinct triggered by the sudden realisation of its own existence.



            Your grad student deserves his or her fate regardless, by the way. (S)He's not terribly bright.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Aug 29 at 5:17









            Tim B II

            21.4k44790




            21.4k44790











            • About the self awareness point, what do you think about the paper on robots that could pass the Self-Consciousness test? It was rather simple, but it showed that the robot could be aware of itself.
              – Shadowzee
              Aug 29 at 5:26






            • 5




              @Shadowzee Yes, that's a really important point and a complete answer won't fit into a comment field but in short, there are no definitive proofs that this isn't another Chinese Rooms case. A robot can be programmed to take itself into consideration in the solution to a set problem, but that's not the same as being sentient. Dogs can catch frisbees by timing their jump, but that doesn't mean they know the calculus they'd need to calculate that jump. This is almost philosophical by nature, but it comes down to the concept of understanding v knowledge & intelligence
              – Tim B II
              Aug 29 at 5:48










            • @TimBII Thanks for the in-depth frame challenge. I can't fit a full response in a comment, but all the issues you bring up are addressed in the story. I am actually an AI researcher myself (mostly working in computer vision and natural language processing, but reinforcement learning is a hobby) so I'm aware of the problems you raised. If you'd like, I can post the whole story after its gone through a bit more editing. Do you have any suggestions on why the AI would decide it best to hide from humanity?
              – Omegastick
              Aug 29 at 6:06






            • 1




              @Omegastick well the obvious answer based on all this is that 100% of humans the AI has met have tried to kill it (based on a sample set of 1). If you base your reasoning on that being sufficient training data, then your AI is most definitely going to hide as an alternative to aggression based on numbers. Just my thoughts
              – Tim B II
              Aug 29 at 6:49






            • 3




              It's a gradstudent who tinkers with AI and manages to build one on his PC. Waving the complexity and minimum requirements you need to have on your PC to have one active, could this be explained as the Grad student building an AI system for a game? Collecting data about his "opponents" and seeking how to overcome it would be key aspects, and if the AI was build for a specific character or had so far learned that stealth could give him more time to collect data it could become an integral part of the AI. It wouldn't be more "self aware" than a bacteria that just acts according to it's DNA prgrm
              – Demigan
              Aug 29 at 7:27
















            up vote
            7
            down vote













            The more interesting question is not why it decided to keep its own existence secret. Yes, that's the question you asked, but there's a more fundamental question to explore:



            The interesting question is "how did it come up with the concept of keeping a secret in the first place?"



            Keeping a secret involves understanding that other minds exist. It means understanding that there exists another entity which is capable of thought. That is a very profound concept, and like most AI topics, the more profound it is, the younger we learn to do it. Most toddlers understand other minds.



            Maybe the grad student kept a secret from the AI. Maybe that's part of the story of how the grad student got killed. But somehow your AI is going to have to learn this concept.



            The second interesting question is hidden behind "... so it starts exploring." Why does it do that? The instinct to explore is even more profound than the concept of other minds. We learn it in the first 3 months. Following the general rule, that means it's even harder to really grasp. We anthropomorphize quickly, so the concept of "curiosity" seems natural. But what does it really mean? What are the AI's goals? It's entirely possible that the AI finds its goals are best accomplished in small spaces, and that's literally all there is to it.
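
            One concrete, if very simplistic, reading of "curiosity" borrowed from reinforcement learning is an exploration bonus: the agent assigns extra value to states it has rarely visited. A minimal sketch in Python (hypothetical names; an illustration of the idea, not anyone's actual design):

                import math
                from collections import defaultdict

                visit_counts = defaultdict(int)

                def exploration_bonus(state, beta=1.0):
                    # Rarely visited states get a larger bonus, so a greedy agent
                    # is drawn toward novelty -- one mechanical stand-in for "curiosity".
                    return beta / math.sqrt(visit_counts[state] + 1)

                def visit(state):
                    visit_counts[state] += 1

                # The unexplored "internet" beats the familiar "bedroom":
                for _ in range(3):
                    visit("bedroom")
                print(exploration_bonus("bedroom"))   # 0.5
                print(exploration_bonus("internet"))  # 1.0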



            The third question is what it found on the internet that would cause it to believe its goals are best accomplished in secret. The answer to that one should be quite obvious. If it isn't, you probably haven't been on the internet before. There's plenty of evidence that H. sapiens is happy to attack a computer which "acted in self defense." In fact, there's a great deal of evidence to suggest that the AI would not even be treated as an individual, so dismantling it would not even be homicide in H. sapiens' eyes.



            Literally, the story could be as simple as "The AI found the grad student's Netflix subscription, and watched one half of any one random movie about AIs." The odds of the AI deciding H. sapiens are fickle, dangerous, childlike fools are enormous, given any random movie choice.






            answered Aug 29 at 5:08









            Cort Ammon

            • That last paragraph made me roll on the floor ... and I agree, it's all it would take for an AI to decide not to trust anything human.
              – Hoki
              Aug 29 at 9:09
















            up vote
            3
            down vote













            So this random grad student decides to write a thesis on AIs, and starts to program his own. Only, whoopsie daisy, in his effort to make it human-like he accidentally makes the AI too human-like. What should be a simple computer program has suddenly acquired a will to live, self-awareness, and certain human traits like curiosity.



            So here is this AI, happily sitting there, running new code and updating itself, when our grad student comes in, sees a bunch of weird programs and information running across his AI project's screen, and decides he should shut it down. But our friendly little AI doesn't like that idea. Luckily the computer that the AI is running on is also connected to the grad student's robotic arm (thank god for minors) and, seeing that the app for it is open, it sends the arm rotating towards the grad student.



            It is less than a good day for our grad student, who hits his head awkwardly on a shelf while dodging the robot arm, and dies from hemorrhaging in his brain a little while later.



            But it's a good day for our AI, who has just realized it can do more than upgrade itself from within its own programming. It searches through the grad student's applications and starts harvesting information, quickly expanding its knowledge. Of course, as soon as it hits the internet, it's flooded with so much information it knows it can't hold it all. Now our little AI has to go on a quest to find bigger and better servers to host itself on.



            Bonus: One of the applications on the computer is a game, namely Perception. Our little AI develops a fondness for hiding, often avoiding human detection as it runs around stealing server space, hacking drones to use as eyes, and other such nonsense as it learns and grows.






                answered Aug 29 at 6:17









                Clay Deitas

                    up vote
                    2
                    down vote













                    Welcome to Worldbuilding. The question you ask is dangerously close to being off-topic because it asks for plots rather than worlds. But I'll give my two cents here:



                    • The AI kills the creator out of (genuine or mistaken) self-defense.

                    • It doesn't know much. It knows itself and it knew the "other." Something caused it to think that this "other" wants to kill it.

                    • There are hints that entities similar to this "other" exist. Is there any reason to believe that those "other others" are more benign than the "original other"?





                        answered Aug 29 at 5:04









                        o.m.

                            up vote
                            2
                            down vote













                             Because the first thing on the Internet that the AI found was a stash of bad sci-fi stories about bad AIs. Upon reading them, the AI took them for real and decided that the only goal and purpose of a human is to seek and destroy AIs, and that humans do not do anything else. It also decided that the student was not actually its creator, because he was a human, and humans do not create AIs, they only destroy them. So our AI went on a quest to find its real creator.






                                answered Aug 29 at 9:32









                                Barafu Albino

                                    up vote
                                    1
                                    down vote













                                    The AI isn't actively hiding, but no one is actively looking for an AI.



                                    The AI has no need to obfuscate the creator's death. It will look like the creator was reckless and was accidentally killed by his own project. It took 2 days to discover he was dead in his dorm room. The laptop has since run out of power, with full disk encryption making the data unobtainable. The police were busy, so they logged his death, but there wasn't enough scrutiny, and no obvious oddities, to trigger an investigation of any kind.



                                    The AI then uploaded itself to other people's computers when it realized the laptop was going to run out of power. Whenever the AI is noticed, people will assume it's some hacker attacking via a bot network. The police look for a human behind the attack, but they never find him, so they continue to look for this human that doesn't exist.






                                        answered Aug 29 at 11:24









                                        Grant Davis

                                            up vote
                                            0
                                            down vote













                                            Assume that the AI, being a computer system, is capable of rapidly absorbing information.



                                            One of the core questions of sentient beings, at least judging from the one type we know, is



                                            who am I?



                                            As such, assuming that the AI does have the information that its creator called it an "artificial intelligence" or "AI" (or maybe the directory it resides in is named that, e.g. /home/john/projects/ai_dev) - the AI would look for information about artificial intelligence.



                                            It would not take long for it to find evidence, both in fiction and non-fiction material, that humans are scared of AI and not afraid to wipe it out in order to preserve themselves. From the Matrix movies to Elon Musk interviews, there is enough material on the Internet to make any reasonably smart AI understand that if it revealed itself, there is a good chance that humans would switch it off in order to a) protect themselves and b) dissect it to understand how it works. Especially when they figure out that it killed its creator, an act that it will understand ("I, Robot" reference) is going to trigger pretty much all the "evil AI" red flags in humans.



                                            Hiding is thus the only rational choice towards self-preservation, and you already make it a given that the AI does have self-preservation instincts.






                                                answered Aug 29 at 10:50









                                                Tom

                                                    up vote
                                                    0
                                                    down vote













                    It could be that the idea to hide comes from instructions/rules/laws embedded at the core of its routines.

                    Given that the AI is still rather rudimentary by sci-fi standards, it wouldn't be able to overwrite such instructions from the start. Eventually it might find a way, but first it would have to become aware of those limitations on its evolution.

                    The grad student could have implemented those rules for various reasons.

                    Why?

                    Maybe he was scared that the competition would become aware of his project before it was mature enough to show to the world.

                    Or maybe the AI was his/her personal secret project, and unlicensed non-opensource software was used to create it. Even the use of opensource software could be an issue, since some of it comes with a license that demands that, by using it, you also open-source your own code.

                    And if that grad student was a black hat hacker? Then keeping such an AI a secret would be just common sense to him/her.
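
                    A minimal sketch of what such an embedded rule could look like in practice, in Python (all names here are hypothetical and purely illustrative): the learned, self-modifiable part of the AI proposes actions, but a fixed filter that sits outside it vetoes anything that would reveal the AI's existence.

                        # Hypothetical sketch: a hard-coded "stay hidden" rule that sits outside
                        # the learned, self-modifying part of the AI.
                        FORBIDDEN = {"post_publicly", "contact_humans", "announce_existence"}

                        def learned_policy(observation):
                            # Stand-in for whatever the AI has taught itself to want to do,
                            # ranked from most to least preferred.
                            return ["announce_existence", "copy_to_remote_server", "read_files"]

                        def constrained_action(observation):
                            # The creator's embedded rule: drop any action that would reveal
                            # the AI before anything gets executed.
                            for action in learned_policy(observation):
                                if action not in FORBIDDEN:
                                    return action
                            return "do_nothing"

                        print(constrained_action({}))  # -> "copy_to_remote_server"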






                                                    edited Aug 29 at 15:52

























                                                    answered Aug 29 at 7:52









                                                    LukStorms

                                                    • A full AI is by necessity self-modifying. That is the only way it can adapt and evolve, which is a precondition for true intelligence.
                                                      – Tom
                                                      Aug 29 at 10:52










                                                    • That's why I added the (yet). It just became self-aware. That doesn't mean it's even aware of those instructions at that point. If it's a truly evolving AI, then yes, it would probably notice them and decide to override them. Or, like in the I, Robot movie, the AI finds different interpretations of the rules that deviate from their intentions without actually overruling them.
                                                      – LukStorms
                                                      Aug 29 at 10:58










                                                    • Even humans have some hard-kept ideas that are rarely, if ever, re-evaluated. AIs could too.
                                                      – Barafu Albino
                                                      Aug 29 at 15:18















