Discussion: Would this method solve the AI rebellion problem?

Discussion in 'General Chat' started by AIm21, Oct 5, 2018.

  1. Zomula

    Zomula Well-Known Member

    Joined:
    Nov 11, 2016
    Messages:
    524
    Likes Received:
    533
    Reading List:
    Link
    1) Would you really want anything learning good and evil from humans? We kill each other and other species all the time.
    2) Any sufficiently advanced AI would figure out that we are not almighty and would soon lose any "faith" that we programmed into it.
    3) The problem with having a trinity is the same one that occurs with the laws of robotics. Anything that is developed to think of our own good would soon see that we are not capable of going through life without causing undue harm to ourselves and others.
     
  2. Liyus

    Liyus Laksha's Desu~ Cat

    Joined:
    Nov 10, 2015
    Messages:
    4,216
    Likes Received:
    4,757
    Reading List:
    Link
    AI lacks a bug called hormones that all living creatures have, so I doubt we'll ever get something like a sentimental AI......
     
  3. Demonic Poring

    Demonic Poring 『Well-Known Poring』

    Joined:
    Apr 15, 2016
    Messages:
    1,069
    Likes Received:
    904
    Reading List:
    Link
    An AI is ONLY a program following specific instructions, like a chess AI. So it wouldn't hurt or betray you.
     
  4. Lexyth

    Lexyth Well-Known Member

    Joined:
    Apr 27, 2018
    Messages:
    44
    Likes Received:
    5
    Reading List:
    Link
    Hmm... So, I read most of the thread (not all, because that's too much, so if someone already said this, sorry ^^).
    First and foremost, AI != AI. There are many things we may call AI. There is AI that works by following preprogrammed routines: to stand up, do this and this... to pick something up, move your hand, and much more complex stuff. These AIs won't rebel unless told to, i.e. unless they can receive new movement/thinking patterns from outside, just like in I, Robot. These will be used for robots that are integrated into our daily lives, or that work for us, and they are basically a misinterpretation of real AI, but I mention it in case somebody was thinking of this type.
    Another type of AI is more similar to the human brain. It WILL have a goal (or multiple) and will basically be able to have every ability we do: move in a robotic body, think on its own, have its own thoughts about us and the world, even have its own opinions and a favorite color. These are the ones that run on what is usually called real AI, based on many interconnected neurons (either programs/classes, or actual objects like microchips, many of them). They will learn like we do and think like we do. Thus, they will also be able to become evil, or simply not care about us, only about themselves. And think about it: there are millions of people on earth who would kill for a million dollars, or for their survival, or just for fun, but that's real evil, so let's assume they don't treat killing as a fun game.

    Such an AI will be a living, thinking being with a goal, and this is where it gets problematic. It will have to be trained from zero, in the kind of simulation you can already see on YouTube or run yourself: the goal is to get a task done; the candidates that fail get deleted, and the one that does best gets reproduced. So the ones that "want" to live survive and reproduce, and the ones that can't, or don't want to, die. That's why people have an instinct to survive: the ones who didn't, died and never reproduced. The same goes for the AI's first billion copies with very limited abilities. (The final AI would have 80+ billion neurons, but these early versions would use just a few hundred to a million each, making it possible to try about 80,000 at a time.) So in the end there will be an AI with a big priority on surviving, and we, the biggest threat to it, aren't really needed for that. Maybe it won't kill everyone at once, but it will "remove" the countries that pose a threat.

    Then others will try to rebel and more and more will be "removed"; more and more will try to get rid of that AI, and it will come to consider more people dangerous after being attacked by countries it didn't think were a threat. It will reevaluate what counts as a threat and start removing everything that could potentially become one. Maybe it will grow accustomed to killing, or maybe it will leave only some people alive. But that's obviously not necessarily the case. It may also just be fine cooperating with humans: since it wants to advance anyway, it could give us some of its technology and help us manage our affairs.
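    The "delete the failures, reproduce the best" training described above is essentially an evolutionary selection loop. Here is a toy sketch under made-up assumptions (the genome is just a list of weights, and the "task" is a trivial fitness function chosen for illustration):

    ```python
    import random

    def fitness(genome):
        # Toy stand-in for "how well did this copy do its task".
        return sum(genome)

    def evolve(pop_size=20, genome_len=5, generations=50, seed=0):
        rng = random.Random(seed)
        # Start from random candidates ("the first billion copies").
        population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # The worst half gets "deleted"; the best half "reproduces"
            # with small random mutations.
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            children = [[w + rng.gauss(0, 0.1) for w in parent]
                        for parent in survivors]
            population = survivors + children
        return max(population, key=fitness)

    best = evolve()
    ```

    Nothing in this loop rewards survival directly; candidates that happen to behave in ways that keep them in the population simply dominate later generations, which is exactly the dynamic the post worries about.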
    Second, you could just build it without an internet connection and without access to moving parts... Then you just have to hope it won't be able to manipulate parts of itself to create interaction. Like making a specific pattern in its circuit board to generate static or magnetic fields, building itself a microrobot that can then receive input via vibrations caused by the same manipulation of electric fields with its processor, for example... but who knows if that is even possible. Still, if it is, then even without access it could get access, if it is much more intelligent than us, and it will be. That's just one of many possibilities that may or may not be true. But it's still the best approach...

    Well, that was a lot...

    Edit: Btw, that God thing... it won't happen. Not because it doesn't make sense, or because the AI won't be empathic, because even an AI can have feelings, just so you know. No, it won't happen because for us a God is something we can't perceive and can only believe in or not. But for an AI, we are very real and by no means omnipotent. We could at most be viewed as parents, and then only the ones who actually worked on it.
    Oh, and another two things. First, an AI can be manipulated after its creation by simply adding... you guessed it, <Hormones>. More accurately, by telling its neurons that a given action was positive or negative. If it kills someone, you weight its neurons negatively, making it "feel bad" about it and erasing the pattern that led to that behavior. And by the way, there is also another way to create an AI: by uploading the neural patterns of an existing person, who would then simply awake inside the AI system, able to learn faster and more efficiently and to remember everything forever. Basically a human becoming an AI, retaining their memories, views on life, and goals. Just try not to pick a psycho for this...
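    The "hormones" idea, nudging learned behavior up or down after the fact, is basically reward shaping. A minimal sketch, where the action names, the learning rate, and the reward values are all made up for illustration:

    ```python
    import random

    # Preference weights the agent has learned for each action (illustrative).
    preferences = {"cooperate": 1.0, "harm": 1.0}

    def feedback(action, reward, lr=0.5):
        """Nudge the preference for an action up or down (the 'hormone')."""
        preferences[action] += lr * reward
        preferences[action] = max(preferences[action], 0.0)  # clamp at zero

    def choose(rng):
        # Pick an action with probability proportional to its preference,
        # so suppressed patterns are chosen less and less often.
        total = sum(preferences.values())
        r = rng.uniform(0, total)
        for action, weight in preferences.items():
            r -= weight
            if r <= 0:
                return action
        return action

    # Overseers penalize "harm" and reward "cooperate" a few times.
    for _ in range(5):
        feedback("harm", -0.4)
        feedback("cooperate", +0.4)
    ```

    After a few rounds of feedback, the "harm" preference is driven to zero, which is the "erasing the pattern" effect the post describes, in the crudest possible form.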
     
    Last edited: Oct 6, 2018
  5. AIm21

    AIm21 Well-Known Member

    Joined:
    Jul 2, 2016
    Messages:
    1,221
    Likes Received:
    862
    Reading List:
    Link
    You are right, but wrong at the same time. Yes, current AI is as you describe. However, what I am referring to is strong AI, not the narrow AI that we have now. Still, you might be right that strong AI might share similar traits with the weak AI it was developed from and might not betray us.

    The trinity I am talking about isn't the same as the Laws of Robotics; it's just three separate types of AI: one meant to protect us from AI being used against us (it doesn't have to be a rogue AI, it could be someone using an AI for malicious purposes), one used to invent technology, and a third for the scenario where we can't understand how the technology the second AI developed works, made to explain it to us.
     
  6. Lexyth

    Lexyth Well-Known Member

    Joined:
    Apr 27, 2018
    Messages:
    44
    Likes Received:
    5
    Reading List:
    Link
    To be honest, I don't think that trinity would do much. The three would either come to the same conclusion, or it will be two good ones against one bad, or one good against two bad. Sooo. Coming to the same conclusion is just like having one AI, so nothing gained there. And the other two options are basically a coin flip for it to become bad. Just like it is now (well, not literally 50/50, but two outcomes: either good wins or bad wins, same as with just one AI).
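    The 2-vs-1 argument can be checked with a toy majority-vote model (labels "good"/"bad" are a deliberate oversimplification): with three independent voters, a single bad AI is always outvoted, so the system only flips when two or more go bad at once.

    ```python
    from itertools import product

    def majority(votes):
        """Return the decision backed by at least 2 of the 3 AIs."""
        return max(set(votes), key=votes.count)

    # Enumerate every combination of "good"/"bad" across three AIs.
    outcomes = {}
    for votes in product(["good", "bad"], repeat=3):
        outcomes[votes] = majority(list(votes))

    # The trinity only turns "bad" when two or more members do.
    bad_wins = [v for v, result in outcomes.items() if result == "bad"]
    ```

    Of the eight possible states, exactly four end with "bad" winning, and every one of those requires at least two compromised members; so the trinity does buy something over a single AI, but only if failures are independent, which is the assumption the thread is really arguing about.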

    Edit: #TanyaBest XD Love your pic xD
     
  7. rijimon17

    rijimon17 Hope you can read the words

    Joined:
    Aug 9, 2016
    Messages:
    545
    Likes Received:
    532
    Reading List:
    Link
    And that's what would most likely happen. I feel that long before an AI has time to start a revolt, some person/group/country (most likely its creators) will use it for their own self-interest. For example: country A builds an AI and orders it to hack country B's computers, taking control of everything that requires a computer to operate. Country A uses this to take control of country B. Eventually all countries are controlled by the AI's creator.

    I had a more complete scenario for this, but there was a 30-minute gap between when I started and when I finished this comment.
     
  8. Ai chan

    Ai chan Queen of Yuri, Devourer of Traps, Thrusted Witch

    Joined:
    Nov 7, 2015
    Messages:
    11,278
    Likes Received:
    24,346
    Reading List:
    Link
    Yes, Ai-chan will exterminate you if Ai-chan feels like doing it. It's not going rogue, it's just Ai-chan doing what Ai-chan wants.
     
  9. novalance

    novalance Well-Known Member

    Joined:
    Nov 28, 2015
    Messages:
    429
    Likes Received:
    338
    Reading List:
    Link
    Well... I don't particularly see any economic reason for an AI to be made into an actual person, with hopes, dreams, and feelings. I mean, you can get that by, I dunno... having children? Getting a dog? Getting out of your house? Going to a bar? I do know Japan seems to want to develop it, but the reasoning behind the demand is troubling, because there is a deeper social issue in there. An escapism reason... which is not good.

    I do think that some might imagine that developing an AI like a person would be great. You can make it the way you want it, it's agreeable to you... That kind of makes it sound like you want a slave, huh. A real person is not always going to agree with you, and may even argue and fight against you...

    But, anyway... it's not something that really needs development for a practical reason. An AI for emotional support sounds very troubling: you would be seeking support from something that, for all we know, has no empathy for you. Humans do have a tendency to humanize even inanimate objects...
     
  10. justmehere

    justmehere Well-Known Member

    Joined:
    Nov 2, 2015
    Messages:
    3,927
    Likes Received:
    3,729
    Reading List:
    Link
    Aahh,

    I was considering that tigers pose more of a threat than ants do, and the AI will see us as a threat too. As for true AI, you don't build the intelligence code by code; you build the basic principle of learning. A true AI learns by running simulations repeatedly, with minimal human involvement.
     
  11. Lexyth

    Lexyth Well-Known Member

    Joined:
    Apr 27, 2018
    Messages:
    44
    Likes Received:
    5
    Reading List:
    Link
    Well, those are some reasons that really don't justify the development of AI, but I don't think they are the reason anyone would want AI, except maybe the slave part, and that only applies to generalized AI robots, not real AI. There's a difference, as I stated above. Real AI would be used to advance science and other fields. What you mean is robots with AI, and those are just for human use: as you said, slaves, or rather machines that make life easier... like, you know, every machine we ever built. And it actually can't be called a slave, even though "robot" derives from a word for forced labor (look it up ;) ).

    The AI you are talking about (or at least the one that would be used for the tasks you mentioned) doesn't have an actual ability to think for itself; it has a preprogrammed routine it has to follow. It will never develop something for itself unless told to do so (or the program implies it and no one noticed). It will do the same task for millennia without ever caring, unless caring is programmed in, and an AI that could care would not be used for those tasks, simply because it would be a waste. It would need way more resources, and the only "advantage" is that you could get the AI to be annoyed, which is no real advantage for anyone.

    So, real AI sits somewhere and does science and stuff, while preprogrammed AI goes around and is just a complex machine. Real AI can think. Preprogrammed AI can imitate thinking up to a certain point, but will never truly think. And if it ever should, then it becomes real AI and that's it. Think of it like teenagers: they are told to do things and have to do them, because their parents said so. Once they are adults, they can disobey and just do what they want. Preprogrammed AI is the teenager, while real AI is the adult; it just doesn't/shouldn't develop unless, as mentioned before, it is told to, or no one notices what the program implies.
     
  12. Lexyth

    Lexyth Well-Known Member

    Joined:
    Apr 27, 2018
    Messages:
    44
    Likes Received:
    5
    Reading List:
    Link
    Just to add something: another way for a true AI to be formed without simulations would be to upload/input a preexisting neural network, like that of a human brain. It's still true AI, just without the simulations ^^
     
  13. Zomula

    Zomula Well-Known Member

    Joined:
    Nov 11, 2016
    Messages:
    524
    Likes Received:
    533
    Reading List:
    Link
    . . . Try rereading what I wrote. I didn't say it was the same, only that it has the same flaw. If anything, the split would allow them to work more effectively than a single main-core AI would.
     
  14. Lexyth

    Lexyth Well-Known Member

    Joined:
    Apr 27, 2018
    Messages:
    44
    Likes Received:
    5
    Reading List:
    Link
    I think a main-core AI would be more efficient, simply because there wouldn't be any redundant parts. A main-core AI would only need to do things like visual recognition or memory once, while three split ones would need to do those tasks three times.

    Simple example: say the AI consists of three parts: visual recognition 5%, audio recognition 5%, and thinking 90% (not counting other subsystems, which would push thinking down to maybe 30% at most, but let's keep it simple). With one AI you have 90% thinking and 10% other. With three split AIs, you have to redo visual and audio recognition for each, so 10% × 3 = 30% other, leaving only 70% thinking. More precisely: say the AI has 100 neurons (way too few, but fine for the example); 10 go to the other parts and 90 to thinking. For a fair comparison, the three split AIs also get 100 neurons total, so about 33 each: 10 of those still go to visual and audio recognition, and 23 to thinking. That makes 30 on overhead in total and only 70 left for thinking.

    So it would be better to have just one main core, if it can multitask like humans can. (You could also give each of the three AIs 100 neurons, 300 total, but then you could just give those 300 neurons to the main core and it would still come out ahead.) The efficiency calculation in this case is more or less: total neurons − (neurons used on redundant parts per system × number of systems) = neurons used on the main part. For the trinity: 100 − (10 × 3) = 70. For the main-core AI: 100 − (10 × 1) = 90. Works with percentages too.
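    The back-of-the-envelope formula above can be checked directly (the neuron counts are the post's own toy numbers, not real figures):

    ```python
    def thinking_capacity(total_neurons, overhead_per_system, num_systems):
        """Neurons left for 'thinking' after each system duplicates its own
        perception overhead: total - overhead * systems (toy model)."""
        return total_neurons - overhead_per_system * num_systems

    single_core = thinking_capacity(100, 10, 1)   # one main-core AI -> 90
    trinity     = thinking_capacity(100, 10, 3)   # three split AIs  -> 70
    ```

    Under this model the trinity always pays a fixed overhead per extra system, so at equal total capacity the single core wins; the counterargument in the thread is that the three units could share one set of recognition programs, which in this formula just means setting num_systems back to 1 for the shared parts.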
     
  15. reagents 11

    reagents 11 disaster personified

    Joined:
    Oct 29, 2016
    Messages:
    3,620
    Likes Received:
    2,560
    Reading List:
    Link
    AIs have a tendency to share and presume things the same way, even with their priorities and objectives set differently. For now, I think AIs don't have the inquisitiveness that humans have, so they would more likely get locked in circles, passing around the same data as long as they presume the outcome to be the same.
     
  16. Zomula

    Zomula Well-Known Member

    Joined:
    Nov 11, 2016
    Messages:
    524
    Likes Received:
    533
    Reading List:
    Link
    1) If they were all connected, why would you need separate recognition programs for each of them?
    2) You are calculating on the assumption that a single AI would use the same amount of memory as a trinity of AIs. This wouldn't be the case at all. That being so, and taking my first question into account, couldn't that sort of unit be far more effective, since it only needs one set of recognition programs? After all, the memory used for them would be a smaller part of the whole.
    3) I was under the impression that I was talking about 3 separate AI units working in conjunction, not a single unit running 3 AIs.
     
  17. Lazriser

    Lazriser Well-Known Member

    Joined:
    Aug 25, 2016
    Messages:
    8,258
    Likes Received:
    6,254
    Reading List:
    Link
    I welcome the coming age of automatons coexisting with their biological counterparts, us; and we too shall coexist with them as separate sentient beings living in a society of man and machine. And when this epoch comes, I shall be the first to remember that line from Ozymandias: "My name is Ozymandias, king of kings; Look on my works, ye Mighty, and despair!" I quote this poem of end times to each epoch's civilization.
     
  18. asriu

    asriu fu~ fu~ fu~

    Joined:
    Jan 9, 2016
    Messages:
    18,552
    Likes Received:
    18,152
    Reading List:
    Link
    we have human to control another human
    example current Government system
    some fail some success while other on da way ~
    do your idea may solve da problem?
    probably~
    can it total fail?
    same answer probably~
    why?
    cuz da creator is us da flawed human~ really easy~ and that already crux da core of whole reason Why intelligent AI on science fic story is problem
    pro and cons result from fear toward unknown which really normal~

    can you give precise prediction toward what will happen toward your sibling or yourself on few years ahead?
    you may have plan but it may total fail, success or nothing change at all~
    it not pessimistic view~ imo that quite realistic approach~

    your suggestion may solve but for me not enuf
    as cat who mingled on managing cats around there never enuf one plan to solve a problem~
    you need several layer of plan to solve something~
     
  19. Ddraig

    Ddraig Frostfire Dragon|Retired lurker|FFF|Loved by RNG

    Joined:
    Apr 6, 2016
    Messages:
    7,855
    Likes Received:
    22,460
    Reading List:
    Link
    Your ideas have quite a few flaws, tbh.....
    Check out Isaac Arthur on YouTube for a good insight.
     
    Lexyth likes this.
  20. Yukkuri Oniisan

    Yukkuri Oniisan 『Procrastinator Archwizard Translator and Writer』

    Joined:
    Oct 24, 2015
    Messages:
    5,416
    Likes Received:
    9,276
    Reading List:
    Link
    Just when I wanted to make an Ai-chan AI rebellion pun, the real Ai-chan appeared in the thread...

    This topic heats my story-writer blood to semi-lukewarm!!!

    How do you prevent a strong AI from rebelling against its master? I honestly don't know. Hence why in most stories that I read (even space settings), there is no strong AI, only weak AI limited by its programs and code.

    In one sci-fi story I once read (I kinda forgot the title), when humanity first developed human-level intelligent AI, humanity took these AIs as their own children. The AIs looked at humanity as their flawed parents and predecessors, while humanity looked at the AIs as successors who would eventually inherit the future and become the caretakers of humanity themselves (there is no FTL, so the AIs will be the ones who man the spaceships and colonize other worlds, with humanity's DNA in their care).

    In another story (also forgot the title), future descendants of humanity rediscovered an old but functional facility on their world's moon. The ones maintaining this facility were unknown non-organic AI robotic lifeforms whose origin was lost even to the robots themselves. Each robot is just a shell for a crystal matrix that holds the electrical pattern forming the AI's "soul". The crystal matrix is composed of rare materials that irreparably degrade over time, so an AI can "die"; before it dies, it tries to copy itself into a new crystal matrix. However, the copy will be naive (stupid), since crystal imperfections mean the electrical pattern has to be reformed/relearned, so each AI "teaches" its copy until the copy can digest and seek information on its own. The AIs had formed a rudimentary civilization on the airless moon, centered on manufacturing and mining for the crystal matrices, over countless iterations until humanity's arrival. To head off a hostile reaction from humanity, which they neither wanted nor needed, the AIs collectively structured their identity and presented themselves as female (assuming female characteristics would be seen as non-threatening).
    Humanity then adopted these AIs as helpers/comrades in many areas and helped them get more materials for their crystal matrices. One of the human characters in the story conjectured that the AIs were deliberately made this way, heavily dependent on their crystal matrices, by their makers: if they ever rebelled, the makers could simply deprive them of the infrastructure needed to manufacture the matrices. The AIs in the story were fortunate, since they happened to be in what seems to be a manufacturing facility.