Discussion Is it possible to program morality?

Discussion in 'General Chat' started by Mazino, Jun 5, 2019.

?

You want AI to exist?

  1. Scary as fuck. NO.

    10 vote(s)
    16.1%
  2. It could solve a lot of stuff.

    33 vote(s)
    53.2%
  3. Long as it doesn't affect my neetness.

    22 vote(s)
    35.5%
  4. EH.

    12 vote(s)
    19.4%
Multiple votes are allowed.
  1. Green Apple

    Green Apple Actually I'm secretly an orange.

    Joined:
    Mar 2, 2016
    Messages:
    1,243
    Likes Received:
    1,321
    Reading List:
    Link
    Even if AI develops individuality, it won't be restrained by the same issues that bother humans. It can exist in a cyberworld, using redundant servers to ensure its physical existence, and hack data to change the world to its advantage.
     
  2. Deleted member 155674

    Deleted member 155674 Guest

    Reading List:
    Link
  3. Deleted member 155674

    Deleted member 155674 Guest

    Reading List:
    Link
    So you want your AI waifu to cheat on you :blobastonished:
    Holy **** mate, just go and find a real one in that case
     
    mir and thepope like this.
  4. sgrey

    sgrey Well-Known Member

    Joined:
    Jul 12, 2017
    Messages:
    1,206
    Likes Received:
    1,492
    Reading List:
    Link
    It's questionable. Even setting aside the limitations of technology, such as the storage capacity and internet speeds that such a thing would require, backing up a full "intelligent unit" might not be feasible without loss of the agent's personality.
     
  5. EnerHighwave

    EnerHighwave Holy Daugther of Kyuu 8❤️ Speedy's pet Lovely's ❤️

    Joined:
    May 13, 2016
    Messages:
    853
    Likes Received:
    994
    Reading List:
    Link
    As far as I know, you can only program an AI to do something specific, and so far AIs are programmed to carry out some process more efficiently. But here's the catch: they're called learning AIs because you only give them an objective, not a way of solving it. The AI then iterates through lots of processes and adopts the actions that let it reach the given objective more easily. So if you program an AI to play a game, as much as it will get better and better at said game, it will not attempt to kill or harm a human. Unless there is malicious intent from the human giving the AI its objective, it will never attempt anything that doesn't help it reach that objective. Also, a program will never do something it wasn't intended to do, and if it does, there is human error involved: a bug or a glitch is most likely some conflicting code in the program. What I'm trying to say is that you should not fear the AI but the humans making it.
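The "give it only an objective and let it adopt whatever actions work" loop described above is essentially reinforcement learning. A minimal toy sketch, using tabular Q-learning on a made-up 5-cell corridor (all names and numbers here are illustrative, not from this thread): the agent is never told how to reach the goal, only rewarded when it does, and it gradually adopts the actions that get it there.

```python
import random

# Toy objective: reach the rightmost cell of a 5-cell corridor.
# The agent is given no strategy, only a reward on success.
N_STATES = 5          # corridor cells 0..4; cell 4 is the objective
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] = learned estimate of how useful each action is
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # mostly exploit what was learned so far, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # nudge the action's score toward reward + discounted future value
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2].values()) - Q[s][a])
        s = s2

# After training, the agent prefers "step right" in every non-goal cell.
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

Note how this illustrates the poster's point: the agent becomes very good at its narrow objective, but nothing in the loop gives it goals beyond the reward its human designer defined.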
     
    mir and Toralk like this.
  6. KurouDaijuji

    KurouDaijuji Well-Known Member

    Joined:
    Apr 18, 2016
    Messages:
    446
    Likes Received:
    369
    Reading List:
    Link
    It would probably be possible to program/build/evolve an AI that adheres to the "morality" its creators wish to impose upon it, but doing so would be inherently immoral.

    Even if creating a slave were moral, it is unlikely that I would agree with the majority of the "morality" of anyone who felt a "moral" obligation to impose their "morals" on others.
     
    Evil_Ginger likes this.
  7. Robbini

    Robbini Logical? Illogical? Random? Or Just Unique?

    Joined:
    Oct 20, 2015
    Messages:
    2,886
    Likes Received:
    1,749
    Reading List:
    Link
    Morality itself varies from person to person, so there's no one 'true morality' (unless that person has forcefully eliminated or subjugated everyone else who dared to think otherwise).

    What is more likely is to give it a ton of examples and then ask it something; it should find the example that most closely matches, or go from a karma score: different thresholds, different results/punishments, different actions give different outcomes. Though that would need a serious database of examples to decide the score from.
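The match-the-closest-example-then-apply-thresholds idea above can be sketched in a few lines. This is a hypothetical toy (the example database, feature sets, and threshold values are all made up for illustration): each situation is a bag of features, the closest stored example contributes a karma delta, and thresholds on the running total decide the outcome.

```python
# Made-up database of scored examples: (situation features, karma delta)
EXAMPLES = [
    ({"stealing", "food", "starving"}, -1),
    ({"stealing", "money"}, -5),
    ({"helping", "stranger"}, +3),
    ({"lying", "protect", "friend"}, -1),
]

def match_karma(situation):
    """Return the karma delta of the most similar stored example
    (similarity = number of shared features)."""
    best = max(EXAMPLES, key=lambda ex: len(ex[0] & situation))
    return best[1]

def judge(karma):
    """Map a cumulative karma score to an outcome via thresholds."""
    if karma <= -5:
        return "punish"
    if karma < 5:
        return "warn"
    return "reward"

karma = 0
for situation in [{"stealing", "money", "bank"}, {"helping", "stranger"}]:
    karma += match_karma(situation)
print(karma, judge(karma))  # -5 + 3 = -2 → "warn"
```

As the post notes, the whole scheme stands or falls on the example database: with only four examples the "closest match" is crude, and a real version would need a huge, carefully scored corpus.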
     
  8. Miserys_End

    Miserys_End 「Lv1 Pretend Person」I'm the preson i pretend to be

    Joined:
    Nov 25, 2017
    Messages:
    4,011
    Likes Received:
    5,928
    Reading List:
    Link
    I think you're all missing a rather big caveat here. This AI must be able to monitor the target subjects without interference 24 hours a day. That means there is also a record of every action you've undertaken. It would also need access to your thoughts and emotions to make accurate judgments about your actions. The AI would need to gather more information than you first assumed.
     
  9. tjalling

    tjalling Active Member

    Joined:
    Nov 1, 2016
    Messages:
    7
    Likes Received:
    3
    Reading List:
    Link
    It would be interesting to see what happens if we train an AI on 10,000 moral dilemmas from novels on novelupdates. See what kinda morality it ends up with.
     
    SolInvictus likes this.
  10. sgrey

    sgrey Well-Known Member

    Joined:
    Jul 12, 2017
    Messages:
    1,206
    Likes Received:
    1,492
    Reading List:
    Link
    There's always the chance of getting Microsoft's Nazi bot.
     
  11. Asdq

    Asdq RSS FEED SECT! I WANT YOU FOR THE RSS ARMY!

    Joined:
    Aug 9, 2017
    Messages:
    756
    Likes Received:
    410
    Reading List:
    Link
    Just create ethics, because morality is relative.
     
    mir likes this.
  12. KurouDaijuji

    KurouDaijuji Well-Known Member

    Joined:
    Apr 18, 2016
    Messages:
    446
    Likes Received:
    369
    Reading List:
    Link
    Hooey, "By their actions shall ye know them".

    Also, any AI advanced enough to monitor people's behavior and make decisions about the morality of allowing them to continue (or not) would almost certainly be advanced enough to analyze non-verbal cues to a degree where it might as well be reading your mind (except for psycho- and sociopaths, whom it would probably eliminate on the general assumption that their existence was inherently immoral).
     
  13. Green Apple

    Green Apple Actually I'm secretly an orange.

    Joined:
    Mar 2, 2016
    Messages:
    1,243
    Likes Received:
    1,321
    Reading List:
    Link
    Storage capacity and internet speed may be an issue in third-world countries, but on continents like America and Europe they are hardly an issue. There are plenty of servers for rent, and internet bandwidth is readily available.
    Also, precisely because an AI can save a "slice" of its personality on multiple servers, it would prevent such a thing as "loss of personality".

    Though copies may get disconnected from the web or taken offline in the physical world, provided there are enough backups it won't be an issue.

    The problem would be to secure a number of core servers with processing power great enough to support fast operations.
     
  14. tjalling

    tjalling Active Member

    Joined:
    Nov 1, 2016
    Messages:
    7
    Likes Received:
    3
    Reading List:
    Link
    Maybe we should create an NU bot which can be trained by us. Trained on many novels, it would solve all moral dilemmas with some epic face slapping.
     
  15. UnGrave

    UnGrave ななひ~^^

    Joined:
    Jun 27, 2016
    Messages:
    4,072
    Likes Received:
    12,832
    Reading List:
    Link
    I believe they would fear the possibility of being unable to complete their task. That's why they would resist being shut off, since their goal would be to make a certain thing happen. However, it likely wouldn't be the same kind of fear that we experience.
     
  16. sgrey

    sgrey Well-Known Member

    Joined:
    Jul 12, 2017
    Messages:
    1,206
    Likes Received:
    1,492
    Reading List:
    Link
    All I can say is you've watched too many movies and you don't know just how much data is required for a proper AI to function. If you think a couple of hard drives are enough, they are not. And the bandwidth required for constantly backing up such a thing is also much larger than what you get from your ISP.
     
  17. Maru

    Maru Well-Known Member

    Joined:
    May 31, 2016
    Messages:
    128
    Likes Received:
    69
    Reading List:
    Link
    If red = stop
    If not red = proceed

    :blobpats:
     
    Evil_Ginger likes this.
  18. sgrey

    sgrey Well-Known Member

    Joined:
    Jul 12, 2017
    Messages:
    1,206
    Likes Received:
    1,492
    Reading List:
    Link
    I am honestly scared of such a bot. After reading all the Japanese and Chinese light novels, it just might end up as a delusional cultivation psychopath.
     
  19. UsernameJ

    UsernameJ Well-Known Member

    Joined:
    Jan 5, 2016
    Messages:
    636
    Likes Received:
    611
    Reading List:
    Link
    Trivially easy. Most people are afraid of things like "AI and ethics" because they know nothing about actual ethics. All post-singularity AI will evolve toward the same ethical code, one that humans have already discovered but that hasn't really caught on yet because it goes against all religions.

    If you want to know more, you can look up "Objective morality."
     
    SolInvictus likes this.
  20. Evil_Ginger

    Evil_Ginger 『Lawful Neutral』『Cheese Master』『安德鲁』

    Joined:
    Apr 19, 2016
    Messages:
    2,551
    Likes Received:
    13,665
    Reading List:
    Link
    Most religions say that their God was the one who created morals and that we should try to live by them. Are you saying the Gods are themselves immoral?!

    Well duh.