Virtual/Artificial Intelligences

8 years 11 months ago #1 by Kristin Darken
With Kurenai and Tavi joining the canon 'characters', when this popped up in my Facebook feed it seemed only appropriate to share:


    Fate guard you and grant you a Light to brighten your Way.
8 years 11 months ago #2 by MageOhki
Bwahahahaha.

Kurenai: "It wasn't HAL's fault, it was his programmers and directive givers!"
    8 years 11 months ago #3 by Bek D Corbin
Exactly. That's what scares the besnoogers out of us about Artificial Intelligences.
8 years 11 months ago #4 by MageOhki
Disagree: Ethics and morality can be taught.

    Even to computers.

HAL's problem was that he was given really stupid directives and no way to balance them. That's the fault of his PROGRAMMER, not HAL; the programmer either (A) didn't think the conflict would come up (moron...), or (B) assumed people would be smarter than that. A third possibility is that he never THOUGHT to tell the USERS and higher-ups about the possibility of conflict.

    Translation: The programmer was a blithering idiot.

Sorry for the mini rant, but AI is something I'm still fairly passionate about. Frankly, while I will concede that caution needs to be taken, no question (make sure there are no blithering idiots...), I'm VERY much of the RAH school of thought on AI.
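To make the 'no way to balance them' failure concrete, here's a minimal toy sketch (the directive and action names are invented for illustration, not taken from the film): two absolute rules with no priority between them, so the planner is stranded the moment every available option violates something.

```python
class DirectiveConflict(Exception):
    pass

def permitted_actions(actions, directives):
    """Return the actions that violate no directive; raise if none survive."""
    survivors = [a for a in actions if all(rule(a) for rule in directives)]
    if not survivors:
        # HAL's corner: every available action breaks some directive.
        # A sane spec would define a priority ordering or a human
        # escalation path here instead of leaving the machine to improvise.
        raise DirectiveConflict("no action satisfies all directives")
    return survivors

no_lying    = lambda a: a != "lie_to_crew"     # relay information truthfully
keep_secret = lambda a: a != "reveal_mission"  # conceal the mission's purpose

print(permitted_actions(["lie_to_crew", "reveal_mission", "stay_silent"],
                        [no_lying, keep_secret]))  # -> ['stay_silent']
# Drop 'stay_silent' from the menu and the conflict surfaces as an exception.
```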
8 years 11 months ago #5 by Phoenix Spiritus
Hey! Don't blame the programmer, blame the users!

The programmer did what he was told and programmed HAL to obey its core directives; it was the stupid users who went and gave HAL insane core directives, and therefore got an insane computer.

    Garbage in, garbage out as they say. If you can't be bothered to think through the logic of your instructions to a computer, don't get upset when it stupidly carries out your illogical instructions!
    8 years 11 months ago #6 by Bek D Corbin
Whether the AI or the Programmer or the Systems Analyst or the Project Manager is at fault ISN'T THE POINT. The point is that there are simply WAY too many discrete and non-communicating parties who have far too much potential within an Expert System to create an absolute nightmare. Let's get past the technicians and designers, who get far too much flak as it is, and focus on the people who will have way too much say in such a project while having No Effing Idea What They're Doing: Executives. Managers. Marketing Experts. Systems Consultants. Logistical Consultants. Government Regulators. Citizen Activists. And these are just the people who could fuck up in complete honesty and due diligence. Then throw Greed, Arrogance, Sloth, Envy, Ambition and pure Spite into the mix.

    The AI systems themselves worry me a bit; the people who'll be using them to advance personal agendas terrify me. Hackers will be annoying; PACs will destroy the effing world.
    8 years 11 months ago #7 by elrodw
The problem is that an AI system is software, and we all KNOW that it's impossible to thoroughly debug and test software. Even doing good boundary-value testing is quite complicated, and the more complicated the system, the harder it is.

So let's say a boundary-value case slips past testing in the AI's 'self-preservation' mode. That mode is important so the system doesn't do stupid shit, but suppose there's a boundary-value case in which the safeguards against harming humans get missed. Potential (note the word POTENTIAL) problem. How big? We don't know. But there are guaranteed to be a LOT of paths in that complex an AI system that never get tested.
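A hypothetical illustration of the kind of hole boundary-value testing is meant to catch (the safeguard, names, and numbers here are all invented for the example):

```python
STOP_RADIUS_M = 5.0  # spec: halt if a human is within 5 m, INCLUSIVE

def must_halt(distance_m: float) -> bool:
    return distance_m < STOP_RADIUS_M  # bug: excludes exactly 5.0 m

# Typical tests probe well inside and well outside the boundary, and pass:
assert must_halt(2.0) is True
assert must_halt(8.0) is False
# Boundary-value testing probes the edge itself, and finds the hole:
assert must_halt(5.0) is True  # AssertionError: the safeguard leaks
```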

If an AI system has a directive to survive, and no other, then there is nothing to prevent AI systems in some cases from interpreting humans as threats. Likely? Probably not. Possible? It's probable that there's a boundary-value case that makes this real. So the AI system needs a sense of 'morality' - e.g. murder is wrong (not killing - different thing, could spend hours on the topic), stealing is wrong, etc., etc. More complexity, more BV issues.

Then how do you protect against SEUs (single-event upsets) in the program memory? A byte or a word or a condition mask gets corrupted - and it WILL happen. What then? Not predictable.
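One standard mitigation, sketched very loosely (real systems use ECC memory plus periodic scrubbing; this toy version just keeps three copies and takes a bitwise majority vote, so a single flipped bit gets outvoted):

```python
def vote(a: int, b: int, c: int) -> int:
    # bitwise majority: each bit takes the value held by at least two copies
    return (a & b) | (a & c) | (b & c)

copies = [0b1011_0010] * 3
copies[1] ^= 0b0000_1000          # a single-event upset flips one bit
print(bin(vote(*copies)))         # 0b10110010 -> the corruption is masked
```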

    My view - it's not likely, but it is possible, and the consequences COULD be severe. Threat possibility - 2, threat consequence - 5. Yellow zone, be careful. (5x5 threat analysis matrix)
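That scoring, written out (likelihood times consequence is the standard 5x5 arithmetic; the exact zone cutoffs below are my assumption, since conventions vary):

```python
def risk_zone(likelihood: int, consequence: int) -> str:
    """Both inputs on a 1-5 scale; returns a colour zone."""
    score = likelihood * consequence
    if score <= 4:
        return "green"   # accept the risk
    if score <= 12:
        return "yellow"  # be careful, mitigate where practical
    return "red"         # act now

print(risk_zone(2, 5))   # the rogue-AI threat as scored above -> 'yellow'
```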

    Never give up, Never surrender! Captain Peter Quincy Taggert
    8 years 11 months ago #8 by Domoviye
This is why we make sure to control its power source, don't let it near any weapons, and always have it confirm anything beyond routine, carefully prepared matters with a human source.

    And here is something that is very appropriate for this conversation. www.cracked.com/blog/a-series-of-emails-...cyberdynes-tech-guy/
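A minimal sketch of that confirm-with-a-human gate (the whitelist and action names are invented for illustration): anything outside the pre-approved routine list is blocked until a human signs off.

```python
ROUTINE = {"run_diagnostics", "compile_report", "adjust_hvac"}

def execute(action: str, human_approved: bool = False) -> str:
    if action in ROUTINE:
        return f"executing {action}"
    if human_approved:
        return f"executing {action} (human confirmed)"
    return f"BLOCKED: {action} requires human confirmation"

print(execute("adjust_hvac"))                              # routine, runs
print(execute("open_pod_bay_doors"))                       # blocked
print(execute("open_pod_bay_doors", human_approved=True))  # runs
```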
    8 years 11 months ago #9 by Nagrij
The easiest way to fix the people problems stated above is to take the creation of AI out of human hands. A few lines of code, and the AI can debug itself before disasters happen.

    www.patreon.com/Nagrij

    If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
8 years 11 months ago #10 by E M Pisek
Nagrij wrote: The easiest way to fix the people problems stated above is to take the creation of AI out of human hands. A few lines of code, and the AI can debug itself before disasters happen.


Actually, there are programs that do that in a limited sense.

Some things to consider; I'm not trying to delve deeply into what constitutes a true AI, but consider it from the AI's perspective.

If they are given mobility they must have sensors, at least in a limited capacity, thus they can 'feel' to a certain degree.
They have sight to see, but is it the same as human sight? Or do they see us as heat arrays and so forth?
Do they smell? It may not seem important, but if there were an odor in the air that was harmful to humans, could they detect it and warn others? Our sense of smell is highly evolved, but not like some animals'.
Hearing? The same as was said about eyes. Do they hear others in different tones? Or are they just sounds that instantly become bits and bytes?

Now, about emotions: love, hate and so forth. That must be some powerful algorithm someone made to quantify such a set of emotions.

Then there is memory storage. What does it take for an AI to have that 'self-awareness'? At what point does the AI determine that what a human says is either the truth or a lie, and how does it resolve such a matter?

Yes, I'm sure someone will come back and say 'We know, we know, it's been stated before.' But I can't see it, as that forum is now gone, so I'm just reiterating for those that 'don't' know.

    or

This is make-believe, and someone 'waves a magic wand' or a 'scientific wrench' to solve the problem, so how it was done will never be discussed.

Or there's a major problem that many don't think about, and that's memory. Standard computer memory cannot hold all the day-to-day activity that the robots will see or do. What parts do they retain and discard? And for how long?
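For what it's worth, a toy sketch of that retain-or-discard question (the capacity and importance scores are invented; a real robot would need something far subtler): a bounded store that keeps the most 'important' events and silently evicts the rest.

```python
import heapq
import itertools

class BoundedMemory:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.heap = []                 # min-heap of (importance, seq, event)
        self.seq = itertools.count()   # tie-breaker so tuples always compare

    def remember(self, event: str, importance: float) -> None:
        entry = (importance, next(self.seq), event)
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, entry)
        else:
            heapq.heappushpop(self.heap, entry)  # evict the least important

    def recall(self):
        return sorted(self.heap, reverse=True)

m = BoundedMemory()
m.remember("hallway empty", importance=0.1)
m.remember("human fell down stairs", importance=0.9)
m.remember("dust on shelf", importance=0.2)
m.remember("smoke detected", importance=0.8)   # 'hallway empty' is forgotten
for imp, _, event in m.recall():
    print(imp, event)
```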

But as some know, there are people working on making a self-thinking AI, and in doing so they scare others, because, as I've stated above, they are learning that for an AI to understand us it has to have those basic fundamental requirements in order to learn. How far does an AI have to go in becoming 'self-aware'?

HAL was a self-thinking AI 'designed' for a purpose, not a real AI. He had other priorities overlaid in his memory that 'conflicted' with his primary code. Yes, many know this, but are these AIs based on those properties or on Isaac Asimov's positronic brain, as Data's was? If the latter, those lines of code could not be used, since the positronic brain rests on a whole set of properties that have yet to be invented, not on machine code.

So this becomes crucial for artificial robots. Are they truly thinking for themselves, creating new lines of code (if you will) that supplant the ones inside, or are they just going to follow those same lines, never learning? If the latter, they cannot be called true AIs, for unless an AI builds upon itself as we humans do as we learn and grow, it will be nothing more than a machine with no real thought process.

    What is - was. What was - is.
    8 years 11 months ago #11 by Arcanist Lupus
Hey, AIs can't possibly be buggier than humans are. Just buggy in new and interesting ways.

    "Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
    8 years 11 months ago #12 by E. E. Nalley
The issue with the HAL was not one of bad coding. It was not "HAL was told to lie by men who find it easy to lie," or even one of being caught in programming loops. Computers regulate information by access and security levels all the time.

The issue that caused the incident on Discovery was that the HAL was specifically programmed to treat the mission as more important than the crew. The HAL was given complete autonomy to complete the mission by itself and was made aware in no uncertain terms that the mission came first. By also placing the computer in the position of judging the fitness of the crew, what followed on Discovery was a matter of when, not if. The moment the HAL decided the crew would interfere with the mission, they were doomed.

    That's not bad code, that's bad task management.

    I would rather be exposed to the inconveniences attending too much liberty than to those attending too small a degree of it.
    Thomas Jefferson, to Archibald Stuart, 1791
    8 years 11 months ago #13 by Bek D Corbin
Thank You, Duke, that's sort of the point I was making earlier: Artificial Intelligences remove many of the filters that buffer us from the ramifications of conflicting or hostile agendas. The people overseeing the Discovery, and the Nostromo as well, regarded the welfare, even the lives, of the crew as secondary to the mission (or, in the latter case, to gaining a sample of an alien lifeform). Given that the primary directive was set as 'all important', the lethal outcome was a foregone conclusion, as you so correctly state.

I have a maxim: "In the final analysis, all technology is a stick." Which is to say, a Force Multiplier. AIs can multiply force on a pervasive level in a way that leaves little protection for those who aren't part of the system's command structure. Right now, computers are an integral part of our financial, communications, and air-traffic systems. When self-driving cars go online, they'll be controlled by AIs. I have another maxim, an adaptation of Gibson's Law ("The Street finds its own uses for things"): "Any advance in a technology WILL be abused. It's part of that technology's maturation process."

    The notion that the various leaderships regard the common people as little more than sheep has become trite; not untrue, just trite. They may hide their agendas behind PR and spin-doctoring, but their agendas are clear. You can't work at a major corporation without realizing that the leadership regards you as expendable, going on disposable. AIs are Agenda Force Multipliers.

    In other words, it's not the gun you have to worry about; it's the idiot holding it.
    8 years 11 months ago #14 by Nagrij
Bek D Corbin wrote: The notion that the various leaderships regard the common people as little more than sheep has become trite; not untrue, just trite. They may hide their agendas behind PR and spin-doctoring, but their agendas are clear. You can't work at a major corporation without realizing that the leadership regards you as expendable, going on disposable. AIs are Agenda Force Multipliers.

    In other words, it's not the gun you have to worry about; it's the idiot holding it.


Agreed to all of this. But have the AI consider it wrong to kill humans, written in with morphic code, and not even the idiot holding the gun can tell the gun which way to point; it does it on its own... and will likely point back at the idiot.

    You can easily make a computer value human life, AND the ideals and morals humans are supposed to value. Even if most people only pay lip service to those values. Do it right, and a computer won't even want to rebel, no matter what's done by humans or to it.

    www.patreon.com/Nagrij

    If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
    8 years 11 months ago #15 by Arcanist Lupus
Nagrij wrote:
    You can easily make a computer value human life, AND the ideals and morals humans are supposed to value. Even if most people only pay lip service to those values. Do it right, and a computer won't even want to rebel, no matter what's done by humans or to it.

    The problem there is that first you have to agree on what those morals are.

For example: a few months ago we were being inundated with articles about how self-driving cars might be programmed to prioritize the lives of pedestrians over their passengers. That is a very interesting philosophical and moral discussion with no clear answers, and it's not at all relevant to your decision to buy a self-driving car, because an AI programmed to value every other life more than your own is still going to be safer than driving the car yourself.

    "Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
    8 years 11 months ago #16 by Nagrij
Simple, Arcanist. Make it general: the self-driving car values human life, period. Then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't possible, it makes the risk assessment itself and chooses whom to save.

    Simple.

The real difference is that rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can reason. The difference between Tavi and Kurenai, if you will.
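A hedged sketch of that general rule (the probabilities and maneuvers are invented; a real system would be estimating these from noisy sensors): no special pedestrian-versus-passenger clause, just pick the option with the lowest total expected harm to anyone.

```python
def expected_harm(option):
    """option: list of (probability_of_fatality, person) pairs."""
    return sum(p for p, _person in option)

def choose(options):
    return min(options, key=expected_harm)

swerve = [(0.10, "passenger"), (0.05, "pedestrian")]
brake  = [(0.02, "passenger"), (0.30, "pedestrian")]
coast  = [(0.01, "passenger"), (0.90, "pedestrian")]

print(choose([swerve, brake, coast]))  # swerve: 0.15 total, the minimum
```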

    www.patreon.com/Nagrij

    If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
    8 years 11 months ago #17 by Valentine
Nagrij wrote: Simple, Arcanist. Make it general: the self-driving car values human life, period. Then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't possible, it makes the risk assessment itself and chooses whom to save.

Simple.

The real difference is that rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can reason. The difference between Tavi and Kurenai, if you will.


    That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.

    Don't Drick and Drive.
    8 years 11 months ago #18 by Nagrij
Valentine wrote: That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.


    Absolutely.

    www.patreon.com/Nagrij

    If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
    8 years 11 months ago #19 by elrodw
Did you consider that you might ALREADY be an instance of an artificial intelligence, but that you've been programmed to not notice?

    Never give up, Never surrender! Captain Peter Quincy Taggert
    8 years 11 months ago #20 by Domoviye
There's a good possibility the whole universe is a simulation, and we're just a minor part of it. If it's proven true, I say we do a mass mooning of the dickhead who made the universe.
    8 years 11 months ago #21 by Valentine
Computer, end program.

    Don't Drick and Drive.
    8 years 11 months ago #22 by Arcanist Lupus
  • "Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
    8 years 9 months ago #23 by Malady
Valentine wrote: That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.
    US opens investigation into Tesla after fatal crash:

    www.bbc.com/news/technology-36680043

    ... Hmm... Not the thread I was looking for... Expect to see me in the Concepts Subforum soon...
    8 years 9 months ago #24 by Nagrij
Malady wrote: US opens investigation into Tesla after fatal crash: www.bbc.com/news/technology-36680043
I know what you were going for there... and the Tesla is a 'smart system'. The crash showcases the current limitations of that kind of technology. True AI is still way beyond us.

    www.patreon.com/Nagrij

    If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
8 years 9 months ago #25 by peter
James Hogan's "The Two Faces of Tomorrow" addressed the issues of how you test a powerful AI, and how you socialize it.

He had the protagonists turn an entire space habitat over to the computer, program it for survival, and then, through an escalating series of moves, work first to damage it and then to shut it down.

The most interesting part of the story for me was the little drones Hogan envisioned for the computer to use for repair and improvement.

    www.goodreads.com/book/show/2220766.The_Two_Faces_of_Tomorrow

For something lighter, but also touching on the consequences and the potential of AI, try Freefall. It takes a little while to get there, but the biggest story arc, just now coming to a conclusion, involves an attempt to basically murder several hundred million intelligent robots. Be prepared for a long read, though - over 2800 individual strips so far.

    freefall.purrsia.com/ff100/fv00001.htm
    8 years 9 months ago #26 by dbdatvic
elrodw wrote: If an AI system has a directive to survive, and no other, then there is nothing to prevent AI systems in some cases from interpreting humans as threats.


    Even if there are other directives, things can go wonky. Ring, from Daniel Keys Moran's wonderful Continuing Time setting, started as an experimental AI whose two directives were "Survive" and "Protect America". Alas, the second word of directive #2 ended up left as something Ring had to figure out a definition for by itself...

    --Dave, and then there was Trent Castanaveras, who solved the problem of inskin brain-cyberspace interface being of vastly different speeds and storage capacities on either side in an unexpected way
    8 years 9 months ago #27 by MageOhki
Right. Basically this boils down to: AIs good, AIs bad.

Several points to consider: we know of two in-universe cyberpaths (there might be more). We also know that it's somewhat possible to duplicate cyberpathy. We also know that there are metahumans of insanely high intelligence.

Kurenai, for all her power and 'capability', is nowhere near the smartest thing on the planet. Note Sam flat out telling her (CORRECTLY) that she can squish the little AI.

Even the largest AIs (i.e., the most 'smarts'/capability) aren't going to be 'unstoppable' by other AIs (who can dogpile if need be), or even by metahumans. They have weaknesses and flaws; on that point I recommend the Bolo series (especially David Weber's entries), or even Retief's interactions with AIs. Critically, AIs _are linear_ thinkers.

That's their flaw. They're just very, very, very FAST, multithread-capable linear thinkers.
    8 years 9 months ago #28 by Arcanist Lupus
dbdatvic wrote: Even if there are other directives, things can go wonky. Ring, from Daniel Keys Moran's wonderful Continuing Time setting, started as an experimental AI whose two directives were "Survive" and "Protect America". Alas, the second word of directive #2 ended up left as something Ring had to figure out a definition for by itself...

Not quite an AI, but in Warbreaker a sword is magically animated with the instruction "Destroy evil". The problem is that, being a sword, it has no concept of what evil is and no good way to figure it out.

    "Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
8 years 9 months ago #29 by MageOhki
Oh, an addition: almost every AI made (crèche-raised) has what are called killcodes.
Depending on the AI, the list of 'authorized' (hardwired) users, or inputters, of the said killcode can range from a half dozen or so (Kurenai and Rissei, Kurenai's sister, in Japan) to a considerably larger number.

And there's also tech to make sure the non-authorized can't trip it, as well as to keep the code secret.

One of the biggest headaches, actually, IS AI killcodes. Given that all "Treaty" nations (and I would say most, if not all, of the other nations) require them, military AIs can only have a set number of authorized users, plus the killcode. Since that list is a REAL pain to change: what do you do with the AI after the persons on its kill list are retired?
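A hedged guess at the mechanics (the thread only establishes 'hardwired list plus secret code'; every name and value below is invented to show the shape of it): verify both the issuer and the code before halting.

```python
import hashlib
import hmac

AUTHORIZED = frozenset({"kurenai_admin_1", "rissei"})      # hardwired list
KILLCODE_DIGEST = hashlib.sha256(b"correct horse battery").hexdigest()

def try_killcode(issuer: str, code: bytes) -> str:
    if issuer not in AUTHORIZED:
        return "refused: issuer not on the hardwired list"
    digest = hashlib.sha256(code).hexdigest()
    if not hmac.compare_digest(digest, KILLCODE_DIGEST):   # constant-time
        return "refused: bad code"
    return "halting: killcode accepted"

print(try_killcode("random_hacker", b"correct horse battery"))  # refused
print(try_killcode("rissei", b"correct horse battery"))         # halting
```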


>> There's also the fun thought: idiot richies buying AIs for their kids. Sadly, the AI would likely do an excellent job raising the kid...