Question Virtual/Artificial Intelligences
- Kristin Darken (Topic Author)
-
- MageOhki
-
Kurenai: "It wasn't HAL's fault, it was his programmers and directive givers!"
- Bek D Corbin
-
- MageOhki
-
Even to computers.
HAL's problem was that he was given really stupid directives and no way to balance them, which is the fault of his PROGRAMMER, not HAL. The programmer either A: didn't think this would come up (moron...), or B: assumed people would be smarter than that. A third possibility is that he never THOUGHT to tell the USERS and higher-ups about the possibility of conflict.
Translation: The programmer was a blithering idiot.
Sorry for the mini rant, but AI is something I'm still fairly passionate about, and frankly, while I will concede that caution needs to be taken, no question (make sure there are no blithering idiots involved...), I'm VERY much of the RAH school of thought on AI.
- Phoenix Spiritus
-
The programmer did what he was told and programmed HAL to obey its core directives; it was the stupid users who went and gave HAL insane core directives and therefore got an insane computer.
Garbage in, garbage out as they say. If you can't be bothered to think through the logic of your instructions to a computer, don't get upset when it stupidly carries out your illogical instructions!
- Bek D Corbin
-
The AI systems themselves worry me a bit; the people who'll be using them to advance personal agendas terrify me. Hackers will be annoying; PACs will destroy the effing world.
- elrodw
-
So let's say the AI system has a 'self-preservation' mode - important so it doesn't do stupid shit - and a boundary value case slips by testing in which the safeguards against harming humans are missed. Potential (note the word POTENTIAL) problem. How big? We don't know. But there are guaranteed to be a LOT of problems in an AI system that complex that never get tested.
If an AI system has a directive to survive, and no other, then there is nothing to prevent the AI system, in some cases, from interpreting humans as threats. Likely? Probably not. Possible? It's probable that there's a boundary value case that makes this real. So the AI system needs a sense of 'morality' - e.g. murder is wrong (not killing - different thing, could spend hours on the topic), stealing is wrong, etc., etc. More complexity, more BV issues.
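(To make that concrete, here's a toy sketch of the difference between "survival is the only directive" and "survival only gets a vote after the hard moral rules are applied." Every name and number below is invented purely for illustration.)

```python
# Toy sketch: hard moral constraints filter candidate actions *before*
# the survival directive gets to score them. All names/values invented.

FORBIDDEN = {"harm_human", "deceive_operator", "steal"}

def choose_action(candidates):
    """Pick the highest-survival-value action that violates no hard constraint."""
    allowed = [a for a in candidates if not (a["effects"] & FORBIDDEN)]
    if not allowed:
        return None  # refuse to act rather than break a constraint
    # Only now does self-preservation get a vote.
    return max(allowed, key=lambda a: a["survival_value"])

actions = [
    {"name": "disable_operator", "effects": {"harm_human"}, "survival_value": 0.9},
    {"name": "request_shutdown_delay", "effects": set(), "survival_value": 0.4},
]
print(choose_action(actions)["name"])  # -> request_shutdown_delay
```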
Then how do you protect against an SEU (single-event upset) in the program memory? A byte, a word, or a condition mask gets corrupted - and it WILL happen. What then? Not predictable.
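(You can't make it impossible, but the usual mitigation is redundancy plus an integrity check before acting. A rough, purely illustrative sketch of the "checksum the directive table and fail safe" idea - real systems use ECC memory, voting, and the like:)

```python
import zlib

# Toy illustration: keep a CRC of the directive table and verify it before
# every decision cycle, so a flipped bit fails safe instead of silently
# changing behaviour.

def checksum(table: bytes) -> int:
    return zlib.crc32(table)

directives = b"1:no_harm;2:obey_operators;3:survive"
stored_crc = checksum(directives)

def safe_to_act(current: bytes) -> bool:
    return checksum(current) == stored_crc

corrupted = bytearray(directives)
corrupted[0] ^= 0x01  # simulate a single flipped bit (an SEU)
print(safe_to_act(directives))        # True
print(safe_to_act(bytes(corrupted)))  # False -> halt or reload from known-good copy
```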
My view - it's not likely, but it is possible, and the consequences COULD be severe. Threat possibility - 2, threat consequence - 5. Yellow zone, be careful. (5x5 threat analysis matrix)
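(For anyone who hasn't run one of these, a quick sketch of how that 2 x 5 lands in the yellow. The zone cutoffs here are my own rough assumption, not any particular standard.)

```python
# Toy 5x5 risk matrix: likelihood (1-5) x consequence (1-5).
# Zone boundaries are assumed for illustration only.

def risk_zone(likelihood: int, consequence: int) -> str:
    score = likelihood * consequence
    if score >= 15:
        return "red"     # act now
    if score >= 8:
        return "yellow"  # be careful, mitigate
    return "green"       # accept / monitor

print(risk_zone(2, 5))  # yellow - the rogue-AI estimate above
print(risk_zone(4, 4))  # red
```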
Never give up, Never surrender! Captain Peter Quincy Taggert
- Domoviye
-
And here is something that is very appropriate for this conversation. www.cracked.com/blog/a-series-of-emails-...cyberdynes-tech-guy/
- Nagrij
-
The easiest way to fix the problems involving people stated above is to take the creation of AI out of human hands. A few lines of code, and the AI can debug itself before disasters happen.
www.patreon.com/Nagrij
If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
- E M Pisek
-
Nagrij wrote: The easiest way to fix the problems involving people stated above is to take the creation of AI out of human hands. A few lines of code, and the AI can debug itself before disasters happen.
Actually, there are programs that do that in a limited sense.
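(In a very limited sense, anyway - plenty of software already runs built-in self-tests and refuses to start if they fail. A trivial, purely illustrative sketch of the idea:)

```python
# Trivial startup self-check: the program exercises its own routines against
# known answers before doing real work. This is the "limited sense" of
# self-debugging - it catches breakage, it doesn't rewrite itself.

def add(a, b):
    return a + b

def self_test() -> bool:
    cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]
    return all(add(*args) == expected for args, expected in cases)

if __name__ == "__main__":
    if not self_test():
        raise SystemExit("self-test failed; refusing to run")
    print("self-test passed, continuing")
```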
Things to consider - and I'm not trying to delve deeply into what constitutes a true AI form - but consider it from the AI's perspective.
If they are given mobility, they must therefore have sensors, at least in a limited capacity; thus they can 'feel' to a certain degree.
They have sight, but is it the same as human vision, or do they see us as heat arrays and so forth?
Do they smell? It may not seem important, but if there were an odor in the air that was harmful to humans, could they detect it and warn others? Our sense of smell is highly evolved, but not like some animals'.
Hearing? The same question as with eyes. Do they hear others in different tones? Or are voices just sounds that instantly become bits and bytes?
Now about emotions. Love, hate and so forth. That must be some powerful algorithm someone made to quantify such a set of emotions.
Then there is memory storage. How much does it take for an AI to have that 'self-awareness'? At what point does the AI determine whether what a human says is the truth or a lie, and how does it resolve such a matter?
Yes, I'm sure someone will come back and say:
'We know, we know, it's been stated before.' But I can't see it, as that forum is now gone, so I'm just reiterating for those who 'don't' know.
Or:
'This is make-believe,' and *waves magic wand* or 'scientific wrench' to solve the problem, so how it was done will never be discussed.
Or there's a major problem that many don't think about, and that's memory. Standard computer memory can't hold all the day-to-day activity that the robots will see or do. What parts do they retain and discard? And for how long?
But as some know, there are those who are working on making a self-thinking AI, and in so doing it does scare others, for, as I've stated above, they are learning that for an AI to understand us it has to have those basic fundamental requirements in order to learn. How far does an AI have to go in becoming 'Self-Aware'?
HAL was a self-thinking AI 'designed' for a purpose and not a real AI. He had other priorities overlaid in his memory that 'conflicted' with his primary code. Yes, many know this, but are these AIs based on those properties, or on Isaac Asimov's positronic brain, as Data's was? If the latter, those lines of code could not be used, as that design rests not on machine code but on a whole set of properties that have yet to be invented.
So this becomes crucial for artificial robots. Are they truly thinking for themselves, creating new lines of code, if you will, that supplant the ones inside them? Or are they just going to follow those same lines, never learning, and thus can't be called a true AI? Unless an AI builds upon itself as we humans do - learning as we grow - it will be nothing more than a machine with no real thought process.
What is - was. What was - is.
- E. E. Nalley
-
The issue that caused the incident on Discovery was that the HAL was specifically programmed to treat the mission as more important than the crew. The HAL was given complete autonomy to complete the mission by itself and was told in no uncertain terms that the mission came first. By also placing the computer in a position where it was judging the fitness of the crew, what followed on Discovery was a matter of when, not if. The moment the HAL decided the crew would interfere with the mission, they were doomed.
That's not bad code, that's bad task management.
I would rather be exposed to the inconveniences attending too much liberty than to those attending too small a degree of it.
Thomas Jefferson, to Archibald Stuart, 1791
- Bek D Corbin
-
I have a maxim: "In the final analysis, all technology is a stick." Which is to say, a Force Multiplier. AIs can multiply force on a pervasive level, in a way that offers little protection to those who aren't part of that system's command structure. Right now, computers are an integral part of our financial, communications, and air traffic systems. When self-driving cars go online, they'll be controlled by AIs. I have another maxim, an adaptation of Gibson's Law ("The Street finds its own uses for things"): "Any advance in a technology WILL be abused. It's part of that technology's maturation process."
The notion that the various leaderships regard the common people as little more than sheep has become trite; not untrue, just trite. They may hide their agendas behind PR and spin-doctoring, but their agendas are clear. You can't work at a major corporation without realizing that the leadership regards you as expendable, going on disposable. AIs are Agenda Force Multipliers.
In other words, it's not the gun you have to worry about; it's the idiot holding it.
- Nagrij
-
Bek D Corbin wrote: The notion that the various leaderships regard the common people as little more than sheep has become trite; not untrue, just trite. They may hide their agendas behind PR and spin-doctoring, but their agendas are clear. You can't work at a major corporation without realizing that the leadership regards you as expendable, going on disposable. AIs are Agenda Force Multipliers.
In other words, it's not the gun you have to worry about; it's the idiot holding it.
Agreed to all of this. But have the AI consider it wrong to kill humans, written in with morphic code, and not even the idiot holding the gun can tell the gun which way to point; it does it on its own... and will likely point back at the idiot.
You can easily make a computer value human life, AND the ideals and morals humans are supposed to value. Even if most people only pay lip service to those values. Do it right, and a computer won't even want to rebel, no matter what's done by humans or to it.
www.patreon.com/Nagrij
If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
- Arcanist Lupus
-
The problem there is that first you have to agree on what those morals are.
Nagrij wrote: You can easily make a computer value human life, AND the ideals and morals humans are supposed to value. Even if most people only pay lip service to those values. Do it right, and a computer won't even want to rebel, no matter what's done by humans or to it.
For example - a few months ago, we were being inundated with articles about how self-driving cars might be programmed to prioritize the lives of pedestrians over their passengers. Which is a very interesting philosophical and moral discussion with no clear answers, and not at all relevant to your decision to buy a self-driving car, because an AI programmed to value every other life more than your own is still going to be safer than driving the car yourself.
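(Just to show how arbitrary that knob is, here's a toy sketch of the decision. The only thing the philosophical argument changes is the pedestrian weight; every number below is invented for illustration.)

```python
# Toy harm-minimising maneuver choice for a self-driving car.
# 'pedestrian_weight' is the moral knob the articles argue about;
# all probabilities here are made up for illustration.

def expected_harm(option, pedestrian_weight=1.0):
    return (option["p_passenger_injury"]
            + pedestrian_weight * option["p_pedestrian_injury"])

options = [
    {"name": "brake_straight", "p_passenger_injury": 0.10, "p_pedestrian_injury": 0.30},
    {"name": "swerve_to_wall",  "p_passenger_injury": 0.40, "p_pedestrian_injury": 0.01},
]

for weight in (1.0, 2.0):  # 2.0 = "prioritise pedestrians"
    best = min(options, key=lambda o: expected_harm(o, weight))
    print(weight, best["name"])  # 1.0 -> brake_straight, 2.0 -> swerve_to_wall
```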
"Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
- Nagrij
-
Simple, Arcanist. Make it general; the self-driving car values human life, period. Then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't possible, it makes the risk assessment itself and chooses who to save.
Simple.
The real difference is that rather than having a smart system in the car - something that only fakes intelligence and reasoning capability - you use a true AI, something that actually can reason. The difference between Tavi and Kurenai, if you will.
www.patreon.com/Nagrij
If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
- Valentine
-
Nagrij wrote: Simple arcanist. Make it general; the self-driving car values human life, period. then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't, it makes the risk assessment itself, and chooses who to save.
Simple.
The real difference is rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can. The difference between tavi and kureani, if you will.
That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.
Don't Drick and Drive.
- Nagrij
-
Valentine wrote:
Nagrij wrote: Simple arcanist. Make it general; the self-driving car values human life, period. then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't, it makes the risk assessment itself, and chooses who to save.
Simple.
The real difference is rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can. The difference between tavi and kureani, if you will.
That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.
Absolutely.
www.patreon.com/Nagrij
If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
- Malady
-
Valentine wrote:
Nagrij wrote: Simple arcanist. Make it general; the self-driving car values human life, period. then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't, it makes the risk assessment itself, and chooses who to save.
Simple.
The real difference is rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can. The difference between tavi and kureani, if you will.
That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.
US opens investigation into Tesla after fatal crash:
www.bbc.com/news/technology-36680043
... Hmm... Not the thread I was looking for... Expect to see me in the Concepts Subforum soon...
- Nagrij
-
Malady wrote:
Valentine wrote:
Nagrij wrote: Simple arcanist. Make it general; the self-driving car values human life, period. then it can make the choice to properly save both the pedestrian and the passenger if at all possible. If it isn't, it makes the risk assessment itself, and chooses who to save.
Simple.
The real difference is rather than having a smart system in the car, something that only fakes intelligence and reasoning capability, you use a true AI - something that actually can. The difference between tavi and kureani, if you will.
That will make the lawsuits more fun after the first injuries and deaths involving self-driving cars.
US opens investigation into Tesla after fatal crash:
www.bbc.com/news/technology-36680043
... Hmm... Not the thread I was looking for... Expect to see me in the Concepts Subforum soon...
I know what you were going for there... and the Tesla is a 'smart system'. The crash showcases the current limitations of our technology in that area. True AI is still way beyond us.
www.patreon.com/Nagrij
If you like my writing, please consider helping me out, and see the rest of the tales I spin on Patreon.
- peter
-
In The Two Faces of Tomorrow, James P. Hogan had the protagonists turn an entire space habitat over to the computer, program it for survival, and then, through an escalating series of moves, work first to damage it and then to shut it down.
The most interesting part of the story for me was the little repair drones Hogan envisioned for the computer to use for repair and improvement.
www.goodreads.com/book/show/2220766.The_Two_Faces_of_Tomorrow
For something lighter, but also touching on the consequences and potential of AI, try out Freefall. It takes a little while to get there, but the biggest story arc, just coming to a conclusion, involves an attempt to basically murder several hundred million intelligent robots. Be prepared for a long read, though - over 2800 individual strips so far.
freefall.purrsia.com/ff100/fv00001.htm
- dbdatvic
-
elrodw wrote: If an AI system has a directive to survive, and no other, then there is nothing to prevent AI systems in some cases from interpreting humans as threats.
Even if there are other directives, things can go wonky. Ring, from Daniel Keys Moran's wonderful Continuing Time setting, started as an experimental AI whose two directives were "Survive" and "Protect America". Alas, the second word of directive #2 ended up left as something Ring had to figure out a definition for by itself...
--Dave, and then there was Trent Castanaveras, who solved the problem of inskin brain-cyberspace interface being of vastly different speeds and storage capacities on either side in an unexpected way
- MageOhki
-
AI's good, AI's bad
Several points to consider: We know of two in-universe cyberpaths (there might be more). We also know that it's somewhat possible to duplicate cyberpathy. We also know that there are metahumans of insanely high intelligence.
Kurenai, for all her power and 'capability', is nowhere near the smartest thing on the planet. Note Sam flat out telling her (CORRECTLY) that she can squish the little AI.
Even the largest AIs (i.e. the most 'smarts'/capability) aren't going to be 'unstoppable' by other AIs (who, if they can, will dogpile), or even by metahumans. They have weaknesses and flaws, and critically - I recommend to you the Bolo series, esp. David Weber's, or even Retief's interactions with them, as a point - AIs _are linear_ thinkers.
That's their flaw. They're just very very very FAST and multithread capable linear thinkers.
- Arcanist Lupus
-
Not quite an AI, but in Warbreaker a sword is magically animated with the instructions "Destroy evil". The problem being that, being a sword, it has no concept of what evil is, and no good way to figure it out.
dbdatvic wrote:
elrodw wrote: If an AI system has a directive to survive, and no other, then there is nothing to prevent AI systems in some cases from interpreting humans as threats.
Even if there are other directives, things can go wonky. Ring, from Daniel Keys Moran's wonderful Continuing Time setting, started as an experimental AI whose two directives were "Survive" and "Protect America". Alas, the second word of directive #2 ended up left as something Ring had to figure out a definition for by itself...
--Dave, and then there was Trent Castanaveras, who solved the problem of inskin brain-cyberspace interface being of vastly different speeds and storage capacities on either side in an unexpected way
"Shared pain is lessened; shared joy, increased — thus do we refute entropy." - Spider Robinson
- MageOhki
-
Depending on the AI, the list of 'authorized' (hardwired) users, or inputters, of said killcode can range from a half dozen or so (Kurenai and Rissei, Kurenai's sister, in Japan) to a considerably larger group.
And there's also tech to make sure the non-authorized can't trip it, as well as to keep the code secret.
One of the biggest headaches, actually, IS AI killcodes. Given that all "Treaty" nations (and I would say most, if not all, of the other nations) require it, military AIs can only have 'a set number' of authorized users, plus the killcode. Since that list is a REAL pain to change: what do you do with the AI after the persons on its killcode list are retired?
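(A minimal sketch of what that hardwired gate might look like, and why changing the list means re-provisioning the whole thing - all names and values below are my own invention, nothing from canon:)

```python
import hashlib
import hmac

# Toy killcode gate: the authorized list and the code's hash are "burned in"
# at build time (a frozen set standing in for hardware here), so changing who
# can trip it means re-provisioning the AI. All identifiers are hypothetical.

AUTHORIZED = frozenset({"operator_alpha", "operator_beta"})
KILLCODE_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()  # never stored in clear

def shutdown_request(user_id: str, code: str) -> bool:
    """Only a hardwired user presenting the right code can trip the killcode."""
    presented = hashlib.sha256(code.encode()).hexdigest()
    return user_id in AUTHORIZED and hmac.compare_digest(presented, KILLCODE_HASH)

print(shutdown_request("random_hacker", "correct horse battery staple"))   # False - not authorized
print(shutdown_request("operator_alpha", "wrong code"))                     # False - wrong code
print(shutdown_request("operator_alpha", "correct horse battery staple"))   # True
```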
>> There's also the fun thought: idiot richies buying AIs for their kids. Sadly, the AI would likely do an excellent job raising the kid...