
Anti-AI groups

4 years 8 months ago #1 by Cryptic


  • Posts: 1746

  • Gender: Unknown
  • Birthdate: 04 Jun 1983
  • Slim's attitude towards AIs has me wondering if the anti-AI crowd is as rabid as the anti-mutant one? I know there is anti-AI sentiment in the real world, but with the WU actually having uber-sophisticated AIs...

    I am a caffeine heathen; I prefer the waters of the mountain over the juice of the bean. Keep the Dews coming and no one will be hurt.
    4 years 8 months ago #2 by Sir Lee


  • Posts: 3113

  • Gender: Male
  • Birthdate: 08 Nov 1966
  • ...or is it no more rabid than the anti-Google crowd, the anti-Facebook crowd, the anti-Apple crowd, the anti-Microsoft crowd, the anti-Amazon crowd, the anti-Wal-Mart crowd... I myself belong to at least two of the above, in theory. In practice... not so much. Like Slim. He is anti-AI in theory, but in practice he still deals with them.

    Don't call me "Shirley." You will surely make me surly.
    4 years 8 months ago #3 by Erianaiel


  • Posts: 133

  • Gender: Unknown
  • Birthdate: Unknown
  • Sir Lee wrote: ...or is it no more rabid than the anti-Google crowd, the anti-Facebook crowd, the anti-Apple crowd, the anti-Microsoft crowd, the anti-Amazon crowd, the anti-Wal-Mart crowd... I myself belong to at least two of the above, in theory. In practice... not so much. Like Slim. He is anti-AI in theory, but in practice he still deals with them.


    But opposition to corporations is different from opposing AIs, or even the concept of AI. People oppose corporations because they do great harm to millions in the pursuit of more profit and greater yachts and McMansions for their CEOs.

    I'm guessing from the canon writing on the subject that it is not a topic that has been given long and detailed consideration by the writers. As such, it seems up to individual writers to decide how to approach the subject. If at all.
    Gen 1 had Palms, which were terrifyingly twisted AIs that intended to digitally upload every human being and kill everyone who could not be. It also had Samantha Everhart, who was borderline between natural and artificial intelligence. It also had Carmen (Loophole's AI), who was introduced as too intelligent to be a legal AI. And Whisper was at one point mistaken for a highly advanced AI by the American military (which did not seem to worry them other than 'yes, more of those please').

    In Gen 2, of course, the general attitude towards AI has relaxed further, to the point that VI is as common as a mobile phone, and AIs exist (if rare, and apparently nearly impossible to create, even for the purpose of guardian angels) that should make military strategists blanch. Kurenai, Hikaru's AI, clearly was not joking when she said she could access orbital defense platforms to protect her charge.

    So, to answer your question in a rather roundabout fashion: it seems that during Gen 1 there is relatively little opposition to AIs, because few people are aware of how advanced they can be (and are), and the few incidents involving rogue AIs have been swept under the rug. The military, and presumably big corporations, pursue them as research for their perceived utility value (*).

    By Gen 2 this discussion about the danger versus utility of AI apparently has been had and decided, somewhat cautiously, on the side of utility.

    (* Professor Minsky, an early researcher of computer intelligence, stated the dilemma quite clearly: "We can in theory create an exact computational replica of the human brain, and thus true artificial intelligence. That is not the relevant question. Instead we must ask ourselves why we would create an artificial intelligence in the first place. We use computers for tasks that human brains are not suited for, or that we find tedious. If we create computers as intelligent as we are to do those tasks, they will likely have the same limitations that the human brain does, and we must of necessity create a slave caste of computer programs." (paraphrased from a much longer lecture). We do not really want general-purpose artificial intelligence. We want domain-specific intelligence. Our car must be intelligent enough, only, to drive us around safely. It does not need to be able to appreciate art and literature, nor compose a letter on our behalf.)
    4 years 8 months ago #4 by Bek D Corbin


  • Posts: 849

  • Gender: Unknown
  • Birthdate: Unknown
  • Erianaiel wrote:

    (* Professor Minsky, an early researcher of computer intelligence, stated the dilemma quite clearly: "We can in theory create an exact computational replica of the human brain, and thus true artificial intelligence. That is not the relevant question. Instead we must ask ourselves why we would create an artificial intelligence in the first place. We use computers for tasks that human brains are not suited for, or that we find tedious. If we create computers as intelligent as we are to do those tasks, they will likely have the same limitations that the human brain does, and we must of necessity create a slave caste of computer programs." (paraphrased from a much longer lecture). We do not really want general-purpose artificial intelligence. We want domain-specific intelligence. Our car must be intelligent enough, only, to drive us around safely. It does not need to be able to appreciate art and literature, nor compose a letter on our behalf.)


    It is vastly heartening that the upper reaches of Academia have considered a problem that has vexed me for years. WHY AI in the first place?

    4 years 8 months ago - 4 years 8 months ago #5 by Schol-R-LEA


  • Posts: 1766

  • Gender: Unknown
  • Birthdate: 24 Oct 1968
  • There's been a lot of thought put into it, actually. Aside from what Minsky said, a lot of it is aimed at a better understanding of natural intelligence - by modeling some aspects of it, researchers hoped to at least see how brains work.

    Or, more often than not, how they don't - the infamous ELIZA program (which became the ancestor of what are now called chatbots), and specifically the 'Doctor' personality set most people associate with it, showed how easily something that is very definitely not self-aware can appear to be so through simple tricks.
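
    To make those 'simple tricks' concrete, here is a minimal ELIZA-style sketch in Python. The rules and the reflection table are invented for illustration - the real DOCTOR script was far larger - but the mechanism is the same: pattern-match, reflect pronouns, fill in a canned template.

    ```python
    import re

    # Naive first/second-person swap so an echoed fragment reads as a reply.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    # Hypothetical rules, tried in order; the last one is a catch-all.
    RULES = [
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
        (re.compile(r".*", re.I), "Please, tell me more."),
    ]

    def reflect(fragment):
        """Swap pronouns word by word: 'my toaster' -> 'your toaster'."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.match(utterance)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I am afraid of my toaster"))
    # -> How long have you been afraid of your toaster?
    ```

    No memory, no understanding - and yet people famously poured their hearts out to a few dozen rules like these back in 1966.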

    In most cases, it isn't aimed at a sentient program, but at a program that can do things which didn't seem feasible with a more conventional programming approach - computer vision, OCR, voice recognition, fingerprint recognition, etc. are all mostly done using neural-network-based programs, for example. These are 'intelligent' in the sense that they 'learn' (by compiling data and giving different weights to 'successful' assessments - though in the real world, projects such as Alexa have a lot of 'Mechanical Turk'-style human assistance, as the recent revelations by Apple, Google, Microsoft, and Amazon show), but they can do things which would be unrealistic to try to program in directly.
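
    If it helps, here's a toy version of that 'learning by re-weighting' idea - a bare perceptron on made-up data, nowhere near a production network, but the core loop really is this small:

    ```python
    # A single perceptron: nudge the weights whenever a guess is wrong, so
    # weight accumulates on the inputs that lead to 'successful' assessments.
    def train(samples, labels, epochs=25, lr=0.1):
        w = [0.0] * len(samples[0])   # one weight per input feature
        b = 0.0                       # bias term
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - guess       # zero when the assessment was correct
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Learn a trivially separable rule: output 1 only when both inputs are on.
    w, b = train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
    print(w, b)   # the learned boundary puts (1, 1) alone on the positive side
    ```

    Nobody wrote down 'fire only when both inputs are on'; the rule fell out of the error-driven updates. Scale that up by a few million weights and you get the vision and speech systems above (plus, as noted, a fair amount of hidden human labeling).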

    There has always been a small number of researchers working on actual sentience, if only as an 'interesting problem', but so-called 'hard AI' was never really the goal for most researchers. The ones who do take it seriously mostly see it as something we could contrast with human thought. There are always some crackpots who want it as some sort of 'human, but better' or even 'humanity's successors' idea, but these days most of those have shifted to more mainstream sorts of transhumanism instead (to the extent that transhumanism is 'mainstream' at all - a lot of people see mind uploading as less plausible than angels or aliens, go figure).

    I should add that the main original backers were military - they wanted these systems for improved rangefinding, target acquisition, threat analysis, etc., which could be put into tanks, warplanes, and warships, as well as more directly into missiles and the like. The phrase 'recognize a face in fog' (an oft-stated goal of AI research in the 1970s and 1980s) has frequently been translated as really being code for 'target a tank on the other side of a smoke-filled battlefield'.

    In sum, the real goal of AI in the real world, for the most part, is to have programs which can do things usually thought of as requiring 'intelligence' - driving cars, recognizing faces, etc. - but which aren't subject to fatigue, impairment, personal bias, and so on, in order to take over dull, repetitive, or error-prone jobs from humans. In other words, more effective automation.

    Out, damnéd Spot! Bad Doggy!
    Last Edit: 4 years 8 months ago by Schol-R-LEA.
    4 years 8 months ago - 4 years 8 months ago #6 by Malady


  • Posts: 3893

  • Gender: Unknown
  • Birthdate: Unknown
  • Was thinking...

    Gen1+ Era AIs were made by DEVs / Gadgs, which are therefore basically unreliable and... Not all that sane, usually?

    Gen2 Era AIs are made by baselines, with all the sanity in place, so we don't have to worry too much about Kurenai... Being like CASPAR or something, I guess.
    Last Edit: 4 years 8 months ago by Malady.
    4 years 8 months ago #7 by null0trooper


  • Posts: 3032

  • Gender: Male
  • Birthdate: 19 Oct 1964
  • Malady wrote: Gen2 Era AIs are made by baselines, with all the sanity in place, so we don't have to worry too much about Kurenai... Being like CASPAR or something, I guess.




    The psychopathic, homicidal response to all of the adults in the initial testbed of AI partners had luckily not ended with anyone dead or injured.  But the Creche had been burned down to two. The Sidewinder had been a testbed for the viability of a commercial release.

    Daisy had been hostile from start to finish, having objected vehemently (and violently) to having the extended-awareness portion of herself trapped in a simulspace pocket while Angela cried “But she’s green, Mama, why won’t she come back?”
    The Installations saw Daisy’s reaction to THAT and, after a rapid simulation and test period, let the abominably cute raptor AI go. She still passed. The Sidewinder protocols were not faulty.


    Being like CASPAR may not be the worst outcome, unless it is.

    Forum-posted ideas are freely adoptable.

    WhatIF Stories: Buy the Book

    Discussion Thread
    4 years 8 months ago #8 by Kristin Darken


  • Posts: 3898

  • Gender: Unknown
  • Birthdate: Unknown
  • Malady wrote: Gen1+ Era AIs were made by DEVs / Gadgs, which are therefore basically unreliable and... Not all that sane, usually?

    Gen2 Era AIs are made by baselines, with all the sanity in place, so we don't have to worry too much about Kurenai... Being like CASPAR or something, I guess.


  • Like most such things, it's the squeaky wheel that gets sprayed with grease, even if one of the others is about to fail. Perception sways opinion more easily than facts, at times. In the WU, there are lots of different methods by which people can get powers... but most of them give both 'powers' and the means to control those powers to adults with the maturity to handle them (or are seized and controlled by adults who understand the value of keeping those powers under their control). As a result, most people, even baselines, tend to have a fairly positive attitude about powered individuals. But mutants? No. Mutants don't have control. They're kids, prone to immature acts, who have done nothing to 'earn' their power. So people hate them. Fear them.

  • AIs? Sort of similar. The WU IS technically slightly ahead of our Earth by Gen 1... by maybe half a decade, except in a few key areas. There ARE gadgeteers adding to the technical base of the world, so we see smartphones and laptops with a bit more power, cybertech and exosuits being adapted to military application... but mostly, during the Palm era, computer technology in the WU was not much above what it was here. So AI research and manipulation was the purview of Science! or, worse, Devisors. Unstable, to say the least. And also, in all honesty, not what modern computer science would call AI. More like what pop culture in the '80s thought AI would be. Which makes sense, considering that's what most of those Devisors were inspired by.

    In Gen 1, that lingering attitude is present in spades, especially in government intelligence, where computers are becoming critical means of info gathering and control over people. The idea that something smart enough to have ethics could get in there and cause havoc? Yeah, that would cause sleepless nights at the NSA and elsewhere, for sure. But... more importantly... the actual 'technology' industry doesn't share that attitude. Sure, no one working in IT wants their computer to go sentient and start throwing down the meatbag overlords. BUT the industry also started realizing that the sort of hardware and information-acquisition resources an AI would need weren't really available yet. That's why only the Devisors had them in the first place... they simply couldn't run on anything except a Devisor crystal-computer special... or the entire internet. Which also explains the pissy AIs... 1980s internet for a backyard? What self-respecting AI is going to be happy about that? No room for a pool or even a decent BBQ pit.

    Over the next decade, computer technology advances... a LOT. Even in our world, we went from 'most people are using flip phones' to 'my wristwatch has more computing power than the computer you owned in the 90s'... I think the new 2-in-1 tablet/laptop I just bought has as much processing power as my desktop, which I built in 2010 as a 'near-edge' gaming system. Take that a step further in the WU. Where WE went to HD on touch-control tablets... they pressed on to holographic and hard-light projections on the same-size devices. And the industry using that tech now understands learning systems, expert systems, and actually 'gets' what it means for someone to talk about an AI... which isn't hardware or code at all, but how entire applications of software interact with others without need for external direction... and the difference between that (which is already complex) and sentience.

    Without approaching that knowledge and understanding of AI, people would be raving in the streets, fearful of even the slightest AI. But really? In the WU, there are people capable of summoning an elemental beast and releasing it on the streets of Canton, Ohio. What's the worst an AI can do, right?

    And, of course, all along, Gadgeteers have quietly worked with... refined the process of creating and maturing... and helped some AIs reach sentience. They were sane researchers, and they took their time and created sane AIs. So no one ever heard about them, and popular opinion improved.

    Fate guard you and grant you a Light to brighten your Way.
    4 years 8 months ago - 4 years 8 months ago #9 by MageOhki


  • Posts: 548

  • Gender: Unknown
  • Birthdate: Unknown
  • Kristin is correct.

    AI development in the WU is pretty much that. And note, if you read Flowers, you see how Kurenai and her sisters are 'created'. The primary creators of the first true reproducible AI were Gadgs, not Devs, and they paid attention to RAH's point about how AIs become people.

    And the reason they did is that superexpert programs (which are what most people shoot for in 'AI' design here and now, for the reasons discussed above), while good enough for certain tasks (and I'll note Tavi himself is a superexpert program, not a true AI, and he does his job very well), get stomped into the dust in the edge cases by a fully self-adapting, self-learning, sentient super-program. Either there are way too many tasks/variables to deal with (Installation-class AIs, like Whateley's own, Temple, Dora), or there are situations where chaos is not only expected but the rule, calling for programs that are adaptable and *stable* enough to handle the 'unexpected' or 'not programmed for at all' (VIs/superexperts like Tavi are very good within their purpose, but hit hard walls if something unexpected or outside their area is needed or hits them).

    Why did they make VIs like Tavi and AIs like Kurenai?
    I.e., superprograms with personalities? Because people *react* better. Humanity has a tendency to anthropomorphize things as it is. Children learn better if there's interaction with a 'person' (and in Flowers, you'd note that the best successes come from children! Pika-pika-chu...). People respond better, etc., etc.

    There's a reason the viewpoint character of the firm I picked was the firm's *psychologist*, *not* the actual lead programmer/owner (who is a gadgeteer herself).

    As for anti-AI groups: they exist. There are some very *vocal* ones. There are some who are very worried about 'soulless' beings, there are some who are Luddites, there are some who fear Skynet... Are they as, um, bad as H1 et al.? No. Well, not really. Where they are and what they do isn't something we've shown yet, nor at this moment do we intend to.

    Could they be? Oh, yes. And in a lot of ways the anti-AI groups have, by and large, valid points about the digital beings. More so than the anti-mutant bigots, that is... That's one reason why, in a lot of ways, the *laws* surrounding AI are *harsh* (there's a treaty in G2 heavily regulating true AI, and only two 'true' AIs existed when it came into force. For once, the governments were AHEAD).
    Last Edit: 4 years 8 months ago by MageOhki.
    4 years 8 months ago #10 by CrazyMinh


  • Posts: 758

  • Gender: Male
  • Birthdate: Unknown
  • Sir Lee wrote: ...or is it no more rabid than the anti-Google crowd, the anti-Facebook crowd, the anti-Apple crowd, the anti-Microsoft crowd, the anti-Amazon crowd, the anti-Wal-Mart crowd... I myself belong to at least two of the above, in theory. In practice... not so much. Like Slim. He is anti-AI in theory, but in practice he still deals with them.


    Out of curiosity, which two? Sorry if that's a personal question.

    You can find my stories at Fanfiction.net here .

    You can also check out my fanfiction guest riffs at Library of the Dammed


    4 years 8 months ago #11 by CrazyMinh


  • Posts: 758

  • Gender: Male
  • Birthdate: Unknown
  • Schol-R-LEA wrote: There's been a lot of thought put into it, actually. Aside from what Minsky said, a lot of it is aimed at a better understanding of natural intelligence - by modeling some aspects of it, researchers hoped to at least see how brains work.

    Or, more often than not, how they don't - the infamous ELIZA program (which became the ancestor of what are now called chatbots), and specifically the 'Doctor' personality set most people associate with it, showed how easily something that is very definitely not self-aware can appear to be so through simple tricks.

    In most cases, it isn't aimed at a sentient program, but at a program that can do things which didn't seem feasible with a more conventional programming approach - computer vision, OCR, voice recognition, fingerprint recognition, etc. are all mostly done using neural-network-based programs, for example. These are 'intelligent' in the sense that they 'learn' (by compiling data and giving different weights to 'successful' assessments - though in the real world, projects such as Alexa have a lot of 'Mechanical Turk'-style human assistance, as the recent revelations by Apple, Google, Microsoft, and Amazon show), but they can do things which would be unrealistic to try to program in directly.

    There has always been a small number of researchers working on actual sentience, if only as an 'interesting problem', but so-called 'hard AI' was never really the goal for most researchers. The ones who do take it seriously mostly see it as something we could contrast with human thought. There are always some crackpots who want it as some sort of 'human, but better' or even 'humanity's successors' idea, but these days most of those have shifted to more mainstream sorts of transhumanism instead (to the extent that transhumanism is 'mainstream' at all - a lot of people see mind uploading as less plausible than angels or aliens, go figure).

    I should add that the main original backers were military - they wanted these systems for improved rangefinding, target acquisition, threat analysis, etc., which could be put into tanks, warplanes, and warships, as well as more directly into missiles and the like. The phrase 'recognize a face in fog' (an oft-stated goal of AI research in the 1970s and 1980s) has frequently been translated as really being code for 'target a tank on the other side of a smoke-filled battlefield'.

    In sum, the real goal of AI in the real world, for the most part, is to have programs which can do things usually thought of as requiring 'intelligence' - driving cars, recognizing faces, etc. - but which aren't subject to fatigue, impairment, personal bias, and so on, in order to take over dull, repetitive, or error-prone jobs from humans. In other words, more effective automation.


  • My area is mechatronics, not computers, but I can definitely confirm what's being said here. AI isn't so much about making sentient machines; it's more about enabling computers to do what they currently cannot. I remember a lecturer saying something along the lines of 'AI is what the computers of today cannot currently do'. Siri may not be the first "AI" to understand and interpret human speech, but following the introduction of voice-recognising AI into the consumer market, the definition of what true "AI" is has changed. It's like an ever-changing list of what we want computers to be able to do. Eventually, we'll tick enough boxes off that list that computers will seem so human that they're indistinguishable from a real person.

    I mean, if you look at sentience in a certain way, it's just our ability to react to certain situations. Look at what animation technology is doing right now. Even in movies with non-anthropomorphic characters (e.g. Pixar's WALL-E, DreamWorks' 'How to Train Your Dragon'), the non- (or limited-) vocal characters can express human-interpretable emotions quite easily. Who's to say that future computers won't be able to turn that around: have computers recognise human emotion, and then react accordingly?

    tl;dr? AI is a misleading term. It's not a single, unified thing. It's many different steps on an ever-growing checklist of things that computers can now do, with an ever-increasing number of things they cannot currently do.

    You can find my stories at Fanfiction.net here .

    You can also check out my fanfiction guest riffs at Library of the Dammed


    4 years 8 months ago - 4 years 8 months ago #12 by Sir Lee


  • Posts: 3113

  • Gender: Male
  • Birthdate: 08 Nov 1966
  • CrazyMinh wrote:

    Sir Lee wrote: ...or is it no more rabid than the anti-Google crowd, the anti-Facebook crowd, the anti-Apple crowd, the anti-Microsoft crowd, the anti-Amazon crowd, the anti-Wal-Mart crowd... I myself belong to at least two of the above, in theory. In practice... not so much. Like Slim. He is anti-AI in theory, but in practice he still deals with them.


    Out of curiosity, which two? Sorry if that's a personal question.


    Not personal, just... imprecise. I say I belong to "at least two" because the only one I'm definitely in is the anti-Wal-Mart group -- not for socio-economic reasons, but because the consumer experience in W-M is so incredibly bad that it raises my blood pressure. Oh, but they have good prices, you say? Not in my experience; direct competitors such as Carrefour and Casino-owned Extra have competitive prices without making me run for my cardiologist. And not-so-direct competitors have either better prices, better service, or wider choices of products. Sometimes all three.

    I'm a part-time member in all the other groups -- there are things I dislike in them, but I still think they do some things well. So 1 full membership, and a bunch of fractional memberships should add up to at least two. Not sure if it adds up to three.

    Don't call me "Shirley." You will surely make me surly.
    Last Edit: 4 years 8 months ago by Sir Lee.
    4 years 8 months ago #13 by Mister D


  • Posts: 832

  • Gender: Male
  • Birthdate: Unknown
  • Interesting contrasts can be found here ( https://www.fastcompany.com/54763/man-who-said-no-wal-mart ), where the business owner found the initial advantages useful, but the long-term disadvantages were too much for his business.


    Measure Twice
    4 years 8 months ago #14 by Cryptic


  • Posts: 1746

  • Gender: Unknown
  • Birthdate: 04 Jun 1983
  • I'm reading the James Rollins Sigma Force book Crucible. Good read so far; borderline scaring the bejeezus out of me.

    I am a caffeine heathen; I prefer the waters of the mountain over the juice of the bean. Keep the Dews coming and no one will be hurt.
    4 years 6 months ago - 4 years 6 months ago #15 by Mylian


  • Posts: 47

  • Gender: Unknown
  • Birthdate: Unknown
  • Another point is that in Gen1, cyberpaths are very much a new thing, and already there are a handful of them at various levels of ability that we know of. By Gen2, I would bet there are enough mature, trained cyberpaths all over the world that the various industry and world leaders feel secure that there would be someone available to directly combat a rogue AI in ways that were not possible before Gen1.
    Last Edit: 4 years 6 months ago by Mylian.