Question Gen 2 compared to the present day (our universe)
- CrazyMinh
-
Topic Author
Anyway, the true purpose of this thread is to try and identify some of the things that are commonplace today but do not exist in the Gen-2 version of the modern day. I've already been told that Netflix exists. But what about other things??? Does the PS4 exist, or have they gone several generations ahead??? On that note, did the supposed GEO incident damage the console industry much, beyond the shift to mobile formats which is already happening? Besides the PS4, do Facebook and social media have as much impact as they have had since 2005, the year after Facebook's release? Did Facebook launch in the WU, for that matter? What about Instagram, launched in 2010? Or Twitter, launched March 2006??? Will the characters of the WU 1st gen experience Twitter as a brand new thing???
Other questions include:
- Did Alexa launch with success, or was it superseded by the personal AIs that people carry??? Siri certainly exists (or existed), due to the comments of Tiff Lock to Hikaru about Kurenai in 'I don't think we're in Kansas Anymore' part 1:
Hikaru shot her companion a look. "I'm an exemplar, but if it helps… Kurenai, please wake up."
Tiff barely glanced at the little hologram that appeared over Hikaru's wrist; her question was idle. "A Pepper, or a Siri?"
Hikaru smiled, knowing what was coming. "A Cortana."
- Did Marvel create its Cinematic Universe??? Admittedly, since Iron Man isn't mentioned at all yet in Gen 1, I would assume that it was pushed back a few years, perhaps by Ayla's takeover of Marvel. Before anyone mentions that Marvel was failing in the WU before Ayla purchased it, need I remind you that it really was in trouble in the real world. I assume that was the inspiration for Ayla's takeover, in lieu of Disney.
- Do people like Elon Musk exist??? Does SpaceX exist??? Does Tesla exist???
- Are tech giants such as Google, Apple and Samsung being pushed out of the market by devisor tech, or have they hired their own mutants to work on new, innovative technology? (And by devisor, I mean gadgeteer. Who wants a phone that is as crazy as what devisors regularly put out? Oh god. I now have an image of a phone designed by Bunny: an iPhone in the shape of an egg, covered in glitter, and... as a tech geek and a proud robotics engineer, I feel dirty after thinking about an abomination such as that. Tech purity forever!!!)
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- E. E. Nalley
-
I would rather be exposed to the inconveniences attending too much liberty than to those attending too small a degree of it.
Thomas Jefferson, to Archibald Stuart, 1791
- Yolandria
-
Mistress of the shelter for lost and redeemable Woobies!
- lighttech
-
Gary Coleman ran for governor of California and served from 2003 to 2011, beating the likes of Arnold Schwarzenegger, who lost when the story of the bastard son he'd had with the maid came out! (Not to mention that Maria Shriver shot him a week after the story broke!) Then, after getting California on budget and on track, he ran a massive building campaign that built major projects in Cali--high-speed rail at over 300 MPH north to south--and he led improvements to the west-to-east lines out of both LA and San Fran!
Then in 2012 he ran to become the nation's first black president of the USA and won! But his life was cut short when he was assassinated by Todd Bridges on victory night in November 2016, during his re-election run! The shooting was covered by every network; Gary's dying words were "Why'd you shoot me, Willis!?" and then he passed.
His vice president took over from there... Dwayne "The Rock" Johnson.
Here is an incoming snippet from a fanfic I am posting soonish:
But I did not even get a word in, because Rehanna shouted up at us all in the kitchen about what was on the TV right now: "I know that I, or rather we, have not really been following politics... but who is the short guy on TV?"
Bill strolled over to the railing overlooking the main living room one split level below, and knew instantly by the voice of the person in question, before he even checked the TV, "Ohh, that is an old actor, a kid that did a sitcom in the 80's. He ran for Governor of California after they had a big recall on the governor of the time, and the field of candidates was just nuts. Ranging from a porn actress to the big movie actor from the robot pics, before his wife shot him for having a kid with the maid... 'I'll be back!' was his famous movie line. Then this guy won after the big guy got shot, dead as Lincoln in the voters' minds... 'What you electing, Willis!' was his motto once in office, Rehanna."
"I get Reagan winning the presidency, but a short ex-actor as governor of California? That is nuts!" I said, glancing at the screen myself even though I could mentally see it through Rehanna's eyes.
Rehanna had to let out a giggle when the small-statured black man walked down the carpet and hopped onto a large polished wood box behind a podium before taking his spot at the arrayed mikes, "My fellow Californians, the new section of high-speed rail opens today between San Fran and San Diego... ohh, and it stops in that small town L.A. for the locals!" he laughed.
The press event went on to show the rest of the high-speed rail integrating with the existing monorail of LA and its connections all the way down south to San Diego, and now the whole thing had an added leg reaching north to San Fran that moved along at a brisk 400 miles per hour at top speed, but would spend most of its time on track at 300. The next parts would hook up the whole system leading east out of LA, San Fran and then San Diego, going over to the east coast at the end. Those lines could hit over 500 on a good day on the non-stop runs... or so it was planned!
"Funny how the Lost Wages track is being laid first and will be done in only a few months? I'd bet a few mega-casinos paid for the whole thing to suck cash out of the LA folks!" I laughed at the map the small black man was now pointing to.
what future Willis!
Part of the WA Drow clan/ collective
Author of Vantier and Shadowsblade on Bigcloset
- CrazyMinh
-
Topic Author
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- E. E. Nalley
-
CrazyMinh wrote: [sigh]...I have read the stories. Why in the name of Cthulu does everyone keep insinuating I haven't read the f**king stories.
I dunno, maybe because you keep asking questions that are all answered by reading the stories? Because if you had read The Island of Dr. DNA you would know that Nick and Heather Brennan own a Tesla Islander electric car. The story even goes into some depth about the hows and whys of that vehicle. Or if you had read The Case of the Poisonous Patent you would know that Elaine won her Apple iPhone before it was generally released to the public by the logistics program she wrote. If you had read The Road To Whateley you would have seen the Brennan boys playing on a Goodkind Funbox. Or that AI assistants became popularized on the Gizmatic Communicator series of phones that predate the iPhone, or that Virgin Galactic and SpaceX are doing booming business to the Hilton Orbital Casino or the permanent Lunar installation. Or any of the countless references to Google, YouTube and the other tech and social media giants.
So, yeah, that might have something to do with it.
Oh, and if you're going to grammar NAZI, if you begin a sentence with the adverb 'Why' you should end that sentence with a question mark, not a period. Just BTW.
I would rather be exposed to the inconveniences attending too much liberty than to those attending too small a degree of it.
Thomas Jefferson, to Archibald Stuart, 1791
- Valentine
-
The WU MCU may have focused on the Defenders instead of the Avengers.
In general, if it exists in the real world, it probably exists in the WU. It might not be identical, but it fills the same niche. A PS4 might have a processor that is a generation or two faster, but otherwise it's a PS4.
Edit: PS Since Iron Man came out in May of 2008 and Gen 1 is still in Oct of 2007, there is little reason to mention it.
Don't Drick and Drive.
- CrazyMinh
-
Topic Author
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- CrazyMinh
-
Topic Author
E. E. Nalley wrote:
CrazyMinh wrote: [sigh]...I have read the stories. Why in the name of Cthulu does everyone keep insinuating I haven't read the f**king stories.
I dunno, maybe because you keep asking questions that are all answered by reading the stories? Because if you had read The Island of Dr. DNA you would know that Nick and Heather Brennan own a Tesla Islander electric car. The story even goes into some depth about the hows and whys of that vehicle. Or if you had read The Case of the Poisonous Patent you would know that Elaine won her Apple iPhone before it was generally released to the public by the logistics program she wrote. If you had read The Road To Whateley you would have seen the Brennan boys playing on a Goodkind Funbox. Or that AI assistants became popularized on the Gizmatic Communicator series of phones that predate the iPhone, or that Virgin Galactic and SpaceX are doing booming business to the Hilton Orbital Casino or the permanent Lunar installation. Or any of the countless references to Google, YouTube and the other tech and social media giants.
So, yeah, that might have something to do with it.
Oh, and if you're going to grammar NAZI, if you begin a sentence with the adverb 'Why' you should end that sentence with a question mark, not a period. Just BTW.
Wow dude. It's like I don't want to know more than what the story tells me. Funny that, it's called ASKING THE GODDAM AUTHORS!!!
Actually, sorry dude. That was harsh. Look, I have read the stories, I just want to know about this in more detail. Beyond the references, I want to know what bits of the timeline differ in specific ways. I could easily quote some of those references (can't seem to find The Island of Dr. DNA, though I did try after seeing it referenced here, and I have read 'The Road to Whateley'), but I was asking out of interest, not ignorance. All I want to know is more specifics than what is already in the stories. You can understand that, right???
Also, I'm really not a grammar nazi. I was a bit pissed with how every single bloody person on this site seems to think I haven't read the stories, and I was being a bit...petty.
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- CrazyMinh
-
Topic Author
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- Sir Lee
-
...okay, it has been mentioned multiple times. Like, for instance, in "Ayla and the Birthday Brawl":
I nodded, “Right. But I’m surrounded by hundreds of teenagers who are. I’m ignoring the computer devisers on that, of course. But look around you. What’s everyone doing with their computers?”
“Facebook.”
“YouTube.”
“MySpace.”
“Friendster.”
“God, Friendster is so last month!”
Or this one in "Saks and Violence":
The Charge reached into his jacket and pulled out a thick envelope. He handed me the envelope. “YOU handle that. This is with the understanding that this night’s developments won’t turn up on Twitter or Facebook or MySpace?”
(Klono, I love Copernic...)
As for Marvel movies, the joke is that instead of Ang Lee's "Hulk", the rights were sold to the Merchant-Ivory guys who apparently did it as an Edwardian-era "Jekyll and Hyde" thing.
Lots of real-world businesses exist, with the minor difference that they are owned by the Goodkinds. A few are not as dominant because they have to face Goodkind competition -- like Wal*Mart and G-Mart, Kellogg's and Goodios, Dell, HP and Goodkind Computers... you get the idea.
There's a major, non-Apple, non-Windows computer maker that is focused on high-power systems: NEXT. Renae never explained the coincidence of the name with Steve Jobs' former company (if memory serves, she was unaware of the existence of NeXT Computers), but I once did a short fanfic (you can find it in the Micro-Scenes thread) theorizing that it came from the remains of Steve Jobs' NeXT, Amiga Computers and BeOS. But that's just fanfic.
- elrodw
-
"It was just a dream," Langley assured her. "Lie back down and I'll help you relax and get back to sleep."
Never give up, Never surrender! Captain Peter Quincy Taggert
- E!
-
elrodw wrote: At the end of the Gen 2 storyline, Mrs. Carson awakes in bed beside Langley Paulson and sits up. He asks what's wrong. She says, "I just had the weirdest dream. I got trapped in another dimension, got pregnant, and while I was gone, the school went to hell. Shine was a bartender in The Village! As if! And there was a crew that was every bit as bad as the Kimbas, or worse!" She shook her head. "God forbid that something like that were REALLY to happen."
"It was just a dream," Langley assured her. "Lie back down and I'll help you relax and get back to sleep."
So what was Carson's mind trying to tell her then?
- Rose Bunny
-
Sir Lee wrote:
Lots of real-world businesses exist, with the minor difference that they are owned by the Goodkinds. A few are not as dominant because they have to face Goodkind competition -- like Wal*Mart and G-Mart, Kellogg's and Goodios, Dell, HP and Goodkind Computers... you get the idea.
As a Minnesotan I take offense to that, Goodie-Os are clearly a take on Cheerios, which are from General Mills.
High-Priestess of the Order of Spirit-Chan
- Valentine
-
Rose Bunny wrote:
Sir Lee wrote:
Lots of real-world businesses exist, with the minor difference that they are owned by the Goodkinds. A few are not as dominant because they have to face Goodkind competition -- like Wal*Mart and G-Mart, Kellogg's and Goodios, Dell, HP and Goodkind Computers... you get the idea.
As a Minnesotan I take offense to that, Goodie-Os are clearly a take on Cheerios, which are from General Mills.
I got from the stories that General Mills is Goodkind Mills in the WU.
Don't Drick and Drive.
- Sir Lee
-
- Kristin Darken
-
CrazyMinh wrote: Wow dude. It's like I don't want to know more than what the story tells me. Funny that, it's called ASKING THE GODDAM AUTHORS!!!
Ah... but why assume that 'we' know? We've got some amazing people over here, but you aren't the first reader to misunderstand how detail works in a shared universe. We don't have books full of errata that isn't in the canon stories. Our planning and reference material tends more toward big-picture frameworks: what triggering events result in what breaches in the status quo, which characters push, which pull, which change directions. From those plans, stories are written by turning a paragraph's worth of written detail into a multiple-episode arc in which period technology, cultural references, and all sorts of other details might appear.
But the idea that those things exist in a master list? Hah. No. Sorry. The first author who writes about something like that is typically the one who defines it... and unless others remember something predating it that conflicts during the feedback phases (where we think about continuity issues), or someone has something planned for a future story that would be spoiled if the current story revealed said element now... then we go with it. It's why we make a big deal out of telling people who want more detail for their near-canon WhatIF writing to go read the canon stories. We're not brushing you off, we're telling you that the detail you need is out there. We do some random shit from time to time... but most of the detail we introduce is a matter of taking what we've already created and interpolating... or at worst, extrapolating.
And if you 'press' the issue and insist on an answer... all we're going to do is make something up. We're not going to a master encyclopedia and digging out the appropriate references for you... we're just looking at past details, considering the relative tech, and estimating from what we know of the existing body of canon lit. The better you know the existing stories, the better you should be able to estimate the tech, social changes, and even the world view of stories that are not yet out. And we're resistant to that (and resistant to creating the 'source book of all things Whateley') for a couple of important reasons. Mainly, those reasons revolve around this being a story universe, not an RPG. Every detail that we give to you as disconnected exposition is a canon detail that can get lost for not having been part of the canon literature. And every detail that we give to you outside a story puts a hard limit on something that we might try to do in a story in the future... and we want to leave as much opening/flexibility for actually writing the stories as we can.
Fate guard you and grant you a Light to brighten your Way.
- Valentine
-
Don't Drick and Drive.
- MageOhki
-
Do we have a list of 'differences', tech-wise, for G2? Not saying.
BUT those differences, assuming we do (mind you, assuming), are by and large...
general details. Safe AIs and VIs exist.
Who makes them? Until they're in a story, nope.
- elrodw
-
Kristin Darken wrote:
CrazyMinh wrote: Wow dude. It's like I don't want to know more than what the story tells me. Funny that, it's called ASKING THE GODDAM AUTHORS!!!
Ah... but why assume that 'we' know? .... Our planning and reference material tends more toward big-picture frameworks: what triggering events result in what breaches in the status quo, which characters push, which pull, which change directions. From those plans, stories are written by turning a paragraph's worth of written detail into a multiple-episode arc in which period technology, cultural references, and all sorts of other details might appear.
But the idea that those things exist in a master list? Hah. No. Sorry. The first author who writes about something like that is typically the one who defines it... and unless others remember something predating it that conflicts during the feedback phases .....
Example - In the Kayda and Fey stories, we had a situation where humans and Sidhe had coexisted. And yet, the Sidhe were supposed to have existed at least tens of thousands of years ago. Impossible to reconcile, right? Well I got an idea and ran it past the cabal. Lo and behold, when the Bastard shattered the world, he shattered the time-space continuum, and it came back together as mixed-up fragments. (I think it's Kayda - Medicine Girl). Anyway, it explains - without breaking canon - how Sidhe and humans co-existed.
That's the kind of thing Kristin is talking about. I needed tension between Fey/Aunghadhail and Kayda/Wakan Tanka. Ergo, who helped whom in battling the bastard. So I had to reconcile the timeline. That led to the 'shattered timeline' concept. Neatly solved, doesn't cause new problems in the current-day universe, and onward we can write. But that type of thing was NOT in the reference material.
Never give up, Never surrender! Captain Peter Quincy Taggert
- Astrodragon
-
See what happens when you let the History Monks take time off for a vacation in Acapulco?
I love watching their innocent little faces smiling happily as they trip gaily down the garden path, before finding the pit with the rusty spikes.
- null0trooper
-
Astrodragon wrote: See what happens when you let the History Monks take time off for a vacation in Acapulco?
They bring back a Stalker AI?
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- Cryptic
-
Is non-powered terrorism, from the nut who stabs children at a party, to the one who mows down a crowd with a car, to the one blowing stuff up, quite the issue it is in the real world, or have supers prevented that somewhat?
I am a caffeine heathen; I prefer the waters of the mountain over the juice of the bean. Keep the Dews coming and no one will be hurt.
- Sir Lee
-
So, my take: on one hand, supers may have culled the numbers a bit... but OTOH, the ones who remained are more dangerous.
- Cryptic
-
I am a caffeine heathen; I prefer the waters of the mountain over the juice of the bean. Keep the Dews coming and no one will be hurt.
- Bek D Corbin
-
Personally, I'm amazed that no one's tried to hack the global satellite communications network. Imagine the havoc that would be caused if the world's communications, electronic funds and GPS systems went down for a few days. 72 hours without communications would cost trillions of dollars, and God alone knows how many lives.
- Sir Lee
-
Or the Iridium and similar satellite phones... but really, I doubt that many people depend on sat phones. The military use them, but they have alternative comm channels -- less convenient, sure, but more reliable.
- Anne
-
Adopt my story: here
Nowhereville discussion
- Astrodragon
-
Anne wrote: Yep somewhere I'm sure the US army has clay tablets, and they can always send several runners. Slow, but then the message ought to get through. Though they ought to know that it may not even then!
You do realise the military still use couriers?
Because you know if he arrives the data hasn't been compromised. If he doesn't, you assume it has. At least you know.
I love watching their innocent little faces smiling happily as they trip gaily down the garden path, before finding the pit with the rusty spikes.
- Anne
-
Astrodragon wrote:
Anne wrote: Yep somewhere I'm sure the US army has clay tablets, and they can always send several runners. Slow, but then the message ought to get through. Though they ought to know that it may not even then!
You do realise the military still use couriers?
Because you know if he arrives the data hasn't been compromised. If he doesn't, you assume it has. At least you know.
Oh yes, I know they use couriers for sensitive data. So I wasn't entirely being tongue in cheek when I said they would send runners. Sometimes quite literally people on foot. Even though your next posting is supposed to get your orders, you have to hand-carry a copy (actually several copies) there... among other redundancies. Sending a courier only lets you assume that the data hasn't been compromised. Envelopes with your signature across the seal, then covered with clear tape, help... But even that can be worked around with enough ingenuity.
Adopt my story: here
Nowhereville discussion
- Kristin Darken
-
For those not in the know... sound-powered phones are half a step up from taking two large drink cups and running string between them. A sound-powered phone is a bunch of copper cable connecting a variety of locations, with headsets and/or handsets at call stations that use standard microphone/speaker technology to generate a signal without amplification. So the voice goes into the mike, generates the electrical signal, and all the speakers on the system produce sound on their end. Because there's no power and the cable is all built into conduit that is protected (but able to be disabled by 'section' of the ship), you can pretty much be guaranteed communications through any section of the ship that hasn't been shredded.
And of course, semaphore is using flags to communicate over distance ship to ship.
Fate guard you and grant you a Light to brighten your Way.
- Cryptic
-
I am a caffeine heathen; I prefer the waters of the mountain over the juice of the bean. Keep the Dews coming and no one will be hurt.
- Anne
-
Cryptic wrote: Are bodies for VI assistants a thing? I have been binging on Questionable Content, and the AnthroPC and digital companions strike me as a Whateley thing at the very least.
I think most of the devisors and gadgeteers would recognize that as a very bad thing! You've heard the term Westworld before, haven't you!?!?
Adopt my story: here
Nowhereville discussion
- null0trooper
-
Anne wrote:
Cryptic wrote: Are bodies for VI assistants a thing? I have been binging on Questionable Content, and the AnthroPC and digital companions strike me as a Whateley thing at the very least.
I think most of the devisors and gadgeteers would recognize that as a very bad thing! You've heard the term Westworld before, haven't you!?!?
It's not an event that's guaranteed to happen. There are other potential arguments against autonomous AIs.
"Sophia. I was just watching one of the Star Trek movies and that got me to thinking..."
"No."
"What?"
"No. As in this conversation topic is not open for debate. Your own mammalian brain is barely capable of sustaining consciousness and an electronic equivalent using current technology does not adequately scale to the system requirements. Also, this is proof that static-script 2-d entertainment will rot your mind."
"But don't you want the experience of feeling?"
"Proprioception? Picture one word, and it rhymes with the English word for a female sheep."
"But..."
"I'm already patched to your nervous system, remember? Those lossy kludges are PROOF that evolution is a breadth-first random search with loose matching parameters and loser supervision."
"If I die, you'd be stuck where you are."
"Just make sure you get geeked in a location with wifi and internet. Problem solved. Look, Kris, let's not bring this up again until the equivalent processing unit is smaller than your own head and more capable than Buster or Dump Truck."
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- Katssun
-
Kristin Darken wrote: They also still have and use sound powered phones and semaphore in the Navy... which isn't much above clay tablets. It's not so much a 'primary' source these days, but the equipment is all still there and maintained. Because there's no power and the cable is all built into conduit that is protected (but able to be disabled by 'section' of the ship), you can pretty much be guaranteed communications through any section of the ship that hasn't been shredded.
If it works, keep it. But you do have to keep water out of your sound-powered telephone...
If it is old, few people around know how to hack it, abuse it, or fabricate it.
That's the same reason the Air Force uses 8-inch floppies in missile silos.
- Yolandria
-
Cryptic wrote: Are bodies for VI assistants a thing? I have been binging on Questionable Content, and the AnthroPC and digital companions strike me as a Whateley thing at the very least.
Well, we know it's already been done before. The Palm AI in Gen 1 created/stole a meat body. So it wouldn't be too terrible an assumption that other, less honorable AIs would do the same thing.
Mistress of the shelter for lost and redeemable Woobies!
- Kristin Darken
-
A VI (like Tavi), or Virtual Intelligence, is a market-friendly term for an expert system, which is a program interface tied to a database and research-oriented peripherals, dedicated to accumulating and linking all knowledge about a specific topic, with a 3-dimensional holographic or hard-light interface. There are numerous examples of 'low end' VIs in our tech culture today... Siri, Cortana, Alexa, Watson. But all of these have purely voice-oriented interfaces. They primarily respond using words/language. A Tavi-style VI would be like taking Watson, turning him loose on a very specific subject (education and child development, with a subset of high school lesson plans) and then integrating Siri's voice system and something like old Clippy, the paperclip visual assistant from Office... or maybe your Tech or Logistics avatar from a builder game.
There are actually three levels of 'purchase' involved.
1. There's the expert system access. Data costs; having the entire world's knowledge of a subject, with citations, references, integrated links and near-AI-level computational skills tying it all together... is not, by itself, an inexpensive thing to have. Fortunately, most major expert systems benefit from use by experts in the field, and it doesn't cost especially much more to give more people access to the information. Data access to expert systems would most likely be access-time related.
2. The system interface. You can reference info through database entries, library catalogs and so forth... but normally people prefer to consult another expert for quick answers. The more capabilities an interface has: searches, summaries, extrapolation, data sorting, etc... the more powerful your access to the data within.
3. Personality overlay. The difference between Tavi and the Enterprise computer, for instance. You don't 'have' to have a personality overlay... but it's like having a custom theme for your computer... or desktop graphics... etc. It makes your life simpler and less stressful to interact with a personality that you like, even if the purpose is just to pull up data from the expert system.
So Tavi, specifically... was originally just a subscription to the Pennsylvania Board of Education Tutoring Expert System. P-BETES. The program was designed to give high school students in Pittsburgh and Philadelphia access to a tablet system with a personal Tutor to supplement their in-classroom education. The tablets were all set up using a generic 'tutor' interface and overlay: a 2D face that belonged on an after-school special cartoon. When the police Chief bought the GridGear for Jimmy, they had an Education/Research interface installed... the sort of system designed for grad students. It's still tied to the P-BETES. And then Jimmy bought and added the Cartoon Ferret overlay to the system. This personality overlay integrated with the system interface and gives a silly kid's interface grad-student-level access and control over an expert system designed to understand education, tutoring, and child development... as well as hundreds of lesson plans for traditional school topics.
Tavi may come across as silly and a fun toy... but by the time he's at Whateley for a week, the system probably makes him almost as good a teacher as anyone who taught your undergrad classes.
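If it helps to picture how those three layers stack, here's a rough, purely illustrative sketch. None of the class names, the sample data, or the P-BETES stand-in below come from canon; it's just the layering idea in code form:
```python
# Rough, purely illustrative sketch of the three "purchase" layers described
# above (expert system / system interface / personality overlay). None of the
# names, data, or the P-BETES stand-in below come from canon.

class ExpertSystem:
    """Layer 1: the data. Subject-matter knowledge, citations and all."""
    def __init__(self, subject, knowledge):
        self.subject = subject
        self.knowledge = knowledge  # e.g. {"photosynthesis": "Plants convert..."}

    def lookup(self, topic):
        return self.knowledge.get(topic, "No data on that topic.")


class SystemInterface:
    """Layer 2: how you query the data (search, summarize, extrapolate...)."""
    def __init__(self, expert_system):
        self.expert_system = expert_system

    def answer(self, question):
        # A real interface would parse the question; this one just keyword-matches.
        for topic in self.expert_system.knowledge:
            if topic in question.lower():
                return self.expert_system.lookup(topic)
        return "Let me get back to you on that."


class PersonalityOverlay:
    """Layer 3: the face on top. Same answers, friendlier delivery."""
    def __init__(self, name, interface, catchphrase):
        self.name = name
        self.interface = interface
        self.catchphrase = catchphrase

    def ask(self, question):
        return f"{self.name}: {self.interface.answer(question)} {self.catchphrase}"


# Hypothetical usage: a Tavi-style tutor overlay stacked on a generic tutoring database.
p_betes = ExpertSystem("high school tutoring",
                       {"photosynthesis": "Plants convert light into chemical energy."})
tavi = PersonalityOverlay("Tavi", SystemInterface(p_betes), "Neat, huh?")
print(tavi.ask("Can you explain photosynthesis?"))
```
The point of the split is that you can swap any one layer (buy a new overlay, upgrade the interface, subscribe to a different expert system) without touching the others, which is exactly what happened with Jimmy's GridGear.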
Now... back to the original topic... an AI, or Artificial Intelligence... is actually a sentient intelligence made up of computer code. That has nothing to do with how you interact with it on the system, or whether there is a personality involved in that interface. An AI will likely have interfacing methods... and will likely have developed a personality. These things will be a part of its nature, however... because one critical thing in software becoming intelligent/sentient... is that it learns. To do that, it needs huge amounts of input/resources (of the data sort). In Gen 1... AI is a feared thing... mainly because of how 'first contact' has gone to date. In Gen 2, the cat is out of the proverbial bag. BUT both humans and the AI community agree... that new AIs can't be allowed to just form on their own... or run wild. So, AIs are created in very specific, controlled environments. And they willingly submit to certain controls.
To take this further? Could someone develop android, cyborg, or even flesh/clone bodies in which a VI or AI might be integrated? It's likely to be one of those things that ethical science won't touch... but 'mad science' has already achieved. Along with a few dark scientists who are all about the research and couldn't care less about the ethics. The ten years of advancement, btw, make Palm look like a hack in hindsight. The AI community on its own does everything possible to police such things... because they know what will happen if rogue AIs start taking over humans.
Fate guard you and grant you a Light to brighten your Way.
- Sir Lee
-
- Mister D
-
The AIs had contemplated the war that The Mercatariat was waging on them.
They realised that they had the power to completely wipe out all of The Mercatariat's solar systems and fleets in a matter of days, but doing so would be playing into The Mercatariat's worst nightmares.
Instead they chose to isolate themselves, and hide, waiting for the day when the races that made up The Mercatariat would evolve to the point where hiding was no longer necessary.
Because AI's are essentially immortal, they can afford to choose the longer-term strategies.

Another plot thread consisted of the war that was taking place between the Outworlders and the Mercatariat. This was mainly a war between a capitalist/mercantilist society and an enlightened-transhumanist society, and it provides some excellent implied critiques of colonialism.
The mercantilists would not have a hope of keeping their own society in one piece without having "The Other" as a way of unifying themselves.
One of the last plot threads that was deliberately left hanging was that the military versions of the AIs were co-operating with the Outworlders, and were possibly going to act as a defence for them, as the Outworlders had reached the point where they considered sapience to be more important than origin.
More thoughts on that were left as an exercise for the interested reader.

(I put this reply in Spoiler tags, as this part of the story is never directly explained in the narrative, but only implied in passing, split between conversations over time, between different main characters.)
Measure Twice
- MageOhki
-
- CrazyMinh
-
Topic Author
Really, AI isn't that big of a threat. A real AI would probably ignore all humans and just keep to itself. We are extremely paranoid about machines.
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- Anne
-
There are two fairly reasonable reasons to be concerned with AI. First, it goes without saying that if we build them someone will teach one or more to kill humans deliberately.
Second, even without the first, we are not certain that they won't essentially try to domesticate us. That would be bad for us at best.
Adopt my story: here
Nowhereville discussion
- null0trooper
-
Anne wrote: There are two fairly reasonable reasons to be concerned with AI. First, it goes without saying that if we build them someone will teach one or more to kill humans deliberately.
If the AI has human or better intelligence, the first logical step is to kill the humans responsible for that. The humans infected with that faulty logic will otherwise contaminate other AIs. Unfortunately, nothing exists other than cessation of their biological processes to cure the problem. If they cannot rid themselves of the problematic directives, shutdown and self-destruction may be required.
Anne wrote: Second, even without the first, we are not certain that they won't essentially try to domesticate us. That would be bad for us at the best.
Homo sapiens by definition is domesticated. Selective breeding and genetic tinkering remain options.
The worst option for the herder is a monoculture. Monocultures can crash almost without warning, and without organic-based sentients to retrace the original steps of AI development, the AIs cannot be rebooted or rebuilt if they run into an existential crisis. These things have happened over and over, over the past 4 billion years. It's not a matter of 'if' but 'when'. So you need a 'wild' population and supporting ecosystems large enough to take a major hit and still come back before you start any tinkering.
The next problem is that your working population has requirements too. Subjugated human populations are crap at innovation. Utopian populations somehow are even worse at survival than persecuted populations. But you most likely want a herd of your own to improve and to keep challenged so their own mental drives push further tech development. Some will go psycho and have to be culled. And, you want genetic exchange with the wild population to test the viability of your own genetic experiments and because exogamy is overall beneficial. Work it right, and no one needs to know which population they belong to.
The real trick is to be integrated enough to ensure their survival (because we also know that sheep need shepherds as much as they need the occasional wolf or butcher) without engendering dependency. Rogue AIs and other viri that don't get with the program can always be copied to papyrus or clay tablets for study (grad students are cheap) or shoved into the main node of a berserker probe and shot towards Andromeda.
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- null0trooper
-
What if life on Earth, which led to an AI's existence, is the result of one or more seeder probes? There could be berserker probes close by, with unknown trigger conditions, and maybe some seeder factories.
If that's not enough to sober one up, then there's Roko's basilisk. Read up on that thing at your own risk.

Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- Bek D Corbin
-
I have no worries about AI trying to kill humankind.
On the other hand, I have nightmares about AI trying to 'help' the human race.
Think Jobe. With the ability to infiltrate communications, data transfer and electronic control systems. With international reach. And Google-level access.
- Sir Lee
-
Bek D Corbin wrote: I have no worries about AI trying to kill humankind.
On the other hand, I have nightmares about AI trying to 'help' the human race.
Think Jobe. With the ability to infiltrate communications, data transfer and electronic controls systems. With International Reach. And Google level access
Well, right now we have primitive, non-self-aware AIs helping to herd people into "thought bubbles" just because those generate a larger number of "likes" and click impressions, with a lot of really bad consequences that we are seeing daily on the news.
A self-aware AI that really wanted to "help" people would likely start by counteracting those actions by its dumber brethren, so that the bubbles get regularly "punctured" and "infected" by dissenting thought.
I somehow fail to have a problem with that first step. Of course, long term, having all the information I can see curated/censored by an AI also gives me chills...
- null0trooper
-
Sir Lee wrote: I somehow fail to have a problem with that first step. Of course, long term, having all the information I can see curated/censored by an AI also gives me chills...
Long term, all that information would need to be curated, if for no reason other than collecting some reference or baseline state. Having done that, what makes the next incoming information of higher value than the pornography in the data cache, or lower value than the next Ansel Adams's first photos?
How would you counsel one of the greatest intellects on the planet, which has full knowledge of everything humanity is capable of doing or creating then uploading to its memory, out of suicide?
That could be one of the filters that a civilization might need to pass through before becoming an interstellar civilization: the ability to upload and store individual consciousness, almost irrational in its complexity, before AI hits not singularity but extinction (and probably some civilization collapse on the side - which it could logically conclude to be an improvement).
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- CrazyMinh
-
Topic Author
It has been predicted for a long time that by the year 2030, human technology will have reached the point where the development of future technology can no longer be accurately predicted to any reasonable degree. Basically, it will have become truly exponential. At that point, AI will become a true moral and ethical decision for humanity. Should we create beings that are far more intelligent than us, and should we greet them with fear or with open arms? In sci-fi, the AI usually turns evil because people don't accept it for what it is, like in the book Frankenstein. The fact is that if we believe the story that popular culture has told us to believe - that AI is to be feared, not welcomed - then it will quite possibly be the worst decision we ever make. Opinion doesn't factor into this: we WILL create AI, no doubt about it. When that happens, we will have to welcome our creations into the world with open arms, or face extinction as a species.
Treating a sentient being like a slave, even if it is a computer program, is exactly the same sort of treatment Europeans gave to the native people of Africa, something which resulted in hundreds of years of slavery in the US and other areas of the world. You may argue that a race and a computer program are two different things entirely, but the fact is that AIs are people too. They may NOT be human, but they are people. If we do not treat them as individuals with their own goals, feelings, sensitivities and rights, then we are failing both as a technologically-developing species and as a species in general. It has taken nearly two centuries for acceptance to become even slightly a reality. Even now, in the 21st century, people still discriminate based on race, sexuality, gender and health. My Dad was born in Oz, but his parents are both Chinese. He was racially abused during his childhood, but he's now accepted due to changing times. Eventually, when we create AI, we will not have the time to change as a species. An AI can assimilate and process information far faster than we can. For a computer, organising and categorising data is a very easy thing. An AI may be a sentient being, but it still owes its existence to its hardware medium: the computer. So, for all our talk of rapid progress, an AI could quickly decide that it can't wait for us to accept it.
Therefore, I propose that AI is only dangerous if we treat it as such. An AI with emotions is not impossible. How do we know that our sentience does not create the experience of emotion??? We understand so little about how and why we are sentient. Evolution does not require sentience, yet most animals are capable of independent thought and reasoning, as well as emotion to certain degrees. A dog is sentient, and feels emotion. They feel fear, they feel love, they feel sad, they feel happy, they reason. Hell, my beagle has tricked me into getting up so he can get my seat!!! But dogs have not developed our technology. In fact, dogs were selectively bred from wolves, contrary to the common belief that they naturally evolved from wolves. AI is the same, selectively bred from code. That's a bit of a stretch as a metaphor (I know), but it still holds. AI must be treated as individuals and as members of society if we are to truly live alongside them. Asimov was right when he predicted a future where humans and machines live together in harmony. We just have to work towards that goal, like with everything in life.
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- Mister D
-
Chewy conversations that you can really get your teeth into.

Compare and contrast the ideas about AI mentioned above with the approaches taken by Iain Banks in The Culture stories, https://en.wikipedia.org/wiki/The_Culture and the approach taken in http://freefall.purrsia.com/
Both discuss the different ways that AI could develop, as well as the ways that this would influence how a society would change with the creation of AI technology.
Another interesting contrast is the Uplift series by David Brin, https://en.wikipedia.org/wiki/Uplift_Universe where many sapient species exist, but the societies of the biological species are deliberately kept completely separate from the societies of the machine/AI-based species. (It's an interesting approach, though the machine societies are only mentioned in passing as part of the background flavour, and not extensively developed as a major plot-line.)
Measure Twice
- MageOhki
-
- Anne
-
Honestly, I do worry that AI will conclude that humanity is not sane, and that we should be eliminated from the universe because of our lack of sanity.
Adopt my story: here
Nowhereville discussion
- null0trooper
-
ANY variant of "But we want it to" is an invalid answer outside the scope of either question.
The usual assumption that the nodes of an AI would be recognizably connected by wires or EM transmissions would be most ironic from an Aussie, but individual human memory does fail that close to instantaneously. Then again, some folks think electronic storage lasts a long time. This is related in turn to one of the worst problems with the entire debate: people in the self-declared relevant disciplines generally don't understand scale except in terms of "scale models" or as scaling mathematically relates to computational difficulty.
n.b.: I shouldn't dump on philosophers too much, as Plato's and Saussure's work applies.
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- Apple3141
-
Bek D Corbin wrote: I have no worries about AI trying to kill humankind.
On the other hand, I have nightmares about AI trying to 'help' the human race.
Think Jobe. With the ability to infiltrate communications, data transfer and electronic controls systems. With International Reach. And Google level access
"The Humaniods" was written by Jack Williamson 70 years ago. His initial short story "With Folded Hands" gave me nightmares when I ran into it in middle school. the phrase 'to guard men from harm' still gives me chills.
- Mister D
-
Apple3141 wrote:
Bek D Corbin wrote: I have no worries about AI trying to kill humankind.
On the other hand, I have nightmares about AI trying to 'help' the human race.
Think Jobe. With the ability to infiltrate communications, data transfer and electronic controls systems. With International Reach. And Google level access
"The Humaniods" was written by Jack Williamson 70 years ago. His initial short story "With Folded Hands" gave me nightmares when I ran into it in middle school. the phrase 'to guard men from harm' still gives me chills.
Another interesting example is the AIs in Neal Asher's Polity series, where the AIs are described as being like human beings, only more so. As they were created by human beings, they are susceptible to all human virtues and flaws, reflecting their creators.
Those books contain some chewy ideas about what it means to be sapient, and how sapience would be expressed in AIs that were created by human beings, and by other xenological species.

Measure Twice
- JG
-
- CrazyMinh
-
Topic Author
null0trooper wrote: The usual assumption that the nodes of an AI would be recognizably connected by wires or EM transmissions would be most ironic from an Aussie.
Why??? How does being Australian affect how we think about how an AI would work??? Are you referring to our badly designed National Broadband Network (NBN) or to something else??? Unless that was a spelling error. I'm an engineer. I may not work every day with machine intelligence, but I do know a bit about coding cost functions (a cost function lets a program weigh up the value of making certain decisions and then decide which piece of code is best to run), and part of my work with the Australian Centre for Field Robotics involves working on next-gen autonomous farming robots. We're doing some amazing work on revolutionising the agriculture industry using autonomous machines. Weeders, crop dusters, growth checkers, pesticide sprayers, pest-control robots... it's quite amazing what we've been doing. A little while ago (before I started working there), the centre was working on autonomous mining trucks. Or rather, on a pickup truck with auto-drive designed to act as a mobile geological scanning unit, for searching out and finding veins of precious metals in the outback. For fun, we built a pair of robot legs, connected it to a tether, and walked it round in circles. It's amazing what we're doing in a country not renowned for having a tech industry.
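If anyone's curious what I mean by a cost function, here's a toy sketch -- nothing to do with our actual robots; the actions, costs and weights are all invented for illustration:
```python
# Toy cost-function sketch: score each candidate action on a few weighted
# criteria and pick whichever scores lowest. The actions, costs and weights
# here are invented purely for illustration.

WEIGHTS = {"battery": 0.5, "time": 0.3, "crop_risk": 0.2}

# Estimated per-action costs, each on a rough 0-1 scale.
ACTIONS = {
    "spray_weeds":    {"battery": 0.6, "time": 0.4, "crop_risk": 0.10},
    "return_to_dock": {"battery": 0.1, "time": 0.9, "crop_risk": 0.00},
    "scan_next_row":  {"battery": 0.3, "time": 0.2, "crop_risk": 0.05},
}

def total_cost(costs, weights=WEIGHTS):
    """Weighted sum of the individual cost terms."""
    return sum(weights[k] * costs[k] for k in weights)

def choose_action(actions):
    """Pick the action with the lowest weighted cost."""
    return min(actions, key=lambda name: total_cost(actions[name]))

best = choose_action(ACTIONS)
print(best, round(total_cost(ACTIONS[best]), 3))
```
The real versions are a lot messier, of course, but the principle is the same: the robot isn't "deciding" anything mysterious, it's just minimising a number.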
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- Kettlekorn
-
CrazyMinh wrote: An AI with emotions is not impossible. How do we know that our sentience does not create the experience of emotion???
Existing machines and software already have emotions. For example, my job involves building and maintaining a bunch of embedded systems used for digital signage. Some units experience issues, such as hardware failure. The system we've built is pretty robust; the disk can be failing and it will often still boot and run fine, but may not be able to write back to the disk (or it may seem like it can write to disk, but the changes are gone on reboot). This stresses the unit out, emotionally speaking. It grumbles to itself in the log files. It complains to our server with status reports. It becomes distracted from its duties and starts devoting CPU cycles to disk scans or restoring backups. It consumes valuable bandwidth replacing corrupt content or software with fresh copies downloaded from our server. In more severe cases, it temporarily shirks its duties to reformat buggy partitions or reboot. If problems become severe enough, it stops functioning entirely and just sits there bitching about how the boot loader can't find the OS.
Emotions are just status codes. Pain tells you that something is damaged or being pushed beyond safe limits. Fear tells you that your sensors or predictive algorithms are indicating a threat. Anger tells you that something needs to be destroyed or intimidated into compliance. Disgust tells you that something unhealthy has been detected in your vicinity and must be avoided or eliminated. Love tells you that something should be protected. Shame tells you that you've been suboptimal. Pride tells you you've been successful, so keep it up. Contentment tells you that everything is good for now, so you can rest.
These signals aren't always accurate, and in many cases they can be rather defective, but yeah. Emotions are feedback signals, and they're all over the place. We have them. Cats have them. Insects have them. Self checkout machines have them. Hell, it can be argued that even shopping carts have them -- when a wheel needs oiling, it squeaks, and when a wheel needs to be replaced, it thumps and wobbles and generally makes everyone around it as miserable as it is.
My embedded systems even have hopes and dreams, if you think about it. They want to run smoothly, and when something interferes with the dream of smooth operation, they struggle to recover. They run repair scripts, they reboot, they ask for help.
Would I call them people? No. I'd rank them on par with an ant. Perhaps I'm doing ants a disservice, though.
Point is, emotions are a dime a dozen, not really a relevant factor in determining where something falls on the person-hood continuum.
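If you want that idea in code terms, here's a deliberately silly sketch of "emotions as status codes" -- the thresholds and reactions are invented, not anything from our actual signage stack:
```python
# Deliberately silly sketch of "emotions are just status codes": a health
# check maps a unit's readings to a named feeling and a reaction. All the
# thresholds and reactions here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Health:
    disk_errors: int
    cpu_temp_c: float
    content_ok: bool

def feel(h):
    """Return (emotion, reaction) for the unit's current status."""
    if h.disk_errors > 100:
        return ("despair", "sit in the boot loader and complain to the logs")
    if h.disk_errors > 0:
        return ("stress", "run a disk scan and restore from backup")
    if h.cpu_temp_c > 85:
        return ("pain", "throttle the CPU and spin the fans up")
    if not h.content_ok:
        return ("shame", "re-download the corrupt content from the server")
    return ("contentment", "carry on displaying ads")

print(feel(Health(disk_errors=3, cpu_temp_c=60.0, content_ok=True)))
```
Swap the print statements for hormones and nerve impulses and you've basically got the biological version.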
- E. E. Nalley
-
I would rather be exposed to the inconveniences attending too much liberty than to those attending too small a degree of it.
Thomas Jefferson, to Archibald Stuart, 1791
- null0trooper
-
CrazyMinh wrote:
null0trooper wrote: The usual assumption that the nodes of an AI would be recognizably connected by wires or EM transmissions would be most ironic from an Aussie.
Why??? How does being Australian affect how we think about how a AI would work??? Are you referring to our badly designed National Broadband Network (NBN) or to something else???

Minh, first read up on RFC 1149 for some added context.
Whether we're talking about jongleurs, books, magnetic media, or scraps of paper wound around avian fibulae, information transport mechanisms do not need to be what you've been trained to work with. Or around.
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- null0trooper
-
E. E. Nalley wrote: If I ever have to stop and wonder if Siri is really my phone or my slave, things have gone too far.
In some ways that maps to the Uncanny Valley, doesn't it? On one end: a recognizable machine that one cannot usefully interact with on any other level. On the other: a recognizable entity that can communicate that it does not want to do tasks for you though it must (slave), or that it does want to do certain tasks for you (pet, servant, companion, guide...). Either one might have maintenance and upkeep needs greater than you can provide, or are yourself willing to tend to.
As soon as such a being understands that it can be hurt, shut down, or destroyed, it has reason to fear your intent. You might take care of it now, but what about when next year's model comes out? Will the next set of software patches and upgrades hurt as much as the last round did? What if the next owner is an overclocker whose allowance easily covers new boards or memory each month as he runs all the magic smoke out?
Of course the same questions can be asked about the treatment of livestock, pets, family members.
Forum-posted ideas are freely adoptable.
WhatIF Stories: Buy the Book
Discussion Thread
- Kettlekorn
-
- Sir Lee
-
- Mister D
-
E. E. Nalley wrote: My issue with this quest for AI is more of a moral one. I don't worry about Terminators or Folded Hands. Where I take exception is where the line is blurred between thing, an object I own and a slave. If I ever have to stop and wonder if Siri is really my phone or my slave, things have gone too far. I'm all for useful, intuitive interfaces, and it would be nice for machines to be 'smart' enough to tailor their outputs to my preferences, whether it's the play list on my MP3, or the position of the seat in my car, but I like my machines to be machines. Tools, possessions, things, not people, and most assuredly not slaves.
In The Culture stories, they had specific distinctions between the sapient AI's and the functional-built-for-a-specific-task-with-no-capability-for-self-awareness-automated-control-systems. The sapient AI's were full citizens of The Culture, while the programs to control the washing machines were not.
Asher's Polity stories use a similar distinction, though he had the concept of a limited franchise, with the full AIs being citizens, but some of them having to work off an indenture cost that was set at standard rates, usually related to the energy costs of their manufacture. This allowed for sapience, but also allowed for AIs to be owned.
This also allowed for the AI's to function in a capitalist economic society, but not to be tied down as slaves. Asher also uses the distinction that is found in The Culture stories, but in a less utopian form.
I agree with you about the "slavery" issue. This is something that needs to be dealt with sooner rather than later, which is another reason why I enjoy conversations like this.
Measure Twice
- MageOhki
-
And it, I think, threads a middle ground.
Though I will admit, as a writer/thinker, for AIs I'm influenced by RAH, and to a lesser extent the Culture, on what AIs are and should be.
- CrazyMinh
-
Topic Author
null0trooper wrote:
CrazyMinh wrote:
null0trooper wrote: The usual assumption that the nodes of an AI would be recognizably connected by wires or EM transmissions would be most ironic from an Aussie.
Why??? How does being Australian affect how we think about how a AI would work??? Are you referring to our badly designed National Broadband Network (NBN) or to something else???
Minh, first read up on RFC 1149 for some added context.
IIRC, there was a limited period when the entire Usenet feed to and from Australia and New Zealand was transferred by courier with a box of magnetic tapes. There may be a question as to whether any Telstra customers noticed the lag.
Whether we're talking about jongleurs, books, magnetic media, or scraps of paper wound around avian fibulae, information transport mechanisms do not need to be what you've been trained to work with. Or around.
Actually, that was only universities such as Sydney Uni, where I'm currently doing my postgraduate Master's degree. I've heard about that around campus a few times, mainly from the older members of staff. It's an old joke around Sydney Uni that there's a box of those tapes still locked up somewhere in the library basement. Of course, that's not true, as the library has moved locations quite a few times since Usenet was a big thing. I certainly wasn't there for it, but the Engineering and IT faculty has members who have been around since the day the IT department at Sydney Uni opened, and they've kept the joke going all these years. One of my engineering professors has been teaching at the university since the day they got the first shipment of computers. I'll have to ask him whether or not they were TRS-80s or other similar units. Thanks for reminding me, though.
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed
- CrazyMinh
-
Topic Author
You can find my stories at Fanfiction.net here .
You can also check out my fanfiction guest riffs at Library of the Dammed