Communion Of Dreams


Easy to predict.

In Communion of Dreams, I have “experts” who are A.I. assistants. As I describe them in that book when I introduce one as the character ‘Seth’:

His expert was one of the best, one of only a few hundred based on the new semifluid CPU technology that surpassed the best thin-film computers made by the Israelis. But it was a quirky technology, just a few years old, subject to problems that conventional computers didn’t have, and still not entirely understood. Even less settled was whether experts based on this technology could finally be considered to be true AI. The superconducting gel that was the basis of the semifluid CPU was more alive than not, and the computer was largely self-determining once the projected energy matrix surrounding the gel was initiated by another computer. Building on the initial subsistence program, the computer would learn how to refine and control the matrix to improve its own ‘thinking’. The thin-film computers had long since passed the Turing test, and these semifluid systems seemed to be almost human. But did that constitute sentience? Jon considered it to be a moot point, of interest only to philosophers and ethicists.

In the world of 2052, when Communion is set, these “experts” are ubiquitous and extremely helpful. Seth is an “S-series”, the latest tech, and all S-series models have names which start with S. I figured that naming convention would be a nice way to track the development of such expert-systems technology, and in the course of the book you see earlier models which have appropriate names.

So when the time came to write St Cybi’s Well, I figured that I would introduce the first such model, named Andi. Here’s the first bit of dialog with Andi:

“Hi, I’m Andi, your assistant application. How can I help you?”

“Andi, check local restaurant reviews for Conwy and find the best ranked Fish & Chips place.”

“You’re not in Conwy. You’re in Holywell. Would you rather that I check restaurants where you are?”

“No, I’m not hungry yet. But I will be when I get to Conwy.”

“Very good. Shall I read off the names?”

“Not now. It can wait until I am closer.”

“Very good. Shall I track your movement and alert you?”

“No.”

“Very good. May I help you with something else?”

“Not right now.” Darnell shut off the app, then the phone, and dropped it back into his pocket. The walk back to his car was uneventful.

Now, I wrote this bit almost eight years ago, long before “Siri” or “Alexa” were announced. But it was predictable that such technology would soon be introduced, and I was amused as all get-out when Amazon decided to name their first assistant “Alexa”.

Anyway, I also figured that since the technology would be new and unsophisticated, Andi would be slightly annoying to use: it would default to repeating scripts, be easy to confuse, et cetera, much like encountering a ‘bot on a phone call. You can judge for yourself, but I think I succeeded in the book; the readers of early chapters thought so, and commented on it.
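For the curious: the kind of first-generation assistant I was imagining is little more than keyword matching against canned scripts, which is exactly why it loops and repeats itself. Here is a minimal toy sketch of that idea (all names and responses are my own hypothetical illustration, not code from the book or any real product):

```python
# Toy sketch of a first-generation "scripted" assistant, in the spirit of Andi.
# Everything here is a hypothetical illustration, not a real API.

RULES = [
    ("map",  "A map is now displayed on your screen. Do you need something else?"),
    ("page", "The page you requested is displayed on your screen. Do you need something else?"),
]
FALLBACK = "I'm sorry, I didn't understand that. Do you need something else?"

def reply(utterance: str) -> str:
    """Match the first keyword rule; otherwise repeat the fallback script."""
    text = utterance.lower()
    for keyword, script in RULES:
        if keyword in text:
            return script
    return FALLBACK

# With no memory of the conversation, the bot cheerfully repeats itself:
print(reply("How about a map to St. Seiriol's Well?"))  # the "map" script
print(reply("Where is the Well itself?"))               # fallback, every time
```

Because there is no conversational state, every confusing question gets the same scripted reply, which is precisely the annoyance the early readers noticed.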

So this article in the morning Washington Post made me chuckle:

Alexa, just shut up: We’ve been isolated for months, and now we hate our home assistants

“I’m not a bad person,” Angela Hatem said. “I’m so nice to people.” But Alexa, the voice of Amazon’s home devices, isn’t a person, despite how hard “she” tries to emulate one. And coronavirus/self-quarantine/2020 has Hatem feeling a bit stressed out.

“I say things to Alexa that I wouldn’t say to my worst enemy, if I had one. And I don’t know why. She makes me crazy. … I curse at her. I call her names. I’m very, very mean to her,” said Hatem, who lives in Indianapolis with her 1-year-old son. “There’s really few things I can vent at or vent to, and I’m making Alexa my virtual punching bag.”

 

Heh. Nailed another prediction.

* * *

It’s the first of the month. That means that both novels and our care-giving memoir are available for free download, as they are on the first of each month. If you haven’t already, please help yourself and tell your friends.

Jim Downey

 

 



All alone in the dark of night?

Perhaps:

Earth could be unique among 700 quintillion planets in the Universe, study finds

So much of humanity’s astronomical research is based around the notion of finding something like us out there – whether that’s looking for environments that could sustain life, ranking planets in terms of their potential habitability, or comparing distant worlds to our own.

But what if – statistically speaking – the odds are stacked against us finding another planet even remotely like Earth? That’s the thinking behind a new study by an international team of researchers, which has taken what we know about the exoplanets that lie outside our Solar System and fed the data into a computer model.

Their resulting calculations, designed to simulate how galaxies and planets have formed over some 13.8 billion years, produces a “cosmic inventory” of terrestrial planets – and one in which Earth very much looks to be unique.

 

Perhaps not:

Jon nodded. “Thanks. So what’s the meeting about? What happened?”

“Dr. Jakobs tried to contact you this morning. After hearing her message, I bounced it up to Director Magurshak. They found something on Titan. An artifact.” Seth paused, looked down at his hands, “a nonhuman artifact.”

Jon sat there for a moment, trying to digest what Seth said. According to what pretty much everyone thought, it wasn’t possible. SETI, OSETI, META and BETA had pretty much settled that question for most scientists decades ago, and twenty years of settlement efforts throughout the solar system hadn’t changed anyone’s mind. Even with the Advanced Survey Array out at Titan Prime searching nearby systems for good settlement prospects, there had never been an indication that there was an intelligent, technologically advanced race anywhere within earshot. Seth knew Jon well, didn’t let the silence wait. He looked back up, eyes level and unblinking, “It isn’t a hoax. The artifact is definitely nonhuman, or at least non-contemporary human. Mr. Sidwell found it out near his base. Dr. Bradsen will have as much a report on it as is available, which isn’t much.”

 

Jim Downey



Well, however you get there, I suppose …

Via Topless Robot, this article/video from the New York Times:

Sex Dolls That Talk Back

Matt McMullen has proved that some people are willing to spend thousands on sex dolls.

* * *
Mr. McMullen’s new project, which he is calling Realbotix, is an attempt to animate the doll. He has assembled a small team that includes engineers who have worked for Hanson Robotics, a robotics lab that produces shockingly lifelike humanoid robots.

Mr. McMullen is first focusing on developing convincing artificial intelligence, and a robotic head that can blink and open and close its mouth. He’s also working to integrate other emerging technologies, like a mobile app that acts like a virtual assistant and companion, and virtual reality headsets that can be used separately or in tandem with the physical doll.

 

It’s accepted wisdom that many new technologies come into their own and are quickly disseminated among the public when a way can be found to use them for sex and/or depictions of same. Printing. VHS tapes. DVDs. The internet. Smartphone apps like Tinder or Grindr.

So why not artificial intelligence?

Which isn’t the way I saw the technology for an expert system/assistant like Seth developing, but hey, I suppose whatever works …

 

Jim Downey



Coming soon, to a reality near you.

There are over 70 reviews of Communion of Dreams on Amazon, and if you poke around online you’ll find a bunch more. In addition, I’ve heard from countless friends and fans about the book in private messages and chats. And one of the most common things people will note is just how much they like the character of Seth, the Expert/AI executive assistant for the main character, Jon. The book opens with Seth contacting Jon about something which has come up, and you can get a sense of how useful such a virtual assistant could be:

“Sorry to bother you, Jon, but you’ll need to come back immediately. Business. I’ve made the arrangements. Transport waiting for you in town, take you to Denver. Then commercial flight home.” Audio only. That meant a lot. Tighter beam, easier to encode and keep private. Security protocol.

He wondered if something had gone wrong with the Hawking, the experimental long-range ship undergoing trials, based out at Titan. That was about the only thing he could think of that would require his cutting short his first vacation in four years. No use in asking. “All right. Give me a few minutes to pack my things, and I’ll get started.”

“Understood.”

“And contact my family, let them know I’m on my way back. ”

“Will do. Anything else?”

“Not at present. See you when I get there.”

Of course, CoD is set in 2052, and there have been huge advances in technology which allow for a very natural interaction between a human and a computer.

What’s been fun for me in writing St Cybi’s Well, set in our own time (well, actually, in October 2012), is that I get to plant the seeds for the technology which then shows up in Communion of Dreams. And one of those seeds is an Android app which is a ‘virtual assistant’ named Andi. It’s, er, not quite up to Seth’s standards:

Darnell sat there, scanned the blog post. As he read, the assistant repeated “The page you requested is displayed on your screen. Do you need something else?”

“Um, yeah. How about a map to St. Seiriol’s Well?”

“A map is now displayed on your screen. You are presently at the location of St. Seiriol’s Well. Do you need something else?”

“I’m not at the Well. I’m in the parking lot. Where is the Well itself?”

“I’m sorry, available maps indicate you are presently at the Well. Do you need something else?”

“Go back to the Well Hopper site.”

“Very good. The page you requested is displayed on your screen. Do you need something else?”

 

A bit annoying, eh? Well, the people who have been reading the early chapters of the book have certainly thought so. Which was exactly what I was going for. Because technology doesn’t arrive fully developed. It shows up in an early, buggy form, and then gets improved over time. Think back to when we all had dial-up modems: they were annoying, clunky, and expensive … but they also were very, very cool because they allowed us to “get online”.

Anyway, I had to chuckle over a story on NPR yesterday afternoon which reminded me of this. Here’s the intro:

We’re already giving voice instructions to virtual personal assistants, like Apple’s Siri. But artificial intelligence is getting even smarter. The next wave of behavior-changing computing is a technology called anticipatory computing — systems that learn to predict what you need, even before you ask.

Google Now, which is available on tablets and mobile devices, is an early form of this. You can ask it a question like, “Where is the White House?” and get a spoken-word answer. Then, Google Now recognizes any follow-up questions, like “How far is it from here?” as a human would — the system realizes you’re still asking about the White House, even without you mentioning the search term again. It’s an example of how anticipatory computing is moving the way we interact with devices from tapping or typing to predictive voice control.
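The context-carrying trick the NPR piece describes, resolving “How far is it from here?” against the previous question, boils down to the assistant remembering the last topic mentioned and substituting it into pronoun-style follow-ups. A toy sketch of that idea, under my own simplifying assumptions (this is not how Google Now actually works):

```python
# Toy sketch of context carry-over in a conversational assistant.
# Hypothetical illustration only; a real pipeline is far more involved.

class Assistant:
    def __init__(self):
        self.last_topic = None  # most recently mentioned entity

    def ask(self, question: str) -> str:
        q = question.lower()
        # A follow-up containing "it" refers back to the remembered topic.
        if "it" in q.split() and self.last_topic:
            topic = self.last_topic
        else:
            # Crude "entity extraction": whatever follows "where is".
            topic = q.replace("where is", "").strip(" ?") or self.last_topic
            self.last_topic = topic
        return f"[answering about: {topic}]"

a = Assistant()
print(a.ask("Where is the White House?"))  # [answering about: the white house]
print(a.ask("How far is it from here?"))   # still about the White House
```

The point is simply that once the assistant keeps even one slot of conversational state, it stops feeling like Andi and starts feeling like Seth.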

It wasn’t a prediction on my part to see this development, rather just paying attention to the current technology and tweaking it a bit to fit into the alternate timeline of CoD/SCW. But still, kinda fun to see things going just the way I envision.

 

Jim Downey



What kind of future were you expecting?

One with an ‘expert’ like Seth from Communion of Dreams?

Or maybe, a little closer to our own time, say one which includes something like HAL from 2001?

Ha! Sucker. You should know that reality would prove to be more … banal. And corporate. Like this:

The phone call came from a charming woman with a bright, engaging voice to the cell phone of a TIME Washington Bureau Chief Michael Scherer. She wanted to offer a deal on health insurance, but something was fishy.

When Scherer asked point blank if she was a real person, or a computer-operated robot voice, she replied enthusiastically that she was real, with a charming laugh. But then she failed several other tests. When asked “What vegetable is found in tomato soup?” she said she did not understand the question. When asked multiple times what day of the week it was yesterday, she complained repeatedly of a bad connection.

Over the course of the next hour, several TIME reporters called her back, working to uncover the mystery of her bona fides. Her name, she said, was Samantha West, and she was definitely a robot, given the pitch perfect repetition of her answers. Her goal was to ask a series of questions about health coverage—”Are you on Medicare?” etc.—and then transfer the potential customer to a real person, who could close the sale.

Hmm, I think I can still work “Samantha” into St. Cybi’s Well …

 

Jim Downey



Sometimes the future is cool …

… and sometimes it is just chilling.

First, meet Seth’s grandpa:

Watson is a cognitive capability that resides in the computing cloud — just like Google and Facebook and Twitter. This new capability is designed to help people penetrate complexity so they can make better decisions and live and work more successfully. Eventually, a host of cognitive services will be delivered to people at any time and anywhere through a wide variety of handy devices. Laptops. Tablets. Smart phones. You name it.

In other words, you won’t need to be a TV producer or a giant corporation to take advantage of Watson’s capabilities. Everybody will have Watson — or a relative of the Watson technologies — at his or her fingertips.

Indeed, Watson represents the first wave in a new era of technology: the era of cognitive computing. This new generation of technology has the potential to transform business and society just as radically as today’s programmable computers did so over the past 60+ years. Cognitive systems will be capable of making sense of vast quantities of unstructured information, by learning, reasoning and interacting with people in ways that are more natural for us.

Next, consider the implications of this idea:

Now think of another way of doing this. Think of a website that is a repository of all these IDs, and is government-owned or certified. Why can’t I just visit a police station once, pay a fee (so the government doesn’t lose money on this), show all my documentation, have the government scan and upload everything so that all policemen and pertinent authorities can have access. Then my car insurance company, my health insurance company, the car registration agency can all notify this government repository if I stop paying, or if my insurance policy is not valid anymore.

Imagine a world in which the police has tablets or smartphones that show nice big pictures of you, in which whatever they currently do secretly with NSA-type agencies they do openly instead. If they find you without an ID they ask, “who are you?”, and once you give your name, they can see your photo and a ton of information about you. It would be so hard for anyone to impersonate you. I find it paradoxical that while some government agencies spy on you and know all about you, others pretend to know nothing until you show them a piece of plastic that if you lose, somebody else can impersonate you with. We need to evolve from this. We need to evolve into a system in which we have no wallets and a safer world!

Yeah, safer …

TrackingPoint, the biggest name in “smart” scope technology today, is rolling out their next big project. Not too surprising, it is a military endeavor. Called the “Future of War,” TrackingPoint is gearing up for a new market.

The company has been getting a lot of attention with their high-end big-bore hunting rifles that are designed to track targets up to 1,000 yards away. The “smart” aspect of the scope technology is a host of rangefinders and sensors that, combined with optical image recognition software, calculate the ballistics of the shot and compensate for it automatically.

TrackingPoint hasn’t exactly concealed their intentions to develop arms for the military market. That was always a possibility and something they all but confirmed when they began talking about their second-generation precision guided rifle systems that, chambered for .50 BMG, are expected to be effective well over 3,000 yards. The cartridge, .50 BMG, is a devastating long-range anti-personnel and anti-material round.

From TrackingPoint’s website:

Target handoff can be achieved by leader touching a smart rifle icon and map location at which point the designated user will see an arrow in his scope directing him to look at handoff location. Whether from shooter to shooter, leader to shooter, drone to leader to shooter, shooter to leader to drone, handoff is a simple touch interface via a mobile device and mobile apps augmented by the appropriate a la carte communications gear.

Emphasis added, because:

The MADSS is one mean robot. Developed by defense industry leader Northrop Grumman and currently being showcased at the Fort Benning, Ga. “Robotics Rodeo,” the MADSS is a 1 1/2-ton unmanned ground vehicle designed to provide soldiers with covering fire while cutting down targets.

Make no mistake, it’s an automatic shooting machine. But it requires people to operate it and set targets. The MADSS — Mobile Armed Dismount Support System — tracks and fires on targets only once it gets the green light. It won’t shoot unless a soldier is directing it.

It’s half killer robot, half killer giant remote-control car.

But you know, not all cars need someone in control of them these days:

In Silberg’s estimation, the reason is that Audi, BMW, and Mercedes-Benz drivers are “already accustomed to high-tech bells and whistles, so adding a ‘self-driving package’ is just another option.” Throw in the possibility of a special lane on highways for autonomous vehicles and the ability to turn the system on and off at will, and premium buyers were sold on the option full-stop.

Considering that Audi, BMW, Cadillac, and Mercedes-Benz all plan to have some kind of semi-autonomous, traffic jam assistance feature either on the market or coming in the next few years, and it’s obvious that luxury brands are well aware of what their buyers want.

Draw your own conclusions.

 

Jim Downey

 



Daisy, Daisy …

One of the things I’ve been a little bit surprised by has been just how many people have volunteered to me (or noted in reviews) just how much they like the ‘Experts’ in Communion of Dreams, and in particular how much of a favorite character Seth becomes to them in the course of the novel.

I don’t mean I’m surprised by how much people like the Experts, and particularly Seth. Hell, I intended the Experts to be likeable. I mean that this is something which people find remarkable enough to, well, remark on it.

That’s because humans tend to anthropomorphize just about everything. Our pets. Our cars. Our tools. Even nature. It’s one of the basic ways that we make sense of the world, as can be seen in religious and spiritual beliefs. Long before Siri there was HAL, and inasmuch as Communion of Dreams is an homage to 2001: A Space Odyssey I knew that Seth would resonate as a ‘real person’.*

So this morning I was amused to hear a story on NPR about how giving computers/robots more human characteristics tends to cause humans to develop a greater sense of empathy and socialization with them. Amused, but not surprised. From the article:

Many people have studied machine-human relations, and at this point it’s clear that without realizing it, we often treat the machines around us like social beings.

Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.

* * *

What the study demonstrated was that people do in fact obey the rule of reciprocity when it comes to computers. When the first computer was helpful to people, they helped it way more on the boring task than the other computer in the room. They reciprocated.

* * *

“The relationship is profoundly social,” he says. “The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human — in this case, it was just answering questions; it didn’t have a face on the screen, it didn’t have a voice — but given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.”

 

On the NPR website version of the story there’s also this delightful video showing what happens when a robot with cat/human characteristics begs a research subject to not switch it off:

 

Interesting. But again, unsurprising. Consider the whole sequence in 2001: A Space Odyssey when HAL is shut down — a powerful and poignant part of the movie. And referenced at the end of the video above.

Lastly, I laughed out loud once the story was over on NPR, and the transitional bit of music started up. Why? Because it was an instrumental work by the artist Vangelis, composed as the Love Theme from the movie Blade Runner.

Hilarious.

 

Jim Downey

*And for those who have read the book, consider what the role of Chu Ling’s devas is relative to Seth … 😉  We’ll see more of this reference in St. Cybi’s Well.