Filed under: Art, Artificial Intelligence, BoingBoing, Book Conservation, General Musings, movies, Science, tech
An earlier draft of Communion of Dreams gave one of the minor characters an interesting hobby: building his own computer entirely by hand, from making the integrated circuits on up. The point for him wasn’t to get a more powerful computer (remember, by the time of the novel, there is functional AI using tech several generations ahead of our current level). Rather, he just wanted to see whether it was possible to build a 1990s-era desktop computer on his own. I cut that bit in the final editing, since it was a distraction and did nothing to advance the story. But I did so reluctantly.
Well, this is something along those lines: video of a French artisan who makes his own vacuum tubes (triodes) for his amateur radio habit:
It’s a full 17 minutes long, and worth watching from start to finish. Being a craftsman myself, I love watching other people work with their hands performing complex operations with skill and grace. I have no need or real desire to make my own vacuum tubes, but this video almost makes me want to try. Wow.
Jim Downey
(Via BoingBoing.)
Filed under: Artificial Intelligence, General Musings, Humor, Patagonia, Science Fiction, Society, tech
My phone rang in the grocery store. I set my basket and the six-pack of 1554 down, pulled the phone out of my pocket. Didn’t recognize the number.
“This is Jim Downey.”
“Um, hello. You tried to place an order for some new Nikes this morning?”
“That’s right.”
“Well, I figured out why they couldn’t get the order to go through.”
“Why’s that?”
“Well, it’s your email address. It’s obscene.”
* * * * * * *
Over the weekend, I tried four times to place an order online for some new walking shoes. I wanted some for my upcoming trip to Patagonia. My current pair of walking shoes is still in decent shape, but I wanted a pair that could also serve as semi-dressy shoes for the trip. I even created an account with Nike to simplify ordering. But each time, I hit a glitch at the end of the check-out process, after jumping through multiple hoops and entering data again and again.
Finally, in frustration, I called the customer service number. After going through about a dozen levels of automated phone hell, I got to talk with “Megan”. She was quite helpful, but I still had to repeat to her all the information I had entered on four separate occasions. And at the end, she got the same error message that I did.
“Um, let me put you on hold.”
Sure.
Wait.
Wait.
About five minutes passed. “Hi, sorry about that. No one here can figure out why the system won’t process the order. But I’m just going to fill out a paper request with all the information, and send it over to the warehouse. They should be in touch with you later today to confirm shipment.”
“Thanks.”
* * * * * * *
“My email address is obscene?”
“Yeah. The system thinks so, anyway.”
The email address I gave them is one I use for stuff like this: crap@afineline.org. It’s also the one I use over at UTI. Cuts down on the amount of spam I get in my personal accounts.
I laughed. “I use that to cut down on junk I get from businesses.”
A laugh at the other end of the phone. “I understand.” Pause. “But, um, do you have a real email address I can use?”
“Oh, that one’s real. I just want people to know what I think of the messages they send me when they use it.”
“Ah. OK. Well, you should get a confirmation email later today that the shoes have shipped.”
“That’ll be fine. Thanks.” I hung up, and made a mental note to pass along word to others not to offend the computers at Nike – they seem to have rather delicate sensibilities.
Jim Downey
Filed under: Artificial Intelligence, Babylon 5, General Musings, Humor, J. Michael Straczynski, JMS, Predictions, Science Fiction, Society, tech
As noted previously, I’m a big fan of the SF television series Babylon 5. One of the things which exists in the reality of the series is the ability to erase the memories and personality of someone, and then install a new template personality. This is called a “mindwipe” or “the death of personality.” It’s an old science fiction idea, and used in some intelligent ways in the series, even if the process isn’t explained fully (or used consistently).
Well, I’m about to mindwipe my old friend, the computer here next to this one. It’s served me faithfully for over seven years, with minimal problems. But old age was starting to take a real toll – it could no longer run current software effectively, and web-standard tech such as modern Flash applications gave it a great deal of difficulty. The CD player no longer worked, and the monitor was dim and bloated. One side of the speaker system had quit some time back. My phone has more memory, I think – certainly my MP3 player does.
So, about six weeks ago I got a new computer, one capable of handling all the tasks I could throw at it. It allowed me to start video editing, and was perfectly happy to digest my old files and give them new vigor. The monitor is flat, thin, and quite attractive. It plays movies better, and will allow me to archive material on CD/DVDs once again. The laser mouse is faster and more accurate, and I’ll never have to clean its ball. Both sides of the sound system actually work. There’s more memory than I can possibly ever use . . . well, for at least a couple of years, anyway.
And today I finished migrating over the last of my software and data files. I’d been delaying doing this, taking my time, finding other things I needed to double check. But now the time has come. There is no longer a reason for me to keep my old system around. In a few moments I will wipe its memory, cleaning off what little personal data is on there. And in doing so, I will murder an old friend. A friend who saw me through writing Communion of Dreams, who was there as I created a lyric fantasy, who kept track of all my finances during the hard years of owning an art gallery. A friend who gave me solace through the long hours of being a care provider. A friend who allowed me to keep contact with people around the world, who brought me some measure of infamy, who would happily play games anytime I wanted (even if it wouldn’t always let me win).
So, goodbye, my old friend. I will mindwipe you, then give you away to someone else who needs you, who will gladly give you a home for at least a while longer, who will appreciate your abilities as I no longer can.
Farewell.
Jim Downey
Filed under: Artificial Intelligence, Connections, Expert systems, General Musings, Health, Predictions, Science, Science Fiction, Synesthesia, tech, Titan, Writing stuff
[This post contains mild spoilers about Communion of Dreams.]
One of the difficulties facing computer engineers and scientists in developing expert systems and true Artificial Intelligence is the paradigm they use. Simply put: working from structures analogous to the human brain, there has been a tendency to isolate functions and have them work independently. Even in modern computer science, such things as adaptive neural networks are understood to be analogous to biological neural networks in the brain, which serve a specific function:
Biological neural networks are made up of real biological neurons that are connected or functionally-related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
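For the technically inclined, here’s a toy sketch (in Python) of that modular design – two senses processed in isolation, meeting only at a late ‘higher cognitive’ integration stage. All of the names, sizes, and weights are my own invention, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# The traditional, modular paradigm: each sense gets its own isolated
# pathway, and the pathways only meet at a late integration stage.
W_visual = rng.normal(size=(16, 8))     # visual input -> visual features
W_auditory = rng.normal(size=(16, 8))   # auditory input -> auditory features
W_integrate = rng.normal(size=(16, 4))  # combined features -> decision

def perceive(visual_in, auditory_in):
    v = relu(visual_in @ W_visual)      # vision, processed in isolation
    a = relu(auditory_in @ W_auditory)  # hearing, processed in isolation
    # Only here -- the analogue of a "higher cognitive" stage such as
    # the superior colliculus -- do the two senses ever interact.
    return relu(np.concatenate([v, a]) @ W_integrate)

print(perceive(rng.normal(size=16), rng.normal(size=16)))
```

Note that in this design sound can never influence the visual pathway itself – which is exactly the assumption now in question.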
But what if the neuroscience on which these theories are based has been wrong?
Here’s the basics of what was Neuroscience 101: The auditory system records sound, while the visual system focuses, well, on the visuals, and never do they meet. Instead, a “higher cognitive” producer, like the brain’s superior colliculus, uses these separate inputs to create our cinematic experiences.
The textbook rewrite: The brain can, if it must, directly use sound to see and light to hear.
* * *
Researchers trained monkeys to locate a light flashed on a screen. When the light was very bright, they easily found it; when it was dim, it took a long time. But if a dim light made a brief sound, the monkeys found it in no time – too quickly, in fact, to be explained by the old theories.
In recordings from 49 neurons responsible for the earliest stages of visual processing, researchers found activation that mirrored the behavior. That is, when the sound was played, the neurons reacted as if there had been a stronger light, at a speed that can only be explained by a direct connection between the ear and eye brain regions, said researcher Ye Wang of the University of Texas in Houston.
The implication is that there is a great deal more flexibility – or ‘plasticity’ – in the structure of the brain than had been previously understood.
Well, yeah. Just consider how someone who has been blind since birth will have heightened awareness of other senses. Some have argued that this is simply a matter of such a person learning to make the greatest use of the senses they have. But others have suspected that they actually learn to use those structures in the brain normally associated with visual processing to boost the ability to process other sensory data. And that’s what the above research shows.
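To make the logic of that finding concrete, here’s a toy ‘race to threshold’ model (again in Python, with every number invented just for illustration). Detection time is simply how long it takes accumulated neural drive to reach a fixed threshold:

```python
# A toy "race to threshold" detection model: detection time is how long
# accumulated neural drive takes to reach a fixed threshold. All of the
# numbers are invented, purely to illustrate the logic of the experiment.
THRESHOLD = 100.0

def detection_time(drive_per_ms: float) -> float:
    return THRESHOLD / drive_per_ms  # milliseconds to reach threshold

dim_light = 2.0    # weak visual drive from a dim flash (arbitrary units/ms)
sound_boost = 3.0  # extra drive if sound reaches the visual neurons directly

# Old model: sound never touches the early visual neurons, so the tone
# cannot speed up detection of the dim flash at this stage.
print("late integration only:", detection_time(dim_light), "ms")

# Revised model: a direct ear-to-eye connection adds auditory drive at
# the earliest visual stage -- the "too quick for the old theories"
# speed-up seen in the monkeys.
print("direct connection:", detection_time(dim_light + sound_boost), "ms")
```

Under the old model the tone changes nothing at the early visual stage; with a direct connection, the dim flash is found more than twice as fast.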
OK, two things. One, this is why I have speculated in Communion of Dreams that synesthesia is more than just the confusion of sensory input – it is using our existing senses to construct not a simple linear view of the world, but a three-dimensional matrix (with the five senses along each axis of such a ‘cube’ structure – there’s a toy sketch of this after point two). In other words, synesthesia is more akin to a meta-cognitive function. That is why (as I mentioned a few days ago) the use of accelerator drugs in the novel allows users to take a step up in cognition and creativity, though at the cost of burning up the brain’s available store of neurotransmitters.
And two, this is also why I created the ‘tholin gel’ found on Titan to be a superior material as the basis of computers, and even specify that the threshold limit for a gel burr in such use is about the size of the human brain. Why? Well, because such a superconducting superfluid would not function as a simple neural network – rather, the entire burr of gel would function as a single structure, with enormous flexibility and plasticity. In other words, much more like the way the human brain is now understood to function.
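Going back to point one for a moment, here’s the promised toy sketch of that five-sense ‘cube’: one cell for every ordered triple of senses, holding the strength of that cross-modal binding. This is just an illustrative construction, not anything spelled out in the novel:

```python
import numpy as np

SENSES = ["sight", "sound", "touch", "taste", "smell"]

# One cell per ordered triple of senses: cube[i, j, k] holds the strength
# of the three-way cross-modal binding between senses i, j, and k.
cube = np.zeros((len(SENSES),) * 3)

def bind(a: str, b: str, c: str, strength: float) -> None:
    i, j, k = (SENSES.index(s) for s in (a, b, c))
    cube[i, j, k] += strength

# e.g. a tone that evokes both a color and a texture:
bind("sound", "sight", "touch", 0.8)
print(cube[SENSES.index("sound"), SENSES.index("sight"), SENSES.index("touch")])
```

A synesthete, in this picture, isn’t confusing senses – they’re reading off the cells the rest of us leave empty.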
So, perhaps in letting go of the inaccurate model for the way the brain works, we’ll take a big step closer to creating true artificial intelligence. Like in my book. It pays to be flexible, in our theories, in our thinking, and in how we see the world.
Jim Downey
Hat tip to ML for the news link.
Filed under: Arthur C. Clarke, Artificial Intelligence, Expert systems, Google, movies, Predictions, Science, Science Fiction, Society, tech
A good friend sent me a link to a longish piece in the latest edition of The Atlantic titled Is Google Making Us Stupid? by Nicholas Carr. It’s interesting, touching on several of the things I explore as future technologies in Communion of Dreams, and I would urge you to go read the whole thing.
Read it, but don’t believe it for a moment.
OK, Carr starts out with the basic premise that the human mind is a remarkably plastic organ, and is capable of reordering itself to a large degree even well into adulthood. Fine. Obvious. Anyone who has learned a new language, or mastered a new computer game, or acquired any other skill as an adult knows this, and knows how it expands one’s awareness of different and previously unperceived aspects of reality. That, actually, is one of the basic premises behind what I do with Communion, in opening up the human understanding of what the reality of the universe actually is (and how that is in contrast with our prejudices of what it is).
From this premise, Carr speculates that the increasing penetration of the internet into our intellectual lives is changing how we think. I cannot disagree, and have said as much in several of my posts here. For about two-thirds of the article, he discusses how the hyperlinked reality of the web tends to scatter our attention, making it more difficult for us to concentrate and think (or read) ‘deeply’. Anyone who has spent a lot of time reading online knows this phenomenon – pick up an old-fashioned paper book, and you’ll likely find yourself now and again wanting explanatory hyperlinks on this point or that for further clarification. This, admittedly, makes it more difficult to concentrate and immerse yourself in the text at hand, to lose yourself in either the author’s argument or the world they are creating.
But then Carr hits his main point, having established his premises. And it is this: that somehow this scattered attention turns us into information zombies, spoon-fed by the incipient AI of the Google search engine.
Huh?
No, seriously, that’s what he says. Going back to the time-motion efficiency studies pioneered by Frederick Winslow Taylor at the turn of the last century, which turned factory workers into ideal components for working with machines, he makes this argument:
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
Do you see the pivot there? He’s just spent over a score of paragraphs explaining how the internet has degraded our ability to concentrate because of hyperlinked distractions, but then he turns around and says that Google’s increasing sophistication at seeking out information will limit our curiosity about that information.
No. If anything, the ability to access a broader selection of possible references quickly, the ability to see a wider scope of data, will allow us to better use our human ability to understand patterns intuitively, and to delve down into the data pile to extract supporting or contradicting information. This will *feed* our curiosity, not limit it. More information will be hyperlinked – more jumps hither and yon for our minds to explore.
The mistake Carr has made is to use the wrong model for his analogy. He has tried to equate the knowledge economy with the industrial economy. Sure, there are forces at play which push us in the direction he sees – any business is going to want its workers to concentrate on the task at hand, and be efficient about it. That’s what the industrial revolution was all about, from a sociological point of view. This is why some employers will limit ‘surfing’ time, and push their workers to focus on managing a database, keeping accounts balanced, and monitoring production quality. While they are at work. But that has little or nothing to do with what people do on their own time, and how they use the tools created by information technology, which allow for much greater exploration and curiosity. And for those employees who are not just an extension of some automated process, those who write, or teach, or research – these tools are a godsend.
In fairness, Carr recognizes the weakness in his argument. He acknowledges that previous technological innovations on a par with the internet (first writing itself, then the development of the printing press) were initially met with gloom by those who thought they would let the human mind grow lazy by no longer needing to hold all necessary information within the brain itself. These predictions of doom proved wrong, of course, because while some discipline in holding facts in the brain was lost, the increased freedom in accessing information needed only fleetingly was a great boon, allowing people to turn their intellectual abilities to using those facts rather than just remembering them.
Carr ends his essay with this:
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Wrong. Wrong, wrong, wrong. This is a complete misreading of what happens in the movie. Kubrick’s vision was exactly the opposite – HAL was quite literally just following orders. Those orders were to preserve the secret nature of the mission, at the expense of the lives of the crew whom he murders or attempts to murder. That is the danger in allowing machinelike behaviour to be determinant. Kubrick (and Arthur C. Clarke) were, rather, showing that it is the human ability to assess unforeseen situations and synthesize information to draw a new conclusion (and act on it) which is our real strength.
*Sigh*
Jim Downey
(Hat tip to Wendy for the link to the Carr essay!)
Filed under: Artificial Intelligence, Augmented Reality, Expert systems, General Musings, Music, Predictions, Ray Kurzweil, Science, Science Fiction, Singularity, Society, Writing stuff
Just now, my good lady wife came through to tell me that she’s off to take a bit of a nap. Both of us are getting over a touch of something (which I mentioned last weekend), and on a deeper level still recovering from the profound exhaustion of having been care-givers for her mom.
Anyway, as she was preparing to head off, one of our cats insisted on going through the door which leads from my office into my bindery. This is where the cat food is.
“She wants through.”
“She wants owwwwt.”
“Any door leads out, as far as a cat is concerned.”
“Well, that door did once actually lead out, decades ago.”
“She remembers.”
“She can’t remember.”
“Nonetheless, the memory lingers.”
* * * * * * *
Via TDG, a fascinating interview with Douglas Richard Hofstadter last year, now translated into English. I’d read his GEB some 25 years ago, and have more or less kept tabs on his work since. The interview was about his most recent book, and touched on a number of subjects of interest to me, including the nature of consciousness, writing, Artificial Intelligence, and the Singularity. It’s long, but well worth the effort.
In discussing consciousness (which Hofstadter calls ‘the soul’ for reasons he explains), and the survival of shards of a given ‘soul’, the topic of writing and music comes up. Discussing how Chopin’s music has enabled shards of the composer’s soul to persist, Hofstadter makes this comment about his own desire to write:
I am not shooting at immortality through my books, no. Nor do I think Chopin was shooting at immortality through his music. That strikes me as a very selfish goal, and I don’t think Chopin was particularly selfish. I would also say that I think that music comes much closer to capturing the essence of a composer’s soul than do a writer’s ideas capture the writer’s soul. Perhaps some very emotional ideas that I express in my books can get across a bit of the essence of my soul to some readers, but I think that Chopin’s music probably does a lot better job (and the same holds, of course, for many composers).
I personally don’t have any thoughts about “shooting for immortality” when I write. I try to write simply in order to get ideas out there that I believe in and find fascinating, because I’d like to let other people be able to share those ideas. But intellectual ideas alone, no matter how fascinating they are, are not enough to transmit a soul across brains. Perhaps, as I say, my autobiographical passages — at least some of them — get tiny shards of my soul across to some people.
Exactly.
* * * * * * *
In April, I wrote this:
I’ve written only briefly about my thoughts on the so-called Singularity – that moment when our technological abilities converge to create a new transcendent artificial intelligence which encompasses humanity in a collective awareness. As envisioned by the Singularity Institute and a number of Science Fiction authors, I think that it is too simple – too utopian. Life is more complex than that. Society develops and copes with change in odd and unpredictable ways, with good and bad and a whole lot in the middle.
Here’s Hofstadter’s take from the interview, in responding to a question about Ray Kurzweil‘s notion of achieving effective immortality by ‘uploading’ a personality into a machine hardware:
Well, the problem is that a soul by itself would go crazy; it has to live in a vastly complex world, and it has to cohabit that world with many other souls, commingling with them just as we do here on earth. To be sure, Kurzweil sees those things as no problem, either — we’ll have virtual worlds galore, “up there” in Cyberheaven, and of course there will be souls by the barrelful all running on the same hardware. And Kurzweil sees the new software souls as intermingling in all sorts of unanticipated and unimaginable ways.
Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life. Once again, I don’t want to be there if such a vision should ever come to pass. But I doubt that it will come to pass for a very long time. How long? I just don’t know. Centuries, at least. But I don’t know. I’m not a futurologist in the least. But Kurzweil is far more “optimistic” (i.e., depressingly pessimistic, from my perspective) about the pace at which all these world-shaking changes will take place.
Interesting.
* * * * * * *
Lastly, the interview takes up the central theme of I Am a Strange Loop: that consciousness is an emergent phenomenon which stems from vast and subtle physical mechanisms in the brain. This is also the core ‘meaning’ of GEB, though that was often missed by readers and reviewers who got hung up on the ostensible themes, topics, and playfulness of that book. Hofstadter calls this emergent consciousness a self-referential hallucination, and it reflects much of his interest in cognitive science over the years.
[Mild spoilers ahead.]
In Communion of Dreams I played with this idea and a number of related ones, particularly pertaining to the character of Seth. It is also why I decided that I needed to introduce a whole new technology – the superfluid tholin-gel found on Titan – as the basis for the AI systems at the heart of the story. Because the gel is not human-manufactured, but rather something a bit mysterious. Likewise, the use of this material requires another sophisticated computer to ‘boot it up’, after which the gel itself sustains the energy matrix necessary for continued operation. At the culmination of the story, this ‘self-referential hallucination’ frees itself from its initial containment.
Why did I do this?
Partly in homage to Hofstadter (though you will find no mention of him in the book, as far as I recall). Partly because it plays with other ideas I have about the nature of reality. If we (conscious beings) are an emergent phenomenon, arising from physical activity, then it seems to me that physical things can be impressed with our consciousness. This is why I find his comments about shards of a soul existing beyond the life of the body of the person to be so intriguing.
So I spent some 130,000 words exploring that idea in Communion. Not overtly – not often anyway – but that is part of the subtext of what is going on in that book.
* * * * * * *
“Any door leads out, as far as a cat is concerned.”
“Well, that door did once actually lead out, decades ago.”
“She remembers.”
“She can’t remember.”
“Nonetheless, the memory lingers,” I said, “impressed on the door itself. Maybe the cat understands that at a level we don’t.”
Jim Downey
(Related post at UTI.)
