Communion Of Dreams


This sucks.

An earlier version of Communion of Dreams gave one of the minor characters an interesting hobby: building his own computer entirely by hand, from making the integrated circuits on up. The point for him wasn’t to get a more powerful computer (remember, by the time of the novel, there is functional AI using tech which is several generations ahead of our current level). Rather, he just wanted to see whether it was possible to build a 1990’s-era desktop computer on his own. I cut that bit out in the final editing, since it was a bit of a distraction and did nothing to advance the story. But I did so reluctantly.

Well, this is something along those lines: video of a French artisan who makes his own vacuum tubes (triodes) for his amateur radio habit:

It’s a full 17 minutes long, and worth watching from start to finish. Being a craftsman myself, I love watching other people work with their hands performing complex operations with skill and grace. I have no need or real desire to make my own vacuum tubes, but this video almost makes me want to try. Wow.

Jim Downey

(Via BoingBoing.)



“Well, it’s obscene.”
September 16, 2008, 10:53 am
Filed under: Artificial Intelligence, General Musings, Humor, Patagonia, Science Fiction, Society, tech

My phone rang in the grocery store.  I set my basket and the six-pack of 1554 down, pulled the phone out of my pocket.  Didn’t recognize the number.

“This is Jim Downey.”

“Um, hello.  You tried to place an order for some new Nikes this morning?”

“That’s right.”

“Well, I figured out why they couldn’t get the order to go through.”

“Why’s that?”

“Well, it’s your email address.  It’s obscene.”

* * * * * * *

Over the weekend, I tried four times to place an order online for some new walking shoes. I wanted some for my upcoming trip to Patagonia. My current pair of walking shoes is still in decent shape, but I wanted a pair that could also serve as semi-dressy shoes for the trip. I even created an account with Nike, to simplify ordering. But each time I got a glitch at the end of the whole check-out process, after jumping through multiple hoops and entering the same data again and again.

Finally, in frustration, I called the customer service number.  After going through about a dozen levels of automated phone hell, I got to talk with “Megan”.  She was quite helpful, but I still had to repeat to her all the information I had entered on four separate occasions.  And at the end, she got the same error message that I did.

“Um, let me put you on hold.”

Sure.

Wait.

Wait.

About five minutes pass.  “Hi, sorry about that.  No one here can figure out why the system won’t process the order.  But I’m just going to fill out a paper request with all the information, and send it over to the warehouse.  They should be in touch with you later today to confirm shipment.”

“Thanks.”

* * * * * * *

“My email address is obscene?”

“Yeah.  The system thinks so, anyway.”

The email address I gave them is one I use for stuff like this: crap@afineline.org. It’s also the one I use over at UTI. Cuts down on the amount of spam I get in my personal accounts.

I laughed.  “I use that to cut down on junk I get from businesses.”

A laugh at the other end of the phone.  “I understand.”  Pause.  “But, um, do you have a real email address I can use?”

“Oh, that one’s real.  I just want people to know what I think of the messages they send me when they use it.”

“Ah.  OK.  Well, you should get a confirmation email later today that the shoes have shipped.”

“That’ll be fine.  Thanks.”  I hung up, and made a mental note to pass along word to others not to offend the computers at Nike – they seem to have rather delicate sensibilities.
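My guess, for what it’s worth, is that the order system just runs a crude word blacklist against every text field, the email address included. Something like this hypothetical sketch in Python (entirely mine, not anything I actually know about Nike’s software):

BLACKLIST = {"crap", "damn"}

def field_is_obscene(value):
    # Flag any field containing a blacklisted word as a substring.
    value = value.lower()
    return any(word in value for word in BLACKLIST)

print(field_is_obscene("crap@afineline.org"))   # True - order rejected
print(field_is_obscene("jim@afineline.org"))    # False - order goes through

A check that blunt will happily reject a perfectly deliverable address, which seems to be about the level of the sensibilities involved.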

Jim Downey



About to murder an old friend.

As noted previously, I’m a big fan of the SF television series Babylon 5.  One of the things which exists in the reality of the series is the ability to erase the memories and personality of someone, and then install a new template personality.  This is called a “mindwipe” or “the death of personality.”  It’s an old science fiction idea, and used in some intelligent ways in the series, even if the process isn’t explained fully (or used consistently).

Well, I’m about to mindwipe my old friend, the computer here next to this one. It’s served me faithfully for over seven years, with minimal problems. But old age was starting to take a real toll – it could no longer run current software effectively, and web-standard tech such as modern Flash applications caused it a great deal of difficulty. The CD player no longer worked, and the monitor was dark and bloated. One side of the speaker system had quit some time back. My phone has more memory, I think – certainly my MP3 player does.

So, about six weeks ago I got a new computer, one capable of handling all the tasks I could throw at it.  It allowed me to start video editing, and was perfectly happy to digest my old files and give them new vigor.  The monitor is flat, thin, and quite attractive.  It plays movies better, and will allow me to archive material on CD/DVDs once again.  The laser mouse is faster and more accurate, and I’ll never have to clean its ball.  Both sides of the sound system actually work.  There’s more memory than I can possibly ever use . . . well, for at least a couple of years, anyway.

And today I finished migrating over the last of my software and data files.  I’d been delaying doing this, taking my time, finding other things I needed to double check.  But now the time has come.  There is no longer a reason for me to keep my old system around.  In a few moments I will wipe its memory, cleaning off what little personal data is on there.  And in doing so, I will murder an old friend.  A friend who saw me through writing Communion of Dreams, who was there as I created a lyric fantasy, who kept track of all my finances during the hard years of owning an art gallery.  A friend who gave me solace through the long hours of being a care provider.  A friend who allowed me to keep contact with people around the world, who brought me some measure of infamy, who would happily play games anytime I wanted (even if it wouldn’t always let me win).

So, goodbye, my old friend.  I will mindwipe you, then give you away to someone else who needs you, who will gladly give you a home for at least a while longer, who will appreciate your abilities as I no longer can.

Farewell.

Jim Downey



Flexibility.

[This post contains mild spoilers about Communion of Dreams.]

One of the difficulties facing computer engineers and scientists in developing expert systems and true Artificial Intelligence is the paradigm they use. Simply put, working from structures analogous to the human brain, there has been a tendency to isolate functions and have them work independently. Even in modern computer science, such things as adaptive neural networks are understood to be analogous to biological neural networks in the brain, which serve specific functions:

Biological neural networks are made up of real biological neurons that are connected or functionally-related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.

But what if the neuroscience on which these theories are based has been wrong?

Here’s the basics of what was Neuroscience 101: The auditory system records sound, while the visual system focuses, well, on the visuals, and never do they meet. Instead, a “higher cognitive” producer, like the brain’s superior colliculus, uses these separate inputs to create our cinematic experiences.

The textbook rewrite: The brain can, if it must, directly use sound to see and light to hear.

* * *

Researchers trained monkeys to locate a light flashed on a screen. When the light was very bright, they easily found it; when it was dim, it took a long time. But if a dim light made a brief sound, the monkeys found it in no time – more quickly, in fact, than can be explained by the old theories.

Recording from 49 neurons responsible for the earliest stages of visual processing, researchers found activation that mirrored the behavior. That is, when the sound was played, the neurons reacted as if there had been a stronger light, at a speed that can only be explained by a direct connection between the ear and eye brain regions, said researcher Ye Wang of the University of Texas in Houston.

The implication is that there is a great deal more flexibility – or ‘plasticity’ – in the structure of the brain than had been previously understood.

Well, yeah. Just consider how someone who has been blind since birth will have heightened awareness of other senses.  Some have argued that this is simply a matter of such a person learning to make the greatest use of the senses they have.  But others have suspected that they actually learn to use those structures in the brain normally associated with visual processing to boost the ability to process other sensory data.  And that’s what the above research shows.
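Just to make that concrete, here’s a toy model of my own devising (nothing from the actual study, and every number in it is made up): treat detection as evidence trickling in until it crosses a threshold. A dim light trickles in slowly; add even a modest cross-modal boost from a sound, and the threshold is crossed far sooner, which is the shape of the result described above. A minimal sketch in Python:

import random

def detection_time(light_strength, sound_boost=0.0, threshold=10.0):
    # Crude evidence-accumulation model of detecting a flash: evidence
    # builds at a rate set by the light's strength plus any cross-modal
    # boost, with a bit of noise, and we count the steps until the
    # running total crosses the threshold.
    evidence, steps = 0.0, 0
    while evidence < threshold:
        evidence += light_strength + sound_boost + random.gauss(0, 0.05)
        steps += 1
    return steps

random.seed(1)
print(detection_time(1.0))                    # bright light: found almost at once
print(detection_time(0.1))                    # dim light: takes roughly ten times longer
print(detection_time(0.1, sound_boost=0.4))   # dim light plus a sound: far faster than dim alone

The point isn’t the numbers, just the shape: a second channel feeding the same accumulator speeds things up dramatically, no “higher cognitive” producer required.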

OK, two things.  One, this is why I have speculated in Communion of Dreams that synesthesia is more than just the confusion of sensory input – it is using our existing senses to construct not a simple linear view of the world, but a matrix in three dimensions (with the five senses on each axis of such a ‘cube’ structure).  In other words, synesthesia is more akin to a meta-cognitive function.  That is why (as I mentioned a few days ago) the use of accelerator drugs in the novel allows users to take a step-up in cognition and creativity, though at the cost of burning up the brain’s available store of neurotransmitters.

And two, this is also why I created the ‘tholin gel’ found on Titan to be a superior material as the basis of computers, and even specify that the threshold limit for a gel burr in such use is about the size of the human brain.  Why?  Well, because such a superconducting superfluid would not function as a simple neural network – rather, the entire burr of gel would function as a single structure, with enormous flexibility and plasticity.  In other words, much more like the way the human brain functions as is now coming to be understood.

So, perhaps in letting go of the inaccurate model for the way the brain works, we’ll take a big step closer to creating true artificial intelligence.  Like in my book.  It pays to be flexible, in our theories, in our thinking, and in how we see the world.

Jim Downey

Hat tip to ML for the news link.



Playtime!
August 16, 2008, 7:47 am
Filed under: Artificial Intelligence, Astronomy, Humor, MetaFilter, Science, Space, tech

OK, I spent *way* too much time playing this game last night: Orbitrunner. And because I’m the kind of guy that I am, I wanted to inflict it on you.

It’s actually a very interesting bit of gaming, for as simple as it seems at first glance. Here’s the description from the site:

Control the Sun with your mouse. Use it to manipulate the planets’ paths. The Sun’s pull gets stronger as planets get closer. If the gravity is at a right angle to the direction of travel, an orbit can form. Make sure planets don’t leave the screen or collide!

I’m sure that they have established some fairly basic approximations for your computer to manipulate, but it still addresses one of the classic problems of physics: how to calculate the orbital dynamics for two or more bodies in motion. Even if you restrict the interactions to one orbital plane, this is a surprisingly difficult problem for more than two bodies, and has been for centuries. From ScienceWorld:

The three-body problem considers three mutually interacting masses m1, m2, and m3. In the restricted three-body problem, m3 is taken to be small enough so that it does not influence the motion of m1 and m2, which are assumed to be in circular orbits about their center of mass. The orbits of the three masses are further assumed to all lie in a common plane. If m1 and m2 are in elliptical instead of circular orbits, the problem is variously known as the “elliptic restricted problem” or “pseudorestricted problem” (Szebehely 1967, pp. 30 and 39).

The efforts of many famous mathematicians have been devoted to this difficult problem, including Euler and Lagrange (1772), Jacobi (1836), Hill (1878), Poincaré (1899), Levi-Civita (1905), and Birkhoff (1915). In 1772, Euler first introduced a synodic (rotating) coordinate system. Jacobi (1836) subsequently discovered an integral of motion in this coordinate system (which he independently discovered) that is now known as the Jacobi integral. Hill (1878) used this integral to show that the Earth-Moon distance remains bounded from above for all time (assuming his model for the Sun-Earth-Moon system is valid), and Brown (1896) gave the most precise lunar theory of his time.

And Wikipedia has a very good entry (beyond my math level) about the broader n-body problem:

General considerations: solving the n-body problem

In the physical literature about the n-body problem (n ≥ 3), sometimes reference is made to the impossibility of solving the n-body problem. However one has to be careful here, as this applies to the method of first integrals (compare the theorems by Abel and Galois about the impossibility of solving algebraic equations of degree five or higher by means of formulas only involving roots).

The n-body problem contains 6n variables, since each point particle is represented by three space (displacement) and three velocity components. First integrals (for ordinary differential equations) are functions that remain constant along any given solution of the system, the constant depending on the solution. In other words, integrals provide relations between the variables of the system, so each scalar integral would normally allow the reduction of the system’s dimension by one unit. Of course, this reduction can take place only if the integral is an algebraic function not very complicated with respect to its variables. If the integral is transcendent the reduction cannot be performed.
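My guess is that the game itself gets by with something far simpler than any of that: treat the Sun as the only meaningful source of gravity, ignore planet-planet attraction entirely, and just nudge each planet forward in small time steps. Here’s a minimal sketch of that kind of shortcut in Python (the names, units, and numbers are mine, not anything from the actual game):

import math

G = 1.0            # gravitational constant in made-up game units
SUN_MASS = 1000.0

def step(planets, sun_x, sun_y, dt=0.01):
    # Advance each planet one time step with semi-implicit Euler: update
    # the velocity from the Sun's pull, then the position from the new
    # velocity. Only the Sun attracts anything here, which is the sort
    # of simplification a casual game can get away with; the real n-body
    # problem would also need every planet-planet interaction.
    for p in planets:                   # each planet is a dict: x, y, vx, vy
        dx, dy = sun_x - p["x"], sun_y - p["y"]
        r = math.hypot(dx, dy)
        a = G * SUN_MASS / (r * r)      # acceleration toward the Sun
        p["vx"] += a * (dx / r) * dt
        p["vy"] += a * (dy / r) * dt
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt

# A single planet given a sideways shove settles into a slightly
# elliptical orbit; 20,000 steps is roughly one trip around the Sun
# with these numbers.
planet = {"x": 100.0, "y": 0.0, "vx": 0.0, "vy": 3.0}
for _ in range(20000):
    step([planet], sun_x=0.0, sun_y=0.0)

Add a second massive body to that loop and the tidy behavior disappears, which is exactly why the three-body problem above has kept mathematicians busy for centuries.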

Well, have fun with it. And be amused that all that phenomenal computing power at your fingertips is going into a simple little game. Such is the future.

Jim Downey

(Via MeFi. Cross posted to UTI.)



“Doom, DOOM, I say!!”

A good friend sent me a link to a longish piece in the latest edition of The Atlantic titled Is Google Making Us Stupid? by author Nicholas Carr. It’s interesting, and touches on several of the things I explore as future technologies in Communion of Dreams, and I would urge you to go read the whole thing.

Read it, but don’t believe it for a moment.

OK, Carr starts out with the basic premise that the human mind is a remarkably plastic organ, and is capable of reordering itself to a large degree even well into adulthood. Fine. Obvious. Anyone who has learned a new language, or mastered a new computer game, or acquired any other skill as an adult knows this, and knows how it expands one’s awareness of different and previously unperceived aspects of reality. That, actually, is one of the basic premises behind what I do with Communion, in opening up the human understanding of what the reality of the universe actually is (and how that is in contrast with our prejudices of what it is).

From this premise, Carr speculates that the increasing penetration of the internet into our intellectual lives is changing how we think. I cannot disagree, and have said as much in several of my posts here. For about 2/3 of the article he is discussing how the hyperlinked reality of the web tends to scatter our attention, making it more difficult for us to concentrate and think (or read) ‘deeply’. Anyone who has spent a lot of time reading online knows this phenomenon – pick up an old-fashioned paper book, and you’ll likely find yourself now and again wanting explanatory hyperlinks on this point or that for further clarification. This, admittedly, makes it more difficult to concentrate and immerse yourself into the text at hand, to lose yourself in either the author’s argument or the world they are creating.

But then Carr hits his main point, having established his premises. And it is this: that somehow this scattered attention turns us into information zombies, spoon-fed by the incipient AI of the Google search engine.

Huh?

No, seriously, that’s what he says. Going back to the time-motion efficiency studies pioneered by Frederick Winslow Taylor at the turn of the last century, which turned factory workers into ideal components for working with machines, he makes this argument:

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Do you see the pivot there? He’s just spent over a score of paragraphs explaining how the internet has degraded our ability to concentrate because of hyperlinked distractions, but then he turns around and says that Google’s increasing sophistication at seeking out information will limit our curiosity about that information.

No. If anything, the ability to access a broader selection of possible references quickly, the ability to see a wider scope of data, will allow us to better use our human ability to understand patterns intuitively, and to delve down into the data pile to extract supporting or contradicting information. This will *feed* our curiosity, not limit it. More information will be hyperlinked – more jumps hither and yon for our minds to explore.

The mistake Carr has made is to use the wrong model for his analogy. He has tried to equate the knowledge economy with the industrial economy. Sure, there are forces at play which push us in the direction he sees – any business is going to want its workers to concentrate on the task at hand, and be efficient about it. That’s what the industrial revolution was all about, from a sociological point of view. This is why some employers will limit ‘surfing’ time, and push their workers to focus on managing a database, keeping accounts balanced, and monitoring production quality. While they are at work. But that has little or nothing to do with what people do on their own time, and how they use the tools created by information technology, which allow for much greater exploration and curiosity. And for those employees who are not just an extension of some automated process, those who write, or teach, or research – these tools are a godsend.

In fairness, Carr recognizes the weakness in his argument. He acknowledges that previous technological innovations on a par with the internet (first writing itself, then the development of the printing press) were initially met with gloom by those who thought they would let the human mind grow lazy by no longer needing to hold all necessary information within the brain itself. These predictions of doom proved wrong, of course, because while some discipline in holding facts in the brain was lost, the freedom to access information needed only fleetingly was a great boon, allowing people to turn their intellectual abilities to using those facts rather than merely remembering them.

Carr ends his essay with this:

I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

Wrong. Wrong, wrong, wrong. This is a complete misreading of what happens in the movie. Kubrick’s vision was exactly the opposite – HAL was quite literally just following orders. Those orders were to preserve the secret nature of the mission, at the expense of the lives of the crew whom he murders or attempts to murder. That is the danger in allowing machinelike behaviour to be determinant. Kubrick (and Arthur C. Clarke) were, rather, showing that it is the human ability to assess unforeseen situations and synthesize information to draw a new conclusion (and act on it) which is our real strength.

*Sigh*

Jim Downey

(Hat tip to Wendy for the link to the Carr essay!)



The memory remains.

Just now, my good lady wife came through to tell me that she’s off to take a bit of a nap. Both of us are getting over a touch of something (which I had mentioned last weekend), and on a deeper level still recovering from the profound exhaustion of having been care-givers for her mom.

Anyway, as she was preparing to head off, one of our cats insisted on going through the door which leads from my office into my bindery. This is where the cat food is.

“She wants through.”

“She wants owwwwt.”

“Any door leads out, as far as a cat is concerned.”

“Well, that door did once actually lead out, decades ago.”

“She remembers.”

“She can’t remember.”

“Nonetheless, the memory lingers.”

* * * * * * *

Via TDG, a fascinating interview with Douglas Richard Hofstadter last year, now translated into English. I’d read his GEB some 25 years ago, and have more or less kept tabs on his work since. The interview was about his most recent book, and touched on a number of subjects of interest to me, including the nature of consciousness, writing, Artificial Intelligence, and the Singularity. It’s long, but well worth the effort.

In discussing consciousness (which Hofstadter calls ‘the soul’ for reasons he explains), and the survival of shards of a given ‘soul’, the topic of writing and music comes up. Discussing how Chopin’s music has enabled shards of the composer’s soul to persist, Hofstadter makes this comment about his own desire to write:

I am not shooting at immortality through my books, no. Nor do I think Chopin was shooting at immortality through his music. That strikes me as a very selfish goal, and I don’t think Chopin was particularly selfish. I would also say that I think that music comes much closer to capturing the essence of a composer’s soul than do a writer’s ideas capture the writer’s soul. Perhaps some very emotional ideas that I express in my books can get across a bit of the essence of my soul to some readers, but I think that Chopin’s music probably does a lot better job (and the same holds, of course, for many composers).

I personally don’t have any thoughts about “shooting for immortality” when I write. I try to write simply in order to get ideas out there that I believe in and find fascinating, because I’d like to let other people be able to share those ideas. But intellectual ideas alone, no matter how fascinating they are, are not enough to transmit a soul across brains. Perhaps, as I say, my autobiographical passages — at least some of them — get tiny shards of my soul across to some people.

Exactly.

* * * * * * *

In April, I wrote this:

I’ve written only briefly about my thoughts on the so-called Singularity – that moment when our technological abilities converge to create a new transcendent artificial intelligence which encompasses humanity in a collective awareness. As envisioned by the Singularity Institute and a number of Science Fiction authors, I think that it is too simple – too utopian. Life is more complex than that. Society develops and copes with change in odd and unpredictable ways, with good and bad and a whole lot in the middle.

Here’s Hofstadter’s take from the interview, responding to a question about Ray Kurzweil’s notion of achieving effective immortality by ‘uploading’ a personality into machine hardware:

Well, the problem is that a soul by itself would go crazy; it has to live in a vastly complex world, and it has to cohabit that world with many other souls, commingling with them just as we do here on earth. To be sure, Kurzweil sees those things as no problem, either — we’ll have virtual worlds galore, “up there” in Cyberheaven, and of course there will be souls by the barrelful all running on the same hardware. And Kurzweil sees the new software souls as intermingling in all sorts of unanticipated and unimaginable ways.

Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life. Once again, I don’t want to be there if such a vision should ever come to pass. But I doubt that it will come to pass for a very long time. How long? I just don’t know. Centuries, at least. But I don’t know. I’m not a futurologist in the least. But Kurzweil is far more “optimistic” (i.e., depressingly pessimistic, from my perspective) about the pace at which all these world-shaking changes will take place.

Interesting.

* * * * * * *

Lastly, the interview is about the central theme of I am a Strange Loop: that consciousness is an emergent phenomenon which stems from vast and subtle physical mechanisms in the brain. This is also the core ‘meaning’ of GEB, though that was often missed by readers and reviewers who got hung up on the ostensible themes, topics, and playfulness of that book. Hofstadter calls this emergent consciousness a self-referential hallucination, and it reflects much of his interest in cognitive science over the years.

[Mild spoilers ahead.]

In Communion of Dreams I played with this idea and a number of related ones, particularly pertaining to the character of Seth. It is also why I decided that I needed to introduce a whole new technology, the superfluid tholin gel found on Titan, as the basis for the AI systems at the heart of the story. Because the gel is not human-manufactured, but rather something a bit mysterious. Likewise, the use of this material requires another sophisticated computer to ‘boot it up’, and then the gel itself is responsible for sustaining the energy matrix necessary for continued operation. At the culmination of the story, this ‘self-referential hallucination’ frees itself from its initial containment.

Why did I do this?

Partly in homage to Hofstadter (though you will find no mention of him in the book, as far as I recall). Partly because it plays with other ideas I have about the nature of reality. If we (conscious beings) are an emergent phenomenon, arising from physical activity, then it seems to me that physical things can be impressed with our consciousness. This is why I find his comments about shards of a soul existing beyond the life of the body of the person to be so intriguing.

So I spent some 130,000 words exploring that idea in Communion. Not overtly – not often anyway – but that is part of the subtext of what is going on in that book.

* * * * * * *

“Any door leads out, as far as a cat is concerned.”

“Well, that door did once actually lead out, decades ago.”

“She remembers.”

“She can’t remember.”

“Nonetheless, the memory lingers,” I said, “impressed on the door itself. Maybe the cat understands that at a level we don’t.”

Jim Downey

(Related post at UTI.)



I wonder which model Nexus this is?

Well, someone sure has been having some fun:

The company AI ROBOTICS was founded 2 years ago by Etienne Fresse and Yoichi Yamato, both robotics specialists working on developing cutting-edge technologies. During the last 3 years the two founders have dedicated all their time and energy to their project “robot woman LISA” which thanks to the support of numerous foreign investors will be presented to the public on June 11 2008. The company’s philosophy is to enhance the conditions of human life and to give as many people as possible access to new technologies. The company AI ROBOTICS is based in Kobe, Japan.

And to think that some people on MeFi (where I came across this) thought it was for real.  Sheesh.  But it does generate some discussion about what happens in the future when this reality does actually show up.

Hmm . . . seems that someone has considered this matter before . . .

Jim Downey



Convergence.

When I went away to college in 1976, I took with me the small black & white television I had received for my eighth birthday. Mostly my roommates and I would watch The Muppet Show before going off to dinner. Otherwise, I really didn’t have the time for television – there was studying to do, drugs and alcohol to abuse, sex to have.

Post-college I had a massive old console color TV I had inherited. But given that I lived in Montezuma, Iowa, reception was dismal. I found other things to do with my time, mostly SCA-related activities and gaming. I took that console set with me to graduate school in Iowa City, but it never really worked right, and besides I was still busy with SCA stuff and schoolwork.

For most of the ’90s I did watch some TV as it was being broadcast, but even then my wife and I preferred to time-shift using a VCR, skipping commercials and seeing the things we were interested in at times when it was convenient for us.

This century, living here and caring for someone with Alzheimer’s, we had to be somewhat more careful about selecting shows that wouldn’t contribute to Martha Sr’s confusion and agitation. Meaning mostly stuff we rented or movies/series we liked well enough to buy on DVD. I would now and then flip on the cable and skip around a bit after we got Martha Sr. to bed, see if there was anything interesting, but for the most part I relied on friends recommending stuff. And besides, I was busy working on Communion of Dreams, or blogging here or there, or writing a newspaper column or whatever.

Nowadays we don’t even have cable. There’s just no reason to pay for it. I’d much rather get my news and information online. So, basically, I have missed almost every television show and special event of the last thirty years. There are vast swaths of cultural reference I only know by inference, television shows that “define” American values I’ve never seen. I don’t miss it.

And you know what? You are becoming like me, more and more all the time.

* * * * * * *

Via Cory Doctorow at BoingBoing, this very interesting piece by Clay Shirky:

Gin, Television, and Social Surplus

* * *

If I had to pick the critical technology for the 20th century, the bit of social lubricant without which the wheels would’ve come off the whole enterprise, I’d say it was the sitcom. Starting with the Second World War a whole series of things happened–rising GDP per capita, rising educational attainment, rising life expectancy and, critically, a rising number of people who were working five-day work weeks. For the first time, society forced onto an enormous number of its citizens the requirement to manage something they had never had to manage before–free time.

And what did we do with that free time? Well, mostly we spent it watching TV.

We did that for decades. We watched I Love Lucy. We watched Gilligan’s Island. We watch Malcolm in the Middle. We watch Desperate Housewives. Desperate Housewives essentially functioned as a kind of cognitive heat sink, dissipating thinking that might otherwise have built up and caused society to overheat.

And it’s only now, as we’re waking up from that collective bender, that we’re starting to see the cognitive surplus as an asset rather than as a crisis. We’re seeing things being designed to take advantage of that surplus, to deploy it in ways more engaging than just having a TV in everybody’s basement.

OK, I try and be very careful about “fair use” of other people’s work, limiting myself to just a couple of paragraphs from a given article or blog post in order to make a point. But while I say that you should go read his whole post, I’m going to use another passage from Shirky here:

Did you ever see that episode of Gilligan’s Island where they almost get off the island and then Gilligan messes up and then they don’t? I saw that one. I saw that one a lot when I was growing up. And every half-hour that I watched that was a half an hour I wasn’t posting at my blog or editing Wikipedia or contributing to a mailing list. Now I had an ironclad excuse for not doing those things, which is none of those things existed then. I was forced into the channel of media the way it was because it was the only option. Now it’s not, and that’s the big surprise. However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it’s worse to sit in your basement and try to figure if Ginger or Mary Ann is cuter.

And I’m willing to raise that to a general principle. It’s better to do something than to do nothing. Even lolcats, even cute pictures of kittens made even cuter with the addition of cute captions, hold out an invitation to participation. When you see a lolcat, one of the things it says to the viewer is, “If you have some fancy sans-serif fonts on your computer, you can play this game, too.” And that message–I can do that, too–is a big change.

It is a huge change. It is the difference between passively standing/sitting by and watching, and doing the same thing yourself. Whether it is sports, or sex, or politics, or art – doing it yourself means making better use of the limited time you have in this life.

* * * * * * *

And now, the next component of my little puzzle this morning.

Via MeFi, this NYT essay about the explosion of authorship:

You’re an Author? Me Too!

It’s well established that Americans are reading fewer books than they used to. A recent report by the National Endowment for the Arts found that 53 percent of Americans surveyed hadn’t read a book in the previous year — a state of affairs that has prompted much soul-searching by anyone with an affection for (or business interest in) turning pages. But even as more people choose the phantasmagoria of the screen over the contemplative pleasures of the page, there’s a parallel phenomenon sweeping the country: collective graphomania.

In 2007, a whopping 400,000 books were published or distributed in the United States, up from 300,000 in 2006, according to the industry tracker Bowker, which attributed the sharp rise to the number of print-on-demand books and reprints of out-of-print titles. University writing programs are thriving, while writers’ conferences abound, offering aspiring authors a chance to network and “workshop” their work. The blog tracker Technorati estimates that 175,000 new blogs are created worldwide each day (with a lucky few bloggers getting book deals). And the same N.E.A. study found that 7 percent of adults polled, or 15 million people, did creative writing, mostly “for personal fulfillment.”

* * *

Mark McGurl, an associate professor of English at the University of California, Los Angeles, and the author of a forthcoming book on the impact of creative writing programs on postwar American literature, agrees that writing programs have helped expand the literary universe. “American literature has never been deeper and stronger and more various than it is now,” McGurl said in an e-mail message. Still, he added, “one could put that more pessimistically: given the manifold distractions of modern life, we now have more great writers working in the United States than anyone has the time or inclination to read.”

An interesting discussion about this happens in that thread at MetaFilter. John Scalzi, no stranger at all to the world of blogging and online publishing, says this there:

I see nothing but upside in people writing and self-publishing, especially now that companies like Lulu make it easy for them to do so without falling prey to avaricious vanity presses. People who self-publish are in love with the idea of writing, and in love with the idea of books. Both are good for me personally, and good for the idea of a literate society moving forward.

Indeed. And it is pretty clearly a manifestation of what Shirky is talking about above.

I’ve written only briefly about my thoughts on the so-called Singularity – that moment when our technological abilities converge to create a new transcendent artificial intelligence which encompasses humanity in a collective awareness. As envisioned by the Singularity Institute and a number of Science Fiction authors, I think that it is too simple – too utopian. Life is more complex than that. Society develops and copes with change in odd and unpredictable ways, with good and bad and a whole lot in the middle.

For years, people have bemoaned how the developing culture of the internet is changing aspects of life for the worse. Newspapers are struggling. There’s the whole “Cult of the Amateur” nonsense. Just this morning on NPR there was a comment from a listener about how “blogs are just gossip”, in reaction to the new Sunday Soapbox political blog WESun has launched. And there is a certain truth to the complaints and hand-wringing. Maybe we just need to see this in context, though – that the internet is just one aspect of our changing culture, something which is shifting us away from being purely observers of the complex and confusing world around us, to being participants to a greater degree.

Sure, a lot of what passes for participation is fairly pointless, time-consuming crap in its own right. I am reminded of this brilliant xkcd strip. The activity itself is little better than just watching reruns of Gilligan’s Island or Seinfeld or whatever. But the *act* of participating is empowering, and instructive, and just plain good exercise – preparing the participant for being more involved, more in control of their own life and world.

We learn by doing. And if, by doing, we escape the numbing effects of being force-fed pablum from the television set for even a little while, that’s good. What if our Singularity is not a technological one, but a social one? What if, as people become more active, less passive, we actually learn to tap into the collective intelligence of humankind – not as a hive mind, but as something akin to an ideal Jeffersonian Democracy, updated to reflect the reality of modern culture?

I think we could do worse.

Jim Downey



Eye, Robot.

I like bad science fiction movies. Cheesy special effects, bad dialog and worse acting, it doesn’t matter. Just so long as there is a nub of a decent idea in there somewhere, trying to get out.

And in that spirit, I added I, Robot to my NetFlix queue some time back, knowing full well that it had almost nothing to do with Isaac Asimov’s brilliant stories. I knew it was set in the near term future, and that it had been a success at the box office, but that was about it. This past weekend, it arrived. I watched it last night.

I think Asimov himself predicted just what would be wrong with this movie:

In the essay “The Boom in Science Fiction” (Isaac Asimov on Science Fiction, pp. 125–128), Asimov himself explained the reason for Hollywood’s overriding need for violence:

[…] Eye-sci-fi has an audience that is fundamentally different from that of science fiction. In order for eye-sci-fi to be profitable it must be seen by tens of millions of people; in order for science fiction to be profitable it need be read by only tens of thousands of people. This means that some ninety percent (perhaps as much as ninety-nine percent) of the people who go to see eye-sci-fi are likely never to have read science fiction. The purveyors of eye-sci-fi cannot assume that their audience knows anything about science, has any experience with the scientific imagination, or even has any interest in science fiction.

But, in that case, why should the purveyors of eye-sci-fi expect anyone to see the pictures? Because they intend to supply something that has no essential connection with science fiction, but that tens of millions of people are willing to pay money to see. What is that? Why, scenes of destruction.

Yup. And that is just about all that the movie I, Robot is – destruction and special effects. Shame, really, since I have enjoyed Will Smith in other bad SF (Independence Day, anyone?), and just love Alan Tudyk from Firefly/Serenity. Even what had to be intentional references to such excellent movies as Blade Runner or The Matrix fell completely flat. It was, in a word, dreadful.

Ah, well. Via MeFi, here’s a little gem to wash the bad taste out of your mouth:

Gene Roddenberry would be proud.

Jim Downey



