Filed under: Alzheimer's, Health, Hospice, Science, Scientific American, Society
The human mind is a remarkable device. Nevertheless, it is not without limits. Recently, a growing body of research has focused on a particular mental limitation, which has to do with our ability to use a mental trait known as executive function. When you focus on a specific task for an extended period of time or choose to eat a salad instead of a piece of cake, you are flexing your executive function muscles. Both thought processes require conscious effort: you have to resist the temptation to let your mind wander or to indulge in the sweet dessert. It turns out, however, that use of executive function—a talent we all rely on throughout the day—draws upon a single resource of limited capacity in the brain. When this resource is exhausted by one activity, our mental capacity may be severely hindered in another, seemingly unrelated activity. (See here and here.)
Imagine, for a moment, that you are facing a very difficult decision about which of two job offers to accept. One position offers good pay and job security, but is pretty mundane, whereas the other job is really interesting and offers reasonable pay, but has questionable job security. Clearly you can go about resolving this dilemma in many ways. Few people, however, would say that your decision should be influenced by whether or not you resisted the urge to eat cookies prior to contemplating the job offers. A decade of psychology research suggests otherwise. Unrelated activities that tax the executive function have important lingering effects, and may disrupt your ability to make such an important decision. In other words, you might choose the wrong job because you didn’t eat a cookie.
* * * * * * *
Almost a year ago I wrote this:
There’s a phenomenon familiar to those who deal with Alzheimer’s. It’s called “sundowning”. There are a lot of theories about why it happens; my own pet theory is that someone with this disease works damned hard all day long to try to make sense of the world around them (which is scrambled to their perceptions and understanding), and by late in the afternoon or early evening, they’re just worn out. You know how you feel at the end of a long day at work? Same thing.
* * * * * * *
We cared for Martha Sr for about four years. Well, we were here helping her for a couple of years prior to that, but the nearly constant caregiving lasted about four years, growing in intensity during that time and culminating in nearly six months of actual hospice care.
That was a long time. But my wife and I had each other, and it could have been longer.
That same day, a hospice patient named Michelle passed away. She was only 50 years old. She’d been battling MS for over 20 years. Debra is dispatched to her home.
The little brown house is shrouded by trees. Stray cats eat free food on the rusted red porch. Inside, Michelle lies in her hospital bed with her eyes slightly open. Debra’s there to help Michelle’s husband Ross. He quit his job in 2000 to take care of his wife.
“So eight years,” Debra says.
“She was permanently bedridden,” Ross replies. “This is the way it’s been. But like everything in life, it all comes to an end I guess.”
His voice sounds steady when he speaks, but his eyes are full of tears as he remembers his wife.
“I’ve never seen a woman fight something like she did,” Ross says. “She spent years on that walker because she knew when she got in a chair she’d never get out. The pain it caused her.”
Ross talks for more than an hour. Debra listens and commiserates. It’s at these moments, even more than when she’s providing medical care, that Debra feels her work is appreciated.
Appreciated, indeed.
* * * * * * *
Jim Downey
Filed under: Arthur C. Clarke, Artificial Intelligence, Expert systems, Google, movies, Predictions, Science, Science Fiction, Society, tech
A good friend sent me a link to a longish piece in the latest edition of The Atlantic titled “Is Google Making Us Stupid?” by Nicholas Carr. It’s interesting, and touches on several of the things I explore as future technologies in Communion of Dreams, and I would urge you to go read the whole thing.
Read it, but don’t believe it for a moment.
OK, Carr starts out with the basic premise that the human mind is a remarkably plastic organ, capable of reordering itself to a large degree even well into adulthood. Fine. Obvious. Anyone who has learned a new language, or mastered a new computer game, or acquired any other skill as an adult knows this, and knows how it expands one’s awareness of different and previously unperceived aspects of reality. That, actually, is one of the basic premises behind what I do with Communion, in opening up the human understanding of what the reality of the universe actually is (and how that contrasts with our prejudices about it).
From this premise, Carr speculates that the increasing penetration of the internet into our intellectual lives is changing how we think. I cannot disagree, and have said as much in several of my posts here. For about two-thirds of the article he discusses how the hyperlinked reality of the web tends to scatter our attention, making it more difficult for us to concentrate and think (or read) ‘deeply’. Anyone who has spent a lot of time reading online knows this phenomenon – pick up an old-fashioned paper book, and you’ll likely find yourself now and again wanting explanatory hyperlinks on this point or that for further clarification. This, admittedly, makes it more difficult to concentrate and immerse yourself in the text at hand, to lose yourself in either the author’s argument or the world they are creating.
But then Carr hits his main point, having established his premises. And it is this: that somehow this scattered attention turns us into information zombies, spoon-fed by the incipient AI of the Google search engine.
Huh?
No, seriously, that’s what he says. Going back to the time-motion efficiency studies pioneered by Frederick Winslow Taylor at the turn of the last century, which turned factory workers into ideal components for working with machines, he makes this argument:
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
Do you see the pivot there? He’s just spent over a score of paragraphs explaining how the internet has degraded our ability to concentrate because of hyperlinked distractions, but then he turns around and says that Google’s increasing sophistication at seeking out information will limit our curiosity about that information.
No. If anything, the ability to access a broader selection of possible references quickly, the ability to see a wider scope of data, will allow us to better use our human ability to understand patterns intuitively, and to delve down into the data pile to extract supporting or contradicting information. This will *feed* our curiosity, not limit it. More information will be hyperlinked – more jumps hither and yon for our minds to explore.
The mistake Carr has made is to use the wrong model for his analogy. He has tried to equate the knowledge economy with the industrial economy. Sure, there are forces at play which push us in the direction he sees – any business is going to want its workers to concentrate on the task at hand, and be efficient about it. That’s what the industrial revolution was all about, from a sociological point of view. This is why some employers will limit ‘surfing’ time, and push their workers to focus on managing a database, keeping accounts balanced, and monitoring production quality. While they are at work. But that has little or nothing to do with what people do on their own time, and how they use the tools created by information technology, which allow for much greater exploration and curiosity. And for those employees who are not just an extension of some automated process, those who write, or teach, or research – these tools are a godsend.
In fairness, Carr recognizes the weakness in his argument. He acknowledges that previous technological innovations on a par with the internet (first writing itself, then the development of the printing press) were initially met with gloom on the part of those who feared they would make the human mind lazy by relieving it of the need to hold all necessary information within the brain itself. These predictions of doom proved wrong, of course, because while some discipline in holding facts in the brain was lost, the increased freedom in accessing information needed only fleetingly was a great boon, allowing people to turn their intellectual abilities to using those facts rather than just remembering them.
Carr ends his essay with this:
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Wrong. Wrong, wrong, wrong. This is a complete misreading of what happens in the movie. Kubrick’s vision was exactly the opposite – HAL was quite literally just following orders. Those orders were to preserve the secret nature of the mission, at the expense of the lives of the crew, whom he murders or attempts to murder. That is the danger in allowing machinelike behavior to be determinant. Kubrick and Arthur C. Clarke were, rather, showing that it is the human ability to assess unforeseen situations and synthesize information to draw a new conclusion (and act on it) that is our real strength.
*Sigh*
Jim Downey
(Hat tip to Wendy for the link to the Carr essay!)
Filed under: Alzheimer's, Daily Kos, General Musings, Health, movies, Preparedness, Publishing, Society, Writing stuff
I mentioned in passing last week that I was working on all my care-giving posts for a book. Here’s a bit more about that project, as it is tentatively shaping up.
Sometime last year, when I cross-posted one of those entries on Daily Kos, I discovered that there was someone else there who was in pretty much the exact same situation: caring for a beloved mother-in-law. For a variety of reasons, it is fairly unusual to find a man caring for a mother-in-law with dementia. We didn’t strike up what I would call a friendship, since both of us were preoccupied with the tasks at hand, but we did develop something of a kinship, commenting back and forth in one another’s diaries on that site. Our paths diverged – he and his wife eventually needed to get his mother-in-law into a care facility, whereas my wife and I were able to keep Martha Sr home until the end. But the parallels were made all the more striking by those slight differences. In the end, his “Mumsie” passed away about six weeks before Martha Sr died.
Recently this fellow and I picked up the thread of our occasional conversation once again. And discovered that both of us, independently, had been thinking of writing a book about the experience of caregiving. It didn’t take long before we realized that together we could produce a more comprehensive book, and a lot more easily, drawing on our individual experiences to show similarities and different choices. A few quick emails sorted out the pertinent details – the basic structure of the book, that all proceeds from it will go to the Alzheimer’s Association (or to it and other related organizations), some thoughts on publishing and promotion – and we were off and running.
For now, I’ll just identify him by his screen name: GreyHawk. By way of introduction, check out this excellent post of his at ePluribus Media, where he very neatly explains the *why* of our decision to write this book:
Special thanks to Jim Downey for supplying the links to the video and to his blog, and just for being him; my wife and I took comfort from the fact that we were not alone in our situation, and that we knew at least one other couple who were going through a very similar experience to our own.
That’s it right there. Millions of Americans are facing this situation today, and millions more will in coming years as the baby-boomer generation ages. I’m not a scientist who can help find a cure for the diseases of age-related dementia. Nor am I wealthy enough to make a significant difference in funding such research. But I can perhaps help others to understand the experience. GreyHawk and I are going to try, anyway. I know that my wife and I found comfort in knowing that we were not alone in this. So did he and his wife. If we can share that with others, and make their experience a little more understandable, a little easier, then that will be a worthy thing.
Wish us luck.
Jim Downey
Filed under: Apollo program, Astronomy, Buzz Aldrin, NASA, Neil Armstrong, Science, Society, Space
“Houston, Tranquility Base here. The Eagle has landed.”
Do you recognize these words?
Of course you do. That’s the transmission sent to NASA Mission Control from the Moon on this date in 1969.
I was at a Boy Scout camp outside of St. Louis when it happened. That night, we all sat around a big firepit and tried to watch a small black-and-white portable television with bad reception as Neil A. Armstrong and Edwin E. “Buzz” Aldrin Jr. made the first human steps onto the Lunar surface and spoke these words (links to audio file on Wikipedia):
“That’s one small step for (a) man, one giant leap for mankind.”
And the world was changed forever.
So, where were you?
Jim Downey
(Cross posted to UTI.)
Filed under: Amazon, Google, Government, movies, Nuclear weapons, Predictions, Preparedness, Science, Science Fiction, Society, Survival, tech, Terrorism, Violence, Wired
Yesterday was an anniversary. Here are some stunning pictures related to it. There have been movies made about it. And movies about what it meant. Or what it could lead to. And, of course, there are a whole bunch of books on related subjects. I’ve talked about the threat it presents. Lore about it has widely influenced popular culture. And it is still topical.
Did you mix a drink to celebrate?
Jim Downey
Filed under: Feedback, Marketing, Music, Predictions, Promotion, Publishing, Science Fiction, Society
Well, as I noted the other day, we crossed the threshold of 10,000 downloads of Communion of Dreams sometime last Friday. This after a bit of a slow crawl over the last couple of months to reach that number.
Of course, what happens this weekend? Another 500 downloads.
Because, clearly, 9,775 downloads doesn’t indicate that something is popular. But 10,000 does, and so other people want to check it out.
Man, I love marketing. We hairless apes sure have some funny quirks.
But thanks to all those who decided to check out the book this weekend. And, again, thanks to all who downloaded it previously and helped to spread the word about it.
Jim Downey
*with apologies.
Filed under: Civil Rights, Daily Kos, Government, Politics, Predictions, Society, Terrorism, Travel, Violence
The Washington Times ran an interesting story last week:
Want some torture with your peanuts?
Just when you thought you’ve heard it all…
A senior government official with the U.S. Department of Homeland Security (DHS) has expressed great interest in a so-called safety bracelet that would serve as a stun device, similar to that of a police Taser®. According to this promotional video found at the Lamperd Less Lethal, Inc. website, the bracelet would be worn by all airline passengers (video also shown below).
This bracelet would:
• Take the place of an airline boarding pass
• Contain personal information about the traveler
• Be able to monitor the whereabouts of each passenger and his/her luggage
• Shock the wearer on command, completely immobilizing him/her for several minutes
The Electronic ID Bracelet, as it’s referred to, would be worn by every traveler “until they disembark the flight at their destination.” Yes, you read that correctly. Every airline passenger would be tracked by a government-funded GPS, containing personal, private and confidential information, and would shock the customer worse than an electronic dog collar if the passenger got out of line.
“Just when you thought you’ve heard it all… ” indeed.
Now, I’m not a big fan of the Washington Times, so I checked the website mentioned in the article. Where I found this statement:
The bracelets remain inactive until a hijacking situation has been identified. At such time a designated crew member will activate the bracelets making them capable of delivering the punitive measure – but only to those that need to be restrained. We believe that all passengers will welcome deliverance from a hijacking, as will the families, carriers, insurance providers etc. The F-16 on the wingtip is not to reassure the passengers during a hijacking but rather to shoot them down. Besides activation using the grid screen, the steward / stewardess will have a laser activator that can activate any bracelet as needed by simply pointing the laser at the bracelet – that laser dot only needs to be within 10 inches of the bracelet to activate it.
Got that? “This is for your own good”.
Never mind that there are dozens of ways I can see in which this technology could be abused, inadvertently misused, or accidentally triggered. Never mind that Tasers use a similar type of electro-muscular disruption technology and have been suspected in the deaths of perhaps hundreds. Never mind that it is likely that someone wanting to hijack a jet would figure out a way to disable such a bracelet (it’s activated by a laser pointer? Just wrap something around the bracelet when you move to act). Consider solely what this does to you: it makes you someone else’s pet or slaughter animal.
Airline travel is grim and degrading enough as it is, and most of the airlines are struggling to avoid bankruptcy. If they decide to go forward and implement this kind of technology, a significant percentage of travelers will give up on flying altogether (though it’s a shame that a majority would likely play along, thanks to the conditioning we’ve already received).
I know I sure as hell would give up flying under those conditions.
Sheesh.
Jim Downey
(Via dKos. Cross posted to UTI.)
Filed under: ACLU, Civil Rights, Constitution, Daily Kos, Government, Politics, Privacy, Society, Terrorism, YouTube
. . . and weep for the Fourth Amendment:
And people wonder why Congress has an approval rating of 9%.
Sheesh.
UPDATE: FISA passed in the Senate, 69 – 28:
WASHINGTON – The Senate approved and sent to the White House a bill overhauling controversial rules on secret government eavesdropping Wednesday, bowing to President Bush’s demand to protect telecommunications companies from lawsuits complaining they helped the U.S. spy on Americans.
The relatively one-sided vote, 69-28, came only after a lengthy and bitter debate that pitted privacy and civil liberties concerns against the desire to prevent terrorist attacks. It ended almost a year of wrangling over surveillance rules and the president’s warrantless wiretapping program that was initiated after the Sept. 11, 2001, terrorist attacks.
The House passed the same bill last month, and President Bush is expected to sign it soon. He scheduled a 4 p.m. EDT White House statement to praise the passage.
Jim Downey
(Via Daily Kos. Cross posted to UTI.)
