“He refused to comply with the officers and so the officers had to deploy their Tasers in order to subdue him. He is making incoherent statements; he’s also making statements such as, ‘Shoot cops, kill cops,’ things like that. So there was cause for concern to the officers,” said Ozark Police Capt. Thomas Rousset.
Makes it sound almost reasonable, doesn’t it?
Small problem – the ‘he’ was a 16-year-old kid who had fallen from a highway overpass and had broken his back. So, naturally, since he didn’t respond to the authoritah of the cops on the scene, the cops had to Taser him. 19 times.
See, kids, never make the mistake of not instantly jumping up to comply with instructions given by a cop. Just because you’re severely injured is no excuse.
And of course, the multiple “rides” on the Taser didn’t help his injuries. I’m sure there was the usual spasmodic response that happens when about 50,000 volts of juice hit you. And it also delayed surgery to correct the damage of the initial fall:
His dad says the use of the stun gun delayed what would have been immediate surgery by two days.
“The ‘Tasering’ increased his white blood cell count and caused him to have a temperature so they could not go into the operation.”
I smell lawsuit.
But that’s not the only such incident from down this way. Just last week we had a very similar thing happen in my hometown:
Police review Taser use
Captain says device escalated situation.
A man injured in a Taser-related fall from the Providence Road pedestrian bridge over Interstate 70 remained in critical condition last night at University Hospital as Columbia police sought to defend their use of force in the incident that began with a man threatening to jump from the overpass.
Phillip McDuffy, 45, suffered two broken arms, a fractured skull and possibly a broken jaw in the fall, Columbia police Capt. Zim Schwartze said yesterday. Police estimate McDuffy fell about 15 feet onto a concrete embankment beside I-70, landing on his right side after the 1½-hour standoff.
Yeah, they didn’t want him to hurt himself, so they Tased him. Gee, too bad that he fell and broke all those bones. Who would have expected *that* to happen?
The police use of Tasers is simply out of control in this country. Seriously. My dad was a cop, and a lot of my family’s friends growing up were cops. They’ve got a tough job. I know that the use of Tasers has protected the lives of officers. But this is insane. It is no longer just the odd asshole who happens to make the Greatest Hits of Police Abuse on YouTube. It has now become commonplace for the police to grab their Taser anytime someone doesn’t immediately do what they’re told. Time to get rid of the things, nationwide.
(Cross posted to Daily Kos and my blog.)
Filed under: Comics, General Musings, movies, NYT, Paleo-Future, Predictions, Science, Science Fiction, tech, Travel
. . . but the announcement that there is a functional personal flying device to be revealed today is still pretty cool.
Why do I call it a ‘personal flying device’? Because it isn’t really a classic ‘jetpack’ as we’ve seen in plenty of cartoons and movies. It is a large beast, weighing about 250 pounds, with twin fans each the size of a garbage can cut about in half. And for safety purposes, there is a support frame which allows the pilot to climb under the thing and strap himself to it. Hardly the ‘engine’ of The Rocketeer. But all in all, not a bad start – this is functional, will fly for about 30 minutes (classic jetpacks such as the one James Bond flew could manage only about 30 seconds), and is fairly stable. From here significant improvements will be made. And Glenn Martin, the inventor of the device, understands this:
Only 12 people have flown the jetpack, and no one has gained more than three hours of experience in the air. Mr. Martin plans to take it up to 500 feet within six months. This time, he said with a smile, he will be the first.
Mr. Martin said he had no idea how his invention might ultimately be used, but he is not a man of small hopes. He repeated the story of Benjamin Franklin, on first seeing a hot-air balloon, being asked, “What good is it?” He answered, “What good is a newborn baby?”
Filed under: Alzheimer's, Genetic Testing, Health, io9, NPR, Predictions, Science, Science Fiction, Sleep, Writing stuff
Some years back a good friend sent me a postcard from Florida with the image of a tri-colored heron’s head (you can see the image from which the card came here). On the card, the heron is looking straight at you, top feathers standing straight up, and above it in bright blue ‘electric’ lettering are the words “Stress? What Stress?”
It’s been tacked to the wall next to my desk here ever since. And it has been something of a standing joke between my wife and me. When things have gotten bad from time to time, one of us will turn to the other and simply say in a squeaky, high-pitched voice “Stress? What Stress?”
* * * * * * *
A month ago I wrote about slowly coming down from the prolonged adrenaline high which was being a full-time care provider. Doctors have known for a while that such long-term stress is hard on care providers. It’ll drive up blood pressure, screw with your sleep habits, and even compromise your immune system. Now they have started to figure out how that immune system mechanism works. Last night I caught a piece on NPR’s All Things Considered with UCLA professor Rita Effros about her research on this mechanism. Here’s what Professor Effros said (no transcript yet, so this excerpt is my transcription):
So, in the short term cortisol does a lot of really good things. The problem is, if cortisol stays high in your bloodstream for long periods of time, all those things that got shut down short term stay shut down. For example, your immune system.
But let’s say you were taking care of an Alzheimer’s spouse, or a chronically ill child – those kinds of situations are known now to cause chronic, really long-term stress – let’s say years of stress.
(These care providers) were found to have a funny thing happening in their white blood cells. A certain part of the cell is called the telomere, which is a kind of a clock which keeps track of how hard the cell has been working. Their telomeres got shorter and shorter, and it has been known for many years that when cells have very short telomeres they don’t function the way they’re supposed to function.
Every cell contains a tiny clock called a telomere, which shortens each time the cell divides. Short telomeres are linked to a range of human diseases, including HIV, osteoporosis, heart disease and aging. Previous studies show that an enzyme within the cell, called telomerase, keeps immune cells young by preserving their telomere length and ability to continue dividing.
UCLA scientists found that the stress hormone cortisol suppresses immune cells’ ability to activate their telomerase. This may explain why the cells of persons under chronic stress have shorter telomeres.
The study reveals how stress makes people more susceptible to illness. The findings also suggest a potential drug target for preventing damage to the immune systems of persons who are under long-term stress, such as caregivers to chronically ill family members, as well as astronauts, soldiers, air traffic controllers and people who drive long daily commutes.
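The mechanism the UCLA team describes can be caricatured in a few lines of code. To be clear, this is a purely illustrative toy model of my own – every number in it is invented, not taken from the study – but it shows the basic logic: each cell division clips off a bit of telomere, telomerase restores part of the loss, and chronic cortisol is modeled as suppressing that restoration.

```python
# Toy model of telomere shortening under chronic stress.
# All numbers here are invented for illustration only.

def telomere_after_divisions(divisions, telomerase_activity):
    """Track telomere length across repeated cell divisions.

    telomerase_activity: fraction (0..1) of the per-division loss
    that telomerase manages to restore. Chronic cortisol exposure
    is modeled simply as a lower value of this fraction.
    """
    length = 10_000          # starting telomere length (made-up units)
    loss_per_division = 70   # amount clipped off at each division (made up)
    for _ in range(divisions):
        length -= loss_per_division
        length += loss_per_division * telomerase_activity  # partial repair
    return length

# Same number of divisions, different cortisol exposure:
relaxed = telomere_after_divisions(100, telomerase_activity=0.8)
stressed = telomere_after_divisions(100, telomerase_activity=0.2)  # cortisol-suppressed

print(relaxed, stressed)  # the stressed cells end up with much shorter telomeres
```

Crude as it is, the sketch captures why a pill that boosted telomerase activity – the drug target the press release hints at – would matter: it changes the net loss per division, and that difference compounds over years of chronic stress.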
* * * * * * *
io9 picked up on this story, and gave it a nice Science Fiction spin:
Stress runs down the body’s immune system, which is why people with high-stress jobs or events in their lives are vulnerable to illness. Now a researcher at UCLA has discovered the link between emotional stress and physical damage — and she’s going to develop a pill that will allow you to endure stress without the nasty side-effects. And there may also be one good side-effect: Extreme longevity.
It turns out that when you’re under stress, your body releases more of the hormone cortisol, which stimulates that hyper-alert “fight or flight” reflex. While cortisol is good in small doses, over time it erodes the small caps at the end of your chromosomes known as telomeres (the little yellow dots at the end of those blue chromosomes in the picture). Many researchers have long suspected that telomeres would provide a key to longevity because they are quite large in young people and gradually shrink over time as cells divide.
Rita Effros, the researcher who led the UCLA study, believes that she can synthesize a pill that combats stress by putting more telomerase — the substance that builds telomeres — into the body. This would keep those telomeres large, even in the face of large amounts of cortisol. It might also make your body live a lot longer too.
Curiously, this clue about telomere length and aging is exactly the mechanism I use in Communion of Dreams to reveal that the character Chu Ling is a clone. Genetic testing shows that the telomeres in her cells are much shorter than would be expected for a child her age, leading to the realization that she has been cloned.
Ironic, eh? No, no one is going to think that I’m a clone. But I find it curious that the same mechanism which I chose for a major plot point pertaining to the health of the human race in my book is one which has been clearly operating on my own health.
Filed under: Alzheimer's, Health, Hospice, Science, Scientific American, Society
The human mind is a remarkable device. Nevertheless, it is not without limits. Recently, a growing body of research has focused on a particular mental limitation, which has to do with our ability to use a mental trait known as executive function. When you focus on a specific task for an extended period of time or choose to eat a salad instead of a piece of cake, you are flexing your executive function muscles. Both thought processes require conscious effort—you have to resist the temptation to let your mind wander or to indulge in the sweet dessert. It turns out, however, that use of executive function—a talent we all rely on throughout the day—draws upon a single resource of limited capacity in the brain. When this resource is exhausted by one activity, our mental capacity may be severely hindered in another, seemingly unrelated activity. (See here and here.)
Imagine, for a moment, that you are facing a very difficult decision about which of two job offers to accept. One position offers good pay and job security, but is pretty mundane, whereas the other job is really interesting and offers reasonable pay, but has questionable job security. Clearly you can go about resolving this dilemma in many ways. Few people, however, would say that your decision should be affected or influenced by whether or not you resisted the urge to eat cookies prior to contemplating the job offers. A decade of psychology research suggests otherwise. Unrelated activities that tax the executive function have important lingering effects, and may disrupt your ability to make such an important decision. In other words, you might choose the wrong job because you didn’t eat a cookie.
* * * * * * *
Almost a year ago I wrote this:
There’s a phenomenon familiar to those who deal with Alzheimer’s. It’s called “sundowning”. There are a lot of theories about why it happens; my own pet theory is that someone with this disease works damned hard all day long to try and make sense of the world around them (which is scrambled to their perceptions and understanding), and by late in the afternoon or early evening, they’re just worn out. You know how you feel at the end of a long day at work? Same thing.
* * * * * * *
We cared for Martha Sr for about four years. Well, we were here helping her for a couple of years prior to that. But the nearly constant caregiving lasted for about four, growing in intensity during that time and culminating with nearly six months of actual hospice care.
That was a long time. But my wife and I had each other, and it could have been longer.
That same day, a hospice patient named Michelle passed away. She was only 50 years old. She’d been battling MS for over 20 years. Debra is dispatched to her home.
The little brown house is shrouded by trees. Stray cats eat free food on the rusted red porch. Inside, Michelle lies in her hospital bed with her eyes slightly open. Debra’s there to help Michelle’s husband Ross. He quit his job in 2000 to take care of his wife.
“So eight years,” Debra says.
“She was permanently bedridden,” Ross replies. “This is the way it’s been. But like everything in life, it all comes to an end I guess.”
His voice sounds steady when he speaks, but his eyes are full of tears as he remembers his wife.
“I’ve never seen a woman fight something like she did,” Ross says. “She spent years on that walker because she knew when she got in a chair she’d never get out. The pain it caused her.”
Ross talks for more than an hour. Debra listens and commiserates. It’s at these moments, even more than when she’s providing medical care, that Debra feels her work is appreciated.
* * * * * * *
Filed under: Arthur C. Clarke, Artificial Intelligence, Expert systems, Google, movies, Predictions, Science, Science Fiction, Society, tech
A good friend sent me a link to a longish piece in the latest edition of The Atlantic titled Is Google Making Us Stupid? by author Nicholas Carr. It’s interesting, and touches on several of the things I explore as future technologies in Communion of Dreams, and I would urge you to go read the whole thing.
Read it, but don’t believe it for a moment.
OK, Carr starts out with the basic premise that the human mind is a remarkably plastic organ, and is capable of reordering itself to a large degree even well into adulthood. Fine. Obvious. Anyone who has learned a new language, or mastered a new computer game, or acquired any other skill as an adult knows this, and knows how it expands one’s awareness of different and previously unperceived aspects of reality. That, actually, is one of the basic premises behind what I do with Communion, in opening up the human understanding of what the reality of the universe actually is (and how that is in contrast with our prejudices of what it is).
From this premise, Carr speculates that the increasing penetration of the internet into our intellectual lives is changing how we think. I cannot disagree, and have said as much in several of my posts here. For about 2/3 of the article he is discussing how the hyperlinked reality of the web tends to scatter our attention, making it more difficult for us to concentrate and think (or read) ‘deeply’. Anyone who has spent a lot of time reading online knows this phenomenon – pick up an old-fashioned paper book, and you’ll likely find yourself now and again wanting explanatory hyperlinks on this point or that for further clarification. This, admittedly, makes it more difficult to concentrate and immerse yourself into the text at hand, to lose yourself in either the author’s argument or the world they are creating.
But then Carr hits his main point, having established his premises. And it is this: that somehow this scattered attention turns us into information zombies, spoon-fed by the incipient AI of the Google search engine.
No, seriously, that’s what he says. Going back to the time-motion efficiency studies pioneered by Frederick Winslow Taylor at the turn of the last century, which turned factory workers into ideal components for working with machines, he makes this argument:
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
Do you see the pivot there? He’s just spent over a score of paragraphs explaining how the internet has degraded our ability to concentrate because of hyperlinked distractions, but then he turns around and says that Google’s increasing sophistication at seeking out information will limit our curiosity about that information.
No. If anything, the ability to access a broader selection of possible references quickly, the ability to see a wider scope of data, will allow us to better use our human ability to understand patterns intuitively, and to delve down into the data pile to extract supporting or contradicting information. This will *feed* our curiosity, not limit it. More information will be hyperlinked – more jumps hither and yon for our minds to explore.
The mistake Carr has made is to use the wrong model for his analogy. He has tried to equate the knowledge economy with the industrial economy. Sure, there are forces at play which push us in the direction he sees – any business is going to want its workers to concentrate on the task at hand, and be efficient about it. That’s what the industrial revolution was all about, from a sociological point of view. This is why some employers will limit ‘surfing’ time, and push their workers to focus on managing a database, keeping accounts balanced, and monitoring production quality. While they are at work. But that has little or nothing to do with what people do on their own time, and how they use the tools created by information technology which allow for much greater exploration and curiosity. And for those employees who are not just an extension of some automated process, those who write, or teach, or research – these tools are a godsend.
In fairness, Carr recognizes the weakness in his argument. He acknowledges that previous technological innovations on a par with the internet (first writing itself, then the development of the printing press) were initially met with gloom by those who feared they would let the human mind grow lazy by no longer needing to hold all necessary information within the brain itself. These predictions of doom proved wrong, of course, because while some discipline in holding facts in the brain was lost, the increased freedom in accessing information needed only fleetingly was a great boon, allowing people to turn their intellectual abilities to using those facts rather than just remembering them.
Carr ends his essay with this:
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Wrong. Wrong, wrong, wrong. This is a complete misreading of what happens in the movie. Kubrick’s vision was exactly the opposite – HAL was quite literally just following orders. Those orders were to preserve the secret nature of the mission, at the expense of the lives of the crew whom he murders or attempts to murder. That is the danger in allowing machinelike behaviour to be determinant. Kubrick (and Arthur C. Clarke) were, rather, showing that it is the human ability to assess unforeseen situations and synthesize information to draw a new conclusion (and act on it) which is our real strength.
(Hat tip to Wendy for the link to the Carr essay!)