The real reason for this post, however, is to boast — I’ve already finished Inside Out. I thought it was excellent.
I’m feeling nostalgic today. So I’m going to post a lengthy quote from Ted V. McAllister over at Front Porch Republic, because it points out something obvious that many of us don’t think about very often:
The tendency to think in terms of generations, of generational experiences that define an age cohort, is a product of speed. During ages when change is glacial and when technology remains almost constant for decades or centuries, when social and economic conditions alter at a pace that requires historical analysis to detect—during such ages the experiences of life are amazingly consistent across generations. A man might very well assume that his father, his grandfather, and his grandson all participate in the same culture, the same economy, the same religious beliefs, the same constellation of experiences that shaped his identity. People separated by perhaps two hundred years share the same “relevant” knowledge or beliefs. They all belong, as it were, to the same civilizational generation. In some sense one might claim that these people are bound by historical memory and have no reason for nostalgia.
Much of the human story, however, includes people who live through very rapid change and who experience themselves occupying a distinct time or age. They understand that something significant enough has changed as to produce a before and after experience, leaving them aware that there is no going back. For a man born in the United States in 1920, for instance, his lifetime seems full of such transformations. A great depression, a world war, and a subsequent economic transformation (to say nothing of the attending changes in society, politics, and religion) certainly set his generation apart from his parents and his children. Things had changed dramatically and rapidly, and the austerity of his teenage years followed by the nationalistic regimentation of his twenties, gave his life, his beliefs, his fears and expectations, a shape that fit no previous or later American generation. He knew that he and his generation were different.
Multiple years of military futility, with ever-mounting casualties and no evidence of any progress, are insufficient reasons for Obama to change anything substantive about his Afghanistan policy, which amounts to: keep spending money we don’t have, keep sending troops into harm’s way when they aren’t actually making anything any better, and keep telling us that this is all critical for the defense of the United States.
So I’d be shocked if a Rolling Stone article which revealed Stanley McChrystal’s scorn for Richard Holbrooke, and which caused Obama to replace McChrystal with our all-purpose imperial general-of-excellence David Petraeus, would lead to any substantive changes in Afghanistan policy.
And perhaps more interestingly, can a cardiac surgeon who answers “only about five or ten seconds” really know this?
Update: A reader has corrected me — this is a bench trial with no jury. So where you see “juror” please read “the judge.” Thanks for the correction. I now wonder whether the judge will find this expert testimony helpful.
Jurors are being asked to consider these questions in a D.C. Superior Court trial charging three men with conspiracy and obstruction of justice in the murder of Robert Wone, an attorney who was stabbed in the chest while spending the night with friends. According to the defendants’ theory of the case, Wone was killed when an unidentified intruder entered the house on the ground floor, went up the stairs and into Wone’s room on the second floor, stabbed him in the chest several times, and fled, without anyone seeing him (or her). Prosecutors say that the three defendants know who killed Wone, and that they tampered with the crime scene to impede the investigation.
One argument that the prosecution is making is that Wone took a long time to die, such that he might have lived had his friends called 911 earlier. No, says the defense; Wone was pretty much dead within seconds after being stabbed, so there was nothing more the defendants could have done to save his life. Hence, the question for the jury: how long does a stab wound like Wone’s take to kill you? Does a cardiac surgeon know the answer? Does a pathologist? Oh, the headaches that come with being a juror.
Both sides are presenting so-called “experts” (mostly physicians) who offer conflicting testimony about how long it takes to die from a Robert Wone-type stab wound to the chest. But do any of these doctors really know?
Much of what physicians do, we do without much actual evidence of its efficacy. We do it because we were told to do it in residency, or because we think it makes sense. That’s not always a bad thing. The point is that physicians frequently have to act without the certainty they’d prefer. This is true for many of the questions doctors face every day in their practice — whether a drug will work, whether a procedure will help, etc. There’s a lot of uncertainty.
There’s even more uncertainty about the questions attorneys are asking these physician “expert” witnesses to answer in this trial. Question: given these autopsy reports, do you think the deceased would have survived for five to ten minutes after the wound, or do you think he would have died within seconds? The doctors can give their opinions, but I wouldn’t pretend that they have any firm knowledge one way or the other. Trauma, in particular, is a tricky thing to understand. No two traumatic wounds are the same, few of them are actually studied prospectively and systematically, and patients often defy common expectations about the course of their injuries. So there’s no way that any physician is going to know with certainty how long Robert Wone survived after he was stabbed.
This doesn’t mean that their opinions would be useless to a jury in a criminal case. The jurors are asked to listen to the entirety of the prosecution’s argument, and to the entirety of the defense argument, and then decide: has the prosecution proven beyond a reasonable doubt that the defendants are guilty of the crime they’ve been charged with? They may decide to convict even if they dismiss the physician testimony as a bunch of useless verbiage.
I remember my only experience of being a juror. We were asked to consider whether the defendant was guilty of driving drunk. The prosecution and the defense called up a bunch of expert witnesses to testify about nystagmus, the metabolism of alcohol, and whatnot, but for me all of that testimony was useless. Neither side’s expert witnesses were credible. But I had no difficulty voting for conviction, because the rest of the prosecutor’s case established beyond a reasonable doubt that the defendant was guilty. The other jurors seemed to think the same way — we spent a lot of time in the jury room savaging the expert witnesses for both sides. We all thought the defendant was guilty based on the other evidence and testimony.
We’ll see what happens in the Wone trial, but I wouldn’t be surprised if a lot of jurors in that case end up feeling pretty much the same way about the expert testimony.
The Supreme Court has ruled “… that it does not violate the Constitution for the government to block speech and other forms of advocacy supporting a foreign organization that has been officially labeled as terrorist, even if the aim is to support such a group’s peaceful or humanitarian actions.”
That’s how Lyle Denniston at SCOTUSblog summarizes Holder v. Humanitarian Law Project. My reactions? Well… you’ll just have to wait for those, because I haven’t read the opinion and have barely read any analysis. So I’ll do what I wish more people who feel the need to comment on Supreme Court decisions would do, and that’s to keep my mouth shut* until I’ve actually read about the case.
* Figuratively speaking, at least. Please, do continue…
Here’s the deal with the Supreme Court: their decisions are not political pronouncements in the same way that many of the President’s or members of Congress’ statements are. This is a fact that too many journalists and bloggers writing about the Supreme Court seem to ignore. Even the best newspaper articles about Court decisions will usually have a teaser opening like “the Supreme Court ruled today that employees can’t sue their employers even if they’re the victims of discrimination” or some pithy statement that boils a case down to its essence, but that necessarily fails to mention the context within which the decision is delivered.
In the best news articles and blog posts, much of the important context is found in subsequent paragraphs. But here’s the problem — most articles fall far short of the standards of the best. They omit the context even from the body of the article. They mis-summarize the holding of the Court in their teaser. And even when a reader is presented with a good news article about a Supreme Court decision, many readers will just skim the intro and then jump to conclusions about what the decision means. And by “many readers” I often mean “me.”
I think these things can lead to huge misunderstanding among the general public about the Supreme Court. People think they know that Scalia or Ginsburg is a threat to American freedom, but they don’t really understand why.
So I won’t speculate about this decision because I haven’t read the opinions. However, I will — this is, after all, a blog — share my inadequate first impressions, which are that it’s wholly unsurprising to see John Roberts on the side of the government in a terrorism case, and that it’s a good question whether there might be some hypocrisy lurking somewhere in the First Amendment jurisprudence of Roberts (recall that he authored the recent Citizens United opinion that took a fairly absolutist view of the First Amendment).
Update: After reading the opinion and the dissent, I would have joined Breyer’s dissent. I don’t think, though, that the opinion is unreasonable. It all comes down to whether you’re willing to defer to the executive and legislative branches when they attempt to balance national defense against the First Amendment right to speak (as Roberts and the majority are), or whether you’d prefer that the political branches justify these content-based speech restrictions with more evidence than they offered in this case (as Breyer and the dissent would have them do).
I’d side with Breyer because the executive branch has demonstrated that it is eager to overreach its authority on anything related to terrorism. The legislative branch, meanwhile, has been unwilling to act as a check on executive branch overreaching. The government should be required to demonstrate to a court why restrictions on speech advocating nonviolent and legal activities are justified in each case. The problem with Roberts’ opinion is that although it claims to apply “strict scrutiny” to this criminalization of speech, it in fact defers to the government far too much.
I admit to loving demotivational posters. I can waste whole evenings with them. Two of my favorites:
Both should be safe for work.
Atul Gawande, the surgeon and writer of much-deserved acclaim, reveals his spectacularly technocratic vision of the future world of medicine, a vision of a true medico-industrial complex. This is from Gawande’s address to the Stanford medical school graduating class, and the emphases are mine:
Why does anyone receive suboptimal care? After all, society could not have given us people with more talent, more dedication, and more training than the people in medical science have—than you have. I think the answer is that we have not grappled with the fact that the complexity of science has changed medicine fundamentally. This can no longer be a profession of craftsmen individually brewing plans for whatever patient comes through the door. We have to be more like engineers building a mechanism whose parts actually fit together, whose workings are ever more finely tuned and tweaked for ever better performance in providing aid and comfort to human beings.
You come into medicine and science at a time of radical transition. You have met the older doctors and scientists who tell the pollsters that they wouldn’t choose their profession if they were given the choice all over again. But you are the generation that was wise enough to ignore them: for what you are hearing is the pain of people experiencing an utter transformation of their world. Doctors and scientists are now being asked to accept a new understanding of what great medicine requires. It is not just the focus of an individual artisan-specialist, however skilled and caring. And it is not just the discovery of a new drug or operation, however effective it may seem in an isolated trial. Great medicine requires the innovation of entire packages of care—with medicines and technologies and clinicians designed to fit together seamlessly, monitored carefully, adjusted perpetually, and shown to produce ever better service and results for people at the lowest possible cost for society.
Wow. I’m appalled.
Not because I think that Gawande’s efficient, techno-industrial future world wouldn’t be a huge improvement on the status quo, which is currently rife with waste, graft, corruption, error, and greed. I’m appalled because this is such an archetypal expression of a way of seeing the world that we rely on too much, to our detriment. It’s almost a perfect example of what the agrarian viewpoint is not. If you want to know what agrarianism is, it’s not too simplistic to say that it is exactly the opposite of what Gawande describes.
I could mine this speech for tens of posts about agrarianism generally, but today I’ll limit myself to first impressions.
According to Gawande’s vision, physician autonomy is dead. At the very least, it’s nothing like the kind of autonomy that most doctors and medical students have in mind. In place of craftsmen physicians who use their training and judgment to treat patients, we have “… clinicians designed to fit together seamlessly, monitored carefully, adjusted perpetually…” Think about what this means.
It means that clinicians — doctors, nurses, PAs, NPs — no matter how highly trained or well paid, must not exercise individual judgment but instead must conform “seamlessly” to a medical system in which they are merely another mechanism alongside the inanimate mechanisms of “technologies” and “medicines.” What else could Gawande mean? He says: “This can no longer be a profession of craftsmen individually brewing plans for whatever patient comes through the door.” And he’s explicit that clinicians, as well as technologies and medicines, must be “monitored carefully” and “adjusted perpetually.” If we actually relied on the judgment of clinicians, how could they then “fit together seamlessly” with the rest of the medico-industrial system?
Setting aside the reasonable question of whether this is even possible, I wonder how many graduating Stanford medical students heard that and swelled up their chests, proud of the knowledge that they were entering upon a career where the highest they can ask of themselves is that they fit seamlessly into a system that will “adjust” them perpetually in the name of efficiency. And I wonder if some of them didn’t worry just a bit over whether robotic technology might improve enough during their careers to make David E. Williams’ vision come true — not just robotic nurses but robotic doctors as well. Better hope you have your student loans paid off by then, folks, or you’ll be competing for those Starbucks jobs when you’re forty like too many of us in other decaying professions are already.
I’m not saying that this is somehow dystopian — compared with our current system we would probably get better medical care for less money after being processed by Gawande’s system. But I doubt that Gawande’s prescription is the only way to improve upon our current mess. And if it is, I wonder who would sign up to be a doctor in this system? Plenty of people, certainly, but probably not all the graduating medical students listening to Gawande’s speech.
I suppose there are possibilities in Atul Gawande’s world for a rich and rewarding career for those who like to exercise personal judgment. Just make sure that you are one of the “engineers building a mechanism whose parts actually fit together” instead of just a doctor. Gawande misleads when he suggests that doctors generally need to be more like engineers; under his system most doctors will be more like the part than the part-maker; they will be the objects of the “tweaking” that only a very few of them will have the opportunity to do. And for those lucky enough to be a tweaker instead of a tweaked, what they do won’t bear much resemblance to anything like what most of the graduates imagine when they think about “practicing medicine.” Yes, for the select few, the way is still open in Gawande’s system to become a respected bureaucrat.
Worse, though, than the horrifying thought of doctors suffering through unrewarding careers is the likelihood that Gawande’s medico-industrial utopia will encourage most of us working in that industry to take even less responsibility for its outcomes than we do now. Gawande hints that he thinks this failure to take responsibility is a problem when he bemoans the tendency of specialists to worry “almost exclusively about our particular niche, and not the larger question of whether we as a group are making the whole system of care better for people.”
His solution — to assign responsibility for the whole to a small number of those specialists — is the same solution that the financial industry implemented (and which failed) and that the oil industry implemented (and which failed). If only a few people are assigned responsibility for an enormously complex system of specialists, a system about which they cannot possibly know enough to accurately forecast the risks and take the proper precautions, the result is the collapse of AIG and the Deepwater Horizon spill. These are both industrial catastrophes — large-scale and difficult to fix because the cause is often hard to identify and to understand.
But we’ll forgive Gawande; he’s just offering up a very conventional vision of improvement from an industrial mind-set; he was probably on a deadline. Like most conventional visions of this sort, the vision is appealing: a social organization that functions like a well-oiled machine! One that works, and works efficiently! The problem is that implementation of such a vision is always such a bitch. Easy to imagine, hard to do.
** Apologies to Arnold Relman.
Gratitude is a way of inhabiting the world. It is a disposition toward the world that reminds us that we are not alone. We are not solitary creatures owing nothing to no one. Rather, gratitude points to our dependence. It is a fitting attitude in the face of our creatureliness. When our thoughts are characterized by gratitude, they are outward looking. Gratitude breaks us out of the cocoon of self-satisfaction and self-concern that is a constant temptation and impels us to think of the ways our lives are related to others.
What a great word, “creatureliness.” I’m going to start using it more often. It’s always struck me as correct that we are, more than we are anything else, creatures. We forget this often, but good people like Mitchell are usually around to remind us.
And I wonder which of our ideas about ourselves is more valuable (or less destructive): the idea that we’re creatures and that one of our attributes is creatureliness, or the idea that we are somehow special, ooh, (wiggle-wiggle) “huuumaan” and different from every other creature not just in the way that all creatures are different from one another, but different by a whole order of magnitude. That last idea is pretty common; I read about it again just yesterday in an essay by Roger Scruton when he wrote:
I believe that we are significantly distinct from the other animals. For we are rational beings who relate to each other “I” to “I”. Freedom, individuality, accountability and the moral life all result from this. They are outgrowths of first-person knowledge, of the fact, as Kant put it, that we alone in the world say “I”. It is not because we are non-rational that we are subject to illusions and fallacies. On the contrary, it is because we are rational.
The standard formulation of such a view is always that we aren’t merely “creaturely” because we’re something more — “rational” or “moral.” This has always struck me as a load of hooey. Really, Roger; we’re moral because we relate to each other as individuals? I don’t see why I should accept this. I think it’s clear that we can be moral when we relate to each other as members of a group. And even if individual relations are what make us free and give us a moral life, I’ve never seen a persuasive argument for why we should think that we, humans, are the only creatures capable of doing this.
My cat, I think, thinks of himself as an individual and relates to my dog as an individual dog. If you disagree, I don’t see that you have any more evidence that I’m wrong than I have that I’m right. I never understood the attractiveness (and it must be attractive, because so many smart people like Scruton choose to endorse it) of the idea that we are so much more self-aware than cats and dogs. What I observe is that we are aware of our own self-awareness, and we’re not aware of our cat’s self-awareness, if any. How you make the leap from this differential awareness to the conclusion that “self-awareness exists for me but not for my cat” has always baffled me. And so I refuse to make the leap.
But the original question was: which idea is more destructive, humans-as-creatures, or humans-as-uniquespecialbeings? Those who say that the former is more destructive probably have in mind the idea that, when we act like creatures, we simply kill or get killed. We fight for survival in a Darwinian way. We don’t, as creatures, ever act altruistically.
But this is wrong, because it presupposes the view of non-human creatures that I’ve already said we don’t have enough evidence to accept; namely, that they’re just un-self-aware, unmoral, unfree, animals. I’d rather put it this way: we are very destructive when we disregard others and act too selfishly and kill and maim in an orgy of violence because we lust for power. No analogies need be given between this destructive behavior and the behavior of any non-human creatures. It’s all us.
I’m more of the opinion that I think Mitchell was getting at in his essay. If we remind ourselves that we are creatures, we acknowledge our limits and our dependence upon the natural world as features of ourselves that all creatures share. It’s the sense of being somehow “special” that gets us into trouble; this idea serves most often as an excuse for fucking around with other life forms than it serves as a reminder that we ought to act morally and with restraint.
So I’m glad we’re creaturely. I’m happy to be a creature. I don’t think I’m any less of a (hu)man by admitting it.
I’ve been interested in Rockstar’s new Western video game since reading this generally positive NYT review by Seth Schissel: “In the more than 1,100 articles I have written for this newspaper since 1996, I have never before called anything a tour de force. Yet there is no more succinct and appropriate way to describe Red Dead Redemption.”
A good review, yes, but who is this guy Seth Schissel? Why should I trust his opinion? I know he writes for the NYT, but so does Thomas Friedman.
Now one of my favorite writers has weighed in on Red Dead Redemption. Joe Abercrombie likes it, too:
To put it simply, Red Dead Redemption is fucking stupendous. . . .
But it’s actually the quieter, often unscripted moments that really soar. Breaking wild horses in the desert as the sunset leaks out over the mesas. Squatting in the snowy trees, buffalo rifle levelled, waiting for that perfect shot as a grizzly bear snuffles past. Hunting for confederate gold among the mountain peaks in a lashing lightning storm. An impromptu gunfight in a saloon after a few too many whiskies after I blew the piano player’s brains out because I just didn’t like what he was playing. Alright, that last one wasn’t a quiet moment, but you see what I’m saying.
To which all I can say is, poor bear!
Now I know that sometimes my favorite writers have tastes of their own that I don’t share (cough, China Mieville, cough, cough). But I have no reason to think that Abercrombie’s tastes run to the video game equivalents of M. John Harrison novels (sorry, China, but he sucks). I suspect, based on what Abercrombie says he likes about this game, that I’d love it, too. Maybe I need to start asking myself: Xbox360, or PS3?