In the early 1980s, a little squall blew up on the usually calm sea of pediatrics. Growth hormone had been a rare, expensive medicine, each gram extracted from thousands of pituitary glands and laboriously purified. It was far more precious than gold, and only extremely short children with a proven deficiency in their own growth hormone could hope to get it. Then, in one of the first commercial triumphs of DNA technology, bacteria were persuaded to make human growth hormone. It was purer, more natural to our species, and most important, far less expensive. It wasn't cheap, but it was suddenly accessible to many. Then the other shoe dropped. In the mid-1980s, studies published in the New England Journal of Medicine and elsewhere showed that short children with no growth hormone deficiency could be made a few centimeters taller if they were given human growth hormone. So at the dawn of the age of gene technology, we already faced a decision: Do short children (and their ever-anxious parents) have a "right" to be made taller by hormones even when they have no apparent deficiency or illness—when they are short, but "normal"? And, for that matter, is it ethical for physicians to treat them? And who would pay thousands of dollars a year for this cosmetic endocrinology?
Strong cultural preferences clearly played a crucial role. Shortness is part of the normal variation of the species. Only strong cultural bias could make parents want to "cure" it. And, of course, this bias also intersected with gender. As pediatricians well know, children who were brought in complaining of shortness were traditionally boys. Girls tended to come in when they felt too tall. (It so happens that they can be treated too—with estrogens that accelerate puberty and thus ultimately halt growth.) But girls who feel too short can also get help from growth hormone, and increasingly they request it.
Now before you wring your hands too freely, put yourself in the pediatrician's place. You feel the pinch of societal pressure, perhaps even of ethics, telling you that normal human variation cannot be a treatable illness. But you are also pledged to help the child you have in front of you. Set aside the misguided goals of the parents. The child before you—perhaps at the point of tears—feels out of place for being two standard deviations away from the mean in height. And, objectively, he is out of place. Performance in sports, social acceptance, avoidance of ridicule, even romantic success may well hinge on your decision. You have something to offer. Do you withhold it because your wisdom, or society's, says that the child should accept nature's disposition because it is "normal"—despite the fact that it plainly causes suffering?
Let us say that, considering only the risk-benefit analysis for the short child, you decide to offer the growth hormone. Human discretion being what it is, your pharmaceutical magic becomes known throughout your town. Before very long you are getting visits from boys of average height, wanting to be tall. Finally, a gangly six-foot-two-ish high school freshman shows up. He is already a rising star on the basketball team. If only he could grow another inch or two. . . .
It is actually unclear that growth hormone would have the requested effects in these latter cases (no one is enthusiastic about funding or doing the requisite research). But beginning with a study in 1990, the hormone's height-enhancing effects have been overshadowed by an even more dramatic finding: growth hormone slows some of the major effects of aging. In particular, over a period of six months, bone density and muscle mass were maintained or increased in elderly subjects, while control subjects lost both, as most older people do. Images of octogenarian he-men provoked inquiries to physicians throughout the world.
Cardiac risk has so far limited the use of growth hormone as an elixir of youth, although research continues. But a much more widely used hormone, estrogen—along with its growing corps of chemical cousins—had already served as a fountain of youth of sorts for women. Menopause, a perfectly "natural" end to the ovaries' cycles, occurring around age 50, produces an increase in the risk of osteoporosis and heart disease that, while not as abrupt as a hot flash, is more dramatic and step-like than the corresponding changes in men. Estrogen-replacement therapy protected women against these risks while reducing the transient discomfort of hot flashes in the bargain. It too was controversial. Some groups of women seemed to run an unacceptable risk of breast cancer caused, or encouraged, by the hormone. Some also cited an ominous variety of unknown but conceivable effects of what one critic called "a massive drugging of women." But few women were frightened away.
As the popularity of estrogen replacement grew in the 1990s—predictably, under pressure from women wanting it, not from drug purveyors pushing it—bigger and better studies demonstrated that the cancer risk was small, and that there were benefits to women in terms of overall mortality. Leaving aside quality-of-life questions, the research data justified treatment for most women for at least a few years. Soon there were new compounds such as tamoxifen and the newer raloxifene—called "partial estrogen receptor agonists," since they mimic some effects of estrogen but not all of them—which raised a strong hope of picking and choosing among effects. Tamoxifen reduces breast cancer recurrence in affected women; in a recent study of 13,000 healthy women at high risk for this tumor, breast cancer incidence was reduced by 45 percent. It may prove to protect against bone loss and heart disease while not only not causing breast cancer, but actually helping to prevent it. Not surprisingly, men have started to look longingly at the enhancing effects of estrogen, and studies of androgen-replacement therapy are under way.
Estrogen replacement is arguably a "natural" solution to a natural "problem." Menopause is not an illness but a stage of life, albeit one that entails increased risk of illness. Perhaps that is why replacement therapy slipped into widespread use with relatively little fanfare or philosophical opposition, despite being, in essence, a life-enhancement drug. That was not the reaction to Prozac, an antidepressant that by the early 1990s had become as much a household word as Clorox. Followed in rapid succession by other, related drugs—Zoloft, Paxil, Effexor, and Wellbutrin, to name a few—Prozac would quickly and decisively change the way we think about the meaning and treatment of depression.
Most of these drugs have this in common: they block the transporter for serotonin—the chunk of molecular machinery in the nerve ending that sucks the neurotransmitter back into the cell, only to be released again. This sponging-up phase is vital to the function of the synapse, a busily dynamic, substance-filled gap between nerve cells. Without knowing the underlying biological roots of depression, or whether it is really in any sense caused by the sponging up, we nevertheless do know this: block the transporter, slow the sponging process a fraction of a hundredth of a second, and the extra serotonin hanging around in the gap will, in a week or two, cause the depression to lift. It does not work in every case, and many people suffering from depression get better on their own. But while placebo-treated depressions have a 30 percent improvement rate, serotonin re-uptake treatment yields a 60 percent rate, a doubled improvement.
But by 1990 something new became apparent. Far larger numbers of people were being treated with these drugs—SSRI's, or selective serotonin re-uptake inhibitors—than had ever been treated with antidepressants before. Two hypotheses were suggested to account for this trend. First, it might just be that the side effects were milder. Depressed people taking the older medications had to suffer classic symptoms: dry mouth, constipation, hesitant urination, erectile difficulties. This price, added to the one paid at the cash register, made the older drugs too costly, overall, to be taken except when depression was incapacitating and resistant to prolonged psychotherapy. In addition, such symptoms as mouth dryness were so ubiquitous that the patient always knew the drug was there. Side effects, as it were, constantly tapped you on the shoulder, saying: By the way, friend, you may feel better, but you are not yourself.
Prozac and kin were different. They suppressed libido, true, and that cost loomed larger in people's minds as treatment continued. But otherwise, patients felt physically normal. There were no constant physical reminders. More important, there were no cardiac risks. Also, there were hints that the new agents worked not just against depression but against the pervasive anxiety of our tense, neurotic age. People—one becomes hesitant to call them patients—began to say that they felt more like themselves than ever. They felt as if they were growing into and becoming themselves for the first time. Some psychotherapists, such as Peter D. Kramer, the author of Listening to Prozac, were suggesting that these medicines changed personality, and that people liked the change.
Everyone who wanted a third-party payer to foot the bill for Prozac had to get a diagnosis. This would be a label in the spectrum of mood or perhaps anxiety disorders in the DSM-IV, the standard diagnostic manual of mental health conditions and disorders. But there were tens of millions more such diagnoses than there had been a decade earlier. This could be explained by a vast unrecognized epidemic of depression and anxiety, which psychiatric epidemiologists had always claimed existed. But it might also mean that millions of people who felt kind of blue a lot just wanted to feel better and that psychiatrists were quite prepared to stretch the diagnostic categories.
Children were not exempt from this process: they were getting a different kind of drug for a different reason. Ritalin, or methylphenidate, went from being a medicine to calm kids who were bouncing off the proverbial walls—or at least daydreaming and fidgeting constantly—to being a grade-improvement potion for many millions of children. Find your attention flagging in algebra? Get your mom to persuade the pediatrician to write a scrip for Ritalin. Of whom among us might it not be said that we would do better with enhanced concentration? Studies had long shown that stimulants—Ritalin is related to amphetamines—improve both concentration and performance in normal children and adults, as well as in hyperactive children or those with attention deficits. Soon the diagnosis of attention deficit disorder (ADD) was a widely cast net. A 1996 cover of the New Yorker showed a teacher at the blackboard, where she had carefully written "Readin', 'Ritin', Ritalin."
Consider even the hard-core hyperactive child—four out of five times, a boy. Critics have asked whether this boy is really any different from Huckleberry Finn, or whether that spunky hero of America's classic fiction would, today, have simply been medicated out of existence. Huck Finn comfortable in the church pew on Sunday, making A's in spelling and rhetoric, eating his peas with the best manners of the parlor, while the raft drifts off unoccupied, dead to the hope of adventure. Are we merely medicating boyhood?
Worse, say the critics, what happens when we discover that most children can do better in school and even at games when they take Ritalin, with a minimal burden of adverse effects? In all fairness, in a world where only the schooled succeed, are we right to withhold a means of improving school adaptation for millions of hard-pressed children competing fiercely for a niche? But if we give it to them, then what meaning does attention deficit really have? Does the target of treatment become the report card? And in that case, who pays?
As with depression, when "he doesn't pay attention" gets an official diagnosis in the clinician's manual, we have an illness, and the power of third-party payment is legitimately invoked. Most parents will cheerfully let a child be labeled if that classification sets in motion a train of events that they think will help. Add the fact that adults are getting the diagnosis of ADD more frequently, and children more often the label of depression, and we have millions of new candidates for the corresponding pharmacological treatments. We may have evolved in a world of activity and movement, but we now have a culture of physical stasis and focused, often passive attention. Don't all children, and even all adults, have a right to legitimate medical means to prevent maladaptation to that culture? Or do we mean "enhance adaptation"? Well, that's the rub. If we are preventing or intervening in maladaptation, the illness model fits. If we are only enhancing, we feel queasier invoking the same model.
Which brings us to Viagra. Few drugs in history have attained such widespread use so fast. Tens of millions of prescriptions, fortunes made by large and small investors, a black market, a spate of fake, silly imitations, the promise of several real ones under new patents and brand names, and sly grins on the faces of men and women circling the globe—all this occurred within months of FDA approval, and at a cost of perhaps six deaths, apparently due to the doubling up of the drug's effects with those of nitrates used in the same men for heart disease.
Viagra, officially, is a treatment for male impotence. Or, more exactly, it is an agent that makes penises hard for a couple of hours. It starts its work within 45 minutes and is largely cleared from the blood a few hours later. When a man is not having sex—presumably, most of the time—Viagra and its metabolites are simply not there. For the man not taking nitrates, it is virtually without any known adverse effects. It works by enhancing the effect of nitric oxide, found in just the last decade to regulate blood flow into the penis. For the middle-aged or older man, or the man whose erections have been compromised by diabetes, arterial plaque in the penile vessels, or any of scores of common drugs—including most antidepressants—it is a godsend.
The trouble is, when does it stop being medical treatment and start being, well, fun? This is the question third-party payers have to answer for themselves and their patients before they rule on reimbursement—something they already have to do countless times a day. Hugh Hefner, the recently divorced Playboy billionaire, is by his own account a playboy again, and has publicly declared his affection for Viagra. "It's great being a babe magnet in your seventies," he said in an interview in Gear. "And with Viagra, you can go all night."
Now, this sounds less like medicine and more like fun—although a psychiatrist could no doubt be found who would certify Hef's need to go all night. For millions of others, Viagra makes the difference between zero sexual intercourse and something like an age-appropriate frequency. But what is that? Surely, given the world's dynamic demographics, no respectable third-party payer will deny an eightyish man the right to a few erections a month just because not long ago the average man his age had none. In the third millennium few will argue that age is a good basis for any absolute rule, and even fewer will deny that sex is a vital part of health for most people.
So our beleaguered third-party payer wants to be fair, and wants to avoid forcing doctors to stretch diagnostic categories beyond recognition. But, let's see, 365 nights a year, times ten dollars a pill, times . . . how many men? You get the idea. It's easy to understand why some private payers have balked, and why Medicaid's decision to cover the drug raised almost as many eyebrows as it did penises. When a man like Hugh Hefner, identified all his life with sexual potency, is prepared to declare publicly that the drug is changing his life, we can be sure that Viagrafication will proceed apace. Male hesitancy will be minimal. Women will be cheering too, and not just from the sidelines. A woman I know who is recently menopausal says she has gotten her libido and lubrication back with estrogen therapy. But, she says, "I miss clitoral erections." She is determined to try Viagra to see if it helps.
There is every reason it should. The clitoris is embryologically almost identical to the penis, and the crucial engorgement phenomena work the same way. A recent magazine article by a woman who tried Viagra doesn't cite clitoral erections per se, but says her arousal level rose much faster, and her usual time to orgasm—self-described as normal, but long—was markedly shortened. Overall, she felt more passionate and more satisfied more quickly. True, the drug is not an aphrodisiac; even the package insert says so. But we are talking about an agent that engorges sexual organs. Do we really expect that supreme sex organ, the brain, to just observe dispassionately? Can an engorged vagina, or a hard penis or clitoris, really fail to increase desire?
No good research has proved any female effects—yet. But it is a good bet that such effects will exist. Even if they are weaker than the effects on men, it is not difficult to imagine the market for Viagra nearly doubling on gender grounds alone. And notice too the subtle shift in use. If the drug were prescribed for women, we would no longer just be talking about facilitating intercourse, something that even a biblical mind could judge to be part of normality. If a man takes Viagra and a woman lubricates herself with a tube of jelly, intercourse can be achieved. So we would not just be talking about facilitating sex acts; we would be talking about augmenting pleasure. With the FDA-approved use and the extant decisions about coverage, we already ration sex; soon we may be rationing orgasms.
All the enhancement agents have potential social as well as medical downsides. Growth hormone, like steroids in sports, could generate a competitive cycle, pushing all children to the genetic limits of height. Estrogen replacement takes women out of a natural life cycle phase that has been a part of human experience since we evolved from apes. Ritalin tends to make children more similar and may abolish some aspects of what was once considered normal boyhood. Do we really want all children to sit still? As for Viagra, it has already broken up some marriages even as it has revived others. And it's a safe bet that with millions of newly confident men, massage parlors and escort services will see the drug increase cash flow as well as blood flow.
But this set of problems won't go away. Hand-wringing will not reduce the temptation we all feel to use any safe means to make our lives better. There will be false steps, as with the sleep aid Halcion and with the weight-loss treatment fen-phen. Both were abandoned after some years of use because of adverse side effects. There will also be abuse, as with steroids, stimulants, and any number of prescription narcotics and tranquilizers. But morphine's four-millennium history and the coca leaf's shorter but also ancient lineage should make it clear that the future will continue to provide agents that do not just treat illness but also hold out the promise of enhancing normal functioning. They will be better, safer, and more precise in their actions, more varied in their desired effects and more muted in their potentially harmful ones. Ethical, political, and economic debates will go on as they must, but in the meanwhile we will be riding the roller coaster of polypharmacy.
The debate will not just surround agents whose primary purpose is enhancement, but also the boundaries of use for drugs that are mainly for illness. Cholesterol-lowering agents are a case in point. Until recently, they were given to people at risk for heart attack and stroke who could not bring their serum cholesterol levels under 200 with diet changes alone. But now it has been shown that lowering the number from 200 to 180 further reduces the number of untoward vascular events. Leave aside for the moment the question of whether lowering cholesterol causes other medical problems. Do we expect third-party payers to foot the bill for the drop from 200 to 180, even though 200 is considered the bound of normal? Do we revise "normal" downward to 180? Or do we simply concede that normality has a fuzzy boundary, that health and illness are not clear categories but mere points on a continuum?
If we do concede that point, we will have to admit that every time we discover a drug that urges us along that continuum—away from illness and toward health—we must also face a decision about who should pay. The principle of medical insurance legitimately demands that we be sick before we collect. Or does it? Insurance often or usually covers some vaccinations, mammograms, eye examinations, and other interventions for people who are not ill. But these procedures, it is argued, prevent graver illness and thus greater expense further down the road. This can be said of some enhancement drugs, such as estrogen, but not of others, such as growth hormone or Viagra. For these, we will have to grapple with whether something like short stature or a semisoft penis is a treatable illness.
Or perhaps we should focus on the larger policy question: When a treatment deemed both ethical and safe is shown to be capable of improving the lives of people who are not ill, can insurers help to pay without going bankrupt? And even if they don't go bankrupt, the costs will raise insurance premiums, which will lead to fewer people being insured. One possibility may be for pharmaceutical company marketing departments to be notified (by Medicare, Medicaid, and large private payers) that their pricing policy for a given drug may determine whether they have access to more than 90 percent of their potential market—those who are basically well, but just want to enhance their lives. Such approaches already govern military contracts with large corporations and influence school textbook publishing—the profitability of which can be decided by certain state school boards. These are not ideal mechanisms, but they seem to distribute cost burdens among the various economic entities involved rather than concentrating them in one place.
Another approach would be to give physicians a certain degree of additional power to determine where a given patient or client stands on the continuum between treatment and enhancement. The patient could be required to share more of the cost as the balance shifts toward enhancement. Obviously, this would be open to abuse, but it still might turn out to be fairer than the present all-or-nothing system. Still another approach would be to let the market regulate cost. Within a few years several choices will be available in the sphere of Viagra-like agents and partially estrogenic substances. Competition, combined with corporate perception of a vast enhancement market, ten or a hundred times larger than the conventional treatment market, could drive prices down. Drug pricing patterns do not have to be the same in the future as they have been in the past.
But the enhancement issue extends beyond drugs. Who pays for psychotherapy, under what conditions, and for how long? Is infertility an illness? Most plans don't pay for treatment, whether medical or surgical, and these treatments are very expensive. Should a young infertile woman, or one with a specific clinical diagnosis, have different opportunities for coverage than a woman in her forties who is otherwise healthy and has electively postponed childbearing? Breast reconstruction after mastectomy or correction of a harelip may be medical treatments, but breast augmentation and face-lifts are clearly enhancements. Or are they? Medical ethicists and policymakers will have a great many judgments to make, and the advancing edge of science will only make those judgments harder.