Recently in Business and Technology Category

Rent-Seeking & The Fiscal Cliff

U.S. Treasury Secretary Timothy Geithner mentioned on several Sunday morning shows last week that one program targeted for spending cuts was direct payments to farmers, which would total about $46 billion over ten years and could contribute to the large cuts needed to negotiate away from the "fiscal cliff".


Alfalfa, Alicante, Spain (Wikimedia Commons)

The government initiated direct payments to farmers in the wake of the Great Depression, as a way of encouraging farmers not to abandon rural areas for the cities when crop prices fell because of surpluses. The practice quickly caught on, as Joseph Heller described in his novel Catch-22:

"Major Major's father . . . was a . . . God-fearing, freedom-loving, law-abiding rugged individualist who held that federal aid to anyone but farmers was creeping socialism. . . .

His specialty was alfalfa, and he made a good thing out of not growing any. The government paid him well for every bushel of alfalfa he did not grow. The more alfalfa he did not grow, the more money the government gave him, and he spent every penny he didn't earn on new land to increase the amount of alfalfa he did not produce. Major Major's father worked without rest at not growing alfalfa. On long winter evenings he remained indoors and did not mend harness, and he sprang out of bed at the crack of noon every day just to make certain that the chores would not be done. He invested in land wisely and soon was not growing more alfalfa than any man in the county."

--Joseph Heller, Catch-22 (New York: Scribner Paperback Fiction, 1996), p. 93 (first published in 1961)

[From the back cover of the Journal of Political Economy, Vol. 116, No. 3 (June 2008), published by The University of Chicago Press]

Who Is He - Romney?

The first time we commented on Mitt Romney was in April 2005, when, as governor of Massachusetts, he was changing his position on stem cells. The Massachusetts Senate and House of Representatives had passed a bill allowing human cells to be cultured for stem cell research. Romney hadn't succeeded in stopping the bill through lobbying, and the legislature's overwhelming support for the measure meant any veto would be overridden. Romney explained:

"I think you're going to see at the national level an interest in legislation which limits the creation of new embryos though cloning...So I think you're going to see a national effort to define the boundaries of ethics, and I hope that proceeds."

Seven and a half years ago Romney gave a clear indication about where he was aiming.


Drawing of a man using a compass (Wikimedia Commons)

He was crafting his positions for a run for president in 2008, as we wrote, and figuring out that stem cell research was a controversial "ethical" issue -- his statement signaled that he was in step with what he called a "national effort to define the boundaries". He woodenly hewed to the GOP message, the one that qualified him as A Contender. Only later, in 2006, did he start to craft a more fluent story around his change of position, taking strident stands against abortion and embryonic stem cell research.

Romney's message massaging seems in retrospect indicative not only of his political ambitions, but of the man he is and maybe always was: a man whose convictions are politically defined, not personally held. Although he once hailed the potential of embryonic stem cell research, he then showed the nation how easily he could adapt.

Did Romney's Position on the Stem Cell Issue Indicate His Positioning on All Other Issues?

Back when Bain needed capital in the early days and the only people willing to give him money ran shady Salvadoran shell companies, well, that's where he started. Decades later, when likely campaign donors questioned how he planned to win, he asserted that the only people who didn't like him were the ones taking government hand-outs. The shifting hue and cry of the Romney campaign has been constant: on stem cells, on immigration, on foreign policy, on climate, and so on.

Although as Massachusetts governor he defied Norquistian demands for far-right economic positions, he now clambers up that pole, sleeves rolled to his elbows, ready to get to work dismantling whatever public institutions have profit-making potential or regulatory authority over business. So of course he marched behind the religious right and their "sanctity of life" claims in 2005, because he's an adaptable guy. Which is exactly what concerns us most: in his forever-changing positions he seems totally unattached and untethered from any position or "truth" whatsoever.

What Sort of President Would Romney Be? (C'mon, It'll Be Fun)

Since he's so often equivocal, we're forced to make assumptions about the president he'd be. To do so, we'll look at the people he's campaigning to and for. For example, we've observed that people who scream about the "sanctity of life" often want to get rid of life-saving government agencies like FEMA or the EPA. Strange. They're also keen to halt certain science and technology, the very science and technologies that we know are key to curing disease and enabling a decent quality of life for humans. In fact, if you've ever read the positions of people like those George W. Bush appointed to his President's Council on Bioethics, you'll know that their ideal world would abolish science altogether. Here's the view of Peter Lawler:

"In the Brave New World the tyrants will be the experts...We have a hard time seeing experts as tyrants, because they don't claim to rule through personal authority but on the basis of the impersonal results of scientific studies...most Americans have no idea of the extent to which they have already surrendered their sovereignty to such experts" (Lawler, Peter Augustine: Does Human Nature Have a Future? The end of history, Bobos, and Biotechnology)

Lawler's fear-mongering positions might seem far-fetched, but consider the larger agenda. A rational person would argue for, say, the need for clean water and air, for technology investment, for women's rights to healthcare, for scientists, for expertise, and yes, for experts. But this bioethicist insists that the very scientists who are experts, who would show the health merits of clean air and water, are actually evil, co-opting You, and not to be trusted. (Forget that he says this as the author of a book claiming expertise in bioethics.)

He labels biotechnology morally suspect, along with numerous other things, such as sex for anything other than procreation. In the same book, he writes of evolution:

"The interesting question today is whether Darwin will follow the other two great secularist system builders of the nineteenth century, Marx and Freud, onto the ash heap of history."

This religious play pulls in the most susceptible, those who believe that God reached down and molded everything from planet Earth to penises a couple thousand years ago. It bamboozles people into believing that empirical thinking can be supplanted by simplistic answers provided by politicians. Some of these people then line up to dismantle the very systems that support a civil democracy, erecting flags and chanting U-S-A. Who can argue against U-S-A? No one.

For years, this all seemed to me some bizarre far-off world of an unpleasant and distant past, best ignored. But it's not as far-fetched as it seems if you listen to the current political debates fronted with "ethical" positions.

Or Not

For instance, the Indiana Senate contender, Richard Mourdock, said a few weeks ago that abortion should be banned ("sanctity of life") because God created the children of rape. Mourdock's comment was no less than sociopathic - violent not only to women but to men, insulting to intelligent humans, sacrilegious and vile. Where was presidential candidate Mitt Romney? Silent, and continuing to run TV ads supporting his Indiana GOP candidate...

Silent. A silence that assures supporters he'll toe whatever line is politically prudent. The calculated silence of a church-going man who poses square-jawed and leader-like, yes, but whose compass now seems alarmingly stuck at a magnetic pole, needle wavering this way and that. So how would he be as president? Optimistically, people argue that Romney is a moderate, now just all revved up in campaign mode. I might agree. However, when my thinking trends alarmist, I fear for the liberties we think are important: the right to clean air and water, progress in science that helps people live better lives, rights for women to control their bodies and work for fair salaries, rights for disabled people, immigrants, the poor, and on and on, all the things that democracy promises and a plutocracy wants to threaten...

That's all. Vote.

FDA Goes One Nudge Over "The Line" on Tobacco

A US court blocked the FDA from requiring graphic cigarette warnings this week, calling the graphics "emotion-provoking images...". U.S. District Judge Richard Leon decided in a Washington court yesterday that tobacco companies shouldn't have to display images of diseased lungs or a cadaver bearing chest staples on an autopsy table, because this would "unconstitutionally compel speech." Nor should companies have to print 1-800-QUIT-NOW on cigarette boxes.

I guess what he's saying is that cigarette companies have the right to package fantasies with the tobacco they manufacture, fantasies of how cool smokers are that blithely omit the disease and death their product metes out. The FDA, on the other hand, has no right to present the more accurate side of the story.

You know that smoking causes cancer, heart disease, and vascular problems. Did you know that smokers have seven times the risk of abdominal aortic aneurysm (AAA) of non-smokers? You probably know about second-hand smoke. Did you know about third-hand smoke, which stays in home and hotel walls and ceiling tiles for 30 or 40 years, affecting the health of not only present but future occupants?

The judge says the images on cigarette boxes crossed the line between "factual information" and "government advocacy". The line is "frustratingly blurry", he says, but he sees it.

Malaria Vaccine Data - Release then Patch?

Does International Public Health News Compel Us to Cheer Enthusiastically?

Everyone wants drugs to cure diseases. Everyone wants vaccines to prevent them. And in a world of urgent international public health problems, what is more publicly urgent than developing solutions for AIDS or malaria? Positive news on this front is always welcome, and in line with that, you don't win popularity points by sticking pins in upbeat public health reports, results, or clinical trial data. Popular science journalists generally talk about cool, politically neutral science, or slick technology, or brilliant research successfully advanced to save lives; they write about winning clinical trials that will end scourges, any scourge - cancer, AIDS, hepatitis, obesity... Good news!

Cheerful news, like recent headlines highlighting research showing a vaccine for malaria that may be 55.1% effective. NPR headlines enthused "Vaccine Slashes Infection Risk By Half", whereas a more cautious USA Today said: "Malaria Vaccine May Have Potential to Save Millions".

The RTS,S/AS01 vaccine is a decades-long effort, now a collaboration between the Gates Foundation-funded PATH Malaria Vaccine Initiative (MVI) and GlaxoSmithKline (GSK). The partners recently published interim results in the New England Journal of Medicine (NEJM)2 and presented their results to the media. By all accounts, the Phase III trials delivered very good news.

The mosquito drawing by M. R. Villarreal can be found at Wikipedia1

But What Does "May" Mean, in "May Save Millions"?

No one could say that Bill and Melinda Gates haven't changed the face of international public health. Mr. Gates leads a relentless campaign pushing the power of vaccines; he berates governments that don't vaccinate enough people; and he effectively leverages the media to deliver his messages. Last year the Gates Foundation held a fund-raiser hoping to collect $3.7 billion from governments and instead received $4.3 billion. As the chief executive of the Global Alliance for Vaccines and Immunization (GAVI) put it, "Bill was a little like a poker player who put a lot of chips on the table and scared everyone else off." Perhaps Gates is more a bridge guy, but point taken.

Given this, who would write up the newest Gates Foundation news as "a vaccine shown to be at best 44.9% ineffective in a half-done clinical trial"? No one. With the intense drive for upbeat news, I credit USA Today for their cautious "may save millions". But if you look more closely, for instance read the editorial accompanying the NEJM report3, or listen to scientists around the web and in journals like The Lancet4, or pay attention to the malaria researchers interviewed by Nature News5, the caveats of this recent malaria study grab your attention:

  1. First, there's the announcement itself. The data released is interim data; the full results of the malaria trial will be released in three years. Interim data releases are not unprecedented but past experiences, for instance with an AIDS vaccine, caution us against overly enthusiastic receptions for incomplete trials.
  2. The interim results were reported for children aged 5-17 months, but the target age group is infants aged 6-12 weeks. In other words, these results don't address efficacy of the vaccine for the target group.
  3. NEJM reported that at 12 months, the vaccine reduced episodes of malaria by 55.1%. However, a US military scientist working with Sanaria, a competing vaccine maker, told Nature News that RTS,S actually offered only 35-36% protection 12 months after vaccination. It appears that the efficacy of the vaccine might wane over time.
  4. Although the reports noted reduced mortality, another scientist emphasized to Nature News that the data didn't support that announcement. Scientists hypothesize that the vaccine may just delay infection.
  5. Although the vaccine reportedly cut severe disease in older kids by 47%, combining that data with the available data for the younger kids gave only a 34.8% decrease (see the sketch after this list). This suggests the data for the target group of younger kids might turn out lower than reported in these interim results.
  6. In addition, adverse events such as convulsions and meningitis might be more frequent in the vaccinated group.
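
For readers who want the arithmetic behind points 3 and 5, here is a minimal sketch of how vaccine efficacy is typically computed from incidence rates, and why pooling two age cohorts with different efficacies drags the combined estimate down. The case counts and person-years are hypothetical, chosen only to illustrate the effect; they are not the RTS,S trial's data.

```python
# Minimal sketch: efficacy = 1 - (incidence in vaccinated / incidence in control).
# All numbers below are hypothetical, NOT the RTS,S trial data.

def efficacy(cases_vaccinated, py_vaccinated, cases_control, py_control):
    """Vaccine efficacy from case counts and person-years (py) of follow-up."""
    rate_v = cases_vaccinated / py_vaccinated
    rate_c = cases_control / py_control
    return 1 - rate_v / rate_c

# Hypothetical older cohort (5-17 months), tuned to ~47% efficacy
older = dict(cases_vaccinated=53, py_vaccinated=1000, cases_control=100, py_control=1000)
# Hypothetical younger cohort (6-12 weeks), assumed lower efficacy
younger = dict(cases_vaccinated=80, py_vaccinated=1000, cases_control=100, py_control=1000)

print(f"older cohort:   {efficacy(**older):.1%}")    # 47.0%
print(f"younger cohort: {efficacy(**younger):.1%}")  # 20.0%

# Pooling the raw counts dilutes the older cohort's estimate toward the
# younger cohort's, which is how a 47% figure can shrink to the mid-30s.
pooled = efficacy(
    older["cases_vaccinated"] + younger["cases_vaccinated"],
    older["py_vaccinated"] + younger["py_vaccinated"],
    older["cases_control"] + younger["cases_control"],
    older["py_control"] + younger["py_control"],
)
print(f"pooled:         {pooled:.1%}")               # 33.5%
```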

These might not be showstoppers. For instance, researchers hope that booster shots will improve efficacy. But what if, in the end, the much-touted vaccine turns out not to be a vaccine but just another shot? Scientists and public health workers concern themselves with such non-trivial caveats. What's behind the apparently waning efficacy? How is the adjuvant affecting immunity? Science is exacting even when media reports are not. People also have underlying concerns about what's driving policy: the science, or the press releases?

Is Marginal Progress "Success"?

Two of the people interviewed by Nature News are affiliated with Sanaria, a company that is also developing a malaria vaccine. Sanaria just released its own news of a Phase I malaria vaccine study testing the safety of a live attenuated parasite. Nature interviewed the first author on the Sanaria study published by Science, as well as the CEO, who is the last author on the Science study. They were complimentary of the RTS,S effort, if somewhat critical on some points.6

It's worth noting the history of the Sanaria vaccine, to give context to the executives' comments and perspective. Like the RTS,S vaccine, Sanaria's vaccine is an extremely expensive, tricky, and laborious endeavor. The underlying idea seems promising, but for starters, technicians must painstakingly dissect out the salivary glands of mosquitoes in order to produce the vaccine.7 It's unclear how this can scale.

On top of the laboriousness of vaccine development, once the vaccine was made it didn't seem to work. In their first clinical trial, Sanaria injected 44 subjects; 42 got malaria and 2 didn't, a 4.5% "success" rate. These subjects might have been better protected from malaria lounging in a malaria-endemic region in mosquito-infested huts, but the Sanaria execs quickly pointed out that it wasn't the stunning failure it looked like; rather, it was a trial that "yielded positive results" -- as their press release put it (without including the relevant numbers). The company is buoyed by such "success" and primed for the next, controversial7 phase.

Because vaccines promise a silver-bullet solution to disease, and because we have no viable malaria vaccine yet, every candidate holds promise at the moment.

Sanaria's position as a competitor doesn't invalidate its commentary on RTS,S (complimentary as well as critical), since Sanaria executives voiced reservations shared by many others. An editorial in last week's The Lancet indicated that the release of unorthodox partial results seemed to be more politically than scientifically driven. Diplomatically, The Lancet editors wrote: "although the latest findings are encouraging, we look forward to the full results of the RTS,S/AS01 trial in 3 years time."4

When There is No Treatment, What Does A More Effective "Treatment" Look Like?

Will the upcoming younger-cohort data meet the World Health Organization (WHO) goal of 'protective efficacy of more than 50% against severe disease and death lasting longer than one year'?5 This is an important question. Vaccine experts usually aim for 80% or more efficacy, and representatives for PATH say they hope to get there eventually. So then, does that make this vaccine a beta version?

Is all this media hoopla deserved for a beta-version vaccine? A physician working in Africa distributing bed-nets warned, in a comment at NEJM, against statements that might mislead people "to overestimate the impact of any single new intervention". Acknowledging that this commenter also has vested interests doesn't detract from his message. Seventy-five percent of the MVI/GSK study participants used bed-nets. But would people in real life discontinue the more cumbersome bed-net efforts with a vaccine on the horizon? Will bed-nets still be funded with a 50% effective vaccine? A 30% effective vaccine? If you're a mom and your kid gets a vaccine that is 50% effective, what precautions do you then take to prevent infection? Does a 50-50 vaccine make your life better?

The tremendous investment in vaccines, both in terms of money and expectations, shouldn't slow other prevention and treatment efforts. But realistically, we don't have unlimited resources. It would be naive to think that the prolonged difficulty of vaccine development, the immense investment, and lack of a viable alternative don't influence funding and policy decisions.

Some of the problems scientists identified with this vaccine trial have persisted for years. In a 2006 book chapter recently released online, an economist analyzes RTS,S vaccine data from previous trials (PDF) (HT Nature News5). He reports on waning efficacy and questions how the public health community decides which vaccine candidates merit further investment. Five years later, as the latest trial barely noses over the 50% bar, we grapple with the same issues and questions he raised back then, but billions more dollars have been invested.

Which leads us to wonder: does mid-trial fanfare prime us to react to whatever future malaria vaccine news comes along with knee-jerk positive determinism? What if the younger-cohort data shows only (say, hypothetically) 30% efficacy? Would we ever abandon the effort? As more and more money gets invested, do decision makers begin to act less rationally?

Media reports may boost stocks, may raise money and may discourage competitors, but in the end, the science behind the vaccine, the science that's supposed to underpin public health decisions, is fussy and complicated -- caveats matter. After all, you're asking people and governments to donate tens of billions of dollars, and you're promising 7 billion people that your vaccine will keep millions safe.

Tough Economy for an IPO?

Can we push for an end to malaria as if we were trying to put a computer on every desktop? Does this big money, big marketing, big media approach to public health that some find so jarring actually work? I'm not saying it doesn't. Perhaps it will become a more accepted way of developing medicines and vaccines. Maybe public health needs exactly this kind of paradigm shift.

But even if a 40% or 50% effective vaccine is acceptable from a public health perspective, once this vaccine is developed, governments will still need to consider costs. In this economy, some ask, how much will governments shell out for a vaccine with a 50% efficacy rate? Can you, and should we, market a vaccine with lots of pre-release fanfare to push governments towards buying it?

Asked about cost per vaccine, GSK wouldn't answer directly, but stressed how the company will reinvest all the proceeds to improve the vaccine. Shares of GSK rose slightly on the RTS,S vaccine news, and shares of Agenus, the biotech company that makes the RTS,S vaccine adjuvant, rose from $0.48 prior to the announcement to $2.80 (which got Agenus re-listed). All of which may influence decisions. However, when questioned about the unconventional data release, PATH's MVI director didn't mention politics, billions of invested dollars, stakeholder expectations, or the saved Massachusetts biotech companies. He said: "we felt it was our scientific and ethical duty to make the results public when they become available."5

----------------------------------------------------

1 The mosquito drawing is by Mariana Ruiz Villarreal. It shows the anatomy of a Culex pipiens mosquito (malaria itself is transmitted by Anopheles species). This image was selected as Wikipedia's Picture of The Day for 10 September 2010.

2 The RTS,S Clinical Trials Partnership; "First Results of Phase 3 Trial of RTS,S/AS01 Malaria Vaccine in African Children", New England Journal of Medicine, October 18, 2011, doi:10.1056/NEJMoa1102287

3 White, N., F.R.S.; "A Vaccine for Malaria", New England Journal of Medicine, October 18, 2011, doi:10.1056/NEJMe1111777

4 Editorial; The Lancet, Volume 378, Issue 9802, Page 1528, 29 October 2011, doi:10.1016/S0140-6736(11)61659-0

5 Butler, D.; "Malaria Vaccine Results Face Scrutiny", Nature 478, 439-440, published online 26 October 2011, doi:10.1038/478439a

6 Epstein et al.; "Live Attenuated Malaria Vaccine Designed to Protect Through Hepatic CD8+ T Cell Immunity", published online September 8, 2011; Science, 28 October 2011, Vol. 334, no. 6055, pp. 475-480, doi:10.1126/science.1211548

7 Kappe, S., and Mikolajczak, S.; "Another Shot at a Malaria Vaccine", Science, 28 October 2011, Vol. 334, no. 6055, pp. 460-461, doi:10.1126/science.1213934

8 Farlow, Andrew; "A Review of Malaria Vaccine Candidate RTS,S/AS02A", Chapter Three of The Science, Economics, and Politics of Malaria Vaccine Policy, a report written in 2005 and 2006 and published 14 April 2006 and January 2010. Department of Economics and Oriel College, University of Oxford.

---------------------------------

We previously wrote about Phase II clinical trials of the RTS,S vaccine here. We wrote about US funding for malaria here and here; vaccine strategy here; malaria prevention here and here. We've also written frequently on international public health, including the development of an AIDS vaccine, here and here.

Technology Glitches and Patient Health

Mundane Data Breaches

Mistakes usually occur after a confluence of seemingly small, quotidian errors. Often no one seems to own the problem; it's simply a "glitch". In our technological world, we're quite accustomed to glitches and large data integrity losses. We stick the newly issued credit card into our wallet before even knowing (or caring) about the details of why it was replaced.

Technology "glitches" are not to be trifled with though, they shut down metropolitan train systems, admit ~32,000 students instead of ~16,000, and compromise the most private data of 31,000 people, 100,000 people, 4 million people...They're just boring news.

In medicine, repercussions from computer glitches make train outages seem trivial. From August 2008 through February 2009, a computer glitch in the Veterans Affairs record system tied patients to the wrong medical records, leading to incorrect dosing, delays in treatment, and other errors. In another case, a computer glitch incorrectly cleared women of breast cancer after mammogram screens showed they actually had tumors.

Bodily Injury and Death

Imagine the most unimaginable "glitch" and it's probably already happened. In one famous 1980s case (PDF), cancer patients undergoing radiation treatments from the Therac-25, manufactured by Atomic Energy of Canada Limited (AECL), intermittently received radiation doses 100X the prescribed dose. The radiation could burn through the torso, leaving burn marks on the victim's back. The radiation trauma killed some patients within weeks.

An investigation of the Therac-25's history showed how multiple errors begot fatal injuries. The high doses occurred when a technician first entered an "X", incorrectly selecting a dose in the high-power photon (X-ray) mode; then "cursor up"; then "E" to correctly select a dose in the electron mode; then "Enter", all within 8 seconds. This accidental series of keystrokes activated the high-power beam instead of the low-power beam, but the high-beam diffuser wasn't in place, so intense radiation burned ears, breasts, groins, clavicles.
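
The published analyses of the accidents describe this as a race condition: the treatment set-up task read the operator's entries once, and an edit made during the roughly eight-second set-up window was never noticed. The toy Python sketch below is a hypothetical illustration of that class of bug, not the actual Therac-25 software (which ran on a PDP-11 in assembly); the class and method names are invented for the example.

```python
# Hypothetical sketch of a Therac-25-style stale-read race: a set-up task copies
# shared treatment parameters once, performs slow hardware set-up, and never
# re-checks them, so an operator correction made during set-up is silently lost.
import threading
import time

class TreatmentConsole:
    def __init__(self):
        self.mode = None                 # "X" = high-power photon, "E" = electron
        self.lock = threading.Lock()

    def operator_entry(self, mode):
        with self.lock:
            self.mode = mode

    def setup_and_fire(self):
        with self.lock:
            selected = self.mode         # BUG: sampled once, before set-up finishes
        time.sleep(1.0)                  # stand-in for ~8 s of magnet/turntable set-up
        # A safe design would re-read self.mode and verify interlocks here.
        print(f"Firing beam in mode {selected!r} (console now shows {self.mode!r})")

console = TreatmentConsole()
console.operator_entry("X")              # mistaken selection
setup = threading.Thread(target=console.setup_and_fire)
setup.start()
time.sleep(0.2)
console.operator_entry("E")              # correction arrives while set-up is running
setup.join()                             # prints: Firing beam in mode 'X' (console now shows 'E')
```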

When it happened to one patient, the sound system between the treatment room and the operator wasn't functioning. He had been treated multiple times in the past, so he knew something was wrong when, as he lay on the table for treatment, he suddenly heard unusually loud noises, saw a flash of light, and felt a searing burn. Pause. Then it happened again. The technician only learned something was wrong when the patient pulled himself off the treatment table and began banging on the locked door.

Because the burns happened infrequently, because the error messages were imprecise or oblique, and because technicians, engineers and managers couldn't believe the Therac-25 was malfunctioning, operators continued to injure patients until 1987. In a letter to one hospital physicist, AECL explained that their machines couldn't be malfunctioning because of modifications that improved the "hazard rate" by "at least five orders of magnitude! With such an improvement in safety (10,000,000%) we did not believe there could have been any accelerator malfunction."

A glitch -- an "accelerator malfunction"? Or errors attributable to people's actions?

Errors Upon Errors

The persistence of medical physicists at several hospitals quickened Therac-25 problem solving, but clumsy safety processes, a reluctant manufacturer, and slow FDA action impeded resolution. In the final analysis, a long list of hardware, software, quality assurance, and process issues such as these contributed to the injuries and fatalities:

  • The hardware and software were built by two different companies and only tested when the system was installed in the hospital.
  • Code wasn't independently reviewed.
  • Some engineering errors permitted overrides after malfunctions, other errors allowed for safety check bypasses.
  • The FDA hadn't thoroughly reviewed the Therac-25 (a medical device) because previous models had a reasonable safety record. But the Therac-25 had undergone numerous changes; for instance, manual control systems had been transitioned to software-controlled systems.
  • The company recalled the machines at one point, but because the first patients didn't die, the FDA under-classified the severity of the problem. Yet an intense radiation beam to the head could deliver a more lethal dose than one to another body part, so later incidents were fatal.
  • The medical physicists and the FDA made recommendations to AECL. The company complied with some safety directives, but ignored others.
  • Technicians incorrectly diagnosed issues, for instance in one case a problem was wrongly attributed to a switch. The company replaced the switches. The problem recurred.
  • AECL wrongly told some institutions reporting incidents that theirs was the first report. So each hospital thought its case[s] unique.

Elusive Intangible Injury

In the Therac-25 case, each contributor -- a software programmer, an engineer, a technician, someone in quality assurance, a safety officer, staff at the FDA, a company executive -- made a small mistake. Lawsuits, FDA investigations, out-of-court settlements, and eventually national media exposure brought the case attention. The entire compendium of errors in the Therac-25 case is so classic and dramatic that it's used as a case study. But what about computer glitches where less harm is done to fewer patients over a shorter period of time? Or what if so many are hurt - millions, say - that the plight of any one individual gets diffused? What if the evidence is unclear, when there are no burn marks on the front and back of the body?

Can injured patients be made whole? In Therac-25 cases, the lawyers of families of patients with terminal cancer argued that patients died sooner and suffered more because of their Therac-25 injuries.

What if doctors delay cancer treatment and the person dies an early death from breast cancer, as in the case we mentioned above? What can lawyers prove, and how can victims be compensated? In the case where Veterans Administration patients were matched with the wrong records, the VA denies that there were any negative outcomes. No harm reported, no harm done?

What about still "lesser" glitches, everyday database breaches?

Patients: Students of Misfortune?

US HIPAA law protects a person's medical data, file, or record from being accessed by an unauthorized person. Therefore, someone couldn't enter your doctor's office, grab your paper record from the thousands stuffed floor to ceiling, and forward it on. Sometimes the law seemed overly strict: in the name of HIPAA, unmarried lifelong partners of hospitalized patients were forbidden from learning about their loved one's health.

Although HIPAA has provisions for electronic records, today's larger, more frequent mishaps leave this regime seeming quaint. Consider the recent data breach at Stanford, where the emergency room records of 20,000 patients were posted online. A New York Times article details how it happened. One billing contractor dealt with one marketing contractor, who interviewed one potential employee who leaked the data. The marketing contractor got the data from Stanford Hospital, "converted it to a new spreadsheet and then forwarded it" to a job applicant, challenging them to

"convert the spreadsheet -- which included names, admission dates, diagnosis codes and billing charges -- into a bar graph and charts. Not knowing that she had been given real patient data, the applicant posted it as an attachment to a request for help on studentoffortune.com, which allows students to solicit paid assistance with their work. First posted on Sept. 9, 2010, the spreadsheet remained on the site until a patient discovered it on Aug. 22 and notified Stanford."

Would any of these patients know if they were harmed? What if they had some condition that an insurance company, employer, teacher or other would use to disqualify them, as in this Stanford case? Will the class action lawsuit that's been filed make them whole? What if someone recognized the value of such data and stole it, as in a recent Orlando, Florida case where hospital employees forwarded emergency room data for over 2,000 accident victims to lawyers? In the old days, hauling 20,000 patient files out of a doctor's office unobtrusively would be a challenge. Not so with electronic data; all you need is a glitch.
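
The Stanford chain of hand-offs also shows how small and tangible the fix can be: if the assignment was only to turn billing data into charts, the identifying columns never needed to leave the hospital. Here is a minimal, hypothetical sketch of that kind of pre-hand-off aggregation; the column names are assumed for illustration and are not Stanford's actual schema.

```python
# Hypothetical sketch: aggregate before sharing, so names and admission dates
# (the identifying fields) never reach a contractor who only needs a chart.
import csv
from collections import Counter

def summarize_for_charting(in_path, out_path):
    """Write patient counts per diagnosis code, stripping all identifiers."""
    counts = Counter()
    with open(in_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["diagnosis_code"]] += 1   # keep only what the chart needs
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["diagnosis_code", "patient_count"])
        for code, n in counts.most_common():
            writer.writerow([code, n])

# Example (hypothetical file names):
# summarize_for_charting("er_billing.csv", "chart_input.csv")
```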

HIPAA specifies that each responsible individual can be fined $250,000. Will the job applicant who outsourced her Excel Worksheet problem to StudentofFortune.com pay $250,000? The marketing contractor? The billing contractor? Stanford?

Often the public has no idea about medical injuries resulting from glitches, physical or otherwise, just as they didn't with the Therac-25. If someone dies, as in the Therac-25 case, perhaps the news will get out. But the more common the incidents, the more data is lost, the more benign they are made to seem, the more harm is done, and the less we learn about any particular incident.

You can read all this as a depressingly negative outlook on technology and health, but my view is different. Injuries and deaths due to vague "glitches" can be prevented by fixing small but very tangible errors. The outsourcing of everything has increased the number of contractors, and with all these people come looser interpretations of rules and more diffuse culpability. But it's not just contractors. Many employees are also very cavalier about data. Walk into or call any major medical center and you will see glaring errors. Fixing such errors, attention to detail, and yes, support for regulatory oversight can reduce harm.

Lest You Want to Do More Than Sit Under The Tuscan Sun

Blue Screens

When I traveled to Italy a few years ago, I found the blue screens on computers to be the most memorable travel experience, you know, aside from the terraces and olives and Caravaggios of travel lit - the "Blue Screens of Death". I hadn't seen so many blue screens since the 1990s. Fresh off the plane, the ticket machine took our money without producing train tickets. The station agent cocked his head and displayed doleful eyes at our request for a refund. Like it was the most absurd thing he'd ever heard! Then he walked around the room gesticulating at exhibits A, B, C, D... all blue screens on all computers, and he explained verbosely in Italian: that's why we wouldn't get a refund. He did finally produce our tickets, not because we explained how to fix the screen problem - which he dismissed with a flick of the hand; not because we subsequently insisted that he use a telephone work-around; but most likely because we threatened to sit there forever. We are usually in a big business hurry, but...

That was only the beginning of blue screens in Italy. Blue screens at the airport, blue screens at internet cafes, the hotels, the train stations, the offices, even at the empty museum exhibit -- how? This was a far cry from countries even a decade earlier where the remotest places, say in Asia, got online, stayed up, and did business. That was my Italian experience.

Trials

Today, Italy is still looking a little medieval, isn't it? All that ancient stone architecture with its tiny windows, romantic in one view, lends a sinister backdrop to the bizarre Perugia murder trial, which Perugians complain sullied their town's reputation.

Then there's the other trial, that of the seismologists being tried for information they supposedly didn't provide to the townspeople of L'Aquila before the earthquake. Thousands of scientists have written to protest the prosecution. Actually, the scientists did relay the risk of an earthquake on that day, about 1:1000, but subsequently a government official garbled the message. At the same time, disturbingly, a non-scientist was claiming (falsely) to be able to predict earthquakes based on radon gas measurements. So the radon guy worked the townspeople up, then the official tried to reassure them, and now the scientists are on trial.1

Shutting Down Speech

This week, the computer screens went black in Italy. The government introduced a new wiretapping bill that imposed severe restrictions on online speech. The Italian bill declared that the online author of any 'alleged defamation' would need to correct the problem within 48 hours or be punished by a large fine. Guilt of defamation would be in the eyes of the "defamed". Wikipedia protested with a blackout.

Wikipedia's action got the bill partially changed to apply only to larger businesses, not blogs and Wikipedia. But as Nieman Lab explains, the bill still stands. Furthermore, it's the overall state of press freedom in Italy that's "dismal". As Nieman Lab writes:

"Berlusconi owns the influential private media company Mediaset; he exercises direct control over state television. Italy's 100,000 professional journalists, to get work, must belong to the Ordine dei Giornalisti -- a group that is, in effect, a modern-day guild. This year's Freedom House survey of global press freedom, citing 'heavy media concentration and official interference in state-owned outlets,' ranked Italy as only 'partly free."

It makes it seem like blue screens would be the least of their problems. I know, it's totally biased to judge Italy on these select things, just as it would be to judge Americans on their predilection for cowboy hats, guns and anti-science moves. Nieman Lab interviews several people (from Perugia) who understandably worry about how severely the government threatens press freedom. And of course many other governments, not only Italy's, seek to curtail internet expression. If governments continue to corral the "Internet" -- rather, the now familiar "internet" - will we have to start calling it the "Intranets"?

--------------------

1 In a recent post, we criticized Fox News for profiteering on the weird, absurd, and false earthquake predictions of Jim Berkland. This trial adds another dangerous twist to Berkland's odd-ball predictions. Confusing people about the real risks isn't just bad for science, it's an actual liability for governments.

Hurricane Irene Disaster Management

Just Like 1908?

After Hurricane Irene, some people joked that the media sees hurricanes as a grand opportunity to dress up in the newest outdoor gear and brace against the howling wind, downed trees, and rain driving sideways (although sometimes pranksters steal the show). Hurricanes have all the right elements for media profiteering too - drama, death, destruction and lots of "human interest". But to build drama, you need to build up the storm. On Friday night, August 25th, we linked to these four news stories in successive tweets:

  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1903 (Published August 26, 2011) 25 Aug tweet acronymrequired
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1908 (Published August 24, 2011) 25 Aug tweet acronymrequired
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1938 (Published August 26, 2011 10:28 p.m. EDT) 25 Aug tweet acronymrequired
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1985 (Published August 26, 2011 1:23AM) 25 Aug tweet acronymrequired

Not only can't forecasters predict with 100% accuracy the power or path of a storm, but certainly, as we showed, newspaper reporters can't either. The media can't necessarily be faulted, though; after all, a hurricane is a moving target. In fact, as long as everyone tunes in, the media actually plays a helpful public safety role: by creating more drama on television than any one person can witness outside, over-the-top coverage can actually aid public safety officials.

The list of East Coast storms throughout history is extensive, but reporters plucked a somewhat random mix of historical events out of the hundreds available: the so-called Vagabond Hurricane of 1903 produced 65 mph winds in Central Park; the deadly New England Hurricane of 1938 was a Category 3 at landfall; and Hurricane Gloria in 1985 struck as a Category 2 hurricane. It's unclear what storm in 1908 the Lehigh Valley Morning Call reporter was talking about, since none of the storms that year amounted to much, and on August 24th, 2011, when the Morning Call published, most reporters were comparing Irene to Hurricane Katrina, not some random storm that blew out to sea in the Caribbean. Maybe the reporter hadn't had their morning coffee.

But there you have it: taken together, it's clear that storms can go many different ways, and we don't have the technical or intuitive ability to predict them exactly, or at least not to the degree that audiences seem to demand after the event.

That Healthy Cry, The Complainer - Alive and Well

When Irene actually hit, the hurricane created lots of flooding and destruction not to be trifled with. But as the New York Times reported after the storm, some New Yorkers were peeved at the pre-storm hype. New Yorkers expressed anger at the cops on bullhorns telling people to go inside, anger at the storm itself for not living up to its potential, and of course anger with Mayor Bloomberg. One person complained Bloomberg made people spend too much money: "The tuna fish and the other food, O.K., we're going to eat it. I don't need all this water and batteries, though."

But let's compare this outcome with the great bungling of Katrina in 2005, to see how things can easily go the other way. At least 1,836 people died in Katrina, and property damage was estimated at $81 billion (2005 USD).

FEMA took most of the fall for the Hurricane Katrina management disaster, along with FEMA administrator Michael Brown, who appeared utterly useless despite fervent support from George W. Bush. As we wrote at the time in "FEMA - Turkey Farm Redux?", FEMA had failed US citizens in multiple hurricanes during the administration of George H.W. Bush, and had been expertly revived and made useful during the Bill Clinton administration under the leadership of James Lee Witt. Then George W. Bush decimated the revived FEMA, using it as his father had. As the House Appropriations Committee reported in 1992, FEMA had been used as a "political dumping ground, 'a turkey farm', if you will, where large numbers of positions exist that can be conveniently and quietly filled by political appointment". (Washington Post, July 31)

So given the recent history of Katrina, and the debacles of several state and city governments in last winter's multiple blizzards, it seems inane that so many people who lived through those disasters now fault Bloomberg as "the boy who cried wolf". But then people might complain no matter what, and given the somewhat unpredictable path of storms, I think everyone would agree that it's better to be alive and complaining than dead and swept out to sea for lack of a government warning.

Assuring Future Disasters are Worse

Of course we don't know how the government would have fared in a worse disaster. And while people complain about the lack of a bigger hurricane, FEMA is currently hindered from helping with Irene. Why? Apparently, a FEMA funding bill is being held up in the Senate while politicians with idiosyncratic proclivities indulge their hypocritical "family values" by meticulously delineating all the organizations that can't be paid with FEMA money.

To our detriment, we ignore larger issues while we complain. FEMA has a role not only during and after a hurricane, but in adequately preparing people ahead of time, as we wrote in "FEMA and Disaster Preparedness". Neither FEMA nor state or local governments adequately helped prepare for Katrina, as we detailed in "Disaster Preparedness - Can We?". Although states and cities didn't play as large a role in the federal government's failings as G.W. Bush would later claim, rewriting history, their role is important.

Of course, disaster preparedness means not only motivating citizens to buy supplies and stay inside, not only mobilizing a deft response, but shoring up infrastructure ahead of time. In the wake of Katrina, we all heard about the failure of governments to build adequate New Orleans levees, an issue Acronym Required wrote about in "Levees - Our Blunder". However, before Katrina, few people realized just how flagrantly officials ignored warnings about the weak levees. When the hurricane breached the walls, politicians acted surprised, a surprise that masked the blunt unwillingness of politicians and US citizens to support and fund infrastructure.

We wrote about more widespread infrastructure failings in 2007, in "Guano Takes the Bridge, Pigeons Take the Fall". But infrastructure is easy to ignore. As vociferously as citizens complain about the hype preceding Hurricane Irene,1 they remain stunningly silent on the lack of infrastructure preparedness. In fact, there's loud clamoring to dismantle the very agencies that assure our safety. Obama has tried in some ways to address the infrastructure problem, not without criticism.

In the case of the New Orleans levees, the New Orleans Times-Picayune reports that although $10 billion has been spent upgrading the levees, the Army Corps of Engineers is giving them a failing grade. The report says that the refurbished levees might withstand a 100-year event, but a larger event would result in thousands of deaths and billions of dollars in property damage. This was exactly the criticism of the levees after Hurricane Katrina in 2005.

----------------------

1 Here's an interesting analysis of the hype factor of news relating to Hurricane Irene. The author uses an analysis of publication volume to argue that the storm was not hyped.

Autism and the Internet, Drugs, Television, Rain, the Victorian Era & the Media

New Scientists Who Don't Do Science

Every so often, actually with disturbing frequency, claims about the underlying cause of autism spring up like fungi in manure after a rain. It's practically required that claims of this genre be built on false premises or draw invalid conclusions, like this week's link between internet use and autism. Oxford personality Baroness Susan Greenfield breathed life into this rumor in an interview with New Scientist, then defended herself by saying provocatively: "I point to an increase in the internet and I point to autism, that's all." But where's the evidence, and why is this stuff being published?

Greenfield's been popularizing science for decades, and recently popularizing science at the cost of science itself. In 2008 she warned that children's brains were being destroyed by technology, in a book reviewed by the UK's Times:

"As it happens, her new book, ID: The Quest for Identity in the 21st Century, digresses all over the place in little flash floods of maddening provisos and second thoughts. It's as if she dictated it while bouncing on a trampoline, fixing an errant eyelash and sorting her fraught schedule on a BlackBerry."

Back in 2009, before the UK's Royal Institution fired Lady Greenfield, she argued that total immersion in "screen technologies" was linked to a "three-fold increase in prescriptions for methylphenidate" (prescribed for attention deficit disorder). She told the paper that people were losing empathy and becoming dependent on "sanitized" screen dialogues. She also complained that packages of meat in supermarkets had replaced "killing, skinning and butchering an animal to eat".

It's hard to criticize people who distort science without seeming to deride all science popularizers. Greenfield falls into the former camp, as many people recognize. 254 people commented on a recent Guardian article about her claim that the internet changes people's brains, including:

  • "That's exactly what my mum said about reading 'The Beano' [A British Comic Strip]."

  • "I hear it gives you cancer as well""

Guardian readers know how to take the piss, but Oxford's Greenfield knows how to get publicity, so she's long engaged in trying to scare people about technology. To her latest, scientists online responded briskly, with vitriol, meaning that in terms of popularity, Greenfield had a field day. We've been following false arguments about autism for a few years, so we wanted to look more closely at how Greenfield's latest claim about the internet causing autism differs from some economists' claim that television caused autism, which we covered back in 2006. For one, back in 2006 they actually did research -- well, economics research.

But Who Needs To Do Research When They'll Print the Stuff You Make Up?

Greenfield ups the ante from her general technophobia of two years ago by appealing not just to fuddy-duddy technophobes but to all parents and their worst nightmares. One day the child seems fine, then something mysterious happens and the child is no longer themselves. What happened? Doctors and scientists have no clinically actionable idea. Greenfield knows.

Perhaps it makes life easier for some families dealing with autism to attribute changes in their child to some outside agent. It's also common to say that a crime has been perpetrated by people from another state or town or country. We've seen autism blamed on vaccines, television, rain... Uncomplicated agents that can be controlled by parents, like TV, are especially attractive. But where's the evidence? When the New Scientist asked that, Greenfield replied:

"There's lots of evidence, for example, the recent paper "Microstructure abnormalities in adolescents with internet addiction disorder" in the journal PLoS One...There is an increase in people with autistic spectrum disorders. There are issues with happy-slapping, the rise in the appeal of Twitter - I think these show that people's attitude to each other and themselves is changing."

How nimbly she links computer use with "internet addiction disorder" (IAD), which is not recognized by US psychiatrists, with brain change, with behaviors, and even with attitudes. But the paper didn't say anything about attitudes; it didn't prove "addiction", didn't prove detrimental brain changes, didn't prove behavior changes.

Can You Compare the Cognition of Chinese 19-Year-Olds Playing Games 12 Hours a Day to 1-Year-Old Cooing Babies?

The PLoS One paper deserves more comment than I'm going to devote to it here. But though PLoS One depends on the community for post-publication review, and although this paper has over 11,000 views (as of 14/08/11), not one person has reviewed, or "rated", the paper. Nevertheless, it's cited all over the internet as proof that "internet use" does bad stuff to the brain: it "shrinks" it, "wrinkles" it, "damages", "contracts", "re-wires" it... But the paper is not about "internet use". It's about online gaming.

The PLoS One authors write that the research is particularly important to China because, unlike in the US, in China IAD is recognized and often cited as a big problem. The Chinese vigorously treat the "disorder" with strict treatment regimens that, until 2009, included shock therapy.

The PLoS One authors used addiction criteria (e.g., "do you feel satisfied with Internet use if you increase your amount of online time?") and asked the subjects to estimate how long they had had the addiction. They then used brain imaging to show that brain changes correlated with self-reported duration of online game playing. There were 18 subjects, 12 of them males with an average age of 19.5 years, and presumably 6 others (females?) whom the authors do not characterize.

The subjects played online games 8-13 hours a day. I can't evaluate the data; I don't know enough about voxel-based morphometry. But I'm not surprised that someone "playing online games" 8-13 hours a day, 6.5 days a week for 3 years is different from the controls, who were "on the internet" less than 2 hours a day. Likewise, I would expect a soldier engaged in street patrol in Afghanistan 10 hours a day, 6 days a week for three years to be different from someone who walked their dog around the block in sunny suburbia 3 days a week for the last month. (If I were in a joking mood I'd say that kids playing online games 13 hours a day, 6 days a week must have extraordinary abilities to actually still be in college.)

Even if you believe in IAD, the authors acknowledge the study's limitations. They say they don't prove IAD caused the changes; don't prove that the subjects' brains weren't different to begin with; acknowledge that the "IAD duration" measurements (self-assessment) are crude; and concede that the data aren't rigorous enough to conclude the changes were negative.

None of these caveats slowed Greenfield, who cited this paper and linked it to all sorts of unrelated things like "happy-slapping", an awful British fad. But there's nothing inherently sinister about using Twitter, or the internet - it's not related to autism. What makes a lot of her assertions puzzling is that Greenfield trained as a neuroscientist. Does she not know this stuff? In 2003, she mocked people who attributed anything to "scary technology." So why is she now popularizing the opposite message? Her PLoS One example is nothing more than pulling some study out of thin air and linking it to her own speculations about technology. Claims such as hers provide ripe fodder for quacks, crazies and zealots.

How Does Technology Change Us? Research Shows Beneficial Effects in Online Gamers

Here's the second instance of "proof" Greenfield gives in the New Scientist interview; note that she again cites an academic paper and links it incongruously to her own made-up stuff. She says:

"...A recent review by the cognitive scientist Daphne Bavelier in the high-impact journal Neuron1, in which she says that this is a given, the brain will change. She also reviews evidence showing there's a change in violence, distraction and addiction in children."

But the Bavelier et al review says something different. The scientists specifically warn that no research predictably links technology (TV, video games) to brain changes that produce behaviors like violence, distraction or "internet addiction". The authors cite studies showing the research remains too confounded, as they say in their conclusions:

  • "the interpretation of these studies is not as straightforward as it appears at first glance"

  • most reports tabulate total hours rather than the more important content type, therefore are "inherently noisy and thus provide unreliable data."

  • technology use is "highly correlated with other factors that are strong predictors of poor behavioral outcomes, making it difficult to disentangle the true causes of the observations"

  • Perhaps "children who have attentional problems may very well be attracted to technology because of the constant variety of activities."

Bavelier et al stress that the effects are unpredictable: for instance, "good technology" like the once-ballyhooed Baby Einstein videos can turn out to have zero or negative effects. Conversely, what is assumed to be "bad technology" can be good. They write:

"action video games, where avatars run about elaborate landscapes while eliminating enemies with well-placed shots, are often thought of as rather mindless by parents. However, a burgeoning literature indicates that playing action video games is associated with a number of enhancements in vision, attention, cognition, and motor control."

This point from Bavelier et al is quite interesting because it appears to contradict the general conclusions of the PLoS One authors we cited above concerning online gamers -- assuming the study subjects played comparable games. Exploring these different results is potentially more interesting than a rhetorical sleight of hand that tosses in a science citation to falsely bolster gobbledygook.

To wit, the studies Greenfield uses don't support her points. That technology's effects are still unpredictable is widely acknowledged. Greenfield herself used to promote a computer program called MindFit, which claimed to improve mental ability. The game didn't work. But it also didn't make kids pick up knives and murder each other. It's hard to understand Greenfield's motivation for denouncing technology as anything other than provocation.

Greenfield says: "It is hard to see how living this way on a daily basis will not result in brains, or rather minds, different from those of previous generations." But "hard to see" isn't science. A "brain" is not a "mind", nor is it "behavior", nor an "attitude". That's not to say brains don't change, or that technology couldn't affect us. Brains show changes during many activities, often temporarily. It's just to say that technology is not inherently, as she called it, "chilling".

I Point to Television and I Point to Picnics, To Family Dinners, To Teens Doing Charity, To Children Building Sand Castles on Sunny Days

Just as she now vilifies the internet as a physiological change agent, Greenfield previously claimed that television changed the brain deleteriously. Now she dismisses that notion. When New Scientist asked her, "What makes social networks and computer games any different from previous technologies and the fears they aroused?", she responded:

"The fact that people are spending most of their waking hours using them. When I was a kid, television was the centre of the home, rather like the Victorian piano was. It's a very different use of a television, when you're sitting around and enjoying it with others..."

Nice image, the innocent television, like the innocent Victorian piano. Happy family times of the Victorian era, singing around the piano, food aplenty, spirits flowing, enlightened, goal-oriented, well-adjusted children unhindered by repressive social situations. Oh wait, it wasn't always like that? We learn more about the good ol' days by venturing dangerously out on the internet, where you can find the following first-hand accounts:

Isabella Read, 12 years old, coal-bearer, as told to Ashley's Mines Commission, 1842: "Works on mother's account, as father has been dead two years. Mother bides at home, she is troubled with bad breath, and is sair weak in her body from early labour. I am wrought with sister and brother, it is very sore work; cannot say how many rakes or journeys I make from pit's bottom to wall face and back, thinks about 30 or 25 on the average; the distance varies from 100 to 250 fathom. I carry about 1 cwt. and a quarter on my back; have to stoop much and creep through water, which is frequently up to the calves of my legs."

Sarah Gooder, 8 years old, trapper, as told to Ashley's Mines Commission, 1842: "I'm a trapper in the Gawber pit. It does not tire me, but I have to trap without a light and I'm scared. I go at four and sometimes half past three in the morning, and come out at five and half past. I never go to sleep. Sometimes I sing when I've light, but not in the dark; I dare not sing then. I don't like being in the pit. I am very sleepy when I go sometimes in the morning."

Greenfield's current glorification of TV ignores the fact that TV has been roundly implicated in causing all sorts of antisocial behavior -- and not only by Greenfield herself, before she changed her mind.

The Autism TV Link: "Why Not Tie it To Carrying Umbrellas?"

In 2006 Acronym Required used a study by economists linking autism and television to write a satirical ten-step tutorial on how to publish bad science and get lots of media attention for it. The authors demonstrated that a theory's popularity, once brought to the attention of an uncritical media, was independent of whether the study itself clearly stated there was no link between autism and television. You didn't even need to be a scientist.

After reviewing those economists' work, Joseph Piven, director of the Neurodevelopmental Disorders Research Center at the University of North Carolina, weighed in on the autism television-watching idea, asking the Wall Street Journal "[W]hy not tie it to carrying umbrellas?" And so the researchers did! In 2009, in "It's Back! The Rain Theory of Autism", we described how the same research group that blamed autism on television decided that it wasn't television causing autism, but rain.

The nice thing about making up "science", or just leveraging your status for narcissistic purposes, is that you can change, chameleon-like, at will. If your aim is to generate a headline in mainstream media rather than research, it doesn't matter what the science says. Most people don't remember headlines from one day to the next, and they aren't that curious to dig further.

I believe a natural response to Greenfield's wild claims is humor and sarcasm, the same response the Guardian readers had. To Greenfield's latest foray, Carl Zimmer started an amusing Twitter exchange with this: "I point to the increase in esophageal cancer and I point to The Brady Bunch. That's all. #greenfieldism".

A string of #greenfieldisms followed, like "@carlzimmer I point to Alzheimer's and I point to cheese doodles. That's all. #greenfieldism". (Of course this territory is risk-ridden, given the prevalence of actual random "studies" like the one about mice that eat fast food and get Alzheimer's.)

When challenged, Greenfield didn't back down; instead she spewed forth more analogies, like a clogged toilet being test-flushed. Asked to respond to the fact that there's no evidence of detrimental effects from these technologies, she scoffed that you wouldn't see effects for 20 years. With an equally absurd, distracting non sequitur, she once asked someone who challenged her technology-is-bad assertions whether they denied that smoking causes cancer.

Flexible "Theories" Make For Good Publicity for Scientists, For Newspapers...

I think it's cathartic, funny, and educational to defuse Greenfield's claims with humor. Wicked-fast, coordinated Twitter debunking of such people is of course useful and could be made even more useful. Unfortunately the issues aren't always as simple as a Greenfieldism. And debunking the rhetoric of individuals seeking publicity on the backs of science is only one angle.

I think it's important to note that it wouldn't be news if there weren't ready and willing news outlets. The New Scientist printed all her assertions about links between technology, brain structure, autism, and behavior. They didn't ask questions. They didn't challenge. They didn't say: wait, isn't autism diagnosed at ages 2-4? Who's propping their 6-month-old up in front of the computer to play war games? Why?

The Guardian, like most papers, publishes articles that range in quality. A comment on the Guardian's 2009 article about Greenfield's theories called the piece "absolute nonsense", "sloppy journalism", and "absolute drivel", adding "I am surprised that the Guardian has published this". It pulled in 160 "approve" votes, far more than any other comment. So even if readers hate an article, they'll still read it. Media succeeds because of advertising, and hundreds of comments translate to how many hundreds of thousands of hits?

The media is quite capable of selective coverage. Outlets ignore important scientific, political, and economic stories that they consider politically sensitive. But is anti-science coverage ever "censored"? Not if it can drive traffic and sell ads - that is, provide economic benefit to media outlets.

But to what extent can we accept this concession to the market if it gives us in return uncritical readers, uncritical patients, and uncritical citizens? Does it create an atmosphere amenable to medical quacks? Might it prime a population to be more receptive to political efforts to curb real free speech via social media technologies? Too bad so many potential critics (even bloggers) are involved with or depend on mainstream news outlets, which makes them understandably hesitant to bite the hand that feeds (or might feed) them.

---------

1 Bavelier, D., Green, C.S., & Dye, M. (2010). Children, wired - for better and for worse. Neuron, 67(5), 692-701. doi:10.1016/j.neuron.2010.08.035

Acronym Required writes frequently on the diffusion and distortion of science in politics. We've written about individuals mixing religion with science and art with science, for instance here.

NRDC Founder on Why the US Fails to Take Action on Climate Change

Gus Speth, NRDC founder, book author, law professor, and former academic dean, discusses the root causes of the collective lack of action on climate change and the environment in an interview with the Bulletin of the Atomic Scientists.1 He starts by pointing out that the United States, one of the world's wealthiest countries, is losing ground -- not only in Gross Domestic Product but also on other quality-of-life indicators such as economic equality, life expectancy, and the environment. If the world continues on its current path, he says, climate change will inevitably get worse. Importantly, the impact of continued environmental degradation is entwined with economic decline -- but not in the way that prominent messengers would have you believe.

True, climate change is difficult for individuals to come to terms with, especially if it's not directly impacting them. But misunderstanding of the problem is amplified by what he calls "manufactured reaction". While some people frame it as a science conundrum, it's instead politics and a lack of leadership that are paving the path to continued calamity, Speth says:

"Anxiety about acting on climate change was successfully injected into the Tea Party movement; and, as a result, a large percentage of the Republicans who came into office after the 2010 election were people who were on the record as climate deniers, and now the Congress is full of these people..."

Speth points out how the difference between politics now and in the 1970s hampers action:

"American politics since, say, 1980, has gone seriously downhill. The level of public discourse on issues has deteriorated; the willingness of politicians to take up tough issues has deteriorated; and it's just a very different scene today in our country....

In the 1970s we passed a host of environmental measures, almost always with serious bipartisan support. There wasn't really a polarization on environmental issues between the two parties, certainly not like what we have today. Politics was far more civil, and it was far more bipartisan. For example, Senator Edmund Muskie, a Democrat, was a champion of the Clean Air and Clean Water Acts, but that legislation was also made possible by people like John Sherman Cooper, a Republican, and Howard Baker, also a Republican, and others. I think we've lost a lot of ground politically since that time."

He notes that the Tea Party is a force because of its ability to communicate ideas to the public. On the other hand, effective communication about climate change and the environment has suffered because no one is communicating the most important ideas to the public: not the media, not the president, not environmental groups. On the media, he says:

"...the news media, when they report these events, aren't taking the time to talk to climate scientists about what's going on. The most they do is ask a meteorologist to comment, rather than digging in to get the real story...The coverage of these issues in Europe and Japan is much better, but the US mainstream media won't get into it. I think they're scared of losing viewers, frankly."

On what Obama needs to do:

"I think that he has got to find a way of using the scientific community, and the extraordinary strength of American and international science on climate change, to go to the public and talk about it. He's got to bring out what has happened in terms of this denial syndrome and expose it."

On policy, he says:

"We should establish a declining cap on the carbon entering the economy, sell the allowances for the carbon that does enter, and rebate the proceeds to the American public on a per capita basis."

Speth notes that major environmental groups have become close to Washington, so they now take an incremental approach constrained by what they think politicians can bear. Rather than setting goals based on what really needs to be done on climate change, for instance, action and money today focus on not losing ground won by previous efforts. Speth says that environmental law in its current form exists in a silo; instead, it needs to be integrated with tax law, corporate law, and laws that affect consumers.

Speth also discusses the "growth imperative" - the fact that politicians and corporations focus on growth, when what they're really talking about is profits. Talk about "the economy" is usually based on the crude GDP measure. However, it's a myth that profit creates jobs; in fact our current cycle is one of skyrocketing profits while swaths of workers are laid off. By muddling growth and profits with individual well-being, politicians and corporations can continue to reject investments in clean energy and regulatory attempts to force cleaner manufacturing and production.

There's much more to the interview. Some points are perhaps quite obvious to you or me, but what I like is how the Bulletin of the Atomic Scientists and Speth cut through the morass of excuses, hand-wringing, and finger-pointing that clutters discussions of climate change and the environment. They focus clearly on the underlying problems with law, economics, and politics that smother critical change -- change not as a promise but as action.

1 Gus Speth: "Communicating Environmental Risks in an Age of Disinformation", Bulletin of the Atomic Scientists, July/August 2011, vol. 67, no. 4, 1-7. doi:10.1177/0096340211413559. Article highlights here; full article (subscription) here.

---------------

On Communicating Climate Change: "Communicating Climate Change"

On Climate Change denial: Sea Change or Littoral Disaster

Business and Climate Change: "Carbon Emissions Disclosure Project"

Ice core research to study atmospheric conditions 650,000 years ago: "Holocene Days"

Politics and climate change: "Will Loose Lips - Or Global Warming - Sink Ships?".

Carbon emissions regulation after Katrina: "The Environment & Katrina-Slick Oil Fallout"

Drought in the "Amazon", and in "Australia".

Science research communication and climate change: "Research, Politics and Working Less", and "Science Communication".

Notes in June 2011: Cell Phone Warnings, Fossil Teeth

  • Cell Phone Warnings

    Recently, the World Health Organization's (WHO) International Agency for Research on Cancer (IARC) classified the cancer risk associated with cell phones as Group 2B: possibly carcinogenic to humans, based on its analysis of the available studies. From greatest to lowest risk, the classifications are Group 1: carcinogenic to humans; Group 2A: probably carcinogenic; Group 2B: possibly carcinogenic; Group 3: not classifiable as to carcinogenicity; Group 4: probably not carcinogenic.

    Scientists and journalists responded to this with their own interesting and sometimes quirky analyses. Many said the new information made them feel safe about cell phones and pointed out that the 2B group includes coffee. Others said they were concerned about the new classification and focused on the fact that the 2B group includes DDT. Still others argued in more complicated ways - since DDT only affects eagles' eggs, they felt OK about cell phones. Some people reasoned that they know with certainty that tobacco is carcinogenic, and cell phones aren't in that category. How do people decide how to judge risk?

    Because logically, of course, some of this reasoning breaks down. It's not clear what people mean when they announce they'll take a risk with cell phones *because coffee is a possible carcinogen too*. Most likely they haven't read the research on the possible/maybe/sometimes connection between coffee and bladder cancer (the deciding factor for IARC on coffee). No, they're not thinking *bladder cancer*; they're thinking they'll take their chances with cell phones since they drink coffee all the time. But possible/maybe/sometimes isn't really reassurance.

    Some people say that since cell phones have been in use for 15 years or so, we would know if they caused cancer. But the use patterns were different, as was the strength of the signal. And recall that cigarettes were only widely acknowledged to be carcinogenic in the 1950s and 1960s, when people had been smoking for hundreds of years. Then it took decades for that research to be acted upon. And people still smoke, no matter how clear it is that smoking causes cancer. At the present stage of cell phone research, we might not even know enough about physics and physiology to understand how cell phones do or don't cause cancer. It adds up to a lot of unknowns.

    But still, everybody wants an answer. So do journalists and bloggers feel compelled to try to give one? This is sort of funny, since no one really knows yet. But science journalists should understand how research works - the inherent uncertainty, the risks, and the unpredictability of evolving health research. So why feel compelled to provide an answer? Personally (see, because we can't help ourselves), I think there's enough research that I won't walk around with my cell phone in my front pocket or stick a little mini cell phone inside my ear all day and night. And I hate to say this, but I really do want to see more non-industry research. But that's based on what I know of the research, science, economics, and politics.

  • Our Ancestors' Social Groups...Two Million Years Ago

    Scientists looked at the teeth of two-million-year-old fossils and found that female hominids were more likely to leave the area they were born in, whereas males were more likely to stay close to the cave they were born in...Oh wait, that's not catchy. We should say something like this: "Ancient male hominids had 'foreign brides'", or hominid men "like[d] their man caves", they were "mama's boys" or were "homebodies"? See, all the good ones are taken. But by all means, lead anachronistically to catch the reader's attention.

    "Foreign Brides"? Really? It's not cool enough that scientists figured out how to analyze the teeth of our human ancestors from 2 million years ago in order to determine their possible social group structure? 1

    Using newly developed laser technology, Copeland et al. profiled strontium levels in teeth from Australopithecus africanus and Paranthropus robustus, and from modern plants and animals around two caves in South Africa. Strontium moves up the food chain from plants to animals and accumulates in developing teeth until about the age of eight. Scientists can analyze strontium isotope ratios in teeth and compare them to the surrounding bedrock to determine birthplace. In this study, the two caves sit within a band of dolomite bedrock in South Africa, surrounded by non-dolomite geology. The researchers designated the dolomite band as local and the non-dolomite regions farther afield (~3-30 km) as non-local.

    The teeth from both species were previously found to be similar in size, but importantly, females typically have smaller teeth than males. The investigators found that the smaller, presumably female, australopithecine teeth were more likely to have non-local strontium profiles, while the males' teeth were more likely to have a strontium profile reflecting their dolomite home turf. A probable explanation is that the females left the social group they were born into. This conclusion is consistent with the pattern of female dispersal in our nearest living relatives, chimpanzees and bonobos. By comparison, in gorillas and other more distantly related primates, males tend to leave their natal group. A minimal sketch of this local versus non-local classification logic follows the reference below.

    1 Copeland et al., Nature 474, 76-78 (2 June 2011). doi:10.1038/nature10149
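
To make the local versus non-local logic above concrete, here is a minimal sketch in Python. The isotope-ratio range, the specimen values, and the sex assignments are all invented for illustration; the actual study measured strontium isotope ratios in tooth enamel and compared them to the range found in plants and animals living on the local dolomite:

```python
# Minimal sketch of the local vs. non-local classification described above.
# All values are invented; the study compared strontium isotope ratios
# (87Sr/86Sr) in tooth enamel against the range measured on the local
# dolomite bedrock around the two caves.

LOCAL_RANGE = (0.726, 0.736)  # hypothetical isotope-ratio range for the dolomite band

def is_local(sr_ratio: float, local_range=LOCAL_RANGE) -> bool:
    """A tooth counts as 'local' if its ratio falls within the bedrock range."""
    lo, hi = local_range
    return lo <= sr_ratio <= hi

# Hypothetical specimens; sex is inferred from tooth size (smaller = female).
teeth = [
    {"sex": "female", "sr_ratio": 0.745},
    {"sex": "female", "sr_ratio": 0.722},
    {"sex": "female", "sr_ratio": 0.731},
    {"sex": "male", "sr_ratio": 0.729},
    {"sex": "male", "sr_ratio": 0.733},
    {"sex": "male", "sr_ratio": 0.730},
]

for sex in ("female", "male"):
    group = [t for t in teeth if t["sex"] == sex]
    local_count = sum(is_local(t["sr_ratio"]) for t in group)
    print(f"{sex}: {local_count} of {len(group)} teeth have local strontium profiles")
```

In the actual study, a larger share of the smaller (presumed female) teeth had non-local profiles, which is the pattern consistent with females dispersing from their natal group.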
