August 2011 Archives

Hurricane Irene Disaster Management

Just Like 1908?

After Hurricane Irene, some people joked that the media sees hurricanes as a grand opportunity to dress up in the newest outdoor gear and brace against the howling wind, downed trees, and rain driving sideways (although sometimes pranksters steal the show). Hurricanes have all the right elements for media profiteering too - drama, death, destruction, and lots of "human interest". But to build drama, you need to build up the storm. On Friday night, August 25th, we linked to these four news stories in successive Tweets:

  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1903 (Published August 26, 2011)
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1908 (Published August 24, 2011)
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1938 (Published August 26, 2011 10:28 p.m. EDT)
  • Hurricane Irene could be the most destructive hurricane to strike New York City since: 1985 (Published August 26, 2011 1:23 a.m.)

Not only can't forecasters predict the power or path of a storm with 100% accuracy, but, as we showed, neither can newspaper reporters. The media can't necessarily be faulted, though; after all, a hurricane is a moving target. In fact, as long as everyone tunes in, the media plays a helpful public safety role: by creating more drama on television than any one person can witness outside, over-the-top media coverage can actually aid public safety officials.

The list of East Coast storms throughout history is extensive, but reporters plucked a somewhat random mix of historical events out of the hundreds available: the so-called Vagabond Hurricane of 1903 produced 65mph winds in Central Park; the deadly New England Hurricane of 1938 was a Category 3 at landfall; and Hurricane Gloria in 1985 struck as a Category 2 hurricane. It's unclear what storm in 1908 the Lehigh Valley Morning Call reporter was talking about, since none of the storms that year amounted to much, and on August 24th, 2011, when the Morning Call published, most reporters were comparing Irene to Hurricane Katrina, not some random storm that blew out to sea in the Caribbean. Maybe the reporter hadn't had their morning coffee.

But there you have it: taken together, it's clear that storms can go many different ways, and we don't have the technical or intuitive abilities to predict them with exact accuracy, or at least to the degree that audiences seem to demand after the event.

That Healthy Cry, The Complainer - Alive and Well

When Irene actually hit, the hurricane caused flooding and destruction not to be trifled with. But as the New York Times reported after the storm, some New Yorkers were peeved at the pre-storm hype. New Yorkers expressed anger at the cops on bullhorns telling people to go inside, anger at the storm itself for not living up to its potential, and of course anger at Mayor Bloomberg. One person complained Bloomberg made people spend too much money: "The tuna fish and the other food, O.K., we're going to eat it. I don't need all this water and batteries, though."

But let's compare this outcome with the great bungling of Katrina in 2005, to see how easily things can go the other way. At least 1,836 people died in Katrina, and property damage was estimated at $81 billion (2005 USD).

FEMA took most of the fall for the Hurricane Katrina management disaster, along with FEMA administrator Michael Brown, who appeared utterly useless despite fervent support from George W. Bush. As we wrote at the time in "FEMA - Turkey Farm Redux?", FEMA had failed US citizens in multiple hurricanes during the administration of George H.W. Bush, and had been expertly revived and made useful during the Bill Clinton administration under the leadership of James Lee Witt. Then George W. Bush decimated the revived FEMA, using it as his father had. As the House Appropriations Committee reported in 1992, FEMA had been used as a "political dumping ground, 'a turkey farm', if you will, where large numbers of positions exist that can be conveniently and quietly filled by political appointment". (Washington Post, July 31)

So given the recent history of Katrina, and the debacles of several state and city governments in last winter's multiple blizzards, it seems inane that so many people who lived through those disasters now fault Bloomberg as "the boy who cried wolf". But then people might complain no matter what, and given the somewhat unpredictable paths of storms, I think everyone would agree that it's better to be alive and complaining than dead and swept out to sea for lack of a government warning.

Assuring Future Disasters are Worse

Of course we don't know how the government would have fared in a worse disaster. And while people complain about the lack of a bigger hurricane, FEMA is currently hindered from helping with Irene. Why? Apparently, a FEMA funding bill is being held up in the Senate while politicians with idiosyncratic proclivities indulge their hypocritical "family values" by meticulously delineating all the organizations that can't be paid with FEMA money.

To our detriment, we ignore larger issues while we complain. FEMA plays a role not only during and after a hurricane, but also in adequately preparing people ahead of time, as we wrote in "FEMA and Disaster Preparedness". Neither FEMA nor state or local governments adequately prepared for Katrina, as we detailed in "Disaster Preparedness - Can We?". Although states and cities didn't play as large a role in the federal government's failings as G.W. Bush would later claim, rewriting history, their role is important.

Of course, disaster preparedness means not only motivating citizens to buy supplies and stay inside, and not only mobilizing a deft response, but also shoring up infrastructure ahead of time. In the wake of Katrina, we all heard about the failure of governments to build adequate levees in New Orleans, an issue Acronym Required wrote about in "Levees - Our Blunder". However, before Katrina, few people realized just how flagrantly officials ignored warnings about the weak levees. When the hurricane breached the walls, politicians acted surprised, that surprise masking the blunt unwillingness of politicians and US citizens to support and fund infrastructure.

We wrote about more widespread infrastructure failings in 2007, in "Guano Takes the Bridge, Pigeons Take the Fall". But infrastructure is easy to ignore. Just as vociferously as citizens complain about the hype preceding Hurricane Irene1, they remain stunningly silent on the lack of infrastructure preparedness. In fact there's loud clamoring to dismantle the very agencies that assure our safety. Obama has tried in some ways to address the infrastructure problem, not without criticism.

In the case of the New Orleans levees, the Times-Picayune reports that although $10 billion has been spent upgrading the levees, the Army Corps of Engineers is giving them a failing grade. The report says that the refurbished levees might withstand a 100-year event, but that a larger event would result in thousands of deaths and billions of dollars in property damage. This was exactly the criticism of the levees after Hurricane Katrina in 2005.


1 Here's an interesting analysis of the hype-factor of news relating to Hurricane Irene. The author uses an analysis of publication volume to argue that the storm was not hyped.

Autism and the Internet, Drugs, Television, Rain, the Victorian Era & the Media

New Scientists Who Don't Do Science

Every so often, actually with disturbing frequency, claims about the underlying cause of autism spring up like fungi in manure after a rain. It's practically required that claims of this genre be built on false premises or make invalid conclusions, like this week's link between internet use and autism. Oxford personality Baroness Susan Greenfield breathed life into this rumor in an interview with New Scientist, then defended herself by saying provocatively: "I point to an increase in the internet and I point to autism, that's all." But where's the evidence, and why is this stuff being published?

Greenfield has been popularizing science for decades, and recently popularizing science at the cost of science itself. In 2008 she warned that children's brains were being destroyed by technology, in a book reviewed by the UK's Times:

"As it happens, her new book, ID: The Quest for Identity in the 21st Century, digresses all over the place in little flash floods of maddening provisos and second thoughts. It's as if she dictated it while bouncing on a trampoline, fixing an errant eyelash and sorting her fraught schedule on a BlackBerry."

Back in 2009, before the UK's Royal Institution fired Lady Greenfield, she argued that the total immersion in "screen technologies" was linked to a "three-fold increase in prescriptions for methylphenidate" (prescribed for attention deficit disorder). She told the paper that people were losing empathy and becoming dependent on "sanitized" screen dialogues. She also complained that packages of meat in supermarkets had replaced "killing, skinning and butchering an animal to eat".

It's hard to criticize people who distort science without seeming to deride all science popularizers. Greenfield falls in the former camp, as many people recognize. 254 people commented on a recent Guardian article claiming that the internet changes people's brains:

  • "That's exactly what my mum said about reading 'The Beano' [A British Comic Strip]."

  • "I hear it gives you cancer as well"

Guardian readers know how to take the piss, but Oxford's Greenfield knows how to get publicity, so she's long engaged in trying to scare people about technology. To her latest, scientists online responded briskly, with vitriol, meaning that in terms of popularity, Greenfield had a field day. We've been following false arguments about autism for a few years, so we wanted to look more closely at how Greenfield's latest claim about the internet causing autism differs from some economists' claim that television caused autism, which we covered back in 2006. For one, back in 2006 they actually did research -- well, economics research.

But Who Needs To Do Research When They'll Print the Stuff You Make Up?

Greenfield ups the ante from her general technophobia of two years ago by appealing not just to fuddy-duddy technophobes but to all parents and their worst nightmares. One day the child seems fine, then something mysterious happens and the child is no longer themselves. What happened? Doctors and scientists have no clinically actionable idea. Greenfield knows.

Perhaps it makes life easier for some families coping with autism to attribute changes in their child to some outside agent. It's also common to say that a crime has been perpetrated by people from another state or town or country. We've seen autism blamed on vaccines, television, rain... Uncomplicated agents that parents can control, like TV, are especially attractive. But where's the evidence? When the New Scientist asked that, Greenfield replied:

"There's lots of evidence, for example, the recent paper "Microstructure abnormalities in adolescents with internet addiction disorder" in the journal PLoS One...There is an increase in people with autistic spectrum disorders. There are issues with happy-slapping, the rise in the appeal of Twitter - I think these show that people's attitude to each other and themselves is changing."

How nimbly she links computer use with "internet addiction disorder" (IAD), which is not recognized by US psychiatrists, with brain change, with behaviors, and even with attitudes. But the paper didn't say anything about attitudes; it didn't prove "addiction", didn't prove detrimental brain changes, didn't prove behavior changes.

Can You Compare the Cognition of Chinese 19 Year Olds Playing Games 12 Hours A Day To 1 Year Old Cooing Babies?

The PLoS One paper deserves more comment than I'm going to devote here. But though PLoS One depends on the community for peer review, and although this paper has over 11,000 views (as of 14/08/11), not one person has peer-reviewed, or "rated", the paper. Nevertheless, it's cited all over the internet as proof that "internet use" does bad stuff to the brain: it "shrinks it", "wrinkles it", "damages", "contracts", "re-wires" it... But the paper is not about "internet use". It's about online gaming.

The PLoS One authors write that the research is particularly important to China because, unlike in the US, in China IAD is recognized and often cited as a big problem. The Chinese vigorously treat the "disorder" with strict treatment regimens, which until 2009 included shock therapy.

The PLoS One authors used addiction criteria (e.g., "do you feel satisfied with Internet use if you increase your amount of online time?") and asked the subjects to estimate how long they had had the addiction. They then used brain imaging to show that brain changes correlated with self-reported duration of online game playing. There were 18 subjects: 12 males, average age 19.5 years, and presumably 6 others (females?) whom the authors do not characterize.

The subjects played online games 8-13 hours a day. I can't evaluate the data; I don't know enough about voxel-based morphometry. But I'm not surprised that someone "playing online games" 8-13 hours a day, 6.5 days a week for 3 years is different from the controls, who were "on the internet" less than 2 hours a day. Likewise, I would expect a soldier engaged in street patrol in Afghanistan 10 hours a day, 6 days a week for three years to be different from someone who walked their dog around the block in sunny suburbia 3 days a week for the last month. (If I were in a joking mood I'd say that kids playing online games 13 hours a day, 6 days a week must have extraordinary abilities to actually still be in college.)

Even if you believe in IAD, the authors acknowledge the study's limitations. They say they don't prove IAD caused the changes; don't prove that the subjects' brains weren't different to begin with; acknowledge that the "IAD duration" measurements (self-assessment) are crude; and concede that the data aren't rigorous enough to conclude the changes were negative.

None of these caveats slowed Greenfield, who cited this paper and linked it to all sorts of unrelated things like "happy-slapping", an awful British fad. But there's nothing inherently sinister about using Twitter, or the internet - it's not related to autism. What makes many of her assertions puzzling is that Greenfield trained as a neuroscientist. Does she not know this stuff? In 2003, she mocked people who attributed anything to "scary technology". So why is she now popularizing the opposite message? Her PLoS One example is nothing more than pulling a study out of thin air and linking it to her own machinations about technology. Claims such as hers provide ripe fodder for quacks, crazies, and zealots.

How Does Technology Change Us? Research Shows Beneficial Effects in Online Gamers

Here's the second instance of "proof" Greenfield gives in the New Scientist interview; note that she again cites an academic paper and links it incongruously to her own made-up stuff. She says:

"...A recent review by the cognitive scientist Daphne Bavelier in the high-impact journal Neuron1, in which she says that this is a given, the brain will change. She also reviews evidence showing there's a change in violence, distraction and addiction in children."

But the Bavelier et al. review says something different. The scientists specifically warn that no research predictably links technology - TV, video games - to brain changes or to behaviors like violence, distraction, or "internet addiction". The authors cite studies showing the research remains too confounded, as they say in their conclusions:

  • "the interpretation of these studies is not as straightforward as it appears at first glance"

  • most reports tabulate total hours rather than the more important content type, and therefore are "inherently noisy and thus provide unreliable data."

  • technology use is "highly correlated with other factors that are strong predictors of poor behavioral outcomes, making it difficult to disentangle the true causes of the observations"

  • Perhaps "children who have attentional problems may very well be attracted to technology because of the constant variety of activities."

Bavelier et al. stress that the effects are unpredictable; for instance, "good technology" like the once-ballyhooed Baby Einstein videos can turn out to have zero or negative effects. Conversely, what is assumed to be "bad technology" can be good. They write:

"action video games, where avatars run about elaborate landscapes while eliminating enemies with well-placed shots, are often thought of as rather mindless by parents. However, a burgeoning literature indicates that playing action video games is associated with a number of enhancements in vision, attention, cognition, and motor control."

This point from Bavelier et al. is quite interesting because it appears to contradict the general conclusions of the PLoS One authors we cited above concerning online gamers -- assuming the study subjects played comparable games. Exploring these different results is potentially more interesting than a rhetorical sleight of hand that tosses in a science citation to falsely bolster gobbledygook.

To wit, the studies Greenfield uses don't support her points. That technology's effects are still unpredictable is widely acknowledged. Greenfield herself used to promote a computer program called MindFit, which claimed to improve mental ability. The game didn't work. But it also didn't make kids pick up knives and murder each other. It's hard to understand Greenfield's motivation for denouncing technology as anything other than provocation.

Greenfield says: "It is hard to see how living this way on a daily basis will not result in brains, or rather minds, different from those of previous generations." But "hard to see" isn't science. A "brain" is not a "mind", nor is it "behavior", nor an "attitude". That's not to say brains don't change, or that technology couldn't affect us. Brains show changes during many activities, often temporarily. It's just to say that technology is not inherently, as she called it, "chilling".

I Point to Television and I Point to Picnics, To Family Dinners, To Teens Doing Charity, To Children Building Sand Castles on Sunny Days

Just as she is now vilifying the internet as a physiological change agent, Greenfield previously claimed that television changes the brain deleteriously. Now she dismisses the notion. When New Scientist asked her: "What makes social networks and computer games any different from previous technologies and the fears they aroused?" she responded:

"The fact that people are spending most of their waking hours using them. When I was a kid, television was the centre of the home, rather like the Victorian piano was. It's a very different use of a television, when you're sitting around and enjoying it with others..."

Nice image, the innocent television, like the innocent Victorian piano. Happy family times of the Victorian Era: singing around the piano, food aplenty, spirits flowing, enlightened, goal-oriented, well-adjusted children unhindered by repressive social situations. Oh wait, it wasn't always like that? We learn more about the good ol' days by venturing dangerously out on the internet, where you can find the following first-hand accounts:

Isabella Read, 12 years old, coal-bearer, as told to Ashley's Mines Commission, 1842: "Works on mother's account, as father has been dead two years. Mother bides at home, she is troubled with bad breath, and is sair weak in her body from early labour. I am wrought with sister and brother, it is very sore work; cannot say how many rakes or journeys I make from pit's bottom to wall face and back, thinks about 30 or 25 on the average; the distance varies from 100 to 250 fathom. I carry about 1 cwt. and a quarter on my back; have to stoop much and creep through water, which is frequently up to the calves of my legs."

Sarah Gooder, 8 years old, trapper, as told to Ashley's Mines Commission, 1842: "I'm a trapper in the Gawber pit. It does not tire me, but I have to trap without a light and I'm scared. I go at four and sometimes half past three in the morning, and come out at five and half past. I never go to sleep. Sometimes I sing when I've light, but not in the dark; I dare not sing then. I don't like being in the pit. I am very sleepy when I go sometimes in the morning."

Greenfield's current glorification of TV defies the fact that TV has been roundly implicated in causing all sorts of antisocial behavior, and not only by Greenfield before she changed her mind.

The Autism TV Link: "Why Not Tie it To Carrying Umbrellas?"

In 2006 Acronym Required used a study by economists linking autism and television to write a satirical ten-step tutorial on how to publish bad science and get lots of media attention for it. The authors proved that a theory's popularity, if brought to the attention of an uncritical media, was independent of your study clearly stating that it found no link between autism and television. You didn't even need to be a scientist.

After reviewing those economists' work, Joseph Piven, director of the Neurodevelopmental Disorders Research Center at the University of North Carolina, weighed in on the autism television-watching idea, asking the Wall Street Journal "[W]hy not tie it to carrying umbrellas?" And so the researchers did! And in 2009, in "It's Back! The Rain Theory of Autism", we described how the same research group that blamed autism on television decided that it wasn't television causing autism, but rain.

The nice thing about making up "science" or just leveraging your status for narcissistic purposes, is that you can change, chameleon-like, at will. If your aim is to generate a headline in mainstream media rather than research, it doesn't matter what the science says. Most people don't remember headlines from one day to the next and they aren't that curious to dig further.

I believe a natural response to Greenfield's wild claims is humor and sarcasm, the same response the Guardian readers had. To Greenfield's latest foray, Carl Zimmer started an amusing twitter exchange with this: "I point to the increase in esophageal cancer and I point to The Brady Bunch. That's all. #greenfieldism".

A string of #greenfieldisms followed, like "@carlzimmer I point to Alzheimer's and I point to cheese doodles. That's all. #greenfieldism". (Of course this territory is risk-ridden, because of the prevalence of actual random "studies", like the one about mice who eat fast food and get Alzheimer's.)

When challenged, Greenfield didn't back down; instead she spewed forth more analogies, like a clogged toilet if test-flushed. Asked for a response to the fact that there's no evidence of detrimental effects of these technologies, she scoffed that you wouldn't see effects for 20 years. With just as absurd a distracting non sequitur, she once asked someone who challenged her technology-is-bad assertions whether they denied smoking causes cancer.

Flexible "Theories" Make For Good Publicity for Scientists, For Newspapers...

I think it's cathartic, funny, and educational to defuse Greenfield's claims with humor, the same response the Guardian readers had. Wicked-fast coordinated Twitter debunking of such people is of course useful and could be made even more so. Unfortunately the issues aren't always as simple as a Greenfieldism. And debunking the rhetoric of individuals seeking publicity on the backs of science is only one angle.

I think it's important to note that it wouldn't be news if there weren't ready and willing news outlets. The New Scientist printed all her assertions about links between technology, brain structure, autism, and behavior. They didn't ask questions. They didn't challenge. They didn't say: wait, isn't autism diagnosed at ages 2-4? Who's propping their 6-month-old up in front of the computer to play war games? Why?

The Guardian, like most papers, publishes articles that range in quality. A comment on the 2009 Guardian article about Greenfield's theories that called the article "absolute nonsense" - "I am surprised that the Guardian has published this... sloppy journalism... absolute drivel" - pulled in 160 "approve" votes, far more than any other comment. So even if readers hate an article, they'll still read it. Media succeeds because of advertising, and hundreds of comments translates to how many hundreds of thousands of hits?

The media is quite capable of selective coverage. They ignore important scientific, political, and economic stories that they consider politically sensitive. But is anti-science coverage ever "censored"? Not if it can drive traffic, and sell ads - provide economic benefit to media outlets.

But to what extent can we accept this concession to the market if it gives us in return uncritical readers, uncritical patients, and uncritical citizens? Does it create an atmosphere amenable to medical quacks? Might it prime a population to be more receptive to political efforts to curb real free speech via social media technologies? Too bad so many potential critics (even bloggers) are involved with or depend on mainstream news outlets, which makes them understandably hesitant to bite the hand that feeds (or might feed) them.


1 Bavelier, D., Green, C.S., & Dye, M. (2010). Children, wired - for better and for worse. Neuron, 67(5), 692-701. doi:10.1016/j.neuron.2010.08.035

Acronym Required writes frequently on the diffusion and distortion of science in politics. We've written about individuals mixing religion with science, and art with science, for instance here.

Science Blogging: The Better Journalism?

Science Journalism Debauchery

Has anyone aside from science bloggers had so many rules imposed on them? OK, maybe science journalists. In the 1990s, when the debate over genetically modified (GM) seeds motivated the headline "MUTANT CROPS COULD KILL YOU" (Express, February 18, 1999), the British government attempted to correct the fear-mongering headlines. That didn't work, so to stem future journalistic liberties of that sort, Parliament tried to subdue the culture that propagated such rumors.

They issued a lengthy report warning of further journalistic depredation from "the approaching era of digital TV" and "increasing ghettoisation". (No mention of the internet.) More journalists needed to be "scientists", they said, after surveying GM stories put out by all of eleven UK publications over two days. Only 17% of the stories were written by science journalists, they found, and none of the commentary came from "science writers". The Science and Technology Committee of the House of Commons, the Royal Society, and SmithKline Beecham suggested punishing future misbehavior, especially getting the facts wrong:

"media coverage of scientific matters should be governed by a Code of Practice which stipulates that scientific stories should be factually accurate. Breaches of the Code of Practice should be referred to the Press Complaints Commission."

Of course an editor at the Independent responded, describing how writers could conquer the facts but still mislead the reader. Thankfully, there's often a compelling counterargument. So in the end, the report's authors settled for a rather bland collection of guidelines dealing with balance, uncertainty, and legitimacy.

And of course while the Parliament fretted about the fate of genetically engineered crops, over at News of The World...

Digital Science Journalism - Publishing Freedom

When science blogging came along it seemed to offer an alternative to the maligned mainstream media science journalism. But despite its growing stature, it too has been besieged by criticism. Some of this came from mainstream media, especially in the beginning.

But interestingly, while traditional science journalism often gets attacked from the outside, online science journalists indulge in lots and lots of self-flagellation. Perhaps this is to be expected from people who labor at the frontier of often-masochistic bench science, replete with high rates of experimental failure. Or perhaps self-criticism makes it easier for science bloggers to generate conversation? Work out their identities? Get traffic?

Of course there's much more to online science journalism than blogging, but I'm going to limit my comments to that. Acronym Required started about seven years ago, and from the rather echoey halls of 2004 science blogging, the medium exploded. Now it impressively fills some of the gaping holes in other science journalism.

We last commented on the state of "science" television programming in 2007 -- and why comment further? The science blogging world offers an amazingly vibrant alternative, filled with witty, reflective, analytical, smart, and generous writers -- especially considering the frequent debauchery of mainstream journalism. Which makes the persistent whine of self-criticism all the more puzzling. Is it some evolutionary thrust gripping science bloggers to impose governing rules on their peers?

This is especially amusing in the context of how blogs started: to augment search. Search itself started in an era that included the (albeit totally unrealistic) perception of the internet as free of boundaries, regulations, and governments. Consider this piece from early 1996:

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity."

Radical, but the philosophy is actually alive and well among quite a few technologists today.

Search back then was pretty rudimentary, thus the role of blogs. To understand just how rudimentary, look at this old Yahoo page with its awesome user interface. (Accompanied by the great ad with a winking person who looks photo-shopped from two different faces, asking awkwardly: "So, My Yahoo! or yours?".)

My point is, the world in which blogging started was simple. For one, an early blog was often not much more than some geek saying -- "hey, I found this cool site": link -- so I'm cool too, right? These "trusted links" made a prehistoric stab at "community" and "personalization" -- because who could trust something called the "World Wide Web", with its random collection of unknown "links"?

Secondly, through innovation if not mindset, the Internet and blogging celebrated independence from tradition. As the internet expanded, many bloggers took to the medium in defiance of the exclusive world and onerous rules of offline publishing. The audience for blogs in the beginning was a very small group of internet users, frontiersmen strongly connected by their independence, who were by default "the community".

Page Views

As the originators of the real commercial internet intended, people soon realized they could make advertising money on the internet, and "pageviews" became an all-important metric. The number of people publishing on the internet grew, and bloggers were then advised to "keep it short". This advice about post length was couched as insight about readers' short attention spans. But it was as much about drawing pageviews and revenue. "Keep it short" and the unspoken "make us money" became compulsory over "make it interesting".

When Tumblr and Twitter arrived on the scene with truly short-form platforms, some of the same organizations then suggested that blogs could actually be a venue for "long-form" writing. Finally, just as the fashion industry moved away from dictating skirt lengths sometime in the 1980s, people eventually stopped dictating ideal post length. Of course they still told people what to do; they just stopped making demands on post length.

To Join Or Not To Join

It's my impression that science bloggers find more rules to bandy about than others, but granted, I don't have enough data to swear that economists, say, are really more laissez-faire. I couldn't possibly document all the various rules that science bloggers have proposed for other science bloggers over the years, but to illustrate my point, I'll mention a few.

First there's the question of where to host your blog. Some insist that science bloggers should join a science blogging network. This came about when the number of online science bloggers reached a point where they could actually form a group. Those advocating joining offer compelling reasons -- traffic, exposure, "community". Now, the number of such science blogging "communities" has surpassed our ability to keep track of them. There are still pros and cons to joining of course, depending on your goals, technical abilities, impressions of the different online venues, how your schedule might accommodate blogging, etc. But your agreeing to join is existentially far more critical to a potential host than it is to you. After all, the hosts wouldn't exist without the bloggers.

Of course the notion of "online community" includes many possibilities. Communities can be collaborative, nurturing, educational - great; or, if you've observed them in action, joining such an online science community can be like joining the military, where participants -- "travel to exotic foreign lands, meet interesting and exciting people, then kill them."

Proving Your Worth

Once the blogger decides where to put their blog, a barrage of other considerations and demands will follow. For example, in 2007 Bloggers for Peer-Reviewed Research Reporting (BPR3) emerged, proposing

"to identify serious academic blog posts about peer-reviewed research by offering an icon and an aggregation site where others can look to find the best academic blogging on the Net."

The idea was interesting as a business aggregation proposal, but the blog "Peer-To-Peer" diplomatically commented that it would be impossible for such an icon to assure the "quality of the blog post itself". Or, we might add, to ensure the quality of the writer's analysis, the quality of the science journal, the quality of the science research, and so on.

Questions of ethics in science blogging are constant, carrying on from earlier discussions of ethics in blogging and science journalism. Way back in 2003, bloggers started wondering whether they should adopt journalists' standards. Perhaps journalism in 2003 was wrapped in mystique that shrouded realities like "MUTANT CROPS COULD KILL YOU", but the drumbeat of ethics has since trailed science bloggers. I can't see how this could be useful; people have written strong arguments noting that blogging wouldn't exist if bloggers weren't ethical. Nor has the whole ethics thing really led to changed behavior as far as I can see, but those who push "ethics" will forever peer over our shoulders.

Still other people demand, as the Parliament did in 1999, that science bloggers/journalists only blog about things they know. Quite a qualitative statement, considering variations in breadth and depth of knowledge among both scientists and journalists. A comment here provides a good rebuttal to that idea. You could also reason that writing solely about what you know at any moment, like the biomechanics of kangaroo tendons, for instance, however interesting that may be to you, might be a good way to become a lazy, narrow-minded, outdated, and bored-stiff writer, to say nothing of your readers.

Recently the subjects of anonymity and pseudonymity re-emerged and preoccupied many science bloggers. I'm not going to weigh down this post talking about that, except to note 1) that the discussion has largely revolved around the value and necessity of a particular type of individual authentication, and 2) that the discussion has largely ignored the politics and economics driving such individual authentication.

Other people try to mark out precise roles for science bloggers/journalists. Science writers should be "educators", they say, or "explainers", or priests of "how things work". Each such suggestion is an invitation for extensive discussion and cogitation, and naturally other people will vehemently disagree with every proposal. So then why don't bloggers just do what suits them best? Or does the constant criticism and re-definition create "community" (and pageviews)?

Getting The Details Right

We've touched on some general instructions to bloggers about how to blog about science. There are more detailed demands too, aimed at all of science blogging and journalism, as the divisions between online and offline media blur. For instance:

  • 2005: Don't use the word "Global Warming": Thus implored some scientists reasoning that people would confuse climate change with their local weather.
  • 2006: Don't use big words: So lectured the film "Flock of Dodos: The Evolution-Intelligent Design Circus". The version I saw at Tribeca in 2006 highlighted words used by scientists in dialogue that were "too big", while characterizing Intelligent Design folks as small-word people, i.e. comparatively approachable and understandable. It employed character assassination on all fronts, advising scientists to drop their testy, pompous attitudes while basically infantilizing people who were religious. Some scientists took this whole thing to heart, overlooking how the movie slyly played to both audiences. People who knew the fairly simple polysyllabic words could be secretly smug that they knew the words when the definitions flashed on the screen like some weird spelling bee; and the other side of the audience could be smug about the portrayal of scientists as surly and pompous.

  • 2007: Don't publish on Fridays: The IPCC panel and hundreds of scientists took flak from the communication "framers" for publishing their 2007 report on a Friday (link accessed 04/11) because 'any veteran journalist would know better'. The same post chastised the report for lacking "drama" like portraying "polar bears on melting ice". The authors gave another paper kudos for "reframing the IPCC report" with a "corruption angle" that gave it "more legs". In other words, said the framers: don't be scientists or reporters, be PR ringmasters.
  • 2008: Don't use the words "denial", "denialist", or "denier": Some scientists said that labeling climate change denialists as such was pejorative.

At the time, each of these instructions drew passionate discussions. But times change -- or don't change. Today it's fine to use "global warming" and "denialist". Science Friday still airs to large audiences on Fridays, and Science Magazine successfully publishes, Friday, after Friday, after Friday.

As charming as "Flock of Dodos" was - do big words really make science/scientists extinct? If we believe that message, should we then be discouraged that in 4 years, the Flock of Dodos trailer has 13,376 views on YouTube, while Hoax of Dodos, the Discovery Institute's pathetic best response, has almost as many -- 11,405 views? OK true, the "Pulled Punches" video (cut scenes from Flock of Dodos) has 18,605 views. But for perspective on what 18,605 views means on YouTube, the video "Emma Watson Punches Interviewer" (Jan 19, 2006) has 4,159,895 (all view numbers as of 05/11). Despite the fact that "Punch" is a catchy keyword to put in your comparatively boring science video, what does all this mean for science and science journalism?

"Blogging" is Worthy

What if none of these rules and instructions make science blogging "better", whatever better is? What if people still deny climate change for example, no matter what the facts and no matter what manner we convey them? While pursuing better communication is incredibly important, as is presenting ideas compellingly, how much of science knowledge lost by miscommunication is really any responsibility or fault of scientists and journalists (online or offline)? How much should be attributed to the political inclinations, personal distractions, and various passions of our audiences?

In reality most science journalists have zero time to write stories, whether or not they have generous deadlines. Those stories must always be very compelling just to get read. The extreme example of this fact, illustrated by a UK journalist, applies to most writing:

"You are writing to impress someone hanging from a strap in the tube between Parson's Green and Putney, who will stop reading in a fifth of a second.

We may not like this. We may wish readers didn't prefer reading science only when it's infused with sex or violence or something that 99% of the population have some opinion on. We may wish that journalists really comprised some "fourth estate", or could make a difference, or could educate readers. What if science writers could just all write about their own fascinating interests, rather than about something dictated by advertising? And what if the audience would just read, and not worry about ethics, badges of legitimacy, or whether education was happening as they read?

But until science journalists make a lot more money or have a lot more time, that won't happen on any large scale. Most science bloggers, though, write for free or for a pittance. And if you write mostly for free on a blog, shouldn't you just write? Or does it have to be for some higher purpose (agreed upon by the consensus of one of many "communities")? Because wasn't that the whole point of blogging?

Science bloggers should keep in mind what they're up against. The lifeblood of mainstream media consists of headlines the likes of this week's "GM Blunder Contaminates Britain With Mutant Crops", about "Frankenstein" crops.

So I'm sure whatever you write, dear blogger, will stand up just fine. And until "offline" journalism reaches different standards, can we stop insisting/demanding/pleading that bloggers "ARE journalists too"? Maybe science blogging could stand on its own apart from journalism if the community of science bloggers trusted themselves.
