Monday, October 31, 2011

565. Concerns Are Raised About Genetically Engineered Mosquitoes


Genetically modified Aedes aegypti mosquito to fight dengue fever

By Andrew Pollack, The New York Times, October 30, 2011

These mosquitoes are genetically engineered to kill — their own children.

Researchers on Sunday reported initial signs of success from the first release into the environment of mosquitoes engineered to pass a lethal gene to their offspring, killing them before they reach adulthood.

The results, and other work elsewhere, could herald an age in which genetically modified insects will be used to help control agricultural pests and insect-borne diseases like dengue fever and malaria.
But the research is arousing concern about possible unintended effects on public health and the environment, because once genetically modified insects are released, they cannot be recalled.
Authorities in the Florida Keys, which in 2009 experienced its first cases of dengue fever in decades, hope to conduct an open-air test of the modified mosquitoes as early as December, pending approval from the Agriculture Department.
“It’s a more ecologically friendly way to control mosquitoes than spraying insecticides,” said Coleen Fitzsimmons, a spokeswoman for the Florida Keys Mosquito Control District.
The Agriculture Department, meanwhile, is looking at using genetic engineering to help control farm pests like the Mediterranean fruit fly, or medfly, and the cotton-munching pink bollworm, according to an environmental impact statement it published in 2008. Millions of genetically engineered bollworms have been released over cotton fields in Yuma County, Ariz.
Yet even supporters of the research worry it could provoke a public reaction similar to the one that has limited the acceptance of genetically modified crops. In particular, critics say that Oxitec, the British biotechnology company that developed the dengue-fighting mosquito, has rushed into field testing without sufficient review and public consultation, sometimes in countries with weak regulations.
“Even if the harms don’t materialize, this will undermine the credibility and legitimacy of the research enterprise,” said Lawrence O. Gostin, professor of international health law at Georgetown University.
The first release, which was discussed in a scientific paper published online on Sunday by the journal Nature Biotechnology, took place in the Cayman Islands in the Caribbean in 2009 and caught the international scientific community by surprise. Oxitec has subsequently released the modified mosquitoes in Malaysia and Brazil.
Luke Alphey, the chief scientist at Oxitec, said the company had left the review and community outreach to authorities in the host countries.
“They know much better how to communicate with people in those communities than we do coming in from the U.K.,” he said.
Dr. Alphey was a zoology researcher at Oxford before co-founding Oxitec in 2002. The company has raised about $24 million from investors, including Oxford, he said. A major backer is East Hill Advisors, which is run by the New England businessman Landon T. Clay, former chief executive of Eaton Vance, an investment management firm.
Oxitec says its approach is an extension of a technique used successfully for decades to suppress or even eradicate pests, which involves the release of millions of sterile insects that mate with wild ones, producing no offspring.
But the technique has not been successfully used for mosquitoes, in part because the radiation usually used to sterilize the insects also injures them, making it difficult for them to compete for mates against wild counterparts.
Oxitec has created Aedes aegypti mosquitoes, the species that is the main transmitter of the dengue and yellow fever viruses, containing a gene that will kill them unless they are given tetracycline, a common antibiotic.
In the lab, with tetracycline provided, the mosquitoes can be bred for generations and multiplied. Males are then released into the wild, where tetracycline is not available. They live long enough to mate but their progeny will die before adulthood.
The study published on Sunday looked at how successfully the lab-reared, genetically modified insects could mate. About 19,000 engineered mosquitoes were released over four weeks in 2009 in a 25-acre area on Grand Cayman island.
Based on data from traps, the genetically engineered males accounted for 16 percent of the overall male population in the test zone, and the lethal gene was found in almost 10 percent of larvae. Those figures suggest the genetically engineered males were about half as successful in mating as wild ones, a rate sufficient to suppress the population.
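The "about half as successful" estimate can be checked with a quick back-of-envelope calculation. The fractions below are the article's; the proportional-mating model is an illustrative assumption, not the study's published method:

```python
# Sketch: infer the relative mating competitiveness of engineered males
# from the trap figures, assuming larvae carry the lethal gene in
# proportion to the matings won by engineered males.
male_fraction_ge = 0.16    # engineered share of males in the test zone
larvae_fraction_ge = 0.10  # share of larvae found carrying the lethal gene

# With competitiveness c (wild males = 1), the expected lethal-gene
# larvae fraction is c*f / (c*f + (1 - f)). Solving for c:
f, p = male_fraction_ge, larvae_fraction_ge
c = p * (1 - f) / (f * (1 - p))
print(round(c, 2))  # roughly 0.58, i.e. about half as successful
```

The result (~0.58) matches the article's qualitative claim that engineered males mated about half as well as wild ones.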
Oxitec has already said a larger trial on Grand Cayman island in 2010 reduced the population of the targeted mosquito by 80 percent for three months. That work has not yet been published.
Dr. Alphey said the technique was safe because only males were released, while only females bite people and spread the disease, adding that it should have little environmental impact. “It’s exquisitely targeted to the specific organism you are trying to take out,” he said.

The company is focusing on dengue fever rather than malaria because a single mosquito species is responsible for most of its spread, while many species carry malaria. Also, unlike for malaria, there are no drugs to treat dengue, and bed nets do not help prevent the disease because the mosquito bites during the day.
There are 50 million to 100 million cases of dengue each year, with an estimated 25,000 deaths. The disease causes severe flulike symptoms and occasionally, hemorrhagic fever.
The Oxitec technique, however, is not foolproof.
Alfred M. Handler, a geneticist at the Agriculture Department in Gainesville, Fla., said the mosquitoes, while being bred for generations in the lab, can evolve resistance to the lethal gene and might then be released inadvertently.
Todd Shelly, an entomologist for the Agriculture Department in Hawaii, said in a commentary published on Sunday by Nature Biotechnology that 3.5 percent of the insects in a lab test survived to adulthood despite presumably carrying the lethal gene.
Also, the sorting of male and female mosquitoes, which is done by hand, can result in up to 0.5 percent of the released insects being female, the commentary said. If millions of mosquitoes were released, even that small percentage of females could lead to a temporary increase in disease spread.
Oxitec and a molecular biologist, Anthony A. James of the University of California, Irvine, say they have developed a solution — a genetic modification that makes female mosquitoes, but not males, unable to fly. The grounded females cannot mate or bite people, and separating males from females before release would be easier.
In a test in large cages in Mexico, however, male mosquitoes carrying this gene did not mate very successfully, said Stephanie James, director of science at the Foundation for the National Institutes of Health, which oversaw the project.
In Arizona, pink bollworms sterilized by radiation have already helped suppress the population of that pest. To monitor how well the program is working, the sterile bugs are fed a red dye. That way, researchers can tell if a trapped insect is sterile or wild.
But the dye does not always show up, leading to false alarms that wild bollworms are on the loose. Giving the sterilized bugs a coral gene that makes them glow with red fluorescence is a better way to identify them, said Bruce Tabashnik, an entomologist at the University of Arizona. He is an author of a study published in the journal PLoS One in September.
Experts assembled by the World Health Organization are preparing guidelines on how field tests of genetically modified insects should be conducted. Proponents hope the field will not face the same opposition as biotechnology crops.
“You don’t eat insects,” said Dr. James of the Foundation for the National Institutes of Health. “This is being done for a good cause.”

Sunday, October 30, 2011

564. Extreme Melting On Greenland Ice Sheet, Team Reports; Glacial Melt Cycle Could Become Self-Amplifying


By ScienceDaily, October 25, 2011 

Marco Tedesco standing on the edge of one of four moulins (drainage holes) he and his team found at the bottom of a supraglacial lake during the expedition to Greenland in the summer of 2011. (Credit: P. Alexander)
The Greenland ice sheet can experience extreme melting even when temperatures don't hit record highs, according to a new analysis by Dr. Marco Tedesco, assistant professor in the Department of Earth and Atmospheric Sciences at The City College of New York. His findings suggest that glaciers could undergo a self-amplifying cycle of melting and warming that would be difficult to halt.

"We are finding that even if you don't have record-breaking highs, as long as warm temperatures persist you can get record-breaking melting because of positive feedback mechanisms," said Professor Tedesco, who directs CCNY's Cryospheric Processes Laboratory and also serves on the CUNY Graduate Center doctoral faculty.

Professor Tedesco and his team collected data for the analysis this past summer during a four-week expedition to the Jakobshavn Isbræ glacier in western Greenland. Their arrival preceded the onset of the melt season.
Combining data gathered on the ground with microwave satellite recordings and the output from a model of the ice sheet, he and graduate student Patrick Alexander found a near-record loss of snow and ice this year. The extensive melting continued even without last year's record highs.

The team recorded data on air temperatures, wind speed, exposed ice and its movement, the emergence of streams and lakes of melt water on the surface, and the water's eventual draining away beneath the glacier. This lost melt water can accelerate the ice sheet's slide toward the sea where it calves new icebergs. Eventually, melt water reaches the ocean, contributing to the rising sea levels associated with long-term climate change.

The model showed that melting between June and August was well above the average for 1979 to 2010. In fact, melting in 2011 was the third most extensive since 1979, lagging behind only 2010 and 2007. The "mass balance," or amount of snow gained minus the snow and ice that melted away, ended up tying last year's record values.

Temperatures and an albedo feedback mechanism accounted for the record losses, Professor Tedesco explained. "Albedo" describes the amount of solar energy absorbed by the surface (e.g. snow, slush, or patches of exposed ice). A white blanket of snow reflects much of the sun's energy and thus has a high albedo. Bare ice -- being darker and absorbing more light and energy -- has a lower albedo.

But absorbing more energy from the sun also means that darker patches warm up faster, just like the blacktop of a road in the summer. The more they warm, the faster they melt.

And a year that follows one with record high temperatures can have more dark ice just below the surface, ready to warm and melt as soon as temperatures begin to rise. This also explains why more ice sheet melting can occur even though temperatures did not break records.

Professor Tedesco likens the melting process to a speeding steam locomotive. Higher temperatures act like coal shoveled into the boiler, increasing the pace of melting. In this scenario, "lower albedo is a downhill slope," he says. The darker surfaces collect more heat. In this situation, even without more coal shoveled into the boiler, as a train heads downhill, it gains speed. In other words, melting accelerates.
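The locomotive analogy can be made concrete with a toy feedback loop. All numbers below are illustrative placeholders, not measurements or a glaciological model; the point is only that melt accelerates even when the forcing never changes:

```python
# Toy albedo feedback: melting exposes darker ice, lowering albedo,
# which raises absorbed energy and speeds further melting.
albedo = 0.8                 # fresh snow reflects roughly 80% of sunlight
melt_rates = []
for day in range(5):
    absorbed = 1.0 - albedo  # fraction of solar energy absorbed
    melt_rates.append(absorbed)
    albedo = max(0.4, albedo - 0.1 * absorbed)  # melt darkens the surface

# Melt rate rises day over day with constant "coal" (solar input).
print([round(m, 3) for m in melt_rates])
```

Each day's melt darkens the surface slightly, so the next day absorbs more energy: the train gains speed downhill without extra coal.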

Only new falling snow puts the brakes on the process, covering the darker ice in a reflective blanket, Professor Tedesco says. The model showed that this year's snowfall couldn't compensate for melting in previous years. "The process never slowed down as much as it had in the past," he explained. "The brakes engaged only every now and again."

The team's observations indicate that the process was not limited to the glacier they visited; it is a large-scale effect. "It's a sign that not only do albedo and other variables play a role in acceleration of melting, but that this acceleration is happening in many places all over Greenland," he cautioned. "We are currently trying to understand if this is a trend or will become one. This will help us to improve models projecting future melting scenarios and predict how they might evolve."

Additional expedition team members included Christine Foreman of Montana State University, and Ian Willis and Alison Banwell of the Scott Polar Research Institute, Cambridge, UK.

Professor Tedesco and his team provide their preliminary results on the Cryospheric Processes Laboratory webpage (http://greenland2011.cryocity.org/). They will be presenting further results at the American Geophysical Union (AGU) meeting in San Francisco on December 5 at 9 a.m. and December 6 at 11:35 a.m.

The research was supported by the National Science Foundation and the NASA Cryosphere Program. The World Wildlife Fund is acknowledged for supporting fieldwork activities.

Story Source:
The above story is reprinted from materials provided by City College of New York.

563. Do Bacteria Age?


Bacteria multiply in pairs
By ScienceDaily, October 27, 2011

When a bacterial cell divides into two daughter cells and those two cells divide into four more daughters, then 8, then 16 and so on, the result, biologists have long assumed, is an eternally youthful population of bacteria. Bacteria, in other words, don't age -- at least not in the same way all other organisms do.

But a study conducted by evolutionary biologists at the University of California, San Diego questions that longstanding paradigm. In a paper published in the November 8 issue of the journal Current Biology, they conclude that not only do bacteria age, but that their ability to age allows bacteria to improve the evolutionary fitness of their population by diversifying their reproductive investment between older and more youthful daughters. An advance copy of the study appears this week in the journal's early online edition.

"Aging in organisms is often caused by the accumulation of non-genetic damage, such as proteins that become oxidized over time," said Lin Chao, a professor of biology at UC San Diego who headed the study. "So for a single celled organism that has acquired damage that cannot be repaired, which of the two alternatives is better -- to split the cellular damage in equal amounts between the two daughters or to give one daughter all of the damage and the other none?"
The UC San Diego biologists' answer -- that bacteria appear to give more of the cellular damage to one daughter, the one that has "aged," and less to the other, which the biologists term "rejuvenation" -- resulted from a computer analysis Chao and colleagues Camilla Rang and Annie Peng conducted on two experimental studies. Those studies, published in 2005 and 2010, attempted unsuccessfully to resolve the question of whether bacteria aged. While the 2005 study showed evidence of aging in bacteria, the 2010 study, which used a more sophisticated experimental apparatus and acquired more data than the previous one, suggested that they did not age.

"We analyzed the data from both papers with our computer models and discovered that they were really demonstrating the same thing," said Chao. "In a bacterial population, aging and rejuvenation goes on simultaneously, so depending on how you measure it, you can be misled to believe that there is no aging."

In a separate study, the UC San Diego biologists filmed populations of E. coli bacteria dividing over hundreds of generations and confirmed that the sausage-shaped bacteria divided each time into daughter cells that grew elongated at different rates -- suggesting that one daughter cell was getting all or most of the cellular damage from its mother while the other was getting little or none. In a time-lapse film, one bacterium can be seen dividing over 10 generations into roughly 1,000 bacteria in a period of five hours.

"We ran computer models and found that giving one daughter more of the damage and the other less always wins from an evolutionary perspective," said Chao. "It's analogous to diversifying your portfolio. If you could invest $1 million at 8 percent, would that provide you with more money than splitting the money and investing $500,000 at 6 percent and $500,000 at 10 percent?"

"After one year it makes no difference," he added. "But after two years, splitting the money into the two accounts earns you more and more money because of the compounding effect of the 10 percent. It turns out that bacteria do the same thing. They give one daughter a fresh start, the higher interest-bearing account, while the other daughter gets more of the damage."
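Chao's portfolio analogy is easy to verify numerically with the dollar figures and rates he quotes:

```python
# Compound growth: a single account at 8% vs. splitting the principal
# between 6% and 10% accounts. Equal after one year; the split pulls
# ahead thereafter because compounding is convex in the rate.
def grow(principal, rate, years):
    return principal * (1 + rate) ** years

for years in (1, 2, 10):
    single = grow(1_000_000, 0.08, years)
    split = grow(500_000, 0.06, years) + grow(500_000, 0.10, years)
    # Year 1: both are $1,080,000. Year 2: the split is $400 ahead,
    # and the gap keeps widening with time.
    print(years, round(single), round(split))
```

The widening gap is exactly the "compounding effect of the 10 percent" in Chao's analogy: the high-rate half grows faster than the low-rate half shrinks relative to the average.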

Although E. coli bacteria appear to divide precisely down the middle into two daughter cells, the discovery that the two daughters eventually grow to different lengths suggests that bacteria do not divide as symmetrically as most biologists have come to believe, but that their division is really "asymmetrical" within the cell.

"There must be an active transport system within the bacterial cell that puts the non-genetic damage into one of the daughter cells," said Chao. "We think evolution drove this asymmetry. If bacteria were symmetrical, there would be no aging. But because you have this asymmetry, one daughter by having more damage has aged, while the other daughter gets a rejuvenated start with less damage."


Story Source:
The above story is reprinted from materials provided by University of California - San Diego. The original article was written by Kim McDonald.

Journal Reference:
  1. Camilla U. Rang, Annie Y. Peng, Lin Chao. Temporal Dynamics of Bacterial Aging and Rejuvenation. Current Biology, 27 October 2011. DOI: 10.1016/j.cub.2011.09.018

562. Capitalism and Environmental Catastrophe


By John Bellamy Foster, Monthly Review, October 29, 2011
John Bellamy Foster and Fred Magdoff at
Occupy Wall Street, photo by Carrie Ann Naumoff
This is a reconstruction from notes of a talk delivered at a teach-in on "The Capitalist Crisis and the Environment" organized by the Education and Empowerment Working Group, Occupy Wall Street, Zuccotti Park (Liberty Plaza), New York, October 23, 2011.  It was based on a talk delivered the night before at the Brecht Forum.  Fred Magdoff also spoke on both occasions.
The Occupy Wall Street movement arose in response to the economic crisis of capitalism, and the way in which the costs of this were imposed on the 99 percent rather than the 1 percent.  But "the highest expression of the capitalist threat," as Naomi Klein has said, is its destruction of the planetary environment.  So it is imperative that we critique that as well.1
I would like to start by pointing to the seriousness of our current environmental problem and then turn to the question of how this relates to capitalism.  Only then will we be in a position to talk realistically about what we need to do to stave off or lessen catastrophe.
How bad is the environmental crisis?  You have all heard about the dangers of climate change due to the emission of carbon dioxide and other greenhouse gases into the atmosphere -- trapping more heat on earth.  You are undoubtedly aware that global warming threatens the very future of humanity, along with the existence of innumerable other species.  Indeed, James Hansen, the leading climatologist in this country, has gone so far as to say this may be "our last chance to save humanity."2
But climate change is only part of the overall environmental problem.  Scientists, led by the Stockholm Resilience Centre, have recently indicated that we have crossed, or are near to crossing, nine "planetary boundaries" (defined in terms of sustaining the environmental conditions of the Holocene epoch in which civilization developed over the last 12,000 years): climate change, species extinction, the disruption of the nitrogen-phosphorus cycles, ocean acidification, ozone depletion, freshwater usage, land cover change, (less certainly) aerosol loading, and chemical use.  Each of these rifts in planetary boundaries constitutes an actual or potential global ecological catastrophe.  Indeed, in three cases -- climate change, species extinction, and the disruption of the nitrogen cycle -- we have already crossed planetary boundaries and are currently experiencing catastrophic effects.  We are now in the period of what scientists call the "sixth extinction," the greatest mass extinction in 65 million years, since the time of the dinosaurs; only this time the mass extinction arises from the actions of one particular species -- human beings.  Our disruption of the nitrogen cycle is a major factor in the growth of dead zones in coastal waters.  Ocean acidification is often called the "evil twin" of climate change, since it too arises from carbon dioxide emissions, and by negatively impacting the oceans it threatens planetary disruption on an equal (perhaps even greater) scale.  The decreased availability of freshwater globally is emerging as an environmental crisis of horrendous proportions.3
All of this may seem completely overwhelming.  How are we to cope with all of these global ecological crises/catastrophes, threatening us at every turn?  Here it is important to grasp that all of these rifts in the planetary system derive from processes associated with our global production system, namely capitalism.  If we are prepared to carry out a radical transformation of our system of production -- to move away from "business as usual" -- then there is still time to turn things around; though the remaining time in which to act is rapidly running out.
Let's talk about climate change, remembering that this is only one part of the global environmental crisis, though certainly the most urgent at present.  Climate science currently suggests that if we burn only half of the world's proven, economically accessible reserves of oil, gas, and coal, the resulting carbon emissions will almost certainly raise global temperatures by 2° C (3.6° F), bringing us to what is increasingly regarded as an irreversible tipping point -- after which it appears impossible to return to the preindustrial (Holocene) climate that nourished human civilization.  At that point various irrevocable changes (such as the melting of Arctic sea ice and the ice sheets of Greenland and Antarctica, and the release of methane from the tundra) will become unstoppable.  This will speed up climate change, while also accelerating vast, catastrophic effects, such as rising sea levels and extreme weather.  Alternatively, if our object is the rational one of keeping warming below 2° C, climate science now suggests that we should refrain from burning more than a quarter of the proven, economically exploitable fossil fuel reserves (unconventional sources such as tar sands are excluded from this calculation).4
The central issue in all of this, it is important to understand, is irreversibility.  Current climate models indicate that if we were to cease burning fossil fuels completely at the point that global average temperature had increased by 2°C, or 450 parts per million (ppm) carbon concentration in the atmosphere (the current level is 390 ppm), the earth would still not be close to returning to a Holocene state by the year 3000.  In other words, once this boundary is reached, climate change is irreversible over conceivable human-time frames.5  Moreover, the damage would be done; all sorts of catastrophic results would have emerged.
Recently climate scientists, writing for Nature magazine, one of the world's top science publications, have developed a concrete way of understanding the planetary boundary where climate change is concerned, focusing on the cumulative carbon emissions budget.  This is represented by the trillionth ton of carbon.  So far more than 500 billion tons of carbon have been emitted into the atmosphere since the industrial revolution.  In order to have an approximately even chance (50-50) of limiting the increase in global average temperature to 2°C, the cumulative CO2 emissions over the period 1750-2050 must not exceed one trillion tons of carbon; while in order to have a 75 percent chance of global warming remaining below 2°C, it is necessary not to exceed 750 billion tons of carbon.  Yet, according to present trends, the 750 billionth ton of carbon will be emitted in 2028, i.e., about sixteen years from now.
If we are to avoid burning the 750 billionth ton of carbon over the next four decades, carbon dioxide emissions must fall at a rate of 5 percent per year; while to avoid emitting the trillionth ton, emissions must drop at a rate of 2.4 percent a year.  The longer we wait the more rapid the decrease that will be necessary.  The trillionth ton, viewed as the point of no return, is the equivalent of cutting down the last palm tree on Easter Island.  After that it is essentially out of our hands.6
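A back-of-envelope version of the budget arithmetic: if annual emissions decline by a constant fraction each year, future cumulative emissions form a geometric series. The annual emissions figure below is an assumed round number, and the constant-decline model is a simplification; the 5 percent and 2.4 percent figures in the text come from scenario models with growing baselines, so they run somewhat higher:

```python
# If annual emissions E decline by a constant fraction r each year,
# cumulative future emissions sum to approximately E / r (geometric
# series), so staying within a remaining budget B needs r >= E / B.
E = 9.0  # assumed annual emissions, gigatons of carbon (circa 2011)
remaining_budgets = {
    "750 Gt total (75% chance of <2 C)": 250.0,   # 500 Gt already emitted
    "1 trillion t total (50-50 chance)": 500.0,
}
for label, B in remaining_budgets.items():
    r = E / B
    print(label, f"-> required decline about {r:.1%} per year")
```

Under these simplified assumptions the required decline rates come out near 3.6 percent and 1.8 percent per year, the same order of magnitude as the figures quoted above.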
This takes us to the social question.  The problem we face when it comes to the appropriate response to impending climate catastrophe is not so much one of climate science -- beyond understanding the environmental parameters in which we must act -- as social science.  It is an issue of social conditions and social agency.  We live in a capitalist society, which means a society in which the accumulation of capital, i.e., economic growth carried out primarily on the terms of the 1 percent at the top (the ruling capitalist class), is the dominant tendency.  It is a system that accumulates capital in one phase simply so that it can accumulate still more capital in the next phase -- always on a larger scale.  There is no braking mechanism in such a system and no social entity in control.  If for some reason the system slows down (as it is forced to periodically due to its own internal contradictions) it enters an economic crisis.  That may be good temporarily for the environment, but it is terrible for human beings, particularly the bottom portion of the 99 percent, faced with rising unemployment and declining income.
Overall, capitalism is aimed at exponential growth.  It cannot stand still.  The minimum adequate growth rate of the system is usually thought to be 3 percent.  But this means that the economy doubles in size about every 24 years.  How many such doublings of world output can the planet take?
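The doubling claim is the standard "rule of 70" arithmetic and can be checked directly:

```python
import math

# At 3% annual growth, an economy doubles in ln(2)/ln(1.03) years.
doubling_time = math.log(2) / math.log(1.03)
print(round(doubling_time, 1))  # about 23.4 years, i.e. roughly every 24
```

Four such doublings in under a century would mean a world economy sixteen times its present size, which is the force of the question above.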
Hence, there is a direct and growing contradiction between capitalism and the environment, a contradiction that becomes more and more apparent as the size of the capitalist economy begins to rival the basic biogeochemical processes of the planet.  Naomi Klein has rightly characterized the age we live in as "disaster capitalism" because of its dual economic and ecological crises -- and due to the increasingly exploitative means the rich employ to enable them to prosper in the midst of increasing destruction.7
There are two predominant ways of addressing the climate crisis and the environmental problem generally.  One is to look for technological ways out -- often seen as being spurred by the creation of carbon markets, but the onus is on the technology.  The argument here is that through the massive introduction of various advanced technologies we can have our cake and eat it too.  We can get around the environmental problem, it is suggested, without making any fundamental social changes.  Thus, the pursuit of profits and accumulation can go on as before without alteration.  Such magic-technological answers are commonly viewed as the only politically feasible ones, since they are attractive to corporate and political-power elites, who refuse to accept the need for system change.  Consequently, the establishment has gambled on some combination of technological miracles emerging that will allow them to keep on doing just as they have been doing.  Predictably, the outcome of this high-stakes gamble has been a failure not only to decrease carbon emissions, but also to prevent their continued increase.
The turn to those alternative technologies that are already available (for example, solar power) has been hindered by the fact that they are often less profitable or require changes in social organization to be implemented effectively.  As a result, greater emphasis is placed on: (1) nuclear energy (a Faustian bargain if there ever was one); and (2) carbon capture and sequestration technology for coal-fired plants, which is neither economically nor ecologically feasible at present, and hence only serves to keep coal, the dirtiest fossil fuel, going.  Beyond this the only option that the vested interests (the 1% and their hangers-on) have left is to push for geoengineering technologies.  This involves such measures as dumping sulfur dioxide particles in the atmosphere to block the sun's rays (with the danger that photosynthesis might be decreased), or fertilizing the ocean with iron to promote algal growth and absorb carbon (with the possibility that dead zones might expand).  These geoengineering schemes are extremely dubious in terms of physics, ecology, and economics: all three.  They involve playing God with the planet.  Remember the Sorcerer's Apprentice!
Nevertheless, such technological fantasies, bordering on madness, continue to gain support at the top.  This is because attempts to shift away from our currently wasteful society in the direction of rational conservation, involving changes in our way of life and our form of production, are considered beyond the pale -- even when the very survival of humanity is at stake.
The other approach is to demand changes in society itself; to move away from a system directed at profits, production, and accumulation, i.e., economic growth, and toward a sustainable steady-state economy.  This would mean reducing or eliminating unnecessary and wasteful consumption and reordering society -- from commodity production and consumption as its primary goal, to sustainable human development.  This could only occur in conjunction with a move towards substantive equality.  It would require democratic ecological and social planning.  It therefore coincides with the classical objectives of socialism.
Such a shift would make possible the reduction in carbon emissions we need.  After all, most of what the U.S. economy produces in the form of commodities (including the unnecessary, market-related costs that go into the production of nearly all goods) is sheer waste from a social, an ecological -- even a long-term economic -- standpoint.  Just think of all the useless things we produce and that we are encouraged to buy and then throw away almost the moment we have bought them.  Think of the bizarre, plastic packaging that all too often dwarfs the goods themselves.  Think of military spending, running in reality at $1 trillion a year in the United States.  Think of marketing (i.e. corporate spending aimed at persuading people to buy things they don't want or need), which has reached $1 trillion a year in this country alone.  Think of all the wasted resources associated with our financial system, with Wall Street economics.  It is this kind of waste that generates the huge profits for the top 1 percent of income earners, and that alienates and impoverishes the lives of the bottom 99 percent, while degrading the environment.8
What we need therefore is to change our economic culture.  We need an ecological and social revolution.  We have all the technologies necessary to do this.  It is not primarily a technological problem, because the goal here would no longer be the impossible one of expanding our exploitation of the earth beyond all physical and biological limits, ad infinitum.  Rather the goal would be to promote human community and community with the earth.  Here we would need to depend on organizing our local communities but also on creating a global community -- where the rich countries no longer imperialistically exploit the poor countries of the world.  You may say that this is impossible, but the World Occupy Movement would have been declared impossible only a month ago.  If we are going to struggle, let us make our goal one of ecological and social revolution -- in defense of humanity and the planet.
Notes
1  Naomi Klein, blurb to Fred Magdoff and John Bellamy Foster, What Every Environmentalist Needs to Know About Capitalism (New York: Monthly Review Press, 2011).
3  See the discussion (and sources cited) in John Bellamy Foster, Brett Clark, and Richard York, The Ecological Rift: Capitalism's War on the Earth (New York: Monthly Review Press, 2010), 13-19.
4  Malte Meinshausen, et al., "Greenhouse-Gas Emission Targets for Limiting Global Warming to 2°C," Nature 458 (April 30, 2009): 1158-62; Heidi Cullen, The Weather of the Future (New York: Harpers, 2010), 264-71; "On the Way to Phasing Out Emissions," Potsdam Institute for Climate Impact Research, April 30, 2009.
5  Susan Solomon, et al., Proceedings of the National Academy of Sciences 106, no. 6 (February 10, 2009): 1704-1709; Cullen, Weather of the Future, 264-71.  It should be noted that even a target of stabilizing the climate at less than 2°C increase in global temperatures, or 450 ppm, would be inadequate.  Hansen indicates that we will reach critical tipping points, e.g., related to sea level rise, even before that stage.  If we truly wish to avoid such effects and maintain a stable Holocene state, he argues, we will need to stabilize the climate long-term at 350 ppm carbon concentration, or approximately 1°C increase in global average temperature -- a point that we have already exceeded.  See Hansen, Storms of My Grandchildren, 160-71.
6  Myles Allen, et al., "The Exit Strategy," Nature Reports Climate Change, April 30, 2009; Cullen, Weather of the Future, 264-71; Myles R. Allen, et al., "Warming Caused by Cumulative Carbon Emissions Towards the Trillionth Tonne," Nature 458 (April 20, 2009): 1163-66; Malte Meinshausen, et al., "Greenhouse-Gas Emission Targets for Limiting Global Warming to 2°C," Nature 458 (April 30, 2009): 1158-62; TrillionthTonne.org; Catherine Brahic, "Humanity's Carbon Budget Set at One Trillion Tonnes," New Scientist, April 29, 2009.
7  Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (New York: Henry Holt, 2007).
8  On the systematic role of waste (economic and ecological) under the regime of monopoly capital and the freedom which that gives us to reshape economy and society in a sustainable direction, see John Bellamy Foster, "The Ecology of Marxian Political Economy," Monthly Review 63, no. 4 (September 2011): 1-16.  On military spending levels see John Bellamy Foster, Hannah Holleman, and Robert W. McChesney, "The U.S. Imperial Triangle and Military Spending," Monthly Review 60, no. 5 (October 2008): 9-13.  On marketing see Magdoff and Foster, What Every Environmentalist Needs to Know About Capitalism, 46-53.


561. Governments Must Plan for Migration in Response to Climate Change, Researchers Say


By ScienceDaily, October 27, 2011

Flood victims using a cable car to flee the Chakdara region in Pakistan, August 2010
Governments around the world must be prepared for mass migrations caused by rising global temperatures or face the possibility of calamitous results, say University of Florida scientists on a research team reporting in the Oct. 28 edition of Science.

If global temperatures increase by only a few degrees by 2100, as predicted by the U.N. Intergovernmental Panel on Climate Change, people around the world will be forced to migrate. But transplanting populations from one location to another is a complicated proposition that has left millions of people impoverished in recent years. The researchers say that a word of caution is in order and that governments should take care to understand the ramifications of forced migration.

A consortium of 12 scientists from around the world, including two UF researchers, gathered last year at the Rockefeller Foundation's Bellagio Center to review 50 years of research related to population resettlement following natural disasters or the installation of infrastructure development projects such as dams and pipelines. The group determined that resettlement efforts in the past have left communities in ruin, and that policy makers need to use lessons from the past to protect people who are forced to relocate because of climate change.
"The effects of climate change are likely to be experienced by as many people as disasters," UF anthropologist Anthony Oliver-Smith said. "More people than ever may be moving in response to intense storms, increased flooding and drought that makes living untenable in their current location."

"Sometimes the problem is simply a lack of regard for the people ostensibly in the way of progress," said Oliver-Smith, an emeritus professor who has researched issues surrounding forced migration for more than 30 years. But resettlements frequently fail because the complexity of the task is underestimated. "Transplanting a population and its culture from one location to another is a complex process -- as complicated as brain surgery," he said.

"It's going to be a matter of planning ahead now," said Burt Singer, a courtesy faculty member at the UF Emerging Pathogens Institute who worked with the research group. He too has studied issues related to population resettlement for decades.

Singer said that regulatory efforts promoted by the International Finance Corporation, the corporate lending arm of the World Bank, are helping to ensure the well-being of resettled communities in some cases. But as more people are relocated -- especially very poor people with no resources -- financing resettlement operations in the wake of a changing climate could become a real challenge.

Planning and paying for resettlement is only part of the challenge, Oliver-Smith said. "You need informed, capable decision makers to carry out these plans," he said. A lack of training and information can derail the best-laid plans. He said the World Bank increasingly turns to anthropologists to help it evaluate projects and outcomes of resettlement.

"It is a moral imperative," Oliver-Smith said. Also, a simple cost-benefit analysis shows that doing resettlement poorly adds to costs in the future. Wasted resources and the costs of malnutrition, declining health, infant and elder mortality, and the destruction of families and social networks should be included in the total cost of a failed resettlement, he said.

Oliver-Smith said the cautionary tales of past failures yield valuable lessons for future policy makers, namely because they point out many of the potential pitfalls that can beset resettlement projects. But they also underscore the fact that there is a heavy price paid by resettled people, even in the best-case scenarios.

In the coming years, he said, many projects such as hydroelectric dams and biofuel plantations will be proposed in the name of climate change, but moving people to accommodate these projects may not be the simple solution that policy makers sometimes assume.

A clear-eyed review of the true costs of forced migration could alert governments to the complexities and risks of resettlement.

"If brain surgeons had the sort of success rate that we have had with resettling populations, very few people would opt for brain surgery," he said.

Story Source:
The above story is reprinted from materials provided by University of Florida. The original article was written by Donna Hesterman.
Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:
  1. A. de Sherbinin, M. Castro, F. Gemenne, M. M. Cernea, S. Adamo, P. M. Fearnside, G. Krieger, S. Lahmani, A. Oliver-Smith, A. Pankhurst, T. Scudder, B. Singer, Y. Tan, G. Wannier, P. Boncour, C. Ehrhart, G. Hugo, B. Pandey, and G. Shi. Preparing for Resettlement Associated with Climate Change. Science, 2011; 334 (6055): 456-457. DOI: 10.1126/science.1208821

Saturday, October 29, 2011

560. Book Review: Rethinking Feyerabend: The “Worst Enemy of Science”?


Paul Feyerabend

By Ian James Kidd, PLoS Biology, October 4, 2011

Feyerabend P (2011) The Tyranny of Science. Oberheim E, editor. Cambridge: Polity Press. 180 p. ISBN-13: 978-0745651897 (hardcover). US$54.95


The relationship between science and the philosophy of science is, by most accounts, a contested one. Certainly many philosophical debates may seem oblique to the uninitiated (and perhaps even to the initiated!), whilst recent intellectual debacles have tended to portray philosophers of science in a poor light. During the 1990s, for example, the "Science Wars" erupted over the question of whether scientific theories provided true, objective descriptions of reality, or whether they were simply arbitrary "constructions," mere mythologies on a par with ancient Greek theogony or medieval magic [1]. There is some truth to such charges, some of it certainly attributable to a certain unhealthy intoxication with trendy theories (like "relativism" and "constructionism"). Yet even if those charges are not always justified, and even if the majority of the philosophy of science is informed and responsible, it remains true that philosophers of science who pitch into debates about the sciences beyond their own professional boundaries must take extra care before letting loose their ideas.

With that proviso in mind, the title of Paul Feyerabend's book, The Tyranny of Science, should set off alarm bells, especially since the cover of the book depicts blood-red atomic bombs falling from above onto a desolate city. Indeed, the author himself, who was professor of philosophy at Berkeley and Zurich until his death in 1994, has a "bad reputation" both within and beyond the philosophy of science. Feyerabend was famously dubbed "the worst enemy of science" by Science, and even today philosophers of science will tend to associate his name with anti-science polemics, defences of voodoo and astrology, and more besides [2].

Fortunately, Feyerabend is far more sensible than the title and cover of this book, or his bad reputation, suggest. Although he has a reputation as a critic of science, he is not one. Feyerabend is critical not of science itself, but of false and misleading images of the sciences. The "tyranny" of the title refers not to an encroaching and disenchanting "scientific worldview," of the sort popular with some cultural critics, but to the dangers that arise when people fail to understand and appreciate science. Back in the 1960s and early 1970s, Feyerabend urged philosophers of science to take seriously both the history of science and scientific practice—he was a trained physicist himself—and warned his peers that mere abstract reflection on the sciences would produce only idealised fantasies of science, rather than workable models of it. Although subsequent generations of philosophers of science took him seriously, many at the time took his claim as a personal attack—hence the "bad reputation."

Into the 1980s, Feyerabend began to expand the scope of his ideas. The philosophy of science had by then become a richer discipline, so he moved on to new issues. It struck him that public confidence in the sciences was beginning to change. The nuclear accidents at Chernobyl and Three Mile Island, waning interest in the space program, and ambitious new claims on behalf of genetics were beginning to affect public faith in the sciences. Feyerabend was not opposed to such public doubts, but he did worry that the public concerns, although sincere, were too often ill-informed. Worse still, those worries were often amplified by overzealous philosophers who, to his mind, were failing in their job of clarifying concepts, scrutinising arguments, and helping people to articulate and develop their ideas. By the late 1980s, Feyerabend began to take special issue with philosophers who actively encouraged such confusions, for instance by announcing that electrons and genes were mere "social constructions," or by rebranding forms of relativism, or by implicating "Western Science" in a powerful conspiracy to disempower indigenous cultures—indeed, Feyerabend himself succumbed to such alluring polemics for a time, which partly explains his hostile reaction to them later in his career [3].

Feyerabend's issues with public concerns about science and his worries about philosophers' role in the subsequent debates laid the foundations for the lectures that became The Tyranny of Science. In fact, the original title of that lecture series was Conflict and Harmony, which is a much better title because it indicates that public engagement with science is dynamic and complex—periods of "conflict" and "harmony," with scientists, policymakers, philosophers, and other involved groups trying to balance the tensions. Feyerabend's claim here is that many of the conflicts concerning science are based upon confusions about and misperceptions of science—for example, the idea that science is "value-free." That claim clearly cannot be true, if only because science is necessarily motivated by cognitive and practical values, yet it still features within public and policy debates. Feyerabend's aim in these lectures was to try to demonstrate that science is much more complex than people tend to imagine, and that our thinking about it must be correspondingly complex if we are to make sense of it. Science is only a "tyrant" if we fail to do it justice, and attribute to it exalted characteristics—such as "value-neutrality" or isolation from society—which it lacks.

Throughout his career, Feyerabend defended the claim that there is, in fact, no one thing called "Science," where that term is understood to refer to something singular and formalised, with uniformly shared methods, theories, and concepts [4]. "Science" as so defined does not exist, even though the idea of it is a powerful one. In its place, urged Feyerabend, we should think and talk about multiple sciences—diverse in their methods and aims, held together by some common values perhaps, but otherwise more an aggregate than the monolith that some writers presume. In order to bring about this reconception of the sciences, Feyerabend urged us to reach out to all the resources at our disposal, a fact evidenced in the eclecticism and immense learning obvious in Tyranny. Feyerabend leaps from contemporary social events to the history of geometry, from ancient Greek poetry to modern biology, and from the arts to philosophy. The purpose of such intellectual pyrotechnics is not simply to entertain, but to demonstrate just how richly and powerfully the sciences are interlinked with modern human life. For Feyerabend, understanding and appreciation should come as a pair so that, by the end of the lectures, the sciences cease to be the tyrants which contemporary concerns suggest they may be, and which some critics insist they must be.

A key example of the sorts of public worries about science that Feyerabend had in mind concerns genetics. Although human genetic research is conceded to afford wonderful possibilities—for medicine and agriculture, say—there are also corresponding concerns about the abuse of those powers. In the UK, there is a common rhetoric in the popular press concerning “designer babies,” GM crops, “astrological genetics,” and a host of other concerns, each centring upon an implicit worry that the powers of genetic science are too dangerous to be controlled, or that they will be abused. Despite consistent assurances, for instance on the part of the British Government, that genetic research is intensely regulated, public doubts persist. Indeed, the very fact that such doubts exist may frustrate researchers who consider their work to be both morally scrupulous and of clear cognitive and practical value. It may be difficult for those researchers to make willing concessions to public doubts where those doubts are regarded not only as ill-founded, but also as likely to result in further unduly onerous regulation, or even the termination of research projects.

Feyerabend sees a role for philosophers to contribute here. Many worries about genetic research rely upon inarticulate moral or aesthetic concerns—the so-called "yuk factor" which arises at the sight of "Frankenstein" organisms like the famous OncoMouse. In such cases, philosophers can help the public to articulate those concerns and to refine them through argumentation [5]. Often, the worries dissolve upon analysis, and sometimes, of course, they are reinforced, but in each case, progress is being made. Feyerabend therefore stressed the need for scientific literacy, philosophical competence, and historical awareness as essential components of informed public engagement with science. Of course, philosophers do not assume a guiding role here; Feyerabend was no fan of the pretensions of some philosophers to resume their ancient, privileged position, but he did consider that their critical sensibilities could be valuable to those wider debates. Moreover, public concerns about the sciences invoke not only scientific facts, but also philosophical judgements about value, purpose, and meaning: the idea of the "sanctity of life," for instance, demands philosophical input, if only because most of the people who invoke it are not generally after a biological formulation of it. As long as philosophers remain informed about the sciences they engage with, they can be valuable aids to the project of facilitating public engagement with science—and today, few sciences arouse more fascination, hope, and alarm than the biological sciences [6].

Feyerabend clearly sets himself a broad remit and an ambitious aim. Public concern with the sciences is a persistent and perhaps increasing feature of modern societies. For sure, some of that concern is justified, but much of it is not, for instance because it rests upon false ideas and misperceptions of the science, or because the public imagination has been warped by charged rhetoric and imagery. Feyerabend regretted such misunderstandings and thought that philosophers had an important role to play in helping the public make sense of its concerns. If that sounds paternalistic, it should not—for one thing, philosophers often share those same worries, and for another, philosophers can lay legitimate claim to intellectual skills well-suited to the task of making sense of concerns about science. Feyerabend does not propose that philosophers will pontificate to the public, because he was alert to the fact that philosophers can become "tyrannous" if they, too, cease being engaged with, and responsive to, the concerns and curiosities of the public.

The Tyranny of Science should therefore be interpreted as Feyerabend's attempt to dissolve conflicts and establish harmony between science, society, and philosophy, on the one hand, and between scientists, philosophers, and the public, on the other. The concerns and alarms that occupied Feyerabend are not the exclusive preserve of any of those domains—scientific, public, or philosophical—and to properly understand and address them, each must cooperate with the others. Tyranny only arises when one of them tries to dominate the others, and Feyerabend's book offers an engaging and entertaining case against such tyranny.

References:
1. Gould S. J (2000) Deconstructing the "science wars" by reconstructing an old mold. Science 287: 253–261.
2. Preston J, Munévar G, Lamb D (2000) The worst enemy of science: essays in memory of Paul Feyerabend. Oxford: Oxford University Press.
3. Feyerabend P (1987) Farewell to reason. London: Verso.
4. Feyerabend P (1993) Against method. Third edition. London: Verso.
5. Midgley M (2001) Science and poetry. London: Routledge.
6. Barnes B, Dupré J (2008) Genomes and what to make of them. Chicago: University of Chicago Press.


Citation: Kidd IJ (2011) Rethinking Feyerabend: The “Worst Enemy of Science”? PLoS Biol 9(10): e1001166. doi:10.1371/journal.pbio.1001166
Published: October 4, 2011
Copyright: © 2011 Ian James Kidd. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: No funding was received for this article.
Competing interests: The author has declared that no competing interests exist.