Matt Ridley's Blog, page 41

March 23, 2014

Technology creates jobs as much as it destroys them

My Times column is on technology and jobs:



Bill Gates voiced a thought in a speech last week
that is increasingly troubling America’s technical elite — that
technology is about to make many, many people redundant. Advances
in software, he said, will reduce demand for jobs, substituting
robots for drivers, waiters or nurses.



The last time that I was in Silicon Valley I found the
tech-heads fretting about this in direct proportion to their
optimism about technology. That is to say, the more excited they
are that the “singularity” is near — the moment when computers
become so clever at making themselves even cleverer that the
process accelerates to infinity — the more worried they are that
there will be mass unemployment as a result.



The McKinsey Global Institute argued last year that perhaps 40
per cent of jobs in clerical and professional services could be
automated by 2025. All sorts of professions, including accountants
and even actors, should begin to fret. With Google’s driverless car
having had just one (human-error) accident in 200,000 miles, there
is every reason to suspect that taxi drivers are heading for the
same fate as wick trimmers and ice harvesters.



For large stretches of every flight, pilots are already mere
spectators; might an incident such as the Malaysia Airlines
spectators; might an incident such as the Malaysian Airlines
mystery make drone cockpits acceptable? Hal could come to seem more
trustworthy than Dave. (No article about the future is complete
without Arthur C. Clarke allusions — Dave being the astronaut who
has to disconnect Hal, the sentient computer, in 2001: A
Space Odyssey.)



Will there be any jobs left for our children? A new book much
talked about in techie circles, The Second Machine Age: Work,
Progress, and Prosperity in a Time of Brilliant Technologies, by
Erik Brynjolfsson and Andrew McAfee, hedges its bets. The two
authors accept that previous scares about technology leading to
unemployment were overdone, but they worry that the pattern may
not hold for ever. They have seen “one bastion of human uniqueness
after another fall before the inexorable onslaught of innovation”
and think that there may be no human activity immune to
automation.



In the 1700s four in every five workers were employed on a farm.
Thanks to tractors and combine harvesters, only one in fifty still
works in farming, yet more people are at work than ever before. By
1850 the majority of jobs were in manufacturing. Today fewer than
one in seven workers is. Yet Britain manufactures twice as much stuff by
value as it did 60 years ago. In 1900 vast numbers of women worked
in domestic service and were about to see their mangles and dusters
mechanised. Yet more women have jobs than ever before.



Again and again technology has disrupted old work patterns and
produced more, not less, work — usually at higher wages in more
pleasant surroundings.



The followers of figures such as Ned Ludd, who smashed weaving
looms, and Captain Swing, who smashed threshing machines (and, for
that matter, Arthur Scargill) suffered unemployment and hardship in
the short term but looked back later, or their children did, with
horror at the sort of drudgery from which technology had delivered
them.



Why should this next wave of technology be different? It’s
partly that it is closer to home for the intelligentsia. An unkind
jibe, perhaps, but there’s a sort of frisson running through the chatterati now
that people they actually know might lose their jobs to machines,
rather than the working class. Indeed, the jobs that look safest
from robots are probably at the bottom of the educational heap:
cooks, gardeners, maids. After many years’ work, Berkeley
researchers have built a robot that can fold a towel — it takes 24
minutes.



None the less, it is hard to see where people go next. If we are
reaching the point where robots could do almost anything, what is
there left for people to do? To this I suggest two answers. The
first is that we will think of something. Half the new professions
that are thriving today are so bizarre that nobody could have
predicted their emergence — reflexologist, pet groomer, ethical
hacker, golfball diver. In a world where androids run supermarkets,
you can bet that there’s a niche for a pricey little shop with
friendly salespeople. The more bulk services are automated, the
more we will be willing to pay for the human touch as well.



Automation has made us so much richer than our ancestors, by
cutting the cost (in hours worked) of most of the services that we
desire, that we have been able to afford to employ more and more
people to amuse or pamper us. Most people can afford to eat out,
for example — an unimaginable luxury only a century ago.



If the worst comes to the worst, and the androids take over
absolutely every kind of work, providing all our daily needs so
cheaply and efficiently that we just don’t need people at all, not
even as politicians — why, then what’s the blooming problem? The
point of work is so we can consume, not vice versa. Do not forget
that the poor benefit more than most from automation — as consumers
of ever cheaper goods and services.



Keynes predicted that we would eventually have more stuff than
we needed and would start to ration work down to 15 hours a week.
When you consider that we work far fewer days a year and hours a
week than in his day, and make allowance for the fact that we spend
much longer in education and retirement, we are already there in a
sense. As Arthur C. Clarke put it, “the goal of the future is full
unemployment so we can play”.



In 1700 nearly all of us had to dig the soil from dawn to dusk
or everybody starved (and some did anyway). Technology liberated us
from that precarious and awful world. If it does so again, so that
our grandchildren never have to think in terms of “jobs” at all,
but merely in terms of how they can fill their days fulfilling
their wishes and helping others, mixing bits of work with bits of
leisure, while drawing on the output of Stakhanovite machines for
income, will they envy us our daily commutes and our office
politics? I don’t think so.



I might be wrong, but I think that of all the bad things that
might happen in the world, beginning in Crimea, hyperproductive new
robots are the least of our worries.


March 21, 2014

The tyranny of experts

My book review for The Times of William Easterly's new book "The
Tyranny of Experts":



 



Imagine, writes the economist William Easterly, that in 2010
more than 20,000 farmers in rural Ohio had been forced from their
land by soldiers, their cows slaughtered, their harvest torched and
one of their sons killed — all to make way for a British forestry
project, financed and promoted by the World Bank. Imagine that when
the story broke, the World Bank promised an investigation that
never happened.



That is, says Easterly, what occurred in Mubende District in
Uganda. It exemplifies all that is wrong with development in
Easterly’s view. It is too top-down, too cosy with despots, too
remotely technocratic and too indifferent to the political and
economic freedom of local people. It is run by a tyranny of
experts.



This book is not an attack on aid from rich to poor. It is an
attack on the unthinking philosophy that guides so much of that aid
from poor taxpayers in rich countries to rich leaders in poor
countries, via outsiders with supposed expertise. Easterly is a
distinguished economist and he insists there is another way, a path
not taken, in development economics, based on liberation and the
encouragement of spontaneous development through exchange. Most
development economists do not even know they are taking the
technocratic, planning route, just as most fish do not know they
swim in a sea.



Easterly traces the history of this mistake back to the first
half of the 20th century, when semi-colonial Western powers in
China, in order to preserve their interests, used big charitable
donations to support an autocratic regime under Sun Yat-sen and
then Chiang Kai-shek, who got the message that development was the
card to play in justifying despotism.



In the 1930s, the British had to scramble to find a new excuse
for their colonies — whose occupation had always been justified on
grounds of racial superiority, an argument looking threadbare as
the depression and Nazism made pith-helmeted district commissioners
seem less god-like. A retired Colonial Office civil servant named
Lord Hailey came up with a technocratic justification instead —
that we were guiding the development of India and Africa. He called
for “a far greater measure of both initiative and control on the
part of the central government”.



During the Second World War Hailey got the Americans to go along
with this, by suggesting a similar line used to uphold southern
segregation — economic betterment would come first; political
liberation could wait. The Cold War meant a new justification for
the same policy in Latin America: use aid to prop up dictators.



The consequence was that it was assumed that the newly liberated
Third World was best ruled by autocrats. “The masses of the people
take their cue from those who are in authority over them,” said
the United Nations Primer for Development in 1951.
Nanny state knew best. Top-down development by LSE graduates was
not just the best way; it was the only way. And it was frequently
disastrous.



To this day, the head of the World Bank tours China, praising
its “leadership” and “steady implementation with a determined
will”, as atrocities abound. Tony Blair’s African Government
Initiative believes in “strengthening the government’s capacity to
deliver programs” in its poster-boy of Ethiopia, a country whose
ruler uses aid to crush opposition and grab land through
“villagisation”. Nobody seems to mind.



Easterly believes history undermines the argument that
dictatorship, even of a benevolent kind, is necessary for economic
development. The stories of the West’s rise, the roaring of the East
Asian tigers and China’s sudden growth surge are actually cases
of spontaneous order, unplanned innovation and liberation from
top-down rule, not central planning.



For instance, Deng Xiaoping gets the kudos for China’s miracle
when all he did was recognise after the fact a spontaneous
rebellion against the continuing failure of collective farms. And
Lee Kuan Yew of Singapore was sensible enough not to prevent (and
then to take the credit for) an organic improvement in a city state
exposed to world trade and populated by mercantile Fujian
Chinese.



The decades-old view that conscious policy design offers the
best hope for ending poverty is just another form of
creationism, embodying the fallacy of intelligent design – that
because something is ordered and intricate, it must have been
ordained by an intelligent mind. In fact, as Adam Smith and
Friedrich Hayek (and Charles Darwin) realised, no expert can ever
know enough to rival the information that emerges from the
spontaneous interactions of many people.



Technocrats also tend to have a “Blank Slate” view that the
history of a country does not matter much; have traditionally
neglected trade; and have often ignored regional or individual
trends in favour of national ones. Easterly describes the success
of the Mourides from Senegal as a rebuke to the experts. Go up to
an African street retailer in New York, Paris, Madrid or Milan and
ask him where he comes from. The chances are he is a Mouride, a
merchant embedded in a supportive web of credit, trust and
remittances that this religious brotherhood maintains — a bit like
Jews in medieval Europe. The Mourides were practising microfinance
for decades before the development industry discovered it. But
partly because they don’t fit inside a country, conventional
development economics misses such folk.



“It was an unhappy accident,” writes Easterly, “that development
thinking stressed development at the unit of the nation and was
scornful of trade at the moment of independence of many new nation
states.”



Easterly is a fluent writer and a good economic historian, as
much at home describing the differences between Friedrich Hayek (a
proponent of bottom-up development) and Gunnar Myrdal (top-down)
as he is recounting the history of one particular block in New York
City, which he has studied as a case history of spontaneous
freed-slave small-holding, then part of a larger farm, then a
brothel, then a garment factory, then an artist’s studio and is now
full of posh apartments and an Apple store.



The book’s weakness is that having set up a strong historical
and theoretical argument against technocracy and for bottom-up
development, Easterly does not then follow through with some
examples of how the latter might work in practice. Nor does he
tackle the question of whether at least some parts of the modern
aid industry, especially among NGOs and charities, might be getting
rather better at helping in bottom-up ways. It would have been good
to see a manifesto for how Easterly would run the World Bank or for
that matter the Gates Foundation.


March 13, 2014

The good news you don't hear about diseases

My Times column is on malaria, TB and Aids — all
in steady decline, a fact that officials and journalists seem
reluctant to report:



 



There’s a tendency among public officials and
journalists, when they discuss disease, to dress good news up as
bad. My favourite example was a BBC website headline from 2004 when
mortality from the human form of mad-cow disease, which had been
falling for two years, rose from 16 to 17 cases: “Figures show rise
in vCJD deaths” wailed the headline. (The incidence fell to eight
the next year and zero by 2012, unreported.) Talk about grasping at
straws of pessimism.



Last week there was a neat example of how good news is no news
in the world of public health. Newspapers widely reported a
scientific paper, which argued that malaria might get worse in the
future at high altitudes as a result of global warming allowing
mosquitoes and parasites to survive in higher regions such as
Ethiopia and Colombia. Breathlessly, the reports suggested an extra
three million people a year might catch the disease.



Did nobody stop to ponder three obvious questions to put this
claim in context? First, less than 2 per cent of Africa is too high
for malarial mosquitoes. Second, malaria’s distribution shows
little correlation with temperature anyway. Lots of tropical
countries are free of the disease and lots of cold countries,
including Britain and Arctic Russia, have in the past suffered
severe epidemics of it. And third, malaria incidence has not been
increasing as the world warms but decreasing, at a rate of more than five million
cases a year for seven years.



The death toll from malaria is falling even faster than the incidence: down
by 29 per cent since the year 2000, despite a steadily rising
global population. That’s an astonishing bit of happy news about
one of humankind’s biggest killers, although 627,000 people still
died of it in 2012. One of the places that the authors of the new
study say global warming is supposed to make the problem worse at
high altitudes is South America. Yet in the whole of the Americas,
north and south, there were officially just 800 deaths from malaria
in 2012.



In short, the future of malaria depends on bed nets, mosquito
control, anti-malarial drugs, better housing and Bill Gates.
Temperature is all but irrelevant. Fascinatingly, a statistical
study published last year explained not only the current decline
but the historic disappearance of malaria from Europe and North
America too, largely through the shrinking size of households. The
authors concluded that “the probability of malaria
eradication jumps sharply when average household size drops below
four persons”.



The reason for this is that an infected mosquito returns to feed
in roughly the same place night after night and its success rate in
infecting a new human being is too low for the disease to spread if
there are fewer than four people per household. That’s great news,
because household size is falling throughout the world, so even
without intervention malaria should continue to decline. Yet that
study, unlike the altitude one, went largely unreported.
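
To see why the threshold sits near four, here is a toy simulation of the mechanism just described. It is only a sketch under invented assumptions: the per-bite transmission probability and the mosquito's feeding lifespan below are illustrative numbers of mine, not figures from the study.

```python
import random

# Toy model of household malaria transmission (illustrative only).
# Person 0 is the infectious index case; a mosquito that picked up
# the parasite from them returns to the household each night and
# feeds on one member chosen at random.
BITE_INFECTS = 0.18   # per-bite transmission probability (assumed)
MOSQUITO_NIGHTS = 10  # feeding nights in a mosquito's life (assumed)
TRIALS = 20_000

def secondary_infections(household_size):
    """Mean number of new household infections from one such mosquito."""
    total = 0
    for _ in range(TRIALS):
        infected = set()
        for _ in range(MOSQUITO_NIGHTS):
            victim = random.randrange(household_size)
            if victim != 0 and random.random() < BITE_INFECTS:
                infected.add(victim)  # a bite on the index case is wasted
        total += len(infected)
    return total / TRIALS

for n in range(2, 8):
    r = secondary_infections(n)
    print(f"household of {n}: {r:.2f} new cases -> "
          f"{'spreads' if r >= 1 else 'dies out'}")
```

With these invented parameters the expected number of new cases crosses one somewhere between household sizes of three and four: in a small household the mosquito keeps re-biting the person who is already infected, so the chain of transmission fizzles out.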



As did the fact that new HIV infections worldwide have fallen by 33 per cent in total, and 52 per
cent in children, since 2001. Aids-related deaths are down by 30
per cent since 2005. New cases of tuberculosis have been falling for a decade too, and mortality from
TB is 45 per cent down since 1990. Again, these are remarkable and
unexpected turn-arounds. Go back to the turn of the century and you
will find public health officials uniformly gloomy about the
prospects for Aids and TB.



In 2000, for example, the US National Intelligence Council predicted that the burden of HIV/Aids and TB
was going to go on getting so much worse for at least ten years
that it “is likely to aggravate and, in some cases, may even
provoke economic decay, social fragmentation and political
destabilisation in the hardest-hit countries in the developing and
former communist worlds”.



Meanwhile two truly horrible diseases are on the brink of
extinction altogether. Last year there were just 406 cases of polio in the world,
mostly in Pakistan, Somalia and Nigeria. Polio’s eradication is
long overdue, but it’s getting closer. There has been an even
faster decline in guinea worm, a painful parasite that you catch
from ingesting water fleas when drinking, and which grows down your
leg and erupts from your foot. The only remedy is to pull it out
inch by inch over months (I do hope you have finished
breakfast).



More than three million people had guinea worm in the late
1980s, when Jimmy Carter made it one of his top priorities. Last
year just 148 of the parasites survived, mostly in South Sudan.
Despite the civil war there, the eradication work by Mr Carter’s
volunteers continues and just three cases have emerged this year. When guinea worm is
gone, because it infects no other species, it will be the first
deliberate extinction of a living species (smallpox is a virus, and
anyway remains in a laboratory). Good riddance.



Not all diseases are retreating. Dengue fever, spread by a
different (day-feeding) genus of mosquito from the (night-feeding)
malarial genus, is getting steadily more common right across the
planet. Antibiotic resistance is complicating the fight against
some bacteria. But overall the tropical world is seeing the same
huge retreat of infectious death that happened in the temperate
world during the previous century.



And nobody seems terribly interested. Why is this? You can
understand why journalists don’t tell good news stories more often;
their motto, after all, is: “If it bleeds, it leads.” You can be
pretty sure that a country that’s gone out of the headlines —
Sierra Leone, for example — is doing pretty well.



But why are public health officials not keener to blow their own
trumpets? It is, after all, the hard work of dedicated
professionals, backed up by millions of volunteers and funded by
generous philanthropists, that is driving these contagions out.



Here’s where I turn a touch cynical. A few years ago, maternal
mortality — that is, death among women giving birth — began to fall
fast, having stagnated for a decade or so. The editor
of The Lancet recounted how he came under co-ordinated and
determined pressure from women’s health advocates to delay
publication of the news of this fall in maternal mortality because,
said those pressurising him, “good news would detract from the
urgency of their cause”. Aha.


March 4, 2014

Smoking (and European regulation) kills

My Times column is on harm reduction, Swedish
snus and e-cigarettes:



Is this the end of smoking? Not if the bureaucrats
can help it.



Sweden’s reputation for solving policy problems,
from education to banking, is all the rage. The Swedes are also
ahead of the rest of Europe in tackling smoking. They have by far
the fewest smokers per head of population of all EU countries. Lung
cancer mortality in Swedish men over 35 is less than half the
British rate.



Have they done it by being more zealous in ostracising,
educating and shaming smokers in that paternalistic Scandinavian
way? No — they did it through innovation and competition. In the
1980s Swedes developed a tobacco product
called snus, which you put under your upper lip.
You get the nicotine but not the tar. Snus is the most popular and effective way of
quitting smoking in Sweden (and Norway).



You will not have seen snus on sale in
Britain, for the simple reason that the EU banned it. When Sweden
joined the EU, it negotiated a special opt-out. To this day,
despite abundant evidence that snus is saving Swedish
lives by the bucket-load, despite advice from experts, and despite
a devastating critique of its own feeble defence of the policy, the
European Commission remains committed to the snus ban.



You may think this is rather an obscure topic with which to
occupy such a prominent opinion pulpit as this page but it is a
vital background to the debate about electronic cigarettes — for,
if snus can halve smoking and lung-cancer deaths,
imagine what electronic cigarettes could do. These are objects that
mimic the actions of smoking but are maybe 1,000 times safer, and
whose sales are doubling each year, without any government
encouragement or medical prescription. E-cigarettes may wipe out
smoking in a couple of decades. Professor David Nutt of Imperial
College describes them as “the greatest health advance since
vaccines”.



Tobacco sales are falling in Europe and America and the industry
fears it is facing in electronic cigarettes its “Kodak” moment – as
when digital photography destroyed a dominant film-camera firm in a
flash. Wells Fargo in the USA predicts that e-cigarettes could
out-sell cigarettes within ten years.



Surveys show that e-cigarettes are now the most popular method
of quitting smoking, despite a lack of encouragement from the
authorities. Pick up a leaflet from your chemist on how to quit
smoking and you will find they are not even mentioned. When I made
a speech on this topic in the House of Lords, I was stunned by the
enormous response I got from “vapers”, enthusiasts for e-cigs. What
was especially startling was how many of them told of trying to
quit for decades, then finally succeeding.



Yet, instead of welcoming this technology, the powers that be,
in Brussels and Whitehall, are determined to throw obstacles in its
way. Last week the European Parliament voted in support of the
Commission’s proposal to ban reusable electronic cigarettes and
those with a nicotine concentration over 20mg/ml. Our own
government is intent on translating these EU restrictions
into British law, egged on by the British Medical Association and
the big pharmaceutical industry, which burble on about protecting
children from a new threat and not wishing to see the renormalising
of smoking.



Why are public health officials so resistant? The European
Commission frequently displays a precautionary bias against
innovation, weighing any risk of a new product, however small, but
not the risk of an old product it might replace — hence its
attitude to genetically modified crops. In raising the unknown (but
small) risks of e-cigarettes, the public health establishment is
missing the point. What counts is harm reduction, not perfect
utopian safety. Don’t let the best be the enemy of the good, said
Voltaire. The ban on strong e-cigarettes, the ones preferred by
those trying to quit smoking, could prevent the saving of 105,000 European
lives a year, according to modelling by London Economics.



And there’s the Dunning-Kruger effect, whereby incompetent
people are too incompetent to see incompetence. An EU official with
a lower second-class degree from the University of Malta so badly
mangled the results of 15 scientists on harm reduction by
e-cigarettes that they all wrote to correct him.



The British government’s medical regulator, the MHRA, sticks
obstinately to its belief that medicinal regulation will improve
technological progress in e-cigarettes, ignoring reams of evidence
that high barriers to entry inevitably stifle innovation. Doctors,
represented by the BMA, seem to hate the idea of people buying,
rather than being prescribed, products that stop them smoking.
Worse, some of the firms advertising e-cigarettes and selling them
through Boots are now subsidiaries of Satan itself — the tobacco
industry. Not wishing to emulate Kodak, Big Tobacco is rushing to
buy up e-cigarette makers.



Big Pharma wants regulation of its rivals because it makes a
packet out of nicotine replacement therapies (patches and gums), which have a poor track record of helping
people to quit. And politicians? Well, they just seem to enjoy
banning things.



In short, says Professor Gerry Stimson of the London School of
Hygiene and Tropical Medicine, the public health response to
e-cigarettes has been dominated by attempts to regain ownership of
the issue from a consumer-led self-help movement. “Not invented
here” — the old bureaucrat’s cry.



The reason these cynical campaigns have succeeded at all is that
most of us confuse nicotine with smoking. As far as anybody can
tell, nicotine is harmless at the doses present in cigarette smoke.
It’s the tar that kills. Nicotine is addictive, but so is caffeine,
and a cup of coffee has a lot more potentially dangerous chemicals
in it than an e-cigarette. Vaping could well be less risky and
antisocial than coffee drinking.



Yet so brainwashed are we into thinking that nicotine is harmful
that we cannot see an advert for vaping without feeling a Pavlovian
revulsion and spouting a load of tosh about protecting kids from a
possible gateway into (rather than out of) smoking. And that
ignorance is being exploited by the reactionary opponents of this
disruptive and life-saving innovation. They would apparently prefer
that smoking continue its very slow, but doctor-supervised,
decline over the next 50 years than see it all but vanish in 20.


February 17, 2014

The sceptics are right. Don't scapegoat them.

This is my column in the Times this week. I have added
some updates in the text and below.



 



In the old days we would have drowned a witch to
stop the floods. These days the Green Party, Greenpeace and Ed
Miliband demand we purge the climate sceptics. No insult is too
strong for sceptics these days: they are “wilfully ignorant” (Ed Davey), “headless
chickens” (the Prince of Wales) or “flat-earthers” (Lord Krebs), with “diplomas in idiocy” (one of my
fellow Times columnists).



What can these sceptics have been doing that so annoys the great
and the good? They sound worse than terrorists. Actually, sceptics
have pretty well all been purged already: look what happened to Johnny Ball and David Bellamy at the BBC. Spot the sceptic on
the Climate Change Committee. Find me a sceptic within the
Department of (energy and) Climate Change. Frankly, the sceptics
are a ragtag bunch of mostly self-funded guerrillas, who have made
little difference to policy — let alone caused the floods.



What’s more, in the row over whether climate change is causing
the current floods and storms, the sceptics are the ones who are
sticking to the consensus, as set out by the Intergovernmental
Panel on Climate Change (IPCC) — you know, the body that the
alarm-mongers are always telling us to obey. And it is the sceptics
who have been arguing for years for resilience and adaptation,
rather than decarbonisation.



Mr Miliband says: “This winter is a one-in-250-year event”
(yet it’s nothing like as wet as 1929-30 if you count the whole of
England and Wales, let alone Britain) and that “the science is
clear”. The chief scientist of the Met Office, Dame Julia Slingo,
tells us “all the evidence” suggests that
climate change is contributing to this winter’s wetness. (Why,
then, did she allow the Met Office to forecast in November that a
dry winter was almost twice as likely as a wet winter?) Lord Stern,
an economist, claimed that the recent weather is evidence “we are
already experiencing the impact of climate change”. [For a thorough
debunk of Lord Stern's comments on the global
position, see below.]



All three are choosing to disagree with the IPCC consensus.
Here’s what the IPCC’s latest report actually says:



“There continues to be a lack of
evidence and thus low confidence regarding the sign of trend in the
magnitude and/or frequency of floods on a global scale.”



Here’s what a paper published by 17 senior IPCC scientists from
five different countries said last month:



“It has not been possible to
attribute rain-generated peak streamflow trends to anthropogenic
climate change over the past several decades.”



They go on to say that blaming climate change is a politician’s
cheap excuse for far more relevant factors such as “what we do on
or to the landscape” — building on flood plains, farm drainage
etc.



As for recent gales caused by a stuck jetstream, Dr Mat Collins,
of Exeter University, an IPCC co-ordinating lead author, has revealed that the IPCC discussed whether
changes to the jetstream could be linked to greenhouse gases and
decided they could not. “There is no evidence that global warming
can cause the jetstream to get stuck in the way it has this
winter,” he says, in a statement that raises questions
about Dame Julia’s credibility.



In 2012, the Met Office agreed:



“There continues to be little
evidence that the recent increase in storminess over the UK is
related to man-made climate change.”



So please will Lord Stern, Dame Julia and Mr Miliband explain
why they are misleading the public about the science?



That consensus, by the way, has never said that climate change will
necessarily be dangerous. The oft-quoted 97 per cent agreement among
scientists refers to the statement that man-made climate change
happens, not to future projections [and anyway it has been
comprehensively discredited and described as infamous by a prominent climate
scientist]. No climate change sceptic that I know “denies” climate
change, or even human contributions to it. It’s a lazy and
unpleasant slur to say that they do.



Sceptics say it is not happening fast enough to threaten more
harm than the wasteful and regressive measures intended to combat
it. So far they have been right. Over 30 years, global temperature
has changed far more slowly than predicted in 95 per cent of the
models, and has decelerated, not accelerated. When the sceptic
David Whitehouse first pointed out the current 15 to 17-year
standstill in global warming (after only 18 to 20 years of
warming), he was ridiculed; now the science
establishment admits the “pause” but claims to have some post-hoc
explanations.



While the green lobby has prioritised decarbonisation, sceptics
have persistently advocated government spending on adaptation, so
as to grab the benefits of climate change but avoid the harm, and
be ready for cooling as well if the sun goes into a funk. Yesterday
Mr Miliband yet again prioritised carbon limits — cold comfort to
those flooded from their homes. Huge sums have been spent on wind
farms and bio-energy, with trivial impact on emissions. The money
has come disproportionately from the fuel bills of poor people and
gone disproportionately to rich people.



Given that there are about 25,000 excess winter deaths each
year, adding 5 per cent to fuel bills kills far more people now
than (possibly) adding 5 per cent to future rainfall totals ever
would. If just a fraction of renewable energy subsidies sluiced
towards wind farms by the climate secretaries Ed Miliband and Ed
Davey had instead been put into flood defences, they would have
done far more good.



Meanwhile, please notice that those lambasting the sceptics work
for you, drawing wages from public bodies supported by the
taxpayer: Lord Stern, Lord Deben, Dame Julia Slingo, Sir Mark
Walport, Professor Kevin Anderson, even a spin doctor called Bob
Ward, and more. Most of the sceptics operate on self-employed
shoestrings and cost you nothing: Andrew Montford, David Holland,
Nic Lewis, Doug Keenan, Paul Homewood, Fay Kelly-Tuncay. There is
only one professional sceptic in the entire country — Benny Peiser
— and he is not paid by the taxpayer.



Despite the fuss, sceptics have had little effect. Renewable
subsidies for the rich grow larger every year. Jobs are still being
destroyed by carbon floor prices and high energy costs. Emissions
targets have not been lowered. At the very most, George Osborne and
his allies may have slightly pinched the flow of funds to
consultants and academics to talk about the subject. Maybe that’s
what makes the great and the good so cross.



 



Notes:



1. Some details on the row about the "pause", which was
furiously denied for a while, then suddenly explained. Whitehouse's
account is well worth reading for those interested in the history
of the subject. Whitehouse was accused by Mark Lynas of the New
Statesman of being ‘wrong, completely wrong’, and
‘deliberately, or otherwise, misleading the public’. So Bob Ward
asked Phil Jones of UEA to put the record straight. He wrote:



"What you have to do is to take the
numbers in column C (the years) and then those in D (the anomalies
for each year), plot them and then work out the linear trend. The
slope is upwards. I had someone do this in early 2006, and the
trend was upwards then. It will be now. Trend won’t be
statistically significant, but the trend is up."
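
Jones's plot-the-trend recipe is easy to reproduce. Here is a minimal sketch of it; the anomaly values below are invented placeholders of mine, not the actual HadCRUT series he was describing.

```python
import numpy as np

# Jones's recipe: take the years (his column C) and the anomalies
# (column D), then fit a linear trend to them.
years = np.arange(1995, 2010)
anoms = np.array([0.32, 0.18, 0.36, 0.53, 0.31, 0.29, 0.40, 0.46,
                  0.47, 0.45, 0.48, 0.42, 0.40, 0.31, 0.44])

slope, intercept = np.polyfit(years, anoms, 1)
print(f"trend: {slope * 10:+.3f} C per decade")
# As Jones conceded, a positive slope by itself says nothing about
# statistical significance; that requires the slope's standard error.
```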



The self-contradiction in Jones’s reply (an upward trend that was
not statistically significant) caused much amusement later. Ward



"Bottom line: the no upward trend
has to continue for a total of 15 years before we get worried."



That point is now well past on nearly all the temperature
records. By 2007, the Met Office was boasting that its new
computer could see a resumption of warming in the
future:



"We are now using the system to
predict changes out to 2014. By the end of this period, the global
average temperature is expected to have risen by around 0.3 °C
compared to 2004."



In fact, as of now, at the start of 2014, global
temperatures are if anything slightly lower than in 2004.
The pause continues. Attempts to explain it, using volcanoes,
aerosols, natural cycles, missing Arctic heat and ocean absorption
of heat have proliferated, but so far they are extremely
unconvincing.



The latest example is the paper by Matthew England et al, on
which Nic Lewis had this to say:



"Matthew England's paper claims to
show that the hiatus in global surface temperature since around
2001 is due to strengthening Pacific trade winds causing increased
heat uptake by the global ocean, concentrated in the top 300 m and
occurring mainly in the Pacific and Indian Oceans. But his study
uses model-based ocean temperature "reanalyses", not measurements.
A recent study by Lyman and Johnson of the US Pacific Marine
Environmental Laboratory shows, using actual measurements of
sub-surface ocean temperatures (infilling data gaps using a
representative mean), that ocean heat uptake has actually fallen
heavily from around 2002, whether measured down to 100 m, 300 m,
700 m or 1800 m. Indeed, they show an exceptionally large 90% fall
in the heat content trend for the top 300 m between the decades
1993–2002 and 2002–2011. Several other observational datasets for
the more often cited top 700 m ocean heat content also show a
substantial reduction in heat uptake between those periods. So,
unfortunately, ocean temperature measurements completely contradict
Matthew England's neat explanation for the warming hiatus."



 



2. The seasonal forecasting failures of the Met Office are becoming a habit. The Met Office forecast
“drier than average conditions” just before the extremely wet
April-June of 2012. It forecast a warm March last year before the
coldest March in years. It forecast mild winters in 2008-9, 2009-10
and 2010-11: all three were hard and the authorities were caught
unprepared. (Don’t get me wrong – I hugely admire the Met Office as
a short-term weather forecaster, but it’s no better than the Daily
Express at seasonal forecasts).



 



3. On how to deal with carbon emissions, the most delightful
irony of all is that Lord Stern believes we are doing too much.
Really. Go and read his report and you will find a clear statement
that a Pigovian tax of $80 per tonne of carbon dioxide (equivalent)
should compensate for all the harm likely to be done by carbon
dioxide emissions. If so, as the Adam Smith Institute’s Tim Worstall points out, then fuel duty is
already 15p a litre too high and other taxes on fossil fuels about
right. So let’s give him another knighthood, cancel all the wind
turbines and declare job done. Then there might be some more money
for flood defences.



As Worstall puts it:



"We can go further as well. As My
Lord Stern has pointed out (and as have eminences like Richard Tol,
William Nordhaus, Greg Mankiw and, in fact, just about every
economist who has bothered to look at the issue) the correct
solution to the results that come from the IPCC is a carbon tax. Of
some $80 per tonne CO2-e in fact according to Stern. And it's well
known that UK emissions are around 500 million tonnes. And also
that we already pay some swingeing amount of such Pigou Taxes: the
fuel duty escalator alone now makes petrol a good 15p per
litre more expensive than it should be under such
a tax regime. And there are other such taxes that we pay, so much
so that we are already, we lucky people here in the UK, paying a
carbon tax sufficient to meet Lord Stern's target (which is, it
should be noted, rather higher than what all the other economists
recommend: we're not stinting ourselves in our approach to climate
change).



We don't quite pay it on all the
right things as yet, this is true, but the total amount being paid
is about right. We just need to shift some of the taxation off some
products and on to others. Less on petrol and more on cowshit for
example.



That is, according to the standard
and accepted science of climate change we here in the UK have
already done damn near everything we need to do to beat it.



This, in turn, means that we now
have to fire everyone who disagrees with this application of that
accepted science. Which means we get to fire Ed Davey for
suggesting more windmills for example. We don't need any other
schemes, plans, subsidies, technological boosts nor regulations. As
Stern and all the others state once we've got that appropriate
carbon tax in place then we're done, problem solved. We just then
sit back and allow the market to churn through the various options
now that we've corrected the price system for externalities."
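
The arithmetic behind Worstall's claim is simple enough to check on the back of an envelope. The sketch below uses round numbers of my own choosing — the exchange rate, petrol emission factor and duty level are assumptions, not figures from the column — and the 15p figure in the quote presumably also nets off other road externalities such as congestion, which this ignores.

```python
# Back-of-envelope check of the Stern/Worstall arithmetic.
STERN_TAX_USD = 80       # Stern's Pigovian rate, $ per tonne CO2-e
UK_EMISSIONS_T = 500e6   # ~500 million tonnes CO2-e a year
USD_PER_GBP = 1.6        # assumed 2014 exchange rate

# Total UK-wide Pigovian bill at Stern's rate
total_gbp = STERN_TAX_USD * UK_EMISSIONS_T / USD_PER_GBP
print(f"UK-wide Stern tax: about £{total_gbp / 1e9:.0f}bn a year")

# The carbon component of that tax per litre of petrol
KG_CO2_PER_LITRE = 2.3   # combustion emissions per litre (assumed)
per_litre = (STERN_TAX_USD / USD_PER_GBP) * KG_CO2_PER_LITRE / 1000
DUTY_2014 = 0.58         # approximate UK fuel duty, £/litre (assumed)
print(f"carbon component: ~£{per_litre:.2f}/litre "
      f"versus duty of ~£{DUTY_2014:.2f}/litre")
```

On these assumed numbers the whole-economy bill comes to roughly £25 billion a year, in the same ballpark as what fuel duty alone raises, which is the sense in which the total amount already being paid is "about right".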



 



 


February 13, 2014

Science discovers new ignorance about the past

My recent Times column on new discoveries in the
history of our species:



It is somehow appropriate that the 850,000-year-old footprints found on a beach in
Norfolk last May, and announced last week, have since been washed
away. Why? Because the ephemeral nature of that extraordinary
discovery underlines the ever-changing nature of scientific
knowledge. Science is not a catalogue of known facts; it is the
discovery of new forms of ignorance.



For those who thought they knew the history of the human
species, the past few years have been especially humbling. There
has been a torrent of surprising discoveries that has washed away
an awful lot of what we thought we knew, leaving behind both much
more knowledge and many more questions.



I do wish people would teach children this about science: that
it is the richest source of new mysteries. To paraphrase George V,
bugger Boyle’s Law: tell the kids about how we keep finding things
we do not understand. That way they might find silly forms of
superstition and mysticism less enticing.



The Happisburgh footprints are hundreds of thousands of years
older than any other evidence of “human beings” living outside
Africa. They show that some kind of hominid, possibly the species
known as Homo antecessor, was capable of living
in a very cold place (Britain was then colder than it is now) long
before the cold-adapted Neanderthals had even emerged. Who were
these “people”? Why were five of them walking across a tidal
mudflat? Did they wear clothes and light fires? New mystery.



And this is just the latest new mystery to emerge from human
prehistory. A few days ago scientists from the University of Utah
announced that they had found that the Khoisan
people of southern Africa — the click-speaking foragers and
pastoralists who seem to be the most genetically distant people
from all the rest of us — have a bunch of genes in them that came
from Eurasians via East Africans, who got them about 3,000 years
ago. So which European or Arabian people were messing around in
East Africa in 1000BC? New mystery.



Among the genes that those Eurasians took to Africa were a few
Neanderthal DNA sequences. Until only three years ago it was
considered established fact that Neanderthals died out. Now it’s
clear that they didn’t entirely do so, because they mated with the
modern non-Africans from whom we are all descended, and did so just
enough to leave 2 per cent or so of Neanderthal DNA in most of us.
Where and when did that mating happen? New mystery.



And since that 2 per cent is a different 2 per cent in each of
us, it is now apparent that about 40 per cent of
the Neanderthal genome survived inside modern Eurasians. Sequences
that shape skin and hair seem well represented, implying that we
perhaps needed Neanderthal genes to cope with the cold. But
sequences from the Neanderthal X chromosome are largely missing. Does this imply that male
hybrids were mostly sterile, as sometimes happens when sufficiently
different mammal species can still just produce fertile offspring
but of only one sex (in horse-donkey hybrids, for example, female hinnies are occasionally
fertile, but male mules never)? New mystery.
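
The way a different 2 per cent in each person can add up to 40 per cent of the genome is easy to illustrate with a toy simulation. The sketch below assumes — my assumption, chosen to reproduce the observed plateau — that selection purged Neanderthal DNA from 60 per cent of the genome, so each person's 2 per cent is drawn only from the tolerated 40 per cent.

```python
import random

SEGMENTS = 10_000   # treat the genome as 10,000 equal segments
PER_PERSON = 200    # each person carries ~2% Neanderthal segments
pool = random.sample(range(SEGMENTS), 4_000)  # the tolerated 40%

union = set()
for person in range(1, 1001):
    # each individual's Neanderthal DNA is a random draw from the pool
    union.update(random.sample(pool, PER_PERSON))
    if person in (1, 10, 100, 1000):
        share = 100 * len(union) / SEGMENTS
        print(f"{person:>4} people: union covers {share:.0f}% of genome")
```

The union climbs quickly from 2 per cent and then saturates near 40 per cent, because nothing outside the tolerated pool survives in anyone.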



None of the Neanderthal versions of the portion of chromosome 7
that includes FOXP2, the gene vital for spoken language, seems to have survived. Was this because our
ancestors found this particular version of the genetic machinery
inadequate for fluent speech and (as it were) dropped it, by
natural selection? If so, how come the Neanderthal version of FOXP2
itself is so similar to ours and so different from the
chimpanzee’s, implying that they did at least have some form of
language? New mystery.



Then what about the bizarre discovery in the past six years of
the genome of a third species of early man, Denisovans,
contemporary with Neanderthals and our (African) principal
ancestors? A female of this species left her genes in an unusually
thick finger bone in a Siberian cave and, we now know, her species
contributed a pinch of DNA to Melanesians and Australians. Who were
these people? What did they look like? New mystery.



Go back 11 years and try to explain the discovery that a tiny
little hominid with distinct anatomy could have lived on the island
of Flores in Indonesia for hundreds of thousands of years until
only 13,000 years ago. Who were they and how did they get there
across a stretch of sea? New mystery.



Then track back into Africa 120,000 years ago, during a warm,
damp spell of climate, and try to put your finger on what it was
that made at least one group of Africans so darned good at thriving
that they soon displaced all others in the whole of Africa and
eventually spilled out into the rest of the world, embarking on a
headlong and accelerating voyage of technological discovery that
brought them farming, cities, space travel and Twitter. What was it
about these people that enabled this to happen then? Language, mind
or (my favourite theory) the collective wisdom and idea-sharing
that comes with widespread exchange? But whatever the explanation,
it only poses more questions: why then, why there? New mystery.



And why does everybody descended from these people — black,
white or brown — have such a comparatively inbred genome, far more
genetically uniform than that of the chimpanzee? If we went through
a genetic bottleneck in the past 60,000 years,
when our ancestors apparently numbered only a few thousand people
(almost certainly alongside a much larger population that left no
descendants), what caused it and where were “we” at the time? On
the shore of the Red Sea, eating shellfish perhaps? New
mystery.



About the only safe conclusion about human prehistory — as
revealed in genes, stone tools and bones — is that some gigantic
new surprises are in store for us. And that is the beauty of
science: the more you find out, the more you realise what you did
not know. The story of human prehistory is not special in this
regard. You can tell the same tale of expanding new mysteries in
cosmology, neuroscience, the history of climate, the workings of
the immune system. On the voyage of science we are perpetually
sighting great continents of ignorance that we did not even know
were there.


February 6, 2014

Do people mind more about inequality than poverty?

My Times column this week was on the facts behind the
inequality debate:



The Swedish data impresario Hans Rosling recently asked some British people to estimate
the average number of births per woman in Bangladesh and gave them
four possible answers. Just 12 per cent got the right answer (2.5),
whereas 25 per cent of chimpanzees would have got it right if the
answers had been written on four bananas from which they could
choose one at random. Remarkably, university-educated Britons did
worse, not better, than non-graduates. It is not so much what you
don’t know as what you know that isn’t so.



Hold that thought while I introduce you to Tom Perkins, the
Silicon Valley venture capitalist and former husband of the crime
writer Danielle Steel, who stirred up fury in America when he wrote to The Wall Street
Journal last month complaining about a rising tide of hatred
against the very rich, and indirectly but crassly comparing it to
Kristallnacht. A few days later President Obama used his State of
the Union speech to take aim at inequality. In this country, too,
inequality is one thing that rankles with most people, as the
50 per cent tax rate row reveals.



The puzzling thing about this is that by any conceivable
measure, absolute poverty has fallen dramatically over the past few
decades, so why should it matter if the rich get richer? Today’s
British poor spend half as much of their income on food and
clothing as in the 1950s, while working many fewer hours, living
about eight years longer and having access to phones, cars,
medicines and budget airlines that would have amazed even the rich
in the 1950s.



Moreover, here’s a question I’m willing to bet that chimpanzees
would do better than people at: given that inequality has been
rising recently in China, India, America and many other countries,
is global inequality rising or falling?



The answer: it’s falling and has been for several decades,
however you measure it. The reason is that people in poor countries
are getting richer more quickly than people in rich countries are
getting better off.



That fall in global inequality has accelerated since the start
of the financial crisis. As Africa now experiences record rates of
growth, the number of people trying to live on $1.25 a day is
plummeting fast. Mr Rosling likes to show two charts in his talks:
the graph of global income was once a two-humped camel; now it’s a
one-humped dromedary, with the vast majority of the world’s people
in the middle.



Here’s another question that I fancy the chimps would beat the
people at: did poverty and inequality in Britain increase or
decrease as a result of the recession? The answer is that both
fell. Inequality has fallen to levels not seen since the mid 1990s,
as it usually does during recessions, though it is still higher
than it was in the 1970s. Meanwhile the Left’s favourite measure of
poverty — those earning less than 60 per cent of the median income
— has by definition gone down, because median income has gone down. Redefining poverty in this
relative (and very inadequate) way has therefore rather
backfired.
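
A quick numerical illustration (the incomes are invented for the purpose) shows how a relative poverty line can fall in a recession even though every single income has fallen, so long as the median falls fastest.

```python
def poverty(incomes):
    """Share of people below 60% of the median income, plus the line."""
    ordered = sorted(incomes)
    line = 0.6 * ordered[len(ordered) // 2]
    return sum(i < line for i in incomes) / len(incomes), line

before = [10, 12, 14, 20, 22, 24, 30, 40, 60]       # pre-recession
after = [9.5, 11.5, 13.5, 17, 18, 19, 25, 33, 50]   # everyone poorer

for label, xs in (("before", before), ("after", after)):
    rate, line = poverty(xs)
    print(f"{label}: poverty line {line:.1f}, rate {rate:.0%}")
```

Here the measured poverty rate halves, from two people in nine to one, purely because the threshold moved with the median — exactly the inadequacy of the relative measure that the column points to.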



If you measure consumption inequality, it is far lower than
pre-tax income inequality, because the top 40 per cent of earners
pay more in than they get out, while the bottom 60 per cent get
more out than they pay in. Indeed, in Britain the top 1 per cent
generate about 30 per cent of the total income-tax haul. After such
redistribution, the richest fifth of the population has only four times as much money to play with
as the poorest fifth.



With big increases in housing benefit and other redistributions,
consumption inequality may be as low as it has ever been. Add in the
value of pensions (including the state pension), free healthcare,
the fall in the price of food and clothing relative to wages, plus
the dramatic fall in the cost of much technology and it is clear
that for most basic needs, the country has never been less poor or
less unequal. A smartphone’s search engine may be about as capable
as a plutocrat’s full-time secretary was in 1960.



Imagine being told that one of the people in a meeting is a
genuine billionaire (I owe this idea to Professor Don Boudreaux). How would you tell
which one? His bodyguards, private jets and grouse moors are
outside the room; his shirt and jeans are unlikely to give him away
(as they would in 1900); his Rolex could be a cheap imitation; his
teeth, girth and height are probably unremarkable (unlike in 1800);
even his Diet Coke is the same as everybody else’s. Much more than
in the past, most inequality in this country these days — though by
no means all — is in luxuries, rather than necessities.



Here’s another question where my money is on the chimps: does
income generally grow faster for people in the lowest fifth of the
population or people in the highest? It’s the lowest, because many of those people
are young, low-paid people just starting out on their careers,
while many of the richest fifth are older people at the peak of
their pay, about to retire. That is to say, the category “poorest
fifth” may not seem to show much change, but the people in it do.
Income mobility is far from dead: 80 per cent of people born in
households below the poverty line escape poverty when they reach
adulthood.



None of this is meant to imply that people are wrong to resent
inequality in income or wealth, or be bothered about the
winner-take-all features of executive pay in recent decades.
Indeed, my point is rather the reverse: to try to understand why it
is that people mind so much today, when in many ways inequality is
so much less acute, and absolute poverty so much less prevalent,
than it was in, say, 1900 or 1950. Now that starvation and squalor
are mostly avoidable, so what if somebody else has a yacht?



The short answer is that surely we always have and always will
care more about relative than absolute differences. This is no
surprise to evolutionary biologists. The reproductive rewards went
not to the peacock with a good enough tail, but to the one with the
best tail. A few thousand years ago, the bloke with one more cow
than the other bloke got the girl, and it would have cut little ice
to try to reassure the loser by pointing out that he had more cows
than his grandfather, that they were better cows, or that he had
more than enough cows to feed himself anyway. What mattered was
that he had fewer cows.


January 27, 2014

Cherry picking and the tale of the Siberian larch trees

This is Stephen McIntyre’s response to me commenting on the
letters from Professor Keith Briffa to the Times in response to my
column on the widespread problem of withheld adverse data. It makes
very clear that my account was accurate, that my account was
mischaracterized by Professor Briffa in serious ways, and that
nothing in his letters refutes my original claim that had a key
dataset not been ignored, a very much less striking result would
have been published. Professor Briffa now says he was reprocessing
the data, but in 2009 he said “we simply did not consider these
data at the time”. Neither explanation fits the known facts
well.



I therefore stand by my story.



My original intention in mentioning this example, chosen from
many in climate science of the same phenomenon, was to draw
attention to the fact that non-publication of adverse data is not a
problem confined to the pharmaceutical industry, but also occurs in
government-funded, policy-relevant areas of academic science.



I have edited McIntyre’s text only to explain acronyms and
abbreviations.



Matt Ridley



27.1.2014



 



Stephen McIntyre:



Briffa’s new letter does not rebut anything that you had written
in your reply or original article. Instead, it is a pastiche of
comments that are either incoherent, not responsive to points in
your reply, untrue or highly contentious.



Review of Events



Let me first do a brief reprise of events.



Figure 1 shows five different Yamal/Yamal region chronologies
for the period 900 on, converted to Standard Deviation units. 
The first three panels show versions from Briffa 2000, Briffa et al
2008 and Briffa 2009 (Climatic Research Unit (CRU) website). All
have pronounced Hockey-stick (HS) shapes. Panels four and five show
the 2006 and 2013 versions, both of which end at elevated but not
HS values.



Hockey Sticks



Figure 1. Five Yamal chronologies. Top – Briffa (2000); second –
Briffa et al 2008; third – Briffa 2009 at CRU website; fourth –
2006 regional chronology mentioned in Climategate dossier and
eventually shown in Briffa et al 2013 Supplementary Material 7;
fifth –  regional chronology (“Yamalia”) from Briffa et al
2013. The dotted horizontal line shows, for orientation, the
closing value of the Briffa et al 2013 regional chronology
(smoothed). The first four panels end in 1996 (denoted by a dashed
vertical red line), while the 2013 chronology ends in 2006. The
smooth for 2000, 2008 and 2009 is the smooth used in the original
articles, while the smooth for the bottom two panels is a running
11-year mean.
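
For readers who want to reproduce the presentation, the two operations named in the caption are straightforward. Here is a minimal sketch — my own illustration with synthetic data, not McIntyre's code — of converting a chronology to standard-deviation units and applying a running 11-year mean.

```python
import numpy as np

def to_sd_units(chron):
    """Z-score a chronology so differently scaled versions share an axis."""
    chron = np.asarray(chron, dtype=float)
    return (chron - chron.mean()) / chron.std(ddof=1)

def running_mean(x, window=11):
    """Centred moving average; the ends are left as NaN."""
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    half = window // 2
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

# Synthetic stand-in for a chronology covering 900-1996
years = np.arange(900, 1997)
chron = np.random.default_rng(0).normal(1.0, 0.2, size=years.size)
smoothed = running_mean(to_sd_units(chron))
```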



 



The Briffa (2000) chronologies for Yamal, Tornetrask and Taimyr
have been widely (almost universally) applied in subsequent
multiproxy temperature estimates. Its Yamal chronology re-processed
the Yamal measurement dataset subsequently published in Hantemirov
and Shiyatov 2002.



In 2006, Briffa and/or Osborn calculated “regional” chronologies
for the Yamal, Tornetrask and Taimyr regions. These regional
chronologies supplemented the measurement data used in Briffa 2000
with other measurement data in the region e.g. measurements taken
by Schweingruber and Vaganov not used in the 2000
calculation.  These three regional chronologies were discussed
in a Climategate email (684. 1146252894.txt) on April 28, 2006,
which mentioned the Yamal/Urals regional chronology as follows:



“URALS” (which includes the
Yamal and Polar Urals long chronologies, plus other shorter
ones).



The chronology in this email is shown in the fourth panel above.
It obviously lacks the pronounced HS of the 2000 version; in
addition, its medieval values are somewhat greater than modern
values.



In another Climategate email (780. 1172776463.txt) the following
year  (March 2007), Osborn discussed with Briffa the
differences in versions of the Yamal/Urals regional chronology that
were in a CRU presentation that Osborn had sent to Briffa.



Eight months after this email (November 2007), Briffa and
coauthors published an article (Briffa et al 2008) purporting to
provide “regional chronologies” for the three northern Eurasian
regions previously considered in Briffa 2000. In the other
two regions (Tornetrask, Taimyr), Briffa et al had dramatically
expanded the datasets by incorporating, for example, Schweingruber
datasets in the Taimyr region. However, despite the
implication of Briffa et al 2008 that they had used expanded
datasets, the Yamal dataset of Briffa et al 2008 was the same as
the dataset of Briffa 2000, even though the 2000 Yamal chronology
had a much smaller number (12) of samples in the modern period
relative to the other 2000 chronologies, let alone their 2008
expansions. This chronology is shown in the second panel.



In September 2009, after obtaining measurement data for the
three regions as used by Briffa, I observed that Briffa et al 2008
had incorporated Schweingruber datasets into the 2008 Taimyr
regional chronology, that there were seemingly equivalent
Schweingruber datasets in the Yamal region (noting Khadyta River
KHAD as an example) and that inclusion of such Schweingruber data
would attenuate the very pronounced HS of the Briffa 2008 Yamal
chronology. (BTW this observation has been supported by the 2013
regional chronology.) It seemed highly implausible that Briffa and
associates wouldn’t have done a regional chronology calculation for
the Yamal/Urals region. I speculated that Briffa and
associates must have done a regional chronology calculation,
obtained highly attenuated non-HS modern values and, for some
reason not disclosed in the article itself, chosen not to
report it.



This surmise, which later proved to be true, was sharply rebuked
at the time.  In an online article at the CRU website in
October 2009, Briffa conceded that the KHAD site met their criteria
for inclusion in a regional chronology, but claimed that they
“simply did not consider” the data at the time.



Judged according to this criterion it is entirely appropriate to
include the data from the KHAD site (used in McIntyre's
sensitivity test) when constructing a regional chronology for the
area. However, we simply did not consider these data at the time,
focussing only on the data used in the companion study by
Hantemirov and Shiyatov and supplied to us by them. We would never
select or manipulate data in order to arrive at some preconceived
or regionally unrepresentative result.



At the time, the Climategate emails were not available and,
while this assertion seemed implausible, it could not be
contradicted with then-available information. However, the
following month (November 2009), the Climategate emails became
available. Yamal appears to have been of particular interest
to the hacker/leaker of the Climategate emails, based both on the
selection of documents and emails and on the limited information
on access times.



Yamal was mentioned in one of the questions in the Muir Russell
Issues Paper as follows. (In passing, it’s interesting to note
that, while the Muir Russell report has been derided on many
counts, its neglect of issues raised in the Issues Paper was
under-discussed.)



Have you been
selective in utilizing tree ring evidence from Yamal in Siberia;
and if so, what is the justification for selectivity and does the
selection influence the deduced pattern of hemispheric climate
change during the last millennium?



The reference in the April 2006 Climategate email to a regional
chronology incorporating Yamal, Polar Urals and “other shorter”
datasets was quickly noticed, as it appeared to contradict CRU’s
October 2009 response. In my submission to the Muir Russell
review, I specifically drew attention to this email as an example
of cherrypicking the Yamal chronology over a “still unavailable
combined chronology attested in Climategate Letter
1146252894.txt.”



In their submission to the Muir Russell panel in February 2010,
CRU did not address the unpublished regional chronology. They
stated to Muir Russell that the purpose of both Briffa 2000 and
Briffa et al 2008 was to “reprocess” the Hantemirov and Shiyatov
dataset and that they “made no selection of what data to
include”. They incorporated their October 2009 website article in
their submission – an article which, as noted above, said that
they “simply did not consider” the inclusion of Schweingruber data
in a regional chronology at the time. While the
purpose of Briffa 2000 included reprocessing the Hantemirov and
Shiyatov dataset using RCS standardization, this was not the stated
purpose of Briffa et al 2008, which, instead, purported to present
regional chronologies. “Reprocessing” was nowhere mentioned in the
article.  Nor is it correct to say that Briffa et al 2008
“made no selection of what data to include”. They decided against
using the expanded regional data used in the 2006 regional
chronology.  Even if that were a justifiable decision (as CRU
later argued), it was still a decision about what data to include
and, to that extent, their submission to Muir Russell was
misleading.
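
[Another note: “RCS standardization” is Regional Curve
Standardisation, the detrending method at the heart of this
dispute. The sketch below is a simplified illustration of the
general idea only, not CRU's implementation: ring widths are
aligned by cambial age, each measurement is divided by the mean
regional growth at that age, and the resulting indices are
averaged by calendar year. In practice a smooth curve, such as a
negative exponential, is fitted to the age-aligned means rather
than using them raw.]

from collections import defaultdict

def rcs_chronology(trees):
    # trees: list of (first_year, widths), where widths[k] is the ring width
    # the tree put on at cambial age k (a hypothetical input format)
    by_age = defaultdict(list)
    for _, widths in trees:
        for age, width in enumerate(widths):
            by_age[age].append(width)
    # 1. regional curve: mean ring width at each cambial age
    curve = {age: sum(ws) / len(ws) for age, ws in by_age.items()}

    # 2. index each measurement against the curve, pooled by calendar year
    by_year = defaultdict(list)
    for first_year, widths in trees:
        for age, width in enumerate(widths):
            by_year[first_year + age].append(width / curve[age])

    # 3. chronology: mean index in each calendar year
    return {year: sum(ix) / len(ix) for year, ix in sorted(by_year.items())}

# two hypothetical trees germinating in different years
print(rcs_chronology([(1900, [1.2, 1.0, 0.8, 0.7]), (1902, [1.3, 1.1, 0.9, 0.8])]))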



 



As is well known, Muir Russell himself did not even bother
attending the one interview of CRU personnel on Hockey Stick
matters (which was conducted by Geoffrey Boulton). Following
this interview, Boulton asked CRU to comment on McKitrick’s
October 2009 op-ed about Yamal – an article published prior to
Climategate, which therefore did not refer to or discuss the 2006
regional chronology. The panel did not ask CRU to comment on the
regional chronology. In their response to Muir Russell, CRU
re-iterated their implausible claim that the “purpose” of Briffa et
al 2008 was merely to “reprocess” the original Hantemirov dataset.
They also inconsistently said that they had considered the
incorporation of more data (presumably including Khadyta River) and
indeed even “intended to explore an integrated Polar Urals/Yamal
larch series”, but had “felt that this work could not be completed
in time”. Needless to say, this incompleteness had not been
disclosed to editors or readers of Briffa et al 2008.  CRU
conspicuously did not disclose to the Muir Russell panel that they
had previously calculated a Yamal/Urals regional chronology or
discuss its supposed defects.



It’s too bad that Muir Russell neglected this and other issues
in their report.  A more competent panel would have settled
some of these issues.



Since the regional chronology was not addressed by Muir Russell,
I submitted an FOI request both for the chronology and for a list
of the sites.  In their response, University of East Anglia
(UEA) confirmed the existence of the chronology, but refused to
release it or the list of sites.



I appealed to the Information Commissioner.  The
Information Commissioner required UEA to disclose the list of sites
immediately.  In negotiations between the Information
Commissioner and UEA, UEA undertook to publish the requested
regional chronology within six months, an undertaking which was a
major improvement, but which was still (in my opinion) a delaying
tactic. The Information Commissioner accepted UEA’s undertaking but
noted my concern, stating that failure on the part of the
University to live up to its undertaking would open up the
possibility of a different position for a fresh FOI request:



The complainant has expressed doubts
as to whether the University really intends to publish the
information by the date specified and believes this to be a
delaying tactic on the University's part. The Commissioner is not
aware of any evidence to support such a contention, but given the
written assurances which have been received from the University as
to the publication date, he considers that any delay beyond October
2012 will need to be reasonably explained by the University if the
withheld information is to remain exempt from disclosure by virtue
of regulation 12(4)(d), if a further request was made.



Having no confidence in UEA’s undertaking, I appealed the
Information Commissioner’s decision. As matters turned out, UEA did
not make the slightest attempt to live up to their undertaking.
They submitted an article to Nature without the requested regional
chronology. This submission was rejected. So six months later, they
had not even begun to comply with their undertaking to the
Information Commissioner. The appeal at the Information Tribunal
proceeded, with exchanges and submissions to the Information
Tribunal becoming increasingly acrimonious.



Tim Osborn stated to the Information Tribunal that release
of the regional chronology (shown in the fourth panel above) would
have “adverse reputational consequences” to Briffa and CRU and lead
to criticism that would “damage the reputation of individual CRU
scientists as well as CRU's reputation as a leading centre of
excellence in the field of climate change research”:



49. Looking at the situation more
narrowly, there would also inevitably be adverse reputational
consequences for the individual scientists involved in this work
and the University itself if disclosure had been effected. This is
because biases in the 2006 chronology, which in CRU's view limit
its value as evidence of past temperature changes, would doubtless
be seized upon by climate change sceptics as demonstrating that
there were fundamental failings in CRU's approach to the science of
climate change. Whilst such charges would be entirely unfair in all
the circumstances, they would serve to damage the reputation of
individual CRU scientists as well as CRU's reputation as a leading
centre of excellence in the field of climate change research.



Meanwhile, CRU had finally commenced preparation of the article
previously promised to the Information Commissioner.  The new
article (Briffa et al 2013) presented a new regional chronology
incorporating Yamal and Polar Urals, but not Khadyta River. 
This is shown in the bottom panel. In its Supplementary Material 7,
they presented the 2006 regional chronology (fourth panel) together
with a litany of supposed defects.   In April 2013, the
Information Tribunal rejected my appeal. While it seemed to me that
they erred in their decision, the issue became moot with the
publication of Briffa et al 2013 and I did not pursue it
further.



Returning to Figure 1, it is obvious that the modern portion of
the 2013 regional chronology is dramatically attenuated relative to
the pronounced HS of the 2000, 2008 and 2009 chronologies and is
much more comparable to the despised and unreported 2006 regional
chronology. The main difference between the 2006 and 2013 regional
chronologies is the shaving of medieval values through a new policy
requiring the exclusion of radially asymmetric root collar samples
from Polar Urals.  As I observed at Climate Audit at the time,
strip bark bristlecones have far more dramatic radial asymmetry and
it is this radial asymmetry that causes the extreme Hockey Stick
shape of the Graybill bristlecone chronologies.



Consistent adoption of Briffa’s new policy would require the
rejection of the many reconstructions using strip bark
bristlecones. Otherwise,  one simply moves to a new form of
cherrypicking by IPCC paleoclimatologists: accepting radial
asymmetry when it contributes to a HS and rejecting it when it
doesn’t.



 



Briffa’s New Letter



As noted above, Briffa’s second letter is a pastiche of
incoherency, irrelevancy and disinformation.



First Paragraph Is Incoherent



In his first paragraph, Briffa challenged the following sentence
in Ridley’s reply letter:



Professor Keith Briffa says that he
was "reprocessing" a data set rather than ignoring it because it
gave less of an uptick in temperatures in later decades than the
embarrassingly small sample of Siberian larch trees he
published.



As noted above, Briffa had stated (for example to Muir Russell)
that the “purpose” of Briffa et al 2008 was to “reprocess” the
small Hantemirov and Shiyatov 2002 Yamal dataset and denied that
this decision had anything to do with the adverse (non-HS) results
from their 2006 regional chronology calculations.  Although
Briffa objected strongly to the above sentence, I, for one, do not
see any relevant difference between Ridley’s characterization and
Briffa’s new declaration:



This reprocessing was not motivated
by consideration of any “uptick in
temperatures”.



Here Briffa’s animosity has descended to incoherence.





Second Paragraph Is Irrelevant



Briffa’s next two sentences are irrelevant to any issue raised
in either Ridley’s original opinion or reply.



Ridley had not suggested that the
Yamal chronology was “dependent” on the Mann bristlecones (or vice
versa). Thus Briffa’s following strident declaration that the two
series are “independent” is simply irrelevant to any actual
issue:



in his Opinion Piece he describes my
publication of this version of the Yamal chronology as a
“relaunch” of the hockey-stick graph of Northern
Hemisphere average temperatures. My work was independent of the
so-called “hockey-stick” graph and I and my colleagues have long
ago demonstrated that the conclusions drawn in that work are not
dependant on the inclusion of my Yamal chronology.



As a nit, the original Yamal chronology was published (2000)
some years prior to my entry into the field (2003-2005) and was
therefore not a “relaunch” of the Hockey Stick in response to
our criticisms.  More accurately, it was Briffa’s
entry into the Hockey Stick market, as, up to this point, his
primary published reconstructions (Briffa et al 1998; Briffa et al
2001) had marked post-1960 declines, well known through “hide the
decline”.



 



The Withheld Regional Chronology



Briffa then made a series of highly inaccurate and/or misleading
statements about the unpublished and adverse 2006 regional
chronology as follows:



Ridley then persists in the
repeated claim that a “larger tree-ring chronology from the
same region did not have a hockey stick shape”. Leaving to
one side the questions of what constitutes a “larger
chronology” and what does or does not represent a
“hockey-stick shape”, this statement implies that a
chronology based on more tree-ring data from this region would
invalidate the conclusions from our published temperature
reconstructions. He also insinuates that just such an “adverse”
chronology had been concealed by us and would not have come to
light without a Freedom of Information (FOI) request. He is wrong
again on both counts. This presumably refers to an FOI request made
to the University of East Anglia for a chronology whose existence
was revealed as a result of the theft of emails from the Climatic
Research Unit. Both the Information Commissioners Office and the
Information Tribunal (appeal number EA/2012/0156) rejected this
request, accepting our explanation that this chronology was
produced as part of ongoing research intended for publication.



This chronology was indeed subsequently published, but as a
demonstration of how inappropriate statistical processing, allied
with a failure to recognise and account for inhomogeneities in the
underlying measurement sets, can produce what is an unreliable
indication of regional tree growth and inferred summer temperature
changes in this area. Ridley’s description of this chronology
as a withheld “adverse” result is, therefore, unjustified.



First, CRU did not disclose the existence of the 2006 regional
chronology or efforts to develop a Yamal/Urals regional chronology
in Briffa et al 2008, their October 2009 website article or in
their submissions to Muir Russell.  Does this imply that CRU
“concealed” the existence of the chronology?  In this case,
“conceal” is Briffa’s word, not Ridley’s.  Ridley’s letter as
submitted used the perhaps more neutral phrase “failed to report” –
a claim that is true.



Second, contrary to Briffa’s assertion, nothing in the
Information Commissioner’s decisions contradicts a view that the
2006 regional chronology would not have come to light without the
FOI requests. In my opinion, while CRU may well have
re-opened the file on Yamal at the time of my FOI request, I do
not believe that they then had the faintest intention of including
the 2006 regional chronology in any putative publication.  I
can’t prove this, but it’s what I think.  The Information
Commissioner appears to me to have taken a fairly firm position
with UEA during negotiations. He required UEA to release the list
of sites used in the regional chronology, though this was done
during negotiations rather than in a decision.   This was
an important victory for me, as it enabled me to do my own estimate
of the regional chronology – an estimate that was virtually
identical to the then withheld chronology.  It also seems to
me that the Information Commissioner took a relatively practical
position on the chronology itself.  UEA said that they were
working on the data and undertook to release the data as part of a
publication within six months. The Commissioner accepted this
undertaking, but warned UEA that he would take a different position
on a fresh request if UEA failed to live up to their
undertaking.  At no point did the Commissioner opine on
whether the regional chronology would have been disclosed without
the FOI request.



Third, Briffa took issue with the characterization of the
unpublished 2006 regional chronology as an “adverse” result. 
But Osborn himself stated that release of this chronology would
have “adverse” reputational consequences for Briffa and other CRU
scientists and would “damage” the reputation of CRU.  So, by
their own admission, CRU believed these results to be
“adverse”.



Finally, Briffa et al 2013 (Supplementary Material 7) did indeed
contain a tirade against the 2006 chronology. I entirely agree that
“failure to recognise and account for inhomogeneities in the
underlying measurement sets” is a pernicious problem in the
regional chronology methodology proposed in Briffa et al 2008. All
the more reason why the supposed failure of this methodology on the
Yamal/Urals dataset should have been reported in the earlier
article.



Ridley’s description of the regional chronology as both
“withheld” and “adverse” is completely justified.



“Validated” by Muir Russell



Briffa’s final issue is little more than a cavil.  Briffa’s
first letter had stated:



The accusation of “cherry-picked
publication” was investigated by the Independent Climate Change
Email Review, which concluded in 2010 that our “rigour and honesty
as scientists are not in doubt”.



Ridley’s reply, as submitted, stated:



Briffa further claims that
his research was validated by the inquiry chaired by Sir Muir
Russell. Yet Sir Muir did not even attend the only interview
with academics at the University of East Anglia on the Hockey
stick.  Nor did the panel interview critics of the UEA group.
Nor did the Muir Russell panel even ask Briffa and Jones about
their destruction of documents to evade FOI requests. The inquiry
did not explore, let alone endorse, the specific data sets in
question.



This was shortened by the editors to:



Briffa claims that his research
was validated by the inquiry chaired by Sir Muir Russell, but that
inquiry did not explore, let alone endorse, the specific data sets
in question.



Briffa now complains that there is a relevant distinction
between saying that the accusation of cherry-picked publication
was investigated by Muir Russell and saying that their research
was “validated” by Muir Russell:



Ridley again misquotes me as
saying “my research was validated by the inquiry chaired by
Sir Muir Russell.” I clearly said no such thing. The
Independent Climate Change Email Review had no remit to
“validate” any research. In my opinion this can be
done only through reinforcement by consistent results produced in
repeated, continuing research and published in the peer-review
literature. What I actually said was that I had not “cherry picked”
my data to produce a desired result, which was the specific
accusation levelled at me in his Opinion Piece. Sir Muir Russell’s
team examined this specific accusation and found that I had
not.



 



The distinction is really immaterial.  Ridley’s letter
could have been rephrased as follows without changing the
point.



Briffa further claims that
allegations of cherry-picking had been settled by the
investigations of the Muir Russell panel. This is not the case.
Sir Muir did not even attend the only interview with academics
at the University of East Anglia on the Hockey stick.  Nor did
the panel interview critics of the UEA group. Nor did the Muir
Russell panel obtain the contested regional chronology from Briffa
and/or associates or carry out any investigation of the
circumstances of the contested chronology. Nor did the Muir Russell
panel even ask Briffa and Jones about their destruction of
documents to evade FOI requests. The inquiry did not explore, let
alone endorse, the specific data sets in question.



Conclusion



The issue in the original Opinion Piece was the failure to
report adverse results, with the lugubrious story of Briffa’s
unpublished and unreported 2006 regional Yamal/Urals chronology
being an example.  Nothing in Briffa’s letter refutes Ridley’s
original claim.



In retrospect, when one compares the very attenuated blade of
the 2013 regional chronology with the similarly attenuated blade of
the despised 2006 regional chronology,  all the past excuses
for withholding the 2006 chronology and all the past attempts to
sustain the superblade of the 2000, 2008 and 2009 chronologies ring
increasingly hollow.


January 23, 2014

Why is polygamy declining?

My recent Times column was on human monogamy:



The tragic death of an Indian minister’s wife and the overdose
of a French president’s “wife” give a startling insight into the
misery that infidelity causes in a monogamous society. In cultures
like India and France, it is just not possible for men to reap the
sexual rewards that usually attend arrival at the top of society.
President Zuma of South Africa has four wives and 20 children,
while one Nigerian preacher is said to have 86 wives. Chinese
emperors used to complain of their relentless sexual duties. Why
the difference?




Human monogamy is an enduring puzzle. Among mammals we are the
exception: just 3 per cent of mammals form pair bonds. Our closest
relatives, chimpanzees, bonobos, orangutans and gorillas, are
promiscuous, very promiscuous, territorial-polygamous and
harem-polygamous respectively. Only gibbons among the apes practise
monogamy and they don’t try to do it within a gregarious
species.



Yet we are clearly monogamous by instinct as well as by
tradition. Even in societies that allow polygamy, most people are
in one-partner couples. Free-love communes always, without
exception, collapse because people will insist on falling in love
with particular individuals. This pairing tendency would baffle a
bonobo, a species in which sexual jealousy is apparently unknown.



We are like birds. Penguins and parrots, like us, practise
monogamy within large “urban” colonies. One likely evolutionary
reason is that when it takes two to raise a baby, a male is more
likely to have grandchildren if he puts a lot of effort into one
brood, rather than loving and leaving lots of females. In the
Pleistocene, the long helpless childhood of human beings probably
rewarded diligent fathers with more offspring than callous
philanderers.



You could achieve both if you were cunning. Monogamy and
fidelity are not quite the same thing. Female birds generally like
to stick to one mate to help them bring up the babies, but often —
DNA studies reveal — sneak off and get the babies’ genes supplied
by another, genetically superior male with better plumage or a more
varied song. Successful hunters in human foraging societies tend to
get the same result.



In primates, the threat of infanticide also seems to play a role
in deciding female strategy. In many monkeys and apes, when a new
male takes over a troop, the first thing he does is kill any babies
to bring females back into oestrus quickly. Female gorillas, which
live in small harems, suffer this fate frequently. Chimpanzees
avoid the problem by a system of maximised promiscuity — where
every female does her utmost to mate with every male in the group,
the better to confuse paternity and thereby prevent infanticide by
a new alpha male. In human beings, a horror of step-parents may go
deep.



So at some point in the distant past, we developed the habit of
monogamous pair bonding. Intellectuals, from Rousseau to Engels to
Margaret Mead, have been tempted to speculate about a promiscuous
human past not so long ago, from which marriage crystallised.
Initial encounters with other civilisations based around
agriculture and full of polygamy, such as in Mexico or Tahiti, at
first seemed to confirm this idea, but when in the 20th century
anthropologists began getting to know hunter-gatherers (supposedly
the most primitive level of society), they were startled to find
that monogamous marriage predominated in them. In human beings,
monogamy probably goes back hundreds of thousands if not millions
of years.



Polygamy, in this reading, was mainly an aberration of the last
10,000 years caused by agriculture, which allowed the accumulation
of huge surpluses, which powerful men translated into prodigious
sexual rewards. Herding societies in particular became highly
polygamous, causing people with names such as Attila, Genghis or
Tamerlane to conquer other lands so as to supply women to their
sex-starved followers: polygamy and violence tend to go
together.



However, the winners from a polygamous system are not just the
high-status men, but also the low-status women. The peasant girl
who joined the palace harem achieved safety, plentiful food and
access to luxuries, while her brother languished in celibate
poverty. The losers are the low-status men and the high-status
women.



It makes evolutionary sense that high-status males are
attractive to women, because they were in the past likely to be
able to ensure the success of any children they fathered, and that
men are attracted to what Amazonian Indians call “moko dude” women.
(The phrase means “ripe” when used of fruit and, when used of
women: “of the right age, health, genetic quality and
unencumberedness likely to make them capable of producing many
healthy children and grandchildren”, or, more pithily,
“phwoar”.)



So how come the president of France, with the status of a
monarch, cannot even get away with two women at a time? Inch by
inch, from Odysseus to Figaro to Bill Clinton, Western mores have
insisted on monogamy even for the powerful. Clearly the interests
of high-status men and low-status women have lost out to the
interests of high-status women and low-status men.



Interestingly, this trend continues, even as disapproval of
divorce and cohabitation has diminished. Nobody minds much that
François Hollande has never married his three “wives”. Yet that
does not mean that Valérie Trierweiler is prepared to share.



The spread of Christianity, with its teachings on monogamy and
female virtue, could hardly have been better designed to appeal to
poor men, polygamy’s big losers. Democracy, too, seems to insist on
monogamy. Between The Iliad and The
Odyssey (as William Tucker points out in a fine forthcoming
book called Marriage and Civilization), democracy
arrives and there is a sea change from the polygamy of Agamemnon to
the lovelorn fidelity of Odysseus and Penelope.



In a recent paper entitled “The puzzle of
monogamous marriage”, three American anthropologists argue that
this trend is partly explained by competition between societies. To
be economically successful, modern nations had to suppress violence
within themselves.



This was incompatible with rulers grabbing all the best girls:
“In suppressing intrasexual competition and reducing the size of
the pool of unmarried men, normative monogamy reduces crime rates,
including rape, murder, assault, robbery and fraud, as well as
decreasing personal abuses . . . By shifting male efforts from
seeking wives to paternal investment, normative monogamy increases
savings, child investment and economic productivity.”



Which leads to the delightful thought that Mr Hollande’s amorous
proclivities contribute to France’s economic stagnation.


January 17, 2014

China's one-child policy was inspired by western greens

As China’s one-child policy comes officially to an
end, it is time to write the epitaph on this horrible experiment —
part of the blame for which lies, surprisingly, in the West and
with green, rather than red, philosophy. The policy has left China
with a demographic headache: in the mid-2020s its workforce will
plummet by 10 million a year, while the number of the elderly rises
at a similar rate.



The difficulty and cruelty of enforcing a one-child policy was
borne out by two stories last week. The Chinese film director Zhang
Yimou, who directed the Beijing Olympics’ opening ceremony in 2008,
has been fined more than £700,000 for having
three children, while another young woman has come forward with her story (from only two
years ago) of being held down and forced to have an abortion at
seven months when her second pregnancy was detected by the
authorities.



It has been a crime in China to remove an intra-uterine device
inserted at the behest of the authorities, and a village can be
punished for not reporting an illegally pregnant inhabitant.



I used to assume unthinkingly that the one-child policy was a
communist idea, just another instance of Mao’s brutality. But the
facts clearly show that it was a green idea, taken almost directly
from Malthusiasts in the West. Despite all his cruelty to adults,
Mao generally left reproduction alone, confining himself to the
family planning slogan “Later, longer, fewer”. After he died, this
changed and we now know how.



Susan Greenhalgh, a professor of anthropology at Harvard, has
uncovered the tale. In 1978, on his first visit to the West, Song
Jian, a mathematician employed in calculating the trajectories of
missiles, sat down for a beer with a Dutch professor, Geert Jan
Olsder, at the Seventh Triennial World Congress of the
International Federation of Automatic Control in Helsinki to
discuss “control theory”. Olsder told Song about the book The
Limits to Growth, published by a fashionable think-tank called the
Club of Rome, which had forecast the imminent collapse of
civilisation under the pressure of expanding population and
shrinking resources.



What caught Song’s attention was the mathematical modelling of
population that Olsder did, and on which The Limits to
Growth was based. He was unaware that the naive
extrapolation embraced by the Club of Rome, and produced by what
they called “the computer”, had been greeted with scepticism in the
West. Excited at the idea that mathematical models could be used to
predict population as well as ballistic missiles, Song went back to
China and started publishing the pessimistic prognostications
of The Limits to Growth, along with demands that
something must be done to slow the birthrate.
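
To see how seductive such extrapolation can be, consider a minimal
sketch (my own illustration, with invented numbers; The Limits to
Growth in fact used a far more elaborate system-dynamics model):
fit a constant exponential growth rate to a few recent population
counts and project it decades ahead, ignoring the very feedbacks,
such as falling birthrates as infant mortality declines, discussed
below.

import math

# hypothetical annual population counts (millions), invented for illustration
population = [800, 850, 905, 965, 1030]

# Malthusian model P(t) = P0 * exp(r * t), so r = ln(Pn / P0) / n
years = len(population) - 1
r = math.log(population[-1] / population[0]) / years

# project 50 years ahead at the same constant rate -- the naive step
projection = population[-1] * math.exp(r * 50)
print(f"growth rate ~ {r:.2%} per year; 50-year projection ~ {projection:,.0f}")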



He also fell under the spell of the Club of Rome’s patron saint,
Parson Malthus, the population pessimist of 1798. “When I was
thinking about this, I took Malthus’s book to research the study of
population,” said Song in a recent interview. Malthus, remember, thought we should be cruel to be kind to the
poor, lest they have too many babies: we should “facilitate,
instead of foolishly and vainly endeavouring to impede” hunger, war
and disease, he wrote. He urged that we “court the return of the
plague” and “particularly encourage settlements in all marshy and
unwholesome situations”. [Update: out of context I realise I'm
being a bit unfair to Malthus here. He urged later marriage and
that if this could not arranged, then these more drastic measures
should be taken. He did, however, think that higher child mortality
would reduce population growth.]



It turns out that Malthus was exactly wrong about that. The best
way to cut population growth is not to ensure that babies die, but
to stop babies dying: then people plan smaller families. Even
China’s birthrate had halved in the seven years before Song had his
epiphany, thanks to improved public health, and it would have
fallen even faster in the next decade as China began to grow
economically. But Song wanted to put his “control theory” into
action and set about persuading those in power to put him in
charge. By the end of 1979 he had won the ear of Deng Xiaoping and,
with the help of mathematical bamboozling, had vanquished his
opponents.



General Qian Xinzhong, appointed to act on Song’s ideas,
commanded the sterilisation of all women with two or more children,
the insertion of IUDs into all women with one child (removal of the
device being a crime), the banning of births to women younger than
23 and the mandatory abortion of all unauthorised pregnancies right
up to the eighth month.



What was the reaction in the West to this unfolding atrocity?
The United Nations Secretary-General awarded a prize to General
Qian in 1983 and recorded his “deep appreciation” for what the
Chinese Government had done. Eight years later, even though the
horrors of the policy were becoming ever clearer, the head of the
United Nations Family Planning Agency gushed that China had “every
reason to feel proud of its remarkable achievements” in population
control, and offered to help China to teach other countries how to
do it. You can still hear Western greens, steeped as they are in
the Malthusian myth, praising the policy.



Professor Song, now in his eighties, has stuck to his guns and
recently described worries about the ageing of the Chinese
population as unfounded. But by 2011 he had been sidelined and the
reformers of the policy had gained the upper hand. Already the
policy was not being strictly implemented in rural areas and the
wealthy were being allowed to “buy” a second child. A long battle
between Song and the reformer Peng Peiyun seems to have been won by the latter.



As far as I can tell from the Club of Rome’s website, the
think-tank has yet to acknowledge its role in sparking the horror
of the one-child policy, or even to respond to Susan Greenhalgh’s
revelations. It is still publishing pessimistic tracts and
demanding more “governance” to head off Malthusian doom. Malthus
himself was, says his epitaph in Bath Abbey, noted for “his
sweetness of temper, urbanity of manners and tenderness of heart,
his benevolence and his piety”. But his mathematical naivety has
provided despots and tyrants with an excuse for being cruel.

