Matt Ridley's Blog

November 24, 2013

Don't shoot the messenger

I have the following letter in the Guardian (online).



While preaching to others to be accurate, John Abraham is
himself inaccurate in his critique of me (Global warming and business reporting – can business news organizations achieve less than zero?, 18 November, theguardian.com). In correcting one mistake he made – by changing
3.6C to 3.6F – you only exacerbate the problem. Far from it being
"unbelievable" that up to 3.6F of warming will be beneficial, this
is actually the conclusion of those studies that have addressed the
issue, as confirmed in recent surveys by Professor Richard Tol. Mr
Abraham may not agree with those studies, but in that case he is
departing from the consensus and should give reasons rather than
merely stating that he finds them unbelievable. Rather than shoot
the messenger, he should invite readers to read Professor Tol's
most recent paper. It is published in an excellent book edited by
Bjørn Lomborg entitled How Much Have Global Problems Cost the
World?



As for Andrew Dessler's critique of my remarks about feedback by
water vapour and clouds, his actual words confirm that I am right
that these issues are still in doubt, as the latest report from the
IPCC acknowledges. Most of your readers are probably unaware of
the fact that doubling carbon dioxide in itself only produces a
modest warming effect of about 1.2C and that to get dangerous
warming requires feedbacks from water vapour, clouds and other
phenomena for which the evidence is far more doubtful. This is an
area of honest disagreement between commentators, so it is
misleading of Mr Abraham to shoot the messenger again.



Matt Ridley


November 13, 2013

When political tyranny allows economic freedom

I know very little about what is being discussed
inside the Third Plenum of the 18th Central Committee of the
Chinese Communist party, which started at the weekend. The meeting
is being held in secret — although one of the subjects to be
discussed is said to be greater government transparency. About all
we know is that “unprecedented” economic and social reforms are
being discussed, including such things as rural property rights.
But, to judge by a new wave of Mao worship, persecution of
dissidents and reinforced censorship, political reform is less
likely than economic.



In other words, the Chinese Communist Party is trying to
continue pulling off the trick that has served it ever since Deng
Xiaoping defeated the Gang of Four: more economic freedom combined
with less political freedom. The people can choose any good or
service they want — except their government. In many ways it has
worked extremely well. In 1978 Maoism had left the country horribly
poor: more than half the people of China tried to live on less than
a dollar a day. Over the next nine years per capita income doubled,
then doubled again over the nine years after that.



Many a left-leaning Western politician has been heard to muse
about how much better we would grow if only we directed the market
economy with the single-mindedness of the Chinese Communist Party.
In the same way many a right-leaning Western politician has long
admired the Singapore of Lee Kuan Yew on the same grounds. See,
they mutter, a paternalistic government is best at generating
economic prosperity.



Yet this is precisely the wrong lesson to draw from China (and
Singapore). It’s not because it’s unfree at the top that China is
growing fast, but because, at least in some respects, it is very
free at the bottom. The extraordinary fact is that — economically —
the average Chinese person is more free from government
interference than the average Westerner. As Niall Ferguson has documented, general government total
expenditure is twice as high in the United States and Europe as it
is in China as a per cent of GDP. China ranks higher than America
on the ethics of politicians. It takes far less time and trouble to
build a house or a nuclear power station in China than it does in
Britain.



So long as you don’t cross the Communist Party, China is
laissez-faire on a scale that would make Hayek blush. That’s what
happens when you liberalise a totalitarian regime: if the party was
previously taking all the decisions, then once it steps back
there’s very little else in the way of state bureaucracy.



Over here, there are many other levels of regulation, subsidy
and cost to entangle a small business. One of the few freedoms that
Westerners have more of is the — precious — freedom to criticise
the governing party.



Of course, this is not to deny that there’s a great deal of
crony favouritism in China. It is clear from scandals such as the
Bo Xilai affair that knowing the right officials gets you the best
deals, and that much of the country’s financial sector is state
controlled. But then look at the way in which British business
cosies up to Brussels and Whitehall. Besides, in China finance
feeds off the prosperity rather than vice versa. It was in
agriculture, a sector unshackled by Deng when he suddenly allowed
peasants to make profits, that economic growth began, and it was in
the special economic zones, where tax was low, trade was free and
profits were allowed, that the manufacturing revolution took off.
Finance is the fruit of that labour.



Hong Kong teaches the same lesson. Being run by Britain meant
having almost no political freedom to choose your government. But
being run by a Scottish free-market economist, Sir John Cowperthwaite, who was financial secretary
of Hong Kong from 1961 to 1971, meant having a great deal of
economic freedom to trade and profit without government
interference: no tariffs, no subsidies, low taxes and a very simple
and speedy bureaucracy. Cowperthwaite, incidentally, fought hard to
try this experiment in the teeth of LSE-inspired orders from
Whitehall to adopt central planning and social safety nets.



Indeed, you can say the same about Britain in its heyday. In the
19th century Britons were astonishingly free economically, but the
debates over parliamentary reform show that they were very far from
free to choose their own ministers, let alone their monarchs. Just
as in modern China, the best they could hope for was that a small
set of freeholders could choose between grandees in an oligopoly:
Earl Grey and the Duke of Wellington were, if anything, slightly
less born to rule than the party princelings Xi Jinping and Bo
Xilai.



In other places and times, too, it is clear that the economic
freedom allowed to traders in city states, from Athens to Florence
to Amsterdam to Singapore, has always been the best way to enrich
the poor. This can happen without much political freedom, let alone
democracy.



Free-market economists are wont to point out that economic
freedom is in one sense more tolerant than political freedom. If
you like apples and I like oranges, then economic freedom means I
can have one and you can have the other, and we are both happy.
Political freedom means that we take a vote on whether we all
should have apples or all should have oranges, and the loser is
disappointed.



All too often this tyranny of the majority is neglected in the
arguments for having a “policy” or strategy about something.
Obviously it is not possible to let only those people who want to
pay for nuclear deterrence have it, but in between apples and
nuclear deterrence there are many intermediate products that could
be matters of local or individual choice rather than democratic
tyranny: public service broadcasting, fracking, genetically
modified crops and so forth.



Here in the West we are going in the opposite direction to Xi’s
China: ever more political freedom and ever less economic freedom.
Economic decisions — I’ll buy apples instead of oranges —
increasingly become political ones: the policy is oranges.



Political freedom eventually tends to undermine economic freedom
in other ways, as is plainly evident in the furring of the
bureaucratic arteries of the West. As the economist Mancur Olson pointed out, democracy is open to
influence by special interests and these quickly capture the
political process, influencing legislation to erect barriers to
entry against competitors, to direct subsidies to themselves and to
help officials maximise the budgets of their agencies — what
economists call rent-seeking. That was roughly what destroyed
Chinese prosperity before, under the Ming empire — an economically
dirigiste regime that Europe increasingly resembles.



Don’t get me wrong. I am in favour of political freedom, for us
as well as for the Chinese. The real reason — and it is a very
important one — is to stop your rulers using violence against you,
not because it leads to economic growth. The trick is how to get
that benefit and not incur the rent-seeking costs. If they can
figure that out in their plenum, good for them.


November 6, 2013

Explaining the steep decline in the frequency of fires

This morning’s brief strike by the Fire Brigades
Union, like the one last Friday evening, will, I suspect, mostly
serve to remind those who work in the private sector just how well
remunerated many in the public sector still are. The union objects
to the raising of the retirement age from 55 to 60, on a generous
final-salary pension scheme, with good job security. These are
conditions few of those who work for private firms or for
themselves can even dream of.



In my case, as somebody always on the look-out for
under-reported good news stories, it also served to alert me to
just how dramatic the fall in “demand” for firefighters has been.
Intrigued by the strike, I looked up the numbers and found to my amazement that in
2011, compared with just a decade before, firefighters attended 48
per cent fewer fires overall; 39 per cent fewer building fires; 44
per cent fewer minor outdoor fires; 24 per cent fewer road-traffic
collisions; 8 per cent fewer floods — and 40 per cent fewer
incidents overall. The decline has if anything accelerated since
2011.



That is to say, during a period when the population and the
number of buildings grew, we needed to call the fire brigade much,
much less. Most important of all, the number of people dying in
fires in the home has fallen by 60 per cent compared with the
1980s. The credit for these benign changes goes at least partly to
technology — fire-retardant materials, self-extinguishing
cigarettes, smoke alarms, sprinklers, alarms on cookers — much of
which was driven by sensible regulation. Fewer open fires and fewer
people smoking, especially indoors, must have helped too. There is
little doubt that rules about such things have saved lives, as even
most libertarians must concede.



But this is not the whole story. I was stunned to find that the
number of deliberate fires has been falling much faster than the number of
accidental fires. The steepest fall has been in car fires, down
from 77,000 in 2001-2 to 17,000 in 2010-11. This echoes the 60 per cent collapse in car thefts in G7
countries since 1995. Deliberate fires in buildings have more than
halved in number; I assume this is also something to do with crime
detection — CCTV, DNA testing and so forth, which make it much less
easy to get away with arson. Only deliberate outdoor fires show
little trend: perhaps because not until he is deep in the woods
does an arsonist feel safe from detection.



Behind the firefighters’ strike, therefore, lies a most unusual
policy dilemma: how to manage declining demand for a free public
service. NHS planners would give their eye teeth for such a
problem, since healthcare demand seems to expand infinitely,
whatever the policy.



Yet the fire union leaders in the current dispute do not seem
especially keen on trumpeting these numbers from the tops of their
ladders as proof of society’s growing success at suppressing fire.
You would think they might, because firefighters themselves have
certainly played a part in prevention by devoting more of their
time to it — teaching people about the risks of chip pans and the
like. (In passing, I wonder how much the emergence of the oven chip
is responsible for fewer fires: chip-pan fires used to cause
one-fifth of all residential fires. Maybe, too, the general health
war on chips has played a part.)



The reason for the reluctance of firefighters to boast about the
success of their efforts at prevention, of course, is that it
implies the need for fewer of them. They fear that fitness tests
will in many cases lead to redundancy before the new retirement
age. The statistics I have quoted come largely from the recent
report which concluded that the Government could make large
efficiency savings in the fire and rescue service.



Sir Ken Knight’s report to the Government’s fire minister,
Brandon Lewis, pointed out that despite deaths from fires having
hit an all-time low and the number of incidents falling rapidly,
“expenditure and firefighter numbers remain broadly the same. This
suggests that there is room for reconfiguration and efficiencies to
better match the service to the current risk and response context.”
Employment in the fire and rescue service has dropped by just 6 per
cent during the time when incidents have decreased by 40 per
cent.



It is not just the overall numbers of firefighters that could
come down as fires come down. There are plenty of opportunities for
efficiency savings, as in any public service. Sir Ken observed that
he could not explain the differences in the spending of Britain’s
46 separate fire services. Some areas spent almost twice as much as
others, yet the discrepancy could not be explained by population
density, degree of industrialisation, or level of deprivation. Nor
did greater spending produce a faster fall in the number of fires.
Noting that localism can become “siloism”, Sir Ken concluded drily
that “fire and rescue authorities spend to their budgets, not to
their risk.”



Other countries have experienced similar declines in fires and
deaths from fire. In the United States, fire death rates fell by 21 per cent between
2001 and 2010 but international comparisons are no more clear about
the cause than those between British regions. Sweden and New
Zealand spend less per head than we do on fire services and suffer
more fire deaths; but America and Japan spend more and also suffer
more fire deaths. Singapore stands out: very low spending and very
few fire deaths.



There is also a remarkable variety of ways in which countries
deliver fire services. Some, such as Germany, rely largely on
volunteers. Not many countries use as few volunteer firefighters as
Britain does. It is pretty clear that there are opportunities for
British fire services to use more volunteers and on-call staff, to
share senior managers and to copy best practice from each other.
But the unions are not helpful: Cleveland Fire and Rescue Authority
explored the possibility of an employee-led mutual contracting with
the authority to provide the fire service, but under pressure from
the union, the local authority nixed the proposal as tantamount to
a form of privatisation.



Fire was an abiding terror to our ancestors, consuming not just
many of their lives, but much of their property. Almost all of us
have family stories of devastating fires. Although we will always
need this essential service, that experience is, thankfully,
becoming steadily rarer. Sir Ken Knight found it likely that this
decline would continue, remarking: “I wonder if anyone a decade ago
would have predicted the need for fire and rescue services to
attend 40 per cent fewer emergency incidents.” The fire service
will undoubtedly have to shrink.



In the meantime, for two hours this morning, the union that
represents firefighters has merely reminded us that a firefighter
who is called out 40 per cent less often than ten years ago will retire
at 60 and has pension rights equivalent to a private pension pot of
half a million pounds, to which he will have contributed half as
much as a private sector worker.


October 30, 2013

Storms are becoming ever more survivable

My Times article on the storm that was to hit
Britain on 28 October. In the event, four or five people died.
Disruption to transport lasted only a few days.



 



If you are reading this with the hatches battened
down, it may not be much comfort to know that 2013 has been an
unusually quiet year for big storms. For the first time in 45 years
no hurricane above Category 1 has made landfall from the Atlantic
by this date, and only two in that category, confounding an
official US government forecast of six to nine hurricanes in the
Atlantic, three to five of which would be big. Even if the last
month of the hurricane season is bad, it will have been a quiet
year.



It’s not just the Atlantic that is quiet. Globally, the
“accumulated cyclone energy” of all big tropical storms — known as
hurricanes, cyclones and typhoons, depending on the ocean — looks
to be heading for one of the lowest numbers on record (though there
are two months to go).



You cannot read much into a single year’s events, of course.
None the less, the apocalyptic predictions of ever worsening storms
made in 2005, after Hurricane Katrina all but destroyed New
Orleans, seem to have been wide of the mark. There has been no
trend up or down in storm frequency or power since the 1960s, when
satellites began measuring these things.



The way bad storms are beamed into our living rooms today
probably gives the opposite impression. Yet the run-up to an
impending storm tends to get more coverage than the clean-up
afterwards, leaving a false impression of unrepaired
devastation. The more salient trend to draw from weather in the
modern era is that whatever it throws at us, we are getting better
at coping: civilisation has become steadily more resilient in the
face of natural disasters.



The financial cost of storms goes up and up, of course, as the
insurance industry never tires of reminding us. But then so does
the value of the economy — there are more coastal properties worth
more money and more fully insured. The trend in insurance claims
tells you nothing about the weather itself. Indeed, as Professor
Roger Pielke, of the University of Colorado at Boulder, told Congress last year, global insured
catastrophe losses have not increased as a proportion of GDP since
1960.



For all the havoc that today’s St Jude’s storm may bring to
transport and property in Britain, and even if it does result in
tragic loss of life, the effect is bound to be less than it would
have been in times past. In any other century a storm such as
today’s would have killed more people, wrecked more ships,
destroyed more crops and left more people homeless than it will do
today.



In November 1703, for instance, a great storm destroyed the
Eddystone lighthouse, drowned 1,500 Royal Navy sailors in 13 sunken
men-of-war, killed 400 people in the Somerset Levels, piled up 700
ships in the Pool of London, tore lead off Westminster Abbey’s
roof, killed the Bishop of Bath and Wells in his bed with a falling
chimney, and left towns on the South Coast looking “as if the enemy
had sackt them and were most miserably torn to pieces”, in Daniel
Defoe’s words. There must have followed a pretty bleak winter for
many poor people.



Thanks to forecasts, warnings, better building materials and
rescue services, we are more likely to survive today. Consider, for
example, that the Met Office was telling us last week roughly where
today’s storm would strike before it had even been born in the
western Atlantic. That was not possible in 1703; indeed it was far
from easy in 1987, as Michael Fish can attest.



Putting society and infrastructure back together after a weather
disaster is also much quicker today than it would have been 300
years ago. There is virtually nothing that a storm does that cannot
be undone by bulldozers and builders. The same is not always true
of volcanoes and earthquakes.



The fierce cyclone that hit eastern India this
month killed only 17 people, after it was well forecast and 800,000
people were evacuated from its path, a sharp contrast to the 10,000
killed in the same region by a cyclone 14 years ago. That India was
in a good position to issue warnings and implement evacuation plans
this time is largely a function of the intervening years of
economic growth, plus the evolution of technology: nearly a billion
Indians use mobile phones today, compared with hardly any in
1999.



The global death rate as a result of tropical storms (cyclones,
typhoons and hurricanes) was 55 per cent lower in the 2000s than it
had been in the 1960s. Much of that change is down to technology,
but freedom helps too. In 2007 Hurricane Dean, a Category 5 storm,
struck the Yucatan in capitalist, middle-income Mexico, but the
country was well prepared and not a single person died. A year
later a storm of similar ferocity hit impoverished, authoritarian
Burma and killed about 200,000 people.



One of the things that makes the world more resilient to weather
disasters is trade. In 1694, some 15 per cent of the entire
population of France starved after heavy rains destroyed the third
harvest in a row, while plenty of food existed elsewhere in Europe.
Trade was so small a part of the economy that the means to get
sufficient grain into France from other countries in Europe simply
did not exist. At one point a convoy of 120 ships left Norway to
bring grain to France, but was captured by the Dutch before being
heroically recaptured by the privateer Jean Bart and escorted in
triumph to Dunkirk. Yet even this was not enough to save many
French lives.



Today a disastrous harvest in one region merely leads to an
upward nudge in global food prices as food is diverted to the
affected region. It would be almost impossible for famine to occur
in a world where voluminous and truly free trade existed — because
simultaneous harvest failures all around the world are virtually
impossible, while rising prices would draw food to hungry regions.
World trade reduces the risk of disaster.



I am often told that globalisation makes us more vulnerable,
because a country such as Britain depends on other countries for
many of the goods we need, so a natural disaster or a trade embargo
would leave us desperate. But I am not convinced. Britain is no
more precarious for getting its laptops and combine harvesters from
abroad than a town is precarious for getting its bread from the
countryside, or an office worker is precarious for getting his
electricity from a plug.



Indeed local trade is vulnerable to weather disasters in a way
that international trade is not. Interdependence actually spreads
risk. As individuals we gave up self-sufficiency tens of thousands
of years ago partly because it reduced risk. And there is nothing
more precarious than a self-sufficient community, which could be
destroyed by a single storm, drought or flood.



So, when this storm has passed, say a little thank you to
technology and globalisation for the fact that, on the whole, even
the very worst weather need not disrupt your life very severely or
for very long.


October 27, 2013

Why nuclear power costs so much

My Times article:



 



The real problem with nuclear power is the scale of it. After
decades of cost inflation, driven mostly by regulations to redouble
safety, 1600 megawatt monsters cost so much and take so long to
build that only governments can afford to borrow the money to build
them. Since Britain borrowing £14 billion extra is not really an
option, we have to find somebody else’s nationalised industry
to do it, and guarantee high returns, as if it were a big PFI
contract.



 



Today’s announcement for Hinkley Point in Somerset is likely
to be that we, the British public, are to guarantee for 35 years to
pay nearly twice the current price of electricity to a consortium
largely owned by the French government and a communist Chinese
regime. That is to say, we lock in electricity price rises for
British pensioners and employers while sending dividends to other
governments. Liability, above a certain level, stays with us. And
when the Chinese build nuclear stations in the future, can we be
certain that if, say, the Dalai Lama called on the Queen for tea,
the projects would not suffer from unexpected delays?



 



They say the European Pressurised Reactors at Hinkley will
generate more electricity with the same amount of fuel and need
less down time for maintenance than previous reactors, so why is
the cost so much higher? I’ll explain.



 



There is, however, another way: to make nuclear reactors
smaller, cheaper and quicker to build by assembling them in
factories instead of fabricating them on site. That way we could
put the technology back in the private sector and see costs come
tumbling down. Nuclear power is a fabulous technology that could be
solving all our problems – but not in its current big form.



 



Were it not for carbon policies shutting down coal and gas fired
power stations, and the failure of wind to fill the gap, we would
not touch this deal – but then without carbon alarm, nuclear power
would never have become tolerable to our masters in the green
movement. The worst of it is that the two reactors to be built by
EDF at Hinkley are of a design that is not yet working anywhere.
The two EPRs being built in Europe, at Flamanville in France and
Olkiluoto in Finland, are years behind schedule and billions over
budget. True, the two Chinese EPRs at Taishan look like coming in
within five years and on budget, but there is good reason to think
this might not be the experience in Somerset, whatever the
ambitions of Chinese engineers.



 



This is because nuclear power in the west has been on a journey
of relentless cost inflation for several decades. As the late great
nuclear physicist Bernard Cohen explained in a book in 1990, the
reason the west stopped building nuclear plants in the 1980s was
not the fear of accidents, leaks or the proliferation of waste; it
was the escalation of costs driven by regulation. Labour costs shot
up as more and more professionals had to be employed signing off
paperwork; and according to one study, during the 1970s alone new
regulatory requirements increased the quantity of steel per
megawatt by 41%, concrete by 27%, piping by 50% and electrical
cable by 36%. As the regulation ratchet tightened, builders added
features to anticipate rule changes that sometimes did not happen.
Tight regulations forced them to lose the habit of on-the-spot
innovation to solve unanticipated problems, which further drove up
costs and delays.



 



The ratchet has continued to tighten, and today we have very
slightly greater safety at very much increased cost. Nobody doubts
that the EPR is a safe design; after 9/11 it was made even more
aircraft proof, for example. But then nuclear power was always very
safe. Even a bad design, built and run by a criminally negligent
regime, managed at Chernobyl to kill remarkably few people.



 



The plain fact is that per megawatt-hour of power generated,
nuclear power causes fewer deaths than any other way of making
electricity bar none. Coal kills nearly 2,000 times as many people;
bioenergy 50 times; gas 40 times; hydro 15 times; solar five times
and even wind nearly twice as many as nuclear. That’s including
Chernobyl and Fukushima. It is clear that increasing the cost and
the time to build a plant by at least ten times over the past 40
years has merely made a very, very safe system into a very, very,
very safe system.



 



But that cost escalation has stopped nuclear plants being built,
which has cost lives by making us adopt more dangerous technologies
instead. It has also given nuclear projects time horizons that rule
out private investment, repetition and hence cost-saving
innovation. Centrica pulled out of Hinkley because its build time
expanded to eight years and its cost doubled.



 



Ken Owen, Commercial Director for EDF at Hinkley, was quoted last
week as saying that he is quite happy for British contractors to do
the “muck shifting” but “most of the available contracts could be
beyond UK suppliers which are struggling to meet the complex safety
and quality standards of the nuclear industry”. It need not be this
way. If, instead of building huge reactors from scratch, we were to
manufacture small reactors in factories, then deliver them one
after another by low loader, there is every reason to think the
costs could come tumbling down without compromising safety.
Somebody needs to do for nuclear reactors what Samuel Colt did for
revolvers: mass-produce them with interchangeable parts.



 



This idea -- of small, modular reactors -- has been floating
around for some time (and of course is already routine in the world
of submarines). Babcock and Wilcox in the United States is leading
a consortium called mPower that is building two 180-megawatt
reactors at Clinch River. A British consortium called Penultimate
Power, which includes some big engineering names, has similar
ambitions. Its chief executive, Candida Whitmill, argues that
instead of the British digging the holes and making the tea, we
still have great nuclear engineering expertise – though it is
withering fast – and could be rolling reactors off production lines
and delivering them to UK licensed sites where they would start
producing some small amount of power in half the time it will take
to build an EPR.



 



But even to get a modular reactor certificated would take three
years and cost tens of millions of pounds. The Office for Nuclear
Regulation insists on a fresh certification for each design,
disproportionately hurting small projects.



 



Scale, as I say, is the problem. In theory size brings scale
economies. But in practice it prevents the repetition that leads
any manufacturer to learn how to cut costs and leads to cautious
and conservative construction lest a mistake bring further costly
delay. If instead you manufactured small reactor modules over and
over again, you would soon bring down the cost. It might take a bit
longer than eight years to install ten modules at Hinkley to equal
one 1600 megawatt EPR, but at least you would be getting some
electricity along the way and all sorts of private investors would
come in, attracted by the shorter time horizon and more modest
scale of each reactor. Small is beautiful.



 


October 18, 2013

The net benefits of climate change till 2080

My Spectator cover story on the net benefits of climate
change.



I will post rebuttals to the articles that criticised this piece
below.



 



Climate change has done more good than harm so far and is likely
to continue doing so for most of this century. This is not some
barmy, right-wing fantasy; it is the consensus of expert opinion.
Yet almost nobody seems to know this. Whenever I make the point in
public, I am told by those who are paid to insult anybody who
departs from climate alarm that I have got it embarrassingly wrong,
don’t know what I am talking about, must be referring to Britain
only, rather than the world as a whole, and so forth.



At first, I thought this was just their usual bluster. But then
I realised that they are genuinely unaware. Good news is no news,
which is why the mainstream media largely ignores all studies
showing net benefits of climate change. And academics have not
exactly been keen to push such analysis forward. So here follows,
for possibly the first time in history, an entire article in the
national press on the net benefits of climate change.



There are many likely effects of climate change: positive and
negative, economic and ecological, humanitarian and financial. And
if you aggregate them all, the overall effect is positive today
— and likely to stay positive until around 2080. That was the
conclusion of Professor Richard Tol of Sussex University after he reviewed 14 different studies of the effects
of future climate trends.



To be precise, Prof Tol calculated that climate change would be
beneficial up to 2.2˚C of warming from 2009 (when he wrote his
paper). This means approximately 3˚C from pre-industrial levels,
since about 0.8˚C of warming has happened in the last 150 years.
The latest estimates of climate sensitivity suggest that such
temperatures may not be reached till the end of the century
— if at all. The Intergovernmental Panel on Climate Change,
whose reports define the consensus, is sticking to older
assumptions, however, which would mean net benefits till about
2080. Either way, it’s a long way off.
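
A quick way to see how those two baselines line up, using only the figures quoted above; this is a minimal sketch, and the variable names are mine.

```python
# Convert Tol's threshold (measured from 2009) to a pre-industrial baseline,
# using the figures quoted in the text above.
warming_already_observed = 0.8   # deg C of warming over the last 150 years
beneficial_up_to = 2.2           # deg C of further warming Tol finds beneficial (from 2009)

threshold_from_preindustrial = warming_already_observed + beneficial_up_to
print(f"Benefit threshold vs pre-industrial: about {threshold_from_preindustrial:.1f} C")
# prints: Benefit threshold vs pre-industrial: about 3.0 C
```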



Now Prof Tol has a new paper, published as a chapter in a new book
called How Much Have Global Problems Cost the World?, which is edited by Bjorn
Lomborg, director of the Copenhagen Consensus Centre, and was
reviewed by a group of leading economists. In this paper he casts
his gaze backwards to the last century. He concludes that climate
change did indeed raise human and planetary welfare during the 20th
century.



You can choose not to believe the studies Prof Tol has collated.
Or you can say the net benefit is small (which it is), you can
argue that the benefits have accrued more to rich countries than
poor countries (which is true) or you can emphasise that after 2080
climate change would probably do net harm to the world (which may
also be true). You can even say you do not trust the models
involved (though they have proved more reliable than the
temperature models). But what you cannot do is deny that this is
the current consensus. If you wish to accept the consensus on
temperature models, then you should accept the consensus on
economic benefit.



Overall, Prof Tol finds that climate change in the past century
improved human welfare. By how much? He puts the gain at 1.4 per cent
of global economic output, rising to 1.5 per cent by 2025. For some
people, this means the difference between survival and
starvation.



It will still be 1.2 per cent around 2050 and will not turn
negative until around 2080. In short, my children will be very old
before global warming stops benefiting the world. Note that if the
world continues to grow at 3 per cent a year, then the average
person will be about nine times as rich in 2080 as she is today. So
low-lying Bangladesh will be able to afford the same kind of flood
defences that the Dutch have today.
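
The enrichment figure is simple compounding. Here is a rough check, assuming 3 per cent annual growth over a horizon of roughly seven decades to 2080; the start years are my illustrative assumptions, so treat the output as an order-of-magnitude check rather than the article's own calculation.

```python
# Compound-growth check for the claim above; the start years are illustrative assumptions.
growth_rate = 0.03
for start_year in (2009, 2013):
    years = 2080 - start_year
    multiple = (1 + growth_rate) ** years
    print(f"{years} years of 3% growth from {start_year}: about {multiple:.1f}x richer")
# Output is in the seven- to eight-fold range, broadly consistent with the
# "about nine times" figure if growth runs a little above 3 per cent.
```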



The chief benefits of global warming include: fewer winter
deaths; lower energy costs; better agricultural yields; probably
fewer droughts; maybe richer biodiversity. It is a little-known
fact that winter deaths exceed summer deaths — not just in
countries like Britain but also those with very warm summers,
including Greece. Both Britain and Greece see mortality rates rise
by 18 per cent each winter. Especially cold winters cause a rise in
heart failures far greater than the rise in deaths during
heatwaves.



Cold, not heat, is the biggest killer. For the last decade,
Brits have been dying from the cold at the average rate of 29,000
excess deaths each winter. Compare this to the heatwave ten years
ago, which claimed 15,000 lives in France and just 2,000 in
Britain. In the ten years since, there has been no summer death
spike at all. Excess winter deaths hit the poor harder than the
rich for the obvious reason: they cannot afford heating. And it is
not just those at risk who benefit from moderate warming. Global
warming has so far cut heating bills more than it has raised
cooling bills. If it resumes after its current 17-year hiatus, and
if the energy efficiency of our homes improves, then at some point
the cost of cooling probably will exceed the cost of heating
— probably from about 2035, Prof Tol estimates.



The greatest benefit from climate change comes not from
temperature change but from carbon dioxide itself. It is not
pollution, but the raw material from which plants make
carbohydrates and thence proteins and fats. As it is an extremely
rare trace gas in the air — less than 0.04 per cent of the air
on average — plants struggle to absorb enough of it. On a
windless, sunny day, a field of corn can suck half the carbon
dioxide out of the air. Commercial greenhouse operators therefore
pump carbon dioxide into their greenhouses to raise plant growth
rates.



The increase in average carbon dioxide levels over the past
century, from 0.03 per cent to 0.04 per cent of the air, has had a
measurable impact on plant growth rates. It is responsible for a
startling change in the amount of greenery on the planet. As Dr
Ranga Myneni of Boston University has documented, using three decades of satellite
data, 31 per cent of the global vegetated area of the planet has
become greener and just 3 per cent has become less green. This
translates into a 14 per cent increase in productivity of
ecosystems and has been observed in all vegetation types.



Dr Randall Donohue and colleagues of the CSIRO Land and Water
department in Australia also analysed satellite data and found greening to
be clearly attributable in part to the carbon dioxide fertilisation
effect. Greening is especially pronounced in dry areas like the
Sahel region of Africa, where satellites show a big increase in
green vegetation since the 1970s.



It is often argued that global warming will hurt the world’s
poorest hardest. What is seldom heard is that the decline of
famines in the Sahel in recent years is partly due to more rainfall
caused by moderate warming and partly due to more carbon dioxide
itself: more greenery for goats to eat means more greenery
left over for gazelles, so entire ecosystems have benefited.



Even polar bears are thriving so far, though this is mainly
because of the cessation of hunting. None the less, it’s worth
noting that the three years with the lowest polar bear cub survival
in the western Hudson Bay (1974, 1984 and 1992) were the years when the sea ice was too thick
for ringed seals to appear in good numbers in spring. Bears need
broken ice.



Well yes, you may argue, but what about all the weather
disasters caused by climate change? Entirely mythical — so
far. The latest IPCC report is admirably frank about this,
reporting ‘no significant observed trends in global tropical
cyclone frequency over the past century … lack of evidence and thus
low confidence regarding the sign of trend in the magnitude and/or
frequency of floods on a global scale … low confidence in observed
trends in small-scale severe weather phenomena such as hail and
thunderstorms’.



In fact, the death rate from droughts, floods and storms has dropped by 98 per cent since the 1920s,
according to a careful study by the independent scholar Indur
Goklany. Not because weather has become less dangerous but because
people have gained better protection as they got richer: witness
the remarkable success of cyclone warnings in India last week.
That’s the thing about climate change — we will probably
pocket the benefits and mitigate at least some of the harm by
adapting. For example, experts now agree that malaria will continue
its rapid worldwide decline whatever the climate does.



Yet cherry-picking the bad news remains rife. A remarkable
example of this was the IPCC’s last report in 2007, which said that
global warming would cause ‘hundreds of millions of people [to be]
exposed to increased water stress’ under four different scenarios
of future warming. It cited a study, which had also counted numbers
of people at reduced risk of water stress — and in each case that
number was higher. The IPCC simply omitted the positive
numbers.



Why does this matter? Even if climate change does produce
slightly more welfare for the next 70 years, why take the risk that
it will do great harm thereafter? There is one obvious reason:
climate policy is already doing harm. Building wind turbines,
growing biofuels and substituting wood for coal in power stations
— all policies designed explicitly to fight climate change
— have had negligible effects on carbon dioxide emissions. But
they have driven people into fuel poverty, made industries
uncompetitive, driven up food prices, accelerated the destruction
of forests, killed rare birds of prey, and divided communities. To
name just some of the effects. Mr Goklany estimates that globally nearly 200,000 people
are dying every year, because we are turning 5 per cent of the
world’s grain crop into motor fuel instead of food: that pushes
people into malnutrition and death. In this country, 65 people a
day are dying because they cannot afford to heat their homes
properly, according to Christine Liddell of the University of
Ulster, yet the government is planning to double the cost of
electricity to consumers by 2030.



As Bjorn Lomborg has pointed out, the European Union will pay £165
billion for its current climate policies each and every year for
the next 87 years. Britain’s climate policies — subsidising
windmills, wood-burners, anaerobic digesters, electric vehicles and
all the rest — are due to cost us £1.8 trillion over the course of
this century. In exchange for that Brobdingnagian sum, we hope to
lower the air temperature by about 0.005˚C — which will be
undetectable by normal thermometers. The accepted consensus among
economists is that every £100 spent fighting climate change brings
£3 of benefit.



So we are doing real harm now to impede a change that will
produce net benefits for 70 years. That’s like having radiotherapy
because you are feeling too well. I just don’t share the certainty
of so many in the green establishment that it’s worth it. It may
be, but it may not.



Disclosure: by virtue of owning shares and
land, I have some degree of interest in almost all forms of
energy generation: coal, wood, oil and gas, wind (reluctantly),
nuclear, even biofuels, demand for which drives up wheat prices. I
could probably make more money out of enthusiastically endorsing
green energy than opposing it. So the argument presented here is
not special pleading, just honest curiosity.



 



My response to Duncan Geere's article in the New Statesman:



Four of Geere's paragraphs in turn begin with "He's right...", so
I am glad that Geere confirms that I am right about all my main
points. If you read my article you will find that each of Geere's
assertions about the eventual harm of climate change is also in my
piece. For example, I say: "Even if climate change does produce
slightly more welfare for the next 70 years, why take the risk that
it will do great harm thereafter?". I do not ignore sea level rise:
and anyway it is taken into account in all of the studies collated
by Tol.



Geere's main point, that the graph of benefits starts declining
at 1C above today's temperature, is very misleading. What this means is that
the benefit during one year is slightly smaller than the benefit
during the year before, not that there has been net harm during
that year. Geere seems to have misunderstood Tol's graph.



My points about fewer droughts and richer biodiversity are
grounded in the peer reviewed literature. Many models and data sets
agree that rainfall is likely to increase as temperature rises,
while the evidence for global greening as a result of carbon
dioxide emissions (and rainfall increases) is now strong. Greater
yields mean more land sparing as well.



The main point I was trying to make is that very few people know
that climate change has benefits at all, let alone net benefits
today; even fewer know that it is likely to have net benefits in
the future for about 70 years. This fact, which Mr Geere confirms,
is worth discussing. Judging by the incredulous reaction to my
article in some quarters, this was indeed news to many people.



I note Mr Geere has nothing to say about the harm being done by
climate policies to the very poorest people in the world. A
peer-reviewed estimate is that 200,000 people are dying every year
because of the effect of biofuels on food prices. Western elites
may feel comfortable about this, but I do not, and I think a
serious debate about whether some current policies (as opposed to
others) do more harm than good even in the long run is worth
having.



 



Barry Brill's comment on my article, posted at Bishop Hill:



The 1°C rise mentioned by Mr Geere has its base in 2009. As the
IPCC says that global surface temperatures have increased by 0.85°C
since the pre-industrial era, this point of maximum benefit is
about equal to the 2°C target set by all UNFCCC conferences since
Copenhagen.



At the rate of warming recorded in the recent AR5WG1 SPM
(0.12°C/decade since 1951) it will be well after 2100 before even
this level of diminishing benefit is reached. The IPCC says that
the historic rate won't increase unless the TCR is above about
1.5°C – which seems unlikely in view of recent studies.



The series of published economic studies relied upon by
Professor Tol are based on the IPCC's earlier assessment reports,
which were blithely unaware of the "hiatus". Allowance needs to be
made for at least three new factors:



(1) The hiatus has already set the timetable back by about 17
years;

(2) The models assumed a Best Estimate for ECS of 3.0°C. The
consensus behind that figure has now evaporated;

(3) We now know that natural variation (or the Davy Jones
hypothesis) regularly offsets the effects of AGW.



All these factors suggest that Matt Ridley's timing is extremely
conservative. Any warming occurring in the 21st century is likely
to be a great boon to planet Earth and its inhabitants.



 



My response to Bob Ward's article on the LSE website:



 



This week Bob Ward twice repeated in published form a claim that
he surely knows to be misleading. It concerns the number of people
who die in winter versus summer. Both Bjorn Lomborg in the Times
and I in the Spectator have cited studies showing that there are
more excess deaths in winter than summer in most countries. Mr Ward
does not dispute this. But he cites a Health Protection Agency
report which argues that by 2050 winter deaths in the UK will
probably fall less than summer deaths will rise, and argues that
this means that climate change will then be doing more harm than
good in this one respect at least.



Yet the very source he uses states that the increase in summer
deaths reflects "the increasing size of the population in most
UK regions during the 21st century." (p41) It goes on to show
that if you hold population constant, projected climate change will
increase heat deaths by 3,336, but reduce cold deaths
by 10,766.
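
For clarity, the net effect implied by those two HPA figures is a simple subtraction; the netting below is mine, but it matches the "7,000 fewer deaths" figure quoted from Lomborg's letter further down.

```python
# Net change in temperature-related deaths implied by the HPA projection above,
# holding population constant.
extra_heat_deaths = 3336
avoided_cold_deaths = 10766

net_change = extra_heat_deaths - avoided_cold_deaths
print(f"Net change in deaths: {net_change}")
# prints: Net change in deaths: -7430  (i.e. roughly 7,000 fewer deaths overall)
```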



Anybody can make a mistake. But surely it was impossible to miss
the HPA's explanation? Yet in the last ten days Mr Ward simply
repeated the error twice, in an article on the Grantham Institute’s
website attacking me and in a letter to the Times this week
attacking Lomborg.



Lomborg has now written to the Times pointing out that all
three of Ward's points in his letter are wrong:

"First, he claims 21 world-leading economists and I
neglect the health impact of tobacco. Wrong. On page 18,
page 228 and 17 other places we write how tobacco is a
huge problem, almost verbatim what Ward claims
we’re ignoring.



Second, he insists climate change cannot be a present net benefit.
Wrong. This is corroborated by the most comprehensive, peer-reviewed
article, collecting all published estimates showing an overwhelming
likelihood that global warming below 2°C is beneficial.



Third, he claims that the Health Protection Agency shows warming
will lead to 4,000 more deaths in the 2050s. Wrong again. The HPA is
clear that more deaths are a consequence of many more people in the
UK by the 2050s. With a constant population HPA shows warming will
lead to 7,000 fewer deaths."

Mr Ward’s latest attack on my Spectator article, in which I
argued that the evidence suggests probable net benefits from
warming till about 2080, is an egregious example of his aggressive
style. He called my article “ludicrous” and attacked the Spectator
for the “howler” of publishing it. Yet his own riposte is highly
misleading.

He says the average temperature increase expected is “much lower
than the top of the range of projections” from the IPCC. Well,
indeed – that’s the very meaning of the word average, that it's
less than the extreme. And he says that the IPCC’s “high” emissions
scenario suggests sea level “could” be higher than the average
projection. Indeed. I was careful to say in my article that I was
talking about central estimates, not high-end projections. Either
Mr Ward is simply unable to grasp this point or he was being
deliberately mendacious in implying that the extreme scenarios are
likely to happen.



On the effect of carbon dioxide on global vegetation indices,
one of the main ways in which carbon dioxide emissions are
benefiting the planet, Mr Ward is entirely silent. He simply
ignores the data I cite showing a net global greening in all types
of ecosystem over the past 30 years as measured by satellites. Yet
he implies that carbon dioxide fertilisation is a myth. Has he not
read the Donohue paper or examined the Myneni data? It’s easily
viewed on the internet.



I am happy to debate the benefits of climate change with
anybody, and I stressed in my original article that there is no
certainty about the future. I have never said we need to do nothing
to head off the damaging effects of climate change towards the end
of the 21st century. But I do think the fact, an under-reported
one, that climate change has had net benefits so far should be
discussed alongside the fact that many climate policies are doing
real harm to people and ecosystems. Terms of abuse are not
helpful.



 



 



 


Offshore white elephants

My Times column asks if offshore wind is too
expensive:



 



Here’s a short quiz. Question One: which source of
energy is allowed to charge the highest price for its electricity?
Question Two: which source of energy is expected to receive the
greatest capital expenditure over the next seven years? The answer
to both questions is offshore wind.



Offshore wind farms are the elephant in the energy debate.
Today, the energy department estimates that electricity prices are
17 per cent higher as the result of green policies and that this
will rise to 33 per cent by 2020 or 44 per cent if gas prices fall,
as many expect. Offshore wind is the single biggest contributor to
that rise. Of the £15 billion a year that the Renewable Energy
Foundation thinks consumers are going to be paying in total green
imposts by 2020, the bulk will go to support offshore wind.



Britain is a proud leader in offshore wind. “The UK has more
offshore wind installed than the rest of the world combined and we
have ambitious plans for the future,” says Ed Davey, the Energy
Secretary. I wonder why that is. Could it be that other countries
have looked at the technology and decided that it’s far too costly?
George Osborne says he does not want Britain out ahead on green
energy. He should take a long hard look at why we are so far out
ahead on this extravagant folly.



Currently we get under 3 per cent of our electricity from
offshore wind, or less than 0.5 per cent of our total energy. If Mr
Davey’s ambitions are realised and 20 per cent of our electricity
comes from offshore wind in 2020, then we will need 20 gigawatts of
capacity because wind turbines, even at sea, operate at less than
40 per cent of capacity. That’s about six times what we have today
and the cost of building it would be greater than the investment in
nuclear energy over the period.
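
The 20-gigawatt figure follows from the capacity-factor arithmetic; here is a minimal sketch. The annual demand figure of roughly 350 terawatt-hours is my assumption for illustration, not a number from the article.

```python
# Back-of-envelope capacity calculation for the claim above.
annual_demand_twh = 350.0   # ASSUMED UK annual electricity demand, not from the article
offshore_share = 0.20       # Ed Davey's ambition: 20% of electricity from offshore wind
capacity_factor = 0.40      # "less than 40 per cent of capacity", taken as an upper bound
hours_per_year = 8760

energy_needed_gwh = annual_demand_twh * offshore_share * 1000        # ~70,000 GWh
capacity_needed_gw = energy_needed_gwh / (hours_per_year * capacity_factor)
print(f"Offshore capacity needed: about {capacity_needed_gw:.0f} GW")
# prints: Offshore capacity needed: about 20 GW
```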



On the face of it, sticking wind turbines in the sea sounds like
a great wheeze. There’s no need to tangle with turbulent parish
councils worried about their views, or to bribe landowners with
annual payments. The wind blows a little more reliably and
strongly. But the engineering problems have proved daunting. Three
years ago the cement grouting began to dissolve on more than half
of all Europe’s offshore turbines, leading the turbines to move on
their foundations. This necessitated hefty repairs and redesign.
The urgency of meeting political targets was partly to blame.
“There is an alarming asymmetry between construction risks and the
number of players who can manage these risks effectively,” says one
insider.



As a result, costs have not fallen as expected. The Government
had set a target of cutting the price it offered to pay for
offshore wind power to “only” double the wholesale price, but it
quietly abandoned that ambition this summer when it announced that
the “strike price” for offshore wind would drop only a little.
Connecting cables and transformers, dealing with corrosion, losing
days to seasickness, stopping pile-driving during the season when
it might upset porpoises — these have all proved more challenging
than expected and have added to the costs and delays. Many in the
industry think that the lifespan of a turbine in the North Sea is
going to be a lot shorter than the hoped-for 25 years. One study by
Gordon Hughes of Edinburgh University found that the operating
efficiency of Danish offshore wind farms dropped from 39 per cent
to 15 per cent after ten years. Certainly, Britain’s oldest
offshore wind farm — off Blyth in Northumberland — has spent a good
part of its first 12 years out of action.



It is also becoming clear that monstrous turbines are not as
environmentally clean as had been imagined. Last week the £3
billion Navitus Bay wind farm, off Dorset and highly visible from
the Isle of Wight, revealed that it needed 22 miles of cabling to
be dug into the New Forest. Sea birds are at risk too. Pink-footed
geese are apparently avoiding wind farms on migration, while
songbirds are thought to be at risk of becoming confused by
flashing blades while crossing the North Sea. The British Trust for
Ornithology concluded that 2,603 adult and 1,056 immature gannets
will be killed each year by existing and consented wind farms
around the British coast. Since gannet populations are currently
growing, this may not matter much, but it is a far greater toll
than that taken by any other industry.



And then there is the risk of an oil tanker hitting a turbine,
or hitting another ship because of having been squeezed into a
narrow shipping lane past a wind farm. As Lord Greenway (an Elder
Brother of Trinity House) put it in the House of Lords this year,
the risk of collision is increased by more than 400 per cent at
some choke points and “if you place an object in the sea, either a
fixed structure or a floating one, sooner or later a ship is bound
to hit it”. Of course, none of these objections is fatal in itself
— all economic activity entails some risks. They are however a
reminder that this very expensive form of electricity is not
“clean”.



Yet even the high price on offer — £155 per megawatt-hour
compared with £90 for nuclear and below £50 for the typical
wholesale price — may be too low to lure the investment needed if
the target of 20 gigawatts of offshore power is to be met by 2020.
With coal being phased out, gas restricted and onshore wind, wave,
wood and water of limited capability, and even with a hugely
ambitious nuclear programme, we will need some £45 billion invested
in offshore wind by 2020, and another £54 billion by 2030, if the
lights are to be kept on. That is considerably more than in any
other energy technology, even nuclear.



Such sums are surely now unrealistic in a time when energy
prices are a political hot potato. If you are sitting in the
boardroom of an energy company worried about the reputational
damage of putting up prices today, you must be getting cold feet
about the future cost of offshore wind. Or, as a spokesman for SSE
said last week: “Although we are continuing to develop offshore
wind projects, it’s now also becoming increasingly hard to see how
a final decision on investment in new offshore wind capacity could
be made before the 2015 election.”



The defenders of renewable energy used to argue that fossil fuel
prices would rise inexorably as supplies ran out, thus making even
expensive offshore wind look like a bargain. Some still do —  Lord
Stern made this argument to the BBC last week. But most now realise
that the superabundance of shale gas and oil has postponed peak oil
once again and is already driving down coal, gas and oil prices in
the United States, with other parts of the world likely to follow
suit. There is very little chance now of offshore wind undercutting
coal or gas-fired power in coming decades.



In short, Ed Miliband’s politicising of energy prices may have
killed the industry he most cherishes. Soon the energy debate will
no longer be about whether offshore wind farms should or should not
be built, but about how we are to fill the gap caused by the
inevitable failure of the offshore wind industry to meet the
capacity targets expected of it. And that’s a difficult question,
given that the obvious answer — shale gas — has just effectively
been made less feasible by a new environmental rule passed by the
European Parliament.

Published on October 18, 2013 02:46

October 13, 2013

Don't discourage vaping

My Times column tackles an egregious example of
regulation doing more harm than good:



 



Should shampoo be classified as a medicine and prescribed by
doctors? It can, after all, cause harm: it can sting your eyes and
a recent study found traces of carcinogens in 98 shampoo
products. Sure, shampoo can clean hair if used responsibly. But
what’s to stop cowboy shampoo makers selling dangerous shampoo to
the young? Far too many shampoo manufacturers try to glamorize
their product. Time for the state to step in.



Far-fetched? If only. This week the European Parliament sensibly
declined to accept the European Commission’s directive to regulate
as medicines those glowing-tipped electronic nicotine vapour
dispensers called e-cigarettes. The British government,
astonishingly, expressed its disappointment at the vote, and
still intends to treat e-cigarettes as medicines from 2016. “We
believe these products need to be regulated as medicines and will
continue to make this point during further negotiations,” a
spokesperson for the Department of Health said. Who’s “we”, by the
way?



All the signs are that “vaping” is rapidly gaining market share
from smoking. Having begun as an ingenious innovation in China in
the early 2000s, the e-cigarette is now big business, with sales of
more than £2 billion this year, and with the number of users
doubling in some parts of the world just in the last year or so.
The big tobacco firms are rushing to acquire the Chinese start-ups
or their knowhow, a sure sign that they expect to lose customers to
vaping. And they will: vaping helps people stop smoking.



Examine the evidence and you will find that e-cigarettes are
saving far more lives than shampoo, and probably doing no more
harm. A recent study in New Zealand, published by the Lancet, divided 657 smokers
into three groups: one-third were asked to use nicotine
patches, one-third e-cigarettes and one-third fake (placebo)
e-cigarettes. Those given e-cigarettes were more likely to abstain
from smoking entirely during the experiment, more likely to halve
their use of cigarettes if they did not quit entirely, and three
times more likely to continue with the product afterwards.



Meanwhile a thorough search for medical threats caused by
inhaling nicotine vapour – as opposed to smoke – continues to find
very little. Nitrosamine and formaldehyde are found at levels 1,000
times lower than in cigarette smoke. Dose matters in toxicity. Even
a former director of Action on Smoking and Health (ASH) has
described vaping as “a very low risk alternative to cigarettes,
used by smokers as a pleasurable way of taking the relatively
harmless recreational drug nicotine”. Don’t forget there’s
moderately good preliminary evidence that nicotine helps slow progression
of Alzheimer’s, so it might even do good.



None the less, all round the world the health nannies are
itching to get their regulatory hands on e-cigarettes. In a dozen
countries, mainly in Latin America, the things are banned
altogether (presumably after subtle lobbying by tobacco farmers and
the cigarette industry). Here the “NHS Choices” website contains a magnanimous concession that
until they are regulated in 2016, the Medicines and Healthcare
products Regulatory Agency “will not ban the products entirely
during this interim period, but will encourage e-cigarette
manufacturers to apply for a medicine licence.” Meanwhile, it warns
darkly, e-cigarettes “are only covered by product safety
legislation”.



On the same website the NHS recommends instead a white, bulbous
thing called a “nicotine inhalator”, which is a “licensed quit
smoking aid, available on the NHS, [that] consists of just a
mouthpiece and a plastic cartridge”. Somehow, I don’t think they
have got the hang of glamorized marketing. But if they concede the
principle that nicotine inhalers are safe and should be made
available to people at taxpayers’ expense, what are they doing
trying to regulate the sale and purchase of devices, at no cost to
the taxpayer, that do almost the same thing but might actually look
cool, rather than embarrassing, in the street? I cannot help
feeling that the Department of Health is more interested in
retaining control of the nicotine market than in promoting health.
This is known in economics as regulatory capture. Maybe “we” means
the makers of patches and inhalators.



Be in no doubt that regulating e-cigarettes as a medicine would
kill people. It would discourage their use by raising the cost of
launching, selling and monitoring them, and it would make them
harder to buy. Given that we know they help people stop smoking, it
would therefore kill. That’s the likely effect of what the
Department of Health proposes. As so often, the precautionary
principle, by weighing the costs but not the benefits of a new
technology, does net harm.



Vaping costs much less than smoking, not least because it is
untaxed, so it is bound to spread fairly fast. It’s likely to
overtake smoking within a decade, some think. Frankly, we should be
changing regulations to encourage it: it should be allowed indoors,
as a further incentive to help smokers quit. E-cigarette makers
should be allowed to advertise (which this week’s European
Parliament vote prevented) so as to help vaping grab market share
faster and save lives faster. So what if this leads to a
“re-glamorising” of people putting cylinders between their lips?
Remember, “we” objected to smoking because it hurt people, not
because it was glamorous.



Passive vaping is also far more pleasant than passive smoking,
as I can attest, which means that vaping satisfies John Stuart
Mill’s harm principle: you can do what you like so long as it does
not harm others. This is a concept that used to elude smokers in
the old days, when some were, for my taste, too addictively
oblivious to how much discomfort their smoke, ash and stale smell
caused to others. There is simply no sensible reason to object to
somebody else vaping – even on an aeroplane.



The addiction that we should be worrying about is the addiction
of regulators to harmful regulation. America’s Centers for Disease
Control and Prevention (CDC) last month said it was “deeply troubling” that
e-cigarette use among American teenagers has doubled, even though
fewer than 10% of those users had never smoked a real cigarette –
i.e., most of them were probably cutting their risk of death. Yet
justice officials from 40 American states have now demanded that
the Food and Drug Administration regulate vaping. It would have
done so by the end of this month had the government not shut down.



By the way, if you are now worried about shampoo, don’t be.
Coffee and organic broccoli have more carcinogens in them than most
chemical products. But that does not make them dangerous. It’s all
about dose.



 



PS. A source in the European Parliament, after reading my
article, added a fascinating detail:

A further interesting point is the subtle role played by big
pharma, who fund hundreds of various anti-smoking
organisations that curiously all lobbied against e-cigarettes.
They, of course, want control of the market, have all the
pharmaceutical-grade production facilities and large corporate
compliance departments, whereas most existing e-cig
companies are small-scale start-ups.

 



 

Published on October 13, 2013 07:30

October 7, 2013

The inexorable nature of technological progress

My recent Times column on Moore's Law, technological progress
and economic growth:



The law that has changed our lives most in the
past 50 years may be about to be repealed, even though it was never
even on the statute book. I am referring to Moore’s Law, which
decrees — well, observes — that a given amount of computing power
halves in cost every two years.



Robert Colwell, the former chief architect at Intel and head of
something with a very long name in the US Government (honestly,
you’d turn the page if I spelt it out, though now I’ve taken up
even more space not telling you; maybe I will put it at the end),
made a speech recently saying that in less than a
decade, Moore’s Law will come to a halt.



The problem is that the actual electronic components imprinted
on silicon chips cannot get much smaller. They are now down to
about forty silicon atoms across and once they reach ten atoms,
weird quantum effects take over and they stop behaving predictably.
Fortunately, other experts think that may not be the end of the
story. Some other law of falling cost will come to our rescue.
Technological change has developed such inexorable momentum that it
effectively ignores wars, recessions, booms and borders, and we
could not switch it off even if we wanted to.



I am not an economist, but as far as I can make out, for true
economic growth to happen, something somewhere has to get cheaper.
If you have to work for a shorter time to fulfil a need such as
mobile telephony, sandwiches or an airline ticket, you have a bit
of spare time left over to fulfil another need or want. That gives
somebody else a job providing for your new demand. And so on.
Sometimes things get cheaper because of different organisation of
people and things (Ryanair, Primark); sometimes because of new
inventions that cost less to make or run.



So, for example, today it costs less than half a
second of work on the average wage to earn enough to switch on a
bedside lamp for an hour. In 1950 your grandparents had to work for
eight seconds on the average wage to earn that much light. (And in
1880, 15 minutes.) Thanks to improvements in electricity
generation, light technology and productivity — which is reflected
in rising wages — you have seven and a half seconds that your
grandparents did not have in which to fulfil a different need and
provide a living to a different supplier.
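
The arithmetic behind those figures is worth making explicit. A minimal sketch in Python, using only the work-time numbers quoted above (15 minutes in 1880, 8 seconds in 1950, half a second today, with "today" taken as 2013):

```python
# Average annual rate at which the labour cost of an hour of lamplight fell,
# using the work-time figures quoted above.

costs = {1880: 15 * 60, 1950: 8, 2013: 0.5}   # seconds of work per hour of light

def annual_improvement(y0, y1):
    """Average yearly rate at which the cost fell between years y0 and y1."""
    fold = costs[y0] / costs[y1]
    return fold ** (1 / (y1 - y0)) - 1

print(f"1880-1950: a {costs[1880] / costs[1950]:.0f}-fold fall, "
      f"about {annual_improvement(1880, 1950):.1%} a year")
print(f"1950-2013: a {costs[1950] / costs[2013]:.0f}-fold fall, "
      f"about {annual_improvement(1950, 2013):.1%} a year")
```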



In 1965 Gordon Moore, a mid-ranking Silicon Valley deity, drew a
line through just five data points and boldly deduced that the
number of transistors on a silicon chip seemed to be doubling every
18 months. His friend Carver Mead, another silicon deity, pointed
out that this made chips not only cheaper, but more reliable and
less power hungry. “By making things smaller,” Moore wrote, “everything gets better
simultaneously. There is little need for trade-offs.”



And so it proved. Decade after decade, computing costs halved
every two years. Every prediction that they would level off proved
wrong. Moore expected the limit to come when chip components
reached 250 nanometres, but they passed that in 1997 and have now
hit 22 nanometres. The inevitable, incremental, inexorable
plummeting of computing costs led to desktops, mobiles, the
internet, Twitter, better cars, better accounting, better almost
everything. It is a big part of the reason that average income has
trebled globally in real terms since 1965.



Recently another Silicon Valley guru, Ray Kurzweil, realised that Moore’s Law was at work before
silicon chips even existed. The relay, vacuum tube and single
transistor had all improved along the very same trajectory: the
amount of computing power you can buy for $100 has doubled every
two years for a century, showing no slowdown in the Great
Depression or the Second World War and no acceleration in boom
times.
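
Compounded, that trajectory is startling. A minimal worked example (the two-year doubling period and the century-long span come from the paragraph above; the 48-year span since Moore's 1965 observation is added only for comparison):

```python
# Cumulative effect of the amount of computing bought per dollar doubling
# every two years, as described above.

def improvement_factor(years, doubling_period=2):
    """How much more computing a fixed sum buys after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for span in (48, 100):   # since Moore's 1965 paper; over Kurzweil's century
    print(f"After {span} years: roughly {improvement_factor(span):,.0f} times more computing per dollar")
```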



Yet predictions that computing costs would plunge even faster
now that we knew about Moore’s Law also proved wrong. In technology
you need to take each step before you can take the next. And once
you’ve taken each step, the next becomes mandatory, which is why
people throughout history have stumbled on the same inventions at
the same time. As Kevin Kelly catalogues in his book What Technology Wants, there were six
different inventors of the thermometer, three of the hypodermic
needle, four of vaccination, four of decimal fractions, five of the
electric telegraph, four of photography, three of logarithms, five
of the steamboat, six of the electric railroad. According to two
historians, no fewer than 23 people deserve credit for inventing the
incandescent bulb around the same time as Edison.



Such redundancy underlines the futility of trying to prevent or
even steer technological change. Opponents of GM crops and fracking
can prevent a nation sharing in the bounty of progress, but almost
certainly cannot stop the world doing so. As Kelly argues, the
technium (the sum of our technologies) marches onward, selecting
its inventors rather than vice versa. It is almost as if it is
alive.



With Moore’s Law, there are plenty of possible ways to keep
costs falling even if component size stops shrinking. Some are
mundane — better run companies, cheaper materials. But others are
high-tech. If transistors can communicate using light rather than
wires, speeds could jump, energy waste fall and costs plunge again.
Or if last week’s Nature magazine is to be
believed, carbon nanotubes — essentially rolled-up sheets of the
sexy new substance called graphene — could one day “take us at
least an order of magnitude in performance beyond where you can
project silicon could take us,” according to Philip Wong, of Stanford
University.



Moore’s Law is a reminder that by far the most useful thing you
can do for humanity is to lower the cost of something. Yet far too
few people in government, charities, companies, churches or aid
organisations think of this as their duty. More’s the pity. Let’s
hope the repeal of Moore’s Law is followed by the enactment of many
more such Moore’s laws.



Mr Colwell’s job, by the way, is director of the Microsystems
Technology Office at the US Department of Defense’s Defense
Advanced Research Projects Agency (Darpa).

Published on October 07, 2013 00:26

October 1, 2013

Global lukewarming need not be catastrophic

My lukewarming column in the Times on 28th September 2013
pleaded in vain for a moderate middle approach to climate change
and drew a parallel with the nature-nurture debate. Here's what I
wrote:



In the climate debate, which side are you on? Do
you think climate change is the most urgent crisis facing mankind
requiring almost unlimited spending? Or that it’s all a hoax,
dreamt up to justify socialism, and nothing is happening
anyway?



Because those are the only two options, apparently. I know this
from bitter experience. Every time I argue for a lukewarm “third
way” — that climate change is real but slow, partly man-made but
also susceptible to natural factors, and might be dangerous but
more likely will not be — I am attacked from both sides. I get
e-mails saying the greenhouse theory is bunk and an ice age is on
the way; and others from guardians of the flame calling me a
“denier”.



Yet read between the lines of yesterday’s report from the
Intergovernmental Panel on Climate Change (IPCC) and you see that
even its authors are tiptoeing towards the moderate middle. They
now admit there has been at least a 15-year standstill in
temperatures, which they did not predict and cannot explain,
something sceptics were denounced for claiming only two years ago.
They concede, through gritted teeth, that over three decades,
warming has been much slower than predicted. They have lowered
their estimate of “transient” climate sensitivity, which tells you
roughly how much the temperature will rise towards the end of this
century, to 1-2.5C, up to a half of which has already happened.



They concede that sea level is rising at about one foot a
century and showing no sign of acceleration. They admit there has
been no measurable change in the frequency or severity of droughts,
floods and storms. They are no longer predicting millions of
climate refugees in the near future. They have had to give up on
malaria getting worse, Antarctic ice caps collapsing, or a big
methane burp from the Arctic (Lord Stern, who still talks about
refugees, methane and ice caps, has obviously not got the memo).
Talk of tipping points is gone.



They have come to some of this rather late in the day. Had they
been prepared to listen to lukewarmers and sceptics such as Steve
McIntyre, Ross McKitrick, Pat Michaels, Judith Curry and others,
then they would not have had to scramble around at the last minute
for ad hoc explanations. These issues have been discussed ad
nauseam by lukewarmers.



The climate war has been polarised in the same way that the
nature-nurture debate was in the 1970s. Back then, if you argued
that genes affected behaviour even a bit, you were pigeon-holed as
a heartless fatalist with possible tendencies to Nazism. I barely
exaggerate at all. Today, if you express a hint of doubt about the
possibility of catastrophic warming, you are a heartless fool with
possible tendencies to Holocaust denial. Sceptics are “truly evil
people”, the former US Senator Tim Wirth said this week.



In the nature-nurture war, polarisation was maintained by the
fact that people only read their own side’s accounts of their
opponents’ arguments. So they spent their time attacking absurd
straw men. Likewise in the climate debate. The most popular
sceptical blogs — such as Wattsupwiththat in America, Bishop Hill
in Britain, JoNova in Australia and Climate Audit in
Canada — provide sometimes brilliant analysis and occasional mad
mistakes: scientific conversation as it should be.



[Update: sure enough Steve McIntyre of Climate
Audit found a huge problem in the IPCC report on day 1: the graphs
appear to have been changed since the previous draft, without
referring back to reviewers, in such a way as to reduce the
apparent failure of the models to match reality. Watch this space.
See "The IPCC disappears the discrepancy".]



But most “proper” climate scientists won’t go near them, so
misunderstand what the sceptics are talking about. They keep saying
that sceptics don’t “believe” in climate change. Nothing could be
further from the truth: most sceptics think man-made climate change
is real, just not very frightening. So the IPCC saying yesterday
that it is 95
per cent certain that more than half of the warming since 1950 is
man-made is truly a damp squib: well, duh.



We’ve warmed the world and will probably warm it some more.
Carbon dioxide alone can’t cause catastrophe. For that you need
threefold amplification by extra water vapour — which is not
happening. So maybe it’s not a big enough problem to justify
ruining landscapes with wind turbines, cutting rain forest to grow
biofuels and denying World Bank loans to Africans for life-saving
coal-fired electricity. (I declare a commercial interest in coal
and wind, although I give the latter money away as an essay prize:
won this week by Michael Ware’s brilliant demolition
in The Spectator of the electric car
madness.)



Of course, the IPCC’s conversion to lukewarming is not the way
it will be spun, lest it derail the gravy train that keeps so many
activists in well-paid jobs, scientists in amply funded labs and
renewable investors in subsidised profits. After all, Dr Rajendra
Pachauri, chairman of the IPCC, confidently asserted in 2009 that
“when the IPCC’s fifth assessment comes out in 2013 or 2014, there
will be a major revival of interest in action that has to be
taken.” He said this before the people who would write the report
had been selected, before any meetings had happened and before the
research on which it was based had even been published.



Nature-nurture eventually grew reasonable: most people now agree
it’s a bit of both. In the end, the same moderation will happen
with climate, but by then fortunes of your money may have been
spent on technologies that do more harm than good.



We need a grown-up conversation without name-calling about the
possibility that, if the climate resumes warming at the rate the
IPCC expects, it may do more good than harm for at least 70 years:
longer growing seasons, fewer droughts, fewer excess winter deaths
(which greatly exceed summer deaths even in warm countries) and a
general greening of the planet. See here and here.  Satellites show that in the period 1982-2011, 31 per cent
of Earth’s vegetated area became more green, 3 per cent more
brown. The main reason: carbon dioxide.



Leave the last word to Professor Judith Curry, of the School of
Earth and Atmospheric Sciences at the Georgia Institute of
Technology, who used to be alarmed and no longer is. Her message to
the IPCC this week was: “Once you sort out the uncertainty in
climate sensitivity estimates and fix your climate models, let us
know ... And let us know if you come up with any solutions to
this ‘problem’ that aren’t worse than the potential problem
itself.”



 



Post-script:



In response to a comment by Tom Whipple, challenging my "gravy
train" remark, I wrote the following:



I'm surprised by your naivety here. Go and look up the total
grants to climate scientists for their research. It's a very large
number, has grown hugely over the last three decades, is almost
entirely unavailable to sceptics and would vanish like snow in
summer if the scientists were more honest about lukewarming. No,
they don't get rich, but they do receive very large flows of funds
from taxpayers, which sceptics don't.



As for the idea that straying from the consensus is lucrative,
the reverse is the truth. Sceptic scientists have in some cases
been driven from their posts and the sceptics I know struggle to
make ends meet by doing other jobs.



As Jo Nova, one of those who lives on a shoestring, pointed out
after doing research on the sums concerned:



"As Climate Money pointed out: all Greenpeace
could find from Exxon was a mere $23 million for skeptics over a decade, while
the cash cow that is catastrophic climate change roped in $2,000
million a year every year during the same period for the scientists
who called other scientists “deniers”."



See http://scienceandpublicpolicy.org/originals/climate_money.html



which finds that the US government alone has spent "$79 billion
since 1989 on policies related to climate change, including science
and technology research, administration, education campaigns,
foreign aid, and tax breaks." Tom, it would be great to get the
numbers for the UK -- why not do that?



And as others have pointed out, McIntyre, McKitrick, Michaels,
Curry and others do publish in the journals. But the gatekeeping
by alarmists exposed by the Climategate scandal continues, so it's
far harder to get sceptical papers published. Oh, and remember that
30% of the last IPCC report's sources were non-peer-reviewed.



If you don't believe me, Tom, why not look into it? Go and do a
feature on some of these sceptic blogs. They have HUGE traffic
compared with the ones that promote alarm. At the very least they
are an interesting social phenomenon.



Or you could remain content in the echo chamber of only
reading your friends' accounts of your enemies' arguments as I
describe above.



2nd postscript:



In response to a message from Hugo Rifkind, I wrote the
following:



The 3.2mm per year since 1993 is a steady rise. You can find the
graph on the web (e.g. here: http://wattsupwiththat.files.wordpress.com/2013/09/clip_image0044.jpg)
and there's no acceleration within that 18-year period, which is
what I was referring to. It's showing no sign of acceleration in
the last two decades, in other words. That's the total satellite
era as far as sea level is concerned. Before that, the data are
much less good. They are saying, based on buoys corrected for
changing land levels (which complicate the picture, e.g. as
Scotland continues to rise because the ice sheet has been lifted
off it), that the rate of increase was lower before 1993. Well,
yes, we would expect that, because there was slight cooling during
the periods 1890-1910 and 1940-1980, so sea level rise was likely
to have been slower. So yes, it probably did accelerate around
1990, but no, it is not accelerating now. I stand by the statement.
They did forecast an acceleration in each of their previous four
reports and it has so far failed to show up.
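
For readers who want to see what "acceleration" means here in statistical terms, a minimal sketch: fit both a straight line and a quadratic curve to an annual sea-level series and look at the size of the quadratic term. The data below are synthetic, generated to mimic a steady 3.2mm-a-year rise with some noise; they stand in for, and are not, the satellite record itself.

```python
# Illustrative test for acceleration in a sea-level series: compare a linear
# fit with a quadratic fit. The data are SYNTHETIC (a steady 3.2 mm/yr trend
# plus noise), standing in for the satellite-era record discussed above.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1993, 2012)                    # the satellite era referred to above
t = years - 1993
level_mm = 3.2 * t + rng.normal(0, 2, t.size)    # synthetic sea-level anomaly

linear = np.polyfit(t, level_mm, 1)              # [rate, intercept]
quadratic = np.polyfit(t, level_mm, 2)           # [a, b, c]; acceleration = 2a

print(f"Linear trend: {linear[0]:.2f} mm per year")
print(f"Acceleration term: {2 * quadratic[0]:.3f} mm per year squared")
# A quadratic term indistinguishable from zero means a constant rate of rise,
# which is what "no acceleration" means here.
```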



This is what they said in 2007: “Global average sea level rose
at an average rate of 1.8 mm per year over 1961 to 2003. The rate
was faster over 1993 to 2003: about 3.1 mm per year.” So they
claimed the acceleration had then happened. There has been no
acceleration since then. That's news.



As for their prediction of an acceleration in the future, yes,
but their previous predictions of acceleration were wrong.
Remember, I was not arguing that there will be no further sea level
rise and no further acceleration, just that reality continues to be
less alarming than predicted. It really is peculiar how journalists
are not prepared to ask tough questions about failed predictions,
but instead jump on those who do.



On the general doomsday stuff, the IPCC may not have yet
pronounced on it, but the scientists who contribute to it have
conceded these points in recent publications.



On malaria: Gething et al have said as follows and all the
people I talk to say this will now be the orthodox view in the IPCC
report: "First, widespread claims that rising mean temperatures
have already led to increases in worldwide malaria morbidity and
mortality are largely at odds with observed decreasing global
trends in both its endemicity and geographic extent. Second, the
proposed future effects of rising temperatures on endemicity are at
least one order of magnitude smaller than changes observed since
about 1900 and up to two orders of magnitude smaller than those
that can be achieved by the effective scale-up of key control
measures. Predictions of an intensification of malaria in a warmer
world, based on extrapolated empirical relationships or biological
mechanisms, must be set against a context of a century of warming
that has seen marked global declines in the disease and a
substantial weakening of the global correlation between malaria
endemicity and climate."



On refugees, the claim was made by UNEP that there would be 50m
climate refugees by 2010. See here: http://wattsupwiththat.com/2011/04/15/the-un-disappears-50-million-climate-refugees-then-botches-the-disappearing-attempt/



You say "And the frequency of droughts, floods and storms
obviously wouldn't be affected yet, because the temperature hasn't
changed, yet." which I find puzzling for two reasons.



1. The temperature has changed! The IPCC says it has, so
do I. Just not in the last 15 years. But the change before
that led to no change in extreme weather.



2. There are very frequent claims made, even in the pages of the
Times, that the frequency of these things has changed! Lord Hunt in
a Times column in April said: "Extreme weather has become more
frequent across the world," and proceeded to give examples ranging
from unseasonable cold in Britain to floods, droughts
and storms elsewhere. I was astonished at
the time that a former chairman of the Met Office should tell
such an untruth and get away with it. Al Gore and many others
frequently claim that climate change caused Sandy and
frequently claim that extreme weather has increased. It would be
bizarre if you were to deny that this claim of an increase in
extreme weather has been made very frequently by scientists and
politicians as well as journalists. Yet
the IPCC itself produced a report in 2011 that explicitly denied
it, which is what I was referring to.
So I stand by that.



This is what Roger Pielke Jr said in recent testimony to
Congress:



"• It is misleading, and just plain incorrect, to claim
that disasters associated with hurricanes, tornadoes, floods
or droughts have increased on climate timescales either in the
United States or globally. It is further incorrect to
associate the increasing costs of disasters with the emission
of greenhouse gases.




• Globally, weather-related losses ($) have not increased
since 1990 as a proportion of GDP (they have actually
decreased by about 25%) and insured catastrophe losses have
not increased as a proportion of GDP since 1960.



• Hurricanes have not increased in the US in frequency,
intensity or normalized damage since at least 1900. The same
holds for tropical cyclones globally since at least 1970 (when
data allows for a global perspective).



• Floods have not increased in the US in frequency or
intensity since at least 1950. Flood losses as a percentage of
US GDP have dropped by about 75% since 1940.



• Tornadoes have not increased in frequency, intensity or
normalized damage since 1950, and there is some evidence to
suggest that they have actually declined.



• Drought has “for the most part, become shorter, less
frequent, and cover a smaller portion of the U. S. over the
last century.” Globally, “there has been little change in
drought over the past 60 years.”



• The absolute costs of disasters will increase
significantly in coming years due to greater wealth and
populations in locations exposed to extremes. Consequently, disasters
will continue to be an important focus of policy, irrespective
of the exact future course of climate change."



Two final points if I may. I do hope you will give the prime
minister and almost every other politician a hard time for
confusing a statement about the past — 95% certain that more than
half the warming since 1951 was man-made — with a statement about
the future, which he did. If deliberate, that was naughty. If he
was confused, then it was embarrassing. Everybody in the TV studio
I was in last Friday made the same confusion.



What very few of the reporters have done is report that the
projections of warming have been lowered compared with 2007. That's
worth pointing out surely! You may not think they have been lowered
enough to remove the possibility of disaster, but lowered they have
been. It means that it is certainly possible that climate change
policies may do more harm than climate change. I may be wrong to
think that's going to happen, of course, but I am hardly a moral
criminal for raising the possibility and suggesting we discuss it —
yet that's the way most people are treating me.

Published on October 01, 2013 09:46
