Archive for the ‘energy’ Category

Not much of Chinese energy is from wind or solar.

December 2, 2013

A few days ago I wrote about the Pollyannaish belief that “China is slowing its carbon emissions.”  An essential element of this ridiculous meme is that the Chinese are producing significant portions of their energy via wind and solar. Not true.

Consider just electricity.   Here is a breakdown of China’s installed electricity capacity by fuel type in 2011, and its electricity generation by fuel type for 2000 to 2010, from the United States Energy Information Administration’s evaluation of China’s energy consumption (2012)…

"China's installed electricity capacity by fuel, 2011," from the US Energy Information Administration's evaluation of China's energy consumption

“China’s installed electricity capacity by fuel, 2011,” from the US Energy Information Administration’s evaluation of China’s energy consumption

"China's electricity generation by fuel type, 2000-2010" from the US's Energy Information Administration

“China’s electricity generation by fuel type, 2000-2010″ from the US’s Energy Information Administration

What do these charts tell you?

These two charts are drawn from the same data set and appear next to each other in the same document.

As you can see from the top chart, 6.2% of China’s installed electricity capacity is in wind or solar.  That is over 60 gigawatts installed.  Compare that to the US’s 60 gigawatts of installed wind and 10 gigawatts of installed solar.

Alas, the top chart shows installed capacity, not actual production.  There is a little thing called the “capacity factor.”  The capacity factor is the fraction of the time that particular power source can actually produce power at its rated capacity.  For example, a one gigawatt capacity nuclear power plant will have a capacity factor of about 90%, meaning it can produce one gigawatt 90% of the time.  Wind and solar capacity factors tend to be much lower, simply because sometimes the wind doesn’t blow and the sun doesn’t shine.  The capacity factor for wind in China is 22%.
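To see what the capacity factor does to the numbers, here is the arithmetic as a minimal Python sketch.  The 60 gigawatts and 22% come from this post; the figure the sketch computes is only an upper bound, and actual wind and solar generation (under 1.5% of the total, as the enlarged chart below shows) came in lower still, presumably because solar capacity factors are lower than wind’s and much of the capacity was too newly installed to produce for a full year.

    HOURS_PER_YEAR = 8766  # average hours in a year (24 x 365.25)

    def annual_twh(installed_gw, capacity_factor):
        """Energy a fleet can deliver in a year, in TWh: nameplate GW
        times capacity factor times hours, divided by 1000 (GWh -> TWh)."""
        return installed_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

    # 60 GW of wind and solar at the 22% Chinese wind capacity factor
    # gives an upper bound of roughly 116 TWh per year.
    ceiling = annual_twh(60, 0.22)
    total_2010 = 3965  # TWh of net generation in 2010, from the EIA quote below
    print(f"{ceiling:.0f} TWh, or {ceiling / total_2010:.1%} of 2010 generation")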

The second chart shows the amount of electrical energy actually produced using the various “fuel types”.  Do you see that very, very thin yellow band along the top of the second chart?  That represents the Chinese electricity generation due to that 6.2% of installed wind and solar.  Can’t see the yellow line?  Let me blow up the last year of the chart for you…

“China’s electricity generation by fuel type,” with the final year of the chart enlarged

That 6.2% of installed capacity in the form of wind and solar yields less than 1.5% of the actual energy.

China’s energy future

The Energy Information Administration document tells us…

China is the world’s second largest power generator behind the US, and net power generation was 3,965 Terawatt-hours (TWh) in 2010, up 15 percent from 2009. Nearly 80 percent of generation is from fossil fuel-fired sources, primarily coal. Both electricity generation and consumption have increased by over 50 percent since 2005, and EIA predicts total net generation will increase to 9,583 TWh by 2035, over 3 times the amount in 2010.

Wow!  Three times as much as in 2010, a mere 21 years from now!  Where will all this energy come from?

Again, the Energy Information Administration…

Total fossil fuels, primarily coal, currently make up nearly 79 percent of power generation and 71 percent of installed capacity. Coal and natural gas are expected to remain the dominant fuel in the power sector in the coming years. Oil-fired generation is expected to remain relatively flat in the next two decades. In 2010, China generated about 3,130 TWh from fossil fuel sources, up 11 percent annually.

Let me be clear, I am not knocking the use of wind and solar.  I have been personally working on solar energy for 17 years.  But I am knocking unrealistic expectations and quasi-religious environmentalist beliefs.  And I am not criticizing the Chinese for their increasing energy consumption.  They understand, correctly, that abundant energy is the key to prosperity.

Michael Mann averaging error demo

December 13, 2009

This may be beating a dead horse, but I thought it would be fun to examine the question of data centering, or mean subtraction, for principal component analysis (PCA).   So, I created a program that does a side-by-side comparison of PCA on simple noise with proper averaging and with Michael Mann-style improper averaging.

This was motivated by Steve McIntyre’s observation

“We [Steve McIntyre and Ross McKitrick] also observed that they [Michael E. Mann, Raymond S. Bradley and Malcolm K. Hughes] had modified the principal components calculation so that it intentionally or unintentionally mined for hockey stick shaped series. It was so powerful in this respect that I could even produce a HS from random red noise.”

The basic idea of principal component analysis (PCA)

PCA is used to determine the minimum number of factors that explain the maximum variance in multiple data sets.  In the case of the hockey stick, each data set represents a chronological set of measurements, usually a tree ring chronology.   These chronologies may vary over time in similar ways, and in theory these variations are governed by common factors.  The single factor that explains the greatest amount of variance is the 1st principal component.  The factor that explains the next greatest amount of variance is the second principal component, etc.  In the case of the hockey stick, the first principal component is assumed to be the temperature.  With this assumption, understanding how the first principal component changes with time is the same as understanding how the temperature changes with time.

PCA is a method to extract common modes of variation from a set of proxies.

The following bullets give a brief explanation of the mathematical procedure for determining the principal components; a minimal code sketch of the same steps follows the list.  See the tutorial by Jonathon Shlens at New York University’s Center for Neural Science for a nicer, more detailed explanation.

Mathematical procedure

  1. Start with m data sets of n points each.  For example, m tree ring chronologies each covering n years.
  2. Calculate the mean and standard deviation for each of the m data sets.
  3. Subtract each mean from its corresponding data set.  This is called centering the data.
  4. Normalize each data set by dividing it by its standard deviation.
  5. Create an m x n data matrix where each of the m rows has n data points, say, one point per year.
  6. Calculate the covariance for each possible pair of data sets by multiplying the data matrix by its own transpose, yielding  a square, m x m, symmetric covariance matrix.
  7. Find the eigenvalues and eigenvectors of the covariance matrix.
  8. Multiply the eigenvector corresponding to the largest eigenvalue by the centered, normalized m x n data matrix to get the 1st principal component.  Similarly, use the eigenvector corresponding to the second largest eigenvalue to get the 2nd principal component, etc.
  9. The magnitude of the eigenvalue tells the amount of variance that is explained by its corresponding principal component.
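Here is the same procedure as a minimal NumPy sketch — an illustration of the steps above, not the LabVIEW demo described later.  The “center_years” input anticipates the improper centering discussed next.

    import numpy as np

    def pca(proxies, center_years=None):
        """PCA per the steps above.  proxies is an m x n array (m data
        sets, n points each).  center_years=None centers each row on its
        full-record mean (step 3 done correctly); center_years=k centers
        on the mean of only the last k points (improper centering)."""
        if center_years is None:
            mean = proxies.mean(axis=1, keepdims=True)  # steps 2-3
        else:
            mean = proxies[:, -center_years:].mean(axis=1, keepdims=True)
        r = (proxies - mean) / proxies.std(axis=1, keepdims=True)  # step 4
        cov = r @ r.T                            # step 6: m x m covariance
        eigvals, eigvecs = np.linalg.eigh(cov)   # step 7, ascending order
        order = np.argsort(eigvals)[::-1]        # largest eigenvalue first
        pcs = eigvecs[:, order].T @ r            # step 8: principal components
        return pcs, eigvals[order]

    # Usage: 70 proxies of 1000 points each (white noise for illustration)
    rng = np.random.default_rng(0)
    pcs, ev = pca(rng.normal(size=(70, 1000)))
    print(ev[0] / ev.sum())  # step 9: fraction of variance explained by PC1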

The data centering, or mean subtraction (step 3 in the above list), is where one of the hockey stick controversies arises.  McIntyre and McKitrick showed that Mann did not subtract the mean of all of the points of the roughly 1000-year data sets, but rather subtracted the mean of only the last 80 or 90 points (years).  They claim that this flawed process would yield a 1st principal component that looks like a hockey stick, even when the proxy data was made up of simple red noise.

Here is an explanation of Mann’s error in mathematical and graphical formats:

Let every proxy be given a number.  Then the 1st proxy is P1, the second proxy is P2, and the jth proxy is Pj.

Each proxy is made up of a series of points representing measurements in chronological order.   For a particular proxy, Pj, the ith point is denoted by Pji.

Here are two synthesized examples of proxy data.  We can call them Pj and Pk.

We center and normalize each data set by subtracting its average from itself, and then dividing by its standard deviation.  We can call the new re-centered and normalized data Rj and Rk.  Rj and Rk have the same shape as Pj and Pk, but they are both centered around zero and vary between about plus and minus 2.5, as shown below…

Some words about covariance

The covariance, σ, of two data sets, or proxies, is a measure of how similar their variations are.  If the shapes of two properly re-centered and normalized data sets, say Rj and Rk are similar, their covariance will be relatively large.  If their shapes are very different, then their covariance will be smaller.  It is easy to calculate the covariance of two data sets: simply multiply the corresponding terms of each data set and then add them together…

σjk =  Rj1Rk1 +  Rj2Rk2  + … + RjnRkn = Σ RjiRki

If Rj and Rk are exactly the same, then σjk will be n, exactly the number of points in each data set (for example, the number of years in the chronology).  This is a consequence of the data sets being centered and divided by their standard deviations.  If Rj and Rk are not exactly the same, then σjk will be less than n.  In the extreme case where Rj and Rk have absolutely no underlying similarity, then σjk could be zero. 
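Both properties are easy to check numerically.  A quick sketch, using the same centering and normalization (NumPy’s std is the population standard deviation, which is what the definition above requires):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    x = rng.normal(size=n)
    r = (x - x.mean()) / x.std()  # centered and normalized

    print(np.dot(r, r))           # a set with itself: exactly n, here 100.0

    y = rng.normal(size=n)
    s = (y - y.mean()) / y.std()
    print(np.dot(r, s))           # two unrelated sets: far smaller than n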

Some words about noise

Two data sets of totally random, white noise will approach this extreme case of no underlying similarity.  So their covariance, σ, will be very small.  This is easy to understand when you consider that, on the average, each pair of corresponding points in the covariance calculation whose product is positive will be offset by another pair whose product is negative, giving a sum that tends to zero.  But there are other types of noise, such as red noise, which exhibit more structure and are said to be “autocorrelated.” In fact, the two data sets shown above, Pj and Pk  (or Rj and Rk), are red noise.  The covariance of two red noise data sets will be less than n, and if there are enough data points in each set (Rj and Rk each have 100 points) then σjk will likely be small.

One of the important differences for this discussion between white noise and red noise is that the average of white noise over short sub-intervals of the entire data set will be close to zero.  But that will not necessarily be the case for red noise, which you can visually confirm by looking at the plots of Rj and Rk, shown above.
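A standard way to synthesize red noise is a first-order autoregressive (AR(1)) process; whether this matches the demo’s exact generator is my assumption, but it produces the same kind of structured wander.  A sketch:

    import numpy as np

    def red_noise(n, rho, rng):
        """AR(1) noise: each point is rho times the previous point plus a
        fresh white-noise shock.  rho = 0 gives white noise; rho near 1
        gives highly structured red noise."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = rho * x[i - 1] + rng.normal()
        return x

    rng = np.random.default_rng(2)
    white = red_noise(100, 0.0, rng)
    red = red_noise(100, 0.9, rng)
    # Means over a short sub-interval: near zero for white noise, but the
    # red series wanders, so its last-20-point mean can sit far from zero.
    print(white[-20:].mean(), red[-20:].mean())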

What about incorrect centering?

McIntyre  and McKitrick found that Mann improperly performed step 3, the subtraction of the average from each data set.  Instead, Mann subtracted the average of only the last 80 or 90 points (years) from data sets that were about 1000 years long.  For most sets of pure white noise this approach has little effect, because the average of the entire data set and the average of a subset of the data set are usually nearly identical.  But for red noise the effect of improper centering tends to have a much greater effect.  Because of the structure that is inherent in red noise, the average of a subset may be very different from the average of the entire data set.

Here is what the two data sets, shown above, look like when they are improperly centered using the mean of only the last 20 points out of 100 points…

What effect does improperly re-centering (for example, subtracting the average of only the last 80 years of 1000-year data sets) have on the covariance of two data sets?  Let’s call the improperly re-centered and normalized data R’j and R’k, where R’j = Rj + Mj, R’k = Rk + Mk, Rj and Rk are correctly centered, and Mj and Mk are the additional improper offsets.  Then the improper covariance, σ’jk, between R’j and R’k is given by…

σ’jk = Σ R’jiR’ki 

        =  Σ (Rji + Mj)(Rki + Mk)

        =  Σ RjiRki + Σ RjiMk + Σ RkiMj + Σ MjMk

        =  Σ RjiRki + Mk Σ Rji + MjΣ Rki + n MjMk

But remember, Rj and Rk are properly centered, so Σ Rji and Σ Rki each equal zero.  So…

σ’jk  =  Σ RjiRki +  n MjMk

And  Σ RjiRki = σjk.  This leaves…

σ’jk  =   σjk  +  nMjMk

In cases where the product of Mj and Mk has the same sign as σjk, the absolute value of σ’jk will be larger than the absolute value of σjk.  This falsely indicates that R’j and R’k have variations that are more similar than they really are.  The PCA algorithm will then give a higher weight to these proxies in the eigenvector that is used to construct the first principal component.
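This identity, σ’jk = σjk + nMjMk, can be verified numerically in a few lines (the offsets here are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    rj = rng.normal(size=n)
    rj = (rj - rj.mean()) / rj.std()  # properly centered and normalized
    rk = rng.normal(size=n)
    rk = (rk - rk.mean()) / rk.std()
    mj, mk = 0.7, -0.4                # arbitrary improper offsets Mj, Mk

    sigma = np.dot(rj, rk)                   # proper covariance
    sigma_prime = np.dot(rj + mj, rk + mk)   # improper covariance
    print(sigma_prime, sigma + n * mj * mk)  # the two agree, as derived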

Mann Averaging Error Demo

I have written a piece of code that demonstrates the effect of Mann’s centering error.  This code is written in LabVIEW version 7.1.   You can get my source code, but you will need the LabVIEW 7.1 Full Development System or a later version that contains the “Eigenvalues and Vectors.vi” in order to run it.  You can also modify the code if you desire.

Download Mann Averaging Error Demo

This demo generates synthetic proxies of red noise with autocorrelation between 0.0 and 1.0.  If the autocorrelation is set to 0.0, then white noise is generated.  If the autocorrelation is set to 1.0, then brown noise is generated.  Principal component analysis is then performed on these proxies two different ways: with proper centering and with Mann style improper centering.

Each of the synthetic proxies is shown on the top plot in sequence.  After the proxies are synthesized, PCA is performed and the eigenvalues and principal components are shown in sequence.  Eigenvalues and principal components for correct centering are shown on the right.   Eigenvalues and principal components for improper centering are shown on the left.

After all the principal components have been shown, the synthetic proxy graph at the top of the window defaults to the first proxy, and the principal component windows at the bottom default to the 1st principal component.  The operator then has the opportunity to examine the individual proxies by changing the number in the yellow “View Proxy #” box and to examine the principal components by changing the number in the yellow “View Principal Component #” box.

The operator can also select the “Overlay of all Proxies” tab at the top right corner of the window to see all proxies overlaid before centering.

This demo lets the operator select the following parameters…

  • Number of data points (years) per proxy.  The default is set to 1000, but you can make this anything you want.
  • Number of proxies.  The default is 70, but you can select anything you want.
  • Autocorrelation.  Set to 0.0 for white noise, between 0.0 and 1.0 for red noise, and 1.0 for brown noise.  The higher the autocorrelation is set, the more random structure there will be in synthetic proxies.   The default is 0.98, giving highly structured noise, but you can experiment with other settings.
  • Number of years to average over.  This determines how many years are used for the improper Mann style centering.  The default is set to 80, because this is approximately what Mann used.
  • Include/Don’t Include Hockey Stick Proxy.  When “Don’t Include Hockey Stick Proxy” is selected, all proxies are noise.  When “Include Hockey Stick Proxy” is selected, the first proxy will have a hockey stick shape superimposed on noise; all other proxies will be noise.  The default setting is “Don’t Include Hockey Stick Proxy.”

Play around with the settings.  Try these…

  • Set the autocorrelation to 0.0 (pure white noise) and select “Don’t Include Hockey Stick Proxy.”  This is the combination that is least likely to result in a hockey stick for the flawed first principal component.  Run it several times.  Amazingly, you are likely to see a small, noisy hockey stick for the first flawed principal component.
  • Set the autocorrelation to 0.0 (pure white noise), and select “Include Hockey Stick Proxy.”  This will give one noisy hockey stick proxy and pure white noise for all other proxies.  The first flawed principal component will be a crystal clear hockey stick with a dominating eigenvalue.  Notice that the first proper principal component is just noise.
  • Set the “Years” and “Average of last” years to the same value.  Since proper centering means averaging over all years, this will result in the “flawed” results actually being correct and identical to the “proper” results.
  • Set “Average of last” years to 80 (default) and try various autocorrelation settings.  You will find that any autocorrelation setting will usually result in a hockey stick first principal component.

Here are some screen shots of the Mann Averaging Error Demo…

Voila! A Hockey Stick from noise…

Please let me know of any bugs or suggestions for enhancements.  If anyone is interested in LabVIEW 8.6 version, let me know and I will make it available.

Scientific American’s “A Path to Sustainable Energy by 2030:” the Cost

November 13, 2009

The cover story of the November issue of Scientific American, “A Path to Sustainable Energy by 2030,” by Mark Z. Jacobson and Mark A. Delucchi promises a path to a “sustainable future” for the whole world in just 20 years. They define “sustainable” as a world where all energy sources are derived from water, wind and solar. Nuclear need not apply.

The article had a few words about the cost, but much was left out.  Jacobson and Delucchi conclude that their grand plan will cost about $100 trillion.  I found this ridiculously large sum to be too low!  My rough calculation yields a cost of $200 trillion!

This post is an attempt to fill in a few blanks.

I will accept the authors’ mix of energy sources, apply some capacity factor estimates for each source, throw in an estimate of the land required for some sources, and estimate the installation cost per watt for each source. Since all of these numbers are debatable, I provide references for most of them. But some of the numbers are simply my estimates. Also, I consider only installation costs.  I do not consider the additional costs of operation and maintenance, which may be considerable.

Another point: the authors say that the US Energy Information Administration projects the world power requirement for 2030 would be 16.9 TW to accommodate population increase and rising living standards. By my reading, the Energy Information Administration’s estimate is actually 22.6 TW by 2030 (see note 13 below).  Nevertheless, Jacobson and Delucchi base their plan on only 11.5 TW, with an assumption that a power system based entirely on electrification would be much more efficient.  I will go along with their estimate of 11.5 TW for the sake of argument.

Here are my numbers

(click on image to get larger view)…

Total energy cost calculation

The numbers that I have placed in the blue columns are open to debate, but I am fairly confident of the capacity factors.  The capacity factor for concentrated solar power, with energy storage, such as molten salt, can vary depending on interpretation.  If energy is drawn from storage at night, then the capacity factor could be argued to be higher.  On the other hand, it would result in greater collection area, collection equipment and expense.   Note that using my estimates for capacity factors, the “total real power” works out to 12.03 TW, close to Jacobson’s and Delucchi’s 11.5 TW.
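The structure of the calculation is simple: divide each source’s share of real (average) power by its capacity factor to get the nameplate watts that must be installed, then multiply by the installed cost per watt.  Here is that arithmetic as a sketch; the capacity factors and dollars per watt below are placeholders for illustration, not the entries from my table.

    def cost_trillions(real_tw, capacity_factor, dollars_per_watt):
        """Installed watts = real power / capacity factor;
        cost = installed watts x cost per installed watt, in $ trillions."""
        installed_watts = real_tw * 1e12 / capacity_factor
        return installed_watts * dollars_per_watt / 1e12

    # Two illustrative rows (placeholder values):
    print(cost_trillions(1.0, 0.15, 8.0))  # a rooftop-PV-like row: ~$53T
    print(cost_trillions(4.0, 0.25, 2.0))  # a wind-like row: ~$32T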

The dollars per installed watt is where I would expect the greatest argument.  For example, Jacobson and Delucchi call for 1.7 billion 3000-watt rooftop PV systems.  That is residential size, on the order of 300 square feet.  You can find offers for residential systems at much lower rates than $8 per watt installed.  But this is because of rebates and incentives.  Rebates and incentives only work when a small fraction of the population takes advantage of them.  If every residence must install a photovoltaic system, there is no way to pass the cost on to your neighbors.  Click on the chart on the left, from Lawrence Berkeley National Laboratory: of all the states listed, only one comes in at under $8 per installed watt for systems under 10 kilowatts, and half of the remaining come in at over $9.

Wouldn’t prices fall as technology advances?  Not necessarily.  Look at the cost to install wind facilities – it has been increasing since the early 2000s. A large part of the installed price for wind is the cost of the wind turbine itself.  Click on this graph showing the price of wind turbines per kilowatt of capacity.  This increasing trend will likely continue if demand is artificially pushed up by a grandiose plan to install millions more wind turbines beyond what is called for by the free market.

Expect to see the same effect for photovoltaic prices.  While the cost of photovoltaic power has been slowly falling, the demand (as a fraction of the total energy market) has been minuscule.  Jacobson and Delucchi call for 17 TW of photovoltaic power (5 TW from rooftop PV and 12 TW from PV power plants) by 2030.  Compare that to what is already installed in Europe, the world’s biggest market for PV: 0.0095 TW.  Achieving Jacobson’s and Delucchi’s desired level would require an orders-of-magnitude increase in demand.  This is likely to lead to higher prices, not lower.  For my calculations I am staying with today’s costs for photovoltaics.

Some perspective

We have started using the word “trillion” when talking about government expenditures.  Soon we may become numb to that word, as we have already become numb to “million” and “billion.”  My estimate for the cost of Jacobson’s and Delucchi’s system comes out to about $210 trillion.  So how much is $210 trillion?

It is approximately 100 times the $2.157 trillion of the total United States government receipts of 2009 (see documentation from the Government Printing Office).

It is about 15 times the GDP of the United States.

$210 trillion is about 11 times the yearly revenue of all the national government budgets in the world!  You can confirm this by adding all the entries in the revenue column in the Wikipedia “Government Budget by Country.”

What about just the United States?

Jacobson and Delucchi calculate that with their system the US energy demand will be 1.8 TW in 2030.  Keep in mind that the demand today is already 2.8 TW.  If we accept their estimate of 1.8 TW, then that is about 16% of their estimated world demand of 11.5 TW for 2030.  So roughly speaking, the US share of the cost would be 16% of $210 trillion, or about $34 trillion.  That is 16 times the total United States government receipts of 2009.

Doesn’t seem too likely to work, does it?

I know that Jacobson and Delucchi don’t like nukes.  But the Advanced Boiling Water Reactor price of under $2 per installed watt sure sounds attractive to me now.  Just a thought.

Update 11/14/2009

Jacobson and Delucchi compared their scheme to the building of the interstate highway system.  See here for a realistic comparison.

Notes

1) Capacity factor of wind power realized values vs. estimates, Nicolas Boccard, Energy Policy 37 (2009) 2679–2688.
2)  http://www.oceanrenewable.com/wp-content/uploads/2009/05/power-and-energy-from-the-ocean-waves-and-tides.pdf
3) Fridleifsson, Ingvar B., et al., The possible role and contribution of geothermal energy to the mitigation of climate change. (get copy here)
4)  http://en.wikipedia.org/wiki/Hydroelectricity
5)  Tracking the Sun II, page 19 , Lawrence Berkeley National Laboratory, http://eetd.lbl.gov/ea/emp/reports/lbnl-2674e.pdf
6) Projecting the Impact of State Portfolio Standards on Solar Installations, California Energy Commission, http://www.cleanenergystates.org/library/ca/CEC_wiser_solar_estimates_0205.pdf
7)  David MacKay – “Sustainable Energy – Without the Hot Air” http://www.withouthotair.com/download.html
8) 64 MW / 400 acres ≈ 40 MW/km², http://www.chiefengineer.org/content/content_display.cfm/seqnumber_content/3070.htm
9)  http://www.windustry.org/how-much-do-wind-turbines-cost
10)  I have chosen a low cost because most hydroelectric has already been developed.
11) 280 MW for $1 billion, http://www.tucsoncitizen.com/ss/related/77596
12) Based on my personal experience as a scientist working on photovoltaics for 14 years at the National Renewable Energy Laboratory.  This number varies according to insolation, latitude, temperature, etc.
13) The EIA predicts a need for 678 quadrillion (6.78 × 10¹⁷) BTUs of yearly world energy use by 2030.  One BTU is 2.9307 × 10⁻⁴ kilowatt-hours.  So, (6.78 × 10¹⁷ BTU) × (2.9307 × 10⁻⁴ kWh/BTU) = 1.98 × 10¹⁴ kWh.  One year is 8.76 × 10³ hours.  So the required world power is (1.98 × 10¹⁴ kWh) / (8.76 × 10³ hr) = 2.26 × 10¹⁰ kW = 22.6 TW.
