Archive for the ‘energy’ Category


Comparison of Arizona Nuclear and Solar Energy

December 9, 2015

Let’s compare and contrast solar energy and nuclear energy in Arizona. There is only one nuclear power plant in the state, the Palo Verde Nuclear Generating Station in Tonopah. There are several solar energy sites, so we will pick the Agua Caliente Solar Project because it won the Solar Project of the Year category in Renewable Energy World’s 2012 Excellence in Renewable Energy Awards.

Palo Verde Nuclear Generating Station

This nuclear plant consists of three reactors with a total nameplate capacity of 3,937 MW. If these reactors ran 24 hours a day for 365 days a year, they would yield 34,500 GWh (gigawatt hours) per year. The actual output is about 31,300 GWh per year (2010). This means they have a capacity factor of about 90%. Averaged over time, Palo Verde yields 3,543 MW.

Palo Verde became operational in 1988 and is currently approved to operate until 2047, giving a lifetime of nearly 60 years.

Palo Verde’s construction cost was $5.9 billion in 1988 ($11.86 billion in 2015 dollars). Its operating costs for fuel and maintenance were about 1.33 cents per kWh in 2004 (1.67 cents in 2015 dollars.)

Based on an average power yield of 3,543 MW and a cost of $11.86 billion (in 2015 dollars), the construction cost per watt for Palo Verde was $3.34 per watt (in 2015 dollars).

Agua Caliente Solar Project

This 9.7 square kilometer solar energy farm has a nameplate capacity of 290 MW peak.  Its first year of full operation was 2014. If it were able to produce its nameplate capacity of 290 MW continuously for one year, the energy output would be 2,540 GWh. The actual energy output was 741 GWh in 2014, which means a capacity factor of 29%, an excellent result for solar energy. Averaged over time, this solar farm yields 84.6 MW.

Construction cost for Agua Caliente was $1.8 billion.

Based on an average yield of 84.6 MW and a construction cost of $1.8 billion, the construction cost per watt for Agua Caliente was about $21.28 per watt.
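For anyone who wants to check the arithmetic, here is a minimal Python sketch (not part of the original analysis) that reproduces the capacity factor, average power, and construction cost per watt from the figures quoted above; small differences from the rounded values in the text are just rounding.

```python
# Back-of-the-envelope check of the capacity-factor and cost-per-watt
# arithmetic, using the figures quoted in this post.

HOURS_PER_YEAR = 24 * 365  # 8,760

def capacity_factor(actual_gwh_per_year, nameplate_mw):
    """Fraction of the theoretical maximum annual output actually produced."""
    max_gwh = nameplate_mw * HOURS_PER_YEAR / 1000.0  # MW x hours -> MWh -> GWh
    return actual_gwh_per_year / max_gwh

def cost_per_watt(construction_cost_dollars, average_mw):
    """Construction cost divided by time-averaged power output."""
    return construction_cost_dollars / (average_mw * 1e6)

# Palo Verde: 3,937 MW nameplate, ~31,300 GWh/yr, $11.86 billion (2015 dollars)
cf_pv = capacity_factor(31_300, 3_937)   # ~0.91
avg_pv = 3_937 * cf_pv                   # ~3,570 MW
print(f"Palo Verde:    CF ~{cf_pv:.0%}, ~${cost_per_watt(11.86e9, avg_pv):.2f} per watt")

# Agua Caliente: 290 MW nameplate, 741 GWh in 2014, $1.8 billion
cf_ac = capacity_factor(741, 290)        # ~0.29
avg_ac = 290 * cf_ac                     # ~84.6 MW
print(f"Agua Caliente: CF ~{cf_ac:.0%}, ~${cost_per_watt(1.8e9, avg_ac):.2f} per watt")
```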

Comparison

The cost per kilowatt hour of energy for either of these sources is a combination of the construction cost and the operation, fuel, and maintenance costs.  The longer a facility is in operation, the lower the fraction of construction cost per kilowatt hour.

The operation, fuel, and maintenance costs for the Palo Verde nuclear plant were about 1.33 cents per kWh in 2004 (1.67 cents in 2015 dollars).  The great advantage of the Agua Caliente solar farm is that its fuel cost is zero, and we will assume for the sake of argument that its other operation and maintenance costs are also zero.

The following chart shows the cost per kilowatt hour for each of the facilities over a range of lifetimes; a short calculation sketch follows the chart notes.

[Spreadsheet chart: cost per kilowatt hour for Palo Verde and Agua Caliente over various lifetimes]

1. $0.0133 per kilowatt hour in 2004, converted to 2015 dollars.
2. 2013 energy output.
3. $5.9 billion construction cost in 1988 dollars, converted to 2015 dollars.
4. 2014 energy output.
5. $1.8 billion construction cost in 2014.
6. (GWh/year) x (number of years) x (1,000,000)
7. (Construction cost) / (kilowatt hours produced over lifetime)
8. (Construction cost per kWh) + (operating cost per kWh)
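Here is a minimal Python sketch of the calculation described in notes 6 to 8, applied to the highlighted scenarios (60 years for Palo Verde, 40 years for Agua Caliente) with the cost figures quoted in this post. It is an approximation of the spreadsheet, not the spreadsheet itself, so expect small rounding differences.

```python
# Cost per kWh over a facility lifetime, following notes 6-8 above.

def cost_per_kwh(construction_cost, gwh_per_year, years, operating_cost_per_kwh):
    lifetime_kwh = gwh_per_year * years * 1_000_000           # note 6
    construction_per_kwh = construction_cost / lifetime_kwh   # note 7
    return construction_per_kwh + operating_cost_per_kwh      # note 8

# Palo Verde: $11.86e9 construction (2015 dollars), ~31,300 GWh/yr as quoted
# above (the chart itself used the 2013 output), 1.67 cents/kWh operating cost.
palo_verde = cost_per_kwh(11.86e9, 31_300, 60, 0.0167)

# Agua Caliente: $1.8e9 construction, 741 GWh/yr, operating cost assumed zero.
agua_caliente = cost_per_kwh(1.8e9, 741, 40, 0.0)

print(f"Palo Verde:    ~{palo_verde * 100:.1f} cents per kWh over 60 years")
print(f"Agua Caliente: ~{agua_caliente * 100:.1f} cents per kWh over 40 years")
print(f"Ratio: ~{agua_caliente / palo_verde:.1f}")  # roughly 2.5-2.6, depending on rounding
```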

Two blocks of data are highlighted in yellow.  These are the most likely lifetime scenarios for each of the power generating plants.  The Palo Verde nuclear plant has had its license extended to 60 years.  The Agua Caliente solar farm is made from First Solar CdTe modules that carry a 10-year material and workmanship warranty and a 25-year warranty of 80% of the nominal output power rating.  It is reasonable to hope that it will last 40 years.

There is one more thing to be considered.  We have assumed so far that the yearly output of each of these power generating stations is the same year after year.  That is not entirely correct.  Historically, the Palo Verde nuclear plant has increased its capacity factor over time as operations have become more efficient.  Whether that trend will continue is unknown.

Solar modules tend to slowly degrade with time.  The First Solar CdTe modules used at Agua Caliente will likely degrade at about 0.5% per year. The chart above gives a best case estimate for Agua Caliente and does not compensate for this degradation.

Based on the highlighted sections of the above chart, the Agua Caliente solar farm will likely cost about 2.5 times as much per kilowatt hour as the Palo Verde nuclear plant over the course of their lifetimes.

One more point.  Agua Caliente requires 9.7 square kilometers to generate an average of 84.6 MW.  The Palo Verde nuclear plant generates an average of 3,543 MW.  So it would take about 41 Agua Calientes to equal the power of Palo Verde.  That would require about 400 square kilometers.

Energy is the lifeblood of civilization.  The pursuit of energy abundance is the pursuit of a healthier and more fulfilling lifestyle for greater numbers of people.  I present this data to help inform the choices that need to be made in that pursuit.


Not much of Chinese energy is from wind or solar.

December 2, 2013

A few days ago I wrote about the Pollyannaish belief that “China is slowing its carbon emissions.”  An essential element of this ridiculous meme is that the Chinese are producing significant portions of their energy via wind and solar. Not true.

Consider just electricity.  Here is a breakdown of China’s installed electricity capacity by fuel type in 2011 and its electricity generation by fuel type for 2000 to 2010, from the United States’ Energy Information Administration’s evaluation of China’s energy consumption (2012)…

"China's installed electricity capacity by fuel, 2011," from the US Energy Information Administration's evaluation of China's energy consumption

“China’s installed electricity capacity by fuel, 2011,” from the US Energy Information Administration’s evaluation of China’s energy consumption

"China's electricity generation by fuel type, 2000-2010" from the US's Energy Information Administration

“China’s electricity generation by fuel type, 2000-2010” from the US’s Energy Information Administration

What do these charts tell you?

These two charts are drawn from the same data set and appear next to each other in the same document.

As you can see from the top chart, 6.2% of China’s installed electricity capacity is in wind or solar.  That is over 60 gigawatts installed.  Compare that to the US’s 60 gigawatts of installed wind and 10 gigawatts of installed solar.

Alas, the top chart shows installed capacity, not actual production.  There is a little thing called the “capacity factor.”  The capacity factor is the ratio of the energy a source actually produces over a period of time to the energy it would produce if it ran at its rated capacity the whole time.  For example, a one gigawatt nuclear power plant will have a capacity factor of about 90%, meaning it delivers the equivalent of one gigawatt 90% of the time.  Wind and solar capacity factors tend to be much lower, simply because sometimes the wind doesn’t blow and the sun doesn’t shine.  The capacity factor for wind in China is about 22%.
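To make the effect of capacity factor concrete, here is a tiny illustrative Python sketch (my own example, not from the EIA document) showing how much energy one gigawatt of nameplate capacity delivers in a year at a 90% capacity factor versus the 22% figure for Chinese wind.

```python
# Annual energy delivered by the same nameplate capacity at different
# capacity factors.

HOURS_PER_YEAR = 8_760

def annual_twh(capacity_gw, capacity_factor):
    """Expected annual energy in TWh from a given nameplate capacity in GW."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

print(f"1 GW at 90% CF (nuclear): ~{annual_twh(1, 0.90):.1f} TWh per year")
print(f"1 GW at 22% CF (wind):    ~{annual_twh(1, 0.22):.1f} TWh per year")
```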

The second chart shows the amount of electrical energy actually produced using the various “fuel types”.  Do you see that very, very thin yellow band along the top of the second chart?  That represents the Chinese electricity generation due to that 6.2% of installed wind and solar.  Can’t see the yellow line?  Let me blow up the last year of the chart for you…

[Zoomed view of the final year of “China’s electricity generation by fuel type, 2000-2010”]

That 6.2% of installed capacity in the form of wind and solar yields less than 1.5% of the actual energy.

China’s energy future

The Energy Information Administration document tells us…

China is the world’s second largest power generator behind the US, and net power generation was 3,965 Terawatt-hours (TWh) in 2010, up 15 percent from 2009. Nearly 80 percent of generation is from fossil fuel-fired sources, primarily coal. Both electricity generation and consumption have increased by over 50 percent since 2005, and EIA predicts total net generation will increase to 9,583 TWh by 2035, over 3 times the amount in 2010.

Wow!  Three times as much as in 2010, a mere 21 years from now!  Where will all this energy come from?

Again, the Energy Information Administration…

Total fossil fuels, primarily coal, currently make up nearly 79 percent of power generation and 71 percent of installed capacity. Coal and natural gas are expected to remain the dominant fuel in the power sector in the coming years. Oil-fired generation is expected to remain relatively flat in the next two decades. In 2010, China generated about 3,130 TWh from fossil fuel sources, up 11 percent annually.

Let me be clear, I am not knocking the use of wind and solar.  I have been personally working on solar energy for 17 years.  But I am knocking unrealistic expectations and quasi-religious environmentalist beliefs.  And I am not criticizing the Chinese for their increasing energy consumption.  They understand, correctly, that abundant energy is the key to prosperity.


Michael Mann averaging error demo

December 13, 2009

This may be beating a dead horse, but I thought it would be fun to examine the question of data centering, or mean subtraction, for principal component analysis (PCA).  So, I created a program that does a side-by-side comparison of PCA on simple noise with proper averaging and with Michael Mann-style improper averaging.

This was motivated by Steve McIntyre’s observation:

“We [Steve McIntyre and Ross McKitrick] also observed that they [Michael E. Mann, Raymond S. Bradley and Malcolm K. Hughes] had modified the principal components calculation so that it intentionally or unintentionally mined for hockey stick shaped series. It was so powerful in this respect that I could even produce a HS from random red noise.”

The basic idea of principal component analysis (PCA)

PCA is used to determine the minimum number of factors that explain the maximum variance in multiple data sets.  In the case of the hockey stick, each data set represents a chronological set of measurements, usually a tree ring chronology.  These chronologies may vary over time in similar ways, and in theory these variations are governed by common factors.  The single factor that explains the greatest amount of variance is the 1st principal component.  The factor that explains the next greatest amount of variance is the 2nd principal component, and so on.  In the case of the hockey stick, the first principal component is assumed to be the temperature.  With this assumption, understanding how the first principal component changes with time is the same as understanding how the temperature changes with time.

PCA is a method to extract common modes of variation from a set of proxies.

The following numbered steps give a brief explanation of the mathematical procedure for determining the principal components; a short NumPy sketch of these steps follows the list.  See the tutorial by Jonathon Shlens at New York University’s Center for Neural Science for a nicer, more detailed explanation.

Mathematical procedure

  1. Start with m data sets of n points each.  For example, m tree ring chronologies each covering n years.
  2. Calculate the mean and standard deviation for each of the m data sets.
  3. Subtract each mean from its corresponding data set.  This is called centering the data.
  4. Normalize each data set by dividing it by its standard deviation.
  5. Create an m x n data matrix where each of the m rows has n data points, say, one point per year.
  6. Calculate the covariance for each possible pair of data sets by multiplying the data matrix by its own transpose, yielding  a square, m x m, symmetric covariance matrix.
  7. Find the eigenvalues and eigenvectors for the covariance matrix.
  8. Multiply the eigenvector corresponding to the largest eigenvalue by the centered, normalized m x n data matrix to get the 1st principal component.  Do the same with the eigenvector corresponding to the second largest eigenvalue to get the 2nd principal component, and so on.
  9. The magnitude of the eigenvalue tells the amount of variance that is explained by its corresponding principal component.
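Here is a minimal NumPy sketch of the steps listed above. It is not Mann’s code and not the LabVIEW demo described later in this post; it simply follows the list, using the centered and normalized data matrix in step 8.

```python
import numpy as np

def principal_components(proxies):
    """PCA by the steps above: center, normalize, covariance, eigen-decompose.

    proxies: m x n array, one row per proxy, one column per year.
    Returns (eigenvalues, principal components), largest eigenvalue first.
    """
    X = np.asarray(proxies, dtype=float)       # step 5: m x n data matrix
    means = X.mean(axis=1, keepdims=True)      # step 2: per-proxy means
    stds = X.std(axis=1, keepdims=True)        # step 2: per-proxy standard deviations
    R = (X - means) / stds                     # steps 3-4: center and normalize
    C = R @ R.T                                # step 6: m x m covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)       # step 7 (eigh, since C is symmetric)
    order = np.argsort(eigvals)[::-1]          # sort largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T @ R                        # step 8: principal components
    return eigvals, pcs                        # step 9: eigenvalues ~ explained variance

# Example: 70 proxies of 1000 points of white noise.
rng = np.random.default_rng(0)
eigvals, pcs = principal_components(rng.normal(size=(70, 1000)))
print(eigvals[:3])   # the first few eigenvalues
```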

The data centering, or mean subtraction (step 3 in the above list), is where one of the hockey stick controversies arises.  McIntyre and McKitrick showed that Mann did not subtract the mean of all of the points in each roughly 1000-year data set, but rather subtracted the mean of only the last 80 or 90 points (years).  They claim that this flawed process yields a 1st principal component that looks like a hockey stick, even when the proxy data is made up of simple red noise.

Here is an explanation of Mann’s error in mathematical and graphical formats:

Let every proxy be given a number.  Then the 1st proxy is P1, the second proxy is P2, and the jth proxy is Pj.

Each proxy is made up of a series of points representing measurements in chronological order.  For a particular proxy, Pj, the ith point is denoted by Pji.

Here are two synthesized examples of proxy data.  We can call them Pj and Pk.

We center and normalize each data set by subtracting its average from itself, and then dividing by its standard deviation.  We can call the new re-centered and normalized data Rj and Rk.  Rj and Rk have the same shape as Pj and Pk, but they are both centered around zero and vary between about plus and minus 2.5, as shown below…

Some words about covariance

The covariance, σ, of two data sets, or proxies, is a measure of how similar their variations are.  If the shapes of two properly re-centered and normalized data sets, say Rj and Rk are similar, their covariance will be relatively large.  If their shapes are very different, then their covariance will be smaller.  It is easy to calculate the covariance of two data sets: simply multiply the corresponding terms of each data set and then add them together…

σjk =  Rj1Rk1 +  Rj2Rk2  + … + RjnRkn = Σ RjiRki

If Rj and Rk are exactly the same, then σjk will be n, exactly the number of points in each data set (for example, the number of years in the chronology).  This is a consequence of the data sets being centered and divided by their standard deviations.  If Rj and Rk are not exactly the same, then σjk will be less than n.  In the extreme case where Rj and Rk have absolutely no underlying similarity, then σjk could be zero. 
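A quick numerical check of this, using white noise purely for illustration (the variable names follow the text):

```python
import numpy as np

def center_and_normalize(p):
    """Subtract the mean and divide by the standard deviation."""
    return (p - p.mean()) / p.std()

def covariance(rj, rk):
    """sigma_jk = sum over i of R_ji * R_ki, for centered, normalized series."""
    return np.sum(rj * rk)

rng = np.random.default_rng(1)
n = 100
rj = center_and_normalize(rng.normal(size=n))
rk_identical = rj.copy()
rk_unrelated = center_and_normalize(rng.normal(size=n))

print(covariance(rj, rk_identical))   # exactly n (here 100.0)
print(covariance(rj, rk_unrelated))   # much smaller in magnitude
```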

Some words about noise

Two data sets of totally random, white noise will approach this extreme case of no underlying similarity.  So their covariance, σ, will be very small.  This is easy to understand when you consider that, on the average, each pair of corresponding points in the covariance calculation whose product is positive will be offset by another pair whose product is negative, giving a sum that tends to zero.  But there are other types of noise, such as red noise, which exhibit more structure and are said to be “autocorrelated.” In fact, the two data sets shown above, Pj and Pk  (or Rj and Rk), are red noise.  The covariance of two red noise data sets will be less than n, and if there are enough data points in each set (Rj and Rk each have 100 points) then σjk will likely be small.

One of the important differences between white noise and red noise for this discussion is that the average of white noise over short sub-intervals of the entire data set will be close to zero.  That will not necessarily be the case for red noise, which you can visually confirm by looking at the plots of Rj and Rk, shown above.
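Here is a short sketch that generates red noise with a simple AR(1) model (an assumption on my part; the LabVIEW demo below has its own noise generator) and compares the mean of a short sub-interval with the mean of the whole series, for both white and red noise.

```python
import numpy as np

def red_noise(n, autocorrelation, rng):
    """AR(1) 'red' noise: each point is a weighted copy of the previous point
    plus fresh white noise. autocorrelation = 0.0 gives white noise."""
    x = np.zeros(n)
    w = rng.normal(size=n)
    for i in range(1, n):
        x[i] = autocorrelation * x[i - 1] + w[i]
    return x

rng = np.random.default_rng(2)
white = red_noise(100, 0.0, rng)
red = red_noise(100, 0.9, rng)

# Mean of the whole series vs. mean of the last 20 points:
print("white:", white.mean(), white[-20:].mean())   # both close to zero
print("red:  ", red.mean(), red[-20:].mean())       # can differ substantially
```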

What about incorrect centering?

McIntyre and McKitrick found that Mann improperly performed step 3, the subtraction of the average from each data set.  Instead, Mann subtracted the average of only the last 80 or 90 points (years) from data sets that were about 1000 years long.  For most sets of pure white noise this has little effect, because the average of the entire data set and the average of a subset of the data set are usually nearly identical.  But improper centering tends to have a much greater effect on red noise.  Because of the structure that is inherent in red noise, the average of a subset may be very different from the average of the entire data set.

Here is what the two data sets, shown above, look like when they are improperly centered using the mean of only the last 20 points out of 100 points…

What effect does improperly re-centering (for example, subtracting the average of only the last 80 years of 1000 year data sets) have on the covariance of two data sets?  Let’s call the improperly re-centered and normalized data R’j and R’k, where R’j = Rj + Mj, R’k = Rk + Mk, Rj and Rk are correctly centered, and Mj and Mk are the additional improper offsets.  Then the improper covariance, σ’jk, between R’j and R’k is given by…

σ’jk = Σ R’jiR’ki

        = Σ (Rji + Mj)(Rki + Mk)

        = Σ RjiRki + Σ RjiMk + Σ RkiMj + Σ MjMk

        = Σ RjiRki + Mk Σ Rji + Mj Σ Rki + n MjMk

But remember, Rj and Rk are properly centered, so Σ Rji and Σ Rki each equal zero.  So…

σ’jk  =  Σ RjiRki +  n MjMk

And  Σ RjiRki = σjk.  This leaves…

σ’jk  =   σjk  +  nMjMk

In cases where the product of Mj and Mk has the same sign as σjk, the absolute value of σ’jk will be larger than the absolute value of σjk.  This falsely indicates that R’j and R’k have variations that are more similar than they really are.  The PCA algorithm will then give a higher weight to these proxies in the eigenvector that is used to construct the first principal component.
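The identity σ’jk = σjk + nMjMk is easy to verify numerically. Here is a minimal check with arbitrary, made-up offsets for Mj and Mk:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

def center_and_normalize(p):
    return (p - p.mean()) / p.std()

rj = center_and_normalize(rng.normal(size=n))
rk = center_and_normalize(rng.normal(size=n))
mj, mk = 0.7, -0.4   # arbitrary improper offsets, chosen for illustration

sigma = np.sum(rj * rk)                         # proper covariance
sigma_improper = np.sum((rj + mj) * (rk + mk))  # covariance of the offset series

print(sigma_improper, sigma + n * mj * mk)      # the two values agree
```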

Mann Averaging Error Demo

I have written a piece of code that demonstrates the effect of Mann’s centering error.  This code is written in LabVIEW version 7.1.   You can get my source code, but you will need the LabVIEW 7.1 Full Development System or a later version that contains the “Eigenvalues and Vectors.vi” in order to run it.  You can also modify the code if you desire.

Download Mann Averaging Error Demo

This demo generates synthetic proxies of red noise with autocorrelation between 0.0 and 1.0.  If the autocorrelation is set to 0.0, then white noise is generated.  If the autocorrelation is set to 1.0, then brown noise is generated.  Principal component analysis is then performed on these proxies two different ways: with proper centering and with Mann style improper centering.
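For readers without LabVIEW, here is a rough Python analogue of the demo’s core comparison (it is not a translation of the LabVIEW code, and the normalization details are simplified): red noise proxies generated from an AR(1) model, then the first principal component computed once with proper centering and once with centering over only the last 80 points.

```python
import numpy as np

def make_proxies(m, n, autocorrelation, rng):
    """m synthetic red-noise proxies of n points each (AR(1) model)."""
    w = rng.normal(size=(m, n))
    x = np.zeros((m, n))
    for i in range(1, n):
        x[:, i] = autocorrelation * x[:, i - 1] + w[:, i]
    return x

def first_pc(proxies, centering_points=None):
    """First principal component with proper centering (default) or with
    the mean taken over only the last `centering_points` points."""
    X = np.asarray(proxies, dtype=float)
    if centering_points is None:
        means = X.mean(axis=1, keepdims=True)                         # proper
    else:
        means = X[:, -centering_points:].mean(axis=1, keepdims=True)  # improper
    R = (X - means) / X.std(axis=1, keepdims=True)
    C = R @ R.T
    _, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    return eigvecs[:, -1] @ R             # eigenvector of the largest eigenvalue

rng = np.random.default_rng(4)
proxies = make_proxies(70, 1000, 0.98, rng)   # default-style settings, see the list below
pc_proper = first_pc(proxies)
pc_flawed = first_pc(proxies, centering_points=80)
# Plotting pc_flawed typically shows a hockey-stick-like bend over the last
# ~80 points, while pc_proper looks like structureless noise.
```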

Each of the synthetic proxies is shown on the top plot in sequence.  After the proxies are synthesized, PCA is performed and the eigenvalues and principal components are shown in sequence.  Eigenvalues and principal components for correct centering are shown on the right.   Eigenvalues and principal components for improper centering are shown on the left.

After all the principal components have been shown, the synthetic proxy graph at the top of the window defaults to the first proxy, and the principal component windows at the bottom default to the 1st principal component.  The operator then has the opportunity to examine the individual proxies by changing the number in the yellow “View Proxy #” box and to examine the principal components by changing the number in the yellow “View Principal Component #” box.

The operator can also select the “Overlay of all Proxies” tab at the top right corner of the window to see all proxies overlaid before centering.

This demo lets the operator select the following parameters…

  • Number of data points (years) per proxy.  The default is set to 1000, but you can make this anything you want.
  • Number of proxies.  The default is 70, but you can select anything you want.
  • Autocorrelation.  Set to 0.0 for white noise, between 0.0 and 1.0 for red noise, and 1.0 for brown noise.  The higher the autocorrelation is set, the more random structure there will be in synthetic proxies.   The default is 0.98, giving highly structured noise, but you can experiment with other settings.
  • Number of years to average over.  This determines how many years are used for the improper Mann style centering.  The default is set to 80, because this is approximately what Mann used.
  • Include/Don’t Include Hockey Stick Proxy.  When “Don’t Include Hockey Stick Proxy” is selected, all proxies are noise.  When “Include Hockey Stick Proxy” is selected, the first proxy will have a hockey stick shape superimposed on noise, and all other proxies will be noise.  The default setting is “Don’t Include Hockey Stick Proxy.”

Play around with the settings.  Try these…

  • Set the autocorrelation to 0.0 (pure white noise) and select “Don’t Include Hockey Stick Proxy.”  This is the combination that is least likely to result in a hockey stick for the flawed first principal component.  Run it several times.  Amazingly, you are likely to see a small, noisy hockey stick for the first flawed principal component.
  • Set the autocorrelation to 0.0 (pure white noise), and select “Include Hockey Stick Proxy.”  This will give one noisy hockey stick proxy and pure white noise for all other proxies.  The first flawed principal component will be a crystal clear hockey stick with a dominating eigenvalue.  Notice that the first proper principal component is just noise.
  • Set the “Years” and “Average of last” years to the same value.  Since proper centering means averaging over all years, this will result in the “flawed” results actually being correct and identical to the “proper” results.
  • Set “Average of last” years to 80 (default) and try various autocorrelation settings.  You will find that any autocorrelation setting will usually result in a hockey stick first principal component.

Here are some screen shots of the Mann Averaging Error Demo…

Voila! A Hockey Stick from noise…

Please let me know of any bugs or suggestions for enhancements.  If anyone is interested in a LabVIEW 8.6 version, let me know and I will make it available.