Archive for the ‘energy’ Category

Not much of Chinese energy is from wind or solar.

December 2, 2013

A few days ago I wrote about the Pollyannaish belief that “China is slowing its carbon emissions.”  An essential element of this ridiculous meme is that the Chinese are producing significant portions of their energy via wind and solar.  Not true.

Consider just electricity.  Here is a breakdown of China’s installed electricity capacity by fuel type in 2011 and its electricity generation by fuel type for 2000 to 2010, from the United States Energy Information Administration’s evaluation of China’s energy consumption (2012)…

"China's installed electricity capacity by fuel, 2011," from the US Energy Information Administration's evaluation of China's energy consumption

“China’s installed electricity capacity by fuel, 2011,” from the US Energy Information Administration’s evaluation of China’s energy consumption

"China's electricity generation by fuel type, 2000-2010" from the US's Energy Information Administration

“China’s electricity generation by fuel type, 2000-2010″ from the US’s Energy Information Administration

What do these charts tell you?

These two charts are drawn from the same data set and appear next to each other in the same document.

As you can see from the top chart, 6.2% of China’s installed electricity capacity is in wind or solar.  That is over 60 gigawatts installed.  Compare that to the US’s 60 gigawatts of installed wind and 10 gigawatts of installed solar.

Alas, the top chart shows installed capacity, not actual production.  There is a little thing called the “capacity factor.”  The capacity factor is the fraction of the time that a particular power source can actually produce power at its rated capacity.  For example, a one gigawatt capacity nuclear power plant will have a capacity factor of about 90%, meaning it can produce one gigawatt 90% of the time.  Wind and solar capacity factors tend to be much lower, simply because sometimes the wind doesn’t blow and the sun doesn’t shine.  The capacity factor for wind in China is 22%.

The second chart shows the amount of electrical energy actually produced using the various “fuel types”.  Do you see that very, very thin yellow band along the top of the second chart?  That represents the Chinese electricity generation due to that 6.2% of installed wind and solar.  Can’t see the yellow line?  Let me blow up the last year of the chart for you…

China’s electricity generation by fuel type, with the final year enlarged

That 6.2% of installed capacity in the form of wind and solar yields less than 1.5% of the actual energy.
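If you want to see how a 6.2% share of capacity can shrink to a percent or two of generation, here is a rough back-of-the-envelope sketch.  The capacity factors in it are my own illustrative assumptions (the only figure quoted above is the 22% for Chinese wind), not EIA numbers.

```python
# Rough sketch: generation share implied by a capacity share plus capacity factors.
# The capacity factors below are illustrative assumptions, not EIA figures.

def generation_share(cap_share, cf_source, cf_rest):
    """Fraction of total generation coming from a source, given its share of
    installed capacity and the capacity factors of it and of everything else."""
    gen = cap_share * cf_source
    gen_rest = (1.0 - cap_share) * cf_rest
    return gen / (gen + gen_rest)

# ~6.2% of capacity in wind/solar, an assumed blended capacity factor of ~15%,
# versus ~55-60% for the coal-dominated remainder.
share = generation_share(cap_share=0.062, cf_source=0.15, cf_rest=0.58)
print(f"generation share: {share:.1%}")   # on the order of 1-2%, as in the chart
```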

China’s energy future

The Energy Information Administration document tells us…

China is the world’s second largest power generator behind the US, and net power generation was 3,965 Terawatt-hours (TWh) in 2010, up 15 percent from 2009. Nearly 80 percent of generation is from fossil fuel-fired sources, primarily coal. Both electricity generation and consumption have increased by over 50 percent since 2005, and EIA predicts total net generation will increase to 9,583 TWh by 2035, over 3 times the amount in 2010.

Wow!  Three times as much as 2010, a mere 21 years from now!  Where will all this energy come from?

Again, the Energy Information Administration…

Total fossil fuels, primarily coal, currently make up nearly 79 percent of power generation and 71 percent of installed capacity. Coal and natural gas are expected to remain the dominant fuel in the power sector in the coming years. Oil-fired generation is expected to remain relatively flat in the next two decades. In 2010, China generated about 3,130 TWh from fossil fuel sources, up 11 percent annually.

Let me be clear, I am not knocking the use of wind and solar.  I have been personally working on solar energy for 17 years.  But I am knocking unrealistic expectations and quasi-religious environmentalist beliefs.  And I am not criticizing the Chinese for their increasing energy consumption.  They understand, correctly, that abundant energy is the key to prosperity.

Michael Mann averaging error demo

December 13, 2009

This may be beating a dead horse, but I thought it would be fun to examine the question of data centering, or mean subtraction, for principal component analysis (PCA).  So, I created a program that does a side-by-side comparison of PCA on simple noise with proper averaging and with Michael Mann style improper averaging.

This was motivated by Steve McIntyre’s observation

“We [Steve McIntyre and Ross McKitrick] also observed that they [Michael E. Mann, Raymond S. Bradley and Malcolm K. Hughes] had modified the principal components calculation so that it intentionally or unintentionally mined for hockey stick shaped series. It was so powerful in this respect that I could even produce a HS from random red noise.”

The basic idea of principal component analysis (PCA)

PCA is used to determine the minimum number of factors that explain the maximum variance in multiple data sets.  In the case of the hockey stick, each data set represents a chronological set of measurements, usually a tree ring chronology.  These chronologies may vary over time in similar ways, and in theory these variations are governed by common factors.  The single factor that explains the greatest amount of variance is the 1st principal component.  The factor that explains the next greatest amount of variance is the 2nd principal component, and so on.  In the case of the hockey stick, the first principal component is assumed to be the temperature.  With this assumption, understanding how the first principal component changes with time is the same as understanding how the temperature changes with time.

PCA is a method to extract common modes of variation from a set of proxies.

The following bullets give a brief explanation of the mathematical procedure for  determining the principal components.  See the tutorial by Jonathon Shlens at New York University’s Center for Neural Science for a nicer, more detailed explanation.

Mathematical procedure

  1. Start with m data sets of n points each.  For example, m tree ring chronologies each covering n years.
  2. Calculate the mean and standard deviation for each of the m data sets.
  3. Subtract each mean from its corresponding data set.  This is called centering the data.
  4. Normalize each data set by dividing it by its standard deviation.
  5. Create an m x n data matrix where each of the m rows has n data points, say, one point per year.
  6. Calculate the covariance for each possible pair of data sets by multiplying the data matrix by its own transpose, yielding  a square, m x m, symmetric covariance matrix.
  7. Find the eigenvalues and eigenvectors for the covariance matrix
  8. Multiply the eigenvector corresponding to the largest eigenvalue by the original m x n data matrix to get the 1st principal component.  Similarly for the eigenvector corresponding to the second largest eigenvalue to get the 2nd principal component. etc.
  9. The magnitude of the eigenvalue tells the amount of variance that is explained by its corresponding principal component.
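For those who would rather read code than bullets, here is a minimal numpy sketch of the same recipe, run on synthetic white-noise “proxies.”  It only illustrates the steps above; it is not a reproduction of any published reconstruction.

```python
# A minimal numpy sketch of the PCA recipe above (proper, full-record centering).
import numpy as np

def pca(data):
    """data: m x n matrix, one proxy per row, one year per column.
    Returns (eigenvalues, principal components), largest eigenvalue first."""
    centered = data - data.mean(axis=1, keepdims=True)            # step 3: center
    normalized = centered / centered.std(axis=1, keepdims=True)   # step 4: normalize
    cov = normalized @ normalized.T                                # step 6: m x m covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                         # step 7: eigen-decomposition
    order = np.argsort(eigvals)[::-1]                              # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T @ normalized                                   # step 8: PCs as time series
    return eigvals, pcs

rng = np.random.default_rng(0)
proxies = rng.standard_normal((70, 1000))   # m = 70 proxies, n = 1000 years of white noise
vals, pcs = pca(proxies)
print(vals[:3])                              # eigenvalue magnitudes (step 9)
```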

The data centering, or mean subtraction (step 3 in the above list), is where one of the hockey stick controversies arises.  McIntyre and McKitrick showed that Mann did not subtract the mean of all the points in the roughly 1000-year data sets, but rather the mean of only the last 80 or 90 points (years).  They claim that this flawed process yields a 1st principal component that looks like a hockey stick, even when the proxy data is made up of simple red noise.

Here is an explanation of Mann’s error in mathematical and graphical formats:

Let every proxy be given a number.  Then the 1st proxy is P1, the second proxy is P2, and the jth proxy is Pj.

Each proxy is made up of a series of points representing measurements in chronological order.  For a particular proxy, Pj, the ith point is denoted by Pji.

Here are two synthesized examples of proxy data.  We can call them Pj and Pk.

We center and normalize each data set by subtracting its average from itself, and then dividing by its standard deviation.  We can call the new re-centered and normalized data Rj and Rk.  Rj and Rk have the same shape as Pj and Pk, but they are both centered around zero and vary between about plus and minus 2.5, as shown below…

Some words about covariance

The covariance, σ, of two data sets, or proxies, is a measure of how similar their variations are.  If the shapes of two properly re-centered and normalized data sets, say Rj and Rk are similar, their covariance will be relatively large.  If their shapes are very different, then their covariance will be smaller.  It is easy to calculate the covariance of two data sets: simply multiply the corresponding terms of each data set and then add them together…

σjk =  Rj1Rk1 +  Rj2Rk2  + … + RjnRkn = Σ RjiRki

If Rj and Rk are exactly the same, then σjk will be n, exactly the number of points in each data set (for example, the number of years in the chronology).  This is a consequence of the data sets being centered and divided by their standard deviations.  If Rj and Rk are not exactly the same, then σjk will be less than n.  In the extreme case where Rj and Rk have absolutely no underlying similarity, then σjk could be zero. 

Some words about noise

Two data sets of totally random, white noise will approach this extreme case of no underlying similarity.  So their covariance, σ, will be very small.  This is easy to understand when you consider that, on the average, each pair of corresponding points in the covariance calculation whose product is positive will be offset by another pair whose product is negative, giving a sum that tends to zero.  But there are other types of noise, such as red noise, which exhibit more structure and are said to be “autocorrelated.” In fact, the two data sets shown above, Pj and Pk  (or Rj and Rk), are red noise.  The covariance of two red noise data sets will be less than n, and if there are enough data points in each set (Rj and Rk each have 100 points) then σjk will likely be small.

One of the important differences for this discussion between white noise and red noise is that the average of white noise over short sub-intervals of the entire data set will be close to zero.  But that will not necessarily be the case for red noise, which you can visually confirm by looking at the plots of Rj and Rk, shown above.
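Here is a small sketch that makes the point numerically: for an AR(1) (“red”) series, the mean of the last 80 points can sit far from the mean of the whole record, while for white noise the two are nearly identical.  The 0.98 autocorrelation is just the demo’s default, used here for illustration.

```python
# Compare the full-record mean with the last-80-point mean for white vs. red noise.
import numpy as np

def red_noise(n, rho, rng):
    """Simple AR(1) red noise: x[i] = rho * x[i-1] + white noise."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(0)
white = rng.standard_normal(1000)
red = red_noise(1000, rho=0.98, rng=rng)

for name, series in [("white", white), ("red", red)]:
    print(name, "full mean:", round(series.mean(), 2),
          "last-80 mean:", round(series[-80:].mean(), 2))
```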

What about incorrect centering?

McIntyre and McKitrick found that Mann improperly performed step 3, the subtraction of the average from each data set.  Instead, Mann subtracted the average of only the last 80 or 90 points (years) from data sets that were about 1000 years long.  For most sets of pure white noise this approach has little effect, because the average of the entire data set and the average of a subset of the data set are usually nearly identical.  But for red noise, improper centering tends to have a much greater effect.  Because of the structure that is inherent in red noise, the average of a subset may be very different from the average of the entire data set.

Here is what the two data sets, shown above, look like when they are improperly centered using the mean of only the last 20 points out of 100 points…

What effect does improperly re-centering (for example, subtracting the average of only the last 80 years of 1000 year data sets) have on the covariance of two data sets?  Let’s call the improperly re-centered and normalized data R’j and R’k, where R’j = Rj + Mj, R’k = Rk + Mk, Rj and Rk are correctly centered, and Mj and Mk are the additional improper offsets.  Then the improper covariance, σ’jk, between R’j and R’k is given by…

σ’jk = Σ R’jiR’ki 

        =  Σ (Rji + Mj)(Rki + Mk)

        =  Σ RjiRki + Σ RjiMk + Σ RkiMj + Σ MjMk

        =  Σ RjiRki + Mk Σ Rji + MjΣ Rki + n MjMk

But remember, Rj and Rk are properly centered, so Σ Rji and Σ Rki each equal zero.  So…

σ’jk  =  Σ RjiRki +  n MjMk

And  Σ RjiRki = σjk.  This leaves…

σ’jk  =   σjk  +  nMjMk

In cases where the product of Mj and Mk has the same sign as σjk, the absolute value of σ’jk will be larger than the absolute value of σjk.  This falsely indicates that R’j and R’k have variations that are more similar than they really are.  The PCA algorithm will then give a higher weight to these proxies in the eigenvector that is used to construct the first principal component.
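The identity is easy to check numerically.  A quick sketch, with arbitrary made-up offsets Mj and Mk:

```python
# Numerical check of the identity above: offsetting two properly centered series
# changes their covariance sum by exactly n * Mj * Mk.
import numpy as np

rng = np.random.default_rng(1)
n = 100
Rj = rng.standard_normal(n); Rj -= Rj.mean()    # properly centered series
Rk = rng.standard_normal(n); Rk -= Rk.mean()
Mj, Mk = 0.7, -1.3                               # arbitrary improper offsets

sigma = np.sum(Rj * Rk)                          # sigma_jk
sigma_prime = np.sum((Rj + Mj) * (Rk + Mk))      # sigma'_jk
print(sigma_prime, sigma + n * Mj * Mk)          # the two values agree
```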

Mann Averaging Error Demo

I have written a piece of code that demonstrates the effect of Mann’s centering error.  This code is written in LabVIEW version 7.1.   You can get my source code, but you will need the LabVIEW 7.1 Full Development System or a later version that contains the “Eigenvalues and Vectors.vi” in order to run it.  You can also modify the code if you desire.

Download Mann Averaging Error Demo

This demo generates synthetic proxies of red noise with autocorrelation between 0.0 and 1.0.  If the autocorrelation is set to 0.0, then white noise is generated.  If the autocorrelation is set to 1.0, then brown noise is generated.  Principal component analysis is then performed on these proxies two different ways: with proper centering and with Mann style improper centering.

Each of the synthetic proxies is shown on the top plot in sequence.  After the proxies are synthesized, PCA is performed and the eigenvalues and principal components are shown in sequence.  Eigenvalues and principal components for correct centering are shown on the right.   Eigenvalues and principal components for improper centering are shown on the left.

After all the principal components have been shown, the synthetic proxy graph at the top of the window defaults to the first proxy, and the principal component windows at the bottom default to the 1st principal component.  The operator then has the opportunity to examine the individual proxies by changing the number in the yellow “View Proxy #” box and to examine the principal components by changing the number in the yellow “View Principal Component #” box.

The operator can also select the “Overlay of all Proxies” tab at the top right corner of the window to see all proxies overlaid before centering.

This demo lets the operator select the following parameters…

  • Number of data points (years) per proxy.  The default is set to 1000, but you can make this anything you want.
  • Number of proxies.  The default is 70, but you can select anything you want.
  • Autocorrelation.  Set to 0.0 for white noise, between 0.0 and 1.0 for red noise, and 1.0 for brown noise.  The higher the autocorrelation is set, the more random structure there will be in synthetic proxies.   The default is 0.98, giving highly structured noise, but you can experiment with other settings.
  • Number of years to average over.  This determines how many years are used for the improper Mann style centering.  The default is set to 80, because this is approximately what Mann used.
  • Include/Don’t Include Hockey Stick Proxy.  When “Don’t Include Hockey Stick Proxy” is selected, all proxies are noise.  When “Include Hockey Stick Proxy” is selected, the first proxy will have a hockey stick shape superimposed on noise; all other proxies will be noise.  The default setting is “Don’t Include Hockey Stick Proxy.”

Play around with the settings.  Try these…

  • Set the autocorrelation to 0.0 (pure white noise) and select “Don’t Include Hockey Stick Proxy.”  This is the combination that is least likely to result in a hockey stick for the flawed first principal component.  Run it several times.  Amazingly, you are likely to see a small, noisy hockey stick for the first flawed principal component.
  • Set the autocorrelation to 0.0 (pure white noise), and select “Include Hockey Stick Proxy.”  This will give one noisy hockey stick proxy and pure white noise for all other proxies.  The first flawed principal component will be a crystal clear hockey stick with a dominating eigenvalue.  Notice that the first proper principal component is just noise.
  • Set the “Years” and “Average of last” years to the same value.  Since proper centering means averaging over all years, this will result in the “flawed” results actually being correct and identical to the “proper” results.
  • Set “Average of last” years to 80 (default) and try various autocorrelation settings.  You will find that any autocorrelation setting will usually result in a hockey stick first principal component.
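For readers without LabVIEW, here is a rough, self-contained Python analogue of the demo — my own approximation of it, not a port of the LabVIEW code.  It generates AR(1) red-noise proxies, runs PCA with proper full-record centering and with centering on only the last 80 years, and compares the first principal components.

```python
# Rough Python analogue of the Mann Averaging Error Demo (an approximation, not a port).
import numpy as np

def red_noise(n, rho, rng):
    """Simple AR(1) red noise: x[i] = rho * x[i-1] + white noise."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.standard_normal()
    return x

def first_pc(data, center_years=None):
    """PC1 of an m x n proxy matrix.  center_years=None -> proper full-record
    centering; center_years=80 -> subtract the mean of only the last 80 points."""
    if center_years is None:
        means = data.mean(axis=1, keepdims=True)
    else:
        means = data[:, -center_years:].mean(axis=1, keepdims=True)
    centered = data - means
    normalized = centered / centered.std(axis=1, keepdims=True)
    cov = normalized @ normalized.T
    eigvals, eigvecs = np.linalg.eigh(cov)
    lead = eigvecs[:, np.argmax(eigvals)]          # eigenvector of largest eigenvalue
    return lead @ normalized                       # PC1 as a time series

rng = np.random.default_rng(42)
proxies = np.array([red_noise(1000, 0.98, rng) for _ in range(70)])

pc1_proper = first_pc(proxies)                     # proper centering: typically just noise
pc1_flawed = first_pc(proxies, center_years=80)    # flawed centering: tends toward a hockey stick

# Size of the "blade": mean of the last 80 years relative to the overall spread.
for name, pc in [("proper", pc1_proper), ("flawed", pc1_flawed)]:
    print(name, round(abs(pc[-80:].mean()) / pc.std(), 2))
```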

Here are some screen shots of the Mann Averaging Error Demo…

Voila! A Hockey Stick from noise…

Please let me know of any bugs or suggestions for enhancements.  If anyone is interested in a LabVIEW 8.6 version, let me know and I will make it available.

Scientific American’s “A Path to Sustainable Energy by 2030:” the Cost

November 13, 2009

The cover story of the November issue of Scientific American, “A Path to Sustainable Energy by 2030,” by Mark Z. Jacobson and Mark A. Delucchi, promises a path to a “sustainable future” for the whole world in just 20 years. They define “sustainable” as a world where all energy sources are derived from water, wind and solar. Nuclear need not apply.

The article had a few words about the cost, but much was left out.  Jacobson and Delucchi conclude that their grand plan will cost about $100 trillion.  I found this ridiculously large sum to be too low!  My rough calculation yields a cost of $200 trillion!

This post is an attempt to fill in a few blanks.

I will accept the authors’ mix of energy sources, apply some capacity factor estimates for each source, throw in an estimate of the land required for some sources, and estimate the installation cost per Watt for each source. Since all of these numbers are debatable, I provide references for most of them. But some of the numbers are simply my estimates. Also, I consider only installation costs.  I do not consider the additional costs of operation and maintenance, which may be considerable.

Another point: the authors say that the US Energy Information Administration projects the world power requirement for 2030 would be 16.9 TW to accommodate population increase and rising living standards. By my reading, the Energy Information Administration’s estimate is actually 22.6 TW by 2030 (see note 13 below).  Nevertheless, Jacobson and Delucchi base their plan on only 11.5 TW, with an assumption that a power system based entirely on electrification would be much more efficient.  I will go along with their estimate of 11.5 TW for the sake of argument.

Here are my numbers

(click on image to get larger view)…

Total energy cost calculation

 

The numbers that I have placed in the blue columns are open to debate, but I am fairly confident of the capacity factors.  The capacity factor for concentrated solar power, with energy storage, such as molten salt, can vary depending on interpretation.  If energy is drawn from storage at night, then the capacity factor could be argued to be higher.  On the other hand, it would result in greater collection area, collection equipment and expense.   Note that using my estimates for capacity factors, the “total real power” works out to 12.03 TW, close to Jacobson’s and Delucchi’s 11.5 TW.
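For anyone who wants to check or tweak the arithmetic, here is a sketch of the calculation behind the table: installed capacity = real power needed / capacity factor, and cost = installed capacity × dollars per installed watt.  The mix, capacity factors, and dollar figures in the sketch are illustrative stand-ins of my own, not the exact numbers from the table above.

```python
# Sketch of the table's arithmetic.  All numbers below are illustrative stand-ins,
# not the figures from the post's table.
sources = {
    # name: (real power wanted, TW;  capacity factor;  $ per installed W)
    "wind":            (5.8, 0.35, 1.9),
    "rooftop PV":      (0.6, 0.18, 8.0),
    "PV plants":       (1.7, 0.20, 6.0),
    "conc. solar":     (2.7, 0.25, 5.0),
    "hydro/geo/other": (1.2, 0.50, 3.0),
}

total_cost = 0.0
for name, (real_tw, cf, dollars_per_watt) in sources.items():
    installed_tw = real_tw / cf                        # nameplate capacity needed
    cost = installed_tw * 1e12 * dollars_per_watt      # watts * $/W
    total_cost += cost
    print(f"{name:16s} {installed_tw:6.1f} TW installed  ${cost / 1e12:6.1f} trillion")

print(f"{'total':16s} {'':18s}  ${total_cost / 1e12:6.1f} trillion")
```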

The dollars per installed watt is where I would expect the greatest argument.  For example, Jacobson and Delucchi call for 1.7 billion 3000 watt rooftop PV systems.  That is residential size, on the order of 300 square feet.  You can find offers for residential systems at much lower rates than $8 per watt installed.  But this is because of rebates and incentives.  Rebates and incentives only work when a small fraction of the population takes advantage of them.  If every residence must install a photovoltaic system, there is no way to pass the cost on to your neighbors.  Click on the chart on the left, from Lawrence Berkeley National Laboratory: of all the states listed, only one comes in at under $8 per installed watt for systems under 10 kilowatts, and half of the remaining come in at over $9.

Wouldn’t prices fall as technology advances?  Not necessarily.  Look at the cost to install wind facilities – it has been increasing since the early 2000s.  A large part of the installed price for wind is the cost of the wind turbine itself.  Click on this graph showing the price of wind turbines per kilowatt of capacity.  This increasing trend will likely continue if demand is artificially pushed up by a grandiose plan to install millions more wind turbines beyond what is called for by the free market.

Expect to see the same effect for photovoltaic prices.  While the cost of photovoltaic power has been slowly falling, the demand (as a fraction of the total energy market) has been minuscule.  Jacobson and Delucchi call for 17 TW of photovoltaic power (5 TW from rooftop PV and 12 TW from PV power plants) by 2030.  Compare that to what is already installed in Europe, the world’s biggest market for PV: 0.0095 TW.  Achieving Jacobson’s and Delucchi’s desired level would require an orders-of-magnitude increase in demand.  This is likely to lead to higher prices, not lower.  For my calculations I am staying with today’s costs for photovoltaics.

Some perspective

We have started using the word “trillion” when talking about government expenditures.  Soon we may become numb to that word, as we have already become numb to “million” and “billion.”  My estimate for the cost of Jacobson’s and Delucchi’s system comes out to about $210 trillion.  So how much is $210 trillion?

It is approximately 100 times the $2.157 trillion of the total United States government receipts of 2009 (see documentation from the Government Printing Office) . 

It is about 15 times the GDP of the United States.

$210 trillion is about 11 times the yearly revenue of all the national government budgets in the world!  You can confirm this by adding all the entries in the revenue column of the Wikipedia “Government Budget by Country” article.

What about just the United States?

Jacobson and Delucchi calculate that with their system the US energy demand will be 1.8 TW in 2030.  Keep in mind that the demand today is already 2.8 TW.  If we accept their estimate of 1.8 TW, then that is about 16% of their estimated world demand of 11.5 TW for 2030.  So roughly speaking, the US share of the cost would be 16% of $210 trillion, or about $34 trillion.  That is 16 times the total United States government receipts of 2009.

Doesn’t seem too likely to work, does it?

I know that Jacobson and Delucchi don’t like nukes.  But the Advanced Boiling Water Reactor price of under $2 per installed watt sure sounds attractive to me now.  Just a thought.

Update 11/14/2009

Jacobson and Delucchi compared their scheme to the building of the interstate highway system.  See here for a realistic comparison.

Notes

1) Capacity factor of wind power realized values vs. estimates, Nicolas Boccard, Energy Policy 37(2009)2679–2688
2)  http://www.oceanrenewable.com/wp-content/uploads/2009/05/power-and-energy-from-the-ocean-waves-and-tides.pdf
3)  Fridleifsson, Ingvar B., et al., The possible role and contribution of geothermal energy to the mitigation of climate change. (get copy here)
4)  http://en.wikipedia.org/wiki/Hydroelectricity
5)  Tracking the Sun II, page 19 , Lawrence Berkeley National Laboratory, http://eetd.lbl.gov/ea/emp/reports/lbnl-2674e.pdf
6)  Projecting the Impact of State Portfolio Standards on Solar installations, California Energy commission, http://www.cleanenergystates.org/library/ca/CEC_wiser_solar_estimates_0205.pdf
7)  David MacKay – “Sustainable Energy – Without the Hot Air” http://www.withouthotair.com/download.html
8)  64 MW / 400 acres ≈ 40 MW/km², http://www.chiefengineer.org/content/content_display.cfm/seqnumber_content/3070.htm
9)  http://www.windustry.org/how-much-do-wind-turbines-cost
10)  I have chosen a low cost because most hydroelectric has already been developed.
11) 280 MW for $1 billion, http://www.tucsoncitizen.com/ss/related/77596
12) Based on my personal experience as a scientist working on photovoltaics for 14 years at the National Renewable Energy Laboratory.  This number varies according to insolation, latitude, temperature, etc.
13)  The EIA predicts a need for 678 quadrillion (6.78 x 10^17) BTUs of yearly world energy use by 2030.  One BTU is the same as 2.9307 x 10^-4 kilowatt-hours.  So, (6.78 x 10^17 BTU) x (2.9307 x 10^-4 kWh/BTU) = 1.98 x 10^14 kWh.  One year is 8.76 x 10^3 hours.  So the required world power is given by: (1.98 x 10^14 kWh) / (8.76 x 10^3 hr) = 2.26 x 10^10 kW = 22.6 TW.

Taking Measure of Biofuel Limits

September 24, 2009

The current edition of American Scientist has a very good article on the fundamental biological limits governing the production of biofuels.  Taking the Measure of Biofuel Limits, by Thomas Sinclair, addresses the two obvious limiting factors, light and water, and the perhaps less obvious limiting factor of nitrogen availability in the soil.

Thomas R. Sinclair is a professor in crop science at North Carolina State University with a Ph.D. from Cornell.  He specializes in the relationships between plant physiology, the environment, and crop yields.  He has edited several books as a Ballard Fellow at Harvard University.

Sinclair sets the stage by pointing out..

The U.S. Energy Independence and Security Act calls for 144 billion liters of ethanol per year in the U.S. transportation fuel pool by 2022.  That equals 25 percent of the U.S. gasoline consumption today.  No more than about 40 percent is to be produced with maize, an important food and export crop.  Non-grain feedstock is supposed to provide the rest.

Before nations pin big hopes on biofuels, they must face some stark realities, however.  Crop physiology research has documented multiple limits to plant production on Earth.  To ramp up biofuel crop production, growers must adapt to those limits or find ways around them.  Such advances may not be as simple as some predict.  Plants and their evolutionary ancestors had hundreds of millions of years to optimize their biological machinery.  If further improvements were easy, they would probably already exist….

Plants cannot be grown without three crucial resource inputs: light, water, and nitrogen.  Each of these inputs is needed in substantial quantities, yet their availability in the field is limited…[T]he close relationship between the available amounts of these resources and the amount of plant mass they can produce – not human demand – will determine how much biofuel the world can produce.

Light

Sinclair considers the conversion efficiency of sunlight and CO2 to sugars, which ultimately fuel the building of starch, cellulose, protein and lipids, for C3 plants (95% of all of Earth’s plants, but not highly efficient at CO2 fixation) and the more efficient sugar-making C4 plants (corn, sugar cane and sorghum, for example).  He points out that “After hundreds of millions of years of evolution, these systems [for converting solar energy into the chemical energy of sugars] are highly efficient within the physical and thermodynamic constraints of photosynthesis and plant growth.”  Not much room has been left for improvement.

Bio-engineering advances may increase yields a little, but they cannot overcome the limits of the sunlight to sugar conversion ratios.  After the numbers are crunched he reveals that if the U.S. is to reach its biofuel goal of 58 billion liters of ethanol grown from corn (40 percent of 144 billion liters), it would require an additional 15 million hectares planted.  Similarly, the remaining 86 billion liters made from non-corn C4 grasses, which are not nearly as efficient as corn for this purpose,  would require at least an additional 48 million hectares.

Water

It may be obvious that in areas of limited water supply, plant growth will be limited by the amount of water available.  Plants transpire water out through their leaves, and the rate of transpiration is:

T = G x VPD/k

where T is the transpiration (g/m2)
G is the plant growth (g/m2)
VPD is the Vapor Pressure Deficit, or the difference in the saturated water vapor pressure of air inside the plant leaves and the water vapor pressure of the outside atmosphere
k is a plant specific constant

The difference in the vapor pressure inside and outside a leaf (VPD) is what controls the rate of water loss through the stomata.  The VPD is large in arid regions because the vapor pressure of the water in the atmosphere is low. 

For a given environment the VPD cannot be controlled – it is what it is.  So the only way for a plant to affect the transpiration, and thus prevent itself from drying out and dying in an arid environment, is to close down its stomata to reduce water loss.  But this also reduces the flow of CO2 into the leaves and O2 out, and consequently reduces or stops the plant’s growth.  There is no magic to get around this.  Sinclair says…

“Despite claims that crop yields will be substantially increased by the application of biotechnology, the physical linkage between growth and transpiration imposes a barrier that is not amenable to genetic alteration.”

Under these circumstances the plant mass growth is nearly linear with water transpired.  So as more arid regions are put into crop use either crop yields per hectare will be lower, or the amount of irrigation will be higher.  This leads to the production of biofuels at the expense of aquifer depletion.
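To make the linkage concrete, here is a toy rearrangement of Sinclair’s relation, G = T x k / VPD: for a fixed seasonal water budget, growth falls as the air gets drier.  The value of k below is a made-up, roughly C3-like placeholder, and the water budget is arbitrary; real values are species- and site-specific.

```python
# Toy rearrangement of T = G * VPD / k, i.e. G = T * k / VPD.
# k_pa is a rough, assumed C3-like value (a few pascals); not a measured constant.

def growth(transpiration_g_m2, vpd_pa, k_pa=5.0):
    """Plant growth (g/m2) implied by a given seasonal transpiration and VPD."""
    return transpiration_g_m2 * k_pa / vpd_pa

water = 300_000.0   # ~300 mm of seasonal transpiration, expressed in g of water per m2
for vpd in (1000.0, 2000.0, 3000.0):   # humid -> arid, vapor pressure deficit in Pa
    g = growth(water, vpd)
    print(f"VPD {vpd / 1000:.0f} kPa -> growth {g:,.0f} g/m2 ({g / 100:.1f} tonnes/ha)")
```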

Nitrogen

Sinclair repeatedly points out that to be economically viable, biofuel crops must yield at least 9 tonnes of plant mass per hectare of crop.  For C3 and C4 type plants this 9 tonne minimum requires the removal of 166 kg and 118 kg of nitrogen per hectare, respectively.  But, “Expectations for cellulosic yields are sometimes double or triple the 9-tonne-per-hectare yield” required for economic viability.  So, nitrogen removal from the soil will sometimes be double or triple also.  Some of this nitrogen is replaced by plant debris that is left behind, some comes from thunderstorms, and some from organisms that fix atmospheric nitrogen.  But these sources are not enough to replace all the nitrogen that is removed with every harvest, and the available nitrogen will be less every year.  Sinclair explains…

“Although this decrease rate is usually small when compared to all the original organic matter in the soil, a cropping practice dependent on a continuous withdrawal clearly is not sustainable…  Nitrogen fertilizer of annual biofuel crops will inevitably be needed once soil organic matter decreases to levels limiting plant growth.”

Sinclair’s conclusion

Taking the limits of light, water and nitrogen into account, for corn he concludes…

“The equivalent of 40 percent of today’s U.S. maize crop will be required to ethanol production while other domestic and export demands for maize also must be met.”

And for cellulosic derived ethanol he concludes that up to…

“50 million hectares of new land must be brought into high and sustainable agricultural production to achieve the required yields… it would be the most extensive and rapid land transformation in U.S. history… [L]and used for cellulosic feedstock must be in regions with sufficient rainfall to achieve needed yields.  The amount of water transpired by those crops could be large enough to influence the hydrologic balance of farming regions.”

and in general…

“[I]ncreased nitrogen supplementation required for the new crops will result in more nitrogen leaching into natural waterways…”

My final words

Sinclair indicates that between corn and other plants for ethanol, the U.S. may have to put as much as an additional 65 million hectares into crop production (15 million hectares for corn and 50 million hectares for other biofuel crops) to  generate 144 billion liters of ethanol.  This would replace only 25% of our gasoline usage.

How big is 65 million hectares?  It is the same as 650,000 square kilometers, and about the same as 160 million acres.  To put this in perspective, this is more than 10 times the acreage of corn planted in Iowa in 2007.  It is more than 150% of the corn acreage planted in the entire United States in 2007.

Look at the figures below.  The first image is from the USDA Census of Agriculture for 2002, and it shows the acreage planted in corn for grain in the United States that year.  Each dot on the map represents 10,000 acres.  To achieve 144 billion liters of ethanol we could need an additional 160 million acres of corn and other crops, or more than 10 times the amount of corn acreage planted in Iowa.  The second figure shows the corn acreage of Iowa multiplied ten fold and added to the map of the United States.  This should give you some idea of the unprecedented agricultural multiplication that would be needed to satisfy the U.S. Energy Independence and Security Act.

Corn for grain acreage in the United States, from the 2002 USDA Census of Agriculture (each dot represents 10,000 acres)

Cartoon of ten times the corn acreage of Iowa added to the US. This gives some idea of what may be required to satisfy the U.S. Energy Independence and Security Act requirement of 144 billion liters of ethanol to replace 25% of U.S. gasoline usage.

Let’s face it – this ambitious goal of 144 billion liters of ethanol per year from biofuels is a very bad idea.  Our most precious resources are land, water, and the raw materials for making fertilizer (primarily natural gas for nitrogen fertilizers).  The dumbest thing we can do is deplete our soil and aquifers, pollute our water with extra nitrogen fertilizer, and waste our natural gas to make ethanol.  If you think living with a shortage of gasoline is rough, try living with a shortage of food.

I was (partially) wrong

August 20, 2009

I received a comment from the GM spokesman, Rob Peterson, about my last two posts lambasting the supposed 230 mile per gallon Chevy Volt.  Here is Rob’s comment in its entirety.

This is Rob Peterson from GM.

Although the Volt has a 16 kWh battery, only 8 kWh is used. This will significantly impact the rest of your calculations and your synopsis. Please post a correction based on this fact.

As for the Volt’s city fuel efficiency rating of 230mpg – this is based on the EPA’s draft methodology. The same methodology which will be used for all other vehicles of this type.

r

I responded to Rob with two comments, which you can read here.  One of those comments questions his sincerity about “blaming” the 230 mile per gallon claim on the EPA.  However, he is essentially right that the charging cycle of the 16 kWh battery only uses about half of that capacity.  He has asked me to “Please post a correction based on this fact.”  I have done so, but the final numbers for the vaunted Volt are still underwhelming.

Here is a table comparing miles per gallon, kWh per mile, and pounds of CO2 per mile for the Chevy Volt, the Toyota Prius, and a couple of ancient Honda Civics.  You can read the details of how I derived the numbers for the Volt, using Rob’s partial-capacity charge cycle scheme, in the text below.  Note that the prices for the Honda Civics have been adjusted for inflation to 2009 dollars for a fair comparison.

Mileage comparison table: miles per gallon, kWh per mile, and pounds of CO2 per mile for the Chevy Volt, Toyota Prius, and two older Honda Civics

Now that I have posted a correction, can I expect Rob Peterson to post a retraction of GM’s preposterous 230 mile per gallon claim?  Not Likely.

I  have not yet been able to find an official specification for the number of kilowatt-hours per mile for the Volt.  I am hoping Rob will provide one.  I have found Rob’s description of the charging scheme for lithium-ion batteries to be essentially correct.  That is, the battery is typically charged by the electrical grid to around 90% of total capacity.  Then the car is propelled entirely off of battery power until it reaches about 30% capacity.  This is known as the “charge depletion” mode.  When the battery gets to about 30% of capacity the gasoline powered generator kicks in and maintains the charge at about 30% capacity.  This is known as the “charge sustaining” mode.  

Then, when the battery is plugged into the electrical grid it is recharged with grid energy from about 30% capacity back up to about 90% capacity.  That is a range of about 60% of the total capacity.  So, for a 16 kilowatt-hour battery, a complete charge off the electric grid puts about 9.6 kWh (0.6 x 16 kWh) into the battery.  But an extra 10% or so is lost due to transmission line and battery conversion losses.  So the amount of energy taken from the electrical grid will be about 10.6 kilowatt-hours.  If that charge will propel the car for 40 miles, then that works out to 3.8 miles per kWh (or about 0.27 kilowatt-hours per mile).
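Here is the whole correction in one place, using the 60% usable battery window, 10% charging losses, and 40-mile electric range described above.

```python
# The arithmetic behind the correction.  Inputs are the values from the text above.
battery_kwh = 16.0
usable_fraction = 0.9 - 0.3            # charge cycles between ~30% and ~90% of capacity
charge_losses = 0.10                   # grid transmission + battery conversion losses
electric_range_miles = 40.0

energy_into_battery = battery_kwh * usable_fraction             # ~9.6 kWh per charge
energy_from_grid = energy_into_battery * (1 + charge_losses)    # ~10.6 kWh drawn from the grid
kwh_per_mile = energy_from_grid / electric_range_miles          # ~0.27 kWh per mile

print(round(energy_from_grid, 1), "kWh from the grid per full charge")
print(round(kwh_per_mile, 3), "kWh per mile, or",
      round(1.0 / kwh_per_mile, 1), "miles per kWh")
```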

I cannot find the value of about 0.27 kWh per mile anywhere in the specifications for the Volt, but I did find this somewhat cryptic statement at Chevy.com:

“Under the new procedure, the EPA weights plug-in electric vehicles as traveling more city miles than highway miles on only electricity. The EPA procedure would also note 25 kilowatt-hours/100 miles electrical efficiency in the city cycle.”

So, let’s accept the value of 25 kilowatt-hours/100 miles (0.25 kWh per mile) for the moment.  What effect will this have on the numbers I reported for CO2 emissions?

The number of pounds of CO2 emitted per mile while powering the car with gasoline (known as the “charge sustaining” mode) will remain unchanged.  There are 19.4 pounds of CO2 produced per gallon of gasoline burned, and GM claims 50 miles per gallon in “charge sustaining” mode.  So:

( 19.4 lbs of CO2 / gallon) / (50 miles / gallon) =
0.39 lbs of CO2 per mile

This 0.39 lbs of CO2 per mile for the Volt running on gasoline (charge sustaining mode) is the same as for the Toyota Prius, because it also gets 50 miles per gallon.

Here is the same calculation for my ancient 1988 Honda Civic hatchback that got 47 miles to the gallon:

( 19.4 lbs of CO2 / Gallon) / (47 miles / gallon) =
0.41 lbs of CO2 per mile

And for the 1987 Honda Civic Coupe HF, which got 57 miles per gallon:

( 19.4 lbs of CO2 / Gallon) / (57 miles / gallon) =
0.34 lbs of CO2 per mile

Let’s assume now that the Volt uses 0.25 kilowatt-hours per mile (“25 kilowatt-hours/100 miles”) when running off of power provided to the battery by the electric grid (known as the “charge depleting” mode).  On the average the grid yields 1.34 pounds of CO2 per kilowatt-hour.  The grid transmission losses and grid to battery conversion losses add up to about 10%.  So the amount of CO2 yielded per mile will be:

(1.34 lbs of CO2 per grid kWh) x (0.25 kWh per mile)  x 1.1 =
0.37 lbs of CO2 per mile

Almost identical to the CO2 emitted when it is running off of gasoline (0.39 lbs of CO2 per mile).  And it is also nearly identical to the amount of CO2 per mile that the much cheaper Prius generates while running off of gasoline.

But here is the rub.  If the Volt is driven in an area where the electricity is predominantly generated with coal (by far the most common source of electricity generation in the US), then the CO2 emissions go way up.  That is because coal emits about 2.1 pounds of CO2 per kilowatt-hour generated for the electric grid.  So again we can assume 10% for the sum of the grid transmission losses and grid to battery conversion losses.  Then the amount of CO2 that the Volt yields per mile driven in a region where coal is the primary source of electricity will be:

(2.1 lbs of CO2 per grid kWh) x (0.25 kWh per mile) x 1.1 =
0.58 lbs of CO2 per mile

If we are really concerned about reducing CO2 (I’m not), saving energy (I am), creating American jobs (I am), and saving money (I am), then we should support the production of an American car that is similar to a 1988 Honda Civic.  Why argue the merits of a $40,000 car that few people will ever be able to afford?  A $15,000 car that gets as good or better mileage and generates as little or less CO2 would be bought by millions and have a much greater impact.

More eye opening facts about the Chevy Volt

August 18, 2009

OK, so maybe the Chevy Volt doesn’t really get 230 miles per gallon.  Are such exaggerations justified because they serve a greater cause?  The Chevy Volt will help save the world, after all, by reducing CO2 emissions, right?

Wrong!

In fact, in some cases the amount of CO2 generated per mile for the Chevy Volt is the same as a conventional automobile getting only 21 miles to the gallon.  Read on…

When running on gasoline (known as “charge sustaining operation”) the Volt will get 50 miles per gallon.   According to the EPA burning one gallon of gasoline yields 19.4 pounds of CO2.  That means the CO2 emitted per mile driven while running on gasoline will be 0.39 pounds.

 ( 19.4 lbs of CO2 / Gallon) / (50 miles / gallon) = 0.39 lbs of CO2 per mile

How much CO2 will be emitted per mile when the Volt is powered by energy from the electrical grid that has been stored in its battery?  That depends on how the energy on the grid is generated.  If you live in an area where the power on the grid is generated primarily with coal, then the amount of CO2 per kilowatt-hour generated is fairly high.  If you live in an area where the power on the grid is generated primarily from nuclear, then the amount is fairly low.  On the average, though, there are 1.34 pounds of CO2 pumped into the atmosphere for every kilowatt-hour of energy generated for the electric power grid in the United States, according to the Department of Energy (2000).

The fully charged lithium-ion batteries hold 16 kilowatt-hours of energy and will propel the Volt 40 miles.  That works out to 0.4 kilowatt-hours per mile.  So that means on the average, 0.54 pounds of CO2 will be put into the atmosphere for every mile that the Volt drives on energy drawn from the electrical grid, assuming perfect charging efficiency.

(1.34 lbs of CO2 per grid kWh) x (0.4 kWh per mile) = 0.54 lbs of CO2 per mile

But charging a lithium-ion battery off the grid is not 100% efficient.  There are grid transmission losses and grid to battery conversion losses which add up to about 10%.  So running your Volt off of electric grid power will yield closer to 0.59 pounds of CO2 for every mile you drive.  That is 151% of the CO2 put in the atmosphere by running the Volt off of gasoline.

How many miles per gallon must a conventional automobile get in order to put the same amount of CO2 into the atmosphere per mile as a Chevy Volt does when running off of grid power?  That’s easy: about 33 miles per gallon.  Here are some cars that will do better.

( 19.4 lbs of CO2 per Gallon) / (0.59 lbs of CO2 per mile) = 33 miles per gallon

If you drive in an area where the electric grid is primarily powered by coal, then the numbers are even worse.  Burning coal to power the electric grid yields about 2.1 pounds of CO2 for every kilowatt-hour generated.  Driving your Volt with grid generated power will yield about 0.92 pounds of CO2 for every mile driven (when 10% conversion inefficiencies are added in).

(2.1 lbs of CO2 per grid kWh) x (0.4 kWh per mile)  x 1.1 = 0.92 lbs of CO2 per mile

That is the same amount of CO2 per mile as a conventional automobile that gets only 21 miles per gallon!

( 19.4 lbs of CO2 per Gallon) / (0.92 lbs of CO2 per mile) = 21 miles per gallon
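The equivalent-mpg arithmetic for both grid cases, gathered into one small script (the inputs are the same values quoted above):

```python
# CO2 per mile on grid power, and the mpg a gasoline car would need to match it.
CO2_PER_GALLON = 19.4        # lbs of CO2 per gallon of gasoline (EPA)
KWH_PER_MILE = 0.4           # 16 kWh battery / 40 mile electric range
CHARGE_LOSSES = 0.10         # grid transmission + charging losses

def equivalent_mpg(grid_lbs_co2_per_kwh):
    lbs_per_mile = grid_lbs_co2_per_kwh * KWH_PER_MILE * (1 + CHARGE_LOSSES)
    return lbs_per_mile, CO2_PER_GALLON / lbs_per_mile

for label, intensity in [("US average grid", 1.34), ("coal-heavy grid", 2.1)]:
    lbs, mpg = equivalent_mpg(intensity)
    print(f"{label}: {lbs:.2f} lbs CO2/mile, same as a {mpg:.0f} mpg gasoline car")
```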

So don’t be fooled by astronomical claims of miles per gallon for the Chevy Volt.  And if you are worried about CO2 (I’m not), then don’t count on the Chevy Volt to save you – it won’t.

More on compact fluorescent lights

July 18, 2009

I compared a new 14 W CFL designed to replace a 65 W incandescent recessed light (Commercial Electric, model EDXR-30-14) and a new 65 W incandescent recessed light (GE Reveal 65) by measuring their spectra with a NIST-traceable calibrated spectroradiometer.  In each case the bulb pointed down, like a typical recessed light, with the spectroradiometer measurement point 108 cm below the bulb.  The measurement was repeated seven times for each bulb: first with the spectroradiometer directly below the bulb, then with the spectroradiometer moved about 15 cm horizontally, then 30 cm horizontally…out to about 90 cm of horizontal shift.

Note that the GE Reveal 65 had an “enhanced color spectrum” that used a neodymium glass filter to reduce the amount of light in the middle part of the visible spectrum to yield more vivid reds and blues.  I would have been better off with a simpler incandescent lamp for this comparison.

The first graph below shows the spectral irradiance for the CFL.  Note that most of the irradiance is in the visible part of the spectrum.  The seven curves correspond to the seven horizontal positions, with the highest irradiance being directly below the bulb.  The second graph is the same, but zoomed in to the visible part of the spectrum.

Measurement setup

CFL spectral irradiance, 400–1400 nm

CFL spectral irradiance, 400–750 nm (visible region)

The following two graphs show the same thing for the incandescent lamp.  Notice the dip in the middle of the visible spectrum.  This is due to the neodymium glass filter.  If that filter were not present the total irradiance of the incandescent lamp would have been higher.  I will repeat this experiment at a later date with a simpler incandescent lamp.

Incandescent spectral irradiance, 400–1400 nm

Incandescent spectral irradiance, 400–750 nm (visible region)

Irradiance only tells the beginning of the story.  The human eye is more sensitive to some colors than to others.  It is more sensitive to the middle of the visible part of the spectrum than to the red or the blue.  Of course, it is totally blind to the UV and the IR.  So, the irradiance is multiplied by a luminosity function and a constant to give a measure of how bright a light is.  The following plot shows the commonly used photopic luminosity function.

The photopic luminosity function

The following two graphs show the products of the photopic luminosity function, a constant (683 lux per W/m²), and the spectral irradiance of the CFL and the incandescent bulbs.  The total area under any curve gives the “brightness” for the lamp at a particular horizontal shift.  I have deliberately left the Y axis the same on both graphs to make them easier to compare.  It is clear that the CFL is very bright over two narrow wavelength bands centered on about 545 nm and 620 nm, while the incandescent light is spread more evenly over the visible spectrum.  This is probably why people feel that colors look less natural under a CFL.

CFL and incandescent spectral irradiance weighted by the photopic luminosity function
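For the curious, here is a sketch of that brightness calculation: multiply the spectral irradiance by the photopic luminosity function and by the 683 constant, then integrate over wavelength to get lux.  The Gaussian stand-in for the photopic curve and the flat placeholder spectrum are crude assumptions of mine; the real calculation uses the CIE curve and the measured spectra.

```python
# Sketch of the brightness calculation: lux = integral of 683 * V(lambda) * E(lambda) d(lambda).
import numpy as np

wavelengths_nm = np.arange(400, 751, 1.0)

def photopic_approx(wl_nm):
    """Rough Gaussian approximation to the CIE photopic curve (peak near 555 nm)."""
    return np.exp(-0.5 * ((wl_nm - 555.0) / 45.0) ** 2)

# Placeholder spectral irradiance in W / m^2 / nm (flat here; substitute measured data).
spectral_irradiance = np.full_like(wavelengths_nm, 2e-3)

luminous = 683.0 * photopic_approx(wavelengths_nm) * spectral_irradiance
illuminance_lux = np.trapz(luminous, wavelengths_nm)   # integrate over wavelength
print(round(illuminance_lux, 1), "lux")
```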

After all the graphs and the math, which light is brighter?  It depends on the horizontal position, as shown in the following figure.  The incandescent is brighter directly below the lamp, but the CFL is brighter off to the sides.  This should not be too surprising, because the light from the incandescent comes from a small filament, which is more easily reflected in the same direction than the light from the extended source of the CFL.  But when integrated over all directions, the incandescent and the CFL are probably a very close match, as claimed by the CFL manufacturer.

Relative brightness of the CFL and incandescent vs. horizontal position

It would be interesting  to repeat this experiment with bulbs that have accumulated about 1000 hours.  But that is an experiment for another day.

Warm-up time.

I also measured the irradiance of the CFL as a function of time.  This was done for the lamp after it had been off and cool for hours, and again after it had been fully warmed and then allowed to cool for three minutes.   It takes about 4.5 minutes to get to full irradiance for a cold lamp, and about 3 minutes for a warm lamp.  Of course, the warm-up time for the incandescent is essentially zero minutes.

 

 

CFL warm-up: irradiance vs. time for a cold start and a warm start

Conclusions

There are hundreds of different configurations of CFLs and incandescent bulbs being used in the world.  My sample is minuscule.  However, some of my numerical results are probably fairly representative, and there are common observations reported by many users.

As shown above, at least in my case, the 14 Watt CFL was about as bright as the 65 Watt incandescent it was designed to replace.  However, the color quality of the CFL was much poorer.  This poor color quality is a function of the fluorescent nature of the lamp, and is likely common to most CFLs.

The CFL takes a long time to warm up, compared to the instant-on of an incandescent.  The warm-up time probably varies from one type of CFL to another.  I have data indicating that the irradiance vs. time during the warm-up minutes can look quite different for a new CFL vs. an identical CFL with several thousand hours of use, but that data is not presented here.

As indicated in a previous post, my experience is that a CFL will save money compared to an incandescent that it is designed to replace.  But as shown here, the color quality of the light is worse and there may be an annoying wait for it to warm up.

I will continue to use CFLs where they make sense, but I am also stockpiling some incandescents for the day when they are no longer available because of government mandate.  Short duration use of many CFLs reduces their lifetime, and as seen above, it may take several minutes for the CFL to get to full brightness.  So I will use incandescents in closets and storage rooms, etc., and CFLs in the main living areas.

Last comment

I have presented this information as a small part of a large issue.  My endorsement of CFLs, despite some of their drawbacks, is most definitely not support for the government mandate to force us to use CFLs.  I am stockpiling incandescents for certain situations and would suggest that others do the same.  Perhaps the price of LEDs will drop enough to make this issue irrelevant.

Ultimately, I would like to see abundant amounts of energy available to all Americans and to all the people of the world.  Then the issue of light bulb choice would simply be moot.  My fear is that we are moving in the opposite direction.
