Tuesday, September 18, 2012

Distributed Solar Makes the Grid More Stable


As our society continues to become more wired, the impact of a sudden power outage – such as the one that struck India in early August – becomes increasingly severe and disruptive. With more and more businesses – including mission-critical facilities like hospitals, military bases, and water treatment plants – reliant upon access to large amounts of electricity and the Internet, blackouts can significantly damage a country’s economy, public health and safety.

At the same time, there are regions of the world – particularly in emerging nations – where entire villages remain without access to power because it is simply too expensive to build the infrastructure needed to transport electricity to rural areas. According to a 2010 International Energy Agency report, the lack of access to electricity hinders social and economic development and exacerbates problems of hunger, poor sanitation and lack of access to clean water. As a recent New York Times headline simply put it, energy access is vital to abolishing the worst poverty in the world.
A solution to both of these problems – increasing vulnerability to power outages in developed areas and lack of access to electricity in developing areas – can be found in distributed generation. Traditionally, electricity is generated in large, centralized facilities, most of which run on fossil fuels. Distributed generation instead allows electricity to be generated from many small, decentralized sources, such as rooftop solar or a small solar farm.

For developed areas, this method of electricity generation offers far greater grid security than traditional generation in centralized facilities. Generating power through many independent generation stations rather than a handful of major power plants dramatically decreases the impact of any one plant unexpectedly shutting down: some stations can ramp up their production to cover the unexpected loss of others, keeping the grid stable even as power generation fluctuates.

For areas currently without access to electricity, distributed generation bypasses the onerous cost of building infrastructure to transport electricity long distances from enormous power plants and delivers the power these communities so badly need.

The technology both to build power grids around distributed generation and to integrate distributed generation into large electric grids is getting more advanced each year. Companies are already developing solar-powered thermal plants designed specifically for off-grid applications and solar community cooking systems that reduce fossil fuel use. Others are supporting the development of solar-powered toilets that require no running water and produce no pollutants.
Other solutions include local wind generators – small wind turbines – that can power homes and small businesses. And as the use of home-based solar panels increases, each individual household or business will create more and more of its own electricity, increasing energy security, reducing reliance on fossil fuels and netting an economic benefit.

As nations overhaul their grids in response to the recent blackout in India and work to provide electricity access to their most remote areas, distributed generation should be part of the solution.

Original Post:

By Badal Shah 

Friday, August 10, 2012

Electricity Forecast: Disruptive and More Decentralized


The escalating controversy over recent bankruptcies at solar start-up companies like Solyndra and Abound Solar is a distraction from the real story about the solar power industry: it is not only here to stay, but poised for rapid and relentless growth over the next 20 years. Make no mistake about it: the widespread adoption of solar power is now inevitable. The same can be said about electric vehicles, energy storage and fuel cells, but perhaps the most underappreciated change coming down the pike is the advent of market-aware energy-consuming assets. Here are five reasons why these changes are inevitable:
  1. Human intelligence – and the ability to build upon past discoveries – is increasing every day. It’s the positive side of Toffler’s Future Shock. With modern computational capabilities, we can run virtual tests, in parallel, on many different technologies, and learn far more quickly than ever before. And we can instantaneously disseminate knowledge. Thus, technological advancement that might have taken decades to achieve in the pre-computer age may now take only a few years. For proof, look at what is happening in solar across all of the different technologies. Solyndra is an example of this, and a good one. While the case points out the weakness of investing too heavily in a single technology, it also highlights a reason for optimism: Solyndra lost because (forget the Chinese subsidies for a minute) the competing technologies advanced too quickly and became too cheap too fast for Solyndra to keep up. Other technologies are similarly advancing, though perhaps not as quickly.
  2. Our grid is aging. Much of it is older than I am. And it is extremely expensive to replace. Those high avoided costs create an opportunity for efficiency, decentralized supply solutions, and responsive demand assets.
  3. Prices. We have seen a bottoming out of electric energy prices. Natural gas is unlikely to ever get cheaper than it was last year, when we were punching so many holes into the shales that short-term supply overwhelmed demand. Many people seem to confuse short- and long-term elasticities. Over time, new demands for shale gas will emerge: gas-powered trucks and vehicles, LNG exports (Cheniere is the first of many in line in the permitting process), and new gas-fired power plants. The EIA projects the retirement of 49 gigawatts of generating capacity through 2020, and gas-fired generators will meet a large portion of the demand that capacity now serves.
  4. Carbon. We have fallen into a believer/non-believer dialogue (if one can call it that), which appears more driven by ideology than science. Yet, assuming the majority of scientists are correct, the dynamic will not go away, ideology notwithstanding. This July was hotter in the US than any month of the Dust Bowl era; one month doesn’t make a climatic trend, but ten years just may. The odds appear to favor hotter, drier, and more volatile weather. Sooner or later (if you agree with the Economist), that leads to higher energy prices through some sort of carbon-based pricing.
  5. Economic growth. Even an anemic 2% economic growth rate would see electricity consumption double in 35 years; 3% moves that to 23 years (the arithmetic is sketched below). The average annual growth rate from 1948 to 2009 was 3.28%. A doubling of consumption, and of the associated infrastructure and fuel it requires, suggests higher underlying fuel AND infrastructure costs.
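
The doubling times in point 5 follow from the standard compound-growth formula, ln(2)/ln(1+g). Here is a minimal Python check (our illustration, not from the original post):

import math

def doubling_time_years(annual_growth_rate):
    """Years for consumption to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 2%, 3%, and the 1948-2009 average growth rate cited above
for g in (0.02, 0.03, 0.0328):
    print(f"{g:.2%} growth -> doubling in {doubling_time_years(g):.0f} years")
# 2.00% -> 35 years, 3.00% -> 23 years, 3.28% -> 21 years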
So the stage is set for potentially higher prices (absent some black swan event we cannot foresee) and improved relative economics for new distributed technologies whose costs continue to come down. We can expect to see deployment of these technologies in four distinct areas: 1) where subsidies and regulatory decisions provide early safe havens; 2) where high avoided costs improve the economics (the island of Ometepe in Lake Nicaragua is said to have 60 cents/kWh electricity rates – the race for solar should be on!); 3) where volatility is most pronounced; and 4) where distributed technologies can add the highest value in terms of which resources they displace and when.
The key to optimizing many of these investments, irrespective of technology, is to tie them into an economic context. In other words, we need to make them market aware, and profitable. In so doing, we can optimize the size of the assets during the initial investment phase, and their performance on a daily basis. Take gas-fired back-up generation in Texas, for example. Assuming a consumer requires 1 MW of back-up generation for a business-related need such as reliability, the initial assessment should focus on the trade-offs between the size of the unit and the value of participating in energy markets. Given market volatility, the capital cost of supersizing the unit, and underlying fuel costs, the optimal investment could be a unit x% larger, which could sell into the market whenever the real-time price exceeds a specific level. Such decisions require analytical capabilities, an understanding of past and expected future market dynamics, and an awareness of the regulatory environment. Most of all, they require knowledge of real-time market characteristics.
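
As a rough illustration of that sizing trade-off, the sketch below estimates the incremental market revenue an oversized unit might earn by selling whenever the real-time price clears a threshold. All names, prices and quantities here are hypothetical assumptions for illustration, not figures from the post:

def incremental_market_revenue(hourly_prices, extra_mw, fuel_cost_per_mwh,
                               price_threshold_per_mwh):
    """USD earned by selling extra_mw in every hour whose real-time price
    exceeds the threshold, net of fuel cost (capital cost excluded)."""
    return sum((p - fuel_cost_per_mwh) * extra_mw
               for p in hourly_prices
               if p > price_threshold_per_mwh)

# A hypothetical year of hourly prices: mostly ~$35/MWh with a few spikes.
prices = [35.0] * 8700 + [300.0] * 50 + [3000.0] * 10
revenue = incremental_market_revenue(prices, extra_mw=0.25,
                                     fuel_cost_per_mwh=45.0,
                                     price_threshold_per_mwh=100.0)
print(f"Incremental revenue from 0.25 MW of oversizing: ${revenue:,.0f}")
# Compare this against the annualized capital cost of the larger unit.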
The future grid will be characterized by the market awareness of many decentralized resources, and it will require an intelligence that does not currently exist.
Original Post

Saturday, June 30, 2012

Renewables in New York


By Jennifer Runyon 
June 29, 2012  

In the future, the state of New York won’t be known only for its world-famous city and picturesque northern landscape; it may soon also be known for its renewable energy projects. Two interesting developments this week should start to attract developers of larger solar and wind power projects to various regions of the state.
In April, Gov. Andrew Cuomo announced the NY-Sun Initiative, a plan that brought together the New York State Energy Research and Development Authority (NYSERDA), the Long Island Power Authority (LIPA), and the New York Power Authority (NYPA) to help develop and fund a solar energy expansion plan. The goal is to double the amount of non-utility-owned solar power installed annually in New York, and to quadruple that amount by 2013.
Yesterday, LIPA announced a CLEAN solar initiative, otherwise known as a feed-in tariff, to spur up to 50 MW of commercial and large-scale solar projects in its region over the next two years. Under the program, LIPA will purchase all of the energy generated by local solar projects at a fixed rate of 22 cents per kilowatt-hour for 20 years. Projects must be at least 50 kilowatts (kW) in size, so residential systems won’t qualify. LIPA said that it expects the largest projects to be in the 3-MW range, and the program is capped at 50 MW.
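
To illustrate how such a feed-in tariff translates into project revenue, here is a minimal sketch; the annual yield figure is a hypothetical assumption for the region, not a number from LIPA:

def annual_fit_revenue_usd(capacity_kw, specific_yield_kwh_per_kw,
                           tariff_usd_per_kwh=0.22):
    """Yearly revenue under a fixed-rate feed-in tariff."""
    return capacity_kw * specific_yield_kwh_per_kw * tariff_usd_per_kwh

# A 50 kW project (the program minimum), assuming a hypothetical
# yield of 1,300 kWh per kW per year for Long Island:
print(f"${annual_fit_revenue_usd(50, 1300):,.0f} per year for 20 years")
# -> $14,300 per year under these assumptions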
Notably, the state of New York recognized that balance-of-system (BOS) costs must be reduced in order for solar to be competitive. As part of the NY-Sun Initiative, the state created the NY-Sun BOS initiative, which will work with private and public partners across New York State to standardize and streamline permitting and interconnection procedures statewide, and to support development and training.
On the topic of permitting, the NY Public Service Commission (PSC) has just completed the comment period for rulemaking on the somewhat controversial “Article 10.” In a nutshell (and straight from the PSC website), “Article 10 provides for the siting review of new and repowered or modified major electric generating facilities in New York State by the Board on Electric Generation Siting and the Environment (Siting Board) in a unified proceeding instead of requiring a developer or owner of such a facility to apply for numerous state and local permits.” New projects must be at least 25 MW in size to qualify for state siting board review.
For renewable energy, Article 10 specifically addresses wind farm siting and permitting, which often faces challenging and aggressive local opposition. Essentially, it gives the state the ability to approve wind farms and takes some of the oversight away from local governments. Major industry players like Iberdrola and EDP Renewables are strongly in favor of the measure.
As you might imagine, there are opponents of the article as well. In an op-ed for The Daily News, resident Mary Kay Barton wrote, “what’s at stake is our long-held, constitutional right to ‘home rule’ — the right to decide for ourselves what we want our communities to look like 20, 40, and 60 years down the road.”
Local towns and municipalities fear having less oversight of what is built within their borders. The PSC will make a final determination on rulemaking for Article 10 sometime this summer.
Like it or not, New York State is embracing renewables in a big way, encouraging the development of both solar and big wind. Our readers have supported the state in its initiatives so far: last February, the 32-MW solar farm at the Brookhaven National Laboratory on Long Island was selected as the Readers’ Choice Project of the Year.
Perhaps Sinatra was right about New York. For renewables, "if I can make it there, I'll make it anywhere." We’ll see.

Original post:

Saturday, June 16, 2012

Solar in Snow


PV systems in snowy climates tend to be installed at tilt angles shallow enough to make them prone to snow loss, and as large-scale PV installations become more widespread in snowy locations, analytical models are needed to estimate the impact of snow on energy production.
Both weather and array design factors influence the amount of snow loss. Weather factors include the quantity and quality (moisture content) of the snow, the recurrence pattern of storms, and the post-storm pattern of temperature, irradiation, wind speed, wind direction, and relative humidity. Array design factors essentially boil down to orientation (fixed or tracking, tilt, azimuth, and tracker rotation limits) and the surrounding geometry (open rack or building-integrated). Building features can also either help (e.g. melt) or hinder (e.g. dam up or drift) natural snow shedding.
Nonetheless, a generalised monthly snow loss model is introduced here which, despite some limitations, appears to deliver good-quality, unbiased monthly loss estimates that can now be used as inputs to the simulation programs PV investors rely on for decision-making.
Lake Tahoe Test Bed
BEW Engineering, Inc – a DNV company – set up three pairs of 175 Wp polysilicon Mitsubishi model PV-UD175MF5 PV modules at fixed tilt angles of 0°, 24° and 39° on south-facing racks in Truckee, California, at the beginning of the 2009-2010 winter. The module pairs are spaced far enough apart to prevent row shading, even on the winter solstice.
Near Lake Tahoe, the station's latitude is 39° and its elevation is 5900 feet (1800 metres). The site receives an annual average of 200 inches (5 metres) of snow.
One module of each pair is manually cleaned and thermostatically heated. The three uncleaned modules are allowed to shed or accumulate snow naturally and are bordered with two feet (0.6 metres) of similar material to minimise edge effects.
A datalogger saves hourly records of irradiance for the three tilt angles, short-circuit current and temperature for each module, along with air temperature and relative humidity. Meanwhile, an hourly webcam shot records snow depth and assists with quality checks. A second source of data is a 125 kWp Truckee Sanitary District (TSD) system located two miles (3.2 km) south of the BEW station and sitting at the same elevation.
For BEW's rig, snow losses are gauged as the difference in monthly amp-hours between the clean and uncleaned modules. For the TSD system, snow losses are gauged as the difference in measured energy and predicted energy for an always-cleaned array.
The TSD system faces south at a fixed 35° tilt, similar to one of the paired sets of BEW's test modules. The lowest edge of the 17 foot (5 metre) long rows is six feet (2 metres) above ground. While the District does not manually clean this array, it does regularly plough snow from between the rows to prevent it from piling up. This maintenance practice proved especially valuable: snow is not removed from the array itself, yet ground interference does not occur, as if the array were very high above the ground. Indeed, ground interference at the BEW site has resulted in roughly twice the annual energy loss seen at the TSD site.
Calculating Winter Losses
Depending on tilt angle, wintertime energy losses of 40-60% and annual energy losses of 12-18% were noted in the first year of operation, though data from the TSD system were not included. The first winter was statistically very normal. The energy lost to snow buildup over the seven-month winter season ranged from as little as 25% for the 39° tilt to as much as 42% for the flat orientation. These seasonal results project to annual output losses of 12%, 15% and 18% for the 39°, 24° and 0° tilts, respectively.
While these results were hugely significant for this location, no attempt was made on the basis of the first year of measurements to project how the Truckee results would translate to other, less snowy locations. The model development and fitting task was completed after the second year of measurements, by which point BEW's generalised model was tuned enough to be provisionally applied to other locations. The current form of the model is:
Snow loss, % = C1 × Se′ × cos²(T) × GIT × RH / TAIR² / POA^0.67
Where:
C1 is a fitted coefficient, 5.7×10⁴
Se′ is the six-week rolling average of the effective snowfall in inches, with Se = S × 0.5 × [1 + 1/N], where S is the monthly snowfall in inches and N is the number of snow events per month
GIT = ground interference term, defined in detail below
RH = average monthly relative humidity, %
TAIR = average monthly air temperature, °C
POA = monthly plane-of-array insolation, kWh/m²
The GIT is further defined as:
GIT = 1 − C2/exp(γ)
where C2 is fitted from the data as 0.5, and γ is the dimensionless ratio of snow received to snow dissipated, such that whenever the amount of snow received exceeds the ability of the array geometry to deposit it on the ground, shadow-like interference will quickly reduce array output by a factor of two. BEW defines γ as:
γ = R × cos(T) × Se′ × 2 × tan(P) / (H² − Se′²)
Where:
R is the row plane-of-array dimension (slant length), inches
T is the tilt angle, degrees
P is the stabilised snow pile angle, nominally assumed to be 40 degrees
H is the drop height, inches
and Se′ is the effective rolling-average snowfall, in inches, as defined above
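
To make these formulas concrete, here is a minimal Python sketch of the monthly model as stated above. The function names are ours, and two points are assumptions on our part rather than statements from BEW: the six-week rolling average Se′ is approximated by the monthly value, and TAIR is converted to absolute temperature (kelvin), since with the listed °C units the TAIR² denominator would be singular at freezing. Confirm both against the published model before relying on the numbers.

import math

C1 = 5.7e4   # fitted coefficient, from the text
C2 = 0.5     # fitted ground-interference coefficient, from the text

def effective_snowfall(monthly_snow_in, n_events):
    """Se = S * 0.5 * [1 + 1/N]. The model calls for a six-week rolling
    average Se' of this quantity; we approximate Se' by Se here."""
    return monthly_snow_in * 0.5 * (1.0 + 1.0 / n_events)

def ground_interference_term(se_prime_in, tilt_deg, row_slant_in,
                             drop_height_in, pile_angle_deg=40.0):
    """GIT = 1 - C2/exp(gamma), with
    gamma = R*cos(T)*Se'*2*tan(P) / (H^2 - Se'^2).
    Only meaningful while Se' < H (the drop height is not yet buried)."""
    t = math.radians(tilt_deg)
    p = math.radians(pile_angle_deg)
    gamma = (row_slant_in * math.cos(t) * se_prime_in * 2.0 * math.tan(p)
             / (drop_height_in ** 2 - se_prime_in ** 2))
    return 1.0 - C2 / math.exp(gamma)

def monthly_snow_loss_pct(se_prime_in, tilt_deg, git, rh_pct, t_air_c,
                          poa_kwh_m2):
    """Snow loss, % = C1*Se'*cos^2(T)*GIT*RH / TAIR^2 / POA^0.67.
    ASSUMPTION: TAIR is used in kelvin to keep the denominator
    well-behaved near 0 deg C."""
    t = math.radians(tilt_deg)
    t_air_k = t_air_c + 273.15
    return (C1 * se_prime_in * math.cos(t) ** 2 * git * rh_pct
            / t_air_k ** 2 / poa_kwh_m2 ** 0.67)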
For one of the US's snowiest urban areas, annual losses of 12-18% may be expected in a typical year for fixed-tilt arrays mounted at tilt angles ranging from 39° to 0° (flat). Monthly losses, however, may be substantially higher; for example, an entire month's output was lost from a shallow-tilt unit when several feet of snow fell.
On a rolling annual basis, the snow losses have averaged 6% for the TSD system, 13% for the 39° BEW module, 17% for the 24° BEW module, and 26% for the flat 0° BEW module. However, the principal use of this information is not necessarily to point out how much potential generation is sacrificed in a very snowy location, but to serve as a baseline for validating proposed snow loss models.
Developing a Losses Model
Key variables affecting generation might be supposed to include snowfall quantity; climate and weather factors such as temperature, radiation, relative humidity, wind speed and direction, and snow moisture content; and array geometry factors such as tilt angle, row slant length and distance to ground, as well as ground interference effects.
An equation relating monthly energy loss to monthly snowfall was developed, with units of percentage loss per inch of snow. The final equation accounts for ground interference, air temperature, plane-of-array insolation and relative humidity. Data for terms such as wind and snow moisture content were not available in this test.
A promising, simple annual snow loss relationship was also posed: annual energy loss may be estimated as a base loss of 0.1% per inch of annual snowfall, multiplied by a tilt angle adjustment factor (see the sketch below).
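
The text does not tabulate the tilt adjustment factor, but it can be roughly back-calculated from the Truckee results quoted earlier (about 200 inches of snow per year against annual losses of 26%, 17% and 13% for the 0°, 24° and 39° tilts). The factors below are therefore our inference, not BEW's published values:

def simple_annual_snow_loss_pct(annual_snow_in, tilt_factor):
    """Annual loss ~= 0.1% per inch of snow, times a tilt adjustment.
    Back-calculated (our inference) from the Truckee data: roughly
    1.3 for 0 deg (flat), 0.85 for 24 deg, 0.65 for 39 deg."""
    return 0.1 * annual_snow_in * tilt_factor

print(simple_annual_snow_loss_pct(60, 0.85))  # ~5.1% for a 60 in/yr site at 24 deg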
There is a clear relationship between tilt angle and energy loss, though the relationship is influenced by other factors. The study evaluated only fixed-tilt configurations, and although tracking systems can be evaluated to some degree by the model, in practice the dynamic movement and vibration of tracking systems is likely to lessen the effect of snow even more than predicted.
The most encouraging findings are that the study shows annual energy predictions can be essentially unbiased when accounting for snow, and that the errors are well within the normal level attainable with simulation programs in general. Furthermore, these results can be obtained using measurements widely available in long-term climate databases, coupled with array-specific design geometries. Better estimates are possible if exact array geometry information is available to characterise ground interference effects. Indeed, the effect of ground interference is significant and was observed to have roughly a two-fold effect on typical snow loss for the specific array geometry used at the test station.
Applying a General Model
The Lake Tahoe area is not a prominent solar market, though the Truckee Sanitary District installed an array in 2009 and there are several other commercial PV installations in the region. Well-established commercial solar markets (with average annual snowfall) include Denver, 60 inches (152 cm); Milwaukee, 47 inches (119 cm); Boston/New England, 43+ inches (109+ cm); Detroit (and Ontario, Canada), 42 inches (107 cm); Chicago, 38 inches (97 cm); and the Mid-Atlantic region, 20-30 inches (51-76 cm). Taking Philadelphia, Detroit and Denver, each city sits at roughly the same latitude, about 40°, but their average annual snowfall varies smoothly from 20 to 60 inches (0.5 to 1.5 metres) per year – all well short of Truckee's normal total.
Each system is assumed to be south-facing, at a tilt angle equal to latitude minus 15°, with ground interference characteristic of common modules 2 metres long in portrait mode, mounted six inches (15 cm) above the roof.
The inputs needed to generate these estimates were readily obtained from the National Renewable Energy Laboratory's (NREL) solar radiation database and Wikipedia's climate data for each city. In addition to the two fitted coefficients, the data needed to run the model are site latitude, array geometry (tilt, row slant length, and height above ground), monthly snowfall and the number of snow events per month, average air temperature, plane-of-array insolation, and average relative humidity. The resulting monthly loss estimates can be used directly as inputs to popular PV simulation programs such as PVsyst.
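
As a usage sketch under the same assumptions as the code above, a hypothetical Denver-like January might be run as follows; every numeric input here is illustrative rather than taken from NREL or Wikipedia:

# Tilt = latitude - 15 deg ~= 25 deg; a 2 m portrait module ~= 79 in of
# row slant; lowest edge six inches above the roof; assume 8 in of snow
# falling in 3 events, 55% relative humidity, 0 deg C, 90 kWh/m2 POA.
se = effective_snowfall(monthly_snow_in=8.0, n_events=3)
git = ground_interference_term(se, tilt_deg=25.0, row_slant_in=79.0,
                               drop_height_in=6.0)
loss = monthly_snow_loss_pct(se, tilt_deg=25.0, git=git, rh_pct=55.0,
                             t_air_c=0.0, poa_kwh_m2=90.0)
print(f"Estimated January snow loss: {loss:.1f}%")  # ~9% with these inputs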
BEW is now concluding its third season of measurements at Truckee and plans to present its updated findings at the Solar Power International conference in Orlando, Florida this autumn. With such large amounts of money tied to performance, quantitative means of addressing snow loss risk are sorely needed. As this is the first published analytical model for snow loss estimation, the impact of applying it in this emerging market is potentially very large. The goal is to improve snow loss modelling and thereby improve the bankability of projects in snowy locations.
Tim Townsend and Loren Powers are engineers at BEW Engineering, San Ramon, California. BEW is now part of DNV KEMA Energy & Sustainability. E-mail: tim.townsend@dnv.com or loren.powers@dnv.com

Friday, April 13, 2012

PUC May Decide California's Fight Over Net-Metering