Tuesday, December 23, 2008

PHOTOBIOREACTORS

Large solar collectors on the roof track the sun, collect sunlight, and distribute it through large optical fibers to the bioreactor's growth chamber. The fibers function as distributed light sources to illuminate cyanobacteria (algae).
Each growth chamber consists of a series of illumination sheets containing the optical fibers and moist cloth-like membranes on which the algae grow. By stacking the membranes vertically and better distributing the light, more algae can be produced via photosynthesis in a smaller area.
Photobioreactors use sunlight to sequester carbon from coal-fired power plants as they produce biomass. The Ohio University reactor will ultimately remove the carbon generated by the production of about 125 MW of electricity in a coal-fired plant.
This system is expected to sequester carbon at a cost of $5-8 per ton, surpassing the U.S. Department of Energy's goal of $10 per ton. It will also reduce the space required by a factor of 10 or more when compared to raceway cultivators.
Light delivery and distribution is the principal obstacle to using commercial-scale photobioreactors for algae production. In horizontal cultivator systems, light penetrates the suspension only to a depth of about 5 cm, leaving most of the algae in darkness. The top layer of algae requires only about 1/10th the intensity of full sunlight to maximize growth, so the remaining sunlight is wasted.
The biomass has a variety of potential uses: hydrogen production, feedstocks, agriculture, pharmaceuticals.

UNINTERRUPTIBLE POWER SUPPLIES

An uninterruptible power supply, or simply "UPS", is sometimes referred to as a battery backup system. It maintains a continuous supply of electric power to a building, or to certain electrical devices within a building, by supplying power from the UPS whenever power is not available from the grid or utility company.

Typically, uninterruptible power supplies are located between the source of the normal power supply - such as the electric utility company - and the electric load the UPS system is protecting. When electric power from the grid fails - whether through a lightning strike, a failed transformer, or a black-out - the UPS instantly recognizes the loss or interruption of power from the grid and switches from grid power to UPS power.

Uninterruptible power supply systems can be designed to protect small or large loads: from one or more computers, to critical life-support systems found in a home or hospital, to telecommunications equipment where an unexpected power disruption could threaten life or health, cause serious business disruption, or result in the loss of computer data.
Small UPS systems can protect loads as small as a single computer, while large UPS systems can power and protect a company's entire data center or an entire building, such as an office building or hospital. These systems can be as large as 3-20 megawatts and typically work in conjunction with a genset or a cogeneration plant.

FLUE GAS DESULFURIZATION

Flue gas desulfurization is a chemical process to remove sulfur oxides from the flue gas at coal-burning power plants. Many FGD methods have been developed to varying stages of applicability.
Their goal is to chemically combine the sulfur gases released in coal combustion by reacting them with a sorbent, such as limestone (calcium carbonate, CaCO3), lime (calcium oxide, CaO) or ammonia (NH3). Of the FGD systems in the United States, 90 percent use limestone or lime as the sorbent. As the flue gas comes in contact with the slurry of calcium salts, sulfur dioxide (SO2) reacts with the calcium to form hydrous calcium sulfate (CaSO4·2H2O), or gypsum.
Certain material produced by some power plants in an oxidizing, calcium-based process for air emission scrubbing is called FGD (or synthetic) gypsum. FGD gypsum is precipitated gypsum formed through the neutralization of sulfuric acid. While the material may vary in purity, which is defined as the percentage of CaSO4·2H2O, it is generally over 94% when it is used in wallboard manufacturing. Because this material is very consistent when produced by power plants, wallboard manufacturers are often located adjacent to the power plant, allowing the FGD material to be delivered directly to the wallboard plants. This synergistic relationship is not only economically attractive, but it also reduces the need to mine natural gypsum and therefore has a positive environmental impact.
FGD material can be wet or dry. Many different terms are used for FGD material, and operational differences between systems may create slightly different types of FGD material.

ESTERIFICATION

Esterification is the chemical process of combining an alcohol and an acid which results in the formation of an ester.
• Acid Esterification. Oil feedstocks containing more than 4% free fatty acids go through an acid esterification process to increase the yield of biodiesel. These feedstocks are filtered and preprocessed to remove water and contaminants, and then fed to the acid esterification process. The catalyst, sulfuric acid, is dissolved in methanol and then mixed with the pretreated oil. The mixture is heated and stirred, and the free fatty acids are converted to biodiesel. Once the reaction is complete, it is dewatered and then fed to the transesterification process.
• Transesterification. Oil feedstocks containing less than 4% free fatty acids are filtered and preprocessed to remove water and contaminants and then fed directly to the transesterification process along with any products of the acid esterification process. The catalyst, potassium hydroxide, is dissolved in methanol and then mixed with the pretreated oil. If an acid esterification process is used, then extra base catalyst must be added to neutralize the acid added in that step. Once the reaction is complete, the major co-products, biodiesel and glycerin, are separated into two layers. (A rough material-balance sketch follows this list.)
• Methanol recovery. The methanol is typically removed after the biodiesel and glycerin have been separated, to prevent the reaction from reversing itself. The methanol is cleaned and recycled back to the beginning of the process.
• Biodiesel refining. Once separated from the glycerin, the biodiesel goes through a clean-up or purification process to remove excess alcohol, residual catalyst and soaps. This consists of one or more washings with clean water. It is then dried and sent to storage. Sometimes the biodiesel goes through an additional distillation step to produce a colorless, odorless, zero-sulfur biodiesel.
• Glycerin refining. The glycerin by-product contains unreacted catalyst and soaps that are neutralized with an acid. Water and alcohol are removed to produce 50%-80% crude glycerin. The remaining contaminants include unreacted fats and oils. In large biodiesel plants, the glycerin can be further purified, to 99% or higher purity, for sale to the pharmaceutical and cosmetic industries.
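As a rough illustration of the transesterification step described above, the sketch below estimates the methanol and catalyst required for a batch of oil. The molar masses, the 6:1 methanol-to-oil ratio, and the 1% KOH loading are typical textbook assumptions, not figures taken from this text.

# Rough transesterification material balance (illustrative assumptions only).
# Triglyceride + 3 methanol -> 3 methyl esters (biodiesel) + glycerol.

OIL_MOLAR_MASS = 885.0      # g/mol, assumed for a typical triglyceride (e.g. triolein)
MEOH_MOLAR_MASS = 32.0      # g/mol methanol
GLYCEROL_MOLAR_MASS = 92.0  # g/mol glycerol

def batch_estimate(oil_kg, methanol_to_oil_molar_ratio=6.0, koh_fraction=0.01):
    """Estimate methanol, KOH and co-product masses for a batch of low-FFA oil."""
    oil_mol = oil_kg * 1000.0 / OIL_MOLAR_MASS
    methanol_charged_kg = oil_mol * methanol_to_oil_molar_ratio * MEOH_MOLAR_MASS / 1000.0
    # Stoichiometric methanol (3 mol per mol of oil); the excess is recovered later.
    methanol_consumed_kg = oil_mol * 3.0 * MEOH_MOLAR_MASS / 1000.0
    glycerol_kg = oil_mol * GLYCEROL_MOLAR_MASS / 1000.0
    biodiesel_kg = oil_kg + methanol_consumed_kg - glycerol_kg  # overall mass balance
    koh_kg = oil_kg * koh_fraction
    return methanol_charged_kg, koh_kg, biodiesel_kg, glycerol_kg

meoh, koh, fame, gly = batch_estimate(1000.0)   # a 1,000 kg batch of pretreated oil
print(f"methanol charged: {meoh:.0f} kg, KOH: {koh:.0f} kg")
print(f"biodiesel: {fame:.0f} kg, crude glycerol: {gly:.0f} kg")

On these assumptions, a 1,000 kg batch of oil needs roughly 200 kg of methanol (about half of which is recovered in the methanol recovery step) and about 10 kg of potassium hydroxide.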

BIOMASS GASIFICATION

Biomass fuels such as firewood and agriculture-generated residues and wastes are generally organic. They contain carbon, hydrogen, and oxygen along with some moisture. Under controlled conditions, characterized by low oxygen supply and high temperatures, most biomass materials can be converted into a gaseous fuel known as producer gas, which consists of carbon monoxide, hydrogen, carbon dioxide, methane and nitrogen. This thermo-chemical conversion of solid biomass into gaseous fuel is called biomass gasification.
The producer gas so produced has a low calorific value (1,000-1,200 kcal/Nm3), but it can be burnt with high efficiency and a good degree of control without emitting smoke. Each kilogram of air-dry biomass (10% moisture content) yields about 2.5 Nm3 of producer gas. In energy terms, the conversion efficiency of the gasification process is in the range of 60%-70%.
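A quick check of the figures quoted above: assuming an air-dry biomass heating value of about 4,000 kcal/kg (a typical value for wood, not given in the text), the stated gas yield and calorific value imply a conversion efficiency within the 60-70% range.

# Cold-gas efficiency check for the producer-gas figures quoted above.
BIOMASS_LHV_KCAL_PER_KG = 4000.0   # assumed heating value of air-dry wood (10% moisture)
GAS_YIELD_NM3_PER_KG = 2.5         # from the text
GAS_LHV_KCAL_PER_NM3 = 1100.0      # mid-point of the 1,000-1,200 kcal/Nm3 range

energy_in_gas = GAS_YIELD_NM3_PER_KG * GAS_LHV_KCAL_PER_NM3   # kcal per kg of biomass
efficiency = energy_in_gas / BIOMASS_LHV_KCAL_PER_KG
print(f"gas energy: {energy_in_gas:.0f} kcal/kg, cold-gas efficiency: {efficiency:.0%}")
# -> about 2,750 kcal/kg and roughly 69%, consistent with the 60-70% figure.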
Conversion of solid biomass into combustible gas has all the advantages associated with using gaseous and liquid fuels, such as clean combustion, compact burning equipment, high thermal efficiency and a good degree of control. In locations where biomass is already available at reasonably low prices (e.g. rice mills), or in industries using fuel wood, gasifier systems offer definite economic advantages.
Biomass gasification technology is also environment-friendly, because of the firewood savings and the reduction in CO2 emissions. Biomass gasification technology has the potential to replace diesel and other petroleum products in several applications, saving foreign exchange.

PLASMA GASIFICATION

Plasma Gasification is able to get the energy it needs from waste-streams such as municipal solid waste (MSW) and even hazardous and toxic wastes, without the need to bury these wastes in a landfill.
There are two methods used in plasma gasification - the first one is a "plasma arc" and second is called a "plasma torch."
A "plasma arc" plasma gasification plant operates on principles similar to an arc-welding machine, where an electrical arc is struck between two electrodes. The high-energy arc creates a high temperature, highly ionized gas. The plasma arc is enclosed in a chamber. Waste material is fed into the chamber and the intense heat of the plasma breaks down organic molecules (such as oil, solvents, and paint) into their elemental atoms. In a carefully controlled process, these atoms recombine into harmless gases such as carbon dioxide. Solids such as glass and metals are melted to form materials, similar to hardened lava, in which toxic metals are encapsulated. With plasma arc technology there is no burning or incineration and no formation of ash.
"Plasma arc" plasma gasification plant have a very high destruction efficiency. They are very robust; they can treat any waste with minimal or no pretreatment; and they produce a stable waste form. The arc melter uses carbon electrodes to strike an arc in a bath of molten slag. The consumable carbon electrodes are continuously inserted into the chamber, eliminating the need to shut down for electrode replacement or maintenance. The high temperatures produced by the arc convert the organic waste into light organics and primary elements.
Combustible gas is cleaned in the off-gas system and oxidized to CO2 and H2O in ceramic bed oxidizers. The potential for air pollution is low due to the use of electrical heating in the absence of free oxygen. The inorganic portion of the waste is retained in a stable, leach-resistant slag.
In "plasma torch" systems, an arc is struck between a copper electrode and either a bath of molten slag or another electrode of opposite polarity. As with "plasma arc" systems, plasma torch systems have very high destruction efficiency; they are very robust; and they can treat any waste or medium with minimal or no pre-treatment. The inorganic portion of the waste is retained in a stable, leach-resistant slag. The air pollution control system is larger than for the plasma arc system, due to the need to stabilize torch gas.

ABSORPTION CHILLER

Absorption chillers use heat instead of mechanical energy to provide cooling. A thermal compressor consists of an absorber, a generator, a pump, and a throttling device, and replaces the mechanical vapor compressor.

In the chiller, refrigerant vapor from the evaporator is absorbed by a solution mixture in the absorber. This solution is then pumped to the generator. There the refrigerant re-vaporizes using a waste steam heat source. The refrigerant-depleted solution then returns to the absorber via a throttling device. The two most common refrigerant/absorbent mixtures used in absorption chillers are water/lithium bromide and ammonia/water.

Compared with mechanical chillers, absorption chillers have a low coefficient of performance (COP = chiller load/heat input). However, absorption chillers can substantially reduce operating costs because they are powered by low-grade waste heat. Vapor compression chillers, by contrast, must be motor- or engine-driven.

Low-pressure, steam-driven absorption chillers are available in capacities ranging from 100 to 1,500 tons. Absorption chillers come in two commercially available designs: single-effect and double-effect. Single-effect machines provide a thermal COP of 0.7 and require about 18 pounds of 15-pound-per-square-inch-gauge (psig) steam per ton-hour of cooling. Double-effect machines are about 40% more efficient, but require a higher grade of thermal input, using about 10 pounds of 100- to 150-psig steam per ton-hour.

In a single-effect absorption machine, all of the condensing heat is rejected in the condenser and released to the cooling water. A double-effect machine achieves a higher heat efficiency by dividing the generator into a high-temperature and a low-temperature generator.
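The single-effect figures quoted above can be cross-checked against the COP definition (chiller load divided by heat input). The latent heat of 15-psig steam used below is an assumed textbook value, not a number from this text.

# Cross-check of the single-effect absorption chiller COP from the quoted steam rate.
TON_HOUR_BTU = 12000.0          # 1 ton-hour of cooling
STEAM_LB_PER_TON_HOUR = 18.0    # from the text, 15-psig steam
LATENT_HEAT_BTU_PER_LB = 945.0  # assumed latent heat of ~15-psig saturated steam

heat_input = STEAM_LB_PER_TON_HOUR * LATENT_HEAT_BTU_PER_LB
cop = TON_HOUR_BTU / heat_input
print(f"heat input: {heat_input:.0f} Btu/ton-hour, COP ~ {cop:.2f}")
# -> roughly 17,000 Btu per ton-hour and a COP near 0.7, matching the stated value.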

POWER PLANTS

Simple Cycle Power Plants (Open Cycle)
The modern power gas turbine is a high-technology package that comprises a compressor, combustor, power turbine, and generator, as shown in the figure "Simple-Cycle Gas Turbine".
In a gas turbine, large volumes of air are compressed to high pressure in a multistage compressor and distributed to one or more combustion chambers. The hot combustion gases from the combustion chambers power an axial turbine that drives the compressor and the generator before exhausting to the atmosphere. In this way, the combustion gases in a gas turbine power the turbine directly, rather than requiring heat transfer to a water/steam cycle to power a steam turbine, as in a steam plant. The latest gas turbine designs use turbine inlet temperatures of 1,500°C (2,730°F) and compression ratios as high as 30:1 (for aeroderivatives), giving thermal efficiencies of 35 percent or more for a simple-cycle gas turbine.
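For context, the ideal (air-standard) Brayton-cycle efficiency at the 30:1 pressure ratio mentioned above can be computed as below; real simple-cycle machines reach the quoted 35 percent or so, rather than the ideal figure, because of component losses and turbine cooling requirements.

# Ideal air-standard Brayton cycle efficiency: eta = 1 - r^(-(gamma-1)/gamma)
GAMMA = 1.4            # ratio of specific heats for air (assumed)
pressure_ratio = 30.0  # aeroderivative figure from the text

ideal_efficiency = 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)
print(f"ideal simple-cycle efficiency at r = 30: {ideal_efficiency:.0%}")
# -> about 62% ideal, versus the ~35-40% achieved by real simple-cycle gas turbines.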
Combined Cycle Power Plants
The combined-cycle unit combines the Rankine (steam turbine) and Brayton (gas turbine) thermodynamic cycles by using heat recovery boilers to capture the energy in the gas turbine exhaust gases for steam production to supply a steam turbine, as shown in the figure "Combined-Cycle Cogeneration Unit". Process steam can also be provided for industrial purposes.
Fossil fuel-fired (central) power plants use either steam or combustion turbines to provide the mechanical power to electrical generators. Pressurized high temperature steam or gas expands through various stages of a turbine, transferring energy to the rotating turbine blades. The turbine is mechanically coupled to a generator, which produces electricity.
Steam Turbine Power Plants:
Steam turbine power plants operate on a Rankine cycle. The steam is created by a boiler, where pure water passes through a series of tubes to capture heat from the firebox and then boils under high pressure to become superheated steam. The heat in the firebox is normally provided by burning fossil fuel (e.g. coal, fuel oil or natural gas). However, the heat can also be provided by biomass, solar energy or nuclear fuel. The superheated steam leaving the boiler then enters the steam turbine throttle, where it powers the turbine and connected generator to make electricity. After the steam expands through the turbine, it exits the back end of the turbine, where it is cooled and condensed back to water in the surface condenser. This condensate is then returned to the boiler through high-pressure feedpumps for reuse. Heat from the condensing steam is normally rejected from the condenser to a body of water, such as a river or cooling tower.

OCEAN THERMAL ENERGY CONVERSION

The oceans cover a little more than 70 percent of the Earth's surface. This makes them the world's largest solar energy collector and energy storage system. On an average day, 60 million square kilometers (23 million square miles) of tropical seas absorb an amount of solar radiation equal in heat content to about 250 billion barrels of oil. If less than one-tenth of one percent of this stored solar energy could be converted into electric power, it would supply more than 20 times the total amount of electricity consumed in the United States on any given day.
Ocean Thermal Energy Conversion, or "OTEC," is an energy technology that converts solar radiation to electric power. OTEC systems use the ocean's natural thermal gradient—the fact that the ocean's layers of water have different temperatures—to drive a power-producing cycle. As long as the temperature difference between the warm surface water and the cold deep water is about 20°C (36°F), an OTEC system can produce a significant amount of power.
The oceans are thus a vast renewable resource, with the potential to help us produce billions of watts of electric power. This potential is estimated to be about 10^13 watts of baseload power generation, according to some experts. The cold, deep seawater used in the OTEC process is also rich in nutrients, and it can be used to culture both marine organisms and plant life near the shore or on land.
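The 20°C gradient sets a hard thermodynamic ceiling on OTEC efficiency. A minimal sketch follows, assuming a warm-surface temperature of about 27°C (300 K); the 2-3% practical figure is a commonly cited value, not taken from this text.

# Carnot limit for an OTEC cycle working across the ~20 C ocean temperature gradient.
T_WARM_K = 300.0   # assumed warm surface water, about 27 C
DELTA_T_K = 20.0   # temperature difference quoted in the text

carnot_efficiency = DELTA_T_K / T_WARM_K
print(f"Carnot limit: {carnot_efficiency:.1%}")   # ~6.7%
# Practical OTEC plants net roughly 2-3% after pumping the cold deep water,
# which is why very large water flows are needed per megawatt of output.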
The economics of energy production today have delayed the financing of a permanent, continuously operating OTEC plant. However, OTEC is very promising as an alternative energy resource for tropical island communities that rely heavily on imported fuel. OTEC plants in these markets could provide islanders with much-needed power, as well as desalinated water and a variety of mariculture products.

REVERSE OSMOSIS DESALINATION

Reverse Osmosis Desalination involves removing the salt from water to make it drinkable. There are several ways to do it, and it is not a new idea at all. Sailors have been using solar evaporation to separate salt from sea water for at least several thousand years. Most of the world’s 1,500 or so desalination plants use distillation as the process, and there are also flash evaporation and electrodialysis methods. All these methods are very expensive, so historically desalination has only been used where other alternatives are also very expensive, such as desert cities. However, an exploding world demand for potable water has led to a lot of research and development in this field and a new, cheaper process has been developed that involves heating sea water and forcing it through membranes to remove the salt from the water.
The process is even cheaper if the desalination plant can be located next to an electrical power plant that is already heating sea water to use for cooling the electrical generating units. Even so, it is still more expensive than other alternatives, but it is indeed becoming more competitive and could become a viable alternative to Edwards water. There is also a lot of interest in using local, brackish groundwaters as a source for desalination instead of ocean water. Such waters typically have only one-tenth the salinity of sea water, so desalination can be accomplished more easily and transportation is less of an issue.

In April 2000 the Texas Water Development Board approved a $59,000 grant to the Lavaca-Navidad River Authority to determine whether building a $400 million plant on Matagorda Bay at Point Comfort would be economically and environmentally feasible. There is a power plant at this location that could supply the heated sea water for the membrane process.
The study was released two months later, and the estimated cost rose to $755 million, but this included the cost of transmission facilities to San Antonio. The study estimated that a 50-50 mix of desalinated water and water treated by other conventional methods could be delivered to San Antonio users for about $2.80 per thousand gallons, compared to a current cost of $1.36 per thousand gallons.
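If the $2.80 figure really is a simple 50-50 volume blend with $1.36 conventional water, the implied cost of the desalinated component can be backed out as below. The study's own cost breakdown is not given here, so this is only an illustrative back-calculation under that assumption.

# Back-calculating the implied desalinated-water cost from the 50-50 blend figure.
blend_cost = 2.80          # $/thousand gallons, delivered 50-50 mix (from the text)
conventional_cost = 1.36   # $/thousand gallons, current conventional supply
desal_share = 0.5          # assumed volume fraction of desalinated water in the blend

implied_desal_cost = (blend_cost - (1 - desal_share) * conventional_cost) / desal_share
print(f"implied desalinated-water cost: ${implied_desal_cost:.2f} per thousand gallons")
# -> about $4.24 per thousand gallons, roughly three times the conventional cost.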

THE ECONOMICS OF NUCLEAR POWER

The economics of nuclear power is a highly contentious area. It is often difficult to establish independently verified estimates of the basic construction costs and the operating cost. In addition, the results are crucially dependent on the accounting and investment appraisal assumptions such as the rate of return on capital that is sought (the discount rate) and the life-time of the plant.
These latter factors are of particular relevance to nuclear power because the main element in the cost for each unit of electricity generated is that associated with building the plant, the capital cost. The shorter the expected life-time and the higher the discount rate, the higher these fixed costs will be. In a monopoly system, the assumed life of the plant can be the expected physical life-time because there will be nothing to stop the owner running the plant until it is worn out. In a competitive system, the plant may have to be retired much earlier if it cannot compete with new plants.
The running costs of nuclear power plants are difficult to establish because most electric utilities regard this data as commercially confidential. However, in the USA, utilities are required to publish fully authenticated running costs. In 1997, the cheapest to run nuclear plants cost about 1c/kWh (0.6p/kWh), while the average was about 2.4c/kWh (1.5p/kWh). Of this, about 0.4-0.6c/kWh was fuel cost while the rest, 0.5-1.8c/kWh, represented the non-fuel cost of operation and maintenance (wages, spare parts etc.)
Government-owned utilities have usually been able to invest money at very low rates of return on capital, partly because new power stations were seen as a safe investment and partly because, for a variety of reasons, governments have tended to require a lower rate of return on capital than private industry. Thus, in Britain before privatisation, the national utility, the CEGB, could invest at a 5 per cent real (net of inflation) rate of return and recover the costs over 35 years. After privatisation, it is known that private investors look for about a 12-15 per cent real return and recover the capital over 15-20 years.
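The effect of the discount rate and recovery period on the per-kWh capital charge can be made concrete with a capital recovery factor calculation. The overnight cost and capacity factor below are round illustrative assumptions, not figures from this text.

# Levelized capital charge under the two financing regimes described above.
def capital_charge_per_kwh(overnight_cost_per_kw, rate, years, capacity_factor):
    """Annualize capital with a capital recovery factor, spread over annual output."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh_per_kw = 8760 * capacity_factor
    return overnight_cost_per_kw * crf / annual_kwh_per_kw

OVERNIGHT = 2000.0   # $/kW overnight construction cost, assumed for illustration
CF = 0.80            # assumed capacity factor

public = capital_charge_per_kwh(OVERNIGHT, 0.05, 35, CF)   # CEGB-style financing
private = capital_charge_per_kwh(OVERNIGHT, 0.12, 20, CF)  # post-privatisation financing
print(f"5% over 35 years:  {public*100:.1f} c/kWh")
print(f"12% over 20 years: {private*100:.1f} c/kWh")
# The same plant roughly doubles in per-kWh capital cost under the stricter financing terms.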

Development of Nuclear Technologies

The history of nuclear power development has been one of unfulfilled promises and unexpected technical difficulties. The ringing promise from 1955, of "power too cheap to meter", is one that has come back to haunt the nuclear industry.
With most successful new technologies, people confidently expect that successive designs become cheaper and offer better performance. This has not been the experience with nuclear power: costs have consistently gone up in real terms and processes which were expected to prove easy to master continue to throw up technical difficulties. The issues surrounding waste processing and disposal which at first were assumed to be easily dealt with, remain neglected.
Despite this history of unfulfilled expectations, two factors have meant that nuclear power continues to be discussed as a major potential energy source. First, the promise of unlimited power independent of natural resource limitations and second, the attraction to engineers and scientists of meeting the technological challenges that are posed.
However, in the developed world, patience with nuclear technology is running out. Governments are no longer willing to invest more tax-payers' money in a technology which has provided such a poor rate of return. Electric utilities cannot simply pass on development costs to consumers. Equipment supply companies, which have generally made little or no money from nuclear technology, are unwilling to risk more money on developing technologies which might not work well and which might not have a market.
There is still talk about new nuclear technologies, but a critical look at the real resources going into them shows that little money is now being spent.

Various Methods for Recovery of Waste Heat

Low-Temperature Waste Heat Recovery Methods – A large amount of energy in the form of medium- to low-temperature gases or low-temperature liquids (less than about 250 degrees F) is released from process heating equipment, and much of this energy is wasted.

Conversion of Low Temperature Exhaust Waste Heat – making efficient use of the low temperature waste heat generated by prime movers such as micro-turbines, IC engines, fuel cells and other electricity producing technologies. The energy content of the waste heat must be high enough to be able to operate equipment found in cogeneration and trigeneration power and energy systems such as absorption chillers, refrigeration applications, heat amplifiers, dehumidifiers, heat pumps for hot water, turbine inlet air cooling and other similar devices.

Conversion of Low Temperature Waste Heat into Power – The steam-Rankine cycle is the principal method used for producing electric power from high temperature fluid streams. For the conversion of low temperature heat into power, the steam-Rankine cycle may be a possibility, along with other known power cycles, such as the organic-Rankine cycle.
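As a rough feel for what an organic-Rankine-cycle (ORC) machine can extract from a low-temperature stream, the sketch below uses assumed exhaust conditions and a 10% cycle efficiency; none of these numbers come from this text.

# Rough ORC output from a low-temperature exhaust stream (illustrative assumptions).
mass_flow_kg_s = 10.0            # exhaust mass flow, assumed
cp_kj_per_kg_k = 1.05            # specific heat of exhaust gas, assumed
t_in_c, t_out_c = 230.0, 120.0   # cool the stream from 230 C to 120 C, assumed
cycle_efficiency = 0.10          # typical low-temperature ORC efficiency, assumed

heat_recovered_kw = mass_flow_kg_s * cp_kj_per_kg_k * (t_in_c - t_out_c)
electric_output_kw = cycle_efficiency * heat_recovered_kw
print(f"heat recovered: {heat_recovered_kw:.0f} kWt, ORC output: {electric_output_kw:.0f} kWe")
# -> roughly 1.2 MW of recovered heat yields on the order of 100 kW of electricity.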

Small to Medium Air-Cooled Commercial Chillers – All existing commercial chillers, whether using waste heat, steam or natural gas, are water-cooled (i.e., they must be connected to cooling towers which evaporate water into the atmosphere to aid in cooling). This requirement generally limits the market to large commercial-sized units (150 tons or larger), because of the maintenance requirements for the cooling towers. Additionally, such units consume water for cooling, limiting their application in arid regions of the U.S. No suitable small-to-medium size (15 tons to 200 tons) air-cooled absorption chillers are commercially available for these U.S. climates. A small number of prototype air-cooled absorption chillers have been developed in Japan, but they use “hardware” technology that is not suited to the hotter temperatures experienced in most locations in the United States. Although developed to work with natural gas firing, these prototype air-cooled absorption chillers would also be suited to use waste heat as the fuel.

COGENERATION TECHNOLOGIES

A typical cogeneration system consists of an engine, steam turbine, or combustion turbine that drives an electrical generator. A waste heat exchanger recovers waste heat from the engine and/or exhaust gas to produce hot water or steam. Cogeneration produces a given amount of electric power and process heat with 10% to 30% less fuel than it takes to produce the electricity and process heat separately.
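The 10% to 30% fuel saving can be illustrated by comparing a cogeneration unit with separate production of the same electricity and heat. The efficiencies assumed below are typical round numbers, not values from this text.

# Fuel for 1 MWh of electricity plus 1.5 MWh of heat: separate production vs. CHP.
ELEC_DEMAND = 1.0      # MWh electricity
HEAT_DEMAND = 1.5      # MWh useful heat

# Separate production (assumed efficiencies)
grid_efficiency = 0.35
boiler_efficiency = 0.80
fuel_separate = ELEC_DEMAND / grid_efficiency + HEAT_DEMAND / boiler_efficiency

# Cogeneration unit (assumed 30% electrical, 45% thermal efficiency -> 75% overall)
chp_elec_eff = 0.30
chp_heat_eff = 0.45
fuel_chp = ELEC_DEMAND / chp_elec_eff           # fuel needed to meet the electric demand
heat_from_chp = fuel_chp * chp_heat_eff         # = 1.5 MWh, which happens to match the heat demand

saving = 1.0 - fuel_chp / fuel_separate
print(f"separate: {fuel_separate:.2f} MWh fuel, CHP: {fuel_chp:.2f} MWh fuel, saving: {saving:.0%}")
# -> roughly a 30% fuel saving, at the upper end of the 10-30% range quoted above.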

There are two main types of cogeneration techniques: "Topping Cycle" plants, and "Bottoming Cycle" plants.

A topping cycle plant generates electricity or mechanical power first. Facilities that generate electrical power may produce the electricity for their own use, and then sell any excess power to a utility. There are four types of topping cycle cogeneration systems:
• The first type burns fuel in a gas turbine or diesel engine to produce electrical or mechanical power. The exhaust provides process heat, or goes to a heat recovery boiler to create steam to drive a secondary steam turbine. This is a combined-cycle topping system.
• The second type burns fuel (any type) to produce high-pressure steam that then passes through a steam turbine to produce power. The exhaust provides low-pressure process steam. This is a steam-turbine topping system.
• A third type burns a fuel such as natural gas, diesel, wood, gasified coal, or landfill gas. The hot water from the engine jacket cooling system flows to a heat recovery boiler, where it is converted to process steam and hot water for space heating.
• The fourth type is a gas-turbine topping system. A natural gas turbine drives a generator. The exhaust gas goes to a heat recovery boiler that makes process steam and process heat.
A topping cycle cogeneration plant always uses some additional fuel, beyond what is needed for manufacturing, so there is an operating cost associated with the power production.

Bottoming cycle plants are much less common than topping cycle plants. These plants exist in heavy industries such as glass or metals manufacturing where very high temperature furnaces are used. A waste heat recovery boiler recaptures waste heat from a manufacturing heating process. This waste heat is then used to produce steam that drives a steam turbine to produce electricity. Since fuel is burned first in the production process, no extra fuel is required to produce electricity.

CRYSTALLINE SILICON

Monocrystalline Silicon cells are made from very pure, single-crystal silicon. Monocrystalline Silicon has a single, continuous crystal lattice structure with practically zero defects or impurities.
One of the many reasons Monocrystalline Silicon is superior to other types of silicon cells is its high efficiency - typically around 15%.
Because the manufacturing process required to produce Monocrystalline Silicon is more involved and detailed than for other types, Monocrystalline Silicon costs slightly more than other silicon technologies.
Polycrystalline Silicon - also referred to as "polysilicon" or "Poly-Si" is a material consisting of multiple small silicon crystals and has long been used as the conducting gate material in MOSFET and CMOS processing technologies. For these technologies, Polycrystalline Silicon is deposited using LPCVD reactors at high temperatures and is usually heavily n or p-doped.

The main advantage of Polycrystalline Silicon over other types of silicon is that the mobility can be orders of magnitude larger and the material also shows greater stability under electric field and light-induced stress. This allows far more complex, high-speed electrical circuits that can be created on the glass substrate along with the amorphous silicon devices, which are still needed for their low-leakage characteristics.
When Polycrystalline Silicon and Amorphous Silicon devices are used in the same process, this is called "hybrid processing."
A complete Polycrystalline Silicon active layer process is also used in some cases where a small pixel size is required, such as in projection displays.

GEOTHERMAL ENERGY

The Earth's crust is a bountiful source of energy—and fossil fuels are only part of the story. Heat or thermal energy is by far the more abundant resource. To put it in perspective, the thermal energy in the uppermost six miles of the Earth's crust amounts to 50,000 times the energy of all oil and gas resources in the world!
The word "geothermal" literally means "Earth" plus "heat." The geothermal resource is the world's largest energy resource and has been used by people for centuries. In addition, it is environmentally friendly. It is a renewable resource and can be used in ways that respect rather than upset our planet's delicate environmental balance.
Geothermal power plants operating around the world are proof that the Earth's thermal energy is readily converted to electricity in geologically active areas. Many communities, commercial enterprises, universities, and public facilities in the western United States are heated directly with the water from underground reservoirs. For the homeowner or building owner anywhere in the United States, the emergence of geothermal heat pumps brings the benefits of geothermal energy to everyone's doorstep.
There's a relatively simple concept underlying all the ways geothermal energy is used: The flow of thermal energy is available from beneath the surface of the Earth and especially from subterranean reservoirs of hot water. Over the years, technologies have evolved that allow us to take advantage of this heat.
In fact, electric power plants driven by geothermal energy provide over 44 billion kilowatt hours of electricity worldwide per year, and world capacity is growing at approximately 9% per year. To produce electric power from geothermal resources, underground reservoirs of steam or hot water are tapped by wells and the steam rotates turbines that generate electricity. Typically, water is then returned to the ground to recharge the reservoir and complete the renewable energy cycle.
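At the quoted 9% annual growth rate, worldwide geothermal generation compounds quickly; the short projection below simply applies that rate to the 44 billion kWh figure.

# Compound growth of worldwide geothermal generation at the quoted 9% per year.
import math

current_output = 44.0   # billion kWh per year, from the text
growth_rate = 0.09      # quoted annual growth rate

doubling_time_years = math.log(2) / math.log(1 + growth_rate)
in_ten_years = current_output * (1 + growth_rate) ** 10
print(f"doubling time: {doubling_time_years:.1f} years")
print(f"output after 10 years at this rate: {in_ten_years:.0f} billion kWh")
# -> output doubles in roughly 8 years; about 104 billion kWh after a decade at this rate.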
Underground reservoirs are also tapped for "direct-use" applications. In these instances, hot water is channeled to greenhouses, spas, fish farms, and homes to fill space heating and hot water needs.

POWER TOWER SYSTEMS

A power tower converts sunshine into clean electricity for the world’s electricity grids. The technology utilizes many large, sun-tracking mirrors (heliostats) to focus sunlight on a receiver at the top of a tower. A heat transfer fluid heated in the receiver is used to generate steam, which, in turn, is used in a conventional turbine-generator to produce electricity. Early power towers (such as the Solar One plant) utilized steam as the heat transfer fluid; current designs (including Solar Two) utilize molten nitrate salt because of its superior heat transfer and energy storage capabilities. Individual commercial plants will be sized to produce anywhere from 50 to 200 MW of electricity.

Power towers enjoy the benefits of two successful, large-scale demonstration plants. The 10-MW Solar One plant near Barstow, CA, demonstrated the viability of power towers, producing over 38 million kilowatt-hours of electricity during its operation from 1982 to 1988. The Solar Two plant was a retrofit of Solar One to demonstrate the advantages of molten salt for heat transfer and thermal storage.

Utilizing its highly efficient molten-salt energy storage system, Solar Two successfully demonstrated efficient collection of solar energy and dispatch of electricity, including the ability to routinely produce electricity during cloudy weather and at night. In one demonstration, it delivered power to the grid 24 hours per day for nearly 7 straight days before cloudy weather interrupted operation.

RECOVERY OF WASTE HEAT FROM COGENERATION AND TRIGENERATION PLANT

In most cogeneration and trigeneration power and energy systems, the exhaust gas from the electric generation equipment is ducted to a heat exchanger to recover the thermal energy in the gas. These heat exchangers are air-to-water heat exchangers, where the exhaust gas flows over some form of tube and fin heat exchange surface and the heat from the exhaust gas is transferred to make hot water or steam. The hot water or steam is then used to provide hot water or steam heating and/or to operate thermally activated equipment, such as an absorption chiller for cooling or a desiccant dehumidifier for dehumidification.
Many of the waste heat recovery technologies used in building co/trigeneration systems require hot water, some at moderate pressures of 15 to 150 psig. In the cases where additional steam or pressurized hot water is needed, it may be necessary to provide supplemental heat to the exhaust gas with a duct burner.
In some applications air-to-air heat exchangers can be used. In other instances, if the emissions from the generation equipment are low enough, such as is with many of the microturbine technologies, the hot exhaust gases can be mixed with make-up air and vented directly into the heating system for building heating.
In the majority of installations, a flapper damper or "diverter" is employed to vary the flow across the heat transfer surfaces of the heat exchanger, in order to maintain the design temperature of the hot water or the desired steam generation rate.
In some co/trigeneration designs, the exhaust gases can be used to activate a thermal wheel or a desiccant dehumidifier. Thermal wheels use the exhaust gas to heat a wheel with a medium that absorbs the heat and then transfers the heat when the wheel is rotated into the incoming airflow.
A professional engineer should be involved in the design and sizing of the waste heat recovery section. Proper and economical operation depends on many related factors, such as the thermal capacity of the exhaust gases, the exhaust flow rate, the sizing and type of heat exchanger, and the desired parameters over the full range of operating conditions of the co/trigeneration system.

COMPRESSED AIR ENERGY STORAGE

On nights and weekends, Compressed Air Energy Storage ("CAES") systems compress air at the surface and then pump it underground into a cavern or former mine, where it is stored as an energy source. During the day and at peak times, the air is released and heated using a small amount of natural gas. The heated air flows through a turbine generator to produce electricity.

In conventional gas-turbine power generation, the air that drives the turbine is compressed and heated using natural gas. On the other hand, compressed air energy storage technology needs less gas to produce power during periods of peak demand because it uses air that has already been compressed and stored underground.

Two major compressed air energy storage plants exist worldwide: an 11-year-old, 110-megawatt CAES plant in Alabama, and a 23-year-old, 290 MW facility in Germany. A new CAES plant under development near Cleveland will be capable of generating 2,700 MW. Currently, manufacturers can build CAES machinery for facilities ranging from 5 to 350 MW. Palo Alto, Calif.-based EPRI has estimated that more than 85 percent of the U.S. has geological characteristics that will accommodate underground compressed air energy storage. Studies have concluded that the technology is competitive with combustion turbines and combined-cycle units, even without attributing value to some of the additional benefits of energy storage.

Compressed air energy storage utilities can use off-peak electricity to compress air and store it in airtight underground caverns. When the air is released from the underground mine or cavern, it expands through a combustion turbine to create electricity. In a conventional gas-turbine power plant, nearly two-thirds of the turbine's output is consumed in driving the machine's own compressor. By comparison, a compressed-air storage plant uses the already-compressed air to power the turbines and generate electricity during peak periods, conserving some natural gas.
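A minimal sketch of why CAES burns less gas per delivered kilowatt-hour follows. The heat rate and off-peak electricity figures below are representative published values for existing CAES plants, assumed here for illustration rather than taken from this text.

# Gas use per delivered kWh: conventional simple-cycle turbine vs. CAES (illustrative figures).
CONVENTIONAL_HEAT_RATE = 10000.0   # kJ of gas per kWh, assumed for a simple-cycle turbine
CAES_HEAT_RATE = 4300.0            # kJ of gas per kWh, representative CAES value (assumed)
CAES_OFFPEAK_KWH_IN = 0.75         # off-peak kWh of compression energy per kWh delivered (assumed)

gas_saving = 1.0 - CAES_HEAT_RATE / CONVENTIONAL_HEAT_RATE
print(f"gas saved per delivered kWh: {gas_saving:.0%}")
print(f"but each delivered kWh also consumes ~{CAES_OFFPEAK_KWH_IN} kWh of stored off-peak electricity")
# The compression work is shifted to cheap off-peak electricity, so peak-period gas use drops sharply.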


Compressed air energy storage has a few disadvantages. One is that energy is lost when the air is "pumped" into the cavern and then re-extracted as compressed air; some estimates put the loss as high as 80 percent. That, in effect, means that the selling price must accommodate that shortcoming, which may drive up rates for consumers. Also, building underground storage can be expensive, which might make some prospective projects infeasible. But, with gas prices estimated to be in the $5-6 per million BTU range in the short to medium term, an investment in underground storage could pay for itself over time. Moreover, if the nation develops an energy policy that pushes renewable power sources, the idea may catch on. If that happens and a debate over the technology ensues, developers say that they can win approval from stakeholders. Because storage is used with renewable forms of power, capital costs can be more readily recouped. Furthermore, wind and solar energy can be stored whenever they are generated and then released on demand, helping to counter the argument that those power sources are intermittent and therefore unreliable.

COMPRESSED NATURAL GAS VEHICLES

According to the Natural Gas Vehicle Coalition (NGVC), as of 2005 there are 130,000 light- and heavy-duty compressed natural gas (CNG) and liquefied natural gas (LNG) vehicles in the United States and 5 million worldwide.
Dedicated natural gas vehicles (NGVs) are designed to run only on natural gas; bi-fuel NGVs have two separate fueling systems that enable the vehicle to use either natural gas or a conventional fuel (gasoline or diesel). In general, dedicated NGVs demonstrate better performance and have lower emissions than bi-fuel vehicles because their engines are optimized to run on natural gas. In addition, the vehicle does not have to carry two types of fuel, thereby increasing cargo capacity and reducing weight.
There are a few light-duty NGVs still available, but if you want a specific type of vehicle, you may want to consider retrofitting a vehicle to an NGV by using an aftermarket conversion system. Heavy-duty NGVs are also available as trucks, buses, and shuttles. Approximately one of every five new transit buses in the United States is powered by natural gas.
As a new twist, tests are being conducted using natural gas vehicles that are fueled with a blend of compressed natural gas and hydrogen.
This model year, auto manufacturers are producing fewer models than in years past. In order to get more vehicle options, you may choose to retrofit your own vehicle.
CNG fueling stations are located in most major cities and in many rural areas. Public LNG stations are limited and used mostly by fleets and heavy-duty trucks. LNG is available through suppliers of cryogenic liquids.

INCREASING NITROGEN OXIDE EMISSIONS

NOx and the pollutants formed from NOx can be transported over long distances, following the pattern of prevailing winds in the U.S. This means that problems associated with NOx are not confined to areas where NOx are emitted. Therefore, controlling NOx is often most effective if done from a regional perspective, rather than focusing on sources in one local area.
Since 1970, EPA has tracked emissions of the six principal air pollutants - carbon monoxide, lead, nitrogen oxides, particulate matter, sulfur dioxide, and volatile organic compounds. Emissions of all of these pollutants have decreased significantly, except for NOx, which has increased approximately 10 percent over this period.
Selective Catalytic Reduction (SCR) is a proven and effective method of reducing nitrogen oxides, an air pollutant associated with the power generation process. Nitrogen oxides are a contributor to ground-level ozone.

SCR systems work similarly to the catalytic converter used to reduce automobile emissions. Before the exhaust gases go up the smokestack, they pass through the SCR system, where anhydrous ammonia reacts with nitrogen oxides and converts them to nitrogen and water.
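A back-of-the-envelope feel for the SCR chemistry: the standard reaction 4 NO + 4 NH3 + O2 -> 4 N2 + 6 H2O implies roughly half a kilogram of ammonia per kilogram of NO removed. The sketch below assumes this stoichiometry and complete reaction, which real systems only approach.

# Ammonia demand for SCR, assuming 4 NO + 4 NH3 + O2 -> 4 N2 + 6 H2O and complete reaction.
NH3_MOLAR_MASS = 17.0   # g/mol
NO_MOLAR_MASS = 30.0    # g/mol

def ammonia_needed_kg(no_removed_kg):
    """Stoichiometric ammonia (1 mol NH3 per mol NO) for a given mass of NO removed."""
    return no_removed_kg * NH3_MOLAR_MASS / NO_MOLAR_MASS

print(f"NH3 per tonne of NO removed: {ammonia_needed_kg(1000.0):.0f} kg")
# -> about 570 kg of anhydrous ammonia per tonne of NO, before allowing for ammonia slip.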

CANOLA BIO DIESEL

Canola biodiesel is an environmentally friendly, renewable energy source that could also produce cost savings for taxpayers and private businesses, and it is produced from canola grown by farmers.

Initial research conducted by the University of Saskatchewan and the AAFC Saskatoon Research Centre has found that each ton of renewable biodiesel fuel saves five times its weight in diesel fuel. As well, engines using biodiesel demonstrate wear rates as much as 50% lower than those using regular commercial fuels – effectively doubling engine life.

Canola is a member of the Brassica Family, which includes broccoli, cabbage, cauliflower, mustard, radish, and turnip. It is a variant of the crop rapeseed. Grown for its seed, the seed is crushed for the oil contained within. After the oil is extracted, the by-product is a protein-rich meal used by the intensive livestock industry.
Canola is a very small seed, which means sowing depth must be controlled. The current sowing practice is to cover the seed lightly with soil, which provides more protection from drying out after germination.

Canola is generally sown in autumn and develops over winter, with flowers emerging in the spring; it is harvested in early summer. With a growing period of around 180-200 days, climatic effects such as sudden heat waves can reduce yields, and hot, dry conditions can limit its oil content. Summer weather ensures low moisture (less than 6%) at harvest. Carry-in stocks of canola are minimal because of a lack of on-farm storage. Canola is a good rotational crop, acting as a break crop for cereal root diseases. However, for disease-related reasons, a rotation period of 3-5 years is required for canola crops.

The suitability of an oil as an engine fuel is often judged by its iodine value (IV): the amount of iodine in grams absorbed per 100 ml of oil is the IV. The higher the IV, the more unsaturated the oil (the greater the number of double bonds available) and the higher its potential to ‘gum up’ when used as a fuel in an engine. Though some oils have a low IV and are suitable without any further processing other than extraction and filtering, the majority of vegetable and animal oils have an IV which does not permit their use as a neat fuel.

OCEAN WATER DESALINATION

Ocean Water Desalination involves removing the salt from water to make it drinkable. There are several ways to do it, and it is not a new idea at all. Sailors have been using solar evaporation to separate salt from sea water for at least several thousand years. Most of the world’s 1,500 or so desalination plants use distillation as the process, and there are also flash evaporation and electrodialysis methods. All these methods are very expensive, so historically desalination has only been used where other alternatives are also very expensive, such as desert cities. However, an exploding world demand for potable water has led to a lot of research and development in this field and a new, cheaper process has been developed that involves heating sea water and forcing it through membranes to remove the salt from the water. The process is even cheaper if the desalination plant can be located next to an electrical power plant that is already heating sea water to use for cooling the electrical generating units. Even so, it is still more expensive than other alternatives, but it is indeed becoming more competitive and could become a viable alternative to Edwards water. There is also a lot of interest in using local, brackish groundwaters as a source for desalination instead of ocean water. Such waters typically have only one-tenth the salinity of sea water, so desalination can be accomplished more easily and transportation is less of an issue.

In April 2000 the Texas Water Development Board approved a $59,000 grant to the Lavaca-Navidad River Authority to determine whether building a $400 million plant on Matagorda Bay at Point Comfort would be economically and environmentally feasible. There is a power plant at this location that could supply the heated sea water for the membrane process. The study was released two months later, and the estimated cost rose to $755 million, but this included the cost of transmission facilities to San Antonio. The study estimated that a 50-50 mix of desalinated water and water treated by other conventional methods could be delivered to San Antonio users for about $2.80 per thousand gallons, compared to a current cost of $1.36 per thousand gallons. A similar plant being constructed in Tampa, Florida will raise customers' water bills by about $7.50 a month. (1), (2)

A major advantage of desalination of ocean water is that water is always available, even in the most severe droughts. The main environmental concerns of this project are increased salinity levels in Matagorda Bay and the fate of plankton and tiny sea creatures in the water removed for the process. Supporters say it won't raise the salinity level appreciably and that organisms can be vacuumed out and returned to the ocean. No one knows yet how this project would be funded.

FUEL CELLS

Hydrogen's potential use in fuel and energy applications includes powering vehicles, running turbines or fuel cells to produce electricity, and generating heat and electricity for buildings. The current focus is on hydrogen's use in fuel cells.

A fuel cell works like a battery but does not run down or need recharging. It will produce electricity and heat as long as fuel (hydrogen) is supplied. A fuel cell consists of two electrodes—a negative electrode (or anode) and a positive electrode (or cathode)—sandwiched around an electrolyte. Hydrogen is fed to the anode, and oxygen is fed to the cathode. Activated by a catalyst, hydrogen atoms separate into protons and electrons, which take different paths to the cathode. The electrons go through an external circuit, creating a flow of electricity. The protons migrate through the electrolyte to the cathode, where they reunite with oxygen and the electrons to produce water and heat. Fuel cells can be used to power vehicles or to provide electricity and heat to buildings.
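The electrochemistry described above sets a theoretical cell voltage. A minimal sketch follows, using the standard Gibbs free energy of the hydrogen-oxygen reaction (an assumed textbook value, not a figure from this text).

# Theoretical (reversible) voltage of a hydrogen fuel cell: E = -dG / (n * F)
GIBBS_FREE_ENERGY_J_PER_MOL = -237000.0   # H2 + 1/2 O2 -> H2O(l), standard conditions (assumed)
ELECTRONS_PER_H2 = 2                      # two electrons transferred per hydrogen molecule
FARADAY = 96485.0                         # C/mol

cell_voltage = -GIBBS_FREE_ENERGY_J_PER_MOL / (ELECTRONS_PER_H2 * FARADAY)
print(f"reversible cell voltage: {cell_voltage:.2f} V")
# -> about 1.23 V per cell; practical cells run nearer 0.6-0.8 V, so many cells are stacked in series.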

A phosphoric acid fuel cell (PAFC) consists of an anode and a cathode made of a finely dispersed platinum catalyst on carbon paper, and a silicon carbide matrix that holds the phosphoric acid electrolyte. This is the most commercially developed type of fuel cell and is being used in hotels, hospitals, and office buildings. The phosphoric acid fuel cell can also be used in large vehicles, such as buses.

Solid oxide fuel cells (SOFC) currently under development use a thin layer of zirconium oxide as a solid ceramic electrolyte, and include a lanthanum manganate cathode and a nickel-zirconia anode. This is a promising option for high-powered applications, such as industrial uses or central electricity generating stations.

HYDROGEN FUEL

Since the early 19th century, scientists have recognized hydrogen as a potential source of fuel. Current uses of hydrogen are in industrial processes, rocket fuel, and spacecraft propulsion. With further research and development, this fuel could also serve as an alternative source of energy for heating and lighting homes, generating electricity, and fueling motor vehicles. When produced from renewable resources and technologies, such as hydro, solar, and wind energy, hydrogen becomes a renewable fuel.

Composition of Hydrogen

Hydrogen is the simplest and most common element in the universe. It has the highest energy content per unit of weight—52,000 British Thermal Units (Btu) per pound (or 120.7 kilojoules per gram)—of any known fuel. Moreover, when cooled to a liquid state, this low-weight fuel takes up 1/700 as much space as it does in its gaseous state. This is one reason hydrogen is used as a fuel for rocket and spacecraft propulsion, which requires fuel that is low-weight, compact, and has a high energy content.
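The two figures quoted for hydrogen's energy content can be checked against each other with a simple unit conversion:

# Checking that 52,000 Btu/lb and 120.7 kJ/g describe the same energy content.
BTU_TO_KJ = 1.055
LB_TO_G = 453.6

energy_btu_per_lb = 52000.0
energy_kj_per_g = energy_btu_per_lb * BTU_TO_KJ / LB_TO_G
print(f"{energy_btu_per_lb:.0f} Btu/lb = {energy_kj_per_g:.1f} kJ/g")
# -> about 121 kJ/g, consistent with the 120.7 kJ/g figure quoted above.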

In a free state and under normal conditions, hydrogen is a colorless, odorless, and tasteless gas. The basic hydrogen molecule (H2) exists as two atoms bound together by shared electrons. Each atom is composed of one proton and one orbiting electron. Hydrogen is about 1/14 as dense as air, and some scientists believe it to be the source of all other elements through the process of nuclear fusion. It usually exists in combination with other elements, such as oxygen in water, carbon in methane, and in trace amounts as organic compounds. Because it is so chemically active, it rarely stands alone as an element.

FLUIDIZED BED BOILER FOR COAL

The boiler was a small coal burner by today's standards, but large enough to provide heat and steam for much of the university campus. Yet the new boiler built beside the campus tennis courts was unlike most other boilers in the world.
It was called a "fluidized bed boiler." In a typical coal boiler, coal would be crushed into very fine particles, blown into the boiler, and ignited to form a long, lazy flame. Or in other types of boilers, the burning coal would rest on grates. But in a "fluidized bed boiler," crushed coal particles float inside the boiler, suspended on upward-blowing jets of air. The red-hot mass of floating coal — called the "bed" — would bubble and tumble around like boiling lava inside a volcano. Scientists call this being "fluidized." That's how the name "fluidized bed boiler" came about.
Why does a "fluidized bed boiler" burn coal cleaner?
There are two major reasons. One, the tumbling action allows limestone to be mixed in with the coal. Limestone is a sulfur sponge — it absorbs sulfur pollutants. As coal burns in a fluidized bed boiler, it releases sulfur. But just as rapidly, the limestone tumbling around beside the coal captures the sulfur. A chemical reaction occurs, and the sulfur gases are changed into a dry powder that can be removed from the boiler. (This dry powder — called calcium sulfate — can be processed into the wallboard we use for building walls inside our houses.)
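The sulfur capture described above follows, in simplified form, CaCO3 + SO2 + 1/2 O2 -> CaSO4 + CO2. The sketch below uses that stoichiometry plus an assumed calcium-to-sulfur feed ratio (real boilers feed excess limestone) to estimate limestone demand; the feed ratio is an assumption, not a figure from this text.

# Limestone demand for sulfur capture in a fluidized bed: CaCO3 + SO2 + 1/2 O2 -> CaSO4 + CO2
CACO3_MOLAR_MASS = 100.0   # g/mol limestone
S_MOLAR_MASS = 32.0        # g/mol sulfur

def limestone_kg_per_kg_sulfur(ca_to_s_ratio=2.5):
    """Limestone fed per kg of sulfur, with an assumed excess Ca:S molar feed ratio."""
    stoichiometric = CACO3_MOLAR_MASS / S_MOLAR_MASS   # ~3.1 kg per kg S at Ca:S = 1
    return stoichiometric * ca_to_s_ratio

print(f"limestone per kg sulfur (Ca:S = 2.5): {limestone_kg_per_kg_sulfur():.1f} kg")
# -> roughly 8 kg of limestone per kg of sulfur in the coal at a typical excess feed ratio.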
The second reason a fluidized bed boiler burns cleaner is that it burns "cooler." Now, cooler in this sense is still pretty hot — about 1400 degrees F. But older coal boilers operate at temperatures nearly twice that (almost 3000 degrees F). NOx forms when a fuel burns hot enough to break apart nitrogen molecules in the air and cause the nitrogen atoms to join with oxygen atoms. But 1400 degrees isn't hot enough for that to happen, so very little NOx forms in a fluidized bed boiler.

CLEAN COAL TECHNOLOGY

One way is to clean the coal before it arrives at the power plant. One of the ways this is done is by simply crushing the coal into small chunks and washing it. Some of the sulfur that exists in tiny specks in coal (called "pyritic sulfur" because it is combined with iron to form iron pyrite, otherwise known as "fool's gold") can be washed out of the coal in this manner. Typically, in one washing process, the coal chunks are fed into a large water-filled tank. The coal floats to the surface while the sulfur impurities sink. There are facilities around the country called "coal preparation plants" that clean coal this way.
Not all of coal's sulfur can be removed like this, however. Some of the sulfur in coal is actually chemically connected to coal's carbon molecules instead of existing as separate particles. This type of sulfur is called "organic sulfur," and washing won't remove it. Several processes have been tested to mix the coal with chemicals that break the sulfur away from the coal molecules, but most of these processes have proven too expensive. Scientists are still working to reduce the cost of these chemical cleaning processes.
Most modern power plants — and all plants built after 1978 — are required to have special devices installed that clean the sulfur from the coal's combustion gases before the gases go up the smokestack. The technical name for these devices is "flue gas desulfurization units," but most people just call them "scrubbers" — because they "scrub" the sulfur out of the smoke released by coal-burning boilers.
The Clean Coal Technology Program tested several new types of scrubbers that proved to be more effective, lower cost, and more reliable than older scrubbers. The program also tested other types of devices that sprayed limestone inside the tubing (or "ductwork") of a power plant to absorb sulfur pollutants.
Most scrubbers rely on a very common substance found in nature called "limestone." We literally have mountains of limestone throughout this country. When crushed and processed, limestone can be made into a white powder. Limestone can be made to absorb sulfur gases under the right conditions — much like a sponge absorbs water.

B100 DIESEL

B100 Biodiesel - also referred to as "neat biodiesel" - is a renewable fuel for diesel engines and turbines that is made from natural oils (either virgin oil or recycled/waste vegetable oil).
B100 Biodiesel can be used in its pure, 100% form (B100) or mixed at any concentration with petroleum diesel for use in existing diesel engines or turbines with little or no modification.

B100 Biodiesel has the following benefits:
• Biodiesel is very easy to use. B100 Biodiesel can be blended with petroleum diesel at any time in your fuel tank. In post-1994 vehicles, no conversion of the vehicle is required. Older vehicles may have rubber fuel lines and/or seals in the fuel system; B100 Biodiesel will gradually swell rubber and degrade it. Viton is resistant to B100 Biodiesel, and you may need to replace your fuel lines/seals with it.
• B100 Biodiesel has up to 95% fewer emissions. Greenhouse gas emissions, nitrogen oxides, particulate matter, and carcinogens are all greatly reduced.
• B100 Biodiesel is both renewable and sustainable, as it is made from animal fats (e.g. poultry fat or beef tallow), virgin vegetable oils, or recycled/waste vegetable oils.
• B100 is non-hazardous and is less toxic than sugar! There are no governmental requirements or regulations to call out HAZMAT should there be a spill of B100 Biodiesel.
• Regarding the sustainability of B100 Biodiesel: in a recent cradle-to-grave life-cycle analysis, B100 Biodiesel came out very positive in a "Net Energy Balance" study. One study found that B100 Biodiesel yields 3.2 units of fuel-product energy "output" for every unit of fossil energy consumed ("input") in its life cycle. The only biofuel that exceeds the 3.2-to-1 Net Energy Balance of B100 Biodiesel is biomethane, which comes in at 7.7 to 1 (a minimal calculation sketch follows this list).
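The Net Energy Balance figures quoted above reduce to a single ratio. The short Python sketch below is illustrative only; the function and variable names are my own, and the 3.2:1 and 7.7:1 ratios are simply the numbers cited in the text.

    # Minimal sketch of a Net Energy Balance (NEB) calculation, using the
    # ratios quoted above. Names are illustrative, not from any standard.

    def net_energy_balance(fuel_energy_out: float, fossil_energy_in: float) -> float:
        """Units of fuel-product energy delivered per unit of fossil energy consumed."""
        return fuel_energy_out / fossil_energy_in

    # A life cycle that consumes 1.0 unit of fossil energy and delivers
    # 3.2 units of biodiesel energy has an NEB of 3.2.
    print(net_energy_balance(3.2, 1.0))   # -> 3.2 (B100 biodiesel, per the study cited)
    print(net_energy_balance(7.7, 1.0))   # -> 7.7 (biomethane, per the figure cited)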

ENHANCED OIL RECOVERY

EOR Technologies provides Enhanced Oil Recovery technologies, engineering, and consulting services - a significant opportunity for oil and natural gas well owners and operators to increase their oil production and revenues through our range of EOR technologies and services.
In the U.S., Enhanced Oil Recovery represents a $24 trillion market opportunity according to the U.S. Department of Energy. The $24 trillion figure is based on oil at $100/bbl: the DOE's studies and reports indicate that about 240 billion barrels of oil can be recovered through Enhanced Oil Recovery.
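The $24 trillion figure follows directly from the two numbers quoted above; a quick arithmetic check (the variable names are illustrative):

    # 240 billion recoverable barrels priced at $100/bbl, per the figures cited above.
    recoverable_barrels = 240e9        # barrels (DOE estimate cited in the text)
    oil_price = 100.0                  # dollars per barrel (the text's assumption)
    market_value = recoverable_barrels * oil_price
    print(f"${market_value / 1e12:.0f} trillion")   # -> $24 trillion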
Enhanced Oil Recovery also has the environmental benefit of removing carbon dioxide from the atmosphere and sequestering it in oil & gas reservoirs. Through CO2 injection, the "stranded" oil and gas that would not otherwise have been recovered is produced, leaving the carbon dioxide behind.
All EcoGeneration Solutions, LLC. companies are committed to reducing and eliminating greenhouse gas emissions and carbon dioxide emissions through our sustainable power and energy operations.
In association with the Renewable Energy Institute, affiliate companies and investors, we provide "turnkey" Renewable Energy Project development services that range from initial Engineering Feasibility & Economic Analysis Studies through "turnkey" project development, including construction/installation, start-up and commissioning, Operations & Maintenance, and Long Term Service Agreements for the lifetime of our power plants and energy systems.

2 BLADED AND 3 BLADED WIND TURBINES

Today's "modern" 3-bladed wind turbines represent the latest technological improvements in wind turbine generators, and are superior to the 20-30 year old technology that 2-bladed wind turbines represent.
First of all, it is important to remember that a 2-bladed wind turbine may generate only about 90% of the power of a 3-bladed wind turbine of comparable size. While a 2-bladed wind turbine saves the weight of one extra blade compared with a 3-bladed wind turbine, engineers of the most efficient wind turbines have determined that the extra blade used on 3-bladed wind turbines provides the optimum efficiency and design for the "ideal" wind turbine generators of today.
Secondly, the top three wind turbine manufacturers have standardized on the 3-bladed wind turbine; they do not manufacture any 2-bladed wind turbines. Plainly stated, a wind turbine with an even number of blades (2 or 4) is not of optimum design or efficiency. In fact, this debate was settled years ago when wind turbine engineers and designers began building wind turbines over 600 kW in power output.
The leading wind turbine manufacturers and their engineers have concluded that three is the optimum number of blades, owing to the stability of the wind turbine and the significant wind loads and stresses placed on a 2-bladed machine. A wind turbine with an odd number of blades can be treated like a disc when calculating its computational fluid dynamics, whereas turbines with an even number of blades - such as the 2-bladed wind turbines of the past - have stability problems for a machine with a stiff structure. The reason is simple: when a 2-bladed turbine's blades are in the vertical position, the top blade bends backwards because it is generating maximum power from the wind, while the bottom blade is aligned with the tower and shielded from the wind. This imbalance generates a huge amount of stress and load on the wind turbine and its primary components, such as the bearings, shaft, and transmission.

WIND RESOURCE ASSESSMENT

All markets for wind turbines require an estimate of how much wind energy is available at potential development sites. Correct estimation of the energy available in the wind can make or break the economics of wind farm development. Wind maps developed in the late '70s and early '80s provided reasonable estimates of areas in which good wind resources could be found. But new tools and new data available from satellites and new sensing devices now allow researchers to create even more accurate and detailed wind maps of the world.
Wind mapping techniques developed by the National Renewable Energy Lab ("NREL") and U.S. companies are being used to produce high-resolution projections of U.S. and foreign regions that are painting a whole new picture of wind potential. These maps are created using highly accurate GPS mapping tools and a vast array of satellite, weather balloon, and meteorological tower data, combined with much-improved numerical computer models. The higher horizontal resolution of these maps (1 km or finer) allows for more accurate wind turbine siting and has also led to the recognition of higher-class winds in areas where none were thought to exist.
The ability to accurately predict when the wind will blow will help remove barriers to wind energy development by allowing wind-power-generating facilities to commit to power purchases in advance. NREL researchers work with federal, state, and private organizations to validate the nation's wind resources and support advances in wind forecasting techniques and dissemination. Wind resource validation is important for both wind resource assessment and the integration of wind farms into an energy grid. Validating new, high-resolution wind resource maps will provide an accurate reading of the wind resource at a particular site. Development of short-term (1 to 4 hours) forecasting tools will help energy producers proceed with new wind farm projects and avoid the penalties they must pay if they do not meet their hourly generation targets. In addition, validating new high-resolution wind resource maps will give people interested in developing wind energy projects greater confidence as to the level of wind resource for a particular site.
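To make the link between a mapped wind speed and project economics concrete, here is a rough screening sketch. It is not an NREL method from the text: the Rayleigh wind-speed distribution, the power coefficient, and the rotor size are all assumptions I am introducing for illustration, and the result ignores turbine losses and cut-in/cut-out behaviour, so it is an optimistic upper bound.

    import math

    def rayleigh_aep_kwh(mean_speed_ms: float, rotor_diameter_m: float,
                         cp: float = 0.40, rho: float = 1.225) -> float:
        """Rough annual energy estimate (kWh/yr) assuming a Rayleigh wind-speed
        distribution, for which mean(v^3) = (6/pi) * mean(v)^3. Ignores losses
        and cut-in/cut-out, so treat the result as a screening upper bound."""
        area = math.pi * (rotor_diameter_m / 2) ** 2
        mean_cubed = (6 / math.pi) * mean_speed_ms ** 3
        mean_power_w = 0.5 * rho * area * mean_cubed * cp
        return mean_power_w * 8760 / 1000   # hours per year -> kWh

    # Example: a map cell with a 7 m/s annual mean wind speed and an 80 m rotor.
    print(f"{rayleigh_aep_kwh(7.0, 80.0):,.0f} kWh/yr")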

SOLAR DISH ENGINE

A Solar Dish-Engine System is an electric generator that “burns” sunlight instead of gas or coal to produce electricity. The major parts of a system are the solar concentrator and the power conversion unit. Descriptions of these subsystems and how they operate are presented below.
The dish, which is more specifically referred to as a concentrator, is the primary solar component of the system. It collects the solar energy coming directly from the sun (the solar energy that causes you to cast a shadow) and concentrates or focuses it on a small area. The resultant solar beam has all of the power of the sunlight hitting the dish but is concentrated in a small area so that it can be more efficiently used. Glass mirrors reflect ~92% of the sunlight that hits them, are relatively inexpensive, can be cleaned, and last a long time in the outdoor environment, making them an excellent choice for the reflective surface of a solar concentrator. The dish structure must track the sun continuously to reflect the beam into the thermal receiver.
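As a back-of-the-envelope illustration of what the concentrator does, the sketch below estimates the power delivered to the receiver and the geometric concentration ratio. Only the ~92% mirror reflectance comes from the text; the dish diameter, irradiance, and receiver aperture are assumptions chosen for illustration.

    import math

    dish_diameter_m = 10.0               # assumed parabolic dish diameter
    direct_normal_irradiance = 1000.0    # W/m^2, typical clear-sky value (assumption)
    mirror_reflectance = 0.92            # from the text: glass mirrors reflect ~92%
    receiver_aperture_d_m = 0.2          # assumed receiver aperture diameter

    dish_area = math.pi * (dish_diameter_m / 2) ** 2
    receiver_area = math.pi * (receiver_aperture_d_m / 2) ** 2

    power_on_receiver_kw = dish_area * direct_normal_irradiance * mirror_reflectance / 1000
    concentration_ratio = dish_area / receiver_area

    print(f"~{power_on_receiver_kw:.0f} kW delivered at a geometric concentration of ~{concentration_ratio:.0f}x")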
The power conversion unit includes the thermal receiver and the engine/generator. The thermal receiver is the interface between the dish and the engine/generator. It absorbs the concentrated beam of solar energy, converts it to heat, and transfers the heat to the engine/generator. A thermal receiver can be a bank of tubes with a cooling fluid, usually hydrogen or helium, which is the heat transfer medium and also the working fluid for an engine. Alternate thermal receivers are heat pipes wherein the boiling and condensing of an intermediate fluid is used to transfer the heat to the engine.
The engine/generator system is the subsystem that takes the heat from the thermal receiver and uses it to produce electricity. The most common type of heat engine used in dish-engine systems is the Stirling engine. A Stirling engine uses heat provided from an external source (like the sun) to move pistons and make mechanical power, similar to the internal combustion engine in your car. The mechanical work, in the form of the rotation of the engine’s crankshaft, is used to drive a generator and produce electrical power.
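The heat-to-electricity chain described above can be sketched with a couple of efficiency factors. Only the overall flow (receiver heat, engine, generator) comes from the text; the heat input, temperatures, and efficiency values below are assumptions for illustration.

    # Illustrative conversion chain for a dish-Stirling unit.

    def carnot_limit(t_hot_k: float, t_cold_k: float) -> float:
        """Ideal (Carnot) efficiency bound for a heat engine between two temperatures."""
        return 1.0 - t_cold_k / t_hot_k

    receiver_heat_kw = 72.0        # thermal power into the receiver (assumed, as in the previous sketch)
    engine_efficiency = 0.35       # assumed Stirling thermal-to-mechanical efficiency
    generator_efficiency = 0.93    # assumed generator mechanical-to-electrical efficiency

    electric_kw = receiver_heat_kw * engine_efficiency * generator_efficiency
    print(f"Carnot limit at 720 C / 30 C: {carnot_limit(993.0, 303.0):.0%}")   # ~69%
    print(f"Estimated electrical output: {electric_kw:.1f} kW")                # ~23 kW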
In addition to the Stirling engine, microturbines and concentrating photovoltaics are also being evaluated as possible future power conversion unit technologies. Microturbines are currently being manufactured for distributed generation systems and could potentially be used in dish-engine systems. These engines, which are similar to (but much smaller than) jet engines, would also be used to drive an electrical generator. A photovoltaic conversion system is not actually an engine, but a semi-conductor array, in which the sunlight is directly converted into electricity.

BIOMETHANE TECHNOLOGY

Biomethane Technologies provides the following power and energy project development services: Project Engineering Feasibility & Economic Analysis Studies; Engineering, Procurement and Construction; Environmental Engineering & Permitting; Project Funding & Financing Options, including Equity Investment, Debt Financing, Lease and Municipal Lease, and a Shared/Guaranteed Savings Program with No Capital Investment from Qualified Clients; Project Commissioning; 3rd Party Ownership and Project Development; Long-term Service Agreements; Operations & Maintenance; and Green Tag (Renewable Energy Credit, Carbon Dioxide Credit, Emission Reduction Credit) Brokerage Services, including Application and Permitting.
Biomethane Technologies is a privately held company founded by several of the board members of the Renewable Energy Institute. Biomethane Technologies is focused on generating biomethane from multiple waste streams and renewable energy technologies such as Anaerobic Digesters, Biogas Plants, Sewage Sludge, and Landfill Gas to Energy projects.
Your company should consider hiring us if you are considering Anaerobic Digesters, Biogas Plants, Biomethane production, or solutions for sewage sludge problems at your facility. We are vendor-neutral in terms of equipment selection, and our lead engineer has over 27 years of experience in Anaerobic Digester design, repairing other companies' Anaerobic Digesters, and biomethane optimization. We know Anaerobic Digesters, Biogas Plants, Biogas-to-Biomethane, Biomethane Optimization, Cogeneration and Trigeneration power plants. Our knowledge and expertise will ensure maximum biomethane production at your facility. Our professionals can provide the turnkey solutions your facility needs, including design, engineering, finance, legal, operations, maintenance, and service/repairs of existing anaerobic digesters.

OIL SEED PROCESSING

a) Water assisted. Here the finely ground oilseed is either boiled in water and the oil that floats to the surface is skimmed off, or ground kernels are mixed with water and squeezed and mixed by hand to release the oil.
b) Manual pressing. Here oilseeds, usually pre-ground, are pressed in manual screw presses. A typical press is shown in diagram 1.
c) Expelling. An expeller consists of a motor-driven screw turning in a perforated cage. The screw pushes the material against a small outlet, the "choke". Great pressure is exerted on the oilseed fed through the machine to extract the oil. Expelling is a continuous method, unlike the previous two batch systems.
d) Ghanis. A ghani consists of a large pestle and mortar rotated either by animal power or by a motor. Seed is fed slowly into the mortar and the pressure exerted by the pestle breaks the cells and releases the oil. Ghani technology is mainly restricted to the Indian sub-continent.
e) Solvent extraction. Oil from seeds, or from the cake remaining after expelling, is extracted with solvents, and the oil is recovered after distilling off the solvent under vacuum.
Most oil-bearing seeds need to be separated from the outer husk or shell. This is referred to as shelling, hulling or decortication. Shelling increases the oil extraction efficiency and reduces wear in the expeller, as the husks are abrasive. In general, some 10% of husk is added back prior to expelling, as the fibre allows the machine to grip or bite on the material. After decortication the shell may have to be separated from the kernels by winnowing. At small scale this can be done by throwing the material into the air and allowing the wind to blow away the husk; at larger scale, mechanical winnowers and seed cleaners are available.
A wide range of makes and sizes of expellers is available. In India in particular, a number of efficient small or "baby" expellers are available. A typical example with a capacity of up to 100 kg/hr is shown in figure 3. This machine has a central cylinder or cage fitted with eight separate screw sections or "worms". This flexible system allows single or double-reverse use and spreads wear more evenly along the screw. When the screw becomes worn, only individual sections require repair, thus reducing maintenance costs. As the material passes through the expeller the oil is squeezed out, exits through the perforated cage, and is collected in a trough under the machine. The solid residue, oil cake, exits from the end of the expeller shaft, where it is bagged.

LEATHER FACTS AND HOW IT IS MADE

Primitive man, even more than 7000 years ago, made and used leather goods. He dried fresh skins in the sun, softened them by pounding in animal fats and brains, and preserved them by salting and smoking. Of course, the products were crude, made more for protection than for fashion. Around 400 BC, the Egyptians and Hebrews developed the process of vegetable tanning, which involved simple drying and curing techniques. Under the Arabs during the Middle Ages, the art of leather making became quite sophisticated; morocco and cordovan leathers were in great demand. The ancient puppet theatre of southern India used primarily leather dolls, a tradition that continues even today.
Following the industrial revolution in Europe, power-driven machines were introduced to perform operations such as splitting, fleshing, and dehairing. Chemical tannage was introduced towards the end of the 19th century. Evidence of shoemaking exists as early as 10,000 B.C. Napoleon Bonaparte had his boots worn by servants to break them in before he wore them. The boots worn by Neil Armstrong for his walk on the moon in 1969 were jettisoned before returning to earth to prevent contamination. The original French version of the Cinderella story features a fur slipper instead of a glass one; the confusion arose from the similarity of a French word for white fur (vair) to the word for glass (verre).
The hide, left to itself, would rather decompose than become leather. It is cured of such inclinations by a dehydrating process (air-drying, salting, or pickling with acids and salts) before being shipped to a tannery. The hide is about 60 to 70 percent water and 30 to 35 percent protein (of which 85 percent is fibrous). Tanning displaces water from the hide's protein fibres and cements these fibres together. Tanning derives its name from tannic acid, which is found in plants (vegetable tanning); mineral salts (mineral tanning) and oils and fatty substances (oil tanning) are also used. The tanned pelt is dried, dyed, oiled and greased to lubricate it and to enhance its softness, strength, and ability to shed water.
The leather is further dried and reconditioned with damp sawdust to a uniform moisture content of 20 percent. It is then stretched and softened, and the grain surface is coated to give it additional resistance to abrasion, cracking, peeling, water, heat, and cold. The leather is then ready to be fashioned into any of a multitude of products, including shoes and boots, outer apparel, belts, upholstery materials, suede products, saddles, gloves, luggage and purses, and recreational equipment, as well as such industrial items as buffing wheels and machine belts.

LEAD AND ZINC ORE MINING

Lead was one of the first metals to be discovered and has been in use for at least 8000 years. It is a soft metal, which is easy to work and does not corrode. Its main ore, galena, occurs widely and can be smelted at temperatures which can be reached in an ordinary campfire. It is often associated with zinc and copper, and usually contains a small proportion of silver, which can add to its value. The gangue material forms much the greater part of the content of a vein and used to be regarded as waste; now materials such as barytes and fluorspar are often the more important products, and the metal ores are regarded as a by-product.
In the Lake District zinc was produced mainly at the mines of Force Crag, Threlkeld and Thornthwaite. Lead was found in many places, but the principal mining fields were in the Helvellyn, Newlands and Caldbeck areas.
There are no records or remains of very early mining in the Lake District, and few records for the centuries after the Romans left, but the Elizabethans operated lead mines in the Derwentwater area - at Stoneycroft, Brandlehow, Barrow and Thornthwaite - and in the Caldbeck Fells at Red Gill and Roughtongill. In 1564 a lead mine was opened in Greenhead Gill at Grasmere, but this venture was not successful and the mine closed in 1573, although the remains that can still be seen today are well worth a visit.
Mining in the Lake District was generally in decline during much of the 17th century, and there is no record of great or continuous activity so far as lead mining was concerned during the 18th century. Lead mining at Greenside, however, may have been started as early as 1650. Top Level was driven in 1790, some 40 fathoms below the summit and stoped out to the surface. Although that mine was then abandoned for several years, it was later to become the richest in the area, being worked more or less continuously for the next 150 years after the formation of the Greenside Mining Company in 1822.
Some 2,400,000 tons of lead ore were produced during the life of the mine, and 2 million ounces of silver. In the early days the dressed ore was taken elsewhere to be smelted, firstly to Stoneycroft Gill - a distance of ten miles up hill and down dale - and then, from 1820, to Alston where the London Lead Company had erected an up-to-date smelter. Only during the 1830s was a smelter built on site at the foot of Lucy Tongue Gill, and a flue arched with stone was cut out of the bedrock, ending a mile away on the Stang, where there was a stack. The course of this flue can easily be traced today, and part of the stack is still standing. The smelter was in operation until the beginning of the 20th century when the decision was made to send the dressed ore by road to Troutbeck and from there by rail to Newcastle upon Tyne for processing.

DIGITAL HOLOGRAPHY TECHNOLOGY

Holography has been used since the 1970s for tire inspection. Before the development of electronic holography or electronic speckle pattern interferometry (ESPI) cameras, film holography cameras were used in combination with vacuum stress. Early film holography cameras were also used to solve a major production issue in the inspection of abradable turbine engine components.
Since the 1970s, turbine aircraft engines have used abradable seals in the compressor stages to achieve high pressure ratios per stage, reducing the turbine power required to drive the compressor, reducing engine weight, and increasing performance. The loss of this material can affect engine performance, and inspection of the bond line is required in production or at engine overhaul.
Ultrasonic through-transmission C-scan is capable of detecting disbonds in parts where the shroud geometry is a straight or slightly conical cross section. However, in most engines, the design of the compressor shrouds includes brazed stators, material thickness changes, flanges, and other features that obscure or shadow the abradable seal material.
In 1982, a holography NDT technique entered production at Pratt & Whitney, combining time-average holography with a low-frequency ultrasonic vibration applied to the compressor shroud. Holography provided excellent disbond detection with easily interpreted images essentially identical to UT results, but unaffected by part geometry or material thickness changes. Early systems used film holography with a one-step chemical process, invented by the author, which produced production-quality holograms in approximately 10 seconds; the results were viewed on a video monitor. Electronic holography, currently using mega-pixel CCD cameras, has radically improved system operation speed and reliability. Since 1982, holography has been the inspection standard for Feltmetal and plasma-sprayed aircraft abradable seals.

LASER SHEAROGRAPHY TECHNOLOGY

Laser interferometric imaging NDT techniques such as holography and shearography have seen dramatic performance improvements in the last decade and wide acceptance in industry as a means for high-speed, cost-effective inspection and manufacturing process control. These performance gains have been made possible by the development of the personal computer, high-resolution CCD and digital video cameras, high-performance solid-state lasers, and phase stepping algorithms. System output images show qualitative pictures of structural features and surface and subsurface anomalies, as well as quantitative data such as defect size, area, depth, material deformation vs. load change, and material properties. Both holography and shearography have been implemented in important aerospace programs, providing cost-effective, high-speed defect detection.
Holography images a test part's response to changes in load, showing the out-of-plane deformation as well as part movement. Holography using continuous-wave lasers and video-frame-rate data acquisition requires vibration isolation, usually in the form of air-supported isolation tables. Coupled with ultrasonic vibration excitation of the test part, production holographic systems provide very high-resolution images of disbonds in small, complex-shaped components such as turbine aircraft components and medical devices.
Shearography NDT systems use a common-path interferometer to image the first derivative of the out-of-plane deformation of the test part surface in response to a change in load. This important distinction is responsible for two key phenomena. First, shearography is less sensitive to the image-degrading effect of environmental vibration; shearography systems may be built as portable units or into gantry systems, similar to UT C-scan systems, for scanning large structures. Second, the changes in applied load required to reveal subsurface anomalies frequently induce gross deformation or rotation of the test part. With holography, several important test part stressing techniques, such as thermal and vacuum stress, create gross part deformation, and defect indications may be completely obscured by these translation fringe lines. Shearography, on the other hand, is sensitive only to the deformation derivatives and tends to show only the local deformation on the target surface due to the presence of a surface or subsurface flaw.
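The holography/shearography distinction can be made concrete with a toy one-dimensional example: a local bulge over a disbond plus a whole-body tilt from the applied load. The deformation profile, shear distance, and all numbers below are made-up assumptions, purely to show why a derivative (sheared) measurement suppresses rigid-body tilt while keeping the local defect signal.

    import numpy as np

    x = np.linspace(0.0, 100.0, 1001)                        # surface position, mm
    defect_bulge = 2e-3 * np.exp(-((x - 60.0) / 3.0) ** 2)   # local bulge over a disbond, mm (assumed)
    rigid_tilt = 1e-4 * x                                     # whole-body tilt from the load, mm (assumed)

    w = defect_bulge + rigid_tilt     # out-of-plane deformation seen by holography
    shear_mm = 2.0                    # image-shear distance (assumed)
    dx = x[1] - x[0]
    n = int(round(shear_mm / dx))
    shear_signal = np.roll(w, -n) - w  # ~ first derivative of w times the shear distance

    print(f"tilt adds up to {rigid_tilt.max():.3f} mm of deformation across the part (fringes everywhere in holography)")
    print(f"but only a nearly constant {1e-4 * shear_mm:.1e} mm offset to the shear signal")
    print(f"local defect signal in the shear image: {shear_signal.max():.1e} mm")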

Shearography, in particular, offers unique and proven defect detection capabilities in aerospace composites manufacturing. Shearography images show changes in surface slope in response to a change in applied load. Shearography's whole-field, real-time imaging of the out-of-plane deformation derivatives is sensitive to subsurface disbonds, delaminations, core damage, and core splice joint separations, as well as surface damage. Secondary aircraft structures have long used composite materials, and the drive for better vehicle performance, lower fuel consumption, and maintainability is pushing the application of composites and sandwich designs for primary structures as well. Faster and less expensive inspection tools are necessary to reduce manufacturing costs and ensure consistent quality.

WATER RESOURCES

Water resources are divisible into two distinct categories: the surface-water resources & the ground-water resources. Each of these categories is a part of the earth's water circulatory system, called the hydrologic cycle, & is ultimately derived from precipitation, which is rainfall plus snow. They are interdependent & frequently the loss of one is the gain of the other. The brief description of the run-off cycle, which is a part of the hydrologic cycle, will help us to understand the origin & the interdependence of these two categories of water resources.
The precipitation that falls upon land & is the ultimate source for both the categories of water resources is dispersed in several ways. A sizeable portion is intercepted by the vegetal cover or temporarily detained in surface depressions. Most of it is later lost through evaporation. When the available interception or depression storage is completely exhausted & when the rainfall intensity at the soil surface exceeds the infiltration capacity of the soil, overland flow begins. Once the overland flow reaches a stream channel, it is called surface run-off, which together with other components of flow forms the total run-off.
Part of the water that infiltrates into the surface soil may continue to move laterally at shallow depth as interflow owing to the presence of relatively impervious lenses just below the soil surface & may eventually reach the stream channel when it is called the sub-surface runoff. A part of the sub-surface run-off may enter the stream promptly, whereas the remaining part may take a long time before joining the stream flow.
A second part of the precipitation which infiltrates is lost through evapo-transpiration via plant roots & thermal gradients just below the soil surface. A third part may remain above the water table in the zone of unsaturated flow. A fourth part percolates deeply into the ground-water. Part of this ground-water may eventually reach the stream channel & become the base flow of the stream. This portion is termed ground-water run-off or ground-water flow.
Apart from infiltrated rain-water, the seepage from canals, ponds, tanks, lakes, irrigated fields, etc. is also dispersed & accounted for in the same manner.
The total run-off in the stream channel includes the snow-melt, the surface run-off, the sub-surface run-off, the ground-water run-off & the channel precipitation, i.e. the precipitation falling directly on the water surface of streams, lakes, etc. It constitutes what is known as the surface-water resources. The portion of the precipitation which, after infiltration, reaches the ground-water table, together with the contribution made to ground water from a neighbouring basin, influent rivers, natural lakes, ponds, artificial storage reservoirs, canals, irrigation, etc., constitutes the ground-water resources. That quantity of water in the ground-water reservoir which is not annually replenishable is not taken into account, as it is a sort of dead storage which cannot be used on a continuing basis from year to year.
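The surface-water resource described above is just the sum of the runoff components listed. The small sketch below adds them up; the depth values are illustrative placeholders (arbitrary millimetres over a basin for one season), not data from the text.

    # Minimal water-balance sketch of the runoff components listed above.
    components_mm = {
        "snow_melt": 40.0,
        "surface_runoff": 220.0,
        "sub_surface_runoff": 90.0,
        "ground_water_runoff": 60.0,
        "channel_precipitation": 10.0,
    }

    total_runoff_mm = sum(components_mm.values())
    print(f"total stream-channel runoff (surface-water resource): {total_runoff_mm:.0f} mm")
    for name, value in components_mm.items():
        print(f"  {name:>22}: {value:5.0f} mm ({value / total_runoff_mm:.0%})")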

LAND UTILIZATION

The pattern of land-use of a country at any particular time is determined by the physical, economic & institutional framework taken together. In other words, the existing land-use pattern in different regions in India has evolved as the result of the action & interaction of various factors, such as the physical characteristics of the land, the institutional framework, the structure of other resources (capital, labour, etc.) available, & the location of the region in relation to other aspects of economic development, e.g. those relating to transport as well as to industry & trade. The present pattern can, therefore, be considered to be in some sort of static harmony & adjustment with the other main characteristics of the economy of the region. In the dynamic context, keeping in view the natural endowments & the recent advances in technology, the overall interests of a country may dictate a certain modification of, or a change in, the existing land-use pattern of a region. A close study of the present land-use patterns & the trends during recent years will help to suggest the scope for planned shifts in the patterns.
Out of the total geographical area of 328 million hectares, land-use statistics are available for roughly 306 million hectares, constituting 93 percent of the total. During 1970-71, the latest year for which the land-use data are available, the arable land (the net area sown plus the current & fallow lands) was estimated at 161.3 million hectares or 52.7 percent of the total reporting area. Around 65.9 million hectares or 21.6 percent of the total area was under forests. Land put to non-agricultural uses was estimated at 16.1 million hectares (5.2 percent of the total) & the barren & unculturable land at 30.2 million hectares or 9.9 percent of the reporting area. Permanent pastures & other grazing land were estimated at 13 million hectares (4.2 percent), land under miscellaneous tree crops & groves, not included in the net area sown, at 4.3 million hectares (1.4 percent), & the culturable waste-land at another 15.2 million hectares or 5 percent. These figures add up to the 306 million hectares of the reporting area.
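The category totals quoted above can be cross-checked with a few lines of arithmetic; the figures below are exactly those from the paragraph, and they do sum to the stated 306 million hectares.

    # Cross-check of the land-use figures quoted above (areas in million hectares).
    land_use_mha = {
        "arable land (net sown + fallow)": 161.3,
        "forests": 65.9,
        "non-agricultural uses": 16.1,
        "barren & unculturable": 30.2,
        "permanent pastures & grazing": 13.0,
        "misc. tree crops & groves": 4.3,
        "culturable waste-land": 15.2,
    }
    total = sum(land_use_mha.values())
    print(f"reported total: {total:.1f} million hectares")   # -> 306.0, as stated
    for name, area in land_use_mha.items():
        print(f"  {name:<32} {area:6.1f}  ({area / total:5.1%})")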
The area for which data on the land-use classification are available is known as the 'reporting area'. In areas where the land-use classification figures are based on land records, the reporting area is the area according to village papers or records maintained by the village revenue agency, & the data are based on a complete enumeration of all the areas. In some cases, village papers are not maintained; the estimates of the area under different classes of land are then based on sample surveys or other methods to complete the coverage.
The reporting area is the aggregate of the areas based on these two methods. The areas for which no statistics are available are called the 'non-reporting area'. The whole of the reporting area is neither completely surveyed cadastrally nor completely covered by complete enumeration or sample surveys; there are still pockets of area in a few states for which only 'ad-hoc estimates' are prepared. Of the total geographical area, only 80.7 percent is cadastrally surveyed. Of the cadastrally surveyed area, 91.4 percent has a permanent reporting agency, whereas 8.6 percent has no reporting agency.

MEDICINAL PLANTS

India is endowed with a rich wealth of medicinal plants. These plants have made a good contribution to the development of the ancient Indian materia medica. One of the earliest treatises on Indian medicine, the Charak Samhita (1000 B.C.), records the use of over 340 drugs of vegetable origin. Most of these continue to be gathered from wild plants to meet the demand of the medical profession. Thus, despite the rich heritage of knowledge on the use of plant drugs, little attention had been paid to growing them as field crops in the country till the latter part of the nineteenth century.
During the past seven or eight decades, there has been a rapid extension of the allopathic system of medical treatment in India. It generated a commercial demand for pharmacopoeial drugs and products in the country. Thus, efforts were made to introduce many of these drug plants into Indian agriculture, and studies on cultivation practices were undertaken for those plants which were found suitable and remunerative for commercial cultivation. In general, agronomic practices for growing poppy, isabgol, senna, cinchona, ipecac, belladonna, ergot and a few others have been developed, and there is now localized commercial cultivation of these medicinal plants. The average annual foreign trade in crude drugs and their phytochemicals is between 60 and 80 million rupees, and this accounts for a little over 0.5 per cent of the world trade in these commodities.
The curative properties of drugs are due to the presence of complex chemical substances of varied composition (present as secondary plant metabolites) in one or more parts of these plants. These plant metabolites, according to their composition, are grouped as alkaloids, glycosides, corticosteroids, essential oils, etc. The alkaloids form the largest group, which includes morphine and codeine (poppy), strychnine and brucine (nux vomica), quinine (cinchona), ergotamine (ergot), hyoscyamine (belladonna), scopolamine (datura), emetine (ipecac), cocaine (coca), ephedrine (ephedra), reserpine (Rauwolfia), caffeine (tea dust), aconitine (aconite), vasicine (vasaka), santonin (Artemisia), lobeline (Lobelia) and a large number of others. Glycosides form another important group, represented by digoxin (foxglove), strophanthin (strophanthus), glycyrrhizin (liquorice), barbaloin (aloe), sennosides (senna), etc. Corticosteroids have come into prominence recently, and diosgenin (Dioscorea), solasodine (Solanum sp.), etc. now command a large world demand. Some essential oils, such as those of valerian, kutch and peppermint, also possess medicinal properties and are used in the pharmaceutical industry. However, it should be stated in all fairness that our knowledge of the genetic and physiological make-up of most medicinal plants is poor, and we know still less about the biosynthetic pathways leading to the formation of the active constituents for which these crops are valued.
During the last two decades, the pharmaceutical industry has made massive investments in pharmacological, clinical and chemical research all over the world in an effort to discover new and still more potent plant drugs; in fact, a few new drug plants have successfully passed the tests of commercial screening. However, the benefits of this labour will reach the masses only when the corresponding support for agricultural studies on commercial cultivation is provided. In fact, agricultural study of medicinal plants, by its very nature, demands an equally large investment and a higher priority. India, in particular, has great scope for the development of the pharmaceutical and phytochemical industry.

MUSHROOM PRODUCTION

Selection of Strains:
For successful mushroom production, it is necessary for each grower to produce, as economically and efficiently as possible, the highest quality of mushrooms. This can be accomplished, among other requirements, by selecting the best strains, which should be high yielding, visually attractive, of desirable flavour, and resistant to adverse climate, pests and diseases. Presently, there are many strains of the white, cream and brown varieties in cultivation. The brown variety is the natural mushroom and is considered to be the most vigorous form; it tolerates adverse conditions better than the white variety.
A snow-white mushroom first appeared amongst a bed of mushrooms in the USA, and ever since, this variety has dominated the mushroom industry throughout the world, although it has a very limited shelf-life. Where growing conditions tend to be on the dry side and humidity cannot be correctly controlled, the brown mushroom should be grown. New superior strains are continually introduced by mushroom research laboratories and spawn makers through selection, hybridization and induced mutations. In India, S 11, S 649 and S 791 are the good strains available. These strains were originally introduced from renowned commercial spawn makers, Somycel and Darlington. Now these strains are well adapted to the Indian climate and are very popular with growers.
Maintenance of Strains:
Three methods are known by which strains can be propagated: multispore culture, tissue culture and mycelium transfer. By periodic subculturing of the mycelium on a suitable agar medium, the spawn strains can be kept for many years in a fairly good state. However, frequent subculturing of a strain may result in its degeneration. Maintenance of a strain by multispore culture is only possible if new multispore cultures are compared with the original strain, since a multispore culture can show much genetic variation.
In tissue culture, small pieces of fruit bodies are cut under sterile conditions and inoculated on a nutrient medium. Mycelium growing out of these tissues can provide the starting point for subsequent spawn production. However, it is commonly observed that tissue cultures often give lower yields than the original cultures. Of these three methods, mycelium transfer is the most reliable, but it is essential that the performance of the mycelium is continually checked in order to detect any degeneration, such as slow-growing matted mycelium or fluffy mycelium with an abnormal growth rate.
Spawn:
The propagating material used by mushroom growers for planting beds is called spawn. The spawn is equivalent to the vegetative seed of a higher plant, and the quality of spawn is basic to successful mushroom cultivation.
At present, pure culture spawn is the basis of modern spawn production units all over the world. The manufacture of pure culture spawn is done under scientifically controlled conditions which demand a standard of hygiene like that of a hospital operating theatre. Equipment and substrate used for spawn are autoclaved, and filtered air passed during inoculation ensures complete freedom from contamination.

THE FORGING PROCESS

Forging is a metal forming process used to produce large quantities of identical parts, as in the manufacture of automobiles, and to improve the mechanical properties of the metal being forged, as in aerospace parts or military equipment. The design of forged parts is limited when undercuts or cored sections are required. All cavities must be comparatively straight and largest at the mouth, so that the forging die may be withdrawn. The products of forging may be tiny or massive and can be made of steel (automobile axles), brass (water valves), tungsten (rocket nozzles), aluminum (aircraft structural members), or any other metal. This process is also used for coining, but with slow continuous pushes.
The forging metal forming process has been practiced since the Bronze Age. Hammering metal by hand can be dated back over 4000 years ago. The purpose, as it still is today, was to change the shape and/or properties of metal into useful tools. Steel was hammered into shape and used mostly for carpentry and farming tools. An axe made easy work of cutting down trees and metal knives were much more efficient than stone cutting tools. Hunters used metal-pointed spears and arrows to catch prey. Blacksmiths used a forge and anvil to create many useful instruments such as horseshoes, nails, wagon tires, and chains.
Militaries used forged weapons to equip their armies, resulting in many territories being won and lost with the use and strength of these weapons. Today, forging is used to create various and sundry things. The operation requires no cutting or shearing, and is merely a reshaping operation that does not change the volume of the material.
Forging changes the size and shape, but not the volume, of a part. The change is made by a force applied to the material so that it stretches beyond the yield point. The force must be strong enough to make the material deform, but not so strong that it destroys the material. The yield point is reached when the material takes on a new, permanent shape; the point at which the material would be destroyed is called the fracture point.
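The forming window described above - above the yield point but below the fracture point - can be expressed as a simple check. The stress values below are illustrative placeholders, not data for any particular alloy.

    # Toy check of the forging window: applied stress must exceed the yield point
    # (permanent deformation) but stay below the fracture point (no destruction).

    def forging_outcome(applied_mpa: float, yield_mpa: float, fracture_mpa: float) -> str:
        if applied_mpa < yield_mpa:
            return "elastic only - the metal springs back, no forging"
        if applied_mpa < fracture_mpa:
            return "plastic deformation - the part takes its new shape"
        return "fracture - the material is destroyed"

    for stress in (200.0, 450.0, 900.0):
        print(f"{stress:5.0f} MPa: {forging_outcome(stress, yield_mpa=350.0, fracture_mpa=700.0)}")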

HEAT TREATMENT THROUGH HARDENING

Hardening just the surface layer of a steel is called case hardening. A very hard case, or "skin," resists wear and is supported by a core of lower hardness which, depending on the type of steel, is tougher and more ductile and resists breakage.

Carburizing and carbonitriding are the two most common types of case hardening processes and are designed for only certain types of steels. When steel at proper temperature is surrounded by certain elements, such as carbon and nitrogen, it will absorb those elements into its surface. The added elements form a "case" which can be very hard.

For a quality job, it is vital for the heat treater to closely control furnace atmosphere, temperature, time, convection system plus the orientation of the parts in the furnace. Selective hardening may be accomplished by masking with high temperature tape, copper paint or copper plating.
Carburizing: This is a form of case hardening in which the furnace atmosphere is adjusted to deposit carbon into the work while it is held at the critical temperature. This layer of increased carbon can then achieve very high hardness when quenched. Since the sub-surface area has a lower carbon content, it does not harden as much, or at all, during this process. This leaves a tougher, more ductile core than a through-hardening alloy or tool steel with the same surface hardness potential. Case hardening is usually specified with a Rockwell C hardness range of three points, such as Rc 58-60, plus a "case depth" within the range of .010" to .080" or more. Interestingly, the depth of carburization increases with either temperature or time. Since too high a temperature causes undesirable grain growth and a long period at lower temperature raises costs, a quality heat treater will start with a high temperature and then lower it to refine the grain before quenching.
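As a rough illustration of how case depth grows with time at a fixed carburizing temperature, the sketch below uses the commonly cited square-root-of-time approximation for diffusion-controlled carburizing. This relation and the rate constant are assumptions introduced here for illustration, not values from the text; actual depths depend on steel grade, temperature, and atmosphere.

    import math

    def case_depth_in(hours: float, k_in_per_sqrt_hr: float = 0.021) -> float:
        """Approximate total case depth (inches) after a given time at temperature,
        using d ~ k * sqrt(t); k = 0.021 is an illustrative value only."""
        return k_in_per_sqrt_hr * math.sqrt(hours)

    for t in (2, 4, 8, 16):
        print(f"{t:>2} h at temperature -> ~{case_depth_in(t):.3f} in case depth")
    # Output spans roughly .030" to .084", i.e. within the .010"-.080"+ range quoted above.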

Since carburized parts are designed for maximum surface hardness, they are sometimes NOT tempered after the hardening quench, as is otherwise typical. This is especially the case with shallow case depths, where the tough core offsets the brittleness inherent in untempered steel.
Although many types of steels respond to carburizing, classic alloys for this purpose are 8620 and 9310. These "low alloy" steels can not only exceed Rc 60 when carburized, but produce a tough core of good hardness (Rc 30-38). These alloys are the choice of gear makers. Low carbon steels such as 1018 or 1117, while not offering as much toughness, can also be carburized for a low cost solution.

NATURAL GAS

Natural gas is a fossil fuel source of energy, which represents more than one fifth of total energy consumption in the world. It has been the fastest growing fossil fuel since the seventies.
Due to the economic and ecological advantages it presents, as well as its safety qualities (e.g. a reduced flammable range), natural gas is an increasingly attractive source of energy in many countries. At present, natural gas is the second-largest energy source after oil. According to the Energy Information Administration, natural gas accounted for 23% of world energy production in 1999, and it has excellent prospects for future demand. Natural gas is considered the fossil fuel of this century, as petroleum was of the last century and coal of the two centuries before.
Natural gas presents a competitive advantage over other energy sources. It is seen as economically more efficient because only about ten per cent of the natural gas produced is wasted before it gets to final consumption. In addition, technological advances are constantly improving efficiencies in extraction, transportation and storage techniques as well as in equipment that uses natural gas.
Natural gas is considered an environmentally friendly, clean fuel, offering important environmental benefits when compared to other fossil fuels. Its superior environmental qualities over coal or oil are that emissions of sulphur dioxide are negligible and that the levels of nitrous oxide and carbon dioxide emissions are lower. This helps to reduce acid rain, ozone and greenhouse gas problems.
Natural gas is also a very safe source of energy when transported, stored and used.

Although resources of natural gas are finite and natural gas is a non-renewable source of energy, these resources are plentiful all over the world. Natural gas reserves are continuously increasing as new exploration and extraction techniques allow for wider and deeper drilling.