A public utility is an organization, privately or publicly owned, that provides services consumed by the public, such as electricity, natural gas, water, telecommunications, and wastewater. These utilities are regulated at levels ranging from local community-based groups to statewide government bodies.

Because public utilities are often natural monopolies, with infrastructure that is expensive to build and maintain, they are typically either run or regulated by the government.

Publicly owned utilities include cooperative and municipal utilities. Municipal utilities may extend beyond city limits or, conversely, may not serve the entire city, whereas cooperative utilities are owned by the customers they serve and are usually found in rural areas. Publicly owned utilities are non-profit; privately owned utilities are owned mainly by investors and operate for profit, usually measured as a rate of return.

A public utility commission is a government agency, normally at the state level, that regulates the commercial activities of the associated utilities. Commissions work on behalf of citizens and customers and are typically composed of commissioners appointed by their respective governors, along with dedicated staff who implement and enforce rules and regulations, approve or deny rate increases, and monitor and report on relevant activities.

Have you ever found yourself asking these questions:

  • Where does our tap water come from?
  • Is it safe to drink?
  • How is it treated?
  • Are there regulations?

If so, here you will find answers.

Water continually moves around, constantly changing form from vapor to liquid to solid (ice) in what is referred to as the water cycle; precipitation, evaporation/transpiration, and runoff are the primary phases of this cycle.

Water is essential for life. Not only does it grow our food and serve as the lifeblood of industry, it also helps generate electricity, and over half of the human body is made of it. Drinking water daily is a necessity: experts recommend eight glasses a day, about half a gallon, so it is imperative not only to take in that much but to take in quality drinking water, since improperly treated water can carry bacteria, viruses, and other disease-causing organisms.

Safe drinking water is a privilege that Americans, like people in other developed countries, tend to take for granted. To understand this, you need look no further than developing countries where people fight for water daily, literally digging into the dirt to find the small puddle of dirty water beneath it and drinking it to survive, hopefully parasite-free. Filtering polluted water this way is not guaranteed to work, because many chemicals and parasites do not fully break down in soil as water seeps through. Water treatment is necessary for safe drinking water, to avoid problems like the Flint, Michigan water crisis or the even larger plague of lead-contaminated water that affected schools and houses across the country in the late eighties and into the nineties. Before we get to the types of treatment used to purify our water, let’s discuss where our water comes from.

A community’s water source depends largely on the foresight and planning of its founders and on local use of land and water in the area, so every town, city, county, and state gets its water differently, but the water still comes from the same general types of source. Surface water comes from lakes, rivers, streams, oceans, reservoirs, ponds, and other parts of the Earth’s surface; groundwater comes from wells; and some people harvest rainwater directly. Water moves from these sources to storage tanks, treatment plants, smaller reservoirs, and/or directly to houses through various piping systems.

The largest source is the ocean, but saline water, whether seawater or brackish, comes with health concerns. Desalinated sources are mostly used in some western states, such as parts of California (seawater) and parts of Texas (brackish water).

Seawater can be too salty for human consumption, or for most other purposes, and must first undergo desalination. Desalination has historically been cost-prohibitive because of the high amount of energy required to push water through the compact micro-filters that remove the salt. The largest desalination plant in this country is the Carlsbad plant in Carlsbad, California, which opened in late 2015 and cost roughly one billion dollars to build. Carlsbad uses the reverse osmosis method, forcing water under very high pressure through semipermeable membranes (selective barriers that allow some substances through while restricting others). Improvements in membrane technology have produced filters that last longer and are much more efficient than previous models.

Desalting brackish water is less costly because its salt levels are nowhere near those of seawater; brackish water is more like a combination of fresh water and seawater. For example, a storm near an ocean can cause otherwise fresh lakes and ponds nearby to become brackish as they mix with salt water. Brackish water typically goes through a thermal process that heats the water enough to form vapor, which is then condensed into freshwater, leaving the salt behind.

For standard surface water not requiring desalination, the main treatment goal is to remove waterborne diseases. Before disinfection became common practice, widespread outbreaks of cholera and typhoid were frequent throughout the country. These diseases are still common in less developed countries but largely disappeared in the U.S. once chlorine and filtration became widely used nearly a century ago.

Common steps in treating water are coagulation and flocculation, where chemicals are added to the incoming water to bind with dirt and dissolved particles, which settle to the bottom as larger clumps referred to as floc. The clearer water then flows through filters composed of sand, gravel, and/or charcoal to remove remaining particles such as dust, parasites, bacteria, viruses, and residual chemicals. Chlorine or chloramine is then added to kill any remaining parasites, bacteria, viruses, and germs. Ultraviolet light and ozone can be used for disinfection as well, and some would say they are cleaner and more effective.
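Chemical disinfection effectiveness is often summarized by a “CT” value: the disinfectant residual concentration multiplied by the contact time. Here is a minimal sketch of the idea; the target CT used below is a made-up illustrative number, not a regulatory figure.

```python
# "CT" concept: residual concentration (mg/L) x contact time (minutes).
# The target CT below is hypothetical, not a regulatory value.

def required_contact_time(target_ct: float, residual_mg_per_l: float) -> float:
    """Minutes of contact needed to reach a target CT at a given residual."""
    return target_ct / residual_mg_per_l

# Hypothetical target CT of 6 mg*min/L with a 0.5 mg/L chlorine residual:
print(required_contact_time(6.0, 0.5))  # 12.0 (minutes)
```

A weaker residual requires proportionally longer contact time, which is why treatment plants size their contact basins around the disinfectant dose they can maintain.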

UV light at certain wavelengths is mutagenic to bacteria: the light damages the bacterial DNA, producing thymine dimers that kill or disable the harmful microorganisms. Ozone is bubbled into the water in large tanks, destroying illness-causing microorganisms as well as compounds that cause taste and odor. Fluoride is usually added to the water as well, to prevent tooth decay.

Various other chemicals are added as well to adjust hardness and pH levels, or to prevent corrosion, based on the water source and piping system, especially considering that many old water pipes were made with lead. While the most effective solution may be to replace the lead pipes, that is not highly feasible in some areas, so water utilities add some form of phosphate to the water supply, which forms a protective film between the lead and water flowing through it.

When you look at the water crises in Flint, Michigan, or Baltimore, Maryland, for example, you can see that preventive measures were not taken. In Flint, the water source was changed from the City of Detroit’s supply to the Flint River, but no additives were put in place to prevent corrosion of the pipes, which contributed to the high lead levels found in the water. When you learn that Flint River water was roughly eight times as corrosive as Detroit’s source, it is not hard to see that Flint politicians and environmental officials failed by not adding orthophosphate to control corrosion when the sources were switched. Someone did not do their homework here, because even low levels of lead poisoning can cause behavioral problems, slow growth, and lowered IQ.

Lead is not the only thing contaminating our water sources, either; in farming communities across the country, water can also be contaminated by fertilizer and livestock runoff. Nitrate in rivers and groundwater is common in places with intensive farming, and it causes “blue-baby syndrome,” in which infants suffer shortness of breath that, left untreated, can lead to death. In Des Moines, Iowa, nitrate has appeared at such high levels that the Water Works utility has had to remove it through an ion exchange process at one of its treatment plants.

Given the aforementioned waterborne diseases, it is critical for water to be disinfected before consumption, so it is chlorinated from the time it leaves the treatment plant until it arrives at our taps.

Some cities protect their water at the source as well, with regulations and programs around the watersheds into which rivers, lakes, and ponds drain, rather than relying solely on treatment facilities. Supposedly, New York City’s watershed protection is so good, and its drinking water quality so high, that the system needs no filtration at all. The city has kept it that way by regulating who controls the land around the sources, limiting the kinds of development that could lead to pollution in the first place. Although the water is not filtered, it is still disinfected with chlorine and UV light, along with the usual chemicals used to regulate pH and hardness and to prevent corrosion.

While the federal government has the EPA to regulate over one hundred thousand water systems, local governments also set rules for these systems, as well as for the more than fifteen million Americans who rely on private wells. From source to tap, water goes through this critical process and is heavily monitored to guard against bacterial outbreaks, natural disasters, and human activity that could cause serious health concerns for the general public.

The driving force behind the development of drinking water standards and regulations is the protection of public health. Many laws concerning water quality standards have been adopted, going as far back as the Interstate Quarantine Act of 1893, which sought to control the introduction of communicable diseases from other countries. The first drinking water regulations prohibited the use of a common drinking cup on trains. The first federal drinking water standard was adopted in 1914 but was limited to bacteriological quality, with no physical or chemical requirements. Since then, more policies have been adopted and enforced to regulate the quality of drinking water for public use, such as the Safe Drinking Water Act (SDWA), passed by Congress in 1974. This act gave the EPA authority to delegate implementation of drinking water regulations to states that have developed programs at least as stringent as the federal one.

Under the SDWA, public water systems are required to conduct testing on a regular basis: weekly, monthly, quarterly, and annual studies at various points. Private water systems such as wells are not subject to these testing, regulation, or monitoring requirements, although testing is recommended. Where a well has been neglected and grossly contaminated, the government has in the past intervened, closed the well, and fined the owners for neglect and pollution.

Systems today have methods of controlling nearly every property of water: hardness, acidity, alkalinity, color, turbidity, taste, and odor, as well as its biological and organic chemical characteristics, so there is no reason to neglect water quality.

Public agencies and private water developers have built thousands of reservoirs across the U.S. to capture seasonal runoff, protect against floods, and allocate water supplies throughout the year. These reservoirs store millions of acre-feet of water (an acre-foot is roughly enough water to flood a football field). Areas that cannot harvest enough rain tend to receive excess from areas that do; as mentioned at the beginning, it largely comes down to how the founders and early settlers hashed it all out.
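To give the football-field comparison concrete numbers, here is a quick conversion sketch; the conversion factors are standard U.S. figures, not values from this article.

```python
# 1 acre-foot = 43,560 cubic feet; 1 cubic foot is about 7.48052 gallons.
# A football field with end zones is about 1.32 acres, so one acre-foot
# would cover it roughly nine inches deep.
CUBIC_FEET_PER_ACRE_FOOT = 43_560
GALLONS_PER_CUBIC_FOOT = 7.48052

def acre_feet_to_gallons(acre_feet: float) -> float:
    return acre_feet * CUBIC_FEET_PER_ACRE_FOOT * GALLONS_PER_CUBIC_FOOT

print(round(acre_feet_to_gallons(1)))  # 325851 gallons in a single acre-foot
```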

Water recycling, or water reclamation, involves treating municipal wastewater to remove sediments and impurities for reuse. Right now, reclaimed water is used for park irrigation, some lakes and ponds, crops, and similar purposes, but technological advances and investments are yielding such impressive results, demonstrating how well reclaimed water replenishes groundwater, that at some point it will be clean enough to drink. The toilet-to-tap concept is no longer a question of “if” but of “when.”

Now, going back over the first set of questions:

  • Where does our tap water come from? In short, nature: harvested from the Earth and its habitats.
  • Is it safe to drink? Mostly, yes.
  • How is it treated? Desalination, filtration, and/or chlorination.
  • Are there regulations? Yes, with heavy monitoring and frequent testing.

Ever wonder how we get our gas, or how the gas you put in your car differs from the gas that heats the water in your home?

First, we need to look at crude oil. Crude oil is a dark, sticky liquid that cannot be used without changing its form. It is found beneath the earth; drilling makes it accessible, and a well is built to bring the gas and oil up from deep below.

Once it is brought to the surface, the first step in refining the oil is to heat it until it boils. The liquid is boiled at between 80 and 350 degrees Celsius (176 to 662 degrees Fahrenheit) and separated into different liquids and gases: gasoline, kerosene, diesel fuel, lubricating oil, tar, and so on, which are then used for vehicles, chemicals, jets, heavy machinery, wax, power stations, and even our roads.
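The distillation range above is easy to sanity-check with the standard Celsius-to-Fahrenheit formula:

```python
# F = C * 9/5 + 32, applied to the boiling range quoted above.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(c_to_f(80))   # 176.0
print(c_to_f(350))  # 662.0
```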

Natural gas is found near crude oil, deep below the Earth’s surface. Like crude oil, it is a fossil fuel, and there is no telling exactly how long ago they formed; one theory is that both were created millions of years ago when plants and sea animals were buried by rock and sand. Layers of mud, sand, rock, and plant and animal matter continued to build up until the pressure and heat turned them into oil and gas. The two have similar uses but can differ in the effects and outcomes of their use. A form of fracking is used to bring both to the surface.

Crude oil and natural gas both consist of hydrocarbons, meaning they are made of hydrogen and carbon atoms. The difference is that natural gas molecules contain far fewer atoms, so it stays in gas form at ordinary temperatures and pressures, while crude oil molecules contain many more, so it stays mostly liquid, aside from the gases that escape during boiling. Needless to say, that makes natural gas much lighter, even lighter than air.

Natural gas consists mostly of methane and is much cleaner than gasoline from a carbon standpoint. It produces much less carbon dioxide, making it safer for the environment, but because it requires a lot of energy to compress, it would need a much larger tank than standard gasoline for vehicle fuel, which is one reason we use gasoline in our cars. Beyond that, crude oil remains the fuel of choice for most vehicles because it is cheaper than natural gas.

Crude oil is also used to make liquefied petroleum gas (LPG) for cooking and heating in homes, mostly in big cities, where the gas passes through pipelines connected to houses and buildings. Other things that keep crude oil in high demand are its use in making cosmetics, plastics, rubber, and similar products.

Natural gas is used in much the same way. Once gas is located underground, wells are drilled and the gas is brought up from the reservoirs to the surface through pipes. Huge networks of pipes bring gas to us from the wells: large pipes carry gas to power plants to make electricity and to factories and businesses, and from these large pipes the gas travels into smaller pipes that reach our homes. As the gas enters a home, it passes through a meter that measures how much fuel is used by appliances such as the furnace, water heater, dryer, and stove. The meter is read by the gas company, and we are charged for the amount used. Just like electricity, natural gas often travels a long way to reach your home. When compressed, it is referred to as CNG (compressed natural gas), which is mostly used in rural homes. Natural gas is a non-renewable source also used as chemical feedstock in the manufacture of plastics, glass, steel, and paint, as well as fertilizers, since it is used to produce ammonia, which helps plants grow. Both oil and gas are major energy sources around the world.

Natural gas goes back as far as the 500-1000 BC era, when the Chinese reportedly discovered a way to transport gas seeping from the ground through crude pipelines of bamboo to where it was used to boil salt water and extract the salt; the first industrial extraction on record, however, was in New York in 1825. Based on tests and studies of estimated remaining recoverable reserves, there is enough to last perhaps 250 years, possibly less if consumption rises.

Natural gas is a major source of electricity generation through the use of cogeneration, gas turbines, and steam turbines. It is also well suited for combined use with renewable energy sources such as wind or solar, and for feeding peak-load power stations functioning in tandem with hydroelectric plants. Most grid peaking power plants and some off-grid engine-generators use natural gas. Combined-cycle power generation using natural gas is currently the cleanest available source of power among hydrocarbon fuels, and the technology is widely and increasingly used because natural gas can be obtained at reasonable cost. This may eventually provide cleaner options for converting natural gas into electricity, though the cost remains high.

Extraction of natural gas or oil lowers the pressure in the reservoir, and that decrease in pressure may in turn cause subsidence: sinking of the ground that affects ecosystems, waterways, sewers, and water supplies. Looking at this illustration, as well as the ones before, you can see that oil, gas, and water each take up portions of the ground. If we continue to take much more than is being replenished, a hollow cavity will eventually form; the force and pressure of everything above that cavity can cause it to collapse, creating a sinkhole at the surface and ruining everything in between.

This page is dedicated to electricity and electronics in general. Here you will find the history of electricity, as well as tips and links to explanations of some electrical concepts.

Electricity is a form of energy that occurs in nature. There are two types of electricity in this world: static and current.

Static electricity was discovered thousands of years ago by the Greeks, who rubbed fur on fossilized tree resin (amber) and determined that charge builds up when surfaces rub against each other.

Current electricity is a little more complex: while the concept of electricity has been known for thousands of years, its scientific and commercial discovery and development was the work of many great minds:

  • Thales of Miletus was the first scientist to recognize the existence of electrical power in nature when he determined you can cause sparks by rubbing materials together (static electricity)
  • In 1600, English physician William Gilbert described the force that certain substances exert when rubbed against each other with the Latin word “electricus”
  • In 1660 Otto von Guericke invented an electro-static generator to generate static electricity
  • 1729 saw Stephen Gray discover the conduction of electricity
  • In 1733 Charles Francois du Fay discovered that electricity comes in two forms: resinous (which later became known as negative, -) or vitreous (which later became known as positive, +)
  • In 1752 Benjamin Franklin conducted an experiment with a kite, key, and a storm to prove that lightning and electric sparks were the same.
  • Later that century Alessandro Volta discovered that certain chemical reactions could produce electricity at cathodes and anodes, and by the early 1800s he had constructed an early electric battery known as the voltaic pile, making him the first person to create a steady flow of electrical charge. Volta also created the first transmission of electricity by linking positively charged and negatively charged connectors and driving an electrical charge, or voltage, through them. The volt, a unit of the electromotive force that drives current, was named in his honor.
  • In 1785, French physicist Charles-Augustin de Coulomb formulated what is best known as Coulomb’s law, which states that the force between two electrical charges is proportional to the product of the charges and inversely proportional to the square of the distance between them.

  • In 1820, Hans Christian Orsted discovered that electrical current creates a magnetic field, a discovery that led scientists to connect magnetism with electrical phenomena.
  • In the late 1820s, French mathematician André-Marie Ampère founded and named the science of electrodynamics, now known as electromagnetism. His name lives on today in the ampere, the unit for measuring electric current.
  • In 1827 Georg Ohm published his theory of electricity, known as Ohm’s law. His name lives on today in the ohm, the unit for measuring electrical resistance.
  • In 1831 Michael Faraday created a crude power generator known as the electric dynamo. Faraday’s invention moved a magnet inside a coil of copper wire, creating a tiny electric current that flowed through the wire.
  • By 1878, American Thomas Edison and British scientist Joseph Swan had each invented an incandescent filament light bulb in their respective countries. Although light bulbs had been invented by others, the incandescent bulb was the first practical bulb that would light for hours on end. Swan and Edison later set up a joint company to produce the first practical filament lamp, and Edison used his direct-current (DC) system to power the first New York electric street lamps in September 1882.
  • In the late 1800s and early 1900s, Serbian-American engineer, inventor, and all-around electrical wizard Nikola Tesla worked with Edison and later made many revolutionary contributions to electromagnetism, holding patents that competed with Marconi’s for the invention of radio. He is well known as the father of alternating current (AC) and AC motors, as well as for his work on the polyphase distribution system.
  • Later, American inventor and industrialist George Westinghouse purchased and developed Tesla’s patented motor and, with Tesla, convinced American society that the future of electricity lay with AC rather than DC, feuding with Edison in the War of the Currents.
  • An honorable mention goes to James Watt, whose steam engine was a defining development of the Industrial Revolution because of its rapid adoption across many industries. In recognition of Watt’s contributions to science and industry, the watt, the unit of power in the International System of Units (SI), was named for him.

Electricity is both a basic part of nature and one of the most widely used forms of energy. Electricity itself is a secondary energy source, because it is produced by converting primary sources of energy such as coal, natural gas, nuclear energy, solar energy, and wind energy into electrical power. It is also referred to as an energy carrier, meaning it can be converted to other forms of energy such as mechanical energy or heat. Primary energy sources are renewable or nonrenewable, but the electricity we use is neither.

Using 2016 numbers, over four trillion kilowatt-hours (kWh) of electricity were generated at utility-scale facilities in the United States. About 65% of this generation came from fossil fuels (coal 30.4%, natural gas 33.8%, petroleum 0.6%, and other gases 0.3%), just under 20% from nuclear energy, and about 15% from renewables (hydro 6.5%, wind 5.6%, biomass 1.5%, solar 0.9%, and geothermal 0.4%). An estimated 19 billion kWh of additional electricity was generated by small-scale solar photovoltaic systems.
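The quoted shares can be turned into rough kWh figures. This sketch uses 4 trillion kWh as the total (the figure above says “over four trillion,” so treat the results as approximate):

```python
# 2016 U.S. utility-scale generation shares quoted above, in percent.
TOTAL_KWH = 4.0e12  # approximate; the actual total was somewhat higher

shares_pct = {
    "coal": 30.4, "natural gas": 33.8, "petroleum": 0.6, "other gases": 0.3,
    "hydro": 6.5, "wind": 5.6, "biomass": 1.5, "solar": 0.9, "geothermal": 0.4,
}

# Convert each percentage share into an approximate kWh figure.
kwh_by_source = {src: TOTAL_KWH * pct / 100 for src, pct in shares_pct.items()}

fossil = sum(shares_pct[s] for s in ("coal", "natural gas", "petroleum", "other gases"))
print(f"fossil share: {fossil:.1f}%")           # fossil share: 65.1%
print(f"wind: {kwh_by_source['wind']:.3e} kWh")  # wind: 2.240e+11 kWh
```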

Electricity is there at the flip of a switch, no matter how far it travels; it arrives effectively instantaneously whether the power plant is one block away or many miles away. The poles and wires you see along roads and highways, and even in front of or behind your home, are called the electrical transmission and distribution system. Power plants across the country are connected to each other through an electrical system referred to as the “power grid.” If one plant fails or cannot produce enough electricity to run all the air conditioners on a hot day, another plant steps up and sends power where it is needed.

Electricity is made by large machines called turbines, which are spun very quickly by some source of energy: most plants use heat from burning coal or natural gas, while some use wind or moving water. The spinning turbine turns large magnets within coils of wire; these are the generators. The magnets moving within the coils cause charged particles (electrons) to move through the wire, and this flow is electricity.

 

Turbine generators, whether steam, gas, diesel, or otherwise, all operate on the same principle:

Magnets + copper wire + motion = electric current. The electricity produced is the same, regardless of the source.

 

The current is sent through transformers that increase the voltage so it can be pushed long distances across the country. Once the current reaches a substation, the voltage is lowered so it can be sent out over smaller power lines.

 

It travels through distribution lines to your neighborhood, where smaller pole-top transformers reduce the voltage again so the power can enter your home safely. The power connects through the service drop, passes a meter that measures how much each house uses, and arrives at the service panel in your basement, backyard, or garage, where breakers or fuses protect the wires inside your home from being overloaded.
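Each step-up or step-down stage follows the ideal-transformer ratio, Vs = Vp × Ns/Np. A sketch with hypothetical numbers (the 7,200 V primary and 30:1 ratio are assumed for illustration, not taken from this article):

```python
# Ideal transformer: secondary voltage scales with the turns ratio.
def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Vs = Vp * Ns / Np, ignoring losses."""
    return v_primary * n_secondary / n_primary

# Hypothetical pole-top step-down: 7200 V primary with a 30:1 turns ratio.
print(secondary_voltage(7200, 30, 1))  # 240.0
```

The same formula run in reverse (more secondary turns than primary) describes the step-up transformers that push current onto long-distance lines.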

The electricity travels through wires inside the walls to outlets and switches all over the house.

 

Electricity, like air and water, is another thing people take for granted. Few people stop to think what life would be like without it. Electricity brings us lighting, heating and cooling, and power for computers and televisions. Without it we would still be using candles, oil lamps, and kerosene lamps for lighting, iceboxes to keep food cold, and wood- or coal-burning stoves for heat.

Electricity travels in closed circuits (the term “circuit” comes from the word “circle”). It must have a complete path from the power station through the wires and back. If the circuit is open, electricity cannot flow; when it is closed, it can. Turning on a light switch closes a circuit so electricity can pass through the switch to the light; switching it off opens the circuit, stopping the flow to the light.
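The open-versus-closed idea can be modeled in a few lines; the switches here are a hypothetical stand-in for any break in the path:

```python
# Current flows only when every point in the loop is closed (a series path).
def circuit_flows(switches_closed: list) -> bool:
    return all(switches_closed)

print(circuit_flows([True, True]))   # True: circuit closed, light on
print(circuit_flows([True, False]))  # False: circuit open, no flow
```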

Energized electrical equipment can be deadly, so be careful and make sure everything is de-energized, if possible, before you work on it.

People often encounter situations in which they must work with energized electric tools or equipment. The most important thing to remember in these situations is to always treat electric circuits, apparatus, and your tools as energized and deadly, even when switched off. Statistically, someone is electrocuted, and possibly killed, every day somewhere in the United States. In addition, thousands of field workers are severely burned or injured every year by electrical mishaps on the jobsite.

Electricity can hurt, burn, and kill you, even at low voltages. Always keep in mind that electricity travels at nearly the speed of light and seeks the path of least resistance to ground. Your body is mostly water and is therefore an excellent conductor of electricity. The effects of an electrical current passing through the body range from a mild tingling sensation to severe pain, muscular contractions, and even death. As current passes through a body, it burns from the inside out.

Before you begin work, survey the jobsite for overhead power lines, poles, and guy wires. Look for lines that may be hidden by trees or buildings, and if you are digging where you know there are power lines underground, contact your provider a couple of days in advance and have them mark the yard so you know where the lines are, to prevent injury to yourself and others and damage to your home or work.

The easiest way to avoid electrical accidents is simply to avoid contact with energized components. Always presume that an electrical circuit is energized and dangerous until you are certain it is not, and even then, stay vigilant. Before working on a circuit, use a voltage meter to determine whether it is energized, and cap the conductors with tape or wire nuts whether they are energized or not.

Before you work on electrical equipment, turn off the power to it. If you don’t have a meter to confirm it is off, try operating the equipment, and be sure to return the switch to off once you have confirmed.

To be safe, all electrical equipment and apparatus should be double-insulated or grounded. If possible, avoid extension cords; if an extension device is equipped with a surge protector, it can be used on a permanent basis.

Here are a few more tips to keep in mind when conducting electrical work:

  • Check your work area for water or wet surfaces near energized circuits. Water acts as a conductor and increases the potential for electrical shock.
  • Check for metal pipes and posts that could become the path to ground if they are touched.
  • Do not wear rings, watches, or other metal jewelry when performing work on or near electrical circuits. They are excellent conductors of electricity.
  • Leather gloves will not protect you from electrical shock. They are typically cowhide and hold inherent moisture.
  • Never use metal ladders or uninsulated metal tools on or near energized circuits.
  • Make it a daily habit to examine your electrical tools and equipment for signs of damage or deterioration. Do not use them if the electrical wires are damaged or if they are not insulated or grounded. Defective cords and plugs should be thrown away immediately and replaced.

Electrical tip: Wire color code

Green, green/yellow striped, or bare wires are always used for ground.

DC (direct current): Red is hot (positive), black is not (negative)

AC (alternating current): Black (or red) is hot, white (or grey) is not. Red is a secondary hot, mostly used in 220 V circuits. Grey is a secondary neutral and can be used in place of white.

4-20mA: follow the AC code (black or red / white) for power and the DC code (black/red) for the output signal. Orange/yellow is for sensor/memory, brown/red for coil, and blue/violet for signal.

3-phase: L1 black or brown
L2 red or orange
L3 blue or yellow
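The AC conventions above can be collected into a simple lookup table. This sketch reflects only the colors listed here, so always verify against your local electrical code:

```python
# AC wire-color roles as described above (U.S.-style conventions).
AC_WIRE_COLORS = {
    "black": "hot",
    "red": "hot (secondary, e.g. 220 V)",
    "white": "neutral",
    "grey": "neutral (secondary)",
    "green": "ground",
    "green/yellow": "ground",
    "bare": "ground",
}

def role_of(color: str) -> str:
    # Unlisted colors are treated as hot until proven otherwise, per the
    # "assume energized" safety advice earlier on this page.
    return AC_WIRE_COLORS.get(color.lower(), "unknown: treat as hot until verified")

print(role_of("white"))  # neutral
print(role_of("Blue"))   # unknown: treat as hot until verified
```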

If you are looking to understand current loops and why most industries use 4-20mA, check out this page:

http://eagleyeforum.com/understanding-4-20ma/

 

Acronym Legend:

  • mA- Milliamp
  • PID- Proportional Integral Derivative
  • SCADA- Supervisory Control and Data Acquisition
  • PLC- Programmable Logic Controller
  • DC- Direct Current
  • AC- Alternating Current
  • HMI- Human-Machine Interface
  • GPH- Gallons per hour
  • pH- Power of Hydrogen
  • ORP- Oxidation-Reduction Potential

Introduction:

The 4-20mA current loop is a DC loop that is the dominant standard for analog process-control signals used to transmit process information in many industrial applications.

These loops are used to control and carry signals from field instruments to PID controllers and PLC cabinets, eventually reaching data systems like SCADA. It is an ideal method of transferring process information because current does not change as it travels from transmitter to receiver; much like water flowing through your home's plumbing, the flow is constant.

Origin:

Before electronic circuitry, process control was completely mechanical, and therefore the standard was pneumatic control signals driven by compressed air, ranging from 3-15 psi. The range started at 3 psi because signals below 3 were unrecognizable, and a live zero (3 psi) was easy to differentiate from complete failure (0 psi); 15 psi kept the scale easily divisible into percentages: 3 (0%), 6 (25%), 9 (50%), 12 (75%), 15 (100%).

In the 1950s when electronics became less expensive and more popular, current input became the more efficient and preferred standard.

Over the years, other loops were used for applications as people experimented, but the main ones were:

  • 10-50mA, because the magnetic amplifiers of that era required a minimum of 10mA to operate
  • 4-20mA, which, as you can tell, shares the same divisional breakdown as the other two standards: 3 psi/10mA/4mA = 0%, 6 psi/20mA/8mA = 25%, 9 psi/30mA/12mA = 50%, 12 psi/40mA/16mA = 75%, and 15 psi/50mA/20mA = 100%

The 4-20mA values were also favored because they correspond to a 1-5V analog voltage across a 250 Ohm resistor, making it easy to adapt the 4-20mA current loop to a 1-5VDC analog voltage input as well.
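As a quick sanity check on that relationship, Ohm's law (V = I * R) reproduces the 1-5V mapping. This is a minimal sketch with assumed function names, not tied to any particular hardware:

```python
# Ohm's law: V = I * R. A 250 Ohm sense resistor converts the
# 4-20mA loop current into the classic 1-5V analog input range.
def loop_voltage(current_ma, resistance_ohms=250.0):
    """Voltage (V) developed across the sense resistor."""
    return (current_ma / 1000.0) * resistance_ohms

for ma in (4, 8, 12, 16, 20):
    print(f"{ma}mA -> {loop_voltage(ma):.2f}V")
# 4mA -> 1.00V, 8mA -> 2.00V, 12mA -> 3.00V, 16mA -> 4.00V, 20mA -> 5.00V
```

The same function also shows why a 10-50mA loop would need a 100 Ohm resistor to land on the same 1-5V range.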

Current, like water flow, can be restricted. While water is restricted by reducing pipe size, current is restricted by resistance; resistors are placed strategically in line to reduce the current and keep equipment from getting fried. Even with resistance in the loop, current flow remains constant at every point, which is why a current loop is recommended over an analog voltage signal.

This is why using current as a means of conveying process information is so reliable: it is kept low yet sufficient, so there is rarely a significant drop.

The reason for the minimal value (3 psi, 4mA, 10mA) is that it must be enough to drive the minimum requirements of the devices while still differentiating a "live zero" from a real loss of energy. A typical transmitter requires a minimum of about 3mA to power up, and 4mA is nothing more than a transmitted value; when you receive 3.5-4mA, it shows that you still have power and that the value is live. Anything less indicates a disruption in the power supply, such as loss from the provider, a cut cable, or a tripped breaker.

The stated reason 20mA was used as the maximum is safety: the human heart can withstand up to 30mA of current, so 20mA was chosen to stay well below that limit.

In choosing the minimum and maximum, the originators wanted to stay within the parameters of 3-30mA while remaining linear with the base value, which left only two options: 4-20 or 5-25. Since calculations are easier in multiples of 2, 4-20mA got more votes, and thus 4-20mA is what we have.

Current loop steps

The 4 components necessary to make a 4-20mA loop are:

Power Source

The most common DC power source for 4-20mA is 24V, but 12V, 15V, and even 36V have been used, since some older systems required higher voltage. DC is used over AC because of the magnitude of the current: DC is constant while AC continuously changes, making it difficult to transmit a stable signal level. With that in mind, the power supply must be greater than the sum of the minimum voltage required to operate the transmitter, plus the IR drop of the receiver, and, on long transmission runs, the IR drop of the wire.

When calculating that drop, consider the maximum current that can flow through the 4-20mA loop: not just your 20mA value, but the over-scale or alarm limit of the transmitter.
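The supply-sizing rule above can be sketched as a small calculation. The component values here (transmitter minimum voltage, receiver resistance, wire resistance, over-scale limit) are illustrative assumptions, not figures from this article:

```python
def min_supply_voltage(transmitter_min_v, receiver_ohms, wire_ohms, max_loop_ma):
    """Minimum DC supply: the transmitter's minimum operating voltage
    plus the IR drops of the receiver and the wiring, evaluated at the
    highest current the loop can carry (the over-scale/alarm limit)."""
    amps = max_loop_ma / 1000.0
    return transmitter_min_v + amps * (receiver_ohms + wire_ohms)

# Assumed example: 12V transmitter, 250 Ohm receiver, 25 Ohms of wire,
# and a 22mA over-scale limit.
needed = min_supply_voltage(12.0, 250.0, 25.0, 22.0)
print(f"Supply must exceed {needed:.2f}V")  # Supply must exceed 18.05V
```

In this example a standard 24V supply would leave comfortable headroom.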

Sensor

The device used for measuring the process variable, commonly temperature, humidity, valve position, motor speed, flow, pH, ORP, ammonia, level, and/or pressure.

 

Transmitter

This is the key to transmitting the 4-20mA signal from the sensor to the controller, converting the real-world signal of flow, speed, pressure, etc. into a control signal that regulates the flow of current in the loop. It takes the signal the sensor gives it and converts it to a 4-20mA source the controller can understand, with 4mA representing 0% and 20mA representing 100% of the scaling/span.

The transmitter only drops the voltage it needs to convert the measurement; most of the measurable drop occurs across the resistor in the controller.

 

Receiver/Controller

This device is at the other end of the transmission line, receiving the transmitted signal. The unit itself can be any number of different devices, such as a panel meter, PLC, motor speed control, or some other digital control system like SCADA or BACTALK, which you span to the process: 0-X feet, 0-X degrees, 0-X gallons per hour, etc., "X" being whatever you program as the maximum capacity your instrument will read.

It reads the output the transmitter gives it and either displays it in a simple form so operators can see it, or stores the info in a database for trends. For example, if you have a tank that is 30 feet deep and you set a span of 0-30, the sensor sends a signal which the transmitter converts to 12mA, and the control system will reflect 15 feet, since 12mA is 50% of the span. It will either show this level on a display of some sort or add it to a software trend log for future reference.
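The tank example is a simple linear scaling, which can be sketched as follows (function names are mine; nothing beyond the 4-20mA convention is assumed):

```python
def ma_to_units(current_ma, span_min, span_max):
    """Map a 4-20mA signal onto an engineering span (linear scaling)."""
    percent = (current_ma - 4.0) / 16.0  # 4mA -> 0%, 20mA -> 100%
    return span_min + percent * (span_max - span_min)

# 12mA on a 0-30 foot tank span: 12mA is 50%, so the level is 15 feet.
print(f"{ma_to_units(12.0, 0.0, 30.0):.1f} feet")  # 15.0 feet
```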

Span matters a great deal: the local field span must match the span on the analog card for accurate results.

 

Here are some examples of uses:

  • You have a powered local controller spanned 0-300 GPH for a flowmeter. The flowmeter is transmitting 12mA back to the controller, giving it a reading of 150 GPH; if that controller has an output source to an HMI, it will reflect the same value.

  • You have a tank that is 40 feet deep, so you span your controller for 0-40' and place your transducer on it. When it reads 10', your controller is receiving an 8mA signal transmitted from the sensor.
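Both bullet examples are the inverse mapping, engineering units back to loop current. Again a hedged sketch with assumed names:

```python
def units_to_ma(value, span_min, span_max):
    """Map an engineering value on a span to its 4-20mA equivalent."""
    percent = (value - span_min) / (span_max - span_min)
    return 4.0 + percent * 16.0

print(f"{units_to_ma(150.0, 0.0, 300.0):.1f}mA")  # 12.0mA (0-300 GPH flowmeter)
print(f"{units_to_ma(10.0, 0.0, 40.0):.1f}mA")    # 8.0mA (0-40 foot tank)
```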

4-20mA isn't limited to just reading; it regulates too. Here is an updated photo of the level maxed out, signaling an alarm at the PLC that calls for the motor to kick on and pump water out.

The PLC houses analog and discrete cards for these purposes. The analog cards are spanned to match the 4-20mA ranges, while the discrete cards handle status such as motor running/not running, alarms active, valve opened/closed, etc. They work together; for example, a signal coming back high trips the alarm and tells the pump to come on through ladder logic programming on the software side of the PLC. If the full capacity of the pump isn't required, or a valve doesn't need to be fully open, you can assign an analog signal to them as well to regulate how much flow should pass.

The same works with air conditioning: if you have an HVAC system linked with a span of 0-100 degrees and your thermostat is set at 71 degrees, it would show 15.36mA; if you adjust it to 78 degrees, it would then send a 16.48mA signal. The fan itself would be tied to the discrete card to indicate on/off when the temperature is above/below the setpoint.
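The HVAC example combines an analog value with a discrete status, which can be sketched as follows (the span and setpoint logic mirror the paragraph above; function names are assumptions):

```python
def temp_to_ma(temp, span_max=100.0):
    """Temperature on a 0-100 degree span as a 4-20mA signal."""
    return 4.0 + (temp / span_max) * 16.0

def fan_on(temp, setpoint):
    """Discrete (on/off) status: run the fan when temp exceeds the setpoint."""
    return temp > setpoint

print(f"{temp_to_ma(71.0):.2f}mA")  # 15.36mA at a 71 degree setpoint
print(f"{temp_to_ma(78.0):.2f}mA")  # 16.48mA at 78 degrees
print(fan_on(75.0, 71.0))           # True: temperature above setpoint
```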

 

To go over it again, the numbers represent the real value, in percentages, of the process once a minimum and maximum are established. Here is a breakdown of what those numbers mean in real value:

4-20mA:

4mA=0%, 8mA=25%, 12mA=50%, 16mA=75%, 20mA=100%

Likewise for the pneumatic 3-15 psi:

3psi=0%, 6psi=25%, 9psi=50%, 12psi=75%, 15psi=100%

The analog cards work in spans of 0-X on various processes; they convey and convert the ranges from the field to the HMI or SCADA unit, while the discrete cards do the same, but only as on/off status.