Battery life, or "design life", is based on average use at room temperature (20-25°C). For a modest UPS system, the design life is typically 5 years. Since UPS applications are standby applications, the batteries are float charged, and the life is also referred to as "float life".
The moist gel interior of VRLA batteries dries up over time, gradually reducing the effectiveness until the battery capacity is no longer viable for the application. This is why batteries will wear out regardless of how well they are maintained.
Typically, a 5 year design life battery gives around 200 charge/discharge cycles. This is because each charge and discharge involves a chemical reaction, and this causes corrosion within the battery itself.
As this limit is approached the battery capacity starts to tail off, and can become very low very quickly. You can see that if a battery is cycled daily, for example, those 200 cycles are used up in well under a year, so the life expectancy is lower than one year.
Note how cycle life can be extended significantly by reducing the battery depth of discharge.
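As a rough illustration of that depth-of-discharge effect, here is a sketch with assumed, representative VRLA figures; check your battery's datasheet for the real curve:

```python
# Illustrative sketch: cycle life vs depth of discharge (DoD) for a
# typical 5-year-design-life VRLA battery. The figures below are
# assumed, representative datasheet-style values, not from this article.
TYPICAL_CYCLES_AT_DOD = {
    1.00: 200,   # full (100%) discharges -- roughly the 200 cycles above
    0.50: 500,   # half discharges
    0.30: 1200,  # shallow discharges
}

def estimated_cycles(dod: float) -> int:
    """Return the assumed cycle life for the nearest tabulated DoD."""
    nearest = min(TYPICAL_CYCLES_AT_DOD, key=lambda d: abs(d - dod))
    return TYPICAL_CYCLES_AT_DOD[nearest]

for dod in (1.0, 0.5, 0.3):
    print(f"{dod:.0%} DoD -> ~{estimated_cycles(dod)} cycles")
```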
Sulphation
If the battery is allowed to stand unused for a prolonged period of time, lead sulphate crystals form, blocking recharge. If this happens, the UPS charger is usually incapable of recharging the batteries. It is sometimes possible to recover such batteries using high charging voltages to break down the sulphate, but this requires a current-limited charger and temperature monitoring, and as such is beyond the scope of most UPS built-in chargers.
Sulphation occurs mainly when batteries are allowed to stand in an uncharged state. This is why it is important to have your UPS charged as soon as possible after an outage.
Heat
The float life of batteries is rapidly reduced with heat, and I mean rapidly.
HIGH TEMPERATURE will reduce battery service life often quite dramatically, and in extreme cases can cause Thermal Runaway, resulting in high oxygen/hydrogen gas production and battery swelling. Batteries are irrecoverable from this condition and should be replaced.
Based on this, if the batteries are locked in a cupboard with little ventilation and the temperature is allowed to build, to 50°C for example, then a 5 year float life battery would be expected to last no more than 6 months, regardless of how it has been used.
The results of thermal runaway on a VRLA battery
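To put rough numbers on the heat effect, here is an illustrative sketch. The halving rule is a common industry rule of thumb (float life roughly halves for every 8-10°C above the rated ambient), not a figure from this article, and real derating curves vary by manufacturer:

```python
def estimated_float_life_years(design_life_years: float,
                               ambient_c: float,
                               rated_c: float = 25.0,
                               halving_step_c: float = 10.0) -> float:
    """Rule-of-thumb estimate: VRLA float life roughly halves for every
    8-10 degC above the rated ambient (an Arrhenius-style approximation)."""
    excess = max(0.0, ambient_c - rated_c)
    return design_life_years / (2 ** (excess / halving_step_c))

# A 5-year battery kept at 50 degC, using the pessimistic 8 degC step:
print(f"{estimated_float_life_years(5, 50, halving_step_c=8.0):.1f} years")
# ~0.6 years (about 7 months) -- broadly in line with the article's
# "no more than 6 months" figure.
```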
Battery Life Conclusions
A battery cannot be expected to last beyond its design life, so schedule a replacement before then.
Regular cycling of the battery will diminish its performance. If your application is for regular charge/discharge cycles then the life expectancy reduction needs to be considered.
Avoid heat build-up. Ensure the UPS and batteries are well ventilated with adequate air flow through the air intakes. Ensure vents are free from a build-up of dust and the UPS is not in direct sunlight.
Always recharge the batteries as soon as possible after an outage to prevent the possibility of sulphation.
Our client runs a campus environment and had been suffering from regular outages on his CCTV equipment. Throughout the campus the CCTV cameras take their power directly from the street lights. This meant that a UPS solution would have to power not just the CCTV, but the lights as well.
Another problem was that space is a real issue. A competitor had visited this site and proposed a UPS solution that would fit in a pre-fabricated cabinet outside the comms room and keep the system up and running for a good 12 hours or so. Hmm, this seemed like real overkill; a more cost-effective solution would be to fit a small UPS within the comms cabinet and use a generator outside. This was a better solution, but not one the client wanted to pursue. Discussing this with the client, it became apparent that having the system up and running for 12 hours was more of a wish list than a real requirement. In fact, around 60 to 90 minutes would be acceptable. What else could we do?
The site has the street lights split into three zones, each powered from a single phase. This necessitated a three-phase UPS system, even though the entire power consumption was only in the region of 3000W. Our standard 10kVA three-phase UPS, the VFI33-10KT, would provide around 20 minutes of runtime. Not long enough.
Fitting a battery pack comprising ±120V strings of 36Ah capacity did the job exactly, with a calculated 102 minutes of runtime. There’s the solution; now where’s it going to go?
The comms room was a bit of a squeeze. And calling it a comms room is also a bit misleading: it was more of an outhouse than anything. The UPS could possibly fit, but then getting in would be a challenge. As luck would have it, another outhouse was nearby that we could wire the UPS to. Bring on the electricians.
A schematic was made up, discussed with the site electrician and a plan put in place to minimise downtime. Phase 1, the electrician would run cables to the outhouse and fit the UPS input and output breaker panels. Phase 2, UPS installation, leaving it in bypass mode. Phase 3 involved unavoidable downtime where the power feeds to the cameras needed to be diverted to the UPS.
Once this was completed, the UPS internal bypass made sure that power was still being presented to the CCTV. All that was left was for Power Inspired to return to site and commission the UPS system: take it out of bypass and switch it online. All completed without any downtime.
Happy days: over 90 minutes of autonomy for a CCTV UPS application, from a 10kVA three-phase UPS system with an additional battery cabinet.
It was an August Friday, and there was a sense of urgency prior to the blackout: people skipping off to start their weekend, or at least feeling the endorphins released by the anticipation that R&R was imminent for those with the weekend off.
This Friday was different. Just before 5pm, trains halted and traffic lights glitched in central London; a metaphorical handbrake was placed on everyone’s journeys. A blanket of darkness swept over parts of England and Wales on the late afternoon of 9 August 2019. Chaos engulfed the regular journey home for many London commuters, whether they were sat in their cars or wading through busy mainline stations.
Walking through Cardiff around the same time you’d have heard masses of security alarms going off, like something from a Mission Impossible movie. Newcastle homes and businesses were affected, and the local airport announced flight cancellations. Not all of the power outages on that day can be attributed to the causes discussed here; the recorded blackouts are visible in the map below.
Power cuts registered on 9 August 2019. Source: https://www.dailymail.co.uk/news/article-7343681/Government-launches-probe-mysterious-power-cut.html
You might be thinking: how can a train problem at London Euston simultaneously affect traffic lights in Bradford, yet cause no significant power disturbance in between? Areas of England and Wales far apart from each other were affected that Friday, and for differing lengths of time. I was in Oxford and had no inkling: no horror-movie lights-out, no sudden quiet as my fridge cut out to signal downed power lines. The question may not bother you for longer than a few seconds, because you know about the National Grid, right? You may have mapped out a plan that if it ever happened you’d start walking home, and if it went on for hours you’d be forced into eating the contents of your fridge before it spoiled.
If you want to understand a little bit more about your electricity supply, and get an insight into what is keeping the WiFi signal alive, here’s a high-level intro to the events of that evening.
A little technical background on AC power networks is needed along the way. When power demand is greater than the power generated, the frequency falls. If the generated power is greater than the demand, the frequency rises. Once the frequency fluctuates outside the set tolerance of 50Hz ±1% (i.e. 49.5-50.5Hz), any service or appliance connected to the grid can experience instability and/or damage. Hence frequency changes are monitored and balanced meticulously by controlling demand and total generation. There’s a website here if you want to see what is happening right now.
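As a concrete restatement of that tolerance band, here is a minimal sketch (ours, not the grid operator's) of the check:

```python
# Minimal sketch of the 50Hz +/-1% tolerance band described above.
NOMINAL_HZ = 50.0
TOLERANCE = 0.01  # +/-1%

def frequency_in_tolerance(measured_hz: float) -> bool:
    low = NOMINAL_HZ * (1 - TOLERANCE)   # 49.5 Hz
    high = NOMINAL_HZ * (1 + TOLERANCE)  # 50.5 Hz
    return low <= measured_hz <= high

print(frequency_in_tolerance(49.9))  # True: within the band
print(frequency_in_tolerance(48.8))  # False -- roughly the level the grid
                                     # reportedly fell to on 9 August 2019
```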
Watt happened then?
1.1 million customers experienced a problem; blackouts on this scale are rare. The energy watchdog Ofgem demanded a report. It was released a couple of weeks ago, and some of its findings are mentioned here.
Key Connected Personnel
Did the fact that it was 5pm on a Friday, and certain connected people had started their weekend, have anything to do with it? Only in terms of operational communications. There’s a protocol stating that a sequence of communications must be released. Owing to the incident being on a Friday, it is believed that certain key members were not readily available; however, it’s a red herring to believe this had an impact on the nationwide extent of the power cut. The important decisions were left to the Electricity System Operator (ESO) control office that manages the response in such situations.
Electricity demand
The ESO had forecast demand, expecting it to be the same as the previous Friday. Post-event analysis mapping the demand profiles for the two days shows almost identical dips and rises, so nothing to point the finger at here. It’s not like Love Island airing and causing a surge in demand, as happened earlier in 2019. That particular increase in demand caused the grid to switch on coal-powered generation after the longest stint of fossil-free power generation. (Like we needed a reason to dislike that programme.) Incidentally, that record still stands at 18 days, 6 hours and 10 minutes. To date this year, we have prevented 5 million tonnes of carbon dioxide (source: the Guardian) being released into the atmosphere. Greta would be pleased to know.
Electricity generation for the day was as expected; humans are creatures of habit, so consumption was predictable, and it was known that neither wind nor solar was going to break any records. The generation mix was as per any regular day in August.
The Weather
Did the weather have anything to do with it? The ESO control room is provided with lightning strike forecasts from MeteoGroup, given by geographical region and stating the likelihood of a strike on a scale of 1 to 5. A few minutes prior to the strike, the forecast sent across was ‘1’, signifying that the highest risk of lightning was predicted practically everywhere in England. Within the two hours prior to 5pm, mainland UK had 2,160 strikes. So when the lightning strike on the transmission circuit occurred, hitting a pylon near St Neots, it wasn’t a surprise.
Lightning strikes are routinely managed as part of everyday system operations; the eventuality is factored in by the ESO. The protection system detected the fault and operated as expected, within the 20-second time limit specified by the grid code. The embedded generation that formed part of the response to the strike had a small issue on the distribution system, and a 500MW reduction in generated electricity was recorded. The grid frequency remained in tolerance, any voltage disturbances were within industry standards, and the Loss of Mains protection that was triggered operated on cue. The ESO states this was all handled as expected in response to the lightning event, and the network was restored to its pre-event condition.
The catalyst for the wide-scale power problems was a pair of unrelated, independent failures that occurred at two separate sites just minutes later. Not one but two disparate power plants had a problem at nigh on the same time.
An off-shore wind farm (Hornsea) and a gas power station (Little Barford) started under-generating, and the National Grid lost a combined 1,691MW. (For the record, these losses are attributed to the consequences of the lightning strike, but the industry is asking questions about compliance; nothing has been clarified yet.) These generators fell off the grid and, as demand was now greater than the electricity generated, the frequency fell below the tolerance value. To correct the frequency, the ESO did its job by disconnecting major load: it had to reduce demand by 1,000MW. This equated to 5% of consumption at the time (implying a total national demand of roughly 20,000MW), so hopefully you were in the protected 95% that was kept powered!
Why wasn’t generation increased?
Reserves are already part of most plants, so the solution would be to have more reserves available, right? Yes, but it is cheaper to just turn off the load. It is also instantaneous; not unlike having an overload on your UPS, you react by unplugging the load and then contemplate whether you need a higher-capacity UPS. Not every power source can produce enough energy to stabilise the demand-generation equation. Ramping up generation represents a significant outlay, and sometimes the costs are inexact, particularly for solar/wind plants due to forecast uncertainty; and lest we forget, every power plant is a business that needs to make money.
A note about renewable energy: the National Grid supply was originally set up for fossil fuels, and the integration of renewable energy into the system is not simple. There are technical issues relating to inertia, stability and ongoing compliance monitoring that need to be addressed by policy makers and operators before we see large-scale deployment. Around 30% renewable penetration seems to be the average uptake globally in any one country; more than this will require changes to system operations and market designs. It is comforting to know that the National Grid is already being adapted, and is expected to be a carbon-free grid within the next 6 years.
Reducing demand
Each geographical region of the country is unique. Frequency recovery in a given area depends on the transmission voltage, the transmission lines, energy generation, voltage support and so on. Routine maintenance is carried out on circuits and equipment, rendering them out of service. Simply put, each region reacts differently when demand and generation are altered. The ESO is set up to manage faults and control demand in a predetermined manner, based on knowledge of the limitations of all these regions; those that lost power were scheduled to lose power.
Large users of electricity know the score when they connect to the grid. In these situations the ESO will trigger the Distribution Network Operators (DNOs) to power down companies who have contracts agreeing that their energy can be cut off to stabilise the grid, i.e. balance the frequency. It doesn’t matter if it’s peak time at London’s stations; the agreement is to pull the switch on non-essential supplies. The ESO signals the DNOs to stop powering those companies when it needs to control demand. The agreement is to cut off for 30 minutes.
The further delays experienced by London’s commuters past this half an hour are reported to be the result of those companies having to restart their systems after being cut off. One new class of train needed technicians on site to manually reboot approximately 80 trains, individually. In some cases the train companies had shifted supply to backup power, but then had complications switching back onto grid power when the grid returned.
Companies would rather have the power cut than sustain long-term equipment damage. Even so, it was unacceptable to the train line operators, and they did demand answers, as the scale of disruption was phenomenal.
The report suggests that the ‘critical loads’ were affected for several hours because of shortcomings in those customers’ own systems, as the DNOs had only pulled the switch for 30 minutes. It also suggests that no critical infrastructure or service should be placed at risk of unnecessary disconnection by DNOs.
There are plans afoot to address the shortcomings highlighted by the report; we can only wait to see whether a power cut on this scale reoccurs. Modern technology can only facilitate improvements. Many of us have Smart Meters installed, and the data these feed back will allow smart management, giving the DNOs the opportunity to improve reliability and switch off only the non-critical loads when their network is put under stress. Hey, you didn’t believe those temperamental meters were just a freebie for you to cut back usage and reduce your fuel bills, did you?
Need an idea of how long your UPS will last, i.e. how much runtime you will get out of your UPS? Then this UPS Runtime Calculator is just what you need.
You’ll need to know how much power (in Watts) your UPS is delivering, and how many battery blocks, of what Ampere-Hour capacity, are in your UPS.
This calculator is based upon 12V blocks only and will only accept integer values. So if you have a single 6V battery of 12Ah capacity, you’ll need to enter it as a 12V 6Ah battery (the same stored energy). If the spec of your battery is not in Ampere Hours but Watt Hours, then as a very rough guide divide the Wh rating by 4 to get the Ah. If you have a 7.2Ah or 8.5Ah battery, rounding down will give you a minimum estimate and rounding up a maximum.
Note, the calculator is approximate. No assumptions are made about standby current consumption or inverter efficiency; these differ between UPS models and at different load levels, so please use it just as a guide. For example, if you have an AC load of 1000W, the calculator makes no allowance for DC-to-AC conversion losses; this allows you to add your own. If your system uses 5W in standby and has an efficiency of 90%, then for a 1000W AC load use 1000 / 0.9 + 5 ≈ 1116W.
If your load varies over time, you’ll need to estimate the average power consumption. Size the UPS to meet the maximum power draw expected, but calculate the runtime based upon the average power consumption.
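To make the arithmetic concrete, here is a minimal sketch, in Python, of the kind of estimate such a calculator makes. It is not the calculator's actual code: it simply divides stored battery energy by load power, ignoring discharge-rate effects, so real runtimes (especially at high loads) will be shorter.

```python
# Minimal runtime-estimate sketch in the spirit of the calculator above.
# Standby draw and inverter efficiency are ignored unless you fold them
# into `load_w` yourself, exactly as the text suggests
# (e.g. 1000 W at 90% efficiency + 5 W standby -> ~1116 W).

def runtime_minutes(blocks_12v: int, block_ah: float, load_w: float) -> float:
    """Very rough runtime estimate from total stored battery energy."""
    if load_w <= 0:
        raise ValueError("load must be positive")
    watt_hours = blocks_12v * 12.0 * block_ah  # total energy in the pack
    return watt_hours / load_w * 60.0

# Example: the adjusted 1116 W load on four 12 V 9 Ah blocks:
print(f"{runtime_minutes(4, 9, 1000 / 0.9 + 5):.0f} min")  # ~23 min
```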
UPS Runtime Calculator
If you want to select a UPS to meet load and runtime requirements, please use the UPS Selection Tool.
If you’ve used the UPS Runtime Calculator please leave a comment or drop us a line with any ideas.
Did you know that BS7671:2018 Requirements for Electrical Installations, a.k.a. the IET Wiring Regulations 18th Edition, states that any socket outlet rated 32A and under must be protected by a Residual Current Device (RCD)?
Section 411.3 covers the requirements for fault protection. Regulation 411.3.3, entitled “Additional requirements for socket-outlets and for the supply of mobile equipment for use outdoors”, states:
In AC systems, additional protection by means of an RCD with a rated residual operating current not exceeding 30mA shall be provided for:
(i) socket-outlets with a rated current not exceeding 32A
BS7671:2018 Regulation 411.3.3
In other words, any socket outlet that you plug anything into (basically anything powered from a 13A outlet, or up to 8kVA systems on Commando connectors) must have an RCD protecting that circuit. There are exceptions to this (though not for dwellings), but only following a documented risk assessment which clearly states why an RCD is not necessary.
Purpose of RCDs.
An RCD works differently to a miniature circuit breaker (MCB) or fuse. An MCB renders devices safe in the event of an overload or a short circuit to earth. They are rated in Amps, generally in stages from 1-32A. RCDs work by tripping on an earth leakage fault, typically of 30mA: a fault current up to 1000 times smaller than an MCB’s rating! RCDs are useful because certain hazards can exist, in the event of a fault, that will not trip an MCB. Typically this involves applications that come, or may come, into contact with water.
Earth leakage is a small current that flows from the phase conductors to earth. This causes an imbalance between live and neutral, and it is this imbalance that RCDs detect. If the earth leakage on an appliance is high enough, due to a fault or water contact, then the equipment chassis can deliver a dangerous “touch current” to a user who touches it. The RCD is there to protect against this scenario. If your application involves water, it is very difficult for a risk assessment to justify the omission of an RCD from the electrical infrastructure unless other safety measures are taken.
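Conceptually, the detection principle can be sketched in a few lines. This is an illustration of the imbalance idea only; a real RCD senses the imbalance magnetically with a core-balance transformer:

```python
# Sketch of the imbalance principle an RCD works on: the current leaving
# on the live conductor should return on the neutral; any difference
# has leaked to earth.

TRIP_THRESHOLD_A = 0.030  # 30 mA rated residual operating current

def rcd_should_trip(live_current_a: float, neutral_current_a: float) -> bool:
    residual = abs(live_current_a - neutral_current_a)  # earth leakage
    return residual >= TRIP_THRESHOLD_A

print(rcd_should_trip(10.000, 9.995))  # False: 5 mA leakage, within limits
print(rcd_should_trip(10.000, 9.960))  # True: 40 mA leakage, trip
```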
Isolation Transformer
An isolation transformer, by its very nature, will stop RCDs from tripping, even in the event of an earth fault. See Isolation Transformers – what you need to know for further reference on this. However, this isn’t a problem. In fact, the isolation transformer can make the installation safer than the RCD alone: even a device with a fault can be touched by a user without any hazard occurring. Unless, and I can’t stress this point enough, the isolation transformer has the output Neutral and Earth bonded!
N-E bonds are not there for safety, but rather for noise rejection performance, by establishing a zero-volt neutral-earth voltage. Isolation transformers in conjunction with UPS systems provide a very resilient power protection solution. However, in order to ensure the system is safe, you should not bond N-E. Our isolated UPS systems leave the output floating, providing true isolation and an inherently safe electrical environment. If you use an N-E bonded system where no risk assessment has been carried out to determine that no RCD is necessary, then this contravenes the requirements of BS7671:2018.
Decision Flowchart
Start by asking if there is a documented risk assessment explaining why there is no need for an RCD on a socket outlet. If there is, then you’re good to go and any UPS suits this scenario: you can use isolated (floating or N-E bonded) or non-isolated, depending upon your requirements.
If there is no risk assessment in place, then check whether an RCD is fitted. If not, or if it’s unknown, then the safest solution is a truly floating isolated UPS. Granted, if no RCD is in place, fitting any UPS does not make the situation less safe; it’s just that a floating isolated UPS does make it safe.
If an RCD is fitted and no risk assessment has been carried out, then you must not use an N-E bonded system (see NOTE 1), as this removes the safety function of the RCD.
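The flowchart logic above can be summarised in code. This is our paraphrase of the decision, not wording from BS7671:

```python
from typing import Optional

def recommend_ups(risk_assessment_documented: bool,
                  rcd_fitted: Optional[bool]) -> str:
    """Sketch of the decision flowchart described in the text."""
    if risk_assessment_documented:
        # Any UPS works: isolated (floating or N-E bonded) or non-isolated.
        return "any UPS, per your other requirements"
    if not rcd_fitted:  # False, or None meaning unknown
        # Safest without a (known) RCD is a truly floating isolated UPS.
        return "floating isolated UPS"
    # RCD fitted, no risk assessment: an N-E bond would defeat the RCD
    # (unless a secondary RCD is fitted on the UPS output -- see NOTE 1).
    return "any UPS except an N-E bonded system"

print(recommend_ups(False, None))  # -> floating isolated UPS
```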
Conclusion
According to the 2018 Wiring Regulations, an RCD needs to be fitted on any socket-outlet circuit of 32A or under. This will cause power to be removed if earth leakage of over 30mA is detected. A standard UPS will not interfere with the operation of the RCD; however, an isolated UPS will prevent the RCD from operating.
However, a floating isolated system, where Neutral and Earth are not connected, provides a safe electrical environment. In situations where an RCD should be installed (for example where the application involves water) and the electrical infrastructure is unknown (for example older installations where an RCD was not a mandatory requirement), a floating isolated UPS provides the ideal solution.
An isolated UPS that is floating renders RCDs ineffective but provides enhanced safety by removing any touch current hazard.
On the other hand, an N-E bonded UPS system not only negates an RCD but does not make safe any scenario the RCD was required to protect against. There’s a reason for Regulation 411.3.3 of BS7671, and this situation violates it.
An isolated UPS with a Neutral and Earth Bond renders RCDs ineffective and does not protect against hazards for which the RCD is intended.
NOTE 1: Unless a secondary RCD is fitted to the output of the UPS.
Many moons ago we blogged about BS8418:2010 (Installation and remote monitoring of detector-activated CCTV systems, Code of Practice) and the requirements for UPS Systems. That standard stated:
Unless the mains power supply is supplemented with a stand-by generator, an uninterruptible power supply (UPS) must be able to power the CCTV control equipment and communications devices for a minimum of 4 hours after mains power failure. Where the mains power is supplemented by a stand-by generator, the UPS needs to be capable of providing stand-by power for a minimum of 30 minutes after mains power failure (for example if the stand-by generator does not start).
The 2015 revision relaxed this somewhat, allowing a documented threat assessment and risk analysis to determine whether a UPS is required. That said, it is difficult to show how threats or risks from a loss of power are mitigated without a UPS, so the requirement for UPS systems is likely to remain in BS8418:2015 installations.
If a UPS is used as the “alternative power source”, the requirement has been changed from 4 hours to 30 minutes when supporting control equipment and data transmission devices. However, the standby power requirement for detectors and semi-wired detectors remains at 4 hours.
Find a UPS Solution
Enter your load power and how long you need the UPS to provide backup power for. The UPS Selector will identify the UPS models that meet your requirements.
You can filter the selection based upon required features by clicking the checkboxes. Many models are available to buy online from our webstore, but contact us using the form below for specific requirements or for other products not available to purchase online.
Our Lithium-Ion UPS range is an impressive series of UPSs with internal Lithium-Ion batteries, which make the units efficient, lightweight and more environmentally friendly. They also reduce the whole-life costs of the UPS. We have conducted some tests to show you how the Lithium-Ion UPS compares to the VRLA UPS in terms of runtime.
Each unit is connected to an 1800W load. The Lithium UPS battery capacity is 48V × 9.9Ah ≈ 475VAh. The VRLA UPS battery capacity is 72V × 9Ah = 648VAh. Although the Lithium UPS has only around 73% of the VRLA UPS’s battery capacity, the runtime results are outstanding! See the video below:
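(For a quick check of the capacity arithmetic above, VAh is simply nominal voltage times amp-hour rating:)

```python
# Quick check of the capacity figures quoted above (VAh = V x Ah).
lithium_vah = 48 * 9.9   # 475.2 VAh
vrla_vah    = 72 * 9.0   # 648.0 VAh
print(f"Lithium pack holds {lithium_vah / vrla_vah:.0%} of the VRLA capacity")  # 73%
```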
We provide a 5-year warranty on the Lithium-Ion UPS systems, including the batteries.
Lithium-Ion UPS only from Power Inspired. Learn more at www.lithium-ups.com and register your interest.
Transfer time, sometimes also called switchover time, is the amount of time a UPS takes to switch from utility to battery supply during a mains failure, or from battery back to mains when normal power is restored. When the main power supply fails, the UPS needs to switch to battery mode to keep the attached equipment running smoothly. The transfer time differs depending upon the UPS technology. It should, however, always be shorter than your equipment’s hold-up time: the amount of time your equipment can maintain consistent output voltage during a mains power interruption.
Line interactive UPS systems, such as our VIX or VIS series, have a transfer time of typically 2-6 milliseconds. For regular computer-based systems, where hold-up time is approx. 5 milliseconds, line interactive UPS systems are usually sufficient; however, some computer systems, as well as other critical equipment, are more sensitive and require a shorter transfer time. In that case you should always choose a UPS with zero transfer time, like our VFI series.
If your equipment is critical and doesn’t tolerate even the slightest power disturbance, we recommend choosing online double conversion UPS technology with zero transfer time, to ensure your equipment has the highest degree of protection.
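The selection rule boils down to a one-line comparison. A minimal sketch, using illustrative figures:

```python
# The compatibility check described above: the UPS must transfer faster
# than the load can ride through on its own stored energy.

def load_protected(holdup_ms: float, transfer_ms: float) -> bool:
    """True if the equipment's hold-up time covers the UPS transfer time."""
    return transfer_ms < holdup_ms

print(load_protected(holdup_ms=5, transfer_ms=6))  # False: worst-case line interactive
print(load_protected(holdup_ms=5, transfer_ms=0))  # True: online double conversion
```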
Here’s a quick look up of transfer times for Power Inspired UPS systems:
Product | UPS technology | Typical transfer time
VIX3065 | Line interactive UPS | Typically 2-6 milliseconds
VIX1000N | Line interactive UPS | Typically 2-6 milliseconds
VIX2150 | Line interactive UPS | Typically 2-6 milliseconds
VIX2000N | Line interactive UPS | Typically 2-6 milliseconds
VIS1000B | Line interactive UPS with sinewave inverter | Typically 2-6 milliseconds
VIS2000B | Line interactive UPS with sinewave inverter | Typically 2-6 milliseconds
VFI1500B | Online double conversion UPS | Line to battery: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI3000B | Online double conversion UPS | Line to battery: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI3000BL | Online double conversion UPS | Line to battery*: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI6000BL | Online double conversion UPS | Line to battery*: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI10KBL | Online double conversion UPS | Line to battery*: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI1000T | Online double conversion UPS | Line to battery: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI3000T | Online double conversion UPS | Line to battery: 0 milliseconds; line to bypass: approx. 4 milliseconds
VFI10KT | Online double conversion UPS | Line to battery: 0 milliseconds; line to bypass: approx. 4 milliseconds
TX1K | Online double conversion UPS with isolation transformer | Line to battery: 0 milliseconds; inverter to bypass: 4 milliseconds; inverter to ECO: less than 10 milliseconds
TX3K | Online double conversion UPS with isolation transformer | Line to battery: 0 milliseconds; inverter to bypass: 4 milliseconds; inverter to ECO: less than 10 milliseconds
TX6K | Online double conversion UPS with isolation transformer | Line to battery: 0 milliseconds; inverter to bypass: 4 milliseconds; inverter to ECO: less than 10 milliseconds
TX10K | Online double conversion UPS with isolation transformer |
Transfer times depend on the point in the mains cycle at which the power interruption occurs, which is why the times stated in the table above are approximate.
As previously mentioned, transfer time also covers the switch back to mains. The transfer back to mains power is always controlled, with minimal interruption, because this transfer is planned; an unplanned mains failure happens suddenly, hence the variation in the actual time taken.
We have conducted a transfer time measurement using an oscilloscope (photograph above). For the purpose of this exercise we used a standard line interactive UPS system and simulated a power cut. The oscilloscope captured the transfer time, which on this occasion lasted 15 milliseconds, due to the original sine wave being interrupted at the peak of the cycle.
“How does transfer time affect my equipment?”
That’s simple: if your equipment’s hold-up time is shorter than the UPS transfer time, the UPS will not provide power in time to keep your equipment running.
Let’s say you have highly sensitive laboratory equipment with a hold-up time of 2 milliseconds. A line interactive UPS will not be sufficient in this case, as it will not switch to battery mode quickly enough; you will need to invest in an online double conversion UPS, or an isolated online double conversion UPS, to avoid downtime. On the other hand, if your equipment is a very basic computer workstation with an approximate hold-up time of 10 milliseconds, you can use a line interactive UPS system with peace of mind that your equipment is protected.
Transfer time is definitely one of the things you need to keep in mind while searching for a suitable UPS. More factors affecting your choice of UPS technology are covered in this article.
Electricity is mainly generated by turning a large magnet through coils of wire. This induces a clean sinusoidal waveform that can be transmitted down cables, stepped up and down using transformers, and eventually finds its way into our homes, offices and factories. Along the way, however, some power “viruses” can interfere with this clean power and cause your equipment problems. Some problems are obvious, others less so. There are generally accepted to be 9 power problems, but there’s another, often overlooked, which makes it 10.
1. The Blackout
This is one of the most obvious power problems: a complete loss of power, caused by a variety of reasons; tripped breakers, blown fuses, faults on the utility line, the list goes on. Some power cuts are brief, lasting only a moment, for example lightning striking a power line causing protection equipment to operate and then reset. Some may last hours or days, for example when a cable is dug up by accident. Others last until the breaker is reset. Whatever the cause, a sudden loss of power is clearly undesirable for electrical equipment.
Oops!
Only a UPS system can protect against blackouts. Your choice of UPS will depend upon the load you are protecting and the amount of time you need to support it for.
2. The Power Sag
Also known as a power dip, this is where the power momentarily drops. It’s usually caused by the start-up of heavy electrical equipment. Other causes include overloads on the network, or utility switching. Note that the plant causing the power sag may not be in your building, but merely sharing the same substation. The severity of the dip will affect equipment in different ways: some equipment has a natural ability to cope with momentary dips, while other equipment will shut down or reset.
You will need a UPS System to protect against a power sag.
3. The Voltage Surge
Some call it a spike, but in any event it’s a short-term high voltage on the power line. It is usually caused by lightning, which doesn’t have to hit the power lines directly; a nearby strike can induce the spike onto them. Surges are generally destructive in nature, as most equipment is not designed to withstand them.
4. The Brown Out
This is where the voltage drops more than 10% below the nominal voltage for an extended period of time. It is caused by high demand on the network, and the effect is more pronounced the further you are from the electrical substation. In fact, in rural areas this can be a problem when switching on everyday appliances such as ovens or electric showers. Brown outs affect different equipment in different ways. Computer systems tend to cope well, as their switch-mode power supplies have a wide input voltage range. Other equipment that relies on a stable AC source, such as lighting, motors or heating, will not fare so well, and equipment with linear power supplies, such as high-end AV applications, may fail entirely.
In order to protect against a brown out you will need some form of voltage regulation. A line interactive UPS system incorporates a boost function to raise the voltage by a fixed percentage, bringing it back into the nominal range. It does this without needing to revert to battery operation.
5. Over Voltage
Also known as a voltage swell, this power problem is caused when demand on the network is lower than normal, which causes the output voltage from the substation to rise. It becomes a problem when the voltage is more than 10% above nominal. The effects of over voltage range from overheating and diminished equipment life to complete equipment failure. It’s the inverse of the brown out, in that the closer you are to the substation, the more pronounced the effect.
As with the brown out, you will need some form of voltage regulation. A line interactive UPS system incorporates a buck function to lower the voltage by a fixed percentage, bringing it back into the nominal range.
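The boost and buck behaviour described above can be sketched as follows. The ±10% window and the 13% correction step are assumed, illustrative values, not a published spec:

```python
# Sketch of line-interactive automatic voltage regulation (AVR):
# boost when the input sags below the window, buck when it swells above.

NOMINAL_V = 230.0
WINDOW = 0.10      # treat +/-10% as the acceptable band (assumed)
CORRECTION = 0.13  # fixed boost/buck percentage of the AVR tap (assumed)

def avr_output(input_v: float) -> float:
    if input_v < NOMINAL_V * (1 - WINDOW):   # brown-out: boost
        return input_v * (1 + CORRECTION)
    if input_v > NOMINAL_V * (1 + WINDOW):   # swell: buck
        return input_v * (1 - CORRECTION)
    return input_v                           # in band: pass through

print(f"{avr_output(195):.0f} V")  # boosted to ~220 V
print(f"{avr_output(260):.0f} V")  # bucked to ~226 V
```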
6. Electrical Noise
This is generally noise between the live and neutral conductors, called normal mode noise. It’s caused by radio frequency interference (RFI) or electromagnetic interference (EMI), usually from electronic devices with high switching speeds. Since the noise carries little energy it generally does not cause damage, but rather disrupts the function of other electronic systems. Some filters may remove it, but this is not always effective. The best way to eliminate noise is to recreate the output waveform, and this can only be done with an online double conversion UPS system.
7. Frequency Variation
Frequency variation rarely occurs on the utility, as this would require all the power stations in the country to suddenly change frequency; in fact, the frequency on the national grid is very tightly maintained at 50Hz. However, when you’re not connected to the utility and are instead relying on a portable (or even large-scale) generator, this can be an issue. As the load on the generator increases, and in particular with sudden large power draws, the engine slows down and hence the output frequency changes. Some equipment won’t be affected by this at all, but it can cause damage to other systems, particularly those with motors or other inductive devices.
8. Switching Transients
More severe than electrical noise, switching transients are very fast, high voltage spikes induced onto the power conductors, caused by the switching off of inductive loads and by variable speed drive systems. Such power problems may not be immediately damaging, but they can cause degradation of the devices subjected to them, particularly if the transient is of high enough voltage.
A surge suppressor can help if the magnitude of the transient is high enough, but these only work at levels above the nominal voltage; this means you could still have a transient of many hundreds of volts entering your equipment. As with electrical noise, a filter will help, but it can only reduce a transient, not eliminate it. The only way to be sure of eliminating the transient is with an online double conversion UPS system.
9. Harmonic Distortion
Harmonic distortion is where the supply voltage varies from a pure sine wave. The amount of variation is measured as the Total Harmonic Distortion, or THD. Since we’re talking about voltage we call it THDv, not to be confused with THDi, which measures the distortion of the input current and is a different thing entirely.
It is generally caused by non-linear loads: types of load that don’t draw current in a smooth sinusoidal fashion, but instead take it in large chunks. Depending on the supply characteristics, these chunks of current cause a greater or lesser degree of distortion of the supply voltage. This causes problems for motors and transformers, with hum and overheating; in three-phase supplies, harmonic distortion can actually burn out neutral conductors and cause surprise tripping of circuit breakers. Again, the only way to eliminate harmonic distortion from your load is to use an online double conversion UPS system.
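For reference, THDv can be computed as the RMS of the harmonic voltages relative to the fundamental. A minimal sketch with assumed, illustrative values:

```python
# THDv: RMS of the harmonic components relative to the fundamental,
# expressed as a percentage.
import math

def thd_percent(fundamental_v: float, harmonics_v: list[float]) -> float:
    """harmonics_v holds the RMS voltages of the 2nd, 3rd, ... harmonics."""
    return 100.0 * math.sqrt(sum(v * v for v in harmonics_v)) / fundamental_v

# Assumed values: a 230 V fundamental with small 3rd/5th/7th harmonics,
# of the kind a supply feeding non-linear loads might show.
print(f"{thd_percent(230.0, [6.0, 4.0, 2.5]):.1f}% THDv")  # ~3.3%
```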
Summary
Those are the generally accepted 9 power problems that can cause issues for electrical and electronic equipment. But wait, didn’t I say there was a tenth?
10. Common Mode Noise
This power problem is often overlooked and can cause equipment malfunction. It’s defined as electrical noise between the earth conductor and the live/neutral conductors. Even an online UPS system may not eliminate common mode noise, because it is normal practice to have the neutral conductor connected straight through the UPS from input to output. So although any noise between the live conductor and ground is taken care of, any noise between neutral and ground passes straight through to the load.
In a modern electrical infrastructure this is generally not a problem, since the neutral and earth are tied together at the distribution board; this shorts them together and in theory eliminates any voltage or noise between them. However, particularly on long circuits with a lot of equipment on them, voltages can start to build and common mode noise becomes an issue. Hospital laboratories are a prime example.
The way to solve common mode power problems is to isolate the load from the supply, which is exactly what the TX Series does. The in-built isolation transformer creates a new live and neutral, and the online double conversion technology then ensures a high-quality, stable output. As an added advantage, the isolation transformer provides a safety shield against electric shock, which is particularly important in applications where water and electricity may mix; again, hospital laboratories are a prime candidate. Thus the TX Series can also be described as a laboratory UPS system. Click for further information on the isolation transformer.
So the new summary is this: if you need the highest degree of power protection against power problems and viruses, the UPS technology choice should be online double conversion, and the load should be isolated. Choose the TX Series Isolated UPS System.
For the highest degrees of power protection: the TX Series of Isolated UPS, from 1-10kVA