Here in the UK, there are a number of tariffs available to customers.
The suppliers have fixed and variable costs. If you buy just a little
electricity, their fixed costs have to be met from the sale of just a
few units of electricity. So units cost a lot. If you buy a lot, their
fixed costs per unit will be a lot less, so units cost less.
Some tariffs have a relatively high rate for the first few units. But
once those have been paid for, the fixed cost element has been covered.
So subsequent units can be sold at a lower price.
Some tariffs have a fixed daily standing charge plus an additional cost
per unit. That per-unit cost is invariably less than on tariffs which do
not include a standing charge.
For very light users, a high unit cost and no standing charge can work
out best value.
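As a minimal sketch of that trade-off (the rates below are invented for
illustration, not any real supplier's tariff):

    # Compare two illustrative UK-style tariff shapes; all rates invented.

    def no_standing_charge(kwh):
        # Higher unit rate, no daily standing charge.
        return kwh * 0.35                      # GBP per month

    def with_standing_charge(kwh):
        # Daily standing charge plus a cheaper unit rate (30-day month).
        return 30 * 0.60 + kwh * 0.25          # GBP per month

    for kwh in (20, 100, 200, 500):
        a, b = no_standing_charge(kwh), with_standing_charge(kwh)
        winner = "no standing charge" if a < b else "standing charge"
        print(f"{kwh:>3} kWh/month: {a:6.2f} vs {b:6.2f} -> {winner} is cheaper")

With these made-up numbers the break-even sits at 180 kWh/month; below
that, the no-standing-charge tariff wins.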
There are two components to the cost of providing you with
electricity. The first part doesn't depend on how much electricity
you use and is payment mainly for the fixed infrastructure.
The second part is the part which depends on how much energy you
use, such as the cost of power station fuel, and some of the
replacement cost of parts of the infrastructure which age faster
with more load or need upgrading for higher load.
You can think of these as matching the fixed (standing) charge,
and the energy usage charge. The trouble is that the fixed
charge would be rather higher than people are willing to accept.
So what's done is that the fixed charge is reduced and subsidised
from the energy usage part. This makes electricity accessible to
low users without being price-prohibitive, which is generally
regarded as a socially responsible thing to do. However, if you
are a high user, you would end up paying too much subsidy towards
the fixed costs, so your energy usage price is reduced at this
level to prevent over-subsidy of the fixed costs.
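A toy model of that cross-subsidy (every figure here is invented) might
look like this:

    # Toy model of the cross-subsidy described above; all figures invented.
    FIXED_COST = 15.00       # true infrastructure cost per customer per month
    STANDING_CHARGE = 5.00   # the visible fixed fee customers will tolerate
    MARGINAL = 0.10          # energy cost per kWh
    MARKUP = 0.05            # hidden fixed-cost recovery per kWh, first block only
    BLOCK = 200              # kWh beyond which the rate drops back to marginal

    def bill(kwh):
        first = min(kwh, BLOCK)
        rest = max(kwh - BLOCK, 0)
        return STANDING_CHARGE + first * (MARGINAL + MARKUP) + rest * MARGINAL

    for kwh in (50, 200, 1000):
        fixed_part = bill(kwh) - kwh * MARGINAL  # paid toward fixed costs
        print(f"{kwh:>5} kWh: bill {bill(kwh):7.2f}, "
              f"fixed-cost contribution {fixed_part:6.2f}")

The light user contributes only part of the true fixed cost, while the
heavy user's contribution is capped at the full fixed cost instead of
growing without bound.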
The basic Econ 101 reason for the rate structure is that the utility is
attempting to get customers to pay not just what the electricity is worth to
the customer "on the average", but to pay more for that first kWh than the last.
As an example: where I live, we lose electricity for over 24 hours at a time a
few times each year. I paid about $500 for a generator that puts out
about 5kW. When the "grid" is down, I run the generator about 6 hours a
day to keep my food cold, the water flowing, and some TV and computer time.
I happily pay about $10/day for this power. "Doing the math," I consume
about 30 kWh per day when running on the generator. At peak local rates
that would cost me about $3/day.
If the utility could get away with it, it could charge me $10 a day for the
first 30 kWh/day of usage (or even more if you consider the wear and tear on
my generator). But at $0.33/kWh I would not use much more power.
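A quick back-of-envelope check of those figures (the peak grid rate of
$0.10/kWh is assumed here, chosen to match the ~$3/day quoted above):

    # Back-of-envelope check of the generator figures quoted above.
    daily_cost = 10.0    # rough generator running cost per outage day, USD
    daily_kwh = 30.0     # energy used per outage day

    print(f"Outage power: ${daily_cost / daily_kwh:.2f}/kWh")    # ~$0.33/kWh

    peak_rate = 0.10     # assumed peak grid rate, USD/kWh
    print(f"Same 30 kWh from the grid: ${daily_kwh * peak_rate:.2f}/day")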
But the utility can charge me more for the first few kWh than for the last
few by other schemes which "fly" by the regulators more easily. It's quite
rational for a utility to have a rate structure that extracts from customers
all they are willing to pay. Sometimes that results in downright fantastic
profits, and sometimes even with such a rate structure the utility can't
cover its fixed costs.
If you look at a "demand curve" for electricity consumption, the utility is
trying to recover all the area under the demand curve up to the point of
total demand rather than just the product of total demand and the price at
which supply = demand.
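To make the "area under the demand curve" point concrete, here is a
numeric sketch with an invented linear demand curve:

    # Numeric sketch with an invented linear demand curve p(q) = 1.0 - 0.001*q.
    # Suppose the market clears at p* = 0.10/kWh, i.e. q* = 900 kWh.

    def p(q):
        return 1.0 - 0.001 * q

    q_star, p_star = 900.0, 0.10
    n = 100_000
    dq = q_star / n
    area = sum(p(i * dq) * dq for i in range(n))  # integral of p(q), 0..q*

    print(f"Revenue at the uniform clearing price: {p_star * q_star:.2f}")  # 90.00
    print(f"Area under the demand curve:           {area:.2f}")             # ~495

Block rates, demand charges, and the like are ways of collecting some of
that extra area (what economists call consumer surplus).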
In other words, I read it as:
You want the utility to supply you electricity at the "fuel" cost/kWh of the
energy delivered to you + a profit margin, but ignore the cost of capital to
provide the infrastructure, which is the same whether you require 1 kWh/month
or 10000 kWh/month. If you had no outages, then what is the cost/kWh of your
emergency supply? I would suggest infinite $/kWh. Simple economics, the same
as you use with respect to buying a car: capital cost amortised over lifetime
+ operating cost.
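That amortised view, sketched with the generator from upthread (the
lifetime-energy figures are assumptions for illustration):

    # Cost per kWh of a standby supply = capital amortised over the energy it
    # ever delivers, plus operating cost. Lifetime figures are assumptions.
    capital = 500.0       # generator purchase price, USD
    fuel_per_kwh = 0.33   # rough operating cost, from the $10/day / 30 kWh figures

    for lifetime_kwh in (1, 10, 100, 1500):
        total = capital / lifetime_kwh + fuel_per_kwh
        print(f"{lifetime_kwh:>5} kWh delivered over its life -> ${total:,.2f}/kWh")

The fewer kWh the standby unit ever delivers, the closer its all-in cost
gets to that "infinite $/kWh".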
It's not the "demand curve" but the cost of capital + fuel to supply the
demand at any time, plus the capital cost of capacity which must be there (a
100 MW unit which happens to be off line due to low demand has the same
capital cost that it would at full load).
The utility can handle this by some formula where capital costs are lumped
in the bill independent of load, which would be honest, but "politically"
undesirable. "I have to pay this whopping amount when I have been in Italy
on a wine and food tour for the whole month and my night lights only drew
1 kWh?" Hence rates that were based on decreasing cost/kWh with load, in the
days when Reddy Kilowatt was encouraging use of electrical energy, fossil
fuels were cheap and plentiful, and pollution control was restricted to "put
the outhouse downhill from the well".
Regulation does limit greed when the utility has to justify its rates (I
used to live in such a region). De-regulation and market forces were
supposed to do the same through market competition, but in many cases this
hasn't had the intended effect as far as the consumer is concerned, because
MBAs don't realize that a utility and a grocery store operate under different
financial regimes and outlooks.
There, I have got this off my chest and have probably pissed off a bunch of
people. I hope you are not one of them because, from what I have seen, you
Don Kelly
The economics for delivery of electricity were well thought out in the
early years of the Twentieth Century, when the concepts of 'demand'
and 'diversity' were created.
The Chicago electricity mogul Samuel Insull was looking for ways to
lower the cost and increase the market share of his electrical service,
and came across the 'demand' meter, which had been invented and was
usefully employed in Great Britain to measure the maximum power
consumption for a set period of time during each monthly billing period.
Suppose you have factory A and factory B as electrical customers.
Both use 10000 kWh per month, but...
Factory A spreads the load evenly throughout the day and evening.
Factory B has short periods when electricity usage soars to a very
high level.
Who should pay the higher overall bill at the end of each month?
With the concept of 'demand', factory B is going to require bigger
generators and place greater demands on transformers, distribution, and
transmission (all of larger size) for the peak load periods. Even though
most of the time this equipment is idle, it has a cost and must be
financed and paid for. Often this cost is higher than the cost of the
electricity that is supplied in kWh.
Factory A has a more or less constant, smaller, and predictable load.
It can be served with far less capital cost.
Note that the capital cost is mostly paid for by the utility (for the
equipment on the utility side of the meter). The utility must have
some means of recovering this cost that avoids just passing it along
to other customers.
Thus, both companies might pay the same per kWh, but factory B must
pay for the greater peak demand it imposes on the system. It gets
more complex than that, but this is the basic idea.
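A minimal sketch of how a demand charge separates the two bills (the
rates are invented; real demand tariffs are more involved):

    # Toy demand-charge bill: energy charge plus a charge on the month's
    # peak kW. All rates are invented for illustration.
    ENERGY_RATE = 0.08   # USD per kWh
    DEMAND_RATE = 12.00  # USD per kW of metered monthly peak demand

    def monthly_bill(kwh, peak_kw):
        return kwh * ENERGY_RATE + peak_kw * DEMAND_RATE

    # Both factories use 10,000 kWh/month (about 14 kW average load), but
    # factory B spikes to 100 kW for short periods.
    print(f"Factory A: ${monthly_bill(10_000, 15):,.2f}")   # steady load
    print(f"Factory B: ${monthly_bill(10_000, 100):,.2f}")  # peaky load

Same energy, very different bills, because factory B ties up capacity
that sits idle most of the month.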
All true. I'd add that as an interim measure, some utilities have found
that allowing deliberate overload for a couple of hours at a time can be
more cost-effective than putting in a transformer/line that is rated for
continuous duty at the highest load demand.
This can be a controversial practice since 'sooner or later', the duration
and magnitude of the overload will catch up and the unit will fail. Then
the critics come screaming about 'penny-pinching utility is too cheap to put
the correct size equipment in place.'
But in reality, delaying the costly upgrade of equipment for five or ten
years is reflected in the rate base that those same customers benefit from.
I'd only quibble with the point about de-regulation. Keep in mind that
*utilities* aren't de-regulated. The T&D and customer-service side is still a
regulated business in all 50 states AFAIK. Only the *generation* aspect of
the electric power industry is de-regulated.
In NY's 'National Grid' territory, we get bills that show two different
fees, 'delivery' and 'supply' charges. The 'delivery' aspect is supposed to
reflect the regulated utility's costs of building/maintaining the
infrastructure, while the 'supply' portion is the *generation* aspect. We
can shop around for the 'supply' portion of our bill, contracting with a
wide variety of independent generators. The 'delivery' aspect is determined
by tariffs and rates approved by the state's public service commission (much
like in the old, fully-regulated days).
When you separate out these two different costs, my bill, for example, shows
that the generation cost is only about half of the total. I've shopped
around a couple of times for this and found that the difference between what
the independents charge and the rate that the utility brokers for its supply
isn't an awful lot. Some of my friends and neighbors came to the same
conclusion: "The savings in shopping around for an independent supply just
aren't worth the hassle."
I'd also add that 'in the good-ole days', keeping track of depreciation of
each individual capital asset and apportioning each to the appropriate
customers, and tracking individual hourly usage and charging it against
individual plant hourly production costs was just *not* possible. The cost
of such accounting would outweigh everything else. So costs got aggregated
and averaged and then apportioned using some scheme that the regulators
deemed 'acceptable'. Now, with electronic metering and computers, it might
be possible to do something like this in the not-too-distant future, but
what's the up-side?