That'd be University of Pennsylvania. Penn State U. is a different
institution. You probably want to get that right when you, you know,
write your nasty letters, file your lawsuits, or what have you.
I'm only a consumer in the realm of physical security, but personally,
I'd be outraged that the "professionals" are trying to keep this
information secret from me. In the computer security realm, the
professionals tend to be fairly open with their clients about what their
system is capable of. I'd expect no lower standard of professionalism
from the physical security field.
You probably won't find a very receptive audience, and you probably
shouldn't even bother writing, to be honest.
Also, WRT copyrighted photographs, academic use is generally protected
under fair use, so you probably won't get too far there, either.
In other words, you've little recourse. Making a big stink only makes
you look a bit silly. One option you may wish to consider is to follow
what is generally considered de rigueur in the software industry:
acknowledge the vulnerability, and publish a workaround and a fix. Of
course, that costs money--yours, not the consumers'--but then, that's
the result of designing insecure products. Vulnerabilities happen to
even the best designer, after all. Try not to take it so personally.
Seems to be some confusion here--on the nature of debating, at least;
saying "sure it does" or "no it doesn't" does nothing to further your argument.
So, here's what I see as a rational position. Defining "security through
obscurity" as relying on the secrecy of a design rather than the secrecy
of a password, code, or other authentication token is poor design. The
reason is a very simple calculation of the amount of secrecy preserved.
In a password-protected computer system with, say, an 8-character
password drawn from [A-Z][a-z][0-9], there are 62^8 (roughly 2x10^14)
possible passwords. Assuming a pseudo-random (i.e. "secure") password,
that gives an attacker a 1 in 62^8 chance of successfully guessing the
password on a given try; i.e., on average, it will take 62^8/2 guesses
to find the right password. This is security through a very small
amount of secret information; keeping the functioning of the code
behind the password authentication mechanism secret adds relatively
little value (there are only a handful of likely designs of such a
system in any given language and larger system design; i.e., knowing
the parameters of the system, I can make a much narrower guess at the
implementation).
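To make the arithmetic concrete, here's a quick sketch in Python using the character set assumed above:

```python
import string

# Alphabet: [A-Z][a-z][0-9] = 26 + 26 + 10 = 62 characters
alphabet_size = len(string.ascii_letters + string.digits)  # 62
password_length = 8

# Total search space: each of the 8 positions can hold any of 62 characters
search_space = alphabet_size ** password_length  # 62^8

# A uniformly random password gives a 1/62^8 chance per guess, and on
# average the right password turns up after trying half the space.
average_guesses = search_space // 2

print(f"search space:    {search_space:,}")     # 218,340,105,584,896
print(f"average guesses: {average_guesses:,}")  # 109,170,052,792,448
```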
Comparatively, in a non-password-protected system relying on an obscure
entrance mechanism--say, a Webpage with a URL not linked to from a
search engine or public page--the mechanism is still easily guessed,
because it contains much less random information. For instance, Reuters
reporters reportedly did just this a few years back, retrieving a
company's unpublished earnings report simply by guessing its unlinked URL.
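The gap is easy to quantify. The figures below are illustrative assumptions on my part (word-list size, token length), not measurements of any real site:

```python
import math

# Assumption: a "hidden" URL built from a couple of dictionary words
# (say ~50,000 candidate words, two slots) versus an 8-character random
# token over [A-Za-z0-9].
hidden_url_space = 50_000 ** 2   # ~2.5e9 guessable paths
random_token_space = 62 ** 8     # ~2.2e14 possible tokens

# Secrecy in bits: log2 of the search space an attacker must cover
url_bits = math.log2(hidden_url_space)      # ~31 bits
token_bits = math.log2(random_token_space)  # ~48 bits

print(f"hidden URL:   ~{url_bits:.0f} bits of secrecy")
print(f"random token: ~{token_bits:.0f} bits of secrecy")
```

The "obscure" URL carries far fewer bits of secret, and unlike a password it tends to leak (referrer headers, proxy logs, word of mouth).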
Or, let me present an example in the case of safes. We can generalize
the methods of accessing a safe in two ways: via knowing (or guessing)
the authentication mechanism (key, code, etc) or by bypassing the
requirement for authentication. In the former case, obscurity gains
quite literally nothing; as I discussed above, with a sufficient amount
of secret random data, there's no value in keeping the mechanism also
secret. In the latter case, it may be tempting to say that obscurity is
worthwhile here, but it probably is not: anyone can disassemble a safe
to determine how it functions, and the mechanical principles used are
simple enough that it wouldn't take a rocket scientist (or a locksmith)
to see the holes (as in the case of Blaze's master-keyed systems paper,
where the vulnerability was well known and readily apparent to anyone
who understood the system). So obscurity adds little or no value, and may
in fact detract: by assuming the inner workings are secret, a lock
designer may disregard vulnerabilities that would become apparent to
anyone who *did* know the inner workings, meaning that mere
disassembly--or a leak of the product designs--may be sufficient
information to bypass the (much harder to come by) password.
So obscurity clearly adds little or no value in a secure system, and if a
system relies on its workings being secret, that reliance is false. You
can find more literature on this on the Web, of course, in relation to
computer security, but I believe it is even more applicable when it
comes to physical security, where relatively less expertise is needed to
understand at least simple locking mechanisms (I'm not a physical
security expert, obviously, but I can understand how a master-keyed
cylinder lock works--and spot the hole--without any formal training).
And for most readers, that kind of explanation would be a lot easier
reading than "Safecracking for the Computer Scientist."
As a person interested in computer security, I prefer actual security
over pretending flaws/weaknesses don't exist if no one talks about them.
Wake up. You so-called professionals in the security field are burying
your heads in the sand. Security-through-obscurity has never worked and
never will.
I am not going to take an elitist stance and belittle anyone's training
or education. You may be a very smart guy and I am sure you are
skilled at what you do. But let's face it, the product that you work
with has flaws. Rather than castigating the messenger, you should be
working to improve your product. That is the mark of a true professional.
I for one hope this paper gets distributed far and wide. I am going to
do my best to make sure it does so...
firstname.lastname@example.org wrote:
problem.. we are a 'middle man'.. we do NOT recommend, nor,
except for unusual circumstances, 'set requirements'.. EITHER
with the manufacturer, OR the consumer.
we can say, this is what you MUST HAVE, but, if the manufacturer
does not make it (too small a demand) OR the consumer afford it,
then we work with what we can.
( the USUAL considering factor in my book is MONEY, the retail
customer doesn't have it,)
If I could 'wave my magic wand' and fix the security issues, ALL
carpenters in the US would be RUN OUT OF THE COUNTRY..
that's for a start.. unless they were willing to FOLLOW
INSTRUCTIONS, and not build 'cheap'
then, the 'construction company bosses would be NEXT'
Architects would NEVER be allowed to specify hardware AT ALL.
they are IDIOTS, as far as I have seen.. they can say the girder
needs to be so big, etc, but NOT touch hardware issues at all.
NO hollow core doors allowed..
NO glass in doors allowed..
NO sheet rock/ 2 by 4/ walls allowed NEAR a door
then from there on, we can have some interesting times.
Sure it is, against anyone the mechanism remains obscure to. I'd rather
have a little obscurity on top of real security than neither.
Because you think you will be able to do it better than the manufacturers that
do nothing but? If you really think you can do it better than the pros, and
logically why would you, you've got your safe right there don't you? Study it
and add some surprises to it. Make sure you know what you are doing though or
the only one locked out will be you.
That beats being ignorant and relying on the obscurity of the design.
Most safe crackers that will have the information on your lock and box design
have knowledge of, and practical experience with, techniques that you have
likely never heard of so if it's between you and them with regard to the
security of the container and mechanism there's probably not going to be much
of a contest. Your best bet: know and understand the ratings system used for
safes, and make sure that you have a professionally installed and monitored
alarm (forget about the dialer setups commonly used in residential applications
if you are serious) so that an attacker never gets the working time the rating
assumes. Given enough time, the right person WILL get in or carry the safe
off (I don't care how heavy or well attached it is), no matter what you do.
Also keep in mind that somebody can't steal what they can't find.
Will people on this newsgroup give me information about
picking locks, etc.?
Yes and No. This is a serious debate, based on serious
principles. Most experienced people here are quite willing
to discuss the basics of lock construction and operation.
Few (if any) are willing to give specific answers regarding
opening a particular lock or safe - without knowing the
asker or having other evidence that the inquiry is legitimate.
Another balancing act regards the general effect of
information. As Joe K. put it succinctly, "On one side there
are the idealists who believe that even weak security should
not be further compromised without good reason; on the other
there are those who believe that weak locks should be forced
out of the market. There's never going to be agreement
here... can we just agree that reasonable people can
disagree, and have done with it?"
People have contrasted locksmithing "security by obscurity"
with practice in the software arena (in which it has
generally been considered to be misguided and therefore
bad for society.) Exposing flaws as a social good breaks
down when there are hundreds of thousands of current owners
of the product who don't know that the flaw has been
exposed. Even if they find out, there is another big
difference. This is the cost of correcting the flaw
(upgrading.) Installing the patches on your copy of software
takes a bit of effort, but you don't have to throw out and
purchase a new physical product (such as a lock.) The
manufacturer of the lock is pretty certain not to make it
available for free. Basically you have to buy a new item and
have it replaced, and this adversely impacts users, many of
whom do not have the budget to correct the flaw. Therefore
publishing the security flaw costs users *much* more for a
lock than for a piece of software.
And the fact is that a nominally flawed product _does_
provide adequate security against the unmotivated and
ignorant who are the primary folks attacking physical
security systems (as opposed to the motivated and clueful
who attack electronic security and can do it from a distance
without physical presence).
To quote you:
Can I ask: How many times have your relatives been burgled? Do they all
live in neighbourhoods that suffer from burglaries? Your security
assessment of the threats appears to be non-existent!
Here is an example for you: If you leave your door unlocked for one
week: what are the chances of you getting burgled? If you leave your
door locked for a week what are the chances of you getting burgled? The
answer is: it all depends on the presence of the burglar.
Wait, when exactly does security start to suffer? Is it when the
"security professional"'s design is flawed, or when some other guy
reveals the flaw? The measure of security is absolute, and not
dependent on whether people know about the vulnerabilities.
Besides, I have a problem with what you call "proprietary techniques".
Is a vulnerability a proprietary technique? What I think is that when
you call your products "secure" and complain about a guy saying that
they're in fact not that secure, "proprietary techniques" are actually
lies (or "deliberate omissions").
Now let's talk about "the damage it will cause to your profession".
Aren't you responsible for your own image? Do you really feel good
when you say something that means "I used to have a good reputation
because customers didn't know my products aren't as secure as I claim,
now I risk losing that reputation because some guy revealed the truth.
Please make him shut up"?
Well, Dan had the right idea that you would (on average) have to try
half of the possible passwords. With 62 possible characters in each of
8 positions, that works out to 62^8/2, or roughly 1.1x10^14 guesses.
Big numbers can be tricky.
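A two-line check shows how misleading the tempting "halve the exponent" shortcut is for this kind of calculation (62-character alphabet, 8-character password, as above):

```python
# Average guesses = half the search space, NOT the square root of it.
search_space = 62 ** 8                 # 218,340,105,584,896 possibilities
correct_average = search_space // 2    # ~1.1e14 guesses on average
halved_exponent = 62 ** 4              # ~1.5e7 -- the wrong shortcut

# The shortcut understates the work factor by about 7.4 million times.
print(correct_average // halved_exponent)  # 7388168
```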
I've heard this before--the claim that replacing locks costs more than
replacing software. It's possible that this is correct, though the cost
of patching software is probably much higher than you think (for a large
enterprise, the cost of even a little downtime is quite
steep--Amazon.com supposedly loses $180,000 per hour of downtime).
Regardless, the debate here is not really technical, but legal (or
moral)--does a vendor have a duty to fix a faulty product? In the case
of the software industry, there are EULAs--End User License
Agreements--that specify a limited liability for the vendor. In the case
of the lock industry, is there an equivalent? If you advertise a lock as
being resistant to Bic-pen attacks, and it turns out that it can be
opened with a bic pen, do you have a duty to replace it? I'd say yes.
So take the case of Kryptonite--had the physical security industry had
any sort of "full disclosure" practice, it would have been widely known
to the general public (i.e. me) that Kryptonite bike locks are worthless
pieces of shit. I never would have bought one, nor would others. As a
result, Kryptonite would have had to push the sales of their alternative
mechanisms, and may have lost some sales, but in the end, it probably
would have been *cheaper* than their current recall program is.
My point is that it's not hard to imagine a situation in which full
disclosure does not merely increase the security of the customer, but it
also lowers the operating costs of both the customer and the vendor.
And yes, I recognize that some customers will remain unaware of
published vulnerabilities, just as they do with software, but
ultimately, I believe it is more valuable to reward those who track such
things than to punish them in a misguided attempt to serve those who
would rather remain ignorant. It provides the *option* of greater
security to those who desire it, rather than forcing everyone to
maintain the same level of mediocrity (what is this, Soviet Russia?).
Now you're being unfair. Those "IT security guys", of which I am one,
have produced some pretty good stuff, if we count cryptographers, even
more. And it's not like they wouldn't tell you how to break a certain
insecure product. They even produced some nice alternatives to that
product -- but
if you choose to ignore that work of security pros and instead opt to use
an insecure product on whose development the IT security guys in general
have no influence on, then please do as you see fit. Just don't blame
IT-security in general.
In the 19th century, Auguste Kerckhoffs stated: "the security of a
cryptosystem must depend only on the key and not on the secrecy of any
other part of the system". I don't see why this principle should not
extend to locks.
Most vectors of attack on a lock are through design flaws
(and if a lock has design flaws, that's definitely a reason to publish
them, and get them fixed) and small mechanical inaccuracies (which are
merely a measure of the quality of implementation of a certain
mechanism, but for which I don't see any problem with publication either:
"Company XY lock series Z are shoddily made, and thus easier to crack, but
then, they are cheap...") I'm not a locksmith, but only an amateur mainly
concerned with historic locks and lockpicking (pre 19th-century); so
bear with me if I'm missing some details of the trade. Still I think
these points are valid.
In closing, I think we could learn much from each other. The "physical
security" guys could obviously take some lessons about full disclosure, and the
"IT security" guys maybe something about incident response.
With all due respect one thing the computer type people just can't seem
to understand is that locks are "real" and are not able to be updated in
the same way software which is "virtual" can be....
Everything is relative in the "physical" security world... You select a
lock based on a survey of the installation environment and the most likely
threats that the lock will be subjected to...
There is NO SUCH THING as a lock that will keep everyone out, as
others here have noted "if it can be built, it can be taken apart"
It is only a matter of having the right collection of knowledge and
access to tools and time alone with the lock...
The concept of "Security" is not achieved simply through the installation
of a lock or a safe... It is the collective achievement of an entire effort
of an individual or organization that includes locks as only one element
of the greater scheme, which includes but is not limited to:
-- Locks (and the Doors and Walls that contain the room)
-- Security Personnel (Guards, Armed or Un-armed)
-- Electronic Monitoring Systems (CCTV and Burglary Alarms)
-- Policies and Procedures that reinforce the goal of "Security"
-- An Architectural Design that supports the goal of "Security"
(Think of a school, which by design is easy to get out of in the event
of an emergency... This element of its basic design also makes it
easy to get into as well, and it is much more difficult to later adapt
such a building to being more "secure" while still remaining "safe" and easy
to get out of during an emergency ...)
Please enough with the "security by obscurity" argument... Just
because the IT industry moves at the speed of the internet does
not mean that others that don't should be made to "catch up with"
it in order to be considered "acceptable"...
Having some level of the construction, design and assembly of
locks remaining somewhat private is a good thing... It does add
just a SMALL AMOUNT of added security just by the fact that not
everyone knows EXACTLY how it works... What the world does
not need is some type of "script kiddies" that can bypass locks
that are the ONLY security device being employed simply
because one person or even a group of people decided to
apply the concepts and procedures employed in the IT industry
to locks and physical security under the mistaken impression
that revealing the "design flaws, vulnerabilities and most
common ways to attack" such devices will make it possible
for every lock to be upgraded to "fix or resist" such things...
Downloading a software patch is quite easy and cheap when
you compare it to actually physically replacing an actual device
(in this case a lock)...
~~formerly a maintenance man, now a college student
Umm... Last time I checked, a building and its design and equipment
were not able to be upgraded simply by downloading a "patch"...
The line between "reality" and computer science seems to be a bit
hazy for a lot of people that make statements supportive of certain positions.
Neither can software be updated in this way.
In some cases it can, but it's a narrow sub-section of the possible
case - very difficult to rely on it. Much of today's software is in
"embedded" systems - pretty much a closed system for updates, once
shipped. For operating systems they're pretty much fixed too -
relatively few domestic PC systems get updated after purchase. Even
for mass-market retail software, once that "golden master" has been
produced, the cost of updates is enormous.
There's also the risk that for many systems you need _all_ members of
a network to be secure - just one unpatched box may be enough to
expose all of them.
Even worse is the case where the secure component is a major part of
an industry standard protocol. How do you fix a protocol like WEP or
A5 when there are millions of dependent devices out in service ? The
risk might be large, but it's rarely a justification for rendering a
whole technology generation obsolete - especially when several
manufacturers are involved.
So software has many advantages, but the _practical_ benefits of them
are less useful than you might hope.
I've seen defective locks on file cabinets replaced faster than a patch
can be downloaded and installed. But more to the point, each of the
listed items in the post I responded to has parallels in computer
security. And quite often, as already stated by Andy Dingley, there's
more to solving computer security problems than simply patching software.
Then there's the line between what you think you know about me and
what I really am...
Since I only dabble in cryptography on occasion as the mood strikes me,
let me engage in some speculation. I suspect Matt Blaze is more
interested in the parallels between the design of locks and the design
of cryptographic algorithms, and his article is intended to encourage
an exchange of ideas between lockmakers and cryptographers.
On the other hand, if I understand correctly, "security through
obscurity" is a mistake made as often by lockmakers as by computer and
communication equipment manufacturers. There are parallels between
physical security and computer security, and each can learn from the
other, including learning from the other's mistakes.