Topic Title: Coping with Engineering Failure of Cryptographic Systems
Topic Summary: Failure Modes and Effects Analysis (FMEA) in Cyber Security
Created On: 13 November 2012 11:14 AM



This is a question that CESG and the Cabinet Office refuse to help me or my company with. I have also put the question to the Information Security Group at Royal Holloway, University of London, but with no clear answer as yet.

Perhaps someone in the IET could point me to published research in this field:

Let's assume (for the purposes of this enquiry) that all the alternative public-key cryptographic mathematics and algorithms just work from a black-box point of view and can be slot-in replacements for each other at the implementation or application level. So the application software can simply choose algorithms from within a set of public-key crypto and certificate-handling computational packages, say package 1, 2 or 3.
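To make the slot-in idea concrete, here is a minimal Python sketch. The interface and package names are my own invention for illustration, and the RSA inside uses demonstration-sized primes - it is not a secure implementation, just a stand-in for "package 1":

```python
from abc import ABC, abstractmethod

class PublicKeyPackage(ABC):
    """Black-box interface every candidate package must satisfy."""
    @abstractmethod
    def generate_keypair(self): ...
    @abstractmethod
    def encrypt(self, public_key, message: int) -> int: ...
    @abstractmethod
    def decrypt(self, private_key, ciphertext: int) -> int: ...

class ToyRSAPackage(PublicKeyPackage):
    """Textbook RSA with toy parameters -- illustration only, not secure."""
    def generate_keypair(self):
        p, q, e = 61, 53, 17                    # demonstration-sized primes
        n = p * q
        d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)
        return (n, e), (n, d)
    def encrypt(self, public_key, message):
        n, e = public_key
        return pow(message, e, n)
    def decrypt(self, private_key, ciphertext):
        n, d = private_key
        return pow(ciphertext, d, n)

# The application selects a package by name, never by algorithm internals,
# so a broken package can be swapped out without touching application code.
PACKAGES = {"package1": ToyRSAPackage()}        # package 2/3 would register here

pkg = PACKAGES["package1"]
pub, priv = pkg.generate_keypair()
assert pkg.decrypt(priv, pkg.encrypt(pub, 42)) == 42
```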

This enables me to concentrate on discussing the engineering assumptions and the engineering structure. I don't even want to be specific about which engineering assumptions are required; I just want to put the assumptions into the two normal engineering classes:

a) A singular assumption that, if found incorrect, would mean a particular algorithm fails to provide the level of protection demanded of it in the engineering specification.

b) A singular assumption that, if found incorrect, would mean that all algorithms across all packages fail to provide the level of protection demanded of them in the engineering specification. (Common-mode failure, if you like, which could happen if, for example, there are in reality no one-way or trapdoor functions in mathematics.)

So what I am really talking about is what would need to happen, from an engineering point of view, in the event of a class (a) failure and/or in the event of a class (b) failure.
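The distinction between the two classes can be written down in FMEA style as a table mapping each assumption to the packages that depend on it. The assumption names and package names below are invented for illustration:

```python
# Hypothetical FMEA-style dependency table: which packages rely on which
# mathematical assumption. All names here are illustrative, not a real survey.
ALL_PACKAGES = {"package1", "package2", "package3"}

ASSUMPTION_DEPENDENTS = {
    "factoring is hard":       {"package1"},     # class (a): one package breaks
    "discrete log is hard":    {"package2"},     # class (a): one package breaks
    "one-way functions exist": ALL_PACKAGES,     # class (b): everything breaks
}

def failure_class(assumption: str) -> str:
    """Return 'a' if some packages survive this assumption failing, 'b' if none do."""
    survivors = ALL_PACKAGES - ASSUMPTION_DEPENDENTS[assumption]
    return "a" if survivors else "b"

assert failure_class("factoring is hard") == "a"
assert failure_class("one-way functions exist") == "b"
```

A class (a) failure leaves survivors to switch to; a class (b) failure leaves an empty set, which is why it needs an entirely different (non-cryptographic) contingency plan.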

First Case - Class (a) failure

It seems to me that some of the structures in place now can cope with a class (a) failure and some can't.

So at the application level different algorithms can in principle be switched in and out, but in terms of the certificate you buy from VeriSign or Thawte this is not possible.

Therefore the question simplifies to: what are the reasons stopping VeriSign or Thawte from providing certificates compatible with two or more crypto algorithm packages? Do they have a vested interest in staying loyal to a particular technology? If so, what are the reasons for this technological lock-in?
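A multi-algorithm certificate of the kind I have in mind could look something like the following sketch. The field names, package names and stub verifiers are all invented; the point is only the structure - one subject signed independently under two packages, so a class (a) break of one package leaves the other signature trustworthy:

```python
# Hypothetical dual-algorithm certificate (all names illustrative).
certificate = {
    "subject": "example.com",
    "signatures": [
        {"package": "package1", "sig": "sig-under-package1"},
        {"package": "package2", "sig": "sig-under-package2"},
    ],
}

def verify(cert, trusted_verifiers):
    """Accept if at least one still-trusted package verifies its signature.

    trusted_verifiers maps a package name to a verification function; a
    package whose assumption has failed is simply removed from the map."""
    return any(
        trusted_verifiers[s["package"]](cert["subject"], s["sig"])
        for s in cert["signatures"]
        if s["package"] in trusted_verifiers
    )

# Stub verifiers standing in for real cryptographic checks. Suppose trust in
# package1 has been withdrawn after a class (a) failure:
verifiers = {"package2": lambda subject, sig: sig == "sig-under-package2"}
assert verify(certificate, verifiers)      # certificate still usable
assert not verify(certificate, {})         # class (b): no trusted package left
```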

Second Case - Class (b) failure

Are people thinking about this possibility as well? For example, partitioning the internet into physically secure links and physically insecure links.

If there were no public-key cryptography algorithms available to engineers at some future point in time, we might have to have a symmetric-key link provided by BT, for example, to a secure data centre; they would then either route requests on to banks or other secure data centres via a second secure link, or pass the request into the open internet. Everyone would have to trust BT to keep everything physically secure and not release our data to other parties.
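In that symmetric-only world, the link security would rest on a pre-shared key distributed over a physically secure channel. A minimal sketch of such a link in Python, using only HMAC authentication from the standard library (confidentiality would need a symmetric cipher such as AES layered on top, which I omit here):

```python
import hashlib
import hmac
import secrets

# Pre-shared secret: in the scenario above, the trusted carrier (BT in my
# example) would have to deliver this key over a physically secure channel,
# since there is no public-key cryptography to negotiate it online.
PRE_SHARED_KEY = secrets.token_bytes(32)

def seal(message: bytes, key: bytes) -> bytes:
    """Prefix the message with an HMAC-SHA256 tag (authentication only)."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return tag + message

def open_sealed(blob: bytes, key: bytes) -> bytes:
    """Check the tag in constant time and return the message, or raise."""
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("link authentication failed")
    return message

blob = seal(b"transfer request", PRE_SHARED_KEY)
assert open_sealed(blob, PRE_SHARED_KEY) == b"transfer request"
```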

What are the cheapest and most practical options if we had to live in a future world like this, where physical engineering solutions and trusted partners were used to provide for our data security needs?

Are systems security engineers actively engaged now in researching these two classes of failure?

There is an IET conference on cyber security for industrial control systems on 6th February and I am not sure whether to go or not.

The organisers have not confirmed the keynote speaker as yet, but I have advised them that I would not want to hear from a speaker who simply glosses over the problem of how we deal with the potential consequences of engineering failure in cyber security systems, especially as more and more industrial control systems get connected to the internet (knowingly or unknowingly). There is no lack of material for such analyses; there are plenty of engineering case studies to look at - indeed, there are hundreds of serious failures in the public domain and probably thousands of serious failures known to the cyber security industry that are kept hushed up.

James Arathoon

