OII Standards and Specifications List


OII Guide to Information Security

This guide is intended to provide guidance on standards and specifications in information security. It serves as a companion to the Information Security Standards section of the OII Standards and Specifications List.

Information security is a critical and integral aspect of the provision and operation of trust services. This guide explains some of the key technical concepts -- technical building blocks -- for trust, which is the subject of the OII Guide to Trust Services.

This guide has the following structure:

  1. Management Issues
  2. Technical Elements
  3. Key Infrastructure

1. Management Issues

1.1 Overview

'Security' is a subjective term, but may be defined as an acceptable balance of threats against safeguards for a particular circumstance. While many discussions of information security focus on technical mechanisms, it is important to bear in mind that the implementation of specific information security measures should be guided by an overall security policy. The major areas for consideration in developing such a policy are discussed in the following subsections.

1.2 Activities

It is important to appreciate the different activities related to the management and planning of information security, as well as the associated roles and responsibilities within an organization. The main security management activities include:

  • Identifying and analysing security threats to information assets
  • Identifying and planning the implementation of adequate safeguards against these threats
  • Managing Trusted Third Parties
  • Managing information security risks based upon evaluating criteria to determine the importance of those risks
  • Developing plans for incident handling
  • Determining security objectives, strategies and policies
  • Determining organizational information security requirements and developing a security awareness programme
  • Planning follow-up programmes for monitoring, reviewing, and maintenance of security services.

1.3 Threats

For strategic risk management, it is necessary to identify the minimum requirements to be addressed in managing information security. Risk management techniques need to be identified, as well as the development and implementation of an IT security plan to address these risks. In security terms, these risks are known as threats, of which the following are some examples:

  • Message integrity: message modification, addition, destruction, replay, preplay, reordering within a sequence, delay, routing corruption
  • Data integrity: data incomplete, unreasonable, inconsistent
  • Authentication: impersonation of the messaging system, false acknowledgement of receipt, false origination of the message
  • Non-repudiation: denial by originator/recipient of origin/content, lack of acknowledgement authentication
  • Confidentiality: loss of confidentiality (content revealed to 3rd party), loss of anonymity (identities revealed to 2nd or 3rd party), traffic analysis (deductions from message flow/times)
  • Other issues: no originator clearance (access control), misrouting through insecure channels, denial of communications, channel flooding, refusal to complete a transaction.

1.4 Safeguards

Specific safeguards can be selected that achieve appropriate levels of protection against the threats identified. Additionally, organization-wide minimum security requirements can be implemented to meet the organization's needs. In order to provide help for the safeguard selection, guidelines containing baseline safeguards are often useful. The guidance typically includes the selection and use of safeguards to support the management and maintenance of the site, particularly those connected to any external networks (and specifically the Internet). Safeguards range from organizational policy and operational statements to more physical examples such as firewalls.

1.5 Evaluation Criteria for Information Security Systems

The main objective of common security evaluation criteria is to provide a method to measure the capability of an installed (or to-be-installed) system of trust or information security products. This is generally based upon defining the general concepts and principles, presenting a general model, and then expressing security functional and assurance requirements and specifications for information products and systems. This produces a protection profile, which allows the creation of generalized, reusable sets of security requirements against which specific products or systems can be evaluated -- the security target. To create this profile, it is necessary to establish a set of components for expressing the security functional requirements in a standard way.

Standards and Specifications

  • ISO/IEC 13335 Guidelines for the management of IT Security. (Part 1: Concepts and models for IT Security; Part 2: Managing and planning IT Security; Part 3: Techniques for the management of IT Security; Part 4: Baseline approach; Part 5: Application of IT security services and mechanisms)
  • ISO/IEC 15408 Evaluation criteria for IT Security. (Part 1: Introduction and general model; Part 2: Security functional requirements; Part 3: Security assurance requirements)
  • Common Criteria for Information Technology Security Evaluation. Evaluation of IT security systems

1.6 Trusted Third Parties

Trusted Third Parties (TTPs) typically have a prominent role in a secure system. It is, therefore, essential to ensure that TTPs are respected and trusted. This means that business users, system managers, developers and operators of TTPs conform to certain responsibilities and that the services they offer are fully understood. This is addressed more fully in the Trusted Third Parties section of the OII Guide to Trust Services.

2. Technical Elements

2.1 Overview

The main technical elements of information security -- keys, hash functions, digital signatures, encryption and certificates -- are described in the following subsections.

These technical elements are inter-related. How they relate to each other in an electronic transactional environment is presented in the pictorial representations provided below. The operation of these processes is discussed in the Key Infrastructure section.

The following process description relates to the outbound diagram (2.1.1).

To sign a document or any other information, the signer first delimits precisely what is to be signed. The delimited information to be signed is termed the 'message'. The message may range from a simple text e-mail to a substantial database.

Then a hash function in the signer's software computes a hash result, a code unique to the message. The signer's software then transforms the hash result into a digital signature by reference to the signer's private key. This transformation is sometimes described as 'encryption'.

The resulting digital signature is thus unique to both the message and the private key used to create it. Typically, a digital signature is attached to its message and stored or transmitted with its message. However it may also be sent or stored as a separate data element, so long as it maintains a reliable association with its message.

The inbound diagram represents the reverse of this process -- the receipt and processing of the signed document/information (2.1.2).

The final diagram represents the outbound process of message content encryption, whereby the message itself is encrypted in addition to being signed (2.1.3).
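The outbound and inbound processes described above can be sketched in a few lines of Python. The key pair below is a toy textbook RSA pair with tiny primes, used purely for illustration; a real implementation would use a vetted cryptographic library and much larger keys.

```python
import hashlib

# Toy RSA key pair (p = 61, q = 53): n and e are public, d is private.
n, e, d = 3233, 17, 2753

def hash_result(message: bytes) -> int:
    # Outbound step 1: compute the hash result of the delimited message.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Outbound step 2: transform the hash result using the private key.
    return pow(hash_result(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Inbound: recompute the hash and compare it with the value recovered
    # from the signature using the public key.
    return pow(signature, e, n) == hash_result(message)

message = b"Order 100 widgets"
signature = sign(message)
print(verify(message, signature))   # True: signature matches the message
```

A modified message yields a different hash result, so verification fails; this is how the inbound process detects tampering.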

2.1.1 Outbound
The Outbound Processes
2.1.2 Inbound
The Inbound Processes
2.1.3 Message Content Encryption
The Message Content Encryption Processes

2.2 Keys

Keys are very large numeric values which, when applied to an input data stream via an algorithmic function, produce an output which is generally:

  • An encrypted, unique 'summary' of the input -- its digital signature
  • A confidential output -- an encrypted message.

Keys are assigned to identified entities (consumers, businesses). Generally each key belongs to only one entity, although in so-called symmetric cryptosystems the same key may belong to more than one party (usually two). However, one entity may own multiple keys, either through a key-pair arrangement or because the differing nature of transactions warrants different keys for the individual arrangements. Keys, digital signatures, and encrypted information are created and verified by means of cryptography, the branch of applied mathematics concerned with transforming information into complex, seemingly unintelligible forms and back again.

The keys used to generate digital signatures and encrypted messages are most commonly generated through algorithms based upon an important feature of large prime numbers: once they are multiplied together to produce a new number, it is virtually impossible to determine which numbers created that new, larger number.
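This one-way property can be demonstrated directly in Python. The primes below are tiny for illustration; real keys are built from primes hundreds of digits long.

```python
# Multiplying two primes is a single cheap operation...
p, q = 104729, 1299709
n = p * q

# ...but recovering them from n alone requires factoring -- here by trial
# division, which already needs tens of thousands of steps for this tiny n.
def factor(n: int):
    candidate = 2
    while n % candidate:
        candidate += 1
    return candidate, n // candidate

print(factor(n) == (p, q))   # True, but only after an exhaustive search
```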

A 'symmetric cryptosystem' is one where a single key is used and 'owned' by the originating and receiving parties. This has the advantage of being simpler to manage, but the major disadvantage of the inability to prove which of the two parties originated or was involved with the transaction at a precise moment in time. Such a system is dependent on the parties involved trusting each other not to reveal the key to any other party.
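A minimal sketch of the symmetric idea: one shared key both encrypts and decrypts. The keystream construction below (SHA-256 in a counter mode) is an illustration only and is NOT a secure cipher; real systems use standardised algorithms such as DES via a vetted library.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the shared key (demo only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so encryption and decryption are identical:
    # the same shared key performs both operations.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

shared_key = b"secret shared by both parties"
ciphertext = xor_crypt(shared_key, b"invoice #42")
print(xor_crypt(shared_key, ciphertext))   # b'invoice #42'
```

Note that nothing in the ciphertext proves which of the two key holders produced it, which is exactly the non-repudiation weakness described above.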

In an 'asymmetric cryptosystem', there are two keys -- the key pair. The private key is known only to the signer and used to create the digital signature. The public key is more widely known and is used to verify the digital signature. A recipient must have the corresponding public key in order to verify that a digital signature is that of the signer. If many people need to verify the signer's digital signatures, the public key must be distributed to all of them, perhaps by publication in an on-line repository or directory where they can easily obtain it.

Although keys of the pair are mathematically related, it is computationally infeasible (or at least, exceedingly improbable) to derive one key from the other, if the asymmetric cryptosystem has been designed and implemented securely for digital signatures. Although many people will know the public key of a given signer and use it to verify that signer's signature, they cannot discover that signer's private key and use it to forge digital signatures.

Standards and Specifications

  • PKCS #8, which describes a format for private-key information.

2.3 Hash Function

A 'hash function' is used in both creating and verifying digital signatures. A hash function creates, in effect, a digital freeze-frame of the information by producing a "code" usually much smaller than the message but nevertheless unique to it. This compressed form of the message, often referred to as a "message digest" or fingerprint of the message, takes the form of a hash value or hash result and has a standard length. It is virtually impossible to derive the original message from knowledge of the hash value. Hash functions therefore enable the software for creating digital signatures to operate on smaller and predictable amounts of data, while still providing robust evidentiary correlation to the original content, thereby efficiently providing assurance that there has been no modification of the message since it was digitally signed.

If the message changes, the hash result of the message will invariably be different.
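These properties can be observed with SHA-256 from the Python standard library (the choice of SHA-256 here is illustrative; the guide does not prescribe a particular hash function):

```python
import hashlib

# Two messages differing in a single character.
d1 = hashlib.sha256(b"The quick brown fox").hexdigest()
d2 = hashlib.sha256(b"The quick brown fix").hexdigest()

print(len(d1))    # 64 hex characters: the digest has a standard length
print(d1 == d2)   # False: a one-letter change gives a different digest
```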

Standards and Specifications

2.4 Digital Signatures

From a legal point of view, the concept of "signature" means any mark made with the intention of authenticating the marked document. In an electronic environment, today's broad concept of signature may well include markings such as digitized images of paper signatures, typed notations such as "s/John Smith" or even electronic organizational e-mail headers.

From an information security viewpoint, these "electronic signatures" are entirely different from the "digital signatures" discussed in this guide, although the term "digital signature" is still used colloquially, but incorrectly, to mean any form of computer-based signature. Digital signatures are irrefutably derived from the documents themselves, while electronic signatures merely imply the source of documents through the placement of a mark.

For transformation using digital signatures, two different electronic keys are generally used: one for creating a digital signature or transforming data into a seemingly unintelligible form, and another key for verifying a digital signature or returning the message to its original form.

A digital signature is a digitally signed hash result of a delimited 'block of data' -- the message. Typically it is attached to its message and stored or transmitted with its message. However it may also be sent or stored as a separate element, so long as it maintains a reliable association with its message. Since a digital signature is unique to its message, it is useless if permanently disassociated from its message. Digital signatures principally allow the verification of the originator or the integrity of a block of data.

Standards and Specifications

  • ISO/IEC 9796:1991 Digital signature scheme giving message recovery
  • CEN ENV 12388: Medical Informatics -- RSA Algorithm for Digital Signature Services
  • ISO/IEC 14888 Digital signatures with appendix. (Part 1 General principles and requirements for digital signature with appendix; Part 2 Fundamental structure, mathematical functions and data objects)
  • Digital Signature Standard/Digital Signature Algorithm (DSS/DSA)
  • PKCS #1 Rivest-Shamir-Adleman (RSA) signature. Digital signature scheme giving message recovery
  • PKCS #3 Diffie-Hellman Key Agreement. Simple Diffie-Hellman exchange and ASN.1 key formats
  • Agnew-Mullin-Vanstone (AMV) signature. Digital signatures with appendix.

The output of the Digital Signature Initiative of the World Wide Web Consortium is also relevant.

2.5 Encryption

Encryption is generally related either to the encryption of the content or the production of a unique digital signature based on the content. In the former it is the content which is secured, while the latter secures the authentication (e.g. source, timestamp) of the information.

Standards and Specifications

  • FIPS 46-2 -- Data Encryption Standard (DES). A 64-bit block cipher, symmetric algorithm, also known as Data Encryption Algorithm (DEA) by ANSI (ANSI X3.92 and ANSI X3.106) and DEA-1 by ISO
  • FIPS 74 -- Guidelines for using DES
  • FIPS 81 -- DES modes of operation
  • ISO 9735 EDIFACT -- Application level syntax rules. (Part 7: Security rules for batch EDI (confidentiality); Part 8: Associated data in EDI (confidentiality))
  • ISO/IEC 10181:1996 Information technology - Open Systems Interconnection - Security frameworks for open systems. (Part 5: Confidentiality framework)
  • RFCs 1968 and 1969 -- PPP Encryption

2.6 Certificates

A certificate is an electronic record that lists a public key together with the name of the certificate subscriber -- i.e. its principal function is irrefutably binding a (public) key to a particular holder. It may also confirm that the prospective signer identified in the certificate holds the corresponding private key.

A recipient of the certificate can use the public key listed in the certificate to verify that a digital signature was created with the corresponding private key. If such verification is successful, assurance is provided that the digital signature was created by the subscriber named in the certificate, and that the corresponding message has not been modified since it was digitally signed.

Certificates are generally issued by certification authorities which in turn are often part of a physical Key Infrastructure involving Certification Authorities, Registration Authorities and Directory Service Agents. To assure the authenticity of the certificate with respect to both its contents and its source, the certification authority also digitally signs it. The issuing certification authority's digital signature on the certificate can be verified by using the public key of the certification authority listed in another certificate issued by another certification authority. Then that other certificate can in turn be authenticated by the public key listed in yet another certificate and so on, until the entity relying on the digital signature is adequately assured of its genuineness.
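The chain-of-certificates idea can be sketched as follows. The certificates are plain dictionaries and the authorities use toy textbook RSA keys; all names, fields and key sizes are illustrative assumptions, not part of any standard format.

```python
import hashlib

# Toy RSA key pairs for two certification authorities (illustration only).
ROOT = {"n": 3233, "e": 17, "d": 2753}   # root CA (p = 61, q = 53)
INTER = {"n": 2773, "e": 17, "d": 157}   # intermediate CA (p = 47, q = 59)

def digest(data: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def issue(issuer_key, subject: str, subject_pub) -> dict:
    # The issuing CA digitally signs the binding of subject and key.
    body = f"{subject}|{subject_pub}".encode()
    sig = pow(digest(body, issuer_key["n"]), issuer_key["d"], issuer_key["n"])
    return {"subject": subject, "pubkey": subject_pub, "sig": sig}

def check(cert, issuer_pub) -> bool:
    # Verify the CA's signature on the certificate with its public key.
    body = f"{cert['subject']}|{cert['pubkey']}".encode()
    return pow(cert["sig"], issuer_pub["e"], issuer_pub["n"]) == digest(body, issuer_pub["n"])

# The root CA certifies the intermediate CA's public key; the intermediate
# in turn certifies the subscriber's key.
inter_cert = issue(ROOT, "Intermediate CA", (INTER["n"], INTER["e"]))
user_cert = issue(INTER, "alice@example.org", "alice-public-key")

# A relying party walks the chain upwards until it reaches a trusted root.
print(check(user_cert, INTER) and check(inter_cert, ROOT))   # True
```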

The most widely accepted certificate format is that defined by ISO/IEC JTC1 SC21, known as X.509 v3. It provides support for a wide range of applications and is generally considered to support a flexible trust model corresponding to user requirements. X.509 v3 is widely adopted as 'the' general-purpose public key certificate format. Development of a so-called simple public key certificate is also in progress (see below).

It is not possible to predict the way that certificate usage will grow. Factors such as user acceptability, public policy and vendor support will all be significant. It also seems likely that there will be a proliferation of certificate types conforming to the X.509 v3 standard and there are steps to initiate action, such as the registration of certificate variants, to minimise the varieties of certificates and make interoperability easier.

To assist the writers of certificate policies or Certification Practice Statements with their task (but not to define particular certificate policies), a framework which identifies the elements for formulating a certificate policy is highly useful. The main elements of this framework include:

  • Community definition and applicability
  • Identification and authentication policy for subjects, Registration Authorities and Certification Authorities
  • Key management policy
  • Non-technical security policy
  • Technical security policy
  • Operational requirements
  • Legal & business provisions
  • Certificate and Certificate Revocation List profiles
  • Policy administration.

The degree to which a certificate user can trust the binding of a certificate -- i.e. the binding between a name and a public key -- depends on such factors as the certification authority policy, the procedures for authentication of end entities, the procedures and security controls, and the policy and procedures of the end entity for handling private keys. The liability assumed by certificate issuers and end entities also plays a role in the degree of trust. A certificate policy allows the users of a certificate to decide how much trust to place in the certificate. A key certificate itself generally provides:

  • Information about the keys involved, including key identifiers for subject and issuer keys, indicators of intended or restricted key usage, and indicators of certificate policy
  • Alternative names for a certificate subject, a certificate issuer, or a certificate revocation list issuer, and additional attribute information about a certificate subject
  • Constraint specifications to be included in Certification Authority certificates (certificates for CAs issued by other CAs), to facilitate the automated processing of certification paths when multiple certificate policies are involved, e.g. when policies vary for different applications in an environment or when interoperation with external environments occurs.

In order to provide these features, specifications and profiles need to be available for the format and semantics of certificates. This then allows community or organization specific certificate profiles while ensuring interoperability of the different types of certificate. Some communities will need to supplement, or possibly replace, this profile in order to meet the requirements of specialised application domains or environments with additional authorization, assurance, or operational requirements. Implementations are not required to use any particular cryptographic algorithms.

An alternative to a 'full certificate' is the Simple Public Key Infrastructure (SPKI) certificate -- i.e. 'a simple certificate'. This grants a specific authority to a public key rather than binding an "identity" (such as a person's name) to that key. For example, one SPKI certificate might grant permission for a given public key to authenticate logins over a specific network on a given host for a period of time.

One of the (perceived) problems of X.509 certificates is that they use a (technical) abstract syntax notation (ASN.1) to define their data structures. This is in contrast to SPKI certificates, which have a text-based structure. The main driving forces behind SPKI are the desire to keep down the overheads arising from use of an ASN.1-based certificate and an infrastructure supporting a global directory, the search for an efficient implementation, and the freedom and flexibility to develop structures for a growing number of applications.

The main purpose of an SPKI certificate is to authorize some action, give permission, grant a capability, etc. The first requirement for an SPKI certificate is then to bind a meaningful or useful attribute to a public key (and therefore to the keyholder of the corresponding private key). In many cases, the attribute would not involve any recognizable name. The definition of attributes or authorizations in a certificate is up to the author of the application code that uses the certificate. The creation of new authorizations should not require interaction with any other person or organization, but rather be under the total control of the author of the code using the certificate.
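A sketch of the SPKI idea described above: the certificate binds an authorization, not a name, to a public key, and the application code alone decides what the attribute means. The field names and capability strings below are invented for illustration.

```python
import time

# An SPKI-style certificate: (key, capability, host, expiry) -- no
# recognizable name is involved. The key fingerprint is hypothetical.
spki_cert = {
    "pubkey": "ab12-key-fingerprint",
    "capability": "login",
    "host": "build-server.example.org",
    "not_after": time.time() + 3600,     # valid for one hour
}

def authorises(cert, pubkey, capability, host) -> bool:
    # The application code using the certificate, not a global naming
    # hierarchy, decides whether the attribute grants the action.
    return (cert["pubkey"] == pubkey
            and cert["capability"] == capability
            and cert["host"] == host
            and time.time() < cert["not_after"])

print(authorises(spki_cert, "ab12-key-fingerprint", "login",
                 "build-server.example.org"))   # True
```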

Standards and Specifications

  • ANSI X9.55 -- Extensions to Public Key Certificates & CRLs
  • ANSI X9.57 -- Certificate Management for Financial Services
  • FIPS 102 -- Guidelines for certification and management
  • ISO/IEC 9594-8:1995 Information technology - Open Systems Interconnection - The Directory: Authentication framework. (ITU-T Recommendation X.509.) Definition of the X509 certificate format for public keys
  • IETF Draft: X.509 Certificate and Certificate Revocation List Profile. This specification profiles the format and semantics of X.509 certificates
  • ISO/IEC 14888 Digital signatures with appendix. Part 3 defines certificate-based mechanisms
  • PKCS #6, which describes a format for extended certificates.
    Note: PKCS #6 is being phased out in favour of Version 3 of X.509.

Relevant on-going activities

  • The Internet Engineering Task Force (IETF) is developing a Simple Public Key Certificate (SPKI)
  • CEN TC 68 is developing a document entitled Banking -- Certificate Management, including a profile of the X.509 certificate for the financial community.

3. Key Infrastructure

3.1 Overview

A Key Infrastructure provides a common way of organizing the physical infrastructure, applications, administration and procedures to support information security, typically in a TTP environment. A principal aspect of a key infrastructure is the management of public keys and, therefore, it is often used synonymously with Public Key Infrastructure (PKI).

The operations discussed below are divided into a series of connected activities that form this infrastructure. The operations themselves may be, and often are, performed independently and can be perceived as the individual basic services of a TTP.

To use this infrastructure, it is first necessary to be able to create the keys involved -- 'Key Generation'. Generation may be performed by the using organization itself, although it is more generally provided as a third-party service. Whoever generates the key will need to register it through a process of 'Key Registration'. Once generated, and during use with new trading partners, it will be necessary to distribute the keys to end-points or intermediaries within any business scenario -- 'Key Distribution'. 'Secure Messaging' is the concept that describes how information is sent between TTPs, or from primary parties to TTPs, in a secure way.

Parties receiving the keys are likely to wish to prove the authenticity and 'good standing' of a key through a 'Key Verification' service, in case the key has been compromised. If the key owner believes compromise has taken place, they may wish to use a 'Key Revocation' facility to ensure that further transactions can no longer be proved to be associated with them. A 'Key Recovery' function is also often required: for example, if a key is lost, the key owner can be reissued with a key to ensure that information can once again be read. Key recovery is also associated with the term 'Key Escrow'.

These aspects are covered in detail in the following subsections:

3.2 General Requirements

The general requirements of a PKI often include the following:

  • PKI management and conformance to accepted PKI management standards -- notably: ISO 9594-8
  • The use of confidentiality in PKI management protocols should be kept to a minimum in order to ease regulatory problems
  • PKI management protocols should allow the use of different industry-standard cryptographic algorithms: this means that any given authority or end entity may, in principle, use whichever algorithms suit it for its own key pair(s)
  • PKI management protocols should not preclude the generation of key pairs by the end entity concerned
  • PKI management protocols should support the publication of certificates by the end entity concerned or authorities
  • PKI management protocols should support the production of certificate revocation lists by allowing certified end entities to make requests for the revocation of certificates
  • PKI management protocols should be usable over a variety of "transport" mechanisms, including mail, HTTP, TCP/IP and FTP

Standards and Specifications

  • ISO/IEC 9594-8:1995 Information technology - Open Systems Interconnection - The Directory: Authentication framework. (Also published as ITU-T Recommendation X.509.)
  • Internet PKI (Part 2: Certificate Management protocol; Part 3: Certificate Policy and Certification Practices Framework)
  • IETF Draft Architecture for PKI (produced by The Open Group)

3.3 Infrastructure

Within a PKI the following entities are defined:

  • Certification Authorities (CAs)
  • Local Registration Authorities (LRAs)
  • Directory Service Agents (DSAs)
  • Service providers (TTPs and other service providers)
  • End Users (individuals, devices)

The purpose of Certification and Local Registration authorities is to register entities (e.g. TTP service providers and end users) with their public keys, and to maintain this information at all times. Information on registered entities is made available in a directory, which in most advanced models is assumed to be a distributed X.500 directory or one of its (Internet) derivatives. The directory will, at a minimum, contain the information supplied to the CA and possibly other TTPs registered by the CA. If so, the TTP may supply information related to the service it provides.

The CA is connected by means of secured communications to LRAs, through which any user may register. A registration is acknowledged by a certificate issued by the CA at the request of some LRA. Finally, TTPs may additionally register as providers of special services -- e.g. timestamping.

The communication between the LRAs and the CA must be cryptographically secured, although not necessarily by means of public keys. Each CA may register a number of LRAs. The reason for this is that registration typically requires physical identification, so the number of LRAs may need to be relatively large in order to register all the users in a secure way.

An end-user registers at an LRA of their own choice -- typically the one closest to their premises. They are identified in the system by their credentials, which may involve manual procedures. For low-risk (low-value) Electronic Commerce, the registration may take place online. The registered information is passed on to the CA in a secured message. The user registers a public key, of which the corresponding private key is known only to that entity. The CA verifies that this particular key has not been registered previously. The CA then issues a certificate, typically with a certain validity period, which is returned to the user. The CA may update the certificate, or issue revocation certificates if the public key is revoked. Each certificate has a unique reference number and a status.

Any entity may extract certificate information from a directory by means of a directory user agent -- a software package that communicates with one or several directories. The directory or repository is needed since a certificate only proves that the entity has been registered according to described procedures, and not that the certificate is current and valid. Thus any entity may need to enquire about the status of any certificate at a specific point in time.

The response is secured to guarantee the inquiring entity that the answer is current. It is up to the entity to choose the criteria -- and thus assume the risk -- for how frequently to query the directory to ensure that changes to the certificate status are not missed. The ideal situation is that the certificate status is checked at each and every use -- this is known as session-based certification. Although ideal, this has several implications, not least that significant (cryptographic) processing power needs to be present to continually process the mass of validation requests generated.
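The registration, issuance, revocation and status-checking flow described above can be sketched with a toy CA. All field names and behaviours are illustrative simplifications of what a real CA and directory would provide.

```python
import itertools

class ToyCA:
    def __init__(self):
        self.certs = {}                  # serial number -> certificate record
        self.registered_keys = set()
        self.serials = itertools.count(1)

    def register(self, subject: str, public_key: str, days: int = 365) -> int:
        # The CA verifies the key has not been registered previously.
        if public_key in self.registered_keys:
            raise ValueError("public key already registered")
        self.registered_keys.add(public_key)
        serial = next(self.serials)      # each certificate has a unique reference
        self.certs[serial] = {"subject": subject, "key": public_key,
                              "valid_days": days, "status": "valid"}
        return serial

    def revoke(self, serial: int) -> None:
        self.certs[serial]["status"] = "revoked"

    def status(self, serial: int) -> str:
        # What a directory status query would return at this point in time.
        return self.certs[serial]["status"]

ca = ToyCA()
serial = ca.register("alice@example.org", "alice-public-key")
print(ca.status(serial))     # 'valid'
ca.revoke(serial)
print(ca.status(serial))     # 'revoked'
```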

The implementation of a full PKI is a complex set of activities. Thus the concept of a Simple PKI (SPKI) has also been progressing. This includes a Simple Distributed Security Infrastructure (SDSI) and a Simple Public Key Certificate.

SDSI is a distributed security infrastructure whose principals are public digital-signature verification keys, with individuals controlling the associated private keys. This means that a global hierarchy is unnecessary, but there is support for common roots such as certification mechanisms. Each principal is a "certification authority" and manages a local name space with which it can refer to other principals. There are three types of certificate: identity certificates, name/value certificates, and membership certificates. Identity certificates have human-readable content and the process for creating them is manual.

A key can delegate the authority to sign certificates on behalf of the key. The delegation can be limited to certificates that match a template. Certificates can time out, and they can be reconfirmed by an on-line agent acting for the issuer. SDSI is optimized for an on-line Internet environment in which clients can interact with servers to learn what credentials are needed to satisfy a request, and can retrieve the needed credentials from other servers.

3.4 Management

The purpose of key management is to provide procedures for handling cryptographic keys. This includes objectives of key management, as well as principles, concepts and procedures which are common to the different ways of managing keys. The specific requirements and framework for the management of the key life cycle -- the key life cycle model -- identify different states and transitions and implicitly define key management services, which might be part of a key management system or be provided by another service provider as a TTP.

Standards and Specifications

  • ISO/IEC 11770 Information technology -- Security techniques -- Key management (Part 1: Framework)
  • IEEE 802.10c. Defines a protocol for key management and security attribute negotiation for LAN security.

Relevant on-going activities

  • The Security Technical Committee of ETSI is preparing a standard for TTPs.

3.5 Generation

Key generation is the creation of a public/private key pair, which may include the storing and distribution of key material using a smart card. Generally, key generation is performed by non-user entities. However, in some environments the user entity may be responsible for the generation of its own key. This is generally a political question: it may be considered important that the owner of a key pair is responsible for, or at least involved in, the generation of their own key pair. The key generation could take place at the owner's premises with trusted software. A key-pair generating service needs to utilize trustworthy systems, must be aware of its fiduciary responsibilities, and must protect itself against retaining another person's private key.
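The arithmetic behind generating a toy key pair from two primes can be sketched as follows. The primes are tiny for illustration; real key generation relies on vetted libraries and large random primes.

```python
import math

def generate_keypair(p: int, q: int, e: int = 17):
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    assert math.gcd(e, phi) == 1     # e must be invertible modulo phi
    d = pow(e, -1, phi)              # private exponent (Python 3.8+)
    return (n, e), (n, d)            # (public key, private key)

public, private = generate_keypair(61, 53)

# Round trip: transforming with one key and then the other recovers the
# original value, which is the defining property of the key pair.
m = 42
c = pow(m, public[1], public[0])
print(pow(c, private[1], private[0]))   # 42
```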

Standards and Specifications

There are no known specifications specifically associated with key generation. Key definition specifications are covered in the Technical Elements section.

3.6 Registration

Registration and certificate issuing services establish a relationship between a person or legal entity and a public key. This leads to the production of a certificate recording that relationship and possibly more information about the person or entity, such as their professional or academic qualifications, their national identity number or their residence.
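
The shape of a certification request can be sketched as follows. This is a toy structure in the spirit of PKCS #10, not its actual syntax: the requester binds a subject name to a public key and proves possession of the corresponding private key by signing the request. An HMAC stands in for a real public-key signature purely to keep the sketch self-contained:

```python
import hashlib
import hmac
import json

# Toy certification request in the spirit of PKCS #10 (not its real
# ASN.1 syntax).  HMAC is a stand-in for a public-key signature here,
# used only so the sketch runs with the standard library alone.
def make_request(subject, public_key, private_key):
    info = json.dumps({"subject": subject, "publicKey": public_key},
                      sort_keys=True).encode()
    # The signature over the request info proves possession of the
    # private key corresponding to the enclosed public key.
    proof = hmac.new(private_key, info, hashlib.sha256).hexdigest()
    return {"certificationRequestInfo": info.decode(), "signature": proof}

req = make_request("CN=Alice", "alice-pub", b"alice-priv")
```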

Standards and Specifications

  • PKCS #10, describes a syntax for certification requests

3.7 Distribution

Distribution is concerned with how secret keys can be distributed and established between parties. This is dependent on whether keys are Public/Private (Asymmetric) or a single key is shared (Symmetric).

Environments for symmetric keys establishment:

  • Point to Point, when two entities already share a secret key that can be used to establish further keys
  • Key Distribution Centre (KDC), when the two entities do not share a secret key and the KDC generates and distributes the key
  • Key Translation Centre (KTC), which converts and distributes keys for entities who do not already share a secret key.
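
The KDC case above can be sketched as follows. The KDC shares a long-term key with each entity and issues a fresh session key wrapped under each one; XOR with a hash-derived keystream stands in for a real key-wrapping cipher and must not be used in production:

```python
import hashlib
import secrets

# Toy Key Distribution Centre: it shares a long-term key with each
# entity and distributes a fresh session key wrapped under each one.
# XOR with a hash-derived keystream is a stand-in for a proper
# key-wrapping cipher; illustration only.
def wrap(longterm_key, nonce, data):
    stream = hashlib.sha256(longterm_key + nonce).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def kdc_issue(key_a, key_b):
    session = secrets.token_bytes(16)
    nonce = secrets.token_bytes(16)
    return nonce, wrap(key_a, nonce, session), wrap(key_b, nonce, session), session

ka, kb = secrets.token_bytes(16), secrets.token_bytes(16)
nonce, for_a, for_b, session = kdc_issue(ka, kb)

# XOR is its own inverse, so each entity unwraps its copy with its own
# long-term key using the same operation.
assert wrap(ka, nonce, for_a) == wrap(kb, nonce, for_b) == session
```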

Environments for asymmetric key establishment:

  • Establish a shared secret key between two entities A and B by key agreement -- the secret key is the result of a data exchange between the two entities A and B. Neither of them can predetermine the value of the shared key
  • Establish a shared secret key between two entities A and B by key transport -- the secret key is chosen by one entity A and is transferred to another entity B, suitably protected by asymmetric techniques
  • Make an entity's public key available to other entities by key transport in an authenticated way (confidentiality is not required).
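
The first case, key agreement, is the Diffie-Hellman exchange: the shared secret results from the exchange itself and neither party can predetermine it. The tiny prime below is a textbook toy; real deployments use standardized groups of 2048 bits or more:

```python
import secrets

# Diffie-Hellman key agreement sketch.  The tiny parameters are a
# textbook toy for illustration only; real deployments use
# standardized groups of 2048 bits or more.
P, G = 23, 5

a = secrets.randbelow(P - 2) + 1   # A's private value
b = secrets.randbelow(P - 2) + 1   # B's private value

A_pub = pow(G, a, P)               # sent from A to B
B_pub = pow(G, b, P)               # sent from B to A

shared_a = pow(B_pub, a, P)        # computed by A
shared_b = pow(A_pub, b, P)        # computed by B; equals shared_a
```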

Standards and Specifications

  • ISO 9735-9 EDIFACT - Application level syntax rules. Security key and certificate management message (message type KEYMAN), assuming an X.509 architecture.
  • ISO/IEC DIS 11770-3 Mechanisms using asymmetric techniques

3.8 Secure Communication

3.8.1 General

In order to exchange information associated with a TTP Network / PKI, the communication between the parties must be secured. This includes both message security and security between client and server. To achieve this while still utilising common transfer mechanisms, several standards and specifications adapt existing messaging standards to provide this extra level of security.

Typical requirements include:

  • Confidentiality of an account number
  • Integrity of payment data
  • Authentication
  • Buyer knows seller is a secure merchant
  • Seller knows buyer has a valid account.

3.8.2 Message Security

This includes, for example, adaptation for adding cryptographic signature and encryption services to message based transactions (e.g. Internet MIME electronic mail messages) as well as protocol based sessions (e.g. SET). Typically, this involves providing authentication, message integrity and non-repudiation of origin (using digital signatures) and privacy and data security (using encryption).
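
As a simplified sketch, message integrity and origin authentication can be shown with an HMAC under a shared key. Real message security standards such as S/MIME use public-key digital signatures, which additionally provide non-repudiation of origin; an HMAC alone does not:

```python
import hashlib
import hmac

# Message integrity and origin authentication for a MIME-like message,
# sketched with an HMAC under a shared key.  Public-key signatures
# (as in S/MIME) would additionally give non-repudiation of origin.
def protect(key, body):
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, tag

def verify(key, body, tag):
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)

key = b"shared-secret"
body, tag = protect(key, b"Content-Type: text/plain\r\n\r\nHello")
```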

Standards and Specifications

3.8.3 Client/Server Security

In an open system environment, it is important to provide privacy and reliability between two communicating applications in a client/server relationship. Generally this involves a protocol composed of two layers. At the lowest level, above a reliable transport protocol, is a Record Protocol which is used for encapsulation of various higher-level protocols. For example, an encapsulated protocol may allow the server and client to authenticate each other and to negotiate an encryption algorithm and cryptographic keys before the application protocol transmits or receives its first byte of data. The advantage of such layering is that it is independent of the application protocol, and a higher-level protocol can execute transparently on top of the lower layer.

Connection security implies three basic properties:

  • The connection is private. Encryption is used after an initial handshake to define a secret key. Symmetric cryptography is used for data encryption
  • The peer's identity can be authenticated using asymmetric, or public key, cryptography
  • The connection is reliable.
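
This layering is what TLS provides today: the record protocol sits above TCP, and the handshake negotiates algorithms and keys before the application protocol sends its first byte. Python's standard ssl module wraps an ordinary socket transparently (the host name below is illustrative):

```python
import ssl

# A default TLS client context: peer certificates are verified and
# host names are checked, giving the authentication property above.
context = ssl.create_default_context()

# Wrapping a TCP socket layers the record protocol beneath the
# application protocol (here HTTP), transparently to the application.
# Commented out so the sketch needs no network access:
#
# import socket
# with socket.create_connection(("example.org", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="example.org") as tls:
#         tls.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
```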

Standards and Specifications

3.9 Certificate Repositories

Any entity may need to enquire about the status of any certificate at any time, since a certificate only proves that the entity has been registered according to described procedures, not that the certificate is current and valid. This process is termed 'Directory access' or 'access to the certificate repository'. The certificate repository enables checking of both valid and revoked certificates. Since certificates can be held by multiple directory hosting agents, it is important that open, standardized specifications are available so that repositories can be accessed in a networked environment, particularly for use by SMEs and private citizens. Within corporate and closed environments other access methods, such as the full X.500 directory access protocol, may be more appropriate.

There are two basic mechanisms to ascertain the status of a certificate from a certification authority: retrieval via file information transfer, and direct on-line communication through appropriate protocols. Of particular importance in this area is the profile being developed by the IETF Public Key Infrastructure (PKIX) group for the Lightweight Directory Access Protocol (LDAP). LDAP, developed by the IETF working group for Access, Searching and Indexing of Directories (ASID), has wide applicability in networked applications and its use should be promoted.

Standards and Specifications
Retrieval via file information transfer:

Retrieval via on-line communication:

3.10 Verification

The framework for PKI and the provision of secure services is based around the use of strong authentication, involving credentials formed using cryptographic techniques. Strong authentication is generally provided by public key cryptosystems, in turn based upon certificates. A major advantage of such systems is that user certificates may be held within the Directory as attributes, and may be freely communicated within the Directory System and obtained by users of the Directory in the same manner as other Directory information. Most often user certificates are formed by "off-line" means and placed in the Directory by their creator. The generation of user certificates is performed by off-line Certification Authorities which are completely separate from the Directory.

The process is one whereby one user is authenticated to another using a certificate which contains a public key and is signed by a Certification Authority which the user trusts. This process is based on a certification path which logically forms an unbroken chain of trusted points in the Directory Information Tree between two users wishing to achieve authentication.

The management of keys and certificates, the responsibilities of a Certification Authority and a procedure for the revocation of certificates all need to be addressed. In order to use such systems, each user must possess a unique distinguished name and users wishing to achieve authentication must use the same public key cryptographic algorithms and hash functions.
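
The certification-path walk described above can be sketched as follows. Each certificate binds a subject name to a public key and is signed by its issuer, and the chain is anchored in a CA key the verifier already trusts. An HMAC stands in for a real public-key signature purely to keep the sketch self-contained:

```python
import hashlib
import hmac

# Toy certification-path validation.  HMAC is a stand-in for a real
# public-key signature, used only so the sketch runs with the
# standard library alone.
def sign(issuer_key, subject, subject_key):
    return hmac.new(issuer_key, subject.encode() + subject_key,
                    hashlib.sha256).hexdigest()

def validate_path(trusted_ca_key, path):
    key = trusted_ca_key
    for cert in path:                     # root-nearest certificate first
        expected = sign(key, cert["subject"], cert["publicKey"])
        if not hmac.compare_digest(expected, cert["signature"]):
            return False
        key = cert["publicKey"]           # verifies the next link in the chain
    return True

ca_key = b"root-ca-key"
inter_key, alice_key = b"intermediate-key", b"alice-key"
path = [
    {"subject": "CN=Intermediate", "publicKey": inter_key,
     "signature": sign(ca_key, "CN=Intermediate", inter_key)},
    {"subject": "CN=Alice", "publicKey": alice_key,
     "signature": sign(inter_key, "CN=Alice", alice_key)},
]
```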

Standards and Specifications

  • ISO/IEC 9594-8:1995 Information technology -- Open Systems Interconnection - The Directory: Authentication framework. (Also published as ITU-T Recommendation X.509.)
  • ISO/IEC 9796:1991 Information technology -- Security techniques -- Digital signature scheme giving message recovery. Includes definition of a scheme for verifying the originator and integrity of a block of data
  • ISO/IEC 14888 Digital signatures with appendix. Part 2 includes definition data objects which constitute a signature and verification processes of such mechanisms
  • The Public Key Login Protocol (IETF)

3.11 Revocation

Certificate revocation is a response to a user's request when a key is no longer valid or has been compromised. To perform revocation there need to be specifications which profile the format and semantics of certificate revocation lists (CRLs), as well as described procedures for processing certification paths in the Internet environment, including path validation. Features which CRLs need to provide include:

  • Indications of revocation reason
  • Provision for temporary suspension of a certificate
  • Sequence numbers to allow certificate users to detect missing CRLs in a sequence from one CRL issuer.
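
The three features above can be sketched together: revocation reasons per entry, temporary suspension as a "certificateHold" reason, and CRL sequence numbers that let a relying party detect a missing CRL. The CRL structure here is a simplified stand-in for the X.509 profile:

```python
# Sketch of CRL processing; the dictionary structure is a simplified
# stand-in for the X.509 CRL profile.
def check_certificate(serial, crls):
    numbers = sorted(crl["crlNumber"] for crl in crls)
    # A gap in the CRL number sequence means a CRL was missed.
    if numbers != list(range(numbers[0], numbers[-1] + 1)):
        raise RuntimeError("missing CRL in sequence")
    for crl in crls:
        for entry in crl["revoked"]:
            if entry["serial"] == serial:
                return entry["reason"]    # e.g. "keyCompromise", "certificateHold"
    return None                           # not revoked

crls = [
    {"crlNumber": 1, "revoked": [{"serial": 17, "reason": "keyCompromise"}]},
    {"crlNumber": 2, "revoked": [{"serial": 42, "reason": "certificateHold"}]},
]
```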

Standards and Specifications

  • IETF Draft: X.509 Certificate and Certificate Revocation List Profile. X.509 certificates and certificate revocation lists for the Internet PKI.

3.12 Recovery

Key recovery may be needed for a variety of reasons, including:

  • If the user has lost the key
  • If the user is no longer able to provide the key, because of illness, absence, etc., and another authorized party needs access to the information
  • When an employee, as a user, has left a company and the data owner, the employer, needs to access files the employee has encrypted
  • Legal authorities may need to recover stored data from its encrypted form in the pursuance of their investigations
  • When data is being communicated (i.e. during exchange) in an encrypted form, law enforcement authorities may need to intercept it in decrypted form. In general, users and owners of data do not require access to data being transmitted in encrypted form; their requirement is to access data stored in encrypted form.

For the last two examples, there is a significant national, regional and international political debate on the use of such techniques which centres on the legality of such an act or the establishment of legal safeguards to enable it. Often third party access to private keys is termed 'key escrow'. (Collins Dictionary defines 'escrow' as "a written document held by a third party pending fulfillment of some condition".) The cryptographic keys in question are held in escrow by a TTP which itself may have been licensed by the legal authority. Thus there may be a number of different requirements for key recovery services with respect to keys used to encrypt information. These requirements will also depend on whether the data is being stored as encrypted files, or communicated in encrypted form over telecommunications links.

The distinction between the owner of the data and the user of the data is also important. An employer will be the owner of the data; an employee the user. If the data is being stored, then those who use or own the data may need to be able to gain access to the keys used to encrypt it and to recover it in decrypted form.

Key recovery services only apply to keys used for the encryption of data. It should be noted that private keys used to sign documents or authenticate an entity must not be communicated to any party other than those whom they represent ("the owner"); this is to avoid any possibility of others masquerading as the owner of a key.

Keys may be recovered mathematically, through secure storage, or by other procedures. The key recovery function assures an institution that it can always have access to information within its information processing resources; for example, such a recovery service may be essential in disaster recovery. It may also satisfy law enforcement regulations in some jurisdictions, enabling an institution to produce such a key, or encrypted information, in answer to a lawful court order.
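
One mathematical approach can be sketched as split-key escrow: the encryption key is divided into n shares held by separate escrow agents, XOR-ing all shares recovers the key, and any subset of fewer than n shares reveals nothing about it. (FIPS 185 similarly splits the device key between two escrow agents.) This is an illustration, not the FIPS 185 mechanism itself:

```python
import secrets

# Split-key escrow sketch: the key is split into n XOR shares held by
# separate escrow agents.  All n shares are needed for recovery; any
# fewer reveal nothing.  Illustration only, not the FIPS 185 scheme.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(key, n):
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    final = key
    for s in shares:
        final = xor(final, s)
    return shares + [final]               # n shares in total

def recover(shares):
    key = bytes(len(shares[0]))
    for s in shares:
        key = xor(key, s)
    return key

key = secrets.token_bytes(16)
shares = split(key, 3)
```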

Standards and Specifications

  • FIPS 185 Key escrow (Clipper and Skipjack)
  • ISO/IEC 9796:1991 Digital signature scheme giving message recovery
  • PKCS #1 Rivest-Shamir-Adleman (RSA) signature based on ISO/IEC 9796-2 Digital signature scheme giving message recovery.

Relevant on-going activities
The Security Technical Committee of ETSI is preparing a standard for Trusted Third Parties (TTP), including key management and key escrow/recovery.


This information set on OII standards is maintained by Martin Bryan of The SGML Centre and Man-Sze Li of IC Focus on behalf of European Commission DGXIII/E.

File last updated: January 1998
