Audience: CIO, IT professionals, security and application developers, SaaS developers.

End-to-End (E2E) security
E2E is safeguarding information from point of origin to point of destination in a communication system. E2E may use cryptography and/or other protection means.


End-To-End IT Security
and The Fort Knox Syndrome

by Ed Gerck, Ph.D.
Copyright © E. Gerck, 2002, 2009, 2013

This paper discusses some limitations and failure modes of the conventional approach to information security, including aspects that are frequently treated in isolation but have a strong coupling to each other, such as physical, technical, administrative, and external control elements. We analyze a common failure mode that we call "The Fort Knox Syndrome" and its deceptive attraction. An improved security foundation is discussed in terms of Trust (Gerck, 1997), Risk, and the ABC Multi-Risk Design (Gerck, 2000). This approach leads to an End-to-End IT security solution summarized in four points: (1) integrate core security services; (2) eliminate known weak or costly links such as password lists, access control databases, shared secrets, and client-side PKI; (3) avoid the seemingly desirable scenario of a single point of control, which is recognized as a single point of failure; and (4) bind a coherent system of trust to the solution and to its users. The NMA ZSentry™ solution is introduced and compared as an example of a system that complies with the End-to-End IT security requirements and provides a cost-effective implementation. NMA ZSentry enables the ultimate and fail-safe defense against data theft, which is to not have the data in the first place.


With more users, more applications and more revenue depending on online resources, it is more important than ever before to provide remote user access while protecting the enterprise's resources. Mission-critical online resources often need centralized user administration and control delegation to be effective in today's enterprises with multiple administrative domains and quick response to market changes.

However, hackers, password crackers, information leakers, internal attacks, and naive users are likely to continue to be a fact of life, while business evolution will force further exposure as we can see with mobile devices, BYOD (Bring-Your-Own-Device), social networks, and cloud storage. As a result, security threats, leaks and lack of scale will constantly plague and erode user access control solutions based on password lists (even if correctly encrypted), access control databases (ditto), and shared secrets (ditto). The more administrative domains and delegation your organization needs to control, the more likely the problems — as one can become the backdoor for another.

These technical problems do not disappear with other conventional technology options for user access control, such as:

  • Two-factor authentication, based on something you know (a PIN) and something you have (an authenticator, usually a smart card or token that generates unpredictable passwords), provides a more reliable level of user authentication than password lists. However, current systems still suffer from shared secrets (the PIN, even if correctly encrypted), lack of scalability, and dependence on databases for user access control, in addition to high recurring costs for the authenticator (recurring costs may be lower if a physical authenticator is not used).
  • Client-side PKI, while free from password lists and shared secrets, depends on local / user security for the private key, does not scale well, and still depends on databases for user access control, in addition to lacking interoperability and carrying a high cost of ownership.
  • SSO (Single Sign On) and Web SSO, which currently depend on passwords, shared secrets or client-side PKI for user authentication, and also on databases for user access control, with high cost of ownership.

The business problem of IT security is, however, worse than all the technical problems. Because current user access control solutions involve different components for authentication, authorization and administration (AAA), a solution can fail for any of these many reasons. For example, a required upgrade of one component may no longer interoperate with another, which may alienate users and lead to lost business in addition to security breaches.

The end result is that IT managers face continuous, onerous cycles of development and maintenance. In a survey published in January 2002, reporting a trend that continues today, Tech Update found that IT managers were no longer focusing purchases on technology for technology's sake but on strategic systems that bring an immediate dollar benefit to the business. Dollar benefit includes revenue, cost avoidance and cost savings.

In short, the business problem of IT security is to prioritize that which can simplify and enhance the user experience, in order to support revenue and revenue growth, while reducing the enterprise's liability and expenditure.

However, all of today's IT security solutions will need to be continually updated, in ever faster cycles, to remain effective: they will need more frequent patches, upgrades, support, and perhaps replacement to provide the same amount of value tomorrow. 

To cope with the accelerated risks and obsolescence that are typical of IT security solutions, enterprises need an End-To-End IT security solution that can provide shorter and less expensive deployment, development and maintenance cycles. The solution also needs to minimize the probability of patches, upgrades and support during the lifetime of an IT security system, thereby reducing its total cost of ownership. Finally, the solution needs to integrate core security services and eliminate known weak or costly links such as password lists, access control databases, shared secrets, and client-side PKI.

But what are these core security services? What else is required in order to solve both the technical as well as the business problems of IT security? To answer these questions, we need to first look at the security gaps that can be exploited and what security services are necessary to prevent such breaches. Second, we need to realize that it is the adequate combination, and interoperation, of security properties that can provide the required resiliency of a secure IT system. An IT security system needs to have the equivalent of several independent, active barriers, controlling different security aspects but working together. Last, an IT security solution needs to be highly scalable, supporting from hundreds of users to millions and billions, compatible with the current infrastructure and standards, and extensible.

Security Gaps

The continuous, onerous cycles of development and maintenance not only make IT security more expensive than it would look at first sight, but they also make IT security solutions less secure by increasing the number and extent of security gaps that may exist at any time. Security gaps represent the weak areas that could be attacked in an IT system.

In a broad generalization, two types of attacks can exploit security gaps: network and data attacks. A network attack tries to interfere with client and/or server systems that participate in a transaction, in terms of their communication processes. A network attack, for example, may try to gain or deny access, read files, or insert information or code that affects communication. On the other hand, a data attack tries to tamper with and/or read data in the files or messages, as they are stored or exchanged in a system, for example by inserting false data, by deleting or changing data or by reading the data. 

The Technical Problem of Information Security

The technical problem of information security can be stated as "to avoid too much concentration of information and power, while allowing enough information and power so as to make a task possible to execute." An all-knowing, all-powerful entity would be the perfect attacker and could break any security measure.

That is why we oftentimes talk about "need to know" and "separation of powers." We name these principles, respectively, information granularity and power granularity. These principles mean that information should not be provided in its entirety to a single entity, and that one should also avoid the scenario where any entity is, at the same time, user, administrator and auditor. That is why business information and power should be carefully divided, for example, among local employees, the office management, the enterprise management and the customer.

Also, contrary to what is oftentimes advocated in regard to IT security solutions, there should be no single point of control in an IT security system. A single point of control must be recognized as a single point of failure -- i.e., no matter how trustworthy that single point of control is, it may fail or be compromised, and there is no recourse available precisely because it is the single point of control.

One of the earliest references to this principle can be found some five hundred years ago in the Mogul governments of India, which are known to have used at least three parallel reporting channels to survey their provinces with some degree of reliability, notwithstanding the additional effort.

Tools such as authentication and authorization can help define information and power granularity. However, at its most basic level, a secure IT system needs to do much more than just control authentication and authorization. No matter how much assurance is provided that each component of a secure system is correct, when operational factors such as collusion, internal attacks, hackers, bugs, viruses, worms or errors are taken into account, the system may fail to be effective -- i.e., may fail to be secure in the context of its operational use. In addition, underlying assurance problems such as insecure operating systems and recurring buffer overflow vulnerabilities are not likely to improve over the coming years.

There is a real need, thus, to bring together policy, management and implementation considerations that could influence effectiveness assurances for each particular IT security solution. Other security principles such as redundancy, diversity, no single point of failure and least privilege also need to be used in defining the specific requirements for a secure IT system. In addition to being specific, such requirements need to be clearly formulated, decidable and, as much as possible, complete. Additionally, an end-to-end design is important to assure effectiveness, because attacks and errors are hard to detect and prevent at the interface points.

Weak and Strong Non-Repudiation

Because electronic transactions lack paper trails, non-repudiation is also essential for Internet and IT security systems. A common but weak definition states that non-repudiation is about providing proof that a particular act was actually performed, for example as demonstrated by a trusted time-stamp. However, this paper considers that the concept of non-repudiation may also be taken in a much stronger sense, in support of both technical and business needs of a particular application, meaning "to prevent the effective denial of an act".

To be effective, non-repudiation needs to take into account technical and business considerations. For example, bank checks are non-repudiable in the sense that a check is paid if (1) you did not tell the bank beforehand that the check should not be paid and (2) the signature does not look like a signature you did not make. The reader should note the double-negative, which provides less room for customer repudiation -- the signature does not have to look like a signature that you made, it just has to not look like a forgery.
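The double-negative rule for bank checks can be made concrete in a short sketch (the predicate and its names are illustrative, not part of any actual banking system):

```python
def check_is_paid(stop_order_received: bool, looks_like_forgery: bool) -> bool:
    """Bank-check non-repudiation rule, stated as a double negative:
    the check is paid if (1) the customer did NOT stop it beforehand, and
    (2) the signature does NOT look like a signature the customer did not make.
    """
    return (not stop_order_received) and (not looks_like_forgery)
```

Note that the signature never has to be proven genuine; it only has to fail to look like a forgery, which is what leaves less room for repudiation by the customer.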

Security Standards

Commonly, secure IT systems must also satisfy current and evolving security standards such as the ISO 27000 series of standards. Previous notable examples include the Controlled Access Protection (C2) level of security as defined in DoD 5200.28-STD (the Department of Defense Trusted Computer System Evaluation Criteria), the ITSEC Level 3 as defined in the Common Criteria for Information Technology Security Evaluation, and the Code of Practice for Information Security Management BS 7799 (a British Standard that was the basis of the ISO/IEC 17799-1/2/3 Standards).

The security standards are typically broad in scope, covering more than just privacy, confidentiality and IT or technical security issues. The standards usually provide best practice recommendations on information security management, risks and controls within the context of an overall information security management system.

To meet these objectives, an effective IT security solution should be based on two points:

  • Clear security principles, algorithms and products based on time-proven designs; and
  • Independent, permanent verification and validation of the systems' security features. 

The first point defines the component quality used in the IT system, where a weak component may compromise the whole system. The second point focuses on the need to continuously evaluate all potential and existing threats, and verify any additional security design features that might be necessary to mitigate the risks stemming from the most likely and/or most damaging threats associated with the customer environment, and eventual changes in that environment.

Effective IT Security Solution

An effective IT security solution needs to deal with an extensive list of security properties. It is the adequate combination, and interoperation, of security properties that provide the usually required resiliency of a secure IT system. A secure IT system must not "pop" like a balloon when subjected to an attack, or fail silently leaving no trace of the attack. There must be no single point of failure. There must be multiple and diverse channels of communication and correction, even if the channels are not 100% independent.

If two channels are not 100% independent (but also not fully dependent), the probability that both channels are compromised at the same time is smaller than that of any single channel. This mathematical principle supports the successful use, five hundred years ago, of at least three parallel reporting channels to survey provinces with some degree of reliability, as mentioned above.
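As a numerical sketch of this principle (the linear interpolation between the fully independent and fully dependent cases is a modeling assumption, made only for illustration):

```python
def joint_compromise(p1: float, p2: float, rho: float) -> float:
    """Probability that two channels are compromised at the same time.

    rho = 0 -> fully independent channels: p1 * p2
    rho = 1 -> fully dependent channels:   min(p1, p2)
    Intermediate rho interpolates linearly between the two extremes.
    """
    return rho * min(p1, p2) + (1 - rho) * p1 * p2

# Two channels, each compromised 10% of the time, half-correlated:
p_both = joint_compromise(0.10, 0.10, 0.5)  # 0.055, below either single channel
```

For any rho below 1, the joint probability stays below that of either single channel, which is why even partially dependent parallel channels improve reliability.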

The intuitive increase in reliability by using multiple and diverse channels of information correlates well with our perception of trust and how trust can be defined — we know by experience that we can trust more when we have more evidence to deal with.


Of all properties of an IT security system, one of the most important seems to be trust. As often said, security is concerned with locks, fences and guards; trust is about whether they work. But, is that really so, and how is trust defined? How can I trust a set of bytes?

These questions were answered in [Toward Real-World Models of Trust: Reliance on Received Information, E. Gerck, 1997], with a framework that has been useful in the fields of information security, business, Internet of Things, and communication in general. The framework allows us to use the concept of trust not only in homogeneous (e.g., human-human, or machine-machine) but also in heterogeneous environments (e.g., human-machine-human). For example, in today's commonly used environment comprising humans and machines, trust should be understood exactly as what we humans call trust (e.g., as expected fulfillment of behavior) and could bridge to machines in terms of qualified information based on factors independent of that information.

In terms of a communication process, trust has nothing to do with feelings or emotions. As defined by Ed Gerck (1997), trust is qualified reliance on information, based on factors independent of that information. In short, trust needs multiple, independent channels to be communicated. Trust cannot be induced by self-assertions. More precisely, "Trust is that which is essential to a communication channel but cannot be transferred using that channel." See "Trust Points" by E. Gerck in "Digital Certificates: Applied Internet Security" by Jalal Feghhi, Jalil Feghhi and Peter Williams, Addison-Wesley, ISBN 0-201-30980-7, pages 194-195, 1998.

We therefore realize that trust is essentially communicable. But trust, as qualified reliance on information, needs multiple, independent channels to be communicated. If we have two entities (e.g., a client and server) talking to one another, we have only one channel of communication. Clearly, we need more than two entities. It seems unreasonable to require a hundred entities.

Looking into millennia of human uses of trust, we realize that we need at least four parties to induce trust (i.e., to communicate trust in a "clean slate" scenario): (1-2) the two parties in a dialogue, (3) at least one trusted introducer, and (4) at least one trusted witness. Trusted introducers and trusted witnesses allow you to build two open-ended trust chains for every action, the witness chain providing the assurances ("how did we get here?") that led to action (including the action itself) while the introducer chain ("where do we go from here?") provides the assurances both for a continuation of that action and for other actions that may need assurances stemming from it. This was called the Trust Induction Principle by E. Gerck ["Toward Real-World Models of Trust: Reliance on Received Information", 1997, op. cit.], and states: to induce trust, every action needs both a trusted introducer and a trusted witness.
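A minimal data-structure sketch of the principle follows (the class and field names are hypothetical, chosen only to make the two chains visible):

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """An action with its two open-ended trust chains: witnesses answer
    "how did we get here?", introducers answer "where do we go from here?"."""
    name: str
    witnesses: list = field(default_factory=list)    # trusted witnesses for the action
    introducers: list = field(default_factory=list)  # trusted introducers for what follows

def trust_inducible(action: Action) -> bool:
    # Trust Induction Principle: every action needs at least one trusted
    # introducer AND at least one trusted witness.
    return bool(action.introducers) and bool(action.witnesses)

signup = Action("enroll-user", witnesses=["notary"], introducers=["registrar"])
```

An action missing either chain, no matter how many entities sit on the other one, cannot induce trust under this principle.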

How about trust values? In this framework, trust has a minimum of three possible values (+ , 0 , -):

  • (+) -- trusted according to policy(+), called trust
  • (0) -- trust value not assigned, called atrust
  • (-) -- trusted according to policy(-), called distrust

The respective (+) and (-) policies define the extent of trust for each positive and negative range. The (0) value is equivalent to the statement "zero trust", neither positive nor negative.

More generally, the trust value depends on the extent of trust. The larger the extent, the more you trust (or distrust). However, within that extent trust (or distrust) is always 100%.

In short, there is no need for the imprecise concept of "partial trust" or "degree of trust". Instead, you limit the extent of trust. What changes in value then is the extent of trust, not its degree. This also corresponds to the real-world use of the concept of trust where people find it useful to limit the extent that they trust a friend. Additional considerations were discussed by E. Gerck ["Toward Real-World Models of Trust: Reliance on Received Information", 1997, op. cit.].
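These three values and the notion of extent can be sketched as follows (the type names and example actions are illustrative assumptions):

```python
from enum import Enum

class TrustValue(Enum):
    TRUST = +1     # trusted according to policy(+)
    ATRUST = 0     # trust value not assigned
    DISTRUST = -1  # trusted according to policy(-)

class TrustAssignment:
    """Within its extent, trust (or distrust) is always 100%;
    what varies is the extent, never a 'degree' of trust."""
    def __init__(self, value: TrustValue, extent: set):
        self.value = value
        self.extent = frozenset(extent)

    def applies_to(self, act: str) -> bool:
        return act in self.extent

# I trust a friend to hold my keys, but not to sign contracts for me:
friend = TrustAssignment(TrustValue.TRUST, {"hold-keys"})
```

Here friend.applies_to("hold-keys") holds while friend.applies_to("sign-contract") does not; the trust value itself never becomes "partial".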


Risk can only be defined after one defines what is at risk, and we consider that what is at risk must be that which is trusted to some extent — otherwise, there would be no risk. In other words, trust "makes" and risk "breaks" and that is why we first need to see what might be there to "break". Risk has to do with loss and probability of loss but only the loss of what is trusted would affect the system.

The diagram below illustrates a model for the general process of establishing trust in the presence of risk, from its Initiation (Absence of Trust) to the final Verification Decision (Trust or Not Trust?). Trust refinement and risk refinement are shown as two independent internal feedback loops -- the half-circle at the right represents the processes that "make trust" while the other half-circle, at the left, represents the processes that "break trust". To ensure trust, we need to look at what breaks it.

[Diagram: Trust Process (Gerck)]

The purpose of trust refinement is to verify the continuous aggregation of values to the initial considerations and avoid, for example, unverified transitive trust (e.g., if I trust Bob, should I trust Bob's friends?). The purpose of risk refinement is to include in the risk management all the considerations that lead to the verification decision (trust or not trust?) and avoid, for example, unverified associative trust (if I trust Bob, should I continue to trust Bob after I know that Bob trusts Malice -- whom I do not trust?).

Security Properties

In addition to trust and risk considerations, many other security properties are frequently required in IT security systems. The ten most commonly needed security properties are:

  • Usability -- The Most Important Property of an IT system is Usability [Secure Email Technologies, E. Gerck, 2007]. In practice, users would rather use an insecure IT system that is easy to use than a secure IT system where even the help text may look intimidating. The secure IT system has to be easy enough to use when compared with simple, familiar, regular systems -- not when compared with other secure IT systems. If security is too difficult or annoying, users may give up on it altogether. Ease of use is considered here to be a self-evident need in all IT security systems.
  • Trust -- qualified reliance on information, based on factors independent of that information.
  • Access control -- granting access to objects, based on the trusted identity of users; limiting access to system resources only to authorized users, processes or systems.
  • Audit -- maintenance of a historical log of all transactions that can be reviewed to maintain accountability for all security relevant events.
  • Authentication -- corroboration of a credential or claim; the ability to establish and verify the validity of a user, user device or other entity, or the integrity of the information stored or transmitted.
  • Authorization -- conveyance of rights, power or privilege to see, do or be something.
  • Confidentiality -- ensuring that data is not available or disclosed to unauthorized individuals, entities or processes.
  • Integrity -- ensuring that data is not altered or destroyed in an unauthorized manner.
  • Non-repudiation -- the ability to prevent the effective denial of an act; the ability to prove the origin and delivery of transactions.
  • Process validation -- the ability to periodically validate the correct operation of the solution's processes and security functions.

Additionally, specific security requirements and properties may include:

  • Security management, a defined process to perform system security functions such as audit, credential management and configuration management. Security management must be based on a security policy, as the set of laws, rules, and practices that regulate how an enterprise manages, protects, and distributes sensitive information.
  • Availability, ensuring that system functionality and data are available as required to meet operational requirements; ensuring high quality of service.
  • Control delegation, allowing local control of a sub-domain within a domain.
  • Spoof prevention, preventing a user from relying on false information that masquerades as legitimate.
  • Identification, to link a credential to an entity's name; identify and authenticate the identity of users prior to granting them the appropriate system access.
  • Single sign on, as an access control method that handles multiple user authorizations and/or sessions, possibly at different servers, while authenticating a user only once.
  • Prevent the unauthorized disclosure or dissemination of data.
  • Prevent unauthorized modification of system components.
  • Provide mechanisms and procedures to detect system failure and prevent degradation of security processes.
  • Assure that each processing site shall be authorized for the information being processed.
  • Assure that each processing site shall be accessible only to individuals authorized for the information being processed at that site.
  • Provide contingency plans that will be maintained for emergency situations and disaster recovery.
  • Assure that configuration management shall be enforced to control all physical, hardware, software, firmware, and documentation changes.
  • Assure that system administrators and users shall be trained to operate the system in a secure fashion in accordance with a security policy.
  • Assure that the system shall record and provide tools for the analysis of significant security events. 
  • Control access to audit analysis, which shall be available and controlled so as to be performed only by authorized system administrators. 
  • Assure that audit logs shall be maintained for the duration of system operation and archived to conform to corporate, county, state, federal and international requirements regarding the preservation of records.

Provision of what we may call the basic IT security properties listed above, along with additional properties that may be needed according to operational and regulatory requirements, should also include reasonably independent and trustworthy certification and accreditation procedures to assure their effectiveness -- for example, producing the required evidence to support an informed decision on whether to grant approval to operate the solution with an acceptable level of residual security risk.

Centralized User Administration and Control Delegation

Centralized user administration is important in network environments, for a variety of reasons:

  • It is commonly used for embedded network devices such as routers, modem servers, and switches, which do not have the resources to handle user authentication and authorization by themselves. 
  • Many ISPs and enterprises have thousands or even millions of users. Users are added and deleted continuously throughout the day, and user authentication information changes constantly.
  • IT managers desire centralized user administration; otherwise, conflicts appear and the system is no longer coherent.
  • It provides a single point where each user's identity and authorizations are defined and managed, which is necessary to enable SSO (Single Sign On).
  • It can eliminate multiple usernames and passwords for separate applications as well as eliminate redundant, inconsistent security policies dispersed across different Web servers or applications.

In short, with centralized user administration, security policies can remain consistent and easy to manage and audit. Centralized administration of users is, thus, a common operational requirement in network environments.

But the need for centralized user administration does not mean lack of delegation. Delegated or distributed administration is a requirement for medium-size to large enterprises, where administrative domains within organizational unit or divisional lines are common -- i.e., it is not realistic to think that one group can be responsible for administration for the entire enterprise. Delegated administration is also necessary for B2B/partner e-business models (e.g., a partner company administers their own employees within a restricted administrative domain in your infrastructure). Delegated administration is frequently implemented by means of control delegation, defined as "allowing local control of a sub-domain within a domain."

The need for centralized user administration also does not mean a need for centralized control in the security solution that provides it. In fact, we need to avoid the seemingly desirable scenario of a single point of control, which E. Gerck ["Toward Real-World Models of Trust: Reliance on Received Information", 1997, op. cit.] has pointed out to be a single point of failure.

Thus, to achieve central user administration and to provide control delegation, an IT security solution should use a distributed, highly non-local system, transparent to the users of the system.  In short, one needs a decentralized central control system, where different authority subdomains can be activated, suspended and revoked by the central administration.
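One way to picture such control delegation is a small state machine for authority subdomains (the class, state names, and lifecycle rules below are illustrative assumptions, not the NMA implementation):

```python
class Subdomain:
    """An authority subdomain that central administration can activate,
    suspend, or revoke; in this sketch, revocation is final."""
    def __init__(self, name: str):
        self.name = name
        self.state = "suspended"

    def activate(self):
        if self.state == "revoked":
            raise ValueError("a revoked subdomain cannot be reactivated")
        self.state = "active"

    def suspend(self):
        if self.state == "revoked":
            raise ValueError("a revoked subdomain cannot be suspended")
        self.state = "suspended"

    def revoke(self):
        self.state = "revoked"

partner = Subdomain("partner-hr")
partner.activate()  # local control granted within the central domain
```

Each subdomain administers its own users, while only the central administration drives the state transitions; that keeps user administration centralized without concentrating control of the whole system in one component.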

Risk Models

The conventional risk model used in IT security is that of a link chain.  The system is seen as a chain of events, where the weakest link is found and made stronger. But this approach is bound to fail, creating what I have called "The Fort Knox Syndrome".

The Fort Knox Syndrome

Otherwise known as the United States Bullion Depository, Fort Knox is a fortified vault in Kentucky that can nominally hold 4,578 metric tons (147.2 million oz. troy) of gold bullion. As you might imagine, security in and around the building and its grounds is impressive.

According to Bob Deutsche [Cloud Security Frameworks: The Current State, 2012], given the stratification of skills, responsibilities, and budgets today, it should not come as a great surprise that for most organizations, security means building the equivalent of a Fort Knox-type fortification around their platforms and, by default, their application portfolio. In a blog written by Billy Cox [referenced in Bob Deutsche op. cit.], one might also envision this defense as a string of very strong fortifications, erected around your platforms or line of business units, which are purpose-built to keep the bad guys out.

The Fort Knox Syndrome can be diagnosed in the information framework used by many enterprises and government agencies today. Typically, at a platform level you see connected components, and each component is protected by its own firewall. Within each component, nobody is really concerned about how their firewall impacts any other components of the system (although, because they are connected, an impact will certainly happen). Each component of the framework demands some (perhaps different) level of security compliance and ultimately has the right to determine who will -- or will not -- play within their domain.

Inside each component, there are control elements representing, for example, identity, policy, and compliance enforcers. Their functions may be provided in different degrees for each component. For example, the identity enforcer in a particular component may have just a simple device authentication capability. The policy enforcer can represent a set of rules defining who can have access, the conditions, and under what criteria a device or its user is granted access. The compliance enforcer represents policies such as maintenance of patch levels, firewall uptime, anti-virus definitions, and configuration vulnerability throughout the infrastructure. In a centralized IT shop, as commonly used in enterprises today, it is likely the data center component of this framework that drives compliance of the associated elements in all other components. It is also possible, as seen in government sectors and large enterprises, that external control elements are used to drive and/or verify compliance of all elements in any component of the framework.

But even with this simple model, problems are plentiful [Bob Deutsche, op. cit.]. For example, when was the last time that your organization experienced some type of security glitch when one component was updated and perhaps not fully tested against the umbrella security framework? The more federated your framework becomes (via a cloud ecosystem, for example), the more likely the problems that the Fort Knox Syndrome may generate.

Thus, "The Fort Knox Syndrome" model fails to solve the problem of how to provide a secure IT system because in this model the entire chain can still be compromised by failure of one weak link -- even if that link is made stronger.

And the addition of any link, even if very strong, would not make the system less vulnerable, and might make the system more vulnerable because the security of the system would still depend on the weakest link (which might be the newest link). Further, such solutions are actually based on the impossible assumption that "no part will fail at any time" -- but if that critical part fails, the system fails.  In short, there is an inevitable single point of failure, which is that weakest link, and making the link stronger will not make the single point of failure go away -- at most it may shift it.
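This weakest-link behavior is easy to check numerically under a serial "link chain" model (independent link failures are assumed here for simplicity):

```python
import math

def chain_failure_prob(link_failure_probs):
    """A serial chain fails if ANY link fails:
    P(fail) = 1 - product(1 - p_i)."""
    survive = math.prod(1 - p for p in link_failure_probs)
    return 1 - survive

weak_only = chain_failure_prob([0.10])          # about 0.100
with_strong = chain_failure_prob([0.10, 0.01])  # about 0.109: adding even a
                                                # very strong link raises the risk
```

The chain's failure probability is bounded below by that of its weakest link, so hardening that link only shifts the single point of failure; it never removes it.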

The design followed by NMA uses multiple links arranged in time and space; these links form a special manifold of closed control loops, which we call a meshwork, under the principle that every action needs both a trusted introducer and a trusted witness. We discuss this principle as the "Trust Induction Principle" in the Trust section of this paper.

Finally, the NMA meshwork system design closes the "loop of trust" and forces all endpoints of the transactions to be under the control of a single authority (e.g., the manager or a role fulfilled by more than one person using split keys), without forcing a single point of failure into the system. Of course, that single authority can become a single point of failure at the management level, which is a meta-level to the system, but this is solved in the NMA solution by applying the meshwork system to the management level itself. 

ABC Multi-Risk Design

In terms of risk models, the closed-loop meshwork system used by NMA implements the Multi-Risk Design by E. Gerck [Voting System Requirements, page 12, 2012], exemplified by the following simple equation that formally represents risk in first order analysis:

 A = B * C 

where A = average amount lost, B = probability of failure, and C = value at stake.

Clearly, one can equally well call "risk" either A or B or C. Each choice leads, however, to a different risk model:

  • If risk = A, this means that one is talking about the security of the system as a whole, as an average over a certain period of time, in terms of expected loss. 

  • If risk = B, this means that one is talking about an individual transaction in terms of the probability of loss in one event, taken over a significant ensemble of events. 

  • If risk = C, this means that one is talking about risk as a total loss. This could be used to describe risk when loss of life or other irreparable damage may occur in a single event; or to describe risk over a period of time long enough that repeated failure events erode all that was at stake (with nothing left to lose, the loss -- and hence the risk -- ends there). 
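The three readings of the equation can be made concrete with a toy calculation. All figures below are hypothetical, chosen only to show how the same equation yields three different "risk" numbers:

```python
# Toy illustration of A = B * C with hypothetical figures.
B = 0.001        # risk as probability: one failure per 1000 transactions
C = 50_000.0     # risk as total loss: value at stake in a single event
A = B * C        # risk as expected (average) loss per event
print(A)         # 50.0

# Over a large ensemble of events, the average observed loss per event
# approaches A, while any single failing event still loses the full C.
n_events = 10_000
expected_total_loss = n_events * A
print(expected_total_loss)   # 500000.0
```

The same numbers read three ways: a 0.1% chance per transaction (B), a 50,000 exposure per event (C), and a 50-per-event expected loss (A).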

In the closed-loop meshwork system used by NMA's End-to-End secure IT system, the risk model that considers risk = C is called "fail-safe" for any risk beyond C, because C is all that can ever be lost in the system, even if every link fails.

Thus, rather than seeking "infinite protection" or "absolute proof" by means of one link (which is clearly impossible), NMA's E2E can provide a measure of protection as large as desired by using a suitably large, open-ended meshwork of links arranged in closed loops, each link individually affording some "finite" protection and all links collectively contributing to higher orders of integrity in closed loops of trust. It is the adequate combination, and interoperation, of the security properties that provides the required resiliency of the secure IT system.
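The contrast with the serial chain can also be sketched numerically. When a loss requires an attacker to defeat every one of several independent, mutually reinforcing controls, the compromise probability is the product of the per-control probabilities, so even links of modest individual strength compound quickly. The figures below are illustrative only, and the independence of the controls is an assumption of the sketch:

```python
# Illustrative sketch of mutually reinforcing controls (hypothetical
# figures). A loss requires defeating ALL controls, so the compromise
# probability is the product of the per-control defeat probabilities,
# assuming the controls fail independently.

def compromise_probability(control_defeat_probs):
    """Probability that every independent control is defeated."""
    prob = 1.0
    for p in control_defeat_probs:
        prob *= p
    return prob

one_strong_link = [0.001]                 # a single "Fort Knox" control
print(compromise_probability(one_strong_link))           # 0.001

four_finite_links = [0.1, 0.1, 0.1, 0.1]  # four individually weak controls
print(round(compromise_probability(four_finite_links), 6))  # 0.0001
```

Four controls that each fail one time in ten already outperform a single control that fails one time in a thousand; adding a fifth would lower the product further, which is the "measure of protection as large as desired" claimed above.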

However, it is important to note that adding channels (even physical channels) can also decrease reliability if adequate design principles are not followed. For example, networks and computers cannot be trusted a priori. Trust is qualified reliance on information, based on factors independent of that information. By distrusting the networks, the computers, the users, and the managers, the NMA E2E solution creates the conditions to define trusted modes of operation for the system. In this approach, trust is developed by initially considering everything to be untrustworthy. Then, by testing given information against factors independent of that information, you decide what you can initially trust. In other words, you doubt until you have proof by means that are independent of that which you doubt. 

Therefore, by developing a system where we initially trust the least number of components, we are able to develop trust in additional components in terms of qualified reliance and risk models that are adequate for the operational assumptions, building a manifold of trust relationships with no single point of failure. The absence of a single point of failure can be extended to the absence of an N-point of failure, so that even if N points fail the system still performs as desired. In short, we are able to trust the system under its operational assumptions even in the presence of failure of one or more components. Thus, by initially trusting the least, we can eventually trust the most.

End-To-End (E2E) IT Security Solution

End-to-End (E2E) IT security is defined as "safeguarding information from point of origin to point of destination in a communication system". E2E may use cryptography and/or other protection means. The vision is that security needs to possess an end-to-end property; otherwise, security breaches are possible at the interfaces, which create gaps.

As it is clear from the previous discussion, authentication and authorization are not enough for this purpose. In providing an E2E solution for IT security and user access control, we should recognize the need to integrate and provide for a number of core capabilities in IT security solutions, including: tamperproof cryptographic credentials; authentication; authorization; centralized user administration; control delegation; access control; session control; no single point of control; least privilege; data confidentiality; data integrity; non-repudiation; spoof prevention; and immediate suspension as well as immediate revocation of credentials. We should also recognize the need to bind a system of trust to IT security solutions, where we need to communicate trust not only machine-to-machine but also human-to-machine and (critically) machine-to-human (why should a human obey a machine?).

Additionally, we need to provide these capabilities in a global, scalable system, supporting from hundreds of users to millions and billions, and which is compatible with the existing infrastructure, with current Internet standards and their evolution as well as backward-compatible as much as possible.

Finally, we should recognize the need to take into account the business drivers, including: shorter and less expensive deployment, development and maintenance cycles; less integration and re-integration with other (changing) products; easy-to-use; tight back-end to front-end integration so that legacy systems can be reliably used; and low cost of ownership.

The technical requirements for an E2E IT security solution can thus be summarized in four points:

  1. integrate core security services;
  2. eliminate known weak or costly links such as password lists, access control databases, shared secrets, and client-side PKI;
  3. avoid the seemingly desirable scenario of a single point of control, which is recognized as a single point of failure; and
  4. bind a coherent system of trust to the solution and to its users.

NMA ZSentry

Following the requirements above for an E2E IT security solution, NMA developed a comprehensive and cost-effective implementation called NMA ZSentry™.

NMA ZSentry enables the ultimate and fail-safe defense against data theft, which is to not have the data in the first place. In IT security terms, ZSentry shifts the information security solution space from the hard and yet-unsolved security problem of protecting servers and clients against penetration attacks to a connection reliability problem that is solvable today.

The NMA ZSentry solution is based on a new cryptographic primitive that uses time-proven cryptographic components and is capable of handling multiple channels of information. The new cryptographic primitive is the Digital Transaction Certificate™ (DTC™), also called the ZSentry Usercode. The ZSentry Usercode is a "nonce" (typically, an unpredictable number that occurs only once; a ZSentry Usercode, however, may be used more than once and still remain unpredictable) and represents both a signature with appendix (a signature method that needs the signed text in order to verify the signature) for its password channel and a signature without appendix (a signature method that can recover the signed text in order to verify the signature) for additional information channels and spoof prevention. This allows user passwords to be truly private, known only by the user, and yet verifiable by an identifiable server, while eliminating secrets shared between the user and a server. In other words, the user -- not ZSentry or your provider -- holds the keys.
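The DTC construction itself is proprietary and is not reproduced here, but one ingredient mentioned above, a credential that a server can verify without keeping a password list or per-user secret database, can be illustrated with standard primitives. The sketch below is NOT the ZSentry/DTC algorithm; it is a generic stateless-token illustration using HMAC, where only one server master key must be protected, and all names and parameters are hypothetical:

```python
# Generic illustration (NOT the ZSentry/DTC construction): a stateless
# credential derived with HMAC. The server holds a single master key and
# can re-derive and verify any user's token, so no password list or
# access-control database of per-user secrets needs to be stored.
import hmac
import hashlib

SERVER_KEY = b"demo-master-key"   # hypothetical; protect in an HSM in practice

def issue_usercode(user_id: str, context: str) -> str:
    """Derive an unpredictable yet server-verifiable credential."""
    msg = f"{user_id}|{context}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()[:16]

def verify_usercode(user_id: str, context: str, code: str) -> bool:
    """Recompute the credential and compare in constant time."""
    expected = issue_usercode(user_id, context)
    return hmac.compare_digest(expected, code)

code = issue_usercode("alice", "2013-06")
print(verify_usercode("alice", "2013-06", code))    # True
print(verify_usercode("mallory", "2013-06", code))  # False
```

The point of the sketch is the architectural property, not the algorithm: because verification is recomputation under one protected key, there is no per-user secret store to steal, which is the "eliminate password lists" requirement stated earlier.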

In contrast to conventional passwords, and as discussed by E. Gerck and V. Neppe ["Take Five" In Internet Security, 2009], dictionary and brute-force attacks on the ZSentry credentials can be made as difficult as desired. Most importantly, the ZSentry Usercode can be cast as a short — even mnemonic — string, allowing users to enter it directly, or as part of a "user name" or "password" in a familiar password control dialogue, for example. This feature increases usability and is particularly useful with cell phones and for backward compatibility. 

In the NMA ZSentry system, the ZSentry credentials provide the security context for On-Demand (Software-as-a-Service) applications, On-Site security appliances, and any service that requires information and/or validation responses, such as identity, policy, and compliance enforcers. Issuance, verification, suspension, revocation, and administration services for ZSentry credentials are provided by ZAuthority™, a distributed component of the ZSentry system.

NMA ZSentry is certified by the U.S. Government to provide an ARRA and HIPAA-compliant EMR (Electronic Medical Records) solution (CHPL Product Number: IG-2482-11-0040), including encryption when exchanging electronic health information (170.302.v) and providing an electronic copy of health information (170.304.f). ZSentry operates in full HIPAA compliance without requiring customers to sign a Business Associate Agreement (BAA), although a BAA can be signed if desired. ZSentry satisfies other regulatory regimes, including for protecting information in transit and at rest, and HITECH Safe Harbor.

The NMA ZSentry technology provides a certified, scalable foundation for integrated E2E IT solutions, and can be readily used in any platform for user access control, credential management, key management, and in applications including secure email, secure SMS, and secure file sharing.



The author is indebted to many people who have contributed ideas, questions, comments, criticism, and motivation for this work. I would like to especially thank Tony Bartoletti, Thomas Blood, Nicholas Bohm, Gordon Cook, Bill Franchey, Einar Stefferud, Larry Suto, Eva Waskell, Peter Williams, Mike Norden, Graham Tanaka, Vernon Neppe, Tiffany Gerck, Michael Hetherington, Allen Schaaf, as well as customers and users.
