Blog

Secrecy And Privacy

By Dr. Alan Radley, 25 Aug 2017

IT is insightful to ponder a little on the nature (and fundamental definition(s)) of secrecy and privacy…

To begin with, let us imagine that you are standing next to someone in a private location, before passing a real-world object to that person, and in such a manner that ensures (for argument's sake) that this same action cannot be overlooked/discovered. Accordingly, it is easy to understand that this act is absolutely private.

However, things are not quite so simple when you pass datagrams (messages, folders, files etc.) across a remote wired/wireless communication system (aka the Internet). In particular, such a data-transfer may be visible and/or exposed to the actions of other programs/actors/people, primarily because it has a public aspect in terms of the visibility/accessibility of the associated communications data. This is because the network itself is public, or open. For example, the packetised data may be visible, the wire/wifi communications may be observable/hackable, and/or the associated Internet traffic could be spied upon in some way.

Regardless of whether or not any exposed or persisted copies exist on the communication system itself (i.e. central copies), one has to admit that on an open network, aspects of the live communication process may be visible to nth parties. Hence such communications are (at least in theory) no longer entirely private/secret, certainly in terms of the existence of any transferred packets, and most probably in terms of other aspects of the copy's form. Ergo, we are forced to conclude that total privacy/secrecy, in relation to the sum total of all aspects of a copy's form/content, is quite simply impossible to achieve for such a digital communication process.

Another problem, in our terms, relates to the mixing-up of the media of storage, transfer and access, in ways that likewise result in aspects of a copy's form being rendered publicly visible/accessible.

Our discussion implies that you (the owner of the copy), plus the system designer(s)/operator(s), must choose which aspects of the copy, and hence of the communication process as a whole, to make secret/private. Certain aspects will, nevertheless, remain public! In other words, security is all about deciding which aspects of a datum-copy can be wholly removed from public view (aka the beholder's share etc.), and which (inevitably public) aspects to protect using locking/blocking/concealment mechanism(s).

We can conclude that a secret/private communication process taking place in a semi-public arena always has public aspects, or facets, regardless of how powerful or impenetrable the protection mechanism(s) may be.

Secrecy Defined

WHAT is secrecy, in-and-of-itself? And how do we keep something secret? What are the fundamental techniques for attaining/securing/preserving secrecy?

Unquestionably, these are fundamental questions (for any society), and answering them can help us to understand secrecy at a deep, and even philosophical, level. Ergo, we wish to come up with a strict definition of the term. In this respect, right away, we notice that it is necessary to protect an item by concealing, blocking and/or locking its specific material form and/or inner meaning from others. In other words, we must prevent other people from finding, contacting and/or knowing the item.

Obviously we can build a protective barrier (i.e. walls) around the item (i.e. place it in a safe/vault), and then create a locked door, being one that requires some form of password/secret-key in order to open. Alternatively, we can prevent any unwarranted person from reaching it by means of blocked/inaccessible pathways. Finally, we could hide the item in a secret location known only to ourselves, one that is, for some reason, difficult for other people to see/find.

But all of this raises the question: what is the common feature of secrecy, and can we identify any fundamental characteristic(s) in terms of being able to attain it by means of a particular method? Put simply, attaining/defending secrecy for any item may be defined as protecting the material/virtual form of a thing, or restricting its contents to the actual owner of the thing alone. In other words, we wish to protect the secrecy of the item in terms of who can see, know and/or change it.

The concept of secrecy is at the same time, and equally, socially defensive (in the broadest possible terms) and socially restrictive (in the narrowest possible terms). Above all, secrecy requires that the genuine entry-method(s), or valid pathway(s), used to reach the item's form/content must be exceptionally well-defended (in social accessibility terms), and must remain so perpetually. Ergo, any and all unauthorised pathways/surreptitious entry-methods must be untenable.

Additionally, authorised entry-method(s) must be of such a form/type/kind that they cannot be attained/guessed/stumbled-upon, or otherwise discovered/used by any unwarranted party/breaching technique (including statistical methods etc.).

In a nutshell, secrecy is the attenuation/whittling-down, or drastic reduction, of unwarranted accessibility options (entry-methods/pathways) for an item, whereby the (relatively scarce) authentic entry-method(s)/pathway(s) remain perpetually out of reach of any and all unwarranted people/actors.
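
To make this definition a little more concrete, here is a minimal sketch in Python of secrecy treated as pathway attenuation: an item counts as secret only when every pathway that can still be traversed is an authorised one. The class and field names (Pathway, Item, tenable and so on) are my own illustrative choices for this post, not terms taken from any formal theory.

```python
# A minimal sketch of secrecy as the attenuation of unwarranted accessibility
# options: an item is "secret" only when every pathway that remains tenable
# (traversable) is an authorised one. All names here are illustrative.

from dataclasses import dataclass, field


@dataclass
class Pathway:
    name: str
    authorised: bool   # a genuine, well-defended entry-method
    tenable: bool      # can this pathway actually be traversed at present?


@dataclass
class Item:
    label: str
    pathways: list[Pathway] = field(default_factory=list)

    def is_secret(self) -> bool:
        """Secrecy holds when every tenable pathway to the item is authorised,
        i.e. all unwarranted entry-methods have been rendered untenable."""
        return all(p.authorised for p in self.pathways if p.tenable)


# Example: blocking the one surreptitious route restores secrecy.
doc = Item("design-notes", [
    Pathway("owner passphrase", authorised=True, tenable=True),
    Pathway("shared network folder", authorised=False, tenable=True),
])
print(doc.is_secret())           # False: an unwarranted pathway is still open
doc.pathways[1].tenable = False  # block/conceal the surreptitious route
print(doc.is_secret())           # True
```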

What Kind of a Science is Cybersecurity?

By Dr. Alan Radley, 1 Sep 2017

IF cybersecurity is, in actual fact, a science (or could potentially be established as a science), then we must ask: what kind of a science is cybersecurity? In his excellent article "Cybersecurity: From Engineering to Science", Carl Landwehr asked a number of related questions, such as: "What would a scientific foundation for a cybersecurity science look like?" [1].

It is salient to quote from Carl’s article:

Science can come in several forms, and these may lead to different approaches to a science of cybersecurity. Aristotelian science was one of definition and classification. Perhaps it represents the earliest stage of an observational science, and it is seen here both in attempts to provide a precise characterisation of what security means but also in the taxonomies of vulnerabilities and attacks that presently plague the cyberinfrastructure. A Newtonian science might speak in terms of mass and forces, statics and dynamics. Models of computational cybersecurity based in automata theory and modelling access control and information might fall in this category, as well as more general theories of security properties and their composability… A Darwinian science might reflect the pressures of competition, diversity, and selection. Such an orientation might draw on game theory and could model behaviours of populations of machines infected by viruses or participating in botnets, for example. A science drawing on the ideas of prospect theory and behavioural economics developed by Kahneman, Tversky, and others might be used to model risk perception and decision-making by organizations and individuals.

As I now examine Carl's list of the different kinds of science (some time after having developed my own theory of cybersecurity), I notice that the approach presented in my book matches most closely with an Aristotelian science (i.e. one that focusses on definition, classification and establishing taxonomies plus topic/concept 'maps'). I agree with Carl when he says that he does not believe it is possible to develop a science of Information Security without first establishing an observational science that identifies what we are dealing with in the first place (i.e. recognition of particular security-related things/events and subsequent definition of object/process classes). Ergo, we become able to know what kinds of phenomena to look for, measure, model and control.

However, elements of the other kinds of science described by Carl are also evident in my approach, especially in terms of a Newtonian science that places emphasis on fundamental objects, processes, forces and their composability. In this respect, note the emphasis upon, and identification of, the different kinds of foundational 'building blocks', or axioms, for a science of cybersecurity.

In my book, The Science Of Cybersecurity, I seek to establish a comprehensive definition of Security, for a private, secret and/or open datum, as the preservation of social accessibility status. We name this Socially Secure Communication. This principle is, in fact (or should be), the central axiom of Information Security (communication aspects); and it rests upon a set of underpinning conceptual definitions as follows: classification of the fundamental types of datum as secret, private and open; datum-copies as primary, secondary and tertiary; network types as primary, secondary and tertiary; demarcation of datum meanings into metrical, descriptive and selectional kinds; plus definition of system entrance aperture types, identified by the following (often nested) entry methods: physical, virtual and meaning gateways.
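
For readers who like to see a taxonomy laid out explicitly, the following sketch renders the classifications above as plain Python enums, so that each distinction can be referred to unambiguously. The identifier names are my own shorthand for the book's terms, not an official notation from the text.

```python
# Illustrative enums for the conceptual classifications described above.
# Names are shorthand choices for this post, not the book's formal notation.

from enum import Enum


class DatumType(Enum):          # fundamental types of datum
    SECRET = "secret"
    PRIVATE = "private"
    OPEN = "open"


class CopyClass(Enum):          # datum-copies
    PRIMARY = "primary"
    SECONDARY = "secondary"
    TERTIARY = "tertiary"


class NetworkType(Enum):        # network types
    PRIMARY = "primary"
    SECONDARY = "secondary"
    TERTIARY = "tertiary"


class MeaningKind(Enum):        # demarcation of datum meanings
    METRICAL = "metrical"
    DESCRIPTIVE = "descriptive"
    SELECTIONAL = "selectional"


class GatewayType(Enum):        # entrance apertures (often nested)
    PHYSICAL = "physical"
    VIRTUAL = "virtual"
    MEANING = "meaning"
```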

Building upon these axioms, we can establish a set of Absolute Security metrics [ref. Absolute Security: TARGETS/METHODS], and accordingly fully prescribe the various classes/types of cybersecurity: system attack surfaces/vectors/methods, system-access-gateways/entrance-apertures, vulnerabilities, plus defensive methods and protective measures.

Overall, I would suggest that the more than one hundred new security-related definitions, axioms, concepts and principles introduced in The Science Of Cybersecurity amount to a logically true, consistent, integrated and coherent set of natural laws for cybersecurity in general. Or, at the very least, it is my hope that the book contains a few salvageable definitions, axioms, principles and/or other ideas that may be re-used in the development of a future (yet to be envisaged) and far more comprehensive Science of Cybersecurity / Information Security.

Carl ends his article by putting forward the interesting idea that cybersecurity might be more akin to an engineering school that develops and teaches a Science of Design, whereby teachers/theory can offer only useful guidance, and no set of hard and fixed rules, to the developer of a security system. Sensibly, therefore, we allow space for a creative approach to security system design, in order to confidently stay ahead of, mitigate and repel all human/machine opponents and hacks.

References

[1] Carl Landwehr, "Cybersecurity: From Engineering to Science", The Next Wave – The National Security Agency's Review of Emerging Technologies, Vol. 19, No. 2, 2012.

In Defence of Absolutes

By Dr. Alan Radley, 10 Sep 2017

YOU MAY sometimes hear a security professional say something like: 'in the field of information security there are no absolutes, except that there are no absolutes', or words to that effect. Perhaps these same people do not realise that this statement is, in actual fact, an example of circular reasoning: a logical statement that restates the premise as the conclusion. In any case, a few eminent security experts have expressed objection to the word 'absolute' in our book's title.

What I think these experts are alluding to is the impossibility of making any absolute security predictions, or of attaining perpetual, ever-lasting security protection for information that is stored/transferred by means of networked computers. Such an interpretation is correct, because security is (and always has been throughout history) an arms race between those who seek to protect information and those who seek to circumvent those protections. Today's best ciphers will doubtless be trivially broken at some point in the future. However, it seems that the dissent surrounding the word "absolute" is due to varied interpretations of what it means. In this post I would like to fully define "absolute" in the context of security literature.

Let us begin by assuming that the term 'absolute security' alludes to a system that is permanently impregnable for all time (i.e. it can never be broken into).

That is not what I am claiming here as the meaning of the term absolute security, and for several reasons. Earlier (in my book, The Science Of Cybersecurity), I defined security as protection of the Privacy Status of an item, and absolute security (for a private copy) as single-copy-send, that is, no access whatsoever for unsafe actors. Absolute security is thus a kind of ruler or metric, one that indicates/reflects the specific Accessibility (or Privacy) Status of the datum-copy.

An item is absolutely secure when it is, at the present epoch, out of reach of any unsafe actors and there are no illegitimate copies. Hence, I would suggest that absolute security is a measurable protective status, and one that does not have to be possible, or permanent, in order for it to be a valid goal or metric in relation to a copy. Accordingly, we have neatly moved the emphasis away from systems and onto datum-copies, in accordance with the basic theme of the present site (security = protecting copies). However, any copy-related insecurity must be the result of system failure(s), so how/where do these problems arise?
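
To show how absolute security can be read as a status metric evaluated at the present epoch, rather than a permanent guarantee, here is a small illustrative Python check. The field names and the structure of DatumCopy are assumptions chosen for the example, not a prescription from the theory.

```python
# Illustrative sketch: absolute security as a measurable status at the present
# epoch. A private datum-copy satisfies the single-copy-send condition when no
# unsafe actor can reach it and no illegitimate copies exist. Field names are
# assumptions for this example only.

from dataclasses import dataclass


@dataclass
class DatumCopy:
    copy_id: str
    reachable_by_unsafe_actors: bool  # any tenable pathway open to an unsafe actor?
    illegitimate_copy_count: int      # copies held outside the intended sender/receiver


def is_absolutely_secure(copy: DatumCopy) -> bool:
    """A status metric, not a permanent guarantee: the result may change
    as the situation of the copy on the network changes over time."""
    return (not copy.reachable_by_unsafe_actors
            and copy.illegitimate_copy_count == 0)


print(is_absolutely_secure(DatumCopy("msg-001", False, 0)))  # True: single-copy-send holds
print(is_absolutely_secure(DatumCopy("msg-002", False, 1)))  # False: a stray copy exists
```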

Evidently, computing systems are extremely complex, varied and changeable, and many uncertainties can apply to a datum-copy existing in a networked computing environment (even an ostensibly protected one). It follows that the privacy status of any item on a networked computer system is a situation-specific property that may (quite possibly) change over time. However, this does not mean that we should adopt an attitude whereby we simply shrug our shoulders whenever a leak/data-breach occurs, and then make the excuse that when it comes to security there are no absolutes, nor even idealised metrics with which to judge security status. Systematic security is therein misrepresented as (forever) a contradiction in terms, something not even worthy of comprehensive definition and/or accurate measurement.

Inevitably, security experts encourage us all to install protective mechanisms, but often without providing the concordant means to adequately adjudge/measure whether they are, in fact, working. It would seem essential, first of all, to define the security goal for a private datum-copy, namely absolute security (i.e. single-copy-send for a specific communication instance). A clear security target is required in order to have any chance of discovering whether we have attained it, or lost it, and why! Surely we cannot be expected to just passively await the arrival of evil tidings in the form of system exploits, without full knowledge of what the key goal/measure of communications security is (single-copy-send).

Unsurprisingly, such a 'no-absolutes' attitude foreshadows a built-in excuse for the designers of security systems. It gives them a get-out clause, because they do not have to explain why or how the security targets failed: there are none, or at least no highly specific ones like single-copy-send, complete with the appropriate logical happenings. We may conclude that successful exploits are not the result of a lack of absolutes in security; that is a wholly illogical argument, because it renders uncertainty/lack-of-knowledge/poor-defences as a valid excuse for failure, whereby we put the symptom ahead of the cause. Rather, we must accurately define continuous security as the goal, which is itself a type of absolute; for how else would you define successful protection of privacy but as a kind of temporary permanence to be constantly achieved?

Please note that I am not claiming here that we cannot have zero-day exploits, or unknown-unknowns in terms of system design/operations, but rather that we should wake up and smell the gunpowder. We must seek to identify bona fide explanations for our security failure(s), and not hide behind logical conundrums/meaningless mantras. Rather, we embrace the truth: it is a complete lack of precise, logical and measurable security targets that holds us back. Accordingly, we hereby define: A) The absolute security method(s) for a communications system as consideration of every aspect of security to produce an all-round system that works coherently as a whole against all types of attacks, using the full gamut of known defensive techniques.

We do not mean that the system is permanently impregnable for all time (i.e. that it can never be broken). Absolute security is a (potentially) attainable ideal, with a robust theoretical footing to back up its practicality and achievability. We also provide a second, related definition: B) The absolute security target for a private datum-copy is defined as single-copy-send, whereby it is the communications system's absolute security method(s) that help to deliver the same. Note that both definitions are ideal status metrics to be achieved, and not permanent features that somehow self-perpetuate.

In conclusion, we need absolutes, and the concept of absolute security, not because they describe a naïve, dream-like state of system/data safety. We need the target(s) and method(s) of absolute security because they are idealised goal(s), or assurance objective(s), and reflect the very status values against which we seek to measure our success and/or failure. We could choose another grouping of words to represent the goal of continuous security (i.e. comprehensive security). Nevertheless, the underlying security metric is the same: a system that strives towards ideal and (hopefully) attainable security protection for our private information.