As stated, our purpose – in proposing the development of a World-Brain – is to allow humans to think better thoughts, and to generate more useful, salient, practical, true, informative and relevant ideas – ones that (hopefully) lead to a better life for all. But we do not intend to limit our conception of the World-Brain merely to metaphysical thoughts; rather we desire the UKM to be just as much about physical processes, actions and real-world event/data gathering. Ergo we embrace the Internet of Things (IoT).
Our assumption here is that the UKM will be a comprehensive perceiving/thinking/acting tool that enhances the clarity of our views of real-world objects, events and processes; and so it is to be a universal recorder/planner/organizer for datums and real-world social actions as much as it is for human thoughts alone. The UKM shall be a faithful reflection of the full contents of all three of Popper’s ‘worlds’ – the natural/physical world, the mental world, and the world of products of the human mind – improving individual plus collective intelligence; a distributed mind ‘amplifier’ that takes inputs from everywhere and anywhere, but importantly unites the same in order to provide coherent, integrated and re-combinable perspective(s). The UKM is to be a vast agora/arena of all possible/potential/true: knowledge/datums/theories/events/opinions/processes/happenings – being an accurate reflection of all creation.
The UKM provides rapidly-configurable, vastly-informative, fine-tuned and comprehensive ‘Knowledge-Windows’ – and with respect to all human knowledge.
But how specifically is it possible to improve our thinking processes – individually and/or collectively? Evidently we already do so by use of all previously developed thinking, perceptive and communication aids/tools/technologies, which enhance our natural capabilities in this respect, whilst sometimes adding new super-efficient data capture/storage/retrieval/sharing capabilities (e.g. ones that overcome the limitations of space/time). But why do we not see a clarion-call for improved thinking methods? And what is the evidence that such an improvement is even possible to achieve (individually and collectively)? Is the UKM a mere pipe-dream?
Perhaps the ‘patternist’ view of thinking processes provides a useful perspective here. The patternist viewpoint is basically that thinking is solely a process of reflecting ‘patterns’ of meaning/relationships that are – for example – present in nature. Ergo captured pattern-images enter into human mind(s) (via direct capture, cognition, intuition, and communication with others and/or media systems) – for recording/analysis and ultimately to provide understanding.
An example thinking process is where a particular pattern – for example the idea of a car – is comprised of a series of universal/particular structures of partitioned/aggregated ‘ideas’ – linked together usefully. Patterns within the mind may – more or less accurately – resemble real-world aspect(s), or the objective pattern(s) that one is trying to map. Accordingly, major thinking/memory tools exist to aid in this ‘patterning’ process – and these are diverse, ranging from: A) generalized theoretical symbolic/semantic structuring systems within language, mathematics, science, logic, culture etc; to: B) physical or virtual recording/output media and communication aids including books, calculators, telephones, computers, mobile phones etc.
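As a hedged illustration of the ‘car’ example above – with all names and structures being our own illustrative assumptions, not a prescribed scheme – a pattern can be modelled as a structure of partitioned/aggregated sub-ideas, plus links to more general (universal) ideas:

```python
# A toy sketch: a pattern as a structure of partitioned/aggregated
# 'ideas' linked together. All names here are illustrative assumptions.

pattern = {
    "car": {                                      # the aggregate (universal) idea
        "parts": ["engine", "wheel", "chassis"],  # partitioned sub-ideas
        "is_a": "vehicle",                        # link to a more general idea
    },
    "wheel": {"parts": ["rim", "tyre"], "is_a": "component"},
}

def sub_ideas(name, patterns):
    """Recursively unfold a pattern into all of its aggregated sub-ideas."""
    parts = patterns.get(name, {}).get("parts", [])
    result = list(parts)
    for p in parts:
        result += sub_ideas(p, patterns)
    return result

assert sub_ideas("car", pattern) == ["engine", "wheel", "chassis", "rim", "tyre"]
```

The point of the sketch is only that a ‘pattern’ is simultaneously a partition (parts), an aggregation (the whole), and a linkage (relations to other ideas).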
In fact the whole history of mankind can be viewed as the collection and development of a gradually increasing number of tools to aid and augment our mind’s ability to capture and manipulate patterns in a variety of different ways. Evidently our minds become systematized, and ‘spread-out’ or ‘projected’ in/onto the real world in terms of – a great number and variety of – patterns or recorded thoughts/data. Accordingly the clarity, degree of correctness and relevance of human thoughts/actions/choices are dependent on the efficiency and effectiveness of our mind-enhancing tools – and hence on the relevance/truth of human thought-spaces.
Ergo a great variety of cognitive aids plus communication processes for manipulating thought patterns have been developed; and many of these mind augmentation tools would seem to be a natural byproduct of language – being a kind of ultimate pattern – in which all of the other patterns may be represented or contained.
Mind reflects the world – using patterns – and the world ‘pattern’ or model in turn makes the mind. Accordingly we end up with a vast range of symbolic and visual/text/image based language(s) and thinking constructs; plus reflective, logical, explanatory and causal systems. Hence the development of science, art, history, and logical plus scientific method proceeds; aided by various media such as printed text, photography, film and digital media/computers etc. In a way, computers are a type of ultimate aid for cognition, to the extent that they are memory banks for anything and everything – being ‘wired’ or networked thinking tools of particularly diverse application. In fact what we are really talking about when we speak of a World-Brain is a new type of computer – admittedly a massively networked, hugely data-populated and well-organized machine that renders information incredibly easy to capture, record, and way-find to items of interest.
Our proposed World-Brain may also be a new type of computer in a separate ‘metaphysical’ sense – in terms of employing a new philosophical and epistemological approach to the organization of knowledge that has universality of relation ‘baked-in’ (ref. hyper-context). Hence the UKM has capabilities that no computer thus far has possessed. However our UKM conception – despite the fact that it is envisaged as a kind of ultimate thinking machine – nevertheless remains that of a computer of one kind or another. Vitally this World-Brain is to be the first example of a new ‘micro-thought’ based grouping (partition/aggregation/linkage) of knowledge – or a single system capable of capturing, storing, organizing – and enabling humans to efficiently access – everything known.
Alan Turing and the Computer
Alan Turing first established the basic concept of a computer or universal thinking machine, defined thusly:
The digital computers… are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking, there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete-state machines… This special property of digital computers, that they can mimic any discrete-state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case… A digital computer can usually be regarded as consisting of three parts: (i) Store. (ii) Executive Unit. (iii) Control. The store is a store of information, and corresponds to the human computer’s paper… In so far as the human computer does calculations in his head a part of the store will correspond to his memory. The executive unit is the part which carries out the various individual operations involved in a calculation. What these individual operations are will vary from machine to machine… – Alan Turing
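Turing’s description can be illustrated with a toy sketch (our own illustration, not Turing’s formalism): one generic simulator whose ‘store’ is a transition table, whose ‘executive unit’ is the table lookup, and whose ‘control’ is the stepping loop. Re-programming the table – the machines below are illustrative assumptions – makes the same simulator mimic different discrete-state machines, which is the sense in which the digital computer is universal:

```python
# One generic discrete-state machine simulator. Store = the table;
# Executive Unit = the lookup; Control = the stepping loop.

def run(table, state, inputs):
    """Step a discrete-state machine: table maps (state, symbol) -> next state."""
    for symbol in inputs:
        state = table[(state, symbol)]
    return state

# A toggle-switch machine: two states, one input symbol.
toggle = {("off", "press"): "on", ("on", "press"): "off"}
assert run(toggle, "off", ["press", "press", "press"]) == "on"

# The same simulator, "suitably programmed", mimics a different machine:
# a parity tracker over a stream of bits.
parity = {("even", 1): "odd", ("odd", 1): "even",
          ("even", 0): "even", ("odd", 0): "odd"}
assert run(parity, "even", [1, 1, 0, 1]) == "odd"
```

No new hardware is needed for the second machine – only a different table – which is precisely Turing’s point that “it is unnecessary to design various new machines to do various computing processes”.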
In fact, it is no exaggeration to say that everything we have today in the digital world – from the central processing unit (CPU) to the Internet and the Web – emanates from Alan Turing’s idea. Whilst we now have a great variety of different kinds of networked electronic digital devices – from PCs and mobile-phones to tablets and even smart-watches – all of these innovations are simply evolutionary developments patterned after Turing’s basic idea.
The so-called ‘Turing Machine’ was envisaged by Alan Turing in 1936 – as an automatic machine – or an A-MACHINE (Turing’s term) – that merely ‘mimicked’ intelligence or intelligent behaviour, and was not considered by him to be able to think by itself in any way whatsoever. However, in his 1950 paper Turing did predict that sometime around the year 2000 computers would be so sophisticated as to be able to converse with humans and be mistaken for intelligent beings (or humans).
In the over 65 years since that seminal paper was published, Artificial Intelligence (AI) has developed considerably as a field – and to such an extent that IBM’s Watson computer was able to win the Jeopardy knowledge game show on television against excellent human opponents.
Experiments, judgements and opinions as to whether computers can, do, or could potentially think remain hot topics in related fields. And it is true that, on occasion, and in carefully prescribed circumstances, computing systems have passed Turing’s test for ‘human’-seeming intelligence (question-and-answer language-type interrogation of the computing system by humans). Putting aside questions related to the ability – actual or potential – of computers to actually ‘think’ in human-like ways, we note that in this paper we are not concerned with such questions, or with AI. Rather we shall focus solely on thinking as a process taking place in a single human brain – or across a collection of multiple brains – and its perfection and/or optimization; hence we consider thinking from a purely humanistic perspective.
In a nutshell, we seek to boost/augment the computing power of the human mind itself – and to provide a facility for more focussed and expansive lines of enquiry. We wish to see networks of association on the global data web; and to perceive previously hidden relations on all knowledge scales/spans. But we must understand human thinking processes in order to do so (see Figures 2 & 3).
Accordingly, we seek an aetiology of thought – that is, to comprehend first of all what the process of thinking is – because it would seem evident that one cannot improve something if one does not have at least some understanding of the nature of what one is trying to improve. Aetiology is the philosophical study of causation; that is, speculation on the causes of phenomena. In the present context aetiology relates to the study of: where thoughts come from, what they mean, how they form, combine and interrelate, and where they go to – plus how they are best used – individually, collectively and socially (or responsibly).
Our ambitious goal (with the UKM) is to render possible a comprehensive map for all the items/mechanism(s) present in all the territories of human thought/action; being a type of universal Memory Palace – and one aided by standard human capabilities such as perception, cognition, vision, language etc. Only here expanded to include all kinds of thinking aids, data-capture mechanisms, technologies and media types etc (we embrace all current knowledge tools/engines).
Computers for Thinking
Despite the fact that our top-level combined individual/collective ‘thinking’ comprehension aim in this respect is a very tall order – we shall nevertheless still attempt the same, in order to elucidate the types of questions, problems and topics that the developer(s) of a World-Brain would need to address in the first instance.
But lest we forget – an individual thinking act has distinct purposes/goals, and normally is focussed on a specific ‘object-of-thought’ or ‘object-of-examination’ (e.g. an item of perception, imagination or reality). Hence focussed cognition helps us to gather relevant information conceptually and/or to form new concepts applicable to specific situational context(s); or simply just to understand and act, and so to predict/control natural objects and events. Put simply, goal-oriented thinking gives mankind the necessary power to study the past, control the present, and shape/predict the future.
Embodied/contextualised thinking is how we humans make and understand our realities – whether real-world resident or imagined. Accordingly, what is desired is a comprehensive theory of thinking processes – or at least a flexible ‘container’ for all possible theories/methods – in order to be able to produce a UKM that can perfect both real-world processes and outcomes in this respect.
The desired thinking theory/procedures/method(s) will consist of ideas/elements/structures/processes that are borrowed/extracted from elsewhere; but it may also contain some new elements. I shall attempt to provide sources for ‘borrowed’ ideas wherever possible (or leave room for such links/annotation(s)); but overall I seek here to collect together any and all useful/compatible (but fundamental) techniques related to thinking tools/methods/procedures – and familial concepts – obtained from everywhere and anywhere. Our study of human cognition initially presents itself as a list of axioms/definitions or logical ‘atoms’ (see end-notes).
And we shall attempt to show how these items are supposed to work together on a UKM in order to form a comprehensive, logical, self-consistent and inter-related scheme of perfected human thinking processes. Where a useful thinking-related item/concept cannot be made compatible with the overall scheme – then it shall be listed at the end for possible inclusion later on.
My aim is a comprehensive – but practical – theory of human thought that would take place with the aid of our hypothetical World-Brain, being one that encompasses/condenses down all other theories (where possible), plus can represent any possible thought-structure both flexibly and with great clarity. Where theories-of-cognition cannot be found to be compatible with one another – then we shall leave room in our system to accommodate competing or contradictory theories wherever possible. No – scratch that: our UKM must be fully compatible with all theories, thinking systems/tools/representations etc. Ergo we need: Centralized Classification Systems; plus organizing, consolidating, and uniting schemas/templates/mechanisms – built into the UKM – to bind everything together.
Perhaps we need to go back to a more Aristotelian ‘Tree of Knowledge’ approach. Here we can differentiate the underlying theory of representation, storage and access (the A-Machine) – the physical aspects of the system or brain – from the actual virtual information that is organized/managed centrally (i.e. metaphysical information – the B and C Knowledge Machines – see later). Our UKM must itself consist of an operational theory that is fully specified in both aspects: physical implementation and logical/working structure. I know that many others have sought the same goal, and had years of troubles, but that does not put me off, because the task is a true one: understanding/modelling/perfecting how, why, and by what methods we humans think. QED.
Alongside its emphasis on thinking procedures, the present paper places equal emphasis on human knowledge – its capture, representation and retrieval. Of necessity, we find ourselves dealing with that branch of philosophy known as Theory of Knowledge. But once again, there is no one universal theory of knowledge – rather an immense number of rival theories that deal with the same subject matter. Accordingly it is impossible to avoid controversy with respect to related organizational constructs, or to ignore controversial opinions/questions with respect to concepts and ideas present in any specific knowledge classification or representation scheme.
A theory of knowledge must also be a theory about the range, depth and limits of knowing – and it patently must allow the objects of knowledge to be: known, considered, compared, judged and thought-about. Ergo the mental operations of perception, belief, memory and judgement are preeminent plus interrelated activities – ones that impinge on knowing, believing, truth/falsehood and acting. Accordingly our World-Brain must aid these same processes – for all facts, variants and claims gathered from everywhere and anywhere – if it is to be considered a success.
It is tempting to think that building a Universal Knowledge Machine (UKM) would be a relatively straightforward procedure – and that one might (for example) just go ahead and build an ordinary networked database, but then optimistically make it far larger and more efficient in terms of factors such as speed and remoteness of connection plus ubiquity of access etc. This is really what we got with Tim Berners-Lee’s World Wide Web – which is of course a vast improvement over previous systems, in that there is near-unlimited storage room for vast amounts of superficially linked documents, images and videos etc. However experts have noted the limitations and systematic/organizational failures of the ‘Web’ (ref. it ignores PMEST facets, lacks high-level overviews, and misses semantic depth/breadth).
It is true that the Web makes items more accessible in one way – in that they can be readily stored on a distributed network; but items are easily lost, links break, and items often become ‘islands unto themselves’.
In truth, when seeking an item of knowledge on a useful system – one seeks to ask questions or to engage in activities that enable people to:
- FRAME the Item – Identify / Perceive / Delimit / Partition a Thing (Thought/Datum/Object/Process).
- FILE the Item – Save / Remember / Store a Thing (Thought/Datum/Object/Process).
- ORGANIZE / LINK the Item – Contextualize / Form Relations and Aggregations/Partitions to/with other Thing(s).
- FIND the Item – Efficiently Locate / View / Extract a Thing (Thought/Datum/Object/Process).
- EDIT the Item – Effectively Add-to / Delete-from a Thing.
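The five activities above can be sketched as a minimal item store; the class, method and item names here are our own illustrative assumptions, not a prescribed UKM design:

```python
# A minimal sketch of the five knowledge-item activities:
# FRAME/FILE, ORGANIZE/LINK, FIND, and EDIT. All names are illustrative.

class KnowledgeStore:
    def __init__(self):
        self.items = {}   # id -> content        (the FILEd things)
        self.links = {}   # id -> set of related ids (the ORGANIZEd relations)

    def frame(self, item_id, content):
        """FRAME + FILE: delimit a thing and store it under an identifier."""
        self.items[item_id] = content
        self.links.setdefault(item_id, set())

    def link(self, a, b):
        """ORGANIZE/LINK: form a (symmetric) relation between two items."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def find(self, text):
        """FIND: locate items whose content mentions the given text."""
        return [i for i, c in self.items.items() if text in c]

    def edit(self, item_id, content):
        """EDIT: add to / replace a thing's stored content."""
        self.items[item_id] = content

store = KnowledgeStore()
store.frame("car", "a wheeled road vehicle")
store.frame("wheel", "a circular rotating component")
store.link("car", "wheel")
assert store.find("vehicle") == ["car"]
assert "wheel" in store.links["car"]
```

Even this toy version shows why the activities are interdependent: FIND is only as good as the FRAMEd identifiers and the ORGANIZEd links permit.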
The Items captured can perhaps be superficially classified as: A) Facts/Thoughts; B) Objects/Processes of Knowledge; and C) Records of Sensation or Data, or real-world Objects, Events or Processes etc. Note that on most network systems these tasks would often be performed by separate actors – individuals or collectives (people and/or machines) – and at different places/times.
It is salient at this point to ask: is the UKM in any way analogous to a real-world brain?
To begin with, an organism is defined as a collection of living cells grouped together, carrying out discrete tasks for the benefit of the whole (e.g. an animal). Ergo our UKM does seem to be quite organic. A brain, meanwhile, serves as the centre of the nervous system in animals; it is comprised of neurons that communicate with one another to exert centralized control over the other organs of the body. In this respect we can say that our conception of a UKM makes it a brain in the important sense(s) of:
• Centrality of organization
• Interdependence of parts
• Broad distribution of knowledge
It is a whole-systems approach to the generation of a collective intelligence: aka the World-Brain. Overall, in the case of a World-Brain, all of humanity is involved in building, organizing and using the vast amalgamated system of all human knowledge. The UKM is to be a system that is constantly evolving and ever-growing – and so it must be permanent, and the contained knowledge must ‘live’ forever.
So far so good – but how do we proceed to build the UKM? Perhaps we need to develop a little theory – and to help us approach such an ambitious task. But first, it might be informative to delve into the history of the concept of a collective World-Brain.
World Brain Vision
During the early 1920s, the French paleontologist Pierre Teilhard de Chardin (1881-1955) first described the emergence of what he named the noosphere: the network of ideas and communications that eventually envelops the planet. In the same time period, Paul Otlet and H.G. Wells both dreamed of building a universal catalogue of all human knowledge, which would be stored – and accessible – as an enormous cross-linked repository (a type of automatic card-indexing system for all the world’s ideas/datums).
But ambitions of building a Universal Knowledge Machine and/or Memory Palace have a much earlier source – and in fact go right back to Ancient times. The idea of collating or gathering together everything known into a single world encyclopedia or universal knowledge corpus began with Plato and Aristotle’s attempt(s) to group together knowledge of everything in the known universe under consistent laws, frameworks and knowledge classifications. This thrust towards causative description(s) and logical understanding continued with Western attempts during the 17th, 18th and 19th centuries to unify knowledge according to universally applicable laws and principles.
Subsequently, several prominent thinkers took up – and developed/prescribed – the idea of a universal knowledge repository. Scientific and philosophical efforts towards universalism gathered pace during the 19th and 20th centuries, and a number of schemes to build global knowledge catalogues emerged. It is useful for us to briefly review key developments in this respect.
3.1 Encyclopedist / Utopian Origins
The quest for ultimate truth is pre-figured by the quest for a universal source/bank of knowledge that occurred during the Enlightenment; whereupon (for example) the French philosophers Denis Diderot (1713-1784) and Jean le Rond d’Alembert (1717-1783) created the Encyclopédie, or Systematic Dictionary of the Sciences, Arts, and Crafts: a type of general encyclopedia published between 1751 and 1772, which was envisaged as a universal storehouse for all human knowledge.
Diderot stated that his Encyclopédie should:
Encompass not only the fields already covered by the academies, but each and every branch of human knowledge… comprehensive knowledge will give ‘the power to change men’s common way of thinking’.
These are noble and worthy aims indeed; however Diderot’s work on his Encyclopédie suffered many obstacles. The project was mired in controversy from the beginning, and was suspended by the courts in 1752. Just as the second volume was completed, accusations arose regarding seditious content, concerning the editor’s entries on religion and natural law. Diderot was detained, and his house was searched for manuscripts and subsequent articles. Sadly, it would be another two decades, until 1772, before the subscribers received the final 28 folio volumes of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers.
Over 170 years later, and shortly after World War 2, Pierre Teilhard de Chardin (1881-1955) wrote:
No one can deny that…. a world network of economic and psychic affiliations is being woven at ever increasing speed which envelops and constantly penetrates more deeply within each of us. With every day that passes it becomes a little more impossible for us to act or think otherwise than collectively…. we are faced with a harmonized collectivity of consciousness, the equivalent of a sort of super-consciousness. The idea is that the earth is becoming enclosed in a single thinking envelope, so as to form, functionally, no more than a single vast grain of thought on the cosmic scale.
The alluded-to Global-Brain is a metaphor for an emerging, collectively intelligent network that may be formed by all the people of the earth, together with the media, computers, thoughts, knowledge and communication-links that connect them together. It is a vast, complex and (partially) self-organizing knowledge system.
The idea for a Global Brain was first codified in 1935 by the Belgian Paul Otlet (1868-1944), who developed a conception of a network that seems eerily prescient of the World Wide Web:
Man would no longer need documentation if he were assimilated into a being that has become omniscient, in the manner of God himself. To a less ultimate degree, a machinery would be created that would register [at a distance] everything in the universe, and everything of man, as it was produced. This would establish a moving image of the world, its memory, its true duplicate. From a distance, anyone would be able to read an excerpt, expanded and restricted to the desired subject, which would be projected onto an individual screen. In this way, anyone from his armchair will be able to contemplate creation, as a whole or in certain of its parts. 
Knowledge is to be coalesced/aggregated/linked, in accordance with the fundamental definition of a computer – to arrange items clearly in one’s mind (OED, 2nd Ed.) – thus forming a type of universal oracle.
Jumping backwards in time once more, to the late 19th century: William James (1842-1910) said that for thoughts to fuse together (within a mind) there must be some sort of medium; hence he postulated a stream of consciousness where:
Every thought dies away, and is replaced by another… the other knows its predecessor, and finding it warm, greets it saying: ‘Thou art mine, and art the same self with me’.
Implicit here is that thinking happens within an environment – one which supports the process, remembers terms and happenings, and also provides access by the thinker to contained knowledge. But what will be the nature of such an environment (or thought-space) for multiple brains? And how can a new reality be constructed in such a manner that all the correct/true thoughts, choices and actions of mankind are facilitated, and in which any and all thoughts are free to work together for the benefit of man?
In the 1930s, futurist Herbert George Wells (1866-1946) wrote a book entitled World Brain, in which he describes his vision of the World Brain: a new, free, synthetic, authoritative, and permanent ‘World Encyclopaedia’ that could help citizens make the best use of universally accessible information resources – and for the benefit of all mankind. According to H.G. Wells, this global entity would provide an improved educational system throughout the whole body of humanity. He said that the World Brain would be a sort of mental clearing house for the mind: a depot where knowledge and ideas are received, sorted, digested, clarified and compared; and it would have the form of a network, whereby it is the interconnectedness that makes it a Brain.
In World Brain, Wells wrote:
My particular line of country has always been generalization and synthesis. I dislike isolated events and disconnected details. I really hate statements, views, prejudices and beliefs that jump at you suddenly out of mid-air. I like my world as coherent and consistent as possible… we do not want dictators, we do not want oligarchic parties or class to rule, we want a widespread world intelligentsia conscious of itself… and… without a World Encyclopedia to hold men’s minds together in something like a common interpretation of reality, there is no hope whatever of anything but an accidental and transitory alleviation of any of our world troubles.
Wells believed that technological advances such as microfilm could be used towards this end, so that:
Any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica.
Many people, including Brian R. Gaines (1938-) in his book Convergence to the Information Highway, see the World Wide Web as an extension of the World-Brain; allowing individuals to connect and share information remotely.
These concepts surrounding the nature of human thinking in fact relate to far older ideas; for example, in the Indian Upanishads a four-part description of the inner instrument of understanding is supplied. It is stated that the function of mind is association and disassociation – synthesis and analysis – whereby internal and external perceptions are evaluated.
The World-Encyclopedia envisaged by H.G. Wells was likewise a vast machine or mechanism: a university-like global encyclopedia that would collect, organize and make available to everyone a properly contextualised (and approved) bank of all human knowledge.
Such early 20th-century ‘mental visions’ – or predictive foresight – of a World-Brain had to wait until the second half of that same century to have any chance of becoming physical reality.
For both Otlet and Wells, and their World Encyclopedias, the central idea behind the global networks was curation – they were more than just vast repositories for knowledge, but ‘played the role of a cerebral cortex to these essential ganglia.’ And for both men the ‘permanent world encyclopedia’ was partially formed, organized and curated by teams of distributed experts – bibliographic specialists – whose job it was to administer the vast collections/aggregations of everything known.
Ultimately we have a network of experts contributing to a network of knowledge.
Otlet foresaw a global information network that he dubbed the Mundaneum (see Figure 4): a system which would organize descriptions and analyses of items, and in particular automate their synthesis and distribution to a broader public. His global network would one day make knowledge freely available to people all over the world. Already in 1934 he described his vision for a system of networked computers – or ‘electric telescopes’ – that would allow people to search through millions of interlinked documents, images and audio/video files.
Otlet imagined individuals seated at workstations – each equipped with a viewing screen connected to a central repository that provides access to the widest range of resources and connects people to topics of interest.
How revolutionary – workstations foreseen in the 1930s!
This system sounds eerily prescient of the Internet/Web – and all decades before the first digital Personal Computers (PCs) or computer networks. But Otlet’s concept had even more advanced capabilities that no modern knowledge machine has yet been able to implement. In particular, his system would include so-called selection machines able to pinpoint a particular passage or individual fact (i.e. a micro-thought/datum) in a document stored on remote microfilm repositories – all connected together by sophisticated telecommunication links. The entire system was called a réseau mondial: a worldwide network; a kind of analogue World Wide Web.
Otlet’s vision even included a social network, whereby users could ‘participate, applaud, give ovations, and sing in chorus’. The idea included an ability to construct social spaces or collective thinking-spaces around individual knowledge units, much the way hyperlinks are used on the Web. Except here the implementation would seem to include categorized omni-links and manifold links (overlaid links with multiple source/end-points).
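Such ‘manifold’ links can be sketched speculatively as link objects that carry a category plus multiple sources and end-points – unlike the Web’s bare one-to-one hyperlink. All class, field and item names below are our own illustrative assumptions, not Otlet’s specification:

```python
# A speculative sketch of categorized, multi-endpoint ('manifold') links
# overlaid on knowledge items. All names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ManifoldLink:
    category: str                           # typed, unlike a bare hyperlink
    sources: set = field(default_factory=set)
    targets: set = field(default_factory=set)

links = [
    ManifoldLink("supports", sources={"fact-1", "fact-2"}, targets={"claim-A"}),
    ManifoldLink("disputes", sources={"opinion-3"}, targets={"claim-A", "claim-B"}),
]

def incoming(item, link_list):
    """All (category, source) pairs pointing at a given knowledge item."""
    return [(l.category, s)
            for l in link_list if item in l.targets
            for s in l.sources]

assert sorted(incoming("claim-A", links)) == [
    ("disputes", "opinion-3"), ("supports", "fact-1"), ("supports", "fact-2"),
]
```

The design point is that one link object can bind whole groups of items under an explicit relation type – closer to a social/argumentative overlay than to a single-word tag.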
Otlet’s idea would be far more organized, categorized and structured than today’s single-word conversation-tagging methods on social media such as Facebook or Twitter. Put simply, what he dreamed of was a Universal Knowledge Network (see Figure 4).
This all-encompassing knowledge machine would be universal in three senses:
• It would provide universal availability of content, in terms of no spatial/temporal access limitations or other social access restrictions – by means of remote access locations (terminals) that could be located anywhere and everywhere;
• It would allow universal access to all types/levels/relations/scales/magnifications/perspectives of/on content; for example way-finding to any facts, relations, texts, images, videos, data, opinions etc.
• It would facilitate universal knowledge-sharing amongst the general public, by means of an advanced type of social network that allowed facts, opinions, and personal expressions (sourced from anywhere and everywhere) to travel freely and be recorded/linked in every which way imaginable.
Unfortunately, Otlet’s exciting vision of a true World Encyclopedia has yet to see the light of day.
Otlet’s vision of a global knowledge system included: distributed encyclopedias (in the sense that knowledge units are sourced/edited/commented-upon by individuals located all across the network), virtual classrooms, three-dimensional information spaces, social networks, and other forms of knowledge made possible by hyperlinking together vast aggregations of micro-thoughts/datums.
And in a similar spirit to later researchers such as Ted Nelson and Kim Veltman, Otlet did not restrict himself to merely theoretical work; but rather he actually went ahead and began building his knowledge machine. He hoped to address the key problem of managing humanity’s growing intellectual output, by beginning to assemble a great catalogue of all the world’s published information; whereby he wished to create an Encyclopedic Atlas of All Human Knowledge. His broader ideas included a new internationalism, and even the concept of a World City that would serve as the headquarters for a new World Government. For Otlet these (apparently) utopian ideas formed part of his larger vision of worldwide harmony – whereby he saw his global knowledge network as a route to a universal consciousness and collective enlightenment.
Similar ideas had been expressed much earlier; for example by Francis Bacon, in his Natural History of the year 1627; whereby Bacon aimed to classify knowledge itself (ideas), as opposed to merely things. Accordingly he suggested classification of all human knowledge into two main areas: Human-Learning and Divine-Learning. The former he separated into three kinds: Memory/History, Imagination/Drama/Narrative, and Reason/Philosophy/Sciences. Bacon wished that:
All partitions of knowledge be accepted rather for lines and veins, than for sections and separations, and that the continuance and entireness of knowledge be preserved.
Bacon hoped that his classification scheme would foster unity, rather than mere isolated atomisation. He wished for synthesis and objectivism, which he placed over pure analysis and narrow subjectivism. And he was correct; because only by means of centralized classification plus top-level aggregations/partitions/linkages could we possibly organize everything known.
Jumping forwards in time nearly 400 years, what the world got was the World Wide Web, combined with Google-type search engine(s). Such systems are supposedly capable of providing access to anything and everything – but not so in actuality.
The instantiated version of the Web was most definitely not Otlet’s vision. He wanted a global knowledge machine that was fundamentally decentralized, utopian, optimistic and collective. Yet the machine itself was one that allowed centralized organization/coherence (i.e. centralized organization of, but not centralized control over, content), standardized ontologies and systematic coordination of the whole. It included sophisticated mechanisms for the cross-referencing of information sources, tracking versions and fostering collaboration amongst users. In other words it would be a system with a very high degree of central coordination – but with classification schemes, aggregated atomic micro-thoughts/datums that could be classified/related in each and every way, plus social/data groupings that are not really possible on the Web.
Early on, Otlet realized that what is needed is a more coherently organized arrangement of knowledge – and in particular sophisticated new kinds of hyperlinks that (for example) can denote whether or not a particular document ‘agrees’ with another one. Hyperlinks should themselves be omni/multiple/overlaid-links, capable of being categorized; and the user should be presented with a range of options or choices as to which path to follow for each forward/backward link. He wished to be able to cross-reference, collate and aggregate knowledge units together at a range of scales, so as to foster improved synthesis views. Ergo summary tools, visual indexing and ‘zoomable’ interfaces would allow us to drill down from the universal to the particular. Otlet envisaged comprehensive, tight and focussed access to literature that would all be offered by a centrally organized system.
In my view, World-Brain centrality does not necessarily come from a single organization, or a group of powerful overseers. Rather it is possible for the community as a whole to agree on, and implement together, an efficient organizing structure for all. Put simply, we seek a unified whole in terms of the structure of knowledge – but this overarching organization can be developed collectively and be sourced in a distributed ‘opinion-led’ fashion.
During the early part of the 20th century, a number of individuals proffered their own visions of the World-Brain / World Encyclopedia. One such person was Emanuel Goldberg, who first described his Statistical Machine in the year 1927. Goldberg’s invention comprised a document search engine of novel conception, which entailed a new form of microfilm using microdot technology that permitted storage of 50 complete bibles per square inch. But most crucially Goldberg’s system allowed indexing of individual pages/units-of-knowledge – for rapid and automated access – using a combination of specialized optical, photographic and mechanical machinery.
Goldberg’s knowledge (finding) machine included plans to atomize literature in the form of micro-thoughts – which would be individually accessible:
“Facts” or “micro-thoughts” could then be arranged, rearranged and linked in multiple ways using the expanded decimal classification for the especially important and difficult task of linking each chunk with other chunks on the same topic and also those on related topics.
Goldberg’s knowledge machine was ingenious; and specified an individual search query as a pattern of light beams created by passing said beams through punched indexing holes – or meta-data – that ran alongside microfilm images.
Despite the fact that Goldberg’s World-Brain was very much a physical, moving-part-based microfiche system; Goldberg nevertheless was one of the first to recognize that search required combinations of logical – or Boolean – terms to narrow down – or way-find – to items of interest. His system also included many modern search concepts such as: A) counting the number of ‘hits’; B) use of a remote telephone dialing procedure to call up records; plus most startlingly: C) use of light intensity to add non-binary categories / meta-data to knowledge-units.
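Goldberg’s scheme – query light beams passing through punched index holes, with a count of the resulting ‘hits’ – amounts to Boolean subset matching over per-frame meta-data. The following toy sketch (all names and hole assignments are hypothetical, purely for illustration) models a reel of microfilm frames and a query pattern:

```python
# A frame 'matches' when every hole in the query pattern is punched in
# the frame's index strip - i.e. all query light beams pass through.

def matches(frame_holes: set, query_holes: set) -> bool:
    """True when the query's light beams all pass through punched holes."""
    return query_holes <= frame_holes

# A tiny 'film reel': frame id -> punched hole positions (meta-data).
reel = {
    "frame-1": {2, 5, 7},   # e.g. hole 2 = topic A, hole 7 = decade band
    "frame-2": {2, 7, 9},
    "frame-3": {5, 9},
}

def search(reel, query_holes):
    hits = [fid for fid, holes in reel.items() if matches(holes, query_holes)]
    return hits, len(hits)   # Goldberg's machine also counted the 'hits'

hits, count = search(reel, {2, 7})
# hits == ["frame-1", "frame-2"], count == 2
```

The subset test is the whole trick: combining holes in the query narrows the result set, which is exactly the Boolean AND that Goldberg realized search requires.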
Accordingly, Goldberg’s innovation record firmly establishes him as one of the founding fathers of the concept of the Universal Library/World-Brain.
Vannevar Bush and the Memex
In 1945, Dr Vannevar Bush (1890-1974) laid out another related solution in his article As We May Think, published in The Atlantic Monthly. His idea (related-to/developed-from Emanuel Goldberg’s ‘micro-thoughts’ concept) was to make more accessible the bewildering storehouse of all human knowledge; and he suggested giving man access to, and command of, the ‘inherited knowledge of the ages’. Bush wrote:
A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted… our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing… filed alphabetically or numerically (in subject form), information is found by tracing it from subclass to subclass… (therefore)… information can only be in one place! (unless duplicates are used)… Our mind does not work that way, it works by association…. (and thus) …selection must be by association rather than by indexing.
This problem of universal access to all human knowledge remains largely unsolved even in the present day. Bush places his finger right on the central issue of the Global-Brain, as identified earlier by Otlet, Wells and Goldberg, and much later in detail by Nelson, Veltman and others; namely how to freely access anything from anywhere, when there are not really (in truth) any subject categories and everything is deeply connected together or intertwingled [6,7,24,28]. Desired is free and open assembly of, plus easy access to, all thoughts/datums. Bush focussed on two areas: A) manipulation of ideas and insertion into the record; and B) extraction of items, whereby the prime actions are selection/navigation/exploration. Identified here is not only the problem, but its (partial) solution:
When the user is building a TRAIL (of multiple items through the knowledge system), he names it and inserts the name in his code book; whereby… the items are permanently joined… thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space… when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book; and it is exactly as though the physical items have been gathered together to form a new book… ANY ITEM CAN BE JOINED TOGETHER INTO NUMEROUS TRAILS. (He called this device a MEMEX) A memex is a device in which an individual stores all of his books, records and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility… it is an enlarged intimate supplement of his memory… Books of all sorts, pictures, periodicals, newspapers… are all dropped into place… Any given book of his library can then be called up and consulted with far greater facility than if it were taken from the shelf… As he has several projective positions, he can leave one item in position while he calls up another! He can add marginal notes and comments… and allowed is… associative indexing; whereby any item may be caused at will to select immediately and automatically another; this is the essential feature of a Memex; tying two things together.
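The trail mechanism Bush describes – named, ordered sequences of items, with any item free to sit on numerous trails – can be sketched as a small data structure. This is an illustrative sketch only (class and item names are invented, not from Bush):

```python
from collections import defaultdict

class Memex:
    """Associative 'trails': named ordered item sequences, plus a
    reverse index so any item can recall everything joined to it."""

    def __init__(self):
        self.trails = {}                    # trail name -> ordered items
        self.membership = defaultdict(set)  # item -> trails it belongs to

    def build_trail(self, name, items):
        self.trails[name] = list(items)
        for item in items:
            self.membership[item].add(name)

    def recall(self, item):
        """All items joined to `item` via any trail - selection by
        association rather than by (hierarchical) indexing."""
        joined = set()
        for name in self.membership[item]:
            joined.update(self.trails[name])
        joined.discard(item)
        return joined

m = Memex()
m.build_trail("bows-and-arrows", ["turkish-bow", "elasticity", "materials"])
m.build_trail("physics", ["elasticity", "hookes-law"])
# 'elasticity' lies on two trails, so recall joins both neighbourhoods:
sorted(m.recall("elasticity"))
# -> ['hookes-law', 'materials', 'turkish-bow']
```

Note how the reverse index (`membership`) is what makes an item recallable from either direction – precisely the associative joining Bush insists on.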
Vannevar Bush foresaw (with the help of his predecessors Otlet, Wells and Goldberg) the basic features of the World Wide Web (and more)—and especially the basic functions of a hyperlink between items—way back in 1945! He predicted that whole new encyclopedias would appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the Memex and amplified. The envisaged knowledge system, if it could be built, would be a true World Brain, or universal oracle (see Figure 5).
A salient perspective in this respect is provided by Jules Henri Poincaré (1854-1912); who said that the question is (always) not ‘what is the answer’ but ‘what is the question’. It does seem that Bush’s Memex would help man to ask the right questions with respect to anything whatsoever—and in turn rapidly and efficiently find answers. However, having a correct vision, whilst vital, is not always sufficient to bring plans into reality; sometimes one has to wait for technology to catch up.
Perhaps the greatest pioneer of computing was Dr Douglas Engelbart (1925-2013); an American engineer and inventor. In the 1950s, he decided that instead of having a steady job, he would focus on making the world a better place to live in. His idea was to use networks of computers to help mankind cope with the world’s increasingly urgent and complex problems; and he believed that enabling all of the people on the planet to solve important problems (together) was key (the sooner the better). Accordingly, because of their efficiency of operation in relation to recognising and delineating problems, organising ideas, and arranging associated problem elements together, Engelbart believed that computers were the pre-eminent vehicle for dramatically improving the world.
He envisioned intellectual workers sitting at display ‘working stations’, flying through information spaces, and harnessing their collective intellectual capacity to solve key problems together, faster and in much more powerful ways.
Under Engelbart’s guidance, the Augmentation Research Centre (ARC) developed, located at the Stanford Research Institute (SRI), with funding primarily from the Defense Advanced Research Projects Agency (DARPA); and with the aim to develop the oN-Line System (NLS), which was a revolutionary computer collaboration system designed around Engelbart’s ideas. The NLS demonstrated numerous technologies, most of which are today in widespread use; including the computer mouse, bitmapped screens, and hypertext; all of which were seen at the so-called Mother of All Demos in 1968. [See YouTube]
Engelbart possessed a clear vision for how computers should develop. He reasoned that the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. Engelbart set himself to the revolutionary task of developing computer-based technologies for manipulating information directly, and also to improve individual and group processes for knowledge-work. Despite components that were primitive by today’s standards (many of which Engelbart invented himself, such as the mouse and the interactive display), he succeeded in demonstrating the validity of his overall vision of a connected world. But his research funding was cruelly shut off; and although he did continue on his own, many of his researchers left and Doug himself fell into relative obscurity.
Over the next two decades the microchip and personal computer arrived, and a decentralized vision of the Internet as a computing medium developed. Engelbart’s stamp is nevertheless very much present on all modern systems. His inventions include: the mouse, hypertext, collaborative tools, and precursors to the GUI.
Begging the reader’s pardon, we shall now skip over the next 20 years of computing history, because this territory has been adequately covered elsewhere.[21,22,23] In any case the topic of the present paper is mind-expanding technologies; hence we restrict ourselves to key milestones.
Fourth International Conference in Multimedia, Information and Visualization for Information Systems And Metrics Seville, Spain 26-28th January 2017 (Expanded and Evolving Paper)

In the 1990s a new era of computer interconnectivity and satellite communications dawned. Might we at last see a World-Brain as envisaged by Engelbart, Otlet and Wells? Not completely; but something did arrive.
3.8 The World Wide Web
In December 1990, Sir Tim Berners-Lee built what most came to see as the first true World-Brain; whilst working at the European Organization for Nuclear Research (CERN). The World Wide Web (or ‘Web’) is a system of interlinked hypertext documents (distributed ones) that are accessed via the Internet. Using a web-browser, users can view web pages that contain: text, images, videos etc; and navigate them using hyperlinks. People and organizations could then create their own web-pages; and ‘serve’ information in a user-friendly format to people all around the world.
Although the Web enjoys unparalleled popularity and success right down to the present day; many pioneers and experts have criticized the basic design structure of this system and highlighted its tremendous shortcomings.
Perhaps the most vocal critic of the Web is Dr Theodor Holm Nelson (1937-); the computer visionary who first coined the terms hypertext and hypermedia in 1963. His ideas run at a tangent to, and in some ways contradict, the notion of the World Wide Web as developed by Tim Berners-Lee. It is worth looking at Nelson’s ideas in detail; because it seems that his original vision for what became the Web—which originates from him as much as anyone else—is only partially fulfilled; and in fact his ideas point to a far superior and more functionally sophisticated hypermedia-filled world.
Some ‘experts’ have unfairly and somewhat foolishly dismissed Ted’s work as utopian, idealistic or unrealistic/ impractical – but in so doing they turn their backs on a clear path to a far superior, and more humanistic world, so ingeniously laid out by him in the early 1970s.
It is difficult to address adequately the all-encompassing nature of Ted Nelson’s vision; because he disagrees with so many of the basic concepts of computing that most other computer scientists generally agree on.
Let us start with HTML documents, which are the basic data-units of the Web; Nelson says that they are ‘rubbish’: simply text scrambled by markup. We need a few pages of explanation in order to understand this statement. Nelson’s vision of the web is a far larger, richer, and more interconnected one; but in some senses simpler. His concept is of a true Global-Brain; whereby emphasis is placed on the accessibility, free assembly and open publication of all knowledge.
Nelson would re-design the current Web from scratch. He starts with a clean data structure, with which you can do much more. He says that the original idea of hypertext (his own concept) consisted of a whole suite of ideas, and provided user freedom/capability to do a lot more; and specifically included publication mechanisms. Crucially, needed are two-way connections; whereby a ‘forward’ link not only moves you from one item to another; but ‘backward’ links allow you to see everywhere in the network where an item is used/referenced. Also he has two types of links: a basic link, and a transclusion, which shows you an item’s original ‘context’ (see Figures 8, 9).
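The two-way connection Nelson demands can be sketched as a link index that records every forward link’s reverse half automatically. This is a minimal illustrative sketch (the class, labels and item names are hypothetical, not from Nelson’s actual designs):

```python
from collections import defaultdict

class LinkIndex:
    """Two-way links: every forward link is simultaneously recorded in
    reverse, so any item can show everywhere it is referenced."""

    def __init__(self):
        self.forward = defaultdict(set)   # item -> (target, label) pairs
        self.backward = defaultdict(set)  # item -> (source, label) pairs

    def link(self, src, dst, label=None):
        # Links carry a category/label, e.g. whether one document
        # 'agrees' with another - categorized links, not bare pointers.
        self.forward[src].add((dst, label))
        self.backward[dst].add((src, label))  # the 'backward' half, for free

    def referenced_from(self, item):
        """Everywhere in the network where `item` is used/referenced."""
        return {src for src, _ in self.backward[item]}

idx = LinkIndex()
idx.link("essay-A", "definition-X", label="agrees-with")
idx.link("paper-B", "definition-X", label="disputes")
idx.referenced_from("definition-X")   # -> {'essay-A', 'paper-B'}
```

On the actual Web only the `forward` table exists (embedded in pages); the missing `backward` table is what search engines laboriously reconstruct by crawling.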
Multiple overlaid links, and omni-links (multi-end-point ones), would be allowed; whereby one item could be linked to many other items; and thus many named-links can be overlaid one on top of another. Before getting into Nelson’s ideas in detail, it is interesting to look at motivations and where these ideas come from. To begin with Nelson says that most of the current Web, libraries, books, journals etc, are all arranged into hierarchies; whereby subject areas, websites, topics, classes, titles, bibliographies etc; are organized into (basically) opaque piles of stuff. Nelson (rhetorically) asks if this is the correct structure—or in other words is knowledge fundamentally structured like this, or is it a human projected/imposed structure?
He comes to the latter conclusion; and claims that all data/information/knowledge (in general) has no structure whatsoever! None at all.
Rather parallel cross-connections, random and unstructured connections, relations which go in all directions, causations, jumping and branching points; interpenetrate the entire corpus of human knowledge; and in every way imaginable.[24,25,26]
Thus paper documents have an intrinsic parallelism; and document boundaries (in fact all boundaries between information units) are arbitrary; hence anything may be potentially connected to everything else; and in every way imaginable. The question then becomes – how to manage the cross- connections. This is the model for everything, and hence for all knowledge, according to Nelson.
Documents have deep and parallel structure, endlessly and without limit (Knowledge is best arranged in ‘Structangles’).
Within such a scheme, we have only two types of structural connection. Links connect things together that are basically the same (units of knowledge in the same context). Whereas a transclusion connects between things that are basically different (same content re-used in different contexts). For Nelson, it is the complexity of links that is the main problem with hypertext as implemented in the World Wide Web. Here links are unmanageable, hidden in amongst the documents; and so cannot be removed from the information. And this key problem – first identified by Nelson – has major implications for knowledge organization/visibility/accessibility.
Currently on the Web, one cannot see where any item of knowledge is used across the network as a whole; hence we need search engines like Google to attempt to remedy (poorly) this situation. Google is bolted on to the HTML-based Web, in an attempt to fix a fundamentally broken data structure!
According to Nelson, this situation is a continuation of a movement whereby the computer is dumbed-down to simulate paper.
Here the filing structure adopted by IBM/Apple forced a false (arbitrary) file-plus-directory structure onto information; and remade the open/unstructured world (of the real) in terms of black boxes or opaque hidey-holes for information (data-silos). Thus even locally, information cannot be searched (easily) across different files; because the (lumped) information is invisible and unsearchable – for non-geeks – and especially for networked users.
Nelson’s criticism goes still further, because he also says that in the graphical user interface (GUI) we see a copying of the paper metaphor; whereby (in fact) many other (superior) forms of GUI are possible. We do not have a very good metaphor here (paper), and with word processing programs, because side-notes are not allowed, and copy and paste is not properly implemented. But it is largely the underlying information structure which severely limits capabilities at the level of the user interface. And even if this were not so, who says that knowledge should be represented in a rectangle? Rather, according to Nelson, there is a deep structure to knowledge, with all kinds of parallel-linkages and information-shapes of every type present; and in every way imaginable.
Undoubtedly Nelson is correct; and what we have today is a severe case of knowledge lock-down; or silos of invisible/lost/lonely thoughts/datums.
Nelson claims that software is a branch of cinema; whereby events on a screen affect the hearts and minds of the viewer! Unification and clarification of what the user does is what’s important! If these ideas do seem a little disconcerting, it is only because the reader is so used to seeing the computing universe in terms of rectangles, files, one-way (singular) links and a so-called networked system that wholly lacks context. Currently the vast interconnectedness of everything to everything else is simply not represented; hence it is impossible to visualise and explore all data/thought atoms (see Nelson’s own Xanadu project for a description of his vision [25,26]).
With the advent of electronic media it became clear that there were many advantages to storing knowledge in a digital format; including remote accessibility, effortless replication and indestructibility, speed of access, linkage, and limitless storage etc. A major advantage would be that the original dream of Vannevar Bush might (possibly) come true;
whereby his Memex system would be achievable; being a desk (at home or work) from which all the world’s information would be immediately available, or on call.
Hypertext as conceived by Ted Nelson in the 1960s, embodies Goldberg/Bush’s vision; and is a broad category of media defined as: a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper. Nelson explained that:
Hyper-media are branching or performing presentations which respond to user actions, systems of prearranged words and pictures (for example) which may be explored freely or queried in stylised ways… they are in some sense ‘multi-dimensional’, we may call them hyper-media, following mathematical use of the term ‘hyper-’… the sense of ‘hyper-’ used here connotes extension and generality; cf. ‘hyperspace’
At first sight, and superficially, it may appear that Nelson’s vision of a hypermedia world of connected/branching information has come true. We have the World Wide Web, Apps that provide access to vast cloud systems, and social networks; plus fabulous television/movie systems like YouTube and Netflix that provide, in effect, millions of ‘branching’ channels (for example). Certainly progress has been made, and information is more connected, and accessible, than ever before. But speaking in 2010, Ted Nelson said:
Today’s computer world is based on techi- misunderstandings of human thought and human life, and the imposition of inappropriate structures through the computer and through the file that is used by the applications, and the imposition of inappropriate structures on the things that we want to do in the human world. It is time to re- imagine the computer world.
A Universal Knowledge Machine
According to Nelson his original vision of hypertext is in no way represented by the World Wide Web (or Web). In Nelson’s words: that’s NOT what I was talking about! Instead, Nelson imagined a world knowledge system (in the 1960s) based on tiny pieces of data (atoms) that could be connected in every possible way, complete with overlaid links and connections that went in every direction. Nelson said (and continues to say most fervently) that documents (and human thoughts) have deep structure; and it is this that must be represented, made visible and accessible to all. To sum-up Ted’s position, most people don’t (in a deep sense) realize that computers have (in actual fact) been designed; and furthermore that (often) all of the wrong design decisions have been made; leading to vastly inferior, restrictive and/or sub-optimal computing systems.
Let us begin by looking at Ted Nelson’s Xanadu system, which is a (partially built) network for authoring and publishing the diverse forms of hypertext/media as envisioned by Nelson.
In the Xanadu vision, authors work on small pieces of media—an explanatory paragraph, a movie segment, a schematic etc. These segments can be connected with other things or segments/quotations from other documents/sources. One of the most common types of connection is a transclusion, which brings a small (re-usable) segment into a larger document (an essay, a song etc). Other types of connection are also possible (text/book/image/movie links). The Xanadu system keeps track of all changes to a document’s segments (for example), rather than just a few discrete versions. Any version of any segment can be requested. A document can choose to transclude the most current version or a particular historical version.[1,2,3]
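The versioned-segment idea described above can be sketched very simply: every edit to a segment is retained, and a document is just a list of transclusions, each naming the segment and the version it wants. This is a toy sketch under those assumptions (the class and segment names are invented for illustration, not Xanadu’s actual data model):

```python
class Segment:
    """A small piece of media whose full edit history is kept."""

    def __init__(self, text):
        self.versions = [text]          # every version, never discarded

    def edit(self, text):
        self.versions.append(text)

    def at(self, version="current"):
        # "current" floats with the latest edit; an integer pins history.
        return self.versions[-1] if version == "current" else self.versions[version]

store = {"quote-1": Segment("Knowledge is power.")}
store["quote-1"].edit("Knowledge itself is power.")

# A document is a list of transclusions: (segment id, version to pull in).
doc = [("quote-1", "current"), ("quote-1", 0)]

def assemble(doc, store):
    """On-the-fly assembly: resolve each transclusion at view time."""
    return [store[seg_id].at(ver) for seg_id, ver in doc]

assemble(doc, store)
# -> ['Knowledge itself is power.', 'Knowledge is power.']
```

The same stored segment appears twice in the assembled document, once current and once historical; nothing was ever copied, only referenced.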
Authors don’t just have their own knowledge segments to play with. Any public segment by any author can (potentially) be transcluded into any document by any author. Nelson understands that creative work builds on prior creative work; and Xanadu had the remix, the sample, and the quotation built into its basic design (see Figures 10-15).
Like the web, Xanadu employed front-end viewer software for wandering over a world of information (segments) located on back-end servers. On-the-fly assembly is key. While today we (apparently) live in the age of remix, the Web mostly serves up big, untranscludable/opaque chunks. The strength of the Web lies in making a vast amount of information available. And this explosion was made possible by the Web’s openness and non-proprietary nature. The Web engendered not a ‘culture of piracy’ but a ‘culture of the library’. But what is missing is access to a vast, almost limitless archive of tiny data segments (or thought-atoms) which could then be overlaid with vast numbers of interconnecting linkages and transclusions. On the Xanadu system, Nelson says:
We need software that allows everything (including documents, file listings, bookkeeping) to be annotated. We need systems that allow our work to criss-cross and overlap and interpenetrate like the real concerns of our documents and lives, and like (for instance) the topics of this book. If your work is a unified conglomerate that does not divide the way the software does, if your life is a unified conglomerate that you wish to manage from computers that we set up all wrong, you see the problem. Not everyone does. I see today’s computer world as a nightmare honkytonk prison, noisy and colourful and wholly misbegotten. We must everywhere use ghastly menus designed by people with no sense of the human mind. We are imprisoned in applications that can be customized only in ways that the designers allow. We are in the Dark Ages of documents, most locked in imprisoning formats, canopic jars from which they can never escape, or mangled within and by markup which hinders re-use, indexing, connection and overlay and overlap.
Xanadu and the World Wide Web are totally different and incompatible. The Web has one-way links and a fixed rectangular visualization based on the strictly enforced rules of the browser. The browser will not composite or inter-compare documents side by side. Xanadu alumni consider the Web illicit and broken, exactly what they were trying to prevent; for having only one-way links, for conflating a document with a place, for locking it to one view, for having no way to maintain identifiable content from other sources, for having no means of visible connection to points within a document, for imposing hierarchy in a variety of ways.
Put simply, in Nelson’s view, today we should have a unification of the design of information; whereby if everything was interconnected and interrelated the way it should be, then we would have instant access to all the world’s thoughts/ideas/datums. But this is precisely what we do not have; because almost nothing is connected, no discrete ideas (or very, very few) are even addressable or findable; and certainly information (as a whole) is not linked and connected in almost any way whatsoever. One might wish to see, for example, where the word ‘God’ or ‘Blue-Bell’ is used (and how) across the entire corpus of human knowledge. To see every single related item/thought-pattern/relation/opinion/datum – and for all knowledge scales, levels, granularities/aggregations/stories/causal-pathways etc. A simple request.
But one cannot do so, because the vast majority of thought-atoms and patterns that relate to the concept of God are hidden deep inside web-pages, files, books, papers, databases, and documents (externally visible or not) etc. These thought patterns are invisible because they are not addressable, individually and separately from the enclosing media. In fact, the vast majority of items are hidden; and most thoughts, sentences, ideas etc, are all trapped inside of opaque lumps; that is, they are lost/hidden to most people.
What we take away from our discussion of Ted Nelson’s work is overwhelming (in terms of potential impact on the world’s thoughts); because it becomes obvious that the World Wide Web, and every other large-scale networked system, one and all, suffer from a fundamentally crippled design methodology. The problem is clearly one whereby our current systems do not reflect the structure of knowledge itself (it has none!); and as a result do not enable users to likewise explore the local structure(s) for themselves. In Nelson’s ideal system we would be able to search a term such as ‘computer’ and see a categorized listing of links to every: quotation and article, book, web page etc; containing this phrase; and link back to the original sources of every quotation, complete with links, annotations, comments and guiding markers to help us to explore the term; as it is used throughout the world knowledge system (like a universal linked/highly-structured ‘Hashtag’).
In Nelson’s system anyone may publish anything to be viewed/edited/annotated by all; and it is a system of constant revision; whereby all past edits are saved and truly available for examination/correction (unlike Wikipedia). Units of information are not saved all over the place (logically, if not in storage terms—for an atomic network); rather there is just one (original source) copy of each data atom, which is served/implanted into any documents that may use this item. Much disc-drive space is saved as a result; electricity is conserved; and the item’s author may be paid each time his copyrighted information is used, via a micro-payments system.
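The ‘one original copy’ idea, together with per-use micro-payments, can be illustrated with a few lines of code. The sketch below is purely hypothetical (atom ids, authors and contents are invented; a real system would involve identity, payment and network layers):

```python
from collections import Counter

# Atom id -> (author, content); exactly one stored copy of each atom.
atoms = {
    "a1": ("alice", "The sky appears blue because of scattering."),
    "a2": ("bob", "Scattering strength rises sharply at short wavelengths."),
}
royalties = Counter()   # stand-in tally for a micro-payments system

def render(document):
    """Serve a document by pulling in each referenced atom per use;
    the author is credited every time their atom is served."""
    parts = []
    for atom_id in document:
        author, content = atoms[atom_id]
        royalties[author] += 1
        parts.append(content)
    return " ".join(parts)

essay = ["a1", "a2", "a1"]   # the same atom reused twice, never copied
render(essay)
royalties["alice"]           # -> 2 (credited once per use of her atom)
```

Because documents hold only references, editing the single stored copy of an atom would (for better or worse) be reflected everywhere it is transcluded; Xanadu’s answer to that is the versioning discussed above.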
These ideas go right to the heart of the key social and organizational problems of today’s Web/Internet: the cloud(s), poor integration, and wholly localized centralizations-of-data.
It is organization that is missing from the world’s thought and data atoms. Kim Veltman has written copiously on the related concept of Digital Reference Rooms (DRR). According to Kim, a key problem with the Web is that everyone has their own rules for organising knowledge. On this point, Veltman says that if:
There is no common framework for translating and mapping among these rules, then the whole is only equal to the largest part rather than to the sum of the parts… Hence we need standardized authority lists of names, subjects and places. This may require new kinds of meta-data. Indeed the reference rooms of libraries have served as civilization’s cumulative memory concerning search and structure methods through classification systems, dictionaries, encyclopedias, book catalogues, citation indexes, abstracts and reviews. Hence, digital reference rooms offer keys to more comprehensive tools.
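The “standardized authority lists” that Veltman calls for can be illustrated with a small sketch: variant names are mapped onto one canonical heading, so that records from separate collections merge rather than fragment. The class and the sample place-name variants are illustrative only.

```python
class AuthorityList:
    """Sketch of a standardized authority list: variant names map to one
    canonical form so that separate collections can be merged.
    Sample variants below are illustrative."""

    def __init__(self):
        self._canonical = {}

    def add(self, canonical, *variants):
        self._canonical[canonical.lower()] = canonical
        for v in variants:
            self._canonical[v.lower()] = canonical

    def normalize(self, name):
        return self._canonical.get(name.lower(), name)

places = AuthorityList()
places.add("Firenze", "Florence", "Florenz", "Florencia")

# Records from different collections now fall under one heading, so the
# whole becomes the sum of its parts rather than merely the largest part.
records = ["Florence", "Firenze", "Florenz"]
normalized = {places.normalize(r) for r in records}
```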
Fourth International Conference in Multimedia, Information and Visualization for Information Systems And Metrics Seville, Spain 26-28th January 2017 (Expanded and Evolving Paper)
Kim Veltman’s requirements (for DRR):
• Standardized names, subjects and places, with their variants
• Knowledge in context
• Multicultural approaches through alternative classifications
• Geographic access with adaptive historical maps
• Views of changing terms and categories of knowledge
• Common interfaces for libraries, museums etc.
• Adaptive interfaces for different levels of education
• Seamless links with learning tools
Once again we see a vision for a unification of data/thoughts, combined with efficient accessibility tools. According to Kim Veltman, systematic access requires integrating tools for searching, structuring, using and presenting knowledge; linked with digital reference rooms that provide the aforementioned list of capabilities. Like Nelson, Veltman sees not only the problem, but ambitiously goes ahead and builds a solution, named the System for Universal Media Searching (SUMS) [7,8,9]. His approach includes a few ideas similar to Nelson’s, but with Classifications, Learning Filters and Knowledge Contexts identified to help the user cope with ten kinds of materials; namely:
- Terms (classification systems, subject headings, indexes)
- Definitions (dictionaries, etymologies)
- Explanations (encyclopedias)
- Titles (library catalogues, book catalogues, bibliographies)
- Partial contents (abstracts, reviews, citation indexes)
- Full contents, which can be divided into further kinds:
- Internal analyses (when the work is being studied in its own right)
- External analyses (when it is being compared with other works)
- Restorations (when the work has been altered and thus has built into it the interpretations of the restorer)
Veltman says that all of these are pointers to the books/items/thoughts in the rest of the digital library. The vision is one of unification and centralized organization – necessary for efficient searching/access.
By use of such schemes the ‘reader’ may progress from universal categories to particulars, using ordinal and/or subsumptive relations between items/subject categories etc. When querying a knowledge system one may (for example) progress from broader to narrower terms in a quest for specifics.
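The broader-to-narrower descent just described can be sketched as a walk down a subject hierarchy, as in a classification scheme or thesaurus. The class and sample terms are illustrative, not drawn from SUMS itself.

```python
class Thesaurus:
    """Sketch of broader/narrower-term navigation through a subject
    hierarchy. Sample terms are illustrative only."""

    def __init__(self):
        self._narrower = {}

    def add(self, broader, *narrower_terms):
        self._narrower.setdefault(broader, []).extend(narrower_terms)

    def narrower(self, term):
        return self._narrower.get(term, [])

    def descend(self, term, depth):
        """Walk from a universal category toward particulars."""
        frontier = [term]
        for _ in range(depth):
            frontier = [n for t in frontier for n in self.narrower(t)]
        return frontier

t = Thesaurus()
t.add("knowledge", "science", "art")
t.add("science", "physics", "biology")
t.add("physics", "optics")

level2 = t.descend("knowledge", 2)
```

A query for specifics is then simply a repeated narrowing step; note, though, that the very next paragraph explains why a purely hierarchical scheme like this one is inadequate on its own.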
However, as both Nelson and Veltman note, our thoughts and ideas are rarely hierarchically linked or arranged in singular contexts. Thoughts and ideas are related in a myriad of overlapping patterns of unbelievable complexity, beauty and meaning!
Possible relations between thought/data atoms and patterns include: alternatives, associations, complementaries, duals, identicals, opposites, antonyms, indicators, contextualisers etc. Strangely, the problem becomes even more abstruse when we consider logical functions, including: and/or/not, alternation, conjunction, reciprocal, converse, negative, subsumptive, determinative and ordinal relations etc. Veltman says that a possible solution to all of this complexity, in terms of organizing and classifying ideas, is the concept of different types of ‘knowledge object’; stating:
Such that if one has a term, one can see its synonyms without needing to refer to a thesaurus. All these kinds of relations thus become different hooks or different kinds of net when one is searching for a new term and its connections.
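The idea of relations serving as “different hooks” when searching can be sketched as a graph whose edges carry relation types, so that (for instance) a term’s synonyms are visible directly, without consulting a separate thesaurus. Relation names and sample terms below are illustrative, not Veltman’s own schema.

```python
from collections import defaultdict

class RelationGraph:
    """Sketch of Veltman-style 'knowledge objects': terms carry typed
    relations (synonym, opposite, association...), so synonyms appear
    without a separate thesaurus. Names are illustrative."""

    def __init__(self):
        self._edges = defaultdict(lambda: defaultdict(set))

    def relate(self, a, relation, b, symmetric=True):
        self._edges[a][relation].add(b)
        if symmetric:
            self._edges[b][relation].add(a)

    def related(self, term, relation):
        """Each relation type acts as a different 'hook' for searching."""
        return sorted(self._edges[term][relation])

g = RelationGraph()
g.relate("large", "synonym", "big")
g.relate("large", "opposite", "small")
g.relate("computer", "association", "software")

syns = g.related("large", "synonym")
opps = g.related("small", "opposite")
```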
Our abbreviated discussion of Veltman’s work highlights the enormity of the challenge facing the developer of a proposed UKM, in terms of efficiently organizing all the world’s ideas/datums. Major problem areas include finding ways to bring adequate centralization, standard classifications, aggregation/partition and contextualization to knowledge units, whilst retaining options for multi-cultural windowing and for accessing the geographic and historical dimensions of knowledge, etc.
Google simply blasts lists of stuff at you in response to keywords; the results are entirely unclassified (apart from overly simplistic groupings into images, web-links, videos etc.) and in no way let you see the structural patterns of thought. But it is these patterns that (in fact) underlie knowledge at both the largest and smallest scales.
Googling is like the game you play as a child, whereby one person thinks of something (located in the environment) in his/her head, and the other person attempts to guess the item in question. But this is no way to link to, and access, knowledge!
With Google you never even begin to see the true pattern of knowledge, or the diversity of opinions/ideas; you can only scramble about in an ad-hoc manner. This is because data and thought-atoms are grouped into files, isolated and hidden one from another; and so, by and large, free assembly of thought-patterns is blocked/impossible.
Veltman again on knowledge organization:
We want to find something particular and yet we use single words, which are universal. The semantic web entails only subsumptive relations: what and who. Needed is a fuller approach that treats who as living entities, separate from what, and includes determinative and ordinal relations which are basic aspects of human life and knowledge: where, when, how, and why.
In his 2016 paper, Means of Certain Knowledge and Interfaces, Veltman outlines his continuing vision of how to obtain efficient access to everything known. He speaks of the random-word approach of the search engines, as opposed to the lists of catalogues of authors, titles, and keywords in titles, plus controlled vocabularies in classification schemes and thesauri. Veltman says: ‘The goal is to find an item in the collection ranging from a book or article to manuscripts, letters, newspapers, maps, or other media.’ [Ref. below for Kim’s ‘Six Worlds’ framework.]
Veltman discusses three approaches to efficient knowledge organization/access, as follows:
A) One is the potential of searching the complete contents of these materials. This invites new links to sources and implies a need for different levels of searching.
B) In addition, different means of certain knowledge could be identified and used as search criteria.
C) Another entails the possibility of different levels of knowledge relating to a given text.
Kim also states that texts can have verbal, numerical and geometrical levels, which he links to matrices of knowledge connections – leading to an Internet of Knowledge and Wisdom – as opposed to mere isolated facts/images/videos/numbers/descriptions/definitions etc.
In his copious writings, Veltman carefully opposes current unhelpful trends towards the internet(s) of opinions, habits, services, things, experiences, plus military uses, spying etc. His is the eminently humanistic perspective; and one that is by no means out of the question for us to develop. But needed are adequate conceptual schemes and effective designs to enable the UKM to work in reality. Kim speaks of the need to separate Concepts into facets – and in this way to fully identify (or fix) the foundations/contexts of anything in particular. A key person in related fields is the Indian scholar Ranganathan, who classified the world in terms of five elements (or facets): Personality, Matter, Energy, Space, Time (or PMEST). These elements are related to the five basic ‘W’ questions of Who, What, Where, When, Why (plus How). Accordingly, Kim Veltman summarizes the desired approach of synthesizing information from multiple sources, combined with the power to obtain high-level overviews of knowledge, as follows:
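The PMEST facet scheme can be sketched as a small record type whose facets answer the basic ‘W’ questions. The mapping of Energy to why/how, and the sample item, are illustrative simplifications; Ranganathan’s Colon Classification is considerably richer than this.

```python
from dataclasses import dataclass

@dataclass
class PMEST:
    """Sketch of Ranganathan's five facets, mapped (illustratively) to
    the basic 'W' questions. Field contents are made up for the example."""
    personality: str  # who
    matter: str       # what
    energy: str       # why / how (the activity or process)
    space: str        # where
    time: str         # when

    def answer(self, question):
        # Hypothetical mapping from W-questions to facets.
        table = {"who": self.personality, "what": self.matter,
                 "why": self.energy, "how": self.energy,
                 "where": self.space, "when": self.time}
        return table[question]

item = PMEST(personality="Galileo", matter="telescope",
             energy="observation", space="Padua", time="1609")
who = item.answer("who")
where = item.answer("where")
```

Faceting of this kind is what lets a knowledge item be fixed in its full context, rather than filed under a single hierarchical heading.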
We need a web of enduring knowledge, understanding and wisdom, that is independent of this social web of habits, changing opinions and fashions. It must be independent of military, secret services and corporate interests. This web of knowledge should give us access not only to sources, but also to hidden layers of knowledge at the level of individual words, letters and numbers. An internet of our experiences, opinions and fears is attractive and legitimate. A cumulative internet of man’s achievements through the ages is more important. This entails much more than a reassessment of the past decades. It is a question of understanding how our current methods of understanding the world have evolved.
Centralized classification of a universe of micro-thoughts, plus effective command-and-control (for knowledge itself), is required to bind information together in such a manner that it is readily available. Webs of data and networks of association must be organized/visualized. Ted Nelson called this open, transparent approach ‘Promiscuous Linkage and Windowing among all the materials.’ Accordingly, we need multiple filtered entry points, global synthesis and cross-references – but with filtering on the way in (with respect to our knowledge queries) and not on the way out.
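The contrast between filtering on the way out (fetch everything, then discard) and filtering on the way in (the query itself selects a pre-organized, classified entry point) can be sketched as follows. The index layout, categories and sample entries are all hypothetical.

```python
def query_way_out(store, keyword):
    """Post-hoc filtering: fetch everything, then discard. This is the
    search-engine pattern criticized above. Illustrative only."""
    everything = list(store)  # blast the whole unclassified list
    return [item for item in everything if keyword in item["text"]]

def query_way_in(index, keyword, category):
    """Filtering on the way in: the query addresses a pre-organized,
    classified entry point, so only relevant items are touched."""
    return index.get(category, {}).get(keyword, [])

# Hypothetical pre-built, classified index (the UKM ideal): entries are
# grouped by kind of material before any query arrives.
index = {"theory": {"gravity": ["Newton 1687", "Einstein 1915"]},
         "opinion": {"gravity": ["forum post 123"]}}

# Flat, unclassified store (the status quo).
store = [{"text": "gravity bends light"}, {"text": "tea recipes"}]

hits_out = query_way_out(store, "gravity")
hits_in = query_way_in(index, "gravity", "theory")
```

The way-in query never touches irrelevant material and returns results already organized by kind, which is the practical meaning of “multiple filtered entry points” in the paragraph above.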