IoT History

How can we define the Internet of Things, or IoT?

A commonly employed definition states that the IoT is a system of interrelated computing devices, mechanical and digital machines, objects, systems, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human intervention.

But the IoT has come to mean more than just connected Things; it now relates to Things that are also imbued with useful abilities surrounding the concept of ‘Smartness’, or with what we name here as ‘Situated Intelligence’.

Situated Intelligence refers to an IoT Thing or IoT System with the ability to Monitor, Act and Integrate within its environment in useful ways; and in particular to how objects that are ‘awake’ to their environment can usefully automate and enhance human problem-solving capabilities.

Doubtless the IoT is an exciting part of our collective future; accordingly, on this site we examine how Situated Intelligence can be applied to a range of different problem areas. But first, in this section, it is salient to briefly examine the history of the IoT, and thus to learn where these ideas came from, specifically for Connected and Intelligent Things.

We begin with a brief definition of the IoT as it exists today.


IoT Defined

The definition of the Internet of Things has evolved with the convergence of multiple technologies, including real-time analytics, machine learning, commodity sensors and embedded systems. A number of related technologies are typically employed to enable the Internet of Things, including combinations of embedded systems, wireless sensor networks, control systems and automation.

In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the “smart home”, covering a variety of appliances such as lighting fixtures, thermostats, home security systems and cameras. These devices are typically designed to support one or more common ecosystems, and can be controlled via devices associated with each ecosystem, such as smartphones and smart speakers.

But how did we arrive at the modern conception of the IoT? To find the answer we must turn to the many and varied worlds of science fiction.


Science-Fiction and the IoT

Science fiction (SciFi) is a good place to begin our study of the history of the IoT and its associated language, and so to discover where and how these ideas first entered the public imagination. Indeed, some people even think that the IoT was first conceptualised in the minds of SciFi writers. Whether or not that is so, we can certainly say that SciFi has played a big part in popularising related concepts.

For example, the concept of connected devices seems to have first emerged in SciFi stories of the 1930s, 1940s and 1950s. In early SciFi magazines such as Amazing Stories and Astounding Stories of Super-Science, many aspects of the IoT were foreseen or foreshadowed.

IoT-like technologies depicted in early SciFi stories include:

  • Radio controlled devices
  • Mechanical / electronic brains
  • Human-brain interfaces (usually wired implants)
  • Voice controlled robots and remote devices
  • Computer terminals (TV-like)
  • Speaking / thinking robots
  • Automated lights/doors/windows/escalators etc

Later, more sophisticated IoT-like devices were depicted in science-fiction movies such as 2001: A Space Odyssey (1968), Colossus: The Forbin Project (1970), Westworld (1973), Logan’s Run (1976), Blade Runner (1982), Back to the Future Part II (1989) and Minority Report (2002). Foreseen in these films were voice-operated computers, automated doors, robot helpers, video conferencing, video surveillance, 3D displays, voice and gesture user interfaces, and 3D scanners and sensors.

Similar IoT-like devices were seen in SciFi television programmes such as The Jetsons (1962), Star Trek (1966) and Space: 1999 (1975). Notable examples include mobile-phone-like communicators, connected tablets, robot helpers, 3D printers, human-to-machine (and machine-to-human) communication, machine-to-machine (M2M) communication, driverless cars, automated sensors, wearables, real-time and remote data gathering, data-rich screens, multi-touch displays (big and small), and remote screens used to summarise real-time information and to control objects from distant locations.

In fact, all of these ideas relate to the merging of the real world of Things with embodied intelligence and connectivity.

Perhaps the SciFi movie that best predicts the future in this respect is The Matrix (1999), which is about a hacker named Neo who is recruited by a group of humans who have broken free of the digital matrix that controls humanity in the future. Perhaps the IoT will place us in a matrix of its own, one that may prove nigh impossible to escape.

Doubtless the exciting theme, within SciFi stories, of an epic struggle between human beings and AI plus IoT-like technology has been an excellent plot vehicle. However, the fact that technologies can have both positive and negative effects, intentionally or not, is a truism proven by past events, not least the industrial revolution, which left countless millions unemployed as a result of the efficiencies of mass production.

SciFi has always been a good way to examine, or predict, the future benefits and dangers of any new or envisaged technology; indeed, perhaps that is part of its true purpose: to serve as an imaginary testing ground for exploring possible future worlds. The issue of human control seems to be a key part of that process. The question of how we can control our inventions, and thus manage the purposes they are put to, remains evident today.

With the emergence of AI in the real world, in the form of capable thinking machines, the dangers seem only to multiply, and making accurate predictions about what may happen as a result of AI becomes very difficult, if not impossible. As a result, thinkers like Elon Musk and Stephen Hawking have warned that we must place humanistic controls on AI to prevent a catastrophe.

The emergence of an AI that is somehow out of control remains a real danger, whether it is an AI that misunderstands its true purpose (helping humans) or, rather more likely, an AI wielded by powerful humans for their own personal gain at the expense of the masses.

The upshot is that it is up to us all, as a collective, to work out how to build humanity into our IoT future; and in this respect it is salient to remember the past.


Real-world Origins

It is important to realise that the IoT as it exists today has a far longer history than one might think, because it sits atop a vast number of constituent technological inventions, scientific breakthroughs, supporting products, and communication systems and protocols.

Early breakthroughs necessary to make the IoT possible go right back to 1832 with the invention of the telegraph. Other important developments include: binary mathematics, Morse code, radio communications, ASCII coding, transistors, integrated circuits, the ARPANET, microprocessors, satellite communications, mobile technologies from 1G to 5G (including GSM and CDMA), TCP/IP, domain names, the World Wide Web and the Cloud.

At various times throughout the 20th century, several visionaries have made prescient comments that appear to foresee the IoT:

  • 1926: Nikola Tesla said: “When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole… and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.”
  • 1950: Alan Turing commented: “It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.”
  • 1964: Marshall McLuhan in Understanding Media stated: “by means of electric media, we set up a dynamic by which all previous technologies—including cities—will be translated into information systems.”
  • 1966: Karl Steinbuch, a German computer scientist, said: “In a few decades time, computers will be interwoven into almost every industrial product.”
  • 1999: Neil Gross in Business Week commented: “In the next century, planet earth will don an electronic skin. It will use the Internet as a scaffold to support and transmit its sensations. This skin is already being stitched together. It consists of millions of embedded electronic measuring devices: thermostats, pressure gauges, pollution detectors, cameras, microphones, glucose sensors, EKGs, electroencephalographs. These will probe and monitor cities and endangered species, the atmosphere, our ships, highways and fleets of trucks, our conversations, our bodies–even our dreams.”

These comments indicate that the origins of the IoT go back much further than most people realise. Seen in its proper light, the IoT is an extrapolation from, and combination of, many previous ideas related to mechanisation and systemisation, long-distance communication, automation, AI and media making.


The Merging of Atoms and Bits

The IoT is set to become a revolution as great as any previous one. It is a new phenomenon that potentially swallows up, or radically changes, many traditional industries, including manual jobs and processes, industrial manufacturing and, in many cases, the building and operation of systems and machines. Everything in the real world becomes merged with integrated circuits, digital communications, software and computers.

Put simply, the world will soon be replete with countless millions of ‘intelligent and connected’ Things that are able to sense, act and integrate, just as we do, relative to the real world, in combination with the animate and inanimate worlds, and with each other.

Soon, objects all around us will wake up and actively go about their daily business as a result of hidden programming logic: seemingly self-powered, self-actuated, self-controlled and self-motivated. Sometimes they will do so with our help or approval, under overt human command (localised or remote control); but more often these smart Things will simply follow their own inner programming and decision-making processes, alone or in combination with other remote Things and networked controlling logic.

The IoT merges the old world of Atoms with the new world of Bits. Each smart Thing in the real world is to be imbued with what we used to call an electronic brain. The big picture is that these smart Things will together form a new super-intelligent environment that, we hope, can provide a better world for humans to live in.

Dangers do exist in such an IoT-dominated world, however.

Privacy and security dangers could be magnified, for example, and many new dangers may emerge. As with any technology, the IoT can be used to promulgate good or evil, whether deliberately or by accident. The IoT may have inadvertently bad consequences for ordinary humans in certain circumstances, such as reduced job opportunities or less say in how society operates. As a result, we must find ways to humanise this new IoT revolution (see the author’s book, Self As Computer, for related arguments).

In any case, the instrumenting of the entire world has begun. Let us now examine some key issues, in order to discover the choices, advantages and disadvantages involved in the design and operation of our newly intelligent and super-connected habitat.


IoT Today

The actual term “Internet of Things” was first coined by Kevin Ashton in 1999 during his work at Procter & Gamble. Ashton worked on supply chain optimisation, and he wanted to attract senior management’s attention to an exciting new technology called RFID. Partly because the Internet was the hottest new trend in 1999, and partly because it somehow made sense, he called his presentation “Internet of Things”, or IoT.

Although Ashton grabbed the interest of some P&G executives, the term Internet of Things did not get widespread attention for around the next ten years. The concept began to gain popularity in the summer of 2010, when Google’s Street View service made 360-degree pictures available for exploration from everybody’s computer, smartphone or tablet; Google’s strategy seemed to be the indexing of the physical world.

The term Internet of Things reached mass-market awareness in January 2014, when Google announced that it would buy the Nest connected-home system for $3.2bn. At the same time, the Consumer Electronics Show (CES) in Las Vegas was held under the theme of the IoT.

But before we go on discussing the IoT, we must ask ourselves: what exactly is the IoT, and what kinds of technologies does the term encompass? As a partial answer, Gartner has defined the Internet of Things as: “a concept that describes how the Internet will expand as sensors and intelligence are added to physical items such as consumer devices or physical assets and these objects are connected to the Internet.”

Gartner identifies IoT-related technologies as:

  1. Media Tablets and Beyond
  2. Mobile-Centric Applications and Interfaces
  3. Contextual and Social User Experience
  4. App Stores and Marketplaces
  5. Next-Generation Analytics
  6. Big Data
  7. In-Memory Computing
  8. Extreme Low-Energy Servers
  9. Cloud Computing

A number of terms have been applied to IoT-like systems. For example, Cisco has been driving the term Internet of Everything (IoE), while Intel has called it the “embedded internet”. Others have spoken of Edge Computing and Mobile IoT.

A sampling of other terms that have been proposed, though they do not all mean exactly the same thing, follows (see our IoT Lexicon for a more complete list):

  • IoT Cloud Platform
  • IoT Protocol
  • Global Navigation Satellite System
  • Bluetooth Low Energy / Narrowband IoT
  • M2M (Machine to machine) communication
  • Web of Things
  • Industry 4.0
  • Mesh Network
  • Connected Human / Smart Wearables
  • Near-Field Communication
  • Telematics / Big Data
  • Industrial internet (of Things)
  • Smart Systems / Smart Meter
  • Smart City / Smart Buildings
  • Pervasive computing
  • Intelligent systems
  • Embedded Software
  • Sensor Network
  • System on a Chip

Whilst these varied definitions are somewhat broad, vague or ill-defined, the same charge can be levelled at the Internet of Things itself. Indeed, there seems hardly to be an envisaged future technology that does not relate to one or more of these terms in one way or another.

So how can we clarify the vision of the IoT? How do we grasp what this term is all about in the most fundamental way? Perhaps a good starting point is to go back to the beginning and look at where these ideas came from in the first place, irrespective of the actual terms employed to encapsulate the area as a whole.

A good place to begin is with the work of the computer scientist Dr Mark Weiser.


Ubiquitous Computing

Back in the early 1990s, Dr Mark Weiser of Xerox PARC wrote several papers on the future of computing, summing up his ideas in the concept of ‘Ubiquitous Computing’: the idea of integrating computers seamlessly into the world to invisibly enhance pre-existing objects, things and machines. Dr Weiser was also the first to define invisible computing, also referred to as embodied virtuality, whereby the key goal is to help users to activate the world in helpful ways.

At their core, all models of ubiquitous computing, and hence the IoT, share a vision of small, inexpensive, networked processing devices distributed at all scales throughout everyday life.

In a 1991 article for Scientific American entitled ‘The Computer for the 21st Century’, Dr Weiser first introduced the related concept of Pervasive Computing, in which he stated that: ‘the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it… In the 21st century… specialised elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence.’

In summary, with these ideas Dr Weiser conceived a new way of thinking about computers: a vision that takes into account the human world and allows the computers themselves to vanish into the background. Along these lines, Dr Weiser makes a rather salient point: ‘in essence, the only way things disappear is when we are freed to use them without thinking and so to focus beyond them on new—and often inherently better—goals.’


Embodied Virtuality

In a 1993 paper entitled ‘Ubiquitous Computing’, Dr Weiser further explored what it might be like to live in a world of invisible and intelligent widgets, something he named embodied virtuality. Therein, Dr Weiser remarked that ‘computing access will be everywhere’: in walls, on wrists, and in ‘scrap’ computers (like paper) lying about.

Dr Weiser noted that: “Ubiquitous computing has as its goal enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. You need not carry anything with you, since information will be accessible everywhere. Unlike the intimate agent computer that responds to one’s voice and is a personal friend and assistant, ubiquitous computing envisions computation primarily in the background where it may not even be noticed. Whereas the intimate computer does your bidding, the ubiquitous computer leaves you feeling as though you did it yourself.”

Dr Weiser continued: “A good tool is an invisible tool. By invisible, I mean that the tool does not intrude on your consciousness; you focus on the task, not the tool. Eyeglasses are a good tool—you look at the world, not the eyeglasses.”

Of course, such tools are not invisible in themselves, but rather as part of a context of use. Good tools enhance invisibility, but it is often an invisibility that relates only to those specific aspects of the environment that the human user does not need to see at any particular moment.


Calm Technology

Another way of thinking about invisible computers is in terms of the effect a technology has on our level of arousal or stimulation when using a particular tool.

In 1995, Dr Weiser, in collaboration with John Seely Brown, wrote a paper on this topic entitled ‘Designing Calm Technology’, in which they said: ‘Our computers should be like our childhood: an invisible foundation that is quickly forgotten but always with us, and effortlessly used throughout our lives. We should be creating technologies that encalm and inform. Calm technology engages both the center and the periphery of our attention, and in fact moves back and forth between the two.’

The term “periphery” refers to what we are attuned to without attending to it explicitly. Ordinarily, when driving, our attention is centred on the road, the radio or our passenger, but not on the noise of the engine. Yet an unusual noise is noticed immediately, showing that we were attuned to it in the periphery and could quickly come to attend to it. A calm technology moves easily from the periphery of our attention to the centre, and back.

This is fundamentally encalming, for two reasons. First, by placing things in the periphery we are able to attune to many more things than we could if everything had to be at the centre. Things in the periphery are attuned to by the large portion of our brains devoted to peripheral (sensory) processing. Thus the periphery is informing without overburdening. Second, by re-centring something formerly in the periphery we take control of it.


Smart Things

A Smart Thing is an object that can solve problems with respect to its environment (consisting of other People, Things/Systems [smart or stand-alone] and Services etc).

In order to solve problems with respect to its environment, a Smart Thing must adequately Interpret, securely Process and appropriately Act on the information it collects, whilst protecting itself from threats and intrusions, communicating results to other smart objects, people and systems, and managing its power consumption.
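
To make this contract concrete, here is a minimal sketch in Python of what such a Smart Thing might look like. Note that the class and method names below are purely illustrative assumptions for this sketch, not any standard IoT API.

```python
from abc import ABC, abstractmethod


class SmartThing(ABC):
    """Hypothetical contract for a Smart Thing (names are illustrative)."""

    @abstractmethod
    def interpret(self, raw_reading: bytes) -> dict:
        """Turn a raw sensor reading into structured information."""

    @abstractmethod
    def act(self, info: dict) -> None:
        """Change something in the physical world (switch, valve, motor...)."""

    @abstractmethod
    def communicate(self, info: dict) -> None:
        """Share results with other smart objects, people and systems."""

    def handle(self, raw_reading: bytes) -> None:
        """One Interpret -> Act -> Communicate cycle.

        A real device would also authenticate its peers here (protecting
        itself from threats) and budget its power consumption.
        """
        info = self.interpret(raw_reading)
        self.act(info)
        self.communicate(info)
```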

Importantly, connectivity also enables capabilities of the product to exist outside the physical device itself, in what is sometimes known as the product cloud. The data collected from these products can then be analysed to inform decision-making (human and/or machine), enable operational efficiencies and continuously improve the performance of the product and related classes of products.
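
As a rough sketch of the product-cloud pattern, a device might periodically push its readings to a cloud ingestion endpoint, where analytics then take over. The endpoint URL and payload fields below are hypothetical; a real product cloud would use a vendor-specific API with proper authentication.

```python
import json
import urllib.request

# Hypothetical ingestion endpoint; a real deployment would use the
# vendor's own API and credentials.
CLOUD_ENDPOINT = "https://example.com/telemetry"


def publish_reading(device_id: str, temperature_c: float) -> None:
    """Send one sensor reading to the product cloud as JSON."""
    payload = json.dumps({
        "device": device_id,
        "temperature_c": temperature_c,
    }).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # cloud-side analytics take over from here


publish_reading("thermostat-01", 21.5)
```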

Smart Object interaction thus refers not only to interaction with physical-world objects but also to interaction with virtual computing environments; in this way the real world becomes part of the digital world.

It is important to realise that the smartness of a thing or system will typically relate to several problem-solving facets. The job of the Smart Thing is to help humans achieve certain Goals (relating to the specific purposes it has been designed to meet). Regardless of whether a particular Goal is met by means of automatic, semi-automatic or entirely human-controlled decisions, actions and procedures, all Goals tend to share certain common features.

A famous design guru in relation to Smart Things is Dr Donald Norman, who in his book ‘The Design of Everyday Things’ has made an exhaustive study of the nature of Goals and how Things may be designed for efficient use. For an explanation of Dr Norman’s work as applied to the IoT, see Situated Intelligence as explained on this site.


The Future of the IoT

IoT devices such as machines and sensors are expected to generate approximately 80 zettabytes (ZB) of data in 2025, as predicted by IDC (International Data Corporation), with the IoT growing at a compound annual growth rate (CAGR) of 28% from 2020 to 2025. According to a projection by the Statista Research Department, 75 billion devices will be connected to the IoT worldwide by 2025.
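
As a quick back-of-the-envelope check of these figures (purely illustrative arithmetic in Python), a 28% CAGR ending at 80 ZB in 2025 implies a 2020 data volume of roughly 23 ZB:

```python
# Illustrative arithmetic only, using the figures quoted above.
zb_2025 = 80.0   # zettabytes of IoT data projected for 2025 (IDC)
cagr = 0.28      # compound annual growth rate, 2020-2025
years = 5

zb_2020 = zb_2025 / (1 + cagr) ** years
print(f"Implied 2020 data volume: {zb_2020:.1f} ZB")  # ~23.3 ZB
```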

The IoT will generate almost unimaginable amounts of data (the so-called Big Data problem), and the question of how to manage all this data remains unsolved. Organisations must also work to protect this data wherever it relates to customers and their personal information.

Machine Learning is a type of AI (Artificial Intelligence) that helps computers to learn, and it is likely that the solution to the Big Data problem will be found in this subject area; hence AI is considered a key propellant of the IoT revolution. 5G is another technology central to the IoT, both for fundamental connectivity speed and to enable a single network for billions of applications (the so-called Internet of Everything).

Over the ten years from 2020 to 2030, the number of IoT devices is expected to grow from 75 billion to more than 100 billion, and the improvement in data-flow speeds from 4G to 5G is set to be a major factor in this revolution. Today’s 4G networks can support roughly 5,500 to 6,000 IoT devices on a single cell (geographical region); with a 5G network, up to one million devices can be handled by a single cell. There should therefore be few practical limits to the number of objects and Things that can be placed onto the global network and its countless smaller sub-networks.
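
To see what that difference means in practice, a rough calculation using the per-cell figures quoted above shows how many cells would be needed to host 100 billion devices under each generation:

```python
# Illustrative arithmetic only, using the per-cell figures quoted above.
devices = 100e9          # projected connected devices by 2030
per_cell_4g = 6_000      # upper end of the quoted 4G capacity per cell
per_cell_5g = 1_000_000  # quoted 5G capacity per cell

print(f"4G cells needed: {devices / per_cell_4g:,.0f}")  # ~16,666,667
print(f"5G cells needed: {devices / per_cell_5g:,.0f}")  # 100,000
```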

The upshot of all these breakthroughs is that the future of the IoT is not likely to be limited by technology: anything we can imagine, and put resources into, can be made a reality. When it comes to the exciting future of the IoT, we are truly limited only by our individual and collective imaginations!


IoT History – Conclusion

At the present date, in the year 2020, the Internet of Things is no longer a dream, a prediction or some kind of vapour-tech; rather, it is already here, and developing rapidly. Everything we do, or at least most of our daily interactions with objects, systems and machines, is about to be transformed in a huge variety of ways.

Predicting exactly what IoT-enabled changes will arrive, and how they will affect us in five to ten years’ time, would seem to be an impossible task. All most of us can do is sit back and keep a close eye on developments!

Increasingly, we see connectivity being built into ordinary smart-home devices such as lights, heating appliances, speakers, TVs and kitchen appliances. Meanwhile, the industrial internet is taking off with smart buildings, smart factories and smart supply chains; indeed, there seems to be nowhere that the IoT is not present, and almost no area of human activity that it is not set to revolutionise or dramatically improve. And all of this will happen sooner than you might believe.

The future for us humans is one of Situated Intelligence embedded into almost every action, trip, task and need, whether we are talking about leisure or work.

And these multiple intelligences will be alive to our specific and local needs, providing fully tailored, real-time solutions that see and adapt themselves carefully to the intricacies of a particular situation, and thus to the diversity and uniqueness of human activity.