Technologies that are created and developed for real-time applications with a focus on human/user input and environmental sensing.
The first element is the technology. These technologies come in many different shapes, sizes, and forms, ranging from web services and hardware sensors to desktop and mobile applications, and more. They can be completely digital, such as web services, or they can have analog components, such as the buttons, sliders, switches, and toggles built into control panels.
The second element is the notion of “real-time applications”. This is a very important concept for interactive technology and immersive design. Real-time technologies are contrasted with more traditional “offline” or “non-real-time” technologies. A situation that many creative professionals can relate to is that of rendering or saving images and videos. Architects, designers, advertising professionals, and many other disciplines work with computationally expensive media, such as 3D models, high-definition video, and multitrack audio. All of these formats have very similar workflows: a user inputs data, processes their assets, previews the output, and then sets their software on the job of rendering the final product. The length of time it takes to render depends on the complexity of the work and can range from minutes to hours to days.
With the advent of affordable and powerful computers, applications have tried to move from this traditional system to real-time systems, where there is no difference between the preview and the output. What is seen in the preview is the final render, generated at the speed the user is working. Simply put, “real-time” means there is no delay between user input and the final output, because the output is being generated in real time. Many advancements in real-time software have been driven by the gaming industry, which shares many goals with interactive technology and immersive design: its aim is to build incredibly immersive applications and games that react to human/user input in real time.
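To make the contrast with offline rendering concrete, here is a minimal sketch of the loop that real-time applications and games are typically built around: read input, update the state, and draw the result, every frame. The function names (`read_input`, `update`, `render`) and the 60 frames-per-second budget are illustrative assumptions, not any particular engine’s API.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds available to finish one frame


def read_input():
    # Placeholder: a real application would poll a touch screen, sensor,
    # controller, or network service here.
    return {}


def update(state, user_input, dt):
    # Advance the application by dt seconds, reacting to whatever the user
    # just did. Here we only track elapsed time.
    state["elapsed"] = state.get("elapsed", 0.0) + dt
    return state


def render(state):
    # Placeholder: draw the current state to the screen immediately.
    pass


def run():
    # Runs forever: every pass through the loop is one rendered frame,
    # so the "final output" is produced continuously as the user works.
    state = {}
    previous = time.perf_counter()
    while True:
        now = time.perf_counter()
        dt = now - previous      # time since the last frame
        previous = now

        user_input = read_input()               # 1. capture input
        state = update(state, user_input, dt)   # 2. react right away
        render(state)                           # 3. show the result

        # Sleep off any leftover time to hold a steady frame rate.
        leftover = FRAME_BUDGET - (time.perf_counter() - now)
        if leftover > 0:
            time.sleep(leftover)
```

The key design point is that there is no separate “render the final product” step: whatever the user does is folded into the very next frame.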
The third element is the focus on human/user input and environmental sensing. The ultimate goal of interactive technology is to capture and analyze the different ways humans interact with each other, their environments, and the objects around them. When humans communicate with each other, much of the meaning and context lies behind layers of subtlety that are generally taken for granted, such as tone of voice, eye contact, facial expressions, and body movements. Each interactive technology captures a specific kind of interaction, giving creative professionals the ability to harness those interactions to change environments, drive experiences, and tell stories.
With a concrete definition in place, let us examine some examples of interactive technologies.
The best example is the smartphone. The smartphone is a powerhouse of interactive technologies, and smartphones are so widely used that people across all age groups, nationalities, and world views have experience with one. Brands and models aside, many of them share the same fundamental technologies. They feature touch screens with gestural inputs. They have seamless and constant access to the Internet, blurring the line between what is stored on the phone and what is being loaded from the Internet in real time (many mobile apps are just fancy web browsers). They all have gyroscopes, accelerometers, and other sensors that allow screen content to automatically rotate based on the phone’s orientation. These are perfect examples of interactive technologies in our everyday lives.
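As a rough illustration of how automatic screen rotation can work, the sketch below picks an orientation from the gravity vector an accelerometer reports. The axis directions, sign conventions, and orientation names are simplified assumptions for illustration; real phone platforms expose this through their own sensor APIs.

```python
def orientation_from_accelerometer(ax, ay):
    """Choose a screen orientation from accelerometer readings.

    ax and ay are accelerations (in g) along the phone's short and long
    edges. Sign conventions vary by platform, so treat this as a sketch
    of the idea rather than any specific device's behavior."""
    if abs(ay) >= abs(ax):
        # Gravity mostly along the long edge: phone is upright or flipped.
        return "portrait" if ay < 0 else "portrait_flipped"
    else:
        # Gravity mostly along the short edge: phone is lying on its side.
        return "landscape_left" if ax < 0 else "landscape_right"


# Phone held upright: gravity pulls "down" the long edge.
print(orientation_from_accelerometer(ax=0.05, ay=-0.98))   # portrait
# Phone turned on its side.
print(orientation_from_accelerometer(ax=-0.97, ay=0.10))   # landscape_left
```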
Many spaces we visit daily have interactive technologies in them. Consider the simplest example of them all: automatic doors in retail environments. Without any active input from the user, these doors use sensors to create simple open and close triggers based on the motion of users. This simple pattern can be applied to many other situations; a user’s motion could, for example, generate a greeting at a digital kiosk.
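A minimal sketch of that open/close trigger might look like the loop below. The `motion_detected`, `open_door`, and `close_door` functions are hypothetical stand-ins for a real motion sensor and door controller, and the hold-open time is an arbitrary assumption; swapping `open_door` for a greeting on a kiosk screen would reuse the exact same logic.

```python
import time

HOLD_OPEN_SECONDS = 3.0  # how long to stay open after the last motion event


def motion_detected():
    # Placeholder for a real motion sensor reading (PIR, ultrasonic, camera...).
    return False


def open_door():
    print("door: open")    # or greet the visitor on a kiosk screen


def close_door():
    print("door: close")


def run():
    is_open = False
    last_motion = 0.0
    while True:
        now = time.monotonic()
        if motion_detected():
            last_motion = now
            if not is_open:
                open_door()
                is_open = True
        elif is_open and now - last_motion > HOLD_OPEN_SECONDS:
            close_door()
            is_open = False
        time.sleep(0.05)   # poll the sensor about 20 times per second
```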
The Internet of Things collects under it all things connected to the Internet and all services and applications built on top of it. This includes services like Twitter and Google, as well as hardware such as smartphones.
Gestural technologies are all the technologies that create experiences similar to the Tom Cruise film “Minority Report”. These technologies allow us to control technology with the wave of our hands and generate interactive content with our body motions. This category includes the ever-popular Microsoft Kinect, as well as other technologies like traditional cameras, Leap Motion sensors, infrared lasers, and more.
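Regardless of the device, a common step in gestural work is mapping a tracked body part onto screen content. The sketch below assumes a hypothetical tracker that reports a hand as normalized (x, y) values between 0 and 1 and smooths the result so the on-screen cursor does not jitter with the hand; real devices such as the Kinect or Leap Motion have their own coordinate systems and SDKs.

```python
SCREEN_W, SCREEN_H = 1920, 1080
SMOOTHING = 0.8  # 0 = raw and jittery, closer to 1 = smoother but laggier


class HandCursor:
    """Map a normalized hand position onto an on-screen cursor."""

    def __init__(self):
        self.x = SCREEN_W / 2
        self.y = SCREEN_H / 2

    def update(self, hand_x, hand_y):
        # Convert the tracker's 0..1 coordinates into screen pixels.
        target_x = hand_x * SCREEN_W
        target_y = hand_y * SCREEN_H
        # Exponential smoothing blends the new reading with the old position.
        self.x = SMOOTHING * self.x + (1 - SMOOTHING) * target_x
        self.y = SMOOTHING * self.y + (1 - SMOOTHING) * target_y
        return self.x, self.y


cursor = HandCursor()
print(cursor.update(0.25, 0.75))  # hand toward the lower left of the frame
```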
Physical interfaces range from simple hardware remote controls (similar to the one for your television) to in-wall control panels to more interesting and experimental ways of interacting using physical objects or surfaces.
Spatial & Environmental sensors include passive and active sensors that monitor all aspects and elements of a space or environment. These include sensors for elements such as temperature, movement, and sound levels.
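In practice, these sensors are often watched for a threshold crossing that then drives the experience. The sketch below monitors a simulated sound level and fires a reaction when it gets loud; the `read_sound_level` function and the 70 dB threshold are assumptions standing in for a real sensor and a level tuned to the space.

```python
import random
import time

SOUND_THRESHOLD_DB = 70.0   # assumed trigger level, tuned per installation


def read_sound_level():
    # Placeholder: a real install would read a microphone or sound-level sensor.
    return random.uniform(40.0, 90.0)


def on_loud_moment(level):
    # React to the environment: dim the lights, change a projection, etc.
    print(f"loud moment detected: {level:.1f} dB")


def monitor():
    for _ in range(100):             # sample the space for a short while
        level = read_sound_level()
        if level > SOUND_THRESHOLD_DB:
            on_loud_moment(level)
        time.sleep(0.1)


monitor()
```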
Actuators are the things that go boom after you interact with one of the other kinds of interactive technologies: the types of physical hardware you control in response to input. We will briefly go over some common ones, but most of the detail will come in the technical implementation sections.