Telexistence

| Summary | How Telexistence Evolved | Mutual Telexistence | Haptic Telexistence |

| Haptic Primary Colors | Telexistence Avatar Robot System: TELESAR V | Toward a "Telexistence Society" |

| The Hour I First Invented Telexistence | References |

 

Summary

Telecommunication and remote-controlled operations are common in our daily lives. While performing these operations, users want to feel present on site and to perform their tasks directly, rather than controlling them remotely. However, commercially available telecommunication and telepresence systems do not provide the sensation of self-presence or self-existence, so users never feel that they are present or exist in the remote place. Moreover, these systems do not provide the haptic sensation necessary for direct remote manipulation, which results not only in a lack of reality but also in difficulty performing tasks.


Prof. Tachi and his team have been working on telexistence, which aims to enable a human user to have the sensation of being present on site and to perform tasks as if he or she were directly performing them there. By using a telexistence master-slave system, the user can feel present in a remote environment, with a sensation of self-existence or self-presence there, and can perform tasks directly as though actually present. The telexistence master-slave system is a virtual exoskeleton human amplifier, through which a user can operate a remote avatar robot as if it were his or her own body; he or she has the feeling of being inside the robot or wearing it as a garment.


The concept of telexistence was invented by Dr. Susumu Tachi in 1980, and it served, together with the concept of third-generation robotics, as the fundamental principle of the eight-year Japanese national large-scale project "Advanced Robot Technology in Hazardous Environments," which began in 1983. Theoretical considerations and a systematic design procedure for telexistence systems were established through the project. Since that time, experimental hardware for telexistence systems has been developed, and the feasibility of the concept has been demonstrated.


Two important problems remained to be solved: mutual telexistence and haptic telexistence. A mutual telexistence system provides the sensations of both self-presence and their presence and is used mainly for communication, while haptic telexistence adds haptic sensation to the visual and auditory sensations of self-presence and is used mainly for the remote operation of real tasks.


Recent advancements in telexistence have partly solved these problems. TELESAR II (telexistence surrogate anthropomorphic robot version II) is the first system that provided the sensations of both self-presence and their presence for communication purposes, using retroreflective projection technology (RPT). For remote operation, Prof. Tachi and his team have developed a telexistence master-slave system named TELESAR V, which can transmit not only visual and auditory sensations but also haptic sensation. Haptic sensation is displayed based on the principle of haptic primary colors.

 

(Revised from Susumu Tachi, "Telexistence: Enabling Humans to be Virtually Ubiquitous," IEEE Computer Graphics and Applications, vol. 36, no. 1, pp. 8-14, 2016.)

 


How Telexistence Evolved

Figure 1 illustrates the emergence and evolution of the concept of telexistence. Teleoperation emerged at Argonne National Laboratory soon after World War II as a means of manipulating radioactive materials [Goertz 1952]. To let an operator work directly in the environment rather than remotely, the exoskeleton human amplifier was invented in the 1960s. In the late 1960s, a research and development program was launched to develop a powered exoskeleton that an operator could wear like a garment. The Hardiman exoskeleton, proposed by General Electric Co., would allow an operator to command a set of mechanical muscles multiplying his strength by a factor of 25; yet, in this union of man and machine, the man would feel the objects and forces almost as though he were in direct contact with them [Mosher 1967]. However, the program was unsuccessful for two reasons: (1) wearing the powered exoskeleton was potentially quite dangerous for the human operator in the event of a machine malfunction; and (2) an autonomous mode was difficult to achieve, so every task had to be performed by the human operator. Thus, the design proved impractical in its original form. The concept of supervisory control was proposed by T. B. Sheridan in the 1970s to add autonomy to human-operated systems [Sheridan 1974].

Figure 1 Evolution of the concept of telexistence through Aufheben, or sublation, of the contradictory concepts of the exoskeleton human amplifier and supervisory control.

 

In the 1980s, the exoskeleton human amplifier evolved into telexistence, i.e., into the virtual exoskeleton human amplifier [Tachi 1984]. Because a telexistence system does not require the human user to wear an exoskeleton robot or to be physically inside it, the user avoids the danger of a crash or of exposure to a hazardous environment in the case of a machine malfunction, and can also let the robots work autonomously by controlling several of them in supervisory control mode. Yet when a robot is used in telexistence mode, the user feels as if he or she were inside it, and the system works virtually as an exoskeleton human amplifier, as shown in Fig. 2.

Figure 2 Exoskeleton human amplifier (left) and telexistence virtual exoskeleton human amplifier (middle and right): the human user is effectively inside the robot, as if wearing the robot's body. Avatar robots can act autonomously under supervisory control, while the supervisor can use any one of them as his or her virtual exoskeleton human amplifier via telexistence mode.

 


Mutual Telexistence

Several commercial products have been marketed under the name telepresence, such as the Teliris telepresence videoconferencing system, Cisco telepresence, Polycom telepresence, the Anybots QB telepresence robot, the Texai remote presence system, the Double telepresence robot, the Suitable Beam remote presence system, and VGo robotic telepresence.

Current commercial telepresence robots controlled from laptops or tablet devices can provide a certain sense of their presence at the robot's side, but the remote user has a poor sense of self-presence. As for the sense of their presence, commercial products suffer from the problem that the image presented on the display is only a two-dimensional face, far from real, and that multi-viewpoint images are not provided, so the same frontal face is seen even when the robot is viewed from the side. The ideal system should be a mutual telexistence system, providing both the sense of self-presence and the sense of their presence: the user should feel present in the remote environment where his or her surrogate robot exists, and at the same time, the user who remotely visits the surrogate robot's location should be seen naturally and simultaneously by the several people standing around the robot, as if he or she actually existed there. However, almost none of the previous systems could provide both senses.

Figure 3 shows a conceptual sketch of an ideal mutual telexistence system using a telexistence cockpit and an avatar robot. User A observes remote environment B using an omnistereo camera mounted on surrogate robot A', which provides user A with a panoramic stereo view of the remote environment displayed inside the cockpit. User A controls robot A' using the telexistence master-slave control method. Cameras B' and C', mounted on the booth, are controlled by the position and orientation of users B and C, respectively, relative to robot A'. Users B and C observe different images of user A projected onto robot A' by wearing their own head-mounted projectors (HMPs), which provide the correct perspective for each of them. Since robot A' is covered with retroreflective material, the images from cameras B' and C' can be projected onto the same robot while being viewed separately by users B and C.

Figure 3 Proposed mutual telexistence system using RPT.
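To make the camera-control geometry concrete, here is a minimal sketch (an illustration of the idea only, not the published implementation; the function names are hypothetical). It computes the pose of booth camera B' so that it views user A from the same relative pose at which user B views robot A', using 4x4 homogeneous transforms:

```python
import numpy as np

def relative_pose(T_world_ref, T_world_obj):
    """Pose of obj expressed in the frame of ref (both 4x4 homogeneous)."""
    return np.linalg.inv(T_world_ref) @ T_world_obj

def booth_camera_pose(T_remote_robot, T_remote_viewer, T_booth_operator):
    """Place the booth camera at the same pose, relative to the operator,
    that the remote viewer has relative to the surrogate robot. The camera
    then captures exactly the view of the operator that should be projected
    onto the robot for that viewer."""
    T_robot_viewer = relative_pose(T_remote_robot, T_remote_viewer)
    return T_booth_operator @ T_robot_viewer

# Example: viewer B stands 2 m in front of robot A'; the operator (user A)
# sits 5 m from the booth origin along x.
T_robot = np.eye(4)
T_viewer = np.eye(4); T_viewer[0, 3] = 2.0
T_operator = np.eye(4); T_operator[0, 3] = 5.0
print(booth_camera_pose(T_robot, T_viewer, T_operator))  # camera at x = 7 m
```

Run once per tracked viewer, the same computation yields one booth camera pose per head-mounted projector, which is what lets each viewer receive a perspective-correct image of the operator.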

 

A method for mutual telexistence based on projecting real-time images of the operator onto a surrogate robot using RPT was first proposed in 1999 [Tachi 1999b], together with several potential applications such as transparent cockpits [Tachi et al. 2014], and the feasibility of the concept was demonstrated with experimental mutual telexistence systems constructed in 2004 [Tachi et al. 2004].

In 2005, a mutual telexistence master-slave system called TELESAR II was constructed for the Aichi World Exposition.  TELESAR II is composed of three subsystems: a slave robot, a master cockpit, and a viewer system, as shown in Fig. 4.

Figure 4 Schematic diagram of mutual telexistence system TELESAR II.

 

The robot has two human-sized arms and hands, a torso, and a head. Its neck has two degrees of freedom (DOFs), allowing rotation around the pitch and roll axes. Four pairs of stereo cameras on top of the head feed a three-dimensional surround display, so that the operator can see the remote environment naturally, with a sensation of presence. A microphone array and a speaker are also employed for auditory sensation and verbal communication. Each arm has seven DOFs, and each hand has five fingers with a total of eight DOFs [Tadakuma et al. 2005].

The cockpit consists of two master arms, two master hands, the aforementioned multi-stereo display system, speakers, a microphone, and cameras that capture images of the operator in real time. So that the operator can move smoothly, each master arm has a six-DOF structure that leaves the operator's elbow free of constraints. To control the redundant seven DOFs of the anthropomorphic slave arm, a small orientation sensor is mounted on the operator's elbow. Each master arm can thus measure the seven-DOF motion commanded to the corresponding slave arm, while the force from each slave arm is transmitted back to the corresponding master arm in six DOFs.

The most distinctive feature of TELESAR II is its RPT viewer system. Both the motion and the visual image of the operator are important for observers to perceive the operator's existence at the place where the robot is working. So that the operator's image appears on the slave robot as if the operator were inside it, the robot is covered with a retroreflective material, and the image captured by the camera in the master cockpit is projected onto TELESAR II.

TELESAR II thus acts as a screen, and a person looking through the RPT viewer system sees the robot as though it were the operator, because the real image of the operator is projected onto it. The face and chest of TELESAR II are covered with retroreflective material, whose surface reflects an incident ray back in the direction it came from. Because of this property, an image can be projected onto the surface of TELESAR II without distortion. Since many RPT projectors are aimed from different directions, each projecting the image from the corresponding camera located around the operator, each viewer sees the image of the operator appropriate to his or her own viewpoint (Fig. 5).

 

 

Figure 5 Principle of retroreflective projection technology (RPT), with multiple images projected from different angles at the same time.
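As a numerical aside (my own illustration, not from the original article), the optical property that makes this work can be stated in two lines of code: an ordinary mirror reflects a ray about the surface normal, whereas an ideal retroreflector returns it along the incoming direction regardless of the normal, so each projector's image travels back only toward that projector's viewpoint:

```python
import numpy as np

def mirror_reflect(d, n):
    """Specular reflection of ray direction d about unit surface normal n."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    return d - 2.0 * np.dot(d, n) * n

def retro_reflect(d, n):
    """An ideal retroreflector sends the ray back along -d for any normal n."""
    return -np.asarray(d, float)

d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # incoming ray direction
n = np.array([0.0, 1.0, 0.0])                  # surface normal
print(mirror_reflect(d, n))  # [0.707 0.707 0.   ] -> reflected elsewhere
print(retro_reflect(d, n))   # [-0.707 0.707 -0. ] -> returns to the source
```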

 

Figure 6 (left) shows the telexistence surrogate robot TELESAR II acting as the virtual exoskeleton human amplifier of a remote operator, displaying his image as if he were inside the robot; Fig. 6 (right) shows the operator telexisting in the TELESAR II robot, feeling as if he were inside it. Because a master-slave manipulation robot was used as the surrogate for a human, nonverbal communication such as gestures and handshakes could be performed in addition to conventional verbal communication [Tachi et al. 2004]. Moreover, the person who remotely visited the surrogate robot's location could be seen naturally and simultaneously by several people standing around the robot, so that mutual telexistence was attained.

Figure 6 Mutual telexistence using the avatar robot (left) and the operator at the controls (right).

 


Haptic Telexistence

Although ideal telexistence should provide haptic sensations, conventional telepresence systems provide mostly visual and auditory sensations, with only incomplete haptic sensations. TELESAR V, a master-slave robot system for performing full-body movements, was developed in 2011 [Tachi et al. 2012]. In August 2012, it was successfully demonstrated at SIGGRAPH that the TELESAR V master-slave system can transmit fine haptic sensations, such as the texture and temperature of a material, from an avatar robot's fingers to the human user's fingers [Fernando et al. 2012], based on our proposed principle of haptic primary colors [Tachi et al. 2013].

 


Haptic Primary Colors

Humans do not perceive the world as it is. Different physical stimuli can give rise to the same sensation and are then perceived as identical. A typical example is human color perception: light of different spectra is perceived as the same color if it contains the same proportions of the red, green, and blue (RGB) spectral components. This is because the human retina typically contains three types of color receptors, called cone cells or cones, each of which responds to a different range of the color spectrum. Human color sensation is therefore three-dimensional and can generally be modeled as a mixture of the three primary colors: red, green, and blue.

This many-to-one mapping from physical properties to psychophysical perception is the key to virtual reality (VR) for humans: VR produces the same effect as a real object by presenting virtual entities that exploit this many-to-one correspondence. We have proposed the hypothesis that cutaneous sensation exhibits the same many-to-one correspondence from physical properties to psychophysical perception, owing to the physiological constraints of humans. We call this hypothesis the "haptic primary colors" [Tachi et al. 2013]. As shown in Fig. 7, we define three spaces: physical space, physiological space, and psychophysical (perception) space; different stimuli in physical space can map to the same point in perception space.

Figure 7 Haptic primary color model.
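The color analogy can be made concrete with a toy computation (my own illustration; the cone-sensitivity numbers below are invented for the example). Two physically different light spectra that produce identical cone responses are indistinguishable to the observer, which is exactly the many-to-one mapping that the haptic primary colors hypothesis posits for touch:

```python
import numpy as np

# Idealized cone sensitivities sampled at 5 wavelengths (hypothetical values).
# Rows: long- (red), medium- (green), and short-wavelength (blue) cones.
S = np.array([[0.0, 0.1, 0.3, 0.8, 0.4],
              [0.1, 0.4, 0.8, 0.3, 0.0],
              [0.7, 0.6, 0.1, 0.0, 0.0]])

spectrum_a = np.array([0.2, 0.0, 0.5, 0.1, 0.3])
# Construct a different spectrum that excites the cones identically:
# any component in the null space of S is invisible to this observer.
null_vec = np.linalg.svd(S)[2][-1]
spectrum_b = spectrum_a + 0.2 * null_vec

print(np.allclose(S @ spectrum_a, S @ spectrum_b))  # True: same perceived color
print(np.allclose(spectrum_a, spectrum_b))          # False: different physics
```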

 

In physical space, human skin physically contacts an object, and the interaction continues over time. Physical objects have several surface properties, such as roughness, friction, thermal characteristics, and elasticity.

We hypothesize that at each contact point of the skin, the cutaneous phenomena can be resolved into three components: force f(t), vibration v(t), and temperature e(t); objects with the same f(t), v(t), and e(t) are perceived as the same, even if their other physical properties differ.

We measure f(t), v(t), and e(t) at each contact point with sensors mounted on the avatar robot's hand and transmit this information to the human user who controls the avatar robot as his or her surrogate. We reproduce the information at the user's hand via haptic displays of force, vibration, and temperature, so that the user has the sensation of touching the object as he or she moves the avatar robot's hand. We can also synthesize virtual cutaneous sensations by displaying computer-generated f(t), v(t), and e(t) to the user through the haptic display.
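In software terms, the haptic primary colors reduce the haptic channel between robot and user to three time series per contact point. The following minimal sketch shows that interface (the types and function names are my own, for illustration; the actual TELESAR implementation is not published in this form):

```python
import time
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class HapticSample:
    """One haptic-primary-colors reading at a single contact point."""
    t: float                           # timestamp [s]
    force: Tuple[float, float, float]  # contact force vector f(t) [N]
    vibration: float                   # high-frequency component v(t)
    temperature: float                 # surface temperature e(t) [deg C]

def transmit(read_sensor: Callable[[], HapticSample],
             write_display: Callable[[HapticSample], None],
             rate_hz: float = 1000.0, duration_s: float = 1.0) -> None:
    """Stream f(t), v(t), e(t) from the avatar's fingertip sensor to the
    user's fingertip display. By the haptic-primary-colors hypothesis,
    any two objects producing the same stream feel the same."""
    period = 1.0 / rate_hz
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        write_display(read_sensor())  # measured remotely, reproduced locally
        time.sleep(period)
```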

This breakdown into force, vibration, and temperature in physical space is grounded in the limits of human sensation in physiological space. Human skin has a limited set of receptors, as does the human retina. In physiological space, cutaneous perception is created through a combination of nerve signals from several types of tactile receptors located below the surface of the skin. If we regard each activated haptic receptor as a sensory basis, we should be able to express any given pattern of cutaneous sensation by synthesis over these bases.

Merkel cells, Ruffini endings, Meissner's corpuscles, and Pacinian corpuscles are activated by pressure, tangential force, low-frequency vibrations, and high-frequency vibrations, respectively. Adding cold receptors (free nerve endings), warmth receptors, and pain receptors to these four mechanoreceptive sensory bases gives seven sensory bases in physiological space. It is also possible to add the cochlea, which hears the sound associated with vibrations, as one more basis; this auditory basis can be considered cross-modal.

Since all seven receptor types respond only to force, vibration, and temperature applied to the skin surface, these three components in physical space suffice to stimulate each of the seven receptors. This is why, in physical space, we have three haptic primary colors: force, vibration, and temperature. Theoretically, by combining these three components we can produce any type of cutaneous sensation without any "real" touching of an object.

 


Telexistence Avatar Robot System: TELESAR V

 

TELESAR V is a telexistence master-slave robot system developed to realize the concept of haptic telexistence. It comprises a high-speed, robust, full-upper-body, 53-DOF anthropomorphic slave robot and a mechanically unconstrained master cockpit. The system provides an experience that extends the user's "body schema," the up-to-date representation a human maintains of the positions of his or her various body parts in space. Through the body schema, the user can understand the posture of the remote body and act with the perception that the remote body is his or her own. With this experience, users can perform tasks dexterously and perceive the robot's body as their own through visual, auditory, and haptic sensations, which provides the simplest and most fundamental experience of telexistence. The TELESAR V master-slave system can transmit fine haptic sensations, such as the texture and temperature of a material, from the avatar robot's fingers to the human user's fingers.

As shown in Figs. 8 and 9, the TELESAR V system consists of a master (local) side and a slave (remote) side. The 53-DOF dexterous robot comprises a 6-DOF torso, a 3-DOF head, two 7-DOF arms, and two 15-DOF hands. The robot has Full HD (1920 × 1080 pixel) cameras for capturing wide-angle stereovision, and stereo microphones on the robot's ears capture audio from the remote site. The operator's voice is transferred to the remote site and output through a small speaker near the robot's mouth, enabling conventional bidirectional verbal communication. On the master side, the operator's movements are captured with a motion-capture system (OptiTrack), and finger bending is captured in 14 DOFs with a modified 5DT Data Glove 14.

Figure 8 General view of TELESAR V master (left) and slave robot (right).

Figure 9 TELESAR V system configuration.
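As a quick check of the numbers above, the 53 DOFs tally directly from the per-part budget (a trivial sum, shown only to make the breakdown explicit):

```python
# DOF budget of the TELESAR V slave robot, as listed in the text.
dof = {"torso": 6, "head": 3, "arm": 7, "hand": 15}
total = dof["torso"] + dof["head"] + 2 * (dof["arm"] + dof["hand"])
assert total == 53  # 6 + 3 + 2 * (7 + 15)
```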

 

The haptic transmission system consists of three parts: a haptic sensor, a haptic display, and a processing block. When the haptic sensor touches an object, it obtains haptic information such as contact force, vibration, and temperature, following the haptic primary colors. The haptic display applies haptic stimuli to the user's finger to reproduce the information obtained by the sensor. The processing block connects the haptic sensor with the haptic display, converting the measured physical data into data suited to physiological haptic perception for reproduction by the haptic display. The scanning and displaying mechanisms are described below.

First, a force sensor inside the haptic sensor measures the vector force when the sensor touches an object. Two motor-belt mechanisms in the haptic display then reproduce this vector force on the operator's fingertip. The processing block controls the current drawn by each motor to produce the target torques derived from the measured force; as a result, the mechanism reproduces the force sensation of the contact.

Second, a microphone in the haptic sensor records the sound generated at its surface while the sensor is in contact with an object. A force reactor in the haptic display then plays the transmitted sound back as vibration. Since this vibration conveys the high-frequency haptic component, the information is transmitted without delay.

Third, a thermistor in the haptic sensor measures the surface temperature of the object. The measured temperature is reproduced by a Peltier actuator mounted on the operator's fingertip. The processing block generates the control signal for the Peltier actuator based on a PID control loop, with feedback from a thermistor located on the actuator itself. Figs. 10 and 11 show the structures of the haptic sensor and the haptic display, respectively.

 

Fig. 10 Structure of haptic sensor.    Fig. 11 Structure of haptic display.
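The temperature channel lends itself to a compact sketch. Below is a minimal PID loop of the kind described above, driving the fingertip Peltier element toward the remotely measured temperature with feedback from the thermistor on the element (the gains and variable names are hypothetical; the actual controller parameters are not given in the text):

```python
class PID:
    """Basic PID controller for the Peltier element's drive signal."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured  # target minus thermistor reading
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
remote_temperature = 31.5  # e(t) measured at the avatar's fingertip [deg C]
local_thermistor = 26.0    # feedback from the thermistor on the Peltier
drive = pid.step(remote_temperature, local_thermistor)
print(f"Peltier drive command: {drive:.2f}")
```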

Figure 12 shows the left hand of the TELESAR V robot with its haptic sensors, and the haptic displays set into the modified 5DT Data Glove 14.

Figure 12 Slave hand with haptic sensors (left) and master hand with haptic displays (right).

 

Figure 13 shows TELESAR V conducting several tasks, such as picking up sticks, transferring small balls from one cup to another, producing Japanese calligraphy, playing Japanese chess (shogi), and feeling the texture of cloth.

Figure 13 TELESAR V conducting several tasks while transmitting haptic sensation to the user.

 


Toward a "Telexistence Society"

At present, people in Japan face many difficult problems: the increasing concentration of population in metropolitan areas; a growing number of elderly people and a shrinking workforce due to the trend toward smaller families; the dilemmas posed by combining child-rearing with work; and the great deal of time consumed by commuting, with the resulting lack of personal time. If it were possible to create innovative technology that changes the conventional conceptualization of movement by transferring physical functions without requiring actual travel, these difficulties could be overcome.

To date, working from home has been limited to communications and paperwork that transmit audio-visual information and data, along with conversations. It has been impossible to carry out physical work at factories, or operations at places such as construction sites, healthcare facilities, and hospitals, which cannot be accomplished unless the person in question is actually on site. Telexistence departs from the conventional range of remote communications, which transmit only the five senses; it realizes an innovative method that transmits all the physical functions of human beings, enabling remote engagement in labor and operations that was impossible until now.

If a telexistence society in which physical functions can be delegated were realized, the relationship between people and industry, and the nature of society itself, would fundamentally change. Problems of the working environment would be resolved, and it would no longer be necessary to work in adverse environments. No matter where a factory is located, workers could be assembled from the entire country or the entire world, so the conditions for siting factories would change revolutionarily compared with the past, and population concentration in the metropolitan areas could be avoided. Since foreign workers would be able to attend work remotely, the myriad problems accompanying immigration as a mere labor force, as distinct from humanitarian immigration, could be eliminated. Moreover, it would be possible to ensure a 24-hour labor force at multiple overseas hubs by exploiting time differences, rendering the night shift unnecessary. Both men and women would be able to work while raising children, helping to create a society in which it is easier to raise children.

The time-related costs of travel in global business would be reduced. Commuting would become unnecessary, and transportation problems could be alleviated. It is predicted that it would no longer be necessary to live near one's workplace, that the concentration of population in the metropolis would be alleviated, that work-life balance would improve, and that people would be able to live where they wish and lead fulfilling lives.

In addition, owing to the augmented functions of the avatar robot, the body of the virtual self, even elderly and handicapped people would be at no physical disadvantage compared with young people, since they could augment and enhance their physical functions beyond those of their original bodies; they could thus participate in work that gives full play to the abundant experience amassed over a lifetime. The quality of labor would rise greatly, reinvigorating Japan. The hiring of specialists such as technicians and physicians with world-class skills would also be facilitated, and the optimal placement of human resources according to competence could be achieved.

Looking further ahead, it would be possible to respond instantly and from a safe place during disasters and emergencies, and the technology could also be used routinely to dispatch medical services, caregivers, physicians, and other experts to remote areas. In addition, the creation of new industries in tourism, travel, shopping, and leisure would greatly improve convenience and motivation in citizens' lives, and a healthy and pleasant lifestyle could be realized in a clean, energy-conserving society.

In this manner, it goes without saying that the realization of a "telexistence society," which makes it possible for human beings to exist virtually in remote places, is an extremely high-impact technological challenge. One can even conclude that it is a discontinuous innovation, different from incremental improvements of existing work equipment and environments, insofar as it radically alters both the nature of labor itself and people's lifestyles.

(Excerpted from Susumu Tachi, “Memory of the Early Days and a View toward the Future,” Presence, Vol.25, No.3, pp.239-246, 2016.)

 


The Hour I First Invented Telexistence

(From Susumu Tachi and Michitaka Hirose ed.: Virtual Technology Laboratory, p.102, Kogyo Chosa Kai, 1992. ISBN 4-7693-5054-6)

 

The idea occurred to me in the late summer of 1980, when I had returned to my laboratory in Tsukuba after a year of research at MIT, where I had worked as a senior visiting scientist. My research toward the world's first guide dog robot, which I had invented in Japan, was at its final stage, and I was pouring all my energy into it while incorporating the research results from MIT. At MIT I had worked with Professor Robert W. Mann, a renowned researcher known for the Boston Arm, with whom I proposed a systematic, quantitative evaluation method for mobility aids for the blind using a mobility simulator with virtual apparatus.

 

I came up with the idea of telexistence, as a new development growing out of the guide dog robot research and the work at MIT, while walking along a corridor of the laboratory on the morning of September 19, 1980. I was suddenly reminded that human vision is based on light waves and that humans use only the two images projected on their retinas; they construct a three-dimensional world by actively perceiving how these images change over time. A robot would therefore only need to provide the human, by measurement and control, with the same retinal images that the person's own eyes would have perceived. As I rediscovered this fact, all my problems dissipated, and I began to tremble with excitement. I went back to my office immediately and wrote down what I had just discovered and all the ideas that sprang from it (Fig. 1). Surprisingly, I simultaneously came up with the idea of how to design a telexistence visual display.

 

The invention of telexistence changed the way I see the world. I believe that telexistence and virtual reality will free people from temporal and spatial constraints. Telexistence will also allow us to understand the essence of existence and reality. Virtual reality and telexistence show ever more possibilities today, and the world watches them as key technologies of the twenty-first century. I will continue to explore the boundless world of telexistence and virtual reality for some time to come.

 


Figure 1 Sketch of the idea when I invented the concept of telexistence.


 

【How Telexistence was Invented】(From Susumu Tachi: Telexistence and Virtual Reality, pp.148-15, The Nikkan Kogyo Shimbun, 1992. ISBN 4-526-03189-5)

 

Telexistence is a concept that originated in Japan. I would like to describe how I invented it.

 

In 1976, I started the world's first research on a guide dog robot, dubbed MELDOG. MELDOG was a research and development project to assist vision-impaired people by giving a robot the functions of a guide dog: the robot recognizes the environment and communicates information about it to the human, helping vision-impaired people walk. One issue that arose in this research was the following: when a mobility aid like the guide dog robot obtains environmental information, how should it present that information to the human to enable him or her to walk?

 

The following type of issue was also important. Suppose there is a parked car 3 m ahead on the left, or a person walking in the same direction 1 m in front, or a crossroads 10 m ahead. How can a mobility aid for the blind, like the guide dog robot, convey this information? It needs to be communicated properly through the user's remaining senses, such as sound or cutaneous stimulation, and this communication method is the core design problem of mobility aids for vision-impaired people. In other words, we want to know what kind of information, communicated in what way, will allow people to walk freely. However, no such method had been established; it was difficult even to investigate how to communicate information so as to make walking easier.

 

All the equipment at that time was designed by a repeated process of somebody thinking "I wonder if this will work," designing a device based on that idea, trying it out, and modifying it if it did not work. With such a method, it takes a long time to design, build, test, and remake a device again and again. In addition, even if a device proved easy to use for the person who evaluated it, a different user might not find it easy to use at all. So, while designing devices more systematically so that anyone could use them easily, we in fact also needed to tailor each device to its individual user. In reality, building actual devices through this process was easier said than done, not just in terms of time and economics: there was not even a design and development methodology.

 

One method devised to solve this problem was to use a computer to build the system that communicates walking information to the person, presenting the information artificially while systematically varying it and evaluating the result. Professor Robert W. Mann, the well-known professor who developed the Boston Arm, the world's first myoelectrically controlled prosthetic arm, was at the Massachusetts Institute of Technology (MIT). I did research with Professor Mann at MIT from 1979 to 1980 as a senior visiting scientist. Our research was based on the following idea.

 

The idea was this: rather than actually building each device, implement the transmission method inside the computer and use it to communicate information to the human. The method can then be optimized for each individual user, making it possible to create devices that are easy for anyone to use. In more detail, what we had in mind at MIT was as follows. Using the mobility aid, a person walks around an actual-size model space resembling a real space, and the person's movements are measured. A simulation on the computer works out, from those movements, what environmental information the mobility aid would be obtaining. That information is then communicated to the person in every way we could think of; we vary the communication method and measure how the person walks. By systematically repeating this and evaluating the person's walking, we could quantitatively investigate what information, conveyed in what way, makes walking easier (Figures 2 and 3).

 

 

 


Figure 2 Human / equipment / environment simulator. (From Robert W. Mann: The Evaluation and Simulation of Mobility Aids for the Blind, American Foundation for the Blind Research Bulletin, No.11, pp.93-98, 1965.)

 


Figure 3 Experimental arrangement for real-time evaluation of sensory display devices for the blind. (From Susumu Tachi, Robert W. Mann and Derick Rowell: Quantitative Comparison of Alternative Sensory Display for Mobility Aids for the Blind, IEEE Transactions on Biomedical Engineering, Vol.BME-30, No.9, pp.571-577, 1983.)

 

In the summer of 1980 I returned to Japan and tried to develop this approach a bit further. It was a ground-breaking method, but it had disadvantages. When walking in a real space, it is dangerous to bump into things. What is more, real spaces cannot easily be changed, which means that once you have moved through a space, the next time you can walk through the same space even without a mobility aid.

 

This was no good: we needed to evaluate the mobility aid fully while freely changing the space in various ways. So we needed to change the space freely using something like a computer. That was one of the ideas behind the concept of creating, on a computer, a virtual space you could walk around in. But this was before the term "virtual reality" had even been coined, and creating both the mobility aid and the space on a computer was nearly impossible.

 

So the idea I came up with as the next stage was that, instead of building it all on the computer, we would build the actual space but have a robot walk around it, since it was dangerous for a human. The person would walk in a safe area, and his or her movements would be communicated to the robot to make it move. With such a system, the human would never be in the dangerous position of bumping into obstacles, and although walking in one place, the person would have the sensation of being where the robot was. Another advantage of this method was that it allowed us to extend our research to how sighted people, as well as vision-impaired people, walk.

 

In other words, we were giving a robot a sense of sight and making it walk. Even for a sighted person, if the robot's sight is degraded, the person gradually becomes unable to see. How does his or her way of walking change when this happens? What kind of information needs to be extracted and provided to allow the person to walk as before? I devised apparatus to make such research possible, proposed as a patent for an "Evaluation Apparatus of Mobility Aids for the Blind" (Figure 4).

 


Figure 4 Evaluation Apparatus of Mobility Aids for the Blind (patent filed 1980.12.26);

Science and Technology Agency Featured Invention No. 42 (1983.4.18).

 

At the same time, I realized that by making a robot move in a different space in this way, it might be possible to carry out various kinds of work using a robot in place of a human. My group had been studying robots, so I had long been familiar with teleoperation. With this new technology, what had previously been thought of as teleoperation would advance to a new stage: unlike conventional teleoperation, the operator would work with the sensation of being where the robot is. I realized that this would dramatically improve the efficiency of work. This was on September 19, 1980, and it became the idea for the patent "Operation Method of Manipulators with Sensory Information Display Functions" relating to manipulation (Figure 5).

 


Figure 5 Operation Method of Manipulators with Sensory Information Display Functions (patent filed 1981.1.11).

 

This is how the concept of telexistence originated. What is more, it came together with the Advanced Robot Technology in Hazardous Environments project, which started in 1983. During the preparation of this national project, telexistence was recognized as a very important technology by the Machinery and Information Industries Bureau of the Ministry of International Trade and Industry, and it was developed as one of the project's core techniques. In fact, it would be more accurate to say that the national large-scale project was built around telexistence as a key technology. The first conference presentation was given at the annual conference of the Society of Instrument and Control Engineers (SICE) in July 1982; this is when the concept of telexistence, together with the world's first telexistence apparatus, was introduced to the academic world.

 

As this background shows, telexistence came about completely independently of the American concept of telepresence. It is an interesting historical coincidence that around the same time in the USA, in the context of space development, scientists such as Marvin Minsky were forming the ideas that led to telepresence. Perhaps a technology also carries within it the timing at which it is born and raised.

 

National Large-scale Project: Advanced Robot Technology in Hazardous Environments

 

Just at that time, MITI (the Ministry of International Trade and Industry) was considering a new national large-scale project concerning robots. Twenty years after the dawn of robotics in 1960, when Joseph Engelberger made the first industrial robot, UNIMATE, 1980 was being called the first year of the robot, and Japan was known as the robot kingdom. However, the robots in use in 1980 were the so-called first-generation playback robots, and second-generation robots were only just starting to appear in factories. Mr. Uehara, then a Deputy Director at MITI, knew that I was studying telexistence and asked me to talk with him. I explained various ideas, and he told me to put them together and bring them to MITI at once, so I gave up my holiday and wrote the proposal shown in Figure 6. Based on this proposal, the Advanced Robot Technology in Hazardous Environments project was launched in 1983.

 


Figure 6 Proposal for large-scale project “Advanced Robot Technology in Hazardous Environments.”

 

When the Development Office for the Advanced Robot Technology in Hazardous Environments project was established in 1983, I worked at the office for the first year on the detailed planning of the project: what the final goal of the eight-year project should be, how the results should be evaluated, what technologies should be developed to attain the goal, which companies should participate, and so on. Advanced robot technology in hazardous environments is used in nuclear power plants, ocean and oil facilities, and so on. Robots with legs were particularly important, because conventional wheeled robots could not step over pipelines or go up and down stairs. The robots also needed hands as well as arms so they could work while moving around, and of course they needed three-dimensional vision and a sense of touch. Through this research, autonomous intelligent robots were to be developed.

 

However, autonomous intelligent robots can work only in structured environments. In an unstructured environment, such as an extreme work environment, supervisory control is necessary. But there are limits to supervisory control as well, so there always remain aspects that must be handled directly by humans. Yet the environments targeted by the project included dangerous locations, and situations where the operator was far away and could not get there immediately. This is exactly where telexistence can be used effectively. We called this framework the "third-generation framework," or "third-generation robotics."

 

Once the detailed development plan was in place and the project had started, I returned to the laboratory and carried on researching telexistence. Then, as mentioned above, the idea of telexistence appeared in the USA under the name telepresence. Marvin Minsky, the leading figure in artificial intelligence, proposed the concept in the context of space development. This was in 1983. The concept of telepresence itself had appeared in 1980, before being presented in the ARAMIS report of 1983, so it can be said that the concept was born simultaneously in Japan and the USA in 1980. However, I was the first person in the world to actually build a device and test it. I presented the first telexistence machine, together with the concept, at the SICE annual conference in Japan in July 1982 (Figure 7) and at the RoManSy international conference in June 1984 [Tachi et al. 1984]. That is how ground-breaking the concept of telexistence was.

 


Figure 7 The first paper on telexistence design method and telexistence machine.

(From Susumu Tachi and Minoru Abe: Study on Tele-existence (I). In: Proceedings of the 21st Annual Conference of the Society of Instrument and Control Engineers (SICE), pp. 167-168, July 1982)

 


References

[Goertz 1952] R. C. Goertz: Fundamentals of general-purpose remote manipulators, Nucleonics, vol. 10, no.11, pp. 36–42, 1952.

[Mosher 1967] R. S. Mosher: Handyman to Hardiman, SAE Technical Paper 670088, doi:10.4271/670088, 1967.

[Sheridan 1974] T. B. Sheridan and W. R. Ferrell: Man–Machine Systems, MIT Press, Cambridge, MA, 1974.

[Tachi et al. 1980] Susumu Tachi, Kazuo Tanie and Kiyoshi Komoriya: Evaluation Apparatus of Mobility Aids for the Blind (patent filed 1980.12.26); Science and Technology Agency Featured Invention No. 42 (1983.4.18). (in Japanese) [PDF]

[Tachi et al. 1981] Susumu Tachi, Kazuo Tanie and Kiyoshi Komoriya: Operation Method of Manipulators with Sensory Information Display Functions (patent filed 1981.1.11). (in Japanese) [PDF]

[Tachi et al. 1982] Susumu Tachi and Minoru Abe: Study on tele-existence (I). In: Proceedings of the 21st Annual Conference of the Society of Instrument and Control Engineers (SICE), pp. 167-168, July 1982. (in Japanese) [PDF]

[Tachi et al. 1982] Susumu Tachi and Kiyoshi Komoriya: The Third Generation Robotics, Measurement and Control, Vol.21, No.12, pp.1140-1146 (1982.12) (in Japanese) [PDF]

[Tachi 1984] Susumu Tachi: The Third Generation Robot, TECHNOCRAT, Vol.17, No.11, pp.22-30 (1984.11) [PDF]

[Tachi et al. 1984] Susumu Tachi, Kazuo Tanie, Kiyoshi Komoriya and Makoto Kaneko: Tele-existence (I): Design and Evaluation of a Visual Display with Sensation of Presence, in A. Morecki et al. ed., Theory and Practice of Robots and Manipulators, pp.245-254, Kogan Page, 1984. [PDF]

[Tachi and Arai 1985] Susumu Tachi and Hirohiko Arai: Study on Tele-existence (II)-Three Dimensional Color Display with Sensation of Presence-, Proceedings of the '85 ICAR (International Conference on Advanced Robotics), pp. 345-352, Tokyo, Japan, 1985. [PDF]

[Hightower et al. 1987] J. D. Hightower, E. H. Spain and R. W. Bowles: Telepresence: A Hybrid Approach to High Performance Robots, Proceedings of the International Conference on Advanced Robotics (ICAR ’87), Versailles, France, pp. 563–573, 1987.

[Tachi et al. 1988a] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence Simulator with Artificial Reality (1)-Design and Evaluation of a Binocular Visual Display Using Solid Models-, Proceedings IEEE International Workshop on Intelligent Robots and Systems -Toward the Next Generation Robot and system-, pp. 719-724, Tokyo, Japan,1988. [PDF]

[Tachi et al. 1988b] S. Tachi, H. Arai, I. Morimoto and G. Seet: Feasibility Experiments on a Mobile Tele-existence System, Proceedings of The International Symposium and Exposition on Robots, pp. 625-636, Sydney, Australia, 1988. [PDF]

[Tachi et al. 1989a] Susumu Tachi, Hirohiko Arai and Taro Maeda: Robotic Tele-existence, Proceedings of the NASA Conference on Space Telerobotics, pp. 171-180, Pasadena, California, USA, 1989. [PDF]

[Tachi et al. 1989b] Susumu Tachi, Hirohiko Arai and Taro Maeda: Development of an Anthropomorphic Tele-existence Slave Robot, Proceedings of the International Conference on Advanced Mechatronics, pp.385-390, Tokyo, Japan, 1989. [PDF]

[Tachi et al. 1989c] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence Visual Display for Remote Manipulation with a Real-time Sensation of Presence, Proceedings of the 20th International Symposium on Industrial Robots, pp.427-434, Tokyo, Japan, 1989. [PDF]

[Tachi 1990a] Susumu Tachi: Tele-existence and/or Cybernetic Interface Studies in Japan, Human Machine Interface for Teleoperators and Virtual Environments, NASA Conference Publication, pp. 34-35, Santa Barbara, California, USA, 1990.3 [PDF]

[Tachi 1990b] Susumu Tachi: Japanese Programs in Robotics Field, Proceedings of the 1st International Symposium on Measurement and Control in Robotics (ICMSR '90), pp.A1.2.1-6, Houston, Texas, USA, 1990.6 (Invited Plenary Paper) [PDF]

[Tachi and Sakaki 1990] Susumu Tachi and Taisuke Sakaki: Impedance Controlled Master Slave System for Tele-existence Manipulation, Proceedings of the 1st International Symposium on Measurement and Control in Robotics (ICMSR '90), pp.B3.1.1-8, Houston, Texas, USA, 1990. [PDF]

[Tachi et al. 1990a] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence Master Slave System for Remote Manipulation, Proceedings of the IEEE International Workshop on Intelligent Robotics and Systems '90 (IROS '90), pp.343-348, Tsuchiura, Japan, 1990. [PDF]

[Tachi et al. 1990b] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence Master Slave System for Remote Manipulation (II), Proceedings of the 29th IEEE Conference on Decision and Control, Vol.1, pp.85-90, Honolulu, Hawaii, USA, 1990. [PDF]

[Tachi et al. 1991a] S. Tachi, H. Arai, T. Maeda, E. Oyama, T. Tsunemoto and Y. Inoue: Tele-existence Experimental System for Remote Operation with a Sensation of Presence, Proceedings of '91 International Symposium on Advanced Robot Technology ('91 ISART), pp.451-458, Tokyo, Japan, 1991. [PDF]

[Tachi et al. 1991b] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence Master Slave System for Remote Manipulation, Video Proceedings of the International Conference on Robotics and Automation, G1, Sacramento, California, USA, 1991.

[Tachi et al. 1991c] S. Tachi, H. Arai, T. Maeda, E. Oyama, N. Tsunemoto and Y. Inoue: Tele-existence in Real World and Virtual World , Proceedings of the fifth International Conference on Advanced Robotics ('91 ICAR), pp.193-198, Pisa, Italy, 1991. (Invited Paper) [PDF]

[Tachi 1991] Susumu Tachi: Tele-existence - Toward Virtual Existence in Real and/or Virtual Worlds -, Proceedings of the International Conference on Artificial Reality and Tele-existence (ICAT '91), pp.85-94, Tokyo, Japan, 1991. [PDF]

[Tachi et al. 1991d] Susumu Tachi, Hirohiko Arai and Taro Maeda: Tele-existence: An Advanced Remote Operation System with a Sensation of Presence, SMiRT 11 Transactions, pp.325-330, Tokyo, Japan, 1991. [PDF]

[Tachi et al. 1991e] Susumu Tachi, Hirohiko Arai and Taro Maeda: Measurement and Control in Tele-existence and Artificial Reality, Proceedings of the 12th Triennial World Congress of the International Measurement Confederation (ACTA IMEKO 1991), pp.1241-1248, 1991. (Invited Keynote Paper) [PDF]

[Tachi and Yasuda 1994] Susumu Tachi and Ken-ichi Yasuda: Evaluation Experiments of a Telexistence Manipulation System, Presence, Vol.3, No.1, pp.35-44, 1994. [PDF]

[Tachi 1997] Susumu Tachi, Hirohiko Arai: Design and Evaluation of a Visual Display with a Sensation of Presence in Tele-existence System, Journal of Robotics and Mechatronics, Vol. 9, No. 3, pp. 220-230, 1997. [PDF]

[Tachi 1998] Susumu Tachi: Real-time Remote Robotics - Toward Networked Telexistence, IEEE Computer Graphics and Applications, Vol.18, No.6, pp.6-9, 1998. [PDF]

[Tachi 1999a] Susumu Tachi: Telexistence and R-Cubed, Industrial Robot, Vol.26, No.3, pp.188-193, 1999. [PDF]

[Tachi 1999b] Susumu Tachi: Augmented Telexistence, in Y. Ohta and H. Tamura ed., Mixed Reality - Merging Real and Virtual Worlds, Springer-Verlag, ISBN 3-540-65623-5, pp. 251-260, 1999.

[Tachi et al. 2003] Susumu Tachi, Kiyoshi Komoriya, Kazuya Sawada, Takashi Nishiyama, Toshiyuki Itoko, Masami Kobayashi and Kozo Inoue: Telexistence Cockpit for Humanoid Robot Control, Advanced Robotics, Vol.17, No.3, pp.199-217, 2003. [PDF]

[Tachi et al. 2004] Susumu Tachi, Naoki Kawakami, Masahiko Inami and Yoshitaka Zaitsu: Mutual Telexistence System Using Retro-reflective Projection Technology, International Journal of Humanoid Robotics, Vol.1, No.1, pp.45-64, 2004. [PDF]

[Tadakuma et al. 2005] Riichiro Tadakuma, Yoshiaki Asahara, Hiroyuki Kajimoto, Naoki Kawakami and Susumu Tachi: Development of Anthropomorphic Multi-D.O.F. Master-Slave Arm for Mutual Telexistence, IEEE Transactions on Visualization and Computer Graphics, Vol.11, No.6, pp.626-636, 2005. [PDF]

[Tachi et al. 2008] Susumu Tachi, Naoki Kawakami, Hideaki Nii, Kouichi Watanabe and Kouta Minamizawa: TELEsarPHONE: Mutual Telexistence Master Slave Communication System based on Retroreflective Projection Technology, SICE Journal of Control, Measurement, and System Integration, Vol.1, No.5, pp.335-344, 2008. [PDF]

[Tachi 2010] Susumu Tachi: Telexistence, World Scientific, ISBN-13 978-981-283-633-5, 2010. http://www.worldscientific.com/worldscibooks/10.1142/9248

[Tachi et al. 2012] Susumu Tachi, Kouta Minamizawa, Masahiko Furukawa and Charith Lasantha Fernando: Telexistence - from 1980 to 2012, Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2012), pp.5440-5441, Vilamoura, Algarve, Portugal, 2012. [PDF]  video

[Fernando et al. 2012] Charith Lasantha Fernando, Masahiro Furukawa, Tadatoshi Kurogi, Kyo Hirota, Sho Kamuro, Katsunari Sato, Kouta Minamizawa, and Susumu Tachi: TELESAR V: TELExistence Surrogate Anthropomorphic Robot, ACM SIGGRAPH 2012, Emerging Technologies, Los Angeles, CA, USA, 2012. [PDF]

[Tachi et al. 2013] Susumu Tachi, Kouta Minamizawa, Masahiro Furukawa and Charith L. Fernando: Haptic Media: Construction and Utilization of Human-harmonized "Tangible" Information Environment, Proceedings of the 23rd International Conference on Artificial Reality and Telexistence (ICAT), Tokyo, Japan, pp.145-150, 2013. [PDF]

[Tachi et al. 2014] Susumu Tachi, Masahiko Inami and Yuji Uema: The Transparent Cockpit, IEEE Spectrum, vol.51, no.11, pp.52-56, 2014. [DOI:10.1109/MSPEC.2014.6934935] [IEEE]

[Tachi 2015a] Susumu Tachi: Telexistence -Past, Present, and Future-, in G. Brunnett et al. ed. Virtual Realities, ISBN 978-3-319-17042-8, Springer, pp.229-259, 2015. https://link.springer.com/chapter/10.1007/978-3-319-17043-5_13

[Tachi 2015b] Susumu Tachi: Telexistence 2nd Edition, World Scientific, ISBN 978-981-4618-06-9, 2015. http://www.worldscientific.com/worldscibooks/10.1142/9248

[Scoică 2015] Adrian Scoică: Susumu Tachi - The Scientist Who Invented Telexistence, ACM Crossroads, vol.22, no.1, pp.61-62, 2015. [PDF]

[Tachi 2016a] Susumu Tachi: Telexistence: Enabling Humans to be Virtually Ubiquitous, IEEE Computer Graphics and Applications, vol. 36, no. 1, pp. 8-14, 2016. [PDF]

[Tachi 2016b] Susumu Tachi: Memory of the Early Days and a View toward the Future, Presence, Vol.25, No.3, pp.239-246, 2016. [PDF]