Virtual Reality and Tele-Operation: a Common Framework
Didier VERNA
LRDE / EPITA
14-16 rue Voltaire,
94276 Le Kremlin-Bicêtre, France
http://www.lrde.epita.fr/
<didier@lrde.epita.fr>
ABSTRACT
This paper proposes an overview of a study that conceptually unifies the fields of virtual reality and tele-operation by analyzing the notion of "assistance" to the operator of a virtual reality or tele-operation system. This analysis demonstrates that cases of assistance usually considered to belong to virtual reality are not conceptually different from what had been done in tele-operation long before virtual reality appeared. With this common framework for virtual reality and tele-operation, we hope to provide a theoretical formalization of many ideas acquired empirically, and hence a basis on which further discussion can be undertaken in a constructive manner.
Keywords: Virtual Reality, Tele-Operation, Assistance
INTRODUCTION
Virtual reality and tele-operation have been getting closer to each other in recent years, on two different levels. The first level is related to the notion of immersion: it is now possible, thanks to virtual reality devices, to give the operator the feeling of being physically present in the distant working environment. This brings interesting opportunities to improve the operator's efficiency in performing his task. The second level is related to the notion of assistance: one major problem we face with immersion is that current immersive technology is heavy, expensive, and can be a handicap instead of a help for the operator. The question of whether immersion is essential has raised interesting debates [3]: it has been shown that partial immersion is often sufficient, if not better than full immersion [16], and that there is a difference between subjective and objective presence, which means that cognitive immersion is not a panacea [14].
One important relation between the notions of immersion and assistance is that assisting the operator is, among other things, a way to compensate for the lack of efficiency when interacting with a complex immersive system. Scientists are currently acquiring many empirical results related to this idea, which makes the notion of assistance a crucial one.
In this paper, we also manipulate these ideas, but from a conceptual point of view. Namely, we ask ourselves what exactly the conceptual relations between virtual reality and tele-operation are, and what exactly the status of assistance is when tele-operation applications use virtual reality concepts. The analysis we provide of the notion of assistance to the operator demonstrates that cases of assistance usually considered to belong to virtual reality are not conceptually different from what had been done in tele-operation long before virtual reality appeared.
As a first step, we analyze several known cases of assistance in virtual reality systems. As a second step, we show how these cases of assistance revert to ones that have been known in tele-operation for a long time.
THE LiSA MODEL
In order to analyze the assistance processes in virtual reality systems, we need to describe such systems. The model we use is called LiSA (an acronym for "Localization and Semantics of Assistance") and is shown in figure 1.
This model is actually well known in robotics: a tele-"operator" uses an "interface" to control, through a computer and/or network "system", a robotic "manipulator" in a distant or inaccessible "environment". However, we interpret this model in a more general fashion: it is suitable for describing not only tele-operation situations, but also virtual reality systems implementing synthetic worlds. In that case (in which the interface is very likely to be the same), the "manipulator" stands for the operator's avatar in the virtual world, and the "environment" is the virtual one.
Figure 1: The LiSA model (Operator, Interface, System, Manipulator, Environment)

Another difference from the traditional interpretation of this model is related to the notion of assistance: we are more interested in the information conveyed through the model's components than in the components themselves. More precisely, this model will be used to analyze the notion of assistance as follows: consider a simple situation described by the LiSA model, in which no assistance process exists. Interaction between the operator and the system results in information being conveyed across the model. Consider now that a certain assistance process occurs. This results in a modification of the information conveyed across the model. We hence analyze the notion of assistance in two ways:
- the semantics of assistance: what kind of modification the assistance process makes to the conveyed information;
- the localization of assistance: where in the model the assistance process modifies the conveyed information.
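To make this reading of the model concrete, the following minimal sketch (in Python; all names are ours and purely illustrative, not part of the LiSA formalism) represents the model as a chain of components, and an assistance process as a pair of a localization (the component where the information is modified) and a semantics (the transformation applied to it):

    # A minimal, illustrative sketch of the LiSA model. Information
    # flows from the environment towards the operator (perception)
    # and from the operator towards the environment (action).
    COMPONENTS = ["operator", "interface", "system",
                  "manipulator", "environment"]

    class Assistance:
        """An assistance process = a localization + a semantics."""
        def __init__(self, localization, transform):
            assert localization in COMPONENTS
            self.localization = localization  # where information is modified
            self.transform = transform        # how information is modified

    def convey(information, direction, assistances):
        """Convey information across the model, applying each
        assistance process at the component where it is localized."""
        path = (COMPONENTS if direction == "action"
                else list(reversed(COMPONENTS)))
        for component in path:
            for a in assistances:
                if a.localization == component:
                    information = a.transform(information)
        return information

The sketches accompanying the cases below are all instances of this scheme, differing only in their localization and their transformation.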
CASES OF ASSISTANCE
Here, we analyze some common cases of assistance
encountered in virtual reality systems.
Perception Generation
This is perhaps the most common case of assistance,
the one that is at the basis of augmented reality: the
idea is to dynamically add perceptive information on
top of the normal perception. Systems can then pro-
vide textual annotations [13, 12] on top of books, help
navigating in buildings or nuclear plants by providing
virtual maps or signs [4] etc.
One shall notice that the added perception is not
originally present in the environment, but is actually
generated, from its own knowledge of the situation, by
the system itself. This assistance process is localized
on figure 2.
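In terms of the sketch given earlier, perception generation can be pictured as an assistance process localized at the system, whose semantics is to add synthetic information on top of the existing perception (the annotation below is, of course, hypothetical):

    # Perception generation: the system adds synthetic perceptions
    # (e.g. textual annotations) that do not exist in the environment.
    def annotate(perception):
        perception = dict(perception)  # leave the original intact
        perception["annotations"] = ["a hypothetical label"]
        return perception

    generation = Assistance("system", annotate)
    # convey({"image": "raw camera view"}, "perception", [generation])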
Perception Transmodalisation
Another well-known case of assistance in augmented reality consists in presenting perceptive information in a modality different from the one in which it was acquired.
Figure 2: Perception Generation
For instance, in [10], visual feedback of object collisions is provided, as no force-feedback device is available. In [11], a system for blind people is presented, in which the distance to a potential obstacle is rendered as an audio signal.
Contrary to the previous case of assistance, the affected perception does exist in the environment; it is merely displayed in a different modality: it is transmodalized. This assistance process can be localized in the interface itself since, conceptually speaking, only the way of displaying the perception is modified. This is represented in figure 3.
Figure 3: Perception Transmodalisation
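In the sketch, transmodalisation would thus be localized in the interface, its semantics being a mere change of modality. The distance-to-pitch mapping below is a hypothetical stand-in for the audio rendering of [11]:

    # Perception transmodalisation: an existing perception is rendered
    # in another modality; here a distance (meters) becomes a pitch (Hz).
    def distance_to_pitch(perception):
        perception = dict(perception)
        distance = max(perception.pop("distance_m"), 0.1)
        perception["audio_hz"] = 220.0 + 880.0 / distance  # closer = higher
        return perception

    transmodalisation = Assistance("interface", distance_to_pitch)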
Perception Reconstruction
This case of assistance aims at reconstructing perceptions that happen to be missing for some reason. A typical example is that of submarine tele-guidance, as described in [2, 6], where the real underwater visual scene is replaced by a simulation. Another example lies in the medical field [1], where a virtual view of a fœtus is reconstituted thanks to ultrasonic sensors.
The particularity of these cases of assistance is that the original perception is unavailable (because the underwater environment is too dark, or because the mother's womb is hiding the fœtus). However, it is possible to use a substitution perception (from position or ultrasonic sensors) in order to reconstitute the missing one. In terms of localization, additional work is needed at the manipulator level in order to acquire the substitution perception, and additional work is needed in the system to convert this information into the missing one. This is represented in figure 4.
Figure 4: Perception Reconstruction
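Following the localization just described, perception reconstruction can be sketched as two cooperating processes: one at the manipulator, acquiring the substitution perception, and one in the system, converting it into the missing one. The sensor values and the renderer below are hypothetical:

    # Perception reconstruction: a substitution perception (ultrasonic
    # echoes) replaces a missing one (the visual scene).
    def acquire_echoes(perception):
        perception = dict(perception)
        perception["echoes"] = [0.2, 0.5, 0.9]  # hypothetical readings
        return perception

    def reconstruct_view(perception):
        perception = dict(perception)
        echoes = perception.pop("echoes")
        # hypothetical renderer turning echoes into an image
        perception["image"] = "image rendered from %r" % echoes
        return perception

    reconstruction = [Assistance("manipulator", acquire_echoes),
                      Assistance("system", reconstruct_view)]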
Virtual Execution
As a last example of assistance, consider the case of
path or task planning [9], in semi-autonomous robotics.
The operator is given the ability to prepare a mission,
but also to test it before requiring its execution. The
test phase usually involves a virtual simulation of the
task, allowing the operator to check the validity of his
instructions.
This kind of assistance involves a rather complex process, because the system has to simulate the environment, the manipulator, and the interaction between them. As for localizing this assistance process, it has the particularity of cutting all information paths to the right of the system. This is represented in figure 5.
Figure 5: Virtual Execution
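In the sketch, cutting the information paths to the right of the system amounts to conveying the action over a truncated path and letting a simulator (a hypothetical stand-in here) play the role of the manipulator and the environment:

    # Virtual execution: actions never reach the real manipulator; the
    # system simulates the manipulator/environment interaction.
    def convey_virtually(action, assistances):
        for component in ["operator", "interface", "system"]:
            for a in assistances:
                if a.localization == component:
                    action = a.transform(action)
        # hypothetical simulator returning the predicted perception
        return {"predicted": "outcome of %r" % action}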
Summary
Many other cases of assistance could be analyzed in
the same fashion [19, 18]. The result, presented in
figure 6, still remains the same: when we think of how
virtual reality can be used to implement assistance
features, we generally adopt an egocentric point of
view. As we can see, few cases of assistance take into
account the operator’s actions, and all of them affect
the information in the perception sense.
THE OTHER WAY AROUND
To correct this imbalance, one has to notice the cen-
tral symmetry of the L
i
SA model: the operator and
the environment play analogous roles; the same ap-
plies to the interface and the manipulator. Given this
symmetry, we should be able to turn the preceding as-
sistance cases the other way around, and obtain new
examples which would not be conceptually different.
This is what we propos e to do in this subsection.
Action Generation
The counterpart of perception generation is action generation. According to what has been said in the preceding section, the idea is to dynamically add actions generated by the system on top of those requested by the user, as illustrated in figure 7.
This kind of assistance process actually corresponds to the idea of "adaptive robotics", where for instance robotic arms are capable of automatically grasping objects. In such a process, the operator does not have control: the system itself generates the required actions. This also suggests, as the original examples (perception generation) were taken from augmented reality applications, that the expression "augmented reality" itself can be used to describe augmented actions as well as augmented perceptions.
Figure 7: Action Generation
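In the sketch, action generation is the mirror image of perception generation: an assistance process localized at the system that adds actions (here, a hypothetical automatic grasp) to those requested by the operator:

    # Action generation: the system adds actions (e.g. an automatic
    # grasp) on top of those requested by the operator.
    def auto_grasp(action):
        action = dict(action)
        action["commands"] = action.get("commands", []) + ["grasp"]
        return action

    action_generation = Assistance("system", auto_grasp)
    # convey({"commands": ["reach"]}, "action", [action_generation])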
Action Transmodalisation
The counterpart of perception transmodalisation is action transmodalisation. According to what has been said in the preceding section, the idea is to present actions in a modality different from the one in which they were acquired, as represented in figure 8.
This kind of assistance process actually corresponds to a known problem in tele-operation, namely the "transposition" problem. In [8, 7], Kheddar describes the concept of the "hidden robot", whose principle is to hide the complexity of the real task by giving the operator the feeling of acting in a natural manner. As a consequence, the correspondence between human actions and robot actions is not isomorphic, and actions must be transmodalized. Another good example is that of vocal commands: the actions come from an audio signal and must be converted into physical ones.
Figure 8: Action Transmodalisation
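Mirroring perception transmodalisation, the process below is localized in the interface and merely changes the modality of the action; the vocabulary and the mapping are hypothetical:

    # Action transmodalisation: an action changes modality; here a
    # vocal command becomes a physical motion command.
    def voice_to_motion(action):
        action = dict(action)
        word = action.pop("speech")  # e.g. "left" or "right"
        action["velocity"] = {"left": (-1, 0), "right": (1, 0)}[word]
        return action

    action_transmodalisation = Assistance("interface", voice_to_motion)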
Figure 6: Summary (Perception Generation, Perception Reconstruction, and Virtual Execution localized on the LiSA model)
Action Reconstruction
The counterpart of perception reconstruction is action reconstruction. According to what has been said in the preceding section, the idea is to reconstruct actions that happen to be missing for some reason, by using a substitution action instead, as shown in figure 9.
We believe that the category of systems providing such assistance processes is actually very broad, as it includes all systems using non-immersive input devices, or controlling non-anthropomorphic manipulators: if you only have a joystick to control a robotic arm, or if a handicap [17] prevents you from controlling it in an immersive manner, then we can consider that the original action (the real operator's arm movement) is missing, but that a substitution action (the joystick movement) can be used to reconstruct the missing one. Similarly, if you have to control a complex device (for instance one with too many degrees of freedom), then you will necessarily have to perform a substitution action to control it. A good example of this is the control of a virtual creature by means of a whole-hand input device, described by Sturman [15].
Figure 9: Action Reconstruction
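Reading the central symmetry into the sketch (this localization is our inference from the symmetry argued above), action reconstruction mirrors perception reconstruction: the interface acquires the substitution action, and the system converts it into the missing one. The joystick reading and the mapping are hypothetical:

    # Action reconstruction: a substitution action (a joystick motion)
    # reconstructs a missing one (the operator's arm movement).
    def acquire_joystick(action):
        action = dict(action)
        action["joystick"] = (0.3, -0.1)  # hypothetical device reading
        return action

    def joystick_to_arm(action):
        action = dict(action)
        dx, dy = action.pop("joystick")
        action["arm_target"] = (dx * 0.5, dy * 0.5)  # hypothetical scaling
        return action

    action_reconstruction = [Assistance("interface", acquire_joystick),
                             Assistance("system", joystick_to_arm)]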
Virtual Command
The counterpart of virtual execution is virtual command. According to what has been said in the preceding section, the idea is to cut all information paths to the left of the system, thus simulating the interaction an operator would have with it, as shown in figure 10.
Here again, the category of systems providing such assistance processes is actually very broad, as it includes all developments in autonomous robotics. More precisely, robots belonging to this category are the fourth-class robots in the classification of Giralt [5]: contrary to purely reactive autonomous robots, task-programmable autonomous robots must have the ability to make decisions in case of unexpected events, and thus must be provided with operator-like capabilities.
Figure 10: Virtual Command
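Symmetrically to virtual execution, the sketch below cuts the paths to the left of the system and lets a decision module (a hypothetical stand-in for the operator-like capabilities mentioned above) close the loop:

    # Virtual command: perceptions never reach the real operator; the
    # system makes the decisions (autonomous robotics).
    def convey_autonomously(perception, assistances):
        for component in ["environment", "manipulator", "system"]:
            for a in assistances:
                if a.localization == component:
                    perception = a.transform(perception)
        # hypothetical operator-like decision module
        return {"commands": ["decided from %r" % perception]}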
Summary
The preceding assistance cases, along with their counterparts, are summarized in figure 11, which provides a more balanced view of the notion of assistance. This analysis clearly demonstrates that whereas the initial cases of assistance are regarded as virtual reality, they are not conceptually different from their counterparts, well known in robotics long before virtual reality concepts appeared.
Figure 11: Global Summary (Perception Generation, Perception Reconstruction, Virtual Execution, Action Generation, Action Reconstruction, and Virtual Command localized on the LiSA model)
CONCLUSION
In this paper, we have presented an overview of a theory aiming at formalizing the relations between virtual reality and tele-operation at the conceptual level. We proposed a theoretical analysis of the notion of assistance, based on a model called LiSA. Provided with this model, we were able to represent the different kinds of assistance processes found in current virtual reality systems. By turning this analysis the other way around, we fell back on common assistance processes found in robotics, but not labeled as virtual reality assistance processes. As a consequence, we demonstrated that, in terms of assistance, the relations between virtual reality and tele-operation extend farther than is usually thought.
We believe that this analysis provides a clear demonstration of the tight relations unifying virtual reality and tele-operation at the conceptual level, and allows us to formalize ideas that are currently manipulated on an empirical basis. We thus hope to have provided a formal ground on which further discussion can be undertaken.
REFERENCES
[1] M. Bajura, H. Fuchs, and R. Ohbuchi. Merging virtual reality with the real world: Seeing ultrasound imagery within the patient. In SIGGRAPH'92, volume 26, pages 203–210, 1992.
[2] J. Côté and J. Lavallée. Augmented reality graphic interface for upstream dam inspection. In SPIE, Photonics East, 1995.
[3] Stephen Ellis. Presence of mind: A reaction to Thomas Sheridan's "Further musings on the psychophysics of presence". In Presence, volume 5/2, pages 247–259. MIT Press, 1996.
[4] G. Fertey, T. Delpy, M. Lapierre, and G. Thibault. An industrial application of virtual reality: An aid for designing maintenance operations in nuclear plants. In L'Interface des Mondes Réels et Virtuels, pages 151–162, 1995.
[5] G. Giralt, R. Chatila, and R. Alami. Remote
intervention, robot autonomy and teleprogram-
ming: Generic concepts and real world appli-
cation cases. In International Conference on
Intelligent Robots and Systems, pages 314–320.
IEEE/RSJ, IEEE Computer Society Press, 1993.
[6] Butler P. Hine, Carol Stoker, Michael Sims, Daryl
Rasmussen, and Phil Hontalas. The application
of telepresence and virtual reality to subsea ex-
ploration. In ROV’94, May 1994.
[7] A. Kheddar, C. Tzafestas, P. Blazevic, and P. Coiffet. Fitting tele-operation and virtual reality technologies towards teleworking. In FIR'98, 4th French-Israeli Symposium on Robotics, pages 147–152, Besançon, France, May 1998.
[8] A. Kheddar, C. Tzafestas, and P. Coiffet. The
hidden robot concept: High level abstraction tele-
operation. In IROS’97, International Conference
on Intelligent Robot and Systems, pages 1818–
1824, Grenoble, France, September 1997.
[9] W. S. Kim. Virtual reality calibration and preview/predictive displays for telerobotics. In Presence, volume 5/2, pages 173–190. MIT Press, 1996.
[10] Yoshifumi Kitamura, Amy Yee, and Fumio
Kishino. A sophisticated manipulation aid in a
virtual environment using dynamic constraints
among object faces. In Presence, volume 7/5,
pages 460–477. MIT Press, October 1998.
[11] Jack M. Loomis, Reginald G. Golledge, and
Roberta L. Klatzky. Navigation system for the
blind: Auditory display modes and guidance. In
Presence, volume 7/2, pages 193–203. MIT Press,
April 1998.
[12] Katashi Nagao and Jun Rekimoto. The world through the computer: Computer augmented interaction with real world objects. In UIST'95, User Interface Software and Technology, pages 29–36, Pittsburgh, 1995.
[13] Jun Rekimoto. The magnifying glass approach to augmented reality systems. In ICAT'95, 1995.
[14] David Schloerb. A quantitative measure of telepresence. In Presence, volume 4/1, pages 64–80. MIT Press, 1995.
[15] David Sturman. Whole Hand Input. PhD thesis,
MIT, 1992.
[16] Susumu Tachi and Kenichi Yasuda. Evaluation
experiments of a teleexistence manipulation sys-
tem. In Presence, volume 3/1, pages 35–44. MIT
Press, 1994.
[17] Greg Vanderheiden and John Mendenhall. Use
of a two-class model to analyse applications and
barriers to the use of virtual reality by people
with disabilities. In Presence, volume 3/3, pages
193–200. MIT Press, 1994.
[18] Didier Verna. Télé-Opération et Réalité Virtuelle : Assistance à l'Opérateur par Modélisation Cognitive de ses Intentions. PhD thesis, ENST, 46 rue Barrault, 75013 Paris, France, February 2000. ENST 00 E007.
[19] Didier Verna and Alain Grumbach. Augmented
reality, the other way around. In M. Gervautz,
A. Hildebrand, and D. Schmalstieg, editors, Vir-
tual Environments’99, pages 147–156. Springer,
1999.