Int. J. Mechatronics and Manufacturing Systems, Vol. 10, No. 3, 2017
Interactive programming of industrial robots for edge
tracing using a virtual reality gaming environment
S. Michas, E. Matsas and G-C. Vosniakos*
National Technical University of Athens,
School of Mechanical Engineering,
Iroon Polytehneiou 9,
15780 Athens, Greece
Fax: +302107724373
*Corresponding author
Abstract: This paper presents the methodology and development tools
employed to specify the path of an industrial robot in a virtual environment
(VE) that is typically used for game development. Initially, the cell and the
robot are designed and then imported into the VE, followed by the development
of the kinematic chain of the robot. Forward and inverse kinematics analysis of
the robot is accomplished, implemented in the C# programming language and
embedded into the VE via inbuilt functionality. The end-effector path consists
of edges to be traced, such as in welding, deburring, water jet machining, etc.
The path is planned in the VE using the teaching or lead-through paradigm,
where instead of a teaching pendant the controller of a computer game console
is used in connection with visual aids augmenting the VE to help the
programmer. The robot path is, then, transcribed in the real robot, the pertinent
code being derived automatically within the VE. The method was applied to a
6-axis industrial robot, the obtained V+ program was run on the real robotic
cell and accuracy attained was documented proving that user friendly VR
interfaces deriving from the gaming world can provide an efficient and
effective robot programming paradigm.
Keywords: industrial robot; robotic manufacturing cell; virtual reality;
interactive path planning; virtual manufacturing; robot kinematics; game
Reference to this paper should be made as follows: Michas, S., Matsas, E. and
Vosniakos, G-C. (2017) ‘Interactive programming of industrial robots for edge
tracing using a virtual reality gaming environment’, Int. J. Mechatronics and
Manufacturing Systems, Vol. 10, No. 3, pp.237–259.
Biographical notes: Serafeim Michas has studied Mechanical Engineering at
the National Technical University of Athens (2015) and is currently a researcher
and PhD candidate at the University of Piraeus. His interests include mechatronics,
virtual reality and energy sustainability.
Elias Matsas holds an MSc in Mechanical Engineering (2006) and a PhD in VR
Techniques for Human-Robot collaboration (2015). He has professional
experience in design and manufacturing engineering in the consumer goods
industry. He has published ten papers on virtual reality and robotics in
Copyright © 2017 Inderscience Enterprises Ltd.
S. Michas et al.
George-Christopher Vosniakos has studied Mechanical Engineering at the
National Technical University of Athens (1986), and obtained an MSc in
Advanced Manufacturing Technology (1987) and a PhD on Intelligent
CAD-CAM Interfaces (1991) from UMIST (UK). He worked as a CIM R&D
project manager in the German software industry, as a Lecturer at UMIST and
as a consulting mechanical engineer. In 1999, he joined NTUA as an Assistant
Professor and since 2015 he has been a Professor. He has been involved in over 35
R&D projects with national, European and private funding. He is a member of
the editorial board of four journals and has authored 164 publications.
1 Introduction
Robotic cells are a special kind of production system, employing industrial robots usually
for material handling, e.g., loading/unloading machine tools, but also for material
processing as such, e.g., welding, cleaning, deburring, etc. or even for the assembly of
parts into products. Programming of industrial robots as opposed to CNC machine tools
must take into account the more complex kinematics of robots in relation to machine
tools and the need to circumvent obstacles that are by far more common in the case of
robots compared to machine tools. Thus, robot programming has to be as intuitive and
user friendly as possible. Offline programming environments for industrial robots are
often preferred to lead-through programming because they impose no downtime to the
robot. Relevant software is CAD-based and provides path design (planning) and
simulation within the manufacturing facility even including relevant sensors, notably
machine vision, but with generally limited user interaction (Zlajpah, 2008; Jara et al.,
Due to the development of pertinent technologies, some interesting approaches
emerged recently exploiting virtual (VR), augmented (AR) and mixed (MR) reality in
industrial robotics, both in terms of the realistic behaviour of the robot and the production
environment (Novak-Marcincin et al., 2012), and in terms of better user-centric
perception (Rossman, 2010; Nathanael et al., 2016). Use of open software does enhance
the prospects of such approaches (Calvo et al., 2016). New, intuitive ways of moving and
programming the robot are being developed, supporting the programmer with auxiliary
collision avoidance algorithms based on novel input devices and automatic trajectory
planning by means of novel strategies (Hein and Worn, 2009; Abbas et al., 2012). For
instance, an end-user interface was proposed for industrial robot tasks like polishing,
milling or grinding, based on visual programming, as well as monitoring of the tasks by
overlaying real-time information through AR (Mateo et al., 2014).
A complete tutorial for the development of integrated virtual environments for robotic
applications – in a broader than just industrial context – based on commercially available
development software is given in Gupta and Jarvis (2009). Also, multimodal interfaces
that replace the traditional ones, e.g., mice, keyboards and two-dimensional imaging
devices have been and are still being invented, starting with CAVE systems (in the past)
and extending to 3D haptic desktop devices interacting with a virtual robotic environment
in demanding applications such as multi-arm robotic cells (Mogan et al., 2008).
In addition, various schemes including the necessary steps for creating interactions
between real and virtual objects have been presented for developing VR, AR and MR
environments in robot simulation (Chen et al., 2010; Crespo et al., 2015). For instance, in
Schneider et al. (2014) the user can intuitively edit point-cloud-based waypoints
generated through various input channels and is provided with online visual and acoustic
feedback on the feasibility of the desired robot motion.
Programming of industrial robots has benefited greatly from the development of
VR/AR/MR, but it is still in its infancy, undergoing rapid development (Pan et al., 2012;
Aron et al., 2008). Intuitive approaches are pursued with highly multimodal interaction
with the user, whilst the presence of relevant objects of the real world is avoided as much
as possible (Akan et al., 2011). An AR system, focusing on the human ability to quickly
plan paths free of obstacles, through manual determination of the free volume is
presented in Chong et al. (2009). Forward and inverse kinematics were integrated into the
system and the virtual robot and other virtual items are observed through a head mounted
display (HMD). An analogous system is reported in Pai et al. (2015), drawing on
modifiable marker–based collision detection for the robot arm, as well as integration with
object manipulation to pick and place a virtual object around the environment. A method
of curve learning, based on Bayesian neural networks, which is used to determine
trajectories passing through specified points and orientations of the end-effector along the
taught line, has been demonstrated by means of AR (Ong et al., 2010). Furthermore, for
applications where the end-effector is constrained to follow a visible path at suitable
inclination angles with respect to the path, an AR-based approach enabled the users to
create a list of control points interactively on a parameterised curve model, define the
orientation of the end-effector associated with each control point, and generate a ruled
surface representing the path to be planned (Fang et al., 2013). Virtual cues augmented in
the real environment can support and enhance human-virtual robot interaction at different
stages of the robot tasks planning process (Fang et al., 2014). Dynamic constraints of the
robot are taken into account and a marker cube materialises the main user interface,
allowing a preview of the simulated movement, perception of potential overshoot and
reconciliation of differences between planned and simulated trajectory in Fang et al.
(2012). A two-step hybrid process reported in Veiga et al. (2013) first allows the user to
sketch the desired trajectory in a spatial augmented reality programming table using the
final product and then relies on an advanced 3D graphical system to tune the robot
trajectory in the final work cell.
An AR-based interactive remote programming system for laser welding taking into
account the actual part data and the limitations of the process allows for quick definition
of functions on a task level even by new users through fast and effective spatial
interaction methods (Reinhart et al., 2008). A structured approach led to the development
of virtual environments, which support programming of robot projects focusing on
interesting events, objects and time frames (Amicis et al., 2006). In this light, it is thought
that integration of a robot in a production cell is better served by VR as opposed to AR.
The objective of this research is to plan the path of an industrial robot, such as the
Stäubli RX90LTM, in a VR environment constructed with a game development engine,
such as Unity3DTM, and employing a user interface that is derived from consumer
electronics, such as the controller of Microsoft’s console Xbox 360TM. The aim is to
make the path planning process as simple as playing a computer game, yet as accurate
and reliable as the traditional offline programming methods.
Section 2 introduces the basic functionality of the game development engine used to
construct the simulation environment, as well as the main procedure for building the
virtual robotic cell. Section 3 presents the robot and the pertinent kinematics models built
S. Michas et al.
into the simulation environment. Path planning methods are presented in Section 4 and a
particular implementation concerning edge tracing as in arc welding is presented and
discussed in Section 5. Section 6 summarises the main conclusions of this work.
2 Building the virtual environment using a game development engine
Typically, the process of creating a virtual environment supporting robot programming
and simulation based on a commercially available game development engine involves:
• creating or importing the virtual objects
• adding kinematics, especially for the robot
• adding interactions among objects as well as between the user and the objects
• defining a scenario for testing
• running the scenario to obtain the path of the robot
• executing the path on the real robot to validate for accuracy and make possible corrections.
Figure 1 The Unity workspace (see online version for colours)
As far as the game development engine is concerned, Unity3DTM was used. It is a
cross-platform engine normally employed to develop video games for PC, consoles,
mobile devices and websites. It offers add-in software for object-oriented code
development in JavaScript, C# and Boo (Python-like) and excellent online documentation
as well as a graphical environment (workspace) for project programming, see Figure 1.
Unity’s operation is based on the existence of a scene (the graphics displayed in the
middle of the screen), inside which there are objects (GameObjects: 3D objects, lights,
audio, user interface, particle system and camera), which have specific behaviour and
interact with each other in various ways. They may be independent or bound with each
other with parent-child relations in a Hierarchy supporting inheritance of size, motion and
behaviour. The Hierarchy is shown at the left window of the screen and the user can add
parent-child relations just by dragging and dropping an object on another. The behaviour
of each GameObject depends on its attached components (source files), some being
created by the user in order to access and modify the content of other components. The
component that each GameObject has attached to it can be seen in the inspector, on the
right window of the screen. The main components that were employed in this project are
outlined next.
2.1 Components
In Unity3DTM there are default and additional components. The default components are
those attached to the GameObjects, per category, since their creation. Additional
components are added to a GameObject so that it acquires additional properties, but in
view of their sheer number, only those employed in this project will be explained next,
see Figure 2(a).
All objects, including empty GameObjects, have a ‘transform’ component by default.
It is essentially the homogeneous transformation of the GameObjects controlling their
position relative to the origin, their Euler angles and their size (scale) along each axis.
3D objects possess the Mesh Renderer and the Collider. The geometry of a
GameObject is determined by a Mesh and the Mesh Renderer determines whether the
geometry is displayed as a graphic on the screen or not. Colliders are either primitive, in
the shape of the basic 3D Objects available in Unity, or mesh, following the
GameObject’s mesh. There are appropriate functions to detect collisions and the user can
add code to modify the behaviour of the GameObject. A ‘trigger’ collider still detects
collisions, while allowing the objects to overlap (Unity-3D, 2013).
Lights contain only one component determining the type of light (sun, spot, point
etc.), the range, the diffusion area, colour and intensity.
The Camera determines what the user sees, the camera component including
perspective, depth of field, lens range, etc. and the audio listener component functioning
like a microphone of a real camera; receiving sounds from the virtual environment and
playing them back through speakers.
The objects AudioSource, graphical user interface and particle system were not really
exploited in this work although they might be in a future extension. Audio supports any
sounds played in the scene. User Interface (GUIElement) refers to any widget that is
displayed in the scene. Particle System supports animated effects such as smoke, spray, etc.
The most significant additional components in this project include RigidBody and
Scripts. RigidBody sets the movement of a GameObject under the influence of physical
forces, such as gravity, inertia, etc. It is also necessary for colliders to function.
Characteristically, in order to maintain collision detection capability without any force
affecting the GameObject, the ‘IsKinematic’ option must be activated, whilst, if all forces
except gravity need to affect the GameObject, the ‘use gravity’ option should be deactivated.
Figure 2 Development platform structure (a) main object classes (b) MonoBehaviour functions
Scripts are classes within which code is written that controls the GameObjects in the
scene. They inherit properties from the ‘MonoBehaviour’ parent class containing the
functions Awake (), Start (), Update (), FixedUpdate () and OnGUI (). Note that the game
evolves through successive frames and in Play Mode Unity3DTM produces 50-60 fps of
the status of the GameObjects in the scene, so that the result is a smooth motion. The
Awake () function is used mainly for variable initialisation. The Start () function behaves
similarly, but it is called during the first frame and its result is visible, e.g., movement of
an object. The Update () function is called in every frame, while the FixedUpdate ()
function is called before any physics calculation in connection to the RigidBody
component. The OnGUI () function is called whenever a widget is depicted on the screen,
e.g., pop-up messages, buttons, etc.
2.2 Scene creation
Each project consists of elements called Assets. These can be 3D models imported from
external design programs, texture files, and audio files.
The 3D models can be either individual parts or assemblies. Length units are in m,
hence the appropriate scale factor needs to be set to get realistic results. For example, the
CAD model of the robot employed in this project was designed in mm space, with an
overall height of 1185 mm, thus the scale factor had to be set to 0.001. It should also be noted
that Unity recognises as 3D surfaces only solid models and not shell surfaces. After
importing the models of the robot and the cell, appropriate textures were attached to them
so that they look more realistic, see Figure 3(a).
Figure 3 (a) Virtual robotic cell (b) hierarchy of the robot kinematic chain (see online version for colours)
Two types of coordinate systems are used, namely the global CS and the local CS, both
counter clockwise. Local CS indicates the position and rotation of a GameObject relative
to the parent coordinate system.
Definition of kinematic chains is pertinent to robot modelling. Each joint is rotated
independently and connects a pair of links. When a joint is rotated, all links directly or
indirectly connected to it rotate along. Thus, a kinematic chain is modelled as an object
hierarchy, wherein each link is a child of the previous joint, see Figure 3(b).
Table 1 Joint specifications of the Stäubli RX90L robotic arm (columns: Range (º), Limits (º), Home (º); recovered limit values include –227.5 to 47.5, –52.5 to 232.5 and –105 to 120)
Figure 4 Coordinate systems for Stäubli RX90L (see online version for colours)
Table 2 D-H parameters for Stäubli RX90L (column headers include αi–1 and ai–1)
Figure 5 Tree form of the solutions to the inverse kinematics problem
3 Robot kinematics
The robot used in this work is the Stäubli RX90L used in various low payload – high
accuracy industrial tasks such as assembly, pick-and-place, painting, inspection etc. The
robot has a nominal payload of 3.5 kg, a maximum payload of 6 kg (at reduced speed), a
maximum working speed of 12.6 m/s and a repeatability of 25 μm.
Its joint specifications are shown in Table 1. Initially, the coordinate systems (CS)
according to the Denavit-Hartenberg (D-H) method are placed as in Figure 4 and the D-H
parameters are shown in Table 2. CS{0} was placed at the first joint of the robot, so that when the
value of the first joint is zero, the CS{0} and the CS{1} coincide. Note that the position
of the end effector refers to CS{1}.
3.1 Kinematics
Forward kinematics is formulated in the standard way as presented in the Appendix.
As far as inverse kinematics is concerned, the user provides as input the position of
the wrist along the three axes: pwx, pwy, pwz and the orientation of the end effector around
the three axes: ax, ay and az, expressed as Euler angles.
In the case of robotic arms which have six joints, from which the last three are
rotational and their axes intersect at the same point, there is a closed solution attributed to
Pieper. Thus, the problem of inverse kinematics is broken down into two simpler ones,
known as inverse position kinematics and inverse orientation kinematics respectively.
The solution to the former calculates the values of the first three joints using only the
position of the wrist, whilst the solution to the latter calculates the coordinate values of
the last three joints (Spong et al., 2005). In Section 3.1 the CS{6} was placed on the wrist
of the robot and not on the edge of the flange. That way, instead of measuring the
position of the end effector, the position of the wrist is measured. The rotation of the end
effector does not depend on this length, thus it is fully defined and intersection of the last
three coordinate systems is materialised, thus enabling application of Pieper’s method as follows:
\[
\theta_{1,1} = \mathrm{Atan2}(p_y, p_x), \qquad \theta_{1,2} = \mathrm{Atan2}(-p_y, -p_x)
\]
\[
\theta_{3,1} = \mathrm{Asin}\!\left(\frac{p_x^2 + p_y^2 + p_z^2 - a_3^2 - d_4^2}{2\, a_3 d_4}\right), \quad
\theta_{3,2} = -\theta_{3,1}, \quad \theta_{3,3} = \pi + \theta_{3,1}, \quad \theta_{3,4} = \pi - \theta_{3,1}
\]
\[
\theta_2 + \theta_3 = \mathrm{Atan2}\!\left(-a_3 c_3 p_z + (d_4 + a_3 s_3)(c_1 p_x + s_1 p_y),\;
(d_4 + a_3 s_3) p_z + a_3 c_3 (c_1 p_x + s_1 p_y)\right)
\]
\[
\theta_2 = (\theta_2 + \theta_3) - \theta_3
\]
\[
\theta_{5,1} = \mathrm{Acos}(c_1 s_{23} r_{13} + s_1 s_{23} r_{23} + c_{23} r_{33}), \qquad \theta_{5,2} = -\theta_{5,1}
\]
\[
\theta_{4,1} = \mathrm{Atan2}(-s_1 r_{13} + c_1 r_{23},\; c_1 c_{23} r_{13} + s_1 c_{23} r_{23} - s_{23} r_{33}), \quad \theta_5 \neq 0
\]
\[
\theta_{4,2} = -360^\circ + \theta_{4,1}, \quad 90^\circ \leq \theta_{4,1} \leq 180^\circ; \qquad
\theta_{4,2} = 360^\circ + \theta_{4,1}, \quad -180^\circ \leq \theta_{4,1} \leq -90^\circ
\]
\[
\theta_4 = 0, \quad \theta_5 = 0
\]
\[
\theta_{6,1} = \mathrm{Asin}\!\left(-(s_1 c_4 + c_1 s_4 c_{23}) r_{11} + (c_1 c_4 - s_1 s_4 c_{23}) r_{21} + s_4 s_{23} r_{31}\right), \quad
\theta_{6,2} = 180^\circ - \theta_{6,1}, \quad \theta_{6,3} = -180^\circ - \theta_{6,1}
\]
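The inverse position equations above can be prototyped directly. A minimal Python sketch (the paper's implementation is C#; the link constants A3 and D4 below are hypothetical placeholders, not the real RX90L values):

```python
import math

# Hypothetical link constants, NOT the RX90L's actual a3, d4:
A3, D4 = 0.4, 0.45

def theta1_candidates(px, py):
    """Two shoulder solutions: Atan2(py, px) and Atan2(-py, -px)."""
    return [math.atan2(py, px), math.atan2(-py, -px)]

def theta3_candidates(px, py, pz):
    """Four elbow solutions derived from the Asin expression in the text."""
    arg = (px**2 + py**2 + pz**2 - A3**2 - D4**2) / (2 * A3 * D4)
    t31 = math.asin(arg)          # wrist must be reachable: |arg| <= 1
    return [t31, -t31, math.pi + t31, math.pi - t31]

t1 = theta1_candidates(0.3, 0.2)
t3 = theta3_candidates(0.3, 0.2, 0.5)
```

Combining the two θ1 with the four θ3 candidates (and the downstream θ2, θ4, θ5, θ6 branches) produces the solution tree of Figure 5.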
For the calculation of the last three joint values the rotation matrix is calculated as:
\[
R = R_z \cdot R_y \cdot R_x
\]
\[
R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(a_x) & -\sin(a_x) \\ 0 & \sin(a_x) & \cos(a_x) \end{bmatrix}, \quad
R_y = \begin{bmatrix} \cos(a_y) & 0 & \sin(a_y) \\ 0 & 1 & 0 \\ -\sin(a_y) & 0 & \cos(a_y) \end{bmatrix}, \quad
R_z = \begin{bmatrix} \cos(a_z) & -\sin(a_z) & 0 \\ \sin(a_z) & \cos(a_z) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
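The composition R = Rz·Ry·Rx can be checked numerically. A short Python sketch for illustration (the paper's code is C#):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat3_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_to_matrix(ax, ay, az):
    """R = Rz * Ry * Rx, as in the text (angles in radians)."""
    return mat3_mul(rot_z(az), mat3_mul(rot_y(ay), rot_x(ax)))

# Rotating by 90 degrees about z maps the x unit vector onto the y axis:
R = euler_to_matrix(0.0, 0.0, math.pi / 2)
```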
The multiplicity of the inverse kinematics solutions is evident and is schematically shown
in Figure 5.
From those 96 solution combinations, some should be ruled out because the angle
values resulting from the inverse kinematics analysis exceed the limits of the
corresponding robot joints, see Table 1. Out of the remaining, the unique solution is
selected as the one minimising the total transition of the joints, i.e.:
\[
\sum_{i=1}^{6} \left| \delta\theta_i \right| = \min
\]
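The limit filtering and minimum-transition selection can be sketched as follows; Python for illustration (the paper's code is C#), and the joint limits here are placeholders, not Table 1's values:

```python
# (min, max) in degrees per joint -- hypothetical, NOT the RX90L limits:
LIMITS = [(-160, 160)] * 6

def pick_solution(candidates, current):
    """candidates: iterable of 6-angle solutions (deg);
    current: the present 6 joint angles. Returns the in-limits
    candidate minimising the total joint transition sum |dtheta|."""
    best, best_sum = None, float("inf")
    for sol in candidates:
        if not all(lo <= q <= hi for q, (lo, hi) in zip(sol, LIMITS)):
            continue                      # violates a joint limit: ruled out
        total = sum(abs(q - c) for q, c in zip(sol, current))
        if total < best_sum:
            best, best_sum = sol, total
    return best

current = [0, 0, 0, 0, 0, 0]
cands = [[10, 0, 0, 0, 0, 0],    # total transition 10
         [200, 0, 0, 0, 0, 0],   # out of limits, discarded
         [-5, 3, 0, 0, 0, 0]]    # total transition 8 -> selected
best = pick_solution(cands, current)
```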
3.2 Programming robot kinematics
The procedure implemented is shown in Figure 6. An inverse kinematics script
(IKMover.cs) has been created, comprising 2897 lines of C# code, and attached as a
component on the GameObject ‘Staubli RX90L’. The calculation code has been written
into the function Update (), in order to continuously run in each frame and renew the
angle values. With the joint values known, the movement of the robot in the scene must
be displayed. In the Transform component the angles are given in Euler description so
that they are easily understood, but Unity3DTM operates using Quaternions in order to
reduce singularities, thus the class Quaternion.Euler is used to transform the Euler angles
in Quaternion description (Unity-3D, 2013). In addition, since the CS employed in the
inverse kinematics analysis is clockwise, the angles obtained were transformed to counter
clockwise according to the relation Rot_new,counterclockwise = –Rot_old, so that they
conform to the CS used by Unity3DTM.
After creating the Quaternion of each angle, the movement of the robot is created
using the Lerp function of the Quaternion class, which gradually moves the joint from the
initial to the target position at a specified speed (Unity-3D, 2013).
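The gradual joint motion via Quaternion.Euler and Quaternion.Lerp rests on simple quaternion math. A Python sketch of the underlying idea for the single-axis case, using normalised linear interpolation as a stand-in for Unity's Lerp (the paper's code is C#):

```python
import math

def quat_from_angle_z(deg):
    """Quaternion (w, x, y, z) for a rotation about the z axis;
    Unity's Quaternion.Euler composes three such single-axis rotations."""
    h = math.radians(deg) / 2
    return (math.cos(h), 0.0, 0.0, math.sin(h))

def quat_lerp(q0, q1, t):
    """Normalised linear interpolation between two unit quaternions --
    the idea behind Quaternion.Lerp: blend componentwise, renormalise."""
    q = tuple(a + (b - a) * t for a, b in zip(q0, q1))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

q_start = quat_from_angle_z(0.0)    # identity orientation
q_goal = quat_from_angle_z(90.0)    # target joint orientation
q_mid = quat_lerp(q_start, q_goal, 0.5)
```

Stepping t from 0 to 1 over successive frames moves the joint smoothly from its initial to its target orientation, which is what the Lerp call achieves in the scene.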
Figure 6 Robot programming flowchart
Finally, script DisplayEndEffectorCoordinates.cs is used for printing the end effector
coordinates relative to the CS of the base of the robot. This contains the OnGUI ()
function which runs on every frame and prints graphic elements on the screen. In this
context, to match the coordinates read from the Transform component to those
of the actual robot, the transformation: Real(x) = – Unity(x), Real(y) = – Unity(z),
Real(z) = Unity(y) is applied to convert the counter clockwise coordinate system to that of the real robot.
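The axis mapping stated above is a one-line transform; a minimal Python sketch (for illustration, the paper's code is C#):

```python
def unity_to_real(x, y, z):
    """Axis mapping from the text: Real(x) = -Unity(x),
    Real(y) = -Unity(z), Real(z) = Unity(y)."""
    return (-x, -z, y)

p = unity_to_real(1.0, 2.0, 3.0)
```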
4 Path planning
4.1 Principle
The path of the robot is defined by a series of points corresponding to the tool tip and
three Euler angles corresponding to the tool’s orientation. Once these are determined to
an acceptable accuracy, the joint coordinates are calculated by the inverse kinematics
algorithm and consequently they can be entered into the robot program in an interpolation
mode, i.e., the continuous path is derived by interpolating these joint coordinates. Thus,
the type of curve that needs to be followed does not affect the method of path planning,
except for the number of discrete ‘sampling’ positions and orientations in order to
achieve the required accuracy when interpolating. Moreover, the curve or surface to be
followed may not belong to the part as such but equally well to auxiliary objects defined
in the virtual environment if this makes path planning easier to define. Defining positions
with respect to these curves interactively makes use of the collision detection capability
of the VR platform. Defining orientations makes use of predefined artefacts visualised as
special aids and, when these are too many or impractical to define, can also be based on
the close camera that can be suitably oriented and be used for visual alignment of the tool.
4.2 The user interface
The device selected for manipulating the virtual robot is the Microsoft Xbox 360
controller (Microsoft, 2015). The controller consists of two types of buttons, namely
digital and analogue. Digital buttons return 0 when not pressed and 1 when pressed; they
are recognised by Unity as ‘JoystickButton i’, where, in this case, i = 1: 9. Analogue
buttons return continuous values in the range 0–1 depending on the intensity of
depression; they are recognised by Unity as ‘Axis’. The coded buttons used in the
application developed are shown in Figure 7 and presented in Table 3. In inverse
kinematics mode the system takes as input 3 variables describing the position of the wrist
and 3 variables describing the end effector orientation. These variables are not expressed
in absolute value, but are incremented by some step per frame starting from their initial values.
The application starts with the robot in ‘Home’ or ‘Do Ready’ position corresponding
to: pwx = 0, pwy = 0, pwz = 1.1, ax = 0, ay = 0, az = 0. End-effector translation along the
global coordinate directions x, y and z is achieved by the buttons 12, 11 and 10 (Axes)
respectively, see Figure 7. The returned values are stored in variables using the Input
class of Unity (Unity-3D, 2013), which will be used to increase or decrease the values of
the Cartesian coordinates of the end-effector, taking into account prevention of any
further increase or decrease when the robot is about to exceed the limits of its dexterity
space. At the beginning of the Update () function the variable corresponding to the sum
of angle transitions is initialised: minSum = ∞. For each of the 96 solutions of the inverse
kinematics problem the sum of angle transitions is calculated only if all six joint angles
are within limits and, of course, the solution corresponding to the minimum minSum is
selected. Otherwise, further change of the respective coordinate stops, a warning sound is
played and the execution of the code pauses for 500 ms to allow the user time to react and
release the coordinate increase/decrease key. By analogy, end-effector rotation is
achieved using three buttons, 2, 1 and 3, see Figure 7, in clockwise sense around axes x, y
and z, respectively. Counter-clockwise rotation is achieved by first pressing the Left
Bumper (button 13), see Figure 7. If, during rotation of the end effector, this reaches out
of the dexterity space, any alteration of any angle is stopped and the warning sound is
played, just as in the case of translatory movements.
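The per-frame increment with a dexterity-space guard described above can be sketched as follows. Python for illustration; the step size and the crude spherical reachability test are simplifying assumptions, not the paper's actual check, which evaluates the full inverse kinematics against the joint limits:

```python
STEP = 0.005    # metres per frame -- hypothetical jog step
REACH = 1.185   # crude spherical workspace radius -- hypothetical

def try_jog(pos, axis, direction):
    """pos: [x, y, z] wrist position; axis: 0..2; direction: +1 or -1.
    Applies one jog step unless it would leave the dexterity space,
    in which case the pose is kept and a warning flag is returned."""
    trial = list(pos)
    trial[axis] += direction * STEP
    if sum(c * c for c in trial) ** 0.5 > REACH:
        return pos, False    # out of reach: keep pose, caller warns user
    return trial, True

pos = [0.0, 0.0, 1.18]
pos, ok = try_jog(pos, 2, +1)      # accepted: still inside the workspace
pos2, ok2 = try_jog(pos, 2, +1)    # refused: would exceed REACH
```

In the actual application the False branch also plays the warning sound and pauses for 500 ms so the user can release the key.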
Figure 7
XBoxTM controller (see online version for colours)
Table 3 Controller interface (JB = joystick button)

Buttons and their type (by Unity):
13 + 1 → JB-4 + JB-0; 13 + 2 → JB-4 + JB-1; 13 + 3 → JB-4 + JB-2; Axis-6 (analogue, +); Axis-2 (vertical); Axis-1 (horizontal)

Actions:
• Positive rotation of the end-effector around the Y-axis
• Positive rotation of the end-effector around the X-axis
• Positive rotation of the end-effector around the Z-axis
• Negative rotation of the end-effector around the Y-axis
• Negative rotation of the end-effector around the X-axis
• Negative rotation of the end-effector around the Z-axis
• Circular change of pre-set orientations
• Write current angle values formatted in V+ style to the output file
• Change the viewpoint of the close camera
• Move robot to ‘Do Ready’ position
• Reverse circular change of pre-set orientations
• Toggle between main and close camera
• Move end-effector on Z-axis
• Move end-effector on Y-axis
• Move end-effector on X-axis
4.3 Application construction
There are many manufacturing applications in which a suitable end-effector tool
follows a specified curve in space with a specific orientation. A prime example is arc
welding, but deburring, soldering, and chamfering also fall in the same class of processes.
Furthermore, auxiliary processes such as glue dispensing, and even special types of
inspection using a short range sensor, do qualify as well as edge tracing applications.
As an example, an edge tracing application is selected, consisting of following the
four top edges of a box in a sequence. The tool must be driven at a specific angle to each
edge, for instance 45°.
Figure 8 Edge tracing application setting (a) general view (b) close camera view (see online version for colours)
4.3.1 Camera setup
The user (programmer) makes use of the main camera to have an overview of the entire
space, but this does not allow following in detail the movement in three dimensions, see
Figure 8(a). Therefore, a close camera is added to the hierarchy, whose transform
component is changed in each frame, so that it is always at a certain distance from the
end effector equal to: Dx = 0.5 m, Dy = 0.5 m, Dz = 0.1 m, see Figure 8(b) and its motion
is controlled by a suitable script (SecondCameraFollow.cs). At any instant, the viewing
field of only one camera is displayed, namely of that with the highest depth value in the
Camera Component. This is achieved by ChangeView.cs, changing the depth of the
second camera when button No 9 on the controller is pressed, see Figure 7.
4.3.2 Close camera orientation
The Close Camera, when activated, is by default horizontal at the height of the end
effector providing the corresponding view to the user. However, it can be oriented
differently by the user. For example, in order for the user to be able to see the tool on x-y
plane, axis 6 of the controller was used, see Figure 7, to specify the amount of rotation of
the camera around the local x axis of each orientation. The code reads the return value of
the 6th axis in the interval [x1, x2] = [0, 1] and maps it through linear interpolation to the
corresponding rotation value of the camera in the interval [0, 90], see Figure 9. This
facility provides a valuable tool for the user to visually check orientation of the tool and,
if desired, alter it by using the exact orientation of the close camera as a ‘calibre’. In fact,
interactive online programming by the lead-through method functions similarly, except
that there are no visual aids such as the accurately oriented close camera that exists in the
virtual environment.
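The trigger-to-angle mapping described above is plain linear interpolation; a Python sketch for illustration (the clamp is an added safeguard, not necessarily in the original script, which is C#):

```python
def trigger_to_camera_angle(t, lo=0.0, hi=90.0):
    """Map the analogue trigger value t in [0, 1] linearly to a camera
    rotation angle in [lo, hi] degrees, as described in the text."""
    t = max(0.0, min(1.0, t))    # clamp out-of-range controller readings
    return lo + t * (hi - lo)

angle = trigger_to_camera_angle(0.5)
```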
Figure 9 Three different view angles of the close camera for the same robot pose using button 6 of the controller (see online version for colours)
4.3.3 Collision detection
In the virtual environment, unlike the real one, collisions have no consequence and their
goal is to alert the user, so that they can be eliminated when planning the path of the
robotic arm, see Figure 10. However, collision detection is mainly exploited for
accurately positioning the end-effector or tool tip with respect to the surface/curve to be
followed, in connection with the orientation of the close camera, see Figure 11 and
Section 4.3.2.
Figure 10 Collision with the cell’s wall (see online version for colours)
Figure 11 Tool – part collision detection exploited for accurate tool positioning and orienting on y-z (a) and x-y (b) plane as viewed by suitably orientated close camera (see online version for colours)
In implementation terms, at least one of a pair of objects being tested must have a
RigidBody Component, and both of them must have a Collider component attached, see
Section 2.1. The RigidBody component was attached to the robot links and the colliders
were attached both to the robot and to all surrounding GameObjects. In the Collider
components of the latter, the option ‘Is Trigger’ was activated, and in the Rigidbody
component the ‘Is Kinematic’ option was activated, see Section 2.1. The actions
following occurrence of a collision were defined in script CollisionDetection.cs, which
contains the OnTriggerEnter, OnTriggerStay and OnTriggerExit functions and is attached
to all GameObjects that have a non-Trigger Collider attached to them. The
OnTriggerEnter function is called whenever a TriggerCollider starts entering a
GameObject with the CollisionDetection.cs script attached. Within this function, code
has been written that plays a crash sound to inform the user of the collision. The
OnTriggerStay function is called continuously during the overlap of the colliding
GameObjects and code has been written that ‘paints’ red the object with which the robot
collided. Finally, the OnTriggerExit function is called when contact of the colliding
GameObjects stops, and pertinent code restores their colour.
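Stripped of Unity's MonoBehaviour machinery, the logic of CollisionDetection.cs can be modelled as a small state machine; in this illustrative sketch (not the actual script) object names and colours are plain strings and the crash sound is replaced by a counter:

```csharp
using System;
using System.Collections.Generic;

// Unity-free model of the trigger lifecycle: OnTriggerEnter signals the
// collision (crash sound), OnTriggerStay keeps the touched object painted
// red, OnTriggerExit restores its original colour.
class CollisionDetectionModel
{
    readonly Dictionary<string, string> originalColour = new Dictionary<string, string>();
    public readonly Dictionary<string, string> CurrentColour = new Dictionary<string, string>();
    public int CrashSoundsPlayed; // stands in for playing the audio clip

    public void OnTriggerEnter(string obj, string colour)
    {
        originalColour[obj] = colour;   // remember the colour to restore later
        CurrentColour[obj] = colour;
        CrashSoundsPlayed++;            // inform the user of the collision
    }

    public void OnTriggerStay(string obj)
    {
        CurrentColour[obj] = "red";     // 'paint' the overlapped object red
    }

    public void OnTriggerExit(string obj)
    {
        CurrentColour[obj] = originalColour[obj]; // restore the original colour
    }
}
```

In the actual Unity scripts these three methods are the standard trigger callbacks invoked by the physics engine on the GameObject carrying the script.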
4.3.4 Tool orientation aids
The addition of the close camera has improved the accuracy of monitoring movement,
but achieving the favourable inclination, in this case 45°, still relies on human vision,
which may not be precise enough. To achieve the required precision and automation in
orienting the tool, ‘hidden’ GameObjects can be added to the scene as orientation aids.
Each aid is suitably oriented, in this case at 45°, 45° and 90°, with respect to the local
Cartesian coordinate system. The term ‘hidden’ means that the object does exist in the
scene, but it is not seen by the camera, which is accomplished by disabling the Mesh
Renderer in the Inspector. These GameObjects are simple shapes, in this case cylinders
named Direction1 – 4, see Figure 12. In the IKMover.cs script, repeatedly pressing
button 3 on the controller, see Figure 7, cycles the robot end-effector through these
orientations, whereas button 6 cycles through them in reverse order.
Figure 12 Orientation aids (see online version for colours)
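The cyclic assignment of orientations reduces to index arithmetic with wraparound; the helper below is an illustrative sketch (the array mirrors the Direction1 – 4 naming, but the methods are not from the actual IKMover.cs):

```csharp
using System;

// Hidden orientation aids Direction1-4: button 3 advances through them,
// button 6 steps back, both wrapping around cyclically.
static class OrientationAids
{
    public static readonly string[] Names =
        { "Direction1", "Direction2", "Direction3", "Direction4" };

    // Advance to the next aid, wrapping from the last back to the first.
    public static int Next(int index) => (index + 1) % Names.Length;

    // Step back to the previous aid, wrapping from the first to the last.
    public static int Previous(int index) => (index + Names.Length - 1) % Names.Length;
}
```

Adding `Names.Length` before the modulo in `Previous` keeps the result non-negative, since C#'s `%` preserves the sign of its left operand.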
4.3.5 Path recording
The path of the robot as obtained in the virtual environment must be transcribed to the
real one in some robot programming language, in this case V+. The path is essentially a
series of sets of six numbers that describe the joint coordinates of joints 1–6. Each set
describes a unique position and orientation of the end effector in Cartesian space. In
terms of implementation, inside the IKMover.cs script, after moving the robot to the next
position, when the user presses button 5 on the controller, the joint values of that pose
are recorded in a txt file in a format suitable for the particular robot. This file is loaded
into the real robot controller, so that the robot performs the programmed path. In order to convert the joint
coordinate values to compatible V+ commands that the controller of the robot can
execute, a simple formatted printing in a .txt file named ppPoints is adequate. The C#
code that implements this conversion is the following:
ppPoints.WriteLine("DO DECOMPOSE J_V[" + decomposeIter + "]"
+ "=#PPOINT(" + rotZ1 + "," + rotZ2 + "," + rotZ3 + "," + rotZ4 + "," + rotZ5 + ","
+ rotZ6 + ")");
decomposeIter += 6;
As an example of the output, the following V+ commands move the robot to two
consecutive poses:
DO DECOMPOSE J_V[0]=#PPOINT(67.2654, –62.22573, 7.642985,
DO DECOMPOSE J_V[6]=#PPOINT(67.2654, –56.76529, –3.899122,
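The string concatenation above can also be packaged as a helper. One practical detail worth noting is that joint values should be formatted with the invariant (dot-decimal) culture, since V+ would not parse comma decimals produced under some PC locales; the class name and argument check below are illustrative, not from the actual implementation:

```csharp
using System;
using System.Globalization;

// Formats one recorded pose (six joint coordinates) as a V+ DECOMPOSE
// command; 'index' advances by 6 between consecutive poses, matching the
// J_V[0], J_V[6], ... numbering shown in the example output.
static class VPlusWriter
{
    public static string PPointLine(int index, double[] joints)
    {
        if (joints.Length != 6)
            throw new ArgumentException("A pose needs exactly six joint values.");
        // Invariant culture guarantees '.' as the decimal separator.
        string values = string.Join(",",
            Array.ConvertAll(joints, j => j.ToString(CultureInfo.InvariantCulture)));
        return "DO DECOMPOSE J_V[" + index + "]=#PPOINT(" + values + ")";
    }
}
```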
Figure 13 (a) Path snapshots of the virtual robot and (b) their real robot counterparts (see online
version for colours)
5 Results and discussion
Figure 13 shows some snapshots of the path of the virtual and the real robot in the
particular application being examined, a carton box of the required dimensions being
used in the real environment. Moreover, in some intermediate positions of the path, the
coordinates of the end effector in millimetres printed by Unity as well as those printed by
the controller of the robot were recorded. The results are shown in Table 4.
Table 4 Comparison of the end-effector coordinates returned by Unity and by the controller of
the robot (in mm)
X – Unity  Y – Unity  Z – Unity  X – real  Y – real  Z – real  X – error  Y – error  Z – error
–325.36 –429.35
–225.02 –612.53
–355.77 –781.37
–506.90 –781.25
–517.19 –907.96 –105.11
–517.18 –907.96 –175.11
–517.29 –858.53 –175.13
–492.05 –175.02
–432.15 –190.13
–584.07 –190.10
–648.67 –190.12
Note that the robot moves the joints from one specified position to the next using joint
interpolation. If sequential points are too far apart, the intermediate points may not match
the programmed path. Thus, the user needs to input as many points as possible during the
path planning in the virtual environment, but this may be cumbersome. If joint
coordinates are recorded at every frame, then the real path will be exactly the same as
that displayed in Unity. Furthermore, the robot program is defined by following edges of
virtual objects, whereby the position and orientation of the tool can be interactively specified
by the user along the edges at as many instances as necessary, exactly as is done in
online lead-through programming using teaching pendants. Moreover, if the
processing path can be drawn in the virtual environment as a 2D or even 3D curve, the
same programming procedure can be applied in principle, either by adopting more
predefined orientation aids or by allowing the user to position and orient the
end-effector interactively in a trial-and-error scenario. In fact, this is how lead-through
programming also works.
The conventional online programming method of an industrial robot using a teaching
pendant has always relied on the operator's experience, the path being planned while the robot is
running. In addition to being time-consuming, this method is liable to error, possibly
endangering the integrity of the operator, the industrial equipment and the work
performed. Conventional CAD-based offline programming of robots does allow for trial
and error, but is less intuitive than online programming and does not use pendant-like or
game-like devices as user interfaces. It has to be admitted that when paths are complex
curves they have to be derived using a mathematical algorithm, so the usefulness of the VR
system is reduced compared to a CAD-based system, because the former draws on
interactive path designation, whereas the latter relies on automatic path calculation.
Programming robots in virtual game environments is essentially an offline
programming method, but it combines the intuitive interfaces for robot movement found in
online programming with the possibility to go far beyond the conventional ones, e.g.,
by including immersion possibilities based on HMDs, etc. In addition, such environments appeal to
young engineers and technicians because of their familiarity with virtual reality game
technology. Furthermore, it is relatively straightforward, yet time consuming, to scan the
entire robotic cell right into the virtual scene.
Of course, these novel robot programming interfaces are enabled due to the enhanced
functionality of game development engines, such as Unity3DTM, the field of use of which
is gradually extending.
The methodology advocated, as well as its implementation, is by no means restricted
to either the particular robot type or the scene modelled in the case study. On the
contrary, both are fully tailorable to any scene, provided that it can be modelled
realistically in the VR platform employed, and to any industrial robot, provided that its
forward and, most importantly, inverse kinematics can be faithfully modelled. Furthermore,
by replacing the display device with a head-mounted display with tracking capabilities, it is
expected that full immersion of the user into the robotic work cell would be achieved. In
this way, flexibility in selecting appropriate viewing angles to secure the right end-effector positions and orientations would be maximised.
In addition, by taking the interpolation data from the actual robot and the actual time
required to perform the move, it would be possible to simulate the time needed to
complete a task. Thus, the application can go beyond the limits of simple path planning
and can be used in programming the workflow of a production unit.
Forward kinematics
The homogeneous transformation matrices are formulated as follows for each link’s
coordinate system, where ci = cos(θi), si = sin(θi), cij = cos(θi + θj), sij = sin(θi + θj), and θi is
the angular coordinate of joint i:

T1 = ⎡ c1  −s1   0    0 ⎤
     ⎢ s1   c1   0    0 ⎥
     ⎢  0    0   1    0 ⎥
     ⎣  0    0   0    1 ⎦

T2 = ⎡ c2  −s2   0    0 ⎤
     ⎢  0    0   1    0 ⎥
     ⎢ −s2  −c2  0    0 ⎥
     ⎣  0    0   0    1 ⎦

T3 = ⎡ c3  −s3   0   a3 ⎤
     ⎢ s3   c3   0    0 ⎥
     ⎢  0    0   1    0 ⎥
     ⎣  0    0   0    1 ⎦

T4 = ⎡ c4  −s4   0    0 ⎤
     ⎢  0    0  −1  −d4 ⎥
     ⎢ s4   c4   0    0 ⎥
     ⎣  0    0   0    1 ⎦

T5 = ⎡ c5  −s5   0    0 ⎤
     ⎢  0    0   1    0 ⎥
     ⎢ −s5  −c5  0    0 ⎥
     ⎣  0    0   0    1 ⎦

T6 = ⎡ c6  −s6   0    0 ⎤
     ⎢  0    0  −1    0 ⎥
     ⎢ s6   c6   0    0 ⎥
     ⎣  0    0   0    1 ⎦
Thus, the homogeneous transformation of the end effector relative to the base CS is:

0T6 = 0T1 ⋅ 1T2 ⋅ 2T3 ⋅ 3T4 ⋅ 4T5 ⋅ 5T6 = ⎡ r11  r12  r13  px ⎤
                                           ⎢ r21  r22  r23  py ⎥
                                           ⎢ r31  r32  r33  pz ⎥
                                           ⎣  0    0    0   1 ⎦

where:
r11 = c1 ⋅ [c23 ⋅ (c4 ⋅ c5 ⋅ c6 − s4 ⋅ s6) − s23 ⋅ s5 ⋅ c6] − s1 ⋅ (s4 ⋅ c5 ⋅ c6 + c4 ⋅ s6)
r12 = c1 ⋅ [c23 ⋅ (−c4 ⋅ c5 ⋅ s6 − s4 ⋅ c6) + s23 ⋅ s5 ⋅ s6] + s1 ⋅ (s4 ⋅ c5 ⋅ s6 − c4 ⋅ c6)
r13 = c1 ⋅ ( c23 ⋅ c4 ⋅ s5 + s23 ⋅ c5 ) − s1 ⋅ s4 ⋅ s5
r21 = s1 ⋅ [ − s23 ⋅ s5 ⋅ c6 − c23 ⋅ ( s4 ⋅ s6 − c4 ⋅ c5 ⋅ c6 )] + c1 ⋅ ( s4 ⋅ c5 ⋅ c6 + c4 ⋅ s6 )
r22 = s1 ⋅ [ c23 ⋅ ( −c4 ⋅ c5 ⋅ s6 − s4 ⋅ c6 ) + s23 ⋅ s5 ⋅ s6 ] − c1 ⋅ ( s4 ⋅ c5 ⋅ s6 − c4 ⋅ c6 )
r23 = s1 ⋅ (c23 ⋅ c4 ⋅ s5 + s23 ⋅ c5) + c1 ⋅ s4 ⋅ s5
r31 = − s23 ⋅ ( c4 ⋅ c5 ⋅ c6 − s4 ⋅ s6 ) − c23 ⋅ s5 ⋅ c6
r32 = s23 ⋅ (c4 ⋅ c5 ⋅ s6 + s4 ⋅ c6) + c23 ⋅ s5 ⋅ s6
r33 = c23 ⋅ c5 − s23 ⋅ c4 ⋅ s5
px = c1 ⋅ s23 ⋅ d4 + c1 ⋅ c2 ⋅ a3
py = s1 ⋅ s23 ⋅ d4 + s1 ⋅ c2 ⋅ a3
pz = c23 ⋅ d4 − a3 ⋅ s2
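As a numerical sanity check, the product of the six link transforms can be evaluated and compared against the closed-form expressions for the tool position and selected rotation entries; the link parameters a3, d4 and the joint angles used in the check are arbitrary test values, and the class is an illustrative sketch rather than project code:

```csharp
using System;

// Numerical check of the forward kinematics: multiplies the six link
// transforms and compares the result with closed-form px, py, pz, r13, r33.
static class Fk
{
    // Plain 4x4 matrix product.
    public static double[,] Mul(double[,] a, double[,] b)
    {
        var c = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i, j] += a[i, k] * b[k, j];
        return c;
    }

    // 0T6 = 0T1 * 1T2 * 2T3 * 3T4 * 4T5 * 5T6, matrices as in the appendix.
    public static double[,] T06(double[] q, double a3, double d4)
    {
        double c1 = Math.Cos(q[0]), s1 = Math.Sin(q[0]);
        double c2 = Math.Cos(q[1]), s2 = Math.Sin(q[1]);
        double c3 = Math.Cos(q[2]), s3 = Math.Sin(q[2]);
        double c4 = Math.Cos(q[3]), s4 = Math.Sin(q[3]);
        double c5 = Math.Cos(q[4]), s5 = Math.Sin(q[4]);
        double c6 = Math.Cos(q[5]), s6 = Math.Sin(q[5]);
        var t1 = new double[,] { { c1, -s1, 0, 0 }, { s1, c1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 } };
        var t2 = new double[,] { { c2, -s2, 0, 0 }, { 0, 0, 1, 0 }, { -s2, -c2, 0, 0 }, { 0, 0, 0, 1 } };
        var t3 = new double[,] { { c3, -s3, 0, a3 }, { s3, c3, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 } };
        var t4 = new double[,] { { c4, -s4, 0, 0 }, { 0, 0, -1, -d4 }, { s4, c4, 0, 0 }, { 0, 0, 0, 1 } };
        var t5 = new double[,] { { c5, -s5, 0, 0 }, { 0, 0, 1, 0 }, { -s5, -c5, 0, 0 }, { 0, 0, 0, 1 } };
        var t6 = new double[,] { { c6, -s6, 0, 0 }, { 0, 0, -1, 0 }, { s6, c6, 0, 0 }, { 0, 0, 0, 1 } };
        return Mul(Mul(Mul(Mul(Mul(t1, t2), t3), t4), t5), t6);
    }

    // Closed-form px, py, pz, r13, r33 as given in the text.
    public static double[] ClosedForm(double[] q, double a3, double d4)
    {
        double c1 = Math.Cos(q[0]), s1 = Math.Sin(q[0]);
        double c2 = Math.Cos(q[1]), s2 = Math.Sin(q[1]);
        double c4 = Math.Cos(q[3]), s4 = Math.Sin(q[3]);
        double c5 = Math.Cos(q[4]), s5 = Math.Sin(q[4]);
        double c23 = Math.Cos(q[1] + q[2]), s23 = Math.Sin(q[1] + q[2]);
        double px = c1 * s23 * d4 + c1 * c2 * a3;
        double py = s1 * s23 * d4 + s1 * c2 * a3;
        double pz = c23 * d4 - a3 * s2;
        double r13 = c1 * (c23 * c4 * s5 + s23 * c5) - s1 * s4 * s5;
        double r33 = c23 * c5 - s23 * c4 * s5;
        return new[] { px, py, pz, r13, r33 };
    }
}
```

Running this check for a handful of joint-angle sets is a quick way to catch sign or index errors when transcribing the kinematics into the VE scripts.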