2017 IEEE Conference on Control Technology and Applications (CCTA)
August 27-30, 2017. Kohala Coast, Hawai'i, USA

Moving Target Localization Method using Foot Mounted Acceleration Sensor for Autonomous Following Robot

Ryosuke Tasaki¹, Hiroto Sakurai², and Kazuhiko Terashima³
Abstract— The development of autonomous mobile robot systems has expanded their application across various fields, such as medical service support, automated vehicle operation, and delivery businesses. In recent years, we have developed a mobile robot tasked with following a medical doctor from an arbitrary starting point to a patient's room while managing medical resources and patients' electronic information in a hospital. The robot follows at a position just a few meters behind the moving target person, i.e., the medical doctor. When a blocking object comes between the followed target and the observation sensor, the robot stops because it loses the target's position. Obstacles for such robots include passers-by in straight corridors and corner walls encountered during turns. In this research, we propose a moving target localization method that directly measures or indirectly estimates the position of the robot's following target. A wearable acceleration sensor is attached to one leg of the target person, and self-localization with high accuracy can be estimated by our proposed calculation algorithm based on inertial navigation information. To compensate for positioning errors accumulated over time, the estimation calculation is corrected using stance-phase judgments from the acceleration sensor and the target's moving direction measured by LRF while walking in a straight line. The construction of the estimation compensation algorithm, its implementation on the mobile robot, and a trial experiment of the robot following a specified person are described in detail.
I. INTRODUCTION
In the healthcare and nursing care fields, while there is a growing need for improved healthcare technologies and greater safety, the shortage of physicians is an issue. According to the Organization for Economic Co-operation and Development (OECD), the number of physicians per 1,000 persons in Japan was 2.3 in 2012, far below the average of OECD member countries. Moreover, the concentration of physicians in urban areas and their scarcity in rural areas is a social issue, and there are concerns about the increased burden on healthcare professionals and the consequent decline in both safety and service levels.

In view of this social background, and in order to reduce the burden on healthcare professionals, the development of robots to support healthcare services is being vigorously pursued [1]-[3]. Our laboratory developed the medical round supporting robot Terapio, shown in Fig. 1 [4], [5]. Terapio carries medical devices and is equipped with an electronic health data system. It automatically tracks a physician and supports medical rounds. Among various in-hospital tasks, carrying medical devices is a relatively simple one. Automated rounds trolleys capable of tracking a physician and carrying medical devices would reduce the burden on healthcare professionals and help physicians focus on medical examination and treatment. This is expected to lead to enhanced safety and an improved level of medical care services.

Numerous approaches that apply CCD cameras and laser range finders (LRFs) to tracking robots have been proposed. However, in an indoor space such as a hospital, many physical obstacles and people exist, and the problem of occlusion, that is, the occurrence of a sensor dead angle owing to obstacles, is not negligible. In particular, if several persons are present in the same space, the robot may mistakenly recognize another person as the tracking target and fail to achieve automatic tracking.

Thus, there is a risk of losing sight of the tracking target or mistakenly recognizing another person as the target. Therefore, we use an inertial sensor in addition to LRFs, and the objective of this study is to establish an automatic tracking system, based on sensor fusion of the LRFs and the inertial sensor, that does not lose sight of the tracking target by considering the attitude angle of the inertial sensor mounted on the instep of the tracking target. This paper describes the outline of existing approaches and clarifies the influence of the instep attitude angle. Then, the attitude angles of the inertial sensor mounted on the instep are estimated and the accuracy of path estimation is evaluated. Finally, a system for tracking a specified person using the estimated attitude angles is established, and experiments on automatic tracking in an occlusion environment and in multiple passages are carried out using Terapio to validate the effectiveness of this study.

Fig. 1. Operating scenario

¹Ryosuke Tasaki is with the Mechanical Engineering Department, Toyohashi University of Technology, Aichi, Japan tasaki@me.tut.ac.jp
²Hiroto Sakurai is with the Mechanical Engineering Department, Toyohashi University of Technology, Aichi, Japan sakurai hiroto@syscon.me.tut.ac.jp
³Kazuhiko Terashima is with the Mechanical Engineering Department, Toyohashi University of Technology, Aichi, Japan terasima@me.tut.ac.jp
II. SYSTEM FOR TRACKING A SPECIFIED PERSON USING A LASER RANGE FINDER
Terapio's principal role is to support healthcare professionals when they see patients in wards for examination and treatment. Typically, in hospitals at present, carts are used for transporting armamentarium, and used items to be disposed of, between the nurses' station and the patients' rooms. The robot, with autonomous omnidirectional mobility and the ability to follow a specified person while avoiding obstacles, is well suited to perform such transportation. The robot tracks a target position a few meters behind the moving target person, i.e., the medical doctor. When blocking objects come between the robot and the following target, the robot stops because it loses the target's position. Obstacles for such robots are passers-by in straight corridors and corner walls in turns. A wearable acceleration sensor is attached to one leg of the target person, and self-localization with high accuracy can be estimated by our proposed calculation algorithm based on inertial navigation.
A. Detection and tracking of a person

Ambient environment data obtained by three LRFs that scan in the horizontal direction are converted to distance data in the robot coordinate system and synthesized into a single set of distance data. When the system switches to the mode for tracking a specified person, a person whose legs are detected within the fan-shaped observation range, with a radius of 2.5 m and within ±45 deg of the robot's forward direction (taken as 0 deg), is recognized as the tracking target. If there is no one within range, the robot stands by until a person appears and recognizes the first person who enters the range as the tracking target. The robot defines the position of a person by his/her legs detected in the distance data and tracks the person between frames, considering the continuity of the movement. In this way, it is possible to reduce errors whereby another person, or an obstacle whose shape is similar to that of a person, is mistakenly recognized as the tracking target [6], [9]. In each frame, the local minima method of [7] is first used to detect local minima of the distance data and specify candidates. The two legs of the same person may be detected simultaneously; in such a case, they are treated as separate candidates. By assuming uniform linear motion from the person's positions in the previous two frames (t-2 and t-1), the position in the current frame (t) is predicted. Comparing all combinations of the predicted position in the current frame, the position in the previous frame, and the detected candidate positions in the current frame, the candidate with the shortest distance is recognized as the person's location in the current frame. Fig. 2 shows a situation in which two candidates are detected in the current frame. Here, the distance from the position in the previous frame is also considered in order to deal with cases in which a supporting leg is tracked as the person's position.

Fig. 2. Data association in tracking
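As a rough illustration of this frame-to-frame data association, the sketch below predicts the target position by constant-velocity extrapolation and picks the nearest candidate. All names and the gating threshold are our own assumptions, not the implementation used on Terapio.

```python
import numpy as np

def associate_target(prev2, prev1, candidates, gate=0.5):
    """Pick the candidate nearest to a constant-velocity prediction.

    prev2, prev1: target positions (x, y) at frames t-2 and t-1.
    candidates:   list of candidate leg positions (x, y) at frame t.
    gate:         assumed maximum association distance in meters.
    Returns the chosen position, or None if no candidate is close enough.
    """
    prev2, prev1 = np.asarray(prev2), np.asarray(prev1)
    predicted = prev1 + (prev1 - prev2)   # uniform linear motion assumption

    best, best_cost = None, np.inf
    for c in candidates:
        c = np.asarray(c)
        # Distance to the prediction, plus distance to the previous frame's
        # position to suppress jumps onto the stationary supporting leg.
        cost = np.linalg.norm(c - predicted) + np.linalg.norm(c - prev1)
        if cost < best_cost:
            best, best_cost = c, cost
    return best if best_cost < 2 * gate else None

# Example: two candidates detected in the current frame.
target = associate_target((0.0, 2.0), (0.1, 2.0),
                          [(0.25, 2.0), (0.05, 1.6)])
print(target)
```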
B. Detection of surrounding objects using Laser Range Finders

Local obstacle maps in the robot-centric coordinate system are created by detecting objects in the robot's surroundings. Data obtained by a range sensor that scans in the horizontal direction are used for detecting objects that are sufficiently high, such as walls. A range sensor that irradiates laser light diagonally downward is used for detecting obstacles, and areas where access is prohibited, that are not detectable by horizontal scanning at a fixed height, such as a small gap in the floor, beds, chair legs, or a descending staircase [8]. Positions of low objects are identified by detecting the change in height along a scanning line. A descending staircase is detected based on the difference in height from the floor. Maps corresponding to individual objects are created separately. The region where the robot can move around safely (free space) is determined by integrating all the created maps.
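A minimal sketch of this map-integration step, assuming each per-object map is a boolean occupancy grid in the robot-centric frame; the grid size, cell size, and names are illustrative, not from the paper.

```python
import numpy as np

# Assumed robot-centric occupancy grids (True = blocked), e.g. 0.05 m cells
# covering a 10 m x 10 m window around the robot.
GRID = (200, 200)
wall_map      = np.zeros(GRID, dtype=bool)   # from the horizontal LRF
low_obstacle  = np.zeros(GRID, dtype=bool)   # from the downward-looking LRF
staircase_map = np.zeros(GRID, dtype=bool)   # cells below floor level

wall_map[0, :] = True                  # toy data: a wall along one edge
low_obstacle[120:125, 80:85] = True    # toy data: chair legs

# Free space is everything not flagged by any per-object map.
free_space = ~(wall_map | low_obstacle | staircase_map)
print(free_space.sum(), "free cells of", free_space.size)
```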
C. Path planning

In a dynamic environment, it is necessary to plan an appropriate path within a limited time. Therefore, a method that extends the Rapidly-exploring Random Tree (RRT) of [10], a path planning technique based on randomized search, is used. This method generates an arrival time field describing the projected time of arrival at the goal (in the proposed system, the position of the person being tracked) at each location, based on a velocity function corresponding to the space constraints of the region, and uses it to control the expansion of branches so that sufficiently good paths are generated efficiently [11].
D. Software for person tracking

Fig. 3 shows the configuration of the function modules for tracking a specified person. A PID controller based on a mathematical model of the omnidirectional mechanism is used as the mobile robot controller; its details are omitted here for reasons of space, as it has already been described in a previous paper [5]. In addition to modules corresponding to the three functions described above, a module for robot motion control using the PID controller is added in this configuration. The software performs all the processing of person detection and tracking, map generation, and path planning in a processing cycle of about 0.5 s. A virtual external diameter is set around the robot's body; if the subject rapidly approaches this region, the robot quickly stops to avoid collision. Approximately 0.1 s is reserved for communication between the LRFs and the drive unit.

Fig. 3. Software configuration for person tracking
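The following loop sketches how such a control module might combine a PID velocity command with the virtual-diameter emergency stop. The cycle time follows the paper; the gains, radius value, and function names are assumed, and the real controller is model-based as described in [5].

```python
import numpy as np

KP, KI, KD = 1.2, 0.05, 0.1    # assumed PID gains, not from the paper
DT = 0.5                        # processing cycle of about 0.5 s
SAFETY_RADIUS = 0.6             # assumed virtual external radius in meters

integral = np.zeros(2)
prev_err = np.zeros(2)

def pid_velocity(target_xy, robot_xy):
    """PID on the planar position error -> omnidirectional velocity command."""
    global integral, prev_err
    err = np.asarray(target_xy) - np.asarray(robot_xy)
    integral += err * DT
    deriv = (err - prev_err) / DT
    prev_err = err
    return KP * err + KI * integral + KD * deriv

def control_cycle(target_xy, robot_xy):
    # Emergency stop if the subject enters the virtual external diameter.
    if np.linalg.norm(np.asarray(target_xy) - np.asarray(robot_xy)) < SAFETY_RADIUS:
        return np.zeros(2)
    return pid_velocity(target_xy, robot_xy)

print(control_cycle((2.0, 0.5), (0.0, 0.0)))
```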
III. PROPOSED SENSOR FUSION SYSTEM USING LASER RANGE FINDERS AND INERTIAL SENSOR TO AVOID OCCLUSION AND MISUNDERSTANDING OF MULTIPLE PEOPLE

A. System overview

Fig. 4 shows the configuration of the system. Terapio recognizes the surrounding environment by using three LRFs. The LRFs are mounted on the robot at a height of 360 mm from the floor and detect the legs of people. U-shaped clusters in the environment are treated as tracking target candidates, and position estimation is carried out. An inertial sensor is mounted on the foot as shown in Fig. 5, and path estimation is carried out based on inertial navigation as described below.

Fig. 4. Control unit of Terapio

Fig. 5. Setting position of inertial sensor

TABLE I
SPECIFICATION OF INERTIAL SENSOR

Size (W×D×H): 37 mm × 46 mm × 12 mm
Mass: 22 g
Interface (wireless): Bluetooth Ver.2.0+EDR Class 2
Continuous operating time: 6 hours
Sampling cycle: 1 kHz
Range of acceleration: 8 G
Range of angular velocity: 1000 dps

B. Inertial navigation

Inertial navigation is an approach for estimating the path by second-order integration of acceleration [12]-[16]. With this approach, walking is first judged by using acceleration. A dead zone of ±1 m/s², determined by trial and error, is set; the tracking target is judged to be in the stance phase when the measured acceleration is within the dead zone, and in the swing phase otherwise. However, with this condition alone, the tracking target would be judged to be in the stance phase whenever the velocity is constant, i.e., whenever the acceleration is 0. To avoid this problem, the tracking target is judged to be in the swing phase when the difference between the maximum and the minimum of the acceleration within a certain period of time exceeds 1 m/s². Gravity is the only acceleration component sensed by the inertial sensor in the stance phase, and therefore the roll and pitch angles of the sensor can be calculated from it. In the swing phase, the acceleration during walking is converted from the sensor coordinate system to the compensation coordinate system using the calculated roll and pitch angles. Path estimation is carried out by second-order integration of the converted acceleration. The yaw angle is obtained by first-order integration of the angular velocity.
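A small sketch of this stance/swing judgment, assuming a gravity-compensated acceleration magnitude signal; the window length and names are our choices, while the 1 m/s² thresholds follow the paper.

```python
import numpy as np

DEAD_ZONE = 1.0     # m/s^2, from the paper (set by trial and error)
WINDOW = 100        # samples; at the 1 kHz sampling cycle this is 0.1 s

def is_stance(accel_history):
    """Judge the stance phase from recent acceleration samples.

    accel_history: 1-D array of gravity-compensated acceleration
    magnitudes (m/s^2), newest last.  Returns True for stance phase.
    """
    recent = np.asarray(accel_history[-WINDOW:])
    in_dead_zone = abs(recent[-1]) <= DEAD_ZONE
    # Constant-velocity motion also gives ~0 acceleration, so additionally
    # require that acceleration stayed nearly flat over the whole window.
    peak_to_peak = recent.max() - recent.min()
    return in_dead_zone and peak_to_peak <= DEAD_ZONE

# Example: a quiet foot (stance) vs. a swinging foot.
print(is_stance(np.random.normal(0.0, 0.05, 200)))    # likely True
print(is_stance(np.sin(np.linspace(0, 6, 200)) * 3))  # False
```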
C. Tracking algorithm

Terapio tracks the person in accordance with the operational flow shown in Fig. 6. First, the position of the candidate person in front of Terapio captured by the LRFs at the start-up of the tracking system is recognized as the initial position of the tracking target. Subsequently, the candidate person closest to the position of the tracking target in the previous sampling is recognized as the tracking target, and Terapio tracks that person. If tracking by the LRFs becomes difficult owing to obstacles or the presence of two or more persons, path estimation is performed by inertial navigation using the inertial sensor, starting from the position where the LRFs lost sight of the tracking target. A circular error region with a radius of 0.5 m is set along the estimated path, and when the LRFs detect a person within this region, that person is identified as the tracking target and tracking by the LRFs is resumed. If there are several persons within the circular error region, this correction is not performed, and path estimation by inertial navigation is continued. With this method, the time during which the LRFs lose sight of the tracking target is a few seconds.

Fig. 6. System flowchart for identifying target person by LRFs and inertial sensor
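A compact sketch of this hand-over logic: inertial dead reckoning runs while the LRFs are blind, and LRF tracking resumes only when exactly one candidate falls inside the 0.5 m error circle. Function and variable names are illustrative.

```python
import numpy as np

ERROR_RADIUS = 0.5   # m, radius of the circular error region (from the paper)

def resume_lrf_tracking(estimated_pos, lrf_candidates):
    """Return the re-identified target, or None to keep dead reckoning.

    estimated_pos:  (x, y) from inertial navigation.
    lrf_candidates: list of (x, y) person candidates seen by the LRFs.
    """
    est = np.asarray(estimated_pos)
    inside = [c for c in lrf_candidates
              if np.linalg.norm(np.asarray(c) - est) <= ERROR_RADIUS]
    # Ambiguity (several people in the circle) keeps inertial estimation active.
    return inside[0] if len(inside) == 1 else None

print(resume_lrf_tracking((1.0, 3.0), [(1.2, 3.1), (4.0, 0.0)]))  # re-acquired
print(resume_lrf_tracking((1.0, 3.0), [(1.2, 3.1), (0.9, 2.8)]))  # ambiguous
```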
D. Influence of the change in the sensor attitude angle

Figs. 7 and 8 show the change in the attitude angle of the instep during walking. Because the instep attitude angle changes greatly during walking, path estimation based on a compensation coordinate system that considers only the sensor attitude during the stance phase would be inaccurate. In order to redetect the tracking target by the LRFs, the tracking target must lie within the circular error region. Therefore, highly accurate path estimation is essential, and it is necessary to consider changes in the instep attitude angle during walking.

Fig. 7. Attitude angle of the instep

Fig. 8. Observation value of attitude angle
E. Attitude angle estimation method

In order to consider the attitude during walking, Euler angles are used to calculate the attitude angle. As shown in Fig. 9, the Z-Y-X Euler angles are used in this work. From the relationship with the coordinate axes, the sensor angular velocities ω are expressed as follows:

$$\omega_x = -\Omega_y \sin\Theta_p + \Omega_r \qquad (1)$$
$$\omega_y = \Omega_y \cos\Theta_p \sin\Theta_r + \Omega_p \cos\Theta_r \qquad (2)$$
$$\omega_z = \Omega_y \cos\Theta_p \cos\Theta_r - \Omega_p \sin\Theta_r \qquad (3)$$

The Euler angular velocities Ω are expressed as follows:

$$\Omega_y = \frac{\omega_y \sin\Theta_r + \omega_z \cos\Theta_r}{\cos\Theta_p} \qquad (4)$$
$$\Omega_p = \omega_y \cos\Theta_r - \omega_z \sin\Theta_r \qquad (5)$$
$$\Omega_r = \frac{(\omega_y \sin\Theta_r + \omega_z \cos\Theta_r)\sin\Theta_p}{\cos\Theta_p} + \omega_x \qquad (6)$$

From the above, the Euler angles Θ are expressed as follows:

$$\Theta_y = \Theta_{y0} + \int_0^t \Omega_y \, d\tau \qquad (7)$$
$$\Theta_p = \Theta_{p0} + \int_0^t \Omega_p \, d\tau \qquad (8)$$
$$\Theta_r = \Theta_{r0} + \int_0^t \Omega_r \, d\tau \qquad (9)$$

Fig. 9. Z-Y-X Euler angles

Thus, it is possible to estimate the attitude angle of the sensor during walking. Using this attitude angle of the sensor, the rotation matrices R are expressed as follows:

$$R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\Theta_r & -\sin\Theta_r \\ 0 & \sin\Theta_r & \cos\Theta_r \end{bmatrix} \qquad (10)$$
$$R_y = \begin{bmatrix} \cos\Theta_p & 0 & \sin\Theta_p \\ 0 & 1 & 0 \\ -\sin\Theta_p & 0 & \cos\Theta_p \end{bmatrix} \qquad (11)$$
$$R_z = \begin{bmatrix} \cos\Theta_y & -\sin\Theta_y & 0 \\ \sin\Theta_y & \cos\Theta_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (12)$$

The direction cosine matrix C for conversion from the sensor coordinate system to the ground surface coordinate system is expressed as follows:

$$C = R_z \cdot R_y \cdot R_x \qquad (13)$$
$$\phantom{C} = \begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix} \qquad (14)$$

$$M_{11} = \cos\Theta_y \cos\Theta_p \qquad (15)$$
$$M_{12} = -\sin\Theta_y \cos\Theta_r + \cos\Theta_y \sin\Theta_p \sin\Theta_r \qquad (16)$$
$$M_{13} = \sin\Theta_y \sin\Theta_r + \cos\Theta_y \sin\Theta_p \cos\Theta_r \qquad (17)$$
$$M_{21} = \sin\Theta_y \cos\Theta_p \qquad (18)$$
$$M_{22} = \cos\Theta_y \cos\Theta_r + \sin\Theta_y \sin\Theta_p \sin\Theta_r \qquad (19)$$
$$M_{23} = -\cos\Theta_y \sin\Theta_r + \sin\Theta_y \sin\Theta_p \cos\Theta_r \qquad (20)$$
$$M_{31} = -\sin\Theta_p \qquad (21)$$
$$M_{32} = \cos\Theta_p \sin\Theta_r \qquad (22)$$
$$M_{33} = \cos\Theta_p \cos\Theta_r \qquad (23)$$

Thus,

$$a_{abs} = C \cdot a_{sen} \qquad (24)$$

Therefore, conversion from the acceleration a_sen in the sensor coordinate system to the acceleration a_abs in the ground surface coordinate system is possible. Second-order integration of the acceleration a_abs in the ground surface coordinate system enables highly accurate path estimation.
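A sketch of this attitude propagation and coordinate conversion, discretized with a simple Euler step at the sensor's 1 kHz rate; the gravity handling and initialization are our assumptions.

```python
import numpy as np

DT = 1e-3                         # 1 kHz sampling cycle (Table I)
G = np.array([0.0, 0.0, 9.81])    # assumed gravity vector, ground frame

def euler_rates(omega, yaw, pitch, roll):
    """Body angular velocity -> Z-Y-X Euler angle rates, Eqs. (4)-(6)."""
    wx, wy, wz = omega
    yaw_rate = (wy * np.sin(roll) + wz * np.cos(roll)) / np.cos(pitch)
    pitch_rate = wy * np.cos(roll) - wz * np.sin(roll)
    roll_rate = yaw_rate * np.sin(pitch) + wx
    return yaw_rate, pitch_rate, roll_rate

def dcm(yaw, pitch, roll):
    """Direction cosine matrix C = Rz Ry Rx, Eqs. (13)-(23)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    return np.array([
        [cy * cp, -sy * cr + cy * sp * sr,  sy * sr + cy * sp * cr],
        [sy * cp,  cy * cr + sy * sp * sr, -cy * sr + sy * sp * cr],
        [-sp,      cp * sr,                 cp * cr]])

def dead_reckon(acc_body, gyro_body, att0=(0.0, 0.0, 0.0)):
    """Path by attitude propagation (7)-(9) and double integration of (24)."""
    yaw, pitch, roll = att0
    vel = np.zeros(3)
    pos = np.zeros(3)
    for a, w in zip(acc_body, gyro_body):
        dy, dp, dr = euler_rates(w, yaw, pitch, roll)
        yaw, pitch, roll = yaw + dy * DT, pitch + dp * DT, roll + dr * DT
        a_abs = dcm(yaw, pitch, roll) @ a - G   # Eq. (24), gravity removed
        vel += a_abs * DT
        pos += vel * DT
    return pos

# Example: one second of noise-free standing still stays at the origin.
n = 1000
print(dead_reckon(np.tile(G, (n, 1)), np.zeros((n, 3))))
```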
F. Accumulated error compensation for moving direction and displacement

The time behavior of the attitude angle of the instep-mounted sensor during a single walking step is shown as the solid line in Fig. 10. As can be seen from this figure, the accumulated error of the estimated data increases with time. This degrades the sensing precision of the moving displacement of the target and is caused by the integration of angular velocity used to derive the estimated angle. The accumulated error can be assumed to have a linear relationship with time and is compensated with the following equations:

$$\Theta(t) = \Theta_0 + \int_0^t \{\Omega(\tau) - \Omega_e(\tau)\}\, d\tau \qquad (25)$$
$$\phantom{\Theta(t)} = \Theta_0 + \int_0^t \Omega(\tau)\, d\tau - \frac{\Theta_e}{T}\, t \qquad (26)$$

Here, Θ(t) is the compensated attitude angle of the sensor, Θ0 is its initial angle, Ω(t) is the estimated Euler angular velocity of the sensor, Ωt(t) is the actual Euler angular velocity, Ωe(t) is the error of the Euler angular velocity, and T is the correction time, i.e., the leg swing time for one step.

Fig. 10. Sensor attitude angle

The target's movement direction and displacement must be measured accurately with an error compensation calculation. Fig. 11(a) shows both the trajectory measured by LRF and the trajectory estimated by the IMU. The estimated result deviates from the actual trajectory along the line x = 0 m because of the initial angle error of the moving direction information. Before the target enters an occlusion field, the robot needs to sense the moving direction of the target. On the other hand, a vertical direction error also occurs, caused by an acceleration error in the absolute coordinate system, as shown in Fig. 11(b). We propose a position error compensation calculation using the measurable actual vertical displacement during the stance phase of walking. As mentioned above, using the measurable moving direction errors Pex, Pey in the horizontal plane and the vertical error Pez, the accumulated movement error can be compensated properly by the following equations:

$$a_t(t) = a_{abs}(t) - a_e(t) \qquad (27)$$
$$\phantom{a_t(t)} = C \cdot a_{sen}(t) - a_e(t) \qquad (28)$$
$$P(t) = P_0 + \int_0^t \int_0^{\tau_2} C \cdot a_{sen}(\tau_1)\, d\tau_1\, d\tau_2 - \frac{P_e}{T}\, t \qquad (29)$$

Here, a_sen(t) is the target's movement acceleration in the sensor coordinate system, a_abs(t) is the acceleration in the absolute coordinate system, a_t(t) is the actual acceleration, and a_e(t) is the acceleration error in the absolute coordinate system.

Fig. 11. Accumulated error of moving direction and displacement

The accumulated error related to a_e can be assumed, based on several fundamental experiments, to have a linear relationship with time:

$$\int_0^t \int_0^{\tau_2} a_e(\tau_1)\, d\tau_1\, d\tau_2 \simeq \frac{P_e}{T}\, t \qquad (30)$$
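A minimal sketch of this linear drift removal, treating the per-step error (Θe or Pe) as the residual observed at the end of one swing of duration T and subtracting it proportionally over time, per Eqs. (26) and (29); the array names are ours.

```python
import numpy as np

def remove_linear_drift(integrated, residual_error, T, dt):
    """Subtract a linearly growing error from an integrated signal.

    integrated:     raw integral samples (attitude angle or position),
                    one per time step, as in Eqs. (26)/(29).
    residual_error: error observed after one step (Theta_e or P_e),
                    e.g. from the stance-phase vertical displacement or
                    the LRF-measured straight-line moving direction.
    T:              swing (correction) time for the step, in seconds.
    dt:             sample period.
    """
    integrated = np.asarray(integrated, dtype=float)
    t = np.arange(len(integrated)) * dt
    return integrated - residual_error * t / T

# Example: a 0.6 s swing whose yaw integral ends 0.05 rad off.
raw = np.linspace(0.0, 0.05, 600)        # pure drift, no true rotation
corrected = remove_linear_drift(raw, residual_error=0.05, T=0.6, dt=1e-3)
print(corrected[-1])                      # ~0 after compensation
```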
IV. VERIFICATION OF PROPOSED APPROACH AND CONTROL EXPERIMENTAL RESULTS

A. Verification of accuracy of path estimation

To verify the accuracy of path estimation, a subject walked along the circumference of a circle with a diameter of 3 m in 16 steps, and paths were estimated with and without compensation by the attitude angle. The data set of the three walking experiments is identical for the cases with and without compensation. Fig. 12 shows the estimated paths, and Fig. 13 shows the errors of the estimated positions for each step. As shown in Fig. 12, the accuracy of path estimation is improved owing to compensation by the inertial sensor attitude angle. As shown in Fig. 13, the maximum position error is within 0.5 m as a result of this compensation. Thus, it is possible to shrink the error region circle and reduce the frequency of losing sight of the tracking target or mistakenly recognizing another person as the tracking target.

Fig. 12. Estimated walking path

Fig. 13. Position error of each step

B. Experiment of automatic tracking in an occlusion environment

A corner is assumed as an occlusion environment, and an experiment of automatic tracking was carried out using Terapio. Figs. 14 and 15 show the outline of the experiment and the experimental result, respectively. Images captured by a camera mounted at the front of the robot are shown on the left in Fig. 15, and an environmental map made by the LRFs is shown on the right. First, the LRFs detect the tracking target, and the robot's following movement is performed well. Then, the corner wall covers the tracking target, and it becomes impossible for the LRFs to detect it. Therefore, estimation of the path of the tracking target is performed using the inertial sensor. Finally, the LRFs judge that the target person is within the preset circular error region and redetect the tracking target. Based on the result of this experiment, it is presumed that the robot can continue following without losing sight of the tracking target.

Fig. 14. Outline of the experiment in case of occlusion

Fig. 15. Experimental result in occlusion case
V. CONCLUSION

This study proposed a system for tracking a specified person with highly accurate estimation of the path of the tracking target using an inertial sensor, so that the system does not lose sight of the tracking target or mistakenly recognize another person as the target, even in an environment with occlusion of the LRFs. In particular, by taking the attitude angle of the inertial sensor into consideration, the accuracy of estimation of the path of the tracking target was improved, and the effectiveness was confirmed by experiments using the actual system. This study does not consider the influence of differences in the walking patterns of tracking targets on path estimation. Therefore, in future studies it will be necessary to perform path estimation and evaluation in experiments involving a larger number of subjects. Furthermore, to enhance the tracking accuracy for a specified person, an Extended Kalman filter using both the LRFs and inertial sensors will be applied in the near future.

REFERENCES
[1] J. F. Sucher, S. R. Todd, S. L. Jones, T. Throckmorton, K. L. Turner, and F. A. Moore. (2011). "Robotic telepresence: a helpful adjunct that is viewed favorably by critically ill surgical patients." The American Journal of Surgery, 202(6), 843-847.
[2] M. Imai, M. Takahashi, T. Morigushi, T. Okada, Y. Minato, T. Nakano, S. Tanaka, H. Shitamoto, and T. Hori. (2009). "A Transportation System using a Robot for a Hospital." The Robotics Society of Japan, 27(10), 1101-1104.
[3] R. Murai, T. Sakai, and Y. Kitano. (2012). "Autonomous Navigation Technology for 'HOSPI,' a Hospital Delivery Robot System." Collection of Manuscripts for the 55th Japan Joint Automatic Control Conference, CD-ROM No. 1K303.
[4] K. Terashima, S. Takenoshita, J. Miura, R. Tasaki, M. Kitazaki, R. Saegusa, T. Miyoshi, N. Uchiyama, S. Sano, J. Satake, R. Ohmura, T. Fukushima, K. Kakihara, H. Kawamura, and M. Takahashi. (2014). "Medical Round Robot - Terapio -." Journal of Robotics and Mechatronics, 26(1), 112-114.
[5] R. Tasaki, M. Kitazaki, J. Miura, and K. Terashima. (2015). "Prototype Design of Medical Round Supporting Robot 'Terapio'." IEEE International Conference on Robotics and Automation (ICRA 2015), 829-834.
[6] K. Koide, J. Miura, and J. Satake. (2013). "Development of an attendant robot with a person tracking capability." The Robotics and Mechatronics Conference 2013, 1P1-L03.
[7] S. Okusako and S. Sakane. (2006). "Human Tracking with a Mobile Robot using a Laser Range-Finder." Journal of the Robotics Society of Japan, 24(5), 605-613.
[8] Y. Okada, J. Miura, I. Ardiyanto, and J. Satake. (2013). "Indoor Mapping including Steps by a Mobile Robot with Multiple Range Sensors." The Robotics and Mechatronics Conference 2013, 1P1-L02.
[9] M. Kobilarov, G. Sukhatme, J. Hyams, and P. Batavia. (2006). "People tracking and following with mobile robot using an omnidirectional camera and a laser." IEEE International Conference on Robotics and Automation, 557-562.
[10] I. Ardiyanto and J. Miura. (2012). "Real-time Navigation using Randomized Kinodynamic Planning with Arrival Time Field." Robotics and Autonomous Systems, 60(12), 1579-1591.
[11] O. Khatib. (1986). "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots." The International Journal of Robotics Research, 5(1), 90-98.
[12] U. Steinhoff and B. Schiele. (2010). "Dead reckoning from the pocket - An experimental study." IEEE International Conference on Pervasive Computing and Communications, 162-170.
[13] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. (2000). "The Cricket Location-Support System." 6th Annual ACM International Conference on Mobile Computing and Networking, 32-43.
[14] D. Hallaway, T. Höllerer, and S. Feiner. (2003). "Coarse, Inexpensive, Infrared Tracking for Wearable Computing." International Symposium on Wearable Computers, 69-78.
[15] E. Foxlin. (2005). "Pedestrian Tracking with Shoe-mounted Inertial Sensors." IEEE Computer Graphics and Applications, 25(6), 38-46.
[16] A. Hamaguchi, M. Kanbara, and N. Yokoya. (2006). "User Localization Using Wearable Electromagnetic Tracker and Orientation Sensor." IEEE International Symposium on Wearable Computers, 55-58.