IEEE INFOCOM 2017 - IEEE Conference on Computer Communications
LiCompass: Extracting Orientation from
Polarized Light
Yu-Lin Wei†*, Hsin-I Wu†*, Han-Chung Wang†, Hsin-Mu Tsai†, Kate Ching-Ju Lin‡,
Rayana Boubezari§, Hoa Le Minh§, and Zabih Ghassemlooy§
† Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
‡ Department of Computer Science, National Chiao Tung University, Taiwan
§ Faculty of Engineering and Environment, Northumbria University, Newcastle upon Tyne, NE1 8ST, UK
Email: {r03922027,r02944020,b02902124,hsinmu}@csie.ntu.edu.tw, katelin@cs.nctu.edu.tw,
{rayana.boubezari, hoa.le-minh, z.ghassemlooy}@northumbria.ac.uk
* Co-primary Authors
Abstract—Accurate orientation information is key in
many applications, ranging from map reconstruction with
crowdsourcing data, location data analytics, to accurate indoor
localization. Many existing solutions rely on noisy magnetic and
inertial sensor data, leading to limited accuracy, while others
leverage multiple, dense anchor points to improve the accuracy,
requiring significant deployment efforts. This paper presents
LiCompass, the first system that enables a commodity camera
to accurately estimate the object orientation using just a single
optical anchor. Our key idea is to allow a camera to observe
varying intensity levels of polarized light when it is in different
orientations and, hence, perform estimation directly from image
pixel intensity. As the estimation relies only on pixel intensity,
instead of the location of the anchor in an image, the system
performs reliably at long distance, with low resolution images,
and with large perspective distortion. LiCompass’ core designs
include an elaborate optical anchor design and a series of signal
processing techniques based on trigonometric properties, which
extend the range of orientation estimation to full 360 degrees.
Our prototype evaluation shows that LiCompass produces very
accurate estimates with median errors of merely 2.5 degrees at
5 meters and 7.4 degrees at 2.5 meters with an irradiance angle
of 55 degrees.
I. INTRODUCTION
Knowing the orientation with respect to a certain waypoint is important for numerous applications (Fig. 1(a)). The
walking direction of users is essential for retail store data
analytics to understand customer behaviors, and for constructing detailed indoor maps using crowdsourced data [1].
The information is also crucial for an indoor localization
system to produce minimal error, as many existing solutions
either rely solely on orientation information (i.e., triangulation) [2] or combine orientation information with distance
information [3] to produce their position estimation. The
orientation information would also be very useful for the
emerging millimeter-wave (mmWave) systems, as the cost of
beam searching can be avoided if a pair of transceivers can
quickly steer their highly directional transmissions toward
each other.
Existing solutions providing orientation information rely
on magnetic and/or inertial sensors [4], [3], [5], [6], which
can be hindered by severe noise or interference. Others use
visual patterns such as QR-code, which only works with
limited distance and is especially vulnerable to perspective
distortion. A potential solution to improve orientation accuracy is to make use of multiple, dense anchors [7], [8],
[9], [2], or leverage a large-scale antenna array [10]. These
either require costly initial deployment or need tight synchronization that is difficult to perform. Deriving a solution
with minimal deployment effort and good accuracy remains
an open challenge.
This paper develops LiCompass, the first system that
enables a commodity camera to accurately estimate the
device orientation directly from image pixel intensity using
just a single optical anchor. Compared to other angle-of-arrival-based (AoA-based) techniques, LiCompass only
needs one anchor, and thus can easily be adopted with low
deployment cost. We also show that the polarization characteristic leveraged in our solution can constantly yield reliable
estimation, producing a median error of merely 2.5 degrees. The result outperforms existing solutions in
a wide range of scenarios, suggesting that LiCompass can
be used as a key component in many systems to improve
their performance, such as an indoor localization system.
More importantly, it can further be incorporated with existing
visible light communication (VLC) techniques [11], [12] and
naturally combine the detected orientation with the delivered
information, broadening its applicability.
LiCompass only relies on pixel intensity, instead of
the location of the anchor in the image, to estimate the
orientation. Such an intensity-based approach is more reliable
with low resolution images, at longer distance, and with
larger perspective distortion. The key idea of LiCompass is
to make use of linear polarizers, which turn unpolarized light
into polarized light. By using two pieces of thin-film linear
polarizers, one attached to the receiver’s camera lens and the
other as the anchor attached to the object, the angle between
the directions of the two polarizers can be estimated by the
intensity of light that passes through them and captured by the
camera. A nice property is that human eyes cannot distinguish different polarization directions, and thus the optical anchor is visually unobtrusive¹. Thin-film polarizers are cost-effective ($0.05/cm²) and can be integrated with a case or detachable lens for a mobile device (Fig. 1(b)) so that they do not interfere with the camera when the system is not in use.

Fig. 1: LiCompass usage scenario: (a) orientation estimation with LiCompass; (b) camera receiver with polarizer.
To realize this idea, there are however two fundamental
challenges to be addressed.
i) The previously mentioned intensity-based orientation estimation technique only works for a limited angular range
of 90 degrees. To overcome this limitation, LiCompass
combines a set of polarizers as an optical anchor. By cleverly
arranging the directions and geometric layout of the anchor’s
polarizers (see Section IV-A), LiCompass can extend the
service range to full 360 degrees. We then develop a series
of techniques to correctly identify these different polarizers
(see Section IV-B), and accurately determine the device
orientation (see Section IV-C).
ii) The naı̈ve estimation approach requires the knowledge
of initial intensity, i.e., the output intensity through the first
polarizer on the object, but not the second polarizer on
the camera lens. Even worse, this value may greatly vary
with the relative location of the receiving camera as well
as the transparency of the polarizers, making its estimation
a great challenge. We resolve this difficulty by developing
a technique that takes advantage of the multiple polarizers
in the optical anchor to eliminate the need of knowing this
information (see Section IV-C).
We have implemented a prototype of LiCompass using
commodity optical linear polarizers, cameras, and smartphones. To evaluate its accuracy, we carry out extensive
measurements across the entire range of device orientations.
Our key findings include the following:
• The naïve scheme that uses only a single polarizer together with the estimated initial intensity can support an estimation range of only 90 degrees and has an average estimation error of 18.25°, which is less than desirable for many practical systems.
• By leveraging multiple polarizers to exclude the need of knowing the initial intensity, LiCompass not only extends the supported range to a full 360 degrees, but also reduces the median error to 2.5°.
• LiCompass is especially resilient to perspective distortion. In a challenging scenario at 2.5 meters with a 55-degree irradiance angle, where QR-code or vector-based estimation completely fails, LiCompass produces a median error of merely 7.4°.
1 When unpolarized light passes through the polarizer, the intensity is reduced by half. However, we claim that the anchor is unobtrusive because the intensity reduction is uniform regardless of the viewing angle.
II. RELATED WORK
Related work falls in the following three categories.
Visible light positioning and orientation detection: Past
works have demonstrated that visible light positioning is a
promising approach with fine granularity. A number of existing works use a dedicated photodiode (PD) as the receiver
to estimate the receiver’s position and orientation using the
phase difference [13], [14] or the received power [15]. They
however require a dedicated PD and very dense deployment
of light sources. Recent studies [7], [8] have demonstrated
that camera-based visible light communications can achieve
sub-meter indoor positioning. They modulate the identity
of each light either using flickering frequency [7] or using polarization as well as color dispersion [8]. However, unlike these approaches, which require at least three light
sources, LiCompass only needs a single anchor. Some
other works [16], [17] exploit the captured images of an
environment to localize a camera. Those methods however
need a significant effort in constructing the image database.
RF-based localization and orientation detection: RF-based localization has been well investigated in the past
decade. Prior works either leverage landmarks [18], fingerprinting [19] or AoA-capable nodes [20], [21] to localize a
mobile device, or further exploit user mobility for tracking [3]. Due to noisy and dynamic channels, the typical accuracy of
those systems is sub-meter to a few meters. Later proposals
then adopt emerging MIMO techniques [22] to improve the
accuracy of RF-based localization, or rotate a single-antenna
device to mimic the functionality of multiple antennas [23].
Other alternative approaches estimate the orientation or location using RFID tags [9], [24], FMCW radar [25], or full-duplex radios [26]. Those systems, however, require expensive
or special-purpose hardware.
Detection using inertial sensors: Some systems [5], [6]
leverage inertial sensors of a smartphone, including accelerometer, gyroscope, and compass, to detect its orientation. While these sensors are more energy efficient, their
estimation error typically accumulates over time, e.g., 7° error on average over five minutes as reported in [5], which can only be fixed via frequent calibration, introducing additional overhead.

Fig. 2: Optical polarization.

Fig. 3: LiCompass' design: (a) LiCompass framework; (b) anchor design.
III. POLARIZATION PRIMER

Before describing our design, we first introduce optical polarization, and outline the potential and challenges of leveraging polarization to estimate the device orientation.

A. Optical Polarization
Regular light sources such as LEDs typically emit unpolarized light, which oscillates in more than one direction, as shown in Fig. 2(a). A linear polarizer, as shown in Fig. 2(b), is an optical filter that can transform unpolarized light into polarized light, which oscillates along only one particular direction. Specifically, a polarizer passes only light of a specific polarization, but filters out all the other
polarizations. Human eyes cannot recognize the difference in
polarization, and therefore the polarization effect can only be
observed by placing another polarizer in front of the first one,
as shown in Fig. 2(c). If the directions of the two filters are
parallel, the light can pass through both filters and, hence, be
perceived by human eyes or cameras behind the second filter.
On the contrary, if the two filters are perpendicular to each
other, the polarized light will be blocked by the second one
and cannot be perceived. By adjusting the angle between the
two polarizers, we can further change the intensity of light
that passes through. This polarization property hence gives us
an opportunity to accurately estimate the device orientation
based only on the intensity of light passing through.
B. Orientation and Pixel Intensity
According to Malus' law [27], when a polarized light beam traverses a linear polarizer, the intensity of the beam that passes through, denoted by Iθ, is given by

Iθ = I0 cos²(θ),  (1)

where I0 is the initial intensity of the polarized beam and θ is the angle between the direction of the initial polarized beam and the polarization direction of the filter. In the setting of Fig. 2, I0 is the intensity of the polarized beam after the first filter but before the second one, while θ is the angle between the polarizations of the two filters, i.e., the rotation of the second polarizer along axis α that is perpendicular to the surface of the first polarizer. In theory, we can measure
,"-."/0-1!
!vr
(a) changes in orientation
(b) received intensity pattern
Fig. 4: Intensity patterns vary with orientation.
the light intensity that passes through the two filters, and use
it to estimate the inter-polarizer angle as follows:
r
Iθ
.
(2)
θ = arccos
I0
Note that Eq. (2) holds for arbitrary rotation of the second
polarizer with respect to the other two axes along the surface
of the first polarizer, i.e., β and γ in Fig. 2.
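To make Eqs. (1) and (2) concrete, the following minimal Python sketch evaluates Malus' law and inverts it to recover the inter-polarizer angle. The function names and the clipping guard against noisy intensities are our own illustrative choices, not part of the paper's implementation.

```python
import numpy as np

def malus_intensity(I0, theta_deg):
    """Eq. (1): intensity after the second polarizer (Malus' law)."""
    return I0 * np.cos(np.radians(theta_deg)) ** 2

def inter_polarizer_angle(I_theta, I0):
    """Eq. (2): recover the inter-polarizer angle in degrees.
    The result is inherently ambiguous within [0, 90]."""
    ratio = np.clip(I_theta / I0, 0.0, 1.0)  # guard against noisy readings
    return np.degrees(np.arccos(np.sqrt(ratio)))

I = malus_intensity(1.0, 30.0)          # cos^2(30 deg) = 0.75
print(inter_polarizer_angle(I, 1.0))    # ~30.0
```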
Fig. 4: Intensity patterns vary with orientation: (a) changes in orientation; (b) received intensity pattern.

While the idea is simple, to realize intensity-based orientation estimation, we still confront two practical challenges. First, since both I0 and Iθ are nonnegative, Malus' law only allows us to determine the orientation within a limited
range of 90 degrees. Second, orientation estimation relies on
the information about initial intensity I0 , which might not
be obtained easily. In particular, we may need to take off the
second polarizer for measuring I0 , which is impractical. Even
worse, the polarizer is usually not perfectly transparent [28], introducing additional noise to Eq. (1) and making
estimation of I0 even more difficult.
The key idea of LiCompass is to incorporate multiple polarizers to resolve the two critical problems at once.
We will show in Section IV how LiCompass leverages
multiple polarizers placed in different directions to achieve
a full range of orientation estimation from 0 to 360 degrees
and, meanwhile, eliminate the need for knowing the initial
intensity I0 .
IV. LICOMPASS DESIGN
LiCompass is a camera-based solution that enables invisible polarization-based orientation estimation. Fig. 3(a)
illustrates the framework of LiCompass. We concatenate
four polarizers, denoted by P1 , P2 , P3 and P4 , as a 2×2
square optical anchor, as shown in Fig. 3(b), and attach it to
an LED light fixture. Each receiving camera has a small and low-cost polarizer (∼2 cm × 2 cm, each costing roughly $0.05) in front of its lens in order to capture the pixel intensity of the anchor. When a client holding the camera moves around the anchor, the camera's location and, thereby, its orientation with respect to the anchor also change, as illustrated in Fig. 4(a).
Essentially, changing a camera’s orientation is equivalent to
rotating the anchor but keeping the camera static. Also, the
polarization effect of rotating the anchor is similar to that
of rotating the camera (and its polarizer). Hence, the camera
can observe varying pixel intensities of the captured anchor
when it moves.
We will first introduce LiCompass’ anchor design in
Section IV-A, and explain why this design allows us to
support a full detection range of 360 degrees. The anchor
area can be found using existing computer vision techniques,
which are not the main contributions of this work and will
be described in Section V. Once the anchor area is located,
LiCompass’ receiving camera first measures the intensity
of the four areas in the anchor, and leverages the structure
of the anchor to determine the identity of each polarizer in
the anchor (see Section IV-B). After identifying the areas
of the four polarizers, the camera uses the intensity of the
four areas to eliminate the effect of initial intensity, I0 , and
accurately estimate its orientation (see Section IV-C). The
above design can generally be applied to an environment with
multiple anchors since they will occupy different pixels in a
captured image and can be processed separately2 .
A. Anchor Design
LiCompass’ anchor consists of four polarizers,
P1 , P2 , P3 and P4 . To extend the detection range to 360
degrees, we place P1 and P4 along the same polarization
direction, place P2 along the direction perpendicular to P1 ,
and place P3 along a direction in between, as illustrated in
Fig. 3(b). We define P1's polarization as the reference zero degree, φ0 = 0°. The direction of P3 is then set to 45°, which is the optimal choice as we derive in [29]. We
choose this anchor design based on three key principles.
First, P1 and P4 are aligned so that LiCompass’ camera
can locate the two polarizers that have the same intensity
and determine the identities of the four anchor polarizers.
Second, to avoid ambiguity, we need to place at least two polarizers that are neither perpendicular nor parallel to each other. Third, we place the four polarizers as a square
anchor spanning two dimensions so that their geometric
structure can further be used to extend the detection range
to 360 degrees.
2 To distinguish different anchors, we use a light behind each anchor to broadcast its identity using VLC techniques, e.g., [7], [11].

Fig. 5: Intensity of ambiguous orientations: (a) received intensity; (b) ambiguous orientations.

To see why adding a diagonal polarizer P3 is necessary, we plot the theoretical intensities of P1, P2 and P3 derived based
on Eq. (1) in Fig. 5(a) when the orientation of a camera varies
from 0◦ to 360◦ , assuming I0 = 1. The figure shows that the
intensity of P1 observed by the camera will be the same when
the camera orientation is φ, 180 ± φ and 360 − φ degrees,
respectively. This is because the four ambiguous orientations
correspond to the same inter-polarizer angle between P1 and
the camera polarizer, as illustrated in Fig. 5(b). For example,
the intensity of P1 is the same when the camera orientation
is 30◦ , 150◦ , 210◦ and 330◦ , respectively. Unfortunately, in
those four orientations of φ, 180 − φ, 180 + φ and 360 − φ,
the camera also receives the same intensity in P2.³ Since P1
and P2 correspond to the same set of ambiguous orientations,
the camera cannot distinguish between them simply based on
the intensity pattern of P1 and P2 . However, by introducing
another polarizer P3 at 45°, the camera then receives the same intensity of P3 in a different set of ambiguous orientations {φ, 90 − φ, 180 + φ, 270 − φ}⁴, as shown in Fig. 5(a).
This diversity gives us an opportunity to resolve orientation
ambiguity. For example, though, in both the orientations
of 30◦ and 150◦ , the camera observes the same intensity
pattern from P1 and P2 , it, however, receives a very different
intensity in P3 , as can be seen in Fig. 4(b). Thus, the camera
now can clearly differentiate the two cases.
Unfortunately, orientation ambiguity cannot be completely
addressed by simply observing the intensities of the four
areas. We can see from Fig. 5(a) that we still cannot tell
the difference between the intensities in φ and 180 + φ
since, in both cases, the camera will receive exactly the same
intensity pattern from P1 , P2 , and P3 . For example, the
camera receives the same intensity pattern when it rotates
to 30◦ and 210◦ , respectively, as shown in Fig. 4(b) and
Fig. 5(a). Fortunately, by placing the three polarizers across
two dimensions, we can further use their geometric structure
to differentiate the range of 0–180 degrees from the range of 180–360 degrees, achieving the goal of full-range detection. LiCompass' orientation estimation algorithm will be
detailed in Section IV-C.
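The ambiguity sets above can be checked numerically. The short sketch below evaluates Eq. (1) for the three polarizer directions (0°, 90°, and 45°) at the four example orientations, assuming I0 = 1; it is an illustration, not the authors' code.

```python
import numpy as np

# Theoretical intensities of P1 (0 deg), P2 (90 deg), and P3 (45 deg)
# seen at camera orientation phi, per Eq. (1) with I0 = 1.
def anchor_intensities(phi_deg, I0=1.0):
    phi = np.radians(phi_deg)
    return [round(float(I0 * np.cos(phi - np.radians(d)) ** 2), 3)
            for d in (0, 90, 45)]

for phi in (30, 150, 210, 330):
    print(phi, anchor_intensities(phi))
# P1 and P2 read identically at all four orientations, while P3 agrees
# only at 30 and 210 -- exactly the residual phi vs. 180+phi ambiguity
# that the anchor's two-dimensional geometry resolves.
```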
3 Since the angle between P2 and the camera polarizer is θ2 = φ − 90, the set of ambiguous orientations obtained from P2 is {90 ± (φ − 90), 90 + 180 ± (φ − 90)} = {φ, 180 − φ, 180 + φ, 360 − φ}.
4 The set of ambiguous orientations obtained from P3 is {45 ± (φ − 45), 45 + 180 ± (φ − 45)} = {φ, 90 − φ, 180 + φ, 270 − φ}.
B. Determining Identities of Polarizers
Before estimating the orientation, we need to first determine the identities of the four polarizer areas for capturing
their geometric relationship. However, the camera can only
measure the average intensity of each of the four areas,
temporarily denoted by Ia , Ib , Ic and Id clockwise. We use
the sub-indices, a, b, c and d, to avoid confusion because,
for now, we have no idea what the identity of each area is.
Our goal is hence to map a, b, c and d to the corresponding
identities of P1 , P2 , P3 and P4 , respectively. To do so, we use
the measured intensity to roughly estimate the angles between
the camera polarizer and the anchor’s four polarizers, θa , θb ,
θc and θd , which are within 90 degrees. In particular, here,
we estimate the initial intensity I0′ by averaging the intensity of the pixels occupied by the light source but not covered by the first polarizer, and use I0′ to find the estimated angles based on Eq. (2). Though those estimated angles might not be very accurate due to imperfect knowledge of the initial intensity, i.e., I0′ ≠ I0, this coarse information is already sufficient for identifying the four areas. We will describe how we perform fine-grained estimation in Section IV-C.
Recall that P1 and P4 align along the same polarization
direction. Therefore, we first calculate the difference in the
angle between each diagonal pair of the two areas, i.e., |θa −
θc | and |θb − θd |, and look for the pair with the smaller angle
difference. The selected pair should be mapped to the pair of
polarizers (P1 , P4 ). Without loss of generality, let’s assume
that (a, c) is the pair mapping to the pair (P1 , P4 ). The next
step is to identify P2 from the remaining two areas, b and d.
Since P2 is perpendicular to both P1 and P4 , we can naturally
designate the area with an angle closest to 90◦ minus the
average angle of P1 and P4 , i.e., 90◦ − (θa + θc )/2, as P2 .
Once P2 is identified, we can easily distinguish between the
selected diagonal pair P1 and P4 according to the geometric
structure of the anchor.
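One possible rendering of this identification procedure is sketched below in Python. The clockwise area ordering and all helper names are our assumptions for illustration; the paper's actual implementation may differ.

```python
import numpy as np

# Sketch of Section IV-B's identification step, assuming the four
# measured area intensities are given clockwise and a rough initial
# intensity I0_est is taken from uncovered light-fixture pixels.
def identify_polarizers(I_areas, I0_est):
    # Coarse inter-polarizer angles via Eq. (2), each within [0, 90].
    theta = np.degrees(np.arccos(np.sqrt(np.clip(
        np.asarray(I_areas) / I0_est, 0.0, 1.0))))
    # P1 and P4 share a direction, so pick the diagonal pair whose
    # coarse angles agree best: (a, c) vs. (b, d).
    if abs(theta[0] - theta[2]) <= abs(theta[1] - theta[3]):
        pair14, rest = (0, 2), (1, 3)
    else:
        pair14, rest = (1, 3), (0, 2)
    mean14 = (theta[pair14[0]] + theta[pair14[1]]) / 2.0
    # P2 is perpendicular to P1/P4, so its coarse angle should be
    # close to 90 degrees minus their average angle.
    p2 = min(rest, key=lambda i: abs(theta[i] - (90.0 - mean14)))
    p3 = rest[0] if p2 == rest[1] else rest[1]
    # Telling P1 from P4 within the chosen pair uses the square's
    # clockwise geometry relative to P2 (omitted here).
    return {"P1/P4": pair14, "P2": p2, "P3": p3}
```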
Note that our mapping scheme relies on first locating
the diagonal pair of areas that have similar intensity as
P1 and P4 . Therefore, a mismatch could occur when the
other diagonal pair, P2 and P3 , happen to also have similar
intensity. In particular, this confusion takes place when the camera orientation is around (90k − 45/2) degrees, k = 1, 2, 3, 4, as can be observed in Fig. 5(a). In those regions, the camera
also receives the same intensity in the areas of P2 and P3 ,
making it very difficult to correctly locate the diagonal pair
of P1 and P4 . However, the size of those regions is fairly
small. In addition, this problem can easily be resolved with just a small amount of user movement. Due to the space limitation,
we refer the readers to [29] for the details.
C. Intensity-based Orientation Estimation
We now focus on how to estimate the orientation of
the camera φ based on the average intensity of the four
identified polarizers, I1 , I2 , I3 , and I4 . With LiCompass’
anchor design, the camera in both orientations φ and 180+φ,
0 ≤ φ < 180, will receive the same intensity pattern
in the four areas. Therefore, we start by explaining how
LiCompass estimates the orientation within 180 degrees,
and then demonstrate how we distinguish between the upper quadrants (0°–180°) and lower quadrants (180°–360°).
Let ϕi denote the polarization of the anchor polarizer Pi, i = 1, 2, 3, and define θi = φ − ϕi as the inter-polarizer angle between the camera polarizer and polarizer
Pi. Based on Malus' law, one can only estimate the
inter-polarizer angle θi ranging between 0 and 90 degrees,
but fails to distinguish between the ambiguous angles θi and
−θi due to the periodicity of the cosine function. However,
fortunately, we will be able to obtain, from each anchor
area, a pair of ambiguous orientations lying between 0
and 180 degrees. More importantly, as mentioned in Section IV-A, LiCompass’ anchor design ensures that there
exists exactly one common solution among all the pairs
of ambiguous orientations obtained from the three different
polarizers. LiCompass hence leverages this property to find
the orientation φ.
To realize the above intuition, we need to further eliminate
the effect of initial intensity I0 . Recall that the intensity of Pi
can be expressed by Ii = I0 cos2 (θi ). Taking advantage of
the multiple polarizers in our anchor, we can easily remove
the term I0 by finding the ratio of the intensity of an area to
the intensity of another area. In particular, we exploit I2/I1 to estimate two candidate orientations φ′P2 and φ′′P2, while using I3/I1 to estimate the other two candidate orientations φ′P3 and φ′′P3. Ideally, if the estimation is perfect without error, the
intersection {φ′P2 , φ′′P2 } ∩ {φ′P3 , φ′′P3 } will be the final camera
orientation φ.
Specifically, the intensity ratio of P2 to P1 is given by

I2/I1 = (I0 cos²(θ2)) / (I0 cos²(θ1)) = cos²(θ2) / sin²(θ2) = 1 / tan²(θ2),  (3)

in which the term I0 can be eliminated. Note that we are interested in finding φ ranging from 0 to 180 degrees. Thus, θ2 = φ − 90° ranges from −90° to 90°, and, based on Eq. (3), we get

tan(θ2) = −√(I1/I2), when −90° ≤ θ2 < 0°;
tan(θ2) = √(I1/I2), when 0° ≤ θ2 < 90°.  (4)

Hence, we can estimate the two ambiguous inter-polarizer angles by θ2′ = arctan(−√(I1/I2)), when −90° ≤ θ2 < 0°, and θ2′′ = arctan(√(I1/I2)), when 0° ≤ θ2 < 90°, without the need of knowing I0. The two estimated candidate orientations can then be found as follows:

φ′P2 = ϕ2 + θ2′ = 90° + arctan(−√(I1/I2)), and
φ′′P2 = ϕ2 + θ2′′ = 90° + arctan(√(I1/I2)),  (5)

where φ′P2 must be within [0°, 90°) and φ′′P2 must be within [90°, 180°).
Fig. 6: Ambiguous estimated orientations.
Similarly, the intensity ratio of P3 to P1 is given by

I3/I1 = (I0 cos²(θ3)) / (I0 cos²(θ1)) = cos²(θ3) / cos²(θ3 + π/4) = 2cos²(θ3) / (cos(θ3) − sin(θ3))² = 2 / (1 − tan(θ3))².  (6)

Since θ3 = φ − 45° ranges from −45° to 135° and the tangent function is no less than 1 when 45° ≤ θ3 < 90°, based on Eq. (6), we have

tan(θ3) = 1 + √(2I1/I3), when 45° ≤ θ3 < 90°;
tan(θ3) = 1 − √(2I1/I3), otherwise.  (7)

Therefore, we can estimate the two ambiguous inter-polarizer angles θ3′ and θ3′′ by arctan(1 ± √(2I1/I3)), and find the two estimated candidate orientations

φ′P3 = ϕ3 + θ3′ = 45° + arctan(1 + √(2I1/I3)), and
φ′′P3 = ϕ3 + θ3′′ = 45° + arctan(1 − √(2I1/I3)),  (8)

where φ′P3 must be within [90°, 135°) and φ′′P3 must be within [0°, 90°) or [135°, 180°).
Fig. 6 plots the four estimated ambiguous orientations,
φ′P2 , φ′′P2 , φ′P3 and φ′′P3 , for each true orientation within 180
degrees, assuming no estimation error. The figure shows that
there always exist two estimated orientations in common,
which is indeed the true orientation. However, the intensity measurement will never be perfect due to the interference of
ambient light or environmental dynamics. Hence, instead of
determining the true orientation by finding the intersection
{φ′P2 , φ′′P2 } ∩ {φ′P3 , φ′′P3 }, we take the minimum of the following three terms, |φ′P2 −φ′′P3 |, |φ′′P2 −φ′P3 | and |φ′′P2 −φ′′P3 |,
and regard the average of the two directions in that term as
the final estimated orientation φ′ . Note that we omit checking
the difference between φ′P2 and φ′P3 because they, by default,
belong to two non-overlapping angular ranges.
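The estimation within 180 degrees can be summarized compactly. The sketch below implements Eqs. (4)–(8) and the closest-pair rule just described, assuming the three identified intensities I1, I2, and I3 are given; it is a sketch under these assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_orientation_180(I1, I2, I3):
    """Candidate orientations from Eqs. (5) and (8), resolved by the
    closest-pair rule. Returns phi in [0, 180)."""
    r2 = np.sqrt(I1 / I2)                               # Eq. (4)
    phi_p2 = (90 + np.degrees(np.arctan(-r2)),          # phi'_P2  in [0, 90)
              90 + np.degrees(np.arctan(r2)))           # phi''_P2 in [90, 180)
    r3 = np.sqrt(2 * I1 / I3)                           # Eq. (7)
    phi_p3 = (45 + np.degrees(np.arctan(1 + r3)),       # phi'_P3  in [90, 135)
              (45 + np.degrees(np.arctan(1 - r3))) % 180)  # phi''_P3
    # phi'_P2 vs. phi'_P3 is skipped: their ranges never overlap.
    pairs = [(phi_p2[0], phi_p3[1]),
             (phi_p2[1], phi_p3[0]),
             (phi_p2[1], phi_p3[1])]
    a, b = min(pairs, key=lambda p: abs(p[0] - p[1]))
    return (a + b) / 2.0

# With I0 = 1 and true phi = 30 deg: I1 = 0.75, I2 = 0.25, I3 ~ 0.933.
print(estimate_orientation_180(0.75, 0.25, 0.933))  # ~30.0
```

The remaining ambiguity between φ′ and 180° + φ′ is resolved geometrically, as described next.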
So far we haven’t discussed how to detect the orientation
between 180 and 360 degrees because the intensity pattern
received in orientation φ will be the same with that received
in 180◦ + φ, as illustrated in Fig. 5(a). However, since our
anchor design spans two dimensions, its geometric structure
of the anchor can be used to distinguish the two cases. In
particular, we can generate a Cartesian coordinate system
with the origin at the centroid of the anchor, and let ~vr be
Fig. 7: Experimental setup.
the vector connecting from the origin to the joint point of P2
and P4 on the anchor border (see Fig. 4(b)). Then, 6 ~vr will
be the vector-based estimation of the orientation. We then
check whether the intensity-based estimation φ′ or 180◦ + φ′
is closer to this vector-based estimation 6 ~vr . The closer one
is then our final estimation.
V. IMPLEMENTATION
We used a PointGrey Flea3 camera [30] as the receiver, which captures images at 2048×1080 resolution. The exposure time of the camera is set to 0.015 ms so that no area in the
captured image is saturated. An anchor of 2×2 polarizers
is attached to a ceiling LED light fixture, while another
polarizer is attached to the camera lens (see Fig. 7(a)). The
captured images are processed offline on a Linux machine
to estimate the camera orientation. Before performing the
estimation, we apply two different approaches to locate the
area occupied by the anchor from the captured image. The
first one uses a graphical user interface (GUI) for a user to
manually tag the four polarizer areas in the image. Specifically, the user draws a rectangle in each of the four areas
in the image, as well as a rectangle around an arbitrary area
which is occupied by the light fixture but not the polarizers.
The last rectangle is for estimating I0 , required by polarizer identification (Section IV-B). Only the pixels in these
rectangles are used for orientation estimation. The second
approach adopts computer vision techniques to automatically
locate the anchor pixels. We developed two algorithms. One
determines the four polarizer areas by locating their corners,
while the other attempts to find the darkest area within the
boundary of the light fixture, which usually corresponds to
the polarizer areas. Due to the space limit, we refer the readers
to [29] for the details. Unless stated otherwise, we use the
first approach as the default in order to exclude the effect of
erroneous detection, and focus on evaluating the performance
of orientation estimation.
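As a concrete illustration of this measurement step, the snippet below averages the pixel intensity inside each tagged rectangle with OpenCV; the file name, rectangle coordinates, and grayscale readout are placeholder assumptions rather than the paper's exact pipeline.

```python
import cv2
import numpy as np

def mean_intensities(image_path, rects):
    """rects maps an area name to (x, y, w, h); returns the mean
    gray level of each rectangle."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return {name: float(np.mean(gray[y:y + h, x:x + w]))
            for name, (x, y, w, h) in rects.items()}

# Hypothetical rectangles: the four polarizer areas plus an uncovered
# fixture area used to approximate the initial intensity I0'.
rects = {"P1": (100, 80, 40, 40), "P2": (150, 80, 40, 40),
         "P3": (150, 130, 40, 40), "P4": (100, 130, 40, 40),
         "I0": (220, 80, 40, 40)}
I = mean_intensities("frame.png", rects)
```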
VI. EVALUATION
Fig. 8: Anchor detection rate.

Fig. 9: Angle estimation error.

We first conduct extensive measurements to verify the effectiveness of LiCompass' key components. We then experimentally evaluate the accuracy of orientation estimation
in LiCompass. Experiments were carried out in a typical
conference room with normal daylight. To obtain the ground
truth of the camera orientation, we placed an angle scale on
the floor, as shown in Fig. 7(b), and mounted the receiving
camera on a tripod placed on a small trolley connected to the
origin of the angle scale (referred to as ground origin) by a
string. The camera height is 100 cm, similar to the height of
a hand-held device. We rotated the trolley around the ground
origin with 10-degree steps within the entire 360 degrees.
Five images were captured at each location, and we output
the average estimate. The experiments were carried out with
different distances between the camera and the ground origin,
at 0, 100 and 250 cm, with 0 cm being the default.
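Because orientation is a circular quantity, naively averaging the five per-image estimates can fail near the 0°/360° wrap-around; a circular mean, sketched below, avoids this. Whether the authors averaged in exactly this way is our assumption.

```python
import numpy as np

def circular_mean_deg(angles_deg):
    """Average angles on the circle; returns degrees in [0, 360)."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    return float(np.degrees(np.arctan2(np.sin(a).mean(),
                                       np.cos(a).mean())) % 360)

print(circular_mean_deg([359, 1, 2, 358, 0]))  # ~0.0, not the naive 144.0
```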
A. Micro Benchmark
The performance of LiCompass’ orientation estimation
relies on the accuracy of anchor detection and inter-polarizer
angle estimation, and therefore we first empirically measure
the performance of these key components.
(a) Success ratio of anchor detection: We first compare
the success rate of LiCompass’ anchor detection with that
of the detection of a version 2 QR-code marker of the same
size, i.e., 16 × 16 cm, with the configuration of level “M”
ECC (Error Correction Capability), a payload of 180 data
bits, and a module size of 25 × 25 [31].
Experiment: We tested two challenging scenarios: 1) large
irradiance angle5 , i.e., with perspective distortion, and 2)
long distance, i.e., with a low number of pixels occupied
by the anchor. In particular, in the first scenario, we attach LiCompass' optical anchor or a QR-code marker to the
ceiling and move the camera around the anchor at a distance
of 250 cm to the ground origin. In the second scenario,
to exclude the impact of perspective distortion, we instead
attach the light fixture with the anchor to a wall, and deploy
the camera facing directly toward the anchor at distances of
150 and 500 cm, respectively.
5 The irradiance angle is defined as the angle between the axis perpendicular to the surface of the anchor and the direction to the camera.

Fig. 10: Estimation error at different distances.

Result: Fig. 8 compares the success rates of anchor detection using LiCompass with the two automatic detection algorithms and QR-code. Successful detection is defined as
that the area(s) determined by the detection methods contain only pixels occupied by the corresponding polarizer(s) or
the marker. The results show that both LiCompass and
QR-code are more likely to fail with lower resolution or
with larger perspective distortion, as they adopt vision-based
automatic detection. LiCompass performs slightly worse
than QR-code at 150 and 500 cm, i.e., without distortion,
since anchor intensity varies with camera orientation and
automatic detection becomes more difficult when certain
polarizer areas become too bright. QR-code, however, cannot
be detected at all in the tilted scenario (at 250 cm) because,
unlike LiCompass’ simple 2×2 anchor design, the pattern
of a QR-code is more complex and only the three corner
blocks can be used for detection.
(b) Accuracy of angle estimation: LiCompass’ orientation
estimation is derived from the inter-polarizer angles. Hence,
we further check whether leveraging the intensity ratio of
the two polarizers can effectively improve the accuracy of
inter-polarizer angle estimation.
Experiment: We compare the estimation of the angle between
polarizer P2 and the camera polarizer, based on I2 /I1 , i.e.,
θ2′ in Eq. (5), to that based only on I2 using Eq. (2).
The estimation based on I2 requires the value of the initial
intensity I0 , which is approximated by the average intensity
of an area occupied by the light fixture but not the anchor.
Result: Fig. 9 shows that, without excluding the inaccuracy
of the estimation of I0 , a significant error (around 18.25◦
on average) exists in inter-polarizer angle estimation, which
would propagate to the subsequent orientation estimation.
LiCompass, however, removes this error by deriving the
angle from the intensity ratio of two different polarizer areas
(I2 /I1 ). Therefore, our estimation is fairly close to the ground
truth. The error of our ratio-based estimation is slightly larger
when the true orientation is around 0 or 90 degrees. This is
because, even when the camera polarizer is perpendicular to
P1 or P2 , the intensity of the polarizer area is not completely
dark due to small interference of ambient light. We will show
later that this small error, however, has limited impact on
the estimation accuracy, since LiCompass incorporates a
level of diversity by leveraging two intensity ratios I2 /I1
and I3 /I1 to estimate orientation.
B. Performance of LiCompass
Fig. 11: Impact of image resolution.

Fig. 12: Impact of anchor irradiance angles.

We now examine the accuracy of orientation estimation in static LiCompass, and compare our estimate φ with the conventional vector-based estimation, i.e., ∠~vr mentioned in
Section IV-C and illustrated in Fig. 4(b). In both schemes,
we use manual anchor detection to exclude the impact of
erroneous anchor detection.
(a) Impact of distance and perspective distortion: We first
check the accuracy of LiCompass when the distance from
the camera to the ground origin is set to 0, 100 and 250
cm, respectively. Fig. 10 plots the CDFs of the orientation
estimation error in both schemes for various distance settings.
The results show that, when the camera directly faces toward
the anchor, i.e., at 0 cm, the anchor captured in the image
is large enough to ensure a fairly accurate estimation, and,
hence, the estimation error in 92% of the cases is less than 5◦
in both schemes. Since the camera at 0 cm can mostly capture
a perfect square anchor in the image without perspective
distortion, the error of the vector-based algorithm is slightly
lower than LiCompass, which estimates only according to
pixel intensity. Even so, LiCompass still produces very
accurate estimation with a median error of merely 2.5 degrees. However, the figure shows that the accuracy of the vector-based algorithm gets worse as the distance increases. The
main reason is that the camera increases its irradiance
angle toward the anchor, thereby leading to more serious
anchor distortion. That is, the shape of the captured anchor
is distorted especially when the camera is far from the
ground origin. The vector-based algorithm is intrinsically
vulnerable to perspective distortion, and, hence, performs
poorly at long distance, in which there is large irradiance
angle. LiCompass, instead, estimates the orientation with
only pixel intensity. Thus, it is less affected by perspective
distortion, and can still produce good accuracy at long
distance. In a challenging scenario at 250 cm, LiCompass' median error is still only 7.4 degrees. If we incorporate two anchors to further position the camera via existing AoA-based localization techniques, the reported errors can give us a positioning accuracy of around 0.05–0.32 meters, which outperforms the results reported in the state-of-the-art visible
light positioning systems [8], [7].
Fig. 13: Overall estimation error: (a) at 150 cm; (b) at 500 cm; (c) at 250 cm (tilted).

(b) Impact of image resolution: To further check the impact
of image resolution, we perform the same experiment, but
reduce the image size to 256×135 and 128×68, i.e., 1/8 and
1/16 of the default size. We plot in Fig. 11 the CDFs of the
estimation error for various image sizes. The results verify
that LiCompass only needs the information of average pixel
intensity, and, hence, experiences negligible accuracy degradation when the image size becomes smaller. By contrast,
decreasing the image resolution would produce worse results
for the vector-based algorithm because a smaller image leads
to a higher quantization error in the location of the anchor
in the image and, thereby, lower precision of the estimated
vector ~vr . This also explains why LiCompass is more
reliable at long distance, where the captured anchor usually
only occupies tens of pixels in the image.
(c) Impact of irradiance angle: We finally check how
the estimation accuracy is affected by the irradiance angle,
i.e., the level of perspective distortion. In this experiment,
we manually turn the anchor by 0, 20, 40, 60 and 80
degrees, respectively, with respect to an axis of the ceiling
surface, but fix the camera at the ground origin. With such a configuration, we can rule out the impact of distance but
focus on evaluating the effect of different irradiance angles.
Fig. 12 plots the CDFs of the estimation error for various
irradiance angles. With 80◦ , the accuracy drops because the
polarizer transparency degrades significantly in this challenging setting, and thus ambient light could become strong
interference compared to the intensity of the polarized light.
In addition, as previously mentioned, the initial intensity for
different polarizer areas can be significantly different if the
irradiance angle is large, which creates additional inaccuracy
in the ratio-based estimation. However, overall, the results
verify that LiCompass’ accuracy is mostly not affected by
irradiance angles, except for the extreme scenario.
(d) Overall performance: We finally compare the performance of LiCompass with the conventional vision-based
orientation estimation scheme using a QR-code marker. For
fair comparison, we also enable automatic anchor detection in
LiCompass in this experiment, and use LiCompass with
manual anchor detection as an upper bound to evaluate the impact of vision-based detection on estimation accuracy. The experimental settings are the same as those used in Fig. 8.
The estimation error is set to 90 degrees if anchor detection
fails. Fig. 13 plots the CDFs of the estimation error in the
three different configurations. Our findings are as follows:
First, QR-code is fairly accurate at 150 cm because it
spans across a large number of pixels, but is vulnerable
to the reduction of the captured marker size in the image,
i.e., at 500 cm. Second, LiCompass does not perform as
well as QR-code at 150 cm, since it only exploits pixel
intensity. Its accuracy, however, is almost not affected as the
distance increases. Third, both LiCompass and QR-code
rely on vision-based techniques to detect the anchor, and
erroneous detection would reduce the accuracy of orientation
estimation. As one can observe from Fig. 13, in most cases,
LiCompass with auto anchor detection produces the same
accuracy as when using manual anchor detection. The former
can be further improved by leveraging more sophisticated
detection algorithms.
VII. CONCLUSION
This paper presents LiCompass, an intensity-based orientation estimation system using polarized light. We leverage
the property that the intensity of light passing through
two polarizers changes with the angle difference between
their polarization directions. A camera can hence estimate
the inter-polarizer angle, and thereby its orientation, simply
from the pixel intensity of a single anchor, instead of the
knowledge of the specific locations of multiple anchors, in
the captured image. LiCompass’ anchor is a well-designed
optical filter that can support full 360-degree estimation.
Experiments using our prototypes show that LiCompass
can achieve very accurate estimation with median errors of
2.5 degrees at 5 meters and 7.4 degrees at 2.5 meters with perspective distortion, respectively.
ACKNOWLEDGMENT
This work was supported in part by the Ministry of Science
and Technology of Taiwan, National Taiwan University, Intel
Corporation, and Delta Electronics under grants MOST 105-2633-E-002-001, MOST 105-2221-E-002-139, and NTU-ICRP-105R104045.
REFERENCES
[1] R. Gao, M. Zhao, T. Ye, F. Ye, Y. Wang, K. Bian, T. Wang, and X. Li,
“Jigsaw: Indoor Floor Plan Reconstruction via Mobile Crowdsensing,”
in ACM MobiCom, 2014.
[2] M. Kotaru, K. Joshi, D. Bharadia, and S. Katti, “SpotFi: Decimeter
Level Localization Using WiFi,” in ACM SIGCOMM, 2015.
[3] A. T. Mariakakis, S. Sen, J. Lee, and K.-H. Kim, “SAIL: Single Access
Point-based Indoor Localization,” in ACM MobiSys, 2014.
[4] N. Roy, H. Wang, and R. Roy Choudhury, “I Am a Smartphone and I
Can Tell My User’s Walking Direction,” in ACM MobiSys, 2014.
[5] P. Zhou, M. Li, and G. Shen, “Use It Free: Instantly Knowing Your
Phone Attitude,” in ACM MobiCom, 2014.
[6] A. M. Sabatini, “Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing,” IEEE Transactions
on Biomedical Engineering, vol. 53, no. 7, pp. 1346–1356, July 2006.
[7] Y.-S. Kuo, P. Pannuto, K.-J. Hsiao, and P. Dutta, “Luxapose: Indoor
Positioning with Mobile Phones and Visible Light,” in ACM MobiCom,
2014.
[8] Z. Yang, Z. Wang, J. Zhang, C. Huang, and Q. Zhang, “Wearables Can
Afford: Light-weight Indoor Positioning with Visible Light,” in ACM
MobiSys, 2015.
[9] J. Wang, F. Adib, R. Knepper, D. Katabi, and D. Rus, “RF-compass:
Robot Object Manipulation Using RFIDs,” in ACM MobiCom, 2013.
[10] J. Xiong and K. Jamieson, “ArrayTrack: A Fine-grained Indoor Location System,” in USENIX NSDI, 2013.
[11] H.-Y. Lee, H.-M. Lin, Y.-L. Wei, H.-I. Wu, H.-M. Tsai, and K. C.-J.
Lin, “RollingLight: Enabling Line-of-Sight Light-to-Camera Communications,” in ACM MobiSys, 2015.
[12] P. Luo, M. Zhang, Z. Ghassemlooy, H. Le Minh, H.-M. Tsai, X. Tang,
L. C. Png, and D. Han, “Experimental Demonstration of RGB LED-Based Optical Camera Communications,” IEEE Photonics Journal,
vol. 7, no. 5, pp. 1–12, 2015.
[13] K. Panta and J. Armstrong, “Indoor Localisation Using White LEDs,”
Electronics Letters, vol. 48, no. 4, pp. 228–230, February 2012.
[14] S.-Y. Jung, S. Hann, and C.-S. Park, “TDOA-Based Optical Wireless
Indoor Localization Using LED Ceiling Lamps,” IEEE Transactions
on Consumer Electronics, vol. 57, pp. 1592–1597, 2011.
[15] L. Li, P. Hu, C. Peng, G. Shen, and F. Zhao, “Epsilon: A Visible Light
Based Positioning System,” in USENIX NSDI, 2014.
[16] V. Bettadapura, I. Essa, and C. Pantofaru, “Egocentric Field-of-View
Localization Using First-Person Point-of-View Devices,” in IEEE
Winter Conference on Applications of Computer Vision (WACV), 2015.
[17] N. Ravi, P. Shankar, A. Frankel, A. Elgammal, and L. Iftode, “Indoor
Localization Using Camera Phones,” in IEEE Workshop on Mobile
Computing Systems and Applications, 2006.
[18] K. Chintalapudi, A. Padmanabha Iyer, and V. N. Padmanabhan, “Indoor
Localization Without the Pain,” in ACM MobiCom, 2010.
[19] S. Sen, B. Radunovic, R. R. Choudhury, and T. Minka, “You Are Facing the Mona Lisa: Spot Localization Using PHY Layer Information,”
in ACM MobiSys, 2012.
[20] R. Peng and M. Sichitiu, “Angle of Arrival Localization for Wireless
Sensor Networks,” in IEEE SECON, 2006.
[21] D. Niculescu and B. Nath, “Ad Hoc Positioning System (APS) Using
AOA,” in IEEE INFOCOM, 2003.
[22] J. Xiong, K. Sundaresan, and K. Jamieson, “ToneTrack: Leveraging
Frequency-Agile Radios for Time-Based Indoor Wireless Localization,” in ACM MobiCom, 2015.
[23] S. Kumar, S. Gil, D. Katabi, and D. Rus, “Accurate Indoor Localization
with Zero Start-up Cost,” in ACM MobiCom, 2014.
[24] J. Wang and D. Katabi, “Dude, Where’s My Card?: RFID Positioning
That Works with Multipath and Non-line of Sight,” in ACM SIGCOMM, 2013.
[25] F. Adib, Z. Kabelac, and D. Katabi, “Multi-Person Localization via
RF Body Reflections,” in USENIX NSDI, 2015.
[26] K. Joshi, D. Bharadia, M. Kotaru, and S. Katti, “WiDeo: Fine-grained
Device-free Motion Tracing using RF Backscatter,” in USENIX NSDI,
2015.
[27] E. Collett, Field Guide to Polarization. SPIE Press, 2005.
[28] “Buy Polarizer 100x100mm 3D lens,” http://www.3dlens.com/shop/polarizer-100x100mm.php.
[29] H.-I. Wu, Y.-L. Wei, H.-C. Wang, H.-M. Tsai, K. C.-J. Lin,
R. Boubezari, H. L. Minh, and Z. Ghassemlooy, “LiCompass: Extracting Orientation from Polarized Light,” Technical Report, 2016,
https://www.dropbox.com/s/21a4o73i1nvn0y8/LiCompass.pdf?dl=0.
[30] Point Grey Research, Inc., “Flea3 Product Data Sheet,”
https://www.ptgrey.com/flea3-usb3-vision-cameras.
[31] Denso Wave, “Information Technology. Automatic Identification and Data
Capture Techniques. QR Code Bar Code Symbology Specification,”
no. ISO/IEC 18004:2015, 2005.