JP2010026551

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2010026551
[PROBLEMS] To provide a display system capable of effective information transmission.
[MEANS FOR SOLVING] A display system includes: a display device 2 that simultaneously displays a plurality of images on a liquid crystal display panel 10 so that each image can be viewed only from a different specific direction; a superdirective speaker 4 that outputs audio in a specific direction; a camera device 5 that shoots the area in front of the liquid crystal display panel 10 of the display device 2 to generate a shot image; a movement direction detection unit 31b that, based on the shot image generated by the camera device 5, detects a person appearing in the shot image together with that person's movement direction and position; and a control unit 31 that, based on the movement direction and position detected by the movement direction detection unit 31b, directs the audio corresponding to the image visible to the person toward that person and outputs it from the superdirective speaker 4. [Selected figure] Figure 7
Display system and control method of display system
[0001]
The present invention relates to a display system including a display device that simultaneously
displays a plurality of images on a display screen so as to be visible only from different specific
directions.
[0002]
Conventionally, there has been known a display device provided with display means for simultaneously displaying an image that can be viewed from one viewing-angle direction and an image that can be viewed from a viewing-angle direction different from that direction (see, for example, Patent Document 1).
03-05-2019
1
Since this type of display device can simultaneously display different images according to the viewing angle, it is used, for example, as the display of a car navigation device, simultaneously showing different images to the driver sitting in the driver's seat and the passenger sitting in the passenger seat.
Patent Document 1: JP 2005-284592 A
[0003]
It is conceivable to install the display device described above at a place where pedestrians come and go, such as a hallway in a building, to display different images according to the pedestrians' viewing angles, and thereby to transmit the information indicated by the images to the pedestrians. In such a case, there is a need to communicate information to pedestrians effectively by exploiting the display device's characteristic of displaying different images simultaneously according to viewing angle. The present invention has been made in view of the above circumstances, and an object thereof is to provide a display system capable of effective information transmission.
[0004]
In order to achieve the above object, a display system according to the present invention includes: a display device that simultaneously displays a plurality of images on a display screen so that each image can be viewed only from a different specific direction; a superdirective speaker that outputs audio in a specific direction; photographing means for photographing the area in front of the display screen of the display device to generate a photographed image; a movement direction detection unit that, based on the photographed image generated by the photographing means, detects a person appearing in the photographed image together with that person's movement direction and position; and audio output control means for directing the audio corresponding to the image visible to the person toward that person, based on the movement direction and position detected by the movement direction detection unit, and outputting it from the superdirective speaker. With this configuration, when audio is output from the superdirective speaker toward a person, the audio corresponding to the image visible to that person is selected based on the person's movement direction and position. It is therefore possible to output audio carrying information suited to the person, corresponding to the image the person can see and reflecting the person's situation as inferred from the traveling direction, so that effective communication of information can be realized.
[0005]
Here, in the display system of the above invention, the direction of the audio output of the superdirective speaker may be movable, and the system may further include a speaker control unit that detects a person appearing in the photographed image based on the photographed image generated by the photographing means, detects the movement of the person, and moves the direction of the audio output of the superdirective speaker in accordance with the detected movement. With this configuration, by exploiting the superdirective speaker's characteristic of outputting sound in a specific direction, the state of outputting a specific sound to one person is maintained even while that person moves, so effective communication of information by voice can be realized. In particular, since the specific sound output to the person carries information suited to that person, even more effective communication can be achieved.
[0006]
Further, the display system according to the above invention may further include attribute discrimination means for detecting a person appearing in the photographed image based on the photographed image generated by the photographing means and judging an attribute of the person. The audio output control means may then direct the audio corresponding to the image visible to the person toward that person and output it from the superdirective speaker, based on both the movement direction and position detected by the movement direction detection unit and the attribute judged by the attribute discrimination means. With this configuration, when audio is output from the superdirective speaker toward the person, the audio carries information according not only to the person's position and traveling direction but also to the person's attribute. The attribute of the person is thus reflected, audio better suited to the person can be output, and more effective communication can be realized.
[0007]
In the display system of the above invention, a plurality of the superdirective speakers may be provided. With this configuration, voices suited to each of a plurality of persons can be output simultaneously, so effective information transmission can be realized.
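As an illustrative sketch only (not part of the patent disclosure): with several superdirective speakers, each speaker could be paired with a different detected person, for example by a greedy nearest-person rule. The function names and the one-dimensional positions below are assumptions for illustration.

```python
def assign_speakers(speakers, persons):
    """Greedily assign each superdirective speaker to a distinct person
    so that several persons can each receive their own audio at once.
    'speakers' and 'persons' map names to illustrative 1-D positions."""
    remaining = list(persons.items())
    assignment = {}
    for spk_name, spk_pos in speakers.items():
        if not remaining:
            break  # more speakers than detected persons
        # pick the nearest person not yet assigned to a speaker
        pid, ppos = min(remaining, key=lambda item: abs(item[1] - spk_pos))
        assignment[spk_name] = pid
        remaining.remove((pid, ppos))
    return assignment
```

For instance, with speakers 4L and 4R near opposite ends of the walkway and persons J1 and J2 near each end, each speaker is paired with the closer person.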
[0008]
Further, in order to achieve the above object, the present invention provides a control method for controlling a display system that includes a display device that simultaneously displays a plurality of images on a display screen so that each image can be viewed only from a different specific direction, and a superdirective speaker that outputs audio in a specific direction. The method comprises: photographing the area in front of the display screen of the display device to generate a photographed image; detecting, based on the photographed image, a person appearing in the photographed image together with the person's moving direction and position; and, based on the detected moving direction and position, directing the audio corresponding to the image visible to the person toward that person and outputting it from the superdirective speaker. With this control method, when audio is output from the superdirective speaker toward a person, the audio corresponding to the image visible to that person is selected based on the person's movement direction and position. It is therefore possible to output audio carrying information suited to the person, corresponding to the image the person can see and reflecting the person's situation as inferred from the traveling direction, so that effective communication of information can be realized.
[0009]
According to the present invention, audio carrying information suited to a person can be output according to the person's movement direction and position using a display system that displays images, so effective information transmission can be realized.
[0010]
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a diagram showing the configuration of a display system 1 according to the present embodiment. The display system 1 is configured by connecting a display device 2 capable of simultaneously displaying a plurality of images, a display control device 3, a plurality of (here, two) superdirective speakers 4L and 4R (hereinafter referred to as "4" when no distinction is made), and a camera device 5 (photographing means). The images displayed by the display device 2 may be either still images or moving images.
[0011]
As shown in plan view in FIG. 1, the display device 2 displays a plurality of images so that different images are seen when it is viewed from the right side (viewing direction ER) and when it is viewed from the left side (viewing direction EL). Here, a viewing direction refers to the direction of the line of sight (viewing angle) along which the display device 2 is viewed from outside the display device 2. In the present embodiment, the display device 2 is configured as a liquid crystal display device comprising a liquid crystal display panel 10 (display screen) and a backlight (not shown) that illuminates the liquid crystal display panel 10 from its back side. Further, in the liquid crystal display panel 10, right display pixels 10A and left display pixels 10B are arranged side by side in the horizontal direction, and the panel is provided with a polarization member that emits light passing through the right display pixels 10A only toward the viewing direction ER side and emits light passing through the left display pixels 10B only toward the viewing direction EL side. Therefore, the display device 2 displays an image for the viewing direction ER using only the right display pixels 10A and an image for the viewing direction EL using only the left display pixels 10B, so that different images are seen when the display device 2 is viewed from the viewing direction ER and when it is viewed from the viewing direction EL.
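The column-interleaved layout described above can be illustrated with a toy frame builder (a sketch under the assumption that the two images are equal-size row-major lists; the real panel interleaves per sub-pixel and routes the result through the drive circuit):

```python
def interleave(img_right, img_left):
    """Build a dual-view frame by alternating columns: even output
    columns carry the image for viewing direction ER (right display
    pixels 10A), odd columns the image for EL (left display pixels 10B)."""
    assert len(img_right) == len(img_left)
    out = []
    for row_r, row_l in zip(img_right, img_left):
        merged = []
        for pr, pl in zip(row_r, row_l):
            merged.append(pr)  # right-view column
            merged.append(pl)  # left-view column
        out.append(merged)
    return out
```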
[0012]
FIG. 2 is a top view of the display device 2 according to the present embodiment provided on the wall 7 of the walkway 6. FIG. 3A shows an example of the image GL, the image for the viewing direction EL, and FIG. 3B shows an example of the image GR, the image for the viewing direction ER. The walkway 6 in FIG. 2 is provided at the entrance of a store: a shopper entering the store walks along the walkway 6 in the traveling direction Y1, while a customer leaving the store walks along the walkway 6 in the traveling direction Y2. In the present embodiment, the display device 2 is provided on the wall 7 of the walkway 6 for the purpose of providing information useful to each of the entering and leaving customers who view the images displayed on the liquid crystal display panel 10, and further of outputting audio toward those customers through the superdirective speaker 4 to provide the information indicated by the audio.
[0013]
In FIG. 2, an area AL is the area in which the image GL (FIG. 3A), the image for the viewing direction EL, can be viewed. Specifically, when a shopper J1 who is in the area AL and moving in the traveling direction Y1 to enter the store looks at the display device 2, that person's viewing direction is the viewing direction EL, so the person sees the image GL (FIG. 3A) displayed on the display device 2 as the image for the viewing direction EL. As shown in FIG. 3A, the image GL displays a greeting for entering customers (the indication "Welcome") and information useful to entering customers (the indication "Today's special product"), so the shopper J1 can obtain useful information by viewing the display device 2. Further, in FIG. 2, an area AR is the area in which the image GR (FIG. 3B), the image for the viewing direction ER, can be viewed. Specifically, when a leaving customer J2 who is in the area AR and moving in the traveling direction Y2 to leave the store looks at the display device 2, that person's viewing direction is the viewing direction ER, so the person sees the image GR (FIG. 3B) displayed on the display device 2 as the image for the viewing direction ER. As shown in FIG. 3B, the image GR displays a message for leaving customers (the indication "Thank you") and information useful to them (the indication "Tomorrow's highlight item"), so the leaving customer J2 can obtain useful information by viewing the display device 2.
[0014]
FIG. 4 is a cross-sectional view of an essential part showing the schematic structure of the liquid crystal display panel 10, FIG. 5 is an enlarged cross-sectional view of the portion indicated by symbol A in FIG. 4, and FIG. 6 is a top view for explaining the matrix wiring and the stripe-shaped light shielding film. The liquid crystal display panel 10 includes an active matrix substrate 18, in which TFTs (not shown), which are semiconductor switching elements, pixel electrodes 12, scanning lines 13 extending in the horizontal direction (see FIG. 6), and signal lines 14 extending in the vertical direction are arranged in a matrix on the surface of a glass substrate 11, an organic insulating film 15 is provided so that the surface becomes flat, and a light shielding film 16 made of a material containing a metal such as Cr (chromium) is provided on the insulating film 15. Facing the active matrix substrate 18 is a color filter substrate 23, in which a color filter layer 20 is provided on the surface of another glass substrate 19 and a transparent counter electrode 21 made of ITO (Indium Tin Oxide) or the like is provided on the color filter layer 20. Liquid crystal 24 fills the space in which the active matrix substrate 18 and the color filter substrate 23 face each other. The active matrix substrate 18 and the color filter substrate 23 together constitute the liquid crystal display panel 10; on the outer surfaces of the liquid crystal display panel 10, that is, on the back side of the active matrix substrate 18 and the front side of the color filter substrate 23, a polarizing plate 17 and a polarizing plate 22 are respectively provided. Light distribution films 25 and 26 are formed on the inner surfaces of the liquid crystal display panel 10, that is, on the front surface of the active matrix substrate 18 and the back surface of the color filter substrate 23, and face each other. The light distribution film 25 is formed on the light shielding film 16 of the active matrix substrate 18, and the light distribution film 26 is formed on the counter electrode 21 of the color filter substrate 23.
[0015]
In the liquid crystal display panel 10, wide signal lines 14 and narrow signal lines 14' are arranged alternately in the left-right direction, and between the signal lines 14 and 14', pixel electrodes 12R1, 12G1, and 12B1 for the right viewpoint and pixel electrodes 12R2, 12G2, and 12B2 for the left viewpoint are disposed. As shown in FIGS. 4 and 6, the arrangement order is: the right-view pixel electrode 12R1, the wide signal line 14, the left-view pixel electrode 12R2, the narrow signal line 14', the right-view pixel electrode 12G1, the wide signal line 14, the left-view pixel electrode 12G2, and so on. Since the signal lines 14 and 14' and the scanning lines 13 are formed of a light-shielding conductive material containing a metal such as Al (aluminum) or Mo (molybdenum), at least the signal lines 14 and 14' in the liquid crystal display panel 10 function as a lower light shielding film, and the gaps between the signal lines 14 and 14' serve as paths for transmitting light. Further, the stripe-shaped light shielding film 16 is formed in the insulating film 15 covering the pixel electrodes 12. Slit-like openings 27 extending in the longitudinal direction are formed in the light shielding film 16, and light can be transmitted only through the openings 27. Therefore, of the light emitted from the backlight (not shown) provided on the back side of the liquid crystal display panel 10, the light striking the signal lines 14 and 14' and the light shielding film 16 is blocked, and only the light passing between the signal lines 14 and 14' and through the openings 27 reaches the eyes of a person in front of the liquid crystal display panel 10.
[0016]
Therefore, in the liquid crystal display panel 10, the light transmission direction, that is, the directivity, can be controlled by the mutual positional relationship between the signal lines 14 and 14' and the openings 27 of the stripe-shaped light shielding film, so that different images can be observed simultaneously from different directions in front of the liquid crystal display panel 10. The liquid crystal display panel 10 of the first embodiment simultaneously displays two types of images in the directions corresponding to the viewing direction ER (FIG. 1) and the viewing direction EL (FIG. 1). For this reason, by blocking the light traveling in the front direction of the liquid crystal display panel 10 with the black matrix 28, light leakage in the front direction is prevented, and different images can be clearly viewed from the viewing directions ER and EL. The openings 27 may or may not be provided in the light shielding film 16 at positions overlapping the narrow signal lines 14'. When openings are provided at these positions, it is preferable to separately provide a black matrix at the corresponding positions of the color filter layer 20 in order to block light emerging perpendicularly from the front of the liquid crystal display panel 10.
[0017]
In the liquid crystal display panel 10, since the wide signal lines 14 and the narrow signal lines 14' are provided alternately with the pixel electrodes 12 for the left or right viewpoint between them, the viewing angle over which each of the pixel electrodes 12 located to the left and right of a wide signal line 14 can be seen becomes large, making it easy for persons located in different directions to the left and right to clearly recognize their respective information. Further, in the liquid crystal display panel 10, on the portion of the surface of the insulating film 15 that lies over a wide signal line 14, the stripe-shaped light shielding film 16 covers the left side of the signal line in the width direction by a width w1 and the right side by a width w2, and the remaining part is an opening 27. From the viewing directions ER and EL, the image displayed by each pixel electrode is viewed through the opening 27 via the right-view pixel electrodes 12R1, 12G1, 12B1 or the left-view pixel electrodes 12R2, 12G2, 12B2. In this case, symmetrical directivity can be realized if w1 = w2, and asymmetric directivity if w1 ≠ w2; the position and size of the opening 27 are determined by defining w1 and w2 in consideration of the balance of the viewing directions ER and EL and the thickness of the insulating film 15. Note that at least one of w1 and w2 may be 0.
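As a rough illustration of how w1, w2, and the film thickness shape the directivity (a simplified pinhole-style geometry assumed purely for this sketch; it is not the patent's design procedure):

```python
import math

def cutoff_angles(w1, w2, t):
    """Approximate left/right viewing-angle cutoffs, in degrees, for
    shield overhang widths w1 and w2 separated from the light path by
    an insulating-film thickness t. Equal widths give symmetrical
    directivity; unequal widths give asymmetric directivity."""
    return (math.degrees(math.atan2(w1, t)),
            math.degrees(math.atan2(w2, t)))
```

With w1 = w2 the two cutoffs coincide (symmetrical directivity); setting one width to 0 gives a zero cutoff on that side, consistent with the note that at least one of w1 and w2 may be 0.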
[0018]
FIG. 7 is a block diagram showing the functional configuration of the display system 1. The display control device 3 constituting the display system 1 is realized, for example, as a personal computer, and as shown in FIG. 7 includes a control unit 31, an input unit 32, a display unit 33, a recording medium reading device 34, an interface unit 35, an audio output unit 36, a pedestal drive unit 37, and a storage unit 38. The control unit 31 centrally controls each unit of the display control device 3 and includes a CPU (central processing unit) as an operation execution unit, a ROM (Read Only Memory) that stores, in a nonvolatile manner, a basic control program executed by the CPU and data related to that program, a RAM (Random Access Memory) that temporarily stores the program executed by the CPU and data related to it, and other peripheral circuits. Further, as shown in FIG. 7, the control unit 31 includes an area entry/exit detection unit 31a, a movement direction detection unit 31b (movement direction detection means), a speaker pedestal control unit 31c (speaker control unit), and an attribute determination unit 31d (attribute discrimination means); their functions and operations will be described later.
[0019]
The input unit 32 is connected to an input device such as a mouse or a keyboard, detects operations of the input device by the operator, and outputs operation signals corresponding to those operations to the control unit 31. The display unit 33 displays various information under the control of the control unit 31 and is configured using, for example, a liquid crystal display panel. The recording medium reading device 34 is a device that reads programs and data from recording media such as optical disc media (a CD, a DVD, or a next-generation DVD), magneto-optical media such as an MO, magnetic recording media, and storage devices using semiconductor storage elements. Under the control of the control unit 31, the recording medium reading device 34 reads data on images to be displayed on the liquid crystal display panel 10 and data on audio to be output from the superdirective speaker 4, and outputs the data to the control unit 31. The control unit 31 causes the storage unit 38 to store the image data and audio data read by the recording medium reading device 34. The interface unit 35 is connected to the display device 2 via a signal transmission cable or the like and transmits and receives various signals to and from the display device 2 under the control of the control unit 31. The interface unit 35 is likewise connected to the camera device 5 via a signal transmission cable or the like and transmits and receives various signals to and from the camera device 5 under the control of the control unit 31.
[0020]
The audio output unit 36 is connected to the superdirective speaker 4 and, under the control of the control unit 31, outputs from the superdirective speaker 4 the audio corresponding to the audio data stored in the storage unit 38. The superdirective speaker 4 is a speaker capable of outputting sound in a specific direction; for example, an ultrasonic speaker is used that includes an ultrasonic transducer and can output highly directional audio by emitting a modulated wave obtained by modulating a carrier wave in the ultrasonic band with an audio signal in the audible band. As shown in FIGS. 2 and 3, the display system 1 according to the present embodiment includes a plurality of (two) superdirective speakers 4, and each superdirective speaker 4 is mounted on a speaker pedestal 41L or 41R (hereinafter referred to as "41" when no distinction is made). The pedestal drive unit 37 rotates the superdirective speaker 4 by driving a motor (not shown) built into the speaker pedestal 41 under the control of the control unit 31. Specifically, the superdirective speaker 4 is connected to a rotary shaft that transmits the power of the motor contained in the speaker pedestal 41 and, as shown in the figure, is configured to rotate horizontally in the rotational directions Y3 and Y4 about this shaft. To set the direction of the sound output from the superdirective speaker 4 to a desired direction, the pedestal drive unit 37, under the control of the control unit 31, transmits a motor drive control signal to the motor and turns the superdirective speaker 4. The storage unit 38 includes a storage device using a magnetic or optical recording medium or a semiconductor storage element, and stores various programs and data in a nonvolatile manner. Further, as shown in FIG. 7, the storage unit 38 stores an image data table 38a and an audio data table 38b, the contents of which will be described later.
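The modulation principle of the superdirective (parametric ultrasonic) speaker mentioned above can be sketched as simple amplitude modulation (the 40 kHz carrier, sample rate, and modulation depth are illustrative assumptions):

```python
import math

def modulated_wave(audio, carrier_hz=40000.0, sample_rate=192000.0, depth=0.8):
    """Amplitude-modulate an ultrasonic carrier with an audible-band
    signal: each output sample is (1 + depth * audio_sample) * carrier.
    'audio' is a sequence of samples in [-1, 1]."""
    out = []
    for n, a in enumerate(audio):
        carrier = math.sin(2.0 * math.pi * carrier_hz * n / sample_rate)
        out.append((1.0 + depth * a) * carrier)
    return out
```

The nonlinearity of air demodulates such a wave back into audible sound along the narrow ultrasonic beam, which is what gives the speaker its high directivity.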
[0021]
The display device 2 includes an interface unit 201 connected to the display control device 3, a drawing control unit 202 that acquires display signals input through the interface unit 201, a drawing memory 203 connected to the drawing control unit 202, the liquid crystal display panel 10, and a liquid crystal drive circuit 204 that drives the liquid crystal display panel 10 under the control of the drawing control unit 202. The drawing control unit 202 acquires a display signal input from the display control device 3 via the interface unit 201 and temporarily stores it in the drawing memory 203. The drawing control unit 202 then reads the display signal from the drawing memory 203 at the drawing timing of the liquid crystal display panel 10 and outputs it to the liquid crystal drive circuit 204 in sequence, in accordance with the arrangement of the right display pixels 10A and the left display pixels 10B of the liquid crystal display panel 10. The drawing memory 203 temporarily stores the display signal under the control of the drawing control unit 202. The liquid crystal drive circuit 204 causes the liquid crystal display panel 10 to display images by driving the panel according to the display signal input from the drawing control unit 202. The liquid crystal drive circuit 204 also performs lighting control and the like of the backlight (not shown) in accordance with the input state of the display signal from the drawing control unit 202.
[0022]
The camera device 5 includes an interface unit 501 connected to the display control device 3, a photographing control unit 502, and a photographing unit 503. The photographing control unit 502 controls the photographing unit 503 under the control of the control unit 31 of the display control device 3. The photographing unit 503 includes an imaging element (not shown) such as a CCD image sensor or a CMOS image sensor, a photographing lens group (not shown), and a lens drive unit (not shown) that drives the lens group to adjust zoom and focus, and performs imaging under the control of the photographing control unit 502. The photographing control unit 502 converts data output from the imaging element of the photographing unit 503 into data of a predetermined format and outputs it as photographed image data to the control unit 31 of the display control device 3. As shown in FIGS. 2 and 3, the camera device 5 is provided at the top of the display device 2 and shoots the area in front of the liquid crystal display panel 10. For example, a wide-angle camera is used as the camera device 5 and is configured to capture a space covering at least the area AL and the area AR. Therefore, when a person is present in the area AL or the area AR, the camera device 5 captures an image of that person.
[0023]
Next, the area entry/exit detection unit 31a, the movement direction detection unit 31b, the speaker pedestal control unit 31c, and the attribute determination unit 31d included in the control unit 31 of the display control device 3 will be described. Based on the photographed image data input from the camera device 5, the area entry/exit detection unit 31a detects that a person has entered either the area AL or the area AR, and that a person has left either the area AL or the area AR. Here, a specific operation of the area entry/exit detection unit 31a will be described by way of example. The area entry/exit detection unit 31a samples the photographed image data, analyzes the sampled data, detects the face of a person appearing in it, and identifies the detected face area. For face detection and face-area identification, existing techniques such as pattern matching using a template representing a face can be used. Furthermore, the area entry/exit detection unit 31a estimates the position of the person from the position of the face area in the photographed image data and detects whether the person is in the area AL, in the area AR, or outside both areas. By tracking the face area identified in each sample of the photographed image data and performing the above operation for each sample, the area entry/exit detection unit 31a detects that a person has entered either the area AL or the area AR and that a person has left either of these areas.
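The entry/exit logic described above can be sketched as follows (the one-third split of the frame into AL / neither / AR is an assumption made only for this illustration; the patent leaves the position-estimation details unspecified):

```python
def area_of(face_x, frame_width):
    """Map a face's horizontal position in the shot image to area AL,
    area AR, or None (outside both). Thresholds are illustrative."""
    if face_x < frame_width / 3:
        return "AL"
    if face_x > 2 * frame_width / 3:
        return "AR"
    return None

def entry_exit_events(face_xs, frame_width):
    """Given sampled x-positions of one tracked face, emit
    ('enter', area) / ('exit', area) events at each boundary crossing."""
    events, prev = [], None
    for x in face_xs:
        cur = area_of(x, frame_width)
        if cur != prev:
            if prev is not None:
                events.append(("exit", prev))
            if cur is not None:
                events.append(("enter", cur))
            prev = cur
    return events
```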
[0024]
Based on the photographed image data input from the camera device 5, the movement direction detection unit 31b detects a person appearing in the photographed image data and detects the traveling direction and the position of that person. In the present embodiment, the position of a person is the area in which the person is present when the person is in the area AR or the area AL. Here, a specific operation of the movement direction detection unit 31b will be described by way of example. The movement direction detection unit 31b samples the photographed image data, analyzes the sampled data, detects the face of a person appearing in it, and identifies the detected face area. Next, the movement direction detection unit 31b estimates the position of the person from the position of the face area in the photographed image data and detects whether the person is present in the area AL or the area AR, and if so, in which one. Furthermore, when a face area is detected, the movement direction detection unit 31b tracks the face area across the sampled photographed image data, detects the movement vector of the face area, and determines from the detected movement vector whether the person is traveling in the traveling direction Y1 or the traveling direction Y2.
[0025]
FIG. 8 is a diagram for explaining the operation of the speaker pedestal control unit 31c.
The speaker pedestal control unit 31c detects a person appearing in the photographed image
data input from the camera device 5 and directs the superdirective speaker 4 so that its
voice is output toward that person. It then detects the movement of the person and rotates
the superdirective speaker 4 in accordance with the detected movement, so that the
direction of the sound output from the superdirective speaker 4 remains directed at the
person. For example, as shown in FIG. 8, when the person J5 moves in the direction of the
arrow Y6, the superdirective speaker 4 is rotated in the rotational direction Y3 in
accordance with this movement, and the direction of the sound output from the
superdirective speaker 4 is kept directed at the person J5. Here, a specific operation of the
speaker pedestal control unit 31c will be described by way of example. The speaker pedestal
control unit 31c samples the photographed image data input from the camera device 5,
analyzes the sampled data, detects the face of a person appearing in it, and identifies the
detected face area. The speaker pedestal control unit 31c then analyzes the position of the
face area in the photographed image data and, based on the analysis result, calculates the
direction in which the superdirective speaker 4 should be rotated in order to direct its
sound at the face of this person. For this purpose, the storage unit 38 may store a table
that associates the position of a face area in the photographed image data with the
direction of the superdirective speaker 4 required to output the voice toward the face of
the person related to that face area; the rotation direction and rotation amount of the
superdirective speaker 4 may then be calculated from the contents of this table. Next, the
speaker pedestal control unit 31c controls the pedestal drive unit 37 to output a motor
drive control signal, corresponding to the calculated rotation direction and rotation amount,
to the motor incorporated in the speaker pedestal 41, thereby rotating the superdirective
speaker 4. By continuing this operation while the face area is detected in the photographed
image data, the speaker pedestal control unit 31c maintains the state in which the direction
of the sound output from the superdirective speaker 4 is directed at the face of the person.
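A minimal sketch of the rotation calculation described above, assuming a simple linear mapping from the horizontal face position to a pan angle. The mapping stands in for the position-to-direction table that the storage unit 38 may hold; the field-of-view value and all function names are illustrative assumptions, not from the patent.

```python
# Sketch of the speaker-pedestal rotation command of paragraph [0025].

def target_angle(face_x: float, frame_width: float,
                 fov_degrees: float = 60.0) -> float:
    """Map a face x-position in the frame to a pan angle in degrees,
    with 0 meaning straight ahead of the speaker pedestal."""
    # normalised offset of the face from the frame centre, in [-0.5, 0.5]
    offset = face_x / frame_width - 0.5
    return offset * fov_degrees

def rotation_command(current_angle: float, face_x: float,
                     frame_width: float) -> tuple:
    """Return (direction, amount): direction is 'CW' or 'CCW' and amount is
    the absolute number of degrees to rotate the superdirective speaker."""
    delta = target_angle(face_x, frame_width) - current_angle
    direction = "CW" if delta > 0 else "CCW"
    return direction, abs(delta)
```

Repeating `rotation_command` on every sampled frame, as the text describes, keeps the speaker tracking the face as it moves.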
[0026]
The attribute determination unit 31d detects a person appearing in the captured image data
input from the camera device 5 and determines an attribute of the person. In the present
embodiment, the attribute of a person is the person's gender. Here, a specific operation of
the attribute determination unit 31d will be described by way of example. The attribute
determination unit 31d samples the photographed image data input from the camera device
5, analyzes the sampled data, detects the face of a person appearing in it, and identifies
the detected face area. Next, the attribute determination unit 31d extracts features from
the identified face area and determines the gender based on the extracted features.
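The extract-features-then-discriminate flow can be illustrated with a placeholder linear classifier. Real gender estimation requires a trained model on face features; the feature vector, weights and threshold below are invented purely to make the control flow concrete and are not part of the patent.

```python
# Illustrative stand-in for the attribute determination of paragraph [0026].
# A hypothetical face-area feature vector is scored against an invented
# linear rule so that the extract -> discriminate flow is visible.

from typing import Sequence

def discriminate_gender(features: Sequence[float],
                        weights: Sequence[float] = (0.6, -0.4, 0.2),
                        threshold: float = 0.5) -> str:
    """Return 'male' or 'female' from a face-area feature vector
    (placeholder rule; a real system would use a trained classifier)."""
    score = sum(w * f for w, f in zip(weights, features))
    return "male" if score >= threshold else "female"
```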
[0027]
Next, the image data table 38a and the audio data table 38b stored in the storage unit 38
will be described with reference to FIG. 9. FIG. 9A is a view schematically showing the
configuration of the image data table 38a stored in the storage unit 38. As shown in FIG.
9A, the image data table 38a includes an image name field, a left-direction image data
field, and a right-direction image data field. Left-direction image data is stored in the
left-direction image data field. The left-direction image data is image data for the viewing
direction EL; for example, the image data LD1 relating to the image GL shown in FIG. 3A is
stored in the storage unit 38 as left-direction image data. Similarly, right-direction image
data is stored in the right-direction image data field. The right-direction image data is
image data for the viewing direction ER; for example, the image data RD1 relating to the
image GR shown in FIG. 3B is stored in the storage unit 38 as right-direction image data.
As shown in FIG. 9A, the image data table 38a stores one item of left-direction image data
and one item of right-direction image data in association with each other; when the image
is displayed on the liquid crystal display panel 10, the image relating to the left-direction
image data and the image relating to the right-direction image data stored in association
with each other are displayed simultaneously. An image name is stored in the image name
field. The image name is a generic name uniquely assigned to a pair of left-direction image
data and right-direction image data stored in association with each other; in the example
of FIG. 9A, the image name GD1 is assigned to the associated image data LD1 and image
data RD1.
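The pairing described above can be sketched as a small lookup structure. The record shown matches the single example row of FIG. 9A; the function name is an illustrative assumption.

```python
# Sketch of the image data table 38a of paragraph [0027]: one image name
# keys one left-direction / right-direction image data pair, and both
# images of a record are displayed simultaneously.

image_data_table = {
    # image name: (left-direction image data, right-direction image data)
    "GD1": ("LD1", "RD1"),
}

def images_to_display(image_name: str) -> tuple:
    """Return the (viewing-direction EL, viewing-direction ER) image pair
    that the liquid crystal display panel shows for this record."""
    return image_data_table[image_name]
```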
[0028]
FIG. 9B is a view schematically showing the configuration of the audio data table 38b
stored in the storage unit 38. As described above, the display system 1 outputs sound from
the superdirective speaker 4 to a person walking on the walking path 6 and thereby
provides the person with information. When outputting sound from the superdirective
speaker 4 to one person walking on the walking path 6, the display system 1 according to
the present embodiment outputs a specific voice from a specific superdirective speaker 4
toward this person in accordance with three conditions: the position of the person, the
traveling direction of the person, and the attribute of the person. The voice data table 38b
therefore stores, in association with each other, these three conditions, the voice data
relating to the voice to be output when the conditions are satisfied, and a value indicating
the particular superdirective speaker 4 that outputs this voice.
[0029]
As shown in FIG. 9B, the audio data table 38b includes a condition name field, a condition
field, an output target audio data field, and an output target superdirective speaker field.
A condition name uniquely assigned to each condition is stored in the condition name field.
The condition field is a field in which the conditions described above are stored, and
includes a person position field, a traveling direction field, and an attribute field. The
person position field stores the condition on the position of one person: a value indicating
the area AL or a value indicating the area AR of FIG. 2. As described above, the position of
the person is detected by the movement direction detection unit 31b. The traveling
direction field stores the condition on the traveling direction of one person: a value
indicating the traveling direction Y1 or a value indicating the traveling direction Y2 of
FIG. 2. As described above, the traveling direction of the person is detected by the
movement direction detection unit 31b. The attribute field stores the condition on the
attribute of one person: a value indicating gender, specifically a value indicating male or a
value indicating female. As described above, the attribute of the person is determined by
the attribute determination unit 31d. The output target audio data field stores, for each
condition, the voice data to be output when the condition stored in the condition field is
satisfied. The output target superdirective speaker field stores a value indicating the one
superdirective speaker 4 that outputs the sound when the condition stored in the condition
field is satisfied. For example, referring to FIG. 9B, when one person is in the area AL, the
traveling direction of this person is the traveling direction Y1, and this person is male, the
condition P1 is satisfied, so the voice relating to the audio data LM is output from the
superdirective speaker 4L.
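A sketch of the lookup that the voice data table 38b supports, reproducing the example rows P1 to P4 of FIG. 9B. The dictionary layout and function name are illustrative; only the row contents come from the figure as described in the text.

```python
# Sketch of the audio data table 38b of paragraph [0029]: each condition row
# maps (position, traveling direction, attribute) to the voice data to output
# and the superdirective speaker that outputs it.

audio_data_table = {
    # condition name: ((position, direction, attribute), (voice data, speaker))
    "P1": (("AL", "Y1", "male"),   ("LM", "4L")),
    "P2": (("AL", "Y1", "female"), ("LW", "4L")),
    "P3": (("AR", "Y2", "male"),   ("RM", "4R")),
    "P4": (("AR", "Y2", "female"), ("RW", "4R")),
}

def select_output(area: str, direction: str, gender: str):
    """Return (voice data, speaker) for the matching condition, or None when
    no condition holds (e.g. a person in area AL traveling in direction Y2)."""
    for condition, output in audio_data_table.values():
        if condition == (area, direction, gender):
            return output
    return None
```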
[0030]
Here, the relationship among the conditions, the output target audio data, and the output
target superdirective speaker in the present embodiment will be described in detail with
reference to FIGS. 2 and 9B. When the condition P1 shown in the voice data table 38b of
FIG. 9B is satisfied for one person, this person is, like the store-entering person J1 in
FIG. 2, a male store-entering person who is present in the area AL and is advancing in the
traveling direction Y1. In this case, if the output voice corresponds to the image visible to
a person in the area AL, that is, the image GL shown in FIG. 3A, the information can be
communicated effectively. Further, if the sound is output from the superdirective speaker
4L, which of the superdirective speakers 4L and 4R is the one located near the area AL,
that is, near the person, the sound can be output efficiently. In view of this, the voice
relating to the audio data LM output when the condition P1 is satisfied corresponds to the
image GL and is a voice addressed to store-entering persons and to men; for example, it is
a voice introducing male cosmetics as a featured item of the day. In addition, when the
condition P1 is satisfied, the voice relating to the audio data LM is output from the
superdirective speaker 4L located near the person. When the condition P2 is satisfied for
one person, this person is a female store-entering person who is present in the area AL and
is advancing in the traveling direction Y1. In view of this, the audio data LW output when
the condition P2 is satisfied corresponds to the image GL visible from the area AL and is a
voice whose content is addressed to store-entering persons and to women. When the
condition P2 is satisfied, the voice relating to the audio data LW is output from the
superdirective speaker 4L located near the person. Further, when the condition P3 is
satisfied for one person, this person is a male store-exiting person who is present in the
area AR and is advancing in the traveling direction Y2. In view of this, the audio data RM
output when the condition P3 is satisfied corresponds to the image GR visible from the
area AR and is a voice whose content is addressed to store-exiting persons and to men.
When the condition P3 is satisfied, the voice relating to the audio data RM is output from
the superdirective speaker 4R located near the person. Further, when the condition P4 is
satisfied for one person, this person is a female store-exiting person who is present in the
area AR and is advancing in the traveling direction Y2.
In view of this, the audio data RW output when the condition P4 is satisfied corresponds to
the image GR visible from the area AR and is a voice whose content is addressed to
store-exiting persons and to women. When the condition P4 is satisfied, the voice relating
to the audio data RW is output from the superdirective speaker 4R located near the person.
[0031]
As described above, in the present embodiment, when a voice is output from the
superdirective speaker 4 to one person walking on the walking path 6, the voice data to be
output is specified according to the position and the traveling direction of the person, and
this voice is output. Here, the image visible to this person is determined by the position of
the person, while whether this person is entering or leaving the store is determined by the
traveling direction of the person. Therefore, by defining the voice to be output according
to the position and the traveling direction of the person, a voice that corresponds to the
image visible to this person and to whether this person is entering or leaving the store,
and is thus optimal for this person, can be output, and the display system 1 can realize
effective information transmission. Further, in the present embodiment, when a voice is
output from the superdirective speaker 4 to one person walking on the walking path 6, the
voice data to be output is determined according to not only the position and the traveling
direction of the person but also the attribute of the person, and this voice is output.
Therefore, the attribute of the person is reflected, a voice better suited to the person can
be output, and still more effective information transmission can be realized. In the
example of the voice data table 38b shown in FIG. 9B, no voice is output to the
store-exiting person J3, who is present in the area AL in FIG. 2 and is leaving the store by
advancing in the traveling direction Y2. However, the store-exiting person J3 passes
through the area AR in the course of walking, and in the area AR obtains information
useful to a store-exiting person from the image and the voice, so the store-exiting person
J3 suffers no disadvantage. The same applies to the store-entering person J4, who is
present in the area AR and is entering the store by advancing in the traveling direction Y1.
[0032]
Next, the operation of the display system 1 will be described. FIG. 10 is a flowchart
showing the operation of the display system 1. As a premise of the operation of the
flowchart shown in FIG. 10, it is assumed that the images relating to the image name GD1
in the image data table 38a of FIG. 9A are displayed on the liquid crystal display panel 10;
that is, the image GL relating to the left-direction image data LD1 is displayed as the
image for the viewing direction EL, and the image GR relating to the right-direction image
data RD1 is displayed as the image for the viewing direction ER. It is further assumed that
the front of the liquid crystal display panel 10 is being photographed by the camera device
5. In the flowchart of FIG. 10, the control unit 31 of the display control device 3 functions
as an audio output control unit. The area entry / exit detection unit 31a of the control
unit 31 of the display control device 3 monitors whether a person has entered the area AL
or the area AR (step S11). When a person has entered the area AL or the area AR (step
S11: YES), the movement direction detection unit 31b of the control unit 31 detects the
position of the person, that is, the area in which the person is present, and the traveling
direction of the person (step S12). Next, the attribute determination unit 31d of the
control unit 31 determines the attribute of this person, that is, whether this person is
male or female (step S13).
[0033]
After detecting the position and the traveling direction of the person and determining the
attribute of the person, the control unit 31 refers to the voice data table 38b stored in the
storage unit 38 and determines whether any of the conditions stored in the condition field
is satisfied (step S14). If no condition is satisfied (step S14: NO), the control unit 31
returns the process to step S11. On the other hand, when a condition is satisfied (step S14:
YES), the control unit 31 refers to the output target audio data field to specify the voice
data corresponding to the satisfied condition, and refers to the output target superdirective
speaker field to specify the one superdirective speaker 4 that is to output the voice
relating to the specified voice data (step S15). As described above, the voice data specified
in step S15 is the voice data optimum for the person, based on the position, the traveling
direction, and the attribute of the person. Further, the specified superdirective speaker 4
is, of the two superdirective speakers 4, the one located near the person. Then, the speaker
pedestal control unit 31c of the control unit 31 controls the pedestal drive unit 37 to
rotate the superdirective speaker 4 specified in step S15 so that the direction of the sound
output from the superdirective speaker 4 is directed at the person, and rotates the
superdirective speaker 4 in response to the movement of the person (step S16).
Furthermore, the control unit 31 reads out the voice data specified in step S15 from the
storage unit 38, controls the audio output unit 36, and outputs the voice relating to the
read-out voice data from the superdirective speaker 4 (step S17).
[0034]
After the voice is output, the control unit 31 detects the current position and traveling
direction of the person with the movement direction detection unit 31b, refers to the
condition field of the voice data table 38b, and monitors whether a condition still holds for
this person (step S18). While a condition holds (step S18: YES), the speaker pedestal
control unit 31c of the control unit 31 keeps the direction of the sound output from the
superdirective speaker 4 directed at the person, and the voice output is continued (step
S19). As described above, in the present embodiment, while the condition holds, the
speaker pedestal control unit 31c changes the direction of the superdirective speaker 4 so
that the voice is output toward the one person, detects the movement of the person, rotates
the superdirective speaker 4 in accordance with the detected movement, and maintains the
state in which the voice is output toward the person. Therefore, effective information
transmission by voice can be realized. On the other hand, when no condition holds (step
S18: NO), the control unit 31 stops the output of the voice (step S20).
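The main path of the flowchart (steps S11 to S17, with the no-condition exit of step S14) can be sketched as one pass of a control loop, with the detectors stubbed out as plain callables. All names here are illustrative; in the system described, the corresponding units 31a, 31b and 31d operate on camera images.

```python
# Sketch of one pass of the voice output control of FIG. 10.

def run_once(detect_person, detect_area_direction, detect_gender,
             select_output, output_audio):
    """Run steps S11-S17 once: returns the (voice data, speaker) that was
    output, or None when no person entered or no condition held."""
    if not detect_person():                             # step S11
        return None
    area, direction = detect_area_direction()           # step S12
    gender = detect_gender()                            # step S13
    selection = select_output(area, direction, gender)  # steps S14-S15
    if selection is None:                               # step S14: NO
        return None
    voice, speaker = selection
    output_audio(voice, speaker)                        # steps S16-S17
    return selection
```

A usage example: with stubs reporting a male person in area AL traveling in direction Y1, the pass selects the voice data LM on speaker 4L, mirroring condition P1.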
[0035]
As described above, in the present embodiment, when a voice is output from the
superdirective speaker 4 to one person walking on the walking path 6, the movement
direction detection unit 31b detects the position and the traveling direction of the person,
the voice data to be output is determined according to the detected position and traveling
direction, and this voice is output. Here, the image visible to this person is determined by
the position of the person, while whether this person is entering or leaving the store is
determined by the traveling direction of the person. Therefore, by defining the voice to be
output according to the position and the traveling direction of the person, a voice that
corresponds to the image visible to the person and to whether the person is entering or
leaving the store, and is thus optimal for the person, can be output, and effective
information transmission can be realized in the display system 1.
[0036]
Further, in the present embodiment, when outputting sound to one person, the speaker
pedestal control unit 31c changes the direction of the superdirective speaker 4 so that the
sound is output toward that person, detects the movement of the person, and rotates the
superdirective speaker 4 in accordance with the detected movement to maintain the state
in which the voice is output toward the person. Therefore, by utilizing the characteristic of
the superdirective speaker 4 of outputting sound in a specific direction, the state of
outputting a specific sound to only one person can be maintained, and effective
information transmission can be realized.
[0037]
Further, in the present embodiment, when a voice is output from the superdirective
speaker 4 to one person walking on the walking path 6, the voice data to be output is
determined according to not only the position and the traveling direction of the person but
also the attribute of the person determined by the attribute determination unit 31d, and
this voice is output. Therefore, the attribute of the person is reflected, a voice better
suited to the person can be output, and still more effective information transmission can
be realized.
[0038]
Further, the display system 1 according to the present embodiment includes a plurality of
superdirective speakers 4. Therefore, voices suited to each of a plurality of persons walking
on the walking path 6 can be output simultaneously. In the present embodiment, two
superdirective speakers 4 are provided, but the number of superdirective speakers 4 is not
limited to two.
[0039]
The embodiment described above merely shows one aspect of the present invention, and
modifications and applications can be made freely within the scope of the present
invention. In the present embodiment, the attribute determination unit 31d determines the
gender of a person as the attribute, but the determined attribute is not limited to gender.
For example, the attribute determination unit 31d may be configured to determine whether
a person is Japanese or a foreigner, so that Japanese-language speech is output to a
Japanese person and foreign-language speech is output to a foreigner. Moreover, although
the present embodiment has been described using the example in which the display device
2 is provided on the wall 7 of the walkway 6, the place where the display device 2 is
provided is not limited to this, and it can be provided in any place.
[0040]
FIG. 1 is a diagram showing the schematic configuration of the display system. FIG. 2 is a
diagram showing the display device provided on the wall of the walkway. FIG. 3 is a
diagram showing examples of images displayed on the display device. FIG. 4 is a
cross-sectional view of a principal part showing the schematic structure of the liquid
crystal display panel. FIG. 5 is an enlarged cross-sectional view of the portion indicated by
the symbol A in FIG. 4. FIG. 6 is a plan view of a principal part showing the wiring
structure of the liquid crystal display panel. FIG. 7 is a block diagram showing the
functional configuration of the display system. FIG. 8 is a diagram for explaining the
operation of the speaker pedestal control unit. FIG. 9 is a diagram schematically showing
the structure of the tables stored in the storage unit. FIG. 10 is a flowchart showing the
operation of the display system.
Explanation of Reference Signs
[0041]
Reference Signs List: 1 display system; 2 display device; 4, 4L, 4R superdirective speaker;
5 camera device (photographing means); 10 liquid crystal display panel (display screen);
31 control unit (voice output control means); 31b movement direction detection unit
(movement direction detection means); 31c speaker pedestal control unit (speaker control
means); 31d attribute determination unit (attribute determination means).