JP2013058897

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2013058897
Abstract: When a plurality of users view the same content at the same time, different audio data linked to different display objects displayed in the content is reproduced for each user.
SOLUTION: The device includes a plurality of oscillation devices 12 that output modulated waves for a parametric speaker, a display unit 40 that displays image data including a plurality of display objects, a recognition unit 30 that recognizes the positions of a plurality of users, and a control unit 20 that controls the oscillation devices 12 to reproduce a plurality of audio data items linked to the respective display objects. The control unit 20 controls the oscillation devices 12 so that the audio data linked to the display object selected by each user is reproduced toward the position of that user as recognized by the recognition unit 30. [Selected figure] Figure 2
Electronic device
[0001]
The present invention relates to an electronic device provided with an oscillation device.
[0002]
Techniques relating to electronic devices provided with audio output means are described, for example, in Patent Documents 1 to 5. The technique described in Patent Document 1 measures the distance between a portable terminal and its user and controls the display brightness and the speaker volume based on this distance. The technique described in Patent Document 2 relates to a directional speaker system provided with a directional speaker array; specifically, control points for reproduction are set in the main lobe direction to suppress deterioration of the reproduced sound. The technique described in Patent Document 3 reproduces sound suitable for both hearing-impaired and normal-hearing listeners using a speaker control device provided with a highly directional speaker and a normal speaker.
[0003]
Patent Documents 4 and 5 describe techniques relating to parametric speakers. The technique
described in Patent Document 4 includes an ultrasonic generator that generates an ultrasonic
wave by expansion and contraction of a medium due to heat generation of a heating element. The
technology described in Patent Document 5 relates to a mobile terminal device having a plurality
of superdirective speakers such as parametric speakers.
[0004]
JP 2005-202208 A, JP 2008-252625 A, JP 2008-197381 A, JP 2004-147311 A, JP 2006-67386 A
[0005]
If, when a plurality of users view the same content at the same time, different audio data linked to different display objects displayed in the content can be reproduced for each user, a new way of enjoying the content can be provided.
[0006]
An object of the present invention is to reproduce, for each user, different audio data linked to different display objects displayed in the content when a plurality of users view the same content simultaneously.
[0007]
According to the present invention, there is provided an electronic device comprising: a plurality of oscillation devices that output modulated waves for a parametric speaker; a display unit that displays first image data including a plurality of display objects; a recognition unit that recognizes the positions of a plurality of users; and a control unit that controls the oscillation devices to reproduce a plurality of audio data items linked to the respective display objects, wherein the control unit controls the oscillation devices so that the audio data linked to the display object selected by each user is reproduced toward the position of that user as recognized by the recognition unit.
[0008]
According to the present invention, when a plurality of users view the same content at the same time, different audio data linked to different display objects displayed in the content can be reproduced for each user.
[0009]
FIG. 1 is a schematic diagram showing an operation method of the electronic device according to the first embodiment. FIG. 2 is a block diagram showing the electronic device shown in FIG. 1. FIG. 3 is a plan view showing the parametric speaker shown in FIG. 2. FIG. 4 is a cross-sectional view showing the oscillation device shown in FIG. 3. FIG. 5 is a cross-sectional view showing the piezoelectric vibrator shown in FIG. 4. FIG. 6 is a flowchart showing an operation method of the electronic device shown in FIG. 1. FIG. 7 is a block diagram showing the electronic device according to the second embodiment.
[0010]
Hereinafter, embodiments of the present invention will be described with reference to the
drawings. In all the drawings, the same components are denoted by the same reference numerals,
and the description thereof will be appropriately omitted.
[0011]
FIG. 1 is a schematic view showing an operation method of the electronic device 100 according
to the first embodiment. FIG. 2 is a block diagram showing the electronic device 100 shown in FIG. 1. The electronic device 100 according to the present embodiment includes a parametric speaker 10 having a plurality of oscillation devices 12, a display unit 40, a recognition unit 30, and a control unit 20. The electronic device 100 is, for example, a television, a display device for digital signage, or a portable terminal device. Examples of the portable terminal device include a mobile telephone.
[0012]
The oscillation device 12 outputs an ultrasonic wave 16. The ultrasonic wave 16 is a modulated wave for a parametric speaker. The display unit 40 displays image data including a plurality of display objects 80. The recognition unit 30 recognizes the positions of the plurality of users 82. The control unit 20 controls the oscillation devices 12 to reproduce a plurality of audio data items associated with the respective display objects 80 displayed on the display unit 40. The control unit 20 also controls the oscillation devices 12 so that the audio data linked to the display object 80 selected by each user 82 is reproduced toward the position of that user 82 as recognized by the recognition unit 30. The configuration of the electronic device 100 will be described in detail below with reference to the drawings.
[0013]
As shown in FIG. 1, the electronic device 100 includes a housing 90. The parametric speaker 10,
the display unit 40, the recognition unit 30, and the control unit 20 are disposed, for example,
inside the housing 90 (not shown).
[0014]
The electronic device 100 receives or stores content data. The content data includes audio data and image data. The image data of the content data is displayed by the display unit 40, and the audio data of the content data is output by the plurality of oscillation devices 12. The image data of the content data includes a plurality of display objects 80, each of which is associated with different audio data. When the content data is a concert, for example, the display objects 80 are the individual performers. In this case, each display object 80 is associated, for example, with audio data that reproduces the tone of the musical instrument played by the corresponding performer.
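As an illustration of this association, the following minimal Python sketch (not part of the patent; all class and field names are hypothetical) pairs each display object with its own audio track.

from dataclasses import dataclass
from typing import List

@dataclass
class DisplayObject:
    """One display object 80, e.g. a performer shown in the content."""
    object_id: int
    label: str
    audio_track: bytes  # audio data linked to this display object

@dataclass
class ContentData:
    """Content data made up of image data and per-object audio data."""
    image_frames: List[bytes]  # image data shown on the display unit 40
    display_objects: List[DisplayObject]

    def audio_for(self, object_id: int) -> bytes:
        """Return the audio data linked to the selected display object."""
        for obj in self.display_objects:
            if obj.object_id == object_id:
                return obj.audio_track
        raise KeyError(object_id)

# Example: a concert with two performers, each linked to its instrument's audio.
content = ContentData(
    image_frames=[],
    display_objects=[
        DisplayObject(1, "guitarist", b"<guitar audio>"),
        DisplayObject(2, "drummer", b"<drum audio>"),
    ],
)
print(content.audio_for(2))  # audio data linked to the drummer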
[0015]
As shown in FIG. 2, the recognition unit 30 includes an imaging unit 32 and a determination unit 34. The imaging unit 32 captures an area including the plurality of users 82 and generates image data. The determination unit 34 determines the position of each user 82 by processing the image data captured by the imaging unit 32. The position of each user 82 is determined, for example, by storing in advance feature quantities that identify the respective users 82 and collating these feature quantities with the image data. The feature quantities include, for example, the distance between the eyes, or the size and shape of the triangle connecting the eyes and the nose. The recognition unit 30 can also identify, for example, the positions of the ears of a user 82. In addition, the recognition unit 30 may have a function of automatically following a user 82 and determining the position of that user 82 when the user 82 moves within the area imaged by the imaging unit 32.
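The collation step can be pictured with the small Python sketch below; it is only an illustration with hypothetical feature values and a nearest-neighbour comparison, since the patent does not specify a particular matching algorithm.

import math

# Pre-registered feature quantities identifying each user 82 (hypothetical values):
# (eye distance, eye-nose triangle width, eye-nose triangle height), in pixels.
registered_features = {
    "user_A": (62.0, 64.0, 48.0),
    "user_B": (55.0, 57.0, 44.0),
}

def identify_user(detected_feature, registered=registered_features):
    """Collate a detected feature vector against the stored ones; return the closest user."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(registered, key=lambda uid: dist(registered[uid], detected_feature))

# A face measured in the image data captured by the imaging unit 32:
print(identify_user((61.2, 63.5, 47.1)))  # -> user_A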
[0016]
As shown in FIG. 2, the electronic device 100 includes a distance calculation unit 50. The
distance calculation unit 50 calculates the distance between each user 82 and the oscillation
device 12. As shown in FIG. 2, the distance calculation unit 50 includes, for example, a sound
wave detection unit 52. In this case, the distance calculation unit 50 calculates the distance
between each user 82 and the oscillation device 12 as follows, for example. First, a sensing ultrasonic wave is output from the oscillation device 12. Next, the distance calculation unit 50 detects the sensing ultrasonic wave reflected from each user 82. The distance between each user 82 and the oscillation device 12 is calculated from the time between the output of the sensing ultrasonic wave by the oscillation device 12 and its detection by the sound wave detection unit 52. In addition, when the electronic device 100 is a mobile telephone, the sound wave detection unit 52 can be configured by a microphone, for example.
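The pulse-echo calculation described above reduces to a one-line formula; the sketch below assumes a speed of sound of about 343 m/s in air and halves the round-trip time, which is the usual convention and not a value taken from the patent.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def distance_to_user(round_trip_time_s: float) -> float:
    """Distance between the oscillation device 12 and a user 82, given the time
    from emission of the sensing ultrasonic wave to its detection by the sound
    wave detection unit 52."""
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0

print(distance_to_user(0.012))  # about 2.06 m for a 12 ms echo delay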
[0017]
As shown in FIG. 2, the electronic device 100 includes a selection unit 56. Each user 82 uses the selection unit 56 to select one of the plurality of display objects 80 included in the image data displayed on the display unit 40. The selection unit 56 is incorporated, for example, inside the housing 90. Alternatively, the selection unit 56 may not be incorporated inside the housing 90; in this case, a plurality of selection units 56 may be provided so as to be held by each of the plurality of users 82.
[0018]
As shown in FIG. 2, the control unit 20 is connected to the plurality of oscillation devices 12, the recognition unit 30, the display unit 40, the distance calculation unit 50, and the selection unit 56. The control unit 20 controls the plurality of oscillation devices 12 so that the audio data linked to the display object 80 selected by each user 82 is reproduced toward the position of that user 82. This is performed, for example, as follows. First, the feature quantity of each user 82 is registered in association with an ID. Next, the display object 80 selected by each user 82 is stored in association with the ID of that user 82. Then, the ID corresponding to a specific display object 80 is selected, and the feature quantity associated with the selected ID is read out. Next, the user 82 having the read feature quantity is located by image processing. Finally, the audio data linked to the display object 80 is reproduced toward that user 82. The control unit 20 also adjusts the volume and sound quality of the audio data reproduced for each user 82 based on the distance between that user 82 and the oscillation device 12 calculated by the distance calculation unit 50.
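The bookkeeping described above amounts to two small lookup tables keyed by user ID; the Python sketch below is a schematic illustration with hypothetical names, and simple placeholders stand in for the image processing of the recognition unit 30 and the beam steering of the oscillation devices 12.

# Step 1: feature quantity registered per user ID.
features_by_id = {"id_1": (62.0, 64.0, 48.0), "id_2": (55.0, 57.0, 44.0)}

# Step 2: display object 80 selected by each user 82, stored per user ID.
selection_by_id = {"id_1": "guitarist", "id_2": "drummer"}

def reproduce_for_object(object_id, locate_user, play_toward):
    """Reproduce the audio linked to one display object toward the users who selected it."""
    for user_id, selected in selection_by_id.items():
        if selected != object_id:
            continue
        feature = features_by_id[user_id]   # step 3: read the stored feature quantity
        position = locate_user(feature)     # step 4: locate the user by image processing
        play_toward(position, object_id)    # step 5: steer the linked audio toward the user

# Placeholders standing in for the recognition unit 30 and the oscillation devices 12.
reproduce_for_object(
    "drummer",
    locate_user=lambda feature: (1.2, 0.4),  # dummy (x, y) position in metres
    play_toward=lambda pos, audio: print(f"play {audio!r} toward {pos}"),
)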
[0019]
FIG. 3 is a plan view showing the parametric speaker 10 shown in FIG. 2. The parametric speaker 10 is configured, for example, by arranging a plurality of oscillation devices 12 in an array, as shown in FIG. 3.
[0020]
FIG. 4 is a cross-sectional view showing the oscillation device 12 shown in FIG. The oscillation
device 12 includes a piezoelectric vibrator 60, a vibrating member 62, and a support member 64.
The piezoelectric vibrator 60 is provided on one surface of the vibrating member 62. The
support member 64 supports the edge of the vibrating member 62.
[0021]
The control unit 20 is connected to the piezoelectric vibrator 60 via the signal generation unit
22. The signal generation unit 22 generates an electrical signal to be input to the piezoelectric
vibrator 60. The control unit 20 controls the signal generation unit 22 based on the information
input from the outside, thereby controlling the oscillation of the oscillation device 12. The control
unit 20 inputs a modulation signal for the parametric speaker to the oscillation device 12 via the signal generation unit 22. At this time, the piezoelectric vibrator 60 uses a sound wave of 20 kHz or higher, for example 100 kHz, as the carrier wave of the signal.
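As a concrete illustration of such a drive signal, the sketch below amplitude-modulates an audible tone onto a 100 kHz carrier (AM being one of the modulation schemes mentioned later in the description); the sample rate, tone, and modulation depth are assumptions for the example, not values from the patent.

import numpy as np

fs = 1_000_000          # sample rate in Hz, well above the 100 kHz carrier
carrier_hz = 100_000    # ultrasonic carrier used by the piezoelectric vibrator 60
tone_hz = 1_000         # example audible signal to be carried
depth = 0.8             # modulation depth (assumed)

t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
audio = np.sin(2 * np.pi * tone_hz * t)       # audible-band signal
carrier = np.sin(2 * np.pi * carrier_hz * t)  # ultrasonic carrier wave
modulated = (1.0 + depth * audio) * carrier   # AM-modulated drive signal

# "modulated" is the kind of signal the signal generation unit 22 would feed to the
# vibrator; nonlinear propagation in air then demodulates it back into audible sound.
print(modulated.shape)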
[0022]
FIG. 5 is a cross-sectional view showing the piezoelectric vibrator 60 shown in FIG. 4. As shown in
FIG. 4, the piezoelectric vibrator 60 includes a piezoelectric body 70, an upper electrode 72 and a
lower electrode 74. The piezoelectric vibrator 60 is, for example, circular or elliptical in plan
view. The piezoelectric body 70 is sandwiched between the upper electrode 72 and the lower
electrode 74. Further, the piezoelectric body 70 is polarized in the thickness direction. The
piezoelectric body 70 is made of a material having a piezoelectric effect, and is made of, for
example, lead zirconate titanate (PZT) or barium titanate (BaTiO3), which is a material having
high electromechanical conversion efficiency. The thickness of the piezoelectric body 70 is
preferably 10 μm to 1 mm. The piezoelectric body 70 is made of a brittle material. Therefore, if
the thickness is less than 10 μm, breakage or the like is likely to occur during handling. On the
other hand, when the thickness exceeds 1 mm, the electric field strength of the piezoelectric
body 70 is reduced. This leads to a decrease in energy conversion efficiency.
[0023]
The upper electrode 72 and the lower electrode 74 are made of an electrically conductive material such as silver or a silver/palladium alloy. Silver is a general-purpose, low-resistance material and is advantageous in terms of manufacturing cost and manufacturing process. A silver/palladium alloy is a low-resistance material with excellent oxidation resistance and high reliability. The thickness of the upper electrode 72 and the lower electrode 74 is preferably 1 μm to 50 μm. If the thickness is less than 1 μm, uniform molding becomes difficult. On the other hand, if it exceeds 50 μm, the upper electrode 72 or the lower electrode 74 acts as a constraining surface with respect to the piezoelectric body 70, resulting in a decrease in energy conversion efficiency.
[0024]
The vibrating member 62 is made of a material, such as a metal or a resin, that has a high elastic modulus relative to the brittle ceramic. Examples of materials constituting the vibrating member 62 include general-purpose materials such as phosphor bronze and stainless steel. The thickness of the vibrating member 62 is preferably 5 μm to 500 μm. The longitudinal elastic modulus of the vibrating member 62 is preferably 1 GPa to 500 GPa. If the longitudinal elastic modulus of the vibrating member 62 is excessively low or high, the characteristics and reliability of the mechanical vibrator may be impaired.
[0025]
In the present embodiment, sound is reproduced using the operating principle of a parametric speaker. The operating principle of the parametric speaker is as follows: ultrasonic waves subjected to AM, DSB, SSB, or FM modulation are emitted into the air, and audible sound appears due to the nonlinear characteristics exhibited when the ultrasonic waves propagate through the air. The term "nonlinear" as used here refers to the transition from laminar to turbulent flow that occurs when the Reynolds number, the ratio of the inertial to the viscous forces of a flow, increases. That is, because the sound wave is finely disturbed in the fluid, it propagates nonlinearly. In particular, when ultrasonic waves are emitted into the air, harmonics associated with this nonlinearity are significantly generated. In addition, a sound wave is a compression wave in which groups of air molecules alternate between dense and sparse states. If air molecules take longer to recover than to be compressed, the air that cannot recover after compression collides with the continuously propagating air molecules, producing shock waves and audible sound. The parametric speaker can form a sound field only around the user 82 and is therefore excellent in terms of privacy protection.
[0026]
Next, the operation of the electronic device 100 according to the present embodiment will be described. FIG. 6 is a flowchart showing an operation method of the electronic device 100 shown in FIG. 1. First, image data is displayed by the display unit 40 (S01). Next, each user 82 selects one of the plurality of display objects 80 included in the image data displayed on the display unit 40 (S02).
[0027]
Next, the recognition unit 30 recognizes the positions of the plurality of users 82 (S03). Next, the
distance calculation unit 50 calculates the distance between each user 82 and the oscillation
device 12 (S04). Next, based on the distance between each user 82 and the oscillation device 12,
the volume and sound quality of the audio data to be reproduced for each user 82 are adjusted
(S05).
[0028]
Next, the audio data linked to the display object 80 selected by each user 82 is reproduced toward that user 82 (S06). When the recognition unit 30 tracks the position of a user 82, the control unit 20 may continuously control, based on the position of the user 82 recognized by the recognition unit 30, the direction in which the oscillation device 12 reproduces the audio data.
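Read as pseudocode, steps S01 to S06 correspond roughly to the Python sketch below; the method names are hypothetical stand-ins for the display unit 40, selection unit 56, recognition unit 30, distance calculation unit 50, and oscillation devices 12.

def run_content_session(device):
    device.display_image_data()                                   # S01
    selections = device.collect_selections()                      # S02: user -> display object
    positions = device.recognize_user_positions()                 # S03: user -> position
    distances = {u: device.distance_to(p) for u, p in positions.items()}  # S04
    settings = {u: device.adjust_volume_and_quality(d)            # S05
                for u, d in distances.items()}
    for user, selected_object in selections.items():              # S06
        device.reproduce(audio_for=selected_object,
                         toward=positions[user],
                         settings=settings[user])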
[0029]
Next, the effects of the present embodiment will be described. According to the present embodiment, the oscillation device 12 outputs the modulated wave for the parametric speaker. Further, the control unit 20 controls the oscillation device 12 so that the audio data linked to the display object 80 selected by each user 82 is reproduced toward the position of that user 82. With this configuration, because a highly directional parametric speaker is used, the audio data reproduced for the individual users does not interfere with one another. Using such a parametric speaker, the audio data linked to the display object 80 selected by each user 82 is reproduced toward that user 82. Therefore, when a plurality of users view the same content at the same time, different audio data linked to different display objects displayed in the content can be reproduced for each user.
[0030]
FIG. 7 is a block diagram showing the electronic device 102 according to the second
embodiment, which corresponds to FIG. 2 according to the first embodiment. The electronic
device 102 according to the present embodiment is the same as the electronic device 100
according to the first embodiment except that a plurality of detection terminals 54 are provided.
[0031]
The plurality of detection terminals 54 are held by each of the plurality of users 82. Then, the
recognition unit 30 recognizes the position of the user 82 by recognizing the position of the
detection terminal 54. The recognition of the position of the detection terminal 54 by the
recognition unit 30 is performed, for example, by the recognition unit 30 receiving a radio wave emitted from the detection terminal 54. The recognition unit 30 may have a function of
automatically tracking the user 82 to determine the position of the user 82 when the user 82
holding the detection terminal 54 moves. When a plurality of selection units 56 are provided so
as to be held by each user 82, the detection terminal 54 may be formed integrally with the
selection unit 56.
[0032]
In addition, the recognition unit 30 may include the imaging unit 32 and the determination unit 34. In this case, the imaging unit 32 captures the area in which the user 82 has been recognized through the position of the detection terminal 54, and generates image data. The determination unit 34 determines the position of the ears of the user 82 by processing the image data generated by the imaging unit 32. Therefore, by using the detection terminal 54 in combination with image-based position detection, the position of the user 82 can be recognized more accurately.
[0033]
In the present embodiment, control of the oscillation device 12 by the control unit 20 is performed as follows. First, the ID of each detection terminal 54 is registered in advance. Next, the volume and sound quality set for each user 82 are associated with the ID of the detection terminal 54 held by that user 82. Next, each detection terminal 54 transmits its own ID. The recognition unit 30 recognizes the position of each detection terminal 54 based on the direction from which the ID was transmitted. Then, the audio data corresponding to a specific volume and sound-quality setting is reproduced toward the user 82 holding the detection terminal 54 whose ID corresponds to that setting.
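A minimal sketch of this ID-based bookkeeping, with hypothetical names: each detection terminal ID maps to the volume and sound-quality settings of its holder, and a direction-of-arrival estimate stands in for the radio-based position recognition by the recognition unit 30.

# Volume / sound-quality settings registered against each detection terminal ID.
settings_by_terminal_id = {
    "terminal_01": {"volume": 0.7, "equalizer": "speech"},
    "terminal_02": {"volume": 0.4, "equalizer": "music"},
}

def on_id_received(terminal_id, direction_deg, play_toward):
    """Handle an ID transmitted by a detection terminal 54; direction_deg is the
    direction from which the ID was received, taken as the position of the user."""
    settings = settings_by_terminal_id[terminal_id]
    play_toward(direction_deg, settings)

# Placeholder for steering the oscillation devices 12 toward the user.
on_id_received("terminal_02", direction_deg=25.0,
               play_toward=lambda d, s: print(f"play at {d} degrees with {s}"))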
[0034]
Also in this embodiment, the same effect as that of the first embodiment can be obtained.
[0035]
Although the embodiments of the present invention have been described above with reference to
the drawings, these are merely examples of the present invention, and various configurations
other than the above can also be adopted.
[0036]
10: parametric speaker, 12: oscillation device, 16: ultrasonic wave, 20: control unit, 22: signal generation unit, 30: recognition unit, 32: imaging unit, 34: determination unit, 40: display unit, 50: distance calculation unit, 52: sound wave detection unit, 54: detection terminal, 56: selection unit, 60: piezoelectric vibrator, 62: vibrating member, 64: support member, 70: piezoelectric body, 72: upper electrode, 74: lower electrode, 80: display object, 82: user, 90: housing, 100: electronic device, 102: electronic device