Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2015126358
PROBLEM TO BE SOLVED: To provide a speaker device capable of clearly localizing a sound source using localization by a virtual sound source while making use of the characteristics of a sound beam. SOLUTION: The difference in level at which the audio beam of each channel reaches the listening position is detected, and based on the detected level difference the gain of the gain adjustment unit 43SR is set higher than that of the other gain adjustment units. For that channel the level of the localization addition section 42 is set higher than for the other channels, strengthening the localization effect of the virtual sound source, and the level ratio between each channel of the localization addition section 42 and each channel of the audio beam is adjusted. The audio signals are synthesized, and the audio signals VL and VR are output to the correction unit 51. The correction unit 51 performs filter processing and outputs audio signals VLD, VLC, VRC, and VRD, which are input to the woofers 33L and 33R via the combining units 52L and 52R, the delay processing units 60L and 60R, and the addition processing unit 32.
[Selected figure] Figure 6
Speaker device
[0001]
The present invention relates to a speaker apparatus that outputs an audio beam having
directivity.
[0002]
Conventionally, an array speaker apparatus is known that outputs an audio beam having directivity by delaying an audio signal and distributing it to a plurality of speaker units (see Patent Document 1).
[0003]
The array speaker apparatus of Patent Document 1 reflects the sound beam of each channel off the walls so that the beams reach the listener from the surroundings, thereby localizing sound sources.
[0004]
Moreover, the array speaker apparatus of Patent Document 1 localizes a virtual sound source by performing a filtering process based on a head-related transfer function for a channel whose audio beam cannot reach the listener due to the room shape or the like.
[0005]
Patent document 1: JP 2008-227803
[0006]
However, even for a channel whose sound beam can be delivered, the sound source may not be clearly localized depending on the listening environment.
For example, in an environment where the distance between the listening position and the walls is large, or where the walls are made of a material with low acoustic reflectance, a sufficient sense of localization cannot be obtained.
[0007]
On the other hand, a virtual sound source cannot provide a sense of distance compared with a sound beam.
Further, in localization by a virtual sound source, the sense of localization weakens when the listening position deviates from the prescribed position, so the region in which a sense of localization can be obtained is narrow.
Moreover, since the head-related transfer function is set based on the shape of a model head, the sense of localization varies between individuals.
[0008]
In addition, when the filtering process based on the head-related transfer function is performed only for a specific channel, as in Patent Document 1, channels reproduced only by an audio beam and channels reproduced only by a virtual sound source coexist, and the sense of localization differs from channel to channel. The surround feeling may therefore be reduced.
[0009]
Therefore, an object of the present invention is to provide a speaker device capable of clearly
localizing a sound source using localization by a virtual sound source while making use of the
characteristics of a sound beam.
[0010]
The speaker apparatus according to the present invention includes an input unit to which audio signals of a plurality of channels are input; a plurality of speakers; a directivity control unit that delays the audio signals of the plurality of channels input to the input unit and distributes them to the plurality of speakers so that the plurality of speakers output a plurality of audio beams; a localization adding unit that performs a filtering process based on a head-related transfer function on any of the audio signals of the plurality of channels input to the input unit and inputs the result to a speaker; and a level adjusting unit that adjusts the level ratio between the audio signal of each channel of the localization adding unit and the audio signal of each channel of the audio beam.
[0011]
As described above, the speaker device of the present invention compensates the sense of localization produced by the sound beams with a virtual sound source.
The sense of localization can therefore be improved compared with using only sound beams or only a virtual sound source.
The speaker device of the present invention detects the difference in level at which the sound beam of each channel reaches the listening position and, based on the detected level difference, adjusts the level ratio between each channel of the localization addition unit and each channel of the sound beam.
For example, for a channel whose audio beam level is lowered by a wall or other surface with low acoustic reflectance, the level of the localization addition unit is set higher than for the other channels to strengthen the localization effect of the virtual sound source.
Furthermore, even for a channel in which the localization effect of the virtual sound source is set strongly, a sense of localization due to the sound beam still exists, so no single channel is reproduced by the virtual sound source alone, no discomfort arises, and the connection between the channels is maintained.
[0012]
The speaker apparatus according to the present invention may further include a microphone installed at the listening position, a detection unit that detects the level at which the audio beam of each channel reaches the listening position, and a setting unit that sets the level ratio in the level adjustment unit based on the levels detected by the detection unit. Preferably, the detection unit inputs a test signal to the directivity control unit so that the plurality of speakers output a test audio beam, and measures the level at which the test audio beam is input to the microphone, and the setting unit sets the level ratio in the level adjustment unit based on the level difference between the measured peaks.
[0013]
In this case, the output angle of the audio beam of each channel and the level ratio between each channel of the localization addition unit and each channel of the audio beam are adjusted automatically, simply by placing a microphone at the listening position and performing the measurement.
[0014]
The speaker apparatus according to the present invention may further comprise comparison means for comparing the levels of the audio signals of the plurality of channels input to the input unit, and the setting means may set the level ratio in the level adjustment unit based on the comparison result of the comparison means.
[0015]
For example, when a high-level signal is input for a specific channel, the content producer presumably intends to give that channel a sense of localization, so it is desirable for that channel to be clearly localized.
Therefore, for the channel receiving the high-level signal, the level of the localization addition unit is set higher than for the other channels, and the localization effect of the virtual sound source is strengthened to localize the sound image clearly.
[0016]
Preferably, the comparison means compares the levels of the front channels and the surround channels, and the setting means sets the level ratio of the localization addition unit based on the relative level difference between the front channels and the surround channels.
[0017]
A surround channel requires the sound beam to reach the listener from behind the listening position, so the beam must be reflected twice off the walls.
The surround channels may therefore lack a clear sense of localization compared with the front channels.
Accordingly, when the level of a surround channel becomes relatively high, the level of the localization addition unit is set high and the localization effect of the virtual sound source is strengthened, so that the sense of localization of the surround channel is maintained. When the level of the front channels is relatively high, the localization by the sound beam is emphasized instead.
On the other hand, if the level ratio of the localization addition unit is lowered when the surround channel level becomes relatively low, the surround channels may become difficult to hear. In that case, the level ratio of the localization addition unit may instead be increased when the surround channel level becomes relatively low and reduced when the surround channel level becomes relatively high.
[0018]
The comparison means may divide the audio signals of the plurality of channels input to the
input unit into predetermined bands and compare the levels of the divided bands.
[0019]
The speaker apparatus according to the present invention may further include a volume setting reception unit for receiving a volume setting of the plurality of speakers, and the setting unit may set the level ratio in the level adjustment unit based on the volume setting.
[0020]
In particular, if the volume setting (master volume setting) of the plurality of speakers is lowered, the level of the sound reflected from the walls falls, the connection between the channels may be lost, and the surround feeling may be reduced.
It is therefore preferable to set the level of the localization addition unit higher as the master volume setting becomes lower, strengthening the localization effect of the virtual sound source so that the connection between the channels and the surround feeling are maintained.
[0021]
Since the speaker device of the present invention produces a sense of localization with both the sound beams and the virtual sound sources, a sound source can be clearly localized using localization by the virtual sound source while making use of the characteristics of the sound beams.
[0022]
FIG. 1 is a schematic diagram showing the configuration of the AV system.
FIG. 2 is a block diagram showing the configuration of the array speaker apparatus.
FIG. 3 is a block diagram showing the configuration of the filter processing units.
FIG. 4 is a block diagram showing the configuration of the beam forming processing unit.
FIG. 5 is a diagram showing the relationship between the audio beams and the channel settings.
FIG. 6 is a block diagram showing the configuration of the virtual processing unit.
FIG. 7 is a block diagram showing the configuration of the localization addition unit and the correction unit.
FIG. 8 is a diagram explaining the sound field generated by the array speaker apparatus 2.
FIG. 9 is a block diagram showing the configuration of an array speaker apparatus according to Modification 1.
FIG. 10 is a block diagram showing the configuration of an array speaker apparatus according to Modification 2.
FIG. 11 is a diagram showing an array speaker apparatus according to Modification 3.
[0023]
FIG. 1 is a schematic view of an AV system 1 provided with an array speaker device 2 according
to the present embodiment. The AV system 1 includes an array speaker device 2, a subwoofer 3,
a television 4, and a microphone 7. The array speaker device 2 is connected to the subwoofer 3
and the television 4. The array speaker device 2 receives an audio signal corresponding to the
video reproduced by the television 4 and an audio signal from a content player (not shown).
[0024]
As shown in FIG. 1, the array speaker device 2 includes, for example, a rectangular parallelepiped
casing, and is installed in the vicinity of the television 4 (the lower portion of the display screen
of the television 4). The array speaker device 2 includes, for example, 16 speaker units 21A to
21P, a woofer 33L, and a woofer 33R on the front surface (the surface facing the listener). In this
example, the speaker units 21A to 21P, the woofer 33L, and the woofer 33R correspond to the
"plural speakers" in the present invention.
[0025]
The speaker units 21A to 21P are arranged in a line along the lateral direction as viewed from the listener. The speaker unit 21A is disposed on the leftmost side as viewed from the listener, and the speaker unit 21P is disposed on the rightmost side. The woofer 33L is disposed further to the left of the speaker unit 21A. The woofer 33R is disposed further to the right of the speaker unit 21P.
[0026]
The number of speaker units is not limited to sixteen, and may be eight, for example. Further, the
arrangement mode is not limited to the example of arranging in one row along the lateral
direction, and may be arranged in three rows along the lateral direction, for example.
[0027]
The subwoofer 3 is installed near the array speaker device 2. In the example of FIG. 1 it is placed on the left side of the array speaker device 2, but the installation position is not limited to this example.
[0028]
Further, a microphone 7 for listening environment measurement is connected to the array
speaker device 2. The microphone 7 is installed at the listening position. The microphone 7 is
used when measuring the listening environment, and does not need to be installed when actually
viewing the content.
[0029]
FIG. 2 is a block diagram showing the configuration of the array speaker device 2. The array speaker device 2 includes an input unit 11, a decoder 10, a filter processing unit 14, a filter processing unit 15, a beam forming processing unit 20, an addition processing unit 32, an addition processing unit 70, a virtual processing unit 40, and a control unit 35.
[0030]
The input unit 11 includes an HDMI receiver 111, a DIR 112, and an A / D converter 113. The
HDMI receiver 111 inputs an HDMI signal conforming to the HDMI standard and outputs the
signal to the decoder 10. The DIR 112 inputs a digital audio signal (SPDIF) and outputs it to the
decoder 10. The A / D conversion unit 113 receives an analog audio signal, converts it into a
digital audio signal, and outputs the digital audio signal to the decoder 10.
[0031]
The decoder 10 comprises a DSP and decodes the input signal. The decoder 10 receives signals in various formats such as AAC (registered trademark), Dolby Digital (registered trademark), DTS (registered trademark), MPEG-1/2, MPEG-2 multichannel, and MP3, converts them into multichannel audio signals (digital audio signals of the FL channel, FR channel, C channel, SL channel, and SR channel; hereinafter, "audio signal" means a digital audio signal), and outputs them. The thick solid lines shown in FIG. 2 indicate multichannel audio signals. The decoder 10 also has a function of expanding, for example, a stereo audio signal into a multichannel audio signal.
[0032]
The multi-channel audio signal output from the decoder 10 is input to the filter processing unit
14 and the filter processing unit 15. The filter processing unit 14 extracts and outputs a band
suitable for each speaker unit for the multi-channel audio signal output from the decoder 10.
[0033]
FIG. 3A is a block diagram showing the configuration of the filter processing unit 14, and FIG. 3B
is a block diagram showing the configuration of the filter processing unit 15.
[0034]
The filter processing unit 14 includes an HPF 14FL, an HPF 14FR, an HPF 14C, an HPF 14SL,
and an HPF 14SR which respectively input digital audio signals of an FL channel, an FR channel,
a C channel, an SL channel, and an SR channel.
Further, the filter processing unit 14 includes an LPF 15FL, an LPF 15FR, an LPF 15C, an LPF
15SL, and an LPF 15SR to which digital audio signals of an FL channel, an FR channel, a C
channel, an SL channel, and an SR channel are input.
[0035]
The HPF 14FL, the HPF 14FR, the HPF 14C, the HPF 14SL, and the HPF 14SR respectively
extract and output the high band of the audio signal of each channel input. The cutoff
frequencies of the HPF 14FL, HPF 14FR, HPF 14C, HPF 14SL, and HPF 14SR are set to match the
lower limit (for example, 200 Hz) of the reproduction frequency of the speaker units 21A to 21P.
The output signals of the HPF 14FL, HPF 14FR, HPF 14C, HPF 14SL, and HPF 14SR are output to the beam forming processing unit 20.
[0036]
The LPF 15FL, the LPF 15FR, the LPF 15C, the LPF 15SL, and the LPF 15SR extract and output
the low band (for example, less than 200 Hz) of the audio signal of each channel input thereto.
The cutoff frequencies of the LPF 15FL, LPF 15FR, LPF 15C, LPF 15SL, and LPF 15SR
correspond to the cutoff frequencies of the HPF 14FL, HPF 14FR, HPF 14C, HPF 14SL, and HPF
14SR (for example, 200 Hz).
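The band-splitting described above can be illustrated with a short sketch. This is only a rough model of the 200 Hz split between the array units and the woofer/subwoofer path; the Butterworth filter order and the 48 kHz sample rate are assumptions, not values taken from the description.

# Rough sketch of the 200 Hz band split (high band to the array units,
# low band toward the woofer/subwoofer path). Filter order and sample
# rate are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, lfilter

FS = 48000           # assumed sample rate [Hz]
CROSSOVER_HZ = 200   # lower limit of the speaker units 21A-21P (from the text)

def split_band(x, fc=CROSSOVER_HZ, fs=FS, order=4):
    """Return (high_band, low_band) of the signal x around fc."""
    b_hp, a_hp = butter(order, fc, btype="highpass", fs=fs)
    b_lp, a_lp = butter(order, fc, btype="lowpass", fs=fs)
    return lfilter(b_hp, a_hp, x), lfilter(b_lp, a_lp, x)

# Example: split one second of noise standing in for the FL channel.
fl = np.random.randn(FS)
fl_high, fl_low = split_band(fl)   # high band -> beam forming, low band -> woofer path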
[0037]
The output signals of the LPF 15FL, the LPF 15C, and the LPF 15SL are added by the adding unit
16 to be an L channel audio signal. The L channel audio signal is further input to the HPF 30L
and the LPF 31L.
[0038]
The HPF 30L extracts and outputs the high band of the input audio signal. The LPF 31L extracts
and outputs the low band of the input audio signal. The cutoff frequencies of the HPF 30L and
the LPF 31L correspond to the crossover frequency (for example, 100 Hz) between the woofer
33L and the subwoofer 3. The crossover frequency may be changed by the listener.
[0039]
The output signals of the LPF 15FR, the LPF 15C, and the LPF 15SR are added by the adding unit 17 to become the R channel audio signal. The R channel audio signal is further input to the HPF 30R and the LPF 31R.
[0040]
The HPF 30R extracts and outputs the high band of the input audio signal. The LPF 31R extracts
and outputs the low band of the input audio signal. The cutoff frequency of the HPF 30R
corresponds to the crossover frequency (for example, 100 Hz) between the woofer 33R and the
subwoofer 3. As mentioned above, the crossover frequency may be changeable by the listener.
[0041]
The audio signal output from the HPF 30L is input to the woofer 33L via the addition processing
unit 32. Similarly, the audio signal output from the HPF 30R is input to the woofer 33R via the
addition processing unit 32.
[0042]
The audio signal output from the LPF 31L and the audio signal output from the LPF 31R are added by the addition processing unit 70 to form a monaural signal and input to the subwoofer 3. Although not shown, the addition processing unit 70 also receives the LFE channel, adds it to the audio signals output from the LPF 31L and the LPF 31R, and outputs the result to the subwoofer 3.
[0043]
On the other hand, the filter processing unit 15 includes an HPF 40FL, an HPF 40FR, an HPF
40C, an HPF 40SL, and an HPF 40SR which respectively input digital audio signals of the FL
channel, the FR channel, the C channel, the SL channel, and the SR channel. The filter processing
unit 15 further includes an LPF 41 FL, an LPF 41 FR, an LPF 41 C, an LPF 41 SL, and an LPF 41
SR which respectively input digital audio signals of the FL channel, the FR channel, the C channel,
the SL channel, and the SR channel.
[0044]
The HPF 40FL, HPF 40FR, HPF 40C, HPF 40SL, and HPF 40SR respectively extract and output the high band of the audio signal of each input channel. Their cutoff frequencies correspond to the crossover frequency (for example, 100 Hz) between the woofers 33L and 33R and the subwoofer 3. As mentioned above, the crossover frequency may be changeable by the listener. The cutoff frequencies of the HPF 40FL, HPF 40FR, HPF 40C, HPF 40SL, and HPF 40SR may also be the same as those of the HPF 14FL, HPF 14FR, HPF 14C, HPF 14SL, and HPF 14SR. The filter processing unit 15 may also consist only of the HPF 40FL, HPF 40FR, HPF 40C, HPF 40SL, and HPF 40SR, in which case the low band is not output to the subwoofer 3. The audio signals output from the HPF 40FL, HPF 40FR, HPF 40C, HPF 40SL, and HPF 40SR are output to the virtual processing unit 40.
[0045]
The LPF 41FL, LPF 41FR, LPF 41C, LPF 41SL, and LPF 41SR respectively extract and output the low band of the audio signal of each input channel. Their cutoff frequencies correspond to the above-mentioned crossover frequency (for example, 100 Hz). The audio signals output from the LPF 41FL, LPF 41FR, LPF 41C, LPF 41SL, and LPF 41SR are added by the adder 171 to form a monaural signal and then input to the subwoofer 3 through the addition processing unit 70. The addition processing unit 70 adds the audio signals output from the LPF 41FL, LPF 41FR, LPF 41C, LPF 41SL, and LPF 41SR, the audio signals output from the LPF 31L and the LPF 31R, and the audio signal of the LFE channel described above. The addition processing unit 70 may include a gain adjustment unit that changes the addition ratio of these signals.
[0046]
Next, the beam forming processing unit 20 will be described. FIG. 4 is a block diagram showing the configuration of the beam forming processing unit 20. The beam forming processing unit 20 includes a gain adjustment unit 18FL, a gain adjustment unit 18FR, a gain adjustment unit 18C, a gain adjustment unit 18SL, and a gain adjustment unit 18SR, which receive the digital audio signals of the FL channel, the FR channel, the C channel, the SL channel, and the SR channel, respectively.
[0047]
The gain adjusting unit 18FL, the gain adjusting unit 18FR, the gain adjusting unit 18C, the gain
adjusting unit 18SL, and the gain adjusting unit 18SR adjust the gain of the audio signal of each
channel. The gain-adjusted audio signal of each channel is input to directivity control unit 91FL,
directivity control unit 91FR, directivity control unit 91C, directivity control unit 91SL, and
directivity control unit 91SR. The directivity control unit 91FL, the directivity control unit 91FR,
the directivity control unit 91C, the directivity control unit 91SL, and the directivity control unit
91SR distribute the audio signal of each channel to the speaker units 21A to 21P. The audio
signals for the distributed speaker units 21A to 21P are combined by the combining unit 92 and
supplied to the speaker units 21A to 21P. At this time, directivity control unit 91FL, directivity
control unit 91FR, directivity control unit 91C, directivity control unit 91SL, and directivity
control unit 91SR adjust the delay amount of the audio signal supplied to each speaker unit.
[0048]
The sounds output from the speaker units 21A to 21P are mutually intensified at the point where
the phases are aligned, and are output as an audio beam having directivity. For example, when
sound is output from all the speakers at the same timing, an audio beam having directivity is
output in front of the array speaker device 2. The directivity control unit 91FL, the directivity control unit 91FR, the directivity control unit 91C, the directivity control unit 91SL, and the directivity control unit 91SR can change the output direction of the audio beam by changing the delay amount applied to each audio signal.
[0049]
In addition, the directivity control unit 91FL, directivity control unit 91FR, directivity control unit 91C, directivity control unit 91SL, and directivity control unit 91SR can also apply delay amounts such that the phases of the sounds output from the speaker units 21A to 21P are aligned at a predetermined position, forming an audio beam focused at that position.
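As a rough illustration of how such a focused beam can be formed, the sketch below computes per-unit delays so that all wavefronts arrive at a chosen focal point simultaneously. The unit count matches the sixteen units 21A to 21P, but the 5 cm spacing and the speed of sound are assumed values for illustration.

# Sketch of per-unit delays for focusing a beam at a point in front of the
# array. Unit spacing and the speed of sound are assumptions.
import numpy as np

C = 343.0        # speed of sound [m/s] (assumed)
N_UNITS = 16     # speaker units 21A..21P
SPACING = 0.05   # assumed spacing between adjacent units [m]

def focus_delays(focal_xy):
    """Delay in seconds for each unit so the wavefronts meet at focal_xy."""
    x = (np.arange(N_UNITS) - (N_UNITS - 1) / 2) * SPACING  # unit positions along the array
    d = np.hypot(focal_xy[0] - x, focal_xy[1])              # unit-to-focus distances
    return (d.max() - d) / C                                # farther units fire earlier

delays = focus_delays((1.0, 3.0))   # e.g. focus 1 m to the right, 3 m in front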
[0050]
The sound beam can reach the listening position either directly from the array speaker device 2 or after being reflected off the walls of the room.
For example, as shown in FIG. 5C, the sound beam of the C-channel audio signal can be output in the front direction so that it reaches the listening position from the front. The sound beams of the FL-channel and FR-channel audio signals are output in the left and right directions of the array speaker device 2, are reflected off the walls to the left and right of the listening position, and reach the listening position from the left and right, respectively. The sound beams of the SL-channel and SR-channel audio signals are likewise output in the left and right directions, are reflected twice, off the walls to the left and right of the listening position and off the rear wall, and reach the listening position from the left rear and right rear directions, respectively.
[0051]
Such setting of the output directions of the sound beams can be performed automatically by measuring the listening environment with the microphone 7. As shown in FIG. 5A, when the listener places the microphone 7 at the listening position and operates a remote controller or the main body operation unit (not shown) to instruct setting of the sound beams, the control unit 35 outputs a test signal consisting of, for example, white noise to the beam forming processing unit 20.
[0052]
The control unit 35 turns the audio beam from the direction parallel to the front surface of the array speaker device 2 on the left (referred to as the 0 degree direction) to the direction parallel to the front surface of the array speaker device 2 on the right (referred to as the 180 degree direction). While the sound beam is turned in front of the array speaker device 2, it is reflected off the walls of the room R according to the turning angle θ, and is picked up by the microphone 7 at certain angles.
[0053]
The control unit 35 analyzes the level of the audio signal input from the microphone 7 as follows.
[0054]
The control unit 35 stores the level of the audio signal input from the microphone 7 in a memory (not shown) in association with the output angle of the audio beam.
Then, based on the peaks of the audio signal level, the control unit 35 assigns each channel of the multichannel audio signal to an output angle of the audio beam. For example, the control unit 35 detects peaks equal to or higher than a predetermined threshold in the collected sound data. The control unit 35 assigns the output angle of the audio beam at the highest of these peaks as the output angle of the C-channel audio beam. For example, in FIG. 5B, the angle θ3a at the highest level is assigned as the output angle of the C-channel sound beam. The control unit 35 then assigns the output angles of the SL-channel and SR-channel audio beams to the peaks adjacent on either side of the peak assigned to the C channel. For example, in FIG. 5B, the angle θ2a next to the C channel on the 0 degree side is assigned as the output angle of the SL-channel audio beam, and the angle θ4a next to the C channel on the 180 degree side is assigned as the output angle of the SR-channel audio beam. Furthermore, the control unit 35 assigns the output angles of the FL-channel and FR-channel audio beams to the outermost peaks. For example, in FIG. 5B, the angle θ1a closest to the 0 degree direction is assigned as the output angle of the FL-channel audio beam, and the angle θ5a closest to the 180 degree direction is assigned as the output angle of the FR-channel audio beam. In this way, the control unit 35 implements detection means that detects the difference in level at which the audio beam of each channel reaches the listening position, and beam angle setting means that sets the output angle of each audio beam based on the level peaks measured by the detection means.
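A minimal sketch of this peak-to-channel assignment is shown below: the strongest peak is taken as the C-channel angle, its neighbours as SL and SR, and the outermost peaks as FL and FR. The use of scipy.signal.find_peaks and the threshold handling are illustrative choices, not the exact procedure of the control unit 35.

# Sketch of assigning beam output angles to channels from measured peaks.
# Peak picking and the threshold are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def assign_beam_angles(angles_deg, levels, threshold):
    """angles_deg, levels: arrays of the swept beam angle and the measured level."""
    idx, _ = find_peaks(levels, height=threshold)   # peaks above the threshold
    idx = idx[np.argsort(angles_deg[idx])]          # order peaks from 0 to 180 degrees
    c = idx[np.argmax(levels[idx])]                 # highest peak -> C channel
    left, right = idx[idx < c], idx[idx > c]
    return {
        "C":  angles_deg[c],
        "SL": angles_deg[left[-1]]  if left.size  else None,  # neighbour on the 0-degree side
        "SR": angles_deg[right[0]]  if right.size else None,  # neighbour on the 180-degree side
        "FL": angles_deg[left[0]]   if left.size  else None,  # outermost toward 0 degrees
        "FR": angles_deg[right[-1]] if right.size else None,  # outermost toward 180 degrees
    }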
[0055]
In this way, as shown in FIG. 5C, the sound beams are set so that they reach the position of the listener (microphone 7) from the surroundings.
[0056]
Next, the virtual processing unit 40 will be described.
FIG. 6 is a block diagram showing the configuration of the virtual processing unit 40. The virtual processing unit 40 includes a level adjustment unit 43, a localization addition unit 42, a correction unit 51, a delay processing unit 60L, and a delay processing unit 60R.
[0057]
The level adjustment unit 43 includes a gain adjustment unit 43FL, a gain adjustment unit 43FR, a gain adjustment unit 43C, a gain adjustment unit 43SL, and a gain adjustment unit 43SR, which receive the digital audio signals of the FL channel, the FR channel, the C channel, the SL channel, and the SR channel, respectively.
[0058]
The gain adjustment unit 43FL, the gain adjustment unit 43FR, the gain adjustment unit 43C, the gain adjustment unit 43SL, and the gain adjustment unit 43SR adjust the gain of the audio signal of each channel.
The gain of each gain adjustment unit is set by the control unit 35, which serves as setting means, based on the detection result for the test audio beam. For example, as shown in FIG. 5B, the C-channel sound beam has the highest level because it is a direct sound, so the gain of the gain adjustment unit 43C is set lowest. Since the C-channel sound beam is a direct sound and is unlikely to depend on the room environment, its gain may also be a fixed value. The other gain adjustment units are set according to the level difference from the C channel. For example, if the gain of the gain adjustment unit 43C is set to 0.1 with a C-channel detection level of G1 = 1.0, then for an FR-channel detection level of G3 = 0.6 the gain of the gain adjustment unit 43FR is 0.4, and for an SR-channel detection level of G2 = 0.4 the gain of the gain adjustment unit 43SR is 0.6. The gain of each channel is adjusted in this way. FIGS. 5(A), 5(B), and 5(C) show an example in which the control unit 35 turns the audio beam of the test signal and detects the difference in level at which the audio beam of each channel reaches the listening position; alternatively, the listener may manually instruct the control unit 35 through a user interface (not shown) to output an audio beam, and the level difference at which the audio beam of each channel reaches the listening position may be detected in that way. The settings of the gain adjustment unit 43FL, gain adjustment unit 43FR, gain adjustment unit 43C, gain adjustment unit 43SL, and gain adjustment unit 43SR may also be based on levels measured separately for each channel, apart from the levels detected by sweeping the test audio beam. Specifically, a test sound beam can be output for each channel in the direction determined by the test beam sweep, and the sound collected by the microphone 7 at the listening position can be analyzed.
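One way to read the numeric example above is that each channel's gain in the level adjustment unit 43 grows with the gap between its detected beam level and the C-channel level, while the C channel keeps a small fixed gain. The sketch below encodes that reading; the mapping itself is an assumption inferred from the example values, not a rule stated in the description.

# One possible reading of the example: gain = (C-channel level) - (channel level),
# with the C channel itself held at a small fixed gain. This mapping is an
# assumption inferred from the numbers G1 = 1.0 -> 0.1, G3 = 0.6 -> 0.4, G2 = 0.4 -> 0.6.
def localization_gains(levels, c_gain=0.1):
    """levels: detected beam levels at the listening position, e.g.
    {"C": 1.0, "FL": 0.8, "FR": 0.6, "SL": 0.7, "SR": 0.4}."""
    ref = levels["C"]
    gains = {ch: max(ref - lvl, 0.0) for ch, lvl in levels.items()}
    gains["C"] = c_gain    # direct sound: keep a low, fixed gain
    return gains

gains = localization_gains({"C": 1.0, "FR": 0.6, "SR": 0.4})
# gains["FR"] is 0.4 and gains["SR"] is 0.6, matching the example in the text.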
[0059]
The audio signal of each channel whose gain has been adjusted is input to the localization adding unit 42. The localization adding unit 42 performs processing to localize the input audio signal of each channel as a virtual sound source at a predetermined position. For this localization, a head-related transfer function (hereinafter referred to as HRTF), which represents the transfer function between a predetermined position and the listener's ear, is used.
[0060]
The HRTF is an impulse response expressing the level, arrival time, frequency characteristics, and so on of the sound reaching the left and right ears from a virtual speaker installed at a certain position. The localization addition unit 42 can make the listener localize a virtual sound source by applying the HRTF to the input audio signal of each channel and emitting the sound from the woofer 33L or the woofer 33R.
[0061]
FIG. 7A is a block diagram showing the configuration of the localization addition unit 42. The localization adding unit 42 includes an FL filter 421L, an FR filter 422L, a C filter 423L, an SL filter 424L, an SR filter 425L, an FL filter 421R, an FR filter 422R, a C filter 423R, an SL filter 424R, and an SR filter 425R, which convolve the HRTF impulse responses with the audio signal of each channel.
[0062]
For example, the audio signal of the FL channel is input to the FL filter 421L and the FL filter 421R. The FL filter 421L gives the audio signal of the FL channel the HRTF of the path from the position of the virtual sound source VSFL, in front of the listener on the left side (see FIG. 8), to the left ear. The FL filter 421R gives the audio signal of the FL channel the HRTF of the path from the position of the virtual sound source VSFL to the right ear. Similarly, for each channel, the HRTFs from the position of the virtual sound source around the listener to each ear are given.
[0063]
The addition unit 426L combines audio signals to which HRTFs have been added by the FL filter
421L, the FR filter 422L, the C filter 423L, the SL filter 424L, and the SR filter 425L,
respectively, and outputs the result as the audio signal VL to the correction unit 51. The addition
unit 426R combines audio signals to which HRTFs are respectively added by the FL filter 421R,
the FR filter 422R, the C filter 423R, the SL filter 424R, and the SR filter 425R, and outputs the
combined audio signal as an audio signal VR to the correction unit 51.
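A minimal sketch of this localization-adding and summing step is shown below: each channel is convolved with the left-ear and right-ear head-related impulse responses for its virtual-source position, and the results are summed into VL and VR. The HRIR data are placeholders, and equal signal lengths across channels are assumed.

# Sketch of the localization addition unit 42: convolve each channel with its
# left- and right-ear HRIRs and sum into VL and VR (sent on to the correction unit 51).
# The HRIR arrays are placeholders; equal signal lengths are assumed.
import numpy as np
from scipy.signal import fftconvolve

def add_localization(channels, hrirs):
    """channels: {"FL": samples, ...}; hrirs: {"FL": (hrir_left, hrir_right), ...}"""
    vl, vr = 0.0, 0.0
    for name, x in channels.items():
        h_l, h_r = hrirs[name]
        vl = vl + fftconvolve(x, h_l)   # contribution heard at the left ear
        vr = vr + fftconvolve(x, h_r)   # contribution heard at the right ear
    return vl, vr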
[0064]
The correction unit 51 performs crosstalk cancellation processing. FIG. 7B is a block diagram showing the configuration of the correction unit 51. The correction unit 51 includes a direct correction unit 511L, a direct correction unit 511R, a cross correction unit 512L, and a cross correction unit 512R.
[0065]
The audio signal VL is input to the direct correction unit 511L and the cross correction unit
512L. The audio signal VR is input to the direct correction unit 511R and the cross correction
unit 512R.
[0066]
The direct correction unit 511L performs processing that causes the listener to perceive the sound output from the woofer 33L as if it were emitted near the left ear. The direct correction unit 511L is set with filter coefficients such that the frequency characteristic of the sound output from the woofer 33L becomes flat at the position of the left ear. The direct correction unit 511L processes the input audio signal VL with this filter and outputs an audio signal VLD. The direct correction unit 511R is set with filter coefficients such that the frequency characteristic of the sound output from the woofer 33R becomes flat at the position of the right ear. The direct correction unit 511R processes the input audio signal VR with this filter and outputs an audio signal VRD.
[0067]
The cross correction unit 512L is set with filter coefficients that give the frequency characteristic of the sound reaching the right ear from the woofer 33L. The resulting signal (VLC) is inverted in phase by the synthesis unit 52R and emitted from the woofer 33R, which suppresses the sound of the woofer 33L from being heard by the right ear. As a result, the listener perceives the sound emitted from the woofer 33R as if it were emitted near the right ear.
[0068]
The cross correction unit 512R is set with a filter coefficient for giving a frequency characteristic
of sound coming from the woofer 33R to the left ear. The sound (VRC) of the woofer 33R coming
to the left ear is reversed in phase by the synthesis unit 52L and emitted from the woofer 33L,
thereby suppressing the sound of the woofer 33R from being heard by the left ear. As a result,
the sound emitted from the woofer 33L is perceived by the listener as if emitted near the left ear.
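The direct/cross correction and the phase-inverted synthesis described in paragraphs [0066] to [0068] can be sketched as follows. The FIR coefficient arrays are placeholders standing in for the filters set in the direct and cross correction units; this is only an illustrative model of the processing path.

# Sketch of the correction unit 51 and synthesis units 52L/52R: each woofer
# receives its direct-corrected signal minus the other side's crosstalk estimate.
# The FIR coefficient arrays are placeholders, not the device's actual filters.
from scipy.signal import lfilter

def crosstalk_cancel(vl, vr, h_direct_l, h_direct_r, h_cross_l, h_cross_r):
    vld = lfilter(h_direct_l, [1.0], vl)   # 511L: flatten woofer 33L at the left ear
    vrd = lfilter(h_direct_r, [1.0], vr)   # 511R: flatten woofer 33R at the right ear
    vlc = lfilter(h_cross_l, [1.0], vl)    # 512L: estimate of 33L heard at the right ear
    vrc = lfilter(h_cross_r, [1.0], vr)    # 512R: estimate of 33R heard at the left ear
    out_l = vld - vrc                      # synthesis unit 52L: cancel 33R's leakage at the left ear
    out_r = vrd - vlc                      # synthesis unit 52R: cancel 33L's leakage at the right ear
    return out_l, out_r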
[0069]
The audio signal output from the combining unit 52L is input to the delay processing unit 60L.
The audio signal delayed for a predetermined time by the delay processing unit 60L is input to
the addition processing unit 32. Also, the audio signal output from the combining unit 52R is
input to the delay processing unit 60R. The audio signal delayed for a predetermined time by the
delay processing unit 60R is input to the addition processing unit 32.
[0070]
The delay time by the delay processing unit 60L and the delay processing unit 60R is set, for
example, to be longer than the longest delay time among the delay times given by the directivity
control unit of the beamforming processing unit 20. Thus, the sound that causes the virtual
sound source to be perceived does not interfere with the formation of the sound beam. Note that
a delay processing unit may be provided downstream of the beam forming processing unit 20,
and a delay may be added on the audio beam side so that the audio beam does not interfere with
the sound that causes the virtual sound image to be localized.
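The delay rule described here can be written as a one-line helper: the virtual-source path is delayed just beyond the longest delay used by the beam former. The size of the extra margin is an assumption.

# Delay for the delay processing units 60L/60R: slightly longer than the
# largest beam-forming delay so the two paths do not interfere.
# The 1 ms margin is an assumed value.
def virtual_path_delay(beam_delays, margin=0.001):
    """beam_delays: per-unit delays (seconds) used by the directivity control units."""
    return max(beam_delays) + margin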
[0071]
The audio signal output from the delay processing unit 60L is input to the woofer 33L via the addition processing unit 32. The addition processing unit 32 adds the audio signal output from the delay processing unit 60L and the audio signal output from the HPF 30L. Similarly, the audio signal output from the delay processing unit 60R is input to the woofer 33R via the addition processing unit 32, which adds it to the audio signal output from the HPF 30R. The addition processing unit 32 may include a gain adjustment unit that changes the addition ratio of these audio signals.
[0072]
Next, a sound field generated by the array speaker device 2 will be described with reference to
FIG. In FIG. 8 (A), solid arrows indicate the paths of the sound beam output from the array
speaker device 2. In FIG. 8A, a white star indicates the position of the sound source generated by
the sound beam, and a black star indicates the position of the virtual sound source.
[0073]
In the example of FIG. 8(A), the array speaker apparatus 2 outputs five sound beams, as in the example shown in FIG. 5(C). The audio signal of the C channel is formed into an audio beam focused at a position behind the array speaker device 2. Thus, the listener perceives that the sound source SC
is in front of the listener.
[0074]
Similarly, the audio signal of the FL channel is formed into an audio beam focused at a position on the front left wall of the room R, so the listener perceives the sound source SFL on the front left wall. The audio signal of the FR channel is formed into an audio beam focused at a position on the front right wall of the room R, so the listener perceives the sound source SFR on the front right wall. The audio signal of the SL channel is formed into an audio beam focused at a position on the rear left wall of the room R, so the listener perceives the sound source SSL on the rear left wall. The audio signal of the SR channel is formed into an audio beam focused at a position on the rear right wall of the room R, so the listener perceives the sound source SSR on the rear right wall.
[0075]
Further, the localization adding unit 42 sets the position of the virtual sound source at
substantially the same position as the positions of the sound sources SFL, SFR, SC, SSL, and SSR.
Therefore, as shown in FIG. 8A, the listener perceives the virtual sound sources VSC, VSFL, VSFR,
VSSL, and VSSR in substantially the same positions as the positions of the sound sources SFL,
SFR, SC, SSL, and SSR. The position of the virtual sound source does not have to be set to the
same position as the focal point of the audio beam, and may be a predetermined direction. For
example, the virtual sound source VSFL is set to 30 degrees left, the virtual sound source VSFR is
set to 30 degrees right, the virtual sound source VSSL is set to 120 degrees left, and the virtual
sound source VSSR is set to 120 degrees right.
[0076]
Thereby, the array speaker apparatus 2 can compensate for the sense of localization produced by the sound beams with the virtual sound sources, and can improve the sense of localization compared with using only the sound beams or only the virtual sound sources. In particular, since the sound sources SSL and SSR of the SL channel and the SR channel are generated by reflecting the sound beams twice off the walls, a clear sense of localization may not be obtained for them compared with the front channels. However, the array speaker device 2 compensates for the sense of localization with the virtual sound source VSSL and the virtual sound source VSSR, which are generated by the woofer 33L and the woofer 33R with sounds that reach the listener's ears directly, so the sense of localization is not lost.
[0077]
Then, as described above, the control unit 35 of the array speaker device 2 detects the difference in level at which the sound beam of each channel reaches the listening position and, based on the detected level difference, sets the levels of the gain adjustment unit 43FL, the gain adjustment unit 43FR, the gain adjustment unit 43C, the gain adjustment unit 43SL, and the gain adjustment unit 43SR of the level adjustment unit 43. In this way, the level ratio between each channel of the localization addition unit 42 and each channel of the audio beam is adjusted.
[0078]
For example, on the right wall of the room R in FIG. 8A, a curtain 501 with low acoustic
reflectance is present, and it is difficult for the sound beam to be reflected. Therefore, as shown
in FIG. 8 (B), the level of the peak of the angle θa4 is lower than the levels of peaks of other
angles. In this case, the level of the sound beam reaching the listening position of the SR channel
is lower than that of the other channels.
[0079]
Therefore, the control unit 35 sets the gain of the gain adjustment unit 43SR higher than that of the other gain adjustment units; for the SR channel, the level of the localization addition unit is thus set higher than for the other channels, strengthening the localization effect of the virtual sound source. In this way, the control unit 35 sets the level ratio in the level adjustment unit 43 based on the level differences detected with the test sound beam. As a result, the virtual sound source strongly compensates for the sense of localization of the channel whose localization by the audio beam is weak. Even in this case, since the audio beam itself is still output, the sense of localization due to the audio beam remains, so no discomfort arises from a specific channel being reproduced by the virtual sound source alone, and the auditory connection between the channels is maintained.
[0080]
In addition, as shown in FIG. 8C, even when the number of detected peaks is smaller than the number of channels, it is preferable to assign an output angle of the audio beam to every channel. For example, in FIG. 8C no peak is detected at the angle to which the SR channel should be assigned, so the SR channel is assigned the angle θa4, which is symmetrical to the angle θa2 about the angle θa3 of the highest level, and the SR-channel audio beam is output at that angle. The control unit 35 then sets the gain of the gain adjustment unit 43SR high according to the level difference between the detection level G1 at the angle θa3 and the detection level G2 at the angle θa4. As a result, even for a channel in which the localization effect of the virtual sound source is set strongly, the audio beam itself is output, so the sound of that channel's beam can still be heard to some extent. Therefore, no discomfort arises from a specific channel being reproduced only by the virtual sound source, and the auditory connection between the channels is maintained.
[0081]
In this embodiment, the level ratio between each channel of the localization addition unit 42 and each channel of the audio beam is adjusted by adjusting the gain of each gain adjustment unit of the level adjustment unit 43. Alternatively, the level ratio between each channel of the localization addition unit and each channel of the audio beam may be adjusted by adjusting the gains of the gain adjustment unit 18FL, the gain adjustment unit 18FR, the gain adjustment unit 18C, the gain adjustment unit 18SL, and the gain adjustment unit 18SR in the beam forming processing unit 20.
[0082]
Next, FIG. 9A is a block diagram showing a configuration of an array speaker device 2A
according to a first modification.
The same components as those of the array speaker device 2 shown in FIG. 2 will be assigned the
same reference numerals and descriptions thereof will be omitted.
[0083]
The array speaker device 2A further includes a volume setting receiving unit 77. The volume
setting receiving unit 77 receives the master volume setting from the listener. The control unit
35 adjusts the gain of a power amplifier (for example, an analog amplifier) (not shown) according
to the master volume setting received from the volume setting receiving unit 77. Thereby, the
volume of all the speaker units is changed collectively.
[0084]
Then, in accordance with the master volume setting received from the volume setting receiving unit 77, the control unit 35 sets the gains of all the gain adjusting units in the level adjusting unit 43. For example, as shown in FIG. 9B, the gains of all the gain adjusting units in the level adjusting unit 43 are set higher as the value of the master volume decreases. This is because, when the master volume setting is lowered, the level of the sound beams reflected from the walls falls and the surround feeling may be reduced. The control unit 35 therefore sets the level of the localization addition unit 42 higher as the value of the master volume decreases, maintaining the surround feeling by strengthening the localization effect of the virtual sound source.
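The trend shown in FIG. 9(B) can be sketched as a simple mapping from the master volume setting to a common gain for the level adjustment unit 43. Only the monotonic trend comes from the description; the linear shape and the end-point values are assumptions.

# Sketch of FIG. 9(B): lower master volume -> higher localization gain.
# The linear curve and the end points g_min/g_max are assumptions.
def localization_gain_for_volume(master_volume, g_min=0.2, g_max=1.0):
    """master_volume normalized to [0.0, 1.0]; returns a gain for 43FL..43SR."""
    v = min(max(master_volume, 0.0), 1.0)
    return g_max - (g_max - g_min) * v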
[0085]
Next, FIG. 10A is a block diagram showing a configuration of an array speaker device 2B
according to a second modification. The same components as those of the array speaker device 2
shown in FIG. 2 will be assigned the same reference numerals and descriptions thereof will be
omitted.
[0086]
In the array speaker apparatus 2B, the control unit 35 inputs the audio signal of each channel,
and compares the level of the audio signal of each channel (functions as a comparison unit). The
control unit 35 dynamically sets the gain of each gain adjustment unit in the level adjustment
unit 43 based on the comparison result.
[0087]
For example, when a high-level signal is input on a specific channel, it can be judged that the signal of that channel carries a sound source, so the gain adjustment unit of that channel is set high to give it a clear sense of localization. Further, for example, as shown in FIG. 10B, the control unit 35 may calculate the level ratio between the front channels and the surround channels (the front level ratio) and set the gain of each gain adjustment unit in the level adjustment unit 43 according to the front level ratio. That is, when the surround channel level is relatively high, the control unit 35 sets the gains of the level adjustment unit 43 (the gain adjustment unit 43SL and the gain adjustment unit 43SR) high, and when the surround channel level is relatively low, it sets those gains low. In this way, when the level of the surround channels becomes relatively high, the localization effect of the virtual sound source is strengthened and the effect of the surround channels is emphasized. Conversely, when the level of the front channels is relatively high, the level of the audio beams is set high so that the effect of the front channels produced by the audio beams is emphasized, and the listening area in which a sense of localization can be obtained becomes relatively wide compared with virtual sound source localization.
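A minimal sketch of this comparison step, under assumptions about the level measure (RMS over a block) and the gain curve, could look like the following.

# Sketch of the comparison means in modification 2: measure the surround level
# relative to the front level and derive a gain for 43SL/43SR. The RMS measure
# and the base/slope values are illustrative assumptions.
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def surround_gain(front_block, surround_block, base=0.5, slope=0.5):
    """front_block, surround_block: sample blocks of the summed front and surround channels."""
    ratio = rms(surround_block) / (rms(front_block) + 1e-12)   # relative surround level
    return base + slope * min(ratio, 1.0)   # higher surround level -> higher 43SL/43SR gain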
[0088]
Note that if the gains of the level adjustment unit 43 (the gain adjustment unit 43SL and the gain adjustment unit 43SR) are lowered when the surround channel level becomes relatively low, the surround channels reproduced by the audio beams may become even harder to hear. Therefore, the gains of the level adjustment unit 43 (the gain adjustment unit 43SL and the gain adjustment unit 43SR) may instead be set high when the surround channel level is relatively low and reduced when the surround channel level is relatively high.
[0089]
The level comparison between channels and the calculation of the level ratio between the front and surround channels may be performed over the entire frequency range, or the audio signal of each channel may be divided into predetermined bands and the levels of the divided bands compared, or the level ratio between the front channels and the surround channels may be calculated per band.
For example, since the lower limit of the reproduction frequency of the speaker units 21A to 21P that output the audio beams is 200 Hz, the level ratio between the front channels and the surround channels may be calculated in the band of 200 Hz and above.
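Band-limiting the comparison to the beam-capable range can be sketched as a 200 Hz high-pass applied before measuring the level; the filter order and the sample rate are assumptions.

# Measure a channel's level only above the 200 Hz lower limit of the array units.
# Filter order and sample rate are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

def level_above_200hz(x, fs=48000, order=4):
    b, a = butter(order, 200, btype="highpass", fs=fs)
    return float(np.sqrt(np.mean(np.square(lfilter(b, a, x)))))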
[0090]
Next, FIG. 11A is a diagram showing an array speaker apparatus 2C according to a third
modification. The description of the configuration overlapping with the array speaker device 2 is
omitted.
[0091]
The array speaker device 2C is different from the array speaker device 2 in that the sound output
from the woofer 33L and the woofer 33R is output from the speaker unit 21A and the speaker
unit 21P, respectively.
[0092]
The array speaker apparatus 2C outputs a sound causing the virtual sound source to be
perceived from the speaker units 21A and 21P at both ends of the speaker units 21A to 21P.
[0093]
The speaker unit 21A and the speaker unit 21P are the speaker units arranged at the two ends of the array, on the left side and the right side, respectively, as viewed from the listener.
They are therefore suited to outputting the sound of the L channel and the R channel, respectively, and are suitable as speaker units for outputting the sounds that cause the virtual sound sources to be perceived.
[0094]
Moreover, the array speaker apparatus 2 need not house all of the speaker units 21A to 21P, the woofer 33L, and the woofer 33R in a single casing.
For example, as in the speaker set 2D illustrated in FIG. 11B, the speaker units may be provided in individual casings arranged side by side.
[0095]
In any case, any configuration in which the input audio signals of a plurality of channels are delayed and distributed to a plurality of speakers, and in which any of the input audio signals of the plurality of channels is subjected to a filtering process based on a head-related transfer function and input to the speakers, belongs to the technical scope of the present invention.
[0096]
DESCRIPTION OF SYMBOLS 1: AV system; 2: array speaker apparatus; 3: subwoofer; 4: television; 7: microphone; 10: decoder; 11: input unit; 14, 15: filter processing units; 18C, 18FL, 18FR, 18SL, 18SR: gain adjustment units; 20: beam forming processing unit; 21A to 21P: speaker units; 32: addition processing unit; 33L, 33R: woofers; 35: control unit; 40: virtual processing unit; 42: localization addition unit; 43: level adjustment unit; 43C, 43FL, 43FR, 43SL, 43SR: gain adjustment units; 51: correction unit