Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2016092767
Abstract: A target sound source can be localized to a single direction even with two microphones, balancing localization accuracy against cost. A wall portion 4 is erected around two microphones 2a and 2b disposed apart from each other. For two points P1 and P2 that are line-symmetric about the axis L connecting the microphones 2a and 2b, the wall 4 is interposed between only one of the points, P1, and the microphones 2a and 2b. The wall portion 4 has an S shape, and the microphones 2a and 2b are separately disposed on the recess inner sides 4a and 4b. The target sound source is then localized based on the difference between the sound reception signals Sa and Sb output from the microphones 2a and 2b. [Selected figure] Figure 2
Sound processing apparatus and sound processing program
[0001]
The present invention relates to a sound processing apparatus and a sound processing program that localize a target sound source.
[0002]
It is well known that accurate localization of a target sound source improves speech quality, speech-recognition accuracy, and recording quality. In recent years, localization of a target sound source has also come to be regarded as important for improving the accuracy of speaker recognition by robots. Various localization methods have been studied; one example is the MUSIC (MUltiple SIgnal Classification) method (see, for example, Patent Document 1).
[0003]
In the MUSIC method, the sound reception signals output from a microphone array with many elements are Fourier-transformed, and the current correlation matrix is calculated from the resulting sound-reception signal vector and the past correlation matrix. The current correlation matrix is then subjected to eigenvalue decomposition to separate the eigenvector of the maximum eigenvalue from the noise subspace spanned by the remaining eigenvectors. The direction of the sound source is estimated from the phase differences of the sound reception signals output from each microphone relative to a reference microphone, the noise subspace, and the maximum eigenvalue.
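As a concrete illustration of the MUSIC steps just described (a sketch, not the patent's own implementation), the following narrowband two-microphone example builds the correlation matrix for a single noise-free source, takes the noise-subspace vector, and evaluates the pseudo-spectrum. The frequency, microphone spacing, and source direction are assumed values chosen for illustration.

```python
import cmath
import math

f, c, d = 1000.0, 343.0, 0.1          # frequency (Hz), speed of sound (m/s), spacing (m); assumed
theta_src = 40.0                       # true source azimuth (degrees); assumed

def steering(theta_deg):
    # relative phase of a plane wave between the two microphones
    tau = d * math.sin(math.radians(theta_deg)) / c
    return [1.0, cmath.exp(-2j * math.pi * f * tau)]

# Noise-free correlation matrix R = a a^H of the received-signal vector
a = steering(theta_src)
R = [[a[i] * a[j].conjugate() for j in range(2)] for i in range(2)]

# For this rank-one R the signal eigenvector is a itself; in the 2x2 case
# the noise-subspace eigenvector is simply the vector orthogonal to a
norm = math.sqrt(abs(a[0]) ** 2 + abs(a[1]) ** 2)
e_noise = [-a[1].conjugate() / norm, a[0].conjugate() / norm]

def music_spectrum(theta_deg):
    # MUSIC pseudo-spectrum: large where the steering vector is orthogonal
    # to the noise subspace, i.e. at candidate source directions
    v = steering(theta_deg)
    proj = v[0].conjugate() * e_noise[0] + v[1].conjugate() * e_noise[1]
    return 1.0 / max(abs(proj) ** 2, 1e-12)
```

Note that with only two omnidirectional microphones the pseudo-spectrum peaks equally at 40 and 140 degrees (sin 40° = sin 140°), which is exactly the two-direction ambiguity the present document addresses.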
[0004]
Current sound source localization methods of this kind are mainly researched and developed by arranging a large number of microphones and applying statistical analysis, frequency analysis, complex analysis, and the like to the waveform structure of the microphones' sound reception signals. According to common technical knowledge, with at least three microphones the direction of a target sound source on a plane can be determined, and with at least four microphones three-dimensional localization of the target sound source is possible, taking the elevation angle as well as the azimuth angle into consideration.
[0005]
However, with only two microphones arranged at a distance from each other, it is difficult to localize the direction of the target sound source with high accuracy even by known sound source localization methods. As shown in FIG. 9, consider two acoustically and geometrically identical points P1 and P2, i.e., two points that are line-symmetric about the axis of the microphone array. When sound sources that output the same acoustic signal are placed at the two points P1 and P2, the phase difference, amplitude difference, and magnitude relationship of the sound reception signals output by the two microphones 2a and 2b are identical. Therefore, with the two microphones 2a and 2b it is very difficult to narrow the localization candidates for the target sound source down from two directions to one.
[0006]
On the other hand, methods that arrange a large number of microphones give excellent localization accuracy, but the cost increases with the number of microphones, and processing the many sound reception signals by statistical analysis, frequency analysis, complex analysis, and the like requires powerful computing capacity, which raises the cost of the computing circuit. The cost of the sound processing apparatus as a whole is therefore a problem.
[0007]
JP 2008-175733 A
[0008]
The present invention has been made to solve the above problems of the prior art, and its object is to provide a sound processing apparatus and a sound processing program that can localize a target sound source to a single direction with two microphones, balancing localization accuracy against cost.
[0009]
To achieve the above object, the sound processing apparatus according to the present invention comprises: two microphones spaced apart from each other; a wall portion which, for two points in line symmetry about the axis connecting the microphones, is interposed between one of the points and at least one of the microphones; and a sound processing unit that localizes a target sound source based on the difference between the sound reception signals output from the microphones.
[0010]
The wall portion may have a substantially S shape connecting two oppositely opened concave portions, and the two microphones may be separately installed inside the two concave portions formed by the wall portion.
[0011]
The wall portion may comprise a first wall portion erected on one side of the microphone array along the array, and a second wall portion erected on one side of the array so as to intersect the array.
[0012]
The sound processing unit may comprise: a signal suppression unit that forms a gain pattern, i.e., a mapping from azimuth to gain, having at only one azimuth a singular point whose gain is unique compared with the other azimuths, and that changes the signal components derived from each azimuth contained in the sound reception signal according to the gain pattern; a comparison unit that compares the sound reception signal passed through the signal suppression unit with the sound reception signal output from the microphone; and an azimuth estimation unit that selects the azimuth of the target sound source based on the comparison result of the comparison unit. The signal suppression unit forms gain patterns whose singular points are shifted over all directions and changes the sound reception signal output from the microphone with each gain pattern in parallel, and the azimuth estimation unit determines, as the azimuth of the target sound source, the azimuth of the singular point of the gain pattern for which the sound reception signal passed through the signal suppression unit and the sound reception signal output from the microphone do not match.
[0013]
The singular point may have an extremely low gain compared with the other azimuths, so that the gain pattern has an extremely low gain in only one direction compared with the gain in the other directions.
[0014]
The gain pattern may be formed by a filter and a subtractor of the signal suppression unit. Based on the transfer function from one azimuth to one of the microphones and the transfer function from that azimuth to the other microphone, the filter makes the signal component derived from that azimuth contained in the sound reception signals output from both microphones coincide, and the subtractor takes the difference between the sound reception signal of the one microphone passed through the filter and the sound reception signal of the other microphone.
[0015]
A sound source separation unit may be provided that relatively emphasizes the azimuth component derived from the azimuth selected by the azimuth estimation unit and relatively suppresses the azimuth components derived from the other azimuths.
[0016]
Further, to achieve the above object, the sound processing program of the present invention causes a computer, which receives sound reception signals from two microphones around which a wall portion is interposed between at least one of the microphones and one of two points in line symmetry about the axis connecting the two microphones, to function as: a signal suppression unit that forms a gain pattern, i.e., a mapping from azimuth to gain, having at only one azimuth a singular point whose gain is unique compared with the other azimuths, and that changes the signal components derived from each azimuth contained in the sound reception signal according to the gain pattern; a comparison unit that compares the sound reception signal passed through the signal suppression unit with the sound reception signal output from the microphone; and an azimuth estimation unit that selects the azimuth of the target sound source based on the comparison result of the comparison unit. The signal suppression unit forms gain patterns whose singular points are shifted over all directions and changes the sound reception signal output from the microphone with each gain pattern in parallel, and the azimuth estimation unit determines, as the azimuth of the target sound source, the azimuth of the singular point of the gain pattern for which the sound reception signal passed through the signal suppression unit and the sound reception signal output from the microphone do not match.
[0017]
The singular point may have an extremely low gain compared with the other azimuths, so that the gain in only one direction of the gain pattern is extremely low compared with the gain in the other directions.
[0018]
The gain pattern may be formed by filter means and subtraction means of the signal suppression unit. Based on the transfer function from one azimuth to one of the microphones and the transfer function from that azimuth to the other microphone, the filter means makes the signal component derived from that azimuth contained in the sound reception signals output from both microphones coincide, and the subtraction means takes the difference between the sound reception signal of the one microphone passed through the filter means and the sound reception signal of the other microphone.
[0019]
The computer may be made to further function as a sound source separation unit that relatively emphasizes the azimuth component derived from the azimuth selected by the azimuth estimation unit and relatively suppresses the azimuth components derived from the other azimuths.
[0020]
According to the present invention, the target sound source can be localized with high accuracy even with an arrangement of only two microphones, making it possible to achieve both high localization accuracy and low cost.
[0021]
FIG. 1 is a block diagram showing the overall configuration of a sound processing apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the arrangement of the microphones according to the embodiment of the present invention.
FIG. 3 is a polar plot showing the amplitudes of the sound reception signals output by the microphones according to the embodiment of the present invention.
FIG. 4 is a schematic diagram showing the structure of the microphone holder in which the microphones according to the embodiment of the present invention are arranged.
FIG. 5 is a block diagram showing the detailed structure of the sound processing unit of the sound processing apparatus according to the embodiment of the present invention.
FIG. 6 is a polar plot showing a gain pattern generated by the sound processing unit according to the embodiment of the present invention.
FIG. 7 is a polar plot showing the various gain patterns generated by the sound processing apparatus according to the embodiment of the present invention.
FIG. 8 is a schematic diagram showing another structure of the microphone holder in which the microphones according to the embodiment of the present invention are arranged.
FIG. 9 is a block diagram showing a conventional arrangement of two microphones.
[0022]
Hereinafter, the sound processing apparatus according to the present embodiment will be described in detail with reference to the drawings. The sound processing apparatus 1 shown in FIG. 1 collects acoustic signals propagating in an acoustic space, localizes a target sound source, and separates the target sound source. The sound processing apparatus 1 includes two microphones 2a and 2b arranged apart from each other for sound collection, and a sound processing unit 3 for sound source localization and sound source separation.
[0023]
The microphones 2a and 2b are arranged in an acoustic space in which a sound source may exist at any azimuth angle x. The microphones 2a and 2b convert the acoustic signal arriving from azimuth angle x into electrical signals and output sound reception signals Sxa and Sxb. A target sound source, which is the target of sound source localization and sound source separation, is located at azimuth angle t. The microphones 2a and 2b combine the sound reception signals Sta and Stb of the target sound source at azimuth angle t with the sound reception signals Sxa and Sxb of environmental sounds generated at azimuths other than t, and output them as sound reception signals Sa and Sb. In other words, the sound reception signals Sa and Sb are formed by combining the azimuth components, which are the sound reception signals Sxa and Sxb of the respective azimuths.
[0024]
Let Cax be the transfer function from a sound source in the direction of x degrees clockwise with respect to the axis L connecting the microphones 2a and 2b to the microphone 2a, and let Cbx be the transfer function from the sound source in the direction of x degrees to the microphone 2b. The acoustic signal A output from the sound source in the direction of x degrees is convolved with the transfer functions Cax and Cbx in the course of propagation through the acoustic space, yielding the sound reception signals Sxa and Sxb shown in the following equations (1) and (2).
[0025]
Sxa = Cax * A (1)
Sxb = Cbx * A (2)
where * denotes convolution.
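Equations (1) and (2) describe discrete convolution of the source signal with the impulse response of each propagation path. A minimal sketch, using made-up impulse responses for illustration:

```python
def convolve(h, x):
    # discrete convolution: y[n] = sum_k h[k] * x[n - k]
    y = [0.0] * (len(h) + len(x) - 1)
    for k, hk in enumerate(h):
        for n, xn in enumerate(x):
            y[k + n] += hk * xn
    return y

A = [1.0, 0.5, -0.25]     # source signal samples (illustrative)
Cax = [1.0, 0.3]          # hypothetical impulse response, source -> microphone 2a
Cbx = [0.6, 0.2, 0.1]     # hypothetical impulse response, source -> microphone 2b

Sxa = convolve(Cax, A)    # equation (1): Sxa = Cax * A
Sxb = convolve(Cbx, A)    # equation (2): Sxb = Cbx * A
```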
[0026]
The microphones 2a and 2b are arranged such that the transfer functions Cax and Cbx, which are convolved into the acoustic signal A in the course of propagation through the acoustic space, correspond one-to-one to the azimuths. In other words, the azimuth angle is uniquely determined from the array manifold vector. By exploiting the diffraction and reflection of propagating waves, the microphones 2a and 2b are arranged so that acoustic signals follow different transmission paths even from directions that are acoustically and geometrically identical. The acoustically and geometrically identical directions refer to the two points P1 and P2 that are line-symmetric about the axis L of the microphones 2a and 2b as shown in FIG. 9. That the acoustic signals follow different transmission paths means that the sound rays starting from the two points P1 and P2 and reaching the microphones 2a and 2b are geometrically different.
[0027]
For example, as shown in FIG. 2, the sound processing apparatus 1 arranges the wall portion 4 around the microphones 2a and 2b. The wall 4 is curved in a substantially S shape; the substantially S shape includes shapes formed by straight lines and right angles, and an inverted-Z shape. The microphones 2a and 2b are arranged on different recess inner sides 4a and 4b and are surrounded by the wall 4. For example, the microphone 2a is located on the inner side 4a of the left semicircular recess in the upper part of the S shape, and the microphone 2b is located on the inner side 4b of the right semicircular recess in the lower part of the S shape.
[0028]
Thus, the directions in which the wall 4 intervenes and the directions in which the microphones are open from the wall 4 are exactly opposite for the two microphones 2a and 2b. The wall 4 intervenes between the point P1 and the microphone 2a, but not between the point P1 and the microphone 2b; likewise, the wall 4 intervenes between the point P2 and the microphone 2b, but not between the point P2 and the microphone 2a.
[0029]
FIG. 3 shows polar plots of the directivity of the sound reception signals Sxa and Sxb output from the microphones 2a and 2b; the circumferential direction is the azimuth angle of the sound source, and the radial direction is the amplitude normalized to one. As shown in FIG. 3, the ranges in which the directivity is strong are opposite for the microphone 2a and the microphone 2b. That is, the range in which the directivity of the microphone 2a is strong is blocked by the wall 4 as seen from the microphone 2b, which therefore suffers sound shielding and distance attenuation; in this range the sound reception signal Sb of the microphone 2b is smaller than the sound reception signal Sa of the microphone 2a. Conversely, the range in which the directivity of the microphone 2b is strong is blocked by the wall 4 as seen from the microphone 2a, which suffers sound shielding and distance attenuation; in this range the sound reception signal Sa of the microphone 2a is smaller than the sound reception signal Sb of the microphone 2b.
[0030]
Therefore, even if the same acoustic signal is output from two points that are line-symmetric about the axis L connecting the microphones 2a and 2b, a difference arises in the magnitude relationship between the sound reception signal Sa and the sound reception signal Sb; in other words, the difference between the transfer function Cax and the transfer function Cbx differs between the two points. When directions are not geometrically identical, the difference in distance from the sound source to the microphone 2a and to the microphone 2b changes with the azimuth angle, so the magnitude relationship, amplitude difference, and phase difference between the sound reception signals Sxa and Sxb also change.
[0031]
For example, as shown in FIG. 3, in the direction of azimuth angle 10 degrees the sound reception signal Sxa (x = 10) of the microphone 2a has an amplitude of 0.83 and the sound reception signal Sxb (x = 10) of the microphone 2b has an amplitude of 0.63, a ratio of about 1.3. In the direction of azimuth angle 40 degrees, Sxa has an amplitude of 0.92 and Sxb has an amplitude of 0.29, a ratio of about 3.1. In the direction of azimuth angle 70 degrees, Sxa has an amplitude of 0.98 and Sxb has an amplitude of 0.64, a ratio of about 1.5.
[0032]
Also, in the direction of 190 degrees, directly opposite the azimuth angle of 10 degrees, the sound reception signal Sxa (x = 190) of the microphone 2a has an amplitude of 0.61 and the sound reception signal Sxb of the microphone 2b has an amplitude of 0.78, a ratio of about 0.8. In the direction of 220 degrees, directly opposite the azimuth angle of 40 degrees, Sxa has an amplitude of 0.45 and Sxb has an amplitude of 0.98, a ratio of about 0.46. In the direction of 250 degrees, directly opposite the azimuth angle of 70 degrees, Sxa has an amplitude of 0.61 and Sxb has an amplitude of 1.0, a ratio of about 0.6.
[0033]
For the arrangement of the microphones 2a and 2b, as shown in FIG. 4, the sound processing apparatus 1 includes a microphone holder 5 that integrally holds the microphones 2a and 2b. The wall 4 may be formed on the surface of the microphone holder 5, or, as shown in FIG. 4, the microphone holder 5 itself may serve as the wall 4. Two dish-shaped portions 5a and 5b cut into parabolic shapes are formed in the substantially rectangular microphone holder 5. The two dish-shaped portions 5a and 5b are formed separately on opposite side surfaces, at positions differing in the longitudinal direction. In this microphone holder 5, the housing itself functions as the S-shaped wall 4. When the sound processing apparatus 1 is a portable device such as a smartphone or an IC recorder, the wall 4 may be formed on the surface of the housing of the sound processing apparatus 1.
[0034]
The sound processing unit 3 localizes the target sound source to one direction by exploiting the one-to-one correspondence between the pairs of transfer functions Cax and Cbx and the azimuth angles. Further, using the array manifold vector corresponding to the azimuth angle t of the target sound source, the sound processing unit 3 separates the sound source by forming a directivity that relatively emphasizes the sound reception signals Sta and Stb of the target sound source and relatively suppresses the other sound reception signals Sxa and Sxb, such as environmental sounds, by the delay-and-sum method (DS method) or the like.
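The delay-and-sum idea referred to here can be sketched as follows. The microphone spacing, sample rate, and far-field plane-wave delay model are assumptions chosen for illustration, not values from the text:

```python
import math

c, d, fs = 343.0, 0.1, 16000   # speed of sound (m/s); spacing (m) and sample rate (Hz) assumed

def steer_delay(az_deg):
    # samples by which microphone 2b is assumed to lag 2a for a plane
    # wave arriving from azimuth az (far-field model assumption)
    return int(round(d * math.cos(math.radians(az_deg)) / c * fs))

def delay_and_sum(sa, sb, az_deg):
    # align channel b toward azimuth az and average: signals from az add
    # coherently, signals from other directions partially cancel
    k = steer_delay(az_deg)
    out = []
    for n in range(len(sa)):
        m = n + k                              # advance b to undo its lag
        b = sb[m] if 0 <= m < len(sb) else 0.0
        out.append(0.5 * (sa[n] + b))
    return out
```

Steering toward the target azimuth t keeps the coherent target components Sta and Stb at full amplitude while components from other azimuths are averaged out of phase, which is the relative emphasis described above.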
[0035]
As shown in FIG. 5, the sound processing unit 3 includes signal suppression units 31, comparison units 32, and a direction estimation unit 33 for sound source localization. Each signal suppression unit 31 forms a gain pattern 6 shown in FIG. 6. The gain pattern 6 assigns a gain of 1 or less to each azimuth and suppresses each sound reception signal Sxa of each direction contained in the sound reception signal Sa according to that gain. The gain pattern 6 has, at only one azimuth angle, a singular point 61 whose gain is unique compared with the other azimuth angles. For example, the singular point 61 has an extremely low gain compared with the other azimuth angles and is set to almost zero. Hereinafter, the singular point 61 with extremely low gain is referred to as the null point 61.
[0036]
As shown in FIG. 5, the signal suppression unit 31 includes a filter 311, a delay 312, and a subtractor 313. The filter 311 is connected to one microphone 2a and filters its sound reception signal Sa. The delay 312 is connected to the other microphone 2b and outputs its sound reception signal Sb delayed by a predetermined time. The subtractor 313 is connected downstream of the filter 311 and the delay 312, subtracts the filtered signal output from the filter 311 from the sound reception signal Sb output from the delay 312, and outputs the result as a new sound reception signal Sc.
[0037]
The filter 311 has a transfer function Hx and convolves Hx with the sound reception signal Sa output from the microphone 2a. The transfer function Hx is expressed by the following equation (3), where Cax is the transfer function of the path from a sound source coinciding with the azimuth angle x of the null point 61 formed by the signal suppression unit 31 to the microphone 2a, and Cbx is the transfer function of the path from that sound source to the microphone 2b.
[0038]
Hx = Cbx / Cax (3)
[0039]
By convolution with the transfer function Hx, the sound reception signal Sxa arriving at the microphone 2a from the azimuth angle x coinciding with the null point 61 becomes equivalent to the sound reception signal Sxb arriving at the microphone 2b from that azimuth angle x, as shown in equation (4) below. On the other hand, for an azimuth angle y different from the null point 61, the sound reception signal Sya arriving at the microphone 2a does not coincide with the sound reception signal Syb arriving at the microphone 2b even after the convolution, as shown in equation (5) below.
[0040]
Hx * Sxa = Sxb (4)
Hx * Sya ≠ Syb (5)
[0041]
The subtractor 313 subtracts the sound reception signal output from the filter 311 from the sound reception signal Sb output from the delay 312. At this time, the sound reception signal Sxa of the azimuth angle x component that coincides with or approximates the null point 61 of the filter 311 becomes almost the sound reception signal Sxb after passing through the filter 311, so the sound reception signal Sxb of the azimuth angle x component contained in the sound reception signal Sb is reduced to almost zero by the subtraction.
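In a narrowband view, each transfer function reduces to a single complex gain, and the effect of equations (3) to (5) can be checked numerically. The transfer-function values below are made up for illustration; only the structure (filter Hx plus subtractor) follows the text:

```python
import cmath

# hypothetical narrowband transfer-function pairs (Cax, Cbx) per azimuth
C = {10: (0.83, 0.63 * cmath.exp(-0.4j)),
     40: (0.92, 0.29 * cmath.exp(-1.1j)),
     70: (0.98, 0.64 * cmath.exp(-0.7j))}

null_az = 40
Cax, Cbx = C[null_az]
Hx = Cbx / Cax                          # equation (3): filter 311 for this null

residuals = {}
for az, (Ca, Cb) in C.items():
    Sa, Sb = Ca, Cb                     # unit-amplitude source at azimuth az
    residuals[az] = abs(Sb - Hx * Sa)   # subtractor 313 output magnitude
# residuals[40] is numerically zero (equation (4));
# residuals[10] and residuals[70] remain nonzero (equation (5))
```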
[0042]
On the other hand, the sound reception signal Sya of an azimuth angle y component different from the null point 61 of the filter 311 does not become the sound reception signal Syb even after passing through the filter 311, so the sound reception signal Syb of the azimuth angle y component contained in the sound reception signal Sb does not become zero even after the subtraction.
[0043]
The delay 312 gives a fixed delay time to the sound reception signal Sb of the microphone 2b. The delay time of the delay 312 is set equal to or longer than the time required for a sound wave to travel the distance between the microphones 2a and 2b, so that in the time waveform the delayed sound reception signal Sb never precedes the sound reception signal Sa.
[0044]
As shown in FIG. 5, the sound processing unit 3 includes a plurality of signal suppression units 31 arranged in parallel downstream of the microphone 2a; the delay 312 is shared by the plurality of signal suppression units 31. Each signal suppression unit 31 has a different gain pattern 6, and as shown in FIG. 7, the azimuths of the singular points 61 of the gain patterns 6 are shifted so as to cover all directions. That is, each signal suppression unit 31 has a unique transfer function Hx. For example, as in the gain patterns of FIG. 7, a total of 72 signal suppression units 31 may be provided, with the azimuth angle x of the null point 61 shifted by 5 degrees from one unit to the next.
[0045]
The signal suppression units 31 output sound reception signals Sc that are changed in parallel. The null point 61 formed by one of the signal suppression units 31 then coincides with or approximates the azimuth angle t of the target sound source, and that signal suppression unit 31 outputs a sound reception signal Sc in which the sound reception signal Stb of the azimuth angle t component of the target sound source is reduced to almost zero.
[0046]
The comparison unit 32 compares the sound reception signal Sc passed through each signal suppression unit 31 with one sound reception signal Sb output from the microphones 2a and 2b in terms of acoustic output energy or amplitude. If, as a result of the comparison, the difference in acoustic output energy or amplitude between the two sound reception signals Sc and Sb is equal to or greater than a threshold value, a detection signal is output to the direction estimation unit 33. The comparison unit 32 is, for example, a logic gate that outputs a match or a mismatch.
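A sketch of the match/mismatch decision of the comparison unit 32; the threshold value and the use of signal energy as the comparison quantity are assumptions for illustration:

```python
def mismatch(sc, sb, threshold=0.5):
    # comparison unit 32 (sketch): report a mismatch when the energy of
    # the suppressed signal Sc differs from that of the original Sb by
    # more than the fraction `threshold` (an assumed value) of Sb's energy
    e_c = sum(v * v for v in sc)
    e_b = sum(v * v for v in sb)
    return abs(e_b - e_c) >= threshold * e_b
```

A return value of True corresponds to the detection signal sent to the direction estimation unit 33.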
[0047]
Here, a signal suppression unit 31 that forms a null point 61 shifted from the azimuth angle t of the target sound source only attenuates noise from part of the directions and does not reduce the sound reception signal Stb, the component of the target sound source, to zero. That is, such a signal suppression unit 31 outputs a sound reception signal Sc whose acoustic output energy or amplitude does not differ greatly from that of the original sound reception signal Sb. On the other hand, the signal suppression unit 31 that forms the null point 61 coinciding with the azimuth angle t of the target sound source attenuates the sound reception signal Stb, the component of the target sound source contained in the sound reception signal Sb, to almost zero, so the acoustic output energy or amplitude of its output Sc is extremely reduced compared with the original sound reception signal Sb.
[0048]
Therefore, the comparison unit 32 that receives the sound reception signal Sc in which the sound reception signal Stb, the component of the target sound source, has been attenuated to almost zero outputs a mismatch signal to the direction estimation unit 33. On the other hand, a comparison unit 32 that receives a sound reception signal Sc in which only noise from part of the directions has been attenuated outputs a match signal to the direction estimation unit 33.
[0049]
The direction estimation unit 33 treats the mismatch signal from the comparison unit 32 as a detection signal specifying the azimuth angle t of the target sound source, and determines the azimuth of the null point 61 formed by the signal suppression unit 31 paired with the comparison unit 32 from which the detection signal was received to be the azimuth angle t of the target sound source. The target sound source is thereby localized. The sound processing unit 3 then separates the sound source by the delay-and-sum method (DS method) or the like using the array manifold vector corresponding to the azimuth angle of the target sound source.
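The whole localization loop of the preceding paragraphs can be sketched end-to-end. The transfer-function model below is hypothetical: it merely gives each azimuth a distinct (Cax, Cbx) pair with no mirror symmetry, which is the property the wall 4 is there to create; the 72 candidate nulls at 5-degree steps follow the text:

```python
import cmath
import math

def transfer_pair(az_deg):
    # hypothetical model of (Cax, Cbx): amplitudes and phases vary with
    # azimuth and are not mirror-symmetric (illustration only)
    th = math.radians(az_deg)
    Ca = 0.7 + 0.3 * math.cos(th)
    Cb = (0.7 - 0.3 * math.cos(th)) * cmath.exp(-1j * math.sin(th))
    return Ca, Cb

t = 40                                  # true target azimuth (illustrative)
Sa, Sb = transfer_pair(t)               # unit-amplitude target source

def residual(null_az):
    # one signal suppression unit 31: null-forming filter plus subtractor
    Ca, Cb = transfer_pair(null_az)
    Hx = Cb / Ca
    return abs(Sb - Hx * Sa)

candidates = range(0, 360, 5)           # 72 null directions, 5 degrees apart
estimated = min(candidates, key=residual)   # direction estimation unit 33
```

The residual vanishes only where the null matches the target azimuth, so the estimate here comes out at 40 degrees; with a mirror-symmetric model it would vanish at two azimuths and the minimum would be ambiguous.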
[0050]
Such a sound processing unit 3 may be realized as software processing by a CPU or DSP, or may be configured as a dedicated digital circuit. When realized as software processing, a program describing the same processing as the signal suppression units 31, the comparison units 32, the direction estimation unit 33, and the sound source separation unit is stored in an external memory, such as a flash memory, of a computer comprising a CPU, the external memory, and a RAM, is expanded into the RAM as appropriate, and the CPU operates according to the program of the present invention.
[0051]
As described above, in the sound processing apparatus 1, the two microphones 2a and 2b disposed apart from each other are arranged such that propagating waves from acoustically and geometrically identical directions follow different transmission paths. Therefore, the difference between the transfer functions Cax and Cbx is not the same for any two directions, and the direction of the target sound source can be localized to one. Since a large number of microphones is not needed for sound source localization, the accuracy of localization can be maintained while reducing the cost of the sound processing apparatus 1.
[0052]
As a sound source localization method, the direction of the target sound source could, for example, be searched for by a statistical method such as the EM algorithm. In the sound processing apparatus 1 of the present embodiment, however, the unique gain patterns 6, which suppress the signal components of each direction contained in the sound reception signal Sa with gains of zero decibels or less, are swept over the entire circumference. When the direction of the singular point 61 matches the direction of the target sound source, the acoustic output energy or amplitude of the sound reception signal Sc changes drastically, which can be identified by comparison with the original sound reception signals Sa and Sb.
[0053]
With this combination of the arrangement of the microphones 2a and 2b and the localization method, the singular point 61 can be formed in only one direction, so sound source localization can be performed by a simple search method using instantaneous values of the sound reception signals Sa and Sb. Conventional methods that convert to the frequency domain require long frame lengths, many delay elements, and long filter lengths and filter coefficients, making real-time sound source localization difficult for a given processing capacity; in the present embodiment, by contrast, very rapid sound source localization is possible.
[0054]
As for the singular point 61, a beam of increased gain could instead be formed in one direction and its direction swept. In the present embodiment, however, the null point 61 is formed and swept over the entire circumference. Compared with a beam, the null point 61 exhibits a sharper change in gain and a narrower range in which the gain drops extremely, and unlike a beam there are no side lobes around the null point 61. Therefore, forming the null point 61 gives high resolution in sound source localization and small localization errors.
[0055]
The shape of the wall portion 4 is not limited to that of the embodiment, and various shapes can be applied. For example, as shown in FIG. 8, a wall portion 4a and a wall portion 4b may be arranged around the microphones 2a and 2b. The wall portion 4a is erected on one side of the microphones 2a and 2b along their array. The wall portion 4b is erected so as to cross the array of the microphones 2a and 2b, either in front of or behind the row of microphones.
[0056]
The acoustic output energy of sound arriving from an azimuth at which the wall 4a or the wall 4b intervenes before the microphones 2a and 2b is reduced in both sound reception signals Sa and Sb by sound shielding and distance attenuation, compared with sound arriving from an azimuth at which neither wall intervenes. Therefore, even if the same acoustic signal is output from two points that are line-symmetric about the axis L connecting the microphones 2a and 2b, a difference arises in the acoustic output energy. When directions are not geometrically identical, the difference in distance from the sound source to the microphone 2a and to the microphone 2b changes with the azimuth, so the magnitude relationship, amplitude difference, and phase difference between the sound reception signals Sxa and Sxb also change.
[0057]
DESCRIPTION OF SYMBOLS 1: sound processing apparatus; 2a, 2b: microphone; 3: sound processing unit; 31: signal suppression unit; 311: filter; 312: delay; 313: subtractor; 32: comparison unit; 33: direction estimation unit; 4: wall portion; 4a, 4b: recess inner side (also wall portion in FIG. 8); 5: microphone holder; 6: gain pattern; 61: singular point