Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2011254243
PROBLEM TO BE SOLVED: To provide a wave field synthesis technique that reproduces the wavefront on the sound-source side of a microphone array.
SOLUTION: A frequency domain conversion unit 1 converts the sound signals collected by a microphone array into frequency domain signals by Fourier transform. A window function unit 2 multiplies the frequency domain signals by a window function to generate windowed frequency domain signals. A space inverse Fourier transform unit 3 calculates an angular spectrum representation of the windowed frequency domain signals by an inverse Fourier transform over space. A space differentiation unit 4 calculates a spatially differentiated angular spectrum representation, which is the product of the angular spectrum representation and a coefficient corresponding to differentiation in the direction perpendicular to the microphone array plane. A space Fourier transform unit 5 transforms the spatially differentiated angular spectrum representation into frequency domain signals by a Fourier transform over space. A time domain conversion unit 6 transforms the frequency domain signals obtained by the two-dimensional Fourier transform over space into time domain signals by inverse Fourier transform. [Selected figure] Figure 1
Sound field sound collecting and reproducing apparatus, method and program
[0001]
The present invention relates to a wave field synthesis technique in which a sound signal is
collected by a microphone array installed in a certain sound field, and the sound field is
reproduced by a speaker array using the sound signal.
[0002]
A wave field synthesis technique is known that reproduces a sound field using the sound pressure gradient measured with a microphone array composed of microphones having a dipole characteristic (see, for example, Non-Patent Literature 1).
[0003]
A microphone m1 with a dipole characteristic can be constructed, for example, from two monopole microphones m2, as shown in FIG. 14.
In this case, the difference between the sound signals collected by the two monopole microphones m2 is used as the sound signal collected by the dipole microphone m1.
In FIG. 14, the midpoint of the two monopole microphones m2 is the position of the dipole microphone m1, and the plane M of the microphone array is formed at this position. The straight line connecting the two monopole microphones m2 is orthogonal to the plane M of the microphone array.
[0004]
Sascha Spors, Rudolf Rabenstein, and Jens Ahrens, “The Theory of Wave Field Synthesis Revisited,” 124th Convention of the Audio Engineering Society, Amsterdam, May 17-20, 2008.
[0005]
However, as shown in FIG. 14, when a microphone with a dipole characteristic is constructed from two monopole microphones, the number of required microphones increases. In addition, since the sound pressure gradient of waves whose wavelength is close to the distance between the two monopole microphones cannot be acquired, there is a problem in that dips can appear in the frequency characteristic.
[0006]
In order to solve the above problems, an angular spectrum representation of the amplitude distribution of the sound signals collected by a microphone array is calculated by an inverse Fourier transform over space.
A spatially differentiated angular spectrum representation is then calculated as the product of the angular spectrum representation and a coefficient corresponding to differentiation in the direction perpendicular to the microphone array plane.
The spatially differentiated angular spectrum representation is finally transformed into the amplitude distribution of the sound signals by a Fourier transform over space.
[0007]
Compared with the case where a dipole-characteristic microphone is constructed from two monopole microphones, sound field collection and reproduction can be performed with fewer microphones. Moreover, since dipole-characteristic microphones built from two monopole microphones are not used, the problem of being unable to acquire the sound pressure gradient of waves whose wavelength is close to the spacing of the two monopole microphones does not arise, and dips do not occur in the frequency characteristic.
[0008]
FIG. 1 is a functional block diagram of an example of the sound field collection and reproduction apparatus according to the first embodiment.
FIG. 2 is a view for explaining an example arrangement of the microphone array and the speaker array of the sound field collection and reproduction apparatus according to the first embodiment.
FIG. 3 is a functional block diagram of an example of the sound field collection and reproduction apparatus according to the second embodiment.
FIG. 4 is a view for explaining an example arrangement of the microphone array and the speaker array of the sound field collection and reproduction apparatus according to the second embodiment.
FIG. 5 is a flowchart showing an example of the sound field collection and reproduction method.
FIG. 6 is a view showing the conditions of a simulation.
FIG. 7 is a view showing the distribution of the sound pressure gradient at a certain time obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 8 is a view showing the distribution of the sound pressure gradient at a certain time obtained by the conventional method using dipole-characteristic microphones.
FIG. 9 is a view showing the distribution of the average of the sound pressure gradient obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 10 is a view showing the distribution of the average of the sound pressure gradient obtained by the conventional method using dipole-characteristic microphones.
FIG. 11 is a view showing the conditions of a simulation.
FIG. 12 is a view showing the frequency characteristics at both ears in the reproduced sound field obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 13 is a view showing the frequency characteristics at both ears in the reproduced sound field obtained by the conventional method using dipole-characteristic microphones.
FIG. 14 is a view for explaining an example of a dipole-characteristic microphone.
[0009]
Hereinafter, an embodiment of the present invention will be described with reference to the
drawings.
[0010]
First Embodiment: The sound field collection and reproduction apparatus and method of the first embodiment calculate, by the method described below, the sound pressure gradient at the position z = z1 using the sound signals collected by the two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny composed of Nx × Ny omnidirectional microphones arranged at the position z = z1 of the first room shown in FIG. 2.
Then, using the calculated sound pressure gradients, the sound field is reproduced by the two-dimensional speaker arrays S1-1, S2-1, ..., SNx-Ny composed of Nx × Ny monopole-characteristic speakers arranged in the second room.
[0011]
Nx and Ny are arbitrary integers. The number of microphones constituting the two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny is the same as the number of speakers constituting the two-dimensional speaker arrays S1-1, S2-1, ..., SNx-Ny. The size of the two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny is the same as the size of the two-dimensional speaker arrays S1-1, S2-1, ..., SNx-Ny. The position of each microphone Mi-j in the two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny is desirably the same as the position of the corresponding speaker Si-j in the two-dimensional speaker arrays S1-1, S2-1, ..., SNx-Ny, but may be different. If the positions are the same, the sound field can be reproduced more faithfully.
[0012]
Let rs = (xi, yj, z1) denote the positions of the microphones constituting the two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny arranged at the position z = z1 in the first room.
[0013]
As shown in FIG. 1, the sound field collection and reproduction apparatus according to the first embodiment includes, for example, a frequency domain conversion unit 1, a window function unit 2, a space inverse Fourier transform unit 3, a space differentiation unit 4, a space Fourier transform unit 5, and a time domain conversion unit 6, and performs the process shown by the solid line in FIG. 5.
[0014]
The two-dimensional microphone arrays M1-1, M2-1, ..., MNx-Ny arranged at the position z = z1 of the first room pick up the sound emitted by the sound source S in the first room and generate time domain sound signals.
The generated sound signals are sent to the frequency domain conversion unit 1.
The sound signal at time t collected by the microphone Mi-j at rs = (xi, yj, z1) is denoted as f(i, j, t).
[0015]
The frequency domain conversion unit 1 Fourier-transforms the sound signals f(i, j, t) collected by the microphone arrays M1-1, M2-1, ..., MNx-Ny into frequency domain signals F(i, j, ω) (step S1). The generated frequency domain signals F(i, j, ω) are sent to the window function unit 2. ω is a frequency. For example, the frequency domain signals F(i, j, ω) are generated by a short-time discrete Fourier transform. Of course, the frequency domain signals F(i, j, ω) may be generated by another existing method.
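As an illustration only, the following sketch converts the collected signals to the frequency domain with a short-time discrete Fourier transform, assuming the signals are held in a NumPy array of shape (Nx, Ny, number of samples); the frame length, hop size, and analysis window are hypothetical choices, not values prescribed by the patent.

```python
import numpy as np

def to_frequency_domain(f, frame_len=1024, hop=512):
    """Short-time DFT of the microphone signals f(i, j, t) (step S1).

    f: array of shape (Nx, Ny, T) with the time-domain signals.
    Returns F with shape (Nx, Ny, n_frames, frame_len // 2 + 1),
    i.e. one spectrum F(i, j, omega) per analysis frame.
    Frame length, hop size and window are illustrative only.
    """
    Nx, Ny, T = f.shape
    window = np.hanning(frame_len)                       # assumed analysis window
    starts = range(0, T - frame_len + 1, hop)
    frames = [np.fft.rfft(f[:, :, s:s + frame_len] * window, axis=-1) for s in starts]
    return np.stack(frames, axis=2)
```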
[0016]
The window function unit 2 multiplies the frequency domain signals F(i, j, ω) by a window function to generate windowed frequency domain signals Fw(i, j, ω) (step S2). The windowed frequency domain signals Fw(i, j, ω) are sent to the space inverse Fourier transform unit 3. As the window function, a so-called Tukey window function w(i, j) defined by the following equation is used, for example. Ntpr is the number of points to which the taper is applied, and is an integer that is at least 1 and at most Nx and Ny.
[0017]
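The defining equation of the Tukey window w(i, j) is not reproduced in this translation. The sketch below is only a hedged illustration that builds a separable spatial Tukey (tapered-cosine) window over the array indices and applies it to F(i, j, ω); the separable form w(i, j) = wx(i)·wy(j) and the use of SciPy's tukey helper are assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.signal.windows import tukey

def apply_spatial_window(F, Ntpr):
    """Multiply F(i, j, ...) by a spatial Tukey window w(i, j) (step S2).

    A separable window w(i, j) = wx(i) * wy(j) is assumed; alpha is chosen so
    that roughly Ntpr points taper at each end of both array dimensions.
    """
    Nx, Ny = F.shape[0], F.shape[1]
    wx = tukey(Nx, alpha=2.0 * Ntpr / Nx)
    wy = tukey(Ny, alpha=2.0 * Ntpr / Ny)
    w = np.outer(wx, wy)                                 # w(i, j)
    return F * w.reshape(Nx, Ny, *([1] * (F.ndim - 2)))  # broadcast over remaining axes
```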
[0018]
The space inverse Fourier transform unit 3 calculates an angular spectrum representation G(n, m, ω) of the windowed frequency domain signals Fw(i, j, ω) by a two-dimensional inverse Fourier transform over space (step S3).
The angular spectrum representation G(n, m, ω) is calculated for each ω. The calculated angular spectrum representation G(n, m, ω) is sent to the space differentiation unit 4. Specifically, the space inverse Fourier transform unit 3 calculates G(n, m, ω) defined by the following equation (1).
[0019]
[0020]
kx(n) is the angular spectrum in the x-axis direction, n is the index of the angular spectrum kx(n), ky(m) is the angular spectrum in the y-axis direction, and m is the index of the angular spectrum ky(m).
The angular spectrum is the so-called spatial frequency, or wave number.
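Equation (1) itself is not reproduced in this translation. As a sketch only, the two-dimensional inverse Fourier transform over space can be approximated with an inverse FFT over the array indices, as below; treating the discrete spatial inverse DFT as equation (1) up to a normalization constant is an assumption.

```python
import numpy as np

def to_angular_spectrum(Fw, dx, dy):
    """Two-dimensional inverse Fourier transform over space (step S3).

    Fw: windowed frequency domain signals with spatial indices i, j on the
        two leading axes.
    dx, dy: microphone spacings, used only to obtain kx(n) and ky(m).
    Returns the angular spectrum representation G(n, m, ...) together with
    the angular spectra kx(n) and ky(m) (spatial frequencies).
    """
    Nx, Ny = Fw.shape[0], Fw.shape[1]
    G = np.fft.ifft2(Fw, axes=(0, 1))                    # spatial inverse DFT over i and j
    kx = 2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)          # angular spectrum in the x direction
    ky = 2.0 * np.pi * np.fft.fftfreq(Ny, d=dy)          # angular spectrum in the y direction
    return G, kx, ky
```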
[0021]
The space differentiation unit 4 calculates a spatially differentiated angular spectrum representation Gd(n, m, ω), which is the product of the angular spectrum representation G(n, m, ω) and a coefficient D(n, m, ω) corresponding to differentiation in the z-axis direction (step S4). The z-axis direction is the direction perpendicular to the plane of the microphone arrays M1-1, M2-1, ..., MNx-Ny. The calculated spatially differentiated angular spectrum representation Gd(n, m, ω) is sent to the space Fourier transform unit 5.
[0022]
Gd(n, m, ω) = D(n, m, ω) × G(n, m, ω). The coefficient D(n, m, ω) by which the angular spectrum representation G(n, m, ω) is multiplied is defined as follows. k is the wave number, k = ω / c, where c is the speed of sound. The reason why the coefficient D(n, m, ω) corresponds to differentiation in the z-axis direction will be described later.
[0023]
[0024]
In the case of kx(n)^2 + ky(m)^2 > k^2, D(n, m, ω) may be set to 0 because the components with kx(n)^2 + ky(m)^2 > k^2 correspond to so-called evanescent (inhomogeneous) waves and can therefore be ignored.
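Equation (2) is not reproduced in this translation; however, paragraph [0039] states that the coefficient is -j·kz/(4π^2) with kz = sqrt(k^2 - kx(n)^2 - ky(m)^2), and paragraph [0024] sets it to 0 in the evanescent region. A minimal sketch under that reading (the default speed of sound is an illustrative value):

```python
import numpy as np

def differentiation_coefficient(kx, ky, omega, c=343.0):
    """Coefficient D(n, m, omega) corresponding to the z-derivative (step S4).

    D = -j * kz / (4 * pi^2), kz = sqrt(k^2 - kx^2 - ky^2), k = omega / c,
    and D = 0 where kx^2 + ky^2 > k^2 (evanescent components).
    """
    k = omega / c
    KX, KY = np.meshgrid(kx, ky, indexing="ij")          # grids over the indices n and m
    radicand = k ** 2 - KX ** 2 - KY ** 2
    kz = np.sqrt(np.maximum(radicand, 0.0))              # real kz in the propagating region
    D = -1j * kz / (4.0 * np.pi ** 2)
    D[radicand < 0.0] = 0.0                              # ignore evanescent components
    return D

# Gd(n, m, omega) = D(n, m, omega) * G(n, m, omega), applied per frequency bin.
```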
[0025]
The space Fourier transform unit 5 transforms the spatially differentiated angular spectrum representation Gd(n, m, ω) into a frequency domain signal Fd(i, j, ω) by a two-dimensional Fourier transform over space (step S5).
The transformed frequency domain signal Fd(i, j, ω) is sent to the time domain conversion unit 6.
Specifically, the space Fourier transform unit 5 calculates Fd(i, j, ω) defined by the following equation (3).
[0026]
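Equation (3) is likewise not reproduced here. A minimal sketch, assuming the discrete spatial Fourier transform of step S5 is simply the inverse of the spatial transform used in step S3 (up to the same normalization convention):

```python
import numpy as np

def from_angular_spectrum(Gd):
    """Two-dimensional Fourier transform over space (step S5).

    Maps the spatially differentiated angular spectrum Gd(n, m, ...) back to
    the frequency domain signal Fd(i, j, ...) at the array positions.
    """
    return np.fft.fft2(Gd, axes=(0, 1))
```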
[0027]
The time domain conversion unit 6 converts the frequency domain signals Fd(i, j, ω) into time domain signals fd(i, j, t) by inverse Fourier transform (step S6).
As the inverse Fourier transform, an existing method such as a short-time discrete inverse Fourier transform may be used. The time domain signals fd(i, j, t) are sent to the speaker arrays S1-1, S2-1, ..., SNx-Ny. The time domain signals fd(i, j, t) obtained for each frame by the inverse Fourier transform are appropriately shifted and summed (overlap-add) to form continuous time domain signals.
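For completeness, a hedged sketch of the shift-and-sum (overlap-add) step described above, using the same illustrative frame length and hop size as the earlier sketches; a synthesis window and amplitude normalization, which a practical implementation would need, are omitted for brevity.

```python
import numpy as np

def to_time_domain(Fd, frame_len=1024, hop=512):
    """Inverse short-time DFT with overlap-add (step S6).

    Fd: shape (Nx, Ny, n_frames, frame_len // 2 + 1).
    Returns continuous time domain signals fd(i, j, t) of shape (Nx, Ny, T).
    """
    Nx, Ny, n_frames, _ = Fd.shape
    T = (n_frames - 1) * hop + frame_len
    fd = np.zeros((Nx, Ny, T))
    for r in range(n_frames):
        frame = np.fft.irfft(Fd[:, :, r, :], n=frame_len, axis=-1)  # per-frame inverse DFT
        s = r * hop
        fd[:, :, s:s + frame_len] += frame                          # shift and linear sum
    return fd
```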
[0028]
The speaker arrays S1-1, S2-1, ..., SNx-Ny reproduce sound based on the time domain signals fd(i, j, t). More specifically, each speaker Si-j reproduces sound based on the time domain signal fd(i, j, t), for i = 1, ..., Nx and j = 1, ..., Ny. As a result, the wavefront at the position z = z1 of the first room is reproduced by the speaker arrays S1-1, S2-1, ..., SNx-Ny of the second room, and the sound field of the first room can be reproduced in the second room.
[0029]
By performing such processing, it is possible to calculate the sound pressure gradient without using dipole-characteristic microphones. This makes it possible to perform sound field collection and reproduction with fewer microphones than in the case where dipole-characteristic microphones are constructed from two monopole microphones. Moreover, since dipole-characteristic microphones built from two monopole microphones are not used, the problem of being unable to acquire the sound pressure gradient of waves whose wavelength is close to the spacing of the two monopole microphones does not arise, and dips do not occur in the frequency characteristic.
[0030]
"About the reason that the coefficient D (n, m, ω) corresponds to the differentiation in the z-axis
direction" Hereinafter, z1 = 0. The angular spectrum representation G (kx, ky, ω) of the planar
steady-state sound pressure distribution F (x, y, ω) = P (x, y, 0, ω) at z = z1 has the following
sound It is given as a two-dimensional inverse Fourier transform of pressure distribution.
[0031]
[0032]
The angular spectrum representation is an operation that decomposes the sound pressure distribution in a certain plane into a superposition of plane waves; it is known that, once the angular spectrum in a certain plane is obtained, the sound pressure distribution in the whole space can be derived as follows.
[0033]
[0034]
Here, kz is defined as follows, where k = ω / c.
[0035]
[0036]
This is nothing other than deriving the angular spectrum at an arbitrary z by applying a phase shift to the angular spectrum at z = z1 and then performing a two-dimensional Fourier transform over space.
[0037]
What is to be derived here is the sound pressure gradient at z = z1.
This can easily be derived by differentiating in the z-axis direction in the angular spectrum domain, as follows.
[0038]
[0039]
That is, if the sound pressure distribution at z = z1 is obtained, the sound pressure gradient at z = z1 is obtained by applying a two-dimensional inverse Fourier transform over space to it, multiplying the result by -j·kz / (4π^2), and then applying a two-dimensional Fourier transform over space.
Therefore, the coefficient D(n, m, ω) = -j·kz / (4π^2) corresponds to differentiation in the z-axis direction.
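The equations referred to in paragraphs [0030] to [0039] are not reproduced in this translation. The LaTeX block below is a plausible reconstruction from the surrounding prose (the angular spectrum as a spatial inverse Fourier transform, propagation by a phase shift in kz, and differentiation in z bringing down a factor of -j·kz); the sign and normalization conventions, chosen here so that the 1/(4π^2) appears in D, should be checked against the original equations (2), (4), and (5).

```latex
% Plausible reconstruction of the relations described in [0030]-[0039].
\begin{align*}
G(k_x, k_y, \omega) &= \iint F(x, y, \omega)\, e^{\,j(k_x x + k_y y)}\, dx\, dy, \\
P(x, y, z, \omega)  &= \frac{1}{4\pi^2}\iint G(k_x, k_y, \omega)\,
                        e^{-j k_z z}\, e^{-j(k_x x + k_y y)}\, dk_x\, dk_y, \\
k_z &= \sqrt{k^2 - k_x^2 - k_y^2}, \qquad k = \omega / c, \\
\left.\frac{\partial P}{\partial z}\right|_{z=0}
    &= \iint \underbrace{\left(\frac{-j\, k_z}{4\pi^2}\right)}_{D}\,
       G(k_x, k_y, \omega)\, e^{-j(k_x x + k_y y)}\, dk_x\, dk_y.
\end{align*}
```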
[0040]
Note that equation (1) is the discrete expression of equation (4), and equation (3) is the discrete expression of equation (5).
[0041]
Second Embodiment: While the sound field collection and reproduction apparatus and method of the first embodiment use a two-dimensional microphone array and a two-dimensional speaker array, the sound field collection and reproduction apparatus and method of the second embodiment use a one-dimensional microphone array and a one-dimensional speaker array. As a result, the number of microphones, the number of speakers, and the number of channels can be reduced, which makes implementation relatively easy.
[0042]
The sound field collection and reproduction apparatus and method of the second embodiment calculate, by the method described below, the sound pressure gradient at the position z = z1 using the sound signals collected by the one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 composed of Nx omnidirectional microphones arranged at the positions y = y1 and z = z1 of the first room shown in FIG. 4.
Then, using the calculated sound pressure gradients, the sound field is reproduced by the one-dimensional speaker arrays S1-1, S2-1, ..., SNx-1 composed of Nx monopole-characteristic speakers arranged in the second room.
[0043]
Nx is an arbitrary integer.
The number of microphones constituting the one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 is the same as the number of speakers constituting the one-dimensional speaker arrays S1-1, S2-1, ..., SNx-1.
The size of the one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 is the same as the size of the one-dimensional speaker arrays S1-1, S2-1, ..., SNx-1.
The position of each microphone Mi-1 in the one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 is desirably the same as the position of the corresponding speaker Si-1 in the one-dimensional speaker arrays S1-1, S2-1, ..., SNx-1, but may be different.
If the positions are the same, the sound field can be reproduced more faithfully.
[0044]
Let rs = (xi, y1, z1) denote the positions of the microphones constituting the one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 arranged at the position z = z1 in the first room.
[0045]
As shown in FIG. 3, the sound field collection and reproduction apparatus according to the second embodiment includes, for example, a frequency domain conversion unit 1, a window function unit 2, a space inverse Fourier transform unit 3, a space differentiation unit 4, a space Fourier transform unit 5, a time domain conversion unit 6, and a correction filter unit 7, and performs the process shown by the broken line in FIG. 5.
[0046]
The one-dimensional microphone arrays M1-1, M2-1, ..., MNx-1 arranged at the positions y = y1 and z = z1 of the first room pick up the sound emitted by the sound source S in the first room and generate time domain sound signals.
The generated sound signals are sent to the frequency domain conversion unit 1.
The sound signal at time t collected by the microphone Mi-1 at rs = (xi, y1, z1) is denoted as f(i, t).
[0047]
The frequency domain conversion unit 1 Fourier-transforms the sound signals f(i, t) collected by the microphone arrays M1-1, M2-1, ..., MNx-1 into frequency domain signals F(i, ω) (step S1). The generated frequency domain signals F(i, ω) are sent to the window function unit 2. ω is a frequency. For example, the frequency domain signals F(i, ω) are generated by a short-time discrete Fourier transform. Of course, the frequency domain signals F(i, ω) may be generated by another existing method.
[0048]
The window function unit 2 multiplies the frequency domain signals F(i, ω) by a window function to generate windowed frequency domain signals Fw(i, ω) (step S2). The windowed frequency domain signals Fw(i, ω) are sent to the space inverse Fourier transform unit 3. As the window function, for example, a so-called Tukey window function wx(i) defined by the following equation is used. Ntpr is the number of points to which the taper is applied, and is an integer of 1 or more and Nx or less.
[0049]
[0050]
The space inverse Fourier transform unit 3 calculates an angular spectrum representation G(n, ω) of the windowed frequency domain signal Fw(i, ω) by a one-dimensional inverse Fourier transform over space (step S3).
The angular spectrum representation G(n, ω) is calculated for each ω. The calculated angular spectrum representation G(n, ω) is sent to the space differentiation unit 4. Specifically, the space inverse Fourier transform unit 3 calculates G(n, ω) defined by the following equation (6).
[0051]
[0052]
kx (n) is an angular spectrum in the x-axis direction, and n is an index of the angular spectrum kx
(n).
The angular spectrum is the so-called spatial frequency or wave number.
[0053]
The space differentiation unit 4 calculates a spatially differentiated angular spectrum representation Gd(n, ω), which is the product of the angular spectrum representation G(n, ω) and a coefficient D(n, ω) corresponding to differentiation in the z-axis direction (step S4). The calculated spatially differentiated angular spectrum representation Gd(n, ω) is sent to the space Fourier transform unit 5.
[0054]
Gd(n, ω) = D(n, ω) × G(n, ω). The coefficient D(n, ω) by which the angular spectrum representation G(n, ω) is multiplied is defined as follows. k is the wave number, k = ω / c, where c is the speed of sound. The reason why the coefficient D(n, ω) corresponds to differentiation in the z-axis direction will be described later.
[0055]
[0056]
In the case of kx(n)^2 > k^2, D(n, ω) may be set to 0 because the components with kx(n)^2 > k^2 correspond to so-called evanescent (inhomogeneous) waves and can therefore be ignored.
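The defining equation of D(n, ω) is not reproduced in this translation; however, paragraph [0074] gives the coefficient as -j·kz/(2π) with kz = sqrt(k^2 - kx(n)^2), set to 0 in the evanescent region as stated above. A minimal sketch under that reading:

```python
import numpy as np

def differentiation_coefficient_1d(kx, omega, c=343.0):
    """Coefficient D(n, omega) for the one-dimensional array (step S4).

    D = -j * kz / (2 * pi), kz = sqrt(k^2 - kx^2), k = omega / c,
    and D = 0 where kx^2 > k^2 (evanescent components).
    The default speed of sound is an illustrative value.
    """
    k = omega / c
    radicand = k ** 2 - np.asarray(kx) ** 2
    kz = np.sqrt(np.maximum(radicand, 0.0))
    D = -1j * kz / (2.0 * np.pi)
    D[radicand < 0.0] = 0.0
    return D
```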
[0057]
The space Fourier transform unit 5 transforms the spatially differentiated angular spectrum representation Gd(n, ω) into a frequency domain signal Fd(i, ω) by a one-dimensional Fourier transform over space (step S5).
The transformed frequency domain signal Fd(i, ω) is sent to the correction filter unit 7.
Specifically, the space Fourier transform unit 5 calculates Fd(i, ω) defined by the following equation (8).
[0058]
[0059]
The correction filter unit 7 corrects Fd(i, ω) using the following correction filter in order to correct the error caused by the approximation with the one-dimensional (linear) array, and obtains the corrected signal Fd′(i, ω) (step S7).
[0060]
[0061]
In the case of the DC component ω = 0, the denominator jω of the square root on the right-hand side of the above equation becomes 0, so the above correction filter cannot be applied. Therefore, in the case of ω = 0, Fd′(i, ω) = Fd′(i, 0) = 0 is used.
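The correction filter equation is not reproduced in this translation, so the sketch below only mirrors the structure described in paragraphs [0059] to [0061]: a frequency-dependent gain, whose expression contains a square root with jω in the denominator, applied per frequency bin, with the ω = 0 bin forced to zero. The placeholder correction_gain is hypothetical and would have to be replaced by the filter actually defined in the patent.

```python
import numpy as np

def apply_correction_filter(Fd, omegas, correction_gain):
    """Correct Fd(i, omega) for the error of the linear-array approximation (step S7).

    correction_gain(omega): placeholder callable standing in for the patent's
    correction filter (not reproduced here). The DC bin is set to 0,
    since the filter cannot be applied at omega = 0.
    """
    Fd_corrected = np.empty_like(Fd)
    for b, omega in enumerate(omegas):
        if omega == 0.0:
            Fd_corrected[:, b] = 0.0                     # Fd'(i, 0) = 0
        else:
            Fd_corrected[:, b] = correction_gain(omega) * Fd[:, b]
    return Fd_corrected
```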
[0062]
The time domain conversion unit 6 converts the corrected signals Fd′(i, ω) into time domain signals fd(i, t) by inverse Fourier transform (step S6).
As the inverse Fourier transform, an existing method such as a short-time discrete inverse Fourier transform may be used.
The time domain signals fd(i, t) are sent to the speaker arrays S1-1, S2-1, ..., SNx-1. The time domain signals fd(i, t) obtained for each frame by the inverse Fourier transform are appropriately shifted and summed (overlap-add) to form continuous time domain signals.
[0063]
The speaker arrays S1-1, S2-1, ..., SNx-1 reproduce sound based on the time domain signals fd(i, t).
More specifically, each speaker Si-1 reproduces sound based on the time domain signal fd(i, t), for i = 1, ..., Nx. Thereby, the wavefront at the position z = z1 of the first room is reproduced by the speaker arrays S1-1, S2-1, ..., SNx-1 of the second room, and the sound field of the first room can be reproduced in the second room.
[0064]
By performing such processing, it is possible to calculate the sound pressure gradient without using dipole-characteristic microphones. This makes it possible to perform sound field collection and reproduction with fewer microphones than in the case where dipole-characteristic microphones are constructed from two monopole microphones. Moreover, since dipole-characteristic microphones built from two monopole microphones are not used, the problem of being unable to acquire the sound pressure gradient of waves whose wavelength is close to the spacing of the two monopole microphones does not arise, and dips do not occur in the frequency characteristic.
[0065]
“Reason why the coefficient D(n, ω) corresponds to differentiation in the z-axis direction” The reason why the coefficient D(n, ω) corresponds to differentiation in the z-axis direction is described below. In the following, let z1 = 0. The angular spectrum representation G(kx, ω) of the planar steady-state sound pressure distribution F(x, ω) = P(x, 0, ω) at z = z1 is given as the following one-dimensional inverse Fourier transform over space of this sound pressure distribution.
[0066]
[0067]
The angular spectrum representation is an operation that decomposes the sound pressure distribution in a certain plane into a superposition of plane waves; it is known that, once the angular spectrum in a certain plane is obtained, the sound pressure distribution in the whole space can be derived as follows.
[0068]
[0069]
Here, kz is defined as follows, where k = ω / c.
[0070]
[0071]
This is nothing other than deriving the angular spectrum at an arbitrary z by applying a phase shift to the angular spectrum at z = z1 and then performing a one-dimensional Fourier transform over space.
[0072]
What is to be derived here is the sound pressure gradient at z = z1.
This can easily be derived by differentiating in the z-axis direction in the angular spectrum domain, as follows.
[0073]
[0074]
In other words, if the sound pressure distribution at z = z1 is obtained, the sound pressure gradient at z = z1 is obtained by applying a one-dimensional inverse Fourier transform over space to it, multiplying the result by -j·kz / (2π), and then applying a one-dimensional Fourier transform over space.
Therefore, the coefficient D(n, ω) = -j·kz / (2π) corresponds to differentiation in the z-axis direction.
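As in the two-dimensional case, equations (9) and (10) are not reproduced here; the block below is a plausible reconstruction from the surrounding prose, with the 1/(2π) normalization placed so that the stated coefficient -j·kz/(2π) appears in D. It should be checked against the original equations.

```latex
% Plausible reconstruction of the relations described in [0065]-[0074].
\begin{align*}
G(k_x, \omega) &= \int F(x, \omega)\, e^{\,j k_x x}\, dx, \\
P(x, z, \omega) &= \frac{1}{2\pi}\int G(k_x, \omega)\, e^{-j k_z z}\, e^{-j k_x x}\, dk_x, \\
k_z &= \sqrt{k^2 - k_x^2}, \qquad k = \omega / c, \\
\left.\frac{\partial P}{\partial z}\right|_{z=0}
    &= \int \underbrace{\left(\frac{-j\, k_z}{2\pi}\right)}_{D}\,
       G(k_x, \omega)\, e^{-j k_x x}\, dk_x.
\end{align*}
```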
[0075]
Note that equation (6) is the discrete expression of equation (9), and equation (8) is the discrete expression of equation (10).
[0076]
[Simulation Results] As shown in FIG. 6, a simulation was performed in which the sound signals of the sound field in the region z < 0 were collected by a one-dimensional microphone array consisting of 96 microphones arranged linearly at 2 cm intervals at the position z = 0, and the sound field was then reproduced in the region z > 0 from the collected sound signals with a one-dimensional speaker array consisting of 96 speakers arranged linearly at 4 cm intervals at the position z = 0.
The total length of the one-dimensional microphone array and the one-dimensional speaker array is 3.84 m, and the number of channels is 96.
The sound source is a point source located at the position (x, z) = (0.4 m, −1.0 m).
The source signal is a 3000 Hz sine wave.
[0077]
FIG. 7 shows the distribution of the sound pressure gradient at a certain time obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 8 shows the distribution of the sound pressure gradient at a certain time obtained by the conventional method using dipole-characteristic microphones.
FIG. 9 shows the distribution of the average of the sound pressure gradient obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 10 shows the distribution of the average of the sound pressure gradient obtained by the conventional method using dipole-characteristic microphones.
[0078]
In FIG. 7 to FIG. 10, the solid lines are the values obtained by each method, and the dotted lines are the true values. It can be seen from FIG. 7 to FIG. 10 that the sound field collection and reproduction apparatus and method of the second embodiment obtain the distribution of the sound pressure gradient with high accuracy.
[0079]
Next, as shown in FIG. 11, a simulation was performed in which the sound signals of the sound field in the region z < 0 were collected by a microphone array consisting of 192 microphones arranged linearly at 2 cm intervals at the position z = 0, and the sound field was then reproduced in the region z > 0 from the collected sound signals with a speaker array consisting of 192 speakers arranged linearly at 2 cm intervals at the position z = 0. The number of channels is 192. The sound source is a point source located at the position (x, z) = (0.0 m, −1.0 m). Assuming that a head with a diameter of 0.29 m is located at the position (0.68 m, 0.88 m), the frequency characteristics at both ears were calculated.
[0080]
FIG. 12 shows the frequency characteristics at both ears in the reproduced sound field obtained by the sound field collection and reproduction apparatus and method of the second embodiment.
FIG. 13 shows the frequency characteristics at both ears in the reproduced sound field obtained by the conventional method using dipole-characteristic microphones.
[0081]
Comparing FIG. 12 and FIG. 13, it can be seen that dips occur in the frequency characteristics in FIG. 13, whereas in FIG. 12 flat characteristics are obtained in the band below the spatial aliasing frequency.
[0082]
[Modifications, Etc.] Each unit constituting the sound field collection and reproduction apparatus may be provided in either the sound collection apparatus arranged in the first room or the reproduction apparatus arranged in the second room.
In other words, each process of the frequency domain conversion unit 1, the window function unit 2, the space inverse Fourier transform unit 3, the space differentiation unit 4, the space Fourier transform unit 5, the time domain conversion unit 6, and the correction filter unit 7 may be performed by the sound collection apparatus arranged in the first room, or may be performed by the reproduction apparatus arranged in the second room. The signal generated by the sound collection apparatus is transmitted to the reproduction apparatus.
[0083]
The positions of the first room and the second room are not limited to those shown in FIGS. 2 and 4. The first room and the second room may be adjacent to each other or separated from each other. Also, the orientations of the first room and the second room may be arbitrary.
[0084]
The correction filter unit 7 may be omitted. In this case, the time domain conversion unit 6 transforms the frequency domain signal Fd(i, ω) from the space Fourier transform unit 5 into a time domain signal fd(i, t) by inverse Fourier transform.
[0085]
The processing of the window function may be performed in the frequency domain or in the time
domain.
[0086]
Further, the frequency domain conversion unit 1 and the window function unit 2 may be omitted. In this case, the space inverse Fourier transform unit 3 processes the amplitude distribution of the sound signals input from the microphone array, and the space Fourier transform unit 5 generates the amplitude distribution of the sound signals by a Fourier transform over space.
[0087]
The sound field collection and reproduction apparatus can be realized by a computer. In this case, the processing content of each unit of the apparatus is described by a program, and each unit of the apparatus is realized on the computer by executing the program on the computer.
[0088]
The program describing the processing content can be recorded on a computer-readable recording medium. Further, in this embodiment, these apparatuses are configured by executing a predetermined program on a computer, but at least a part of the processing content may be realized as hardware.
[0089]
The present invention is not limited to the above-described embodiment, and various
modifications can be made without departing from the spirit of the present invention.
[0090]
Reference Signs List
1 frequency domain conversion unit
2 window function unit
3 space inverse Fourier transform unit
4 space differentiation unit
5 space Fourier transform unit
6 time domain conversion unit
7 correction filter unit