Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2008271532
A stereo output signal is generated using a small number of omnidirectional microphones. Nondirectional microphones 51 and 52 are connected to a sound processing apparatus 100. The first
processing unit 10 adds the input signal A0 generated by the microphone 51 and the input
signal B0 generated by the microphone 52 to generate an intermediate signal A1. The second
processing unit 20 subtracts the input signal B0 from the input signal A0 to generate an
intermediate signal B1. The addition unit 32 generates the output signal A2 of the right channel
by adding the intermediate signals A1 and B1. The subtracting unit 34 subtracts the intermediate
signal B1 from the intermediate signal A1 to generate an output signal B2 of the left channel.
[Selected figure] Figure 1
Sound processing device and program
[0001]
The present invention relates to a technology for processing input signals from a plurality of
microphones.
[0002]
Techniques for generating stereo output signals based on the input signals from a plurality of microphones have conventionally been proposed.
For example, Patent Document 1 discloses a technique for generating two systems of output signals, a right channel and a left channel, using four omnidirectional microphones. Further, Patent Document 2 discloses a technique for generating two systems of output signals by the MS method using a unidirectional M microphone and a bidirectional S microphone.
[Patent Document 1] JP-A-7-75195 [Patent Document 2] JP-A-7-298387
[0003]
However, the technique of Patent Document 1 requires as many as four microphones, so the increase in size and cost of the device becomes a problem. Moreover, the technique of Patent Document 2 is subject to the restriction that directional (unidirectional and bidirectional) microphones are essential. Against this background, an object of the present invention is to generate stereo output signals using a small number of omnidirectional microphones.
[0004]
In order to solve the above problems, a sound processing apparatus according to one aspect of the present invention generates a stereo first output signal (for example, the output signal A2 of FIG. 1) and second output signal (for example, the output signal B2 of FIG. 1) from a first input signal (for example, the input signal A0 of FIG. 1) generated by a nondirectional first microphone and a second input signal (for example, the input signal B0 of FIG. 1) generated by a nondirectional second microphone separated from the first microphone. The apparatus comprises: first processing means for generating a first intermediate signal (for example, the intermediate signal A1 of FIG. 1) by adding the first input signal and the second input signal; second processing means for generating a second intermediate signal (for example, the intermediate signal B1 of FIG. 1) by subtracting the second input signal from the first input signal; adding means for generating the first output signal by adding the first intermediate signal and the second intermediate signal; and subtracting means for generating the second output signal by subtracting the second intermediate signal from the first intermediate signal.
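The configuration above can be summarized in a few lines of signal arithmetic. The following Python sketch is purely illustrative and is not taken from the description (the function name and the use of NumPy arrays are assumptions); note that in the embodiment described later the two intermediate signals are additionally corrected and amplified before the final addition and subtraction.

```python
import numpy as np

def generate_stereo(a0: np.ndarray, b0: np.ndarray):
    """Sum/difference processing of two omnidirectional microphone signals."""
    a1 = a0 + b0      # first processing means -> intermediate signal A1
    b1 = a0 - b0      # second processing means -> intermediate signal B1
    right = a1 + b1   # adding means -> first output signal (A2, right channel)
    left = a1 - b1    # subtracting means -> second output signal (B2, left channel)
    # Without the corrections and gains of the embodiment (units 14, 24, 16, 26),
    # right == 2*a0 and left == 2*b0; the later stages are what give the
    # intermediate signals their M/S-like character.
    return right, left
```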
[0005]
According to the above configuration, the stereo first output signal and second output signal are generated based on the first intermediate signal, generated from the addition of the first input signal and the second input signal, and the second intermediate signal, generated from the difference between the first input signal and the second input signal. Therefore, the number of microphones required in principle to generate the first output signal and the second output signal is reduced compared to Patent Document 1, and there is the further advantage that the directional microphones required in principle by the technique of Patent Document 2 are unnecessary.
[0006]
The "non-directivity" in the present invention is not limited to the nature in which the directivity
is completely excluded. That is, even if there is a slight difference in the sensitivity of the sound
reception depending on the direction in which the sound wave arrives, the property of being
regarded as substantially omnidirectional is included in the concept of "nondirectionality" in the
present invention. Further, in the above configuration, only the first microphone and the second
microphone are explicitly defined, but input signals from microphones other than the first
microphone and the second microphone may be the first output signal or the second output
signal. It is not the meaning which excludes the structure utilized for a production | generation
from the scope of the present invention.
[0007]
In a preferred aspect of the present invention, the first processing means includes first correction means for reducing the level of the signal obtained by adding the first input signal and the second input signal. According to this aspect, since the level of the signal obtained by adding the first input signal and the second input signal is reduced, it is possible to generate sound with a rich sense of left-right direction. However, the presence or absence of the first correction means is optional in the present invention.
[0008]
In a preferred aspect of the present invention, the second processing means includes second correction means for reducing the level of the high-frequency component of the signal obtained by subtracting the second input signal from the first input signal relative to the level of the low-frequency component (in other words, increasing the level of the low-frequency component relative to the level of the high-frequency component). In this aspect, since the level of the high-frequency component of the difference signal between the first input signal and the second input signal is reduced relative to the level of the low-frequency component, it is possible to generate sound whose level is even over a wide band (in particular, sound with sufficient volume in the low-frequency range). However, the presence or absence of the second correction means is optional in the present invention.
[0009]
It is also preferable to limit the correction by one of the first correction means and the second correction means in order to secure the aural balance of the sound reproduced from the first output signal and the second output signal. For example, in a configuration in which means for multiplying a signal by a predetermined correction value is used as the first correction means or the second correction means, the correction value is limited to a predetermined range.
[0010]
In addition, in order to correct the phase difference between the first intermediate signal and the second intermediate signal (for example, to bring them into phase or into opposite phase), a configuration in which first phase adjustment means (for example, the phase adjustment unit 18 of FIG. 4) for adjusting the phase of the signal obtained by adding the first input signal and the second input signal is installed in the first processing means, a configuration in which second phase adjustment means (for example, the phase adjustment unit 28 of FIG. 4) for adjusting the phase of the signal obtained by subtracting the second input signal from the first input signal is installed in the second processing means, or a configuration in which both the first phase adjustment means and the second phase adjustment means are installed, is also preferable.
[0011]
The sound processing apparatus according to the present invention may be realized by hardware (an electronic circuit) such as a DSP (Digital Signal Processor) dedicated to each process, or by the cooperation of a general-purpose arithmetic processing apparatus such as a CPU (Central Processing Unit) with a program.
A program according to the present invention causes a computer to generate a stereo first output signal and second output signal from a first input signal generated by a nondirectional first microphone and a second input signal generated by a nondirectional second microphone separated from the first microphone, by executing: a first process of generating a first intermediate signal by adding the first input signal and the second input signal; a second process of generating a second intermediate signal by subtracting the second input signal from the first input signal; an addition process of generating the first output signal by adding the first intermediate signal and the second intermediate signal; and a subtraction process of generating the second output signal by subtracting the second intermediate signal from the first intermediate signal. This program achieves the same operation and effects as the sound processing apparatus of the present invention. The program of the present invention may be provided to the user stored on a portable recording medium such as a CD-ROM and installed on the computer, or may be distributed to the computer via a communication network and then installed.
[0012]
The present invention is also specified as a method of generating a stereo first output signal and second output signal from a first input signal generated by a nondirectional first microphone and a second input signal generated by a nondirectional second microphone separated from the first microphone. A sound processing method according to one aspect of the present invention includes the steps of: generating a first intermediate signal by adding the first input signal and the second input signal; generating a second intermediate signal by subtracting the second input signal from the first input signal; generating the first output signal by adding the first intermediate signal and the second intermediate signal; and generating the second output signal by subtracting the second intermediate signal from the first intermediate signal. This method also achieves the same operation and effects as the sound processing apparatus according to the present invention.
[0013]
<A: Sound Processing Device> FIG. 1 is a block diagram showing a configuration of a sound
processing device according to one embodiment of the present invention. As shown in FIG. 1, two
microphones 51 and 52 are connected to the sound processing apparatus 100. The microphones
51 and 52 are arranged at a distance d from each other along a predetermined direction (left and
right direction) D1.
[0014]
Each of the microphones 51 and 52 is a nondirectional microphone in which the sensitivity of
the sound reception is substantially uniform in all directions. The microphone 51 generates an
input signal A0 according to the surrounding sound, and the microphone 52 generates an input
signal B0 according to the surrounding sound. Although the input signals A0 and B0 are actually
converted into digital signals and then input to the sound processing apparatus 100, the
illustration of the A / D converter is omitted in FIG. 1 for the sake of convenience.
[0015]
The sound processor 100 is a device that generates stereo output signals A2 (right channel) and
B2 (left channel) from the input signals A0 and B0. As shown in FIG. 1, the sound processing
apparatus 100 includes a first processing unit 10, a second processing unit 20, an adding unit
32, and a subtracting unit 34. Each element of the sound processing device 100 is realized, for
example, by execution of a program by an arithmetic processing device such as a CPU. However,
the sound processing apparatus 100 is also realized by an electronic circuit such as a DSP
dedicated to audio processing. In addition, a configuration in which each of the above elements is
partially mounted on a separate electronic circuit is also adopted.
[0016]
The first processing unit 10 is means for generating the intermediate signal A1 based on the
addition of the input signals A0 and B0. As shown in FIG. 1, the first processing unit 10 includes
an addition unit 12, a correction unit 14, and an amplification unit 16. The addition unit 12
generates and outputs a signal a1 obtained by adding the input signals A0 and B0.
[0017]
Since a sound wave traveling in the direction perpendicular to the direction D1 (the front direction D0) reaches the microphones 51 and 52 in the same phase, the level of the signal a1 (a1 = A0 + B0) output from the addition unit 12 when the sound wave arrives from the direction D0 is approximately twice the level of the input signal A0 (or B0). On the other hand, since a sound wave coming from the direction D1 arrives at the microphones 51 and 52 with a phase difference (time difference) corresponding to the interval d, the level of the signal a1 output from the addition unit 12 when the sound wave comes from the direction D1 is lower than its level for the direction D0. In other words, the interval d is chosen so that the level of the signal a1 is reduced when the sound waves that listeners mainly hear arrive from the direction D1.
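A rough numerical check of this behaviour for a single tone is sketched below. The spacing d, the speed of sound and the test frequency are assumed values chosen only for illustration; none of them is specified in the description.

```python
import numpy as np

d = 0.02     # assumed microphone spacing in metres (not given in the description)
v = 343.0    # speed of sound in m/s
f = 6000.0   # assumed test frequency in Hz

tau = d / v  # extra travel time to the far microphone for arrival from direction D1

# Magnitude of the sum of two unit-amplitude tones, relative to one input tone.
front = abs(1 + np.exp(-2j * np.pi * f * 0.0))  # direction D0: in phase -> 2.0
side = abs(1 + np.exp(-2j * np.pi * f * tau))   # direction D1: reduced (about 0.9 here)

print(front, side)
```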
[0018]
Therefore, the distribution of the level of the signal a1 when a sound wave of a given intensity arrives from each direction has the pattern illustrated in FIG. 2, in which the radial distance represents the level of the signal a1. As shown in FIG. 2, the distribution of the level of the signal a1 is a pattern in which two circles arranged along the direction D0 are joined. As can be understood from FIG. 2, the microphones 51 and 52 and the addition unit 12 operate in the same manner as a directional microphone (the M microphone in the MS system) pointing in the direction D0. In FIG. 2, the relationship between the level of each of the input signals A0 and B0 and the direction of arrival of the sound wave is shown by broken lines.
[0019]
As shown in FIG. 2, the level of the signal a1 when the sound wave arrives from the direction D0 is approximately twice that of the input signal A0. The correction unit 14 in FIG. 1 generates the signal a2 by reducing the level of the signal a1. More specifically, the correction unit 14 generates the signal a2 by correcting the signal a1 so that its level when the sound wave arrives from the direction D0 becomes equal to the level of the input signal A0 (or B0). A multiplier (amplifier) that multiplies the signal a1 by the coefficient 1/2 is preferably employed as the correction unit 14. The amplification unit 16 amplifies the signal a2 output from the correction unit 14 to generate the intermediate signal A1.
[0020]
The second processing unit 20 in FIG. 1 is means for generating the intermediate signal B1 based
on the difference between the input signals A0 and B0. The second processing unit 20 includes a
subtraction unit 22, a correction unit 24, and an amplification unit 26. The subtraction unit 22
generates and outputs a signal b1 obtained by subtracting the input signal B0 from the input
signal A0.
[0021]
Since sound waves arriving from the direction D0 reach the microphones 51 and 52 in the same phase, the level of the signal b1 (b1 = A0 − B0) output from the subtraction unit 22 becomes zero when the sound wave arrives from the direction D0. On the other hand, since a sound wave coming from the direction D1 arrives at the microphones 51 and 52 with a phase difference corresponding to the interval d, the level of the signal b1 output from the subtraction unit 22 when the sound wave comes from the direction D1 takes a value that depends on the relationship between the wavelength and the distance d. Therefore, the distribution of the level of the signal b1 when a sound wave of a given intensity arrives from each direction has the pattern illustrated in FIG. 3. As can be understood from FIG. 3, the microphones 51 and 52 and the subtraction unit 22 function in the same manner as a bidirectional microphone (the S microphone in the MS system) that points in the direction D1 and has the direction D0 as a dead angle.
[0022]
When a sound wave of wavelength λ0 (λ0 = 2d), equal to twice the distance d between the microphones 51 and 52, arrives from the direction D1, the input signals A0 and B0 have opposite phases. Therefore, in the signal b1, which is the difference between the input signals A0 and B0, the component at the frequency f0 (f0 = v / λ0, where v is the speed of sound) corresponding to the wavelength λ0 has the maximum intensity, and the intensity decreases for components at frequencies lower than f0.
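As a quick worked example of the relation f0 = v / (2d): the spacing d is not given in this description, so the value below is an assumption used only to make the arithmetic concrete.

```python
d = 0.02            # assumed microphone spacing in metres
v = 343.0           # speed of sound in m/s
lambda_0 = 2 * d    # wavelength at which A0 and B0 arrive in opposite phase: 0.04 m
f_0 = v / lambda_0  # f0 = v / lambda0 ~= 8575 Hz, i.e. roughly 8.6 kHz
print(lambda_0, f_0)
```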
[0023]
The correction unit 24 of FIG. 1 generates the signal b2 by correcting the frequency characteristic of the signal b1 described above (an imbalance in which the intensity increases toward the high-frequency side). More specifically, the correction unit 24 generates the signal b2 by reducing the level of the high-frequency component of the signal b1 relative to the level of the low-frequency component (that is, by raising the low-frequency level relative to the high-frequency level). Various filters such as a low-pass filter are preferably adopted as the correction unit 24. The amplification unit 26 amplifies the signal b2 output from the correction unit 24 to generate the intermediate signal B1. The gains of the amplification units 16 and 26 are controlled, for example, according to operations performed by the user on an operating element.
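One possible realization of the correction unit 24 and the amplification unit 26 is sketched below. The description names a low-pass filter only as one example, and the filter order, cutoff frequency and gain here are assumptions rather than values taken from the text.

```python
import numpy as np
from scipy.signal import butter, lfilter

def correction_unit_24(b1: np.ndarray, fs: float,
                       cutoff_hz: float = 1000.0, gain_26: float = 1.0) -> np.ndarray:
    """Reduce the high-frequency level of b1 relative to its low-frequency level,
    then apply the gain of the amplification unit 26 to obtain B1."""
    num, den = butter(1, cutoff_hz / (fs / 2), btype="low")  # first-order low-pass
    b2 = lfilter(num, den, b1)                               # corrected signal b2
    return gain_26 * b2                                      # intermediate signal B1
```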
[0024]
As described above, the intermediate signal A1 is regarded as an output signal of a directional
microphone whose direction axis is the direction D0. More specifically, although the intermediate
signal A1 has a directivity pattern different from that of the M microphone in the MS system, the
general tendency of the directivity is similar and can be substituted as an output signal of the M
microphone. Further, the intermediate signal B1 is regarded as an output signal of a bidirectional microphone whose direction axis is the direction D1. That is, the intermediate signal
B1 can be substituted as an output signal of the S microphone in the MS system. Therefore, as
described below, the sound processing apparatus 100 according to the present embodiment uses
the intermediate signal A1 as a signal corresponding to the output of the M microphone in the
MS system and the intermediate signal B1 as a signal corresponding to the output of the S
microphone. By utilizing it, two stereo output signals A2 and B2 are generated.
[0025]
The addition unit 32 adds the intermediate signal A1 generated by the first processing unit 10
and the intermediate signal B1 generated by the second processing unit 20 to generate an output
signal A2. The output signal A2 is an acoustic signal of the right channel (Rch) corresponding to
the output signal of a directional microphone whose directional axis is the right diagonal
direction of the direction D0. Further, the subtracting unit 34 subtracts the intermediate signal
B1 from the intermediate signal A1 to generate an output signal B2. The output signal B2 is an
acoustic signal of the left channel (Lch) corresponding to an output signal of a directional
microphone whose directional axis is the left diagonal direction in the direction D0. The output
signals A2 and B2 are reproduced as stereo sound waves by being supplied to output devices
such as speakers and headphones. A configuration in which the output signals A2 and B2 are
stored in various recording devices is also adopted.
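Putting the blocks of FIG. 1 together, the whole signal flow can be sketched as follows. This is an illustrative reading of the embodiment, not a definitive implementation: the filter choice, cutoff frequency, gains and sampling rate are assumed values, and the function name is hypothetical.

```python
import numpy as np
from scipy.signal import butter, lfilter

def sound_processing_100(a0, b0, fs, gain_16=1.0, gain_26=1.0, cutoff_hz=1000.0):
    """Sketch of FIG. 1: two omnidirectional inputs in, stereo (left, right) out."""
    a1 = 0.5 * (a0 + b0)                 # addition unit 12 + correction unit 14 (x 1/2)
    A1 = gain_16 * a1                    # amplification unit 16 -> intermediate signal A1

    num, den = butter(1, cutoff_hz / (fs / 2), btype="low")
    b2 = lfilter(num, den, a0 - b0)      # subtraction unit 22 + correction unit 24
    B1 = gain_26 * b2                    # amplification unit 26 -> intermediate signal B1

    A2 = A1 + B1                         # addition unit 32 -> right channel (Rch)
    B2 = A1 - B1                         # subtraction unit 34 -> left channel (Lch)
    return np.stack([B2, A2], axis=-1)   # stereo frames, columns (left, right)
```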
[0026]
As described above, in this embodiment, two nondirectional microphones 51 and 52 can be used
to generate stereo output signals A2 and B2. Therefore, the device can be miniaturized and the
cost can be reduced as compared to the technique of Patent Document 1 that requires four
microphones. In addition, since the microphones 51 and 52 are nondirectional, there is an
advantage that the convenience is high because the restrictions on the position and direction of
each microphone are small as compared with the technique of Patent Document 2 that requires
10-05-2019
9
directional microphones.
[0027]
It should be noted that, since sound waves coming from the direction D0 are particularly emphasized in the signal a1, a configuration in which the correction unit 14 is omitted may generate sound in which the influence of sound waves from the front is excessive. In the present embodiment, since the correction unit 14 reduces the level of the signal a1, it is possible to generate natural sound with a rich sense of left-right direction compared with a configuration in which the correction unit 14 is omitted. However, if a lack of sense of direction in the reproduced sound is not a particular problem, a configuration in which the correction unit 14 is omitted may be employed.
[0028]
Further, since the frequency characteristic of the signal b1 is such that the level on the low-frequency side is reduced compared with that on the high-frequency side, a configuration in which the correction unit 24 is omitted may produce reproduced sound whose volume in the bass range is insufficient. In this embodiment, since the correction unit 24 corrects the signal b1 so that its level becomes even over a wide band, it is possible to generate reproduced sound with sufficient volume in the low range. If a lack of volume in the low range does not pose a particular problem, a configuration in which the correction unit 24 is omitted may be employed.
[0029]
Furthermore, in the present embodiment, since the gains of the amplification units 16 and 26 are variably controlled, the directivity and the sense of direction of the reproduced sound can be adjusted appropriately. For example, by increasing the gain of the amplification unit 26 (or decreasing the gain of the amplification unit 16), it is possible to enhance the left-right sense of direction of the reproduced sound.
[0030]
<B: Modification> Various modifications can be added to the above-described embodiments. It
will be as follows if the aspect of a specific deformation is illustrated. In addition, each aspect
illustrated below may be combined arbitrarily.
[0031]
(1) Modification 1 In the above embodiment, the intermediate signal A1 is generated by adding the input signal A0 from the microphone 51 and the input signal B0 from the microphone 52, but the input signal A0 from the microphone 51 or the input signal B0 from the microphone 52 may be used as it is as the intermediate signal A1. A configuration using a signal supplied from a nondirectional microphone other than the microphones 51 and 52 as the intermediate signal A1 is also possible. However, according to the configuration of FIG. 1, since directivity is imparted to the intermediate signal A1 as illustrated in FIG. 2, there is the advantage that a sufficient sense of direction of the reproduced sound can be secured compared with a configuration that uses a nondirectional intermediate signal A1 as in this modification.
[0032]
(2) Modification 2 When the levels of the input signals A0 and B0 differ significantly, the accuracy of sound-image localization may be degraded. Therefore, a configuration in which a calibration mechanism for matching the levels of the input signals A0 and B0 is installed on the output side of the microphones 51 and 52 is preferably adopted, as sketched below. When the level of one of the input signals A0 and B0 is matched to that of the other, a calibration mechanism need only be provided on the output side of one of the microphones 51 and 52.
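The description does not specify how the levels are matched; one simple possibility, shown here as an assumption, is to scale one input so that its RMS level matches the other, for example using a test signal captured by both microphones.

```python
import numpy as np

def match_levels(a0: np.ndarray, b0: np.ndarray) -> np.ndarray:
    """Scale b0 so that its RMS level matches that of a0 (calibrated B0)."""
    rms_a = np.sqrt(np.mean(a0 ** 2))
    rms_b = np.sqrt(np.mean(b0 ** 2))
    gain = rms_a / rms_b if rms_b > 0 else 1.0
    return gain * b0
```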
[0033]
(3) Modification 3 If the processing in the correction unit 24 takes a long time compared with the processing in the correction unit 14, the intermediate signal B1 (signal b2) is delayed with respect to the intermediate signal A1 (signal a2). In order to suppress the time difference between the two, the first processing unit 10 may be provided with means for delaying the intermediate signal A1 by a time corresponding to the delay in the correction unit 24; for example, a configuration in which the correction unit 14 is given a function of delaying the signal a1 may be employed. Conversely, when the processing in the correction unit 14 takes a long time compared with the processing in the correction unit 24, means for delaying the intermediate signal B1 is installed in the second processing unit 20.
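A minimal sketch of such delay compensation in the sample domain is shown below; the 32-sample figure is an assumed example of the delay introduced by the correction unit 24, not a value from the description.

```python
import numpy as np

def delay_samples(x: np.ndarray, n: int) -> np.ndarray:
    """Pure delay of n samples (prepend zeros, keep the original length)."""
    return np.concatenate([np.zeros(n, dtype=x.dtype), x])[: len(x)]

# e.g. if the correction unit 24 were known to introduce a 32-sample delay
# (an assumed figure), the signal a2 could be aligned with b2 as:
# a2_aligned = delay_samples(a2, 32)
```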
[0034]
(4) Modification 4 The positions and the number of the correction units 14 and 24 and the amplification units 16 and 26 may be changed as appropriate. For example, a configuration in which the amplification unit 16 is installed between the addition unit 12 and the correction unit 14, and a configuration in which the amplification unit 26 is installed between the subtraction unit 22 and the correction unit 24, are also possible. The amplification units 16 and 26 may also be omitted. Furthermore, a configuration in which an amplifier is installed between the microphone 51 or 52 and the first processing unit 10 or the second processing unit 20, or between the first processing unit 10 or the second processing unit 20 and the addition unit 32 or the subtraction unit 34, is also possible.
[0035]
(5) Modification 5 In the above embodiment, due to the phase difference between the input signals A0 and B0 (that is, the time difference with which the sound wave reaches each of the microphones 51 and 52), a phase difference of ±90 degrees, whose sign depends on the direction of arrival of the sound wave, occurs between the signal a1 generated by the addition unit 12 and the signal b1 generated by the subtraction unit 22. Therefore, a configuration in which the phase of the signal is adjusted in at least one of the first processing unit 10 and the second processing unit 20 so that the phase difference between the intermediate signals A1 and B1 is brought to 0 degrees (in phase) or 180 degrees (antiphase) is preferably employed. For example, as shown in FIG. 4, a phase adjustment unit 28 that adjusts the phase of the signal b2 corrected by the correction unit 24 is installed in the second processing unit 20.
[0036]
When a sound wave arrives at the microphones 51 and 52 from the right, the signal a1 lags the signal b1 by 90 degrees. Since the two stereo output signals A2 and B2 are generated by the addition and subtraction of the intermediate signals A1 and B1, it is desirable for the intermediate signals A1 and B1 to be in phase in order to enhance the stereo impression of the reproduced sound; the phase adjustment unit 28 therefore delays the signal b2 corrected by the correction unit 24 by 90 degrees. On the other hand, when a sound wave arrives at the microphones 51 and 52 from the left, the signal a1 leads the signal b1 by 90 degrees. Since it is desirable for the intermediate signals A1 and B1 to be in antiphase in order to emphasize the stereo impression of the reproduced sound, the phase adjustment unit 28 again delays the signal b2 by 90 degrees.
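The description does not prescribe how the 90-degree shift of the phase adjustment unit 28 is realized; one common way, shown here as an assumption, is to take the imaginary part of the analytic signal, which delays every frequency component of a real signal by 90 degrees.

```python
import numpy as np
from scipy.signal import hilbert

def delay_90_degrees(x: np.ndarray) -> np.ndarray:
    """Return x with every frequency component delayed by 90 degrees
    (the imaginary part of the analytic signal, i.e. the Hilbert transform of x)."""
    return np.imag(hilbert(x))
```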
[0037]
Instead of the phase adjustment unit 28 of the second processing unit 20, a configuration in
which the phase adjustment unit 18 illustrated by a broken line in FIG. 4 is installed in the first
processing unit 10 is also preferable. When the phase adjustment unit 18 advances the phase of
the signal a2 by 90 degrees with respect to the signal b2, the phase angles of the intermediate
signals A1 and B1 are corrected to 0 degree (in phase) or 180 degrees (opposite phase). Further,
a configuration in which both of the phase adjustment units 18 and 28 are disposed is also
adopted.
[0038]
In addition, the point at which the phase of the signal is adjusted (the position of the phase adjustment units 18 and 28) is arbitrary. For example, a configuration in which the phase adjustment unit 18 or 28 is placed before the correction unit 14 or 24, or a configuration in which the phase adjustment unit 18 or 28 is placed after the amplification unit 16 or 26, is also possible. According to the configuration of this modification, the phase difference between the intermediate signals A1 and B1 is corrected to 0 degrees (in phase) or 180 degrees (antiphase), so it is possible to generate output signals A2 and B2 that are aurally rich in stereo impression. The amount of adjustment by the phase adjustment units 18 and 28 may be variably controlled; for example, when the stereo impression is overemphasized, the amount of phase adjustment in the phase adjustment units 18 and 28 is reduced. Further, a configuration in which the amount of adjustment (the delay of the signal) in the phase adjustment units 18 and 28 is controlled according to an instruction from the user has the advantage that the user can adjust the stereo impression of the reproduced sound as desired.
[0039]
FIG. 1 is a block diagram showing the configuration of a sound processing apparatus according to one embodiment of the present invention. FIG. 2 is a conceptual diagram for explaining the characteristics of the signal a1. FIG. 3 is a conceptual diagram for explaining the characteristics of the signal b1. FIG. 4 is a block diagram showing the configuration of a sound processing apparatus according to a modification.
Description of Reference Numerals
[0040]
100: sound processing apparatus; 10: first processing unit; 20: second processing unit; 12, 32: addition units; 14, 24: correction units; 16, 26: amplification units; 18, 28: phase adjustment units; 22, 34: subtraction units; 51, 52: microphones.