Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2010028181
To obtain a sound collecting headphone that can give strong directivity toward a point sound source whose sound pressure attenuates with distance, and that can reproduce the sound through its speakers from the same direction as the sound source lies with respect to the microphones.
SOLUTION: The headphone comprises two microphones M1 and M2 arranged at a fixed interval, a delay unit 10 that delays the output signal of at least one microphone, an attenuator 20 that attenuates the output signal level of the delay unit 10, a computing unit 1 that adds the output signal of the other microphone to the output signal of the one microphone after it has passed through the delay unit 10 and the attenuator 20, and two speaker units HP1 and HP2 driven by the output signal of the computing unit 1. The two signals input to the computing unit 1 are made to have no time difference and no amplitude difference. [Selected figure] Figure 1
Sound collecting headphones
[0001]
The present invention relates to a sound collecting headphone that makes use of the principle of the additive (summing) array microphone, with further processing added, so that sound arriving from a specific, desired direction is emphasized and easy to hear.
[0002]
A summing array microphone is known in which, for a plane wave arriving from a specific direction, the time difference between the output signals of two microphones arranged at a predetermined distance, caused by the difference in the arrival times of the sound wave at the two microphones, is compensated by a delay unit, and the aligned signals are then added and output by a computing unit.
FIG. 10 shows the principle of a summing array microphone. As shown in FIG. 10A, two microphones M1 and M2 that pick up the external environment are arranged at a fixed distance, and the audio signals electroacoustically converted by these microphones are input to delay units 1 and 2, respectively. The delay time of delay unit 1 is set to τ, while the delay time of delay unit 2 is set to zero. The output signals of delay units 1 and 2 are added by the computing unit 3, and the summed output is fed to the amplifier 4, which drives the speaker 5.
[0003]
The time taken for sound from an external sound source to reach the two microphones M1 and M2 differs according to the direction from which the sound is incident on the microphones and according to their positions and the distance between them. As shown in FIG. 10A, when the traveling direction of the plane wave is inclined with respect to the vertical plane passing through the microphones M1 and M2 and the sound source is closer to the microphone M1, the sound wave arrives at the microphone M2 later than at the microphone M1, producing a time difference τ. The delay time τ is therefore set in delay unit 1 on the microphone M1 side to correct this time difference, the phases of the output signals of the microphones M1 and M2 are thereby aligned, the signals are added by the computing unit 3, the level is adjusted by the amplifier 4, and the sound is output from the speaker 5. In this way, the output of the summing array microphone is reproduced by the speaker 5 with the sound from the specific direction emphasized.
[0004]
Assume the delay time of delay unit 1 on the microphone M1 side is fixed at τ. As shown in FIG. 10B, when the plane wave travels perpendicular to the vertical plane passing through the two microphones M1 and M2, that is, when the sound source is equidistant from the microphones M1 and M2, the output signal level of the computing unit 3 becomes smaller. There is no time difference between the sound waves reaching the microphones M1 and M2, so their output signals are in phase; however, the output signal of the microphone M1 is delayed by τ in delay unit 1 before it is added to the undelayed output signal of the microphone M2 in the computing unit 3. Because signals with a time difference τ are added, the resulting signal level is lowered, and so is the sound pressure level output from the speaker 5. The summing array microphone thus emphasizes external sound arriving from the direction for which the time difference between the output signals of the microphones M1 and M2 becomes zero after the correction by the delay units. By choosing the delay times of delay units 1 and 2, sound from an intended direction can be emphasized.
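As a rough illustration of the delay-and-sum principle just described, the following sketch (not part of the patent; the sampling rate, microphone spacing, and incidence angle are assumed values) aligns two microphone signals with a delay of τ on the M1 side and adds them, so that a plane wave from the steered direction is reinforced.

```python
# Minimal delay-and-sum sketch for the two-microphone summing array of FIG. 10.
import numpy as np

fs = 48_000             # sampling rate [Hz] (assumed)
c = 343.0               # speed of sound [m/s]
d = 0.17                # microphone spacing [m] (assumed base length)
theta = np.deg2rad(40)  # assumed plane-wave incidence angle from broadside

# Arrival-time difference between M1 and M2 for a plane wave from angle theta.
tau = d * np.sin(theta) / c
tau_samples = int(round(tau * fs))   # integer-sample approximation

# Simulated microphone signals: M2 receives the wave tau later than M1.
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 500 * t)    # 500 Hz test tone
m1 = src
m2 = np.roll(src, tau_samples)       # np.roll wraps around; acceptable for a sketch

# Delay unit 1: delay M1 by tau so both signals are in phase, then add (unit 3).
m1_delayed = np.roll(m1, tau_samples)
summed = m1_delayed + m2             # emphasized output toward the steered direction

print("alignment gain:", np.max(np.abs(summed)) / np.max(np.abs(m1)))
```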
[0005]
As described above, the summing array microphone treats the sound wave from the target sound source as a plane wave and does not take into account the attenuation of sound with distance. Because the delay unit only corrects the time difference between the audio signals converted by the microphones M1 and M2, the arrangement is not well suited to cases where the sound pressure attenuates with distance, as it does for a point sound source; that is, its directivity toward a point sound source at a specific position is weak. In addition, since the output is monaural, the sound image cannot be returned to its original position, i.e., the sound cannot be reproduced by the speakers as coming from the same direction as the sound source lies with respect to the microphones.
[0006]
The present invention provides a sound collecting headphone that solves the above problems by making use of the principle of the summing array microphone and adding further processing to it. As prior art related to such technology, there are the inventions described in the following patent documents. The invention of Patent Document 1 relates to a hearing aid with a noise suppression function: the sounds input from the left and right microphones of a binaural hearing aid are added by an adder to produce a sum signal, and subtracted by a subtractor to produce a difference signal whose low-frequency components are attenuated by a filter so that mainly noise remains; noise suppression means then subtract the noise spectrum from the spectrum of the sum signal, and signal processing means perform the hearing-aid processing. It is said that sound coming from the front direction is emphasized while noise is suppressed.
[0007]
In the invention described in Patent Document 2, first, second and third omnidirectional microphones are arranged on a straight line at a constant interval d. The output of the second microphone, given a phase delay corresponding to the interval d, is subtracted from the output signal of the first microphone to form a first first-order sound-pressure-gradient unidirectional microphone, and a second first-order sound-pressure-gradient unidirectional microphone is formed in the same way from the second and third microphones. A second-order sound-pressure-gradient superdirective microphone is then obtained by taking the difference of the output signals of these two first-order unidirectional microphones. The high-frequency component of the output signal of the superdirective microphone is added to the low-frequency component of the output signal of the first microphone and output. The omnidirectional behaviour in the low-frequency range reduces noise, mechanical vibration and wind effects in the vicinity of the microphone, while in the high-frequency range the arrangement operates as a second-order sound-pressure-gradient superdirective microphone, so that sound from a desired sound source can be collected at a high S/N ratio.
[0008]
JP 2000-261894 A; JP 05-16885 A
[0009]
The invention described in Patent Document 1 emphasizes the sound from the direction to be heard, specifically the sound from the front, and is related to the present invention in that respect, but its principle and specific configuration are different. In addition, it cannot set a plurality of directions in which sound is emphasized, and it cannot restore the localization of the sound source. The invention described in Patent Document 2 appears related to the present invention in that it applies a phase delay to one of a plurality of microphone outputs and performs subtraction, but its principle and specific configuration also differ, and it is not intended to emphasize and collect sound arriving from a specific position.
[0010]
An object of the present invention is to provide a sound collecting headphone that, while making use of the principle of the summing array microphone, can give strong directivity toward a point sound source whose sound pressure attenuates with distance, and can return the sound image to its original position, that is, reproduce the sound through the speakers as coming from the same direction as the sound source lies with respect to the microphones.
[0011]
The present invention comprises two microphones arranged at a fixed interval, a delay unit that delays the output signal of at least one of the microphones, an attenuator that attenuates the output signal level of the delay unit, a computing unit that adds the output signal of the other microphone to the output signal of the one microphone after it has passed through the delay unit and the attenuator, and two speaker units driven by the output signal of the computing unit. Its most important feature is that the two signals input to the computing unit are made equal, with no time difference and no amplitude difference between them.
[0012]
The delay time of the delay unit is preferably set according to the difference in the times at which the sound wave from the sound-source direction to be collected reaches the two microphones, and the attenuation of the attenuator according to the sound-pressure difference between the sound waves reaching the two microphones.
[0013]
It is also preferable to provide a plurality of pairs of delay units and attenuators, each attenuator attenuating the output signal level of its delay unit, in a number corresponding to the number of sound sources to be collected, and to set the coefficients of each pair of delay unit and attenuator according to the time difference of the sound waves reaching the two microphones from the direction of the corresponding sound source.
[0014]
By setting the difference in the times at which the sound wave from a specific sound source reaches the two microphones as the delay time of the delay unit, the time difference between the output signals of the two microphones can be eliminated.
Likewise, because the difference in distance from the specific sound source produces a difference between the sound pressures reaching the two microphones, setting this difference as the attenuation of the attenuator makes the amplitudes of the two microphones' output signals equal.
By feeding these two matched signals, having no time difference and no amplitude difference, into the computing unit and adding them, the sound from the specific sound source is emphasized in the output, and strong directivity can be given toward a point sound source whose sound pressure attenuates with distance.
[0015]
Hereinafter, embodiments of the sound collecting headphone according to the present invention will be described with reference to the drawings.
First, an example of the sound collecting headphone according to the present invention will be described schematically with reference to FIG. 9.
In FIG. 9, the sound collecting headphone comprises ear-covering left and right headphone housings 61 and 62 and a head band 63 connecting them. The left and right housings 61 and 62 have pads, and the elasticity of the head band 63 presses the housings against the left and right sides of the user's head through the pads. One housing 61 contains a microphone M1 that receives acoustic waves from external sound sources and converts them into an audio signal, a speaker unit HP1, and a signal processing circuit 64 that delays and attenuates the output signal of the microphone M1. The other housing 62 contains a microphone M2 and a speaker unit HP2. The left and right microphones M1 and M2 are thus separated by a predetermined distance L determined by the spacing between the left and right housings 61 and 62. The output lines of the left and right microphones M1 and M2 and the drive lines of the speaker units HP1 and HP2 are routed through the head band 63. Although the signal processing circuit 64 is placed only on the left side in the example of FIG. 9, it may be placed on both the left and right sides depending on the embodiment, as described later. Various embodiments of the present invention are described in more detail below.
[0016]
FIG. 1 shows a first embodiment, which is the basic form of the sound collecting headphone according to the present invention. In FIG. 1, reference numerals M1 and M2 denote microphones arranged at a predetermined distance from each other; the microphones M1 and M2 are incorporated in the left and right headphone housings 61 and 62, respectively. Two point sound sources are assumed as the sources of the sound waves reaching the microphones M1 and M2. In development and testing, a loudspeaker is placed at each point sound source and sound is emitted from it, so the point sound sources are indicated by the loudspeaker symbols SP1 and SP2. In the example of FIG. 1, taking as a reference plane the vertical plane passing through the centers of the two microphones M1 and M2 (perpendicular to the paper in FIG. 1), and dividing space by the plane orthogonal to this reference plane at the position of the microphone M1, the point sound source SP1 lies in the half-space that does not contain the microphone M2, and the point sound source SP2 lies in the half-space that does contain the microphone M2.
[0017]
The audio signal electroacoustically converted by one of the two microphones, M1, is input to the signal processing circuit 64, which comprises the delay unit 10, the attenuator 20, and the computing unit 1. The output signal of the other microphone M2 is input to the computing unit 1, where it is added to the output signal of the microphone M1 that has passed through the delay unit 10 and the attenuator 20. The sum signal is input to the speaker unit HP1 through the amplifier 31 and to the speaker unit HP2 through the amplifier 32. When collecting the sound from the point sound source SP1 of the two point sound sources SP1 and SP2, the distances and positions of the microphones M1 and M2 relative to the point sound source SP1 produce a difference in the arrival time and amplitude of the sound wave at the microphones M1 and M2, and therefore also in the arrival time and amplitude of the audio signals electroacoustically converted and output by the microphones M1 and M2.
[0018]
Therefore, the delay unit 10 is placed in the output line of the microphone M1, the one closer to the point sound source SP1; its coefficient, i.e., the delay time, is set to the arrival-time difference described above, and the coefficient of the attenuator 20, i.e., the attenuation, is set to the amplitude difference between the audio signals output from the microphones M1 and M2. In this way, the signal input to the computing unit 1 from the microphone M1 side in FIG. 1 and the signal input directly from the microphone M2 become equal, with no time difference and no amplitude difference. The signal output from the computing unit 1 is therefore a signal in which the sound from the point sound source SP1 is emphasized, and sound based on this signal is output, emphasized, from the two speaker units HP1 and HP2. The sound from the other point sound source SP2, after conversion by the microphones M1 and M2, still carries a time and amplitude difference at the input of the computing unit 1, so its output signal level decreases. Only the sound from the target point sound source SP1 can therefore be heard clearly.
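A minimal sketch of this first-embodiment signal path is given below, with the delay rounded to whole samples for simplicity; the helper name and the test signals are illustrative assumptions, not part of the patent.

```python
# First embodiment (FIG. 1): delay + attenuate the M1 branch, add M2, drive both speakers.
import numpy as np

def emphasize_point_source(m1, m2, delay_samples, attenuation):
    """Delay unit 10 and attenuator 20 on the M1 branch, then adder (computing unit 1)."""
    m1_processed = attenuation * np.concatenate(
        [np.zeros(delay_samples), m1[:len(m1) - delay_samples]]
    )
    summed = m1_processed + m2
    # The same summed signal drives speaker units HP1 and HP2 (monaural output).
    return summed, summed

# Example: a source nearer to M1 reaches M2 three samples later and 0.8 times weaker.
rng = np.random.default_rng(0)
src = rng.standard_normal(1000)
m1 = src
m2 = 0.8 * np.concatenate([np.zeros(3), src[:-3]])

hp1, hp2 = emphasize_point_source(m1, m2, delay_samples=3, attenuation=0.8)
```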
[0019]
When the microphones M1 and M2 are omnidirectional, sound from the side opposite to the sound collecting direction, that is, the direction rotated by 180 degrees, is also emphasized and output. In that case, unidirectional microphones may be used for M1 and M2.
[0020]
Next, a second embodiment, shown in FIG. 2, will be described.
In this embodiment, sounds from two directions can be emphasized and output. In the example of FIG. 2, taking the vertical plane passing through the centers of the two microphones M1 and M2 (perpendicular to the paper in FIG. 2) as a reference, the point sound source SP1 lies in the half-space, bounded by the plane orthogonal to this reference plane at the position of the microphone M1, that does not contain the microphone M2. The point sound source SP2 lies in the half-space, bounded by the plane orthogonal to the reference plane at the position of the microphone M2, that does not contain the microphone M1. A delay unit 11, an attenuator 21 and a first computing unit are placed in the output line of the microphone M1, and a delay unit 12, an attenuator 22 and a second computing unit are placed in the output line of the microphone M2. The first computing unit adds the output of the attenuator 21 and the direct output of the microphone M2, and the second computing unit adds the output of the attenuator 22 and the direct output of the microphone M1. The outputs of the first and second computing units are added by a third computing unit and input to one speaker unit of the headphone, and are also added by a fourth computing unit and input to the other speaker unit, so that sound is output from both speaker units. The speaker units are driven through drive amplifiers, which are omitted from FIG. 2.
[0021]
In the embodiment shown in FIG. 2, for one point sound source SP1, the delay time and attenuation are set in the delay unit 11 and the attenuator 21 on the microphone M1 side, as in the first embodiment, so that the sound from the point sound source SP1 is emphasized in the output of the first computing unit. For the other point sound source SP2, setting the delay time and attenuation in the delay unit 12 and the attenuator 22 on the microphone M2 side causes the sound from the point sound source SP2 to be emphasized in the output of the second computing unit. The outputs of the first and second computing units are added by the third and fourth computing units and output from the two speaker units. The sounds output from the left and right speaker units are identical, but because the sounds from the two point sound sources SP1 and SP2 are emphasized, only the sound from the two specific places can be heard reliably.
[0022]
The third embodiment, shown in FIG. 3, also emphasizes and outputs sounds from two specific directions, but unlike the second embodiment it handles the case where the two point sound sources SP(Rch1) and SP(Rch2) both lie toward one microphone, M2. The point sound sources SP(Rch1) and SP(Rch2) lie in the half-space, bounded by the plane orthogonal at the midpoint of the microphones M1 and M2 to the vertical plane passing through their centers, that does not contain the microphone M1.
[0023]
The output line of the microphone M2 is divided in two: a delay unit 11, an attenuator 21 and a computing unit CR1 are placed in one branch, and a delay unit 12, an attenuator 22 and a computing unit CR2 in the other. The computing unit CR1 adds the output of the attenuator 21 and the output of the other microphone M1, and the computing unit CR2 adds the output of the attenuator 22 and the output of the microphone M1. The outputs of the computing units CR1 and CR2 are then added by a computing unit CR, and this sum drives the left and right speaker units HP(Lch) and HP(Rch) of the headphone through the amplifiers 31 and 32.
[0024]
The time difference and amplitude difference at the two microphones M1 and M2 with respect to one point sound source SP(Rch1) are corrected by the delay unit 11 and the attenuator 21, and those with respect to the other point sound source SP(Rch2) are corrected by the delay unit 12 and the attenuator 22. The output signal of the computing unit CR is therefore a signal in which the sounds from the two point sound sources SP(Rch1) and SP(Rch2) are emphasized, and sound corresponding to this signal is output from the left and right speaker units HP(Lch) and HP(Rch).
[0025]
As can be understood from the second and third embodiments, sound from point sound sources at a plurality of positions can be detected, emphasized, and output from the speaker units. This is achieved by providing as many sets of a delay unit, an attenuator, and a computing unit as there are sound sources to be emphasized. FIG. 4 shows a fourth embodiment capable of emphasizing and outputting sound from a larger number of sound-source positions.
[0026]
In FIG. 4, two microphones M1 and M2 are arranged at a fixed interval, i.e., base length. These microphones receive sound from a plurality of point sound sources SP(Lch1), SP(Lch2), SP(Lch3), SP(Rch1), SP(Rch2) and SP(Rch3), and each sound is electroacoustically converted and output as an audio signal. The sound sources SP(Lch1), SP(Lch2) and SP(Lch3) lie on the side closer to the microphone M1, and the sound sources SP(Rch1), SP(Rch2) and SP(Rch3) on the side closer to the microphone M2. The output line of the microphone M1 is therefore divided into three branches, which carry, respectively, the set of delay unit 111, attenuator 211 and computing unit CL1, the set of delay unit 112, attenuator 212 and computing unit CL2, and the set of delay unit 113, attenuator 213 and computing unit CL3. Likewise, the output line of the microphone M2 is divided into three branches carrying the set of delay unit 121, attenuator 221 and computing unit CR1, the set of delay unit 122, attenuator 222 and computing unit CR2, and the set of delay unit 123, attenuator 223 and computing unit CR3.
[0027]
The computing unit CL1 adds the output of the attenuator 211 and the output of the microphone M2, the computing unit CL2 adds the output of the attenuator 212 and the output of the microphone M2, and the computing unit CL3 adds the output of the attenuator 213 and the output of the microphone M2. The computing unit CR1 adds the output of the attenuator 221 and the output of the microphone M1, the computing unit CR2 adds the output of the attenuator 222 and the output of the microphone M1, and the computing unit CR3 adds the output of the attenuator 223 and the output of the microphone M1.
[0028]
The outputs of the computing units CL1, CL2 and CL3 are added by the computing unit CL4, and the outputs of the computing units CR1, CR2 and CR3 are added by the computing unit CR4; the output of the computing unit CL4 and the output of the computing unit CR4 are then both added, by the computing unit CL5 on the left side and by the computing unit CR5 on the right side. The resulting left-side sum drives the speaker unit HP(Lch) through the amplifier 31, and the right-side sum drives the speaker unit HP(Rch) through the amplifier 32.
[0029]
For the sound from the point sound source SP(Lch1), the coefficients of the delay unit 111 and the attenuator 211 are set according to the time difference and the amplitude difference at the two microphones M1 and M2. Likewise, for SP(Lch2) the coefficients of the delay unit 112 and the attenuator 212 are so set, and for SP(Lch3) the coefficients of the delay unit 113 and the attenuator 213. Similarly, for SP(Rch1) the coefficients of the delay unit 121 and the attenuator 221 are set according to the time difference and the amplitude difference at the two microphones M1 and M2, for SP(Rch2) the coefficients of the delay unit 122 and the attenuator 222, and for SP(Rch3) the coefficients of the delay unit 123 and the attenuator 223.
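The multi-source arrangement of FIG. 4 can be summarized by the following sketch, which provides one delay/attenuator/adder set per point sound source and combines the partial sums for the two speaker units; all coefficients and signals are illustrative assumptions, not values from the patent.

```python
# Fourth embodiment (FIG. 4): one delay/attenuator/adder set per emphasized source.
import numpy as np

def branch(x, other, delay_samples, attenuation):
    """One set of delay unit + attenuator, followed by an adding computing unit."""
    delayed = np.concatenate([np.zeros(delay_samples), x[:len(x) - delay_samples]])
    return attenuation * delayed + other

# Per-source coefficients: (delay in samples, attenuation), one pair per source (assumed).
lch_coeffs = [(2, 0.9), (4, 0.8), (6, 0.7)]    # SP(Lch1..3), applied to the M1 line
rch_coeffs = [(3, 0.85), (5, 0.75), (7, 0.65)] # SP(Rch1..3), applied to the M2 line

rng = np.random.default_rng(1)
m1 = rng.standard_normal(2000)
m2 = rng.standard_normal(2000)

# Computing units CL1..CL3 and CR1..CR3, then the summing stages CL4 / CR4.
cl_out = sum(branch(m1, m2, d, a) for d, a in lch_coeffs)
cr_out = sum(branch(m2, m1, d, a) for d, a in rch_coeffs)

# Final stage: both speaker units receive the same combined (monaural) signal.
hp_lch = cl_out + cr_out
hp_rch = cl_out + cr_out
```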
[0030]
With the above settings made, and with the left and right speaker units HP(Lch) and HP(Rch) connected through the respective computing units, the sounds from the point sound sources SP(Lch1), SP(Lch2), SP(Lch3), SP(Rch1), SP(Rch2) and SP(Rch3) are emphasized and output. Note that the sounds output from the left and right speaker units HP(Lch) and HP(Rch) are the same sound; the left and right outputs are not different stereo sounds. To increase the number of point sound sources to be emphasized and heard, it is sufficient to increase the number of delay-unit/attenuator sets, that is, to provide and set one such set for each point sound source to be emphasized.
[0031]
In the embodiment shown in FIG. 4, the computing units are arranged in three stages: CL4 is placed after the computing units CL1, CL2 and CL3, and CL5 after that; likewise, CR4 is placed after the computing units CR1, CR2 and CR3, and CR5 after that. Since all of these computing units function as adders, the circuit can also be built with two stages or a single stage by using computing units that accept and add many input signals.
[0032]
In each of the embodiments described so far, the sound output from the left and right speaker units is the same, so the listener cannot tell from which direction the sound comes, that is, the position of the sound image (its "localization") is unknown. The fifth embodiment, shown in FIG. 5, is the simplest arrangement devised so that the position of the sound image is also conveyed. In the embodiment of FIG. 5(a), a delay unit 10, an attenuator 20, and a computing unit are placed in the output line of the left microphone M1 of the two microphones M1 and M2, corresponding to a point sound source SP to be collected, and a phase shifter 40 and an amplifier 50 are placed in the drive line of the left speaker unit HP(Lch) of the two speaker units HP(Lch) and HP(Rch) that are driven by the output of the computing unit. The phase shifter 40 shifts the phase by an amount corresponding to the delay introduced by the delay unit 10, restoring the original phase difference, and the amplifier 50 amplifies by the amount by which the attenuator 20 reduced the amplitude, restoring the original amplitude difference. By placing the phase shifter 40 and the amplifier 50 in the drive line of the left speaker unit HP(Lch), corresponding to the sound source SP lying to the left, the sound from the sound source SP is emphasized and, at the same time, heard with its sound image returned to its original position.
[0033]
FIG. 5(b) shows another arrangement that achieves the same effect as that of FIG. 5(a). In this arrangement, instead of placing the phase shifter 40 and the amplifier 50 in the drive line of the left speaker unit HP(Lch) as in FIG. 5(a), a delay unit 15 and an attenuator 25 are placed in the drive line of the right speaker unit HP(Rch); the rest of the configuration is the same as in FIG. 5(a). Since the output of the left microphone M1 is delayed by the delay unit 10, placing the delay unit 15 in the drive line of the right speaker unit HP(Rch) delays the right speaker drive signal and thereby restores the original time difference. Similarly, the attenuation applied by the attenuator 20 is applied by the attenuator 25 in the right speaker drive line, restoring the original amplitude difference. The arrangement of FIG. 5(b) therefore provides the same function and effect as that of FIG. 5(a): the sound from the sound source SP is emphasized and heard with its sound image returned to its original position.
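A minimal sketch of the FIG. 5(b) variant is shown below: the emphasized sum feeds both speaker units, and the right-channel drive line re-applies the same delay and attenuation so that the interaural time and level differences of the original source are restored. The helper names and values are illustrative assumptions.

```python
# Fifth embodiment, FIG. 5(b): restore localization via delay/attenuation in the right drive line.
import numpy as np

def delay(x, n):
    return np.concatenate([np.zeros(n), x[:len(x) - n]])

def collect_and_localize(m1, m2, delay_samples, attenuation):
    # Delay unit 10 + attenuator 20 on the M1 branch, then the adding computing unit.
    emphasized = attenuation * delay(m1, delay_samples) + m2
    # The left speaker gets the emphasized sum directly ...
    hp_lch = emphasized
    # ... while delay unit 15 and attenuator 25 in the right drive line reintroduce
    # the interaural differences, returning the sound image toward the left source.
    hp_rch = attenuation * delay(emphasized, delay_samples)
    return hp_lch, hp_rch

# Example with a source nearer the left microphone (3-sample lead, 0.8 level ratio).
rng = np.random.default_rng(2)
src = rng.standard_normal(1000)
m1, m2 = src, 0.8 * delay(src, 3)
hp_l, hp_r = collect_and_localize(m1, m2, delay_samples=3, attenuation=0.8)
```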
[0034]
FIG. 6 is a development of the embodiment shown in FIG. 5 and shows a sixth embodiment of the sound collecting headphone, capable of returning the sound image to each sound-source position when there are a plurality of sound collecting directions. The embodiment of FIG. 6 has two microphones M1 and M2; six point sound sources SP(Lch1), SP(Lch2), SP(Lch3), SP(Rch1), SP(Rch2) and SP(Rch3); six delay units 111, 112, 113, 121, 122, 123 corresponding to these sound sources; six attenuators 211, 212, 213, 221, 222, 223; and six computing units CL1, CL2, CL3, CR1, CR2 and CR3, connected as in the fourth embodiment shown in FIG. 4. It differs from the fourth embodiment in the circuit configuration from the computing units CL1, CL2, CL3, CR1, CR2 and CR3 to the speaker units HP(Lch) and HP(Rch), which is described below.
[0035]
The output of the computing unit CL1 is input to the computing unit CL4 via a phase shifter PL1, which shifts the phase by an amount corresponding to the delay time of the delay unit 111 so as to restore the phase, and an amplifier AL1, which restores the amplitude by an amount corresponding to the attenuation of the attenuator 211. Similarly, the output of the computing unit CL2 is input to the computing unit CL4 via a phase shifter PL2 and an amplifier AL2 whose coefficients correspond to those of the delay unit 112 and the attenuator 212, and the output of the computing unit CL3 is input to the computing unit CL4 via a phase shifter PL3 and an amplifier AL3 whose coefficients correspond to those of the delay unit 113 and the attenuator 213. The computing unit CL4 adds the outputs of the three amplifiers AL1, AL2 and AL3.
[0036]
In the same way, the output of the computing unit CR1 is input to the computing unit CR4 via a phase shifter PR1 and an amplifier AR1 whose coefficients correspond to those of the delay unit 121 and the attenuator 221; the output of the computing unit CR2 via a phase shifter PR2 and an amplifier AR2 whose coefficients correspond to those of the delay unit 122 and the attenuator 222; and the output of the computing unit CR3 via a phase shifter PR3 and an amplifier AR3 whose coefficients correspond to those of the delay unit 123 and the attenuator 223. The computing unit CR4 adds the outputs of the three amplifiers AR1, AR2 and AR3.
[0037]
The output of the computing unit CL4 and the output of the computing unit CR4 are both added, by the computing unit CL5 and by the computing unit CR5; the output of the computing unit CL5 drives the speaker unit HP(Lch) through the drive amplifier 31, and the output of the computing unit CR5 drives the speaker unit HP(Rch) through the drive amplifier 32.
[0038]
In the fourth embodiment of FIG. 4, the sounds from the point sound sources SP(Lch1), SP(Lch2), SP(Lch3), SP(Rch1), SP(Rch2) and SP(Rch3) are emphasized and heard from the left and right headphone units HP(Lch) and HP(Rch), but the sounds heard from the left and right sides are the same, and the position of the sound image cannot be recognized. According to the embodiment of FIG. 6, however, the time difference and amplitude difference removed by the delay unit and attenuator assigned to each sound source are restored by the phase shifter and amplifier assigned to the same sound source, so the sound image is returned to its original position and the listener can recognize where each sound comes from. In the embodiment of FIG. 6 as well, the two-stage computing units CL4, CL5 and CR4, CR5 can be combined into a single stage.
[0039]
The embodiments above can return the sound image to its original position, but they cannot reproduce the sound in stereo. The embodiment shown in FIG. 7 is a sound collecting headphone capable of stereo output. In FIG. 7, there are two microphones M1 and M2; a set consisting of a delay unit 11, an attenuator 21 and a computing unit CL1 is placed in the output line of the microphone M1, and another set consisting of a delay unit 12, an attenuator 22 and a computing unit CR1 is placed in the output line of the microphone M2. Up to this point the configuration is substantially the same as that of the second embodiment of FIG. 2. The coefficients of the delay unit 11 and the attenuator 21 are set according to the time difference and amplitude difference of the sound arriving from the point sound source SP'(Lch) at the left and right microphones M1 and M2, and the coefficients of the delay unit 12 and the attenuator 22 are set according to the time difference and amplitude difference of the sound arriving from the point sound source SP'(Rch) at the microphones M1 and M2. The computing unit CL1 adds the output of the attenuator 21 and the output of the microphone M2, and the computing unit CR1 adds the output of the attenuator 22 and the output of the microphone M1. Therefore, if the outputs of the computing units CL1 and CR1 are added and the left and right speaker units are driven by this sum, the sounds from the point sound sources SP'(Lch) and SP'(Rch) shown in FIG. 7 can be emphasized and reproduced.
[0040]
The embodiment of FIG. 7 differs from that of FIG. 2 in the circuit configuration after the computing units CL1 and CR1. A phase shifter PL1, an amplifier AL1 and a computing unit CL2 are placed in the output line of the computing unit CL1, and a phase shifter PR1, an amplifier AR1 and a computing unit CR2 are placed in the output line of the computing unit CR1. The computing unit CL2 adds the output of the amplifier AL1 and the output of the computing unit CR1, and the computing unit CR2 adds the output of the amplifier AR1 and the output of the computing unit CL1. The drive amplifier 31 drives the speaker unit HP(Lch) with the output signal of the computing unit CL2, and the drive amplifier 32 drives the speaker unit HP(Rch) with the output signal of the computing unit CR2. The coefficients of the phase shifter PL1 and the amplifier AL1, and those of the phase shifter PR1 and the amplifier AR1, are set so that the sounds are heard as coming from the positions of the two point sound sources SP(Lch) and SP(Rch) shown in FIG. 7.
[0041]
According to the seventh embodiment shown in FIG. 7, the output signals of the computing units CL1 and CR1, in which the sounds from the point sound sources SP'(Lch) and SP'(Rch) are emphasized, are corrected with the phase shifters and amplifiers so as to be heard from the positions of the point sound sources SP(Lch) and SP(Rch), enabling stereo sound collection and stereo reproduction divided into an L channel and an R channel. Furthermore, if the coefficients of the delay units 11 and 12 and the attenuators 21 and 22 can be adjusted, then even when the relative positions of the sound source and the two microphones change, for example because the user wearing the headphones moves relative to the sound source or the sound source itself moves, the collected sound can always be heard with the same localization by adjusting these coefficients accordingly.
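The sketch below outlines the FIG. 7 signal flow, substituting a plain delay and gain for the phase shifter and amplifier: each emphasized source signal is re-positioned and cross-mixed into left and right channels. All coefficients, helper names and signals are illustrative assumptions, not values from the patent.

```python
# Seventh embodiment (FIG. 7): stereo sound collection with re-localized L/R channels.
import numpy as np

def delay(x, n):
    return np.concatenate([np.zeros(n), x[:len(x) - n]])

def stereo_collect(m1, m2, l_coef, r_coef, l_pos, r_pos):
    dl, al = l_coef            # delay/attenuation matching SP'(Lch) at the microphones
    dr, ar = r_coef            # delay/attenuation matching SP'(Rch) at the microphones
    cl1 = al * delay(m1, dl) + m2        # computing unit CL1: SP'(Lch) emphasized
    cr1 = ar * delay(m2, dr) + m1        # computing unit CR1: SP'(Rch) emphasized
    pl, gl = l_pos             # phase shift (as a delay) and gain placing SP(Lch)
    pr, gr = r_pos             # phase shift (as a delay) and gain placing SP(Rch)
    hp_lch = gl * delay(cl1, pl) + cr1   # computing unit CL2 -> left speaker unit
    hp_rch = gr * delay(cr1, pr) + cl1   # computing unit CR2 -> right speaker unit
    return hp_lch, hp_rch

# Example call with arbitrary coefficients.
rng = np.random.default_rng(4)
m1, m2 = rng.standard_normal(1000), rng.standard_normal(1000)
left, right = stereo_collect(m1, m2, (3, 0.8), (3, 0.8), (3, 1.25), (3, 1.25))
```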
[0042]
In the embodiment of FIG. 7, the phase shifters and amplifiers are connected after the computing units CL1 and CR1 to correct the phase and amplitude, but as described for the embodiment of FIG. 5(b), a delay unit and an attenuator may be used instead to perform this correction.
[0043]
All the embodiments described so far emphasize and reproduce the sound from a sound source at a specific position, but the principle of the present invention can also be used to realize a sound collecting headphone that removes the sound coming from a specific direction.
FIG. 8 shows an embodiment in which this is realized. A point sound source SP1 lies in the half-space, bounded by the plane orthogonal at the position of the microphone M1 to the vertical plane passing through the centers of the two microphones M1 and M2, that does not contain the microphone M2, and the sound from this point sound source SP1 is to be removed. As in the embodiment of FIG. 1, the delay unit 10, the attenuator 20 and the computing unit 1 are arranged in this order in the output signal line of the microphone M1, but the computing unit 1 is configured to subtract the output signal of the microphone M2 from the output of the attenuator 20. The coefficients of the delay unit 10 and the attenuator 20, that is, the delay time and the attenuation, are set according to the time difference and the sound-pressure difference of the sound waves from the point sound source SP1 reaching the microphones M1 and M2.
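A minimal sketch of this subtractive arrangement follows: the M1 branch is delayed and attenuated to match the SP1 component in M2, and subtracting the two cancels the sound from SP1 while other sounds remain. Signals and coefficients are illustrative assumptions, not values from the patent.

```python
# Eighth embodiment (FIG. 8): subtract instead of add to remove the sound from SP1.
import numpy as np

def delay(x, n):
    return np.concatenate([np.zeros(n), x[:len(x) - n]])

rng = np.random.default_rng(3)
unwanted = rng.standard_normal(1000)      # sound from point sound source SP1
other = 0.3 * rng.standard_normal(1000)   # all other ambient sound (simplified)

delay_n, atten = 3, 0.8                   # set from SP1's geometry at M1 / M2 (assumed)
m1 = unwanted + other
m2 = atten * delay(unwanted, delay_n) + other

# Computing unit 1: subtract the M2 output from the delayed, attenuated M1 branch.
removed = atten * delay(m1, delay_n) - m2
# The SP1 component cancels exactly; only (filtered) residual of the other sound remains.
```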
[0044]
According to the embodiment of FIG. 8, there is no time difference or amplitude difference between the output signal of the attenuator 20 and the output signal of the microphone M2 at the input of the computing unit, so when these signals are subtracted from each other in the computing unit the sound from the point sound source SP1 is removed. The left and right speaker units HP1 and HP2 are driven, through the drive amplifiers 31 and 32, by the output of the computing unit from which the sound from the point sound source SP1 has been removed, and the speaker units HP1 and HP2 emit the sound other than that from the point sound source SP1. Therefore, when the sound from the point sound source SP1 is unnecessary or unpleasant, the sound collecting headphone of this embodiment removes it so that the sounds that are actually needed can be heard clearly.
[0045]
By using the sound collecting headphone according to the present invention, the sound emitted from a specific position can be emphasized and heard, or the sound emitted from a specific position can be removed so that other sounds can be heard. For example, the present invention can be applied where only the sound from a specific object is to be collected in an environment with high ambient noise.
[0046]
FIG. 1 is a block diagram showing a first embodiment of the sound collecting headphone according to the present invention.
FIG. 2 is a block diagram showing a second embodiment of the sound collecting headphone according to the present invention.
FIG. 3 is a block diagram showing a third embodiment of the sound collecting headphone according to the present invention.
FIG. 4 is a block diagram showing a fourth embodiment of the sound collecting headphone according to the present invention.
FIG. 5 is a block diagram showing a fifth embodiment of the sound collecting headphone according to the present invention.
FIG. 6 is a block diagram showing a sixth embodiment of the sound collecting headphone according to the present invention.
FIG. 7 is a block diagram showing a seventh embodiment of the sound collecting headphone according to the present invention.
FIG. 8 is a block diagram showing an eighth embodiment of the sound collecting headphone according to the present invention.
FIG. 9 is a schematic view conceptually showing an example of the sound collecting headphone according to the present invention.
FIG. 10 is a block diagram illustrating the principle of a conventionally known summing array microphone.
Explanation of Reference Signs
[0047]
10, 11, 12: delay unit; 20, 21, 22: attenuator; 31, 32: speaker unit; 64: signal processing circuit; M1, M2: microphone; SP1, SP2: point sound source