Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2007036608
PROBLEM TO BE SOLVED: To provide a headphone device that lets the wearer clearly hear a target sound, necessary ambient sounds, and the direction from which each ambient sound originates, while reducing unnecessary noise contained in the ambient sound.
SOLUTION: A headphone device includes a left speaker 11L, a right speaker 11R, a plurality of directional microphones 14FL, 14FR, 14RL, 14RR, a left ear proximity microphone 15L, a right ear proximity microphone 15R, and a control unit 16. The control unit localizes the sound based on the signals from the music player 12 and the mobile phone 13 at a predetermined position, localizes the sounds picked up by the directional microphones in their directions of origin, and outputs the result to the left and right speakers. Furthermore, the control unit inverts the phase of the noise obtained from the left ear proximity microphone and the right ear proximity microphone using ANC 16dL and ANC 16dR, and superimposes the resulting signals (-NL, -NR) on the speaker signals SL and SR to reduce the noise. [Selected figure] Figure 3
Headphone device
[0001]
The present invention relates to a headphone device that allows the listener wearing the headphones to clearly hear a target sound such as music together with the surrounding sounds, while reducing unnecessary noise contained in those surrounding sounds.
[0002]
Conventionally, many headphone devices are known that pick up the surrounding sound (hereinafter referred to as "ambient sound"), generate a negative-phase signal of the resulting signal, add this negative-phase signal to a signal representing a sound from a music player or the like (hereinafter referred to as "target sound") to produce a synthesized sound signal, and drive the headphone speakers based on that synthesized signal, so that the headphone wearer can clearly hear the target sound (see, for example, Patent Documents 1 and 2). The technique of reducing the ambient sound heard by a headphone wearer by generating a negative-phase signal of the signal representing the ambient sound is called active noise control (ANC). JP-A-9-93684; JP-A-5-36991
10-05-2019
1
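The ANC principle described here (invert the picked-up noise and superimpose it on the target-sound signal so the two cancel acoustically) can be sketched as follows. This is a deliberately idealized Python illustration, not the patent's implementation: it assumes perfect time alignment and a flat acoustic path, and the signal names are made up for the example.

```python
import numpy as np

def anc_mix(target, ambient_noise, gain=1.0):
    """Superimpose the phase-inverted ambient noise on the target-sound
    signal. At the ear, the real acoustic noise and this anti-phase
    component cancel (ideally), leaving only the target sound."""
    return target + gain * (-ambient_noise)

# Toy example: a 440 Hz "target" tone plus 60 Hz "noise".
fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * np.sin(2 * np.pi * 60 * t)

# The speaker plays anc_mix(target, noise); the acoustic noise adds back in:
heard = anc_mix(target, noise) + noise
assert np.allclose(heard, target)  # noise fully cancelled in this ideal case
```

In practice the gain and delay of the anti-phase path must match the acoustic path, which is why the patent places the pickup microphones 15L/15R directly at the ears.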
[0003]
However, with the conventional apparatus described above, all ambient sound is cancelled, so the headphone wearer cannot grasp the surrounding situation by sound. For example, when the headphone device is mounted in a motorcycle helmet, the driver wearing the helmet cannot easily detect, by sound, surrounding conditions that are important for driving.
[0004]
On the other hand, a technique has also been considered in which a signal representing sound of a specific frequency (for example, the frequency range of the human voice) is extracted from the signals representing the ambient sound, and the extracted signal is added to the synthesized sound signal before output. Even in this case, however, the listener cannot hear sounds outside the specific frequency range, so this is not a fundamental solution to the above problem.
[0005]
The present invention has been made to address the above problems, and one of its objects is to provide a headphone device that lets the wearer clearly hear the target sound and the necessary ambient sounds while reducing unnecessary noise contained in the ambient sound.
[0006]
A headphone device according to the present invention for achieving this object comprises: a headphone having a left speaker that generates sound in response to an input signal and a right speaker that generates sound in response to an input signal; a sound source that generates a sound source signal for producing a predetermined target sound; ambient sound signal generating means, comprising a plurality of directional microphones, for generating ambient sound signals that represent the sounds picked up around the headphone in such a way that the direction in which each ambient sound originates relative to the headphone can be identified; and sound source localization means for generating, based on the sound source signal and the ambient sound signals, a left speaker signal and a right speaker signal that localize a sound image based on the sound source signal at a predetermined virtual sound source position and localize a sound image based on each ambient sound signal at a predetermined position in the direction in which that ambient sound originates, and for outputting the left speaker signal and the right speaker signal to the left speaker and the right speaker, respectively.
[0007]
According to this configuration, ambient sound signals representing the sounds generated around the headphone are produced in a state in which the direction of origin of each ambient sound relative to the headphone can be identified. A sound image based on the sound source signal for the target sound is then localized at a predetermined virtual sound source position, and a sound image based on each ambient sound signal is localized at a predetermined position in the direction in which that ambient sound originates. Therefore, while listening to music or talking on a mobile phone, for example, the headphone wearer can reliably recognize the direction from which each ambient sound originates.
[0008]
Furthermore, the headphone device according to the present invention comprises: a left ear proximity microphone, disposed near the left speaker, that generates a left ear proximity sound signal representing the sound near the left ear by picking up sound that reaches the vicinity of the left speaker from outside the headphone; a right ear proximity microphone, disposed near the right speaker, that generates a right ear proximity sound signal representing the sound near the right ear by picking up sound that reaches the vicinity of the right speaker from outside the headphone; and noise reduction means that generates a left ear proximity sound reverse-phase signal, which is the phase-inverted left ear proximity sound signal, and superimposes it on the left speaker signal, and that generates a right ear proximity sound reverse-phase signal, which is the phase-inverted right ear proximity sound signal, and superimposes it on the right speaker signal.
[0009]
As a result, the ambient sound near the left ear of the headphone wearer (that is, the unnecessary noise heard by the left ear) is cancelled by the left ear proximity sound reverse-phase signal. Similarly, the ambient sound near the right ear (the unnecessary noise heard by the right ear) is cancelled by the right ear proximity sound reverse-phase signal. Therefore, when a motorcycle driver wears a helmet equipped with this headphone device, for example, noise such as wind noise caused by riding is reduced, and the driver can hear the target sound clearly. Furthermore, since the sound image based on each ambient sound signal is localized at a predetermined position in the direction of origin of that ambient sound, as described above, the driver can clearly recognize the direction from which an ambient sound such as an alarm originates.
[0010]
Another headphone device according to the present invention is one in which the speakers for the target sound and the speakers for noise removal are separate speakers.
[0011]
With this configuration as well, the left and right target-sound speakers localize the sound image based on the sound source signal at a predetermined virtual sound source position, and localize the sound image based on each ambient sound signal at a predetermined position in the direction of origin of that ambient sound. Therefore, while listening to music or talking on a mobile phone, for example, the headphone wearer can reliably recognize the direction from which each ambient sound originates. In addition, the noise reaching the wearer's ears is cancelled by the left and right noise-removal speakers. Thus, when a motorcycle driver wears a helmet equipped with this headphone device, noise such as wind noise caused by riding is reduced, so the driver can hear the target sound clearly and can also clearly recognize the direction of origin of ambient sounds.
[0012]
In this case, the sound source localization means preferably corrects each ambient sound signal by increasing the amplitude spectrum of a specific frequency contained in the ambient sound signal, and generates the left speaker signal and the right speaker signal based on the corrected ambient sound signal.
[0013]
According to this, sounds of a specific frequency (or frequency range), such as the siren of an emergency vehicle like an ambulance or the alarm of a railway crossing, are emphasized, so the headphone wearer can reliably recognize the surrounding situation.
[0014]
Further, the sound source localization means may extract a specific ambient sound signal from the ambient sound signals based on the amplitude spectrum of each ambient sound signal, and generate the left speaker signal and the right speaker signal such that the loudness of the sound based on the extracted ambient sound signal is increased.
[0015]
In this case, the loudness of the sound based on the extracted ambient sound signal itself may be increased; even after such an increase, that sound may still be quieter than the sounds based on the other ambient sound signals. Of course, the extracted ambient sound signal may instead be corrected so that the sound based on it is louder than the sounds based on the non-extracted ambient sound signals.
[0016]
According to this, the level (volume) of ambient sounds to which the headphone wearer's attention should particularly be drawn (for example, an emergency vehicle siren or a railway crossing alarm containing the amplitude spectrum of a specific frequency) is increased, so the wearer can more reliably recognize, by sound, important information about the surrounding situation.
[0017]
Further, it is also preferable that the sound source localization means extracts a specific ambient sound signal from the ambient sound signals based on the amplitude spectrum of each ambient sound signal, corrects that ambient sound signal by increasing the amplitude spectrum of its high-frequency components, and generates the left speaker signal and the right speaker signal based on the corrected ambient sound signal.
[0018]
According to this, the high-frequency components of ambient sounds to which the wearer's attention should particularly be drawn (for example, an emergency vehicle siren or a railway crossing alarm containing the amplitude spectrum of a specific frequency) are emphasized, so the wearer can more reliably recognize, by sound, important information about the surrounding situation.
[0019]
Hereinafter, embodiments of a headphone device according to the present invention will be
described with reference to the drawings.
First Embodiment (Configuration of the Device) As shown in FIG. 1, the headphone device according to the first embodiment is applied to the helmet 10 of the driver of a motorcycle BY. The helmet 10 is a full-face type. The headphone device comprises a headphone 11 mounted inside the helmet 10.
[0020]
As shown in FIGS. 2 and 3, the headphone 11 includes a left speaker 11L that generates sound in response to an input signal and a right speaker 11R that generates sound in response to an input signal. The left speaker 11L is disposed on the inner left side of the helmet 10 so as to be positioned near the left ear of the driver (helmet wearer) wearing the helmet 10. The right speaker 11R is disposed on the inner right side of the helmet 10 so as to be positioned near the driver's right ear.
[0021]
The headphone device further includes a music player 12, a mobile phone 13, a plurality of directional microphones (four in this example: 14FL, 14FR, 14RL, 14RR), a left ear proximity microphone 15L, a right ear proximity microphone 15R, and a control unit 16.
[0022]
The music player 12 (for example, an MP3 player) stores music data internally. The music player 12 is connected to the control unit 16 and outputs to it a signal for reproducing music based on the stored music data (hereinafter referred to as the "music signal").
[0023]
The mobile phone 13 is an ordinary mobile phone. It is connected to the control unit 16 and outputs to it a signal for generating sounds such as a ring tone and the voice of the other party (hereinafter referred to as the "incoming call signal"). The music player 12 and the mobile phone 13 constitute a sound source that generates sound source signals (the music signal and the incoming call signal) for producing predetermined target sounds. Such sound sources may also include a voice navigation device, a radio, and a passenger conversation device that wirelessly transmits the voice of a passenger of the motorcycle BY.
[0024]
The directional microphones 14FL, 14FR, 14RL, and 14RR are disposed on the upper part of the helmet 10 at the left front, right front, left rear, and right rear positions, respectively. They mainly pick up the sounds generated at the left front, right front, left rear, and right rear of the helmet 10 (headphone 11), respectively, and generate signals representing those sounds (hereinafter referred to as "ambient sound signals"). Each of the directional microphones is independently connected to the control unit 16 and outputs its ambient sound signal to the control unit 16.
[0025]
The left ear proximity microphone 15L is disposed near the left speaker 11L and connected to the control unit 16. The left ear proximity microphone 15L generates a left ear proximity sound signal representing the sound near the left ear by picking up the sound (left noise) that reaches the vicinity of the left speaker 11L from outside the headphone 11, and outputs the generated signal to the control unit 16.
[0026]
The right ear proximity microphone 15R is disposed near the right speaker 11R and connected to the control unit 16. The right ear proximity microphone 15R generates a right ear proximity sound signal representing the sound near the right ear by picking up the sound (right noise) that reaches the vicinity of the right speaker 11R from outside the headphone 11, and outputs the generated signal to the control unit 16.
[0027]
The control unit 16 comprises a portable-sized case and the electronic circuits described below housed inside the case, and is carried by the driver. As shown in FIG. 3, the control unit 16 includes a level detection circuit 16a, a sound source position determination device (sound image position determination device) 16b, a sound source localization device (sound source localization means) 16c, a left active noise control device (ANC) 16dL, a right active noise control device (ANC) 16dR, a left delay volume controller 16eL, a right delay volume controller 16eR, and adders 16f to 16i.
[0028]
The level detection circuit 16a is connected to each of the directional microphones 14FL, 14FR, 14RL, and 14RR. Based on the signals from these microphones, the level detection circuit 16a detects the level (magnitude) of the signal input from each microphone and outputs a signal representing the detected magnitude (hereinafter referred to as the "ambient sound level signal") to the sound source position determination device 16b. That is, the level detection circuit 16a compares each signal from the directional microphones with a predetermined threshold value Sth, and generates "1" as the ambient sound level signal for a signal whose level is greater than Sth, and "0" for a signal whose level is smaller than Sth. Furthermore, the level detection circuit 16a is connected to the sound source position determination device 16b and the sound source localization device 16c, and outputs amplified versions of the signals from the directional microphones (amplified ambient sound signals) to both devices.
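The threshold comparison performed by the level detection circuit 16a can be sketched as follows. This is a minimal Python illustration; the level measure (peak absolute amplitude), the threshold value, and the sample data are assumptions made for the example, not values from the patent.

```python
import numpy as np

def ambient_sound_level(mic_signals, sth):
    """For each directional microphone signal, emit 1 if its level
    (here: peak absolute amplitude) exceeds the threshold Sth, else 0,
    mirroring the 1/0 ambient sound level signal of circuit 16a."""
    return {name: int(np.max(np.abs(sig)) > sth)
            for name, sig in mic_signals.items()}

mics = {
    "14FL": np.array([0.02, -0.05, 0.03]),   # quiet
    "14FR": np.array([0.40, -0.90, 0.60]),   # loud (e.g. a siren ahead-right)
    "14RL": np.array([0.01,  0.00, -0.02]),  # quiet
    "14RR": np.array([0.30, -0.25, 0.10]),   # moderate
}
levels = ambient_sound_level(mics, sth=0.2)
assert levels == {"14FL": 0, "14FR": 1, "14RL": 0, "14RR": 1}
```

The 1/0 pattern tells the sound source position determination device 16b which directions currently carry a significant ambient sound.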
[0029]
The sound source position determination device 16b is connected to the level detection circuit 16a, the music player 12, the mobile phone 13, and the sound source localization device 16c. It receives the ambient sound level signals and the amplified signals from the directional microphones 14FL, 14FR, 14RL, and 14RR. In addition, it receives the music signal from the music player 12 and the incoming call signal from the mobile phone 13.
[0030]
Based on the music signal, the incoming call signal, the amplified signals from the directional microphones 14FL, 14FR, 14RL, and 14RR, and the ambient sound level signals, the sound source position determination device 16b determines the positions of the sound images based on these signals (hereinafter referred to as "virtual sound source positions"), and outputs information on the determined virtual sound source positions (hereinafter referred to as "virtual sound source position information") to the sound source localization device 16c. A virtual sound source position is a virtual position at which a sound source can be made to appear to the headphone wearer (driver), as if that sound source were actually present there.
[0031]
The sound source localization device 16c generates signals (the left speaker signal SL and the right speaker signal SR) that localize the sound images based on the music signal, the incoming call signal, and the amplified signals from the directional microphones 14FL, 14FR, 14RL, and 14RR at the virtual sound source positions represented by the virtual sound source position information, and outputs the left speaker signal SL and the right speaker signal SR to the left speaker 11L and the right speaker 11R, respectively.
[0032]
Here, the configuration of the sound source localization device 16c will be described with reference to FIG. 4.
The sound source localization device 16c includes an A/D converter 161, a gain multiplier 162 for the preceding sound, a multiplexer 163, a delay line 164, a band-pass filter 165, a gain multiplier 166 for the following sound, FIR (Finite Impulse Response) filters 167, a D/A converter (DAC) 169, an adder 168, and so on. Sound source localization devices and sound image localization methods of this kind (methods for localizing the sound images of a plurality of sounds in three dimensions using headphones) are known, for example, from JP-A-2002-44796 and JP-A-10-23600.
[0033]
The A/D converter 161 has a plurality of A/D converters (ADCs) connected to the output lines of the sound sources (the music player 12 and the mobile phone 13) and of the level detection circuit 16a. Each ADC converts the analog signal on its output line into a digital signal and outputs the digital signal.
[0034]
The gain multiplier 162 is composed of sets of gain multipliers, one set for each signal input from the sound sources and the level detection circuit 16a to the A/D converter 161. Each gain multiplier set includes a total of eight amplifiers corresponding to the first channel Ch1 through the eighth channel Ch8, and the eight amplifiers of each set are connected to the corresponding ADC. The gain multiplier 162 is also connected to the sound source position determination device 16b and changes the gain of each amplifier in each set based on the virtual sound source position information output from that device.
[0035]
The input end of each channel of the multiplexer 163 is connected to an amplifier of the gain multiplier 162. The output end of each channel of the multiplexer 163 is connected to the FIR filter 167 of the corresponding channel Ch1 to Ch8 (in fact, to the left ear FIR filter 167L and the right ear FIR filter 167R). A signal input directly from the output end of the multiplexer 163 to the input end of the FIR filter 167 is called the "preceding sound". Based on the virtual sound source position information output from the sound source position determination device 16b, the multiplexer 163 selects one of the gain multiplier sets of the gain multiplier 162 and outputs the signal of each channel from the amplifiers of the selected set.
[0036]
The delay line 164 is provided, for each of the channels Ch1 to Ch8, with a multistage shift register in which a plurality of shift registers are connected in series. A tap is provided between a predetermined register of each channel's multistage shift register and the register following it. The input end of each channel of the delay line 164 is connected to the output end of the corresponding channel of the multiplexer 163. A signal that is input from the multiplexer 163 to the delay line 164, delayed by a predetermined time, and taken out from the tap is referred to as the "following sound". The delay line 164 is also connected to the sound source position determination device 16b and changes the tap position of each channel based on the virtual sound source position information output from that device.
[0037]
The tap of each channel of the delay line 164 is connected to the input end of the corresponding channel of the band-pass filter 165. Note that in FIG. 4, the lines denoted by "8" represent eight lines.
[0038]
The gain multiplier 166 for the following sound includes a total of eight amplifiers corresponding to the channels Ch1 to Ch8. The input end of each channel's amplifier is connected to the output end of the corresponding channel of the band-pass filter 165. As shown in FIG. 4 and Table 1, the output of each channel's amplifier of the gain multiplier 166 is connected to one of the adders provided between the input of the FIR filter 167 of each channel and the output of the corresponding channel of the multiplexer 163. As a result, the preceding sound and the following sound (the latter after passing through the band-pass filter 165 and the gain multiplier 166) are added by each adder and then input to the FIR filter 167.
[0039]
The FIR filter 167L of the first channel Ch1 simulates, by a head-related transfer function (HRTF), the frequency characteristic of a sound heard by the listener's left ear from the direction of #1 in FIG. 5. That is, the FIR filter 167L of the first channel Ch1 is a filter that convolves the input signal with the head-related transfer function, which is the transfer characteristic of sound travelling from the direction of #1 in FIG. 5 to the listener's left ear.
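The per-channel HRTF filtering can be sketched as an FIR convolution. The coefficients below are made-up placeholders (real HRTF coefficient sets are measured per direction and ear), so this only illustrates the operation that the FIR filters 167L/167R perform, not actual head-related transfer functions.

```python
import numpy as np

def hrtf_fir(signal, hrtf_left, hrtf_right):
    """Convolve one channel's signal with left/right FIR coefficients,
    as FIR filters 167L and 167R do, yielding the binaural (left-ear,
    right-ear) contribution of that channel's direction."""
    left = np.convolve(signal, hrtf_left)[: len(signal)]
    right = np.convolve(signal, hrtf_right)[: len(signal)]
    return left, right

# Placeholder taps: in this toy example the left ear hears direction #1
# slightly earlier and louder than the right ear.
hrtf_l = np.array([0.9, 0.1])
hrtf_r = np.array([0.0, 0.6, 0.2])

sig = np.array([1.0, 0.0, 0.0, 0.0])  # unit impulse input
left, right = hrtf_fir(sig, hrtf_l, hrtf_r)
assert np.allclose(left,  [0.9, 0.1, 0.0, 0.0])
assert np.allclose(right, [0.0, 0.6, 0.2, 0.0])
```

Summing these left/right contributions over all eight channels is what the adders 168L and 168R then do.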
[0040]
In FIG. 5, the listener's (driver's) head is at the center of the cube defined by positions #1 to #8, and the listener faces the surface formed by positions #1, #2, #6, and #5 (that is, faces along the normal of this surface). The surface formed by positions #1 to #4 is below the listener, and the surface formed by positions #5 to #8 is above the listener. The orientation of the cube changes with the orientation of the listener's head (the front-rear orientation of the driver's helmet 10).
[0041]
The FIR filter 167R of the first channel Ch1 simulates, by the head-related transfer function, the frequency characteristic of a sound heard by the listener's right ear from the direction of #1 in FIG. 5. That is, the FIR filter 167R of the first channel Ch1 is a filter that convolves the input signal with the head-related transfer function of sound travelling from the direction of #1 in FIG. 5 to the listener's right ear.
[0042]
Similarly, the FIR filters 167L and 167R of the second to eighth channels Ch2 to Ch8 simulate, by head-related transfer functions, the transfer of sound from the directions of #2 to #8 in FIG. 5 to the listener's left and right ears, respectively. The FIR filters 167L and 167R of each channel are also connected to the sound source position determination device 16b and change their filter coefficients based on the sound source position information output from that device.
[0043]
The FIR filters 167L and 167R of each channel are connected to the input ends of the adder 168 (actually, the left ear adder 168L and the right ear adder 168R). Thus, the signals output from the FIR filters 167L and 167R of each channel are added and synthesized by the adder 168.
[0044]
The output ends of the adder 168 (the left ear adder 168L and the right ear adder 168R) are independently connected to the input ends of the DAC 169, whereby the signal output from each adder is converted into an analog signal. In this way, the left speaker signal SL, based on the signal from the left ear adder 168L, and the right speaker signal SR, based on the signal from the right ear adder 168R, are generated.
[0045]
In general, when a listener hears two identical sounds with a time difference, like the preceding sound and the following sound, the listener perceives the sound source as being in the direction of the sound that arrived earlier and/or the sound of the higher level. This well-known phenomenon is called the precedence effect (Haas effect). Therefore, if the following sound is localized at a position point-symmetric, with respect to the listener, to the virtual sound source position localized by the preceding sound, the precedence effect prevents the listener from misjudging the localization position established by the preceding sound, and the sense of direction of the preceding sound can be further emphasized.
[0046]
For this reason, in the sound source localization device 16c, the output end of each channel's amplifier of the following-sound gain multiplier 166 is connected, via an adder, into the preceding-sound channel (the path connecting the multiplexer 163 to the FIR filter 167) whose direction is point-symmetric (diagonally opposite) with respect to the listener to that channel's own direction (see Table 1).
[0047]
However, when the time difference and level difference between the preceding sound and the following sound are too small, the sound image is localized inside the head and the sense of direction is lost. Conversely, when the time difference becomes too large, the listener perceives two separate sounds. And when the level difference becomes too large, the effect of the following sound (emphasizing the sense of direction of the preceding sound) is lost.
[0048]
Based on this viewpoint, the delay line 164 and the gain multiplier 166 set the time difference (tap position) and the level difference (following-sound gain) to appropriate values according to the sound source position information.
[0049]
Here, the case where the virtual sound source position lies in a direction between the basic directions #1 to #8 described above will be considered. Suppose the virtual sound source position is to be set to an arbitrary point Ptgt in the space shown in FIGS. 6A to 6C. In this case, viewed from the center point D between the listener's left and right ears, the virtual sound source Ptgt lies within the square formed by #1, #2, #6, and #5. FIGS. 6B and 6C are a top view and a side view, respectively, of the cube of FIG. 6A.
[0050]
Let the intersection of the straight line L passing through the point D and the point Ptgt with the plane formed by positions #1, #2, #6, and #5 be the intersection point Cr. In the lateral direction (the horizontal direction, i.e., along #1-#2 or #5-#6), Cr lies at the position P1 that internally divides one horizontal side of the cube, side #1-#2 (FIGS. 5 and 6A), in the ratio a:b. In the vertical direction (i.e., along #1-#5 or #2-#6), Cr lies at the position P2 that internally divides one vertical side of the cube, side #1-#5 (FIGS. 5 and 6A), in the ratio c:d. Further, let the distance L1 from the point D to the virtual sound source Ptgt be x times the distance L2 from the point D to the intersection point Cr (sound source distance ratio x = L1/L2).
[0051]
At this time, the sound source position determination device 16b sets the gain of each amplifier of the preceding-sound gain multiplier 162 for the first channel Ch1, the second channel Ch2, the fifth channel Ch5, and the sixth channel Ch6 to a gain obtained from the internal division ratios a:b and c:d and the sound source distance ratio x. Specifically, the gain of each channel may be determined to be inversely proportional to the internal division ratios a:b and c:d, and inversely proportional to the square of the sound source distance ratio x. That is, the gain kChn for the preceding sound of the n-th channel (n = 1, 2, 5, 6) is determined by the following equations (1) to (4). The gain kChm for the preceding sound of the m-th channel (m = 3, 4, 7, 8) is set to "0".
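Equations (1) to (4) are not reproduced in this translation, but the text states that the gains are inversely proportional to the division ratio components and to the square of the distance ratio x. One plausible reading of that rule can be sketched as follows; the weighting and normalization here are assumptions for illustration, not the patent's exact formulas.

```python
def preceding_gains(a, b, c, d, x):
    """Illustrative gains for channels Ch1, Ch2, Ch5, Ch6: each channel
    is weighted by the opposing division-ratio components (so the corner
    nearest Cr gets the largest weight), normalized, then scaled by
    1/x^2 per the stated distance dependence. Equations (1)-(4) are not
    shown in the translation, so this is an assumed reconstruction."""
    raw = {
        "Ch1": b * d,  # Cr near #1 when a and c are small
        "Ch2": a * d,
        "Ch5": b * c,
        "Ch6": a * c,
    }
    total = sum(raw.values())
    return {ch: v / (total * x ** 2) for ch, v in raw.items()}

g = preceding_gains(a=1, b=3, c=1, d=1, x=1.0)
# Cr sits closer to #1 along side #1-#2 (a:b = 1:3), so the #1-side
# channels dominate:
assert g["Ch1"] > g["Ch2"] and g["Ch5"] > g["Ch6"]
```

Channels Ch3, Ch4, Ch7, Ch8 (the face away from Cr) would simply receive gain 0, as the text specifies.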
[0052]
As described above, the virtual sound source Ptgt' of the trailing sound is localized at a position point-symmetric, with respect to the point D, to the virtual sound source Ptgt localized by the preceding sound. The trailing sound is produced so as to have a fixed amount of attenuation (gain) and delay (time difference) with respect to the preceding sound. The delay amount is given by the delay line 164. The gain kCh'm of the m-th channel (m is 3, 4, 7, 8) is determined based on the following equations (5) to (8) according to the gain with respect to the preceding sound. Here, the coefficient p is a coefficient that determines the amount of attenuation relative to the preceding sound, and is given by the trailing-sound gain multiplier 166. The amount of attenuation and the amount of delay do not depend on the sound source position, and are set based on the listener's hearing characteristics and preferences. The gain kCh'n for the trailing sound of the n-th channel (n is 1, 2, 5, 6) is "0".
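The fixed attenuation p and delay applied to the trailing sound can be sketched as a per-sample delay-and-scale stage (the values used here are illustrative; the patent sets them from the listener's hearing characteristics and preferences):

```python
from collections import deque

def make_trailing_path(p, delay_samples):
    """Return a per-sample function applying a fixed delay and attenuation p."""
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)

    def step(x):
        y = p * buf[0]   # oldest sample: delayed by delay_samples, scaled by p
        buf.append(x)    # appending evicts the oldest entry (maxlen)
        return y

    return step
```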
[0053]
When the direction of the driver's helmet 10 (the direction of the driver's headphone 11) and/or the point Ptgt, which is the virtual sound source position, changes, the angle between the point Ptgt and the driver's headphone 11 also changes. Also in this case, the straight line L passing through the point D and the point Ptgt intersects one of the surfaces of the cube formed by the positions #1 to #8, and the intersection point Cr is obtained on that surface.
[0054]
Therefore, the sound source localization device 16c can calculate the inter-channel distance ratios a:b and c:d and the sound source distance ratio x from the intersection point Cr and the four positions among #1 to #8 that define the intersected surface. The gain of each amplifier of the preceding-sound gain multiplier 162 of each channel and the gain of each amplifier of the trailing-sound gain multiplier 166 are set to gains determined by these ratios. As a result, regardless of the direction of the headphones 11, the virtual sound source can always be localized at the point Ptgt.
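A minimal sketch of the geometry in [0053]-[0054], assuming the head centre D sits at the centre of an axis-aligned cube of half-side `half` spanned by positions #1-#8 (the coordinate frame and axis alignment are our simplifying assumptions):

```python
import math

def cube_face_ratios(D, Ptgt, half=1.0):
    """Intersect the ray from D through Ptgt with the cube face it exits,
    and return the intersection point Cr and the ratio x = L1 / L2."""
    v = tuple(p - d for p, d in zip(Ptgt, D))
    t = half / max(abs(c) for c in v)          # scale until one axis hits a face
    Cr = tuple(d + t * c for d, c in zip(D, v))
    x = math.dist(D, Ptgt) / math.dist(D, Cr)  # sound source distance ratio
    return Cr, x
```

From Cr, the internal-division ratios a:b and c:d follow by comparing its in-face coordinates with the corner positions of that face.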
[0055]
The sound source position determination device 16b sends an instruction signal to the multiplexer 163 so as to select the sound source to be localized, and outputs the virtual sound source position for the selected sound source to the preceding-sound gain multiplier 162, the delay line 164, the trailing-sound gain multiplier 166, and the FIR filter 167. This sound source selection is performed for all the sound source signals, whereby a sound image based on each of the sound source signals is localized.
[0056]
Referring again to FIG. 3, the left active noise control device (ANC) 16dL is connected to the left ear proximity microphone 15L, receives a signal from the left ear proximity microphone 15L, and outputs a signal obtained by inverting the phase of the input signal.
[0057]
The left delay volume controller 16eL is connected to the sound source localization device 16c
(DAC 170), and is configured to receive the left speaker signal SL output from the sound source
localization device 16c.
The left delay volume controller 16eL delays the input left speaker signal SL by a predetermined
time, and generates and outputs a signal obtained by multiplying the delayed signal by a
predetermined gain k (k <1).
[0058]
The adder 16f is connected to the left active noise control device 16dL and the left delay volume
controller 16eL, and outputs a signal obtained by adding and combining the output signals from
these. The adder 16g is connected to the sound source localization device 16c (DAC 170), the
adder 16f, and the left speaker 11L. The adder 16g is configured to output a signal obtained by
adding and combining the signal SL from the sound source localization device 16c and the
output signal from the adder 16f to the left speaker 11L.
[0059]
Similarly, the right active noise control device (ANC) 16dR is connected to the right ear proximity microphone 15R, receives a signal from the right ear proximity microphone 15R, and outputs a signal obtained by inverting the phase of the input signal.
[0060]
The right delay volume controller 16eR is connected to the sound source localization device 16c
(DAC 170), and is configured to receive the right speaker signal SR output from the sound source
localization device 16c.
The right delay volume controller 16eR delays the input right side speaker signal SR by a
predetermined time, and generates and outputs a signal obtained by multiplying the delayed
signal by a predetermined gain k (k <1).
[0061]
The adder 16h is connected to the right side active noise control device 16dR and the right side
delay volume controller 16eR, and outputs a signal obtained by adding and combining the output
signals from these. The adder 16i is connected to the sound source localization device 16c (DAC
170), the adder 16h, and the right speaker 11R. The adder 16i outputs a signal obtained by
adding and combining the signal SR from the sound source localization device 16c and the
output signal from the adder 16h to the right speaker 11R.
[0062]
(Operation) Next, an outline of the operation of the headphone device configured as described
above will be described. The function of the sound source position determination device 16b is
actually achieved by the CPU of the microcomputer performing the processing shown by the
flowchart of FIG. 7 at predetermined time intervals. Also, the function of the sound source
localization device 16c is achieved by the DSP (digital signal processor) performing the
processing shown by the flowchart of FIG. 8.
[0063]
The sound source position determination device 16b receives the signal from the music player 12 (music signal), the signal from the mobile phone 13 (incoming call signal), and the amplified ambient sound signals from the plurality of directional microphones 14FL, 14FR, 14RL, 14RR. In order to determine the position (virtual sound source position) of the sound image based on each signal, it starts the processing from step 100 in FIG. 7, proceeds to step 110, and determines whether there is a music signal.
[0064]
Now, it is assumed that there is a music signal from the music player 12 and an incoming call signal from the mobile phone 13. In this case, the sound source position determination device 16b determines "Yes" in step 110, proceeds to step 115, and determines whether or not there is an incoming call signal. In this case, the sound source position determination device 16b determines "Yes" in step 115, proceeds to step 120, and generates localization information for localizing the sound image of the music signal at the lower front of the headphone wearer; in the following step 125, it generates localization information for localizing the sound image of the incoming call signal at the front of the headphone wearer. Thereafter, the sound source position determination device 16b proceeds to step 145.
[0065]
On the other hand, when there is a music signal from the music player 12 but no incoming call signal from the mobile phone 13, the sound source position determination device 16b determines "Yes" in step 110 and "No" in step 115. Then, in step 130, it generates localization information for localizing the sound image of the music signal at the front of the headphone wearer. Thereafter, the sound source position determination device 16b proceeds to step 145.
[0066]
On the other hand, if there is no music signal from the music player 12 but there is an incoming call signal from the mobile phone 13, the sound source position determination device 16b determines "No" in step 110 and proceeds to step 135, where it determines whether there is an incoming call signal. In this case, the sound source position determination device 16b determines "Yes" in step 135, proceeds to step 140, and generates localization information for localizing the sound image of the incoming call signal at the front of the headphone wearer. Thereafter, the sound source position determination device 16b proceeds to step 145.
[0067]
If neither the music signal nor the incoming call signal is present, the sound source position determination device 16b determines "No" in both steps 110 and 135 and proceeds directly to step 145. As described above, localization information is generated for localizing the sound image based on the music signal and the sound image based on the incoming call signal at positions according to the presence or absence of each signal.
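The branch structure of steps 110 to 140 can be summarized as follows (the string labels are shorthand for the localization positions named in the text; the dictionary form is ours):

```python
def target_sound_positions(has_music, has_incoming_call):
    """Virtual-source positions chosen by the FIG. 7 logic of [0064]-[0067]."""
    pos = {}
    if has_music and has_incoming_call:
        pos["music"] = "lower front"        # step 120
        pos["incoming_call"] = "front"      # step 125
    elif has_music:
        pos["music"] = "front"              # step 130
    elif has_incoming_call:
        pos["incoming_call"] = "front"      # step 140
    return pos                              # empty when neither is present
```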
[0068]
Next, the sound source position determination device 16b executes steps 145 to 180, performing the processing described below.
[0069]
Step 145 and Step 150: When the ambient sound level signal for the signal from the microphone 14FL is "1", localization information is generated for localizing the sound image based on the signal from the microphone 14FL at a predetermined position on the left front of the headphone wearer (a position a predetermined distance to the left front of the headphone 11). Step 155 and Step 160: When the ambient sound level signal for the signal from the microphone 14FR is "1", localization information is generated for localizing the sound image based on the signal from the microphone 14FR at a predetermined position on the right front of the headphone wearer (a position a predetermined distance to the right front of the headphone 11).
[0070]
Step 165 and Step 170: When the ambient sound level signal for the signal from the microphone 14RL is "1", localization information is generated for localizing the sound image based on the signal from the microphone 14RL at a predetermined position on the left rear of the headphone wearer (a position a predetermined distance to the left rear of the headphone 11). Step 175 and Step 180: When the ambient sound level signal for the signal from the microphone 14RR is "1", localization information is generated for localizing the sound image based on the signal from the microphone 14RR at a predetermined position on the right rear of the headphone wearer (a position a predetermined distance to the right rear of the headphone 11).
[0071]
Thereafter, the sound source position determining device 16b proceeds to step 185, outputs the
localization information generated so far to the sound source localization device 16c as virtual
sound source position information, and proceeds to step 195 to temporarily end this routine.
[0072]
On the other hand, the DSP of the sound source localization device 16c starts the processing of the routine shown in FIG. 8 from step 200 every time a predetermined time elapses, and proceeds to step 210. There, based on the "virtual sound source position information output by the CPU", it determines, for the virtual sound source position of each signal (the music signal, the incoming call signal, and each amplified ambient sound signal whose ambient sound level signal is "1"), the gain of each channel of the gain multiplier 162, the tap position of the delay line 164, the gain of the gain multiplier 166 (the coefficient p described above), and the coefficients of the FIR filter 167. In the following, the music signal, the incoming call signal, and each amplified ambient sound signal whose ambient sound level signal is "1" are collectively referred to as "generation source signals", and the devices that output them are collectively referred to as "generation sources".
[0073]
Next, the DSP proceeds to step 220 and, based on the sound source position information, outputs to the multiplexer 163 a signal designating one of the generation sources currently producing a generation source signal, so that the signal from the designated source (the generation source signal), converted into a digital signal by the A/D converter 161, is input to the gain multiplier 162. Next, the DSP proceeds to step 230 and outputs the tap position for the generation source (generation source signal) designated in step 220 to the delay line 164.
[0074]
Thereafter, the DSP proceeds to step 240 and outputs, for the generation source (generation source signal) designated in step 220, the gains of the amplifiers of the respective channels of the gain multiplier 162, the gains of the amplifiers of the respective channels of the gain multiplier 166, and the coefficients of the FIR filter 167 to each of them. Thus, the left speaker signal SL and the right speaker signal SR are generated that localize the generation source signal designated in step 220 at the virtual sound source position for that generation source signal.
[0075]
Next, the DSP proceeds to step 250 and determines whether steps 220 to 240 described above have been completed for all the generation sources currently producing a generation source signal (that is, whether one cycle is completed). If not, the process returns to step 220. If one cycle is completed, the DSP proceeds to step 295 and temporarily terminates this routine. As described above, signals (the left speaker signal SL and the right speaker signal SR) are generated that localize all the sound images based on the generation source signals present at the current point in time at the virtual sound source positions according to the sound source position information.
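The FIG. 8 loop (steps 220 to 250) can be sketched as one pass over the active generation sources; the parameter tuple and the `route` callback are placeholders for the multiplexer/gain/delay/FIR plumbing described above:

```python
def localization_cycle(active_sources, params, route):
    """One cycle of steps 220-250: apply each source's localization params.

    params[src] holds (gains_162, tap_position, gains_166, fir_coeffs);
    route() stands in for the hardware paths driven in steps 220-240.
    """
    for src in active_sources:          # step 250 repeats until all are done
        gains_162, tap, gains_166, fir = params[src]
        route(src, gains_162, tap, gains_166, fir)
```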
[0076]
On the other hand, ambient noise is reduced as follows. Although only the noise-reduction operation using the output from the left speaker 11L is described below, the output from the right speaker 11R is controlled in the same manner as that of the left speaker 11L.
[0077]
First, as shown in FIG. 3, the left ear proximity microphone 15L picks up the sound (left noise)
that has reached the vicinity of the left speaker 11L from the outside, and generates a left ear
proximity sound signal NL.
[0078]
The left speaker 11L outputs a sound corresponding to the left speaker signal SL. A part of the sound generated by the left speaker 11L leaks, and the leaked sound is picked up by the left ear proximity microphone 15L. Therefore, the signal generated by the left ear proximity microphone 15L includes the signal k·SL (k < 1) corresponding to the sound that leaks around from the left speaker 11L. That is, the left ear proximity microphone 15L generates the signal NL + k·SL. The signal NL + k·SL is phase-inverted by the left active noise control device 16dL and converted into the signal −(NL + k·SL). This signal −(NL + k·SL) is input to the adder 16f.
[0079]
On the other hand, the left delay volume controller 16eL delays the left speaker signal SL by a
predetermined delay time and outputs a signal k · SL obtained by multiplying the delayed signal
by k. The signal SL included in the signal − (NL + k · SL) output from the left active noise control
device 16dL is delayed by a predetermined time from the left speaker signal SL output from the
sound source localization device 16c. Therefore, the left delay volume controller 16eL delays the
left speaker signal SL output from the sound source localization device 16c by this
predetermined time. The signal k · SL from the left delay volume controller 16eL is input to the
adder 16f.
[0080]
The adder 16f adds and synthesizes the input signal −(NL + k·SL) and the signal k·SL. As a result, the adder 16f outputs the signal −NL to the adder 16g. The adder 16g adds the left speaker signal SL from the sound source localization device 16c and the signal −NL from the adder 16f and outputs the result to the left speaker 11L. Normally, the signal k·SL corresponding to the sound leaking around from the left speaker 11L, which is included in the left ear proximity sound signal, is much smaller than the signal NL corresponding to the sound reaching the vicinity of the left speaker 11L from the surroundings (that is, k is a value close to 0). Therefore, since the left ear proximity sound signal is substantially the signal NL, the adder 16g substantially superimposes the left ear proximity sound reverse-phase signal −NL, which is the phase inversion of the left ear proximity sound signal NL, on the left speaker signal SL from the sound source localization device 16c.
[0081]
Thus, the left speaker 11L generates a sound corresponding to the signal SL − NL. As a result, the sound arriving at the left ear of the headphone wearer is the sum of the sound reaching the vicinity of the left ear from outside the headphone 11 (the sound represented by the left ear proximity sound signal NL) and the sound corresponding to the signal SL − NL from the adder 16g, that is, a sound corresponding to the left speaker signal SL. In this way, since the ambient sound in the vicinity of the left ear of the headphone wearer (that is, unnecessary noise heard at the left ear) is cancelled by the left ear proximity sound reverse-phase signal −NL, the headphone wearer can clearly listen to the sounds based on the music signal, the incoming call signal, and the signals from the amplified plural directional microphones 14FL, 14FR, 14RL, 14RR, each localized at its virtual sound source position.
[0082]
As described above, the headphone device according to the first embodiment of the present invention includes: a left speaker (11L) that generates a sound in response to an input signal and a right speaker (11R) that generates a sound in response to an input signal; sound sources (12, 13) that generate sound source signals (a music signal and an incoming call signal) for generating predetermined target sounds; a plurality of directional microphones (14FL, 14FR, 14RL, 14RR) that pick up the ambient sound generated around the headphone so that the direction in which the ambient sound is generated relative to the headphone can be identified; ambient sound signal generating means (14FL, 14FR, 14RL, 14RR, the control unit 16, and the connections between them) for generating ambient sound signals representing the collected ambient sound; sound source localization means (16b, 16c) for generating the left speaker signal SL and the right speaker signal SR, which localize a sound image based on each sound source signal at a predetermined virtual sound source position and localize a sound image based on each ambient sound signal at a predetermined position in the generation direction of that ambient sound, and for outputting the left speaker signal SL and the right speaker signal SR to the left speaker 11L and the right speaker 11R; a left ear proximity microphone (15L) that generates a left ear proximity sound signal (NL); a right ear proximity microphone (15R) that generates a right ear proximity sound signal (NR); and noise reduction means (16dL, 16eL, 16f, 16g, 16dR, 16eR, 16h, 16i) for generating a left ear proximity sound reverse-phase signal (−NL), which is the phase inversion of the left ear proximity sound signal (NL), and superimposing it on the left speaker signal (SL), and for generating a right ear proximity sound reverse-phase signal (−NR), which is the phase inversion of the right ear proximity sound signal (NR), and superimposing it on the right speaker signal (SR).
[0083]
Therefore, the sound image based on the sound source signal for generating the target sound is
localized at a predetermined virtual sound source position, and the sound image based on the
ambient sound signal is localized at a predetermined position in the generation direction of the
ambient sound.
As a result, the headphone wearer can reliably recognize the direction from which an ambient sound is generated while, for example, listening to music or talking on a mobile phone.
[0084]
Furthermore, ambient sound in the vicinity of the left and right ears of the headphone wearer
(that is, unnecessary noise heard by the left and right ears) is eliminated. Therefore, noise such as
wind noise during driving of the motorcycle is reduced, so that the driver wearing the headphone
can clearly hear the target sound and the ambient sound.
[0085]
Second Embodiment Next, a headphone device according to a second embodiment of the present
invention will be described with reference to FIG. In the following, the same components as those
of the headphone device according to the first embodiment are denoted by the same reference
numerals, and the detailed description thereof is omitted.
[0086]
Like the headphone device according to the first embodiment, the headphone device according to
the second embodiment is applied to the driver's helmet 10 of the motorcycle BY, and includes a
headphone 11 ′ replacing the headphone 11.
[0087]
The headphone 11 'includes a left speaker 11L that generates sound in response to an input
signal and a right speaker 11R that generates sound in response to the input signal.
Since the left speaker 11L and the right speaker 11R in the second embodiment generate only the target sounds (the sound based on the music signal, the sound based on the incoming call signal, and the sound based on the amplified ambient sound signal), they are hereinafter referred to as the target sound left speaker 11L and the target sound right speaker 11R.
[0088]
The headphone 11 'further includes a noise removal left speaker 17L and a noise removal right
speaker 17R. The noise removal left speaker 17L is disposed on the inner left side of the helmet
10 so as to be positioned in the vicinity of the target sound left speaker 11L (near the left ear of
the driver wearing the helmet 10). Similarly, the noise removing right speaker 17R is disposed on
the inner right side of the helmet 10 so as to be located in the vicinity of the target sound right
speaker 11R (near the right ear of the driver wearing the helmet 10).
[0089]
The left ear proximity microphone 15L is connected to the input terminal of the left active noise
control device 16dL. The output terminal of the left side active noise control device 16dL is
connected to the noise removal left speaker 17L. Similarly, the right ear proximity microphone
15R is connected to the input terminal of the right side active noise control device 16dR. The
output terminal of the right side active noise control device 16dR is connected to the right side
speaker 17R for noise removal. In the second embodiment, the headphone 11' is configured such that the sounds generated from the target sound left speaker 11L and the target sound right speaker 11R do not substantially leak.
[0090]
Furthermore, this headphone device includes a level detection circuit 16a, a sound source
position determination device 16b, and a sound source localization device 16c, as in the
headphone device of the first embodiment. A plurality of directional microphones 14FL, 14FR,
14RL, 14RR are connected to the level detection circuit 16a. A music player 12 and a mobile
phone 13 are connected to the sound source position determination device 16b and the sound source localization device 16c. The connection relationship among the level detection circuit
16a, the sound source position determination device 16b, and the sound source localization
device 16c is the same as that of the headphone device of the first embodiment.
[0091]
Next, the operation of the headphone device according to the second embodiment configured as
described above will be described. In the headphone device, the sound source localization device
16c generates the left speaker signal SL and the right speaker signal SR, as in the headphone
device according to the first embodiment. The left speaker signal SL and the right speaker signal
SR are input to the target sound left speaker 11L and the target sound right speaker 11R,
respectively.
[0092]
As a result, by the sounds generated from the target sound left speaker 11L and the target sound right speaker 11R, the sound images based on the music signal, the incoming call signal, and the ambient sound signals (the amplified ambient sound signals whose ambient sound level signal is "1") are localized at the predetermined virtual sound source positions.
[0093]
On the other hand, the left ear proximity sound signal NL, generated by the left ear proximity microphone 15L and representing the sound reaching the vicinity of the left ear of the headphone wearer from outside the headphone device, is phase-inverted by the left active noise control device 16dL (left noise removal signal generating means) and converted into the signal −NL, which is then input to the noise removal left speaker 17L. As a result, at the left ear of the headphone wearer, the sound reaching the vicinity of the left ear (the sound represented by the left ear proximity sound signal NL) is cancelled by the sound corresponding to the signal −NL generated by the noise removal left speaker 17L.
[0094]
Similarly, the right ear proximity sound signal NR, generated by the right ear proximity microphone 15R and representing the sound reaching the vicinity of the right ear of the headphone wearer from outside the headphone device, is phase-inverted by the right active noise control device 16dR (right noise removal signal generating means) and converted into the signal −NR, which is then input to the noise removal right speaker 17R. As a result, at the right ear of the headphone wearer, the sound reaching the vicinity of the right ear (the sound represented by the right ear proximity sound signal NR) is cancelled by the sound corresponding to the signal −NR generated by the noise removal right speaker 17R.
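Compared with the first embodiment, the second embodiment needs no delay/volume compensation path, because the target sound and the anti-noise signal never share a speaker. A minimal sketch of the four drive signals:

```python
def second_embodiment_outputs(SL, SR, NL, NR):
    """Drive signals for the four speakers of the second embodiment."""
    return {
        "target_L": SL,      # target sound left speaker 11L
        "target_R": SR,      # target sound right speaker 11R
        "denoise_L": -NL,    # noise removal left speaker 17L (via ANC 16dL)
        "denoise_R": -NR,    # noise removal right speaker 17R (via ANC 16dR)
    }
```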
[0095]
Therefore, the headphone wearer can clearly listen to the sounds based on the music signal, the incoming call signal, and the signals from the plurality of amplified directional microphones 14FL, 14FR, 14RL, 14RR, each localized at its virtual sound source position.
[0096]
Third Embodiment Next, a headphone device according to a third embodiment of the present
invention will be described.
This headphone device differs from the headphone device according to the first embodiment only in that the sound source localization device 16c performs the processing shown by the flowcharts of FIGS. 10 and 11 in place of that of FIG. 8. Therefore, the following description focuses on the differences. In FIG. 10, the same steps as those shown in FIG. 8 are denoted by the same reference numerals.
[0097]
The sound source localization device 16c of the third embodiment starts the processing from step 300 of FIG. 10 each time a predetermined time elapses, proceeds to step 210 described above, and determines each gain, coefficient, and the like. Next, the sound source localization device 16c proceeds to step 310 and designates (selects) a generation source signal to be localized. When the designated signal is an amplified ambient sound signal, it performs enhancement processing of the amplified ambient sound signal according to the processing shown in FIG. 11.
[0098]
That is, the sound source localization apparatus 16c proceeds from step 400 to step 410 in FIG. 11 and performs frequency analysis by FFT (Fast Fourier Transform) on the "amplified ambient sound signal" specified in step 310, so that the "amplified ambient sound signal" represented in the time domain is expressed in the frequency domain. Thereby, the amplitude spectrum and the phase spectrum of the "amplified ambient sound signal" are acquired.
[0099]
Then, the sound source localization apparatus 16c proceeds to step 420 and corrects the ambient sound signal by increasing only the amplitude spectrum at a specific frequency (for example, a specific frequency contained in the siren of an emergency vehicle). The sound source localization apparatus 16c then performs an inverse FFT in step 430 to generate a corrected ambient sound signal, that is, the "amplified ambient sound signal" in which the amplitude spectrum at the specific frequency has been increased, and proceeds to step 315 of FIG. 10 via step 495.
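Steps 410 to 430 can be sketched with NumPy's real FFT; the sample rate, target frequency, bandwidth, and boost factor below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def emphasize(signal, fs, f0, boost=4.0, bw=50.0):
    """Boost the amplitude spectrum near f0 (Hz), keep phase, inverse-FFT."""
    spec = np.fft.rfft(signal)                      # step 410: FFT
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[np.abs(freqs - f0) <= bw] *= boost         # step 420: raise |X(f0)|
    return np.fft.irfft(spec, n=len(signal))        # step 430: inverse FFT
```

Multiplying a bin by a real positive factor scales its amplitude spectrum while leaving its phase spectrum unchanged, matching the correction described in the text.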
[0100]
In step 315, the sound source localization device 16c causes the generation source signal specified in step 310 (the corrected ambient sound signal), converted into a digital signal by the A/D converter 161, to be input to the gain multiplier 162. Thereafter, the sound source localization device 16c performs the processing of steps 230 and 240 described above. As a result, the left speaker signal SL and the right speaker signal SR are generated, which localize the generation source signal designated in step 310 at the virtual sound source position for that signal.
[0101]
As described above, according to the headphone device of the third embodiment, the amplitude spectrum of the specific frequency contained in the ambient sound signal is increased, and the left speaker signal SL and the right speaker signal SR are generated based on the resulting signal.
[0102]
Therefore, since, for example, the sound of a specific frequency emitted by an emergency vehicle such as an ambulance, or the alarm sound of a railroad crossing, is emphasized, the headphone wearer can recognize the surrounding situation more reliably.
[0103]
Fourth Embodiment Next, a headphone device according to a fourth embodiment of the present
invention will be described.
This headphone device differs from the headphone device according to the third embodiment only in that the sound source localization device 16c performs the processing shown by the flowchart of FIG. 12 in place of that of FIG. 11. Therefore, the following description focuses on the differences. In FIG. 12, the same steps as those shown in FIG. 11 are denoted by the same reference numerals.
[0104]
When the sound source localization apparatus 16c of the fourth embodiment proceeds to step 310 shown in FIG. 10 and the designated (selected) signal is an amplified ambient sound signal, it performs enhancement processing of the amplified ambient sound signal according to the processing shown in FIG. 12.
[0105]
That is, the sound source localization apparatus 16c proceeds from step 500 to step 410 in FIG. 12 and performs frequency analysis by FFT on the "amplified ambient sound signal" specified in step 310, so that the "amplified ambient sound signal" represented in the time domain is expressed in the frequency domain. Thereby, the amplitude spectrum and the phase spectrum of the "amplified ambient sound signal" are acquired.
[0106]
Next, the sound source localization device 16c proceeds to step 510 and determines whether the amplitude spectrum at the specific frequency described above is equal to or greater than a predetermined value (predetermined threshold). If it is, the sound source localization apparatus 16c determines "Yes" in step 510 and proceeds to step 520, where it further amplifies the designated signal so as to increase the level (volume) of the sound based on that signal, and then proceeds to step 315 of FIG. 10 via step 595.
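The step 510/520 check of the fourth embodiment can be sketched as follows; the threshold and gain values are illustrative assumptions:

```python
import numpy as np

def maybe_amplify(signal, fs, f0, threshold, gain=2.0):
    """Amplify the whole signal if its amplitude spectrum at f0 is large."""
    spec = np.abs(np.fft.rfft(signal))              # amplitude spectrum
    bin_f0 = int(round(f0 * len(signal) / fs))      # FFT bin nearest f0
    if spec[bin_f0] >= threshold:                   # step 510: "Yes"
        return signal * gain                        # step 520: raise volume
    return signal                                   # step 510: "No"
```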
[0107]
On the other hand, if the sound source localization device 16c determines "No" in step 510, the process proceeds directly to step 315 of FIG. 10 via step 595.
[0108]
Thereafter, the sound source localization device 16c executes the processing of steps 315, 230, and 240 to generate the left speaker signal SL and the right speaker signal SR. Thus, the signal designated (selected) in step 310, or that signal after the enhancement processing, is localized at the virtual sound source position for the signal.
[0109]
As described above, the headphone device of the fourth embodiment extracts a specific ambient sound signal from among the ambient sound signals based on the amplitude spectrum of each ambient sound signal, that is, it extracts an ambient sound signal whose amplitude spectrum at the specific frequency has a value larger than a predetermined value, and generates the left speaker signal SL and the right speaker signal SR so as to increase the magnitude of the sound based on the extracted ambient sound signal.
[0110]
According to this, the level (volume) of sounds that the headphone wearer should particularly be alerted to (for example, the siren of an emergency vehicle or the alarm sound of a railroad crossing, which contain an amplitude spectrum at a specific frequency) is increased, so the wearer can more reliably recognize important information about the surrounding situation.
[0111]
Fifth Embodiment Next, a headphone device according to a fifth embodiment of the present invention will be described.
This headphone device differs from the headphone device according to the third embodiment only in that the sound source localization device 16c performs the processing shown by the flowchart of FIG. 13 in place of that of FIG. 11.
Therefore, the following description will focus on the differences. In FIG. 13, the steps that are the same as those shown in FIG. 11 are denoted by the same reference numerals.
[0112]
When the sound source localization apparatus 16c according to the fifth embodiment proceeds to step 310 shown in FIG. 10 and the designated (selected) signal is an amplified ambient sound signal, it performs the emphasis processing of that amplified ambient sound signal according to the process shown in FIG. 13.
[0113]
That is, the sound source localization apparatus 16c proceeds from step 600 to step 410 in FIG. 13 and performs frequency analysis by FFT on the "amplified ambient sound signal" specified in step 310, so that the signal, originally expressed in the time domain, is expressed in the frequency domain. Thereby, the amplitude spectrum and the phase spectrum of the "amplified ambient sound signal" are acquired.
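The frequency analysis of step 410 can be illustrated as follows; this is a minimal sketch assuming a NumPy real FFT, not the device's actual DSP implementation.

```python
import numpy as np

def analyze(signal):
    """Express a time-domain signal in the frequency domain (sketch of step 410):
    returns its amplitude spectrum and phase spectrum."""
    spectrum = np.fft.rfft(signal)
    return np.abs(spectrum), np.angle(spectrum)

# The amplitude and phase spectra together carry the full signal:
# the original time-domain signal is exactly recoverable from them.
x = np.random.default_rng(0).standard_normal(256)
amp, phase = analyze(x)
x_back = np.fft.irfft(amp * np.exp(1j * phase), n=len(x))
```

Keeping the phase spectrum alongside the amplitude spectrum is what allows the later inverse FFT (step 430) to return a modified spectrum to the time domain.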
[0114]
Next, the sound source localization device 16c proceeds to step 610 and determines whether or not the amplitude spectrum at the specific frequency is greater than or equal to a predetermined value (predetermined threshold). If it is, the apparatus determines "Yes" in step 610 and proceeds to step 620, where it increases the amplitude spectrum for frequencies at or above a predetermined frequency (which may be the specific frequency). That is, the sound source localization device 16c performs a process of emphasizing the high-frequency region of the designated signal. On the other hand, if the amplitude spectrum at the specific frequency is below the predetermined value, the apparatus determines "No" in step 610 and proceeds directly to step 430.
[0115]
Then, in step 430, the sound source localization apparatus 16c performs an inverse FFT to return the "amplified ambient sound signal" (or, when step 620 was executed, the version of that signal with its high-frequency region emphasized) to the time-domain representation, and proceeds to step 315 of FIG. 10.
[0116]
Thereafter, the sound source localization device 16c executes the processing of steps 315, 230 and 240 to generate the left speaker signal SL and the right speaker signal SR. Thus, the signal designated (selected) in step 310, or that signal after the enhancement processing, is localized at the virtual sound source position associated with it.
[0117]
As described above, the headphone device according to the fifth embodiment extracts a specific ambient sound signal from the ambient sound signals based on the amplitude spectrum of each ambient sound signal. That is, it extracts an ambient sound signal whose amplitude spectrum at the specific frequency is larger than a predetermined value (predetermined threshold) (step 610), corrects the extracted signal by increasing the amplitude spectrum of its high-frequency components (step 620), and generates the left speaker signal SL and the right speaker signal SR based on the corrected ambient sound signal.
[0118]
According to this, the high-frequency components of sounds that should alert the headphone wearer (that is, sounds such as an emergency-vehicle siren or a railroad-crossing alarm whose spectra include the specific frequency) are emphasized, so the wearer can recognize the surrounding conditions more reliably.
[0119]
As described above, according to the headphone device of each embodiment of the present invention, the ambient sound reaching the vicinity of the headphone wearer's left ear from outside (that is, unnecessary noise heard by the left ear) is canceled by the left-ear near-ear antiphase signal, and the ambient sound reaching the vicinity of the right ear from outside (that is, unnecessary noise heard by the right ear) is canceled by the right-ear near-ear antiphase signal.
Furthermore, the target sound and the ambient sound are localized at appropriate virtual sound source positions.
Therefore, the headphone wearer, for example a driver, can clearly recognize the direction from which an ambient sound such as an alarm originates.
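The cancellation principle summarized here, superimposing a phase-inverted copy of the near-ear noise (-NL, -NR) on the speaker signals SL and SR, can be sketched in an idealized form. This assumes the near-ear microphone captures exactly the noise heard at the ear, which real ANC hardware only approximates.

```python
import numpy as np

def cancel_noise(speaker_signal, near_ear_noise):
    """Superimpose the inverted noise (-N) on the speaker signal S, as the
    ANC16dL/ANC16dR devices do for each ear. Idealized sketch: assumes a
    perfect noise measurement and no processing delay."""
    return speaker_signal + (-near_ear_noise)

t = np.linspace(0.0, 1.0, 1000)
music = np.sin(2 * np.pi * 440 * t)        # target sound S (e.g. from the music player)
noise = 0.3 * np.sin(2 * np.pi * 50 * t)   # ambient noise N picked up near the ear
ear = cancel_noise(music + noise, noise)   # what reaches the wearer's ear
```

In practice the inverted signal must be delayed and filtered to match the acoustic path, which is why the embodiments place the ANC devices and delay volume controllers in the control unit 16.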
[0120]
The present invention is not limited to the above embodiments, and various modifications can be adopted within its scope. For example, the headphone device may be a normal headphone device that is not mounted on a helmet. Also, at least two directional microphones suffice to collect ambient sound, and four or more directional microphones may be used.
[0121]
Also, the amplified ambient sound signal may be expressed in the frequency domain by FFT, the shape (envelope) of its amplitude spectrum compared with shapes stored in a database in advance, and, when the two are similar, the amplified ambient sound signal extracted as the ambient sound signal to be subjected to the enhancement processing described in the third to fifth embodiments. Also, the sound source position determination device 16b may be configured to analyze the ambient sound signals obtained from the directional microphones 14FL, 14FR, 14RL and 14RR and to determine, based on that analysis, the localization position of the sound image derived from these microphone signals.
[0122]
FIG. 1 is a schematic view of the headphone device according to a first embodiment of the present invention. FIG. 2 is an enlarged schematic view of the headphone device shown in FIG. 1. FIG. 3 is a block diagram of the headphone device shown in FIGS. 1 and 2. FIG. 4 is a block diagram of the sound source localization apparatus shown in FIG. 3. FIG. 5 is a diagram for explaining the sound image localization method used by the headphone device shown in FIGS. 1 to 4. FIG. 6 is another diagram for explaining the sound image localization method used by the headphone device shown in FIGS. 1 to 4. FIG. 7 is a flowchart showing the processing sequence executed by the sound source position determination apparatus shown in FIGS. 2 and 3. FIG. 8 is a flowchart showing the processing sequence executed by the sound source localization apparatus shown in FIGS. 2 and 3. FIG. 9 is a block diagram of the headphone device according to a second embodiment of the present invention. FIG. 10 is a flowchart showing the processing procedure executed by the sound source localization apparatus of the headphone devices according to the third to fifth embodiments of the present invention. FIG. 11 is a flowchart showing the processing executed when the sound source localization apparatus of the headphone device according to the third embodiment of the present invention performs the emphasis processing. FIG. 12 is a flowchart showing the processing executed when the sound source localization apparatus of the headphone device according to the fourth embodiment of the present invention performs the emphasis processing. FIG. 13 is a flowchart showing the processing executed when the sound source localization apparatus of the headphone device according to the fifth embodiment of the present invention performs the emphasis processing.
Description of Reference Numerals
[0123]
10: driver's helmet, 11: driver's headphones, 11R: right speaker (right speaker for target sound), 11L: left speaker (left speaker for target sound), 12: music player, 13: mobile phone, 14FL, 14FR, 14RL, 14RR: directional microphones, 15R: near-right-ear microphone, 15L: near-left-ear microphone, 16: control unit, 16a: level detection circuit, 16b: sound source position determination device, 16c: sound source localization device, 16dL: left-side active noise control device, 16dR: right-side active noise control device, 16eR: right-side delay volume controller, 16eL: left-side delay volume controller, 16f to 16i: adders, 17R: right-side noise removal speaker, 17L: left-side noise removal speaker, 161: A/D converter, 162: preceding sound gain multiplier, 163: multiplexer, 164: delay line, 165: band pass filter, 166: trailing sound gain multiplier, 167: FIR filter, 167L: FIR filter for left ear, 167R: FIR filter for right ear, 168: adder, 168R: adder for right ear, 168L: adder for left ear, 169: D/A converter.