Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2007036610
PROBLEM TO BE SOLVED: To provide a sound generation device that reduces the possibility of a
listener missing a new or important sound when sounds based on sound source signals from a
plurality of sound sources are generated from headphones.
SOLUTION: This sound generation device is applied to a motorcycle BY and comprises a driver's
headphone 11 and a driver's microphone 12 mounted on a helmet 10, a driver unit 14, a mobile
phone 16, a music player 17, a navigation device 18, a vehicle state detection device 19, a
traveling state detection device 20, another sound source 21, an in-vehicle unit 30, and the like.
The device determines the position at which a sound image should be localized (the virtual sound
source position) for each sound source signal according to the type of the sound source signal
generated from each of these sound sources and to changes in the types of the sound source
signals. [Selected figure] Figure 1
Sound generation device
[0001]
The present invention relates to a sound generation apparatus (sound localization apparatus) for
localizing a sound image based on sound source signals from a plurality of sound sources and
generating sound from headphones.
[0002]
Conventionally, an apparatus is known in which the helmet of a motorcycle driver and the helmet
of a passenger are each equipped with a microphone and a speaker, enabling conversation by
wireless communication between the driver and the passenger (see, for example, Patent
Document 1).
10-05-2019
1
JP 2001-280982 A
[0003]
Furthermore, with the above-mentioned conventional apparatus, the driver can listen through the
speaker not only to the voice of the passenger but also to other sounds, such as the ringing tone
of an ordinary cellular phone, the voice of the other party to a call, music, and traffic
information from a navigation system.
[0004]
However, with the above-mentioned conventional apparatus, when a sound from a new sound
source is emitted in addition to a sound that is already being generated continuously, for
example when a passenger starts a conversation while the driver is listening to music, when
important traffic information is announced while listening to music, or when an alarm is issued
while talking with a passenger, the driver's concentration on the sound heard up to that point is
high, and the driver may therefore miss the sound from the new sound source.
Further, with the above-mentioned conventional apparatus, music is reproduced in the same way
whether the driver is only listening to music or is talking with a passenger or the like while
listening to music. Consequently, there is a problem that it is difficult to talk with a passenger
or the like while listening to music.
[0005]
The present invention has been made to address the above problems, and its object is to provide
a sound generation device suitable for generating, from headphones, sounds based on sound
source signals from a plurality of sound sources. To achieve this object, the sound generation
device according to the present invention includes: sound source position determining means for
determining the virtual sound source position of each actually generated sound source signal,
i.e., the position at which the sound image based on that sound source signal is to be localized,
based on a virtual sound source position pattern stored in advance in association with at least
one of the type of sound source signal and a change in the type of sound source signal, and on at
least one of the actual type of that sound source signal and an actual change in its type; sound
image localization means for outputting an output signal obtained by converting each sound
source signal so that the sound image based on it is localized at the determined virtual sound
source position; and headphones for generating the actual sound.
[0006]
According to this configuration, the virtual sound source position, i.e., the position at which to
localize the sound image based on the "signal for generating sound (sound source signal)"
generated from each of the plurality of sound sources, is determined for each sound source
signal based on at least one of the type of the sound source signal being generated and a change
in that type. At this time, the virtual sound source position for each actually generated sound
source signal is determined from the virtual sound source position pattern, stored in association
with at least one of the type of sound source signal and a change in that type, together with at
least one of the actual type of the sound source signal and an actual change in that type.
[0007]
Then, sound images based on sound source signals corresponding to the respective virtual sound
source positions are localized at the determined virtual sound source positions. Therefore, the
listener can easily recognize which sound source is generating the sound at the present time or
which sound is important at the present time. In other words, the listener can intuitively select
and listen to the sound required at the present time from the sounds based on the plurality of
sound source signals generated from the plurality of sound sources.
[0008]
Furthermore, since the sound image of each sound source signal is localized, the listener can be
made to recognize a specific sound direction and / or sense of distance. Therefore, for example,
when the listener is a driver of a motorcycle or a four-wheeled vehicle, the driver can recognize
the direction and / or distance of the approach of an emergency vehicle, an obstacle or the like
by sound. In addition, according to the above configuration, since the virtual sound source
position is determined based on the virtual sound source position pattern, it is possible to obtain
a virtual sound source position suitable for a situation assumed in advance with a simple
configuration.
[0009]
In this case, the sound source position determining means may be configured to determine the
virtual sound source position based also on incidental information attached to the sound source
signal.
[0010]
The accompanying information is, for example, information on the volume of the sound to be
generated based on the sound source signal, the direction of that sound with reference to at
least one of the plurality of sound sources, or the importance of that sound.
[0011]
Furthermore, the sound source position determining means may be configured to determine the
virtual sound source position based on an orientation of a headphone with respect to the one
sound source.
[0012]
This enables the sound image to be properly localized and the listener's attention to be
appropriately directed regardless of the direction the listener is facing.
[0013]
Further, in the above-described sound generation device of the present invention, it is preferable
that the headphones be headphones provided in a helmet for a two-wheeled vehicle.
[0014]
Since the motorcycle helmet has high sound insulation, by applying the sound generation device
according to the present invention, it is possible to appropriately give information by sound to
the driver who is the listener.
[0015]
In the sound generation device of the present invention, preferably, the plurality of sound
sources include a mobile phone and a music player.
[0016]
According to the present invention, for example, when an incoming call arrives at the mobile
phone while music from the music player is being played through the headphones, the listener
can be reliably notified of the incoming call.
Alternatively, when the mobile phone receives an incoming call while music from the music
player is being reproduced through the headphones, the reproduction of the music can
thereafter be continued in a state suitable for the call on the mobile phone.
[0017]
(Arrangement of Apparatus) Hereinafter, an embodiment of a sound generation apparatus
according to the present invention will be described with reference to the drawings.
In the present specification, claims and drawings, "position of a sound source (virtual position of
a sound source)" is treated as synonymous with "position of the sound image based on the sound
source signal, which is the signal for generating the sound output from that sound source
(virtual position of the sound image)".
[0018]
As shown in FIG. 1, this sound generation device is applied to a motorcycle BY and comprises the
driver's headphone 11, the driver's microphone 12 and the driver-helmet-direction detecting
geomagnetic sensor 13, which are mounted on the full-face driver's helmet 10, together with the
driver unit 14, the vehicle-direction detecting geomagnetic sensor 15, the mobile phone 16, the
music player 17, the navigation device 18, the vehicle state detection device 19, the traveling
state detection device 20, the other sound source 21, and the in-vehicle unit 30.
[0019]
Further, when the passenger rides on the motorcycle BY, the sound generation device includes
the passenger's headphone 41 and the passenger's microphone 42 and the passenger's unit 43,
which are attached to the passenger's helmet 40.
[0020]
The driver's headphone 11 is composed of a right speaker 11R and a left speaker 11L (see FIG.
2).
The right speaker 11R is disposed on the inner right side of the driver helmet 10 so as to
be positioned in the vicinity of the right ear of the driver wearing the driver helmet 10.
The left speaker 11L is disposed on the inner left side of the driver's helmet 10 so as to be
positioned near the left ear of the driver wearing the driver's helmet 10.
[0021]
The driver microphone 12 is disposed at the lower inside of the driver helmet 10 so as to be
positioned in the vicinity of the mouth of the driver wearing the driver helmet 10.
The driver helmet direction detecting geomagnetic sensor 13 is formed of a geomagnetic sensor,
and is disposed on the outer upper portion of the driver helmet 10.
The driver helmet direction detecting geomagnetic sensor 13 detects the direction of the driver
helmet 10 (the direction in front of the longitudinal axis of the driver helmet 10).
[0022]
The driver unit 14 contains an electrical circuit that achieves a predetermined function.
As shown in FIG. 1, the driver unit 14 is connected to the driver headphones 11, the driver
microphones 12 and the driver helmet direction detecting geomagnetic sensor 13 by cords.
The driver unit 14 drives the driver's headphone 11 to generate a sound from the driver's
headphone 11.
The driver unit 14 is configured to input an audio signal from the driver microphone 12 and to
input the direction of the driver helmet 10 detected by the driver helmet direction detection
geomagnetic sensor 13. Furthermore, the driver unit 14 exchanges necessary information with
the on-vehicle unit 30 by wireless communication.
[0023]
A vehicle direction detection geomagnetic sensor 15 is fixed to the vehicle body of the
motorcycle BY. The vehicle direction detecting geomagnetic sensor 15 is composed of a
geomagnetic sensor, and is adapted to detect an orientation in the front-rear direction of the
vehicle (an orientation in front of an axis in the front-rear direction of the vehicle). The mobile
phone 16 is not dedicated to this sound generation device, but is a commonly used mobile phone.
The mobile phone 16 is detachably held by a holding unit (not shown) provided on a handle of
the motorcycle BY.
[0024]
A music player 17 (for example, an MP3 player or the like) is fixed to the steering wheel of the
motorcycle BY. The music player 17 internally stores music data, and outputs a signal for
reproducing music based on the music data.
[0025]
The navigation device 18 is fixed to the steering wheel of the motorcycle BY. The navigation
device 18 incorporates a GPS device. The GPS device receives navigation radio waves for
detecting the vehicle position emitted from a GPS (Global Positioning System) satellite and
acquires the current position of the motorcycle BY. The navigation device 18 calculates a route
from the current position to a destination set by a predetermined operation by calculation, and
outputs a signal for performing route guidance by voice.
[0026]
The vehicle state detection device 19 is fixed to a front portion near the steering wheel of the
motorcycle BY. The vehicle state detection device 19 is connected to various sensors, switches,
etc. (not shown) for detecting the state of the motorcycle BY. The vehicle state detection device
19 is configured to detect a vehicle state including an abnormal state of each part of the
motorcycle BY based on a signal from a connected sensor. The vehicle state detection device 19
is adapted to output a signal for notifying the driver of the detected vehicle state by voice or
warning sound as necessary.
[0027]
The vehicle state detection device 19 is adapted to achieve, for example, the following functions.
(1) Based on the signal from the remaining-fuel detection sensor, the remaining fuel amount is
detected and reported as part of the vehicle state. (2) The device is connected to the direction
indicator control switch; it detects whether a direction indicator is blinking and, if so, which of
the left and right indicators is blinking, and reports this. (3) The device is connected to a brake
hydraulic pressure sensor; by detecting whether the brake hydraulic pressure has dropped
abnormally, it detects and reports an abnormality of the brake system.
[0028]
The traveling state detection device 20 is fixed to a front portion near the steering wheel of the
motorcycle BY. The traveling state detection device 20 is connected to various sensors, switches,
etc. (not shown) for detecting the state of the motorcycle BY. The traveling state detection
device 20 detects the current traveling state of the motorcycle BY (for example, the engine
rotation speed, the vehicle speed, the lateral acceleration of the motorcycle BY, and the
inclination angle of the motorcycle) based on the signals from the connected sensors.
Furthermore, the traveling state detection device 20 communicates with devices outside the
vehicle (for example, communication devices disposed on the road surface or in a roadside zone)
to acquire information about the traveling state (such as the road surface condition). The
traveling state detection device 20 is configured to output a signal for notifying the driver of
the detected and acquired traveling state by voice as necessary.
[0029]
The other sound source 21 is fixed to a front portion near the steering wheel of the motorcycle
BY. The other sound source 21 is, for example, a vehicle periphery alarm device (see, for
example, JP-A-2001-1851) or a vehicle surrounding-sound collecting device. The other sound
source 21 outputs a signal for notifying the driver of predetermined information (such as an
alarm sound or an ambient sound) by sound.
[0030]
The in-vehicle unit 30 is fixed to the steering wheel of the motorcycle BY. The in-vehicle unit 30
is connected by cords to the vehicle-direction detecting geomagnetic sensor 15, the mobile
phone 16, the music player 17, the navigation device 18, the vehicle state detection device 19,
the traveling state detection device 20 and the other sound source 21, and receives the signals
generated by them. The in-vehicle unit 30 further exchanges necessary information with the
passenger unit 43 by wireless communication. Details of the in-vehicle unit 30 will be described
later.
[0031]
The passenger unit 43 accommodates an electric circuit that achieves a predetermined function.
The passenger unit 43 is connected to the passenger headphone 41 and the passenger
microphone 42 by a cord. The passenger unit 43 drives the passenger's headphone 41 to
generate sound from the passenger's headphone 41. The passenger unit 43 is configured to
receive an audio signal from the passenger microphone 42.
[0032]
As shown in FIG. 2, the on-vehicle unit 30 includes a conversation wireless communication device
31, a communication device 32, a sound source position determination device 33, and a sound
source localization device (sound image localization means) 34.
[0033]
The conversation radio communication device 31 exchanges signals with the passenger unit 43
by radio communication.
The conversation radio communication device 31 is connected to the communication device 32,
the sound source position determination device 33 and the sound source localization device 34.
The communication device 32 exchanges signals with the driver unit 14 by wireless
communication.
[0034]
More specifically, the driver unit 14 transmits to the communication device 32 the driver's voice
signal input from the driver microphone 12. The communication device 32 transmits a driver's
voice signal to the conversation radio communication device 31. The conversation radio
communication device 31 transmits the driver's voice signal received from the communication
device 32 to the passenger unit 43 by wireless communication. The passenger unit 43 amplifies
the driver's voice signal received from the conversation wireless communication device 31, and
outputs the amplified voice signal to the passenger's headphone 41 shown in FIG. With the above
configuration, the driver's voice is output from the passenger's headphone 41.
[0035]
On the other hand, the passenger unit 43 is configured to transmit the voice signal of the
passenger, which is input from the passenger microphone 42 shown in FIG. 1, to the
conversation radio communication device 31. The conversation radio communication device 31
transmits the voice signal of the passenger received from the passenger unit 43 to the sound
source position determination device 33 and the sound source localization device 34. As will be
described in detail later, the sound source position determination device 33 and the sound
source localization device 34 are configured to transmit a signal including this passenger's
voice signal, received from the conversation radio communication device 31, to the driver unit
14 via the communication device 32. The driver unit 14 amplifies the received signal and
outputs the amplified signal to the speakers 11R and 11L of the driver's headphone 11. With the
above configuration, the voice of the passenger is output from the driver's headphone 11.
[0036]
The sound source position determination device 33 is connected to the conversation wireless
communication device 31, the vehicle-direction detecting geomagnetic sensor 15, the mobile
phone 16, the music player 17, the navigation device 18, the vehicle state detection device 19,
the traveling state detection device 20, the other sound source 21, the communication device 32
and the sound source localization device 34. The sound source position determination device 33
includes a sound source position pattern database 33a.
[0037]
The sound source position determination device 33 receives the sound source signals and the
signals representing accompanying information output from the mobile phone 16, the music
player 17, the navigation device 18, the vehicle state detection device 19, the traveling state
detection device 20, the other sound source 21 and the conversation wireless communication
device 31. A sound source signal is a signal generated from a certain sound source for
generating the sound of that sound source. The sound source position determination device 33
determines the virtual sound source positions (the sound image positions of the sound sources),
i.e., the positions at which to localize the sound images based on the received sound source
signals, by referring to the sound source position pattern database 33a on the basis of the
received sound source signals and the signals representing the accompanying information, and
outputs information representing the virtual sound source positions (sound source position
information) to the sound source localization device 34.
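The pattern lookup performed by the sound source position determination device 33 can be sketched as a table keyed by the type of sound source signal and the set of other active source types. The table entries and type names below are illustrative assumptions for the sketch, not the contents of the patent's actual database 33a.

```python
# Hypothetical sketch of the pattern database 33a. Keys pair a source type
# with the set of other source types currently generating sound; values are
# the basic directions #1..#8 of FIG. 4. All entries are invented examples.
SOURCE_POSITION_PATTERNS = {
    ("music", frozenset()): 1,           # music alone: localized at #1
    ("music", frozenset({"phone"})): 3,  # music during a call: moved to #3
    ("phone", frozenset()): 2,           # incoming call: localized at #2
    ("navigation", frozenset()): 6,      # route guidance: localized at #6
}

def determine_virtual_position(source_type, other_active_types):
    """Return the basic direction (#1..#8) for a sound source signal,
    given which other source types are currently generating sound."""
    key = (source_type, frozenset(other_active_types))
    if key in SOURCE_POSITION_PATTERNS:
        return SOURCE_POSITION_PATTERNS[key]
    # Fall back to the pattern stored for the type alone.
    return SOURCE_POSITION_PATTERNS.get((source_type, frozenset()), 1)
```

Because the key includes the set of other active sources, the same source type (e.g. music) moves to a different virtual position when a new source (e.g. a phone call) begins, which is how the listener is made aware of the change.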
[0038]
The virtual sound source position is a virtual position of a sound source that can be made to be
heard by the driver as a listener as if a certain sound source (sound image based on a sound
source signal from the sound source) exists at the virtual sound source position. It is. Therefore,
the wireless communication apparatus for conversation (also referred to as a passenger). 31), the
mobile phone 16, the music player 17, the navigation device 18, the vehicle state detection
device 19, the traveling state detection device 20, and the other sound sources 21 constitute a
plurality of sound sources.
[0039]
Further, the sound source position determination device 33 determines (corrects) the sound
source position information in accordance with the output signal of the vehicle-direction
detecting geomagnetic sensor 15 and the output signal of the helmet-direction detecting
geomagnetic sensor 13 received via the communication device 32.
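One plausible form of this correction is to subtract the head yaw (helmet heading minus vehicle heading, both from the geomagnetic sensors) from the virtual source azimuth, so the sound image stays fixed in space when the driver turns their head. The function below is a minimal sketch under that assumption; the patent does not give the exact formula.

```python
def corrected_azimuth(source_azimuth_deg, helmet_heading_deg, vehicle_heading_deg):
    """Rotate a virtual source azimuth, given relative to the vehicle's
    front axis, into the listener's head frame. Headings come from the
    helmet sensor 13 and the vehicle sensor 15 (degrees, 0..360)."""
    head_yaw = (helmet_heading_deg - vehicle_heading_deg) % 360.0
    return (source_azimuth_deg - head_yaw) % 360.0
```

For example, a source 90 degrees to the vehicle's right appears only 60 degrees to the right of a driver whose head is already turned 30 degrees that way.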
[0040]
The sound source localization device 34 is connected to the mobile phone 16, the music player
17, the navigation device 18, the vehicle state detection device 19, the traveling state detection
device 20, the other sound source 21, the conversation radio communication device 31, the
communication device 32, and the sound source position determination device 33.
[0041]
The sound source localization device 34 receives the sound source signals output by the mobile
phone 16, the music player 17, the navigation device 18, the vehicle state detection device 19,
the traveling state detection device 20, the other sound source 21 and the conversation wireless
communication device 31 (that is, by the plurality of sound sources), receives the sound source
position information from the sound source position determination device 33, and outputs a
sound localization signal (an output signal for localizing the sound image based on each sound
source signal at the position indicated by the sound source position information).
[0042]
That is, the sound source localization device 34 constitutes sound image localization means that
outputs output signals obtained by converting each sound source signal so that the sound image
based on it is localized at the corresponding determined virtual sound source position.
[0043]
Furthermore, the sound source localization device 34 is configured to transmit the sound
localization signal to the driver unit 14 via the communication device 32.
The driver unit 14 amplifies the received voice localization signal and outputs the amplified
signal to the speakers 11R and 11L of the driver headphone 11.
As a result, sounds localized for each sound source signal are generated from the speakers 11R
and 11L.
[0044]
Here, the configuration of the sound source localization device 34 will be described with
reference to FIG.
The sound source localization device 34 includes an A/D converter 34a, a gain multiplier 34b for
the preceding sound, a multiplexer 34c, a delay line 34d, a band pass filter 34e, a gain
multiplier 34f for the following sound, an FIR filter 34g, an adder 34h, a D/A converter (DAC)
34i and the like.
The sound source localization apparatus 34 and the sound source localization method (a sound
image localization method for localizing sound images of a plurality of sounds in three
dimensions using headphones) are known, for example, from JP-A-2002-44796 and JP-A-10-23600.
[0045]
The A/D converter 34a comprises a plurality of A/D converters (ADCs), one connected to each of
the plurality of sound sources (the mobile phone 16, the music player 17, the navigation device
18, the vehicle state detection device 19, the traveling state detection device 20, the other
sound source 21 and the conversation wireless communication device 31). Each ADC converts
the analog signal from its sound source into a digital signal and outputs the digital signal.
[0046]
The gain multiplier 34b is composed of the same number of gain multiplier sets as there are
sound sources. Each gain multiplier set includes a total of eight amplifiers corresponding to the
first channel Ch1 to the eighth channel Ch8. The eight amplifiers of each gain multiplier set are
connected to the corresponding ADC. The gain multiplier 34b is also connected to the sound
source position determination device 33 and is configured to change the gain of each amplifier
of each gain multiplier set on the basis of the sound source position information output from the
sound source position determination device 33.
[0047]
The input end of each channel of the multiplexer 34c is connected to the corresponding
amplifier of the gain multiplier 34b. The output end of each channel of the multiplexer 34c is
connected to the corresponding FIR filter 34g of the first channel Ch1 to the eighth channel Ch8
(in fact, to the FIR filter 34gL for the left ear and the FIR filter 34gR for the right ear). A signal
input directly from the output end of the multiplexer 34c to the input end of the FIR filter 34g
is called the "preceding tone". The multiplexer 34c is also connected to the sound source
position determination device 33. Based on the sound source position information output from
the sound source position determination device 33, the multiplexer 34c selects one of the
plurality of gain multiplier sets of the gain multiplier 34b and outputs the signal from each
amplifier of the selected set at the output end of the corresponding channel.
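The weighting done by one gain multiplier set can be sketched as distributing a mono source sample across the eight channels Ch1..Ch8, with per-channel gains set from the sound source position information. This is a minimal illustrative sketch, not the device's actual gain law.

```python
def apply_gain_set(sample, gains):
    """Distribute one mono sample of a source signal across the eight
    channels Ch1..Ch8, weighting each by that channel's gain. The gains
    would be set from the sound source position information of device 33;
    the values used here are arbitrary examples."""
    assert len(gains) == 8, "one gain per channel Ch1..Ch8"
    return [sample * g for g in gains]
```

A source localized exactly at basic direction #1 would use a gain vector concentrated on channel Ch1; intermediate positions spread energy over neighboring channels.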
[0048]
The delay line 34d is provided with a multistage shift register, in which a plurality of shift
registers are connected in series, for each of the first channel Ch1 to the eighth channel Ch8.
The delay line 34d provides a tap between a predetermined register of the multistage shift
register of each channel and the register following it. The input end of each channel of the
delay line 34d is connected to the output end of the corresponding channel of the multiplexer
34c. The signal that is input from the output end of the multiplexer 34c to the input end of the
delay line 34d, delayed by a predetermined time, and taken out from the tap is referred to as
the "following tone". The delay line 34d is also connected to the sound source position
determination device 33 and is configured to change the position of the tap of each channel
based on the sound source position information output from the sound source position
determination device 33.
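The multistage shift register with a movable tap can be sketched as a ring buffer whose tap index sets the delay of the following tone relative to the preceding tone. The buffer size and tap value below are illustrative assumptions.

```python
from collections import deque

class DelayLine:
    """Sketch of one channel of delay line 34d: a shift register with a
    movable tap. A sample pushed in reappears at the tap output
    `tap_delay` pushes later, producing the 'following tone'."""
    def __init__(self, max_delay, tap_delay):
        self.buffer = deque([0.0] * max_delay, maxlen=max_delay)
        self.tap_delay = tap_delay  # moved per sound source position info

    def push(self, sample):
        self.buffer.appendleft(sample)       # shift the register by one
        return self.buffer[self.tap_delay]   # delayed (following) sample
```

Moving the tap changes the time difference between preceding and following tone, which, as described below, controls how strongly the sense of direction is emphasized.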
[0049]
The tap of each channel of the delay line 34d is connected to the input end of each channel of
the band pass filter 34e. In FIG. 3, the lines to which “8” is attached are eight lines.
[0050]
The gain multiplier 34f for the following sound includes a total of eight amplifiers
corresponding to the first channel Ch1 to the eighth channel Ch8. The input of the amplifier of
each channel is connected to the output of the corresponding channel of the band pass filter
34e. As shown in FIG. 3 and Table 1, the output of the amplifier of each channel of the gain
multiplier 34f is connected to one of the adders provided between the input of the FIR filter 34g
of each channel and the output of each channel of the multiplexer 34c. As a result, the
preceding sound and the following sound (the following sound after passing through the band
pass filter 34e and the gain multiplier 34f for the following sound) are added by each adder and
then input to the FIR filter 34g.
[0051]
The FIR filter 34gL of the first channel Ch1 simulates, by a head-related transfer function
(HRTF), the frequency characteristic of a sound heard by the listener's left ear from the
direction of #1 in FIG. 4. That is, the FIR filter 34gL of the first channel Ch1 is a filter that
performs a convolution operation on the input signal according to the head-related transfer
function, which is the transfer characteristic of a sound heard by the listener's left ear from the
direction of #1 in FIG. 4.
[0052]
In FIG. 4, the head of the listener (driver) is at the center of the cube defined by the positions
#1 to #8, and the listener faces the surface formed by the positions #1, #2, #6 and #5 (i.e., faces
the direction normal to that surface). The surface formed by the positions #1 to #4 is below the
listener, and the surface formed by the positions #5 to #8 is above the listener. The orientation
of the cube changes with the orientation of the listener's head (the forward orientation of the
driver's helmet 10 along its front-rear axis).
[0053]
The FIR filter 34gR of the first channel Ch1 likewise simulates, by the head-related transfer
function, the frequency characteristic of a sound heard by the listener's right ear from the
direction of #1 in FIG. 4. That is, the FIR filter 34gR of the first channel Ch1 is a filter that
performs a convolution operation on the input signal according to the head-related transfer
function of a sound heard by the listener's right ear from the direction of #1 in FIG. 4.
[0054]
Similarly, the FIR filters 34gL and 34gR of the second channel Ch2 to the eighth channel Ch8
simulate, by head-related transfer functions, the transfer of sound from the directions of #2 to
#8 in FIG. 4 to the listener's left and right ears, respectively. The FIR filters 34gL and 34gR of
each channel are also connected to the sound source position determination device 33, and their
filter coefficients are changed based on the sound source position information output from the
sound source position determination device 33.
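The convolution each FIR filter 34gL/34gR performs can be sketched directly. The coefficients below are arbitrary placeholders, not measured HRTF data.

```python
def fir_filter(signal, coeffs):
    """Convolve an input signal with FIR coefficients, as each filter
    34gL/34gR does with HRTF-derived coefficients for one basic
    direction and one ear. Coefficients here are illustrative only."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]   # y[n] = sum_k h[k] * x[n-k]
        out.append(acc)
    return out
```

Swapping in a different coefficient set, as device 33 commands, moves the simulated arrival direction of the sound.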
[0055]
The FIR filters 34gL and 34gR of each channel are connected to the input ends of the adder 34h
(in fact, the left ear adder 34hL and the right ear adder 34hR). Thus, the signals output from the
FIR filters 34gL and 34gR of each channel are added and synthesized by the adder 34h.
[0056]
The output ends of the adder 34h (the left ear adder 34hL and the right ear adder 34hR) are
independently connected to the input ends of the DAC 34i. Thereby, the signal output from each
adder 34h is converted into an analog signal. The output end of the DAC 34i is connected to the
communication device 32. Thus, the left-ear and right-ear signals (sound localization signals)
converted into analog signals are transmitted to the driver unit 14 via the communication device
32.
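The summation stage can be sketched as summing the eight per-channel FIR outputs into one signal per ear, which is what adders 34hL and 34hR do before digital-to-analog conversion. The sample values are arbitrary examples.

```python
def mix_ears(left_channels, right_channels):
    """Sum the eight per-channel FIR outputs into one signal per ear,
    as the left ear adder 34hL and right ear adder 34hR do. Each
    argument is a list of equal-length per-channel sample lists."""
    n = len(left_channels[0])
    left = [sum(ch[i] for ch in left_channels) for i in range(n)]
    right = [sum(ch[i] for ch in right_channels) for i in range(n)]
    return left, right
```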
[0057]
In general, when a listener hears two identical sounds, such as the preceding tone and the
following tone, with a time difference between them, the listener perceives the sound source as
being in the direction of the sound that arrived earlier and/or of the sound at the higher level.
This well-known phenomenon is called the precedence effect (Haas effect). Therefore, if the
following tone is localized at the position point-symmetric, with respect to the listener, to the
virtual sound source localized by the preceding tone, the precedence effect prevents the listener
from misperceiving the localization position given by the preceding tone, and the sense of
direction can be further emphasized.
[0058]
For this reason, in the sound source localization device 34, the output end of each channel amplifier of the following-sound gain multiplier 34f is connected, via an adder, to the opposite channel of the preceding-sound path connecting the multiplexer 34c and the FIR filter 34g, that is, to the channel located point-symmetrically with respect to the listener (see Table 1).
[0059]
However, when the time difference and the level difference between the preceding sound and the following sound are extremely small, the sound image is localized inside the head and the sense of direction is lost. Conversely, when the time difference becomes excessive, the listener perceives the two sounds as separate. Further, when the level difference becomes excessive, the effect of the following sound (the effect of emphasizing the sense of direction of the preceding sound) is lost.
[0060]
Based on this viewpoint, the delay line 34d and the gain multiplier 34f set the time difference (tap position) and the level difference (following-sound gain), respectively, to appropriate values according to the sound source position information.
[0061]
Here, the case where the virtual sound source position lies in a direction between the basic directions #1 to #8 described above will be described. Suppose the virtual sound source position is to be set to an arbitrary point Ptgt in the space shown in (A) to (C) of FIG. 5. In this case, as viewed from the midpoint D between the listener's left and right ears, the virtual sound source Ptgt lies within the square formed by #1, #2, #5 and #6. (B) and (C) of FIG. 5 are a top view and a side view, respectively, of the cube of FIG. 5(A).
[0062]
Let Cr be the intersection of the straight line L passing through points D and Ptgt with the plane formed by positions #1, #2, #5 and #6. In the horizontal direction (the direction along #1-#2 or #5-#6), the intersection point Cr is located at a position P1 that internally divides one side of the cube, side #1-#2 (see FIG. 4 and FIG. 5(A)), at a ratio a:b. In the vertical direction (the direction along #1-#5 or #2-#6), Cr is located at a position P2 that internally divides one side of the cube, side #1-#5 (see FIG. 4 and FIG. 5(A)), at a ratio c:d. Further, the distance L1 from point D to the virtual sound source Ptgt is assumed to be x times the distance L2 from point D to the intersection point Cr (sound source distance ratio x = L1/L2).
[0063]
At this time, the sound source position determination device 33 sets the gain of each amplifier of the preceding-sound gain multiplier 34b for the first channel Ch1, second channel Ch2, fifth channel Ch5 and sixth channel Ch6 to a gain obtained from the inter-channel distance ratios a:b and c:d and the sound source distance ratio x. Specifically, the gain of each channel may be determined so as to be inversely proportional to the inter-channel distance ratios a:b and c:d and inversely proportional to the square of the sound source distance ratio x. That is, the preceding-sound gain kChn of the n-th channel (n = 1, 2, 5, 6) is determined by the following equations (1) to (4). Further, the gain kChm of the m-th channel (m = 3, 4, 7, 8) is set to "0".
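Equations (1) to (4) are not reproduced in this text, but the stated proportionality (gain inversely proportional to each channel's internal-division ratio and to the square of the sound source distance ratio x) can be sketched as a bilinear weighting. The exact normalization below is therefore an assumption, not the patented formula:

```python
def preceding_gains(a, b, c, d, x):
    """Preceding-sound gains for the four channels spanning the face that
    contains the intersection point Cr (Ch1, Ch2, Ch5, Ch6 in the example
    of FIG. 5).  Bilinear weights: a channel is louder the closer Cr is
    to it along each edge, and all gains fall off as 1/x**2.  Assumed
    reading of equations (1)-(4), which are not reproduced in the text."""
    inv_x2 = 1.0 / (x * x)
    wh1, wh2 = b / (a + b), a / (a + b)   # horizontal weights (#1 side, #2 side)
    wv1, wv5 = d / (c + d), c / (c + d)   # vertical weights (#1 side, #5 side)
    return {
        1: wh1 * wv1 * inv_x2,
        2: wh2 * wv1 * inv_x2,
        5: wh1 * wv5 * inv_x2,
        6: wh2 * wv5 * inv_x2,
    }
```

When Cr is at the center of the face (a = b, c = d) and x = 1, each of the four channels receives an equal gain of 0.25; doubling x quarters every gain.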
[0064]
As described above, the virtual sound source Ptgt' of the following sound is localized at the position point-symmetric, with respect to point D, to the virtual sound source Ptgt localized by the preceding sound. The following sound is produced with a fixed attenuation (gain) and delay (time difference) relative to the preceding sound. The delay is given by the delay line 34d. The following-sound gain kCh'm of the m-th channel (m = 3, 4, 7, 8) is determined from the preceding-sound gains according to the following equations (5) to (8). The coefficient p determines the attenuation relative to the preceding sound and is given by the following-sound gain multiplier 34f. The attenuation and the delay do not depend on the sound source position, and are set based on the listener's hearing characteristics and preferences. The following-sound gain kCh'n of the n-th channel (n = 1, 2, 5, 6) is "0".
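The following-sound gains of equations (5) to (8), which scale the point-symmetric channels by the attenuation coefficient p, can be sketched as below. Table 1 and the equations themselves are not reproduced in this text, so the specific opposite-channel pairing used here (each corner of the cube mapped to its diagonal opposite) is an assumption for illustration:

```python
# Assumed point-symmetric channel pairs across the listener (Table 1 is
# not reproduced in the text): each preceding channel feeds the channel
# at the diagonally opposite corner of the #1-#8 cube.
OPPOSITE = {1: 7, 2: 8, 5: 3, 6: 4}

def trailing_gains(preceding, p):
    """Following-sound gains: the opposite channel of each preceding
    channel receives that channel's gain scaled by attenuation p."""
    return {OPPOSITE[n]: p * k for n, k in preceding.items()}
```

For example, with preceding gains on channels 1, 2, 5, 6 and p = 0.5, the following sound appears on channels 7, 8, 3, 4 at half the corresponding levels.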
[0065]
When the direction of the driver's helmet 10 (the direction of the driver's headphone 11) and/or the point Ptgt serving as the virtual sound source position changes, the angle between the point Ptgt and the driver's headphone 11 also changes. Also in this case, the straight line L passing through the point D and the point Ptgt intersects one of the faces of the cube formed by positions #1 to #8, and the intersection point Cr is obtained on that face.
[0066]
Therefore, the sound source position determination device 33 and the sound source localization device 34 obtain the inter-channel distance ratios a:b and c:d and the sound source distance ratio x from the position of the intersection point Cr and the four positions, among positions #1 to #8, that define that plane, and set the gains of the amplifiers of the preceding-sound gain multiplier 34b and of the following-sound gain multiplier 34f of each channel to the gains thus determined. As a result, the virtual sound source can always be localized at the point Ptgt regardless of the direction of the driver's headphone 11.
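The head-relative correction implied here (and made explicit in paragraph [0076]) amounts to subtracting the helmet's yaw from the target azimuth expressed in the vehicle frame. A minimal sketch, with angles in degrees and the wrap-to-(-180, 180] convention assumed for illustration:

```python
def head_relative_azimuth(target_az_vehicle_deg, helmet_yaw_deg):
    """Azimuth of the virtual source relative to the headphone, given the
    target azimuth and the helmet yaw both measured in the vehicle
    (geomagnetic) frame.  Result is wrapped into (-180, 180]."""
    return (target_az_vehicle_deg - helmet_yaw_deg + 180.0) % 360.0 - 180.0
```

This reproduces the example of FIG. 7: a warning at +45 degrees (right front of the vehicle) heard by a driver turned +90 degrees to the right comes out at -45 degrees, i.e. the left front of the helmet.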
[0067]
The sound source position determination device 33 sends an instruction signal to the multiplexer 34c to select the sound source to be localized, and outputs the virtual sound source position for the selected sound source to the preceding-sound gain multiplier 34b, the delay line 34d, the following-sound gain multiplier 34f and the FIR filter 34g. This selection is performed for all the sound source signals, whereby a sound image based on every sound source signal is localized.
[0068]
(Overview of Operation) Next, an overview of the operation of the sound generation device configured as described above will be described. The sound generation device changes the position (virtual sound source position) of the sound image based on each sound source signal according to the type of the sound source signal output from each of the above-described sound sources and/or the change in the type of the sound source signal. First, in order to facilitate understanding, an example of how the virtual sound source position is changed will be described.
[0069]
For example, as shown in FIG. 6(A), suppose the driver reproduces music based only on the sound source signal generated by the music player 17 and listens to that music. In this case, the sound source position determination device 33 determines the virtual sound source position for the sound image of the sound source signal from the music player 17 so that the reproduced music spreads around the driver's head (the driver's helmet 10) and can be heard with a natural feeling, as when listening to music normally. The sound source localization device 34 then localizes the sound image based on the sound source signal from the music player 17 at that virtual sound source position. As a result, the driver can listen to the music as if enjoying it at an appropriate position within the area enclosed by four speakers (a front left speaker, a front right speaker, a rear left speaker and a rear right speaker).
[0070]
In this state, suppose the passenger starts talking (speaks) using the passenger microphone 42 and the passenger unit 43; that is, a tandem conversation is started by the passenger's remark. Generally, when listening to music, the driver concentrates attention on the music, so it is highly likely that he will miss an unexpected remark from the passenger. Therefore, as shown in FIG. 6(B), the sound source position determination device 33 determines the position of the sound image (virtual sound source position) based on the sound source signal from the conversation radio communication device 31 (the passenger as a sound source) so that the passenger's voice sounds as if it were emitted immediately beside the driver. At the same time, the sound source position determination device 33 determines the position of the sound image (virtual sound source position) based on the sound source signal from the music player 17 so that the music based on that signal can be heard from below the driver's helmet 10. As a result, the music suddenly sounds from below and the passenger's remark is heard at the driver's ear, which reduces the possibility that the driver will miss the beginning of the passenger's conversation.
[0071]
Thereafter, while the conversation with the passenger continues, the sound source signals from the music player 17 and the conversation radio communication device 31 (the passenger) also continue; that is, the types of sound sources do not change. However, the combination of the types of sound source signals differs from the case where only music is reproduced as shown in FIG. 6(A). Therefore, as shown in FIG. 6(C), the sound source position determination device 33 determines the position of the sound image (virtual sound source position) based on the sound source signal from the music player 17 so that the music can be heard from below the driver's helmet 10, and determines the position of the sound image (virtual sound source position) based on the sound source signal from the conversation radio communication device 31 so that the voice of the passenger can be heard from above the driver's helmet 10 instead of immediately beside the driver.
[0072]
Suppose that, while this state continues, the vehicle periphery sound collection device of the other sound source 21 detects that an emergency vehicle (for example, an ambulance or a fire engine) is approaching from the right front and generates a sound source signal for issuing a warning to that effect. In this case, the sound source position determination device 33 determines the position of the sound image (virtual sound source position) based on each sound source signal so that the music based on the sound source signal from the music player 17 and the voice of the passenger based on the sound source signal from the conversation radio communication device 31 remain audible to the driver, and so that the alarm (warning sound) based on the sound source signal from the other sound source 21 can be heard from the right front of the driver's helmet 10. Thus, the driver can reliably recognize the approach of the emergency vehicle and its direction while continuing the conversation with the passenger.
[0073]
As described above, the sound source position determination device 33 determines each virtual sound source position according to at least one of the type of the sound source signal (including its accompanying information) and the change in the type of the sound source signal. In practice, the sound source position pattern database 33a stores in advance virtual sound source position patterns corresponding to at least one of the type of the sound source signal (and its accompanying information) and changes in the type of the sound source signal. The sound source position determination device 33 determines the pattern of virtual sound source positions to be realized based on the types of the sound source signals actually input from the sound sources, the changes in those types, and the relationships stored in the sound source position pattern database 33a.
[0074]
Now, as shown in FIG. 7(A), suppose the vehicle periphery sound collection device of the other sound source 21 detects that an emergency vehicle is approaching from the right front of the motorcycle BY and generates a sound source signal for issuing a warning to that effect. At this time, suppose the driver (the driver's helmet 10) is facing the front of the longitudinal axis of the vehicle body of the motorcycle (this direction is also simply referred to as the "motorcycle longitudinal direction" or the "vehicle direction"). In this case, the sound image of the warning sound based on the sound source signal from the other sound source 21 may be localized to the right front of the helmet 10 (headphone 11).
[0075]
However, as shown in FIG. 7(B), suppose the driver has turned clockwise (i.e., to the right) by 90 degrees in plan view with respect to the front of the longitudinal axis of the motorcycle. Then the sound image of the warning sound based on the sound source signal from the other sound source 21 should be localized to the left front of the helmet 10 (headphone 11).
[0076]
Therefore, based on the direction of the driver's helmet 10 detected by the driver's helmet direction detection geomagnetic sensor 13 and the direction of the motorcycle BY detected by the vehicle body direction detection geomagnetic sensor 15, the sound source position determination device 33 determines the angle of the driver's helmet 10 relative to the front of the longitudinal axis of the motorcycle BY (at least the direction of the driver's headphone 11 with respect to a sound source fixed to the vehicle body of the motorcycle BY), and determines the virtual sound source position of the sound source signal from each sound source with respect to the driver's helmet 10 (driver's headphone 11). Thereby, in the above example, the sound image of the warning sound based on the sound source signal from the other sound source 21 is localized to the left front of the helmet 10 (headphone 11), so the possibility that the driver erroneously recognizes the approach direction of the emergency vehicle can be reduced.
[0077]
(Actual Operation) Next, the actual operation of the above-described sound generation device will be described. The function of the sound source position determination device 33 described above is actually achieved by the CPU of a microcomputer executing the processing shown in the flowchart of FIG. 8 at predetermined intervals. Similarly, the function of the sound source localization device 34 is achieved by a DSP (digital signal processor) executing the processing shown in the flowchart of FIG. 9 at predetermined time intervals.
[0078]
Hereinafter, in order to facilitate understanding, the operation of the CPU will be described on the assumption that the situation changes as in the example shown in FIG. 6. That is, first, suppose the driver performs a predetermined operation to start reproduction of music by the music player 17 while no sound is being generated from the driver's headphone 11.
[0079]
At this time, when the CPU of the sound source position determination device starts the process from step 100 shown in FIG. 8, the CPU proceeds to step 105 and determines the angle (direction) of the driver's helmet 10 with respect to the longitudinal direction of the motorcycle BY based on the vehicle body direction detected by the vehicle body direction detection geomagnetic sensor 15 and the direction of the driver's helmet 10 detected by the driver's helmet direction detection geomagnetic sensor 13.
[0080]
Next, the CPU proceeds to step 110 and determines whether there is a change in the sound source status, that is, whether a new sound source signal from another sound source has been generated in addition to the sound source signals from the sound sources active up to that point.
[0081]
In this case, since the sound source signal of the music player 17 is newly generated, there is a change in the state of the sound sources. Therefore, the CPU determines "Yes" in step 110, proceeds to step 115, and reads the sound source position pattern corresponding to this condition (the condition in which the music player 17 starts playing music while no sound is being generated) from the sound source position pattern database 33a. In this example, the sound source position pattern is a pattern that allows the music reproduced by the music player 17 to be heard with a natural feeling, as shown in FIG. 6(A). That is, the CPU determines the virtual sound source position of each sound source signal by referring to the sound source position pattern database 33a based on the sound source signals actually being generated and the signals representing the accompanying information.
[0082]
Thereafter, the CPU proceeds to step 120 and corrects the virtual sound source position of each sound source signal determined in step 115 (in this case, only the sound source signal from the music player 17) based on the angle (orientation) of the driver's helmet 10 determined in step 105, thereby determining the final virtual sound source position of each sound source. Next, the CPU proceeds to step 125, outputs the virtual sound source position of each sound source signal determined in step 120 to the sound source localization device (DSP) 34 as sound source position information, and proceeds to step 195 to end this routine once.
[0083]
Thereafter, when a predetermined time has elapsed, the CPU resumes the processing of this routine from step 100. At this point, a sound source signal is generated only by the music player 17, so the condition of the sound sources has not changed since the previous execution of this routine. Therefore, the CPU determines "No" in step 110 following step 105, proceeds to step 130, and determines whether a predetermined time has elapsed since the state of the sound sources last changed.
[0084]
Since this point is immediately after the reproduction by the music player 17 has started, the predetermined time has not yet elapsed since the state of the sound sources last changed. Therefore, the CPU determines "No" in step 130, passes through the above-described steps 120 and 125, and ends this routine once at step 195. Thus, the CPU maintains the virtual sound source positions set when the situation of the sound sources changed until a predetermined time elapses after the situation of the sound sources (that is, the types of the sound source signals) last changed.
[0085]
Thereafter, as time passes, the CPU makes a "Yes" determination at step 130 following steps 100, 105 and 110, proceeds to step 135, and, similarly to step 115, reads the sound source position pattern matching the current situation from the sound source position pattern database 33a. The situation in this case is one in which only the music reproduction by the music player 17 has continued for the predetermined time or more. The sound source position pattern in this case, however, is still the pattern allowing the music to be heard with the natural feeling shown in FIG. 6(A), so the sound source position pattern does not change. After that, the CPU passes through the above-described steps 120 and 125 and ends this routine once at step 195.
[0086]
Next, suppose that in this situation a tandem conversation is started by the passenger speaking, whereby a new sound source (sound source signal) is added. In this case, the CPU determines "Yes" in step 110 following steps 100 and 105 and retrieves a new sound source position pattern in step 115. This pattern is the one shown in FIG. 6(B). After that, the CPU passes through the above-described steps 120 and 125 and ends this routine once at step 195.
[0087]
Thereafter, when the conversation with the passenger has continued for the predetermined time or more, the CPU determines "Yes" in step 130 and proceeds to step 135, reading the sound source position pattern matching the situation at that time from the sound source position pattern database. The situation at this point is one in which the music player 17 is reproducing music and the conversation with the passenger has continued for the predetermined time or more. Therefore, the sound source position pattern in this case is the pattern shown in FIG. 6(C).
[0088]
Next, suppose that while the reproduction of music by the music player 17 and the conversation with the passenger continue, the other sound source 21 generates a sound source signal including the information that an emergency vehicle is approaching from the right front. In this case, since the condition of the sound sources changes, the CPU determines "Yes" in step 110 and executes steps 115 to 125, determining the virtual sound source position of the sound source signal (warning sound) from the other sound source 21 so that the warning sound can be heard from the direction in which the emergency vehicle is approaching the driver's helmet 10 (see FIG. 6(D)). The CPU then ends this routine once at step 195. Thus, based on the type of each sound source signal, the change in the type of the sound source signal, and the angle (direction) of the driver's helmet 10, the CPU determines the position (virtual sound source position) at which the sound image based on the sound source signal from each sound source is to be localized, and outputs the determined virtual sound source positions to the sound source localization device 34 as sound source position information.
[0089]
Meanwhile, the DSP of the sound source localization device 34 starts the processing of the routine shown in FIG. 9 from step 200 every time a fixed time elapses, proceeds to step 210, and, based on the sound source position information output by the CPU, determines the gain of each channel amplifier of the gain multiplier 34b, the tap position of the delay line 34d, the gain of the gain multiplier 34f (the above-described coefficient p), and the coefficients of the FIR filter 34g for the virtual sound source position of each sound source signal.
[0090]
Next, the DSP proceeds to step 220 and, based on the sound source position information, outputs to the multiplexer 34c a signal designating one of the sound sources currently generating a sound source signal, so that the signal from the designated sound source, converted into a digital signal by the AD converter 34a, is input to the gain multiplier 34b. Next, the DSP proceeds to step 230 and outputs the tap position for the sound source designated in step 220 to the delay line 34d.
[0091]
Thereafter, the DSP proceeds to step 240 and outputs, for the sound source designated in step 220, the gain of each channel amplifier of the gain multiplier 34b, the gain of each channel amplifier of the gain multiplier 34f, and the coefficients of the FIR filter 34g. Thus, the sound image of the sound source signal designated in step 220 is localized at the virtual sound source position determined for that sound source signal.
[0092]
Next, the DSP proceeds to step 250 and determines whether the above-described steps 220 to 240 have been completed for all the sound sources currently generating sound source signals (whether one cycle is completed). If not completed, the process returns to step 220. If one cycle is completed, the DSP proceeds to step 295 and ends this routine once. In this way, all sound images based on the sound source signals currently being generated are localized at the virtual sound source positions specified by the sound source position information.
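The per-source cycle of steps 220 to 250 can be sketched as a simple loop. This is an illustrative Python sketch: `apply_params` is a hypothetical callback standing in for handing the gains, tap position and FIR coefficients to the hardware signal path of FIG. 3.

```python
def localize_all(active_sources, position_info, apply_params):
    """One cycle of the DSP routine of FIG. 9 (steps 220-250): for each
    sound source currently producing a signal, look up its localization
    parameters (gains, delay tap, FIR coefficients) in the sound source
    position information and hand them to the signal path."""
    for src in active_sources:
        params = position_info[src]
        apply_params(src, params)
```

Each pass of the loop corresponds to one trip through steps 220, 230 and 240; the loop terminating corresponds to the "one cycle completed" branch of step 250.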
[0093]
As described above, according to the sound generation device of the embodiment of the present invention, the virtual sound source position for each signal (sound source signal) that generates a sound from one of the plurality of sound sources is determined based on at least one of the type of the sound source signal and the change in the type of the sound source signal. Sound images based on the sound source signals are then localized at the determined virtual sound source positions. Therefore, the listener can easily recognize which sound source is currently generating sound and which sound is currently important. The listener can also intuitively select and listen to the sound currently required from among the sounds based on the plurality of sound source signals generated by the plurality of sound sources.
[0094]
Furthermore, since the sound image of each sound source signal is localized, the listener can be made to recognize the direction and sense of distance of a specific sound. Therefore, for example, when the listener is the driver of a motorcycle or a four-wheeled vehicle, the driver can recognize by sound the direction and distance of an approaching emergency vehicle or obstacle.
[0095]
In addition, since the above-described sound generation device can determine the virtual sound source position based not only on the type of the sound source signal and its change but also on the accompanying information (information on the volume of the sound to be generated based on the sound source signal, information on the direction of the sound, the importance of the sound, and the like), more appropriate sound image localization can be performed. In this case, virtual sound source position patterns corresponding to the type of the sound source signal, the change in the type of the sound source signal, and the accompanying information are stored in advance in the sound source position pattern database 33a.
[0096]
Note that when a certain sound source does not output two or more types of sound source signals having different meanings (for example, the music player 17 outputs no signal with a meaning other than that of being a musical tone), the type of the sound source signal matches the type of the sound source itself. On the other hand, the mobile phone 16 generates at least two types of sound source signals: a sound source signal corresponding to a ringing tone, whose meaning is to notify an incoming call, and a sound source signal corresponding to the voice of the calling party, which has a different meaning. In this case, the sound source signal corresponding to the ringing tone and the sound source signal corresponding to the voice of the other party can be treated as sound source signals of different types.
[0097]
Further, the accompanying information includes, for example, information on the direction and distance from which a warning sound generated by the vehicle state detection device 19, the traveling state detection device 20, or the vehicle periphery alarm device 21 serving as another sound source should be heard by the driver. For example, when an emergency vehicle or an obstacle is approaching the motorcycle, the direction or distance of the emergency vehicle or obstacle with respect to the motorcycle is accompanying information about the sound source signal that generates the warning sound. Alternatively, when an abnormality occurs in the motorcycle, information indicating the direction or position of the abnormal part is accompanying information about the sound source signal that generates a warning sound so that the warning sound is heard from the abnormal part toward the driver.
[0098]
Furthermore, for example, when the vehicle state detection device 19 outputs a sound source signal for generating a simple sound indicating that a direction indicator is blinking, information indicating which of the direction indicators is blinking is the accompanying information.
[0099]
The present invention is not limited to the above embodiment, and various modifications can be adopted within the scope of the present invention. For example, although the above-described sound generation device changes only the position (virtual sound source position) at which each sound image is localized, the volume of the sound based on each sound source signal may also be changed, in addition to the virtual sound source position, based on the type of the sound source signal and the change in the type of the sound source signal.
[0100]
Also, the virtual sound source position may be changed according to the position of the helmet 10 with respect to the vehicle body. Furthermore, the virtual sound source position may be determined automatically by any processing method, for example a neural network, without relying on the sound source position pattern database 33a.
[0101]
BRIEF DESCRIPTION OF THE DRAWINGS [FIG. 1] is a schematic configuration diagram of a sound generation device according to an embodiment of the present invention. [FIG. 2] is a circuit block diagram of the sound generation device shown in FIG. 1. [FIG. 3] is a circuit block diagram of the sound source localization device shown in FIG. 2. [FIG. 4] is a diagram for explaining the sound image localization method of the sound generation device shown in FIG. 1. [FIG. 5] (A) to (C) are diagrams for explaining the sound image localization method of the sound generation device shown in FIG. 1. [FIG. 6] is a diagram showing an example of the sound source localization patterns of the sound generation device shown in FIG. 1. [FIG. 7] is a diagram for explaining the relationship between the direction of the driver's helmet and a warning sound. [FIG. 8] is a flowchart showing the processing procedure performed by the sound source position determination device shown in FIG. 2. [FIG. 9] is a flowchart showing the processing procedure performed by the sound source localization device shown in FIG. 2.
Explanation of Reference Signs
[0102]
10: driver's helmet, 11: driver's headphone, 11R: right speaker, 11L: left speaker, 12: driver's microphone, 13: driver's helmet direction detection geomagnetic sensor, 14: driver unit, 15: vehicle body direction detection geomagnetic sensor, 16: mobile phone, 17: music player, 18: navigation device, 19: vehicle state detection device, 20: traveling state detection device, 21: other sound source, 30: in-vehicle unit, 31: conversation radio communication device, 32: communication device, 33: sound source position determination device, 33a: sound source position pattern database, 34: sound source localization device, 34a: AD converter, 34b: preceding-sound gain multiplier, 34c: multiplexer, 34d: delay line, 34e: band-pass filter, 34f: following-sound gain multiplier, 34g: FIR filter, 34gL: left-ear FIR filter, 34gR: right-ear FIR filter, 34h: adder, 34hR: right-ear adder, 34hL: left-ear adder, 40: passenger's helmet, 41: passenger's headphone, 42: passenger's microphone, 43: passenger unit.