Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2008066872
A sound space re-synthesis presentation system is provided that enables precise reproduction of a sound space at another location, such as a remote place, following the head movement of a listener.
Kind Code: A1. The system comprises: a signal pickup means 10 that picks up acoustic signals using a head model 11 fitted with a large number of microphones; a sensor 31 that detects the head movement of a listener at a point distant from the signal pickup means 10; a signal combining means 20 that performs signal processing on the acoustic signals according to the listener's head position and the listener's individual characteristics; and a signal reproduction means 30 that outputs the acoustic signals processed by the signal combining means 20 to an audio output device, such as headphones 32, so that the listener can listen to them. This makes it possible to realize a head-related transfer function unique to each listener, and to provide a plurality of listeners with a reproduction of a highly realistic sound space. [Selected figure] Figure 1
Sound space resynthesis presentation system
[0001]
The present invention relates to a technique for constructing a system that precisely reproduces a sound space at another location, such as a remote place, following a listener's head movement.
[0002]
In recent years, there has been growing interest in reproducing not just sound itself but the sound space, such as the position of sound sources and the sense of spaciousness, using 5.1-channel surround systems and the like.
10-05-2019
1
If the sound space can be reproduced, a listener can experience the feeling of being in a theater or a stadium while staying at home.
[0003]
We humans perceive the position of a sound source by listening with our two ears. Even when a sound source emits the same sound, reflection and diffraction by the face, shoulders, and pinnae cause the frequency characteristics of the sound reaching the two ears to change as the position of the sound source changes, producing an interaural level difference (ILD). The interaural phase difference (IPD) also changes, and humans judge the position of the sound source based on these changes. These cues depend on the size and shape of the face, shoulders, and pinnae, and therefore differ from person to person. In other words, to convey the information of the true sound space to a listener in another place, it is not sufficient to simply record the sound at two receiving points and transmit it as it is; the recorded sound must be presented to each listener after appropriate processing. A function representing these cues as transfer characteristics is called a head-related transfer function (HRTF).
[0004]
In Non-Patent Document 1, a movable dummy head shaped like the listener, called TeleHead, is placed in the sound space to be captured, and the sound space is reproduced by driving the dummy head according to the head movement of a listener at a remote location. In this method, however, because the dummy head itself is made to move, noise from the servomotor and pulley is added to the recorded sound. In addition, an exact dummy head must be created for each listener, resulting in a large-scale apparatus. On the other hand, Non-Patent Document 2 proposes a method in which microphones are arranged at equal intervals on the circumference of a sphere and sound is collected with that sphere. In this method, sound is presented using the input of the one or two microphones closest to the position of each listener's ear. However, because this system is evaluated and built using the HRTF of the hard sphere instead of the listener's own HRTF, there is no guarantee that the reproduction of the sound-image position improves when a human listens to the presented sound.
[0005]
I. Toshima et al., "A steerable dummy head that tracks three-dimensional head movement: TeleHead," Acoust. Sci. & Tech., Vol. 24, No. 5, pp. 327-329 (2003).
V. Ralph Algazi et al., "Motion-Tracked Binaural Sound," J. Audio Eng. Soc., Vol. 52, No. 11, pp. 1142-1153 (2004 Nov.).
[0006]
As described above, in Non-Patent Document 1, moving the dummy head adds servomotor and pulley noise to the recorded sound, so precise reproduction of the sound space is not possible. In addition, an exact dummy head must be created for each listener, resulting in a large-scale apparatus. In Non-Patent Document 2, the degree of signal reproduction is evaluated and the system is constructed using the HRTF of the hard sphere instead of the listener's HRTF, so there is no guarantee that the reproduction of the sound-image position improves when a human listens to the presented sound.
[0007]
To solve the above problems, the present invention uses a head model fitted with a large number of microphones for sound collection, changes the way the signals input to the microphones are added according to the direction the listener is facing, and performs signal processing to fit the listener's HRTF. The aim is to provide a system that, while keeping the head model fixed, presents sound matching the listener's hearing and precisely reproduces the sound space of a remote location while following the listener's head movement.
[0008]
In order to achieve the above object, the sound space re-synthesis presentation system according to claim 1 is a system that presents a precise reproduction of a sound space following the head movement of a listener, and comprises: a signal pickup means that picks up acoustic signals using a head model fitted with a large number of microphones; a sensor that detects the head movement of a listener at a point apart from the signal pickup means; a signal synthesizing means that performs signal processing on the acoustic signals according to the listener's head position and the listener's individual characteristics; and a signal reproducing means that outputs the acoustic signals processed by the signal synthesizing means to an audio output device so that the listener can listen to them.
Here, examples of the audio output device include headphones, earphones, and the bone-conduction earphones used in hearing aids and the like.
[0009]
In the sound space re-synthesis presentation system according to claim 2, the head model comprises a head, including a forehead part (the part with the longest circumference), pinna parts, and the like, and a torso part; it has an axisymmetric shape that is symmetrical in the horizontal plane; and a plurality of microphones for picking up acoustic signals are attached to it.
[0010]
In the sound space re-synthesis presentation system according to claim 3, the signal combining means comprises: a deriving means that derives, from the acoustic signals picked up by all the microphones attached to the head model, transfer functions representing the change in acoustic characteristics with microphone position; and a converting means that synthesizes a two-channel acoustic signal according to the listener's head position and the listener's individual characteristics, based on the transfer functions and the listener's head-position information acquired by the sensor.
[0011]
In the sound space re-synthesis presentation system according to claim 4, the deriving means sets the front of the head model to 0° at a certain frequency f, takes the horizontal angle of the sound source measured clockwise from that reference as θ, and derives the transfer function Hf,i(θ) (i = 1 to n, where n is the number of microphones) at each microphone position of the head model, with θ as a variable.
[0012]
In the sound space re-synthesis presentation system according to claim 5, the deriving means sets the front of the head model to 0° at a certain frequency f, takes the elevation angle of the sound source from that reference as the variable φ, and derives the transfer function Hf,i(φ), or Hf,i(θ, φ) (i = 1 to n, where n is the number of microphones), at each microphone position of the head model.
[0013]
In the sound space re-synthesis presentation system according to claim 6, the converting means weights the transfer functions derived by the deriving means using appropriate weighting coefficients zf,i (i = 1 to n, where n is the number of microphones), based on the listener's head-position information acquired by the sensor at a certain frequency f, and sums the weighted transfer functions to synthesize the listener's left and right head-related transfer functions.
[0014]
In the sound space re-synthesis presentation system according to claim 7, the weighting coefficients zf,i, which correspond to the n microphone positions at a certain frequency f and are used to calculate the listener-specific head-related transfer function, are derived in advance as weighting coefficients covering the 0° to 360° range of the horizontal angle of the listener's head position, so as to correspond to the listener's head position taking various angles.
Here, for example, the Levenberg-Marquardt method is used to calculate the weighting coefficients.
[0015]
In the sound space re-synthesis presentation system according to claim 8, the weighting coefficients zf,i, which correspond to the n microphone positions at a certain frequency f and are used to calculate the listener-specific head-related transfer function, are derived in advance as weighting coefficients covering the -90° to 90° range of the elevation angle of the listener's head position, so as to correspond to the listener's head position taking various angles.
The weighting coefficients zf,i may also be derived as weighting coefficients corresponding to combinations of horizontal angle and elevation angle.
Here too, for example, the Levenberg-Marquardt method is used to calculate the weighting coefficients.
[0016]
The sound space re-synthesis presentation system according to claim 9 is characterized in that the weighting coefficients zf,i are derived for each listener according to characteristics unique to that listener, such as the way sound is diffracted or reflected.
[0017]
According to the invention of claim 1, by collecting acoustic signals in the sound space of a certain environment using a head model fitted with a large number of microphones, and by performing signal processing tailored to each individual, multiple people at another location, such as a remote place, can listen to that sound space simultaneously, which facilitates the construction of virtual-reality systems.
In addition, by reproducing the sound space, such as the positions of sound sources and the spaciousness of the place, the listener can, for example, enjoy the feeling of being in a theater or a stadium while staying at home.
Also, compared with the prior art using the dummy head of Non-Patent Document 1, there is no need to create a dummy head for each listener, so the system configuration is simplified; and since the head model does not move, the noise caused by the servomotors and pulleys that move a dummy head is not added to the recorded sound, reducing noise.
[0018]
According to the invention of claim 2, the head model to which the many microphones are attached includes not only a head but also a torso part, so that sound is collected taking into account reflection and diffraction of the acoustic signal; the position of the sound source can thus be perceived, and the listener can accurately determine from which direction the sound is coming.
Further, by giving the head model an axisymmetric shape that is symmetrical in the horizontal plane, the various positions the listener's head may take can be handled with a smaller amount of data.
[0019]
According to the invention of claim 3, the acoustic signals collected by all the microphones attached to the head model are derived as transfer functions, and the transfer functions are combined into an acoustic signal according to the listener's head position and the listener's individual characteristics. This makes it possible to realize the listener's own head-related transfer function, and to provide a plurality of listeners with a reproduction of a highly realistic sound space.
[0020]
According to the invention of claim 4 or claim 5, by deriving the transfer functions at the microphone positions of the head model as functions of the horizontal angle and elevation angle with respect to the sound-source position, the sound-source direction can be captured accurately, and it becomes possible to synthesize the listener's head-related transfer function.
[0021]
According to the invention of claim 6, even for different listeners, weighting the transfer functions with appropriate weighting coefficients zf,i based on the listener's head-position information makes it possible to realize each individual's head-related transfer function, and to provide a plurality of listeners with a reproduction of a highly realistic sound space.
[0022]
According to the invention of claim 7 or claim 8, the weighting coefficients zf,i are derived in advance for the various angles (horizontal and elevation) that the listener's head position may take. This shortens the time needed to calculate the listener's head-related transfer function, and reduces the delay when reproducing a highly realistic sound space for a listener at a remote location.
[0023]
According to the invention of claim 9, even for different listeners, deriving the weighting coefficients zf,i for each listener according to characteristics unique to that listener, such as the way sound is diffracted or reflected, makes it possible to realize each individual's head-related transfer function, and to simultaneously provide a plurality of listeners with a reproduction of a highly realistic sound space.
[0024]
Next, a sound space re-synthesis presentation system according to an embodiment of the present invention will be described with reference to the drawings.
The present invention is not limited to this embodiment.
[0025]
FIG. 1 shows the configuration of a sound space re-synthesis presentation system according to an embodiment of the present invention.
As shown in FIG. 1, the sound space re-synthesis presentation system is a system that presents a precise reproduction of a sound space following the head movement of a listener, and comprises: a signal pickup means 10 that picks up acoustic signals using a head model 11 fitted with a large number of microphones; a sensor 31 that detects the head movement of the listener at a point distant from the signal pickup means 10; a signal combining means 20 that performs signal processing on the acoustic signals according to the listener's head position and the listener's individual characteristics; and a signal reproducing means 30 that outputs the acoustic signals processed by the signal combining means 20 to an audio output device, such as headphones 32, so that the listener can listen to them.
[0026]
In this system, sound is collected by a head model having a large number of microphones, and the sound input to each microphone is processed and presented so that the listener can accurately determine from which direction the sound is coming.
For example, by placing the head model 11 in a theater as shown in FIG. 1, appropriately processing the inputs of the many microphones, and presenting the result to a listener at another location, the listener can hear the sound with a sense of presence, as if actually in the theater.
Hereinafter, this head model having a large number of microphones is referred to as SENZI (Symmetrical object with ENchased ZIllion microphones).
[0027]
The head and body of SENZI used for sound collection are based on the dimensions of the top of the head, the forehead (the part with the longest circumference), the neck, and the trunk of the SAMRAI dummy head made by KOKEN CO., LTD.
Not only the head but also reflection and diffraction at the torso play an important role in perceiving the position of a sound source. In addition, so that changes in the angle of the listener's head can be handled with a smaller amount of data by exploiting symmetry, the shape is made axisymmetric, i.e., symmetrical from any viewpoint in the horizontal plane. Auricles, modeled on the shape of SAMRAI's auricle viewed from behind, are also manufactured and attached to the head in four directions. The head, auricles, and shoulders can be made, for example, of styrofoam, and the torso, for example, of polyurethane. FIG. 2 shows the dimensions of SENZI, and FIG. 3 shows an overall view of SENZI.
[0028]
The signal synthesis means 20 comprises a deriving means that derives, from the acoustic signals picked up by all the microphones attached to the head model 11, transfer functions representing the change in acoustic characteristics with microphone position, and a converting means that synthesizes a two-channel acoustic signal according to the listener's head position and the listener's individual characteristics, based on the transfer functions and the listener's head-position information acquired by the sensor 31. That is, when sound is collected using SENZI to reproduce the sound space, the shape of SENZI's head and body differs from that of the listener, so even if the collected sound were presented as it is, the diffraction and reflection properties of the sound would differ and an accurate sound image could not be given to the listener. It is therefore necessary to somehow convert the transfer functions at the positions of the microphones attached to SENZI into those of the listener.
In Non-Patent Document 3, transfer-function conversion is performed using a neural network. In the present invention, to make the calculation easier, sound is collected with SENZI's many microphones, and a signal that allows the listener to accurately recognize the position of the sound source is synthesized by weighting those signals with suitable coefficients, adding them, and presenting the result. In other words, the listener's HRTF is synthesized by weighting and adding the transfer functions at the positions of the multiple microphones.
[0029]
In the deriving means, the front of the head model is set to 0° at a certain frequency f, and the horizontal angle of the sound source, measured clockwise from that reference, is taken as θ; the transfer function Hf,i(θ) (i = 1 to n, where n is the number of microphones) at each microphone position is derived with θ as a variable. As for the frequency f, when, for example, an 8192-point frequency analysis is performed at a sampling frequency of 48 kHz, the transfer function Hf,i(θ) is obtained at intervals of 48000/8192 ≈ 5.86 Hz.
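As a quick check of the numbers above, the spacing of the frequencies f at which Hf,i(θ) is obtained follows directly from the sampling frequency and the analysis length (a short sketch in Python; the values are the ones stated in the text):

```python
import numpy as np

fs = 48000      # sampling frequency [Hz], as in the text
n_fft = 8192    # points of the frequency analysis

df = fs / n_fft                              # spacing between analysis frequencies f
freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)   # the analysis frequencies themselves

print(round(df, 2))  # 5.86 (Hz), matching 48000/8192 in the text
```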
[0030]
Further, in the deriving means, the front of the head model is set to 0° at a certain frequency f, the elevation angle of the sound source from that reference is taken as φ, and the transfer function Hf,i(φ), or in some cases Hf,i(θ, φ) (i = 1 to n, where n is the number of microphones), at each microphone position is derived with φ as a variable.
[0031]
In the converting means, the transfer functions derived by the deriving means are weighted using appropriate weighting coefficients zf,i (i = 1 to n, where n is the number of microphones), based on the listener's head-position information acquired by the sensor 31 at a certain frequency f, and the weighted transfer functions are summed to synthesize the listener's left and right head-related transfer functions.
Here, the listener's left and right head-related transfer functions are calculated by, for example, equation (1). Let the front be 0° at a certain frequency f, and let θ be the horizontal angle of the sound source measured clockwise from that reference. The desired HRTFf,listener(θ) is calculated from the transfer functions Hf,i(θ) (i = 1 to n, where n is the number of microphones) at the microphone positions of SENZI, with θ as a variable; zf,i is a weighting coefficient and is a complex number.

HRTFf,listener(θ) = Σ(i=1..n) zf,i · Hf,i(θ) + ε(θ)   (1)

The weighting coefficients zf,i in equation (1) are determined so that the residual ε(θ) becomes as small as possible for all θ. Equation (1) shows an example of synthesizing the listener's head-related transfer function HRTFf,listener(θ) using the transfer functions Hf,i(θ), but the transfer functions Hf,i(φ) or Hf,i(θ, φ) can also be used to synthesize HRTFf,listener(φ) or HRTFf,listener(θ, φ).
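The determination of the weighting coefficients described here can be sketched as follows. The patent names the Levenberg-Marquardt method; since the model is linear in the complex weights zf,i, this illustrative sketch substitutes an ordinary complex least-squares solve, and uses randomly generated placeholder data in place of the measured Hf,i(θ) and the listener's HRTF (all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_angles = 8, 72   # hypothetical: 8 microphones, 5-degree horizontal grid

# H[k, i]: complex transfer function H_f,i(theta_k) at one frequency f
H = rng.standard_normal((n_angles, n_mics)) + 1j * rng.standard_normal((n_angles, n_mics))
# target[k]: the listener's HRTF_f,listener(theta_k) to be approximated
target = rng.standard_normal(n_angles) + 1j * rng.standard_normal(n_angles)

# Choose complex weights z_f,i so that the residual eps(theta) is as small
# as possible over all theta (least-squares sense).
z, *_ = np.linalg.lstsq(H, target, rcond=None)

synthesized = H @ z           # sum over i of z_f,i * H_f,i(theta_k)
eps = target - synthesized    # residual eps(theta_k)
```

The same solve would be repeated independently for every frequency f and, in the claim 8 case, for every elevation angle.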
[0032]
The weighting coefficients zf,i, which correspond to the n microphone positions at a certain frequency f and are used to calculate the listener-specific head-related transfer function, are derived in advance as weighting coefficients covering the 0° to 360° range of the horizontal angle of the listener's head position, so as to correspond to the listener's head position taking various angles. Here, for example, the Levenberg-Marquardt method is used to derive the weighting coefficients.
[0033]
Alternatively, the weighting coefficients zf,i, corresponding to the n microphone positions at a certain frequency f, may be derived in advance as weighting coefficients covering the -90° to 90° range of the elevation angle of the listener's head position, so as to correspond to the listener's head position taking various angles. Furthermore, the weighting coefficients zf,i may be derived as weighting coefficients corresponding to combinations of horizontal angle and elevation angle. Here too, for example, the Levenberg-Marquardt method is used to calculate the weighting coefficients.
[0034]
FIG. 10 shows an example of the derivation of the weighting coefficients zf,i at a certain frequency f. In this example, weighting coefficients are derived in advance at 5° intervals of the horizontal angle α of the listener's head position. For example, when the angle α of the listener's head position is 15°, the listener's head-related transfer function is synthesized by weighting and adding the transfer functions Hf,i(θ) using the weighting coefficients zf,i(α = 15°).
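The table lookup described here can be sketched as follows; the table contents are random placeholders, and only the 5° grid and the α = 15° example follow the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics = 8   # hypothetical microphone count

# Precomputed weights z_f,i for head angles alpha = 0, 5, ..., 355 degrees
table = {alpha: rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)
         for alpha in range(0, 360, 5)}

def weights_for(alpha_deg):
    """Snap the sensed head angle to the nearest 5-degree table entry."""
    key = int(round(alpha_deg / 5.0)) * 5 % 360
    return table[key]

z = weights_for(15)   # listener's head at alpha = 15 degrees, as in the text
```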
[0035]
The transfer function Hf,i(θ) is represented by a complex number carrying the ILD and IPD information, obtained, for example, by converting the acoustic signal collected by a microphone into frequency-domain data by FFT (Fast Fourier Transform). Accordingly, the weighting coefficient zf,i is also represented by a complex number.
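The conversion to complex frequency-domain data mentioned above can be sketched with NumPy's FFT; a synthetic sine wave stands in for a microphone recording:

```python
import numpy as np

fs, n_fft = 48000, 8192
t = np.arange(n_fft) / fs
x = np.sin(2 * np.pi * 1000.0 * t)   # placeholder for a microphone signal

# Complex spectrum: each bin carries magnitude (level, hence ILD) and
# phase (hence IPD), which is why the weights z_f,i must also be complex.
X = np.fft.rfft(x)
```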
[0036]
Furthermore, because characteristics such as the way sound is diffracted and reflected differ from listener to listener, the weighting coefficients zf,i are derived for each listener according to that listener's characteristics.
[0037]
It is therefore necessary to prepare in advance a number of weighting-coefficient sets zf,i, as in the derivation example of FIG. 10, equal to (number of points of the frequency analysis) × (number of listeners).
This makes it possible to synthesize a head-related transfer function that follows the listener's head movement based on the head-position information acquired by the sensor, and to simultaneously provide a plurality of listeners with a reproduction of a highly realistic sound space.
[0038]
Next, the HRTF measurement method will be described.
FIG. 4 shows the positions of the microphones of SENZI used in the sound space re-synthesis presentation system of the present invention.
[0039]
To process the signals input to SENZI's microphones to suit the listener's HRTF, the transfer functions representing the change in acoustic characteristics due to the head and torso must be known at all microphone positions. Microphones were therefore attached at various locations on SENZI, and the transfer function at each microphone position was measured. At every position, measurements were made in an anechoic chamber for sound-source directions from 0° to 355° at 5° intervals in azimuth and from -80° to 90° at 10° intervals in elevation. The sound signal used for the measurement was an 8192-point OATSP signal (Non-Patent Document 4) at a sampling frequency of 48 kHz. In addition, as the target HRTF to be synthesized, the HRTF of the precision dummy head SAMRAI was also measured. This measurement was performed under the same conditions as the measurement of SENZI's transfer functions, using the ear-canal blocking method, in which an ear mold with a microphone attached is inserted into the ear canal of the ear being measured.
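The measurement grid described above amounts to the following direction count (a simple check of the stated intervals):

```python
# Azimuths 0-355 deg in 5-deg steps; elevations -80 to +90 deg in 10-deg steps
azimuths = list(range(0, 360, 5))
elevations = list(range(-80, 91, 10))
n_directions = len(azimuths) * len(elevations)
print(len(azimuths), len(elevations), n_directions)  # 72 18 1296
```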
[0040]
The target HRTF, the HRTF of SAMRAI's right ear at an elevation angle of 0°, was synthesized from the transfer functions measured at an elevation angle of 0° using equation (1) described above. In this process, the absolute value |zf,i| of the weighting coefficient zf,i was used as an index of the importance of the transfer function at each position, and transfer functions with small values were excluded. This procedure was repeated, and finally the transfer functions at the positions shown in FIG. 4 were used for the synthesis. Although only one direction is shown in FIG. 4, this microphone arrangement is symmetric in all four directions, so the number of microphones used is 14 × 4 = 56.
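The pruning procedure described above (fit the weights, drop the microphone with the smallest |zf,i|, repeat) can be sketched as follows, again with random placeholder data and an illustrative least-squares fit standing in for the Levenberg-Marquardt solve:

```python
import numpy as np

rng = np.random.default_rng(2)
n_mics, n_angles = 20, 72   # hypothetical starting microphone count
H = rng.standard_normal((n_angles, n_mics)) + 1j * rng.standard_normal((n_angles, n_mics))
target = rng.standard_normal(n_angles) + 1j * rng.standard_normal(n_angles)

keep = list(range(n_mics))
while len(keep) > 14:   # prune down to 14 positions per direction, as in the text
    z, *_ = np.linalg.lstsq(H[:, keep], target, rcond=None)
    # |z_f,i| as the importance index: drop the least important microphone
    keep.pop(int(np.argmin(np.abs(z))))

print(len(keep), len(keep) * 4)  # 14 56  (14 positions x 4 symmetric directions)
```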
[0041]
The transfer functions of the microphone positions used in the HRTF synthesis have characteristics important for the synthesis, whereas the transfer functions of the positions not used are less important. The transfer functions at the microphone positions fell roughly into two groups: one shows dips or peaks due to reflection at the head or pinna at angles where the sound source is not shadowed by the head as seen from the microphone, and the other shows no prominent dips or peaks at such angles. Both groups were found among the transfer functions that were used and among those that were not. From this, it appears that transfer functions whose dip or peak positions match those of the target HRTF are selected, regardless of the size of the dip or peak.
[0042]
FIG. 5 shows an example of the transfer function at a microphone position that was used (position a in FIG. 4), and FIG. 6 shows an example of the transfer function at a microphone installed on the shoulder, which was not used. FIG. 7 shows the target HRTF. The vertical axis of each graph represents the horizontal angle from the front to the sound-source direction, with the front as 0° and angles increasing clockwise. The horizontal axis represents frequency, and the lightness represents the magnitude of the amplitude. Both FIG. 5 and FIG. 6 correspond to microphones located near 90°, for which the angles at which the sound source is not shadowed by the head are roughly 0° to 180°. In both, dips due to sound reflection are observed at 0° to 180°. However, compared with the target HRTF in FIG. 7, periodic dips appear densely in the transfer function of FIG. 6, and few similarities can be seen. In the transfer function of FIG. 5, on the other hand, there are two simple dips, and in particular similarities to FIG. 7 can be found in the positions of the dips above 10,000 Hz at 0° to 50°.
[0043]
Next, the results of the HRTF synthesis will be described. FIG. 8 shows the characteristics of the HRTF synthesized from the 56 transfer functions. A comparison of FIG. 7 and FIG. 8 shows almost no difference. To examine the differences in detail, the residual between the synthesized HRTF and the target HRTF is shown in FIG. 9. The residual ε(f, θ) is obtained from the following equation, where HRTFSAMRAI(f, θ) is the HRTF of SAMRAI to be synthesized and HRTFsynthesized(f, θ) is the synthesized HRTF:

ε(f, θ) = |HRTFSAMRAI(f, θ) - HRTFsynthesized(f, θ)|
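The residual ε(f, θ) described here can be computed directly; placeholder arrays stand in for the two measured HRTF maps over frequency and angle:

```python
import numpy as np

rng = np.random.default_rng(3)
n_freqs, n_angles = 4097, 72   # hypothetical grid: FFT bins x horizontal angles

hrtf_samrai = rng.standard_normal((n_freqs, n_angles)) + 1j * rng.standard_normal((n_freqs, n_angles))
# A nearly identical synthesized HRTF, to mimic a good fit
hrtf_synth = hrtf_samrai + 0.01 * rng.standard_normal((n_freqs, n_angles))

# Residual eps(f, theta): small where the synthesis reproduces the target well
eps = np.abs(hrtf_samrai - hrtf_synth)
```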
[0044]
FIG. 9 shows that there is almost no residual in the low-frequency range up to 5000 Hz. This is thought to be because the positions of the dips and peaks of the SAMRAI HRTF below 5000 Hz, shown in FIG. 7, are similar to those of the SENZI transfer functions, such as the one shown in FIG. 5. Furthermore, the residual is small in the region below 12,000 Hz at around 0° to 180°. The reason is thought to be that, in SENZI's transfer functions, dips due to sound reflection at the head and pinna appear in just this region, and these dips act as they do in the HRTF. The residual near 270° is large; this is thought to be because differences in how the sound wraps around at this angle produce locally deep dips in the individual transfer functions and in the target HRTF at positions that differ from each other, so that the synthesis could not reproduce them finely.
[0045]
From the above, in order to present sound from which the listener can accurately recognize the direction of the sound source, an attempt was made to synthesize human HRTFs by appropriately adding the transfer functions at the microphone positions. The results showed that a reasonably accurate synthesis is possible.
[0046]
T. Sugano et al., "Design of microphone array for sound field recording using neural network," ACTIVE '95, pp. 1233-1240 (1995 Jul.).
Y. Suzuki et al., "An optimum computer-generated pulse signal suitable for the measurement of very long impulse responses," J. Acoust. Soc. Am., Vol. 97, pp. 1119-1123 (1995).
[0047]
FIG. 1 shows the configuration of the sound space re-synthesis presentation system of the present invention. FIG. 2 shows the dimensions of the head model (SENZI) used in the sound space re-synthesis presentation system of the present invention. FIG. 3 shows an overall view of the head model (SENZI). FIG. 4 shows the positions of the microphones used on the head model (SENZI). FIG. 5 shows an example of a SENZI transfer function used for signal synthesis. FIG. 6 shows an example of a SENZI transfer function not used for signal synthesis. FIG. 7 shows the HRTF of SAMRAI. FIG. 8 shows the characteristics of the HRTF synthesized from the 56 transfer functions. FIG. 9 shows the residual between the synthesized HRTF and the HRTF of SAMRAI. FIG. 10 shows an example of the derivation of the weighting coefficients zf,i at a certain frequency f for calculating a listener-specific head-related transfer function.
10-05-2019
15
Explanation of signs
[0048]
10: signal pickup means, 11: head model, 20: signal combining means, 30: signal reproducing means, 31: sensor, 32: headphones