JPH02298200
Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JPH02298200
[0001]
FIELD OF THE INVENTION The present invention relates generally to a method and apparatus for processing audio signals, and more particularly to a sound image forming method and apparatus that processes an audio signal so that, to the listener, the resulting sound appears to originate from a position other than the actual position of the loudspeakers. A listener can readily estimate the direction and range of a sound source, and if multiple sources are distributed in the space around the listener, the position of each source can be sensed independently and simultaneously. Despite many years of substantial ongoing research, no satisfactory theory has yet been obtained that accounts for all of these perceptual abilities of the average listener. Measuring the pressure or velocity of a sound wave at a single point and effectively reproducing the sound at a single point can maintain the intelligibility of speech and the integrity of musical tone. However, this method removes all of the information needed to locate the sound in space; for example, if an orchestra is reproduced in this way, all of the instruments are perceived as playing at a single playback point. Efforts have therefore been made to preserve, during transmission or recording and playback, the directional cues inherently contained in the sound. U.S. Pat. No. 2,093,540 (September 1937) to A. D. Blumlein shows substantial details of such a two-channel system. The artificial enhancement of the differences between stereo channels as a means of broadening the stereo image, which underlies many current stereo sound enhancement techniques, is shown in detail in that patent. Some stereo enhancement schemes rely on combining the stereo channels in some way to emphasize the cues to spatial location already present in a stereo recording. This mutual coupling, and the interference cancellation that depends on it, varies with the listener and with the loudspeaker geometry, and must therefore be adjusted individually in each case. Attempts to make stereo systems more sophisticated have not resulted in major improvements in widely used systems. Real listeners want to sit comfortably, turn their heads, move about, and arrange the loudspeakers to suit the layout of the room and the other
10-05-2019
1
furniture. OBJECT OF THE INVENTION Therefore, an object of the present invention is to provide a method of processing an audio signal, and an apparatus therefor, such that when the audio signal is reproduced by two audio transducers or loudspeakers, the apparent position of the sound source can be properly controlled, and the sound source appears to the listener to lie at a position remote from the positions of the loudspeakers.
SUMMARY OF THE INVENTION The present invention is based on the discovery that monophonic audio reproduction using two separate channels and two loudspeakers can provide a very clear, sharply localized sound image at various locations. Observation of this phenomenon by the inventor under special conditions in a recording studio led to a systematic exploration of the conditions necessary to make this auditory illusion occur. Several years of research have provided a substantial understanding of this effect and the ability to reproduce it consistently at will. According to the invention, an auditory illusion in which the sound source is placed somewhere in the three-dimensional space surrounding the listener occurs without constraint by the positions of the loudspeakers. Multiple images of independent sources, with no known limit on their number, can be reproduced simultaneously at different locations using the two channels. Only two independent channels and two loudspeakers are required for playback, and the separation distance or rotation of the two loudspeakers can be varied within wide limits without destroying the image. For example, rotation of the head of a listener who is looking at the image, in any plane, does not destroy the image. The processing of the audio signal according to the invention converts a single-channel audio signal into a two-channel signal in which the differences in phase and amplitude between the two signals are adjusted, in a frequency-dependent manner, over the entire audio spectrum. The process divides the monaural input signal into two signals and then passes one or both of them through a transfer function whose amplitude and phase are, in general, non-uniform functions of frequency. The transfer function may include signal inversion and frequency-dependent delay. As far as is known, the transfer functions used in the process of the present invention cannot be derived from any currently known theory; they must be characterized by empirical means. Each processing transfer function images the sound at a single position determined by the characteristics of that transfer function, so the position of the sound image is determined by the particular transfer function used. For a given position there are several different transfer functions, each of which is generally sufficient to place an image at that position. When moving images are required, one transfer function is smoothly switched to another, one after another. That is, with appropriate variation of the process, the embodiment is not limited to the production of stationary images.
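The single-channel-to-two-channel processing described above can be sketched in code. The following is a minimal illustration, not the patent's apparatus: it splits a monaural signal into two channels and imposes a frequency-dependent amplitude and delay (phase) difference on one of them, interpolating between tabulated values. The gain and delay tables passed in are placeholders; the patent's actual transfer functions are determined empirically.

```python
import numpy as np

def image_process(mono, fs, gain_db, delay_s, freqs):
    """Split a mono signal into two channels and impose a
    frequency-dependent amplitude and phase (delay) difference
    on the right channel, as the process above describes.
    gain_db and delay_s are sampled at the frequencies in freqs;
    the values used are illustrative placeholders, not the
    patent's empirical data."""
    n = len(mono)
    spectrum = np.fft.rfft(mono)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate the tabulated differences onto the FFT bin frequencies.
    g = 10.0 ** (np.interp(f, freqs, gain_db) / 20.0)  # amplitude ratio
    d = np.interp(f, freqs, delay_s)                   # delay in seconds
    # Apply gain and a phase shift equivalent to the delay, per frequency.
    right = np.fft.irfft(spectrum * g * np.exp(-2j * np.pi * f * d), n)
    left = mono.copy()                                 # left channel unmodified
    return left, right
```

Concentrating all of the difference in one channel, as here, anticipates the simplification discussed later in the text; any real gain/delay table would come from the empirical listening procedure the patent describes.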
The audio signal processed in accordance with the invention can be reproduced immediately after processing without detriment to the imaging provided by the invention; it can also be recorded by conventional stereo recording techniques on the usual media such as optical discs, magnetic tape, recording discs and optical soundtracks, or transmitted by conventional stereo transmission techniques such as radio or cable. The sound image forming method according to the present invention can also be applied recursively. For example, if each channel of a normal stereo signal is treated as a monophonic signal and the two channels are imaged at two different positions in the listener's space, a full stereo image similar to that of the prior art will be sensed along the line connecting the image positions of the two channels. Furthermore, for example, when a stereo record or disc is made from a multitrack recording with, say, 24 channels, the recording engineer can place the various instruments and sounds at will to create special effects, by feeding each source channel through a transfer function processor. This operation also yields a two-channel audio signal that can be played back by conventional playback equipment, but with the imaging capability according to the invention. EXAMPLES Some dimensions and angles are shown in FIGS. 1 to 4 in order to explain the sound imaging method according to the invention clearly. FIG. 1 is a plan view of a stereo listening situation, showing the left speaker 101, the right speaker 102, the listener 103 and an apparent sound image position 104 for the listener. For the purpose of definition only, the listener 103 is shown lying on a straight line 105 perpendicular to the midpoint of the straight line 106 connecting the speakers 101 and 102. Although this is the reference listening position, according to the present invention the listener is not restricted to this position. The sound image azimuth angle a is measured counterclockwise from the straight line 105 to the line connecting the listener 103 and the sound image position 104. Similarly, the sound image slant range r is defined as the distance from the listener 103 to the sound image position 104. This range is the true range measured in three-dimensional space, not a projected range measured in plan or other orthographic projections.
According to the invention, the sound image may lie substantially out of the plane of the loudspeakers 101, 102. Therefore, in FIG. 2, an elevation angle for the sound image is defined. The listening position 201 corresponds to the position 103 in FIG. 1, and the image position 202 corresponds to the image position 104. The elevation angle is measured upwards from a horizontal line 203 passing through the head of the listener to a straight line 204 connecting the head of the listener and the image position 202. The speakers 101 and 102 do not necessarily lie on the straight line 203. Having defined the image position parameters for the reference listening configuration, we now define the parameters for possible variations of the listening configuration. In FIG. 3, the speakers 301, 302 and the straight lines 304, 305 correspond to the speakers 101, 102 and the straight lines 106, 105 of FIG. 1, respectively. The distance s between the loudspeakers is measured along line 304, and the distance d of the listener is measured along line 305. When the listener moves along a straight line 306, parallel to the straight line 304, to a position 307, a lateral displacement e measured along the straight line 306 is defined. For each of the speakers 301 and 302, azimuth angles p and q are respectively defined counterclockwise from the straight line passing through the speaker at right angles to the straight line connecting the two speakers, in the direction toward the listener. Similarly, for the listener, an azimuth angle m is defined counterclockwise from the straight line 305 to the direction in which the listener is facing. In FIG. 4, the height h of a speaker is measured from a horizontal line 401, passing through the head of the listener 303, to the vertical centerline of the speaker 302. The parameters defined above allow more than one description of a given geometry; for example, with perfect symmetry, the same image position can be described as (180, 0, x) or (0, 180, x). In normal stereo reproduction the sound image is constrained to lie along the straight line 106 of FIG. 1, but a sound image generated by the present invention can be located freely in space. That is, the azimuth angle a can range over a full 360°, and the range r is not limited to distances comparable to s or d. The sound image can be formed very close to the listener, for example at a fraction of the distance d, or far away, for example at several times the distance d, and at the same time at an azimuth angle a well away from the directions of the loudspeakers. Also, according to the present invention, the sound image can be positioned at any elevation angle. The listener's distance d can be varied from 0.5 m to 30 m or more, and the sound image remains apparently stationary throughout this variation. A good sound image is formed with speaker spacings s from 0.2 m to 8 m, using the same driving signals for all spacings.
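The coordinate definitions above (azimuth angle a, elevation angle, slant range r) can be tied together in a small sketch. The Cartesian axis convention below is an assumption for illustration only; the patent defines the angles and the range, not any axes.

```python
import math

def image_position(azimuth_deg, elevation_deg, r):
    """Convert the patent's image coordinates (azimuth a measured
    counterclockwise from the listener's reference line 105, elevation
    measured up from the horizontal, slant range r) to Cartesian
    coordinates centred on the listener's head.
    Assumed axes (not stated in the patent): x forward along line 105,
    y to the listener's left, z up."""
    a = math.radians(azimuth_deg)
    e = math.radians(elevation_deg)
    x = r * math.cos(e) * math.cos(a)  # forward component
    y = r * math.cos(e) * math.sin(a)  # leftward component (counterclockwise positive)
    z = r * math.sin(e)                # height above ear level
    return x, y, z
```

Note that r here is the true slant range, so an image at high elevation has a smaller horizontal projection, matching the text's distinction between slant range and projected range.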
The azimuth angles p and q of the loudspeakers can be varied widely and independently without any influence on the sound image. A notable feature of the present invention is that a slow change in the height h of a speaker does not affect the elevation angle of the sound image perceived by the listener. This holds for both positive and negative values of the height h, i.e. whether the loudspeaker is above or below the height of the listener's head. And, since the sound image formed is very realistic, the listener naturally "looks straight" in the direction of the sound image, that is, points directly at it. During this the sound image is stable; that is, the azimuth angle m of the listener does not appreciably affect the spatial position of the sound image, at least over the range +120° to -120° of the azimuth angle m. Because the impression of the rendered sound source is so strong, the listener does not find it difficult to "look straight" at the sound image, and a group of listeners will report the same sound image position. FIG. 5a and the following figures show a set of ten geometric listening configurations that were tested for sound image stability. In FIG. 5a, one listening configuration is shown in plan view. The left speaker 501 and the right speaker 502 reproduce the sound for the listener 503 to form a sound image 504. The subsequent figures of FIG. 5 show changes in the placement and orientation of the loudspeakers, similar to FIG. 5a. A total of ten configurations were tested in three different listening rooms, with the different values of the loudspeaker spacing s and the listener distance d shown in FIG. 5m. The first room was a small studio control area containing various equipment, the second a large recording studio in which almost nothing was placed, and the third a small laboratory with sound-absorbing material on three walls. In each test, the listener was required to report the sensed sound image position under two conditions: with the azimuth angle m of the listener's head at 0, and with the head directed toward the apparent sound image position. Each test was repeated for three listeners; that is, sound image stability was tested for a total of 180 configurations, each using the same input signal to the speakers. The azimuth angle a of the image was sensed at 60° in each case. FIG. 6 is a schematic diagram of a sound image transmission experiment, in which a sound image 601 is formed in a first room 604 by driving the speakers 602 and 603 with a signal processed according to the present invention.
A dummy head 605, of the type shown for example in DE 1927 401, has left and right microphones 606 and 607 in its model ears. The electrical signals delivered by the microphones 606, 607 to lines 608 and 609 are separately amplified by amplifiers 610, 611, which drive the left speaker 612 and the right speaker 613 in a second room 614. The listener 615 in the second room 614, acoustically isolated from the first room 604, senses a sharp secondary sound image 616 corresponding to the sound image 601 in the first room 604. FIG. 7 shows an example of the relationship of the sound processing apparatus according to the present invention to a conventional system. A plurality of microphone signals 702, derived from a plurality of sound sources or from one or more multitrack sources 701 such as a magnetic tape reproducing apparatus, are supplied to the mixing console 703 of a studio. This console may be used to modify the signals, changing levels and balancing frequency components in any desired way. A plurality of modified monophonic signals 704 generated by the console 703 are provided to the inputs of the sound image processing device 705 according to the invention. In this processing unit 705, each input channel is assigned to one sound image position, and a two-channel signal is generated from each single input signal 704 by the transfer function processing described herein. All the two-channel signals are mixed to yield a final pair of signals 706, 707, which can be returned to the mixing console 708. It should be noted that although the two-channel signals generated by the present invention are not true left and right stereo signals, such terminology provides a convenient way to describe them: when all of the two-channel signals are mixed, all the left signals are combined into one signal and all the right signals are combined into one signal. In practice the consoles 703 and 708 may be two separate sections of the same console, and using the console facilities, the processed signals can be applied to drive the speakers 709, 710 for monitoring purposes. The master stereo signals 711, 712 are supplied, after any required changes and level settings, to a master stereo recorder 713, for example a two-channel magnetic tape recorder.
Note that the members other than the sound image processing device 705 are conventionally known. The sound image processing device 705 is shown in more detail in FIG. 8, where the input signals 801 correspond to the signals 704 and the output signals 807, 808 correspond to the signals 711, 712 respectively. Each monophonic input signal 801 is provided to a separate signal processor 802. These processing units 802 operate independently; their audio signals are not coupled to one another. Each signal processor 802 operates to generate a two-channel signal with phase and amplitude differences adjusted on a frequency-dependent basis. These transfer functions, which may be described as impulse responses in the time domain or, equivalently, as complex frequency responses or amplitude and phase responses in the frequency domain, are characterized only by the desired image position at which the input signal is to be projected. One or more processed signal pairs 803 generated by the signal processors are provided to the inputs of a stereo mixer 804. Some or all of these may also be provided to the inputs of a storage device 805, which can store the completely processed stereo audio signals and replay them at its outputs 806. Typically, this storage has some number of input channel pairs and of output channel pairs. The outputs 806 of the storage 805 are provided to separate inputs of the stereo mixer 804. The stereo mixer 804 sums all the left inputs to form the left output 807 and sums all the right inputs to form the right output 808, possibly changing the amplitude of each input before summing. There is no interaction or coupling of the left and right channels in the mixer. An operator 809 manages the operation of the system via an interface means 810 in order to specify the desired sound image position to be assigned to each input channel. The signal processors 802, which provide the differential adjustment of phase and amplitude on a frequency-dependent basis, are particularly advantageous in digital form, in which case they are not constrained as to the position, trajectory or velocity of the sound image; the processing apparatus will be described in more detail later. In the case of such a digital system, performing the signal processing in real time, even though entirely possible, is not always economical. If the outputs 803 are connected to a storage device 805 capable of slow recording and real-time playback, non-real-time processing suffices; conversely, if an appropriate number of real-time signal processors 802 are provided, the storage device 805 may be omitted.
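The mixing rule of FIG. 8, in which all left inputs are summed into one left output and all right inputs into one right output, with an optional gain per pair and no coupling between left and right anywhere, can be sketched as follows (an illustrative sketch of the structure, not the device itself):

```python
def mix_pairs(pairs, gains=None):
    """Sum a list of (left, right) channel pairs into one stereo pair,
    as the stereo mixer 804 does: lefts combine only with lefts,
    rights only with rights, each pair optionally scaled first."""
    if gains is None:
        gains = [1.0] * len(pairs)
    n = len(pairs[0][0])
    left_out = [0.0] * n
    right_out = [0.0] * n
    for (left, right), g in zip(pairs, gains):
        for i in range(n):
            left_out[i] += g * left[i]    # no cross-coupling to the right bus
            right_out[i] += g * right[i]  # no cross-coupling to the left bus
    return left_out, right_out
```

The absence of any cross-term between the two buses is the point: the text stresses that the mixer introduces no interaction between the left and right channels.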
In FIG. 9, an operator 901 works at a mixing console 902 with left and right stereo monitor speakers 903, 904. The stability of the finally processed sound image is good down to loudspeaker spacings s as small as 0.2 m, but the operator doing the mixing should have the speakers at a distance of at least 0.5 m; with this distance, accurate sound image positions are judged more easily. A computer graphic display means 905, a multi-axis control 906 and a keyboard 907 are provided, together with suitable computing and storage devices supporting them. The computer graphic display means 905 can provide a graphical representation of the position and trajectory of the sound image in space, as shown for example in FIGS. 10 and 11. FIG. 10 shows a listening-situation display 1001 with a stylized listener 1002 and an image trajectory 1003, shown with a projection screen 1004 and perspective space cues 1005, 1006. At the bottom of the display 1001 is a menu 1007 of items relating to the particular section of the soundtrack being manipulated, including recording, time, synchronization and editing information. The items of the menu 1007 can be reached using the multi-axis control 906, or selected from the keyboard 907. A selected item may be changed using the keyboard 907, or toggled using the buttons of the multi-axis control 906, to produce the appropriate operation of the system. In particular, a menu item 1009 allows the operator to link the multi-axis control 906 by software either to control the viewpoint from which the perspective view is projected or to control the position and trajectory of the current sound image. Another menu item 1010 allows the selection of the separate display shown in FIG. 11. In the display of FIG. 11, instead of the nearly full-screen perspective presentation of FIG. 10, three orthographic views of the same scene are shown: a top view 1101, a front view 1102 and a side view 1103. The remaining quarter of the screen is occupied by a reduced, slightly subdued rendering of the perspective view to aid understanding. A menu 1105, substantially similar to and having substantially the same function as the menu 1007, again occupies the bottom of the screen.
A special menu item 1106 allows toggling back to the display of FIG. 10. In FIG. 12, sound sources 1201, 1202, 1203 in a first room 1204 are picked up by microphones 1205, 1206, which generate the right and left stereo signals respectively, recorded using a normal stereo recording device 1207. When the recording is replayed using an ordinary stereo replay device 1208 driving a right speaker 1209 and a left speaker 1210, ordinary stereo sound images 1211, 1212, 1213, corresponding respectively to the sound sources 1201, 1202, 1203, are sensed by the listener in a second room 1215. These sound images lie at positions that are projections of the horizontal positions of the sound sources 1201, 1202, 1203, as seen from the microphones 1205, 1206, onto the straight line connecting the speakers 1209, 1210. The pair of stereo signals may instead be processed as described above by the sound processor 1216, combined, and reproduced by a conventional stereo reproduction device 1217 driving the right speaker 1218 and the left speaker 1219 of a third room 1220; a distinct, spatially localized sound image of each source is then sensed by the listener 1226 at a location independent of the actual locations of the speakers 1218, 1219. Assuming that the processing forms the sound image of the original right channel signal at a position 1224 and the image of the original left channel signal at a position 1225, each of these sound images behaves as if a real speaker existed there; these sound images can be regarded as "virtual speakers". A transfer function that adjusts the differential amplitude and phase of the two-channel signal across the entire audio band on a frequency-dependent basis is needed to project the sound image of a monophonic audio signal at a predetermined location. To identify each such response in typical applications, the amplitude and phase differences should be specified at intervals not exceeding 40 Hz, independently for each of the two channels, across the entire audio spectrum, for the best stability and coherence of the sound image. For applications that do not demand high quality, the frequency spacing may be extended. Identifying one such response therefore requires about 1000 real numbers, or equivalently 500 complex numbers.
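The arithmetic behind these counts can be checked directly. Assuming a 20 kHz audio bandwidth (the bandwidth itself is not stated in the text), sampling at the stated 40 Hz spacing gives:

```python
AUDIO_BANDWIDTH_HZ = 20_000  # assumed full audio band (not stated in the text)
SPACING_HZ = 40              # maximum frequency spacing given in the text

points = AUDIO_BANDWIDTH_HZ // SPACING_HZ     # 500 sample frequencies
reals_per_response = points * 2               # amplitude + phase at each point
complexes_per_response = points               # equivalently, 500 complex numbers
locations = 1000                              # order of resolvable positions per the text
total_reals = locations * reals_per_response  # the "over one million" figure
```

This reproduces both the "about 1000 real numbers, or 500 complex numbers" per response and, with on the order of 1000 resolvable locations, the million-real-number data population mentioned next.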
Individual differences exist in the sensing of the spatial location of audible sounds, and the subjective measurements involved are somewhat imprecise, but more than 1000 discrete locations within the true three-dimensional space are resolved by the average listener. As such, a comprehensive characterization of the responses for all possible locations forms a vast data population comprising over one million real numbers in all. These real numbers are currently being collected. The transfer functions of the acoustic processor according to the invention, which provide the differential adjustment between the two channels, have been created one by one, by trial and error, at 40 Hz intervals over the audio spectrum. Furthermore, as described below, in the acoustic processor each transfer function positions the sound, relative to the two spatially separated transducers, at only one location, i.e. one azimuth, elevation and range. In practice, however, the mirror symmetry between the right and left channels means that it is not necessary to characterize every transfer function explicitly: when the responses of the two channels are interchanged, the azimuth angle a of the sound image is reversed in sign, while the elevation b and the range r of the sound image remain unchanged. Conventional instruments and simplified signals can be used to demonstrate the method and the auditory effect of the invention. If a sine wave of known frequency is switched on and off smoothly in bursts at relatively long time intervals, the resulting signal occupies a very narrow band in the frequency domain; effectively, this signal samples the required response at a single frequency. The required response or transfer function thus reduces to a simple control of the difference in amplitude and phase (or delay) between the left and right channels on a frequency-dependent basis. That is, transfer functions for specific sound positions can be created empirically by adjusting the phase and amplitude differences for each selected frequency interval across the entire audio spectrum. By Fourier's theorem this procedure is completely general, since any signal can be represented as a sum of sine waves. An example of a system for demonstrating the invention is shown in FIG. 13, where an audio synthesizer 1302 (Hewlett-Packard Multifunction Synthesizer Model 8904A), controlled by a computer 1301 (Hewlett-Packard Model 330M), generates a monaural audio signal, which is supplied to the two input channels 1303, 1304 of an audio delay line 1305 (Eventide Precision Delay Model PD 860).
The right channel signal is led from the delay line 1305 through a switchable inverter 1306, and the left and right signals are then supplied through variable attenuators 1307 and 1308 to drive the left speaker 1311 and the right speaker 1312 respectively. The synthesizer 1302 generates smoothly gated sinusoidal bursts of the desired test frequency using the envelope shown in FIG. 14. The sine wave is gated on using a first linear ramp 1402 of 20 ms duration, dwells for 45 ms at constant amplitude 1403, and is gated off using a second linear ramp 1404 of 20 ms duration. The bursts are repeated at intervals 1405 of about 1-5 seconds. Using the apparatus configuration of FIG. 13 and the waveform of FIG. 14, transfer functions according to the present invention can be created across the entire audio spectrum by adjusting the delay time of the delay line 1305 and the amplitudes by means of the attenuators 1307 and 1308. The listener makes an adjustment and listens to the placement of the sound to determine whether it is at the correct location; if it is, the listener proceeds to the next frequency interval, and if not, makes further adjustments and repeats the listening. In this way, transfer functions are created across the entire audio spectrum. FIG. 15 shows practical data used to form a transfer function suitable for producing sound images well away from the directions of the loudspeakers, for several sinusoidal frequencies. These data were created by just such trial-and-error listening. All of these sound images have proved stable and reproducible for separate listeners, over a wide range of head orientations, including pointing the head directly at the sound image, and in all three of the listening rooms described in connection with FIG. 5m. The narrow-band signal placement described above can be generalized to enable imaging of a wide-band signal representing a complex source such as speech or music. Once the differences in amplitude and phase shift between the two channels derived from a single input signal are specified for all frequencies across the entire audio band, the complete transfer function is specified. In practice, differential amplitudes and delays can be unambiguously identified at a number of discrete frequencies in the band of interest.
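The test stimulus just described can be sketched directly. The burst generator below follows the FIG. 14 envelope (20 ms linear on-ramp 1402, 45 ms dwell at constant amplitude 1403, 20 ms linear off-ramp 1404); the 48 kHz sample rate is an assumption for illustration, since the patent's synthesizer and delay line are hardware units.

```python
import numpy as np

def gated_burst(freq_hz, fs=48_000):
    """Generate one smoothly gated sine burst per FIG. 14: because the
    gating is slow compared with the sine period, the burst occupies a
    very narrow band and so samples the required response at a single
    frequency."""
    n_ramp = int(0.020 * fs)   # 20 ms linear ramps (1402, 1404)
    n_dwell = int(0.045 * fs)  # 45 ms dwell at constant amplitude (1403)
    env = np.concatenate([
        np.linspace(0.0, 1.0, n_ramp, endpoint=False),  # gate on
        np.ones(n_dwell),                               # constant amplitude
        np.linspace(1.0, 0.0, n_ramp),                  # gate off
    ])
    t = np.arange(len(env)) / fs
    return env * np.sin(2 * np.pi * freq_hz * t)
```

In the empirical procedure, one such burst is repeated every 1-5 seconds while the inter-channel delay and attenuation are adjusted until the image lands at the intended position for that frequency.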
The amplitude and delay for any frequency intermediate between the values shown in FIG. 15 are determined by interpolation. If the spacing of the frequencies for which the response is specified is not too large, the choice of interpolation method is not critical, considering the smoothness, i.e. the rate of change, of the underlying true response. The amplitudes and delays of FIG. 15 are applied to the signals of the two channels by separate sound processors 1500, 1501, as shown generally in FIG. 16. A single-channel audio signal supplied at the input 1502 is fed to the sound processors 1500, 1501, where amplitude and phase are adjusted on a frequency-dependent basis so that the difference at the outputs 1503, 1504 of the left and right channels is as described above. Control parameters supplied on a line 1505, which are empirically determined exact quantities, alter the adjustment of the phase difference and the amplitude difference so that the sound image can be placed at different desired locations. For example, in a digital design the acoustic processors may be finite impulse response (FIR) filters whose coefficients are modified by the control parameter signal to provide different effective transfer functions. The device configuration shown in FIG. 16 can be simplified, as shown by the following analysis. First, only the difference between the delays of the two channels is effective. Let the delays of the left and right channels be t(l) and t(r) respectively. New delays t'(l) and t'(r), formed by adding an arbitrary fixed delay t(a) to both channels, are defined by the following equations:

t'(l) = t(l) + t(a)   (Equation 1)
t'(r) = t(r) + t(a)   (Equation 2)

As a result, the whole effect is heard later by the time t(a) or, if t(a) is negative, earlier by the time t(a). This general expression holds for the special case t(a) = -t(r), giving:

t'(l) = t(l) - t(r)   (Equation 3)
t'(r) = t(r) - t(r) = 0   (Equation 4)

This variation allows one channel to have zero delay. In practical device configurations, care is taken in subtracting out the relatively small delays so that the need for a negative delay does not occur. It is desirable to avoid this problem by leaving a fixed residual delay in one channel and varying only the delay of the other channel; if the fixed residual delay is of sufficient magnitude, the variable delay never needs to be negative.
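Equations 1 to 4 amount to shifting both channel delays by a common amount, which is inaudible as positioning information, so that only the delay difference remains. A minimal sketch (illustrative only, with integer delay units for clarity):

```python
def normalize_delays(t_l, t_r):
    """Shift both channel delays by the same fixed amount t(a),
    per Equations 1 and 2, choosing t(a) so the smaller delay
    becomes zero (Equations 3 and 4 are the case t(a) = -t(r)).
    The audible quantity, the difference t(l) - t(r), is unchanged."""
    t_a = -min(t_l, t_r)         # common shift; zeroes the smaller delay
    return t_l + t_a, t_r + t_a  # Equations 1 and 2 with this t(a)
```

Taking the minimum rather than always subtracting t(r) reflects the practical remark above: the subtraction is arranged so that no negative delay is ever required.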
Second, the channel amplitudes need not be controlled independently. Changing the amplitude of
the signal by amplification or attenuation, that is, changing both stereo channels in the same
ratio as is commonly done in audio practice, produces no change in the conveyed position
information. The important quantity here is the ratio, or difference, of the amplitudes. As long
as this difference is maintained, all of the effects and illustrations herein are completely
independent of the total playback sound level. Thus, the same operation as described above for
timing or phase control allows all of the amplitude control to be placed in one channel while the
other channel is left at fixed amplitude. It is sometimes advantageous to apply a fixed residual
attenuation to one channel so that all required ratios can be achieved by attenuation of the
other channel; in that case, sufficient control is obtained by using a variable attenuator in
only one channel. By specifying the differential attenuation and delay as a function of frequency
for a single channel, all of the required information can be specified. A fixed,
frequency-independent attenuation and delay may be specified for the second channel; if these are
not specified, a gain of 1 and a delay of 0 may be assumed. Therefore, the filtering that
produces the phase difference and the amplitude difference for any one sound image position, and
thus any one pair of left and right transfer functions, can be placed entirely in one channel,
entirely in the other channel, or divided between them in any combination. One of the sound
processing devices 1500, 1501 can then be simplified to a single variable impedance, or even to
a single lead; it cannot, however, be an open circuit. If the phase and amplitude adjustments
are made in only one channel to provide the required difference between the two channels, the
transfer functions can be represented as shown in FIGS. 17A and 17B. FIG. 17A shows a typical
transfer function for the phase difference of the two channels, in which the left channel is not
modified and the right channel receives a frequency-dependent phase adjustment over the entire
audio spectrum. Similarly, FIG. 17B shows a typical transfer function for the amplitude
difference of the two channels, in which the left channel is unchanged and the right channel
receives a frequency-dependent attenuation over the entire audio spectrum. As will be
understood, the sound processing devices 1500, 1501 of FIG. 16
may be analog or digital by way of example, and may include all or part of filters, delay
elements, inverters, adders, amplifiers and phase-shift elements as circuit elements. These
functional circuit elements may be organized or arranged in any way that provides the required
transfer function. Several equivalent specifications of this information are possible and
commonly used in the relevant art. As an example, a delay may be specified as a change of phase
at any frequency using the following equivalences:

phase (degrees) = -360 x (delay time) x frequency
phase (radians) = -2*pi x (delay time) x frequency

When this equivalence is applied it is not sufficient to specify only the principal value of the
phase; for the equivalence to hold, the total (unwrapped) phase is required. The conventional
representation commonly used in electronics is the complex s-plane representation. Any filter
characteristic that can be realized with real analog components can be specified as the ratio of
two polynomials in the Laplace complex frequency variable s. In the general form, with T(s) the
transfer function in the s-plane, Ein(s) and Eout(s) the input and output signals as functions
of s, and N(s) and D(s) the numerator and denominator polynomials:

T(s) = Eout(s) / Ein(s) = N(s) / D(s)   (Equation 5)
N(s) = a0 + a1 s + a2 s^2 + a3 s^3 + ... + an s^n   (Equation 6)
D(s) = b0 + b1 s + b2 s^2 + b3 s^3 + ... + bn s^n   (Equation 7)

The advantage of this notation is that it is very compact. To specify the function completely at
all frequencies, without any need for interpolation, it is only necessary to specify the (n+1)
coefficients a and the (n+1) coefficients b. Once this is done, the amplitude and phase of the
transfer function at any frequency can easily be derived using known methods.
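The "known methods" amount to evaluating N(s) and D(s) at s = j2*pi*f; a minimal sketch, using an illustrative first-order RC low-pass rather than any coefficients from the patent:

```python
import cmath
import math

def evaluate_transfer(a, b, f):
    """Amplitude and phase of T(s) = N(s)/D(s) at s = j*2*pi*f.

    a and b are the (n+1) polynomial coefficients [a0, a1, ..., an] of the
    numerator N(s) and denominator D(s) of Equations 5-7.
    """
    s = 1j * 2 * math.pi * f
    N = sum(ak * s**k for k, ak in enumerate(a))
    D = sum(bk * s**k for k, bk in enumerate(b))
    T = N / D
    return abs(T), cmath.phase(T)

# Illustrative example: first-order RC low-pass T(s) = 1 / (1 + s*RC),
# with RC chosen so the corner frequency sits at 1 kHz.
rc = 1 / (2 * math.pi * 1000.0)
mag, ph = evaluate_transfer([1.0], [1.0, rc], 1000.0)
```

At the corner frequency the amplitude is 1/sqrt(2) and the phase is -45 degrees, exactly as the compact coefficient specification implies.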
Another advantage of the above notation is that it is a form very easily derived from the
analysis of analog circuits, so it is the most natural, compact and widely accepted way to
specify the transfer function of such circuits. Another representation used to advantage in the
description of the invention is the z-plane representation. According to a preferred embodiment
of the invention, the signal processing device is configured as a digital filter to obtain the
advantages of that form. Since each sound image position can be defined as a single transfer
function, a form of filter is needed that allows the transfer function to be implemented quickly
and easily, with minimal limitations on what functions can be implemented.
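The one-channel shaping of FIGS. 17A and 17B can be realized digitally by Fourier transforming the signal, multiplying by the desired complex (amplitude and phase) response, and transforming back; a sketch under the assumption of a flat, purely illustrative -6 dB response:

```python
import numpy as np

def shape_right_channel(mono, sample_rate, amp_db, phase_deg):
    """Derive left/right from a mono signal: the left channel passes
    unchanged while the right receives a frequency-dependent attenuation
    (in dB) and phase shift (in degrees), as in FIGS. 17A/17B.
    amp_db and phase_deg are callables of a frequency array."""
    spectrum = np.fft.rfft(mono)
    freqs = np.fft.rfftfreq(len(mono), d=1.0 / sample_rate)
    gain = 10.0 ** (amp_db(freqs) / 20.0)
    response = gain * np.exp(1j * np.deg2rad(phase_deg(freqs)))
    right = np.fft.irfft(spectrum * response, n=len(mono))
    return mono.copy(), right

# Illustrative case: flat -6 dB, zero phase, so the right channel is
# simply the mono signal scaled by 10^(-6/20).
fs = 8000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440.0 * t)
left, right = shape_right_channel(mono, fs,
                                  amp_db=lambda f: -6.0 * np.ones_like(f),
                                  phase_deg=lambda f: np.zeros_like(f))
```

Any of the frequency-dependent curves of FIGS. 17A/17B could be substituted for the flat response used here.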
A fully programmable digital filter is appropriate to meet this requirement. The digital filter
may operate in the frequency domain, in which case the signal is first Fourier transformed to
pass from the time-domain representation to the frequency domain. The amplitude and phase
response of the filter, determined by one of the aforementioned methods, is then applied to the
frequency-domain representation of the signal by complex multiplication. After filtering, the
signal is restored to the time domain, for D/A conversion, by application of the inverse Fourier
transform. Alternatively, the response may be specified directly in the time domain as an
impulse response. This response is mathematically equivalent to the amplitude and phase response
in the frequency domain, and can be obtained from it by application of the inverse Fourier
transform. The impulse response may be applied directly in the time domain by convolution with a
time-domain representation of the signal. Since the convolution operation in the time domain is
mathematically identical to the multiplication operation in the frequency domain, direct
convolution is exactly equivalent to the frequency-domain operation described above. Since all
digital calculation is non-continuous and discrete, a discrete representation, in terms of the
coefficients applied in a non-recursive direct-convolution digital filter, is preferred over a
continuous representation. It is advantageous to specify the response directly, which is easily
done using z-plane notation as opposed to s-plane notation. That is, if T(z) is the time-domain
response equivalent to T(s) in the frequency domain, then:

T(z) = N(z) / D(z)   (Equation 8)
N(z) = c0 + c1 z^-1 + c2 z^-2 + ... + cn z^-n   (Equation 9)
D(z) = d0 + d1 z^-1 + d2 z^-2 + ... + dn z^-n   (Equation 10)

In this notation the coefficients c and d, like the coefficients a and b in the s-plane, are
sufficient to specify the function, so the same compactness is obtained. A z-plane filter can be
designed directly if the operator z is interpreted such that z^-n is a delay of n sampling
intervals. In that case the specification coefficients c and d are directly the multiplication
coefficients of the implementation. Only negative powers of z need be used here, since these
correspond to positive delays.
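Because time-domain convolution and frequency-domain multiplication are mathematically identical, a non-recursive filter specified by c coefficients alone (D(z) = 1) can be realized either way and the two results checked against each other; the coefficient values here are illustrative:

```python
import numpy as np

def fir_direct(c, x):
    """Non-recursive direct-convolution filter: y[n] = sum_k c[k] * x[n-k].
    Each z^-k term of N(z) contributes its coefficient delayed k samples."""
    y = np.zeros(len(x))
    for k, ck in enumerate(c):
        y[k:] += ck * np.asarray(x[: len(x) - k])
    return y

def fir_via_fft(c, x):
    """The same filter applied as a complex multiplication in the frequency
    domain, zero-padded to avoid circular wrap-around."""
    n = len(x) + len(c) - 1
    X = np.fft.rfft(x, n)
    C = np.fft.rfft(np.asarray(c, float), n)  # filter's amplitude/phase response
    return np.fft.irfft(X * C, n)[: len(x)]

c = [0.5, 0.3, 0.2]  # illustrative c coefficients of N(z); D(z) = 1
x = np.random.default_rng(0).standard_normal(256)
```

The two outputs agree to numerical precision, which is the equivalence the text asserts between direct convolution and the frequency-domain operation.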
A positive power of z corresponds to a negative delay, that is, a response occurring before the
stimulus is applied. With these notations established, a sound processor according to the
invention that can position the sound image of a wide-band sound such as speech or music, for
example the processor 802 of FIG. 8, can be embodied as a variable all-pass analog filter with
ganged attenuators, as shown in FIG. 18A. In FIG. 18A, a monophonic (monaural) input signal 1601
is applied to two filters 1610, 1630 and also to two potentiometers 1651, 1652. The outputs from
filters 1610, 1630 are connected to potentiometers 1653, 1654. The four potentiometers 1651-1654
are configured as a so-called joystick control so as to operate differentially: one joystick
axis controls potentiometers 1651, 1652 such that, as one moves to pass a larger portion of its
input to its output, the other is mechanically reversed, passing a smaller portion of its input
to its output. Potentiometers 1653, 1654 are similarly actuated by a second, separate joystick
axis. The output signals from potentiometers 1653, 1654 are led respectively to unity-gain
buffers 1655, 1656, and buffers 1655, 1656 drive potentiometers 1657, 1658, which are coupled so
as to operate together; these increase or decrease, in step with each other, the portion of the
input transferred to the output. The output signals from potentiometers 1657, 1658 are directed
to the reversing switch 1659, which supplies the filtered signals, either directly or
exchanged, to the first inputs of summing elements 1660, 1670. Each summing element 1660, 1670
respectively receives the output of potentiometer 1651 or 1652 at its second input.
Summing element 1670 drives inverter 1690, and switch 1691 allows selection of the direct or
inverted signal to drive input 1684 of attenuator 1689. The output of attenuator 1689 is the
so-called right-channel signal. Similarly, summing element 1660 drives inverter 1681, and switch
1682 allows an inverted or direct signal to be selected at fixed contact 1683. Switch 1685
allows either signal 1683 or the input signal 1601 to be selected as the drive signal for
attenuator 1686, which produces the output 1688 for the left channel. The filters 1610, 1630 are
identical, and one of them is shown in
detail in FIG. 18B. Gain buffer 1611 is capacitively coupled through capacitor 1612 to receive
the input signal from input terminal 1601 and drive filter element 1613. Similar filter elements
1614 to 1618 are connected in cascade, and the last filter element 1618 is coupled through
capacitor 1619 and gain buffer 1620 to drive inverter 1621. Switch 1622 allows the output of
either inverter 1621 or buffer 1620 to be selected at the filter output terminal 1623. The
filter elements 1613 to 1618 are identical except for the capacitance value of each capacitor;
one is shown in detail in FIG. 18C. The input 1632 is connected to capacitor 1631 and resistor
1633, and resistor 1633 is connected to the inverting input of operational amplifier 1634. The
output terminal 1636 is the output of the filter element. Feedback resistor 1635 is connected
around operational amplifier 1634 in the conventional manner. The non-inverting input of
operational amplifier 1634 is driven from the junction of capacitor 1631 and whichever of
resistors 1637-1642 is selected by switch 1643. This stage is an all-pass filter whose phase
shift varies with frequency according to the contact position of switch 1643. Table 1 shows the
capacitance value of capacitor 1631 used in each of the filter elements 1613 to 1618, and Table
2 shows the resistance values selected by switch 1643; these resistance values are the same for
all filter elements 1613 to 1618. One implementation of the summing elements 1660, 1670 is
shown in FIG. 18D, in which one output 1664 results from the two inputs 1661 and 1662 summed in
operational amplifier 1663.
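Each phase-shift stage of FIG. 18C, with its RC network on the non-inverting input, behaves as a first-order all-pass section; a minimal model, assuming the transfer function H(s) = (1 - sRC)/(1 + sRC) for such a stage (the RC value below is illustrative, not taken from Tables 1 and 2):

```python
import cmath
import math

def allpass_response(rc, f):
    """First-order all-pass H(s) = (1 - s*RC)/(1 + s*RC) at s = j*2*pi*f.
    The magnitude is 1 at every frequency; the phase runs from 0 toward
    -180 degrees as frequency increases."""
    s = 1j * 2 * math.pi * f
    h = (1 - s * rc) / (1 + s * rc)
    return abs(h), math.degrees(cmath.phase(h))

# With RC chosen so the transition sits at 1 kHz, the phase there is -90 deg
# while the amplitude stays exactly at unity.
rc = 1 / (2 * math.pi * 1000.0)
mag, ph = allpass_response(rc, 1000.0)
```

Cascading six such stages, as in FIG. 18B, multiplies the available phase shift while leaving the amplitude flat, which is what allows phase and amplitude to be controlled independently.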
The gain of the input-to-output path is determined by resistors 1665, 1667 and feedback resistor
1666. In each case, input 1662 is driven from switch 1659 and input 1661 is driven from
potentiometer 1651 or 1652 of the joystick. As an example of the positioning of a sound image,
Table 3 shows the settings and the corresponding sound image position for a helicopter sound
effect positioned well above the plane containing the speakers and the listener. To obtain the
monophonic sound required for the method according to the invention, the stereo tracks of this
sound effect were summed. With the apparatus set up as in the table, a realistic sound image was
projected into space such that the helicopter was sensed at the indicated location. In Table 3
the setting of the reversing switch 1659 is such that in either case the signal from element
1657 drives element 1660 and the signal from element 1658 drives element 1670. The addition of
two special elements to the circuit described above provides a special ability to change the
placement of the listening area. This, however, is not essential for the formation of a sound
image.
These special elements are shown in FIG. 19, in which the left and right signals from outputs
1688, 1689 of the signal processor of FIG. 18A are supplied separately through the left signal
input 1701 and the right signal input 1702. Delay units 1703, 1704 are inserted in the
respective channels, and the output signals from delay units 1703, 1704 become the outputs at
the output terminals 1705, 1706 of the sound processing apparatus. The delays introduced into
the channels by this arrangement are frequency independent; each is fully characterized by a
single real number. Let the left-channel delay be t(l) and the right-channel delay be t(r). Just
as in the previous case, only the difference between the delays is significant, and the device
can be completely controlled by specifying this difference. In a practical implementation, a
fixed delay is added to each channel so that a negative delay is never needed to achieve the
required difference. The delay difference t(d) is defined as follows:

t(d) = t(r) - t(l)

When t(d) is 0, the effect produced is unaffected by the added apparatus. When t(d) is positive,
the center of the listening area moves laterally to the right along the dimension (e).
A positive value of t(d) corresponds to a positive value of (e), which means movement to the
right. Similarly, a leftward movement, corresponding to a negative value of (e), is obtained
with a negative value of t(d). By this method, a lateral shift of the entire listening area in
which the listener senses the sound image, to any point between the speakers or beyond, by more
than half the dimension (s), is easily possible; good results have been obtained up to a
limiting shift at which the dimension (e) is 83% of the dimension (s). This is not a limitation
of the technique but represents the limit of the present experimental method.
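The sign convention relating t(d) to the lateral movement of the listening area can be captured in a small helper; the names and the 0.5 ms example value are illustrative:

```python
def lateral_shift(t_l, t_r):
    """Delay difference t(d) = t(r) - t(l) of the FIG. 19 delay units, and
    the direction in which the center of the listening area moves along
    dimension (e): positive t(d) shifts it right, negative shifts it left."""
    t_d = t_r - t_l
    if t_d > 0:
        direction = "right"
    elif t_d < 0:
        direction = "left"
    else:
        direction = "centred"
    return t_d, direction

# Illustrative 0.5 ms extra delay in the right channel moves the area right.
t_d, direction = lateral_shift(t_l=0.0, t_r=0.0005)
```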
[0002]
Brief description of the drawings
[0003]
FIG. 1 is a plan view showing a listening arrangement defining the parameters of sound image
positioning; FIG. 2 is a side view corresponding to FIG. 1; FIG. 3 is a plan view defining the
parameters of the listener's location; and FIG. 4 is a side view corresponding to FIG. 3.
FIGS. 5a-5 are plan views showing several listening conditions with corresponding changes in
the positioning of the speakers, and FIG. 5m is an illustration showing the critical dimensions
of three listening rooms. FIG. 6 is a plan view showing sound image transfer experiments
performed in two mutually isolated rooms; FIG. 7 is a process block diagram relating the present
invention to the practice of the prior art; and FIG. 8 is a schematic block diagram of a sound
imaging method according to an embodiment of the present invention. FIG. 9 is an illustration of
an operator workstation according to an embodiment of the present invention; FIG. 10 is a plan
view showing a computer graphic perspective display used for control of the present invention;
FIG. 11 is a plan view showing a computer graphic display presenting three orthogonal
projections; and FIG. 12 is a plan view schematically illustrating the formation of a virtual
sound source according to the present invention in two mutually isolated rooms. FIG. 13 is a
schematic block diagram showing one apparatus for explaining the present invention; FIG. 14 is a
waveform diagram of a test signal in which voltage is plotted against time; FIG. 15 is an
explanatory view of transmission data according to an embodiment of the present invention;
FIG. 16 is a schematic block diagram showing a sound image positioning system according to an
embodiment of the present invention; FIGS. 17A and 17B are characteristic diagrams showing
typical transfer functions used in the sound processing apparatus of FIG. 16; FIGS. 18A to 18D
are schematic diagrams of one circuit embodying the present invention; and FIG. 19 is a block
diagram showing a further example of a circuit embodying the present invention.
101, 301: left speaker;
102, 302: right speaker; 103, 303: listener; 104: sound image position.
Patent applicant: Que Sound Limited. Agents: patent attorneys Koike and Tamura.
Procedural amendment: Japanese Patent Application No. 228,169; title of the invention: method
and apparatus for forming a sound image according to the invention; person making the amendment:
the patent applicant (submission dated December 26, 1990); subject of amendment: Figure 7;
content of amendment: correction of the drawing.