Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2004194315
PROBLEM TO BE SOLVED: To provide a method of processing a multimedia audio signal.
SOLUTION: The method comprises mounting electroacoustic transducers in a low frequency
enhancement device. The electroacoustic transducers emit different high frequency acoustic
energy and common low frequency acoustic energy. A first channel audio signal is separated into
a first channel first spectrum portion and a first channel second spectrum portion; a first
transfer function is used to generate a first channel first processed signal; a second transfer
function different from the first transfer function is used to generate a first channel second
processed signal; and the first channel first processed signal and the first channel second
spectrum portion are combined to generate a first channel first combined signal. [Selected figure] Figure 1A
Electro-acoustic conversion using low frequency enhancement devices
[0001]
The invention relates to electro-acoustic conversion using low frequency enhancement devices,
more particularly to the use of directional arrays with low frequency enhancement devices, and
still more particularly to the use of low frequency directional arrays in multimedia entertainment devices.
[0002]
This application claims the benefit of US patent application Ser. No. 10/309,395, filed Dec. 3,
2002.
The entire contents thereof are hereby incorporated by reference into the present application.
[0003]
An important object of the present invention is to make superior use of directional arrays with
low frequency enhancement devices and to integrate directional arrays into multimedia
entertainment devices such as gambling machines and video games.
[0004]
According to the invention, a method of processing an audio signal comprises the steps of
receiving a first channel audio signal and separating the first channel audio signal into a first
channel first spectral portion and a first channel second spectral portion.
The method also includes the steps of: generating a first channel first processed signal by
processing the first channel first spectral portion according to a first process represented by a
first transfer function that is neither 1 nor 0; generating a first channel second processed signal
by processing the first channel first spectral portion according to a second process represented
by a second transfer function different from the first transfer function; and combining the first
channel first processed signal and the first channel second spectral portion to generate a first
channel first combined signal. Furthermore, the method comprises the steps of: converting the
first combined signal by a first electro-acoustic transducer; combining the first channel second
processed signal and the first channel second spectral portion to generate a second combined
signal; and converting the second combined signal by a second electro-acoustic transducer.
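To make the signal flow of this method concrete, the following is a minimal sketch in Python/NumPy, assuming simple Butterworth filters for the spectral split and assuming gain-plus-delay processes for the two transfer functions; the crossover frequency, filter order, and process parameters are illustrative assumptions, not values given in the text.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000  # sample rate (assumed)

def split_spectrum(x, fc=200.0):
    """Separate a channel into a first (high) and second (low) spectral portion."""
    b_hp, a_hp = butter(2, fc / (fs / 2), btype="high")
    b_lp, a_lp = butter(2, fc / (fs / 2), btype="low")
    return lfilter(b_hp, a_hp, x), lfilter(b_lp, a_lp, x)

def process(x, gain, delay_samples):
    """A stand-in for a transfer function H(s): gain plus time delay."""
    return gain * np.concatenate([np.zeros(delay_samples), x])[: len(x)]

# First channel audio signal (a placeholder test signal)
a = np.random.randn(fs)

a_hf, a_lf = split_spectrum(a)                        # first / second spectral portions
a_proc1 = process(a_hf, gain=1.0, delay_samples=0)    # first process (H1)
a_proc2 = process(a_hf, gain=-1.0, delay_samples=12)  # second process (H2), differs from H1

combined1 = a_proc1 + a_lf   # first combined signal -> first transducer
combined2 = a_proc2 + a_lf   # second combined signal -> second transducer
```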
[0005]
In another aspect of the invention, a method of processing a multi-channel audio signal
comprises: separating a first audio channel signal stream into a first channel first spectral portion
and a first channel second spectral portion; separating a second audio channel signal stream into
a second channel first spectral portion and a second channel second spectral portion; processing
the first channel first spectral portion according to a first process represented by a first transfer
function which is neither 1 nor 0 to generate a first processed signal; processing the first channel
first spectral portion according to a second process represented by a second transfer function
different from the first transfer function to generate a second processed signal; processing the
second channel first spectral portion according to a third process represented by a third transfer
function that is neither 1 nor 0 to generate a third processed signal; and processing the second
channel first spectral portion according to a fourth process represented by a fourth transfer
function different from the third transfer function to generate a fourth processed signal. In
addition, the method comprises combining the first channel second spectral portion and the
second channel second spectral portion to generate a combined second spectral portion, and
converting, by a first electro-acoustic transducer, the combined second spectral portion and one
of the first processed signal, the second processed signal, the third processed signal, and the
fourth processed signal.
[0006]
According to another aspect of the present invention, an electroacoustic device includes a first
directional array. The first directional array includes a first electroacoustic transducer and a
second electroacoustic transducer. The first and second electroacoustic transducers each include
a first radiating surface and a second radiating surface. The device further includes a low
frequency enhancement structure having an inner surface and an outer surface. The
electroacoustic device is configured and arranged such that the first radiating surface of the first
electroacoustic transducer and the first radiating surface of the second electroacoustic transducer
face the surrounding environment, and the second radiating surface of the first electroacoustic
transducer and the second radiating surface of the second electroacoustic transducer face the
interior of the low frequency enhancement structure.
[0007]
According to another aspect of the invention, in a method of operating a multi-channel audio
system, the multi-channel audio system includes first and second electro-acoustic transducers
and an acoustic waveguide. The method comprises placing the first and second transducers in the
waveguide such that the first radiating surface of the first transducer and the first radiating
surface of the second transducer radiate sound waves into the acoustic waveguide; separating a
first channel signal into a first channel high frequency audio signal and a first channel low
frequency audio signal; separating a second channel signal into a second channel high frequency
audio signal and a second channel low frequency audio signal; and combining the first channel
low frequency audio signal with the second channel low frequency audio signal to form a common
low frequency audio signal. The method further comprises the steps of: transmitting the common
low frequency audio signal to the first transducer and the second transducer; transmitting the
first channel high frequency audio signal to the first transducer; transmitting the second channel
high frequency audio signal to the second transducer; emitting, by the first transducer, sound
waves corresponding to the first channel high frequency signal and the common low frequency
audio signal into the waveguide; and emitting, by the second transducer, sound waves
corresponding to the second channel high frequency signal and the common low frequency
audio signal into the waveguide.
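The per-channel split and the shared bass path described above can be expressed compactly; the sketch below shows only the routing, with an assumed 200 Hz crossover, and makes no assumption about the waveguide acoustics.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
b_lp, a_lp = butter(2, 200.0 / (fs / 2), btype="low")   # assumed crossover
b_hp, a_hp = butter(2, 200.0 / (fs / 2), btype="high")

left = np.random.randn(fs)    # first channel signal (placeholder)
right = np.random.randn(fs)   # second channel signal (placeholder)

l_hf, l_lf = lfilter(b_hp, a_hp, left), lfilter(b_lp, a_lp, left)
r_hf, r_lf = lfilter(b_hp, a_hp, right), lfilter(b_lp, a_lp, right)

common_lf = l_lf + r_lf                  # common low frequency audio signal
drive_transducer_1 = l_hf + common_lf    # radiated into the waveguide by transducer 1
drive_transducer_2 = r_hf + common_lf    # radiated into the waveguide by transducer 2
```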
[0008]
Another aspect of the invention is a method of operating a multimedia entertainment device
having an audio system with first and second loudspeaker arrays and first and second audio
channels. The audio channels each have a high frequency portion and a low frequency portion,
and the multimedia entertainment device has an associated listening space. The method comprises
the steps of: directionally emitting, by the first loudspeaker array, sound waves corresponding to
the high frequency portion of the first audio channel towards the listening space; directionally
emitting, by the second loudspeaker array, sound waves corresponding to the high frequency
portion of the second audio channel towards the listening space; and emitting, by the first
loudspeaker array and the second loudspeaker array, the low frequency portion of the first
channel and the low frequency portion of the second channel omnidirectionally.
[0009]
In another aspect of the invention, an entertainment area includes a first multimedia
entertainment device with an audio system. The audio system includes a first audio channel and
a second audio channel. The first audio channel and the second audio channel each include a
high frequency portion and a low frequency portion. The first multimedia entertainment device
includes a first loudspeaker array and a second loudspeaker array. The entertainment area
includes a listening space associated with the first multimedia entertainment device. The area
further includes a second multimedia entertainment device with an audio system. The audio
system includes a first audio channel and a second audio channel. The first audio channel and the
second audio channel each include a high frequency portion and a low frequency portion. The
second multimedia entertainment device includes a first loudspeaker array and a second
loudspeaker array. The entertainment area includes a listening space associated with the second
multimedia entertainment device. The first multimedia entertainment device and the second
multimedia entertainment device are in a common listening area. The first multimedia
entertainment device is configured and arranged to directionally emit sound waves corresponding
to the first channel high frequency portion of the first device and the second channel high
frequency portion of the first device, so that those sound waves can be heard more noticeably in
the listening space associated with the first device than in the listening space associated with the
second device. The second multimedia entertainment device is configured and arranged to
directionally emit sound waves corresponding to the first channel high frequency portion of the
second device and the second channel high frequency portion of the second device, so that those
sound waves can be heard more noticeably in the listening space associated with the second
device than in the listening space associated with the first device.
[0010]
In another aspect of the invention, an audio system emitting sound waves corresponding to a
first audio signal and a second audio signal includes an indicator that indicates the priority of the
directional radiation pattern. The indicator has at least two states. The audio system includes a
detector that detects the indicator and a directional array that emits sound waves in a plurality of
directional radiation patterns. The directional array is configured and arranged to emit acoustic
energy according to a first directional radiation pattern upon detection of the first indicator state,
and to emit acoustic energy according to a second directional radiation pattern upon detection of
the second indicator state.
[0011]
In another aspect of the invention, a method of dynamically equalizing an audio signal comprises
the steps of: providing an audio signal; generating a first attenuated signal by a first attenuation
of the audio signal by a variable factor G (0 < G < 1); and generating a second attenuated signal
by a second attenuation of the audio signal by a variable factor 1 - G. The method further
includes equalizing the first attenuated signal to generate an equalized first attenuated signal,
and combining the equalized first attenuated signal with the second attenuated signal to generate
an output signal.
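A minimal sketch of this dynamic equalization structure follows, assuming a low-shelf style bass boost as the equalizer and a simple scalar G; the actual equalization curve and the rule for varying G are not specified in the text and are assumptions here.

```python
import numpy as np
from scipy.signal import butter, lfilter

def dynamic_eq(x, G, fs=48000, fc=150.0, boost=2.0):
    """Blend an equalized path and a dry path with complementary gains G and 1-G."""
    x1 = G * x           # first attenuated signal
    x2 = (1.0 - G) * x   # second attenuated signal
    # Assumed equalizer: add a low-pass boosted copy (a crude bass-lift shelf).
    b, a = butter(2, fc / (fs / 2), btype="low")
    x1_eq = x1 + (boost - 1.0) * lfilter(b, a, x1)
    return x1_eq + x2    # output signal

y = dynamic_eq(np.random.randn(48000), G=0.3)
```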
[0012]
In another aspect of the present invention, a method of clipping an audio signal and performing
post-clip processing comprises the steps of: clipping the audio signal to generate a clipped audio
signal; filtering the audio signal with a first filter to generate a filtered unclipped audio signal;
and filtering the clipped audio signal with a second filter to generate a filtered clipped audio
signal. The method further comprises the steps of: differentially combining the filtered clipped
audio signal and the clipped audio signal to generate a differentially combined audio signal; and
combining the filtered unclipped audio signal and the differentially combined audio signal to
generate an output signal.
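Read literally, the steps above amount to the following sketch; the choice of hard clipping and of identical second-order low-pass filters for both filter stages is an assumption made only to keep the example runnable.

```python
import numpy as np
from scipy.signal import butter, lfilter

def clip_with_postprocessing(x, limit=0.5, fs=48000, fc=4000.0):
    b, a = butter(2, fc / (fs / 2), btype="low")
    clipped = np.clip(x, -limit, limit)          # clipped audio signal
    filt_unclipped = lfilter(b, a, x)            # first filter on the unclipped signal
    filt_clipped = lfilter(b, a, clipped)        # second filter on the clipped signal
    diff = filt_clipped - clipped                # differentially combined audio signal
    return filt_unclipped + diff                 # output signal

y = clip_with_postprocessing(2.0 * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000))
```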
[0013]
In another aspect of the invention, a method of controlling the directivity of a sound radiation
pattern includes the step of providing an audio signal to a first attenuator, a time delay, and a
first summer. The method also includes the steps of: generating a first variable attenuation audio
signal by a first attenuation of the audio signal by a variable coefficient G (0 < G < 1) using the
first attenuator; generating a second variable attenuation audio signal by a second attenuation of
the audio signal by a variable coefficient 1 - G; delaying the audio signal in time to generate a
delayed audio signal; generating a first variable attenuation delayed audio signal by a third
attenuation of the delayed audio signal by a variable factor H; and generating a second variable
attenuation delayed audio signal by a fourth attenuation by a variable factor 1 - H. Further, the
method combines the first variable attenuation audio signal with the second variable attenuation
delayed audio signal to generate a first convertible audio signal, and combines the second
variable attenuation audio signal with the first variable attenuation delayed audio signal to
generate a second convertible audio signal.
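The following sketch shows how such a cross-faded gain-and-delay structure can produce the two drive signals of a two-transducer pair; the pairing of the attenuated and delayed branches and the fixed delay length are assumptions consistent with the description above, not an implementation given in the patent.

```python
import numpy as np

def directivity_signals(x, G, H, delay_samples):
    """Generate the two drive signals described in the text from one audio signal."""
    assert 0.0 < G < 1.0 and 0.0 < H < 1.0
    delayed = np.concatenate([np.zeros(delay_samples), x])[: len(x)]  # delayed audio signal
    a1 = G * x                  # first variable attenuation audio signal
    a2 = (1.0 - G) * x          # second variable attenuation audio signal
    d1 = H * delayed            # first variable attenuation delayed audio signal
    d2 = (1.0 - H) * delayed    # second variable attenuation delayed audio signal
    drive1 = a1 + d2            # first convertible audio signal
    drive2 = a2 + d1            # second convertible audio signal
    return drive1, drive2

s1, s2 = directivity_signals(np.random.randn(48000), G=0.8, H=0.8, delay_samples=20)
```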
[0014]
In yet another aspect of the invention, a gambling device includes an associated listening space
and an audio system. The audio system includes a directional loudspeaker array of transducers.
The sound waves emitted by the plurality of transducers combine additively (constructively) in a
first direction and combine subtractively (with cancellation) in a second direction. The first
direction is towards the listening space.
[0015]
Other features, objects, and advantages will be apparent from the following detailed description
read in conjunction with the accompanying drawings.
[0016]
Referring now to the drawings and more particularly to FIG. 1A, an audio signal processing
system 1 according to the invention is shown.
The input terminals 10, 12 receive audio signals corresponding to two channels A, B of a stereo
or multi-channel audio system. The input terminals 10, 12 are coupled to a filter and synthesis
circuit (filter/synthesis circuit) 14, which outputs the modified audio signals onto the audio
signal lines 16, 18, 20. Audio signal line 16 is coupled to processing blocks 23-26 of audio
signal processing circuit 22. Signal processing block 23 is coupled to summer 27A, which is
coupled to electro-acoustic transducer 27B. Signal processing block 24 is coupled to summer
28A, which is coupled to electro-acoustic transducer 28B. Signal processing block 25 is coupled
to summer 29A, which is coupled to electro-acoustic transducer 29B. Signal processing block 26
is coupled to summer 30A, which is coupled to electro-acoustic transducer 30B. Audio signal line
18 is coupled to processing blocks 31-34 of audio signal processing circuit 22. Signal processing
block 31 is coupled to adder 27A. Signal processing block 32 is coupled to adder 28A. Signal
processing block 33 is coupled to adder 29A. Signal processing block 34 is coupled to summer
30A. Audio signal line 20 is coupled to processing block 35 of audio signal processing circuit 22.
Processing block 35 is coupled to adders 27A-30A.
[0017]
The combining/filtering circuit 14 may include a high pass filter 36 coupled to the input
terminal 10 and a high pass filter 40 coupled to the input terminal 12. The combining/filtering
circuit 14 may also include an adder 38 coupled to the input terminal 10 and the input terminal
12, optionally via phase shifters 37A, 37B, respectively. Adder 38 is coupled to low pass filter 41,
which outputs to signal line 20. The characteristics and functions of phase shifters 37A, 37B are
described in co-pending US patent application Ser. No. 09/735,123. The phase shifters 37A, 37B
may have similar parameters or different parameters, as long as they have the cumulative effect
described in co-pending US patent application Ser. No. 09/735,123 over the frequency range
within the pass band of the low pass filter 41. The system of FIG. 1A may also include conventional
components such as DACs and amplifiers, not shown in this figure.
[0018]
In operation, the combining/filtering circuit 14 outputs the high frequency A channel signal
[Ahf] on the signal line 16, the high frequency B channel signal [Bhf] on the signal line 18, and
the combined low frequency signal [(A+B)lf] on the third signal line 20. The audio signal on
signal line 16 is processed in processing blocks 23 to 26 as represented by transfer functions
H1(s) to H4(s) (where s is the Laplace frequency variable jω, ω = 2πf, and H(s) is a representation
of the transfer function in the frequency domain), output to the adders 27A-30A, and then output
to the electroacoustic transducers 27B-30B, respectively. The signals on the signal line 18 are
processed as represented by transfer functions H5(s) to H8(s) in processing blocks 31 to 34,
output to the adders 27A to 30A, and then output to the electroacoustic transducers 27B to 30B,
respectively. The signal on the signal line 20 is processed in the processing block 35 as
represented by the transfer function H9(s), output to the adders 27A to 30A, and then output to
the transducers 27B to 30B. As a result of the processing of the system of FIG. 1A, each of the
transducers 27B-30B receives the signals Ahf, Bhf processed according to a different transfer
function, and each of the transducers 27B-30B receives the combined (A+B)lf signal.
[0019]
As a result of the processing of the system of FIG. 1A, converter 27B receives the signal
H1(s)Ahf + H5(s)Bhf + H9(s)(A+B)lf, converter 28B receives the signal H2(s)Ahf + H6(s)Bhf +
H9(s)(A+B)lf, converter 29B receives the signal H3(s)Ahf + H7(s)Bhf + H9(s)(A+B)lf, and
converter 30B receives the signal H4(s)Ahf + H8(s)Bhf + H9(s)(A+B)lf. If phase shifters 37A or
37B, or both, are present, the signals received by the respective transducers may include phase shifts.
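A compact way to view this routing is as a small matrix of per-band transfer functions applied to Ahf, Bhf and the shared (A+B)lf signal; the sketch below uses plain gains in place of H1(s)-H9(s), which is an assumption for illustration only (as noted in the following paragraph, the actual processes may include delay, phase shift, equalization, or HRTF processing).

```python
import numpy as np

# Illustrative stand-ins for H1(s)..H9(s): here simple broadband gains.
H = {1: 1.0, 2: -0.8, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: -0.8, 8: 1.0, 9: 1.0}

def transducer_drives(a_hf, b_hf, ab_lf):
    """Signals delivered to transducers 27B-30B per the FIG. 1A description."""
    return [
        H[1] * a_hf + H[5] * b_hf + H[9] * ab_lf,  # 27B
        H[2] * a_hf + H[6] * b_hf + H[9] * ab_lf,  # 28B
        H[3] * a_hf + H[7] * b_hf + H[9] * ab_lf,  # 29B
        H[4] * a_hf + H[8] * b_hf + H[9] * ab_lf,  # 30B
    ]

drives = transducer_drives(np.random.randn(1000), np.random.randn(1000), np.random.randn(1000))
```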
[0020]
The transfer functions H1(s) to H9(s) may represent one or more of attenuation/amplification,
time delay, phase shift, equalization, HRTF processing (described later in the description of FIGS.
17A and 17B), or any other linear or non-linear signal processing function. Also, the transfer
functions H1(s) to H9(s) can represent no change (i.e., expressed mathematically, have a value of
1) or can be absent (i.e., expressed mathematically, have a value of 0). An example of these two
states is described below. In addition to any equalization that may be performed in processing
blocks 23-26 and 31-35, each of the electro-acoustic transducers 27B-30B can also be equalized
individually. Equalization of the individual transducers is most conveniently handled by a
processor associated with the individual transducers.
[0021]
The system of FIG. 1A is shown as a logical block diagram. In FIG. 1A and the other logical block
diagrams that follow, there may or may not be physical elements corresponding to each of the
components of FIG. 1A. For example, input terminals 10, 12 may be implemented as a single
physical input terminal that receives a stream of digitally encoded signals. Components such as
high pass filters 36, 40, processing blocks 23-26, or other components may be implemented by a
digital signal processor (DSP) operating on digitally encoded data. In addition, other circuit
configurations can achieve substantially the same result as the configuration of FIG. 1A. For
example, channels A, B can be filtered by a low pass filter, such as filter 41, prior to combination.
High pass filters 36, 40 may also be implemented as low pass filters that perform differential
summation with the unfiltered signal, as shown below in FIG. 14. More than one block can be
represented by a single component, or multiple blocks can be combined into one. For example,
high pass filters 36, 40 can be incorporated into the transfer functions of blocks 23-26 and 31-34,
and low pass filter 41 can be incorporated into the transfer function of block 35.
[0022]
As used herein, “coupled” means “communicably coupled”. That is, two coupled components are
configured to transmit an audio signal between them. Coupled components can be physically
connected by conductive wires or optically communicating fibers, can be coupled by wireless
techniques such as infrared or radio frequency (RF), or can be coupled by other signal
communication techniques. If a component is implemented as a DSP operating on digitally
encoded signals, “coupled” means that the DSP can process the digitally encoded audio signal as
represented by that component and described in the relevant part of this disclosure. Similarly, as
used herein, a "signal line" means any transmission path for transmitting analog or digitally
encoded audio signals, including a conductive wire, an optical fiber, a wireless communication
path, or any other type of signal transmission path.
[0023]
As used herein, "directional" means that the amplitude of the sound radiated in the direction of
maximum radiation is emitted in the direction of minimum radiation at frequencies where the
corresponding wavelength is long relative to the dimensions of the radiation surface It means
that it is at least 3 dB greater than the amplitude of the sound. "Directive in direction X (or more
10-05-2019
9
directional)" means that even if direction X is not the maximum radiation direction, the audible
radiation level is larger in direction X than in any other direction. means. The elements included
in the directional acoustic device typically change the radiation pattern of the transducer so that
the radiation from the transducer can be heard better in one place in space than in another.
There are two types of directional devices: a wave directing device (wave directing device) and an
interference device. The sound wave delivery device includes a barrier that causes sound waves
to be emitted in one direction with greater amplitude than the other direction. An acoustic wave
delivery device is typically effective for radiation having a wavelength that corresponds to or is
smaller than the dimensions of the acoustic wave delivery device. Examples of sound
transmission devices include horns and acoustic lenses. In addition, the acoustic driver is
directional at wavelengths corresponding to or shorter than its diameter. As used herein, "nondirectional" refers to the dimension of the emitting surface in the direction of maximum radiation
rather than the amplitude of the sound radiated in the direction of minimum radiation at the
corresponding wavelength at a longer frequency. It means that the degree to which the
amplitude of the emitted sound is large is 3 dB or less. As used herein, "listening space" means a
portion of the space typically occupied by one listener. Examples of listening spaces include a
seat in a cinema, an easy chair, a recliner chair, or a sofa seating position in a home
entertainment room, a seating position in a vehicle passenger compartment, a single listener
gambling machine This includes video games, etc. played alone. In some cases, there may be
more than one listener in one listening space. For example, two people playing the same video
game. As used herein, "listening area" means a collection of listening spaces that are acoustically
adjacent, ie not separated by an acoustic barrier.
[0024]
An interference device has at least two radiating elements, which can be the two radiating
surfaces of two acoustic drivers or of a single acoustic driver. The two radiating elements emit
sound waves that interfere in a frequency range whose wavelengths are greater than the diameter
of the radiating elements. The sound waves interfere in such a way that they destructively
interfere more in one direction than in another; in other words, the amount of destructive
interference is a function of the angle to the midpoint between the drivers. As used herein, the
term "low frequency" refers to frequencies up to about 200 Hz (corresponding wavelength of 5.7
feet or 1.7 meters), or up to about 400 Hz (corresponding wavelength of about 2.8 feet or 86
centimeters). As used herein, "high frequency" refers to frequencies higher than the low frequency
range. For a conical electroacoustic transducer having a cone diameter of about 4 inches, a
typical high frequency range is about 200 Hz or more. As used herein, "ultra-high frequency" is a
subset of the high frequencies and refers to frequencies in the audible spectrum with
corresponding wavelengths less than the diameter of the transducer used to emit them (about 3.5
kHz or more for an electro-acoustic transducer with a cone diameter of approximately 4 inches).
[0025]
The audio signal processing system according to FIG. 1A is advantageous because the plurality of
transducers can radiate the sound waves corresponding to the high frequency audio signals in a
directed manner, using signal processing techniques that cause subtractive interference.
Subtractive interference is described in more detail in US Pat. No. 5,809,153 and US Pat. No.
5,870,484. At the same time, the transducers can cooperate to deliver more acoustic energy in
the low frequency range by emitting sound waves corresponding to the low frequency audio
signal in a frequency range in which the sound waves combine additively.
[0026]
Referring to FIG. 1B, an alternative implementation of the embodiment of FIG. 1A is shown. In
FIG. 1B, a time delay is provided in the signal path between the processing block 35 and one or
more transducers. For example, processing block 35 may be coupled to summers 29A, 30A via
time delay 61. Alternatively, processing block 35 may be coupled to summer 29A via time delay
62 and coupled to summer 30A via time delay 63. Time delay elements similar to the delays 61,
62, 63 can also be inserted between the processing block 35 and the transducers 27B, 28B.
Additional time delays may be incorporated into processing blocks 23-26 and 31-34 of FIG. 1A.
The time delay can be implemented as an all pass filter, as a complementary all pass filter, as a
non-minimum phase filter, or as a delay element. A time delay can be used to cause a relative time
difference between the signals applied to the transducers.
[0027]
Referring now to FIG. 2, one implementation of the audio signal processing system of FIG. 1A is
shown. In the embodiment of FIG. 2, the input terminals 10, 12 represent the left (L) and right (R)
input terminals of a conventional multi-channel system. The transfer functions H1(s) and H8(s)
in the processing blocks 23 and 34 represent no change (having unit value), and the transfer
functions H3(s), H4(s), H5(s), H6(s) in the processing blocks 25, 26, 31 and 32 have a value of 0
and are not shown. Processing block 35, which includes the transfer function H9(s), operates on
the lf signal, which is transmitted equally to the four transducers. The transfer functions H2(s)
and H7(s) of the processing blocks 24, 33 represent phase reversal (indicated by a negative sign)
and time shift (Δt2 and Δt7, respectively). As a result of the signal processing in the embodiment
of FIG. 2, the transducer 27B emits sound waves corresponding to the combined signal Lhf +
(L+R)lf, the transducer 28B emits sound waves corresponding to the combined signal -LhfΔt2 +
(L+R)lf, the transducer 29B emits sound waves corresponding to the combined signal -RhfΔt7 +
(L+R)lf, and the transducer 30B emits sound waves corresponding to the combined signal Rhf +
(L+R)lf.
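The inverted, time-shifted copy fed to the second transducer of each pair is what produces the directional (cardioid-like) high frequency radiation discussed below; a minimal sketch of that delay-and-invert feed, with an assumed delay corresponding to the transducer spacing, is:

```python
import numpy as np

def directional_pair_feed(hf, lf, fs=48000, spacing_m=0.08, c=343.0):
    """Front/rear feeds for a two-transducer directional pair: the rear element
    gets an inverted copy delayed by the acoustic travel time across the spacing."""
    delay_samples = int(round(fs * spacing_m / c))   # assumed spacing-based delay
    delayed = np.concatenate([np.zeros(delay_samples), hf])[: len(hf)]
    front = hf + lf       # e.g. 27B: Lhf + (L+R)lf
    rear = -delayed + lf  # e.g. 28B: -Lhf (delayed by dt2) + (L+R)lf
    return front, rear
```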
[0028]
Referring to FIG. 3A, a schematic diagram of the embodiment of FIG. 2 is shown, illustrating one
use of the present invention. Transducers 27B, 28B can be conventional 4 inch diameter cone
acoustic drivers, with one radiating surface of each transducer radiating energy into waveguide
39A, either directly, through acoustic volume 80, or through some other acoustic element. The
other radiating surface of each transducer radiates acoustic energy directly to the external
environment. The characteristics of the transfer functions H1(s), H2(s), including the time delay
Δt2, and the position and orientation of the transducers 27B, 28B are set so that the fronts of the
transducers 27B, 28B function as a directional array, emitting sound waves corresponding to the
high frequency spectral components of the left channel in a radiation pattern (such as cardioid
40) that emits more acoustic energy in a direction 44 generally towards the listener 46 at the
listening position associated with the audio signal processing system 1. Transducers 29B, 30B
may be conventional 4 inch diameter cone acoustic drivers, with one radiating surface of each
transducer radiating energy into waveguide 39A, either directly, through acoustic volume 82, or
through some other acoustic element. The other radiating surface of each transducer radiates
acoustic energy directly to the external environment. The characteristics of the transfer functions
H7(s) and H8(s), including the time delay Δt7, and the position and orientation of the transducers
29B, 30B are set so that the fronts of the transducers 29B, 30B function as a directional array,
emitting sound waves corresponding to the high frequency spectral components of the right
channel in a radiation pattern (such as cardioid 42) that emits more acoustic energy in a direction
44 generally towards the listener 46 at the listening position associated with the audio signal
processing system 1. Directional arrays are discussed in more detail in US Pat. Nos. 5,809,153 and
5,870,484. The acoustic waves emitted by the backs of the cones into the waveguide, in particular
the low frequency acoustic waves, enhance the low frequency acoustic waves emitted by the fronts
of the cones. In this implementation of the embodiment of FIG. 2, the transducers 29B, 30B are
acoustically coupled to the waveguide 39A near the closed end of the waveguide, and the
transducers 27B, 28B are acoustically coupled to the waveguide 39A approximately midway
between the waveguide ends. With the transducers so arranged, the waveguide 39A and the
transducers operate as described in co-pending US patent application Ser. No. 09/753,167.
The acoustic volumes 80, 82 can act as acoustic low pass filters as described in co-pending patent
application Ser. No. 09/886,868. The low pass filter effect of the acoustic volumes 80, 82 is
particularly advantageous in the present invention, because the enhancement effect of the
waveguide 39A is more important at low frequencies than at high frequencies. The structure
consisting of the waveguide and transducers may also include other elements to reduce high
frequency resonances; such elements can include, for example, strategically positioned portions of
foam. The closed-end waveguide of substantially constant cross-sectional area may be replaced
with a waveguide of any other shape, such as an open-ended waveguide or a tapered or stepped
waveguide, as described in US patent application Ser. No. 09/146662. The low frequency acoustic
energy may radiate omnidirectionally.
[0029]
In a variation of the embodiment of FIG. 3A, the characteristics of the transfer functions H1(s) to
H8(s) are set such that the transducers 27B, 28B radiate high frequency acoustic energy
omnidirectionally and the transducers 29B, 30B radiate high frequency acoustic energy
omnidirectionally. The omnidirectional radiation pattern may be obtained by setting the transfer
functions H1(s), H2(s) so that the audio signals to the transducers 27B, 28B and the transducers
29B, 30B arrive simultaneously and in phase. In another variation of FIG. 3A, the characteristics
of the transfer functions H1(s) to H8(s) are variable, so that the transducers 29B, 30B have a first
operating mode in which the radiation pattern is directional and a second operating mode in
which the radiation pattern is non-directional, or a first operating mode in which the radiation
pattern is directional in one direction and a second operating mode in which the radiation
pattern is directional in another direction. In addition, the transfer functions H1(s) to H8(s) can
be formulated so that, by making them variable incrementally or continuously, the directivity can
be varied incrementally or continuously between the two modes.
[0030]
FIG. 3B shows another implementation of the embodiment of FIG. 2. In the embodiment of FIG.
3B, the transducer 28B is acoustically coupled to the waveguide 39A near the first end of the
waveguide, the transducer 27B is acoustically coupled to the waveguide 39A at about 1/4 of the
distance from the first end to the second end, the transducer 30B is coupled to the waveguide
39A at about half the distance from the first end to the second end, and the transducer 29B is
coupled to the waveguide 39A at about 3/4 of the distance from the first end to the second end.
By changing the geometry of the waveguide and the mounting positions of the transducers, a
combination of the behavior of the directional array and the behavior of the waveguide can be
obtained. The transducers can be cone drivers that couple to the waveguide through an acoustic
volume, such as acoustic volumes 84-87.
[0031]
For practical reasons, it may be difficult to obtain complex waveguide/transducer configurations
such as that of FIG. 3B. In such situations, it is advantageous to use the time delays 61-63 of FIG.
1B, since they make it possible to change the effective position of one or more transducers in the
waveguide.
[0032]
The illustrations of the radiation patterns are schematic, and the arrangement of transducers
shown is not necessarily the arrangement of transducers used to produce the illustrated radiation
directivity pattern. The directivity pattern can be controlled in many ways. One way is to change
the placement of the transducers. Several examples of different transducer arrangements for
controlling directivity patterns are shown in FIG. 3C. As shown in arrangements 232, 234, the
distance between the transducers can be varied; the transducers can be acoustically coupled to
the waveguide by an acoustic volume or some other acoustic element, as shown in arrangement
236; the orientation of the transducers relative to the listening space can be changed; the
orientation of the transducers relative to one another can be changed; or additional transducers
can be used, as shown in arrangements 238, 240, 242. Many other arrangements can be devised
using different transducer arrangements or combinations of the arrangements shown in FIG. 3C.
In addition, the directivity pattern can also be changed by signal processing methods that change
the phase between the signals, change the time at which the signals reach the transducers, change
the amplitude of the signals transmitted to the two transducers, change the relative polarity of
the two signals, by other individual signal processing methods, or by a combination thereof.
Control of radiation directivity patterns is discussed in more detail in US Pat. Nos. 5,809,153 and
5,870,484.
[0033]
At very high frequencies, a transducer tends to be directional in the direction of the axis of the
transducer surface, i.e., the direction of movement of the cone. In arrangements such as
arrangements 238, 240, 242, the axis 246 of transducer 244 is generally oriented toward the
listening space associated with the audio system, and additional circuitry and signal processing
can roll off the signal to the transducers that are not oriented toward the listening space, so that
at very high frequencies the sound waves are emitted only by the transducer 244 whose axis 246
is generally oriented toward the listening space, and directional radiation is obtained at very high
frequencies. Alternatively, an additional transducer with a small radiating surface can be placed
very close to the listening space, emitting very high frequency acoustic energy at low levels, so
that the very high frequency sound waves can be heard much better in the listening space
associated with the audio system than in adjacent listening spaces.
[0034]
Referring to FIG. 4, a plurality of audio signal processing systems according to the embodiment
of FIG. 3A are shown. This is one of the possible uses of the present invention and discloses
another feature of the present invention. In the illustration of FIG. 4A, the listeners 46A-46H,
each corresponding to one of the signal processing systems 1A-1H, are located in an acoustically
open area. Each of the audio signal processing systems is accompanied by a video device (not
shown) that, together with the audio signal processing system and a user interface, allows the
listener to operate interactive multimedia entertainment equipment. One example of multimedia
entertainment equipment is the video game (for homes and game centers); a second kind of
multimedia entertainment equipment is the gambling machine (slot machines, bingo devices,
video lottery terminals, poker machines, gambling kiosks, or local or wide area progressive
gambling machines, etc.), and in particular gambling machines intended for a casino environment
in which many gambling machines are placed in an acoustically open area. Each of the audio
systems 1A-1H may further have two modes of operation, as described in the discussion of the
variation of FIG. 3A. In the mode in which the audio systems 1A-1D and 1F-1H operate, the
transducers 27B, 28B and the transducers 29B, 30B radiate high frequency acoustic energy in a
directional manner, so that the sound emitted by each audio system can be heard much better by
the listener using that system than by listeners using other audio systems. Since the audio system
1E radiates high frequency sound waves omnidirectionally, the high frequency sound waves
emitted by the system 1E cannot be heard much better by the listener 46E than by those using
other systems. The audio signal processing systems 1A to 1H may be configured to operate in a
first mode under certain conditions and in a second mode under other conditions, or to switch
between modes when an event occurs. Switching between modes can be accomplished by digital
signal processing, by manual or automatic analog or digital switches, or by changing signal
processing parameters. There are many ways to change the signal processing parameters, such as
manual control, voltage controlled filters or voltage controlled amplifiers, or updating or changing
the transfer function coefficients.
When the audio systems 1A to 1H are networked with each other and with the control device 2,
the audio systems can be controlled individually by the audio systems themselves or externally by
the control device 2. Also, the audio systems 1A-1E can be networked so that the audio signal
source can be remote, local, or partially remote and partially local. In FIG. 4, the audio system 1E
can also operate in a mode in which, in response to a certain condition or the occurrence of an
event, the array including the transducers 27B, 28B and the array including the transducers 29B,
30B omnidirectionally emit high frequency acoustic energy. For example, in a game center
implementation, the audio system can operate in a directional mode under normal conditions and
switch to an omnidirectional mode for a predetermined period of time when the player reaches a
certain achievement level. In a game room implementation, the audio system can operate in the
directional mode under normal conditions and switch to the non-directional mode for a
predetermined period of time when the player hits the "jackpot", making it possible to excite and
stimulate all listeners near the audio system 1E.
[0035]
One embodiment of the present invention is particularly advantageous in a gambling and casino
environment. It is desirable to place as many machines as possible in one space; it is desirable for
each machine to produce a sufficient level of sound to maintain excitement; and it is further
desirable that the acoustic energy emitted by each machine be easier to hear in the listening space
associated with that machine than in the listening spaces associated with adjacent machines.
[0036]
In another embodiment, the directivity pattern is continuously or incrementally variable between
directional and omnidirectional, or continuously or incrementally variable between directional
radiation in one direction and directional radiation in another direction. A method of obtaining
continuously or incrementally variable directivity is shown below in FIG. 16 and in the
corresponding part of the present disclosure.
[0037]
Referring to FIG. 5, a diagram of an alternative aspect of the embodiment of FIG. 3A is shown.
Corresponding reference numerals in FIG. 5 indicate like-numbered elements of FIG. 3A. In the
embodiment of FIG. 5, the transducers 27B, 28B, 29B, 30B are mounted in an enclosure 39B
having a port 50. The transducers 27B, 28B are cone-type acoustic drivers mounted so that one
cone surface emits sound waves toward the inside of the ported enclosure and the other cone
surface emits sound waves into free space. The value of the time delay Δt2 of FIG. 2, the
characteristics of the transfer functions H1(s) and H2(s) (FIG. 2), and the position and orientation
of the transducers 27B and 28B are set so that the fronts of the transducers 27B and 28B function
as a directional array, emitting sound waves corresponding to the high frequency spectral content
of the left channel in a radiation pattern (such as cardioid 40) directed in a direction 44 generally
towards the listener 46 at the listening position associated with the audio signal processing
system 1. The value of the time delay Δt7, the characteristics of the transfer functions H7(s),
H8(s), and the position and orientation of the transducers 29B, 30B are set so that the fronts of
the transducers 29B, 30B function as a directional array, emitting sound waves corresponding to
the high frequency spectral components of the right channel in a directional radiation pattern
(such as cardioid 42) in a direction 44 generally towards the listener 46 at the listening position
associated with the audio signal processing system 1. The acoustic waves radiated into the ported
enclosure by the backs of the cones, in particular the low frequency sound waves, enhance the
low frequency sound waves emitted by the fronts of the cones.
[0038]
Referring now to FIG. 6, another embodiment of the present invention is shown. In the
embodiment of FIG. 6, the input terminals represent the left and left surround input terminals of
a surround sound system. Transfer functions H1(s), H6(s) represent no change (having a
mathematical value of 1); transfer functions H3(s), H4(s), H7(s), H8(s) have a mathematical value
of 0 and are not shown in processing blocks 25, 26, 33, 34 (FIG. 1A). The transfer function H9(s)
of the processing block 35 acts equally on the lf signal transmitted to the transducers. The
transfer functions H2(s), H5(s) represent phase reversal (indicated by a negative sign) and time
shifts (Δt2 and Δt5). As a result of the signal processing of the embodiment of FIG. 6, the
transducer 27B emits sound waves corresponding to the combined signal Lhf - LShfΔt5 +
(L+LS)lf, and the transducer 28B emits sound waves corresponding to the combined signal LShf -
LhfΔt2 + (L+LS)lf. The right and right surround channels may have similar audio signal
processing systems.
[0039]
Referring now to FIG. 7, a diagram of an aspect of the embodiment of FIG. 6 is shown. In the
embodiment of FIG. 7, transducers 27B-L ("L" indicates the left/left surround audio signal
processing system) and 28B-L are mounted in ported enclosure 52L. The ported enclosure is
configured to enhance the low frequency sound waves emitted by the transducers 27B-L, 28B-L.
The transducer spacing and the value of Δt2 are set so that the sound waves corresponding to the
Lhf signal are directed towards the listener 46, as indicated by the arrow 54. The transducer
spacing and the value of Δt5 are set so that the sound waves corresponding to the LShf signal are
radiated directionally in a direction 56 that is not towards the listener, and reach the listener after
reflecting off the room boundaries and objects in the room. Similarly, transducers 27B-R ("R"
indicates the right/right surround audio signal processing system) and 28B-R are mounted in
ported enclosure 52R. The ported enclosure is configured to enhance the low frequency sound
waves emitted by the transducers 27B-R, 28B-R. The transducer spacing and the value of Δt2 are
set so that the sound waves corresponding to the Rhf signal are directed towards the listener, as
indicated by the arrow 58. The transducer spacing and the value of Δt5 are set so that the sound
waves corresponding to the RShf signal are radiated directionally in a direction 56 that is not
towards the listener, and reach the listener after reflecting off the room boundaries and objects in
the room. In another aspect of the embodiment of FIG. 6, as in the embodiment of FIG. 4, the
signal processing, the transducer spacing, and the values of Δt2 and Δt5 are set so that the sound
waves corresponding to both the L and LS signals, and to both the R and RS signals, are emitted
toward the listening space occupied by the listener 46. If there is a center channel, the center
channel may be emitted by a single centrally located transducer, by a structure similar to the
device shown in FIG. 7, or the center channel may be downmixed as shown below in FIG. 8B.
[0040]
Referring to FIG. 8A, another embodiment of the present invention is shown. In the embodiment
of FIG. 8A, the input terminals 10, 12 represent the input terminals of a conventional stereo audio
system, or the L, R input terminals of a conventional multi-channel audio system. A center
channel input terminal 70 may also be included, in which case it receives the center channel of
the multi-channel audio system. In the embodiment of FIG. 8A, the high frequency and low
frequency spectral components of the audio signal are not separated, so the combiner/filter
circuits and adders of the other embodiments are not necessary. The input terminal 10 is coupled
to the electroacoustic transducer 27B via the processing block 23. The input terminal 12 is
coupled to the electroacoustic transducer 28B via the processing block 34. Input terminal 70 is
coupled to electro-acoustic transducer 74 via processing block 72. The transfer functions H1(s)
(applied to the L signal in processing block 23), H8(s) (applied to the R signal in processing block
34), and H10(s) (applied to the C signal in processing block 72) can include individual channel
equalization, individual equalization of the transducers taking into account room effects, volume
and balance control, functions such as image spreading, or other similar functions, or they can
represent no change. Sound waves corresponding to the full range left channel signal are emitted
by the transducer 27B, sound waves corresponding to the full range right channel signal are
emitted by the transducer 28B, and sound waves corresponding to the full range center channel
signal are emitted by the transducer 74. Further details of this embodiment are shown in FIG. 9.
[0041]
FIG. 8B shows an alternative processing circuit that processes the center channel signal. In the
system of FIG. 8B, the center channel can be downmixed to the left and right channels at
summers 76, 78. Downmixing involves scaling of the center channel signal and can be performed
according to conventional techniques.
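As a concrete illustration of such a downmix, the sketch below scales the center channel by -3 dB (a common convention, assumed here rather than specified in the text) before summing it into the left and right channels.

```python
import numpy as np

def downmix_center(left, right, center, center_gain=0.707):
    """Fold a scaled center channel into left and right (the gain is an assumed value)."""
    return left + center_gain * center, right + center_gain * center

l_out, r_out = downmix_center(np.zeros(1000), np.zeros(1000), np.random.randn(1000))
```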
[0042]
FIG. 8C shows an alternative embodiment of FIG. 8A. The embodiment of FIG. 8C includes the
components of FIG. 8A and additional circuitry for processing low frequency signals, such as the
combining/filtering circuit 14 coupled to the input terminals 10, 12. The combining/filtering
circuit 14 includes the adder 38, the low pass filter 41, the high pass filters 36 and 40, and the
signal lines 16, 18 and 20 of the previous figures. In addition, the embodiment of FIG. 8C may
include phase shifters, such as the phase shifters 37A, 37B (not shown in this figure) of the
previous embodiments. For the center channel signal, the input terminal 70 is coupled to the
adder 38 and to a high pass filter 142. If phase shifters are included, the relative phases applied
by the phase shifters can be set so that the signals from the input terminals 10, 12, 70 combine in
the proper phase relationship. Adders 27A, 28A, 74A couple the elements of the audio signal
processing circuit 22 to the transducers 27B, 28B, 74B for the left, right, and center channels,
respectively. The embodiment of FIG. 8C functions similarly to the embodiment of FIG. 8A, except
that the bass portions of the three channel signals are combined and transmitted to each of the
transducers.
[0043]
Referring now to FIG. 9, one aspect of the embodiment of FIGS. 8A and 8C is shown. In the
embodiment of FIG. 9, the transducers 27B, 28B, 74 are disposed in the waveguide 39A, with one
side of each transducer cone facing the external environment and the other side of each
transducer cone acoustically coupled to the waveguide. In this embodiment, the transducers can
be acoustically coupled to the waveguide by the acoustic volumes 80, 82, 84 according to the
principles described above in the discussion of FIGS. 3A and 3B. The transducers may be coupled
to the waveguide 39A at about 1/4, 1/2 and 3/4 of the distance between the waveguide ends as
shown, or they may be coupled at other locations, selected empirically or by simulation, to
mitigate unwanted resonance effects of the waveguide.
[0044]
Referring to FIGS. 10A and 10B, another embodiment of the present invention is shown. Input
terminals 110-113, 115 receive audio signals corresponding to the left, left surround, right, right
surround, and center channels of a surround audio system, respectively. The input terminals
110-113, 115 are coupled to the combining/filtering circuit 114, which outputs the high
frequency L signal (Lhf) on the first signal line 116, the high frequency LS signal (LShf) on the
second signal line 117, the high frequency R signal (Rhf) on signal line 118, the high frequency
RS signal (RShf) on signal line 119, the high frequency C signal (Chf) on signal line 121, and the
combined low frequency signal (C+L+LS+R+RS)lf on signal line 120. The signals on signal lines
116-121 are processed by processing circuit 122. The signal on signal line 116 is processed as
represented by transfer functions H1(s), H2(s) in processing blocks 123, 124 and output to
summers 127A, 128A and then to electro-acoustic transducers 127B, 128B, respectively. The
signal on signal line 117 is processed as represented by transfer functions H3(s), H4(s) in
processing blocks 125, 126 and output to summers 127A, 128A and then to electro-acoustic
transducers 127B, 128B, respectively. The signal on signal line 118 is processed as represented by
transfer functions H5(s), H6(s) in processing blocks 131, 132 and output to summers 129A, 130A
and then to electro-acoustic transducers 129B, 130B, respectively. The signal on signal line 119 is
processed as represented by transfer functions H7(s), H8(s) in processing blocks 133, 134 and
output to summers 129A, 130A and then to electro-acoustic transducers 129B, 130B, respectively.
The signal on signal line 120 is processed as represented by transfer function H9(s) in processing
block 135, output to adders 127A-130A, 173A, and then output to transducers 127B-130B, 173B.
The signal on signal line 121 is processed as represented by transfer function H10(s) in processing
block 172, output to adder 173A, and then output to electro-acoustic transducer 173B. As a result
of the processing of the system of FIGS. 10A and 10B, transducers 127B, 128B receive the signals
Lhf, LShf processed according to different transfer functions, transducers 129B, 130B receive the
signals Rhf, RShf processed according to different transfer functions, transducer 173B receives
the processed Chf channel signal, and transducers 127B-130B and 173B each receive the
combined signal (C+L+LS+R+RS)lf processed according to the same transfer function.
[0045]
As with the embodiment of FIG. 1A, phase shifters such as the components 37A and 37B of FIG.
1A can be used so that any combination of Llf, LSlf, Rlf, and RSlf is combined with the proper
phase relationship. If the audio system does not have a single center channel transducer 173B,
the center channel signal may be downmixed as shown in FIG. 8B.
[0046]
A topology implementing the synthesis/filter circuit 114 is shown in FIGS. 10A and 10B. Input
terminal 110 is coupled to high pass filter 136 and summer 138. Input terminal 111 is coupled to
high pass filter 137 and adder 138. Input terminal 112 is coupled to high pass filter 240 and
summer 138. Input terminal 113 is coupled to high pass filter 143 and summer 138. The coupling
from any one of the terminals to the summer may also be via a phase shifter, such as phase
shifter 37A or 37B as shown in FIG. 1A. Adder 138 is coupled to low pass filter 141, which
outputs on signal line 120. Substantially similar results can be obtained with other filter
topologies. For example, the channels may be low pass filtered before combining, or the high pass
filters may be realized as low pass filters that perform a differential addition with the unfiltered
signal, as shown below in FIG. 14. The transfer functions H1(s)-H10(s) may represent one or
more of attenuation/amplification, time delay, phase shift, equalization, or other acoustic signal
processing functions. Also, the transfer functions can represent no change (i.e., expressed
mathematically, have a value of 1) or can be absent (i.e., expressed mathematically, have a value
of 0). Examples of these two states are described below. The system of FIGS. 10A and 10B can
also include conventional elements, such as DACs and filters, not shown. In addition to any
equalization that can be performed in processing blocks 23-26 and 31-35, each of the
electro-acoustic transducers 27B-30B can also be equalized individually. The same result can be
obtained with other topologies of FIGS. 10A and 10B. For example, the low pass filter 141
disposed between the adder 138 and the signal line 120 can be replaced with a low pass filter
between each of the input terminals and the adder 138.
[0047]
In one embodiment of the present invention, the transfer functions H1(s), H4(s), H6(s), H7(s)
represent no change (mathematically, a value of 1), and the transfer functions H2(s), H3(s),
H5(s), H8(s) represent phase reversal (represented by a negative sign) and time delay (denoted by
Δtn, where n is 2, 3, 6, and 7, respectively).
[0048]
Viewed from the electroacoustic transducers, the transducer 127B receives the combined signal
Lhf − LShfΔt3 + (L+LS+R+RS+C)lf, the transducer 128B receives the combined signal LShf −
LhfΔt2 + (L+LS+R+RS+C)lf, the transducer 129B receives the combined signal RShf − RhfΔt5 +
(L+LS+R+RS+C)lf, the transducer 130B receives the combined signal Rhf − RShfΔt8 +
(L+LS+R+RS+C)lf, and the transducer 173B receives the combined signal Chf +
(C+L+LS+R+RS)lf.
[0049]
Referring to FIG. 11, a diagram of one implementation of the embodiment of FIGS. 10A and 10B
is shown.
The value of the time delay Δt2, the characteristics of the transfer functions H1(s) and H2(s), and the position and orientation of the transducers 127B, 128B are such that the fronts of the transducers 127B, 128B function as a directional array and emit sound waves corresponding to the high frequency spectral components of the left channel in a directional radiation pattern in a direction 54 generally towards the listener 46 at the listening position associated with the audio signal processing system 1. The value of the time delay Δt3, the characteristics of the transfer functions H3(s) and H4(s), and the position and orientation of the transducers 127B, 128B are such that the fronts of the transducers 127B, 128B function as a directional array and emit sound waves corresponding to the high frequency spectral components of the left channel in a directional radiation pattern in a different direction 56, in this case an outward direction. Alternatively, the value of the time delay Δt3, the characteristics of the transfer functions H3(s) and H4(s), and the position and orientation of the transducers 127B, 128B may be such that the fronts of the transducers 127B, 128B function as a directional array that emits sound waves corresponding to the high frequency spectral components of the left channel in a directional radiation pattern in the inward direction 54. The value of the time delay Δt6, the characteristics of the transfer functions H5(s) and H6(s), and the position and orientation of the transducers 129B, 130B are such that the fronts of the transducers 129B, 130B function as a directional array and emit sound waves corresponding to the high frequency spectral components of the right channel in a directional radiation pattern in a direction 58 generally towards the listener 46 at the listening position associated with the audio signal processing system 1. The value of the time delay Δt7, the characteristics of the transfer functions H7(s) and H8(s), and the position and orientation of the transducers 129B, 130B are such that the fronts of the transducers 129B, 130B function as a directional array and emit sound waves corresponding to the high frequency spectral components of the right channel in a directional radiation pattern in a different direction 60, in this case an outward direction. Alternatively, the value of the time delay Δt7, the characteristics of the transfer functions H7(s) and H8(s), and the position and orientation of the transducers 129B, 130B may be such that the fronts of the transducers 129B, 130B function as a directional array that emits sound waves corresponding to the high frequency spectral components of the right channel in a directional radiation pattern in the inward direction 58.
[0050]
Directional arrays are discussed in more detail in US Pat. Nos. 5,809,153 and 5,870,484. Sound
waves, particularly low frequency sound waves, emitted into the waveguide from the back of the
cone, enhance the low frequency sound waves emitted by the front of the cone. In this aspect of
the embodiment of FIG. 11, the transducers 129B, 130B are located near the closed end of the
waveguide and the transducers 127B, 128B are located approximately halfway between the ends
of the waveguide. With the transducer so arranged, waveguide 139A and the transducer operate
as described in co-pending US patent application Ser. No. 09 / 753,167. The structure of
waveguides and transducers may also include elements to reduce high frequency resonances.
Such elements can include, for example, sections of foam positioned in a deliberate manner.
[0051]
In addition to the directivity directions shown in FIG. 11, different combinations of directional patterns can be formed for the L, LS, R, and RS channels by using the presentation mode signal processing method of co-pending US patent application Ser. No. 09 / 886,868.
[0052]
Referring now to FIG. 12, an audio system is shown that includes alternative configurations of
the synthesis / filter circuit 114 and the audio processing circuit 122 and includes the additional
features of the present invention.
Input terminal 10 is coupled to signal conditioner 89, which is coupled to combining / filtering
circuit 14 by signal line 210. Input terminal 12 is coupled to signal conditioner 90, which is
coupled by signal line 212 to combining / filtering circuit 14. The synthesis / filter circuit 14 is
coupled to the directivity control circuit 91 of the audio signal processing circuit 22. The
directivity control circuit 91 is coupled to signal summers 27A, 28A, each of which is coupled to
corresponding electro-acoustic transducers 27B, 28B. The synthesis / filter circuit 14 is also
coupled to the directivity control circuit 92 of the audio signal processing circuit 22. The
directivity control circuit 92 is coupled to signal summers 29A, 30A, each of which is coupled to
a corresponding electro-acoustic transducer 29B, 30B. The synthesis / filter circuit 14 is also
coupled to the processing block 35 of the audio signal processing circuit 22, and the processing block 35 is coupled to the signal summers 27A-30A, each of which is coupled to a corresponding one of the electroacoustic transducers 27B-30B.
[0053]
Further details regarding the components of FIG. 12 and an operational description of the
components of FIG. 12 can be found in the discussion of FIGS. The signal conditioner 89 is shown
in further detail with reference to FIGS. 13A-13C. The signal conditioner 89 includes a signal
compressor 160 and a level aware dynamic equalizer 162. The compressor 160 includes a
multiplier 164, which is coupled to the input terminal 10 and differentially coupled to the adder
166. Input terminal 10 is also coupled to summer 166. Adder 166 is coupled to amplifier 168,
which is coupled to signal line 169 and to level aware dynamic equalizer 162. Level aware
dynamic equalizer 162 includes an input signal line coupled to multiplier 170 and adder 172.
Multiplier 170 is differentially coupled to adder 172 and adder 174. Adder 172 is coupled to
equalizer 176, which is coupled to adder 174.
[0054]
The operation of the signal conditioner 89 will be described using an example in which the input terminal 10 is the left input terminal of a stereo or multi-channel system. Here, L and R are the left and right channel signals, respectively, and L̄ and R̄ are the amplitudes of the left and right channel signals, respectively. The system is also applicable to other channel combinations, such as surround channels. In operation, the multiplier 164 of the compressor 160 applies a factor, that is, an attenuation factor of Y / (Y + K1), to the input signal, where Y is a level value derived from the amplitudes L̄ and R̄ and K1 is a constant whose value depends on the desired dynamic range compression. A typical value for K1 is about 0.09. The adder 166 differentially adds the output signal of the multiplier 164 and the input signal, so that the signal applied to the amplifier 168 is compressed by an amount determined by the value of the coefficient Y / (Y + K1). The amplitude L̄ of the input signal L is effectively attenuated by the factor {1 − Y / (Y + K1)} and amplified by the factor K2, so that the amplitude Lcom of the compressed signal is Lcom = K2{1 − Y / (Y + K1)}L̄. Since {1 − Y / (Y + K1)} simplifies to K1 / (Y + K1), the amplitude of the compressed signal can also be written as Lcom = K2{K1 / (Y + K1)}L̄. The compressed signal, having amplitude Lcom = K2{K1 / (Y + K1)}L̄, is transmitted to the level-aware dynamic equalizer 162.
[0055]
When L̄ and R̄ are large compared to K1, the value of Y / (Y + K1) approaches 1 and the value of {1 − Y / (Y + K1)} approaches 0, so the signal is compressed considerably. When L̄ and R̄ are small, the value of {1 − Y / (Y + K1)} approaches 1 and the compression of the signal is very small. A typical value for the amplifier gain K2 is 5.
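A minimal sketch of this compressor behaviour is given below. The one-pole envelope follower and the use of the sum of the left and right envelopes as the level value Y are assumptions made only for illustration; the text does not specify how the amplitudes are measured.

```python
# Sketch of compressor 160: the input is attenuated by Y / (Y + K1),
# differentially combined with itself (adder 166) and amplified by K2, so the
# effective gain is K2 * K1 / (Y + K1).
import numpy as np

K1 = 0.09   # typical value from the text
K2 = 5.0    # typical amplifier gain from the text

def envelope(x, alpha=0.001):
    # simple fast-attack / slow-release follower (assumed; not from the text)
    env = np.zeros(len(x))
    e = 0.0
    for i, v in enumerate(np.abs(x)):
        e = v if v > e else e + alpha * (v - e)
        env[i] = e
    return env

def compress(left, right):
    y = envelope(left) + envelope(right)     # level value Y (assumed form)
    attenuated = (y / (y + K1)) * left       # multiplier 164
    return K2 * (left - attenuated)          # adder 166 followed by amplifier 168
    # equivalently: K2 * (K1 / (y + K1)) * left
```

With this structure the gain falls smoothly from roughly K2 at low levels towards K2·K1 / Y at high levels, which is the compression behaviour described above.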
[0056]
The multiplier 170 of the level-aware dynamic equalizer 162 applies a coefficient of Y2 / (Y2 + K3). Here, K3 is a constant related to the amount of dynamic equalization applied to the audio signal, and Y2 = [Y − {Y² / (Y + K1)}]K2, which can also be expressed as Y2 = {YK1 / (Y + K1)}K2. A typical value for K3 is 0.025. The adder 172 differentially combines the output signal from the multiplier 170 with the compressed signal Lcom, so that the signal from the adder 172 is effectively attenuated by the factor [1 − {Y2 / (Y2 + K3)}]. The signal from the adder 172 is then equalized by the equalizer 176 and combined in the adder 174 with the unequalized output of the multiplier 170, so that the output signal of the signal conditioner 89 is formed by combining the unequalized signal, attenuated by the factor Y2 / (Y2 + K3), with the equalized signal, attenuated by the equalization coefficient {1 − Y2 / (Y2 + K3)}. When the value of Y2 is large, the value of the equalization coefficient approaches zero and equalization is applied to only a small part of the signal. When the value of Y2 is small, the value of the coefficient approaches 1 and equalization is applied to a large part of the input
signal.
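Continuing the illustration, the level-aware dynamic equalization can be sketched as follows. The static equalization curve is not given in this text, so eq_filter is a placeholder callable supplied by the caller; K1 and K2 are the compressor constants discussed above.

```python
# Sketch of the level-aware dynamic equalizer 162: at high levels (Y2 large)
# most of the signal bypasses the equalizer; at low levels (Y2 small) most of
# the signal is routed through it.
K1 = 0.09    # compressor constant (typical value from the text)
K2 = 5.0     # compressor gain (typical value from the text)
K3 = 0.025   # dynamic equalization constant (typical value from the text)

def dynamic_equalize(lcom, y, eq_filter):
    """lcom: compressed signal; y: the level value Y used by the compressor;
    eq_filter: callable applying the (assumed) static equalization curve."""
    y2 = (y * K1 / (y + K1)) * K2          # level after compression
    bypass_gain = y2 / (y2 + K3)           # coefficient applied by multiplier 170
    direct = bypass_gain * lcom            # unequalized path
    equalized = eq_filter(lcom - direct)   # adder 172 followed by equalizer 176
    return direct + equalized              # combined output of conditioner 89
```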
[0057]
Signal conditioner 90 has components corresponding to those of signal conditioner 89, and may
be arranged substantially similarly and perform substantially the same function substantially
similarly. FIG. 14 shows the combining / filtering circuit 14 of FIG. 12 in more detail. Signal line
210 is coupled to all-pass filter 94, which is differentially coupled to summer 96. Line 210 is also
coupled to low pass filter 98, which is coupled to all pass filter 140 and to an adder. Adder 96 is
coupled to signal processing block 91 of audio signal processing circuit 22. All-pass filter 140 of
phase shifter 37A is coupled to all-pass filter 142 of phase shifter 37A. All-pass filter 142 is
coupled to summer 38. Signal line 212 is coupled to all-pass filter 95, which is differentially
coupled to summer 97. Adder 97 is coupled to signal processing block 92 of audio signal
processing circuit 22. Signal line 212 is also coupled to low pass filter 99, which is coupled to all
pass filter 144 and further coupled to summer 97. All-pass filter 144 of phase shifter 37B is
coupled to all-pass filter 146 of phase shifter 37B. All-pass filter 146 is coupled to summer 38.
Adder 38 is coupled to signal processing block 135 of audio processing circuit 22.
[0058]
The characteristics of the filters are shown in Table 1 below.
[0059]
The phase shifters 37A, 37B can be implemented as two all-pass filters as shown, or as more or fewer than two all-pass filters, depending on the frequency range over which the relative phase difference is desired. The filters may have break points different from those shown in Table 1. The low pass filters 98, 99 may be second order low pass filters with a corner frequency of about 200 Hz. Other corner frequencies and other filter orders may also be used, depending on the transducers used and the signal processing requirements. The signal processing blocks 91 and 92 are described below with reference to FIG. 16.
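For illustration, a first-order digital all-pass section of the kind that could serve as one stage of such a phase shifter is sketched below. Because Table 1 is not reproduced here, the 200 Hz break frequency and the 48 kHz sample rate are assumptions.

```python
# Hypothetical first-order all-pass stage: unity magnitude at all frequencies,
# phase moving from 0 towards -180 degrees and passing -90 degrees at fc.
import numpy as np

def allpass_first_order(x, fc=200.0, fs=48000.0):
    t = np.tan(np.pi * fc / fs)
    c = (t - 1.0) / (t + 1.0)
    y = np.zeros(len(x))
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = c * xn + x1 - c * y1   # H(z) = (c + z^-1) / (1 + c z^-1)
        x1, y1 = xn, y[n]
    return y
```

Cascading two such stages with different break frequencies, as suggested by the two all-pass filters shown for each phase shifter, shapes the relative phase difference over a wider frequency range.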
[0060]
The low pass filters 98, 99, the phase shifters 37A, 37B, and the adder 38 perform the same function as the low pass filter 41, the phase shifters 37A, 37B, and the adder 38 of FIGS. 1A and 1B, except that the signals are low pass filtered prior to being combined. The combination of the low pass filter 98 and the adder 96 and the combination of the low pass filter 99 and the adder 97 perform the same function as the high pass filters 36 and 40 of FIGS. 1A and 1B, respectively. The all-pass filters 94, 95 provide proper phase matching when the high frequency signals are combined in subsequent stages of the device.
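The equivalence noted above can be sketched in a single line, again assuming a second order 200 Hz low pass filter: differentially combining a low pass filtered copy with the full band signal realizes the high pass function, so no explicit high pass filter is needed.

```python
# Sketch of the high pass function realized by low pass filtering and
# differential addition (adders 96 / 97 with low pass filters 98 / 99).
# Phase matching by the all-pass filters 94, 95 is omitted in this sketch.
from scipy.signal import butter, lfilter

def lowpass(x, fc=200.0, fs=48000.0, order=2):
    b, a = butter(order, fc / (fs / 2), btype="lowpass")
    return lfilter(b, a, x)

def highpass_by_subtraction(x):
    return x - lowpass(x)
```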
[0061]
Referring to FIG. 15A, the processing block 35 of the embodiment of FIG. 12 is shown in more detail. The signal line from summer 38 is coupled to clipper 190 and notch filter 192. The output terminal of clipper 190 is coupled to notch filter 194 and summer 196. The output terminal of notch filter 194 is differentially coupled to summer 196. The output of summer 196 and the output of notch filter 192 are coupled to summer 198. For purposes of explanation, FIG. 15A identifies several nodes. Node 200 is on the signal line between the input terminal and clipper 190 and between the input terminal and notch filter 192. Node 202 is on the signal line between clipper 190 and notch filter 194 and between clipper 190 and summer 196. Node 204 is on the signal line between notch filter 194 and summer 196. Node 206 is on the signal line between notch filter 192 and summer 198. Node 208 is on the signal line between summers 196 and 198. Node 209 is on the signal line between summer 198 and the output terminal.
[0062]
FIG. 15B shows a variation of the circuit of FIG. 15A. In the circuit of FIG. 15B, the adders 196
and 198 of FIG. 15A are combined into an adder 197. The circuits of FIGS. 15A and 15B perform
essentially the same function.
[0063]
Referring to FIGS. 15C and 15A, examples of frequency response patterns at the nodes of FIG. 15A are shown. Curve 210 is the frequency response of the input audio signal. Curve 212 is the frequency response curve at node 202; after clipping, curve 212 has undesirable distortion 214. Curve 216 is the frequency response curve at node 204, after notch filter 194. Curve 220 illustrates the addition at summer 196; curve 216' is the inverse of curve 216 and illustrates the differential addition. Curve 222 is the frequency response at node 208 after the addition at summer 196. Curve 224 is the frequency response at node 206, after notch filter 192. Curve 226 illustrates the addition at summer 198. Curve 228 is the frequency response at node 209 after the addition at summer 198.
[0064]
The notch filters 192, 194 may be centered on the frequency of maximum excursion of the electroacoustic transducer, or on some other frequency of concern, for example a frequency at which the impedance is low and the transducer is stressed. Clipper 190 may be a bipolar clipper or some other type of clipper that limits the amplitude of the signal in a narrow frequency band. The notch filters 192, 194 may be notch filters as shown, or may be band pass filters or low pass filters.
[0065]
In operation, the circuits of FIGS. 15A and 15B effectively decompose and reassemble the input signal as a function of frequency, using both the clipped and the unclipped signal. The portion of the distortion-prone clipped signal that is used lies around the maximum excursion frequency of one or more of the electroacoustic transducers, which is the region where it is desirable to limit the maximum applied signal. Most of the reorganized frequency response curve is taken from the unclipped frequency response curve, because the unclipped signal contains less distortion than the clipped signal. The circuit may be modified to clip at more than one frequency, or to clip at frequencies other than the maximum excursion frequency. In some applications, the notch filters can be replaced with low pass filters or band pass filters. The circuits of FIGS. 15A and 15B thus limit the maximum signal amplitude at one or more predetermined frequencies, apply no clipping at other frequencies, and keep the distortion introduced by the clipping to a minimum.
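A compact sketch of this clip-and-reassemble idea is given below. The 50 Hz excursion frequency, the clip level, and the notch Q are arbitrary values chosen only to make the example concrete.

```python
# Sketch of the FIG. 15A scheme: only the band around the maximum excursion
# frequency is taken from the clipped signal; everywhere else the unclipped
# signal is used.
import numpy as np
from scipy.signal import iirnotch, lfilter

def protect(x, f_excursion=50.0, fs=48000.0, clip_level=1.0, q=2.0):
    b, a = iirnotch(f_excursion, q, fs)
    clipped = np.clip(x, -clip_level, clip_level)     # clipper 190
    band_clipped = clipped - lfilter(b, a, clipped)   # summer 196: clipped band only
    rest_unclipped = lfilter(b, a, x)                 # notch filter 192
    return rest_unclipped + band_clipped              # summer 198
```

Outside the notch band the output is essentially the unclipped input; inside the notch band the amplitude-limited signal is used, as described above.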
[0066]
The signal conditioners 89, 90 and the combining / filtering circuit 14 of FIG. 12 and its
components can be modified and reconfigured in many ways. For example, the signal
conditioners 89, 90 can also be used independently. That is, it is possible to use one and not the
other. In a system having both the signal conditioners 89, 90 and the combining / filtering circuit 14, the order may be reversed; that is, signal combining and filtering may be performed first, followed by the signal conditioning. Either of the components of the signal conditioner (the compressor 160 and the level-aware dynamic equalizer 162 of FIG. 13A) can be used independently; that is, it is possible to use one and not the other.
[0067]
The directivity control circuit 91 is shown in more detail in FIG. 16. The signal line from the adder 96 of the combining / filtering circuit 14 of FIG. 14 is coupled to the time delay 230, the multiplier 232, and the adder 234. Time delay 230 is coupled to multiplier 236 and adder 238. Multiplier 232 is differentially coupled to adder 234 and additively coupled to adder 27A. Multiplier 236 is differentially coupled to adder 238 and additively coupled to adder 28A. Adder 234 is coupled to adder 28A. Adder 238 is coupled to adder 27A. Adder 27A is coupled to the electroacoustic transducer 27B. Adder 28A is coupled to the electroacoustic transducer 28B. Processing block 35 (not shown) of FIG. 14 is coupled to adders 27A, 28A.
[0068]
The time delay element 230 can be realized in the form of one or more phase shifters, since time
and phase can be related in a known manner. The time delay elements can also be implemented
using non-minimum phase devices. In a system using DSP, time delay can be accomplished by
directly delaying data samples for a certain number of clock cycles. The phase shifter can be
implemented as an all-pass filter or as a complementary all-pass filter.
[0069]
In operation, the audio signal from summer 96 of synthesis / filter circuit 14 is attenuated by
multiplier 232.
[0070]
The attenuated signal is differentially added to the unattenuated signal in the adder 234.
The combined signal is then transmitted to the adder 28A. In addition, the output of multiplier 232 is transmitted to adder 27A. The audio signal from the adder 96 of the combining / filtering circuit 14 is also time delayed by the time delay element 230 and multiplied by the attenuation factor in the multiplier 236,
[0071]
and is differentially combined with the unattenuated delayed signal in summer 238. The combined signal is transmitted to the adder 27A. In addition, the output of multiplier 236 is transmitted to adder 28A. The adders 27A, 28A also receive the low frequency audio signal from the processing block 35 of the audio signal processing circuit 22. The signals combined in the adders 27A, 28A are then converted and emitted by the electroacoustic transducers 27B, 28B, respectively. The time delay Δt, the spacing, and the orientation of the transducers 27B, 28B can be configured so that sound energy is radiated directionally, as described in US Pat. Nos. 5,809,153 and 5,870,484, or as implemented in the systems of FIGS. 3A, 3B, 4, 5, 6, 7, and 11.
[0072]
The directivity of the array of electroacoustic transducers 27B, 28B can be controlled by controlling the correlation, amplitude, and phase relationships of the L and R signals. Two cases are shown at the bottom of FIG. 16. When L = R (i.e., an in-phase monaural signal) and the attenuation coefficient has a value of 0, the signal −LΔt is transmitted to transducer 27B but the L signal is not substantially transmitted, and the signal L is transmitted to transducer 28B but −LΔt is not substantially transmitted. When L = −R (i.e., the same magnitude but opposite phase) and the attenuation coefficient has a value of 1, the signal −LΔt is transmitted to transducer 28B but the L signal is not substantially transmitted, and the signal L is transmitted to transducer 27B but −LΔt is not substantially transmitted. The result is two substantially different directivity patterns.
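This behaviour can be sketched as follows. The polarity inversion of the delayed branch is an assumption made so that the feeds match the −LΔt terms described above, and the attenuation coefficient k is passed in directly because the coefficient formulas themselves are not reproduced in this text.

```python
# Sketch of the FIG. 16 directivity control for one channel pair.
import numpy as np

def steer(x, k, delay_samples, lf=None):
    """x: high frequency channel signal; k: attenuation coefficient in [0, 1];
    lf: optional common low frequency signal from processing block 35.
    Returns the drive signals for transducers 27B and 28B."""
    xd = -np.concatenate((np.zeros(delay_samples), x[:len(x) - delay_samples]))
    drive_27b = k * x + (1.0 - k) * xd    # k = 0 -> -x(t - dt);  k = 1 -> x(t)
    drive_28b = (1.0 - k) * x + k * xd    # k = 0 -> x(t);        k = 1 -> -x(t - dt)
    if lf is not None:
        drive_27b = drive_27b + lf
        drive_28b = drive_28b + lf
    return drive_27b, drive_28b
```

With k = 0 the pair forms one directional array orientation; with k = 1 the roles of the two transducers are exchanged, producing a substantially different radiation pattern, as described for the L = R and L = −R cases above.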
[0073]
As a result of the processing by the circuit of FIG. 16, the signal is attenuated by the following coefficient.
[0074]
This attenuated signal, the attenuated, phase-reversed, time-delayed and phase-shifted signal attenuated by the factor below, and the low frequency audio signal from component 35 are summed in summer 27A and converted by transducer 27B.
[0075]
Coefficient below
[0076]
The signal attenuated by this coefficient
[0077]
and the low frequency audio signal from component 35 are combined in summer 28A and converted by transducer 28B.
Varying the magnitude, correlation, and phase of the L and R signals can result in different radiation patterns, as described in the discussion of FIG. 16.
In addition to the signal-dependent directivity control of FIG. 16, other configurations, such as user-operable switches, automatic switches, or signal processing, can change the directivity pattern continuously or incrementally, or based on the occurrence of some event.
[0078]
The directivity control circuit 92 has substantially the same components as the directivity control
circuit 91, is arranged in substantially the same configuration, and performs substantially the
same operation in substantially the same manner.
In addition, the directivity control circuit of FIG. 16 can be used for other channels, such as
surround channels.
The surround channel signal may be processed to be emitted by the transducers 27B, 28B or
may be processed to be emitted by other transducers.
[0079]
Referring to FIGS. 17A and 17B, another embodiment of the present invention is shown. Audio
system 300A includes a front audio system 301A. The front audio system 301A has left (L),
center (C) and right (R) channel input terminals 310L, 310C, 310R of the multi-channel audio
system. Each of the input terminals is coupled to the high pass filters 312L, 312C, 312R, while
each of the high pass filters 312L, 312C, 312R is coupled to one of the processing blocks 313L,
313C, 313R. Each of processing blocks 313L, 313C, 313R is coupled to summers
314L, 314C, 314R, each of which is coupled to electroacoustic transducers 316L, 316C, 316R,
respectively. The electroacoustic transducers 316L, 316C, 316R are mounted such that they emit
sound waves into a low frequency enhancement device such as the waveguide 318. Input
terminals 310L, 310C, 310R are coupled to summer 320, which is coupled to low pass filter
311. Low pass filter 311 is coupled to processing block 313LF, while processing block 313LF is
coupled to summers 314L, 314C, 314R, respectively. As in the previous figures, some or all of
the input terminals 310L, 310C, 310R may be coupled to the summer 320 via phase shifters
such as components 37A, 37B of FIG. 1A. These components may be arranged in a different
order. The filters 312L, 312C, 312R may be incorporated into the transfer function of the
processing blocks 313L, 313C, 313R. The transfer function may incorporate processes such as
phase shift, delay, signal conditioning, compression, clipping, equalization, HRTF processing, etc.
or may represent zero or one. In addition, transducers 316L, 316C, 316R may be mounted such
that they emit sound waves into waveguide 318 through the acoustic volume as shown in the
previous figures.
[0080]
The front audio system 301A operates as described in connection with the previous figures, such as FIG. 3C. The electroacoustic transducers 316L, 316C, 316R each emit the high frequency sound waves of their respective channel (Lhf, Chf, Rhf, respectively) and also emit the combined low frequency sound waves (L + R + C)lf. A low frequency enhancement device, such as the waveguide 318, enhances the radiation of the low frequency sound waves.
[0081]
Audio system 300A may also include rear audio system 302A, shown in FIG. 17B. The rear audio
system 302A includes left rear (LR) and right rear (RR) channel input terminals 330LR, 330RR of
the multi-channel audio system. Each of the input terminals is coupled to one of the high pass
filters 332LR, 332RR, each of which is coupled to one of the processing blocks 333LR, 333RR.
Each of the processing blocks 333LR, 333RR is coupled to one of the summers 334LR, 334RR, and each of the summers 334LR, 334RR is coupled to the electroacoustic transducers 336LR, 336RR, respectively. The electroacoustic transducers 336LR, 336RR are mounted such that they emit
sound waves to a low frequency enhancement device such as the ported enclosure 338. Each of
the input terminals 330LR, 330RR is also coupled to the summer 340, while the summer 340 is coupled to the low pass filter 341. Low pass filter 341 is coupled to processing block 333LR, while processing block 333LR is coupled to summers 334LR and 334RR. As in the previous
figures, one or both of the input terminals 330LR, 330RR may be coupled to the summer 340 via
phase shifters such as components 37A, 37B of FIG. 1A. These components may be arranged in
different orders. The filters 332LR, 332RR may be incorporated into the transfer function of the
processing blocks 333LR, 333RR. The transfer function may incorporate processes such as phase
shift, delay, signal conditioning, compression, clipping, equalization, HRTF processing, etc. or may
represent zero or one. In addition, the transducers 336LR, 336RR can be mounted such that they
emit sound waves into low frequency enhancement devices such as ported enclosures and
enclosures with passive radiators.
[0082]
The rear audio system 302A operates similarly to the previous embodiments and may also operate similarly to the rear acoustic emission device of co-pending US patent application Ser. No. 10 / 309,395. The LR and RR signals may include left surround and right surround channel audio signals, respectively, and may further include binaural time differences, binaural phase differences, binaural level differences, or monaural spectral cues. By including HRTF elements, the source image can be represented to the listener 322 more accurately. The transducers may also be coupled to other components by circuitry as described above, so that they can emit sound with varying degrees of directivity.
[0083]
An audio system according to the embodiment of FIGS. 17A and 17B is advantageous for the reasons described above. In addition, the audio system according to FIGS. 17A and 17B can radiate realistic localization information to the listener 322 and can radiate different localization information to the listeners of many multimedia entertainment devices in the same listening area. Because of the proximity of the transducers to the listener and the natural directivity of the transducers at very high frequencies, each listener can hear the sound associated with the corresponding multimedia device more clearly than the sounds associated with the other multimedia entertainment devices.
[0084]
Referring to FIG. 18, another implementation of the embodiment of FIGS. 17A and 17B is shown.
In FIG. 18, a signal processing system using the circuit of FIG. 2 generates a left signal L, a right
signal R, a left rear signal LR, and a right rear signal RR. Transducer 316C may be identical to that of FIG. 17A, may be replaced with a directional array, or, as in FIG. 8B, the center channel may be downmixed and the transducer 316C of FIG. 17A eliminated. The transducers 316L, 316R, 336LR, 336RR of FIGS. 17A and 17B have been replaced with directional arrays. The embodiment of FIG. 18 may use two signal processing systems similar to the systems of FIGS. 1A and 1B or FIG. 2, one for the front radiation and one for the rear radiation. Transducer 316L and summer 314L of FIG. 17A are replaced with a directional array including transducers 316L-1 and 316L-2, along with corresponding signal summers 314L-1 and 314L-2. Transducer 316R and summer 314R of FIG. 17A are replaced with a directional array including transducers 316R-1 and 316R-2, along with corresponding summers 314R-1 and 314R-2. Transducers 316L-1, 316L-2, 316R-1, and 316R-2 can each be mounted so that one radiating surface of each transducer emits sound waves to the outside environment and the other radiating surface of each transducer emits sound waves into a low frequency radiation enhancement device such as the acoustic waveguide 318. Similarly, transducer 336LR and summer 334LR of FIG. 17B are replaced with a directional array including transducers 336LR-1 and 336LR-2, along with corresponding summers 334LR-1 and 334LR-2. Transducer 336RR can be replaced with a directional array including transducer 336RR-1, which receives the audio signal from summer 334RR-1, and transducer 336RR-2, which receives the audio signal from summer 334RR-2. Transducers 336LR-1, 336LR-2, 336RR-1, 336RR-2 can be mounted so that one radiating surface of each transducer emits sound waves to the external environment and the other radiating surface of each transducer emits sound waves into a low frequency radiation enhancement device such as the ported enclosure 340.
[0085]
In the embodiment of FIG. 18, the transducers 316L-1, 316L-2, 316R-1, 316R-2 all receive the combined left and right low frequency signal (L + R)lf. In addition, transducer 316L-1 receives the high frequency left signal Lhf, transducer 316L-2 receives the polarity-inverted and time-delayed signal Lhf, transducer 316R-1 receives the high frequency signal Rhf, and transducer 316R-2 receives the polarity-inverted and time-delayed signal Rhf. Transducers 316L-1, 316L-2 operate as a directional array and emit sound waves corresponding to the Lhf signal so that more acoustic energy is radiated towards the listener 322 than towards listeners in adjacent listening spaces. Similarly, transducers 316R-1, 316R-2 operate as a directional array and emit sound waves corresponding to the Rhf signal so that more acoustic energy is radiated towards the listener 322 than towards listeners in adjacent listening spaces. The acoustic waveguide 318 cooperates with the transducers 316L-1, 316L-2, 316R-1, 316R-2 to enhance the radiation of low frequency acoustic energy.
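For illustration, the drive signals for one such two-element array (for example transducers 316L-1 and 316L-2) could be formed as sketched below; the delay value depends on the transducer spacing and the desired pattern and is therefore left as a parameter.

```python
# Sketch of the feeds for a two-element directional array of FIG. 18.
import numpy as np

def array_feeds(hf, lf_common, delay_samples):
    """hf: high frequency channel signal (e.g. Lhf);
    lf_common: shared low frequency signal, e.g. (L + R)lf."""
    hf_delayed = np.concatenate((np.zeros(delay_samples),
                                 hf[:len(hf) - delay_samples]))
    front = hf + lf_common           # e.g. transducer 316L-1
    rear = -hf_delayed + lf_common   # e.g. transducer 316L-2 (inverted, delayed)
    return front, rear
```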
[0086]
Transducers 336LR-1, 336LR-2, 336RR-1, 336RR-2 all receive the combined left rear and right rear low frequency signal (LR + RR)lf. In addition, transducer 336LR-1 receives the high frequency left rear signal LRhf, transducer 336LR-2 receives the polarity-inverted and time-delayed signal LRhf, transducer 336RR-1 receives the high frequency signal RRhf, and transducer 336RR-2 receives the polarity-inverted and time-delayed signal RRhf. Transducers 336LR-1 and 336LR-2 operate as a directional array and emit sound waves corresponding to the LRhf signal so that more acoustic energy is radiated towards the listener 322 than towards listeners in adjacent listening spaces. Similarly, transducers 336RR-1 and 336RR-2 operate as a directional array and emit sound waves corresponding to the RRhf signal so that more acoustic energy is radiated towards the listener 322 than towards listeners in adjacent listening spaces. The ported enclosure 340 cooperates with the transducers 336LR-1, 336LR-2, 336RR-1, 336RR-2 to enhance the radiation of low frequency acoustic energy.
[0087]
The left rear LR and right rear RR signals may correspond to the left and right surround signals, and may include other or additional information, such as HRTF information as in FIGS. 17B and 18, or other information such as individualized sound tracks or audio messages.
[0088]
Other embodiments of the systems of FIGS. 17A, 17B, and 18 can also be implemented by combining the front audio system 301A of FIG. 17A with the rear audio system 302B of FIG. 18, or by combining the rear audio system 302A of FIG. 17B with the front audio system 301B of FIG. 18.
Features of the other embodiments, such as the level-aware dynamic equalizer or compressor of FIGS. 13A-13C, or the variable directivity components of FIG. 16, can also be used in the embodiments of FIGS. 17A, 17B, and 18.
[0089]
FIG. 19 shows another embodiment of the system of FIGS. 17A, 17B, and 18. In the embodiment of FIG. 19, the rear audio system 302C eliminates the low frequency enhancement device (the ported enclosure 338 of FIG. 17B or the enclosure 340 of FIG. 18). Transducers 336LR-1, 336LR-2, 336RR-1, 336RR-2 may preferably be located in a small enclosure near the head of the listener 322. The LR signal comprises the high frequency portion of the LS signal and, if necessary, is subjected to HRTF processing as described in US patent application Ser. No. 10 / 309,395. The RR signal comprises the high frequency portion of the RS signal and is subjected to HRTF processing if necessary. The low frequency portions of the signals LS, RS may be routed to the summers 314L-1, 314L-2, 314R-1, 314R-2 so that all of the low frequency acoustic energy is emitted by the transducers of the front audio system 301B.
[0090]
In the alternative configuration of FIG. 19, the front audio system may be similar to the front audio system 301A of FIG. 17A or 301B of FIG. 18. In another alternative configuration of FIG. 19, the front low frequency enhancement device, such as the waveguide 318, is eliminated, and all of the low frequency signals are instead emitted by a rear audio system such as 302A of FIG. 17B, 302B of FIG. 18, or 302C of FIG. 19.
[0091]
The embodiments according to FIGS. 17A, 17B, and 18 are particularly suitable for situations in which a large number of devices playing different audio program material (such as gambling machines, video games, or other multimedia entertainment devices) are located relatively close together in a common listening area. The embodiments according to FIGS. 17A, 17B, and 18 make it possible to emit all of the surround sound channels while accurately positioning the sound image and providing sufficient low frequency radiation without the need for a separate low frequency loudspeaker.
[0092]
It will be apparent to those skilled in the art that numerous uses of and departures from the specific devices and techniques disclosed herein can be made without departing from the inventive concepts. Accordingly, the present invention is intended to embrace each and every novel feature and novel combination of features disclosed herein, and is to be limited only by the spirit and scope of the appended claims.
[0093]
FIG. 1A is a block diagram of an audio signal processing system embodying the present
invention. FIG. 1B is a block diagram of an alternative embodiment of the audio signal processing
system of FIG. 1A. FIG. 2 is a block diagram of an alternative embodiment of the audio signal
processing system of FIG. 1A. FIG. 3A is a schematic diagram of an implementation of the audio
signal processing system of FIG. FIG. 3B is a schematic diagram of another embodiment of the
audio signal processing system of FIG. FIG. 3C is a schematic diagram of an arrangement of
electroacoustic transducers for use in a directional array. FIG. 4 is a schematic diagram of a
plurality of networked audio signal processing systems. FIG. 5 is a schematic diagram of an
alternative embodiment of the audio signal processing system of FIG. 3A. FIG. 6 is a block
diagram of another audio signal processing system embodying the present invention. FIG. 7 is a
schematic view of one implementation of the embodiment of FIG. FIG. 8A is a block diagram of
another audio signal processing system embodying the present invention. FIG. 8B is a block
diagram of an alternative circuit for processing the central channel. FIG. 8C is a block diagram of
an alternative implementation of the embodiment of FIG. 8A. FIG. 9 is a schematic diagram of one
implementation of the audio signal processing system of FIGS. 8A and 8C. FIG. 10A, together with
FIG. 10B, is a block diagram illustrating another audio signal processing system embodying the
present invention. FIG. 10B, together with FIG. 10A, is a block diagram illustrating another audio
signal processing system embodying the present invention. FIG. 11 is a schematic diagram of one
implementation of the audio signal processing system of FIGS. 10A and 10B. FIG. 12 is a block
diagram of an audio processing system that includes alternative arrangements of some of the
components of the previous figures and illustrates some of the additional features of the present
invention. FIG. 13A is a block diagram showing some of the components of FIG. 12 in further
detail. FIG. 13B is a block diagram illustrating some of the components of FIG. 12 in further
detail. FIG. 13C is a block diagram illustrating some of the components of FIG. 12 in further
detail. FIG. 14 is a block diagram illustrating another portion of the components of FIG. 12 in
further detail. FIG. 15A is a block diagram of another portion of the components of FIG. FIG. 15B
is a block diagram of another portion of the components of FIG. FIG. 15C shows frequency
response curves illustrating the operation of the circuit of FIGS. 15A and 15B. FIG. 16 is a block
diagram of another portion of the components of FIG. 12; FIG. 17A is a diagram of another audio
signal processing system embodying the present invention.
FIG. 17B is a diagram of another audio signal processing system embodying the present
invention. FIG. 18 is a diagram of another embodiment of the audio signal processing system of
FIGS. 17A and 17B. FIG. 19 is an alternative embodiment of the audio signal processing system
of FIG.