Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2009060351
An object of the present invention is to realize appropriate delay correction corresponding to the propagation delay time of audio when listening to music or the like. SOLUTION: Test sounds output from each of the speakers 131L to 131SR are collected at a sound collection position P2 by a microphone installed at a position that the sound output from all of the speakers 131L to 131SR can reach directly. The control processing unit analyzes the sound collection result data and estimates the distances MDSL to MDSSR from the sound collection position P2 to the speakers 131L to 131SR, respectively. Next, the distances HDSL to HDSSR from the assumed listening position P1 to the speakers 131L to 131SR are estimated based on the positional relationship between the assumed listening position P1 and the sound collection position P2 and the estimated distances MDSL to MDSSR. Then, the distances HDSL to HDSSR are converted into time, and the propagation delay time of the sound from each speaker to the assumed listening position P1 is estimated. [Selected figure] Figure 5
Acoustic device, delay measurement method, delay measurement program and recording
medium therefor
[0001]
The present invention relates to an acoustic device, a delay measurement method, a delay
measurement program, and a recording medium on which the delay measurement program is
recorded.
[0002]
10-05-2019
1
With the recent development of recording media such as CDs (Compact Discs) and DVDs (Digital Versatile Discs), acoustic devices provided with a plurality of speakers for multi-channel surround sound have been developed.
By using such an acoustic device, it is possible to enjoy realistic surround sound not only in a home but also in a vehicle cabin.
[0003]
Since the installation environments of acoustic devices vary, a plurality of speakers often cannot be arranged at the symmetrical positions recommended from the viewpoint of a multi-channel surround system. In particular, when a multi-channel surround sound system is mounted on a vehicle, the restriction of the seating positions serving as listening positions makes it impossible to arrange the plurality of speakers at positions having the recommended symmetry. Therefore, when the positional relationship between each speaker and the listening position does not have the above symmetry, the output timing of the sound from each speaker needs to be adjusted (hereinafter also referred to as "time alignment correction").
[0004]
As a method for this time alignment correction, a method is generally adopted in which a microphone is installed at an assumed listening position, such as the headrest of the driver's seat, and the sound output from each speaker is collected by the microphone to measure the propagation delay time of the output sound from each speaker (see Patent Document 1 and the like; hereinafter referred to as the "conventional example").
[0005]
JP-A-7-212896
[0006]
When the microphone is installed in the headrest of the driver's seat as the assumed listening position, as in the conventional example, the sound output from the speakers disposed in front of the assumed listening position (for example, the left speaker and the right speaker) can be collected directly by the microphone.
However, for the sound output from the speakers disposed behind the assumed listening position (for example, the surround left speaker and the surround right speaker), the headrest may act as an obstacle, and it often happens that the direct sound cannot be collected by the microphone.
In such a measurement environment, it is difficult to perform accurate time alignment correction.
[0007]
For this reason, there is a need for a technique that can easily perform accurate time alignment correction for the sound output from all the speakers. Meeting this need is one of the problems to be solved by the present invention.
[0008]
The present invention has been made in view of the above circumstances, and aims to provide a new acoustic device and a new delay measurement method capable of measuring the propagation delay time of sound from each of a plurality of speakers to an assumed listening position simply and accurately.
[0009]
The invention according to claim 1 is an acoustic device which outputs audio toward a sound field space from a plurality of speakers based on a reproduction result of audio content, the device comprising: sound collecting means arranged at a specific position having a predetermined positional relationship with an assumed listening position in the sound field space, for collecting the sound that reaches the specific position after being output from each of the plurality of speakers; test sound output means for causing a test sound to be output sequentially from each of the plurality of speakers; first estimation means for estimating a distance from the specific position to each of the plurality of speakers based on a sound collection result of the test sound by the sound collecting means; and second estimation means for estimating a distance from each of the plurality of speakers to the assumed listening position based on an estimation result by the first estimation means and the predetermined positional relationship.
[0010]
The invention according to claim 10 is a delay measurement method comprising: a test sound output step of sequentially outputting a test sound from each of a plurality of speakers; a first estimation step of estimating a distance from a specific position, which has a predetermined positional relationship with an assumed listening position in a sound field space, to each of the plurality of speakers based on a result of collecting the test sounds at the specific position; a second estimation step of estimating a distance from each of the plurality of speakers to the assumed listening position based on an estimation result in the first estimation step and the predetermined positional relationship; and a delay time estimation step of estimating a propagation delay time of sound from each of the plurality of speakers to the assumed listening position based on an estimation result in the second estimation step.
[0011]
The invention according to claim 11 is a delay measurement program characterized by causing a
computing means to execute the delay measurement method according to claim 10.
[0012]
The invention according to claim 12 is a recording medium on which the delay measurement program according to claim 11 is recorded so as to be readable by a computing means.
[0013]
Hereinafter, embodiments of the present invention will be described with reference to the
attached drawings.
In the following description, the same or equivalent elements will be denoted by the same
reference symbols, without redundant description.
[0014]
First Embodiment
First, a first embodiment of the present invention will be described with reference to FIGS. 1 to 13.
In the first embodiment, an acoustic device mounted on a vehicle CR (see FIG. 2) will be
described as an example.
[0015]
<Configuration> FIG. 1 is a block diagram showing a schematic configuration of the acoustic
device 100A according to the first embodiment.
[0016]
As shown in FIG. 1, the acoustic device 100A includes a control unit 110A and a drive unit 120.
[0017]
The acoustic device 100A further includes a sound output unit 130L, a sound output unit 130R,
a sound output unit 130SL, and a sound output unit 130SR.
[0018]
Here, the sound output unit 130L has a left speaker 131L (hereinafter also referred to as the "L speaker"), and the sound output unit 130R has a right speaker 131R (hereinafter also referred to as the "R speaker").
Further, the sound output unit 130SL has a surround left speaker 131SL (hereinafter also referred to as the "SL speaker"), and the sound output unit 130SR has a surround right speaker 131SR (hereinafter also referred to as the "SR speaker").
[0019]
Furthermore, the acoustic device 100A includes a sound collection unit 140 as sound collecting means, a display unit 150, and an operation input unit 160.
[0020]
The elements 120 to 160 other than the control unit 110A are connected to the control unit
110A.
[0021]
The control unit 110A centrally controls the entire acoustic device 100A.
The details of the control unit 110A will be described later.
[0022]
When a compact disc CD on which audio content is recorded is inserted, the drive unit 120 reports that fact to the control unit 110A.
When the drive unit 120 receives an audio content reproduction command DVC from the control unit 110A while the compact disc CD is inserted, the drive unit 120 reads out the audio content designated for reproduction from the compact disc CD.
The readout result of the audio content is sent to the control unit 110A as content data CTD, which is an audio signal.
[0023]
Each of the sound output units 130L to 130SR includes, in addition to the above-described
speakers 131L to 131SR, an amplifier for amplifying the audio output signals AOSL to AOSSR
received from the control unit 110A.
The sound output units 130L to 130SR reproduce and output a test sound signal, music, etc.
under the control of the control unit 110A.
[0024]
In the first embodiment, as shown in FIG. 2, the L speaker 131L of the sound output unit 130L is
disposed in the front door housing on the passenger seat side.
The L speaker 131L is disposed to face the front passenger seat.
[0025]
The R speaker 131R of the sound output unit 130R is disposed in the front door housing on the
driver's seat side.
The R speaker 131R is disposed to face the driver's seat.
[0026]
The SL speaker 131SL of the sound output unit 130SL is disposed in a housing at the rear of the
passenger seat.
The SL speaker 131SL is disposed to face the rear seat on the passenger seat side.
[0027]
The SR speaker 131SR of the sound output unit 130SR is disposed in a housing at the rear of the
driver's seat. The SR speaker 131SR is disposed to face the rear seat on the driver's seat side.
[0028]
Returning to FIG. 1, the sound collection unit 140 includes (i) a microphone 141 which collects ambient sound and converts it into an electrical analog audio signal, (ii) an amplifier which amplifies the analog audio signal output from the microphone, and (iii) an analog-to-digital converter which converts the amplified analog audio signal into a digital audio signal. The sound collection result by the sound collection unit 140 is reported to the control unit 110A as sound collection result data AAD.
[0029]
The microphone 141 of the sound collection unit 140 is disposed at a predetermined sound
collection position P2 when operating in a “delay time setting mode” described later. The
sound collection position P2 is set to have a predetermined positional relationship with the
assumed listening position P1 assumed as the listening position of the listener. In the first
embodiment, as shown in FIG. 2, the assumed listening position P1 is a position estimated to be
the middle position between both ears when the listener is seated in the driver's seat. Further, the sound collection position P2 is a position that the direct sound from each of the speakers 131L to 131SR can reach without being blocked by an obstacle such as a seat.
[0030]
Here, the positional relationship among the speakers 131L to 131SR, the assumed listening position P1, and the sound collection position P2 assumed in the first embodiment will be described with reference to FIGS.
[0031]
As shown in FIG. 3, the speakers 131j (j = L to SR) are arranged at the vertices of a rectangle having a length LL along the traveling direction of the vehicle CR and a length WID (hereinafter referred to as the "installation width WID") along the direction orthogonal to the traveling direction.
Then, the sound collection position P2 is set on a straight line that passes through the midpoint of the line segment connecting the speaker 131L as a first speaker and the speaker 131R as a second speaker and is perpendicular to that line segment.
[0032]
Here, the distance between that line segment and the sound collection position P2 is a length FL (hereinafter also referred to as the "distance FL"). Further, the distance between the sound collection position P2 and the straight line connecting the speaker 131SL and the speaker 131SR is a length RL (= LL − FL; hereinafter also referred to as the "distance RL").
[0033]
The installation width WID and the distances FL and RL are all unknown at the start of the operation of the "delay time setting mode" of the acoustic device 100A.
[0034]
The sound collection position P2 is separated from the assumed listening position P1 by a distance d in the direction opposite to the traveling direction, and by a distance (WID / 4) toward the passenger seat side in the direction orthogonal to the traveling direction.
Here, the distance d is half the thickness of the head of an average person, and is a value that can be accurately determined in advance.
[0035]
The positional relationship assumed in the first embodiment described above holds accurately regardless of the type of vehicle.
[0036]
Therefore, as shown in FIG. 4, when the distance from the sound collection position P2 to each of the speakers 131j (j = L to SR) is denoted MDSj, the distances MDSj are expressed, based on the positional relationship shown in FIG. 3, by the following equations (1) to (4).
MDSL = √[(WID/2)² + FL²] ... (1)
MDSR = √[(WID/2)² + FL²] ... (2)
MDSSL = √[(WID/2)² + RL²] ... (3)
MDSSR = √[(WID/2)² + RL²] ... (4)
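Equations (1) to (4) can be checked numerically. In the following sketch the geometry values are illustrative assumptions for the example, not values from the patent.

```python
import math

# Illustrative geometry in metres (assumed values, not from the patent):
# installation width, and front/rear offsets of the sound collection position P2.
WID, FL, RL = 1.2, 0.9, 0.7

# Equations (1)-(4): distances from P2 to each of the speakers 131L to 131SR.
MDS = {
    "L":  math.hypot(WID / 2, FL),  # eq. (1)
    "R":  math.hypot(WID / 2, FL),  # eq. (2)
    "SL": math.hypot(WID / 2, RL),  # eq. (3)
    "SR": math.hypot(WID / 2, RL),  # eq. (4)
}
```

Because P2 lies on the centre line of the rectangle, the two front distances coincide and the two rear distances coincide, as equations (1)/(2) and (3)/(4) show.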
[0037]
Here, each of the distances MDSj is a measurable value obtained by measuring the propagation delay time of the sound from the speaker 131j to the sound collection position P2. Using such measured values, the distance FL can be expressed by the following equation (5) or (6).
FL = √[MDSL² − (WID/2)²] ... (5)
FL = √[MDSR² − (WID/2)²] ... (6)
[0038]
Further, the distance RL can be expressed by the following equation (7) or (8).
RL = √[MDSSL² − (WID/2)²] ... (7)
RL = √[MDSSR² − (WID/2)²] ... (8)
[0039]
Further, the sine (sin θ) of the angle θ formed by the line segment connecting the speaker 131L and the speaker 131R and the line segment connecting the sound collection position P2 and the speaker 131R is expressed by the following equation (9).
sin θ = FL / MDSR = √[MDSR² − (WID/2)²] / MDSR ... (9)
[0040]
As shown in FIG. 5, when the distance from the assumed listening position P1 to each of the speakers 131j (j = L to SR) is denoted HDSj, the distances HDSj are expressed, based on the positional relationship of FIG. 5, by the following equations (10) to (13).
HDSL = √[(FL − d)² + (3·WID/4)²] ... (10)
HDSR = √[(FL − d)² + (WID/4)²] ... (11)
HDSSL = √[(RL + d)² + (3·WID/4)²] ... (12)
HDSSR = √[(RL + d)² + (WID/4)²] ... (13)
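Combining equations (6) and (8) with equations (10) to (13), the distances HDS follow from the measured distances MDS and the installation width WID alone. A minimal sketch; the function name and the example figures are assumptions, not from the patent.

```python
import math

def estimate_hds(mds, wid, d):
    """Estimate the distances HDS from the assumed listening position P1
    to each speaker, given the measured distances MDS from the sound
    collection position P2 (equations (6), (8) and (10)-(13))."""
    fl = math.sqrt(mds["R"] ** 2 - (wid / 2) ** 2)   # eq. (6)
    rl = math.sqrt(mds["SR"] ** 2 - (wid / 2) ** 2)  # eq. (8)
    return {
        "L":  math.hypot(fl - d, 3 * wid / 4),  # eq. (10)
        "R":  math.hypot(fl - d, wid / 4),      # eq. (11)
        "SL": math.hypot(rl + d, 3 * wid / 4),  # eq. (12)
        "SR": math.hypot(rl + d, wid / 4),      # eq. (13)
    }

# Illustrative check with assumed geometry: WID = 1.2 m, FL = 0.9 m,
# RL = 0.7 m, d = 0.09 m (values not from the patent).
mds = {"R": math.hypot(0.6, 0.9), "SR": math.hypot(0.6, 0.7)}
hds = estimate_hds(mds, wid=1.2, d=0.09)
```

Note that only MDSR and MDSSR are needed here, which matches the description of the first embodiment below.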
[0041]
Therefore, considering the relationships of equations (5) to (8) for the distances FL and RL in equations (10) to (13), the distances HDSj can be calculated from the measured values of the distances MDSj if the installation width WID is known.
[0042]
Although the installation width WID is a value determined by the vehicle type, according to knowledge obtained by the inventor as a result of research, it can be accurately estimated, without knowing the vehicle type, from the value R of the ratio between the distance MDSR and the distance MDSSR (R = MDSR / MDSSR).
An example of the empirically obtained correspondence between the value R and the installation width WID is shown in FIG.
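The correspondence of FIG. 6 can be realized as a lookup table with interpolation. The actual empirical values are not reproduced in the text, so the table entries below are invented placeholders; only the lookup mechanism is illustrated.

```python
import bisect

# Hypothetical stand-in for the installation width table of FIG. 6:
# monotone pairs of (R = MDSR/MDSSR, WID in metres). The real values
# are empirical and are NOT given in the patent text.
TABLE_R   = [0.70, 0.85, 1.00, 1.15, 1.30]
TABLE_WID = [1.05, 1.15, 1.25, 1.35, 1.45]

def estimate_wid(mdsr, mdssr):
    """Estimate the installation width WID from the ratio R = MDSR / MDSSR
    by linear interpolation in the table, clamped at the table ends."""
    r = mdsr / mdssr
    if r <= TABLE_R[0]:
        return TABLE_WID[0]
    if r >= TABLE_R[-1]:
        return TABLE_WID[-1]
    i = bisect.bisect_left(TABLE_R, r)
    t = (r - TABLE_R[i - 1]) / (TABLE_R[i] - TABLE_R[i - 1])
    return TABLE_WID[i - 1] + t * (TABLE_WID[i] - TABLE_WID[i - 1])
```

A simple table lookup keeps the estimation vehicle-independent, as the text emphasizes: only the measured ratio R is needed at run time.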
[0043]
Returning to FIG. 1, the display unit 150 includes (i) a display device 151 such as a liquid crystal display panel, an organic EL (Electro Luminescence) panel, or a PDP (Plasma Display Panel), (ii) a display controller, such as a graphic renderer, that controls the entire display unit 150 based on display control data sent from the control unit 110A, and (iii) a display image memory that stores display image data. The display unit 150 displays operation guidance information and the like under the control of the control unit 110A.
[0044]
The operation input unit 160 is configured as a key unit provided on the main body of the acoustic device 100A, or as a remote input device or the like including such a key unit. Here, a touch panel provided on the display device 151 of the display unit 150 can be used as the key unit provided on the main body. Alternatively, a configuration that accepts voice input can be adopted in place of the configuration having a key unit.
[0045]
When the user operates the operation input unit 160, the operation contents of the acoustic device 100A are set. For example, using the operation input unit 160, the user issues an instruction to set the delay times between the speakers, an instruction to reproduce audio content, and the like. Such input contents are sent from the operation input unit 160 to the control unit 110A as operation input data IPD.
[0046]
As described above, the control unit 110A centrally controls the entire acoustic device 100A. As
shown in FIG. 7, the control unit 110A includes a control processing unit 111A, a channel signal
processing unit 112A as an adjustment unit, and an output signal selection unit 113A. The
control unit 110A also includes an analog conversion unit 114A and a volume adjustment unit
115A. Further, the control unit 110A includes a test signal generating unit 116 as a test sound
output unit.
[0047]
The control processing unit 111A controls the channel signal processing unit 112A, the output signal selection unit 113A, the volume adjustment unit 115A, and the test signal generation unit 116, based on commands input to the operation input unit 160 and the sound collection result by the sound collection unit 140. The control processing unit 111A also controls the drive unit 120 and the display unit 150. Details of the control processing unit 111A will be described later.
[0048]
The channel signal processing unit 112A processes the content data CTD sent from the drive unit
120, and adjusts the audio output timing between the speakers 131L to 131SR at the time of
reproduction of the audio content. As shown in FIG. 8, the channel signal processing unit 112A
includes a channel separation unit 210A and a signal delay unit 220A as a delay unit.
[0049]
The channel separation unit 210A receives the content data CTD from the drive unit 120. The channel separation unit 210A then expands the content data CTD in accordance with the content reproduction control command CSC from the control processing unit 111A, and generates a digital sound data signal, which is an audio signal. Subsequently, the channel separation unit 210A analyzes the generated digital sound data signal and separates it, according to the channel designation information included in the digital sound data signal, into the signals to be supplied to the speakers 131L, 131R, 131SL, and 131SR. The signals separated in this manner are sent to the signal delay unit 220A as separated channel signals SCDL, SCDR, SCDSL, and SCDSR.
[0050]
The signal delay unit 220A delays the separated channel signals SCDL to SCDSR sent from the
channel separation unit 210A by a predetermined time according to the delay control command
DLC from the control processing unit 111A. As shown in FIG. 9, the signal delay unit 220A
having such a function includes four delay devices 221L to 221SR.
[0051]
The delay units 221L to 221SR delay the separated channel signals SCDL to SCDSR by the delay
times DLL to DLSR designated by the individual delay control commands DLCL to DLCSR in the
delay control command DLC. The delay result is sent to the output signal selection unit 113A as
channel processing signals PCDL to PCDSR.
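The patent does not specify how the delay devices 221L to 221SR are realized; one common realization for digital audio is a per-channel sample delay line, sketched here under that assumption (class and variable names are illustrative).

```python
from collections import deque

class DelayDevice:
    """Minimal integer-sample delay line: a hypothetical stand-in for one
    of the delay devices 221L to 221SR (implementation assumed, not
    specified in the patent)."""

    def __init__(self, delay_samples):
        # Pre-filled with zeros: the output is silence until the delay elapses.
        self.buf = deque([0.0] * delay_samples)

    def process(self, sample):
        self.buf.append(sample)
        return self.buf.popleft()

# A delay of 2 samples shifts the signal by two positions.
dev = DelayDevice(2)
out = [dev.process(x) for x in [1.0, 2.0, 3.0, 4.0]]  # [0.0, 0.0, 1.0, 2.0]
```

In such a realization, the delay time DL designated by the delay control command would be converted to a sample count, e.g. round(DL × sample rate).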
[0052]
Returning to FIG. 7, the output signal selection unit 113A receives the channel processing signals PCDL to PCDSR from the signal delay unit 220A and the test sound signal SGD from the test signal generation unit 116, which will be described later. Then, in accordance with the output signal selection command ODS from the control processing unit 111A, the output signal selection unit 113A selects, for each channel, whether to supply the channel processing signal PCDL to PCDSR, the test sound signal SGD, or no signal to the analog conversion unit 114A. As shown in FIG. 10, the output signal selection unit 113A having such a function includes four switch elements 113L to 113SR.
[0053]
Each of the switch elements 113L to 113SR has an A terminal and a B terminal as input terminals and a C terminal as an output terminal. The A terminal is connected to the signal delay unit 220A, and the B terminal is connected to the test signal generation unit 116. The C terminal is connected to the analog conversion unit 114A. Each of the switch elements 113L to 113SR receives the corresponding channel processing signal PCDL to PCDSR at its A terminal and the test sound signal SGD at its B terminal. Then, in accordance with the individual output selection commands ODSL to ODSSR in the output signal selection command ODS from the control processing unit 111A, each switch element either connects its A terminal to its C terminal, connects its B terminal to its C terminal, or leaves its C terminal connected to neither the A terminal nor the B terminal. The selected signals (including no signal) are sent from the C terminals of the switch elements 113L to 113SR to the analog conversion unit 114A as sound output selection signals PBDL to PBDSR.
[0054]
Returning to FIG. 7, the analog conversion unit 114A converts the sound output selection signals PBDL to PBDSR, which are digital signals sent from the output signal selection unit 113A, into analog signals. The analog conversion unit 114A includes four identically configured DA (Digital to Analog) converters corresponding to the four digital signals. The analog signals PBSL to PBSSR, which are the conversion results of the analog conversion unit 114A, are sent to the volume adjustment unit 115A.
[0055]
The volume adjuster 115A receives the analog signals PBSL to PBSSR from the analog converter
114A. Then, the volume adjuster 115A adjusts the volume of each of the analog signals PBSL to
PBSSR in accordance with the volume adjustment command VLC from the control processor
111A. The adjustment result is output to the sound output units 130L to 130SR as the audio
output signals AOSL to AOSSR.
[0056]
Upon receiving a test sound signal generation command SGC including a speaker designation from the control processing unit 111A, the test signal generation unit 116 generates the test sound signal SGD. The test sound signal SGD generated in this way is sent to the output signal selection unit 113A.
[0057]
The control processing unit 111A implements the functions of the acoustic device 100A while controlling the other components described above. As shown in FIG. 11, the control processing unit 111A includes a first estimation unit 251A as first estimation means, an installation width estimation unit 252A as installation width estimation means, and a second estimation unit 253A as second estimation means. Further, the control processing unit 111A includes a control unit 254A as delay control means.
[0058]
Under the control of the control unit 254A, the first estimation unit 251A estimates the distances MDSL to MDSSR (see FIG. 4) from the sound collection position P2, which is the installation position of the microphone 141, to each of the speakers 131L to 131SR, based on the sound collection result data AAD from the sound collection unit 140. The estimation of the distances MDSL to MDSSR by the first estimation unit 251A is started upon receipt of an estimation start command DMC from the control unit 254A.
[0059]
First, upon receiving a test sound signal generation command SGC including the designation of the first measurement target speaker from the control unit 254A, the first estimation unit 251A temporarily stores the time TR at which the command was received, and starts collecting the sound collection result data AAD. The first estimation unit 251A then analyzes the sound collection result data AAD and temporarily stores the time TP at which the test sound output from the first measurement target speaker reached the microphone 141.
[0060]
The first estimation unit 251A converts the value (TP − TR), obtained by subtracting the time TR from the time TP, into a distance, and stores the conversion result as the distance from the microphone 141 to the first measurement target speaker. After that, the first estimation unit 251A sends a report MDR indicating that the processing for the first measurement target speaker is finished to the control unit 254A.
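The conversion of (TP − TR) into a distance is a time-of-flight calculation. A sketch, assuming a simple threshold detector over the recorded samples and a nominal speed of sound; the patent does not specify the detection method, so these choices are illustrative.

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed nominal value
SAMPLE_RATE = 48_000    # Hz, assumed

def estimate_distance(samples, threshold=0.1):
    """Estimate the mic-to-speaker distance from recorded samples, where
    recording starts at the command time TR. The first sample whose
    magnitude exceeds the threshold is taken as the arrival time TP."""
    for n, s in enumerate(samples):
        if abs(s) >= threshold:
            tof = n / SAMPLE_RATE        # TP - TR, in seconds
            return tof * SPEED_OF_SOUND  # distance in metres
    raise ValueError("test sound not detected")
```

In practice a cross-correlation against the known test signal would be more robust than a raw threshold, but the distance conversion itself is the same.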
[0061]
Subsequently, upon receiving a test sound signal generation command SGC including the designation of the next measurement target speaker from the control unit 254A, the first estimation unit 251A performs the same processing as described above, and estimates and stores the distance from the microphone 141 to the next measurement target speaker. Thereafter, the first estimation unit 251A repeats the same processing until the distance estimation from the microphone 141 to every speaker is completed.
[0062]
When the distance estimation from the microphone 141 to every speaker is completed, the first estimation unit 251A sends the distances from the microphone 141 to the speakers, as a first distance MDS (that is, the distances MDSL to MDSSR), to the installation width estimation unit 252A and to the second estimation unit 253A. In the first embodiment, only the distances MDSR and MDSSR of the first distance MDS are sent to the installation width estimation unit 252A.
[0063]
The installation width estimation unit 252A includes an installation width table 259 in which the
relationship between the value R and the installation width WID shown in FIG. 6 described above
is registered. The installation width estimation unit 252A receives the first distance MDS from
the first estimation unit 251A. Subsequently, in the first embodiment, the installation width
estimation unit 252A calculates the value R (= MDSR / MDSSR) based on the distances MDSR and
MDSSR. Then, the installation width estimation unit 252A estimates the installation width WID
corresponding to the value R with reference to the installation width table 259. The installation
width WID thus estimated is sent to the second estimation unit 253A.
[0064]
The second estimation unit 253A receives the first distance MDS from the first estimation unit
251A and the installation width WID from the installation width estimation unit 252A. Then, the
second estimation unit 253A estimates the distances HDSL to HDSSR (see FIG. 5) from the
assumed listening position P1 to each of the speakers 131L to 131SR based on the first distance
MDS and the installation width WID.
[0065]
The distances HDSL to HDSSR are estimated by performing calculations using the received first distance MDS and installation width WID, in accordance with equations (10) to (13) while taking into account the relationships of equations (5) to (8) described above. The distances HDSL to HDSSR thus estimated are sent to the control unit 254A as a second distance HDS.
[0066]
The control unit 254A controls operation in two modes of the acoustic device 100A: a "reproduction mode" and a "delay time setting mode". Here, the "reproduction mode" is a mode in which audio content is read out from the compact disc CD and reproduced as an audio signal. The "delay time setting mode" is a mode in which the test sound signal SGD is generated and measured, and the delay time corresponding to each speaker is set in order to perform time alignment correction of the sound output timings of the sound output units.
[0067]
Control unit 254A analyzes operation input data IPD received from operation input unit 160, and
performs operation control of either “reproduction mode” or “delay time setting mode”.
More specifically, the control unit 254A normally controls the operation of the "reproduction
mode". On the other hand, when receiving a delay time setting command from the operation
input unit 160, the controller 254A controls the operation of the “delay time setting mode”.
Then, when the control of the operation in the “delay time setting mode” is finished, the
control unit 254A returns to the operation control in the “reproduction mode”.
[0068]
When controlling the operation of the “delay time setting mode”, the control unit 254A
controls delay time measurement for each of the sound output units 130L to 130SR.
[0069]
In controlling this delay time measurement, the control unit 254A first sends a command to the output signal selection unit 113A to select the test sound signal SGD.
More specifically, an output signal selection command ODS is sent to the output signal selection unit 113A designating that the B terminal and the C terminal of the switch element corresponding to the first measurement target speaker be connected, and that the C terminals of the other switch elements be connected to neither their A terminals nor their B terminals.
[0070]
Subsequently, the control unit 254A sends a test sound signal generation command SGC, indicating that the test sound signal SGD should be generated, to the test signal generation unit 116, and also sends it to the first estimation unit 251A.
[0071]
When the control unit 254A receives, from the first estimation unit 251A, a report MDR indicating that the distance estimation between the first measurement target speaker and the microphone 141 is completed, it makes the settings for the next sound output unit to be measured.
More specifically, an output signal selection command ODS is sent to the output signal selection unit 113A designating that the B terminal and the C terminal of the switch element corresponding to the next measurement target sound output unit be connected, and that the C terminals of the other switch elements be connected to neither their A terminals nor their B terminals.
Subsequently, as in the delay time measurement for the first measurement target speaker, the control unit 254A sends a test sound signal generation command SGC, indicating that the test sound signal SGD should be generated, to the test signal generation unit 116.
[0072]
Thereafter, the control unit 254A performs the same control as described above on the output signal selection unit 113A, the test signal generation unit 116, and the first estimation unit 251A until the distance estimation from the microphone 141 to every speaker is completed.
[0073]
Further, when the control unit 254A receives the second distance HDS from the second estimation unit 253A, it converts the distances HDSL, HDSR, HDSSL, and HDSSR included therein into time. Then, the delay times DLL to DLSR of the audio output signals AOSL to AOSSR supplied to the sound output units 130L to 130SR are calculated. The control unit 254A internally stores the calculation result and sends it to the signal delay unit 220A as a delay control command DLC.
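As a sketch of this distance-to-delay conversion: the rule below, which delays each channel so that sound from every speaker reaches the assumed listening position at the same instant as that from the farthest speaker, is the usual time-alignment convention and is assumed here; the source states only that the distances are converted into time. The function name, the speed-of-sound constant, and the sample distances are illustrative.

```python
# Assumed time-alignment rule: each channel is delayed by the difference
# between the farthest speaker's propagation time and its own, so that
# all direct sounds arrive at the listening position simultaneously.

SPEED_OF_SOUND = 343.0  # m/s, assumed value

def alignment_delays(hds: dict) -> dict:
    """Map per-channel listener-to-speaker distances (m) to delays (s)."""
    times = {ch: d / SPEED_OF_SOUND for ch, d in hds.items()}
    latest = max(times.values())  # arrival time from the farthest speaker
    return {ch: latest - t for ch, t in times.items()}

# Illustrative distances HDSL..HDSSR (not values from the source).
hds = {"L": 1.0, "R": 1.2, "SL": 1.8, "SR": 2.0}
for ch, dl in alignment_delays(hds).items():
    print(f"DL{ch}: {dl * 1000:.2f} ms")
```

With these inputs the farthest speaker (SR) gets zero delay and the nearer speakers are delayed by their distance shortfall divided by the speed of sound.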
[0074]
When the delay times in the signal delay unit 220A have thus been set, the control unit 254A ends the operation control of the “delay time setting mode”.
[0075]
When controlling the operation in the “reproduction mode”, the control unit 254A sends an output signal selection instruction ODS to the output signal selection unit 113A specifying that the A terminal and the C terminal should be electrically connected for all of the switch elements 113L to 113SR.
As a result, the channel processing signals PCDL to PCDSR from the signal delay unit 220A are
supplied toward the analog conversion unit 114A as the sound output selection signals PBDL to
PBDSR via the output signal selection unit 113A.
[0076]
In addition, the control unit 254A causes the display unit 150 to display a guidance screen for
assisting the user in designating the audio content to be reproduced when controlling the
operation in the “reproduction mode”. Then, when a reproduction instruction specifying audio
content is input from the operation input unit 160, the control unit 254A controls the drive unit
120 to control data reading of the reproduction content.
[0077]
Further, when controlling the operation in the “reproduction mode”, the control unit 254A
controls the channel separation unit 210A to separate the content data CTD into separated
channel signals SCDL to SCDSR.
[0078]
Further, when controlling the operation in the “reproduction mode”, the control unit 254A controls the volume adjustment unit 115A to adjust the volume of the sound output from the speakers 131L to 131SR of the sound output units 130L to 130SR.
When controlling the output volume, the control unit 254A generates a volume adjustment instruction VLC, based on the volume specification input to the operation input unit 160 and the noise level obtained from the result of sound collection by the sound collection unit 140, and sends it to the volume adjustment unit 115A.
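The source does not specify how the volume specification and the measured noise level are combined into the instruction VLC, so the proportional rule below is purely a hypothetical sketch; the function name and all numeric constants are assumptions for illustration.

```python
# Hypothetical sketch of noise-compensated volume adjustment. The rule,
# the function name, and the constants are illustrative assumptions; the
# source only says VLC is derived from the user's volume setting and the
# noise level measured via the sound collection unit.

def volume_adjustment(user_volume_db: float,
                      noise_level_db: float,
                      quiet_floor_db: float = 40.0,
                      compensation: float = 0.5) -> float:
    """Return the output volume in dB after noise compensation."""
    excess_noise = max(0.0, noise_level_db - quiet_floor_db)
    return user_volume_db + compensation * excess_noise

print(volume_adjustment(60.0, 70.0))  # noisy cabin: volume raised to 75.0
print(volume_adjustment(60.0, 30.0))  # quiet cabin: volume stays at 60.0
```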
[0079]
<Operation> Next, the operation of the acoustic device 100A configured as described above will
be described, focusing mainly on the operation in the “delay time setting mode”.
[0080]
When the user inputs a delay time setting command to the operation input unit 160, the
operation of the "delay time setting mode" of the acoustic device 100A is started.
Thus, when the operation of the “delay time setting mode” is started, first, in step S11 of FIG.
12, the selection of the first speaker to be measured is performed.
[0081]
In the first embodiment, the control unit 254A of the control processing unit 111A selects, for
example, the L speaker 131L as the speaker to be the first measurement target. Then, the control
unit 254A performs setting processing of a signal path for measuring the distance between the L
speaker 131L and the microphone 141. In the signal path setting process in step S11, the control unit 254A issues, toward the output signal selection unit 113A (see FIG. 10), an output signal selection instruction ODS designating that the B terminal and the C terminal of the switch element 113L should be electrically connected, and that the C terminals of the other switch elements 113R to 113SR should be conductive with neither the A terminal nor the B terminal.
[0082]
After completing the setting of the above signal path, the control unit 254A issues an estimation
start instruction DMC toward the first estimation unit 251A of the control processing unit 111A.
[0083]
Next, in step S12, the control unit 254A sends a test signal generation instruction SGC to the
effect that the test voice signal SGD should be generated, to the test signal generation unit 116,
and sends it to the first estimation unit 251A.
The test signal generation unit 116 having received the test signal generation instruction SGC
generates a test voice signal SGD. As a result, the test sound is output from the L speaker 131L
via the output signal selection unit 113A, the analog conversion unit 114A, and the volume
adjustment unit 115A. Further, the first estimation unit 251A that has received the test signal
generation command SGC temporarily stores the time TR at which the command is received, and
starts collecting the sound collection result data AAD.
[0084]
Next, in step S13, the first estimation unit 251A performs a process of estimating the distance
MDSL from the sound collection position P2 to the L speaker 131L. In the process of estimating
the distance MDSL, the first estimation unit 251A analyzes the sound collection result data AAD,
and stores the time TP when the test voice output from the L speaker 131L reaches the
microphone 141. Then, the first estimation unit 251A converts the value (TP − TR), obtained by subtracting the time TR from the time TP, into a distance, and stores the conversion result as the distance MDSL from the microphone 141 to the L speaker 131L. After that, the first estimation unit 251A sends a report MDR, indicating that the process related to the L speaker 131L is finished, to the control unit 254A, and the process of step S13 ends.
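The time-of-flight conversion in step S13 can be sketched as follows; the speed-of-sound constant and the function name are illustrative assumptions, not values from the source.

```python
# Sketch of the step S13 time-of-flight conversion: the first estimation
# unit records the emission time TR, detects the arrival time TP of the
# test sound at the microphone, and converts (TP - TR) into a distance.
# 343 m/s (air at about 20 degrees C) is an assumed value.

SPEED_OF_SOUND = 343.0  # m/s

def estimate_distance(tr: float, tp: float) -> float:
    """Convert the propagation time (TP - TR) into a distance in metres."""
    if tp < tr:
        raise ValueError("arrival time TP must not precede emission time TR")
    return (tp - tr) * SPEED_OF_SOUND

# Example: the test sound arrives 4 ms after the generation command.
print(round(estimate_distance(tr=0.0, tp=0.004), 3))  # → 1.372
```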
[0085]
Next, in step S14, the control unit 254A determines whether the distance measurement for all
the speakers 131L to 131SR is completed. If the result of this determination is negative (step
S14: N), the process proceeds to step S15.
[0086]
In step S15, the process of setting a signal path for distance measurement regarding the next measurement target, for example the R speaker 131R, is performed. In the signal path setting process in step S15, the control unit 254A issues, to the output signal selection unit 113A, an output signal selection instruction ODS designating that the B terminal and the C terminal of the switch element 113R should be electrically connected, and that the C terminals of the other switch elements 113L, 113SL, and 113SR should be conductive with neither the A terminal nor the B terminal.
[0087]
When the process of step S15 ends, the process returns to step S12. Thereafter, the processes of
steps S12 to S15 are repeated until the result of the determination in step S14 is affirmative.
[0088]
When the distance measurement for all the speakers 131L to 131SR is completed and the result of the determination in step S14 is affirmative (step S14: Y), the first estimation unit 251A sends the distances from the microphone 141 to the respective speakers, as the first distance MDS, toward the installation width estimation unit 252A and the second estimation unit 253A. Thereafter, the process proceeds to step S16.
[0089]
In step S16, the delay time measurement process is performed. In this delay time measurement process, as shown in FIG. 13, first, in step S21, the installation width estimation unit 252A estimates the shape of the vehicle on which the acoustic device 100A is mounted from the relationship between the distance MDSR and the distance MDSSR included in the first distance MDS, and estimates, from the estimated vehicle shape, the installation width WID, which is the installation interval between the L speaker 131L and the R speaker 131R. The installation width WID thus estimated is sent to the second estimation unit 253A. After this, the process of step S21 ends.
[0090]
Next, in step S22, the second estimation unit 253A estimates the distances HDSL to HDSSR (see
FIG. 5) from the assumed listening position P1 to each of the speakers 131L to 131SR. When
estimating the distance, the second estimating unit 253A performs the calculation according to
the equations (10) to (13) described above using the first distance MDS and the installation width
WID. The distances HDSL to HDSSR thus calculated are sent to the control unit 254A as a second
distance HDS. Thereafter, the process proceeds to step S23.
[0091]
In step S23, the control unit 254A that has received the second distance HDS converts the
distances HDSL, HDSR, HDSSL, and HDSSR included in the second distance HDS into time. After
conversion, when the process of step S23 ends, the process of step S16 ends, and the process
proceeds to step S17 of FIG.
[0092]
In step S17, the control unit 254A analyzes the delay measurement results regarding the sound output units 130L to 130SR, and calculates the delay times DLL to DLSR of the audio output signals AOSL to AOSSR supplied to the respective sound output units 130L to 130SR. Then, the control unit 254A internally stores the calculation result and sends it to the signal delay unit 220A as a delay control command DLC.
[0093]
When the process of step S17 is completed, the control unit 254A sends an output signal selection instruction ODS to the output signal selection unit 113A designating that the A terminal and the C terminal should be conductive for all of the switch elements 113L to 113SR. As a result, the channel processing signals PCDL to PCDSR sent from the signal delay unit 220A are supplied toward the analog conversion unit 114A as the sound output selection signals PBDL to PBDSR via the output signal selection unit 113A. Thus, when the “delay time setting mode” ends, the acoustic device 100A resumes the operation of the “reproduction mode”.
[0094]
The control unit 254A causes the display unit 150 to display a guidance screen for supporting
specification of audio content to be reproduced by the user in the “reproduction mode”. Then,
when a reproduction instruction specifying audio content is input to the operation input unit
160, the control unit 254A controls the drive unit 120 to control data readout of the audio
content.
[0095]
In the “reproduction mode”, control unit 254A controls channel separation unit 210A to
separate content data CTD from drive unit 120 into separated channel signals SCDL to SCDSR.
[0096]
Further, at the time of the “reproduction mode”, the control unit 254A controls the volume
adjustment unit 115A to adjust the output volume from the speakers 131L to 131SR of the
sound output units 130L to 130SR.
[0097]
Under the control of the control unit 254A in the “reproduction mode” described above, the audio content is reproduced, and the reproduced sound is provided to the listener, who is the user of the acoustic device 100A.
[0098]
As described above, in the first embodiment, the microphone 141 for collecting the test voice is
disposed at the middle point of the line connecting the headrests of the driver's seat and the
front passenger's seat.
Therefore, the test voices output from the four speakers of the L speaker 131L, the R speaker
131R, the SL speaker 131SL, and the SR speaker 131SR are directly collected by the microphone
141.
Therefore, in the time alignment correction, it is possible to perform measurement using the
direct sound of the test sound for all the speakers 131L to 131SR.
[0099]
In the first embodiment, the installation width estimation unit 252A estimates the shape of the vehicle equipped with the acoustic device 100A from the relationship between the distance MDSR and the distance MDSSR, and estimates, from the estimated vehicle shape, the installation width WID, which is the installation interval between the L speaker 131L and the R speaker 131R.
Then, the second estimation unit 253A calculates the second distance HDS from the first distance MDS using the installation width WID. Generally, in order to obtain the second distance from the first distance MDS in the vehicle CR, set values such as the vehicle shape need to be known. In the first embodiment, however, since the installation width WID is estimated as described above, time alignment correction can be performed accurately merely by setting the positional relationship between the installation position P2 of the microphone 141 and the assumed listening position P1 to a predetermined positional relationship, without investigating the shape or the like of the vehicle.
[0100]
Therefore, according to the first embodiment, when a user listens to music or the like, it is
possible to easily carry out an appropriate audio delay correction corresponding to the audio
propagation delay time.
[0101]
Second Embodiment Next, a second embodiment of the present invention will be described
mainly with reference to FIGS. 14 to 23 and with reference to other drawings as appropriate.
Also in the second embodiment, as in the above-described first embodiment, an acoustic device
mounted on a vehicle CR (see FIG. 15) will be illustrated and described.
[0102]
<Configuration> FIG. 14 is a block diagram showing a schematic configuration of an acoustic
device 100B according to the second embodiment. Hereinafter, the configuration of the acoustic
device 100B will be described focusing mainly on the differences from the acoustic device 100A
according to the first embodiment.
[0103]
As shown in FIG. 14, the acoustic device 100B is different from the acoustic device 100A in that
the acoustic device 100B includes a control unit 110B instead of the control unit 110A and
further includes a sound output unit 130C.
[0104]
The sound output unit 130C includes a center speaker 131C (hereinafter also referred to as "C speaker") as a third speaker, and an amplifier for amplifying the sound output signal AOSC received from the control unit 110B.
Similar to the sound output units 130L to 130SR, the sound output unit 130C reproduces and
outputs a test sound signal, music, etc. under the control of the control unit 110B.
[0105]
In the second embodiment, as shown in FIG. 15, the C speaker 131C of the sound output unit 130C is disposed at the midpoint of the line connecting the L speaker 131L and the R speaker 131R, in the dashboard at the front center. The C speaker 131C is disposed to face the rear.
[0106]
Here, the positional relationship among the speakers 131C to 131SR assumed in the second
embodiment, the assumed listening position P1 and the sound collecting position P2 will be
described with reference to FIGS.
[0107]
As shown in FIG. 16, in the second embodiment, the speakers 131L to 131SR are arranged at the same positions as in the first embodiment.
Further, the positional relationship between the assumed listening position P1 and the sound
collecting position P2 is also the same as in the first embodiment. Further, as shown in FIG. 16,
the speaker 131C is disposed near the midpoint of a line connecting the speaker 131L as the
first speaker and the speaker 131R as the second speaker.
[0108]
The positional relationship assumed in the second embodiment described above holds accurately regardless of the vehicle type.
[0109]
Now, as shown in FIG. 17, let MDSk denote the distance from the sound collection position P2 to each of the speakers 131k (k = C to SR). The distances MDSj (j = L to SR) within the distances MDSk are expressed, as in the first embodiment, by the above-mentioned equations (1) to (4).
Further, the distance MDSC is expressed by the following equation (14).

MDSC = FL   (14)
[0110]
As a result, the sine (sin θ) of the angle θ formed by the line segment connecting the speaker 131L and the speaker 131R and the line segment connecting the sound collection position P2 and the speaker 131R is expressed by the following equation (15).

sin θ = MDSC / MDSR   (15)
[0111]
Further, the installation width WID is expressed by the following equation (16).

WID = 2 × (MDSR^2 − MDSC^2)^(1/2)   (16)
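Equations (14) to (16) can be checked with a short numeric sketch; the function names and the sample values are illustrative, not taken from the source.

```python
# Sketch of equations (15) and (16): with the microphone at P2 directly
# behind the C speaker (so MDSC = FL, equation (14)), the lateral offset
# of the R speaker from P2 is sqrt(MDSR^2 - MDSC^2), and the L-R
# installation width WID is twice that offset.

import math

def installation_width(mdsc: float, mdsr: float) -> float:
    """Equation (16): WID = 2 * sqrt(MDSR^2 - MDSC^2)."""
    if mdsr < mdsc:
        raise ValueError("MDSR cannot be smaller than MDSC")
    return 2.0 * math.sqrt(mdsr ** 2 - mdsc ** 2)

def sin_theta(mdsc: float, mdsr: float) -> float:
    """Equation (15): sine of the angle between the L-R and P2-R segments."""
    return mdsc / mdsr

# Illustrative values: MDSC = 0.9 m, MDSR = 1.5 m (a 3-4-5 triangle).
print(round(installation_width(0.9, 1.5), 6))  # → 2.4
print(round(sin_theta(0.9, 1.5), 6))           # → 0.6
```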
[0112]
Therefore, as shown in FIG. 18, when the distance from the assumed listening position P1 to each of the speakers 131k (k = C to SR) is HDSk, the distances HDSk are expressed, based on the positional relationship of FIG. 18, by the following equations (17) to (21).

HDSC = [(FL − d)^2 + (WID/4)^2]^(1/2)   (17)
HDSL = [(FL − d)^2 + (3 × WID/4)^2]^(1/2)   (18)
HDSR = [(FL − d)^2 + (WID/4)^2]^(1/2)   (19)
HDSSL = [(RL + d)^2 + (3 × WID/4)^2]^(1/2)   (20)
HDSSR = [(RL + d)^2 + (WID/4)^2]^(1/2)   (21)
[0113]
Therefore, by substituting equation (14) for the distance FL in equations (17) to (21), equation (8) for the distance RL, and equation (16) for the installation width WID, the distances HDSk can be calculated using the measured values of the distances MDSk.
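This calculation can be sketched as follows; RL (equation (8)) and the offset d are treated here as given inputs, and all names and numeric values are illustrative assumptions.

```python
# Sketch of equations (17)-(21): distances from the assumed listening
# position P1 to each speaker, given FL (= MDSC by equation (14)), the
# rear distance RL (equation (8), treated here as a known input), the
# P1-P2 offset d, and the installation width WID from equation (16).

import math

def second_distances(fl: float, rl: float, d: float, wid: float) -> dict:
    """Return HDSC, HDSL, HDSR, HDSSL and HDSSR per equations (17)-(21)."""
    return {
        "HDSC":  math.hypot(fl - d, wid / 4),      # (17)
        "HDSL":  math.hypot(fl - d, 3 * wid / 4),  # (18)
        "HDSR":  math.hypot(fl - d, wid / 4),      # (19)
        "HDSSL": math.hypot(rl + d, 3 * wid / 4),  # (20)
        "HDSSR": math.hypot(rl + d, wid / 4),      # (21)
    }

# Illustrative inputs in metres, not values from the source.
hds = second_distances(fl=0.9, rl=1.1, d=0.3, wid=1.2)
for name, value in hds.items():
    print(f"{name} = {value:.3f} m")
```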
[0114]
Returning to FIG. 14, as described above, the control unit 110B integrally controls the entire acoustic device 100B.
As shown in FIG. 19, compared with the control unit 110A in the first embodiment, the control unit 110B differs in that it includes a control processing unit 111B instead of the control processing unit 111A, a channel signal processing unit 112B instead of the channel signal processing unit 112A, and an output signal selection unit 113B instead of the output signal selection unit 113A. The control unit 110B further differs from the control unit 110A according to the first embodiment in that it includes an analog conversion unit 114B instead of the analog conversion unit 114A and a volume adjustment unit 115B instead of the volume adjustment unit 115A.
[0115]
The control processing unit 111B controls the channel signal processing unit 112B, the output signal selection unit 113B, the volume adjustment unit 115B, and the test signal generation unit 116, based on the commands input to the operation input unit 160 and the sound collection result obtained by the sound collection unit 140. Further, the control processing unit 111B controls the drive unit 120 and the display unit 150. Details of the control processing unit 111B will be described later.
[0116]
The channel signal processing unit 112B adjusts the audio output timing between the speakers 131C to 131SR at the time of reproduction of the audio content. As shown in FIG. 20, compared with the channel signal processing unit 112A in the first embodiment, the channel signal processing unit 112B differs in that it includes a channel separation unit 210B instead of the channel separation unit 210A and a signal delay unit 220B instead of the signal delay unit 220A.
[0117]
Compared with the channel separation unit 210A in the first embodiment, the channel separation unit 210B differs in that it separates the digital sound data signal into signals for five speakers, namely the speakers 131L, 131R, 131SL, and 131SR plus the speaker 131C, and in that it sends the separated channel signals SCDL, SCDR, SCDSL, SCDSR, and SCDC thus obtained to the signal delay unit 220B.
[0118]
Compared with the signal delay unit 220A in the first embodiment, the signal delay unit 220B differs in that, in addition to the delay units 221L to 221SR, it further includes a delay unit for delaying the separated channel signal SCDC by a predetermined delay time DLC, and in that the resulting channel processing signals PCDC to PCDSR are sent to the output signal selection unit 113B.
[0119]
Returning to FIG. 19, compared with the output signal selection unit 113A in the first embodiment, the output signal selection unit 113B differs in that, in addition to the switch elements 113L to 113SR, it further includes a switch element (hereinafter also referred to as the "C-channel switch element") for supplying the channel processing signal PCDC toward the analog conversion unit 114B, supplying the test voice signal SGD, or selecting neither of the signals, and in that it sends the sound output selection signals PBDC to PBDSR toward the analog conversion unit 114B.
The C-channel switch element is configured in the same manner as the switch elements 113L to 113SR.
[0120]
Compared with the analog conversion unit 114A according to the first embodiment, the analog conversion unit 114B differs in that it further includes a DA converter that converts the sound output selection signal PBDC, which is a digital signal, into an analog signal PBSC, and in that it sends the analog signals PBSC to PBSSR toward the volume adjustment unit 115B.
[0121]
Compared with the volume adjustment unit 115A according to the first embodiment, the volume adjustment unit 115B differs in that it performs volume adjustment on the analog signal PBSC in addition to the analog signals PBSL to PBSSR, and in that it sends the resulting audio output signals AOSC to AOSSR to the sound output units 130C to 130SR.
[0122]
The control processing unit 111B realizes the functions of the acoustic device 100B.
As shown in FIG. 21, compared with the control processing unit 111A in the first embodiment, the control processing unit 111B differs in that it includes a first estimation unit 251B instead of the first estimation unit 251A, an installation width estimation unit 252B instead of the installation width estimation unit 252A, and a second estimation unit 253B instead of the second estimation unit 253A.
Further, the control processing unit 111B differs from the control processing unit 111A according to the first embodiment in that it includes a control unit 254B instead of the control unit 254A.
[0123]
Compared with the first estimation unit 251A in the first embodiment, the first estimation unit 251B differs in that it estimates the distance MDSC from the microphone 141 to the speaker 131C in addition to the distances from the microphone 141 to the speakers 131L to 131SR (see FIG. 17), and in that it sends the distances from the microphone 141 to the speakers 131C to 131SR (i.e., the distances MDSC to MDSSR), as the first distance MDS, toward the installation width estimation unit 252B and the second estimation unit 253B.
In the second embodiment, only the distances MDSC and MDSR in the first distance MDS are sent to the installation width estimation unit 252B.
[0124]
The installation width estimation unit 252B receives the first distance MDS from the first
estimation unit 251B. In the second embodiment, the installation width estimation unit 252B
estimates the installation width WID by performing calculation according to the above-described
equation (16) based on the distances MDSC and MDSR. The installation width WID thus
estimated is sent to the second estimation unit 253B.
[0125]
Compared with the second estimation unit 253A according to the first embodiment, the second estimation unit 253B differs in that it estimates the distance HDSC from the assumed listening position P1 to the speaker 131C (see FIG. 18) in addition to the distances HDSL to HDSSR from the assumed listening position P1 to the speakers 131L to 131SR, and in that it sends the distances HDSC to HDSSR from the assumed listening position P1 to the respective speakers 131C to 131SR to the control unit 254B as the second distance HDS. Here, the distances HDSC to HDSSR are estimated according to equations (17) to (21), using the received first distance MDS and the installation width WID in consideration of the relationships of equations (7), (8), and (14) described above.
[0126]
The control unit 254B controls the operation in two modes of the “reproduction mode” and
the “delay time setting mode” in the acoustic device 100B.
[0127]
When controlling the operation of the “delay time setting mode”, compared with the control unit 254A according to the first embodiment, the control unit 254B differs in that, in addition to the delay time measurement control for each of the sound output units 130L to 130SR, it performs delay time setting control for the sound output unit 130C.
During this control, the control unit 254B controls each of the output signal selection unit 113B, the first estimation unit 251B, the installation width estimation unit 252B, and the second estimation unit 253B so as to measure the delay time of each of the sound output units 130C to 130SR.
[0128]
Further, when controlling the operation of the “reproduction mode”, the control unit 254B controls each of the channel signal processing unit 112B, the output signal selection unit 113B, and the volume adjustment unit 115B so that the sound read out from the audio content is output from each of the sound output units 130C to 130SR.
[0129]
<Operation> The operation of the acoustic device 100B configured as described above will be described, focusing mainly on the estimation of the installation width WID by the installation width estimation unit 252B, and the estimation, by the second estimation unit 253B, of the distances from the assumed listening position P1 to each of the speakers 131C to 131SR.
[0130]
When the user inputs a delay time setting command to the operation input unit 160, the
operation of the “delay time setting mode” of the acoustic device 100B is started.
First, in step S31 of FIG. 22, the selection of the first speaker to be measured is performed.
[0131]
In the second embodiment, the control unit 254B selects, for example, the C speaker 131C as a
speaker to be the first measurement target.
Then, the control unit 254B performs setting processing of a signal path for measuring the
distance between the C speaker 131C and the microphone 141. After the setting of the signal
path is completed, the control unit 254B issues an estimation start instruction DMC toward the
first estimation unit 251B.
[0132]
Next, in step S32, the control unit 254B sends the test signal generation command SGC to the
test signal generation unit 116 and the first estimation unit 251B. The test signal generation unit
116 having received the test signal generation instruction SGC generates a test voice signal SGD.
As a result, the test sound is output from the C-speaker 131C via the output signal selection unit
113B, the analog conversion unit 114B, and the volume adjustment unit 115B. Further, the first
estimation unit 251B that has received the test signal generation command SGC temporarily
stores the time TR at which the command is received, and starts collecting the sound collection
result data AAD.
[0133]
Next, in step S33, the first estimation unit 251B analyzes the sound collection result data AAD,
and stores the time TP when the test voice output from the C speaker 131C reaches the
microphone 141. Then, the first estimation unit 251B converts a value (TP-TR) obtained by
subtracting the time TR from the time TP into a distance, and stores the conversion result as a
distance MDSC in the first estimation unit 251B. Thereafter, the first estimation unit 251B sends,
to the control unit 254B, a report MDR indicating that the processing related to the C speaker
131C is completed.
[0134]
Next, in step S34, the control unit 254B determines whether the distance measurement for all the
speakers 131C to 131SR has ended. If the result of this determination is negative (step S34: N),
the process proceeds to step S35.
[0135]
In step S35, the setting process of the signal path for distance measurement regarding the next
measurement object, for example, the L speaker 131L is performed in the same manner as in the
first embodiment.
[0136]
When the process of step S35 ends, the process returns to step S32.
Thereafter, the process of steps S32 to S35 is repeated until the result of the determination in
step S34 is affirmative.
[0137]
When the distance measurement for all the speakers 131C to 131SR is completed and the result of the determination in step S34 is affirmative (step S34: Y), the first estimation unit 251B sends the distances from the microphone 141 to the respective speakers, as the first distance MDS, toward the installation width estimation unit 252B and the second estimation unit 253B. Thereafter, the process proceeds to step S36.
[0138]
In step S36, the delay time measurement process is performed. In this delay time measurement process, as shown in FIG. 23, first, in step S41, the installation width estimation unit 252B estimates the installation width WID, which is the installation interval between the L speaker 131L and the R speaker 131R, according to equation (16), from the distances MDSC and MDSR included in the first distance MDS and the arrangement relationship of the microphone 141, the C speaker 131C, and the R speaker 131R. The estimated installation width WID is sent to the second estimation unit 253B, and then the process of step S41 ends.
[0139]
Next, in step S42, the second estimation unit 253B estimates the distances HDSC to HDSSR from the assumed listening position P1 to each of the speakers 131C to 131SR. When estimating these distances, the second estimation unit 253B performs the calculation according to the above-described equations (17) to (21), using the received first distance MDS and the installation width WID. The distances HDSC to HDSSR thus calculated are sent to the control unit 254B as the second distance HDS.
[0140]
In step S43, the control unit 254B that has received the second distance HDS converts the
distances HDSC to HDSSR included in the second distance HDS into time. Thereafter, the process
of step S43 ends, and the process proceeds to step S37 of FIG.
[0141]
In step S37, the control unit 254B analyzes the delay measurement results regarding the sound output units 130C to 130SR, and calculates the delay times DLC to DLSR of the audio output signals AOSC to AOSSR supplied to the respective sound output units 130C to 130SR. Then, the control unit 254B internally stores the calculation result and sends it to the signal delay unit 220B as a delay control command DLC.
[0142]
When the process of step S37 ends, the control unit 254B sends an output signal selection instruction ODS, to the effect that the channel processing signals PCDC to PCDSR should be selected, toward the output signal selection unit 113B. As a result, the channel processing signals PCDC to PCDSR sent from the signal delay unit 220B are supplied toward the analog conversion unit 114B as the sound output selection signals PBDC to PBDSR via the output signal selection unit 113B. Thus, when the “delay time setting mode” ends, the acoustic device 100B starts the operation of the “reproduction mode”.
[0143]
As described above, in the second embodiment, as in the first embodiment, the microphone 141 for collecting the test voice is disposed at the middle point of the line connecting the headrests of the driver's seat and the front passenger's seat. Therefore, the test voices output from the five speakers, namely the C speaker 131C, the L speaker 131L, the R speaker 131R, the SL speaker 131SL, and the SR speaker 131SR, are directly collected by the microphone 141. Therefore, in the time alignment correction, it is possible to perform measurement using the direct sound of the test sound for all the speakers 131C to 131SR.
[0144]
In the second embodiment, the installation width estimation unit 252B estimates the installation width WID, which is the installation interval between the L speaker 131L and the R speaker 131R, from the distances MDSC and MDSR according to equation (16). Then, the second estimation unit 253B calculates the second distance HDS from the first distance MDS using the installation width WID. Therefore, merely by setting the positional relationship between the installation position P2 of the microphone 141 and the assumed listening position P1 to a predetermined positional relationship, time alignment correction can be performed accurately without examining the shape or the like of the vehicle.
[0145]
Therefore, according to the second embodiment, when a user listens to music or the like, appropriate audio delay correction corresponding to the audio propagation delay time can easily be carried out.
[0146]
[Modification of Embodiment] The present invention is not limited to the above embodiment, and
various modifications are possible.
[0147]
For example, in the first embodiment described above, the installation width estimation unit
252A estimates the shape of the vehicle on which the acoustic device 100A is mounted from the
relationship between the distance MDSR and the distance MDSSR.
On the other hand, the installation width estimation unit 252A may estimate the shape of the vehicle from the relationship between the distance MDSL and the distance MDSSL, and may estimate the installation width WID from this estimation result.
[0148]
In the second embodiment described above, the installation width WID is calculated from the
distances MDSC and MDSR in the installation width estimation unit 252B according to equation
(16).
On the other hand, the installation width estimation unit 252B may calculate the installation width WID from the distances MDSC and MDSL according to the following equation (22).

WID = 2 × (MDSL² − MDSC²)^(1/2) … (22)
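Equation (22) follows from right-triangle geometry if the microphone lies on the center axis at distance MDSC from the C speaker and the L speaker is offset laterally by WID/2 at the same depth, so that MDSL is the hypotenuse of a right triangle with legs MDSC and WID/2. This reading of the geometry is inferred from the text rather than stated verbatim, and the sketch below uses made-up distances purely as a numerical check.

```python
import math

def installation_width(mdsl, mdsc):
    """Equation (22): WID = 2 * (MDSL^2 - MDSC^2) ** 0.5.

    Assumes the microphone lies on the center axis, the C speaker sits on
    that axis at distance MDSC, and the L speaker is offset laterally by
    WID / 2 at the same depth (geometry inferred from the text).
    """
    return 2.0 * math.sqrt(mdsl ** 2 - mdsc ** 2)

# Hypothetical measured distances (metres), checked against a known layout:
# with WID = 1.2 and MDSC = 0.9, MDSL must equal sqrt(0.9^2 + 0.6^2).
mdsc = 0.9
mdsl = math.sqrt(0.9 ** 2 + 0.6 ** 2)
wid = installation_width(mdsl, mdsc)  # recovers the assumed width 1.2
```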
[0149]
In the first embodiment described above, four sound output units are provided. However, as long as the left speaker 131L and the right speaker 131R are provided, the audio signal obtained by reading the audio content may be appropriately separated or mixed, and sound may be output from two or three speakers, or from five or more speakers.
[0150]
In the second embodiment described above, five sound output units are provided. However, as long as the center speaker 131C, the left speaker 131L, and the right speaker 131R are provided, the audio signal obtained by reading the audio content may be appropriately separated or mixed, and sound may be output from three or four speakers, or from six or more speakers.
[0151]
Moreover, in the first and second embodiments described above, the driver's seat was assumed as the assumed listening position P1, but the assumed listening position may be the front passenger's seat or the like. In this case, the user can send assumed listening position information to the control unit using the operation input unit 160, and a delay time corresponding to that assumed listening position can be set.
[0152]
In the first and second embodiments described above, one microphone 141 is provided, but two
or more microphones may be provided if the positional relationship with the assumed listening
position P1 is known.
[0153]
In the first and second embodiments described above, the sound collection position P2 is disposed on a straight line that passes through the midpoint of the line segment connecting the speaker 131L and the speaker 131R and is perpendicular to that line segment. On the other hand, this straight line need not pass through the midpoint of the line segment; in that case, if the ratio at which the intersection of the line segment and the straight line passing through P2 orthogonally to it divides the line segment is determined in advance, processing similar to that of the first and second embodiments described above can be performed.
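For the off-midpoint case, if both speakers sit at the same depth and the foot of the perpendicular from P2 divides the L–R segment in a known ratio t : (1 − t), then MDSL² − MDSR² = WID² · (2t − 1), so the installation width is still recoverable for t ≠ 1/2. The sketch below is an illustrative derivation under this assumed geometry, not the embodiments' own equations; the variable names and values are made up.

```python
import math

def installation_width_offset(mdsl, mdsr, t):
    """Recover WID when the foot of the perpendicular from P2 divides the
    L-R segment in the ratio t : (1 - t), with t != 0.5.

    Uses MDSL^2 - MDSR^2 = WID^2 * (2*t - 1), which holds if both speakers
    are at the same depth from P2's line (illustrative assumed geometry).
    """
    return math.sqrt((mdsl ** 2 - mdsr ** 2) / (2.0 * t - 1.0))

# Check with a synthetic layout: WID = 1.4, division ratio t = 0.75,
# perpendicular depth d = 0.8 (all values hypothetical).
wid, t, d = 1.4, 0.75, 0.8
mdsl = math.sqrt((t * wid) ** 2 + d ** 2)        # leg t*WID, depth d
mdsr = math.sqrt(((1 - t) * wid) ** 2 + d ** 2)  # leg (1-t)*WID, depth d
recovered = installation_width_offset(mdsl, mdsr, t)  # recovers 1.4
```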
[0154]
In the first and second embodiments described above, the drive unit 120 is a CD drive unit, but it may instead be a fixed disk drive or a DVD drive unit. Furthermore, a broadcast wave reception circuit for radio broadcasts, terrestrial digital television broadcasts, or the like, or an audio input circuit for an external device, can also be used.
[0155]
In the first and second embodiments described above, the present invention is applied to an acoustic device mounted on a vehicle, but the present invention can also be applied to an acoustic device mounted on a mobile body other than a vehicle. For example, the present invention can be applied to an acoustic device installed in a home or the like.
[0156]
Part or all of the control units 110A and 110B in the above embodiments may be configured as a computer comprising, as computing means, a central processing unit (CPU: Central Processing Unit) or a DSP (Digital Signal Processor), together with a read only memory (ROM: Read Only Memory) and a random access memory (RAM: Random Access Memory), and part or all of the processing in the above embodiments may be carried out by executing a prepared program on this computer. This program is recorded on a computer-readable recording medium such as a hard disk, a CD-ROM, or a DVD, and is read from the recording medium and executed by the computer. The program may be acquired in the form of being recorded on a portable recording medium such as a CD-ROM or a DVD, or may be acquired in the form of delivery via a network such as the Internet.
[0157]
FIG. 1 is a block diagram schematically showing the configuration of an acoustic device according to the first embodiment of the present invention.
FIG. 2 is a diagram for explaining the arrangement positions of the four speakers of FIG. 1, the arrangement position of the microphone (sound collection position), and the assumed listening position.
FIG. 3 is a diagram for explaining the interrelationship among the arrangement positions of the speakers in FIG. 2, the arrangement position of the microphone, and the assumed listening position.
FIG. 4 is a diagram for explaining the distance from each speaker in FIG. 3 to the sound collection position.
FIG. 5 is a diagram for explaining the distance from each speaker in FIG. 3 to the assumed listening position.
FIG. 6 is a diagram for explaining the correspondence between the interrelationship of the arrangement positions of the speakers in FIG. 3 and the installation width.
FIG. 7 is a block diagram for explaining the structure of the control unit in the first embodiment.
FIG. 8 is a block diagram for explaining the structure of the channel signal processing unit in the first embodiment.
FIG. 9 is a block diagram for explaining the structure of the signal delay unit in the first embodiment.
FIG. 10 is a block diagram for explaining the structure of the output signal selection unit in the first embodiment.
FIG. 11 is a block diagram for explaining the structure of the control processing unit in the first embodiment.
FIG. 12 is a flowchart for explaining the delay time setting process by the apparatus of the first embodiment.
FIG. 13 is a flowchart for explaining the delay time measurement process in the first embodiment.
FIG. 14 is a block diagram schematically showing the configuration of an acoustic device according to the second embodiment of the present invention.
FIG. 15 is a diagram for explaining the arrangement positions of the five speakers of FIG. 14, the arrangement position of the microphone (sound collection position), and the assumed listening position.
FIG. 16 is a diagram for explaining the interrelationship among the arrangement positions of the speakers in FIG. 15, the arrangement position of the microphone, and the assumed listening position.
FIG. 17 is a diagram for explaining the distance from each speaker in FIG. 16 to the sound collection position.
FIG. 18 is a diagram for explaining the distance from each speaker in FIG. 16 to the assumed listening position.
FIG. 19 is a block diagram for explaining the structure of the control unit in the second embodiment.
FIG. 20 is a block diagram for explaining the structure of the channel signal processing unit in the second embodiment.
FIG. 21 is a block diagram for explaining the structure of the control processing unit in the second embodiment.
FIG. 22 is a flowchart for explaining the delay time setting process by the apparatus of the second embodiment.
FIG. 23 is a flowchart for explaining the delay time measurement process in the second embodiment.
Explanation of Reference Signs
[0158]
100A, 100B … acoustic device
112A, 112B … channel signal processing unit (adjustment means)
116 … test signal generation unit (test sound output means)
131C–131SR … speakers
140 … sound collection unit (sound collection means)
220A, 220B … signal delay unit (delay means)
251A, 251B … first estimation unit (first estimation means)
252A, 252B … installation width estimation unit (installation width estimation means)
253A, 253B … second estimation unit (second estimation means)
254A, 254B … control unit (delay control means)