Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2016178396
Abstract: In an audio wireless transmission system that transmits an audio signal from a source device to a plurality of speaker devices by wireless communication, the plurality of speaker devices are divided into a plurality of groups and the audio signal is reproduced for each group. The object is to make it possible to set such grouping, taking the audio balance of each group into account, by a simple operation from the speaker device side. In the system of the present invention, a speaker device receives a predetermined user operation and transmits an operation signal indicated by the predetermined user operation to the source device. The source device 1 includes a setting unit that groups the plurality of speaker devices into a plurality of speaker groups based on the operation signal received from the speaker device 2, executes audio balance adjustment processing for each speaker group set by the setting unit, and transmits the audio signal or the audio signal processing parameter obtained by the audio balance adjustment processing to each speaker device by wireless communication. [Selected figure] Figure 1
Audio wireless transmission system, speaker device, and source device
[0001]
The present invention relates to an audio wireless transmission system for wirelessly
transmitting an audio signal, a speaker device in the system, and a source device.
[0002]
Conventionally, some speaker devices are provided with an adjustment knob for adjusting the
volume and the like of the sound to be output.
In addition, as a technique for operating a speaker device, Patent Document 1 discloses a speaker device provided with a light receiving element for receiving an operation signal from a remote controller (remote control). With this speaker device, based on an operation from the remote control, it becomes possible to select (switch) the source device from which the audio signal is input, adjust the sound quality in the built-in signal processing unit, adjust the volume in the built-in audio amplifier, and so on.
[0003]
Further, Patent Document 2 discloses a speaker device (that is, a speaker array device) formed by arranging a plurality of speaker units and including a light receiving unit that receives an operation signal from a remote control. In this speaker array device, an operation to change the angle and focal length of a sound beam is received from the remote control, and the delay amount and level of the sound signal output from each speaker unit are changed according to the operation. In this technique, an operation unit provided in the content reproduction apparatus connected to the speaker array device can receive volume adjustment and content selection operations. Moreover, this speaker array apparatus is installed at one place.
[0004]
On the other hand, in recent years an increasing number of AV (Audio Visual) devices use wireless communication, and audio is also transmitted wirelessly by means of WiFi (registered trademark; the same applies hereinafter), ZigBee (registered trademark), Bluetooth (registered trademark), Wireless Speaker and Audio (WiSA; registered trademark), and the like. By using wireless communication for audio signal transmission, the speaker device and the source device such as a player or an amplifier are connected wirelessly, which is convenient because there is no need to run cables.
[0005]
JP 2014-216985 A, JP 2008-35252 A
[0006]
In a conventional system that transmits audio signals from a source device to multiple speaker devices by wireless communication, a speaker device can be placed somewhere other than the room where the source device is installed, making use of the characteristics of wireless transmission. It is therefore also possible to construct a listening environment in which content is listened to individually at a plurality of installation places (installation rooms).
In such a listening environment, the system must be constructed so that the plurality of speaker devices wirelessly connected to the source device are divided into a plurality of groups and content is reproduced for each group.
[0007]
However, when such a system is used, situations can arise in which, for example, some of the speaker devices installed in one room are moved to another room. In that case the original channel configuration remains in both rooms, the sound balance is no longer correct, and a correct sound field cannot be constructed.
[0008]
Therefore, a function capable of adjusting the sound balance in response to such movement is required.
In addition, even without any such movement, if the listener assembles his or her favorite speaker devices to construct the above-mentioned system, the sound balance needs to be adjusted at the initial setting stage.
[0009]
However, if the operations related to such audio balance adjustment are performed from the source device side, the operation becomes complicated. Also, even if the technology described in Patent Documents 1 and 2 were applied so that an operation to change the audio output (volume level etc.) of a speaker device could be performed by remote control from a room where the source device is not installed, the change would be reflected only on that speaker device. In this case, in order to adjust the sound balance, the user would have to perform the audio output change operation one by one for each of the speaker devices in both rooms, which takes time and effort.
[0010]
The present invention has been made in view of the above-described circumstances, and an object thereof is, in an audio wireless transmission system that transmits an audio signal from a source device to a plurality of speaker devices by wireless communication, to make it possible, when the plurality of speaker devices are divided into a plurality of groups and the audio signal is reproduced for each group, to set the grouping in consideration of the audio balance of each group by a simple operation from the speaker device side.
[0011]
In order to solve the above-mentioned problems, a first technical means of the present invention is an audio wireless transmission system including a plurality of speaker devices and a source device that transmits an audio signal to the plurality of speaker devices by wireless communication. In this system, the speaker device receives a predetermined user operation and transmits an operation signal indicated by the predetermined user operation to the source device. The source device includes a setting unit that groups the plurality of speaker devices into a plurality of speaker groups based on the operation signal received from the speaker device, performs audio balance adjustment processing for each of the speaker groups set by the setting unit, and transmits to each speaker device, by wireless communication, the audio signal or the audio signal processing parameter obtained by the balance adjustment processing.
[0012]
A second technical means of the present invention is the first technical means, characterized in that the speaker device can accept, as the predetermined user operation, an operation to reset the plurality of speaker groups and/or an operation to initialize the plurality of speaker groups.
[0013]
A third technical means of the present invention is the first or second technical means, characterized in that the setting unit is capable of storing a plurality of patterns of setting information indicating the result of the grouping setting, and the speaker device can receive, as the predetermined user operation, an operation of selecting one pattern to be used from the setting information.
[0014]
A fourth technical means of the present invention is any one of the first to third technical means, characterized in that the speaker device has a receiver for receiving a signal transmitted from a remote controller, and the predetermined user operation can be received through reception of that signal.
[0015]
A fifth technical means of the present invention is any one of the first to fourth technical means, characterized in that the speaker device transmits the operation signal to the source device by wireless communication.
[0016]
A sixth technical means of the present invention is the first technical means, characterized in that the audio signal transmitted from the source device is an uncompressed signal.
[0017]
A seventh technical means of the present invention is a source device for transmitting an audio signal to a plurality of speaker devices by wireless communication, characterized in that the source device receives an operation signal indicated by a predetermined user operation accepted by a speaker device, includes a setting unit that groups the plurality of speaker devices into a plurality of speaker groups based on the received operation signal, performs audio balance adjustment processing for each of the speaker groups set by the setting unit, and transmits the audio signal or the audio signal processing parameter obtained by the audio balance adjustment processing to each of the speaker devices by wireless communication.
[0018]
An eighth technical means of the present invention is a speaker device, among a plurality of speaker devices, that receives an audio signal from a source device transmitting the audio signal to the plurality of speaker devices by wireless communication, characterized in that the speaker device receives a predetermined user operation for setting the grouping of the plurality of speaker devices into a plurality of speaker groups, transmits an operation signal indicated by the predetermined user operation to the source device, and receives from the source device by wireless communication an audio signal or an audio signal processing parameter subjected to audio balance adjustment processing performed by the source device for each of the speaker groups based on the operation signal.
[0019]
According to the present invention, in an audio wireless transmission system in which an audio signal is transmitted by wireless communication from a source device to a plurality of speaker devices, when the plurality of speaker devices are divided into a plurality of groups and the audio signal is reproduced for each group, the setting of the grouping in consideration of the sound balance of each group can be realized by a simple operation from the speaker device side.
[0020]
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing a configuration example of an audio wireless transmission system according to a first embodiment of the present invention.
FIG. 2A is a schematic diagram showing an example of the device arrangement in the audio wireless transmission system of FIG. 1.
FIG. 2B is a schematic diagram showing another example of the device arrangement in the audio wireless transmission system of FIG. 1.
FIG. 2C is a diagram showing each device in the arrangement of FIG. 2B.
FIG. 3A is a diagram showing an example of the channels of the audio signal transmitted and reproduced in the audio wireless transmission system of FIG. 1.
FIG. 3B is a diagram showing another example of the channels of the audio signal transmitted and reproduced in the audio wireless transmission system of FIG. 1.
FIG. 3C is a diagram showing yet another example of the channels of the audio signal transmitted and reproduced in the audio wireless transmission system of FIG. 1.
FIG. 4 is a sequence diagram for explaining an example of the group reconstruction process procedure in the audio wireless transmission system of FIG. 1.
FIG. 5 is a sequence diagram for explaining another example of the group reconstruction process procedure in the audio wireless transmission system of FIG. 1.
FIG. 6A is a sequence diagram for explaining an example of the channel map setting process procedure in the audio wireless transmission system according to a second embodiment of the present invention.
FIG. 6B is a diagram showing an example of the channel map set by the channel map setting process of FIG. 6A.
FIG. 7A is a sequence diagram for explaining a first communication system used for communication processing in the audio wireless transmission system according to a fourth embodiment of the present invention.
FIG. 7B is a sequence diagram for explaining a second communication system used for communication processing in the audio wireless transmission system according to the fourth embodiment of the present invention.
FIG. 7C is a sequence diagram for explaining a third communication system used for communication processing in the audio wireless transmission system according to the fourth embodiment of the present invention.
FIG. 8A is a diagram showing an example of the packet used for communication processing in the audio wireless transmission system according to the fourth embodiment of the present invention.
FIG. 8B is a diagram showing an example of the content described in each section of the packet of FIG. 8A.
FIG. 9A is a diagram showing an example of the packet used for an instruction in the first communication system of FIG. 7A.
FIG. 9B is a diagram showing an example of the IDs (command types) described in the packet of FIG. 9A and the parameters defined for each ID.
FIG. 9C is a diagram showing an example of the packet used for an instruction response in the first communication system of FIG. 7A.
FIG. 9D is a diagram showing an example of the IDs (instruction types) described in the packet of FIG. 9C and the parameters defined for each ID.
FIG. 10A is a diagram showing an example of the packet used for a request in the second communication system of FIG. 7B.
FIG. 10B is a diagram showing an example of the IDs (request types) described in the packet of FIG. 10A and the parameters defined for each ID.
FIG. 10C is a diagram showing an example of the packet used for a request response in the second communication system of FIG. 7B.
FIG. 10D is a diagram showing an example of the IDs (request response types) described in the packet of FIG. 10C and the parameters defined for each ID.
FIG. 11A is a diagram showing an example of the packet used for a notification in the third communication system of FIG. 7C.
FIG. 11B is a diagram showing an example of the IDs (notification types) described in the packet of FIG. 11A and the parameters defined for each ID.
[0021]
The audio wireless transmission system according to the present invention is a system provided with speaker devices and a source device, and may also be referred to as a wireless audio system, a wireless speaker system, or the like. The source device has a wireless communication unit for transmitting an audio signal by wireless communication, but the wireless communication unit may also be housed in a separate case.
[0022]
Examples of the source device include various audio reproduction devices such as a CD (Compact Disc) player, an SACD (Super Audio CD) player, a BD (Blu-ray Disc (registered trademark)) player, and an HDD (Hard Disk Drive) player, as well as a television apparatus, a PC (Personal Computer), and the like. The audio reproduction device may also be a network player that receives music files stored in a server on a network via the network and wirelessly transmits them to a speaker device. Further, in any source device, part of the speaker devices may be incorporated (for an incorporated speaker device, various data may be transmitted by wire). For example, a center speaker can be provided in the housing of the display portion of a television set, and speakers for the other channels can be arranged in other housings as the speaker devices. Hereinafter, an audio wireless transmission system according to the present invention will be described with reference to the drawings.
[0023]
(First Embodiment) A first embodiment of the present invention will be described with reference to FIGS. 1 to 5. FIG. 1 is a block diagram showing a configuration example of the audio wireless transmission system (hereinafter, the present system) according to the present embodiment.
[0024]
The present system includes a plurality of grouped speaker devices 2 and a source device 1 that transmits an audio signal to the plurality of speaker devices 2 by wireless communication. Although only one of the plurality of speaker devices 2 is illustrated in FIG. 1, the source device 1 and the speaker devices 2 may be provided in a one-to-many relationship.
[0025]
The source device 1 includes a control unit 10 that controls the entire source device 1 and a wireless communication unit 15. The control unit 10 is configured of, for example, a central processing unit (CPU). Providing the wireless communication unit 15 allows the source device 1 to function as a wireless transmitter that transmits uncompressed audio signals (audio signals of the original sound) or compressed audio signals to the speaker devices 2 by wireless communication. The following description takes wireless transmission of an uncompressed audio signal as an example. It is preferable that the audio signal transmitted from the source device 1 be an uncompressed signal, since high-quality audio can then be output from the speaker device 2.
[0026]
In addition, the source device 1 of this configuration example includes an audio input unit for inputting the audio content (audio signal) to be output. Here, an example including an audio input terminal 11, a communication unit 12, and an audio input processing unit 13 is given as the audio input unit, but only one of the audio input terminal 11 and the communication unit 12 may be provided.
[0027]
As the audio input terminal 11, input terminals of various standards, such as High-Definition Multimedia Interface (HDMI; registered trademark, the same applies hereinafter) terminals, can be used. The communication unit 12 is a part that communicates with the outside and receives audio content based on standards such as LAN (Local Area Network), WiFi, and Bluetooth, whether wired or wireless. For example, the communication unit 12 may be configured to receive an internet radio broadcast. Of course, the communication unit 12 and the wireless communication unit 15 can use the same standard, for example both adopting the WiFi standard; in that case, identifiers such as the MAC (Media Access Control) address may be made different in order to avoid interference in the reception and transmission of audio signals.
[0028]
The audio input processing unit 13 is a part that converts an audio signal input from the audio input terminal 11 or the communication unit 12 so as to conform to the processing format of the signal processing unit 14 described later. This includes processing for converting the signal format (or performing the reverse conversion) and processing for multiplexing the signals of a plurality of channels included in the audio signal (or the reverse demultiplexing). Further, the source device 1 may include, in addition to or in place of one or both of the audio input terminal 11 and the communication unit 12, a storage device for storing audio content and an audio reproduction unit that reads out audio content from the storage device for reproduction processing.
[0029]
Furthermore, the source device 1 includes a signal processing unit 14 that performs predetermined signal processing on the audio signal output from the audio input processing unit 13. The predetermined signal processing in the signal processing unit 14 includes, for example, processing to change the sound quality according to a user operation, processing to create a sound field (such as processing to adjust the sound balance), and correction processing to correct the audio signal of each channel before transmission. Of course, the signal processing in the signal processing unit 14 is different from the predetermined signal processing in the signal processing unit 22 described later. The signal processing unit 14 and the signal processing unit 22 described later are typically configured by a DSP (Digital Signal Processor).
[0030]
The signal processing unit 14 passes the audio signal after the signal processing to the wireless
communication unit 15 and causes the wireless communication unit 15 to wirelessly transmit the
signal. Of course, such control may be performed by the control unit 10. The wireless
communication unit 15 transmits the audio signal of each channel input from the signal
processing unit 14 to each speaker device 2 by wireless communication without being
compressed.
[0031]
Furthermore, the source device 1 of this configuration example includes an operation unit 16 that receives a user operation and passes the operation signal to the control unit 10, and a display unit 17 that displays various information. Examples of the operation unit 16 include buttons provided on the main body of the source device 1, a reception unit that receives an infrared signal from a remote control or a short-range wireless signal such as Bluetooth, and a reception unit that receives a control signal by wireless communication from a terminal device such as a tablet, a smartphone, or a portable information terminal. The communication unit 12 may also serve as these reception units. The display unit 17 displays information related to operations on the operation unit 16 (for example, information such as the content or input source to be selected, or the volume or sound quality to be changed) and the results of operations on the operation unit 16. Moreover, the functions of the source device 1 can be implemented on the above-mentioned terminal device; in such a case, the reception unit described above need not be provided, and the operation unit 16 and the display unit 17 may be configured by a touch panel.
[0032]
The control unit 10 can control the speaker devices 2 from the source device 1 side, for example, by having the wireless communication unit 15 superimpose a control signal corresponding to the operation signal on a wireless carrier wave and transmit it to the speaker device 2 side.
[0033]
On the other hand, the speaker device 2 is a wireless receiver provided with a control unit 20 that controls the entire device and a wireless communication unit 21.
The control unit 20 includes, for example, a CPU. By providing the wireless communication unit 21, the speaker device 2 can function as a wireless receiver that receives an audio signal transmitted from the source device 1 by wireless communication. The wireless communication between the wireless communication unit 15 and the wireless communication unit 21 is preferably bidirectional, and preferably transmits and receives various parameters in addition to audio signal data and control signals (commands) such as the control signal described above.
[0034]
Here, an example of the wireless communication between the wireless communication unit 15 and the wireless communication unit 21 will be described. One example is wireless communication based on the WiSA (Wireless Speaker and Audio) standard. In this case, transmission-side and reception-side modules standardized by the WiSA Association can be applied as the wireless communication unit 15 on the source device 1 side and the wireless communication unit 21 on the speaker device 2 side, respectively. The WiSA standard is one of the standards for wirelessly transmitting an audio signal as uncompressed PCM (pulse code modulation). A wireless communication device is provided in both the source device and the speaker device, and the source device transmits the audio as uncompressed PCM to the speaker device using a communication protocol compliant with WiFi. In addition, parameters such as specification information of the speaker device can be transmitted from the speaker device to the source device.
[0035]
One specification (implementation method) using the WiSA standard will be described as applied to this configuration example. In the source device 1, an equalizer is built into the signal processing unit 14, and by applying frequency equalization processing (or room equalization processing) in the source device 1, the audio characteristics are adjusted and corrected. Such a correction is preferably performed so as to match the speaker characteristics of each speaker device 2 (for example, which channel the speaker is for, its frequency characteristics, and the like). When matching the speaker characteristics, the speaker characteristics may be stored in advance inside the source device 1, or the source device 1 may acquire them by inquiring of each speaker device 2 or of a separately provided server device.
[0036]
After the correction, the source device 1 wirelessly transmits the equalized audio signal of each channel (which becomes the original sound) to each speaker device 2 as uncompressed audio data. The speaker device 2 outputs the received audio signal without further processing. Alternatively, the audio signals may be multiplexed by time division or the like and wirelessly transmitted to all the speaker devices 2 as common data (data of the audio signals of all channels); that is, wireless transmission may be performed collectively as broadcast or multicast common data without dividing it per device. In this case, each speaker device 2 extracts the data of its own channel from the received data of all channels.
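As a rough illustration of this per-speaker extraction, the following minimal sketch shows how a speaker device could pick its own channel out of interleaved common data; the channel order and frame layout are assumptions for illustration and are not taken from the WiSA specification.

```python
# Hypothetical sketch (not the WiSA format): a speaker device extracting its
# own channel from broadcast common data that carries all channels.

CHANNELS = ["C", "L", "R", "LB", "LFE", "RB", "LS", "RS"]  # assumed 7.1ch order

def extract_own_channel(frame, own_channel):
    """Return this speaker's samples from an interleaved all-channel frame."""
    idx = CHANNELS.index(own_channel)
    return frame[idx::len(CHANNELS)]

# Example: the speaker assigned to LFE keeps every 8th sample starting at index 4.
frame = list(range(16))  # two interleaved groups of 8 channel samples
print(extract_own_channel(frame, "LFE"))  # -> [4, 12]
```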
[0037]
Another specification (implementation method) can also be adopted within the WiSA standard. In this specification, the audio signals of all channels to be used are wirelessly transmitted in a broadcast or multicast manner as described above, without compression and without correction (the original sound remains). The speaker device 2 side extracts the data of its own channel from the received data of all channels. In such a transmission method, channel selection and the amount of correction (including frequency equalization processing, room equalization processing, and the like) can be determined freely on the speaker device 2 side, so settings with a higher degree of freedom are possible. Such settings can also be executed by an instruction specifying a parameter from the source device 1 side. The speaker device 2 has an equalizer that performs frequency equalization processing or room equalization processing in the signal processing unit 22 described later, performs equalization processing (sound quality adjustment) on the received audio signal according to the received parameters, and outputs the audio indicated by the processed audio signal.
[0038]
It is also possible to apply both implementation methods, correcting the sound in the source device 1 and also instructing each speaker device 2 with a parameter specified for that speaker device 2. In addition, even when a WiSA standard module is not adopted, the wireless communication unit 15 may multiplex the audio signals of the channels and wirelessly transmit them to all the speaker devices 2, or may wirelessly communicate only the audio signal of the target channel to each wireless communication unit 21 one by one.
[0039]
The speaker device 2 of this configuration example includes a speaker unit 25, a signal processing unit 22, a volume adjustment unit 23, and an amplifier unit (AMP) 24. The signal processing unit 22 subjects the audio signal received by the wireless communication unit 21 to predetermined signal processing such as, for example, various filter processing, and outputs the processed signal to the volume adjustment unit 23. Further, the signal processing unit 22 has a D/A converter (DAC) that converts the audio signal from a digital signal to an analog signal before or after the predetermined signal processing. The DAC may instead be provided on the volume adjustment unit 23 side.
[0040]
Here, the parameters necessary for the predetermined signal processing are stored in a memory or the like inside the signal processing unit 22 (or the control unit 20) and are read out as necessary. In addition, the control unit 20 or the signal processing unit 22 may be configured so that the above parameters can be rewritten based on a control signal corresponding to the above operation signal from the source device 1 and/or based on a user operation accepted by the operation unit 26 described later.
[0041]
The volume adjustment unit 23 passes the input audio signal to the AMP 24 and adjusts the volume of the sound output from the speaker unit 25 by adjusting the AMP 24. The AMP 24 amplifies the audio signal output from the signal processing unit 22 via the volume adjustment unit 23 and outputs the amplified audio signal to the speaker unit 25.
[0042]
The speaker device 2 also includes an operation unit 26 that receives a user operation and passes the operation signal to the control unit 20. Examples of the operation unit 26 include buttons provided on the main body of the speaker device 2 and a communication unit 27. The communication unit 27 refers to a receiving unit that receives an infrared signal from a remote control or a short-range wireless signal such as Bluetooth, or a receiving unit that receives a control signal by wireless communication from a terminal device such as a tablet, a smartphone, or a portable information terminal. The device that transmits the signal received by the communication unit 27, such as the remote control 3, may be shared with the device, such as a remote control, that transmits the signal received by the receiving unit described as an example of the operation unit 16 on the source device 1 side.
[0043]
Further, it is preferable that the speaker device 2 also be provided with a display unit for displaying various information, like the display unit 17. This makes it possible to display information related to operations on the operation unit 26 and the results of those operations.
[0044]
As a main feature of the present invention, the speaker device 2 can receive a predetermined user operation and transmit an operation signal indicated by the predetermined user operation to the source device 1. This user operation is an operation for operating the source device 1 and will hereinafter be referred to as a "source device operation" to distinguish it from operations that are not transmitted to the source device 1. That is, the operation unit 26 can receive at least a predetermined source device operation as a user operation. The predetermined source device operation refers to a source device operation for performing the setting to group all the speaker devices 2 in the system into a plurality of speaker groups, as will be understood from the processing described later.
[0045]
In particular, in consideration of operability, it is preferable that the speaker device 2 have, as an example of the communication unit 27, a receiving unit that receives the signal transmitted from the remote control 3 as described above, and that the predetermined source device operation be acceptable through reception of that signal.
[0046]
The transmission of the operation signal will now be described specifically.
The operation unit 26 receives the source device operation, and the control unit 20 interprets the operation signal indicating the source device operation and transmits it to the source device 1, either as it is or after converting it into an operation signal suitable for passing to the source device 1. This transmission is preferably performed by superimposing the signal on a wireless carrier wave using the wireless communication unit 21. However, the speaker device 2 and the source device 1 may be separately provided with communication units for wired or wireless communication, and the signal may be sent through those communication units; the communication unit 27 or the communication unit 12 can also serve as such a communication unit. Nevertheless, for reasons such as the ease of placing the speaker device 2, it is preferable that the speaker device 2 transmit the operation signal to the source device 1 by wireless communication.
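As a rough sketch of what such an operation signal could look like, the following shows a moved speaker device wrapping the source device operation together with its own unique information; the field names and JSON encoding are assumptions for illustration, not the actual command format of the system.

```python
# Hedged sketch of the operation signal a speaker device could transmit to the
# source device (field names and encoding are assumed, not specified by the patent).
import json

def build_group_rebuild_signal(device_id, operation="GROUP_REBUILD"):
    """Wrap the source device operation together with the speaker's unique information."""
    return json.dumps({"op": operation, "device_id": device_id}).encode("utf-8")

# e.g. the control unit 20 of SP7 could hand this to the wireless communication unit 21
packet = build_group_rebuild_signal("SP7")
```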
[0047]
The source device 1 receives the operation signal transmitted from the speaker device 2. Then, based on the received operation signal, the setting unit provided in the source device 1 performs the setting for grouping the plurality of speaker devices 2 in the system into a plurality of speaker groups. The setting unit will be described on the premise that it is provided in the control unit 10, but it may instead be provided elsewhere, such as in the signal processing unit 14. The setting in the setting unit mainly includes initial setting and resetting of the speaker groups, and it is preferable that at least one of them be included. In the present embodiment, a configuration in which resetting can be performed, that is, a configuration in which an operation of resetting (reconstructing) the plurality of speaker groups can be received as the predetermined source device operation, will be described.
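A minimal sketch of such a setting unit is shown below; the data layout (a speaker-id to group-number map) is an assumption introduced only to illustrate the grouping setting.

```python
# Minimal sketch of a setting unit holding group assignments (layout assumed).
class SettingUnit:
    def __init__(self, speaker_ids):
        # Initially all speaker devices belong to group No. 0 (2a).
        self.group_of = {sp: 0 for sp in speaker_ids}

    def regroup(self, moved_speakers, new_group):
        """Reset the grouping based on the operation signal received from the speakers."""
        for sp in moved_speakers:
            self.group_of[sp] = new_group

    def members(self, group):
        return [sp for sp, g in self.group_of.items() if g == group]

setting = SettingUnit([f"SP{i}" for i in range(1, 9)])
setting.regroup(["SP7", "SP8"], 1)           # SP7/SP8 moved to another room
print(setting.members(0), setting.members(1))
```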
[0048]
The signal processing unit 14 of the source device 1 performs audio balance adjustment processing for each of the speaker groups set by the setting unit. The audio balance adjustment processing is performed before the audio signal is output, and includes adjusting the audio signal itself or changing the audio signal processing parameters.
[0049]
In the case of transmitting audio signals in a broadcast or multicast manner as in this configuration example, the audio balance adjustment processing also includes setting (or resetting) the channel map. The channel map is set so that each speaker device 2 can correctly receive the audio signal intended for it. At the time of this setting, the source device 1 performs wireless communication with the speaker devices 2 in the same manner as the communication of the audio signal processing parameters, and rewrites the channel map recorded in the control unit 20 of each speaker device 2. The speaker device 2 can thus extract the audio signal or the audio signal processing parameter according to the channel map.
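The following hedged sketch shows one possible shape of such a channel map after regrouping and of the per-speaker rewrite message; the structure and channel assignments are illustrative assumptions based on the example of FIG. 2B.

```python
# Hedged sketch of a channel map after regrouping and of the update message the
# source device could send to rewrite each speaker's copy (structure assumed).
channel_map = {
    # speaker id: (group No., channel that speaker should extract)
    "SP1": (0, "C"),  "SP2": (0, "L"),   "SP3": (0, "R"),
    "SP4": (0, "LS"), "SP5": (0, "LFE"), "SP6": (0, "RS"),
    "SP7": (1, "La"), "SP8": (1, "Ra"),
}

def channel_map_update(speaker_id):
    """Build the per-speaker rewrite message (same path as parameter communication)."""
    group, channel = channel_map[speaker_id]
    return {"target": speaker_id, "group": group, "channel": channel}

print(channel_map_update("SP7"))  # -> {'target': 'SP7', 'group': 1, 'channel': 'La'}
```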
[0050]
The audio signal processing parameters refer to one or more parameters used for the predetermined signal processing in the signal processing unit 22. Note that at the time of resetting (changing), as in the present embodiment, rather than at the time of initial setting, the audio balance adjustment for a speaker group not involved in the change remains the same as before the change.
[0051]
After such audio balance adjustment processing, the wireless communication unit 15 of the source device 1 transmits the audio signal or the audio signal processing parameter obtained by the processing to the wireless communication unit 21 of each speaker device 2 by wireless communication.
[0052]
The wireless communication unit 21 of the speaker device 2 receives, by wireless communication, the transmitted audio signal after the audio balance adjustment processing, or the audio signal together with the audio signal processing parameters after the audio balance adjustment processing. Then, in the speaker device 2, the audio indicated by the received audio signal is output from the speaker unit 25.
[0053]
When the source device 1 adopts a configuration in which the audio balance adjustment is performed by changing the audio signal itself, the speaker device 2 may output the audio signal as received. On the other hand, when adopting a configuration in which audio signal processing parameters are changed, the speaker device 2 may rewrite the parameters stored in the memory of the signal processing unit 22 or the control unit 20 using the received audio signal processing parameters, and perform the predetermined signal processing on the audio signal received thereafter (or simultaneously) using the rewritten parameters. This enables audio output similar to that in the case where the audio signal itself is changed.
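A minimal sketch of this parameter-based variant is given below; the parameter name ("gain") and the message format are assumptions used only to illustrate rewriting stored parameters and applying them to subsequently received audio.

```python
# Minimal sketch (parameter names assumed): the speaker rewrites its stored
# signal-processing parameters from a received message and applies them to
# the audio it receives afterwards.
stored_params = {"gain": 1.0}            # parameters held in the speaker's memory

def rewrite_params(received):
    """Rewrite stored parameters with the received audio signal processing parameters."""
    stored_params.update(received)

def process(samples):
    """Predetermined signal processing using the (possibly rewritten) parameters."""
    return [s * stored_params["gain"] for s in samples]

rewrite_params({"gain": 0.5})            # e.g. -6 dB requested by the source device
print(process([0.2, -0.4, 0.8]))         # -> [0.1, -0.2, 0.4]
```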
[0054]
A more specific example of the present system will be described with reference to FIGS. 2A to 5. FIG. 2A is a schematic view showing an example of the device arrangement in the present system, FIG. 2B is a schematic view showing another example of the device arrangement, FIG. 2C is a view showing each device in the device arrangement of FIG. 2B, and FIGS. 3A to 3C are diagrams showing examples of the channels of the audio signals transmitted and reproduced in the present system.
[0055]
In the system illustrated in FIG. 2A, eight speaker devices SP1 to SP8 are installed in the room (audio room) in which the source device 1 is installed. Although SP1 to SP8 are all speaker devices 2, their speaker characteristics and the like differ. In this example, the speaker devices SP1, SP2, SP3, SP4, SP5, SP6, SP7, and SP8 are used for the center (C), front left (L), front right (R), surround left rear (LB), subwoofer (LFE), surround right rear (RB), surround left (LS), and surround right (RS) channels, respectively, and constitute a 7.1-channel speaker system. Of course, some or all of them may be speaker devices having the same speaker characteristics. The speaker devices SP1 to SP8 in the system of FIG. 2A can all be said to belong to the same speaker group (hereinafter also simply referred to as a "group"), which is assumed to be group No. 0 (2a).
[0056]
An example will now be described in which two speaker devices SP7 and SP8 are moved from this 7.1-channel speaker system to a separate room (living room) to reconstruct the system as illustrated in FIG. 2B or 2C. The system after reconstruction consists of a 2-channel speaker system using the speaker devices SP7 and SP8 for the L and R channels, respectively, and a 5.1-channel speaker system consisting of the speaker devices SP1 to SP6. In the latter, the speaker devices SP4 and SP6 are used for the LS and RS channels, respectively. In the following description, the speaker devices SP1 to SP6 are left in group No. 0 (2a), and the speaker devices SP7 and SP8 are classified as group No. 1 (2b).
[0057]
By storing the information classified in this manner in a memory or the like in the control unit 10 (or the signal processing unit 14 or the like) of the source device 1, the source device 1 can recognize to which of the groups 2a and 2b a given speaker device belongs. Therefore, when the source device 1 receives an operation signal indicating a source device operation, by having the operation signal include information unique to the speaker device 2 that transmitted it, the source device 1 can identify to which group that speaker device 2 belongs. The classified information may be stored, for example, as the channel map (channel map table) described later in the second embodiment. Although an example divided into two groups is given here, the same idea applies to three or more groups.
[0058]
In the system of FIG. 2A before reconstruction, it is sufficient that the common 7.1-channel data is distributed and that each of the speaker devices SP1 to SP8 extracts the audio signal corresponding to its own channel. However, in the system of FIG. 2B after reconstruction, it is necessary to perform audio balance adjustment processing including audio distribution so that, in the end, 5.1-channel audio is output in group 2a and 2-channel audio is output in group 2b.
[0059]
As one example of the audio signal transmission for this purpose, the case where the source device 1 distributes the 7.1-channel audio signal as it is to the eight speaker devices SP1 to SP8 as common data (data of the audio signals of all channels) will be described with reference to FIG. 3A.
[0060]
In the system of FIG. 2B, the source device 1 transmits such common data, and each of the speaker devices SP1 to SP8 receives it.
The wireless communication unit 21 that has received this audio signal extracts the required audio signal under the control of the control unit 20. The speaker devices SP1, SP2, SP3, SP4, SP5, and SP6 belonging to group 2a extract the audio signals of the channels C, L, R, LS and LB, LFE, and RS and RB, respectively, and output the audio. For the speaker device SP4, however, the signal processing unit 22 downmixes the 7.1ch audio signals of the channels LS and LB into the 5.1ch LS audio signal before output. Similarly, the 5.1ch RS audio signal is generated for the right speaker device SP6. For example, the following equations can be applied as a method of this downmixing, where the coefficient (downmix coefficient) 0.5 corresponds to -6 dB.
LS (5.1ch) = 0.5 × LS (7.1ch) + 0.5 × LB (7.1ch)
RS (5.1ch) = 0.5 × RS (7.1ch) + 0.5 × RB (7.1ch)
[0061]
The speaker devices SP7 and SP8 belonging to group 2b extract the audio signals of the channels LB, LS, L, C, and LFE and of the channels LFE, C, R, RS, and RB, respectively, generate (downmix) from them the audio signals for left (La) and right (Ra), and output them. For example, the following equations can be applied as a method of this downmixing, where the downmix coefficients 0.5 and 0.707 correspond to -6 dB and -3 dB, respectively.
La = 1.0 × L + 0.707 × C + 0.5 × LS + 0.5 × LB + 0.5 × LFE
Ra = 1.0 × R + 0.707 × C + 0.5 × RS + 0.5 × RB + 0.5 × LFE
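The two downmix computations above can be written compactly as follows; the coefficients (0.5 ≈ -6 dB, 0.707 ≈ -3 dB) come from the text, while the function names and per-sample calling convention are illustrative assumptions.

```python
# Minimal sketch of the downmixes described above, using the stated coefficients.

def downmix_71_surround_to_51(ls71, lb71, rs71, rb71):
    """7.1ch LS/LB and RS/RB folded into 5.1ch LS and RS (group 2a, SP4/SP6)."""
    ls51 = 0.5 * ls71 + 0.5 * lb71
    rs51 = 0.5 * rs71 + 0.5 * rb71
    return ls51, rs51

def downmix_71_to_stereo(l, r, c, ls, lb, rs, rb, lfe):
    """7.1ch folded into left (La) and right (Ra) for group 2b (SP7/SP8)."""
    la = 1.0 * l + 0.707 * c + 0.5 * ls + 0.5 * lb + 0.5 * lfe
    ra = 1.0 * r + 0.707 * c + 0.5 * rs + 0.5 * rb + 0.5 * lfe
    return la, ra
```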
[0062]
Next, as illustrated in FIG. 3B, the case will be described in which the source device 1 transmits, as common data, the audio signal of a first content downmixed for the six speaker devices SP1 to SP6 together with the audio signal of a second content for the two speaker devices SP7 and SP8. For the first content, the signal processing unit 14 of the source device 1 executes the downmixing to 5.1ch that was performed by the speaker devices SP4 and SP6 in the example of FIG. 3A. The second content is left as 7.1ch. Such simultaneous delivery of two contents is possible if the source device 1 has slots for both the 5.1ch and 7.1ch channels, that is, if the processing capabilities of the control unit 10, the audio input processing unit 13, and the signal processing unit 14, and the processing capability (communication band, etc.) of the wireless communication unit 15, support the delivery of two contents.
[0063]
In this case, the speaker devices SP1, SP2, SP3, SP4, SP5, and SP6 belonging to group 2a extract the audio signals of the channels C, L, R, LS, LFE, and RS of the first content, respectively, and output the audio. The speaker devices SP7 and SP8 belonging to group 2b extract the audio signals of the channels LS, L, C, and LFE and of the channels LFE, C, R, and RS of the second content, respectively, and, similarly to FIG. 3A, generate (downmix) from these audio signals the left (La) and right (Ra) audio signals of the second content and output them.
[0064]
Next, as illustrated in FIG. 3C, the case will be described in which the source device 1 transmits, as common data, the audio signal of the first content downmixed for the six speaker devices SP1 to SP6 together with the audio signals of the left (La) and right (Ra) channels of the second content downmixed for the two speaker devices SP7 and SP8. In this case, before transmission, the first content is downmixed to 5.1ch as in the example of FIG. 3B, and the second content is downmixed in the same manner as the downmixing in the speaker devices SP7 and SP8 of FIG. 3B to generate the audio signals of the channels La and Ra. The audio output in group 2a is the same as in FIG. 3B. The speaker devices SP7 and SP8 belonging to group 2b extract the audio signals of the channels La and Ra of the second content, respectively, and output the audio.
[0065]
Although an example of multiplexing and distributing two contents was given in the examples of FIGS. 3B and 3C, these contents are not limited to what the source device 1 receives from a single source. As described above, the source device 1 may be configured to be able to acquire content from both the communication unit 12 and the audio input terminal 11, or to be able to acquire content from two types of sources through the communication unit 12. In any case, in order for the source device 1 to distribute two or more contents, it requires not only the ability to multiplex the channels in the wireless communication unit 15 but also the ability to multiplex the contents in the audio input processing unit 13 (that is, the ability to process multiple pieces of content).
[0066]
Further, as can be seen from the examples of FIGS. 3A to 3C, all of the downmixing may be performed on the speaker device 2 side, or all of it may be performed on the source device 1 side. In addition, depending on the number of channels of the content, there may also be cases where up-conversion is necessary. Unlike FIGS. 3A to 3C, it is also possible for the source device 1 and the speaker devices 2 to communicate on a one-to-one basis.
[0067]
Next, an example of the group reconstruction process procedure in the present system will be described with reference to the sequence diagrams of FIGS. 4 and 5. The processing procedure for the case where the audio signal needs to be corrected on the speaker device 2 side, as in FIG. 3A, will be described based on FIG. 4. The moved speaker device SP7 performs the process of step S2 when the communication unit 27 receives, as a remote control code from the remote control 3, the source device operation for reconstructing the speaker groups (step S1). The remote control 3 may be provided with a group rebuilding button that emits such a remote control code, or such a remote control code may be issued by, for example, pressing several general-purpose buttons simultaneously or by differences in how long a button is pressed.
[0068]
The control unit 20 receives the remote control code from the communication unit 27, interprets it, decides to transmit a group reconstruction instruction (group reconstruction command), and transmits it from the wireless communication unit 21 to the source device 1 (step S2). Information unique to the speaker device SP7 itself is added to the transmitted command. The same processing is performed for the moved speaker device SP8 (steps S3 and S4).
[0069]
In the source device 1, the wireless communication unit 15 receives the group rebuilding instructions from the speaker devices SP7 and SP8, and the control unit 10 interprets the instructions (step S5). The control unit 10 then determines that the speaker devices SP7 and SP8 form group No. 1 (2b), and that the speaker devices SP1 to SP6, obtained by removing the speaker devices SP7 and SP8 from the original group 2a, form the updated group No. 0 (2a) (step S6). In step S5, for example, a group reconstruction instruction from another speaker device 2 received within a predetermined time after the first group reconstruction instruction from a certain speaker device 2 may be regarded as a request for registration in the same group. In this example, it is determined that the speaker devices SP7 and SP8 are to be registered in the same group, and the new group 2b is determined. Of course, the registration request for the same group may be determined by other methods, such as receiving more detailed operation instructions.
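A hedged sketch of this "same group within a time window" rule is given below; the window length and data structures are assumptions, not values specified by the patent.

```python
# Sketch of the grouping rule in step S5: rebuild commands arriving within a
# fixed window after the first one are registered in the same new group.
import time

WINDOW_SEC = 5.0                              # assumed window length
pending = {"deadline": None, "members": []}

def on_group_rebuild(device_id, now=None):
    now = time.monotonic() if now is None else now
    if pending["deadline"] is None or now > pending["deadline"]:
        # First command: open a new collection window for a new group.
        pending["deadline"] = now + WINDOW_SEC
        pending["members"] = [device_id]
    else:
        # Received within the window: treat as a request to join the same group.
        pending["members"].append(device_id)
    return list(pending["members"])

on_group_rebuild("SP7", now=0.0)
print(on_group_rebuild("SP8", now=1.2))       # -> ['SP7', 'SP8']
```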
[0070]
The role of each of the speaker devices SP1 to SP8 changes as the number of speaker devices in the groups 2a and 2b changes. For group 2a, the channel (or channels) to be acquired by each of the speaker devices SP1 to SP6 and the sound balance are determined and instructed so that a sound field can be appropriately constructed by the six speaker devices SP1 to SP6. Specifically, the control unit 10 instructs the signal processing unit 14 to calculate the sound balance amount for group 2a. The signal processing unit 14 calculates the sound balance amount based on the speaker devices belonging to group 2a, and generates a request instructing each of the speaker devices SP1 to SP6 belonging to group 2a which channel of the audio signal to acquire and which sound balance amount to adjust to. Then, according to the instruction of the signal processing unit 14, the wireless communication unit 15 transmits this request to the speaker devices SP1 to SP6 (step S7). Here, the audio balance amount is the ratio at which each channel is mixed, and may be indicated as a parameter indicating the downmix coefficients for group 2a described with reference to FIG. 3A.
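The following minimal sketch shows one possible shape of that request, carrying the channel to acquire and the audio balance amount as downmix coefficients; all field names are assumptions for illustration.

```python
# Sketch of the per-speaker request (field names assumed): channel of the audio
# signal to acquire plus the audio balance amount (downmix coefficients).
def balance_request(speaker_id, channel, balance=None):
    return {
        "target": speaker_id,
        "channel": channel,          # channel of the audio signal to acquire
        "balance": balance or {},    # mixing ratio for each source channel
    }

# SP4 keeps the 5.1ch LS by mixing the 7.1ch LS and LB at -6 dB (coefficient 0.5) each.
req_sp4 = balance_request("SP4", "LS", {"LS": 0.5, "LB": 0.5})
```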
[0071]
Thereby, the speaker devices SP1 to SP6 of group 2a can receive the audio signal delivered simultaneously or afterwards, extract the audio signal of the channel to be acquired, downmix it by the instructed audio balance amount as shown in FIG. 3A, and then output the audio. As a result, it is possible to construct a new 5.1ch sound field with the six speaker devices SP1 to SP6 and perform audio output (step S11).
[0072]
Further, the control unit 10 similarly causes the signal processing unit 14 to calculate the audio balance amount for group 2b and to generate a request instructing the speaker devices SP7 and SP8 which channel of the audio signal to acquire and which audio balance amount to adjust to. Then, the wireless communication unit 15 transmits this request to each of the speaker devices SP7 and SP8 (step S8). Here, the audio balance amount may be indicated as a parameter indicating the downmix coefficients for group 2b described with reference to FIG. 3A. As a result, the speaker devices SP7 and SP8 of group 2b can receive the audio signal delivered simultaneously or afterwards, extract the audio signals of the channels to be acquired, downmix them as shown in FIG. 3A, and then output the audio of the left (La) and right (Ra) channels (steps S9 and S10). As a result, it is possible to construct a new 2ch sound field with the two speaker devices SP7 and SP8 and perform audio output.
[0073]
Based on FIG. 5, the processing procedure for the case where the audio signal needs to be corrected on the source device 1 side, as in FIG. 3C, will be described. Although the example of FIG. 3C involved first and second contents, it is assumed here that the original content is the same. First, processing similar to steps S1 to S6 is performed (steps S21 to S26).
[0074]
Next, for group 2a, audio balance adjustment is performed so that a sound field can be appropriately constructed by the six speaker devices SP1 to SP6, and an instruction on the channel to be acquired is issued to the speaker devices SP1 to SP6. Specifically, the control unit 10 instructs the signal processing unit 14 to calculate the sound balance amount for group 2a. The signal processing unit 14 performs the calculation based on the speaker devices belonging to group 2a and, based on that calculation, downmixes the audio signal as in the example of the first content in FIG. 3C. The signal processing unit 14 further generates a request instructing each of the speaker devices SP1 to SP6 belonging to group 2a which channel of the audio signal to acquire. Then, in accordance with the instruction of the signal processing unit 14, the wireless communication unit 15 transmits this request to the speaker devices SP1 to SP6 (step S27).
[0075]
Thereby, the speaker devices SP1 to SP6 of group 2a can receive the audio signal delivered simultaneously or afterwards, extract the audio signal of the channel to be acquired, and output the audio. As a result, it is possible to construct a new 5.1ch sound field with the six speaker devices SP1 to SP6 and perform audio output (step S31).
[0076]
Also, the control unit 10 similarly causes the signal processing unit 14 to calculate the audio balance amount for group 2b, to downmix the audio signal based on it as in the example of the second content in FIG. 3C, and to generate a request instructing the speaker devices SP7 and SP8 which channel of the audio signal to acquire. Then, the wireless communication unit 15 transmits this request to each of the speaker devices SP7 and SP8 (step S28). As a result, the speaker devices SP7 and SP8 of group 2b can receive the audio signals delivered simultaneously or afterwards, extract the audio signals of the left (La) and right (Ra) channels to be acquired, and output the respective audio (steps S29 and S30). As a result, it is possible to construct a new 2ch sound field with the two speaker devices SP7 and SP8 and perform audio output. The output result in the example of FIG. 5 is the same as that in the example of FIG. 4 if the downmix coefficients are the same.
[0077]
In a case where corrections need to be made in both the source device 1 and the speaker devices, as in FIG. 3B, the processing procedures of FIGS. 4 and 5 may be combined. In the above example, the case has been described in which some speaker devices are moved to another area from a state in which only one group initially exists, that is, the case in which a new group is created. However, the present invention is not limited to this, and can be similarly applied to the case of moving a speaker device between a plurality of originally existing groups.
[0078]
As in this example, according to the present system, when the plurality of speaker devices 2 included in the system are divided into a plurality of groups and the audio signal is reproduced for each group, and speaker devices 2 belonging to a certain group are moved, it becomes possible to make them function as speaker devices of another group with the audio balance taken into account (that is, to set the audio balance adjustment accompanying the rearrangement of the speaker devices 2) by a simple operation from the speaker device 2 side, which improves user convenience. In particular, with the present system, even if the speaker devices 2 are relocated, it is possible to construct a sound field optimized for each room, whether in a home or in a venue such as a restaurant.
[0079]
As described above, the present embodiment has been explained using the configuration example of FIG. 1, but the present system is not limited to this; for example, the following applications are also possible. In the setting (resetting) for group reconstruction, it is preferable to change the sound balance adjustment not only according to the number of speaker devices but also according to the speaker characteristics (type). However, as an extreme example, the speaker devices in the present system may all be the same monaural speaker, in which case the audio balance adjustment may be performed simply according to the number of speaker devices belonging to each group. In addition, depending on the speaker characteristics of the speaker devices 2 moved before and after the reconstruction, there may be cases where the audio of a desired channel cannot be output in one of the groups after the reconstruction; in such a case, a warning is preferably issued by voice and/or display.
[0080]
Further, although the description has assumed that one speaker device is provided with one speaker unit as the speaker unit 25, one speaker device (with one wireless communication unit 21 inside) may be a speaker array device in which a plurality of speaker units are arranged. For example, it may be a speaker device provided with a plurality of speaker units, such as three speaker units for the high, middle, and low frequency ranges (a tweeter, a mid-range, and a woofer). Furthermore, in one speaker device, the speaker unit 25 (or the AMP 24 and the speaker unit 25) may be configured in a housing separate from the other parts, in which case the operation unit 26 can also be attached to the speaker unit 25 side. When adopting these configurations, the connection between the housings may basically be made by wire.
[0081]
Furthermore, by constructing a system divided into a plurality of speaker groups like the system of FIG. 2B, the same content can be output in a plurality of rooms, and the system can also be configured to receive source device operations such as changing the volume level and the audio characteristics in each room. Also, as can be seen from the examples of FIGS. 3B and 3C, it may be configured to receive source device operations that cause different content to be reproduced in multiple rooms. Examples of such source device operations include a content switching operation (such as forward skip or back skip) and an operation for changing the reproduction state of content (such as playback, stop, pause, fast forward, or fast reverse). Further, for example, by providing the remote control 3 with both operation buttons for all the speaker devices in the system and operation buttons for all the speaker devices belonging to a group, the user can select and operate the target of the audio change. Even when a terminal device is equipped with the function of the source device 1, devices such as remote controls may be shared by both the source device 1 and the speaker devices 2; in that case, the source device 1 can be carried to the location of a speaker group so that only the sound at that location is changed. These applications are, of course, equally applicable to the other embodiments described later.
[0082]
Second Embodiment A second embodiment of the present invention will be described with reference to FIGS. 6A and 6B. FIG. 6A is a sequence diagram for explaining an example of a channel map setting procedure in the audio wireless transmission system according to the present embodiment, and FIG. 6B is a diagram showing an example of a channel map set by that procedure. The present embodiment will be described focusing on differences from the first embodiment, but the various examples of the first embodiment can also be applied.
[0083]
In the present embodiment, an operation to initialize a plurality of speaker groups (an operation to set the channel map described above) can be received as the predetermined source device operation. Of course, it is preferably executable in conjunction with the above-mentioned source device operation for group reconfiguration (reconstruction). In the following, an example of constructing the system of FIG. 2B by the initial setting is given; when constructing a system of only one group by the initial setting, as in the system of FIG. 2A, the following description may simply be applied with only one group.
[0084]
At the time of initial setting, its own channel and group are set in each of the speaker devices SP1 to SP8 included in the system of FIG. 2B (steps S41, S42, and S43). Such a setting may be received by the operation unit 26 while a user interface (UI) image is displayed on the display unit, for example. Note that the own channel and group may be predetermined; for example, when a speaker set consisting of a plurality of speaker devices is sold, the channels may be allocated and the devices registered as the same group in advance, as in the sketch below.
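The per-speaker setting held at this stage might look like the following sketch; the field names and values are assumptions used only to make steps S41 to S43 concrete.

```python
from dataclasses import dataclass

@dataclass
class SpeakerLocalSetting:
    speaker_id: str   # unique information of the speaker device
    channel: str      # own channel, e.g. "La" or "Ra"
    group: str        # own group, e.g. "2a" or "2b"

# A speaker set sold as one package could ship with these values pre-registered:
sp7 = SpeakerLocalSetting(speaker_id="SP7", channel="La", group="2b")
sp8 = SpeakerLocalSetting(speaker_id="SP8", channel="Ra", group="2b")
print(sp7, sp8)
```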
[0085]
Thereafter, pressing of the speaker setting button is accepted by one speaker device (exemplified here by the speaker device SP7), and a speaker setting instruction (speaker setting command) is transmitted to the source device 1 by wireless communication (step S44). The unique information of the speaker device SP7 itself is added to the transmitted command. The speaker setting button may be provided on the remote control 3, or may be provided on all or some of the speaker devices 2. When it is provided on only some of them, if there are speaker devices that need to be moved as a set (for example, the speaker devices SP7 and SP8) in order to construct an appropriate sound field, only one of them needs to be provided with the button.
[0086]
Following step S44, in the source device 1, the wireless communication unit 15 receives the speaker setting instruction from the speaker device SP7, the control unit 10 interprets it and transmits a response to the speaker device SP7 (step S45), and a speaker information acquisition request is transmitted to all the speaker devices SP1 to SP8 (step S46). The speaker information is information indicating the above-described channel and group set in each of the speaker devices SP1 to SP8. The source device 1 collects the speaker information transmitted from all the speaker devices SP1 to SP8 as responses (step S47).
[0087]
The source device 1 determines the channel map of each of the groups 2a and 2b based on the collected speaker information (step S48). As this channel map, for example, as in the channel map 10a shown in FIG. 6B, the group No., channel, and system configuration (configuration of the speaker system in the group) are registered for each speaker device, as in the sketch below. Here, as exemplified in FIG. 3C, on the premise that the audio signal of each channel is generated on the source device 1 side, it is sufficient to identify the group and the single channel to be extracted by each speaker device. However, when it is necessary for the speaker device 2 side to extract a plurality of channels and generate the channel of the output sound from them, the channel information may include not only information indicating each channel to be extracted but also downmix coefficients.
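A minimal sketch of how the source device 1 could assemble such a channel map from the collected speaker information is shown below; the field names and the rule for deriving the system configuration are assumptions, not the format of the actual channel map 10a.

```python
# Assemble channel map rows (group No., channel, system configuration) per speaker.
def build_channel_map(speaker_info):
    """speaker_info: list of dicts with 'speaker_id', 'group' and 'channel'."""
    groups = {}
    for info in speaker_info:
        groups.setdefault(info["group"], []).append(info)
    channel_map = []
    for group, members in groups.items():
        # assumed rule: describe the group's speaker system by its member count
        config = "2ch [Stereo]" if len(members) == 2 else f"{len(members)}ch"
        for info in members:
            channel_map.append({"speaker_id": info["speaker_id"],
                                "group_no": group,
                                "channel": info["channel"],
                                "system_config": config})
    return channel_map

collected = [{"speaker_id": "SP7", "group": "2b", "channel": "La"},
             {"speaker_id": "SP8", "group": "2b", "channel": "Ra"}]
for row in build_channel_map(collected):
    print(row)   # each row is what would later be set in the matching speaker
```

Only the row relevant to a given speaker device then needs to be transferred to that device in the subsequent setting steps.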
[0088]
Next, the source device 1 sets the determined channel map in each of the speaker devices SP1 to SP8 (steps S50 to S52). Here, only the information in the channel map that is required by the speaker device being set may be stored in the memory or the like of its control unit 20. In the example of FIG. 6B, the information of the corresponding row of the channel map 10a may be set in each speaker device.
[0089]
After the channel map is set as described above, each speaker device can perform processing such as extraction of the audio signal of the required channel and downmixing, and an optimum sound field can be constructed for each group. As described above, according to the present embodiment, when the plurality of speaker devices 2 included in the present system are divided into a plurality of groups and the audio signal is reproduced for each group, the initial setting of the grouping, taking the audio balance of each group into consideration, can be realized by a simple operation from the speaker device 2 side, and user convenience can be improved.
[0090]
Also, although the channel map setting process of FIG. 6A has been described as a process performed at the time of initial setting, the same process (the process of FIG. 6A excluding steps S41 to S45) may be executed instead of the instructions in steps S7 and S8 of FIG. 4, or instead of the instructions in steps S27 and S28 of FIG. 5.
[0091]
Third Embodiment A third embodiment of the present invention will be described.
The present embodiment will be described focusing on differences from the first embodiment, but the various examples of the first and second embodiments can also be applied.
[0092]
The setting unit in the present embodiment is capable of storing a plurality of patterns of setting information (for example, the channel map 10a of FIG. 6B) indicating the result of the setting for grouping. The speaker device 2 can then accept, as the predetermined source device operation, an operation of selecting one pattern to be used from the stored setting information. Each pattern is preferably selectable while information on the grouping is displayed on the display unit or the like, but it is also possible to simply register pattern numbers and select a number by an input or skip operation, as sketched below.
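As an illustration of this idea, the setting unit could keep the patterns in a numbered table and restore the selected one, as in the following sketch; the structure and labels are assumptions.

```python
# Hypothetical pattern store: each entry keeps a label and its channel map rows.
stored_patterns = {
    1: {"label": "one group (FIG. 2A layout)",
        "channel_map": [("SP7", "2a", "SL"), ("SP8", "2a", "SR")]},
    2: {"label": "two groups (FIG. 2B layout)",
        "channel_map": [("SP7", "2b", "La"), ("SP8", "2b", "Ra")]},
}

def select_pattern(number):
    """Return the channel map of the pattern number chosen by the user."""
    pattern = stored_patterns[number]
    print(f"restoring pattern {number}: {pattern['label']}")
    return pattern["channel_map"]

select_pattern(1)   # e.g. the user skips back to the original arrangement
```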
[0093]
Thus, for example, after the user moves the speaker devices SP7 and SP8 from the system of FIG. 2A to the system of FIG. 2B, the original arrangement can easily be restored by this source device operation when returning to the system of FIG. 2A. Of course, the same operation can also be configured to be accepted by the operation unit 16 or the like of the source device 1.
[0094]
Fourth Embodiment As a fourth embodiment of the present invention, an example of communication control between the speaker devices and the source device, applicable to the first to third embodiments, will be described with reference to FIGS. 7A to 11B. FIGS. 7A, 7B, and 7C are sequence diagrams for explaining the first, second, and third communication methods used for communication processing in the audio wireless transmission system according to the present embodiment, respectively. FIG. 8A is a view showing an example of a packet used for the communication processing in the audio wireless transmission system according to the present embodiment, and FIG. 8B is a view showing an example of the contents described in each section of the packet of FIG. 8A.
[0095]
As the minimum communication units of the signals related to control between the source device 1 (wireless transmitter) and the speaker device 2 (wireless receiver), there are the following three types of communication methods, and a series of communication procedures (a sequence) is a combination of these types. Although the audio signal is transmitted by wireless communication as described above, these control signals may be transmitted by wire.
[0096]
As a first communication method, when the wireless receiver is operated, the wireless receiver
issues an instruction (Command), thereby requesting control of the wireless transmitter (step
S71). In response to this instruction, the wireless transmitter returns an instruction response
(Command Response) (step S72). In the second communication method, the wireless transmitter
issues a request (Request) to the wireless receiver to request information and the like (step S73).
The wireless receiver returns a request response (Request Response) in response to this request
(step S74). In the third communication method, the wireless transmitter issues a notification
(Report) to the wireless receiver to notify of a status change (step S75).
[0097]
As exemplified in FIG. 8A, the communication data format (packet) 41 used in these communication methods includes the packet type (a type indicating which of the packets of steps S71 to S75 it is), the packet ID, the addresses, the parameter length, the parameter, and a cyclic redundancy check value (CRC value). The lengths and descriptions of the respective sections are as illustrated in the table 42 of FIG. 8B and are not described in detail here. Of course, the packet is not limited to these lengths and descriptions, and the parameter length and the parameter description method for each packet type, described later with reference to FIGS. 9A to 11B, are not limited to the illustrated examples either.
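One possible byte layout for such a packet is sketched below. The field widths and the use of CRC-32 are assumptions made only to give a runnable example; the actual lengths are those listed in the table 42 of FIG. 8B.

```python
import struct
import zlib

# Packet types corresponding to steps S71 to S75.
PACKET_TYPES = {"command": 0x00, "command_response": 0x01,
                "request": 0x02, "request_response": 0x03, "report": 0x04}

def build_packet(ptype, packet_id, src_addr, dst_addr, params=b""):
    """Assemble type, ID, addresses, parameter length, parameter and CRC."""
    header = struct.pack(">BHHHH", PACKET_TYPES[ptype], packet_id,
                         src_addr, dst_addr, len(params))
    body = header + params
    crc = zlib.crc32(body) & 0xFFFFFFFF      # assumed CRC algorithm
    return body + struct.pack(">I", crc)

# Example: a report (type 04h) with no additional parameter.
print(build_packet("report", 0x0000, src_addr=0x0007, dst_addr=0x0001).hex())
```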
[0098]
The first communication method will be described. FIG. 9A shows an example of a packet used for an instruction in the first communication method of FIG. 7A, and FIG. 9B shows an example of the IDs (instruction types) described in the packet of FIG. 9A and the parameters defined for each ID. FIG. 9C shows an example of a packet used for an instruction response in the first communication method of FIG. 7A, and FIG. 9D shows an example of the IDs (instruction response types) described in the packet of FIG. 9C and the parameters defined for each ID.
[0099]
As shown in FIG. 9A, in the instruction packet 43 used in step S71, "00h", which corresponds to an instruction, is described as the packet type. An ID having the meaning given in the "description" column of the table 44 of FIG. 9B is attached to the packet 43, and a parameter (additional parameter) is also added as necessary. For example, for the speaker setting instruction in the processing example of FIG. 6A (transmitted in step S44), the ID is "0006" and there is no additional parameter. Note that the source device 1 can recognize from the sender's own address that the instruction came from, for example, the speaker device SP7. Further, for example, the group reconstruction instructions in the processing examples of FIGS. 4 and 5 (sent in steps S2, S4, S22, and S24), although not illustrated, may be assigned and use different IDs. Of course, the same ID "0006" can also be used when the system is configured to be able to execute both the processing examples of FIGS. 4 and 5 and the processing example of FIG. 6A; for this purpose, an additional parameter may be added to either or both.
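The following sketch shows how the source device 1 side might interpret such an instruction packet (packet type 00h, ID "0006", no additional parameter) and identify the sending speaker from the address field. The byte layout follows the illustrative build_packet() sketch given earlier, not a published format.

```python
import struct
import zlib

COMMAND_IDS = {0x0006: "speaker setting"}   # subset of table 44, for illustration

def parse_command(packet):
    """Interpret a command packet (type 00h) and report who sent it."""
    ptype, cmd_id, src_addr, _dst_addr, plen = struct.unpack(">BHHHH", packet[:9])
    if ptype != 0x00:
        raise ValueError("not a command packet")
    return {"command": COMMAND_IDS.get(cmd_id, hex(cmd_id)),
            "from": src_addr,
            "params": packet[9:9 + plen]}

# Rebuild the ID "0006" speaker setting instruction with the assumed layout.
body = struct.pack(">BHHHH", 0x00, 0x0006, 0x0007, 0x0001, 0)
packet = body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)
print(parse_command(packet))   # the source device recognizes the sender (0x0007)
```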
[0100]
As shown in FIG. 9C, in the instruction response packet 45 used in step S72, "01h", which corresponds to an instruction response, is described as the packet type. An ID having the meaning given in the "description" column of the table 46 of FIG. 9D is attached to the packet 45, and a parameter (additional parameter) is also added as necessary. This ID is the same as the ID at the time of the instruction. For example, in the response to the speaker setting instruction in the processing example of FIG. 6A (transmitted in step S45), the ID is "0006", and the instruction execution result is added as an additional parameter (when the speaker setting is reserved, information indicating that it has been reserved is also added). Further, for example, the responses to the group reconstruction instructions in the processing examples of FIGS. 4 and 5 may be returned in the same manner as this response, for example, immediately after steps S5 and S25, respectively.
[0101]
The second communication method will be described. FIG. 10A shows an example of a packet used for a request in the second communication method of FIG. 7B, and FIG. 10B shows an example of the IDs (request types) described in the packet of FIG. 10A and the parameters defined for each ID. FIG. 10C shows an example of a packet used for a request response in the second communication method of FIG. 7B, and FIG. 10D shows an example of the IDs (request response types) described in the packet of FIG. 10C and the parameters defined for each ID.
[0102]
As shown in FIG. 10A, in the request packet 51 used in step S73, "02h", which corresponds to a request, is described as the packet type. An ID having the meaning given in the "description" column of the table 52 of FIG. 10B is attached to the packet 51, and a parameter (additional parameter) is also added as necessary. For example, in the processing example of FIG. 6A, for the speaker information acquisition request (transmitted in step S46), the ID is "0000" and there is no additional parameter. For the channel map setting request (transmitted in step S49), the ID is "0003", and the system configuration (for example, a channel map configuration such as "0001h" for 2ch [Stereo]) is added as an additional parameter. Also, for example, the instruction requests for the audio (and balance) to be acquired in the processing examples of FIGS. 4 and 5 (sent in steps S7, S8, S27, and S28) involve the same processing as the channel map setting request; although not illustrated, different IDs may be defined and used for them. Of course, the same ID "0003" can also be used when the system is configured to be able to execute both the processing examples of FIGS. 4 and 5 and the processing example of FIG. 6A; to that end, an additional parameter may be added to either or both.
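A sketch of the channel map setting request (packet type 02h, ID "0003") carrying the system configuration as its additional parameter is given below. The value 0x0001 for 2ch [Stereo] follows the example in the text; the other field widths and the speaker addresses are assumptions.

```python
import struct
import zlib

SYSTEM_CONFIG_CODES = {"2ch [Stereo]": 0x0001}   # assumed code table

def build_channel_map_request(dst_addr, system_config, src_addr=0x0001):
    """Request packet: type 02h, ID 0003, system configuration as parameter."""
    params = struct.pack(">H", SYSTEM_CONFIG_CODES[system_config])
    body = struct.pack(">BHHHH", 0x02, 0x0003, src_addr, dst_addr,
                       len(params)) + params
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

# The source device would send one such request to each speaker of the group.
for dst in (0x0007, 0x0008):                     # assumed addresses of SP7, SP8
    print(build_channel_map_request(dst, "2ch [Stereo]").hex())
```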
[0103]
As shown in FIG. 10C, in the request response packet 53 used in step S74, "03h", which corresponds to a request response, is described as the packet type. An ID having the meaning given in the "description" column of the table 54 of FIG. 10D is attached to the packet 53, and a parameter (additional parameter) is also added as necessary. This ID is the same as the ID at the time of the request. For example, in the processing example of FIG. 6A, for the response to the speaker information acquisition request (transmitted in step S46), the ID is "0000", and the instruction execution result and the speaker information (channel ID and group ID) are added as additional parameters. For the response to the channel map setting request (transmitted in step S49), the ID is "0003", and the instruction execution result is added as an additional parameter (when the channel map setting is reserved, information indicating that it has been reserved is also included). Further, for example, a response to an instruction request such as the audio to be acquired in the processing examples of FIGS. 4 and 5 may be returned in the same manner as this response.
[0104]
The third communication method will be described. FIG. 11A shows an example of a packet used for a notification in the third communication method of FIG. 7C, and FIG. 11B shows an example of the IDs (notification types) described in the packet of FIG. 11A and the parameters defined for each ID.
[0105]
As shown in FIG. 11A, in the notification packet 61 used in step S75, "04h", which corresponds to a notification, is described as the packet type. An ID having the meaning given in the "description" column of the table 62 of FIG. 11B is attached to the packet 61, and a parameter (additional parameter) is also added as necessary.
[0106]
(Others) Although the system according to each embodiment of the present invention has been described above, this system does not have to adopt technology premised on WiSA. For example, in WiSA, an IC (Integrated Circuit) chip capable of wirelessly receiving an audio signal is mounted in each of the speaker devices, but a plurality of signal processing units may also be provided in one speaker device.
[0107]
In addition, the parts other than the speaker unit in the source device and the speaker devices illustrated in FIG. 1 can be realized by hardware such as a microprocessor (or DSP), a memory, a bus, an interface, and peripheral devices such as a remote control, together with software executable on that hardware. Part of the hardware may be mounted as an IC/IC chip set, in which case the software may be stored in the memory. Alternatively, all of the components of the present invention may be configured by hardware, and in that case as well, part of the hardware can be mounted as an IC/IC chip set.
[0108]
In addition, the object of the present invention is also achieved by supplying a recording medium on which the program code of software for realizing the functions of the various configuration examples described above is recorded to the source device or a speaker device, and having the program code executed by the microprocessor or DSP in each device. In this case, the program code of the software itself implements the functions of the various configuration examples described above, and the present invention can also be constituted by this program code itself, or by a recording medium (an external recording medium or an internal storage device) on which the program code is recorded, with the control side reading and executing the code. Examples of the external recording medium include various media such as optical discs such as a CD-ROM or DVD-ROM and nonvolatile semiconductor memories such as a memory card. Examples of the internal storage device include various devices such as hard disks and semiconductor memories. The program code can also be downloaded from the Internet and executed, or received from broadcast waves and executed.
[0109]
Although the audio wireless transmission system according to the present invention has been described above, as explained through the processing procedures, the present invention can also be regarded as an audio wireless transmission method performed in an audio wireless transmission system comprising a plurality of speaker devices and a source device that transmits an audio signal to the plurality of speaker devices by wireless communication.
[0110]
This audio wireless transmission method includes: a step in which a speaker device receives a predetermined user operation and transmits an operation signal indicated by the predetermined user operation to the source device; a step in which the setting unit of the source device performs setting for grouping the plurality of speaker devices into a plurality of speaker groups based on the operation signal received from the speaker device; and a step in which the source device executes audio balance adjustment processing for each of the speaker groups set by the setting unit and transmits the audio signal or the audio signal processing parameter obtained by the audio balance adjustment processing to each of the speaker devices by wireless communication.
The other application examples are as described for the audio wireless transmission system, and their description will be omitted.
[0111]
The program code itself is, in other words, a program for causing a computer on the source device side and a computer on the speaker device side to execute the audio wireless transmission method. That is, the program causes the computer of the source device to execute: a step of receiving, from a speaker device, an operation signal indicated by a predetermined user operation accepted by the speaker device; a step of performing, based on the received operation signal, setting for grouping the plurality of speaker devices into a plurality of speaker groups; and a step of executing audio balance adjustment processing for each of the set speaker groups and transmitting, to each speaker device by wireless communication, the audio signal or the audio signal processing parameter obtained by the audio balance adjustment processing. Further, the program causes the computer on the speaker device side to execute: a step of receiving a predetermined user operation for grouping the plurality of speaker devices into a plurality of speaker groups and transmitting an operation signal indicated by the predetermined user operation to the source device; and a step of receiving, from the source device by wireless communication, the audio signal or the audio signal processing parameter subjected to the audio balance adjustment processing for each speaker group based on the operation signal in the source device. The other application examples are as described for the audio wireless transmission system, and their description will be omitted.
[0112]
DESCRIPTION OF SYMBOLS 1 ... source device; 2, SP1, SP2, SP3, SP4, SP5, SP6, SP7, SP8 ... speaker device; 2a, 2b ... speaker group; 3 ... remote control; 10 ... control unit of the source device; 10a ... channel map; 11 ... audio input terminal; 12 ... communication unit of the source device; 13 ... audio input processing unit; 14 ... signal processing unit of the source device; 15 ... wireless communication unit of the source device; 16 ... operation unit of the source device; 17 ... display unit; 20 ... control unit of the speaker device; 21 ... wireless communication unit of the speaker device; 22 ... signal processing unit of the speaker device; 23 ... volume control unit; 24 ... amplifier unit (AMP); 25 ... speaker unit; 26 ... operation unit of the speaker device; 27 ... communication unit of the speaker device.