DESCRIPTION JP2016158242
Abstract: To provide automated source switching in a volume-controllable audio environment. A processor (204) operates under a software module, with reference to stored zone group configurations characterizing zone groups created by a user, so that a zone player (200) reads an audio source from another zone player or device on the network, synchronizes the players in the zone group, and plays the audio source as desired. The software module also creates pairs between two or more zone players to create a desired multi-channel audio environment. [Selected figure] Figure 2A
Smart line input processing of audio
[0001]
The techniques described herein relate to the field of consumer electronics. In particular, certain embodiments are directed to smart line input processing for use in an audio environment.
[0002]
Music is a large part of our daily lives, and thanks to technological advances, music content is now more accessible than ever. The same can be said for other types of media, such as television, movies, and other audio and video content. In fact, in addition to conventional means of accessing audio and video content, users can now access content on the Internet through online stores, Internet radio stations, online music services, online movie services, and the like.
[0003]
The demand for such audio and video content continues to increase rapidly. Given this sustained demand, the techniques used to access and play back such content continue to improve as well. Nevertheless, access to content and the technology used to play it back can still be significantly improved, often in ways that the market and end users cannot anticipate.
[0004]
These and other features, aspects, and advantages of the technology described herein will be better understood by those skilled in the art with reference to the following description, appended claims, and accompanying drawings.
[0005]
FIG. 1 shows an example of an arrangement in which a particular embodiment may be
implemented.
FIG. 2A shows an exemplary functional block diagram of a player according to a particular
embodiment. FIG. 2B shows an example of a controller that may be used to remotely control one
or more players of FIG. 2A. FIG. 2C shows an example of a controller that may be used to
remotely control one or more players of FIG. 2A. FIG. 2D is an example of a functional block
diagram within a controller according to a particular embodiment. FIG. 3A provides an
illustration of a zone scene configuration. FIG. 3B shows an example in which the user defines multiple groups to be formed simultaneously. FIG. 4 shows an example of a user interface that may be displayed on the controller or computer of FIG. 1. FIG. 5A shows an example of a user interface that allows the
user to form a scene. FIG. 5B shows another example of a user interface that allows the user to
form a scene. FIG. 5C shows an example of a user interface that allows the user to adjust the
volume level of zone players in the zone scene individually or collectively. FIG. 6 shows a
flowchart or process for providing player themes or zone scenes of multiple players, wherein one
or more players are placed in the zone. FIG. 7 shows an example of an arrangement in which an
audio source is played by two players according to an embodiment. FIG. 8 shows an example of
the configuration of pairing between a plurality of players according to the embodiment. FIG. 9
shows a flow chart or process for grouping multiple audio products and playing back separate
soundtracks synchronously to simulate a multi-channel listening environment. FIG. 10A shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 10B shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 10C shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 10D shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 10E shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 10F shows an
example of a snapshot of a controller used in a particular embodiment. FIG. 11 illustrates an
example of a smart line input process configuration according to a particular embodiment.
[0006]
Further, while the drawings are intended to illustrate particular embodiments, it should be understood that the invention is not limited to the arrangements and instrumentality shown in the drawings.
I. Overview
[0007]
Embodiments described herein relate to smart line input processing. The embodiments are particularly useful in a network environment in which a playback device can play audio data from two or more different sources, and at least one of the sources receives its audio data from an audio device via a line input connection. An advantage of the particular embodiments described herein, among other benefits, is that the listener can control the audio device itself and have the system detect the line input signal and automatically switch the source of the playback device so that it plays from the audio device. In this way, the listener does not have to manually switch the source of the playback device before playing the audio device. Another advantage described herein is that the system allows the listener to switch the playback device to a different source while the line input signal is still present. Yet another advantage described herein is that the system can rearm itself and switch the source back to the audio device when the system again detects a line input signal. In embodiments, these features are useful in a variety of environments where multi-source playback is desired.
[0008]
In an embodiment, the playback device is idle and not emitting sound; that is, the playback device is configured to receive and play a first audio data stream from a first source. The playback device is further capable of receiving and playing a second audio data stream from a second source. The second source is connected to an audio device through a line input connector on the second source. The listener commands the audio device to play audio. The second source is configured, when a signal is detected on the line input connector, to automatically switch the playback device so that it plays audio from the audio device through the second audio data stream. If desired, the switch to playing audio from the audio device may be performed only after the second source detects the signal on the line input connector for a threshold time. Playback of the second audio data stream may override playback of the first audio data stream. The playback device and either or both of the first and second sources may be components of a single device, or the playback device may be separate from either the first source or the second source and, for example, communicate with them over a network.
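The switching behavior described in this paragraph amounts to a small check run repeatedly by the second source. The following Python sketch is illustrative only; the class names, the threshold constant, and the signal-sampling method are assumptions rather than anything specified in the text (the 300 ms value mirrors the example given later).

```python
import time

THRESHOLD_SECONDS = 0.3  # example threshold; the text later mentions 300 ms or less

class LineInSource:
    """Second source exposing a line input connector (hypothetical sketch)."""
    def __init__(self):
        self._signal_since = None  # when the current line-in signal began

    def _read_line_in_level(self) -> float:
        return 0.0  # placeholder for sampling the physical connector

    def signal_sustained(self) -> bool:
        """True once a line-in signal has been present for the threshold time."""
        if self._read_line_in_level() < 0.01:        # no signal detected
            self._signal_since = None
            return False
        if self._signal_since is None:               # signal just appeared
            self._signal_since = time.monotonic()
        return time.monotonic() - self._signal_since >= THRESHOLD_SECONDS

class PlaybackDevice:
    def __init__(self, first_source, second_source: LineInSource):
        self.first_source = first_source
        self.second_source = second_source
        self.current_source = first_source           # idle; first stream configured

    def poll(self) -> None:
        """Automatically switch to the line-in source when its signal is sustained."""
        if self.second_source.signal_sustained() and self.current_source is not self.second_source:
            # Playback of the second stream overrides the first stream.
            self.current_source = self.second_source
```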
[0009]
In a particular embodiment, the playback device is configured to receive and play audio from a source, where the source receives the audio from an audio device connected to the source via a line input connector. During playback of audio from the audio device, the listener commands the playback device to play a new audio data stream from a different source. Upon receiving the command, the playback device switches to playing the new audio data stream. The playback device then instructs the source to stop sending the audio device's audio to the playback device. The source stops sending audio to the playback device and waits for a period of time during which it detects no signal on its line input connector. Once no signal has been detected on the line input connector for that period of time, the source is prepared, upon again detecting a signal on its line input connector, to automatically switch the playback device to play audio from the audio device. The playback device and any of the sources may be components of a single device, or the playback device may be separate from any of the sources and, for example, communicate with them over a network.
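The stop, wait, and re-arm behavior described here can be viewed as a small state machine on the source. A minimal sketch follows, assuming hypothetical names and a quiet period matching the 13-second example given later; a real device would also handle errors and signal glitches.

```python
import time
from enum import Enum, auto

QUIET_PERIOD_SECONDS = 13.0  # example; the text later mentions 13 seconds or less

class LineInState(Enum):
    ARMED = auto()       # ready to auto-switch the playback device on a signal
    FORWARDING = auto()  # sending line-in audio to the playback device
    DISARMED = auto()    # listener chose another source; waiting for silence

class LineInSource:
    def __init__(self):
        self.state = LineInState.ARMED
        self._quiet_since = None

    def on_stop_command(self) -> None:
        """The playback device asked this source to stop sending its stream."""
        self.state = LineInState.DISARMED
        self._quiet_since = None

    def poll(self, signal_present: bool, switch_to) -> None:
        """Simplified periodic check; switch_to is a callable on the playback device."""
        if self.state is LineInState.ARMED and signal_present:
            switch_to(self)                          # automatic source switch
            self.state = LineInState.FORWARDING
        elif self.state is LineInState.DISARMED:
            if signal_present:
                self._quiet_since = None             # audio device still playing
            else:
                if self._quiet_since is None:
                    self._quiet_since = time.monotonic()
                elif time.monotonic() - self._quiet_since >= QUIET_PERIOD_SECONDS:
                    self.state = LineInState.ARMED   # re-armed for the next signal
```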
[0010]
In a particular embodiment, the playback device is configured to output audio data at a first volume level. When the playback device is switched automatically to play audio from a new source, where the new source receives audio from an audio device connected to it through a line input connector, the volume of the playback device is changed to a second volume level. The second volume level is set high so as to increase the dynamic range given to the volume control of the audio device connected to the new source via the line input. If the playback device later switches to play audio from a source different from the new source with the line input connector, the volume of the playback device is returned to a safe volume level so that high-level audio is not played to the listener.
[0011]
In particular embodiments, the playback device comprises a network interface, a processor, and, optionally, an amplifier and a speaker driver. The network interface may be configured to receive and transmit audio data over the network. The amplifier supplies power to the speaker driver, when the playback device is so equipped. The processor processes the audio data, which may be sent to another device for actual playback, output through the speaker driver if the playback device is so configured, or both. The playback device further comprises a line input connector to receive audio from an audio device. The playback device may perform automatic source switching: when a signal is detected on the line input connector, the playback device automatically causes the audio from the audio device to be played back by the playback device itself, by another device in communication with the playback device, or by both. If desired, the automatic switch to playing audio from the audio device may be performed only after the signal has been detected on the line input connector for a threshold time.
[0012]
In particular embodiments, the playback device comprises a network interface, a processor, an amplifier, and a speaker driver. The network interface may be configured to receive and transmit audio data over the network. The amplifier supplies power to the speaker driver. The processor processes audio data and outputs it through the speaker driver. The playback device may be automatically switched to play streaming audio data received from a source device. The source device includes a line input connector to which an audio device is connected. When a signal is detected on the line input connector of the source device, indicating that the listener wants to hear audio from the audio device, the playback device receives a command from the source device and automatically plays the streaming audio data from the audio device. If desired, the automatic switch to playing audio from the audio device may be performed only after the signal has been detected on the line input connector for a threshold time.
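In this topology the line input connector is on the source device and the playback device is told over the network to play the resulting stream. The sketch below is illustrative only; the class names and the message format are assumptions, not the actual protocol.

```python
# Hypothetical sketch: a source device with a line-in connector commands a
# networked playback device to play its stream once a signal is detected.

class NetworkedPlaybackDevice:
    def __init__(self):
        self.current_stream_url = None

    def handle_message(self, message: dict) -> None:
        # Message received over the data network from the source device.
        if message.get("command") == "play_stream":
            self.current_stream_url = message["stream_url"]
            # amplifier / speaker driver output of the stream would start here

class SourceDeviceWithLineIn:
    def __init__(self, playback_device: NetworkedPlaybackDevice, stream_url: str):
        self.playback_device = playback_device
        self.stream_url = stream_url  # where this device serves its line-in audio

    def on_line_in_signal(self) -> None:
        """Called once a signal has been present for the threshold time."""
        self.playback_device.handle_message(
            {"command": "play_stream", "stream_url": self.stream_url}
        )
```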
[0013]
Many of these and other embodiments are described hereinafter. Moreover, the detailed description is presented mainly in terms of exemplary environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operation of data processing devices connected to a network. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be understood by those skilled in the art that certain embodiments of the present invention may be practiced without certain specific details. In other instances, well-known methods, procedures, components, and circuits are described only in a simplified manner to avoid unnecessarily obscuring aspects of the embodiments.
[0014]
Reference herein to "an embodiment" means that the particular features, structures or
characteristics described in connection with the embodiments may be included in at least one
embodiment of the present invention. The appearances of this phrase in various parts of the
specification are not all referring to the same embodiment, nor are they separate or alternative
embodiments excluding each other. It will be appreciated by those skilled in the art, explicitly and
implicitly, that the embodiments described herein may be combined with other embodiments.
II. Environment example
[0015]
Referring now to the figures, in which like numerals may represent like parts throughout the several views, FIG. 1 shows an example of a configuration 100 in which a particular embodiment may be implemented. Configuration 100 may represent, but is not limited to, a home, part of a home, a business building, or a complex with multiple zones. Three example multimedia players 102, 104, and 106 are shown as audio devices. Each of the audio devices may be located or provided in one particular area or zone, and is therefore referred to herein as a zone player.
[0016]
As used herein, unless explicitly stated otherwise, an audio source or audio sources are generally in digital form and may be transmitted or streamed over a data network. To facilitate understanding of the example environment of FIG. 1, configuration 100 is assumed to represent a home. However, it is understood that the techniques are not limited to this setting. Returning to FIG. 1, zone players 102 and 104 may be located in one or two of the bedrooms, while zone player 106 may be installed or located in the living room. All zone players 102, 104, and 106 are connected directly or indirectly to a data network 108. Additionally, a computing device 110 is shown connected to the network 108. In practice, other devices, for example a home gateway device or an MP3 player, may also be connected to the network 108.
[0017]
Network 108 may be a wired network, a wireless network, or a combination of both. In one example, all of the devices, including zone players 102, 104, and 106, are connected to the network 108 by wireless means based on an industry standard such as IEEE 802.11. In yet another example, all of the devices, including zone players 102, 104, and 106, are part of a local area network in communication with a wide area network (e.g., the Internet). In yet another example, all of the devices, including zone players 102, 104, and 106, and a controller 142, form an ad hoc network identified by a family identifier (e.g., the Smith family) so as to be distinguished from similar neighboring settings.
[0018]
Many devices on the network 108 are configured to download and store audio sources. For example, the computing device 110 can download an audio source, such as audio associated with music or video, from the Internet (e.g., a "cloud") or from some other source, store the downloaded audio source, and share it locally with other devices on the network 108. The computing device 110 or any of the zone players 102, 104, and 106 can also be configured to receive streaming audio. Shown as a stereo system, device 112 is configured to receive an analog audio source (e.g., from broadcasting) or to extract a digital audio source (e.g., from a compact disc). Analog audio sources can also be converted to digital audio sources. According to particular embodiments, the various audio sources may be shared among the devices on the network 108.
[0019]
Two or more zone players (e.g., any two or more zone players 102, 104, and 106) may be
grouped together to form a new zone group. Several combinations of zone players and existing
zone groups may be grouped together. In one example, a new zone group is formed by adding
one zone player to another zone player or an existing zone group.
[0020]
In certain embodiments, there are two or more zone players in one environment (e.g., the living room of a house). Instead of grouping these zone players and playing the same audio source synchronously, the two zone players may be configured to play two different sounds, one on the left channel and one on the right channel. In other words, the stereo effect of the sound is reproduced or enhanced through the two zone players, one playing the left channel and the other the right. Similarly, for three-channel (or 2.1) sound, three such zone players may be reconfigured as if they were three speakers: left and right speakers forming the stereo sound, plus a subwoofer. Details of zone player reconfiguration and the operation of these audio products are described further below. Similar configurations comprising more channels (e.g., four, five, six, seven, or nine channels) may also be applied. For example, configurations using more than two channels are useful in television and movie theater settings, where video content such as television and movie formats is played with audio content comprising more than two channels. In addition, certain music may likewise be encoded with more than two channels of sound.
[0021]
In certain embodiments, two or more zone players may be integrated to form a single integrated
zone player. The integrated zone player may be further paired with a single zone player or yet
another integrated zone player. The integrated zone player may comprise one or more individual
playback devices. Each playback device of the integrated playback devices is preferably set to the
integrated mode.
[0022]
According to some embodiments, grouping, integration, and pairing operations can continue to be performed until the desired configuration is complete. The grouping, integration, and pairing operations are preferably performed through a control interface, without physically connecting and re-connecting speaker wires to, for example, individual discrete speakers, to create the different configurations. As such, certain embodiments described herein provide a more flexible and dynamic platform for providing sound reproduction to end users.
[0023]
According to particular embodiments, a particular zone player (e.g., any of zone players 102, 104, and 106) may be configured to receive a first audio data stream from a first source. The first audio data stream may be obtained from any of a number of means, such as downloaded songs (e.g., music files stored on an accessible hard drive), Internet radio stations, online music services, online movie services, and the like, as well as conventional means of accessing audio and video content. The zone player may further be capable of receiving and playing a second audio data stream from a second source. The second source may be connected to an audio device through a line input connector on the second source. In particular embodiments, the audio device may include any audio signal source device such as, for example, a record player, a radio, a cassette player, a CD player, a DVD player, and the like. In particular embodiments, the audio device may include a wireless network device, such as the Airport Express sold commercially by Apple Inc. The Airport Express may provide streaming audio to the line input connection of the second source. Audio data from the Airport Express device may be controlled, for example, via a graphical interface such as the iTunes music player, which may be separate from the playback device or the controller of the second source. It is understood that the first and second sources may themselves be zone players (e.g., any of zone players 102, 104, and 106).
[0024]
The listener may command the audio device to play audio (e.g., without manually switching the source on the zone player). For example, if the audio device is an Airport Express, the listener may command playback on the audio device using, for example, a computer or smartphone running the iTunes music player. The second source may be configured so that, when a signal is detected on the line input connector, it automatically switches the zone player such that the zone player plays audio from the audio device via the second audio data stream. If desired, the switch to playing audio from the audio device may be performed only after the second source has detected the signal on the line input connector for a threshold time. The threshold time may be, for example, 300 milliseconds or less, although any programmed time may be used. Playback of the second audio data stream may override playback of the first audio data stream. The zone player, the first source, and the second source may be components of a single device, or the playback device may be separate from either the first source or the second source and, for example, communicate with them over a network.
[0025]
At some later time, while the audio device is still playing and connected to the second source via its line input connector, the listener may decide to play a new audio data stream from a different source. In response, the listener may input a command, communicated to the zone player, to play the new audio data stream. The zone player receives the command, switches to the listener's desired source, and plays the new audio data stream. The zone player then commands the second source, which has the line input connector, to stop transmitting its audio data stream to the zone player. At this point, the second source waits for a period of time during which it detects no signal on its line input connector. The fixed time may be, for example, 13 seconds or less, although any programmed time may be used. If the second source again detects a signal from the audio device on its line input connector, the second source is immediately armed (or re-armed) and is prepared to automatically switch the zone player to again play the audio data stream from the second source.
[0026]
An advantage of the particular embodiments described above is that the listener can control the audio device itself (e.g., by pressing "Play" or "Stop" on the audio device, or on a graphical interface associated with the audio device) and have the system detect the line input signal and automatically switch the source of the zone player, without requiring the listener to manually switch the zone player.
[0027]
Another advantage of the specific embodiments described above is that the system allows the
user to switch the playback device to a different source while the line input signal is still present.
[0028]
Yet another advantage of the particular embodiments described above is that the system can rearm itself and switch the source back to the audio device if the system again detects a line input signal.
[0029]
According to some embodiments, a particular zone player (e.g., any of zone players 102, 104, and 106) is configured to output audio data at a first volume level. When the zone player is automatically switched to play an audio data stream from a new source having a line input connector, the zone player's volume is changed to a second volume level. The second volume level is set high so as to increase the dynamic range given to the volume control of the audio device connected to the new source via the line input. For example, the second volume level may be 75% of full volume. When the zone player is later switched to play an audio stream from a source different from the new source with the line input connector, the zone player's volume is returned to a safe volume level so that high-level audio is not played to the listener. For example, the safe volume level may be 25% of full volume.
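A minimal sketch of this volume handling follows, using the 75% and 25% example values from the text; the names and structure are assumptions for illustration only.

```python
LINE_IN_VOLUME = 0.75   # wide range for the audio device's own volume control
SAFE_VOLUME = 0.25      # restored when leaving the line-in source

class ZonePlayerVolume:
    def __init__(self):
        self.volume = SAFE_VOLUME

    def on_switch_to_line_in_source(self) -> None:
        # Give the connected audio device's volume control plenty of headroom.
        self.volume = LINE_IN_VOLUME

    def on_switch_to_other_source(self) -> None:
        # Avoid surprising the listener with high-level audio from other sources.
        self.volume = SAFE_VOLUME
```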
[0030]
It is understood that the techniques described herein are not limited to a particular setting. For example, the zones and zone players of the embodiments described herein may be used in cars, boats, planes, amphitheaters, outdoors, along the streets of a village or city, and the like, as well as in homes, offices, gyms, schools, hospitals, hotels, cinemas, shopping malls, stores, casinos, museums, theme parks, or any other place where audio content is to be played. Thus, it should be understood that the embodiments described herein may be used in connection with any system or application for which multi-channel pairing is desired.
III. Playback device example
[0031]
Referring to FIG. 2A, an exemplary functional block diagram of a zone player 200 according to an embodiment is shown. The zone player 200 includes a network interface 202, a processor 204, a memory 206, an audio processing circuit 210, a module 212, an audio amplifier 214 (which may be internal or external), and, optionally, a speaker unit 218 connected to the audio amplifier 214. The network interface 202 enables data to flow between a data network (i.e., the data network 108 of FIG. 1) and the zone player 200, and generally implements specific rules (i.e., protocols) by which devices exchange data with one another. One of the common protocols used on the Internet is TCP/IP (Transmission Control Protocol/Internet Protocol). In general, the network interface 202 manages the assembly of an audio source or file into smaller packets to be transmitted over the data network, and reassembles received packets into the original source or file. In addition, the network interface 202 handles the address portion of each packet so that each packet reaches the correct destination, and intercepts packets destined for the zone player 200. Thus, in particular embodiments, each of the packets includes not only an IP-based source address but also an IP-based destination address.
[0032]
The network interface 202 may include one or both of a wireless interface 216 and a wired interface 217. The wireless interface 216, also referred to as an RF interface, provides network interface functionality for the zone player 200 by wireless means and communicates with other devices according to a communication protocol (e.g., the wireless standards IEEE 802.11a, 802.11b, 802.11g, 802.11n, or 802.15.1). The wired interface 217 provides network interface functionality by wired means (e.g., an Ethernet (registered trademark) cable). In one embodiment, a zone player includes both interfaces 216 and 217, and other zone players include only an RF interface or only a wired interface; these other zone players then communicate with other devices on the network, or read out an audio source, via that zone player. The processor 204 is configured to control the operation of the other components in the zone player 200. The memory 206 can be loaded with one or more software modules executable by the processor 204 to accomplish a desired task. According to one embodiment, for example, when a software module implementing the embodiments described herein is executed, the processor 204 operates under the software module with reference to stored zone group configurations characterizing zone groups created by a user, and the zone player 200 reads an audio source from another zone player or device on the network, synchronizes the players in the zone group, and plays the audio source as desired. According to another embodiment, software modules implementing the embodiments described herein create pairs between two or more zone players to create a desired multi-channel audio environment.
[0033]
According to another embodiment, software implementing one or more of the embodiments described herein enables automated source switching. For example, the processor 204 may operate under a software module to determine that an audio signal is present at the line input connector 220 and, in response, switch the source of the zone player or playback device to the audio device. The processor 204 may, under the software module, receive a command and stop playing audio data from the audio device even if an audio signal is still present at the line input connector 220. The processor 204 may, under the software module, determine that an audio signal is no longer present at the line input connector 220 and, in response, rearm itself so that a subsequent audio signal again switches the source to the audio device.
[0034]
The line input connection 220 may include a socket for a jack plug or other audio connector and may be connected to the audio processing circuit 210. In a particular embodiment, the line input connection 220 includes a socket for any of a 0.25 inch plug, a 3.5 mm plug, or a 2.5 mm plug. For example, a setup may include connecting the 3.5 mm stereo mini-jack connection of an Airport Express to the 3.5 mm connection of the zone player 200 (e.g., line input connection 220).
[0035]
According to one embodiment, the memory 206 is used to save one or more zone configuration files, which can be read and modified at any time. Generally, a stored zone group configuration file is sent to a controller (e.g., the control device 140 or 142 of FIG. 1, a computer, a portable device, or a television) when the user operates the control device. The zone group configuration provides an interactive user interface through which various operations or controls of the zone player can be performed.
[0036]
In a particular embodiment, the audio processing circuit 210 resembles the circuitry of an audio playback device and includes one or more analog-to-digital converters (ADCs), one or more digital-to-analog converters (DACs), an audio pre-processing unit, an audio enhancement unit, a digital signal processor, or the like. In operation, an audio source is read through the network interface 202 and processed by the audio processing circuit 210 to generate an analog audio signal. The processed analog audio signal is then sent to the audio amplifier 214 for playback through a speaker. In addition, the audio processing circuit 210 may include circuitry for processing an analog input signal to generate a digital signal, which may be shared with other devices on the network.
[0037]
Depending on the exact implementation, the module 212 may be implemented as a combination of hardware and software. In one embodiment, the module 212 is used to save a scene. The audio amplifier 214 is typically an analog circuit that amplifies the provided analog audio signal to drive one or more speakers.
[0038]
It is understood that zone player 200 is an example of a playback device. Examples of playback devices include the zone players sold commercially by Sonos, Inc. of Santa Barbara, California, which currently include the ZonePlayer 90, the ZonePlayer 120, and the Sonos S5. The ZonePlayer 90 is an example of a zone player that does not incorporate an amplifier, while the ZonePlayer 120 is an example of a zone player that incorporates an amplifier. The S5 is an example of a zone player that incorporates both an amplifier and speakers. In particular, the S5 is a five-driver powered speaker system that includes two tweeters, two mid-range drivers, and one subwoofer. When playing audio content via the S5, the left audio data of a track is output from the left tweeter and the left mid-range driver, the right audio data of the track is output from the right tweeter and the right mid-range driver, and mono bass is output from the subwoofer. Furthermore, both mid-range drivers and both tweeters have the same equalization (or substantially the same equalization); that is, they each output the same range of frequencies, only from different channels of the audio. While the S5 is an example of a zone player with speakers, it is understood that a zone player with speakers is not limited to a particular number of speakers (e.g., five speakers as in the S5), but may include one or more speakers. Furthermore, a zone player may be part of another device, which may also serve a primary purpose different from audio.
IV. Controller example
[0039]
Referring to FIG. 2B, an example of a controller 240 is shown, which may correspond to the control device 140 or 142 of FIG. 1. The controller 240 may be used to facilitate control of multimedia applications, automation, and the like. In particular, the controller 240 is configured to control the operation of one or more zone players (e.g., zone player 200) through an RF interface corresponding to the wireless interface 216 of FIG. 2A, and to allow selection of multiple audio sources available on the network. According to one embodiment, the wireless means is based on an industry standard (e.g., infrared, radio, wireless standards IEEE 802.11a, 802.11b, 802.11g, 802.11n, or 802.15.1). When a particular audio source is being played by the zone player 200, an image associated with the audio source, if any, may be transmitted from the zone player 200 to the controller 240 for display. In one embodiment, the controller 240 is used to synchronize audio playback of more than one zone player by grouping the zone players into a zone group. In another embodiment, the controller 240 is used to control the volume of each of the zone players in a zone group individually or together.
[0040]
In one embodiment, the controller 240 is used to create a pairing between two or more playback devices to create or enhance a multi-channel listening environment. For example, the controller 240 may be used to select and pair two or more playback devices. Additionally, the controller 240 may be used to turn pairing on or off. The controller 240 may also be used to integrate playback devices and to set a particular playback device in integrated mode. Thus, in some embodiments, the controller 240 provides a flexible mechanism for configuring a dynamic multi-channel audio environment. In some instances, pairing creates a multi-channel listening environment. In some instances, pairing improves the multi-channel listening environment by increasing the separation between devices. For example, two individual playback devices located at a distance from one another can provide the listener with greater channel separation than audio from a single device.
[0041]
The user interface of the controller 240 includes a screen 242 (e.g., a liquid crystal display screen) and a set of function buttons, such as a "zone" button 244, a "back" button 246, a "music" button 248, a scroll wheel 250, an "OK" button 252, a set of transport control buttons 254, a mute button 256, a volume up/down button 264, and a set of soft buttons 266 corresponding to labels 268 displayed on the screen 242.
[0042]
The screen 242 displays various screen menus according to the user's selection. In one embodiment, the "zone" button 244 activates a zone management screen or "zone menu", described in more detail below. The "back" button 246 may perform different operations depending on the current screen. In one embodiment, the "back" button cancels the current screen display and returns to the previous screen. In another embodiment, the "back" button cancels a mis-selection by the user. A "music" button 248 activates a music menu that allows selection of an audio source (e.g., a song) to be added to the music queue for playback on a zone player.
[0043]
The scroll wheel 250 is used to select an item within a list whenever a list is presented on the screen 242. If there are too many items in the list to fit in one screen view, a scroll indicator such as a scroll bar or scroll arrow is displayed next to the list. If a scroll indicator is displayed, the user may turn the scroll wheel 250 to either select a displayed item or reveal a hidden item in the list. The "OK" button 252 is used to confirm the user's selection on the screen 242.
[0044]
There are three transport buttons 254, which are used to control the playback of the currently playing song. For example, the functions of the transport buttons include playing/pausing a song, fast-forwarding/rewinding, advancing to the next track, or returning to the previous track. According to one embodiment, pressing one of the volume control buttons, such as the mute button 262 or the volume up/down button 264, activates a volume panel. In addition, there are three soft buttons 266 that can be activated in accordance with the labels 268 on the screen 242. It will be appreciated that, in a multi-zone system, there may be multiple audio sources, each played by one or more zone players. The music transport functions described herein apply selectively to one of the sources when a corresponding one of the zone players or zone groups is selected.
[0045]
FIG. 2C shows an example of a controller 260, which may correspond to the control device 140 or 142 of FIG. 1. The controller 260 includes a touch screen that allows the user to interact with the controller, for example to navigate through a playlist of many items and to control the operation of one or more players. In one embodiment, as further illustrated in FIGS. 10A-10F, the user may interact with the controller to create a multi-channel audio environment (e.g., create a stereo pair) or to break up a multi-channel audio environment (e.g., separate a stereo pair). Other network devices, such as an iPhone, iPad, or any other smartphone or network-enabled device, may be used as a controller to interact with or control multiple zone players in the environment (e.g., a networked computer such as a PC or Mac may also be used as a controller). According to one embodiment, an application may be downloaded to a network-enabled device. Such an application may implement most of the functionality described above for the controller 240 using the navigation mechanism or touch screen of the device. Those skilled in the art will appreciate the flexibility of such an application and its portability to new types of portable devices, given the detailed description herein.
[0046]
FIG. 2D shows an internal functional block diagram of an example controller 270, which may correspond to the controller 240 of FIG. 2B, a computing device, a smartphone, or any other communication device. The screen 272 of the controller 270 may be a liquid crystal display screen. The screen 272 communicates with a screen driver 274, which is controlled by a microcontroller (e.g., processor) 276, to issue commands. The memory 282 may be loaded with one or more application modules 284 executable by the microcontroller 276, with or without user input through a user interface 278, to accomplish a desired task. In one embodiment, an application module is configured to enable grouping of selected zone players into a zone group and synchronizing the zone players to one audio source. In another embodiment, an application module is configured to control together the audio (e.g., volume) of the zone players in a zone group. In operation, when the microcontroller 276 executes one or more of the application modules 284, the screen driver 274 generates control signals that drive the screen 272 to display the application's specific user interface, which is described more fully below.
[0047]
The controller 270 includes a network interface 280, referred to as an RF interface 280, that enables wireless communication with a zone player via the zone player's corresponding RF interface. In one embodiment, commands such as volume control or synchronization of audio playback are sent via the RF interface. In another embodiment, saved zone group configurations are transmitted between a zone player and the controller via the RF interface. The controller 270 may control one or more zone players, such as 102, 104, and 106 of FIG. 1. Moreover, there may be more than one controller in a given zone (e.g., a room or rooms close to each other), and each may be configured to control any one or all of the zone players.
[0048]
In one embodiment, the user creates a zone group including at least two zone players from the controller 240, which transmits a signal or data to one of the zone players. Because all zone players are connected on the network, the signal received by one zone player synchronizes the other zone players in the group, so that all zone players in the group play the same list of audio sources, or the same audio source, in a synchronized and timely manner with no audible (or almost no audible) gaps or jumps. Similarly, when the user increases the audio volume of the group from the controller, the signal or data to increase the audio volume of the group is sent to one of the zone players, and the other zone players in the group increase their volume together to the same level.
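A minimal sketch of propagating a group volume change in this manner follows; the message passing and names are assumptions for illustration, not the actual device protocol.

```python
class ZonePlayer:
    """Hypothetical zone player that shares commands with its group members."""
    def __init__(self, name: str):
        self.name = name
        self.volume = 0.3
        self.group = [self]          # list of ZonePlayer objects in the same group

    def receive_group_volume(self, volume: float) -> None:
        """Handle a controller command addressed to this member of the group."""
        for player in self.group:
            player.volume = volume   # every member moves to the same level

bedroom, study, dining = ZonePlayer("Bedroom"), ZonePlayer("Study"), ZonePlayer("Dining Room")
group = [bedroom, study, dining]
for player in group:
    player.group = group

bedroom.receive_group_volume(0.5)               # controller sends to one player...
assert all(p.volume == 0.5 for p in group)      # ...and the whole group follows
```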
[0049]
According to one embodiment, an application module is loaded into the memory 282 to manage zone groups. When a predetermined key (e.g., the "zone" button 244) is activated on the controller 240, the application module is executed by the microcontroller 276. The input interface 278, connected to and controlled by the microcontroller 276, receives input from the user. A "zone menu" is then displayed on the screen 272. The user may start grouping zone players into a zone group by activating a "Link Zone" or "Add Zone" soft button, or start ungrouping a zone group by activating an "Unlink Zone" or "Drop Zone" button. The details of zone group operations are described further below.
[0050]
As mentioned above, the input interface 278 includes an on-screen graphical user interface as well as a plurality of function buttons. It should be pointed out that the controller 240 of FIG. 2B is not the only control device that can implement the embodiments. Other devices (e.g., computing devices, handheld devices) that provide equivalent control functionality may also be configured to practice the present invention. Unless stated otherwise, the keys or buttons referred to in the above description may be either physical buttons or soft buttons by which the user can enter commands or data.
[0051]
One mechanism of "connected" zone players together for music playback is to link multiple zone
players together to form a group. To link multiple zone players together, the user may manually
link each zone player or room to each other. For example, there is a multi-zone system that
includes the following zones: Bathroom Bedroom Study Dining Room Family Room Entrance
[0052]
If the user wants to link five of the six zone players using the current mechanism, the user may start with a single zone and then manually link each other zone to it, one at a time. This mechanism can take a considerable amount of time. According to one embodiment, a set of zones can instead be linked together dynamically using a single command. Using what is referred to herein as a theme or zone scene, zones can be configured into a particular scene (e.g., morning, afternoon, or garden), whereby a predetermined zone grouping and the attributes for that grouping are applied automatically.
[0053]
For example, the "Morning" zone scene / configure command links the bedroom, study and
dining room together in one operation. Without this single command, the user has to manually
link each zone individually. FIG. 3A provides an example of one zone scene, the left column
shows the start of zone grouping-all zones are separated, the right column shows the results of
zone grouping, We have created a group of three zones named after "Morning".
[0054]
Expanding this idea further, a zone scene can be configured to create multiple sets of linked zones. For example, a scene may create three separate groups of zones: the downstairs zones are linked together, the upstairs zones are linked together in their own group, and the outdoor zones (in this case the patio) are placed in their own group.
[0055]
In one embodiment, as shown in FIG. 3B, the user defines multiple groups to be formed simultaneously. For example, an "evening scene" may be desired that links the following zones: Group 1 (bedroom, study, dining room) and Group 2 (garage, garden), where the bathroom, family room, and entrance are separated from whatever group they belonged to before the zone scene is invoked.
[0056]
A feature of a particular embodiment is that the zones do not need to be separated before the zone scene is invoked. In one embodiment, a command is provided that, when invoked, links all of the zones in one step. The command is in the form of a zone scene. After linking the appropriate zones, the zone scene command can apply the following attributes: set the volume level in each zone (each zone may have a different volume); mute or unmute each zone; select specific music to play in the zones and set its playback mode (shuffle, repeat, shuffle-repeat); and set the music playback equalization for each zone (e.g., bass and treble).
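A minimal sketch of a zone scene carrying these per-zone attributes follows; the data layout, field names, and values are assumptions chosen for illustration, not a defined configuration format.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    volume: float = 0.3
    muted: bool = False
    eq: dict = field(default_factory=dict)

MORNING_SCENE = {
    "zones": {
        "Bedroom":     {"volume": 0.25, "muted": False, "eq": {"bass": 2, "treble": 0}},
        "Study":       {"volume": 0.40, "muted": False, "eq": {"bass": 0, "treble": 1}},
        "Dining Room": {"volume": 0.30, "muted": True,  "eq": {"bass": 0, "treble": 0}},
    },
    "playlist": "Morning Mix",
    "play_mode": "shuffle",          # shuffle, repeat, or shuffle-repeat
}

def apply_scene(scene: dict, zones: dict) -> list:
    """Link the scene's zones into one group and apply the per-zone attributes."""
    group = [zones[name] for name in scene["zones"]]
    for name, attrs in scene["zones"].items():
        zone = zones[name]
        zone.volume = attrs["volume"]
        zone.muted = attrs["muted"]
        zone.eq = attrs["eq"]
    return group                     # the linked group then plays scene["playlist"]

zones = {n: Zone(n) for n in ("Bedroom", "Study", "Dining Room", "Bathroom")}
linked = apply_scene(MORNING_SCENE, zones)
```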
[0057]
A further extension of this embodiment is to trigger a zone scene command as an alarm clock function. For example, a zone scene may be set to be applied at 8:00 a.m.: it can automatically link the appropriate zones, play specific music, and stop after a predetermined period of time. Although a single zone may be assigned an alarm, setting a scene as an alarm clock provides a synchronized alarm, in which the linked zones in the scene play predefined audio (e.g., a predefined playlist) at a specific time and/or for a specific duration. If, for any reason, the scheduled music fails to play (e.g., an empty playlist, no connection to a share, a UPnP failure, or no Internet connection for an Internet radio station), a backup buzzer sounds. This buzzer is a sound file stored in the zone player.
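The fallback behavior amounts to "try the scheduled audio, and play a locally stored buzzer if that fails". The sketch below is illustrative only; the function names, exception handling, and file path are assumptions.

```python
BUZZER_FILE = "/data/sounds/backup_buzzer.wav"   # hypothetical local sound file

def run_scene_alarm(scene: dict, play_playlist, play_local_file) -> None:
    """Invoke the zone scene's audio at the scheduled time, with a buzzer fallback."""
    try:
        if not scene.get("playlist"):
            raise RuntimeError("empty playlist")
        play_playlist(scene["playlist"])          # may fail: share unreachable,
                                                  # UPnP failure, no Internet radio
    except Exception:
        play_local_file(BUZZER_FILE)              # backup buzzer stored on the player
```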
[0058]
FIG. 4 illustrates an example of a user interface 400 that may be displayed on the controller 142 or the computer 110 of FIG. 1. The interface 400 displays a list of items that can be set by the user to cause a scene to be activated at a particular time. In the embodiment shown in FIG. 4, the list of items includes "alarm", "time", "zone", "music", "frequency", and "alarm length". "Alarm" can be set on or off. If "alarm" is set to on, "time" is the specific time at which the alarm goes off. "Zone" indicates which zone players are set to play the designated audio at the specified time. "Music" displays what is to be played at the specified time. "Frequency" allows the user to define how often the alarm recurs. "Alarm length" defines how long the audio is played. It should be noted that the user interface 400 is provided herein to illustrate some of the functions associated with setting an alarm. Depending on the exact implementation, other functions may also be provided, such as time zone, daylight saving time, time synchronization, and time/date format settings.
[0059]
In one embodiment, each zone in the scene may be configured for a different alarm. For example,
in the "Morning" scene, there are three zone players in the bedroom, study and dining room,
respectively. After selecting a scene, the user may set an alarm for the scene as a whole. As a
result, each of the zone players is activated at a particular time.
[0060]
FIG. 5A shows a user interface 500 that allows the user to form a scene. The left panel shows the zones available in the home. The right panel shows the zones that have been selected and grouped as part of this scene. Depending on the exact implementation of the user interface, add/delete buttons may be provided to move zones between the panels, or zones may be dragged from one panel to the other.
[0061]
FIG. 5B shows another user interface 520 that allows the user to form a scene. The user interface
520 can be displayed on a controller or computer device to indicate available zones in the
system. Check boxes are provided next to each of the zones, allowing the user to check the zones
associated with the scene.
[0062]
FIG. 5C illustrates a user interface 510 that allows the user to adjust the volume levels of the zone players in a zone scene individually or collectively. As shown in the user interface 510, the "volume" controls (shown as sliders, though other forms are possible) set the volume that each associated zone player will have when the zone scene is recalled. In one embodiment, when a scene is recalled, the zone players can instead be configured to keep whatever volume they currently have. In addition, the user can decide whether the volume is muted or unmuted when the scene is recalled.
V. Example of providing player themes or zone scenes
[0063]
FIG. 6 shows a flowchart of a process 600 for providing player themes or zone scenes for a plurality of players, wherein one or more players are arranged in zones. Process 600 is shown in accordance with an embodiment of the present invention and may be implemented in modules located in the memory 282 of FIG. 2D.
[0064]
Process 600 begins when the user decides to create a zone scene at 602. Process 600 then moves to 604, where the user can decide which zone players to associate with the scene. For example, there may be ten players in the home, and the scene is named "Morning". The user may be provided with an interface to select four of the ten players to associate with the scene. At 606, the scene is saved. The scene can be saved on any of the members of the scene. In the example of FIG. 1, the scene may be stored on one of the zone players and displayed on the controller 142. In operation, a set of data associated with a scene includes a plurality of parameters. In one embodiment, the parameters include, but are not limited to, identifiers of the associated players (e.g., IP addresses) and a playlist. The parameters may also include volume/tone settings associated with the scene. The user may return to 602 to configure another scene, as desired.
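A minimal sketch of the data record saved at step 606 follows, covering the parameters just mentioned (associated players identified by IP address, a playlist, and volume settings); the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SavedScene:
    name: str
    player_addresses: list       # e.g., IP addresses of the associated players
    playlist_id: str
    volumes: dict                # optional per-player volume settings

morning = SavedScene(
    name="Morning",
    player_addresses=["192.168.1.11", "192.168.1.12", "192.168.1.13", "192.168.1.14"],
    playlist_id="playlist:morning-mix",
    volumes={"192.168.1.11": 0.25, "192.168.1.12": 0.4},
)
# The scene record can be stored on any member of the scene and shown on the controller.
```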
[0065]
Given a saved scene, at 610 the user can activate the scene at any time or set a timer to activate it. Process 600 continues when the saved scene is activated at 610. At 612, upon activation of the saved scene, process 600 checks the status of the players associated with the scene. Checking the players' status means verifying that each of the players can respond in a synchronized manner. In one embodiment, the interconnections of the players are checked to ensure that the players can communicate among themselves and/or with the controller, if such a controller is present in the scene.
[0066]
Assuming that all players associated with the scene are in good condition, at 614 a command is executed with the parameters (e.g., the associated playlist and volumes). In one embodiment, data including the parameters is transmitted from one member (e.g., a controller) to the other members of the scene to synchronize the players with the operations configured in the scene. The operation allows all of the players to play a song at the same or different volumes, or to play a pre-stored file.
VI. Multi-channel environment example
[0067]
FIG. 7 shows an example of an arrangement in which an audio source is played by two players 702 and 704, according to an exemplary embodiment. The two players 702 and 704 may be located in and around one location (e.g., a hall, a room, or nearby rooms), and may each be designated to play one of two soundtracks. For example, the audio source may have left and right sound channels or tracks (e.g., stereo sound). Instead of grouping the players 702 and 704 to play the same audio source synchronously, the players 702 and 704 may be paired so that, while synchronized in time, they play different sounds from the audio source. As a result of the pairing, stereo sound effects may be simulated or enhanced via the two players 702 and 704, for example relative to a single player or to no pairing.
[0068]
In particular embodiments, each of the players 702 and 704 may include a network interface, one or more speaker drivers (e.g., two or more speakers if the player can play in stereo mode without pairing, as in the example shown in FIG. 2A), an amplifier, and a processor. The network interface receives audio data over the network. One or more amplifiers provide power to the speaker drivers. The processor processes the audio source to be output through the speaker drivers. The processor may configure a first equalization of the output from the speaker drivers in response to a first type of pairing, and may further configure a second equalization of the output from the speaker drivers in response to a second type of pairing.
[0069]
In one embodiment, the two players 702 and 704 are each configured to output multiple audio channels independently of each other. For example, each of players 702 and 704 may be configured to output audio content in stereo independently of the other. After pairing, one playback device (e.g., player 702) is configured to output a first subset of the multiple audio channels, and the other playback device (e.g., player 704) is configured to output a second subset of the multiple audio channels. The first and second subsets are different from one another. In this example, after pairing players 702 and 704, player 702 may play the right channel and player 704 may play the left channel. In another example, player 702 may play the center channel in addition to the right channel (e.g., in a television or theater mode), and player 704 may play the center channel in addition to the left channel. In a further example, the first and second subsets differ in that player 702 plays right + center and player 704 plays left + center. In yet another embodiment, after pairing, player 702 may play all channels except certain bass frequencies, which are played through player 704, thereby using player 704 as a subwoofer.
[0070]
In another embodiment, each playback device in a collection of three or more playback devices (e.g., players 702 and 704 and one or more other players) is configured to output multiple audio channels independently of the other playback devices in the collection. After pairing, each of the playback devices is typically configured to output a different audio channel of the collection. This embodiment is particularly useful in a television or cinema setting, where a particular one of the playback devices may at one time output two channels or stereo (e.g., when playing a song) and, after pairing, output the front right channel, front center channel, front left channel, rear right channel, rear left channel, and so on (e.g., when watching a movie or television).
[0071]
In another embodiment, one of the paired playback devices (e.g., player 702 or player 704) processes the data of the audio item, e.g., substantially divides the data into channels, each channel representing a single sound track to be played on one of the playback devices, thereby creating or enhancing a multi-channel listening environment. In another embodiment, both playback devices (e.g., players 702 and 704) may receive and process the audio item data, with each playback device outputting only the audio content designated for that player. For example, player 702 may play only the left channel while receiving both left and right channel audio, and player 704 may play only the right channel while likewise receiving both left and right channel audio.
[0072]
In another embodiment, two or more playback devices (e.g., players 702 and 704) may be integrated into a single integrated playback device, and the integrated playback device (e.g., integrated player 702+704) may be paired with one or more other playback devices. For example, two playback devices may be integrated into a first integrated playback device, and two other playback devices may be integrated into a second integrated playback device. The first and second integrated playback devices may then be paired to create or enhance a multi-channel listening environment.
[0073]
In certain embodiments, a playback device (e.g., either player 702 or 704) is configured to output an audio channel and, when paired with one or more other playback devices, is configured to output an audio channel different from the one it was previously configured to output. For example, the playback device may be configured to output the right channel in stereo mode and, after pairing with one or more other playback devices, be configured to output the rear right channel in theater mode. A playback device may be paired with one or more other playback devices.
[0074]
In certain embodiments, a playback device (e.g., either player 702 or 704) is configured to output multiple audio channels and, when paired with one or more other playback devices, the one or more other playback devices are configured to output a subset of those audio channels. For example, the playback device may be configured to output in two-channel or stereo mode, and after pairing, the one or more other playback devices may be configured to output the right or left channel. A playback device may be paired with one or more other playback devices.
[0075]
According to particular embodiments, the pairing operation of two or more playback devices may be triggered by a command (e.g., a manual command) from the user via the control interface, or in response to an event (e.g., an automatic command). For example, using the controller, the user can create a pairing of two or more playback devices, or unpair two or more paired playback devices. In another example, the pairing is triggered by the audio content itself, by a signal received from the source device, or by some other predetermined event; e.g., pairing may occur when the event is detected by the controller or playback device. Additionally, another device may be programmed to detect an event and provide a pairing signal to the controller and/or playback device.
[0076]
Furthermore, it is understood that transitioning from a no-pairing (unpaired or non-paired) configuration to a pairing configuration, or from one type of pairing (e.g., a pairing used in a stereo mode or theater mode) to a different type of pairing (e.g., another pairing used in a stereo mode or theater mode), are all among the various types of "pairing" that can occur according to particular embodiments. Furthermore, unpairing a plurality of playback devices may involve, for example, going from a pairing back to no pairing, or from the first type of pairing back to a previous type of pairing.
[0077]
In a first example, the first type of pairing may include "no pairing" with another playback device, and the second type of pairing may include pairing with one or more other playback devices. In a second example, the first type of pairing may include pairing with a second playback device, and the second type of pairing may include pairing with a plurality of playback devices. In a third example, the first type of pairing may include reproducing two-channel sound through a speaker driver, and the second type of pairing may include reproducing only one of the two channels of sound through the speaker driver. In a fourth example, the first type of pairing may include reproducing a first audio channel via the speaker driver, and the second type of pairing may include reproducing two audio channels via the speaker driver. In a fifth example, the first type of pairing may include reproducing audio content through the speaker driver in stereo mode, and the second type of pairing may include reproducing audio content through the speaker driver in theater mode. In a sixth example, the first type of pairing may include reproducing audio content via a speaker driver, and the second type of pairing may include reproducing audio content via the speaker driver in an integrated mode. It will be understood that various changes or modifications may be made to the embodiments described above while realizing some or all of the advantages of the techniques described herein.
[0078]
According to a particular embodiment, the configuration of the playback device includes any of: changing the equalization of the playback device by changing the equalization of one or more specific speaker drivers, and optimizing synchronization between the paired devices. Changing the equalization of the playback device may include turning on or off (or effectively muting) one or more specific speaker drivers, changing the channel output of one or more speaker drivers, changing the frequency response of one or more specific speaker drivers, changing the amplifier gain of any specific speaker driver, or changing the amplifier gain of the playback device as a whole.
[0079]
In particular embodiments, changing the equalization of the playback device (e.g., changing the equalization of one or more speaker drivers of the playback device) can affect frequency-dependent parameters. Examples include frequency intensity adjustment, phase adjustment, and time-delay adjustment of the audio data. In addition, certain types of pass filter can be used for equalization, such as filters that attenuate high, medium, or low frequencies while allowing other frequencies to pass without (or substantially without) attenuation. The filters may be of different types or different orders (e.g., first-order, second-order, third-order, or fourth-order filters). For example, a first equalization of the playback device may include using a first type of pass filter that causes the output to change based on the first type of pairing, and a second equalization of the playback device may include using a second type of pass filter that causes the output to change based on the second type of pairing. In this example, the first and second types of pass filter have one or more different properties and/or behaviors that change the equalization and sound behavior of the device.
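For illustration only, the following is a minimal sketch (not taken from the embodiments themselves) of how a playback device might select a first-order pass filter based on its pairing type; the cutoff frequency, pairing labels, and function names are assumptions.

import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """First-order low-pass filter: attenuates content above cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """First-order high-pass filter: attenuates content below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def equalize_for_pairing(samples, pairing_type):
    """Choose a pass filter based on a (hypothetical) pairing type."""
    if pairing_type == "subwoofer":         # keep only low frequencies
        return one_pole_lowpass(samples, cutoff_hz=120)
    if pairing_type == "stereo_satellite":  # hand bass off to a subwoofer
        return one_pole_highpass(samples, cutoff_hz=120)
    return samples                          # unpaired: equalization unchanged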
[0080]
For illustration purposes, when two S5 devices are paired to create a stereo pair, one S5 device may be configured as "left" and the other S5 device may be configured as "right." In one embodiment, the user may decide which is left and which is right. In this configuration, for example, both left and right audio data may be sent to both S5 devices, but the left audio data of the track is played from the S5 device configured as left, and the right audio data of the track is played from the S5 device configured as right. Furthermore, the equalization of each S5 device is altered in order to reduce or eliminate certain constructive or destructive interference. For example, one tweeter of each S5 device may be switched off or nearly muted. In certain embodiments, the crossover frequency for each driver may be further modified from the previous configuration so that two or more drivers do not output exactly the same audio data, which might otherwise cause constructive and/or destructive interference. In particular embodiments, the amplifier gain of a particular speaker driver and/or of the playback device as a whole may be adjusted.
[0081]
In operation, according to a particular embodiment, controller 706 (e.g., controller 142 of FIG. 1, controller 240 of FIG. 2B, or a portable device) is used to initiate operation. Through the user interface, the
controller 706 causes the player 702 to read the audio source if the audio source is on the
network 708 (eg, the Internet or a local area network). Similarly, controller 706 may also cause
the designated device (eg, another network device) to establish a communication session with
player 702 to transmit the requested audio source. In some cases, one or both of the players 702
and 704 can access data indicative of an audio source.
[0082]
In particular embodiments, modules of player 702 may be activated to process data. According to one embodiment, the audio data is divided into right and left soundtracks. One sound track is held locally at one player, and the other sound track is pushed or uploaded to the other device (e.g., via an ad hoc network). If the right and left sound tracks are played simultaneously or nearly simultaneously, a stereo sound effect is obtained.
[0083]
In another embodiment, the data is divided into more sound tracks, for example, for television or theater modes. For example, the track may be divided into a center channel, a right front channel, a left front channel, a right rear channel, a left rear channel, and the like. Thus, one or more sound tracks may be kept local to one player, and other sound tracks may be pushed or uploaded to other devices.
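Purely as an illustration of the channel-splitting idea described in the two preceding paragraphs, the following sketch shows one way interleaved stereo samples and hypothetical multi-channel frames might be separated into per-channel soundtracks; the frame layout and channel names are assumptions, not elements of the embodiment.

def split_stereo(interleaved):
    """Split interleaved stereo samples [L0, R0, L1, R1, ...] into two soundtracks."""
    left = interleaved[0::2]
    right = interleaved[1::2]
    return left, right

def split_multichannel(frames):
    """Split hypothetical 5.1-style frames, ordered as
    (front_left, front_right, center, lfe, rear_left, rear_right), into named soundtracks."""
    names = ["front_left", "front_right", "center", "lfe", "rear_left", "rear_right"]
    return {name: [frame[i] for frame in frames] for i, name in enumerate(names)}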
[0084]
In yet another embodiment, one player may process the data and keep one or more tracks locally while transmitting the remaining data to another player. The receiving player may then process the data, hold one or more tracks locally, and send the remaining data on to yet another player. This process, or the like, may continue until all tracks are held locally by their corresponding player devices.
[0085]
In yet another embodiment, each player may receive and process data and play only the channel
or channels designated for the player.
[0086]
In certain embodiments, when pairing two or more independently clocked playback devices, it is important to maintain good synchronization so that the multi-channel audio content is played as originally intended. According to an embodiment, a message may be sent from one device to another device, which in turn sends back an acknowledgment. On receiving the acknowledgment, the time delay of data transmitted from one device to the other can be measured. The time delay is taken into account when synchronizing the two players and playing the two separate soundtracks. In certain embodiments, when a packet (e.g., a packet according to the SNTP protocol) is sent to a playback device and receiving a response takes more than 15 milliseconds, the information (e.g., clock information) included in the packet is discarded. If sending the packet and receiving the response take 15 milliseconds or less, then the information from the packet is used to coordinate the playback, if necessary.
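As a rough illustration of the round-trip measurement described above (not the actual protocol used by the playback devices), the following sketch exchanges one timing packet, discards the result if the round trip exceeds 15 ms, and otherwise estimates a clock offset; the I/O callables are hypothetical and supplied by the caller.

import time

MAX_ROUND_TRIP = 0.015  # 15 ms: slower samples are discarded, as described above

def sample_clock_offset(send_packet, receive_response):
    """Exchange one timing packet and return (offset, round_trip), or None if too slow.

    send_packet() transmits a request; receive_response() returns the peer's
    clock reading. Both are hypothetical caller-supplied callables.
    """
    t_send = time.monotonic()
    send_packet()
    peer_time = receive_response()      # peer's clock reading
    t_recv = time.monotonic()
    round_trip = t_recv - t_send
    if round_trip > MAX_ROUND_TRIP:
        return None                     # too slow: information is discarded
    # Assume the peer's reading was taken roughly mid-flight.
    offset = peer_time - (t_send + round_trip / 2)
    return offset, round_trip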
[0087]
Further details of synchronizing the operation of two or more independently clocked players can be found in commonly owned U.S. Patent Application Ser. No. 10/816,217, filed Apr. 1, 2004, entitled "System and Method for Synchronizing Operations Among a Plurality of Independently Clocked Digital Data Processing Devices."
[0088]
FIG. 8 shows an example of the configuration of pairing between a plurality of players 802, 804,
806, 808, 810, and 812 in a theater-like environment according to an embodiment.
The player 802 may operate as a front left channel, the player 804 may operate as a center channel, the player 806 may operate as a front right channel, the player 808 may operate as a subwoofer, the player 810 may operate as a rear left channel, and the player 812 may operate as a rear right channel. In this example, players 802, 804, 806, 808, 810, and 812 are wirelessly connected on network 815 to receive and transmit data over the wireless network, and may obtain power from a wall power outlet or another power source (e.g., a battery). The players 802, 804, 806, 808, 810, and 812 may instead be wired if so configured in another embodiment. The controller 814 may be a network-enabled device, including, by way of example, a smartphone, a tablet computer, a laptop computer, a desktop computer, or a television.
[0089]
In one embodiment, a designated player, such as player 804, receives multi-channel audio content from source 816. Source 816 may include audio and/or video content downloaded or streamed from the Internet, a DVD or Blu-ray Disc, or some other source of audio and/or video content. The player 804 splits the content into multiple audio channels and sends each audio channel to the player designated to play it. For example, when a specific audio channel is designated for a front or rear speaker, that content is wirelessly directed from the player 804 to player 802, and so on. Players 802, 804, 806, 808, 810, and 812 play the audio content synchronously to create a multi-channel listening environment. Further, if source 816 provides video content along with the audio content, the audio content is preferably played back in synchronization with the video content.
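For illustration, a minimal sketch of the channel routing just described might look as follows; the player identifiers, channel names, and the send() callable are assumptions rather than elements of the embodiment.

# Hypothetical channel-to-player routing table for the FIG. 8 arrangement.
CHANNEL_MAP = {
    "front_left": "player_802",
    "center": "player_804",
    "front_right": "player_806",
    "subwoofer": "player_808",
    "rear_left": "player_810",
    "rear_right": "player_812",
}

def route_channels(channels, send):
    """Send each decoded channel to the player designated to play it.

    `channels` maps channel names to sample lists (e.g., from split_multichannel);
    `send(player_id, samples)` is a caller-supplied network send (hypothetical).
    """
    for name, samples in channels.items():
        send(CHANNEL_MAP[name], samples)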
[0090]
In another embodiment, each of the players 802, 804, 806, 808, 810, and 812 can itself extract its own one or more channels for playback. That is, all of the audio content, or a portion of it, is sent to each player (e.g., from source 816 or another playback device), and each player extracts its own data for playback.
[0091]
Further, players 802, 804, 806, 808, 810, and 812 may be reconfigured as described above to operate in many different configurations. For example, players 802 and 806 may be paired and operate in stereo mode while the other players are kept in sleep mode or switched off (player 808, if it is configured and operating as a subwoofer, may remain in that particular configuration). In another example, players 802 and 810 are integrated to output left channel audio, while players 806 and 812 are integrated to output right channel audio. In yet another example, some of the players 802, 804, 806, 808, 810, and 812 are integrated into a single player and paired with another playback device, for example, in the next room. In a further example, the players 802, 804, 806, 808, 810, and 812 are grouped, and not paired, if the audio content is music, and are paired if, for example, the content is movie content. These are just some examples of settings. Many other configurations are possible using the teachings described herein.
[0092]
FIG. 9 shows a flowchart or process 900 for grouping a plurality of audio products, playing the
split sound tracks synchronously and simulating a multi-channel listening environment. Process
900 is shown in accordance with a particular embodiment and may be implemented with
modules located in memory 282 of FIG. 2D. To facilitate the description of process 900, a listening environment for stereo sound on the left and right channels is described. Those skilled in the art
will appreciate that the description may apply to other forms of multi-channel listening
environments as well (e.g., 3, 5, 7 channel environments).
[0093]
Generally, there are multiple players controlled by one or more controllers, and these players are located at various locations. For example, there are five players in a house, three of which are located in three separate rooms, while the other two players are located in one larger room. Thus, instead of simply playing synchronized audio from both in a grouped fashion, these two players are candidates to be paired to simulate a stereo listening environment. In another example, there are four players in a large space and an adjacent space, and two integrated pairs of players can be paired to simulate a stereo listening environment, with the two players in one integrated pair being grouped to play one (left) soundtrack, and the other two players in the other integrated pair being grouped to play the other (right) soundtrack.
[0094]
In either case, it is determined at 902 whether two players, or two groups of players, are to be paired. If no players are to be paired, process 900 is not activated. At 902, it is assumed that two players are selected from the group of players controlled by the controller and are to be paired. Process 900 proceeds.
[0095]
At 904, the user may determine which player plays which sound track. Depending on the position of the user or listener relative to the selected players, one player or unit A is selected to play the left sound track, and another player or unit B is selected to play the right sound track. In another embodiment, the players themselves (or the controller) may automatically determine, without input from the user, which unit is configured to play the right channel and which unit plays the left channel.
[0096]
According to one embodiment, the time delay for transmitting data between the two units A and B is measured at 906. This time delay makes it possible to synchronize the sound between the two units, since one of the units receives its processed sound track from the other. The user may continue to operate the controller and, at 910, select a title (e.g., an audio source or an item from a playlist) to play on the two units.
[0097]
Once the title is determined at 912, the data of the title is accessed. Depending on where the data is located, the controller may be configured to cause one of the two units to acquire or stream the data. In one embodiment, the controller or unit A initiates a request to a remote network device that provides or stores the data. Assuming that the authentication procedure is successfully completed, the remote device starts uploading the data to unit A. Similarly, if the data is stored locally at unit A, the data can be accessed locally without making such a request over the network. Once the data is received or accessed by unit A, a processing module in unit A is activated to process the data and, at 914, substantially split the data into two streams of soundtracks. In another embodiment, each unit receives and processes the data, substantially dividing the data into the stream to be reproduced by the respective unit.
[0098]
At 916, one of the streams is uploaded from unit A to unit B via a local network (e.g., an ad hoc network formed by all players controlled by the controller). Once the stream is delivered, each of the two units is configured to play its stream, and at 918 each reproduces the sound of a single soundtrack. When synchronized, the two units create a stereo sound listening environment.
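The following sketch ties steps 914 through 918 together for a stereo pair, reusing the split_stereo helper sketched earlier; the unit methods (queue, send_stream, clock, play_at) are hypothetical names for illustration and are not part of the described system.

def play_stereo_pair(unit_a, unit_b, interleaved, transfer_delay):
    """Illustrative orchestration of steps 914-918 under assumed unit methods."""
    left, right = split_stereo(interleaved)        # step 914 (see earlier sketch)
    unit_a.queue(left)                             # one soundtrack stays local
    unit_a.send_stream(unit_b, right)              # step 916: upload the other to unit B
    start_time = unit_a.clock() + transfer_delay   # allow for the measured delay
    unit_a.play_at(start_time)                     # step 918: both units start together
    unit_b.play_at(start_time)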
[0099]
It should be noted that, in some cases, the delay time may be incorporated into unit A, so that unit A delays its consumption of the stream by the delay time and remains synchronized with unit B. Alternatively, a non-selected player may be used to process the streaming data of the title and be configured to supply the two streams to the pair of players, thereby equalizing the delay time otherwise experienced by unit B.
[0100]
FIGS. 10A-10F show examples of screen shots of a controller for creating stereo pairs according to particular embodiments. The screen shots are from a computing device (e.g., a tablet computer, laptop, or desktop) used as a controller. One skilled in the art will appreciate that FIGS. 10A-10F may be easily modified and used for portable devices with network capabilities, such as an iPhone or iTouch or other smartphone or other network-enabled device. Furthermore, the controller may be present as part of the player or be directly or indirectly connected to the player, and such screen shots may be modified accordingly, for example where the controller itself does not have network functionality and the player provides the network connectivity.
[0101]
FIG. 10A illustrates a graphic interface 1000 that may be displayed on the controller if the user desires to create a stereo pair with two players in the system. It is understood that the system may include more than one player. If a stereo pair is desired, then any two players in the system (one or both of which may be integrated players) may be paired as described with respect to the example of FIGS. 10A-10F. However, if pairing of more than two players is desired, for example to create an environment capable of playing more than two channels of audio data, the graphic interface 1000 may include another option or options. For example, the options may include "Create Movie Surround Sound Pairing," "Create Music Surround Sound Pairing," or "Create Dolby Pro Logic Pairing." Any descriptive language may be used to properly indicate to the user the type of pairing that can be created. Upon selecting an option, a configuration wizard on the controller helps the user to properly configure the system, so that multi-channel discrete audio may be effectively implemented by the system.
[0102]
Referring back to FIG. 10A, the interface 1000 allows the user to initiate a stereo pair with a zone player named "ZPS5-Black." In particular embodiments, the system recognizes that ZPS5-Black is part of a particular zone (e.g., kitchen, family room, bedroom, etc.). The system may allow the user to pair ZPS5-Black only with other players in the same zone, or the system may allow the user to pair ZPS5-Black with players in different zones (e.g., adjacent zones). Pairing players in different zones may be particularly useful if an open space is divided into two or more zones (e.g., the open space includes a kitchen and a family room).
[0103]
In addition, the system may be configured so that pairing players from different zones places them into a different zone that reflects the players in pairing mode (e.g., a single kitchen-family room zone during the paired operating period may be created from the kitchen zone and the family room zone that exist during unpaired operating periods). In such embodiments, the user may be able to switch between zones or dynamically create new zones.
[0104]
In certain embodiments, the screen shot of FIG. 10B can be displayed if other similar players are available to be paired. If the user wants to continue creating a pair, the user can select "OK." Otherwise, the user can select "Cancel." In another embodiment, different players (e.g., non-S5 players) may be paired together. That is, players of different types may be paired, provided the players are designed to be paired. To accommodate differences in player types, the equalization of one or more players may be adjusted accordingly to compensate for the other players, for example based on the number and size of speaker drivers used by each player. In yet another embodiment, a list of players in the system (not shown) may be displayed, and the user selects two or more players to create a stereo pair. The list of players may be automatically determined by the system based on, for example, the home, the particular location of the players in a room, or the configuration with other players in the room.
[0105]
Turning to FIG. 10C, in this example it is assumed that the user wishes to select a zone player named "ZPS5-White" to be paired with "ZPS5-Black" to create a stereo pair. If so, the user can select "OK" to proceed with pairing. If not, the user can select "Cancel." In certain embodiments, ZPS5-White may be in the same zone as ZPS5-Black. In other embodiments, ZPS5-White may be in a different zone than ZPS5-Black.
[0106]
When "OK" is selected in FIG. 10C, a screen shot such as that shown in FIG. You may press the
designated button). In addition, the player's lights may be blinking to further indicate that each of
the players may have left channel pairing. When selecting the left player, if so desired, FIG. 10E
may be displayed to inform the user that the pair has been created along with the name of the
pair. In response, the system plays left channel audio from the user specified player and
automatically plays right channel audio from the other players. FIG. 10F provides an example
screenshot that allows the user to split stereo pairs if so desired.
[0107]
In another embodiment, creation of a stereo pair may be offered as an option for a particular zone or number of zones (e.g., a whole home of zones). For example, there may be an option to create a "stereo pair"; upon selecting it, the configuration wizard may prompt the user to press a flashing mute button (or some other designated button) on whichever speaker, in any zone, part of a zone, or all zones, the user wants to be the left speaker. In one embodiment, the blinking occurs on all speakers of the same type. In another embodiment, the blinking occurs on all speaker types that can be paired. After selecting the left speaker, the wizard screen asks the user to do the same with the right speaker. Preferably, only the speakers that can be paired as the right speaker are flashing, to properly narrow the user's options.
[0108]
Further, in one embodiment, as shown in FIGS. 3A and 3B, screen displays are provided to show the user all players in the system and how they are grouped and named. After the stereo pair is complete, the display of FIG. 3A may change so that the nickname of the stereo pair, as shown in display 1040, is highlighted and displayed in FIG. 3A.
[0109]
A similar graphic interface may be used to create a pair in an environment having more than two channels. For example, in a home theater environment, the system may list two or more separate players so that the user can create a pairing by selecting which player is to operate as front right, center, front left, rear right, and rear left. A subwoofer may also be added to the list and incorporated by the user into the multi-channel pairing.
[0110]
By way of example, as in the description of the various embodiments related to creating a stereo pair described above, the system may blink indicator lights on all players involved, and the configuration wizard may ask the user to select, from all suitable speakers, the "front left," then the "front right," then the "front center," then the "rear left," then the "rear right," and so on. Preferably, only the speakers that can be paired as the next speaker are flashing, narrowing the user's options appropriately.

VII. Smart Line Input Processing Example
[0111]
FIG. 11 shows an example of a configuration for smart line input processing according to an embodiment. System 1100 includes a playback device 1102, a first source 1104, a second source 1106 having a line input connector 1108, and an audio device 1110. The playback device 1102 and either of the first source 1104 and the second source 1106 may be components of a single device (e.g., a single playback device), or the playback device 1102 and the sources may be separate devices that communicate with one another, for example over a wired or wireless network. For purposes of illustration, the playback device 1102 may be, for example, the zone player or playback device shown in FIGS. 1, 2A, 7, and 8. Further, it is understood that the first source 1104 and the second source 1106 may each be a playback device, e.g., as described herein.
[0112]
In particular embodiments, playback device 1102 is idle and not producing sound. Instead, playback device 1102 is configured to receive and play a first audio data stream from first source 1104. The playback device 1102 is further enabled to receive and play a second audio data stream from the second source 1106. The second source 1106 is connected to the audio device 1110 through a line input connector 1108 on the second source 1106. The line input connector 1108 may include a TRS-type connector/socket (a TRS connector is also referred to as an audio jack, jack plug, stereo plug, mini-jack, or mini-stereo connector). Other types of connectors may also be used depending on the application. Additionally, a digital audio connection may be used instead of an analog audio connection. As mentioned above, examples of the audio device 1110 include wireless network devices such as the AirPort Express, commercially sold by Apple Inc.
[0113]
The listener (e.g., a user of system 1100) issues commands to the audio device 1110 to play audio (e.g., through a separate control interface such as an iTunes music controller). The second source 1106 is configured so that, when a signal is detected on the line input connector 1108, it automatically switches the playback device 1102 and transmits the audio from the audio device 1110 to the playback device 1102 via the second audio data stream for playback. The switch to playing back audio from the audio device 1110 may be executed only after the second source 1106 has detected a signal on the line input connector 1108 for a threshold time (e.g., 300 milliseconds or less). It is understood that playback of the second audio data stream may override playback of the first audio data stream if the playback device 1102 was receiving and/or playing audio from the first source 1104. Furthermore, when the playback device 1102 is automatically switched to play audio from the second source 1106, that is, when the second source 1106 receives audio from the audio device 1110 connected to it via the line input connector 1108, the volume of the playback device 1102 is changed to a second volume level. The second volume level is set to increase the dynamic range given to the volume control of the audio device 1110 connected to the second source 1106 via the line input.
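As an illustration only, the auto-switching behavior described above (together with the re-arming behavior described in the following paragraph) might be sketched as the following state machine; the method names on the playback device, the line-in volume value, and the polling model are assumptions, not part of the described embodiment.

import time

SIGNAL_HOLD_SECONDS = 0.300   # signal must persist ~300 ms before switching
IDLE_RESET_SECONDS = 13.0     # re-arm after a no-signal period (see next paragraph)

class LineInAutoSwitch:
    """Minimal sketch of the auto-switch behavior described above.

    `playback_device` is assumed to expose switch_to_line_in() and
    set_volume(level) -- hypothetical method names, not from the patent.
    """

    def __init__(self, playback_device, line_in_volume=0.6):
        self.playback_device = playback_device
        self.line_in_volume = line_in_volume   # assumed "second volume level"
        self.signal_started = None
        self.silence_started = None
        self.armed = True

    def on_line_in_sample(self, signal_present, now=None):
        now = time.monotonic() if now is None else now
        if signal_present:
            self.silence_started = None
            if self.signal_started is None:
                self.signal_started = now
            if self.armed and now - self.signal_started >= SIGNAL_HOLD_SECONDS:
                # Signal persisted long enough: switch playback to line-in and
                # raise the volume so the source's own volume control has
                # useful dynamic range.
                self.playback_device.switch_to_line_in()
                self.playback_device.set_volume(self.line_in_volume)
                self.armed = False
        else:
            self.signal_started = None
            if self.silence_started is None:
                self.silence_started = now
            elif now - self.silence_started >= IDLE_RESET_SECONDS:
                self.armed = True   # ready to auto-switch again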
[0114]
In another scenario, playback device 1102 is configured to receive and play audio from the second source 1106. During playback, the listener issues a command to the playback device 1102 via the controller (e.g., as shown in FIG. 2D) to instead play the first audio data stream from the first source 1104. Upon receiving the command, the playback device 1102 switches to playing the first audio data stream. The playback device 1102 then instructs the second source 1106 to stop sending the audio of the audio device 1110 to the playback device 1102. The second source 1106 stops sending audio to the playback device 1102 and waits until no signal has been detected on its line input connector 1108 for a period of time. If no signal is detected on the line input connector 1108 for a certain period of time (e.g., 13 seconds or less) and the second source 1106 then again detects a signal on its line input connector 1108, the second source 1106 is prepared to automatically switch the playback device 1102 and play the audio from the audio device 1110. Furthermore, when the playback device 1102 switches to playing the first audio data stream, the volume of the playback device 1102 returns to a safe volume level so that the audio is not played back to the listener at a high level. The level may be programmed to taste, but an example of a safe level is less than 100 dB.

VIII. Conclusion
[0115]
The components, elements, and/or functions of the aforementioned systems may be implemented alone or in combination, in various forms, e.g., in hardware, in firmware, and/or as a set of software instructions. Certain embodiments may be provided as instructions that reside on a computer-readable medium, such as a memory, hard disk, CD-ROM, DVD, and/or EPROM, to be executed on a processing device such as a controller and/or playback device.
[0116]
Various inventions have been described in sufficient detail and with some particularity. The embodiments of the present disclosure are presented for the purpose of illustration only, and many changes in the arrangement and combination of parts may be made without departing from the spirit and scope of the invention as claimed. Although the embodiments described herein may appear to include some limitations with regard to the presentation of information units, in terms of format or arrangement, the teachings are applicable far beyond such embodiments, as can be understood by those skilled in the art. Accordingly, the scope of the present invention is defined by the appended claims rather than by the embodiments described above.