Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2010124133
[Object] To allow a user to arbitrarily determine the sound emission points from which voices that differ by site or speaker are emitted. [Solution] A voice distribution device accepts input of an insertion unit selected by the user from among a plurality of insertion units, together with the communication address of the conference terminal that is the voice data transmission source specified by the user, or voice feature information specified by the user, and stores these in memory in association with each other. The voice distribution device then acquires, from voice data transmitted by the conference terminal that is the voice data transmission source, the communication address of that conference terminal or the voice feature information represented by the voice data, specifies the identifier of the insertion unit stored in the memory in association with the acquired communication address or feature information, and supplies the transmitted voice data to the sound emitting device inserted into the insertion unit identified by the specified identifier. [Selected figure] Figure 2
Voice relay device
[0001]
The present invention relates to a technology for relaying voice data.
[0002]
There is known a technique of selecting one of a plurality of speakers and outputting sound from
the selected speaker.
For example, Patent Document 1 discloses a technique for detecting the movement of a person and outputting sound from a speaker installed at the movement destination. Patent Document 2 discloses a technique for switching the output destination of an audio signal according to the operation of an output selection switch. Patent Document 3 discloses a technique in which the direction a person sitting in a chair faces while listening is detected, and an audio signal is supplied to the speaker corresponding to the detected direction among a plurality of speakers arranged in a dispersed manner. Patent Document 4 discloses a technique for outputting a speaker's voice, in a video conference system, from a loudspeaker near the position where that speaker is displayed. [Patent Document 1] Japanese Patent Application Laid-Open No. 58-147297 [Patent Document 2] Japanese Patent Application Laid-Open No. 10-336786 [Patent Document 3] Japanese Utility Model Application Laid-Open No. 04-027699 [Patent Document 4] Japanese Patent Application Laid-Open No. 07-58859
[0003]
Incidentally, in a conference system, voice is transmitted over a communication line, so its sound quality deteriorates. It is therefore difficult to identify a speaker by voice quality, and hard to tell who is speaking. Furthermore, in a conference system connecting multiple sites, the voices of speakers at different sites are emitted from the same loudspeaker, so it is difficult to tell which speaker at which site is speaking. It would therefore be convenient if the user could decide where the speech of each speaker at another site is emitted. The present invention has been made against this background, and its object is to allow a user to arbitrarily determine the sound emission points from which voices that differ by site or speaker are emitted.
[0004]
The present invention provides a voice relay device comprising: input receiving means for receiving input of a sound emission point selected by the user from among a plurality of sound emission points from which sound is emitted, together with transmission source identification information identifying a voice data transmission source specified by the user or voice feature information specified by the user; storage means for storing, in association with each other, the sound emission point for which the input receiving means received the input and the transmission source identification information or voice feature information; acquisition means for acquiring, from voice data transmitted from a voice data transmission source, transmission source identification information identifying that transmission source or voice feature information represented by the voice data; sound emission point specifying means for specifying the sound emission point stored in the storage means in association with the transmission source identification information or feature information acquired by the acquisition means; and data supply means for supplying the transmitted voice data to the sound emitting means arranged at the sound emission point specified by the sound emission point specifying means.
[0005]
In a preferred aspect of the present invention, the voice relay device includes a plurality of insertion ports arranged at the respective sound emission points, into and from which a sound emitting device having the sound emitting means can be inserted and removed, and detection means for detecting the insertion port into which the sound emitting device is inserted. The input receiving means receives input of the sound emission point at which the insertion port detected by the detection means is arranged, together with the transmission source identification information identifying the voice data transmission source specified by the user or the voice feature information specified by the user. The data supply means supplies the transmitted voice data to the sound emitting device inserted into the insertion port arranged at the sound emission point specified by the sound emission point specifying means, and the sound emitting device, when supplied with the voice data by the data supply means, emits the sound corresponding to the voice data from its sound emitting means.
[0006]
In a preferred aspect of the present invention, the input receiving means receives input of the sound emission point at which the insertion port detected by the detection means is arranged, together with volume information representing the volume at which sound is to be emitted from that sound emission point. The storage means stores the sound emission point for which the input receiving means received the input and the volume information in association with each other. The voice relay device further includes volume information extraction means for extracting the volume information stored in the storage means in association with the sound emission point specified by the sound emission point specifying means. The data supply means supplies, to the sound emitting device inserted into the insertion port arranged at the sound emission point specified by the sound emission point specifying means, the transmitted voice data and the volume information extracted by the volume information extraction means, and the sound emitting device, when supplied with the voice data and the volume information by the data supply means, emits the sound corresponding to the voice data from its sound emitting means at the volume indicated by the volume information.
[0007]
In a preferred aspect of the present invention, the sound emitting device includes display means for displaying an image. The input receiving means receives input of the sound emission point at which the insertion port detected by the detection means is arranged, together with image data representing an image to be displayed by the sound emitting device inserted into that insertion port. The storage means stores the sound emission point for which the input receiving means received the input and the image data in association with each other. The voice relay device further includes image data extraction means for extracting the image data stored in the storage means in association with the sound emission point specified by the sound emission point specifying means. The data supply means supplies, to the sound emitting device inserted into the insertion port arranged at the sound emission point specified by the sound emission point specifying means, the transmitted voice data and the image data extracted by the image data extraction means, and the sound emitting device, when supplied with the image data by the data supply means, displays the image corresponding to the image data on its display means.
[0008]
In a preferred aspect of the present invention, the voice relay device includes a plurality of the sound emitting means arranged at the respective sound emission points. The data supply means supplies the transmitted voice data to the sound emitting means arranged at the sound emission point specified by the sound emission point specifying means, and the sound emitting means, when supplied with the voice data by the data supply means, emits the sound corresponding to the voice data.
[0009]
According to the present invention, the user can arbitrarily determine the sound emission points from which voices that differ by site or speaker are emitted.
[0010]
[Configuration] (Configuration of Conference System) FIG. 1 is a diagram showing the configuration of the conference system 1 according to the present embodiment.
As shown in the figure, the conference system 1 includes a plurality of conference terminals 5.
The conference terminals 5 are installed at locations different from one another and are connected via a network N such as the Internet.
These conference terminals 5 can mutually transmit and receive data representing the video and audio of their users.
In this way, users at different locations can hold a remote conference while exchanging video and audio with one another via the conference terminals 5.
FIG. 2 is a view showing the appearance of the conference terminal 5.
As shown in the figure, the conference terminal 5 includes an information processing device 10, a plurality of sound emitting devices 20, and a voice distribution device 30.
[0011]
(Configuration of Information Processing Device) Next, the configuration of the information processing device 10 will be described. The information processing device 10 is, for example, a personal computer; its central processing unit (CPU) executes a program stored in a storage unit to carry out processing for exchanging the user's video and audio with conference terminals 5 installed at other sites. For example, the information processing device 10 picks up the user's voice and transmits voice data representing the picked-up voice to another conference terminal 5. When voice data is transmitted from another conference terminal 5, the information processing device 10 supplies that voice data to the voice distribution device 30. As shown in the figure, the information processing device 10 also includes an operation unit 11 and a display unit 12. The operation unit 11 is, for example, a keyboard, and receives the user's operations. The display unit 12 is, for example, a liquid crystal display, and displays images according to image data.
[0012]
(Configuration of Sound Emitting Device) Next, the configuration of the sound emitting device 20 will be described. FIG. 3 shows the appearance of the sound emitting device 20. The sound emitting device 20 has a thin plate-like shape; its size is, for example, 25 mm wide × 75 mm long. As shown in the figure, the sound emitting device 20 includes a sound emitting unit 21, a contact terminal 22, and a display unit 23. The sound emitting unit 21 is sound emitting means including an electrostatic flat speaker, a D/A converter, and the like. It is provided on the back side of the display unit 23 and uses the display unit 23 as a diaphragm to emit sound according to voice data. The contact terminal 22 is a terminal that comes into contact with the contact terminal on the voice distribution device 30 side. The display unit 23 is, for example, electronic paper, and displays images according to image data.
[0013]
(Configuration of Voice Distribution Device) Next, the configuration of the voice distribution device 30 will be described. The voice distribution device 30 functions as a voice relay device, either alone or in cooperation with the sound emitting devices 20. First, the mounting structure of the voice distribution device 30 will be described. As shown in FIG. 2, the voice distribution device 30 is provided with a mounting portion 301 having a substantially U-shaped cross section. FIG. 4 is a side view of the voice distribution device 30 as viewed in the direction of arrow V in FIG. 2. As shown in the figure, the mounting portion 301 is arranged so as to sandwich the upper edge of the display unit 12 of the information processing device 10, whereby the voice distribution device 30 is fixed to the information processing device 10. As shown in FIG. 2, the voice distribution device 30 is also provided with a communication cable 302. The communication cable 302 is a USB (Universal Serial Bus) cable, and is connected to the information processing device 10 by inserting the connector 303 provided at its tip into the connection port 13 of the information processing device 10. The voice distribution device 30 and the information processing device 10 can thus communicate in accordance with the USB standard.
[0014]
Next, the internal configuration of the voice distribution device 30 will be described. FIG. 5 is a block diagram showing the configuration of the voice distribution device 30. As shown in the figure, the voice distribution device 30 includes a control unit 31, a communication unit 32, and a plurality of insertion units 33. The control unit 31 has a memory and controls each unit of the voice distribution device 30. The memory is storage means for storing data, and stores a management table T for managing the insertion units 33 and the like. The communication unit 32 establishes communication with the information processing device 10 connected via the communication cable 302 and transmits and receives data. The insertion unit 33 is an insertion port into which a sound emitting device 20 is inserted, as shown in FIG. 7. The insertion units 33 are arranged in a line, as shown in FIG. 2. The point at which each insertion unit 33 is arranged is a sound emission point from which sound is emitted. Each insertion unit 33 is provided with a locking portion (not shown); when a sound emitting device 20 is inserted into the insertion unit 33, a click is produced by the action of the locking portion and the sound emitting device 20 is fixed inside the insertion unit 33. When the fixed sound emitting device 20 is pushed in further, a click is produced again and the locking portion releases it, so that the sound emitting device 20 can be taken out of the insertion unit 33. The sound emitting device 20 is thus freely removable from the insertion unit 33.
[0015]
As shown in FIG. 5, each insertion unit 33 includes a storage unit 35, a contact terminal 36, and a detection unit 37, which are connected to the control unit 31 via the bus B. The storage unit 35 is, for example, a memory, and stores the identifier assigned to its own insertion unit 33. The contact terminal 36 is a terminal that comes into contact with the contact terminal 22 of a sound emitting device 20 when the device is inserted into that insertion unit 33. The detection unit 37 detects that a sound emitting device 20 is connected when one is inserted into its insertion unit 33.
[0016]
Here, the management table T stored in the memory of the control unit 31 will be described. FIG. 6 shows an example of the management table T. As shown in the figure, the management table T associates the above-mentioned "identifier of insertion unit" with a "connection state". The connection state is information indicating whether a sound emitting device 20 is inserted into the insertion unit 33. In the management table T of FIG. 6, the connection state "OFF" is associated with every insertion unit identifier, which means that no sound emitting device 20 is inserted into any of the insertion units 33.
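As an informal illustration (not part of the original specification), the management table T can be pictured as a mapping from insertion-unit identifiers to connection states. The following Python sketch uses hypothetical names; the identifiers 1 to 10 stand for the insertion units 33a to 33j.

    # Sketch of the management table T: each insertion-unit identifier
    # maps to a connection state "ON"/"OFF". All names are illustrative.
    management_table = {identifier: "OFF" for identifier in range(1, 11)}

    def is_device_inserted(identifier: int) -> bool:
        """True when a sound emitting device 20 occupies this insertion unit."""
        return management_table.get(identifier) == "ON"

    # Initially no device is inserted, matching the state shown in FIG. 6:
    assert not any(is_device_inserted(i) for i in management_table)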
[0017]
[Operation] Next, the operation of the conference system 1 will be described. (Setting Process) First, the setting process will be described, in which the user sets the sound emission points for the voices of the speakers at other sites. The user inserts one sound emitting device 20 for each speaker at the other sites into any of the insertion units 33 of the voice distribution device 30; that is, the user selects sound emission points from among the plurality of sound emission points from which sound can be emitted. Here, it is assumed that the user U0 at site A in FIG. 1 inserts the sound emitting devices 20p, 20q, and 20r as shown in FIG. 7. In the following description, when the insertion units 33 of the voice distribution device 30 are distinguished, they are referred to as "insertion unit 33a" to "insertion unit 33j" in order from the left side of the drawing.
[0018]
When the sound emitting device 20p is inserted, its contact terminal 22 contacts the contact terminal 36 of the insertion unit 33b. The detection unit 37 of the insertion unit 33b thus detects that the sound emitting device 20p is connected, and transmits the identifier "2" stored in the storage unit 35 together with a detection signal to the control unit 31. Similarly, the detection unit 37 of the insertion unit 33d detects that the sound emitting device 20q is connected, and transmits the identifier "4" stored in the storage unit 35 together with a detection signal to the control unit 31. The detection unit 37 of the insertion unit 33j likewise detects that the sound emitting device 20r is connected, and transmits the identifier "10" stored in the storage unit 35 together with a detection signal to the control unit 31. Based on each transmitted identifier and detection signal, the control unit 31 detects the insertion units 33 into which sound emitting devices 20 have been inserted; that is, the control unit 31 and these detection units 37 cooperate to function as detection means. The control unit 31 then updates the connection states in the management table T stored in the memory based on the detection results. In this example, in the management table T shown in FIG. 6, the connection states associated with the insertion unit identifiers "2", "4", and "10" are changed from "OFF" to "ON".
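A minimal sketch of this detection step, continuing the assumed table layout above: each detection unit reports its stored identifier together with a detection signal, and the control unit flips the corresponding connection state.

    # Sketch of the detection step of paragraph [0018]. Names illustrative.
    management_table = {identifier: "OFF" for identifier in range(1, 11)}

    def on_detection_signal(identifier: int) -> None:
        """Detection unit 37 of this insertion unit reported a connected device."""
        management_table[identifier] = "ON"

    # Devices 20p, 20q, 20r inserted into units 33b, 33d, 33j (identifiers 2, 4, 10):
    for unit_id in (2, 4, 10):
        on_detection_signal(unit_id)

    assert [i for i, s in management_table.items() if s == "ON"] == [2, 4, 10]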
[0019]
Next, the user U0 operates the operation unit 11 of the information processing device 10 and, for each insertion unit 33 into which a sound emitting device 20 has been inserted, registers information that determines the sound to be emitted from that sound emitting device 20. When there are several speakers at one site, the user U0 registers both the communication address of the conference terminal 5 that is the transmission source of each speaker's voice data and the feature information of that speaker's voice. When there is only one speaker at a site, only the communication address of the transmission source of that voice data is registered. The communication address is transmission source identification information that identifies the conference terminal 5 that is the transmission source of the voice data. The voice feature information is information representing the voice quality of the speaker; for example, frequency characteristics are used as the feature information. In other words, the user designates transmission source identification information identifying the voice data transmission source, or feature information of the voice. The reason that only the communication address is registered when there is a single speaker at a site is that the speaker can then be identified by the communication address of the source conference terminal 5 alone. The user U0 also operates the operation unit 11 of the information processing device 10 and, for each insertion unit 33 into which a sound emitting device 20 has been inserted, registers character data representing the character image to be displayed on that sound emitting device 20.
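The registration rule just described, under which feature information is needed only when one communication address covers several speakers, can be sketched as follows; the record layout and function name are assumptions, not the patent's.

    # Sketch of the registration rule of paragraph [0019].
    def register(unit_id, address, label, speakers_at_site, feature=None):
        if speakers_at_site > 1 and feature is None:
            # One address shared by several speakers cannot identify a voice
            # by itself, so per-speaker feature information is required.
            raise ValueError("feature info required for a multi-speaker site")
        return {"unit_id": unit_id, "address": address,
                "feature": feature, "label": label}

    # Site C has a single speaker, so the address alone suffices:
    entry = register(10, "192.168.0.3", "Osaka Kato", speakers_at_site=1)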
[0020]
In this example, it is assumed that the following contents are registered for the insertion units 33b, 33d, and 33j. For the insertion unit 33b, the communication address "192.168.0.2" of the conference terminal 5 installed at site B in FIG. 1, the feature information F1 of the voice of the speaker U1, and the character data "Tokyo Sato" representing site B, where the speaker U1 is located, and the name of the speaker U1 are registered. For the insertion unit 33d, the communication address "192.168.0.2" of the conference terminal 5 installed at site B, the feature information F2 of the voice of the speaker U2, and the character data "Tokyo Suzuki" representing site B, where the speaker U2 is located, and the name of the speaker U2 are registered. For the insertion unit 33j, the communication address "192.168.0.3" of the conference terminal 5 installed at site C and the character data "Osaka Kato" representing site C, where the speaker U3 is located, and the name of the speaker U3 are registered.
[0021]
In this case, the information processing device 10 creates allocation information E by associating the identifier of each insertion unit 33 with the communication address or voice feature information registered for that insertion unit 33 and with the character data. The information processing device 10 then transmits the created allocation information E to the voice distribution device 30. The control unit 31 of the voice distribution device 30 receives, via the communication unit 32, the allocation information E transmitted from the information processing device 10 and stores it in the memory. That is, the control unit 31 is input receiving means that receives input of the sound emission point at which the insertion port holding a sound emitting device is arranged, together with the transmission source identification information identifying the voice data transmission source designated by the user or the voice feature information designated by the user, and image data representing the image to be displayed by the sound emitting device inserted into that insertion port. The memory then stores, in association with each other, the sound emission point for which the control unit 31 received the input, the transmission source identification information or voice feature information, and the image data. FIG. 8 shows the allocation information E stored in the memory at this point. As shown in the figure, the allocation information E associates the identifier "2" of the insertion unit 33b with the communication address "192.168.0.2", the feature information F1, and the character data "Tokyo Sato". It likewise associates the identifier "4" of the insertion unit 33d with the communication address "192.168.0.2", the feature information F2, and the character data "Tokyo Suzuki", and the identifier "10" of the insertion unit 33j with the communication address "192.168.0.3" and the character data "Osaka Kato".
[0022]
(Assignment Process) Next, the assignment process will be described, in which the voices of the speakers at other sites are assigned to the sound emission points that have been set. First, suppose that in FIG. 1 the voice data D1 representing the voice of the speaker U1 at site B is transmitted from the conference terminal 5 installed at site B to the conference terminal 5 installed at site A. The communication address "192.168.0.2" of the conference terminal 5 installed at site B is attached to this voice data D1. When the voice data D1 arrives, the information processing device 10 of the conference terminal 5 installed at site A passes the voice data D1 to the voice distribution device 30.
[0023]
FIG. 9 is a sequence diagram showing the assignment process performed by the voice distribution device 30 and the sound emitting device 20 at this time. First, the control unit 31 of the voice distribution device 30 receives, via the communication unit 32, the voice data D1 transmitted from the information processing device 10 (step S11). The control unit 31 then acquires, from the received voice data D1, the communication address of the source conference terminal 5 and the feature information of the voice (step S12). That is, the control unit 31 is acquisition means that acquires, from the voice data transmitted from a voice data transmission source, transmission source identification information identifying that transmission source or voice feature information represented by the voice data. In this example, the communication address "192.168.0.2" attached to the voice data D1 and the voice feature information F1 represented by the voice data D1 are acquired. The control unit 31 then specifies, in the allocation information E stored in the memory, the identifier of the insertion unit associated with the acquired communication address or voice feature information (step S13). That is, the control unit 31 is sound emission point specifying means that specifies the sound emission point stored in the storage means in association with the acquired transmission source identification information or voice feature information. In this example, the identifier "2" associated with the communication address "192.168.0.2" and the feature information F1 is specified in the allocation information E shown in FIG. 8.
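Steps S12 and S13 amount to a lookup in the allocation information. The sketch below repeats the record layout assumed earlier and uses one plausible matching rule (address first, then feature information when the entry registered any); the rule is our reading, not stated verbatim in the patent.

    # Sketch of steps S12-S13: resolve incoming voice data to an
    # insertion-unit identifier.
    allocation_info = [
        {"unit_id": 2,  "address": "192.168.0.2", "feature": "F1", "label": "Tokyo Sato"},
        {"unit_id": 4,  "address": "192.168.0.2", "feature": "F2", "label": "Tokyo Suzuki"},
        {"unit_id": 10, "address": "192.168.0.3", "feature": None, "label": "Osaka Kato"},
    ]

    def identify_unit(address: str, feature=None):
        for entry in allocation_info:
            # An entry without feature info matches on address alone;
            # otherwise the feature information must match as well.
            if entry["address"] == address and entry["feature"] in (None, feature):
                return entry["unit_id"]
        return None

    assert identify_unit("192.168.0.2", "F1") == 2   # voice data D1
    assert identify_unit("192.168.0.3") == 10        # voice data D3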
[0024]
The control unit 31 then extracts, from the allocation information E stored in the memory, the character data associated with the specified insertion unit identifier (step S14). That is, the control unit 31 is image data extraction means that extracts the image data stored in the storage means in association with the specified sound emission point. In this example, the character data "Tokyo Sato" associated with the insertion unit identifier "2" is extracted from the allocation information E shown in FIG. 8. The control unit 31 then transmits the voice data D1 received in step S11 and the extracted character data to the sound emitting device 20 inserted into the insertion unit 33 to which the identifier specified in step S13 is assigned (step S15). That is, the control unit 31 is data supply means that supplies the transmitted voice data to the sound emitting means arranged at the specified sound emission point. In this example, the voice data D1 and the character data "Tokyo Sato" are transmitted, via the contact terminal 36 of the insertion unit 33b, to the sound emitting device 20p inserted into the insertion unit 33b. The sound emitting device 20p receives these data via its contact terminal 22 (step S16). The sound emitting device 20p then emits the sound corresponding to the received voice data D1 from the sound emitting unit 21 and displays the character image corresponding to the received character data on the display unit 23 (step S17). As a result, the sound emitting device 20p inserted into the insertion unit 33b emits the voice of the speaker U1 at site B and displays the character image "Tokyo Sato", representing site B, where the speaker U1 is located, and the name of the speaker U1.
[0025]
Further, suppose that in FIG. 1 the voice data D2 representing the voice of the speaker U2 at site B is transmitted from the conference terminal 5 installed at site B to the conference terminal 5 installed at site A. In this case, in step S12 described above, the communication address "192.168.0.2" attached to the voice data D2 and the voice feature information F2 represented by the voice data D2 are acquired. In step S13, the identifier "4" associated with the communication address "192.168.0.2" and the voice feature information F2 is specified in the allocation information E shown in FIG. 8. In step S14, the character data "Tokyo Suzuki" associated with the insertion unit identifier "4" is extracted from the allocation information E shown in FIG. 8. In step S15, the voice data D2 and the character data "Tokyo Suzuki" are transmitted, via the contact terminal 36 of the insertion unit 33d, to the sound emitting device 20q inserted into the insertion unit 33d. In step S17, the sound emitting unit 21 of the sound emitting device 20q emits the sound corresponding to the voice data D2, and the character image corresponding to the character data is displayed on the display unit 23 of the sound emitting device 20q. As a result, the sound emitting device 20q inserted into the insertion unit 33d emits the voice of the speaker U2 at site B and displays the character image "Tokyo Suzuki", representing site B, where the speaker U2 is located, and the name of the speaker U2.
[0026]
Similarly, suppose that in FIG. 1 the voice data D3 representing the voice of the speaker U3 at site C is transmitted from the conference terminal 5 installed at site C to the conference terminal 5 installed at site A. In this case, in step S12 described above, the communication address "192.168.0.3" attached to the voice data D3 is acquired. In step S13, the identifier "10" associated with the communication address "192.168.0.3" is specified in the allocation information E shown in FIG. 8. In step S14, the character data "Osaka Kato" associated with the insertion unit identifier "10" is extracted from the allocation information E shown in FIG. 8. In step S15, the voice data D3 and the character data "Osaka Kato" are transmitted, via the contact terminal 36 of the insertion unit 33j, to the sound emitting device 20r inserted into the insertion unit 33j. In step S17, the sound emitting unit 21 of the sound emitting device 20r emits the sound corresponding to the voice data D3, and the character image corresponding to the character data is displayed on the display unit 23 of the sound emitting device 20r. As a result, the sound emitting device 20r inserted into the insertion unit 33j emits the voice of the speaker U3 at site C and displays the character image "Osaka Kato", representing site C, where the speaker U3 is located, and the name of the speaker U3.
[0027]
FIG. 10 shows how sound is emitted by the voice distribution device 30. As described above, in this example the voice of the speaker U1 at site B is emitted from the sound emitting device 20p inserted into the insertion unit 33b, the voice of the speaker U2 at site B from the sound emitting device 20q inserted into the insertion unit 33d, and the voice of the speaker U3 at site C from the sound emitting device 20r inserted into the insertion unit 33j. The voices of the speakers U1 to U3 are thus emitted from different sound emission points. Furthermore, each sound emitting device 20 displays information about the speaker whose voice it emits, so the user U0 can easily identify who is speaking when a speaker at another site speaks. In addition, the user U0 may, for example, have the voice of a speaker at a distant site emitted from a sound emitting device 20 inserted into an insertion unit 33 far from the user's own position, and the voice of a speaker at a nearby site emitted from a sound emitting device 20 inserted into an insertion unit 33 closer to the user's position. The user U0 can then grasp intuitively whether the current speaker is at a distant site or a nearby one. As described above, according to the present embodiment, the user can arbitrarily determine the sound emission points from which voices that differ by site or speaker are emitted.
[0028]
[Modifications] The above is the description of the embodiment, but it can be modified as follows, and the following modifications may be combined as appropriate. (Modification 1) In the embodiment described above, the volume of the sound emitted from each sound emitting device 20 may be adjusted for each insertion unit 33 into which a sound emitting device 20 is inserted. In this case, the user operates the operation unit 11 of the information processing device 10 and sets, for each insertion unit 33 into which a sound emitting device 20 has been inserted, the volume at which sound is emitted from that sound emitting device 20. When creating the allocation information E described above, the information processing device 10 associates the identifier of each insertion unit 33 with volume information indicating the volume set for that insertion unit 33. The control unit 31 of the voice distribution device 30 receives the allocation information E including the volume information from the information processing device 10, and the memory stores the insertion unit identifiers in association with the volume information. When an insertion unit identifier is specified in step S13 described above, the control unit 31 extracts the volume information associated with that identifier from the allocation information E stored in the memory. That is, the control unit 31 is volume information extraction means that extracts the volume information stored in the storage means in association with the specified sound emission point. In step S15 described above, the control unit 31 transmits the extracted volume information in addition to the voice data and the character data. The sound emitting device 20 then, in step S17 described above, emits the sound corresponding to the received voice data from the sound emitting unit 21 at the volume represented by the volume information. In this way, for example, only the volume of a specific speaker can be increased.
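A sketch of this modification under the record layout assumed earlier: a volume field travels in each allocation entry and is extracted after step S13. The 0.0-1.0 volume scale is an assumption.

    # Sketch of Modification 1: per-insertion-unit volume information.
    allocation_info = [
        {"unit_id": 2, "address": "192.168.0.2", "feature": "F1",
         "label": "Tokyo Sato", "volume": 0.8},
    ]

    def extract_volume(unit_id: int, default: float = 1.0) -> float:
        for entry in allocation_info:
            if entry["unit_id"] == unit_id:
                return entry.get("volume", default)
        return default

    print(extract_volume(2))  # 0.8: the level set for speaker U1's unit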
[0029]
(Modification 2) In the embodiment described above, when a sound emitting device 20 is pulled out of its insertion unit 33, the sound that would have been emitted from that device may be emitted from another sound emission point. In this case, before transmitting the voice data in step S15 described above, the control unit 31 determines whether the connection state associated in the management table T stored in the memory with the insertion unit identifier specified in step S13 is "ON". When the connection state is "ON", the control unit 31 performs the same processing as in the embodiment described above; when it is "OFF", the control unit 31 sends the data received in step S11 to another sound emission point. The other sound emission point may be sound emitting means such as a speaker provided in the information processing device 10, or another sound emitting device 20 from which the voice of another speaker at the same site is emitted.
[0030]
(Modification 3) In the embodiment described above, each sound emitting device 20 may be provided with a storage unit that stores an identifier assigned to that sound emitting device 20. In this case, the user operates the operation unit 11 of the information processing device 10 and registers, for each sound emitting device 20 inserted into an insertion unit 33, the information that determines the sound to be emitted from that device. The information processing device 10 then creates the allocation information by associating the identifiers of the sound emitting devices 20 with the communication address or voice feature information registered for each device and with the character data. In step S13 described above, the identifier of the sound emitting device associated with the acquired communication address or voice feature information is specified in the allocation information stored in the memory. In step S14, the character data associated with the specified sound emitting device identifier is extracted from the allocation information stored in the memory. In step S15, the voice data and character data described above are transmitted to the sound emitting device 20 to which the specified identifier is assigned.
[0031]
(Modification 4) In the embodiment described above, the sound emitting devices 20 are freely removable from the voice distribution device 30, but a plurality of fixed sound emitting units may instead be provided, one for each insertion unit 33 of the voice distribution device 30. For example, an array speaker may be used as the plurality of sound emitting units. That is, the voice distribution device 30 may include a plurality of sound emitting means arranged at the respective sound emission points. In this case, an insertion member that is inserted into an insertion unit 33 is used instead of the sound emitting device 20 described above, and the user inserts one insertion member for each speaker at the other sites into any of the insertion units 33 of the voice distribution device 30. In step S15 described above, the control unit 31 of the voice distribution device 30 transmits the voice data and character data described above to the sound emitting unit of the insertion unit 33 to which the identifier specified in step S13 is assigned. A switch may also be provided for each sound emitting unit of the voice distribution device 30, with sound emitted from a sound emitting unit when its switch is pressed.
[0032]
(Modification 5) In the embodiment described above, the sound emitting unit 21 was described as including an electrostatic flat speaker, but the sound emitting unit 21 may include a speaker of another shape or type; for example, an electromagnetic speaker. The shape of the sound emitting device 20 is also not limited to the example shown in FIG. 3. Further, in the embodiment described above, the insertion units 33 are arranged in a line, but their arrangement is not limited to this: the insertion units 33 may, for example, be arranged in a matrix, or along the surface of a sphere imitating a globe.
[0033]
(Modification 6) In the embodiment described above, the operation unit 11 of the information processing device 10 receives the user's operations, but the voice distribution device 30 may instead include an operation unit and receive the user's operations itself. In this case, the information registered by the user is input to the control unit 31 through operation of that operation unit, and the control unit 31 creates the allocation information E described above based on the input information.
[0034]
(Modification 7) The operations of the information processing device 10, the sound emitting device 20, and the voice distribution device 30 described above may be realized by one or more hardware components, or by a CPU executing one or more programs. Each program executed by the CPU can be provided stored on a computer-readable recording medium, such as a magnetic recording medium (a magnetic tape or magnetic disk), an optical recording medium (an optical disc), a magneto-optical recording medium, or a semiconductor memory. The program may also be downloaded via a network such as the Internet.
[0035]
FIG. 1 is a diagram showing the configuration of the conference system according to the present embodiment.
FIG. 2 is a view showing the appearance of a conference terminal of the conference system.
FIG. 3 is a view showing the appearance of a sound emitting device of the conference terminal.
FIG. 4 is a side view of the voice distribution device of the conference terminal.
FIG. 5 is a block diagram showing the configuration of the voice distribution device.
FIG. 6 is a diagram showing an example of the management table stored by the voice distribution device.
FIG. 7 is a diagram showing how the sound emitting devices are inserted.
FIG. 8 is a diagram showing the allocation information stored in the voice distribution device.
FIG. 9 is a sequence diagram showing the assignment process of the voice distribution device and the sound emitting devices.
FIG. 10 is a diagram showing how sound is emitted by the voice distribution device.
Explanation of Reference Numerals
[0036]
DESCRIPTION OF SYMBOLS: 1 ... conference system; 5 ... conference terminal; 10 ... information processing device; 20 ... sound emitting device; 21 ... sound emitting unit; 22 ... contact terminal; 23 ... display unit; 30 ... voice distribution device; 31 ... control unit; 32 ... communication unit; 33 ... insertion unit; 35 ... storage unit; 36 ... contact terminal; 37 ... detection unit.