Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JPH05150792
[0001]
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an individual-tracking sound generator that is used in electronic conference systems, computer interfaces, and the like, and that can, in particular, simultaneously provide the sound each individual requires in a situation where a plurality of people move freely.
[0002]
2. Description of the Related Art FIG. 5 is a diagram showing a sound generation apparatus in a conventional electronic conference system, in which 100 is a conference room, 101 is a conference table, 102 is a chair, 103 is an acoustic control device, 104 is a keyboard, and 105 is a switching unit; M1, M2, M3... indicate directional microphones, and S1, S2, S3... indicate directional speakers.
[0003]
Conventionally, electronic conference systems using various communication devices, television systems, and the like have been known.
In such an electronic conference system, for example, a plurality of people gather in two
conference rooms at remote locations to hold a conference. In this case, the information on the
conference is transmitted to all members in each conference room by video and audio using television or the like.
10-05-2019
[0004]
However, apart from such video and audio information, a sound generation apparatus that provides different audio information to different persons has been known. This apparatus is used, for example, when two specific members of a conference talk to each other, or when predetermined acoustic information (a message, music, etc.) is provided only to a specific person.
[0005]
As such a sound generation apparatus, there is, for example, the apparatus shown in FIG. 5. As illustrated, a conference table 101 and chairs 102 are installed in the conference room 100, and directional microphones M1, M2, M3... and directional speakers S1, S2, S3... are set on the conference table 101 in correspondence with the respective chairs 102.
[0006]
Further, for example, the sound control apparatus 103, the keyboard 104, and the like are installed outside the conference room (they may also be installed inside). The sound control apparatus 103 is provided with a switching unit 105 for switching between the directional microphones and directional speakers, an amplifier, an MPU, a file (not shown), and the like.
[0007]
The directional speakers S1, S2, S3... have directivity such that their sound can be heard only by the person sitting in the corresponding chair 102, and the directional microphones M1, M2, M3... pick up only the voice of the person sitting in the corresponding chair 102.
[0008]
When a meeting is to be held, the places (positions) where people sit are designated in advance, and acoustic spaces including those places are set. For example, suppose that members A, B, C,... of the conference sit on the chairs 102 as shown in FIG. 5. At this time, seat information (who sits on which chair), connection information for the speakers and microphones, and the like are input to the sound control apparatus 103 using the keyboard 104.
[0009]
The sound control apparatus 103 operates the switching unit 105 using the input information. By this switching, the directional speakers S1, S2, S3... and the directional microphones M1, M2, M3... are set so as to provide sound independently to specific people.
[0010]
For example, when the member A and the member C talk, the circuit of the directional
microphone M1 and the circuit of the directional speaker S3 are connected, and the circuit of the
directional microphone M3 and the circuit of the directional speaker S1 are connected. Also, if a
specific sound is provided from the sound control device 103 to, for example, the member B, an
external sound source (not shown) connected to the sound control device 103 is connected to the
directional speaker S2.
[0011]
In this manner, by switching the circuit of each speaker and microphone in accordance with the
information input from the keyboard 104, sound can be provided independently to a specific
person.
[0012]
SUMMARY OF THE INVENTION The above-mentioned prior art has the following problems.
(1) The sound generation apparatus shown in FIG. 5 can provide sound independently to a specific person, but cannot cope with the case where a member moves and sits in another chair. In that case, the same sound as before the movement can be provided again only by re-entering the moved member's information (the number of the destination chair, etc.) from the keyboard.
[0013]
However, the input of information from the keyboard is troublesome and takes time. (2) If the members move more frequently, keyboard input cannot keep up and the system cannot respond adequately.
[0014]
(3) If a plurality of members move at the same time, coping with it becomes difficult and sound can no longer be provided. The present invention solves these conventional problems. Its object is to automatically set up an acoustic environment in which a plurality of persons moving freely in a room, such as one where a conference is held, can each communicate independently or be simultaneously provided with their own acoustic information, without position changes having to be input one by one.
[0015]
FIG. 1 is a diagram showing the principle of the present invention, in which 1 is an acoustic space generation unit, 2 is an acoustic input unit, 3 is a personal position measurement identification unit, and 5 is an acoustic switching unit; 1a, 1b, 1c and 1d denote directional speakers, 2a, 2b, 2c and 2d denote directional microphones, 3a, 3b, 3c and 3d denote sensors, and 11 denotes a control unit.
[0016]
The present invention is configured as follows in order to solve the problems described above. (1) An individual-tracking sound generator characterized by comprising: an acoustic space generation unit 1 that generates a plurality of acoustically separated acoustic spaces; an acoustic input unit 2 that inputs the sound (voice) in each acoustic space; a position detection / identification unit 3 that identifies an individual moving between the acoustic spaces and detects that individual's position; an acoustic switching unit 5 that switches the output signal of the acoustic input unit 2 to an instructed acoustic space; and a control unit 11 that issues switching instructions to the acoustic switching unit 5 based on the information output from the position detection / identification unit 3, whereby a specific sound can be provided continuously to an individual moving freely between the acoustic spaces.
[0017]
(2) In the above configuration (1), a sound generation unit 6 is connected to the acoustic switching unit 5, and the acoustic switching unit 5 is switched based on an instruction from the control unit 11, whereby the input signal of the acoustic input unit 2 for the designated acoustic space can be superimposed on the sound signal of the sound generation unit 6 and output.
[0018]
(3) In configuration (1), a communication device 7 is connected to the acoustic switching unit 5, and the acoustic switching unit 5 is switched based on an instruction from the control unit 11, so that the instructed acoustic space can communicate with the outside.
[0019]
The operation of the present invention based on the above configuration will be described with
reference to FIG.
The acoustic space generation unit 1 generates a plurality of acoustic spaces acoustically
separated by the plurality of directional speakers 1a, 1b, 1c, 1d, and so on.
The sound input unit 2 inputs the sound (voice) of each sound space using the directional
microphones 2a, 2b, 2c, 2d,... Installed for each sound space.
[0020]
The personal position measurement identification unit 3 identifies each individual and detects that individual's position (who is where) using the signals from the sensors 3a, 3b, 3c, 3d,... installed in each acoustic space.
[0021]
Now, it is temporarily assumed that the acoustic space generated by the directional speaker 1a is
1-a, the acoustic space generated by the directional speaker 1b is 1-b, and the acoustic space
generated by the directional speaker 1c is 1-c.
Then, it is assumed that the human A is in the acoustic space 1-a and the human B is in the
acoustic space 1-b.
[0022]
In this case, the personal position measurement identification unit 3 recognizes that person A is in the acoustic space 1-a and person B is in the acoustic space 1-b, and sends this information to the control unit 11, which, based on these pieces of information, instructs the sound switching unit 5 to switch.
[0023]
The sound switching unit 5 switches the sound as instructed above.
As a result, the directional microphones of the acoustic spaces 1-a and 1-b and the directional
speakers are acoustically connected (connected between 2a-1b and 2b-1a). In this state, the
human A in the acoustic space 1-a and the human B in the acoustic space 1-b can talk with each
other.
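The cross-connection described above (microphone 2a routed to speaker 1b, and microphone 2b routed to speaker 1a) can be sketched as a small routing table. This is an illustrative sketch only; the `mic:`/`spk:` labels and the `connect_conversation` helper are assumptions for explanation, not part of the patent.

```python
def connect_conversation(space_a: str, space_b: str) -> dict:
    """Return a routing table mapping microphone -> speaker so that the
    occupants of two acoustic spaces can hear each other."""
    def mic(space):   # directional microphone installed above a space
        return "mic:" + space
    def spk(space):   # directional speaker covering a space
        return "spk:" + space
    return {
        mic(space_a): spk(space_b),  # A's voice is played in B's space
        mic(space_b): spk(space_a),  # B's voice is played in A's space
    }

routing = connect_conversation("1-a", "1-b")
# routing == {"mic:1-a": "spk:1-b", "mic:1-b": "spk:1-a"}
```

When person A later moves, only this table needs to be regenerated; the microphones and speakers themselves stay fixed to their spaces.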
[0024]
Next, it is assumed that the human A moves and enters the acoustic space 1-c. At this time, the
personal position measurement identification unit 3 recognizes that the human A has moved
from the acoustic space 1-a to 1-c based on the information from the sensors 3a and 3c, and
sends the information to the control unit 11. The control unit 11 immediately instructs the sound
switching unit 5 to switch the sound.
[0025]
As a result, the directional microphones and directional speakers of the acoustic spaces 1-c and 1-b are connected (between 2c-1b and between 2b-1c). Thus, even though person A has moved from the acoustic space 1-a to the acoustic space 1-c, the conversation with person B can continue.
[0026]
Likewise, even when a plurality of persons move freely among a plurality of acoustic spaces, a specific sound can be provided continuously. Further, if a separate sound generation unit is connected to the sound switching unit 5, the sound (for example, music) from that unit can be provided to any acoustic space designated via the control unit 11. Furthermore, by connecting a communication device, communication can be performed between a designated acoustic space and the outside.
[0027]
Embodiments of the present invention will be described below with reference to the drawings. FIGS. 2 to 4 are diagrams showing an embodiment of the present invention: FIG. 2 is a block diagram of the individual-tracking sound generation apparatus, FIG. 3 is an explanatory view of a conference room, and FIG. 4 is an explanatory view of the acoustic spaces.
[0028]
In the figure, the same reference numerals as in FIG. 1 denote the same components. Further, 6
indicates a sound generation unit, 7 indicates a communication device, 8 indicates an entrance, 9
indicates a personal identification device, 10 indicates a registration unit (file), 11 indicates a
control unit, 12 indicates an input unit, and 13 indicates a conference site.
[0029]
The personal tracking sound generation apparatus of this embodiment is an apparatus used
when a plurality of people (members of a meeting or the like) talk while changing the place, and
the configuration of the apparatus is shown in FIG.
[0030]
As shown in FIG. 2, the personal tracking type sound generation apparatus includes an acoustic space generation unit 1, an acoustic input unit 2, an individual position measurement identification unit 3, an individual tracking unit 4, an acoustic switching unit 5, a sound generation unit 6, a registration unit (file) 10, a control unit 11, an input unit 12, and the like. Further, the acoustic space generation unit 1 includes a plurality of directional speakers 1a, 1b, 1c and 1d, the acoustic input unit 2 includes a plurality of directional microphones 2a, 2b, 2c and 2d, and the individual position measurement identification unit 3 is provided with a plurality of sensors 3a, 3b, 3c and 3d.
[0031]
Furthermore, the personal tracking sound generation apparatus is connected to a personal
identification device 9 installed near the entrance 8 of a meeting room or the like, and connected
to a communication device (for example, a telephone) 7. The communication device 7 can
communicate with members of another conference room at a remote place via a communication
line.
[0032]
The individual tracking sound generation apparatus is installed and used in a conference hall 13
as shown in FIG. 3, for example. A personal identification device 9 is installed in the vicinity of
the entrance 8 of the conference hall 13 to identify the visitors. For example, the personal
identification device may be a commonly used device such as a personal identification device
using a fingerprint or a personal identification device using an ID card.
[0033]
The directional speakers 1a, 1b, 1c, 1d are installed on the ceiling of the conference room 13 at fixed intervals, and are set so as to output sound into a fixed space toward the floor. The acoustic space generation unit 1 provided with such directional speakers divides the space in which the humans (members of the conference) move into appropriately sized, acoustically separated acoustic spaces.
[0034]
The acoustic spaces in this case are, for example, those shown as 1-a, 1-b, 1-c, 1-d... in FIG. 4. One acoustic space is a floor area just large enough to accommodate one person with some room to spare.
[0035]
The sound input unit 2 includes a plurality of directional microphones 2a, 2b, 2c, 2d,... and inputs the sound (voice) in each acoustic space. The directional microphones 2a, 2b, 2c, 2d,... are installed on the ceiling of the conference room 13, one for each of the acoustic spaces 1-a, 1-b, 1-c, 1-d,.... In FIG. 4, the installation areas of the directional microphones are indicated by 2-a, 2-b, 2-c, 2-d, and so on.
[0036]
The individual position measurement identification unit 3 includes a plurality of sensors 3a, 3b, 3c, 3d,... and locates and identifies persons moving around the conference hall (identifying who moved where). As the sensors, for example, infrared sensors are used, installed on the ceiling of the conference room 13, one for each acoustic space. The detection areas of these sensors are the areas indicated by 3-a, 3-b, 3-c, 3-d,... in FIG. 4.
[0037]
Based on the information from the personal identification device 9 and the personal position measurement identification unit 3, the personal tracking unit 4 follows persons as they move around. The control unit 11 performs various controls, and the input unit 12 (a keyboard or the like) is used to input data. The registration unit (file) 10 registers information on the members of the conference.
[0038]
The sound switching unit 5 switches the sound to be output to the sound space generation unit 1
based on an instruction from the control unit 11. The sound generation unit 6 includes, for
example, a reproduction device of music (BGM or the like), an announcement device, and the like.
[0039]
The operation of the present embodiment based on the above configuration will now be described. This example assumes a conference room in which a plurality of people talk while changing places. Before the start of the meeting, information on the members (attendees) of the meeting is input from the input unit 12, and the control unit 11 registers the information in the registration unit 10.
[0040]
Then, the directional speakers 1a, 1b, 1c, 1d,... output sound into the acoustic spaces 1-a, 1-b, 1-c, 1-d,..., while the directional microphones 2a, 2b, 2c, 2d,... collect the sound (voice) from those acoustic spaces in the installation areas 2-a, 2-b, 2-c, 2-d,.... The sensors detect the presence of humans in the acoustic spaces 1-a, 1-b, 1-c, 1-d,....
[0041]
The members A, B, C,... of the conference enter one by one through the entrance 8 of the conference room 13. First, the case where member A enters through the entrance 8 will be described. At the entrance 8, the personal identification device 9 first identifies that the person now entering is member A, and sends the personal identification information to the personal tracking unit 4.
[0042]
When member A enters the conference room (see FIG. 4), the sensor 3a of the personal position measurement identification unit 3 detects that a person is present in the acoustic space 1-a and sends detection information to the personal tracking unit 4. Since the identification information from the personal identification device 9 (information indicating that member A has entered) has already been input to the individual tracking unit 4, it is determined from these pieces of information that member A is in the acoustic space 1-a, and this information is sent to the control unit 11.
[0043]
Based on this information, the control unit 11 registers in the registration unit 10 that member A is in the acoustic space 1-a. Next, suppose that member A moves to another acoustic space. Since member A must move into a nearby acoustic space, when no one is present any longer in the acoustic space 1-a and a person then appears in, for example, the acoustic space 1-b, it is determined that member A has moved from 1-a to 1-b.
[0044]
In this way, the individual tracking unit 4 keeps track of which space member A occupies within the conference room and always registers member A's current location in the registration unit 10. Members B, C,... who subsequently enter the conference room 13 are traced in the same manner.
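The inference rule just described — a member's old space goes empty and a nearby space becomes occupied, so the member is assumed to have moved there — can be sketched as a registry update. The `update_position` function and its event arguments are hypothetical names introduced only for illustration.

```python
def update_position(registry: dict, vacated: str, occupied: str) -> dict:
    """registry maps member -> acoustic space. Given a sensor event pair
    (a space whose detection signal disappeared and a space where a person
    newly appeared), return an updated copy of the registry."""
    updated = dict(registry)
    for member, space in registry.items():
        if space == vacated:            # this member's old space went empty,
            updated[member] = occupied  # so they must be the new arrival
            break
    return updated

reg = {"A": "1-a", "B": "1-c"}
reg = update_position(reg, vacated="1-a", occupied="1-b")
# reg == {"A": "1-b", "B": "1-c"}
```

Note that this simple rule relies on the patent's assumption that members move only into adjacent spaces one at a time; simultaneous moves would need the personal identification information as a tiebreaker.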
[0045]
When information is input from the input unit 12, information on who the members are, who talks with whom, and the like is input and registered in the registration unit 10 in advance. In this case, suppose that members A and B are registered so as to have a conversation, and that member A is currently in the acoustic space 1-a while member B is in the acoustic space 1-c.
[0046]
In this case, the control unit 11 reads out the information registered in the registration unit 10 and instructs the acoustic switching unit 5 to switch. According to this instruction, the acoustic switching unit 5 switches the acoustic circuits so that the directional microphone 2a is connected to the directional speaker 1c, and the directional microphone 2c is connected to the directional speaker 1a (each acoustically connected).
[0047]
In this state, member A in the acoustic space 1-a and member B in the acoustic space 1-c can exchange voice and converse. Next, suppose that member A moves from the acoustic space 1-a to 1-b. At this time, the detection signal from the sensor 3a in the acoustic space 1-a disappears, and at the same time the sensor 3b in the nearby acoustic space 1-b detects a person.
[0048]
Based on this information, the individual tracking unit 4 determines that the member A has
moved from the acoustic space 1-a to 1-b, and sends the information to the control unit 11. The
control unit 11 registers the above information in the registration unit 10.
[0049]
At the same time, the control unit 11 instructs the acoustic switching unit 5 to switch the acoustic circuits. By this switching, the directional microphone 2b is connected to the directional speaker 1c, and the directional microphone 2c is connected to the directional speaker 1b. With this connection, member A in the acoustic space 1-b and member B in the acoustic space 1-c can again exchange voice and continue their conversation.
[0050]
In this case, the above-mentioned sound is not provided to member C in another acoustic space. In this way, even if members in the conference hall move freely, conversations among specific members can be conducted by detecting the movement and automatically switching the sound transmission circuits.
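Tying the pieces together, the control unit's behavior after each detected movement can be sketched as regenerating the microphone-to-speaker routing for every registered conversation pair from the current registry. All identifiers here are illustrative assumptions, not names used in the patent.

```python
def reroute(registry: dict, pairs: list) -> dict:
    """registry: member -> acoustic space; pairs: list of (member, member)
    tuples registered as conversation partners. Returns the full
    microphone -> speaker routing table for the current positions."""
    routing = {}
    for a, b in pairs:
        routing["mic:" + registry[a]] = "spk:" + registry[b]  # a heard by b
        routing["mic:" + registry[b]] = "spk:" + registry[a]  # b heard by a
    return routing

# After member A's move from 1-a to 1-b, the registry is re-read and the
# circuits are regenerated; member C's space is simply absent from the table.
registry = {"A": "1-b", "B": "1-c", "C": "1-d"}
routing = reroute(registry, [("A", "B")])
# routing == {"mic:1-b": "spk:1-c", "mic:1-c": "spk:1-b"}
```

Because the routing is derived entirely from the registry, no manual keyboard re-entry is needed when someone moves, which is the improvement over the FIG. 5 prior art.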
[0051]
Further, when the sound from the sound generation unit 6 (for example, music) is to be provided only to a specific member, the control unit 11 reads out the information registered in advance in the registration unit 10 and, based on that information, has the acoustic switching unit 5 switch the acoustic circuits.
[0052]
For example, as described above, suppose that member A is in the acoustic space 1-b and member B is in the acoustic space 1-c, and they are in conversation. At this time, if the acoustic signal from the sound generation unit 6 is superimposed on the input signal of the directional microphone 2c by switching of the acoustic switching unit 5, the superimposed acoustic signal is output from the directional speaker 1b. Thereby, only member A is provided with the sound (for example, music) from the sound generation unit 6.
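The superimposition step can be sketched as sample-wise addition of the two signals before they reach the designated speaker. Real hardware would do this mixing in the analog or DSP domain; this integer toy example only illustrates the idea, and the function name is an assumption.

```python
def superimpose(voice: list, music: list) -> list:
    """Mix two equal-length audio signals by sample-wise addition, as when
    the sound generation unit's output is overlaid on a microphone signal."""
    return [v + m for v, m in zip(voice, music)]

# Member B's voice (picked up by microphone 2c) mixed with background music,
# then sent only to speaker 1b, so only member A hears the combination.
mixed = superimpose([1, 2, 3], [10, 10, 10])
# mixed == [11, 12, 13]
```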
[0053]
Furthermore, if an acoustic signal from the communication device 7 is connected to an acoustic
space having a specific member via the acoustic switching unit 5, communication with the
outside can be performed. For example, when the member B in the acoustic space 1-c
communicates with the outside, the directional microphone 2c and the directional speaker 1c
may be connected to the communication device 7.
[0054]
At this time, the control unit 11 reads out the information registered in advance in the registration unit 10 and instructs the acoustic switching unit 5 to switch the acoustic circuits. The switching can also be performed by inputting information directly from the input unit 12.
[0055]
Other Embodiments Although embodiments have been described above, the present invention can also be implemented as follows. (1) Switching of the acoustic switching unit is performed by an instruction from the control unit based on the information registered in advance in the registration unit, but it is also possible to switch based on information input from the input unit, without using the registration information.
[0056]
(2) The present invention can be used not only for ordinary electronic conferencing systems, but
also for conferences using computers (conferences etc. performed while exchanging information
between a plurality of remote computers).
[0057]
As described above, according to the present invention, the following effects can be obtained. (1) Even if a plurality of members of a meeting or the like move freely within the conference room, the movement can always be detected and the sound switching performed automatically.
[0058]
(2) Specific sounds can be continuously provided to freely moving individuals without tying them to equipment. (3) The sound from the sound generation unit can be provided only to designated individuals. In this case, the sound can be provided to a specific individual by superimposing it on the voice during a conversation.
[0059]
(4) By connecting the communication device, a person in the designated acoustic space can
communicate with the outside.