Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2008131515
PROBLEM TO BE SOLVED: To provide an information processing apparatus provided with sound output means that is capable of controlling the sound output means in correspondence with an object on a display unit, as well as an information processing method and a program for such an apparatus. SOLUTION: Based on the position of an object 33 detected by a sensor 6 or a built-in camera 7, the properties of the sound output from each of the speakers 30a to 30d are individually controlled by a CPU 10. The operator can therefore control the output sound by changing the position of the object 33 on the display unit 32, so the sound output can be operated and controlled intuitively. [Selected figure] Figure 1
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND
PROGRAM
[0001]
The present invention relates to an information processing apparatus that detects the position of an object on a display unit and controls the sound to be output based on the detected position, and to an information processing method and a program.
[0002]
As an information processing apparatus that controls the output of sound, a first prior art discloses a voice teleconferencing device in which the transmitting side transmits voice together with speaker position information indicating the position of the person speaking, and the receiving side, which includes a plurality of loudspeakers for voice output, outputs the received voice from the loudspeaker corresponding to that position information (see, for example, Patent Document 1).
[0003]
In the information processing apparatus of a second prior art, a display is provided on the upper surface of a desk. The display can show various images, and objects can be placed on it. The information processing apparatus detects the position of an object placed on the display and, based on the detected position, changes the display area of the image shown on the display so that the image avoids the object, which acts as an obstacle. The information processing apparatus is also provided with a button, and the content of the image displayed on the display can be changed by operating the button (see, for example, Patent Document 2).
[0004]
Patent Document 1: JP-A-9-261351
Patent Document 2: JP-A-2004-271866
[0005]
In the first prior art, although the loudspeaker used to output voice is switchable, speaker position information indicating the position of the person speaking must be added to the voice information in order to switch loudspeakers. Furthermore, neither the first nor the second prior art information processing device discloses a technique for controlling the output of sound based on an object placed on the display screen.
[0006]
As described above, the prior art cannot control the output of sound in accordance with an object on the display unit. An object of the present invention is therefore to provide, in an information processing apparatus provided with sound output means, an information processing apparatus, an information processing method, and a program capable of controlling the sound output means in correspondence with an object on the display unit.
[0007]
The present invention is characterized by comprising sound output means for outputting sound, display means for displaying an image on a display unit, position detection means for detecting the position of an object on the display unit, and control means for controlling the sound output means so as to output a sound corresponding to the position of the object detected by the position detection means.
[0008]
Further, the present invention is characterized in that the sound output means includes a plurality of sound output units whose output sounds can be controlled individually, and the control means individually controls the sound output from each sound output unit based on the position of the object detected by the position detection means.
[0009]
Furthermore, the present invention is characterized in that the sound output means includes a plurality of sound output units whose output sounds can be controlled individually, and the control means individually controls the sound output from each sound output unit based on the image displayed at the position of the object detected by the position detection means.
[0010]
Furthermore, the present invention further includes identification means for acquiring identification information that identifies an object on the display unit, related sound information storage means for storing predetermined related sound information in association with the identification information, and related sound information search means for searching, when the identification means acquires identification information, for the related sound information stored in association with that identification information; the control means individually controls the sound output from each sound output unit based on the position of the object detected by the position detection means and the related sound information retrieved by the related sound information search means.
[0011]
Furthermore, the present invention is characterized in that the control means individually controls, for each sound output unit, the volume of the output sound based on the distance between the position of the object detected by the position detection means and the position of each sound output unit.
[0012]
Furthermore, the present invention is an information processing method characterized by including a display step of displaying an image on a display unit, a position detection step of detecting the position of an object on the display unit, and a sound output step of outputting a sound corresponding to the detected position of the object.
[0013]
Furthermore, the present invention is a program which causes a computer to execute the
function of the information processing apparatus.
[0014]
According to the present invention, the sound output means is controlled by the control means so as to output a sound corresponding to the position of the object detected by the position detection means.
Therefore, when an object is present on the display unit, the sound output means outputs a sound corresponding to the position of the object on the display unit, so the output sound can be controlled by changing the position of the object.
As a result, the operator can control the output sound by changing the position of the object on the display unit, so the output of the sound can be controlled intuitively.
[0015]
Further, according to the present invention, the properties of the sound output from each sound output unit are individually controlled by the control means based on the position of the object detected by the position detection means.
Therefore, by changing the position of the object, the properties of the sound output from each sound output unit can be controlled individually.
The operator can thus control the properties of the sound output from each sound output unit by changing the position of the object, and can therefore control the sound output of each sound output unit intuitively.
In addition, since a plurality of sound output units are provided, realistic sound can be output.
[0016]
Furthermore, according to the present invention, the sound output from each sound output unit is individually controlled by the control means based on the image displayed at the position of the object detected by the position detection means.
Therefore, by moving the object to a position where a different image is displayed on the display unit, the sounds output from the sound output units can be controlled individually.
As a result, the operator can control the sound output from each sound output unit by moving the object to the position of an image displayed on the display unit, so the sound output of each sound output unit can be controlled even more intuitively. In addition, since a plurality of sound output units are provided, realistic sound can be output.
[0017]
Furthermore, according to the present invention, when the identification means acquires identification information, the related sound information stored in association with that identification information is retrieved. The related sound information for the object on the display unit can therefore be found by the related sound information search means. The sound output from each sound output unit is individually controlled by the control means based on the retrieved related sound information and the position of the object. Since a sound related to the object on the display unit can thus be output individually from each sound output unit, the operator can change the sound output from each sound output unit by exchanging the object for another object. This allows the operator to change the sound itself by changing the object itself, as well as by changing the position of the object on the display unit. The operator can therefore control each sound output unit more intuitively and individually.
[0018]
Furthermore, according to the present invention, the volume of the output sound is individually controlled by the control means for each sound output unit based on the distance between the position of the object detected by the position detection means and the position of each sound output unit. The sound output unit closest to the position of the object on the display unit can therefore output the largest volume, and the output volume can be controlled so that it decreases as the distance increases. The operator can thus select which sound output unit's volume to increase or decrease by changing the position of the object, and can therefore control the volume of each sound output unit intuitively.
[0019]
Further, according to the present invention, the sound corresponding to the position of the object detected in the position detection step is output in the sound output step. Therefore, when an object is present on the display unit, a sound corresponding to the position of the object on the display unit is output, so the sound output can be controlled by changing the position of the object. As a result, the operator can control the output sound by changing the position of the object on the display unit, so the output of the sound can be controlled intuitively.
[0020]
Furthermore, according to the present invention, a computer can be made to execute the functions of the information processing apparatus, which reduces the effort of controlling the sound output means.
[0021]
DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, embodiments of the present invention will be described with reference to the drawings.
Parts corresponding to matters already described in a preceding embodiment may be given the same reference marks, and duplicate description may be omitted. When only part of a configuration is described, the other parts of the configuration are the same as those described above. Not only may the portions specifically described in each embodiment be combined, but the embodiments themselves may also be partially combined, provided no problem arises from the combination. The start condition of each flowchart is not necessarily limited to the described start condition.
[0022]
FIG. 1 is a block diagram showing the electrical configuration of the information processing apparatus 1 according to the first embodiment of this invention. The information processing apparatus 1 is provided, for example, in a desk, and is configured so that, when various objects 33 (see FIG. 3) are placed on the desk, the sound to be output is controlled according to the position where the object 33 is placed. The information processing apparatus 1 has an input / display system 2 and a control system 3. The input / display system 2 includes a liquid crystal display (LCD) 5, a sensor 6, a built-in camera 7, a backlight 8 as a light source, and a plurality of speakers 30. The control system 3 includes a central processing unit (CPU) 10 as control means, a sensor input unit 11, a peripheral device input unit 12, a display control unit 13, a main storage unit 14, an auxiliary storage unit 15, a communication unit 16, a bus 17, and a sound control unit 31.
[0023]
The LCD 5 is display means and displays information such as electronic files as images on the display unit 32 (see FIG. 3). The sensor 6 detects at least one of a pattern, a character, and a tag of the object 33 placed on the display screen of the LCD 5. The display unit 32 is configured so that the object 33 can be placed on it, and is strong enough to be used as a desk.
[0024]
The sensor 6 is realized by an image sensor using a thin film transistor (TFT) array, an infrared sensor, a digitizer, a tablet, or the like. The sensor 6 may be configured as separate low-resolution and high-resolution sensors, or may be integrated. The built-in camera 7 detects at least one of a pattern, a character, and a data carrier of the object 33 placed on the display screen. The data carrier is, for example, a QR code or a bar code, and the built-in camera 7 has a function of detecting whether or not such a data carrier is formed on the object 33. The built-in camera 7 may be integrated with the sensor 6. The sensor 6 or the built-in camera 7 functions as position detection means: it detects the position of the object 33 on the display unit 32 and gives the detected position information to the sensor input unit 11.
[0025]
The backlight 8 is a light source for liquid crystal display. The light source is not limited to the backlight 8; a front light may be used when the LCD 5 is of a reflective type. The speaker 30 is sound output means and outputs sound. In the present embodiment, a plurality of speakers 30 are provided, and the speakers 30a to 30d function as sound output units. Each of the speakers 30a to 30d can be controlled individually. The sensor 6 and the built-in camera 7 are electrically connected to the sensor input unit 11. The sensor input unit 11 converts the output from the sensor 6 and the built-in camera 7 into electric signals and then into data. The peripheral device input unit 12 can be electrically connected to peripheral devices 18 such as a keyboard, a pointing device, an external camera, and a microphone, and converts the output from the peripheral device 18 into electric signals and then into data.
[0026]
The LCD 5 and the backlight 8 are electrically connected to the display control unit 13, which controls them in accordance with instructions from the CPU 10. The speakers 30a to 30d are electrically connected to the sound control unit 31, which controls them individually in accordance with instructions from the CPU 10.
[0027]
The main storage unit 14 stores the operation programs necessary for realizing the various processing operations of the information processing apparatus 1. The main storage unit 14 is implemented by, for example, a random access memory (RAM); it acquires information stored in the auxiliary storage unit 15 and temporarily stores information transmitted from each component. The auxiliary storage unit 15 is realized by an information storage medium such as a hard disk, a digital versatile disc (DVD), or a compact disc (CD), and additionally stores information necessary for operation. The auxiliary storage unit 15 also functions as related sound information storage means; in the present embodiment it stores related sound information in association with identification information that identifies the object 33, and also stores related sound information in association with the images displayed on the display unit 32. The related sound information is, for example, sound information related to the object 33 or the image. For example, if the object 33 is a model of a locomotive (hereinafter sometimes referred to simply as a "locomotive"), the related sound information is information on the sound generated when the locomotive travels; if the object 33 is a model of an airplane (hereinafter sometimes referred to simply as an "airplane"), the related sound information is information on the sound generated when the airplane flies. Further, when the image shows the forest 41, the related sound information is information on the sound of birds singing or of wind passing through the forest 41. The communication unit 16 is electrically connected to a communication network such as the Internet, a public line network, or a LAN (Local Area Network), and has a function of communicating with external devices.
[0028]
The CPU 10 is electrically connected to the sensor input unit 11, the peripheral device input unit 12, the display control unit 13, the main storage unit 14, the auxiliary storage unit 15, the communication unit 16, and the sound control unit 31 via the bus 17. Based on control programs stored internally and in the main storage unit 14, the CPU 10 comprehensively controls the hardware resources, including the sensor input unit 11, the peripheral device input unit 12, the display control unit 13, the main storage unit 14, the auxiliary storage unit 15, the communication unit 16, and the sound control unit 31. The CPU 10 is implemented by, for example, a microcomputer.
[0029]
Further, the CPU 10 instructs the sound control unit 31 to output the sound corresponding to the position of the object 33 detected by the sensor 6 or the built-in camera 7, and thereby controls each of the speakers 30a to 30d. Based on the position of the object 33 detected by the sensor 6 or the built-in camera 7, the CPU 10 individually controls the sound output from each of the speakers 30a to 30d. Here, controlling the sound means, for example, controlling the properties of the sound. The properties of the sound include at least one of the output state and the characteristics of the sound output from each of the speakers 30a to 30d. The output state includes at least one of volume, pitch, and sound quality. The characteristics of the sound include at least one of the operator's preference level, type, and genre.
[0030]
FIG. 2 is a diagram showing the configuration of the operation program 20 stored in the main storage unit 14. The operation program 20 of this embodiment includes a position recognition processing program 21, a direction recognition processing program 25, a tag recognition processing program 22, a shape recognition processing program 23, a character recognition processing program 26, an information search processing program 24, an image comparison processing program 27, and a sound output processing program 28. When the CPU 10 executes the position recognition processing program 21, position recognition processing for recognizing the position of the object 33 detected by the sensor 6 or the built-in camera 7 is realized. The position of the object 33 is recognized, for example, as the position coordinates of the part of the object 33 closest to an origin set at a predetermined position of the display area 32a of the display unit 32; for example, the lower left corner of the display area 32a in FIG. 3, described later, is set as the origin.
[0031]
The CPU 10 executes the tag recognition processing program 22 to recognize tag information based on the data carrier detected by the sensor 6 or the built-in camera 7. The data carrier is, for example, a QR code or a bar code. The tag information includes, for example, identification information, direction recognition information, and shape recognition information. The identification information is information for identifying the object 33, the direction recognition information is information for recognizing the direction of the object 33, and the shape recognition information is information for recognizing the shape of the object 33. The CPU 10 therefore functions as identification means for identifying the object 33.
[0032]
The CPU 10 executes the shape recognition processing program 23 to recognize the shape of the image read by the sensor 6 or the built-in camera 7. The CPU 10 executes the information search processing program 24 to search the auxiliary storage unit 15 for the related sound information associated with the identification information. The CPU 10 executes the character recognition processing program 26 to recognize character information detected by the sensor 6 or the built-in camera 7. By executing the direction recognition processing program 25, the CPU 10 recognizes a predetermined direction of the object 33, for example the vertical or horizontal direction, based on the image information, the direction recognition information included in the data carrier, or the recognized characters. The CPU 10 executes the image comparison processing program 27 to implement image comparison processing for comparing time-series image information. The CPU 10 executes the sound output processing program 28 to realize sound output processing for controlling the sound output from each of the speakers 30a to 30d based on the position of the object 33 recognized by the position recognition processing.
[0033]
FIG. 3 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32. The display unit 32 is square, and has on its surface a square display area 32a for displaying images. Four speakers 30a to 30d are provided near the four corners of the display unit 32. FIG. 3 shows a state in which a locomotive serving as the object 33 is placed on the display unit 32. In the present embodiment, the vertical direction is the vertical direction of the drawing in FIG. 3, and the horizontal direction is the horizontal direction of the drawing. An image of an elliptical track centered near the center of the display unit 32 is displayed so that it extends close to each corner. In FIG. 3, the locomotive serving as the object 33 is placed near the lower right corner.
[0034]
FIG. 4 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32, in a state in which a sound is output from the speaker 30. In FIG. 4, to aid understanding, the state in which a sound is output from the lower right speaker 30b is visualized by a note 34. In FIG. 4, sound is output only from the lower right speaker 30b, and no sound is output from the remaining three speakers 30a, 30c, and 30d. The selection of the speaker 30 that outputs the sound corresponds to the position of the locomotive, which is the object 33 placed on the display unit 32. In the present embodiment, the CPU 10 performs control so that the sound is output only from the speaker 30 closest to the object 33 placed on the display unit 32. This processing will be described using a flowchart.
[0035]
FIG. 5 is a flowchart showing the sound output processing procedure of the CPU 10. The sound output process shown in FIG. 5 is executed by the CPU 10 and is performed repeatedly while the information processing apparatus 1 is in operation. While the information processing apparatus 1 is in operation, an image, for example the image shown in FIG. 4, is displayed on the display unit 32. The process shown in FIG. 5 starts, for example, on the condition that the main power supply (not shown) of the information processing apparatus 1 is switched from off to on, at constant time intervals set by a timer (not shown), or when the approach or movement of a user is detected by a human sensor (not shown).
[0036]
In step a1, the sensor 6 or the built-in camera 7 senses the state of the display screen of the LCD 5, which is the display means of the information processing apparatus 1, in order to determine in the next step a2 whether the object 33 is present; the process then moves to step a2. In step a2, it is determined from the sensing result whether the object 33 is present on the display screen of the LCD 5. If it is determined that the object 33 is present on the display screen, the process proceeds to step a3; if it is determined that it is not, the process ends.
[0037]
In step a3, the position recognition processing program 21 described above is executed to recognize the position of the object 33, and the process proceeds to step a4. Steps a1 to a3 are therefore a position detection step of detecting the position of the object 33 on the display screen. In step a4, the sound output processing program 28 described above is executed to select the closest speaker 30 based on the position of the object 33 and to instruct the sound control unit 31 to output the sound from the selected speaker 30, and this process ends. Step a4 is therefore a sound output step of outputting a sound corresponding to the detected position of the object.
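As an illustrative aid (not part of the specification), the following is a minimal Python sketch of the flow of steps a1 to a4; the unit-square coordinate system, the speaker positions, and the sensor/sound-control interfaces are assumptions.
```python
import math

# Assumed speaker positions near the four corners of a unit-square display
# area, with the origin at the lower left (cf. the origin used in FIG. 3).
SPEAKER_POSITIONS = {
    "30a": (1.0, 1.0),  # upper right
    "30b": (1.0, 0.0),  # lower right
    "30c": (0.0, 0.0),  # lower left
    "30d": (0.0, 1.0),  # upper left
}

def nearest_speaker(obj_pos):
    """Return the id of the speaker closest to the object position."""
    return min(SPEAKER_POSITIONS,
               key=lambda s: math.dist(obj_pos, SPEAKER_POSITIONS[s]))

def sound_output_process(sensor, sound_control):
    """One pass of the FIG. 5 procedure (steps a1 to a4)."""
    obj_pos = sensor.detect_object_position()  # steps a1-a2: sense the screen
    if obj_pos is None:                        # no object 33 on the display unit
        return
    speaker = nearest_speaker(obj_pos)         # step a3: position recognition
    sound_control.output_from(speaker)         # step a4: drive only that speaker
```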
[0038]
By this processing, when the object 33 is present on the display unit 32, a sound corresponding to the position of the object 33 on the display unit 32 is output by the speaker 30, so the output sound can be controlled by changing the position of the object 33. The operator can thus control the output sound by changing the position of the object 33 on the display unit 32, so the sound output can be operated and controlled intuitively. The operability of the sound output from the speaker 30 can therefore be improved.
[0039]
Further, in the present embodiment, the properties of the sound output from each of the speakers 30a to 30d are individually controlled by the CPU 10 based on the position of the object 33 detected by the sensor 6 or the built-in camera 7. Therefore, by changing the position of the object 33, the properties of the sound output from each of the speakers 30a to 30d can be controlled individually. As a result, the operator can control the properties of the sound output from the speakers 30a to 30d by changing the position of the object 33, and can thus control the output of the sounds from the speakers 30a to 30d intuitively. Further, since the plurality of speakers 30 are provided, realistic sound can be output.
[0040]
Moreover, in the present embodiment, the volume, which is output information of the sound, can be selected as the property of the sound. The sound of the speaker 30 closest to the object 33 is increased, and the sound output from the remaining speakers 30 is made as small as possible, in other words, its output is stopped. Therefore, by placing the object 33 near the speaker 30 from which sound is to be output, the speaker 30 that outputs the sound can be selected, so the operator can operate the speaker 30 intuitively.
[0041]
Further, in the present embodiment, the sound output from each of the speakers 30a to 30d is not particularly limited, and the configuration for selecting the speaker 30 that outputs the sound has been described; however, a sound based on the object 33 placed on the display unit 32 may be output. In that case, the CPU 10 executes the tag recognition processing program in parallel with the position recognition processing program in step a3 described above to acquire the identification information of the object 33. Based on the identification information, the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with it, and in step a4 the sound control unit 31 is instructed to output the read related sound information. By this processing, a sound related to the object 33 can be output. For example, when the object 33 is a locomotive as shown in FIG. 4, the output sound is the sound generated when the locomotive travels, as described above. By outputting the sound related to the object 33, and moreover outputting it from the speaker 30 close to the object 33, a sense of realism can be obtained, as if the object 33 were at that position in the scene on the display unit 32.
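As an illustration of this tag-based variant (again not part of the specification), the sketch below pairs identification information read from the data carrier with stored related sound information; the table contents and the read_tag interface are assumptions.
```python
# Hypothetical related-sound table: identification information read from the
# data carrier (QR code / bar code) maps to the stored related sound.
RELATED_SOUND_TABLE = {
    "locomotive": "sound generated when the locomotive travels",
    "airplane":   "sound generated when the airplane flies",
}

def related_sound_for(sensor, table=RELATED_SOUND_TABLE):
    """Tag recognition followed by the information search processing."""
    tag_id = sensor.read_tag()  # identification information of the object 33
    return table.get(tag_id)    # related sound information, or None if unknown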
[0042]
Next, a second embodiment of the present invention will be described. The information processing apparatus 1 according to the present embodiment is characterized in that, when the object 33 is placed on the display unit 32, a sound related to the object 33 and a sound related to the image displayed at or near the position where the object 33 is placed are output. FIG. 6 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 according to the present embodiment. In FIG. 6, as the image displayed on the display unit 32, a rain cloud 40, a forest 41, a lakeside 42, and a city area 43 are displayed as an example near the four corners, in clockwise order. On the display unit 32, the image of the rain cloud 40 is therefore displayed near the upper right corner, the image of the forest 41 near the lower right corner, the image of the lakeside 42 near the lower left corner, and the image of the city area 43 near the upper left corner. Further, the image of the elliptical track 44 centered near the center of the display unit 32 is displayed so that it extends close to each corner. In FIG. 6, a locomotive serving as the object 33 is placed in the vicinity where the image of the forest 41 is displayed.
[0043]
FIG. 7 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 of the present embodiment, in a state in which a sound is output from the speaker 30. In FIG. 7, to aid understanding, the state in which a sound is output from the lower right speaker 30b is visualized by the note 34. In FIG. 7, sound is output only from the lower right speaker 30b, and no sound is output from the remaining three speakers 30a, 30c, and 30d. The selection of the speaker 30 that outputs the sound corresponds to the position of the locomotive, which is the object 33 placed on the display unit 32. In the present embodiment, the CPU 10 performs control so that the sound is output only from the speaker 30 closest to the object 33 placed on the display unit 32. The sound output from the nearest speaker 30 is a sound related to the object 33 and a sound related to the image displayed at or near the position where the object 33 is placed. As shown in FIG. 7, the object 33 is a locomotive, and the forest 41 is displayed in the vicinity where the object 33 is placed; therefore, the sound generated when the locomotive travels and the sound generated in the forest 41 are output from the lower right speaker 30b. This processing will be described using a flowchart.
[0044]
FIG. 8 is a flowchart showing the sound output processing procedure of the present embodiment performed by the CPU 10. The sound output process shown in FIG. 8 is executed by the CPU 10 and is performed repeatedly while the information processing apparatus 1 is in operation. The process shown in FIG. 8 starts, for example, on the condition that the main power supply (not shown) of the information processing apparatus 1 is turned on, at constant time intervals set by a timer (not shown), or when the approach or movement of a user is detected by a human sensor (not shown).
[0045]
In step b1, the sensor 6 or the built-in camera 7 senses the state of the display screen of the LCD 5, which is the display means of the information processing apparatus 1, in order to determine in the next step b2 whether the object 33 is present; the process then moves to step b2. In step b2, it is determined from the sensing result whether the object 33 is present on the display screen of the LCD 5. If it is determined that the object 33 is present on the display screen, the process proceeds to step b3; if it is determined that it is not, the process ends.
[0046]
In step b3, the position recognition processing program 21 described above is executed to recognize the position of the object 33, and the process moves to step b4. In step b4, the sound output processing program 28 described above is executed to select the closest speaker 30 based on the position of the object 33, and the process proceeds to step b5. In step b5, the sound to be output is determined, the sound control unit 31 is instructed to output the determined sound from the selected speaker 30, and the process ends. In step b5, the tag recognition processing program is executed to acquire the identification information of the object 33, and based on that identification information the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with it. Further, based on the image displayed at the position of the object 33, the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with that image. The sound that outputs this related sound information is determined as the sound to be output. Thus, by outputting the sound related to the object 33 and the sound related to the place where the object 33 is located, and moreover outputting them from the speaker 30 close to the object 33, a sense of realism can be obtained, as if the object 33 were at that position in the scene on the display unit 32.
[0047]
Thus, in the present embodiment, the properties of the sound output from each of the speakers 30a to 30d are individually controlled by the CPU 10 based on the image displayed at the position of the object 33 detected by the sensor 6 or the built-in camera 7. Therefore, by moving the object 33 to a position where a different image is displayed on the display unit 32, the properties of the sound output from each of the speakers 30a to 30d can be controlled individually. The operator can thus control the properties of the sound output from each of the speakers 30a to 30d by moving the object 33 to the position of an image displayed on the display unit 32, so the output of the sound from the speakers 30a to 30d can be controlled even more intuitively. Further, since the plurality of speakers 30 are provided, realistic sound can be output.
[0048]
Further, in the present embodiment, since a sound related to the object 33 on the display can be output individually from each of the speakers 30a to 30d, the operator can change the sound output from the speakers 30a to 30d by exchanging the object 33 for another object 33. The operator can thus change the output sound both by changing the object 33 itself and by changing the position of the object 33 on the display unit 32. Since the operator can control the speakers 30 individually and more intuitively, the operability can be further improved.
[0049]
Next, a third embodiment of the present invention will be described. The information processing apparatus 1 according to the present embodiment is characterized in that, when the object 33 is placed on the display unit 32, a sound based on the image displayed at or near the position where the object 33 is placed is output.
[0050]
FIG. 9 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 according to the present embodiment. In FIG. 9, the display area 32a is divided into four parts in two rows and two columns, and an image of a different musical instrument is displayed in each of the divided display areas 32a. In the present embodiment, as shown in FIG. 9, the image of the piano 45 is displayed in the upper right display area 32a, the image of the trumpet 46 in the lower right display area 32a, the image of the drum 47 in the lower left display area 32a, and the image of the guitar 48 in the upper left display area 32a. In addition, an image showing a circular area 49 centered near the center of the display unit 32 is displayed.
[0051]
FIG. 10 is a front view showing the information processing apparatus 1 and an example of the screen shown in FIG. 9 displayed on the display unit 32, in a state in which a sound is output from the speaker 30. In FIG. 10, to aid understanding, the state in which a sound is output from the upper right speaker 30a is visualized by a note 34. In FIG. 10, sound is output only from the upper right speaker 30a, and no sound is output from the remaining three speakers 30b, 30c, and 30d. In FIG. 10, a pointing finger serving as the object 33 is placed on the display area 32a where the image of the piano 45 at the upper right is displayed. The selection of the speaker 30 that outputs the sound corresponds to the position of the object 33 placed on the display unit 32. In the present embodiment, the CPU 10 performs control so that the sound is output only from the speaker 30 closest to the object 33 placed on the display unit 32. As the sound output from the nearest speaker 30, a sound related to the image displayed at or near the position where the object 33 is placed is output. Therefore, as shown in FIG. 10, since the piano 45 is displayed in the display area 32a on which the object 33 is placed, the sound of the piano 45 being played is output from the upper right speaker 30a.
[0052]
FIG. 11 is a front view showing the information processing apparatus 1 and another example of the screen shown in FIG. 9 displayed on the display unit 32, in a state in which a sound is output from the speaker 30. In FIG. 11, to aid understanding, the state in which sound is output from the speakers 30 is visualized by notes 34. In FIG. 11, sound is output from all four speakers 30. As shown in FIG. 11, the pointing finger serving as the object 33 is placed at the position where the image showing the circular area 49 is displayed. When the object 33 is placed in the circular area 49, control is performed so that sound is output from all the speakers 30.
[0053]
As described above, the selection of the speaker 30 that outputs the sound is controlled by the CPU 10 so that the sound is output only from the speaker 30 closest to the object 33 placed on the display unit 32. As shown in FIG. 11, however, the position where the object 33 is placed is the circular area 49, which is approximately equidistant from all the speakers 30. In such a case, this is taken as an instruction to output sound from all the speakers 30, and the sound output from each of the speakers 30a to 30d is the sound associated with the image corresponding to the position of that speaker. In the case shown in FIG. 11, therefore, the sound of the piano 45 being played is output from the upper right speaker 30a, the sound of the trumpet 46 from the lower right speaker 30b, the sound of the drum 47 from the lower left speaker 30c, and the sound of the guitar 48 from the upper left speaker 30d.
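As an illustrative aid (not part of the specification), the following minimal Python sketch captures this rule: all speakers play their own corner's instrument sound when the object is in circular area 49, otherwise only the nearest speaker plays; the unit-square geometry and the radius of area 49 are assumptions.
```python
import math

# Assumed geometry: unit-square display area, circular area 49 at the centre.
CIRCLE_CENTER = (0.5, 0.5)
CIRCLE_RADIUS = 0.15  # illustrative radius, not specified in the text

INSTRUMENT_BY_SPEAKER = {  # image displayed in each speaker's corner
    "30a": "piano 45", "30b": "trumpet 46", "30c": "drum 47", "30d": "guitar 48",
}

SPEAKER_POSITIONS = {
    "30a": (1.0, 1.0), "30b": (1.0, 0.0), "30c": (0.0, 0.0), "30d": (0.0, 1.0),
}

def speakers_to_drive(obj_pos):
    """All four speakers inside circular area 49; otherwise only the nearest."""
    if math.dist(obj_pos, CIRCLE_CENTER) <= CIRCLE_RADIUS:
        return dict(INSTRUMENT_BY_SPEAKER)  # every speaker plays its own sound
    s = min(SPEAKER_POSITIONS,
            key=lambda k: math.dist(obj_pos, SPEAKER_POSITIONS[k]))
    return {s: INSTRUMENT_BY_SPEAKER[s]}
```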
[0054]
This processing is realized by the same processing as in the flowchart of FIG. 8. Specifically, in step b5 described above, the tag recognition processing program is executed to acquire the identification information of the object 33, and based on that identification information the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with it. As in the present embodiment, when the object 33 is a pointing finger, information indicating that no sound is to be output is stored as the related sound information. Further, based on the image displayed at the position of the object 33, the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with that image, and the sound that outputs this related sound information is determined as the sound to be output. As a result, the related sound information related to the image displayed at the position where the object 33 is placed is output as a sound. The same operations and effects as in the above-described embodiments can therefore be achieved.
[0055]
Further, in the present embodiment the object 33 is a finger, but it is not limited to a finger; it may be, for example, any object 33 stored in association with related sound information. For example, when the object 33 is another musical instrument, for example a model of a flute (hereinafter sometimes referred to simply as a "flute"), then when the flute is placed in the area where the image of the trumpet 46 is displayed, the sound of the flute and the sound of the trumpet 46 being played may be output.
[0056]
Next, a fourth embodiment of the present invention will be described. The information processing apparatus 1 of the present embodiment is characterized in that, when the object 33 is placed on the display unit 32, the volume output from each of the speakers 30a to 30d is controlled individually based on the distance between the position of the object 33 and the positions of the speakers 30a to 30d. FIG. 12 is a front view showing the information processing apparatus 1 of the present embodiment and an example of a screen displayed on the display unit 32, in a state in which sound is output from the speakers 30. In FIG. 12, an image similar to that of FIG. 6 is displayed, and the locomotive, which is the object 33, is placed below and to the left of the center in the left-right direction. Further, in FIG. 12, to aid understanding, the state in which sound is output from the lower right and lower left speakers 30b and 30c is visualized by notes 34, and the number of notes 34 is shown increasing in proportion to the volume. As shown in FIG. 12, the volume output from the lower left speaker 30c is therefore larger than the volume output from the lower right speaker 30b. The selection of the speaker 30 that outputs the sound and the selection of the volume correspond to the position of the locomotive, which is the object 33 placed on the display unit 32. In the present embodiment, the volume is adjusted based on the distance between the object 33 placed on the display unit 32 and each of the speakers 30a to 30d, and the CPU 10 controls the sound output accordingly. The sound output from the lower right speaker 30b is a sound related to the lower right image and a sound related to the object 33; the sound output from the lower left speaker 30c is a sound related to the lower left image and a sound related to the object 33. Therefore, when the image shown in FIG. 12 is displayed on the display unit 32 and the locomotive is placed as the object 33, the sounds output from the lower right speaker 30b are the sound generated when the locomotive travels and the sound generated in the forest 41, while the sounds output from the lower left speaker 30c are the sound generated when the locomotive travels and the sound generated at the lakeside 42. This processing will be described using a flowchart.
[0057]
FIG. 13 is a flowchart showing the sound output processing procedure of the present embodiment performed by the CPU 10. The sound output process shown in FIG. 13 is executed by the CPU 10 and is performed repeatedly while the information processing apparatus 1 is in operation. The process shown in FIG. 13 starts, for example, on the condition that the main power supply (not shown) of the information processing apparatus 1 is turned on, at constant time intervals set by a timer (not shown), or when the approach or movement of a user is detected by a human sensor (not shown).
[0058]
In step c1, the sensor 6 or the built-in camera 7 senses the state of the display screen of the LCD 5, which is the display means of the information processing apparatus 1, in order to determine in the next step c2 whether the object 33 is present; the process then moves to step c2. In step c2, it is determined from the sensing result whether the object 33 is present on the display screen of the LCD 5. If it is determined that the object 33 is present on the display screen, the process proceeds to step c3; if it is determined that it is not, the process ends.
[0059]
In step c3, the position recognition processing program 21 described above is executed to recognize the position of the object 33, and the process moves to step c4. In step c4, the distance between the position of the object 33 and each of the speakers 30a to 30d is calculated based on the position of the object 33, and the process proceeds to step c5. In step c5, the sound output processing program 28 described above is executed to select the speaker 30 that outputs the sound based on the distance between the position of the object 33 and each of the speakers 30a to 30d; the sound to be output and its volume are then determined, the sound control unit 31 is instructed to output the determined sound from the selected speaker 30 at the determined volume, and the process ends. In step c5, the tag recognition processing program is executed to acquire the identification information of the object 33, and based on that identification information the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with it. Further, based on the image displayed at the position close to the speaker 30 selected in step c5, the information search processing program is executed to read out the related sound information stored in the auxiliary storage unit 15 in association with that image, and the sound that outputs this related sound information is determined as the sound to be output. The volume is determined so that the smaller the distance between the position of the object 33 and each of the speakers 30a to 30d, the larger the volume output from that speaker.
[0060]
Next, a specific method of determining the volume will be described. FIG. 14 is a simplified front view showing the display unit 32 and the speakers 30. The display area 32a of the display unit 32 is divided according to the arrangement and number of the speakers 30a to 30d, and the speaker 30 is selected by detecting in which divided area the object 33 is placed. Since each of the speakers 30a to 30d is located at a corner of the display unit 32, the display area 32a is divided into four equal parts of two rows and two columns according to the positions of the speakers 30a to 30d, forming four divided areas 50a to 50d. The speakers 30a to 30d thus correspond one-to-one to the divided areas 50a to 50d: the upper right first divided area 50a corresponds to the upper right speaker 30a, the lower right second divided area 50b to the lower right speaker 30b, the lower left third divided area 50c to the lower left speaker 30c, and the upper left fourth divided area 50d to the upper left speaker 30d. In the first to third embodiments described above, therefore, when the object 33 is placed in one of the divided areas 50a to 50d, control is performed so as to output sound from the corresponding one of the speakers 30a to 30d. For example, when the object 33 is placed in the first divided area 50a, control is performed so that a sound is output from the upper right speaker 30a.
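As an illustrative aid (not part of the specification), a minimal Python sketch of this 2 × 2 division follows; the display-relative coordinates and the boundary at (xd, yd) = (0.5, 0.5) are assumptions.
```python
def divided_area(x0, y0, xd=0.5, yd=0.5):
    """Map an object position (x0, y0), with the origin at the lower left,
    to its divided area and thus to the corresponding corner speaker."""
    if x0 >= xd and y0 >= yd:
        return "50a"   # upper right -> speaker 30a
    if x0 >= xd:
        return "50b"   # lower right -> speaker 30b
    if y0 >= yd:
        return "50d"   # upper left  -> speaker 30d
    return "50c"       # lower left  -> speaker 30c
```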
[0061]
In the present embodiment, in order to individually control the volume output from each of the speakers 30a to 30d according to the position of the object 33, the distances from the position where the object 33 is placed to the remaining divided areas are calculated. As shown in FIG. 14, when the object 33 is placed in the third divided area 50c, the minimum distances from the position of the object 33 to the first divided area 50a, the second divided area 50b, and the fourth divided area 50d are calculated. For this calculation, the lower left corner of the display area 32a is set as the origin 51, the coordinates of the object 33 are (x0, y0), and the coordinates of the boundary of the divided areas 50a to 50d are (xd, yd). The distance L1 from the object 33 to the first divided area 50a is calculated from the Pythagorean theorem by the following equation (1): L1 = √((xd − x0)² + (yd − y0)²) … (1)
[0062]
Further, the distance L2 from the object 33 to the second divided area 50b is calculated by the following equation (2): L2 = xd − x0 … (2)
[0063]
Further, the distance L4 from the object 33 to the fourth divided area 50d is calculated by the following equation (3): L4 = yd − y0 … (3)
[0064]
Comparing the magnitudes of the distances calculated in this way gives L2 < L4 < L1 in the example shown in FIG. 14. The volume of the speaker 30c corresponding to the third divided area 50c, in which the object 33 is placed, is therefore set to the maximum, and the volume is set progressively smaller for the speaker 30b corresponding to the second divided area 50b, the speaker 30d corresponding to the fourth divided area 50d, and the speaker 30a corresponding to the first divided area 50a. In the present embodiment, when the distance is larger than a predetermined distance, the volume output from the speaker 30 corresponding to that divided area is set to 0. Therefore, the volume output from the lower left speaker 30c is set larger than the volume output from the lower right speaker 30b, and the volumes output from the upper right speaker 30a and the upper left speaker 30d are set to zero. Although FIG. 14 illustrates the example in which the object 33 is placed in the third divided area 50c, the volume is determined by the same operation when the object 33 is placed in any of the remaining divided areas.
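As an illustrative aid (not part of the specification), the sketch below works through equations (1) to (3) and the volume assignment for an object in the third divided area 50c; the cutoff distance and the linear falloff are assumptions, since the text only requires that the volume decrease with distance and drop to 0 beyond a predetermined distance.
```python
import math

def volumes_for_area_50c(x0, y0, xd=0.5, yd=0.5, cutoff=0.25):
    """Volume per speaker when the object 33 lies in divided area 50c."""
    L1 = math.hypot(xd - x0, yd - y0)  # to area 50a, equation (1)
    L2 = xd - x0                       # to area 50b, equation (2)
    L4 = yd - y0                       # to area 50d, equation (3)
    vols = {"30c": 1.0}                # own-area speaker gets the maximum volume
    for spk, d in (("30a", L1), ("30b", L2), ("30d", L4)):
        # Volume decreases with distance; 0 beyond the predetermined distance.
        vols[spk] = 0.0 if d > cutoff else round(1.0 - d / cutoff, 3)
    return vols

# Example matching FIG. 14 (object in the lower left): L2 < L4 < L1, and with
# this cutoff only 30c and 30b sound, while 30a and 30d are set to 0.
print(volumes_for_area_50c(0.3, 0.2))
```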
[0065]
As described above, in the present embodiment, the volume to be output is individually controlled by the CPU 10 for each of the speakers 30a to 30d according to the distance between the position of the object 33 detected by the sensor 6 or the built-in camera 7 and the position of each of the speakers 30a to 30d. The speaker 30 closest to the position of the object 33 on the display unit 32 therefore outputs the largest volume, and the output volume is controlled so that it decreases as the distance increases. The operator can thus select the speaker 30 whose volume is to be increased or the speaker 30 whose sound is to be reduced by changing the position of the object 33. The operator can therefore control the volume of the speakers 30 intuitively, so the operability can be further improved.
[0066]
Further, since the sound control process shown in FIG. 13 is repeated at short intervals, when the object 33 is moved, the sound to be output is controlled based on that movement. Therefore, as shown in FIG. 12, when the object 33 is a locomotive, continuously moving the locomotive along the track 44 causes the sounds output from the speakers 30a to 30d to be controlled accordingly. The sound output from each of the speakers 30a to 30d can thus be controlled as the object 33 moves.
[0067]
Further, in the present embodiment, the volume is determined based on the distance between the position of the object 33 and each of the speakers 30a to 30d; however, not only the volume but also the characteristics of the sound may be determined in this way. For example, when the image of the third embodiment described above is displayed on the display unit 32 and the object 33 is placed in the third divided area 50c as in the present embodiment, control may be performed so that a sound obtained by mixing, based on the distances, the sound related to the image displayed in the third divided area 50c and the sound related to the image displayed in the second divided area 50b is output only from the lower left speaker 30c. In this case, although a single speaker 30 is selected according to the position of the object 33, the characteristics of the output sound are mixed based on the distances between the position of the object 33 and the images displayed on the display unit 32. As a result, the operator can obtain a realistic sound, as if the object 33 were at that position in the scene on the display unit 32.
[0068]
Next, a fifth embodiment of the present invention will be described. The information processing apparatus 1 according to the present embodiment is characterized in that, when the object 33 is placed on the display unit 32 or in the space facing it above, at least one of the sound and the volume output from the speakers 30a to 30h is controlled according to the position of the object 33. The above-described embodiments dealt with the two-dimensional case in which the object 33 is placed directly on the display unit 32; however, the present invention is not limited to direct placement on the display unit 32, and the same processing can be performed in the three-dimensional case in which the object is placed in a predetermined space.
[0069]
FIG. 15 is a simplified front view showing the information processing apparatus 1 of the present embodiment and an example of a screen displayed on the display unit 32, in a state in which a sound is output. FIG. 16 is a simplified front view showing the display unit 32 and the speakers 30a to 30h. The information processing apparatus 1 is configured to further include four speakers 30e to 30h, which are provided respectively above the four speakers 30a to 30d of the display unit 32 described above. As shown in FIG. 16, the eight speakers 30a to 30h are therefore arranged at the vertices of a rectangular parallelepiped space. In FIG. 15, a model of an airplane serving as the object 33 is shown flying in the space above the display unit 32. Further, in FIG. 15, as the images displayed on the display unit 32, images of the control tower 52, the runway 53, and the airport terminal 54 are displayed as an example, in order from the left. FIG. 15 shows the case where the airplane serving as the object 33 is to the right, near the upper part of the space where the image of the runway 53 is displayed. Further, in FIG. 15, to aid understanding, the state in which a sound is output from the upper right upper speaker 30e is visualized by a note 34. In FIG. 15, sound is output only from the upper right upper speaker 30e, and no sound is output from the remaining seven speakers 30.
[0070]
The selection of the speaker 30 that outputs the sound corresponds to the position of the airplane, which is the object 33. In the present embodiment, the CPU 10 performs control so that sound is output only from the speaker 30 closest to the object 33. As shown in FIG. 16, the space above the display area 32a of the display unit 32 is divided according to the arrangement and number of the speakers 30a to 30h, and the speaker 30 is selected by detecting in which divided space the object 33 is located. Since the speakers 30a to 30h are located at the corners of the display unit 32 and above them, the space facing the display area 32a of the display unit 32 is divided into eight equal parts of 2 × 2 × 2 according to the positions of the speakers 30a to 30h, forming eight divided spaces. The speakers 30a to 30h thus correspond one-to-one to the divided spaces, and when the object 33 is located in a divided space, control is performed so as to output sound from the corresponding one of the speakers 30a to 30h. Further, as the sound output from the nearest speaker 30, a sound related to the object 33 and a sound related to the image displayed at or near the position of the object 33 are output. As shown in FIG. 15, since the object 33 is an airplane and is away from the display unit 32, the upper right upper speaker 30e outputs the sound generated when the airplane flies. Further, for example, when the airplane that is the object 33 is flying near the control tower 52 displayed at the lower left, the sound generated when the airplane flies and the sound generated at the control tower 52 are output from the lower left speaker 30c. The same operations and effects as in the above-described embodiments can therefore be achieved.
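As an illustrative aid (not part of the specification), a minimal Python sketch of the 2 × 2 × 2 division follows; the unit-cube coordinates and the assignment of labels 30f to 30h to particular upper corners are assumptions (the text only fixes 30e as the upper right upper speaker).
```python
# Assumed speaker coordinates at the vertices of a unit cube above the
# display area: z = 0 is the display surface, z = 1 the upper layer.
SPEAKERS_3D = {
    "30a": (1, 1, 0), "30b": (1, 0, 0), "30c": (0, 0, 0), "30d": (0, 1, 0),
    "30e": (1, 1, 1), "30f": (1, 0, 1), "30g": (0, 0, 1), "30h": (0, 1, 1),
}

def speaker_for_divided_space(x, y, z):
    """Select the speaker whose octant (divided space) contains the object."""
    vertex = (1 if x >= 0.5 else 0, 1 if y >= 0.5 else 0, 1 if z >= 0.5 else 0)
    return next(s for s, v in SPEAKERS_3D.items() if v == vertex)

# An airplane flying high near the upper right corner selects speaker 30e.
assert speaker_for_divided_space(0.9, 0.8, 0.7) == "30e"
```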
[0071]
In the above-described embodiments, the output of sound is not controlled based on the direction of the object 33; however, the direction of the object 33 may additionally be detected, and the properties of the sound output from each of the speakers 30a to 30d may be determined based on the detected direction. For example, by setting in advance that a specific object 33 outputs sound in a specific direction, the direction of the object 33 can be detected and control can be performed so that the associated sound is output from the speaker 30 closest to the position the object faces in that specific direction. In this case, it is preferable to set the volume of the output sound according to the distance between the position of the object 33 and the speaker 30 that outputs it. For example, when the object 33 is a model of a jet, sound in an actual jet is generated from the engine at the rear; therefore, by outputting the jet's engine sound from the speaker 30 closest to a position behind the model, based on the direction in which the model of the jet is facing, the operator can hear a realistic sound.
[0072]
Moreover, although each of the above-described embodiments dealt with the case of a single object 33, the same processing can be performed for each object 33 even when a plurality of objects 33 are placed.
[0073]
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the electrical configuration of the information processing apparatus 1 according to the first embodiment.
FIG. 2 is a diagram showing the configuration of the operation program 20 stored in the main storage unit 14.
FIG. 3 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32.
FIG. 4 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32, in a state in which a sound is output from the speaker 30.
FIG. 5 is a flowchart showing the sound output processing procedure of the CPU 10.
FIG. 6 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 of the second embodiment.
FIG. 7 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 of the second embodiment, in a state in which a sound is output from the speaker 30.
FIG. 8 is a flowchart showing the sound output processing procedure of the second embodiment performed by the CPU 10.
FIG. 9 is a front view showing the information processing apparatus 1 and an example of a screen displayed on the display unit 32 of the third embodiment.
FIG. 10 is a front view showing the information processing apparatus 1 and an example of the screen shown in FIG. 9 displayed on the display unit 32, in a state in which a sound is output from the speaker 30.
FIG. 11 is a front view showing the information processing apparatus 1 and an example of the screen shown in FIG. 9 displayed on the display unit 32, in another state in which a sound is output from the speaker 30.
FIG. 12 is a front view showing the information processing apparatus 1 of the fourth embodiment and an example of a screen displayed on the display unit 32, in a state in which sound is output from the speaker 30.
FIG. 13 is a flowchart showing the sound output processing procedure of the fourth embodiment performed by the CPU 10.
FIG. 14 is a simplified front view showing the display unit 32 and the speakers 30.
FIG. 15 is a simplified front view showing the information processing apparatus 1 of the fifth embodiment and an example of a screen displayed on the display unit 32, in a state in which a sound is output from the speaker 30.
FIG. 16 is a simplified front view showing the display unit 32 and the speakers 30a to 30h.
Explanation of sign
[0074]
DESCRIPTION OF SYMBOLS 1: information processing apparatus; 5: LCD; 6: sensor; 7: built-in camera; 10: CPU; 11: sensor input unit; 13: display control unit; 14: main storage unit; 15: auxiliary storage unit; 16: communication unit; 30a to 30h: speakers; 31: sound control unit; 32: display unit; 33: object