JP2014007444

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2014007444
Abstract: To transmit as much information as possible about content being played back to the user. In one aspect, an electronic device (for example, a mobile phone 1A) includes a piezoelectric element, a first sound generation unit vibrated by the piezoelectric element, a second sound generation unit, and a detection unit that detects a change in a physical quantity. The first sound generation unit generates an air conduction sound and a vibration sound transmitted by vibrating an object in contact with the first sound generation unit. When the detection unit detects a change in the physical quantity while the second sound generation unit is generating a sound corresponding to the first information of content that includes first information and second information, the first sound generation unit generates the air conduction sound and the vibration sound corresponding to the second information. [Selected figure] Figure 5
Electronic device, control method and control program
[0001]
The present application relates to an electronic device, a control method, and a control program.
[0002]
Patent Document 1 describes an electronic device that transmits an air conduction sound and a vibration sound to a user.
11-05-2019
1
Patent Document 1 describes that when a voltage is applied to a piezoelectric element of a
vibrating body disposed on the outer surface of a housing of an electronic device, the vibrating
body flexes and vibrates due to expansion and contraction of the piezoelectric element.
Patent Document 1 also describes that the air conduction sound and the vibration sound are transmitted to the user when the user brings the pinna into contact with the flexing, vibrating body. According to Patent Document 1, the air conduction sound is a sound transmitted to the user's auditory nerve when the vibration of the air caused by the vibration of an object is transmitted through the ear canal to the tympanic membrane and the tympanic membrane vibrates. Also according to Patent Document 1, the vibration sound is a sound transmitted to the user's auditory nerve through a part of the user's body (for example, the cartilage of the outer ear) in contact with the vibrating object.
[0003]
JP 2005-348193 A
[0004]
In electronic devices, there is generally a need to transmit as much information as possible about the content being reproduced to the user.
[0005]
An electronic device according to one aspect includes a piezoelectric element, a first sound generation unit that is vibrated by the piezoelectric element, a second sound generation unit, and a detection unit that detects a change in a physical quantity. The first sound generation unit generates an air conduction sound and a vibration sound transmitted by vibrating an object in contact with the first sound generation unit. When the detection unit detects a change in the physical quantity while the second sound generation unit is generating a sound corresponding to the first information of content that includes first information and second information, the first sound generation unit generates the air conduction sound and the vibration sound corresponding to the second information.
[0006]
A control method according to one aspect is performed by an electronic device that includes a piezoelectric element, a first sound generation unit that is vibrated by the piezoelectric element, a second sound generation unit, and a detection unit that detects a change in a physical quantity. The control method includes: reproducing content that includes first information and second information; and, when the detection unit detects a change in the physical quantity while the second sound generation unit is generating a sound corresponding to the first information included in the content being reproduced, causing the first sound generation unit to generate an air conduction sound and a vibration sound, transmitted by vibrating an object in contact with the first sound generation unit, corresponding to the second information included in the content being reproduced.
[0007]
A control program according to one aspect causes an electronic device that includes a piezoelectric element, a first sound generation unit that is vibrated by the piezoelectric element, a second sound generation unit, and a detection unit that detects a change in a physical quantity to execute: reproducing content that includes first information and second information; and, when the detection unit detects a change in the physical quantity while the second sound generation unit is generating a sound corresponding to the first information included in the content being reproduced, causing the first sound generation unit to generate an air conduction sound and a vibration sound, transmitted by vibrating an object in contact with the first sound generation unit, corresponding to the second information included in the content being reproduced.
[0008]
FIG. 1 is a front view of a mobile phone according to the embodiment.
FIG. 2 is a cross-sectional view of the mobile phone according to the embodiment.
FIG. 3 is a view showing an example of the shape of a panel.
FIG. 4 is a diagram showing an example of the vibration of the panel.
FIG. 5 is a block diagram of a mobile phone according to the embodiment.
FIG. 6 is a diagram illustrating an example of a processing procedure by the mobile phone according to the first embodiment.
FIG. 7 is a diagram illustrating an example of a processing procedure by the mobile phone according to the second embodiment.
FIG. 8 is a diagram showing an example of a processing procedure by the mobile phone according to the third embodiment.
FIG. 9 is a front view of the mobile phone according to the fourth embodiment.
FIG. 10 is a cross-sectional view of the mobile phone shown in FIG. 9 taken along line b-b.
FIG. 11 is a front view of the mobile phone according to the fifth embodiment.
FIG. 12 is a cross-sectional view of the mobile phone shown in FIG. 11 taken along line c-c.
FIG. 13 is a diagram showing an example of the change in frequency characteristics due to the reinforcing member.
[0009]
Embodiments for carrying out the present invention will be described in detail with reference to
the drawings. Hereinafter, a mobile phone will be described as an example of an electronic device that transmits an air conduction sound and a vibration sound to a user.
[0010]
First Embodiment
The overall configuration of a mobile phone 1A according to an embodiment will be described with reference to FIGS. 1 and 2. FIG. 1 is a front view of the mobile phone 1A.
FIG. 2 is a cross-sectional view schematically showing the a-a cross section of the mobile phone
1A. As shown in FIGS. 1 and 2, the mobile phone 1A includes a display 2, a button 3, an illuminance sensor 4, a proximity sensor 5, a piezoelectric element 7, a microphone 8, a speaker 11, a camera 12, a panel 20, and a housing 40.
[0011]
The display 2 includes a display device such as a liquid crystal display (LCD: Liquid Crystal
Display), an organic EL display (OELD: Organic Electro-Luminescence Display), or an inorganic EL
display (IELD: Inorganic Electro-Luminescence Display). The display 2 displays characters,
images, symbols, figures, and the like.
[0012]
The button 3 receives an operation input from the user. The number of buttons 3 is not limited to
the example shown in FIGS. 1 and 2.
[0013]
The illuminance sensor 4 detects the illuminance of ambient light of the mobile phone 1A. The
illuminance indicates the light intensity, brightness or luminance. The illumination sensor 4 is
used, for example, to adjust the brightness of the display 2. The proximity sensor 5 detects the
presence of a nearby object without contact. The proximity sensor 5 detects the presence of an
object based on, for example, a change in a magnetic field or a change in the return time of reflected ultrasonic waves. The proximity sensor 5 detects, for example, that the
display 2 is brought close to the face. The illumination sensor 4 and the proximity sensor 5 may
be configured as one sensor. The illumination sensor 4 may be used as a proximity sensor.
[0014]
When an electric signal (voltage according to a sound signal) is applied, the piezoelectric element
7 expands and contracts or bends according to the electromechanical coupling coefficient of the
constituent material. That is, the piezoelectric element 7 deforms when an electric signal is
applied. The piezoelectric element 7 is attached to the panel 20 and used as a vibration source
for vibrating the panel 20. The piezoelectric element 7 is formed, for example, using ceramic or
quartz. The piezoelectric element 7 may be a unimorph, a bimorph, or a laminated piezoelectric
element. The laminated piezoelectric element includes a laminated bimorph element in which
bimorphs are laminated (for example, 16 layers or 24 layers are laminated). The laminated
piezoelectric element is formed of, for example, a laminated structure of a plurality of dielectric
layers made of PZT (lead zirconate titanate) and an electrode layer disposed between the
plurality of dielectric layers. Unimorphs expand and contract when an electrical signal (voltage)
is applied. The bimorph bends when an electrical signal (voltage) is applied.
[0015]
The microphone 8 is a sound input unit. The microphone 8 converts the input sound into an
electrical signal. The speaker 11 is a sound output unit that outputs sound in an air conduction
system. The speaker 11 is, for example, a dynamic speaker, and can transmit the sound
converted from the electric signal to a person whose ear is not in contact with the mobile phone
1A. The speaker 11 is used, for example, to output music as music content. The speaker 11 is an
example of a second sound generation unit.
[0016]
The camera 12 is an in-camera that captures an object facing the display 2. The camera 12
converts the captured image into an electrical signal. In addition to the camera 12, the mobile phone 1A may be equipped with an out-camera that captures an object facing the surface opposite to the display 2.
[0017]
The panel 20 vibrates as the piezoelectric element 7 deforms (expands, contracts, or bends), and transmits the vibration to the ear cartilage (auricular cartilage) or the like that the user brings into contact with the panel 20. The panel 20 also has a function of protecting the display 2 and the piezoelectric element 7
from external force. The panel 20 is formed of, for example, glass or a synthetic resin such as
acrylic. The shape of the panel 20 is, for example, a plate. The panel 20 may be a flat plate. The
panel 20 may be a curved panel whose surface is curved smoothly.
[0018]
The display 2 and the piezoelectric element 7 are attached to the back surface of the panel 20 by
the bonding member 30. The piezoelectric element 7 is spaced apart from the inner surface of
the housing 40 by a predetermined distance in a state of being disposed on the back surface of
the panel 20. The piezoelectric element 7 may be spaced apart from the inner surface of the
housing 40 even in a stretched or bent state. That is, the distance between the piezoelectric
element 7 and the inner surface of the housing 40 may be larger than the maximum amount of
deformation of the piezoelectric element 7. The piezoelectric element 7 may be attached to the
panel 20 via a reinforcing member (for example, a sheet metal or a glass fiber reinforced resin).
The bonding member 30 is, for example, a double-sided tape, or an adhesive having
thermosetting or ultraviolet curing properties. The bonding member 30 may be an optical elastic
resin which is a colorless and transparent acrylic ultraviolet curable adhesive.
[0019]
The display 2 is disposed substantially at the center of the panel 20 in the lateral direction. The piezoelectric element 7 is disposed at a predetermined distance from a longitudinal end of the panel 20 so that the longitudinal direction of the piezoelectric element 7 is parallel to the short direction of the panel 20. The display 2 and the piezoelectric element 7 are arranged side by side on the inner surface of the panel 20.
[0020]
A touch screen (touch sensor) 21 is disposed on substantially the entire surface of the outer
surface of the panel 20. The touch screen 21 detects a touch on the panel 20. The touch screen
21 is used to detect a touch operation performed by the user with a finger, a pen, a stylus, or the like.
The gestures detected using the touch screen 21 include, but are not limited to, touch, long
touch, release, swipe, tap, double tap, long tap, drag, flick, pinch in and pinch out, for example.
The detection method of the touch screen 21 may be any method such as a capacitance method,
a resistive film method, a surface acoustic wave method (or ultrasonic method), an infrared
method, an electromagnetic induction method, and a load detection method.
[0021]
The touch screen 21 is also used to detect the auricle or another part of the human body touching the panel 20 to hear a sound.
[0022]
The housing 40 is formed using a resin or a metal.
The housing 40 supports the button 3, the illuminance sensor 4, the proximity sensor 5, the
microphone 8, the speaker 11, the camera 12, the panel 20 and the like.
[0023]
The sound output by the mobile phone 1A according to the first embodiment will be described in
more detail with reference to FIGS. 1 to 4. FIG. 3 is a view showing an example of the shape of the panel 20. FIG. 4 is a view showing an example of the vibration of the panel 20.
[0024]
An electrical signal corresponding to the sound to be output is applied to the piezoelectric element 7. For example, ±15 V, which is higher than the ±5 V applied to a so-called panel speaker that transmits sound as air conduction sound through the ear canal, may be applied to the piezoelectric element 7. Thereby, even if the user presses a part of his or her body against the panel 20 with a force of 3 N or more (for example, a force of 5 N to 10 N), sufficient vibration is generated in the panel 20, and a vibration sound transmitted through that part of the user's body can be generated. The voltage applied to the piezoelectric element 7 can be adjusted as appropriate according to the strength with which the panel 20 is fixed to the housing 40, the performance of the piezoelectric element 7, and the like.
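As a rough numerical illustration of the drive levels above, the following sketch maps a normalized sound sample to a drive voltage. It is hypothetical: the patent names the ±15 V and ±5 V figures but prescribes no implementation, and the function and constant names are invented here.

```python
PANEL_SPEAKER_V = 5.0  # typical amplitude for an ordinary air-conduction panel speaker
PIEZO_DRIVE_V = 15.0   # amplitude used so the vibration survives a press of 3 N or more

def to_drive_voltage(sample: float, amplitude: float = PIEZO_DRIVE_V) -> float:
    """Map a normalized audio sample in [-1.0, 1.0] to a drive voltage,
    clipping out-of-range input before scaling."""
    sample = max(-1.0, min(1.0, sample))
    return sample * amplitude

# The piezoelectric drive swings three times wider than the ordinary panel-speaker drive.
assert to_drive_voltage(1.0) == 15.0
assert to_drive_voltage(-0.5, PANEL_SPEAKER_V) == -2.5
```

The 3x-wider swing is what lets the panel keep vibrating perceptibly under the stated 5 N to 10 N contact force.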
[0025]
When an electrical signal is applied, the piezoelectric element 7 stretches or bends in the
longitudinal direction. The panel 20 to which the piezoelectric element 7 is attached deforms in
accordance with the expansion or contraction or bending of the piezoelectric element 7. Thereby,
the panel 20 vibrates to generate an air conduction sound. Furthermore, when the user brings a part of the body (for example, the ear (auricular cartilage)) into contact with the panel 20, the panel 20 generates a vibration sound that is conducted to the user through that part of the body. That is, as the piezoelectric element 7 deforms, the panel 20 vibrates at a frequency that is perceived as a vibration sound by an object in contact with the panel 20.
[0026]
For example, when an electrical signal corresponding to sound data, such as the voice of the other party on a call, a ringtone, or music content such as music, is applied to the piezoelectric element 7, the panel 20 generates an air conduction sound and a vibration sound corresponding to the electrical signal. The sound signal output through the piezoelectric element 7 and the panel 20 may be based on sound data stored in the storage 9 described later. The sound signal output via the piezoelectric element 7 and the panel 20 may also be based on sound data stored in an external server or the like and acquired via a network by the communication unit 6 described later.
[0027]
In this embodiment, the panel 20 may be approximately the same size as the user's ear. As shown in FIG. 3, the panel 20 may also be larger than the user's ear. In this case, the user can bring substantially the entire outer periphery of the ear into contact with the panel 20 when listening to the sound. Listening to the sound in this manner makes ambient sound (noise) less likely to enter the ear canal. In the present embodiment, at least a region of the panel 20 vibrates that is wider than a region having a length in the longitudinal direction (or short direction) corresponding to the distance from the inferior crus of the antihelix to the antitragus of a human ear, and a length in the short direction (or longitudinal direction) corresponding to the distance from the tragus to the antihelix. A region of the panel 20 may vibrate that has a length in the longitudinal direction (or short direction) corresponding to the distance from the part of the helix near the superior crus of the antihelix to the earlobe, and a length in the short direction (or longitudinal direction) corresponding to the distance from the tragus to the part of the helix near the antihelix. The region having these lengths may be a rectangular region, or an elliptical region whose major axis is the length in the longitudinal direction and whose minor axis is the length in the short direction. The average size of the human ear can be found, for example, by referring to the Japanese Body Size Database (1992-1994) prepared by the Research Institute of Human Engineering for Quality Life (HQL).
[0028]
As shown in FIG. 4, the panel 20 vibrates not only in the attachment area 20a to which the piezoelectric element 7 is attached, but also in areas away from the attachment area 20a. The panel 20 has a plurality of points in its vibrating area that vibrate in a direction intersecting the main surface of the panel 20, and at each of the plurality of points the value of the vibration amplitude changes over time from positive to negative or vice versa. At each moment, the panel 20 vibrates such that parts with relatively large vibration amplitude and parts with relatively small vibration amplitude appear to be distributed randomly or regularly over substantially the entire panel 20. That is, vibrations of a plurality of waves are detected over the entire area of the panel 20. If the voltage applied to the piezoelectric element 7 is ±15 V as described above, the vibration of the panel 20 is unlikely to be damped even if the user presses the panel 20 against the body with a force of, for example, 5 N to 10 N. Therefore, the user can hear the vibration sound even when the ear is in contact with an area of the panel 20 away from the attachment area 20a. The panel 20 is an example of a first sound generation unit.
[0029]
In the present embodiment, the display 2 is attached to the panel 20. Therefore, the rigidity of
the lower portion of the panel 20 (the side on which the display 2 is attached) is increased, and
the vibration is smaller than the upper portion of the panel 20 (the side on which the
piezoelectric element 7 is attached). For this reason, in the lower part of the panel 20, the sound
leakage of the air conduction sound due to the vibration of the panel 20 is reduced.
[0030]
The mobile phone 1A can transmit an air conduction sound and a vibration sound to the user through a part of the user's body (for example, the ear (auricular cartilage)) by the vibration of the panel 20. Therefore, when the mobile phone 1A outputs sound at a volume equal to that of a dynamic receiver, the sound transmitted to the surroundings of the mobile phone 1A by air vibration can be smaller than that of an electronic device having only a dynamic speaker. Such a feature is suitable, for example, when listening to a recorded message in a place where others are nearby, such as on a train.
[0031]
Furthermore, the mobile phone 1A transmits a vibration sound to the user by the vibration of the panel 20. Therefore, even when wearing earphones or headphones, the user can hear the vibration sound caused by the vibration of the panel 20, transmitted through the earphones or headphones and a part of the body, by bringing the mobile phone 1A into contact with them.
[0032]
Furthermore, the mobile phone 1A transmits sound by the vibration of the panel 20. Therefore, when the mobile phone 1A does not separately include a dynamic receiver (for example, the speaker 11), there is no need to form an opening (sound outlet) in the housing 40 for transmitting the sound emitted by the panel 20 to the outside. For this reason, the structure can be simplified when realizing a waterproof structure. When an opening such as the sound emission port of a dynamic speaker (for example, the speaker 11) must be formed in the housing 40, the mobile phone 1A may employ a structure in which the opening is closed by a member that allows the passage of gas but not liquid, in order to realize a waterproof structure. The member that allows the passage of gas but not liquid is, for example, Gore-Tex (registered trademark).
[0033]
The functional configuration of the mobile phone 1A will be described with reference to FIG. 5. FIG. 5 is a block diagram of the mobile phone 1A. As shown in FIG. 5, the mobile phone 1A includes a display 2, a button 3, an illuminance sensor 4, a proximity sensor 5, a communication unit 6, a piezoelectric element 7, a microphone 8, a storage 9, a controller 10, a speaker 11, a camera 12, an earphone jack 13, a posture detection unit 15, a vibrator 18, and a touch screen 21.
[0034]
The communication unit 6 communicates wirelessly. The communication scheme supported by the communication unit 6 is a wireless communication standard. Wireless communication standards include, for example, cellular-phone communication standards such as 2G, 3G, and 4G. Cellular-phone communication standards include, for example, LTE (Long Term Evolution), W-CDMA (Wideband Code Division Multiple Access), CDMA2000, PDC (Personal Digital Cellular), GSM (Global System for Mobile Communications), and PHS (Personal Handy-phone System). Wireless communication standards also include, for example, WiMAX (Worldwide Interoperability for Microwave Access), IEEE 802.11, Bluetooth (registered trademark), IrDA (Infrared Data Association), and NFC (Near Field Communication). The communication unit 6 may support one or more of the communication standards described above.
[0035]
The storage 9 stores programs and data. The storage 9 is also used as a work area for
temporarily storing the processing result of the controller 10. The storage 9 may include any
non-transitory storage medium such as a semiconductor storage medium and a magnetic storage
medium. The storage 9 may include multiple types of storage media. The storage 9 may include a
combination of a portable storage medium, such as a memory card, an optical disc, or a magneto-optical disc, and a reader of the storage medium. The storage 9 may include a storage device used
as a temporary storage area such as a random access memory (RAM).
[0036]
The programs stored in the storage 9 include an application executed in the foreground or
background and a control program for supporting the operation of the application. The
application causes the display 2 to display a screen, for example, and causes the controller 10
to execute processing in accordance with the gesture detected by the touch screen 21. The
control program is, for example, an OS. The application and control program may be installed in
the storage 9 via wireless communication by the communication unit 6 or a non-transitory
storage medium.
[0037]
The storage 9 stores, for example, a control program 9A, a call application 9B, a music
reproduction application 9C, a moving image reproduction application 9D, content data 9Y, and
setting data 9Z. The call application 9B provides a call function for a call by wireless
communication. The music reproduction application 9C provides a music reproduction function
for reproducing a sound such as music content (music) included in the content data 9Y. The
video playback application 9D provides a video playback function for playing back video and
sound from video data. The content data 9Y is a sound file of music content, such as sounds and music. The music content includes, for example, first information, which is the music data itself, and second information related to the music data, such as a song name, a singer name, and an album name corresponding to the first information. The setting data 9Z includes information on various settings and processes related to the operation of the mobile phone 1A.
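The division of a sound file in the content data 9Y into first information (the music data itself) and second information (related metadata) could be modeled as follows. This is a sketch under the assumption of a simple in-memory representation; the class and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class MusicContent:
    """One sound file in content data 9Y: first information is the audio
    itself; second information is the related metadata."""
    audio: bytes      # first information: the music data itself
    song_name: str    # second information
    singer_name: str  # second information
    album_name: str   # second information

    def second_information_text(self) -> str:
        # Text that could later be synthesized into read-out speech.
        return f"{self.song_name} by {self.singer_name}, from {self.album_name}"

track = MusicContent(b"...", "Song A", "Singer B", "Album C")
assert track.second_information_text() == "Song A by Singer B, from Album C"
```

Keeping the metadata alongside the audio makes it easy to turn the second information into a read-out sound on demand, as described later for the control program 9A.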
[0038]
The control program 9A provides functions related to various controls for operating the mobile
phone 1A. The control program 9A realizes a call by, for example, controlling the communication
unit 6, the piezoelectric element 7, the microphone 8, and the like. The function provided by the
control program 9A may be used in combination with the function provided by another program
such as the call application 9B.
[0039]
11-05-2019
12
The control program 9A includes, for example, a function of controlling the first sound generation unit so that, when the detection unit detects contact of a part of the human body (for example, the ear (auricular cartilage)) with the first sound generation unit while the second sound generation unit is generating a sound corresponding to the first information of music content that includes first information and second information, the first sound generation unit generates an air conduction sound and a vibration sound corresponding to the second information. The first sound generation unit is, for example, the panel 20. The second sound generation unit is, for example, the speaker 11 or an earphone connected from the outside to the earphone jack 13 described later. The detection unit is, for example, the touch screen 21. In the first embodiment, the method in which the panel 20 is deformed by the piezoelectric element 7 to generate an air conduction sound and a vibration sound transmitted to the human body via a part of the human body (for example, the ear (auricular cartilage)) in contact with the deforming panel 20 is called a human body conduction system. Furthermore, the control program 9A includes a function of synthesizing read-out speech corresponding to the second information and generating an air conduction sound and a vibration sound corresponding to the synthesized read-out speech by the human body conduction system.
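The control function just described, which routes the second information to the panel when ear contact is detected during playback of the first information, can be sketched as a small event-driven controller. Everything here is an assumption for illustration: the class, the method names, and the stubbed speech synthesis are not taken from the patent.

```python
def synthesize_speech(text: str) -> bytes:
    """Stub standing in for read-out speech synthesis (a real device
    would invoke a TTS engine here)."""
    return text.encode("utf-8")

class SoundController:
    """Routes first information to the second sound generation unit and,
    on detected ear contact, second information to the first sound
    generation unit (the human body conduction system)."""

    def __init__(self, panel, speaker):
        self.panel = panel      # first sound generation unit (panel 20)
        self.speaker = speaker  # second sound generation unit (speaker 11 / earphone)
        self.playing_first_info = False

    def play(self, content) -> None:
        # Reproduce the first information (the music itself) normally.
        self.speaker.output(content.audio)
        self.playing_first_info = True

    def on_ear_contact(self, content) -> None:
        # Called when the detection unit (touch screen 21) reports that a
        # part of the body has touched the panel 20.
        if self.playing_first_info:
            speech = synthesize_speech(content.metadata_text)
            # Emitted as air conduction sound plus vibration sound via the panel.
            self.panel.vibrate(speech)
```

Any objects with `output` and `vibrate` methods can stand in for the speaker and panel, which keeps the routing decision testable apart from the hardware.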
[0040]
The controller 10 is an arithmetic processing unit. The arithmetic processing apparatus includes,
for example, a central processing unit (CPU), a system-on-a-chip (SoC), a micro control unit
(MCU), and a field-programmable gate array (FPGA), but is not limited thereto. The controller 10
comprehensively controls the operation of the mobile phone 1A to realize various functions.
[0041]
Specifically, the controller 10 executes an instruction included in the program stored in the
storage 9 while referring to the data stored in the storage 9 as necessary. Then, the controller 10
controls the function unit according to the data and the instruction, thereby realizing various
functions. The functional unit includes, for example, the display 2, the communication unit 6, the
piezoelectric element 7, the microphone 8, the speaker 11, and the vibrator 18, but is not limited
thereto. The controller 10 may change the control according to the detection result of the
detection unit. The detection unit includes, for example, the button 3, the illuminance sensor 4,
the proximity sensor 5, the camera 12, the posture detection unit 15, and the touch screen 21,
but is not limited thereto.
[0042]
For example, by executing the control program 9A, the controller 10 performs the following control: if the detection unit detects contact of a part of the human body (for example, the ear (auricular cartilage)) with the first sound generation unit while a sound corresponding to the first information of music content that includes first information and second information is being generated, the controller 10 causes the first sound generation unit to generate an air conduction sound and a vibration sound corresponding to the second information. The first sound generation unit is, for example, the panel 20. The second sound generation unit is, for example, the speaker 11 or an earphone connected from the outside to the earphone jack 13 described later. The detection unit is, for example, the touch screen 21. The air conduction sound and the vibration sound corresponding to the second information generated by the first sound generation unit are, for example, a read-out sound.
[0043]
The earphone jack 13 is a connection unit to which an audio output device such as an earphone is connected. An earphone or the like connected to the earphone jack 13 from the outside is an example of a second sound generation unit. In the example shown in FIG. 5, the mobile phone 1A has the earphone jack 13; however, instead of the earphone jack 13, the mobile phone 1A may have a connector or the like, such as USB (Universal Serial Bus), to which an audio output device such as an earphone can be connected from the outside. The earphone may also be connected to the mobile phone 1A wirelessly, for example by Bluetooth (registered trademark), instead of the wired connection made by inserting it into the earphone jack 13. The second sound generation unit is not limited to a speaker or an earphone, and may be a device with additional functions, such as a headset.
[0044]
The posture detection unit 15 detects the posture of the mobile phone 1A. The posture detection unit 15 includes at least one of an acceleration sensor, an orientation sensor, and a gyroscope to detect the posture. The vibrator 18 vibrates part or all of the mobile phone 1A. The vibrator 18
has, for example, a piezoelectric element or an eccentric motor to generate vibration. The
vibration by the vibrator 18 is not used to transmit a sound but to notify the user of various
events such as an incoming call.
[0045]
Some or all of the programs and data stored in the storage 9 in FIG. 5 may be downloaded from
another device by wireless communication by the communication unit 6. Some or all of the
programs and data stored in the storage 9 in FIG. 5 may be stored in a non-transitory storage
medium readable by the reader included in the storage 9. Non-transitory storage media include, for example, optical discs such as CD (registered trademark), DVD (registered trademark), and Blu-ray (registered trademark), magneto-optical discs, magnetic storage media, memory cards, and solid-state storage media, but are not limited to these.
[0046]
The configuration of the mobile phone 1A shown in FIG. 5 is an example, and may be changed as
appropriate without departing from the scope of the present invention. For example, the mobile
phone 1A may have a button such as a ten key array or a QWERTY array as a button for
operation.
[0047]
The mobile phone 1A according to the first embodiment can output sound through the speaker 11. The mobile phone 1A can also deform the panel 20 with the piezoelectric element 7 to generate an air conduction sound and a vibration sound transmitted through an object (for example, a part of the human body) in contact with the deforming panel 20. FIG. 6 is a diagram showing an example of a processing procedure by the mobile phone 1A according to the first embodiment. The processing procedure shown in FIG. 6 is realized by the controller 10 executing the control program 9A and the like stored in the storage 9. In the following, an example of the processing procedure executed along with the reproduction of music content (music) on the mobile phone 1A will be described with reference to FIG. 6.
[0048]
As shown in FIG. 6, when the reproduction of music as music content is started at step S101, the controller 10 determines at step S102 whether a connection to the earphone jack 13 is detected.
[0049]
If the controller 10 determines that a connection to the earphone jack 13 is detected (Yes at step S102), then at step S103 it outputs the sound of the music (the sound corresponding to the first information) through the earphone jack 13 to the connected earphone.
[0050]
Subsequently, at Step S104, the controller 10 determines whether ear touch on the panel 20 is
detected.
For example, the controller 10 determines whether there is an ear touch on the panel 20 based
on the detection result of the touch screen 21.
[0051]
Specifically, the controller 10 may detect ear contact with the panel 20 by pattern matching
between an image obtained based on the detection result of the touch screen 21 and a sample
prepared in advance.
The image obtained based on the detection result of the touch screen 21 is obtained by dividing the detection area of the touch screen 21 into a grid and converting the contact detection state of an object in each divided area into the state of the corresponding pixel. When the value detected by the touch screen 21 in each area fluctuates according to, for example, the distance between the touch screen 21 and the object, or the pressure with which the object presses the touch screen 21, the image obtained based on the detection result of the touch screen 21 may be a multi-tone image. The sample is an image that should be obtained, in the same manner as the image obtained based on the detection result of the touch screen 21, in the area where the ear into which the earphone is inserted is in contact with the panel 20. The sample may be an image that should be obtained when the ear of the user of the mobile phone 1A touches the panel, or an image that should be obtained when an average person's ear touches it. A plurality of samples, such as an image of the right ear and an image of the left ear, may be prepared.
[0052]
The method by which the controller 10 detects the touch of the ear on the panel 20 based on the detection result of the touch screen 21 is not limited to the above, and other methods may be adopted. For example, the controller 10 may collect in advance the contact area observed when the user's ear touches the panel 20, and detect ear contact simply by comparing that pre-collected area with the area of the image obtained based on the detection result of the touch screen 21.
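The two detection approaches described above, pattern matching against a prepared sample image and the simpler comparison of contact areas, can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the binary grid representation, function names, and thresholds are assumptions.

```python
# Sketch of the two ear-detection strategies described above.
# Grid values, sample images, and thresholds are illustrative.

def match_score(image, sample):
    """Fraction of grid cells whose contact state matches the sample."""
    cells = [(i, j) for i in range(len(sample)) for j in range(len(sample[0]))]
    hits = sum(1 for i, j in cells if (image[i][j] > 0) == (sample[i][j] > 0))
    return hits / len(cells)

def detect_ear_by_pattern(image, samples, threshold=0.8):
    """Pattern matching: compare the touch image against prepared samples
    (e.g. a right-ear image and a left-ear image)."""
    return any(match_score(image, s) >= threshold for s in samples)

def detect_ear_by_area(image, min_area, max_area):
    """Simpler method: compare the total contact area against a
    pre-collected range typical of an ear pressed to the panel."""
    area = sum(1 for row in image for v in row if v > 0)
    return min_area <= area <= max_area
```

With a multi-tone image, `match_score` could be replaced by a correlation measure, but the binary comparison is enough to show the structure of the check.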
[0053]
When the touch of the ear on the panel 20 is detected (Yes at step S104), the controller 10 causes the panel 20 to generate and output, at step S105, a read-out voice that reads out information related to the music being played. Specifically, when the touch of the ear on the panel 20 is detected, the controller 10 acquires from the storage 9 the second information (song name, singer name, album name, and the like) regarding the music being played. The controller 10 then synthesizes the read-out voice corresponding to the second information and causes the panel 20 to generate the air conduction sound and the vibration sound corresponding to the synthesized read-out voice, outputting them to a part of the human body in contact with the panel 20, for example, an ear.
[0054]
Subsequently, at Step S106, the controller 10 determines whether or not the reproduction of the
music content is ended. For example, the controller 10 determines whether an operation input
for ending the reproduction has been received from the user, or whether the reproduction time
of the music content has been completed.
[0055]
When the controller 10 ends the reproduction of the music content as a result of the
determination (Yes in step S106), the processing procedure illustrated in FIG. 6 ends. On the
other hand, when the controller 10 does not finish the reproduction of the music content as a
result of the determination (No in step S106), the process returns to step S102.
[0056]
If the controller 10 determines at step S104 that ear contact with the panel 20 is not detected (No at step S104), it proceeds to step S106 and determines whether to end the reproduction of the music content.
[0057]
If it is determined at step S102 that no connection to the earphone jack 13 is detected (No at step S102), the controller 10 outputs the sound of the music from a path other than the earphone jack 13 at step S107.
For example, the controller 10 may output the sound of the music being played from the speaker 11, or may deform the panel 20 to generate and output a sound corresponding to the music being played. Subsequently, the controller 10 proceeds to step S106 and determines whether to end the reproduction of the music content.
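Putting steps S101 through S107 together, the control loop of FIG. 6 can be summarized as a short sketch. The `device` interface below is hypothetical; each method merely stands in for the corresponding hardware check or output named in the text.

```python
# Sketch of the FIG. 6 procedure. The Device interface is hypothetical.

def playback_loop(device):
    device.start_playback()                      # S101
    while not device.playback_finished():        # S106
        if device.earphone_connected():          # S102
            device.output_to_earphone()          # S103
            if device.ear_on_panel():            # S104
                # S105: read out the second information via the panel
                device.panel_read_aloud(device.track_info())
        else:
            device.output_to_speaker_or_panel()  # S107
```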
[0058]
In the processing procedure illustrated in FIG. 6, the example in which the controller 10 detects the touch of the ear on the panel 20 based on the detection result of the touch screen 21 at step S104 has been described, but the procedure is not limited to this. For example, the mobile phone 1A may be equipped with a magnetic sensor that detects the magnetism emitted by the magnet used in an earphone. When the controller 10 detects the magnetism emitted from the earphone with the magnetic sensor, it may proceed to step S105 and cause the panel 20 to generate and output the sound corresponding to the information on the music being played.
[0059]
As described above, in the first embodiment, when the mobile phone 1A is connected to an earphone and the touch of the ear on the panel 20 is detected while music content (music) is playing, a sound corresponding to information related to the music content (music) is generated by the panel 20 and output to the outside. That is, even though the earphone, which is the output path for the sound of the mobile phone 1A, is occupied by the output of the music being played, the user can listen to the title, singer name, and so on of the music being played simply by putting an ear on the panel 20. Thus, according to the first embodiment, information on the content being played can be transmitted to the user as much as possible.
[0060]
In the first embodiment, as shown in FIG. 6, the processing procedure executed along with the reproduction of music content (music) has been described. However, the processing procedure according to the first embodiment can also be applied to streaming reproduction of moving image content, radio programs, television programs, and the like.
[0061]
The mobile phone 1A stores information related to the moving image content, such as the name of the moving image content, the names of performers, a synopsis, and the playback time. The information related to the moving image content may be stored in advance or acquired in real time. During streaming playback of moving image content, when the connection of the earphone is detected and ear contact with the panel 20 is detected, the mobile phone 1A causes the panel 20 to generate a sound corresponding to the information related to the moving image content and outputs it to the outside.
[0062]
Similarly, the mobile phone 1A stores information on a radio program, such as the program name, the personality's name, and the sponsor's name. The information on the radio program may be stored in advance or acquired in real time. During streaming of a radio program, when the connection of the earphone is detected and the touch of the ear on the panel 20 is detected, the mobile phone 1A causes the panel 20 to generate a sound corresponding to the information related to the radio program and outputs it to the outside.
[0063]
Similarly, the mobile phone 1A stores information on a television program, such as the program name, the names of performers, and the sponsor's name. The information on television programs may be stored in advance or acquired in real time. During streaming of a television program, when the connection of the earphone is detected and the touch of the ear on the panel 20 is detected, the mobile phone 1A causes the panel 20 to generate a sound corresponding to the information related to the television program and outputs it to the outside.
[0064]
For example, when the radio signal is lost while the earphone is connected and a radio program or the like is being viewed, a message indicating that the signal is not being received may be generated by the panel 20 and output to the outside, triggered by the detection of ear contact with the panel 20.
[0065]
For example, when playing back a television program, the mobile phone 1A may output the main audio to the earphone connected to the earphone jack 13, and may generate the secondary audio with the panel 20, triggered by the detection of ear contact with the panel 20.
Furthermore, when the mobile phone 1A is playing a bilingual television program or a subtitled movie broadcast, it may output the foreign-language audio from the earphone connected to the earphone jack 13 and generate the Japanese audio with the panel 20 upon detection of ear contact with the panel 20. Conversely, the mobile phone 1A may output the Japanese audio from the earphone connected to the earphone jack 13 and generate the foreign-language audio with the panel 20, triggered by the detection of ear contact with the panel 20.
[0066]
Second Embodiment In the first embodiment, the example was described in which, when the earphone is connected and the touch of the ear on the panel 20 is detected during the reproduction of music content (music), a sound corresponding to the information related to the music content (music) is generated by the panel 20. In the second embodiment, another processing procedure executed along with the reproduction of music content (music) will be described.
[0067]
The control program 9A includes, for example, a function of selecting different information from the information included in the second information according to whether or not playback is at a break between the first content and the second content, and causing the first sound generation unit to generate a sound corresponding to the selected information. The break between the first content and the second content corresponds, for example, to the switching portion (silent portion) from music A to music B when playing music content that includes music A as the first content and music B as the second content. The break between the first content and the second content can also be referred to as the portion between music A and music B. The first sound generation unit is, for example, the panel 20. The sound generated by the first sound generation unit is, for example, a read-out voice.
[0068]
By executing the control program 9A, the controller 10 selects different information from the information included in the second information according to, for example, whether or not playback is at a break between the first content and the second content, and causes the first sound generation unit to generate a sound corresponding to the selected information.
[0069]
FIG. 7 is a view showing an example of a processing procedure by the mobile phone 1A
according to the second embodiment.
The processing procedure shown in FIG. 7 is realized by the controller 10 executing the control
program 9A and the like stored in the storage 9. Hereinafter, another example of the processing procedure executed along with the reproduction of music content (music) in the mobile phone 1A will be described using FIG. 7. Of the processing procedure shown in FIG. 7, the processing of step S205 differs from the processing procedure shown in FIG. 6.
[0070]
As shown in FIG. 7, when the reproduction of music as music content is started at step S201, the controller 10 determines at step S202 whether a connection to the earphone jack 13 is detected.
[0071]
If the controller 10 determines that a connection to the earphone jack 13 is detected (Yes at step S202), then at step S203 it outputs the sound of the music through the earphone jack 13 to the connected earphone.
[0072]
Subsequently, at Step S204, the controller 10 determines whether ear touch on the panel 20 is
detected.
For example, the controller 10 determines whether there is an ear touch on the panel 20 based
on the detection result of the touch screen 21.
The method of detecting ear contact with the panel 20 may be similar to that of step S104 shown in FIG. 6.
[0073]
When the touch of the ear on the panel 20 is detected (Yes at step S204), the controller 10 determines at step S205 whether the music content being played is at a break between songs. That is, while playing music content that includes a plurality of songs, the controller 10 determines whether playback is at the switching portion from the first song to the second song.
[0074]
For example, the controller 10 analyzes the audio file of the music to detect silent portions, and associates each detected silent portion with an elapsed time from the start of playback. Subsequently, when the elapsed time from the start of playback of the currently playing music corresponds to a silent portion, the controller 10 determines that playback is at a break between songs.
[0075]
When the result of the determination is that playback is not at a break between songs (No at step S205), the controller 10 causes the panel 20 at step S206 to generate a read-out voice that reads the song name and singer name of the song being played. Specifically, when playback is not at a break, the controller 10 acquires the song name and singer name of the song being played from the storage 9. The controller 10 then synthesizes the read-out voice corresponding to the song name and singer name of the music being played and causes the panel 20 to generate the air conduction sound and the vibration sound corresponding to the synthesized read-out voice, outputting them to a part of the human body in contact with the panel 20, for example, an ear.
[0076]
Subsequently, at Step S207, the controller 10 determines whether or not the reproduction of the
music content is ended. For example, the controller 10 determines whether an operation input
for ending the reproduction has been received from the user, or whether the reproduction time
of the music content has been completed.
[0077]
When the controller 10 ends the reproduction of the music content as a result of the
determination (Yes in step S207), the processing procedure illustrated in FIG. 7 ends. On the
other hand, when the controller 10 does not finish the reproduction of the music content as a
result of the determination (No in step S207), the process returns to step S202.
[0078]
If the controller 10 determines at step S205 that playback is at a break between songs (Yes at step S205), it causes the panel 20 to generate a read-out voice that reads the album name of the music being played and the track number, and outputs it to the outside. Specifically, when playback is at a break, the controller 10 acquires from the storage 9 the album name of the song being played and its track number (the number of the song as recorded in the album). The controller 10 then synthesizes the read-out voice corresponding to the album name and track number of the music being played and causes the panel 20 to generate the air conduction sound and the vibration sound corresponding to the synthesized read-out voice, outputting them to a part of the human body in contact with the panel 20, for example, an ear. Subsequently, the controller 10 proceeds to step S207 and determines whether to end the reproduction of the music content.
[0079]
If it is determined at step S204 that the touch of the ear on the panel 20 is not detected (No at step S204), the controller 10 proceeds to step S207 and determines whether to end the reproduction of the music content.
[0080]
If it is determined at step S202 that no connection to the earphone jack 13 is detected (No at step S202), the controller 10 outputs the sound of the music from a path other than the earphone jack 13 at step S209.
Subsequently, the controller 10 proceeds to step S207 and determines whether to end the reproduction of the music content.
[0081]
In the processing procedure shown in FIG. 7, the example was described in which the panel 20 outputs a read-out voice that reads the song name and singer name when playback is not at a break between songs, and outputs a read-out voice that reads the album name and track number when playback is at a break. This takes into account the likelihood that, when playback is not at a break, that is, while the sound of a song is being output, the user will want to confirm the song name and singer name; however, the procedure is not limited to this. For example, when playback is not at a break, only a read-out voice that reads the song name may be output, and when playback is at a break, the panel 20 may output a read-out voice that reads the singer name, album name, and track number. In this way, what information is output depending on whether playback is at a break between songs, that is, at the switching portion between songs, can be set arbitrarily.
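The selection rule discussed above might be sketched as follows; the dictionary keys and the particular item sets are illustrative assumptions (as the text notes, the mapping can be set arbitrarily).

```python
# Sketch of the FIG. 7 selection rule: which part of the second
# information is read aloud depends on whether playback is at a break
# between songs. Dictionary keys are illustrative.

def select_readout(info, at_break):
    """Return the items to be synthesized and read aloud via the panel."""
    if at_break:
        # At a break (Yes at S205): album name and track number.
        return [info["album"], "track %d" % info["number"]]
    # Mid-song (No at S205, step S206): song title and singer name.
    return [info["title"], info["singer"]]
```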
[0082]
As described above, in the second embodiment, the mobile phone 1A selects information of different content from the information related to the music content (music) according to whether playback is at a break between songs, that is, at the switching portion between songs, and causes the panel 20 to generate a sound corresponding to the selected information and output it to the outside. Thus, according to the second embodiment, it is possible to convey to the user the information that suits the state of the content being played, from among the information related to that content.
[0083]
Third Embodiment In the first embodiment, the example was described in which, when the earphone is connected and the touch of the ear on the panel 20 is detected during the reproduction of music content (music), a sound corresponding to the information related to the music content (music) is generated by the panel 20. In the third embodiment, yet another processing procedure executed along with the reproduction of music content (music) will be described.
[0084]
The control program 9A includes, for example, a function of selecting different information from the information included in the second information according to the number of contacts detected by the detection unit, and causing the first sound generation unit to generate a sound corresponding to the selected information. The detection unit is, for example, the touch screen 21. The first sound generation unit is, for example, the panel 20. The sound generated by the first sound generation unit is, for example, a read-out voice.
[0085]
By executing the control program 9A, the controller 10 selects different information from the information included in the second information according to, for example, the number of contacts detected by the detection unit, and causes the first sound generation unit to generate a sound corresponding to the selected information.
[0086]
FIG. 8 is a view showing an example of a processing procedure by the mobile phone 1A
according to the third embodiment.
The processing procedure shown in FIG. 8 is realized by the controller 10 executing the control
program 9A and the like stored in the storage 9. Hereinafter, yet another example of the processing procedure executed along with the reproduction of music content (music) in the mobile phone 1A will be described with reference to FIG. 8. Of the processing procedure shown in FIG. 8, the processing of steps S304 to S307 and steps S309 to S312 differs from the processing procedure shown in FIG. 6.
[0087]
As shown in FIG. 8, when the controller 10 starts reproduction of music that is music content at
Step S301, the controller 10 determines whether connection to the earphone jack 13 is detected
at Step S302.
[0088]
If the controller 10 determines that a connection to the earphone jack 13 is detected (Yes at step S302), then at step S303 it outputs the sound of the music through the earphone jack 13 to the connected earphone.
[0089]
Subsequently, at step S304, the controller 10 determines whether ear touch on the panel 20 is detected.
For example, the controller 10 determines whether there is an ear touch on the panel 20 based on the detection result of the touch screen 21.
The method of detecting ear contact with the panel 20 may be similar to that of step S104 shown in FIG. 6.
[0090]
When contact of the ear with the panel 20 is detected (Yes at step S304), the controller 10 acquires, at step S305, the number of ear contacts with the panel 20 based on the detection result of the touch screen 21. For example, the controller 10 acquires the number of ear contacts detected within a fixed time.
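The acquisition of the number of contacts within a fixed time (step S305) could be computed from contact timestamps as below; the window length is an illustrative assumption.

```python
# Sketch of counting ear contacts within a fixed time window (step S305).
# The 2-second window is an illustrative assumption.

def contacts_within(timestamps, now, window=2.0):
    """Number of contact events that occurred within the last `window`
    seconds, given each contact's detection time in seconds."""
    return sum(1 for t in timestamps if now - window <= t <= now)
```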
[0091]
Subsequently, at Step S306, the controller 10 determines whether the number of contacts is one.
When the controller 10 determines that the number of contacts is one (Yes at step S306), the
controller 10 causes the panel 20 to output a read-out voice for reading the song title of the
music being reproduced at step S307.
[0092]
Subsequently, at Step S308, the controller 10 determines whether to end the reproduction of the
music content. For example, the controller 10 determines whether an operation input for ending
the reproduction has been received from the user, or whether the reproduction time of the music
content has been completed.
[0093]
When the controller 10 ends the reproduction of the music content as a result of the
determination (Yes in step S308), the processing procedure illustrated in FIG. 8 ends. On the
other hand, when the controller 10 does not finish the reproduction of the music content as a
result of the determination (No at step S308), the process returns to step S302.
[0094]
When the number of contacts is not one as a result of the determination at step S306 (No at step S306), the controller 10 determines at step S309 whether the number of contacts is two.
[0095]
If the controller 10 determines that the number of contacts is two (Yes at step S309), it causes the panel 20 at step S310 to output a read-out voice that reads the singer name of the music being played.
Subsequently, the controller 10 proceeds to step S308 and determines whether or not the
reproduction of the music content is ended.
[0096]
On the other hand, when the number of contacts is not two as a result of determination (No at
step S309), the controller 10 determines whether the number of contacts is three or more, as
step S311.
[0097]
When the number of contacts is three or more as a result of the determination (Yes at step S311), the controller 10 causes the panel 20 at step S312 to output a read-out voice that reads the album name of the music being played and the track number.
Subsequently, the controller 10 proceeds to step S308 and determines whether or not the
reproduction of the music content is ended.
[0098]
On the other hand, when the number of contacts is not three or more as a result of the
determination (Step S311, No), the controller 10 proceeds to Step S308, and determines whether
to end the reproduction of the music content.
[0099]
If it is determined at step S304 that the touch of the ear on the panel 20 is not detected (No at step S304), the controller 10 proceeds to step S308 and determines whether to end the reproduction of the music content.
[0100]
If it is determined at step S302 that no connection to the earphone jack 13 is detected (No at step S302), the controller 10 outputs the sound of the music from a path other than the earphone jack 13 at step S313.
For example, the controller 10 may output the sound of the music being played from the speaker 11, or may deform the panel 20 so that the panel 20 generates and outputs a sound corresponding to the music being played.
Subsequently, the controller 10 proceeds to step S308 and determines whether to end the reproduction of the music content.
[0101]
In the processing procedure shown in FIG. 8, the example was described in which the panel 20 outputs the song name when the number of ear contacts with the panel 20 is one, the singer name when the number of contacts is two, and the album name and track number when the number of contacts is three or more; however, the procedure is not limited to this.
[0102]
As described above, in the third embodiment, the mobile phone 1A selects information of different content from the information related to the music content (music) according to the number of ear contacts with the panel 20, and the panel 20 generates a read-out voice that reads out the selected information and outputs it to the outside.
Thus, according to the third embodiment, it is possible to convey to the user the information that corresponds to the user's operation, from among the information related to the content being played.
[0103]
Fourth Embodiment In the above embodiment, an example in which the touch screen 21 is
disposed on substantially the entire surface of the panel 20 has been described. However, the
touch screen 21 may be disposed so as not to overlap the panel 20. FIG. 9 is a front view of the
mobile phone according to the fourth embodiment. FIG. 10 is a cross-sectional view of the mobile
phone shown in FIG. 9 along the line b-b. The mobile phone 1B disposed so that the touch screen
21 does not overlap the panel 20 will be described with reference to FIGS. 9 and 10.
[0104]
As shown in FIGS. 9 and 10, in the mobile phone 1B, the display 2 is disposed side by side with
the panel 20 so as not to be inside the panel 20 but to be flush with the panel 20. The touch
screen 21 is disposed to cover substantially the entire front surface of the display 2. That is, the
touch screen 21 and the display 2 constitute a so-called touch panel (touch screen display).
[0105]
The piezoelectric element 7 is attached to a substantially central portion of the rear surface of the panel 20 by a bonding member 30. When an electric signal is applied to the piezoelectric element 7, the panel 20 vibrates in response to the expansion and contraction (deformation) of the piezoelectric element 7 and generates air conduction sound and vibration sound transmitted through a part of the human body (for example, the ear cartilage) in contact with the panel 20. By disposing the piezoelectric element 7 at the center of the panel 20, the vibration of the piezoelectric element 7 is transmitted uniformly to the entire panel 20, and the quality of the sound transmitted to the user is improved.
[0106]
Although the touch screen 21 is not disposed on the front surface of the panel 20, the panel 20
is disposed in the vicinity of the display 2 on which the touch screen 21 is disposed.
[0107]
When the panel 20 is disposed so as not to overlap the touch screen 21, the air conduction sound and the vibration sound are generated, triggered by, for example, the detection of ear contact on the display 2 based on the detection result of the touch screen 21.
In this case, since it is considered that the user's ear is also in contact with the panel 20, information on the content being played can be transmitted to the user, as in the above embodiments.
[0108]
Fifth Embodiment In the above embodiments, an example was described in which at least a part of the touch screen 21 is disposed so as to overlap the display 2. However, the touch screen 21 may be disposed so as not to overlap the display 2. FIG. 11 is a front view of the mobile phone according to the fifth embodiment. FIG. 12 is a cross-sectional view of the mobile phone shown in FIG. 11 taken along line c-c. The mobile phone 1C, in which the touch screen 21 is disposed so as not to overlap the panel 20, will be described with reference to FIGS. 11 and 12. The mobile phone 1C shown in FIGS. 11 and 12 is an example of a so-called foldable mobile phone.
[0109]
As shown in FIGS. 11 and 12, in the mobile phone 1C, the display 2 is disposed side by side with
the panel 20 so as not to be inside the panel 20 but to be flush with the panel 20.
[0110]
The piezoelectric element 7 is attached to a substantially central portion of the rear surface of the panel 20 by a bonding member 30.
A reinforcing member 31 is disposed between the panel 20 and the piezoelectric element 7. The reinforcing member 31 is, for example, a plate made of resin, a metal plate, or a plate containing glass fiber. That is, in the mobile phone 1C, the piezoelectric element 7 and the reinforcing member 31 are bonded by the bonding member 30, and the reinforcing member 31 and the panel 20 are bonded by the bonding member 30. The piezoelectric element 7 need not be provided at the center of the panel 20.
[0111]
The reinforcing member 31 is, for example, an elastic member such as rubber or silicone. The reinforcing member 31 may be, for example, a metal plate made of aluminum or the like having a certain degree of elasticity. The reinforcing member 31 may also be, for example, a stainless steel plate such as SUS304. A suitable thickness of a metal plate such as a stainless steel plate is, for example, 0.2 mm to 0.8 mm, depending on the voltage value applied to the piezoelectric element 7 and the like. The reinforcing member 31 may be, for example, a plate made of resin. An example of the resin forming such a plate is a polyamide-based resin. The polyamide resin is, for example, a crystalline thermoplastic resin obtained from metaxylylenediamine and adipic acid, such as Reny (registered trademark), which is rich in strength and elasticity. Such a polyamide resin may itself serve as the base polymer of a reinforced resin reinforced with glass fiber, metal fiber, carbon fiber, or the like. The strength and elasticity of the reinforced resin are adjusted appropriately according to the amount of glass fiber, metal fiber, carbon fiber, or the like added to the polyamide resin. The reinforced resin is formed, for example, by impregnating a base material woven from glass fiber, metal fiber, carbon fiber, or the like with the resin and curing it. The reinforced resin may also be formed by mixing finely cut fiber pieces into a liquid resin and then curing it. The reinforced resin may also be a laminate of a fiber-woven base and a resin layer.
[0112]
By arranging the reinforcing member 31 between the piezoelectric element 7 and the panel 20,
the following effects can be obtained. When an external force is applied to the panel 20, the
possibility that the external force is transmitted to the piezoelectric element 7 and the
piezoelectric element 7 is broken can be reduced. For example, when an external force is applied
to the panel 20 by the cellular phone 1C falling to the ground, the external force is first
transmitted to the reinforcing member 31. Since the reinforcing member 31 has a predetermined
elasticity, it is elastically deformed by an external force transmitted from the panel 20. Therefore,
at least a part of the external force applied to the panel 20 is absorbed by the reinforcing
member 31, and the external force transmitted to the piezoelectric element 7 is reduced. As a result, breakage of the piezoelectric element 7 can be reduced. Similarly, when the reinforcing member 31 is disposed between the piezoelectric element 7 and the casing 40, the possibility that the casing 40 is deformed by, for example, the mobile phone 1C falling to the ground and that the deformed casing 40 collides with the piezoelectric element 7 can be reduced.

11-05-2019
32
[0113]
Vibration due to expansion or contraction or bending of the piezoelectric element 7 is first
transmitted to the reinforcing member 31 and further transmitted to the panel 20. That is, the
piezoelectric element 7 first vibrates the reinforcing member 31 having a larger elastic
coefficient than the piezoelectric element 7 and further vibrates the panel 20. Therefore, as
compared with the structure in which the mobile phone 1C does not include the reinforcing
member 31 and the piezoelectric element 7 is bonded to the panel 20 by the bonding member
30, the deformation of the piezoelectric element 7 can be less likely to be excessive. Thereby, the
amount of deformation (degree of deformation) of the panel 20 can be adjusted. This structure is
particularly effective in the case of the panel 20 in which the deformation of the piezoelectric
element 7 is not easily inhibited.
[0114]
Furthermore, by arranging the reinforcing member 31 between the piezoelectric element 7 and
the panel 20, as shown in FIG. 13, the resonance frequency of the panel 20 is lowered, and the
acoustic characteristics in the low frequency band are improved. FIG. 13 is a view showing an example of the change in frequency characteristics caused by the reinforcing member 31. FIG. 13 shows the frequency characteristics obtained when a sheet metal (stainless steel plate) such as SUS304 described above is used as the reinforcing member 31, and those obtained when a reinforced resin such as the above-mentioned Reny is used as the reinforcing member 31. The horizontal axis represents frequency, and the vertical axis
represents sound pressure. The resonance point in the case of using a reinforced resin is about 2
kHz, and the resonance point in the case of using a sheet metal is about 1 kHz. The dip is about 4
kHz when the reinforced resin is used, and about 3 kHz when the sheet metal is used. That is, when the reinforced resin is used, the resonance point of the panel 20 and the dip of the frequency characteristic are both located in higher frequency regions than when a sheet metal is used. Since the frequency band used for voice communication of a mobile phone is 300 Hz to 3.4 kHz, the dip can be kept out of the operating frequency band of the mobile phone 1C when the reinforced resin is used as the reinforcing member 31. Even when a sheet metal is used as the reinforcing member 31, the dip can be kept out of the operating frequency band of the mobile phone 1C by appropriately adjusting the type or composition of the metal constituting the sheet metal, or its thickness. Compared with a sheet metal, a reinforced resin has less impact on antenna performance. A reinforced resin is also less likely to be plastically deformed, so its acoustic characteristics are less likely to change. Furthermore, a reinforced resin suppresses the temperature rise at the time of sound generation better than a sheet metal.
Instead of the reinforcing member 31, a plate-like weight may be attached to the piezoelectric
element 7 by the joining member 30.
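The band relationship discussed above can be checked directly: the text states that the voice-communication band is 300 Hz to 3.4 kHz and that the dip should fall outside it. The following is a minimal illustrative sketch; the function name and structure are assumptions, only the band limits and the approximate dip values come from the text.

```python
# Voice-communication band stated in the text (assumed constant name).
VOICE_BAND_HZ = (300.0, 3400.0)

def dip_outside_voice_band(dip_hz: float, band=VOICE_BAND_HZ) -> bool:
    """Return True if the dip frequency lies outside the voice band."""
    low, high = band
    return dip_hz < low or dip_hz > high

# Approximate dips quoted in the text: ~4 kHz for reinforced resin,
# ~3 kHz for an unadjusted sheet metal.
print(dip_outside_voice_band(4000.0))  # reinforced resin
print(dip_outside_voice_band(3000.0))  # unadjusted sheet metal
```

With the quoted values, only the reinforced-resin dip falls outside the 300 Hz to 3.4 kHz band, matching the text's conclusion that the sheet metal needs its type, composition, or thickness adjusted.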
[0115]
When an electric signal is applied to the piezoelectric element 7, the panel 20 vibrates in response to the deformation (expansion, contraction, or bending) of the piezoelectric element 7, and generates air conduction sound and vibration sound transmitted through a part of the human body (for example, the auricular cartilage) in contact with the panel 20. The touch screen 21 is disposed to cover substantially the entire front surface of the panel 20.
[0116]
When the touch screen 21 is disposed so as not to overlap the display 2, the mobile phone 1C detects the contact of the ear with the panel 20 based on the detection result of the touch screen 21 and, using the detection as a trigger, generates air conduction sound and vibration sound, as in the above embodiment. Therefore, as in the above embodiment, it is possible to convey information about the content being played back to the user.
[0117]
(Other Embodiments) In the above embodiments, the mobile phones 1A to 1C may be configured so that the frequency of the sound corresponding to specific information included in the first information differs from the frequency of the air conduction sound and the vibration sound corresponding to the second information. For example, when the singer's voice included in the music being played has a frequency corresponding to a male voice, the mobile phone 1A reads out the information related to the music, such as the song name and the singer's name, at a frequency corresponding to a female voice. In this way, it is possible to convey information about the content being played back to the user in an easy-to-understand manner.
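The male-singer / female-reader example can be sketched as follows. This is a hedged illustration, not the patent's implementation: `dominant_frequency`, `choose_reading_voice`, and the 165 Hz register threshold are all assumptions introduced here.

```python
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def choose_reading_voice(singer_hz: float, threshold_hz: float = 165.0) -> str:
    """Pick a contrasting register: low singing voice -> high reading voice."""
    return "high_register" if singer_hz < threshold_hz else "low_register"

# A 120 Hz sine stands in for a male singing voice (1 s at 8 kHz,
# giving 1 Hz FFT resolution).
rate = 8000
t = np.arange(rate) / rate
voice = np.sin(2 * np.pi * 120.0 * t)
f0 = dominant_frequency(voice, rate)
print(round(f0), choose_reading_voice(f0))  # → 120 high_register
```

The threshold merely illustrates the contrast between typical male and female fundamental frequencies; an actual device would presumably rely on its text-to-speech voice settings rather than spectral analysis.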
[0118]
In the above embodiment, the mobile phone 1A can also extract a sound of a specific frequency from the music content being reproduced and output it through the panel 20. For example, the mobile phone 1A extracts only a sound or voice of a specific frequency, such as a vocal sound, drum sound, bass sound, or guitar sound, from the sounds or voices included in the music being reproduced, and outputs it through the panel 20.
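The extraction described above amounts to band-limiting the signal. Below is a minimal sketch using an FFT mask, under the assumption that the target instrument occupies a known frequency band; a real implementation would more likely use a proper filter design (for example `scipy.signal.butter`), and `extract_band` and the band edges are illustrative names, not the patent's method.

```python
import numpy as np

def extract_band(samples: np.ndarray, sample_rate: int,
                 low_hz: float, high_hz: float) -> np.ndarray:
    """Zero out all spectral components outside [low_hz, high_hz]."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(samples))

# Mix an 80 Hz "bass" tone with a 1000 Hz tone, then keep only a bass band.
rate = 8000
t = np.arange(rate) / rate
mix = np.sin(2 * np.pi * 80.0 * t) + np.sin(2 * np.pi * 1000.0 * t)
bass = extract_band(mix, rate, 40.0, 200.0)

# The 1000 Hz component is gone; the 80 Hz component survives.
print(int(np.argmax(np.abs(np.fft.rfft(bass)))))  # dominant bin in Hz here
```

Because both test tones fall on exact 1 Hz FFT bins, the masked reconstruction recovers the 80 Hz component essentially unchanged.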
[0119]
In the above embodiment, the panel 20 is deformed by the piezoelectric element 7 to generate air conduction sound and vibration sound. Air conduction sound and vibration sound may instead be generated by deforming a part of the housing 40 with the piezoelectric element 7. The part of the housing 40 may be, for example, a corner of the housing.
[0120]
In the above embodiment, an example is shown in which the display 2 is attached to the back of the panel 20 of the mobile phone 1A using the bonding member 30, but the mobile phone 1A may be configured with a space between the panel 20 and the display 2. Providing a space between the panel 20 and the display 2 allows the panel 20 to vibrate more easily, widening the range on the panel 20 over which the vibration sound can be easily heard.
[0121]
Although the above embodiment showed an example in which the piezoelectric element 7 is attached to the panel 20, it may be attached elsewhere. For example, the piezoelectric element 7 may be attached to the battery lid. The battery lid is a member attached to the housing 40 that covers the battery. Since the battery lid is often attached to a surface different from the display 2 in portable electronic devices such as mobile phones, with such a configuration the user can bring a part of the body (for example, the auricular cartilage) into contact with the battery lid to hear the sound. The piezoelectric element 7 may also be configured to vibrate a corner (for example, at least one of the four corners) of the housing 40. In this case, the piezoelectric element 7 may be attached to the inner surface of the corner of the housing 40, or an intermediate member may further be provided so that the vibration of the piezoelectric element 7 is transmitted to the corner of the housing 40 via the intermediate member. According to this configuration, since the vibrating range can be made relatively narrow, the air conduction sound generated by the vibration hardly leaks to the surroundings. Further, according to this configuration, the air conduction sound and the vibration sound are transmitted to the user while, for example, the corner of the housing 40 is inserted into the user's ear canal, so that ambient noise does not easily enter the ear canal. Therefore, the quality of the sound transmitted to the user can be improved.
[0122]
In the above embodiment, the reinforcing member 31 is a plate-like member, but the shape of the
reinforcing member 31 is not limited to this. For example, the reinforcing member 31 may be
larger than the piezoelectric element 7 and may have a shape in which an end thereof is curved
toward the piezoelectric element 7 and covers the side of the piezoelectric element 7. Further, the reinforcing member 31 may have a shape including, for example, a plate-like portion and an extending portion which extends from the plate-like portion and covers the side portion of the piezoelectric element 7. In this case, the extending portion and the side portion of the piezoelectric element 7 may be separated by a predetermined distance, which makes it less likely that the extending portion inhibits the deformation of the piezoelectric element 7.
[0123]
The panel 20 can constitute a part or all of any of a display panel, an operation panel, a cover panel, and a lid panel that makes a rechargeable battery removable. In particular, when the panel 20 is a display panel, the piezoelectric element 7 is disposed outside the display area for the display function, which has the advantage of being less likely to interfere with the display. The operation panel includes a touch panel. The operation panel also includes, for example, a sheet key, that is, a member in which the key tops of the operation keys of a foldable mobile phone are integrally formed and which constitutes one surface of the operation unit housing.
[0124]
The bonding member that bonds the panel 20 and the piezoelectric element 7 and the bonding member that bonds the panel 20 and the housing 40 have been described with the same reference numeral, as the bonding member 30. However, different joining members may be used as appropriate depending on the members to be joined.
[0125]
In the above embodiment, a mobile phone has been described as an example of an apparatus
according to the appended claims, but an apparatus according to the appended claims is not
limited to a mobile phone. The device according to the appended claims may be a portable
electronic device other than a mobile phone. Portable electronic devices include, but are not
limited to, tablets, portable personal computers, digital cameras, media players, electronic book
readers, navigators, and game consoles, for example.
[0126]
In order to fully and clearly disclose the technology of the appended claims, the characteristic embodiments have been described above. However, the appended claims should not be limited to the above-described embodiments, and should be construed to embrace all variations and alternative configurations that those skilled in the art can create within the scope of the basic matter presented herein.
[0127]
In the above embodiment, an example in which the controller 10 detects the touch of the ear on the panel 20 based on the detection result of the touch screen 21, and an example in which it detects the contact or approach of the ear or an earphone to the panel 20 based on the detection result of a magnetic sensor (which detects the magnetic field emitted from a magnet mounted on the earphone), have been described, but the detection is not limited to these. The detection unit may detect a change in capacitance value when the touch screen 21 is of a capacitance type, and may detect a change in current value when the touch screen is of a resistive film type. The detection unit may detect a change in physical quantity according to any one of the various touch screen methods described above. The control described in the above embodiment may be performed based on a change in a specific physical quantity detected by the detection unit. The detection unit may be the illuminance sensor 4, which detects the illuminance (light intensity, brightness, or luminance) of the ambient light of the mobile phone. The detection unit may also be a temperature sensor that detects the temperature near the panel 20, an infrared sensor that detects the intensity of infrared light, or the like. The detection unit may detect a change in the physical quantity itself or a change in a voltage value corresponding to the physical quantity.
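The threshold-style change detection described in this paragraph can be sketched as follows. The function names, the readings, and the threshold are hypothetical illustrations, not the patent's implementation; the idea shown is only that a sufficiently large change in the monitored quantity (capacitance, current, illuminance, temperature, or a corresponding voltage) triggers the control.

```python
def detect_change(previous: float, current: float, threshold: float) -> bool:
    """Return True when the physical quantity changed by more than threshold."""
    return abs(current - previous) > threshold

def on_sensor_update(readings, threshold=5.0):
    """Return indices where a change event fires (e.g. an ear contacting
    the panel causes a jump in the sensed value)."""
    events = []
    for i in range(1, len(readings)):
        if detect_change(readings[i - 1], readings[i], threshold):
            events.append(i)
    return events

# Example: a capacitance-like value jumps when an ear contacts the panel.
print(on_sensor_update([10.0, 10.2, 10.1, 42.0, 41.8]))  # change at index 3
```

In the device described here, such an event would be the trigger for generating the air conduction sound and vibration sound corresponding to the second information.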
[0128]
1A to 1C Mobile phone, 2 Display, 3 Button, 4 Illuminance sensor, 5 Proximity sensor, 6 Communication unit, 7 Piezoelectric element, 8 Microphone, 9 Storage, 9A Control program, 9B Call application, 9C Music playback application, 9D Video playback application, 9Y Content data, 9Z Setting data, 10 Controller, 11 Speaker, 12 Camera, 13 Earphone jack, 15 Posture detection unit, 18 Vibrator, 20 Panel, 21 Touch screen, 30 Joining member, 31 Reinforcing member, 40 Housing