JP2010251916
Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2010251916
To provide a sound data processing apparatus capable of calculating a correction value from the direction of a sound source, designated by a user or detected automatically, even when an error occurs in the estimated sound source direction. The sound data processing apparatus according to the present invention has a function of drawing the estimated sound source direction on a video, and a function of calculating a correction value for the sound source direction based on a position in the video specified by the user. The sound data processing apparatus of the present invention further has a face detection function and a function of calculating the correction value of the sound source direction from the detected face. [Selected figure] Figure 5
Sound data processing apparatus and program
[0001]
The present invention relates to a sound data processing apparatus capable of obtaining a
correction value for correcting an error in the direction of an estimated sound source.
[0002]
A large number of recording devices have been commercialized today, and features such as
optical zoom and brightness adjustment have been incorporated.
Patent Document 1 describes a method of providing a plurality of microphones in a recording
device, estimating a sound source direction, and separating and extracting individual sound
sources. According to this method, the clarity of the target sound can be improved by suppressing sounds other than the target sound source during recording.
[0003]
JP 2002-84590 A
[0004]
In the recording apparatus disclosed in Patent Document 1, when the apparatus is used for a long time, the accuracy and characteristics with which the direction of the sound source (the sound source direction) is calculated may change, owing to a decrease in microphone sensitivity or distortion of the housing of the recording apparatus.
In that case, if the estimated sound source direction is shifted, the performance in separating and extracting the target sound source is considered to be affected. A mechanism for correcting the estimated sound source direction to the correct direction is therefore desired. The present invention has been made in view of the above circumstances, and it is an object of the present invention to enable correction of the estimated sound source direction in a sound data processing device having a function of estimating that direction.
[0005]
According to a first aspect of the present invention, there is provided a sound data processing apparatus comprising: a plurality of microphones; an imaging means; an estimation means for estimating the position of a sound source from a plurality of sound data fetched from the microphones; a display means for displaying a video and a predetermined mark at the position estimated by the estimation means; an input means for receiving from the user an instruction specifying a position on the video displayed by the display means; a calculation means for calculating the difference between the estimated position and the position designated by the user; and a storage means for storing the difference calculated by the calculation means.
[0006]
Preferably, the input means comprises a touch panel display.
[0007]
Also preferably, the input means comprises direction keys indicating the up, down, left, and right directions on the screen.
[0008]
In order to achieve the above object, a sound data processing apparatus according to a second aspect of the present invention comprises: a plurality of microphones; an imaging means; an estimation means for estimating the position of a sound source from a plurality of sound data fetched from the microphones; a detection means for detecting the position of a person's mouth in a video captured by the imaging means; a calculation means for calculating the difference between the position estimated by the estimation means and the position of the person's mouth; and a storage means for storing the difference calculated by the calculation means.
[0009]
Preferably, the storage means further stores the plurality of sound data fetched from the microphones, and the apparatus further comprises a correction means for correcting the sound data using the difference stored in the storage means.
[0010]
A program according to a third aspect of the present invention causes a computer connected to a plurality of microphones and an imaging means to function as: an estimation means for estimating the position of a sound source from a plurality of sound data fetched from the microphones; a display means for displaying a video and a predetermined mark at the position estimated by the estimation means; an input means for receiving from the user an instruction specifying a position on the video displayed by the display means; a calculation means for calculating the difference between the estimated position and the position designated by the user; and a storage means for storing the difference calculated by the calculation means.
[0011]
A program according to a fourth aspect of the present invention causes a computer connected to a plurality of microphones and an imaging means to function as: an estimation means for estimating the position of a sound source from a plurality of sound data fetched from the microphones; a detection means for detecting the position of a person's mouth in the captured video; a calculation means for calculating the difference between the position estimated by the estimation means and the position of the person's mouth; and a storage means for storing the difference calculated by the calculation means.
[0012]
According to the present invention, it is possible to determine the difference between the estimated sound source direction and the sound source direction that is correct, or that is presumed to be correct.
If this difference is used as a correction value at the time of shooting or playback, then even if the device has deteriorated, performance can be maintained for functions such as making a sound from a specific angle easier to hear, or setting the camera focus on an object present in the sound source direction.
[0013]
FIG. 1 is a block diagram of a portable device provided with a sound data processing function according to an embodiment of the present invention.
FIG. 2 shows the installation locations of the microphones according to an embodiment of the present invention.
FIG. 3 shows the relationship between the angles of the microphones and the sound source according to an embodiment of the present invention.
FIG. 4 is a flowchart of the process of computing a correction value for the sound source direction according to Embodiment 1 of the present invention.
FIG. 5 shows images drawn on the display unit during correction value calculation according to Embodiment 1 of the present invention.
FIG. 6 is a flowchart of correcting the sound source direction according to Embodiment 1 of the present invention.
FIG. 7 is a flowchart of the process of computing a correction value for the sound source direction by key operation according to Embodiment 2 of the present invention.
FIG. 8 is a flowchart of the process of automatically computing a correction value for the sound source direction according to Embodiment 3 of the present invention.
FIG. 9 shows images drawn on the display unit during correction value calculation according to Embodiment 3 of the present invention.
[0014]
(First Embodiment) A portable device 101, such as a mobile phone, having a sound data processing function according to the first embodiment of the present invention will be described.
[0015]
As illustrated in FIG. 1, the portable device 101 according to the present embodiment includes an imaging unit 102, microphones 103 to 106, a key input unit 107, a codec unit 108, a control unit 109, a recording unit 110, a display unit 111, and a speaker 112.
[0016]
The imaging unit 102 includes a CCD (Charge-Coupled Device) camera, a CMOS (Complementary MOS) sensor, or the like; it captures an image and converts it into an electrical signal.
[0017]
The microphones 103 to 106 collect sound and convert the collected sound into an analog
signal.
Although four microphones are used in the present embodiment, the number of microphones is not limited to four; any number of four or more may be used.
[0018]
An example of the mounting of the imaging unit 102 and the microphones 103 to 106 is shown in FIG. 2.
When the optical axis of the imaging unit 102 is taken as the X axis, and the Y axis and Z axis are defined perpendicular to it and to each other, the microphones are arranged so that their positions differ along each of the X, Y, and Z axes.
That is, as shown in FIG. 2, the microphones 103, 104, and 106 are disposed at different positions on the surface on which the imaging unit 102 is installed, and the microphone 105 is disposed on the back surface.
[0019]
The key input unit 107 includes a power switch, a record button, direction keys, and the like. The key input unit 107 receives operation inputs such as activating the microphones 103 to 106 and the imaging unit 102, starting and ending recording, and moving a cursor displayed on the display unit 111, and sends information on the received operation inputs to the control unit 109.
[0020]
The codec unit 108 encodes the video captured from the imaging unit 102 and the sound captured from the microphones 103 to 106 for compression, and decodes them for playback. The codec unit 108 may be divided into a video codec unit and a sound codec unit.
[0021]
The control unit 109 includes a central processing unit (CPU), a read-only memory (ROM), a random-access memory (RAM), and the like. It executes programs stored in the recording unit 110, and in addition to performing the basic functions of the portable device 101, it also operates as a sound data processing device. For example, it calculates the sound source direction based on the sound data captured from the microphones, and performs the processing shown in the flowcharts of FIGS. 4 and 6.
[0022]
The recording unit 110 includes a ROM, a flash memory, a hard disk drive (HDD), and the like, and stores the video captured from the imaging unit 102, the sound data captured from the microphones 103 to 106, the microphone position information, information on the calculated sound source direction, and so on. It also stores programs and the like related to the processing performed by the control unit 109.
[0023]
The display unit 111 includes an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display, a driver, and the like, and displays the video captured by the imaging unit 102, a mark for the sound source at the sound source direction calculated by the control unit 109, and so on.
[0024]
In addition, the display unit 111 may be configured as a touch panel display capable of drawing images and, in place of the key input unit 107, receiving the user's operation inputs. Hereinafter, the display unit 111 is assumed to be a touch panel display.
[0025]
The speaker 112 is composed of an amplifier, a micro speaker, and the like, and outputs a sound
based on an analog sound signal sent from the control unit 109.
[0026]
Next, the method of estimating the direction of a sound incident on the microphones, as executed in the portable device 101 having the above physical configuration, will be described with reference to FIG. 3.
[0027]
First, two microphones are arbitrarily selected from the four, and the sounds incident on them are correlated to obtain the time difference TimeLag between the same sound arriving at one microphone and at the other.
For example, in FIG. 3, taking the sounds captured by the microphones 310 and 320 as sound data 311 and 321, TimeLag can be obtained from the offset between the two waveforms. Letting d be the distance between the microphones 310 and 320 and C the speed of sound, the angle formed, on the plane containing the microphones 310 and 320 and the sound source, between the direction perpendicular to the line connecting the two microphones and the direction of the sound source is expressed by equation (1):

angle = arcsin(C · TimeLag / d) ... (1)
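As a concrete illustration of equation (1), the following minimal sketch estimates TimeLag for one microphone pair by cross-correlation and converts it to an angle. It assumes discretely sampled signals at rate fs; all function and parameter names here are hypothetical, not taken from the patent.

```python
import numpy as np

def pair_angle(sig_a, sig_b, fs, d, C=343.0):
    """Estimate the source angle for one microphone pair via equation (1):
    angle = arcsin(C * TimeLag / d).

    sig_a, sig_b: sound data from the two microphones (1-D arrays)
    fs: sampling rate in Hz; d: microphone spacing in metres
    C: speed of sound in m/s (343 m/s assumed at room temperature)
    """
    # Cross-correlate to find the sample offset at which the two
    # waveforms align best; that offset is TimeLag in samples.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    time_lag = lag_samples / fs

    # Clip guards against noise pushing |C*TimeLag/d| slightly above 1.
    return np.arcsin(np.clip(C * time_lag / d, -1.0, 1.0))
```

The device would evaluate this for several microphone pairs and combine the pair angles into a three-dimensional direction, as described in the next paragraph.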
[0028]
The control unit 109 obtains the angle from equation (1) and converts it into the X, Y, Z coordinate system based on the position of each microphone in the coordinates shown in FIG. 2. By analyzing other combinations of microphones in the same way, the control unit 109 obtains the direction (angle) of a specific sound source as seen from the origin of the X, Y, Z coordinate system. In this way, by analyzing the positional relationship of the installed microphones and the captured sounds, the sound source direction can be detected for sound arriving from any angle in three-dimensional space. This also makes it possible to emphasize the sound from a specific sound source direction.
[0029]
In the portable device 101 having the sound source estimation function described above, if the microphone sensitivity decreases through long-term use or the like, lowering the accuracy of the TimeLag calculation, or if the housing is distorted and the microphone distance d changes, an error occurs in the estimation of the incident angle of the sound.
[0030]
The process of calculating the correction value of the sound source direction at the time of shooting a moving image or photograph, as executed in the portable device 101, will be described below with reference to the flowchart of FIG. 4.
An example of the video displayed on the display unit 111 when correcting the sound source direction is shown in FIG. 5. In this example, it is assumed that the dog 511 is emitting a sound.
[0031]
First, the control unit 109 in FIG. 1 calculates the sound source direction in the three-dimensional space of FIG. 2 using the sound data captured from the microphones 103 to 106 and equation (1) (step S401). Next, the control unit 109 converts the sound source direction in three-dimensional space into coordinates on the image of the display unit 111 (step S402). The control unit 109 then causes the display unit 111 to display the video captured from the imaging unit 102 together with a solid-line sound source mark 512 at the converted coordinates (step S403). When the control unit 109 detects a plurality of sound sources, a plurality of sound source marks are displayed. In the example of the image 510 of FIG. 5, the control unit 109 has not accurately detected the direction of the sound emitted by the dog 511, and as a result the sound source mark 512 is displayed at a position different from that of the dog 511, the actual sound source.
[0032]
When the user touches the sound source mark 512 on the display unit 111 with a finger in order to correct the detected sound source direction, the control unit 109 detects the touch operation (step S404; Yes). The detection range of the touch operation may be limited to the area where the sound source mark is displayed, or it may extend, for example, 10 dots beyond the mark in the up, down, left, and right directions; the range may also be made adjustable.
[0033]
When the control unit 109 detects the touch operation (step S404; Yes), to indicate that the user has started specifying the position of the sound source, it changes the selected sound source mark 512 to a dotted line (sound source mark 522 in the image 520 of FIG. 5) and displays it on the display unit 111 together with a message 523 reading "correcting sound source position". As long as the user can recognize from the change in the form of the mark that the sound source position correction process has started, the mark may, for example, blink or change color instead of changing from a solid to a dotted line. If the control unit 109 does not detect a touch operation (step S404; No), it continues detecting the sound source direction.
[0034]
As shown in the image 530 of FIG. 5, when the user slides the sound source mark 532 to the position on the image where the sound is assumed to actually originate, the control unit 109 detects the drag operation (step S405). Next, when the user lifts the finger from the display unit 111, the control unit 109 detects the release operation (step S406; Yes), deletes the message "correcting sound source position" as shown in the image 540, and returns the mark 532 to the original solid line (sound source mark 542). Alternatively, the position of the sound source on the image may be specified by other operations, for example by double-tapping the sound source mark displayed first and then double-tapping the position on the image where the sound is assumed to actually originate.
[0035]
Next, the control unit 109 calculates the differences in the vertical and horizontal directions between the position on the image of the initially calculated sound source direction (the position of the sound source mark 512) and the position on the image finally set by the user (the position of the sound source mark 542) (step S407), and stores the differences in the recording unit 110 as a correction value (step S408). The control unit 109 then checks whether shooting is continuing (step S409). If shooting is continuing (step S409; No), it returns to detecting the sound source direction; if an instruction to end shooting has been received (step S409; Yes), the correction value calculation process ends.
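Steps S407 and S408 reduce to a coordinate subtraction and a store. A minimal sketch, assuming positions are held as (x, y) dot coordinates; the names are illustrative only.

```python
def compute_correction(estimated_xy, user_xy):
    """Step S407: vertical and horizontal differences between the estimated
    mark position (512) and the position finally set by the user (542)."""
    dx = user_xy[0] - estimated_xy[0]  # horizontal difference in dots
    dy = user_xy[1] - estimated_xy[1]  # vertical difference in dots
    return (dx, dy)

# Step S408: the device would persist this pair, e.g. (hypothetical store)
# recording_unit["source_correction"] = compute_correction((120, 80), (150, 95))
```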
[0036]
Here, during periods in which no sound is present when detecting the sound source direction (step S401), the previously drawn sound source mark may either be left displayed or be deleted. Even while it remains displayed, the processing from step S404 to step S408 in FIG. 4 can be performed to correct the sound source direction.
[0037]
Next, the process of correcting the sound source direction based on the correction value stored in the recording unit 110 will be described using the flowchart of FIG. 6. As in steps S401 and S402 of FIG. 4, the control unit 109 detects the sound source direction (step S601) and converts it to coordinates on the image of the display unit 111 (step S602). Next, the control unit 109 reads the correction value from the recording unit 110, adds the vertical and horizontal correction values to the vertical and horizontal coordinates on the converted image, and obtains the angle of the sound source from the corrected position coordinates (step S603). The control unit 109 causes the display unit 111 to display a sound source mark at the corrected position of the sound source (step S604). Next, it checks whether playback is continuing (step S605). If playback is continuing (step S605; No), it returns to detecting the sound source direction; if an instruction to end playback has been received (step S605; Yes), the process ends.
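Step S603 applies the stored offsets and re-derives the source angle from the corrected coordinates. The sketch below assumes a screen_to_direction helper that inverts the step S402 conversion; both names are hypothetical.

```python
def corrected_source(estimated_xy, correction, screen_to_direction):
    """Steps S603-S604: add the stored vertical/horizontal correction to
    the converted image coordinates, then recompute the source angle."""
    x = estimated_xy[0] + correction[0]
    y = estimated_xy[1] + correction[1]
    # Recover the angle (direction) from the corrected screen position;
    # this inverts the projection used in step S402.
    direction = screen_to_direction(x, y)
    return (x, y), direction  # mark position for step S604, plus the angle
```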
[0038]
The process of applying the correction value to correct the sound source direction may also be performed within the correction flow of FIG. 4; in that case, the processing of steps S603 and S604 of FIG. 6 is carried out between step S407 and step S408.
[0039]
In addition, since the sound source mark keeps moving whenever the subject producing the sound moves during shooting of a moving image or photograph, when it is determined in step S404 of FIG. 4 that there has been a touch operation near the sound source, the image displayed on the display unit 111 may be frozen so that the video captured from the imaging unit is temporarily not displayed.
Even in that case, the processing from step S405 to step S408 in FIG. 4 is performed to correct the sound source direction.
[0040]
Further, although in the flowchart described above the correction value is calculated at the time of shooting, the user may instead specify the position on the video during playback of the captured moving image file, and the correction value may be calculated then. The same applies to the following embodiments.
[0041]
According to the present embodiment, even if an error occurs in the estimated sound source direction, displaying that direction on the display unit at the time of shooting or playback allows the user to specify the correct direction easily, by an operation such as touching it on the display unit. Furthermore, since a correction value can be obtained from the designated sound source direction, even if the device has deteriorated through long-term use, applying this correction value at the time of shooting or playback maintains functions such as making the sound from a specific angle easier to hear.
[0042]
(Second Embodiment) Next, a second embodiment will be described, in which the sound source direction is corrected by key operation on a still image in the portable device 101.
[0043]
FIG. 7 shows a flowchart of the process of calculating a correction value for the sound source direction by key operation on a still image.
First, the control unit 109 performs the same processing as steps S401 to S403 of the first embodiment. The description below covers step S704 onward, where the processing differs from the first embodiment.
[0044]
First, the control unit 109 determines whether the user has pressed a direction key (step S704); if not (step S704; No), it continues checking for a direction key press. When the control unit 109 detects that a direction key has been pressed (step S704; Yes), to indicate that the user has started specifying the position of the sound source, it displays the message 523 reading "correcting sound source position" on the display unit 111 as shown in the image 520 of FIG. 5, and changes the sound source mark 512 to a dotted line (sound source mark 522). Each time the user presses a direction key, the control unit 109 moves the sound source mark in the pressed direction (step S705). When, as shown in the image 530, the user has moved the sound source mark 532 and presses a decision key such as the Enter key, the control unit 109 determines that the decision key has been pressed (step S706; Yes). If the decision key is not pressed (step S706; No), the control unit 109 continues checking for the key press. When the control unit 109 detects the decision key press (step S706; Yes), as shown in the image 540, the message 523 "correcting sound source position" is deleted and the sound source mark 532 is returned to the original solid line (sound source mark 542).
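The direction-key handling of step S705 can be sketched as a lookup from key to a per-press displacement. The step size of 1 dot per press is an assumption; the patent does not specify it.

```python
KEY_DELTA = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def move_mark(mark_xy, key, step_dots=1):
    """Step S705: move the sound source mark one step in the pressed
    direction. step_dots is a hypothetical per-press increment."""
    dx, dy = KEY_DELTA[key]
    return (mark_xy[0] + dx * step_dots, mark_xy[1] + dy * step_dots)
```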
[0045]
Next, the control unit 109 calculates the differences in the vertical and horizontal directions between the position on the image of the initially calculated sound source direction (the position of the sound source mark 512) and the position on the image finally set by the user (the position of the sound source mark 542) (step S707), and stores them in the recording unit 110 as a correction value (step S708). The control unit 109 then checks whether playback is continuing (step S709). If playback is continuing (step S709; No), it returns to detecting the sound source direction; if an instruction to end has been received (step S709; Yes), the process ends. Thereafter, as in the flowchart of FIG. 6 described in the first embodiment, the process of correcting the sound source direction based on the correction value is performed.
[0046]
In addition, as in the moving-image shooting described in Embodiment 1, when it is determined that there has been a touch operation near the sound source, the image displayed on the display unit 111 may be frozen so that the video captured from the imaging unit 102 is temporarily not displayed. Even in that case, the processing from step S405 to step S408 in FIG. 4 is performed to correct the sound source direction.
[0047]
According to the present embodiment, even in a portable device 101 whose display unit is not a touch panel display, the correction value of the sound source direction can be calculated while the captured moving image file is paused, and by using the corrected sound source direction, the performance of the sound data processing function can be maintained.
[0048]
(Third Embodiment) Next, a third embodiment will be described, in which the sound source direction is corrected automatically when the position of the sound source is detected near a face, in a portable device 101 having both face recognition and sound source direction recognition functions.
[0049]
FIG. 8 shows a flowchart for correcting the sound source direction at the time of shooting a moving image or photograph.
An example of the images displayed on the display unit 111 when correcting the sound source direction is shown in FIG. 9.
In this example, it is assumed that a sound is emitted from the mouth of the human face 911.
[0050]
First, the control unit 109 performs the same processing as steps S401 to S403 of the first embodiment. The description below covers step S804 onward, where the processing differs from the first embodiment. The control unit 109 performs face detection on the video captured from the imaging unit 102 (step S804), and causes the display unit 111 to display a face area mark indicating the area where a face is present (step S805). For example, as shown in the image 910 of FIG. 9, the control unit 109 displays on the display unit 111 the image 911 of the person's face captured from the imaging unit 102, the sound source mark 912 indicating the on-screen position of the sound source direction, and the face area mark 913. Next, the control unit 109 determines the location of the mouth within the detected face and detects the coordinates of the mouth (step S806). The control unit 109 causes the display unit 111 to display the mouth area mark 923, as shown in the image 920. At this point, the face area mark may be deleted.
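The patent does not name a face detection method; as one possibility, the sketch below uses OpenCV's stock Haar cascade and approximates the mouth (step S806) as a point in the lower part of each detected face box.

```python
import cv2

def detect_mouth_points(frame_bgr):
    """Steps S804-S806 sketched with OpenCV: detect faces, then take the
    mouth as the lower-middle of each face rectangle (a rough heuristic)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Mouth assumed at ~80% of the face height, horizontally centred.
    return [(x + w // 2, y + int(0.8 * h)) for (x, y, w, h) in faces]
```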
[0051]
Next, the control unit 109 determines whether the detected mouth coordinates lie within a predetermined distance of the on-screen position of the calculated sound source direction (step S807). For example, it determines whether the coordinates of the mouth are at least 10 dots and at most 30 dots away from the on-screen position of the calculated sound source direction. If the condition is not satisfied (step S807; No), detection of the sound source direction continues (step S801). This condition may be set appropriately by the user. If the control unit 109 determines that the position of the mouth has been detected within the predetermined distance of the sound source (step S807; Yes), it first changes the sound source mark to a dotted line, as with the sound source mark 922 of FIG. 9, and then moves the sound source mark to the coordinates of the mouth, as with the sound source mark 932 (step S808).
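The distance test of step S807, using the 10-dot and 30-dot example thresholds from the text (which the patent notes may be set by the user), could look like this:

```python
import math

def mouth_near_source(mouth_xy, source_xy, min_dots=10, max_dots=30):
    """Step S807: correct only when the mouth is at least min_dots away
    (closer means no correction is needed) and at most max_dots away
    (farther means the mouth is probably not this source)."""
    dist = math.hypot(mouth_xy[0] - source_xy[0], mouth_xy[1] - source_xy[1])
    return min_dots <= dist <= max_dots
```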
[0052]
After that, the control unit 109 calculates the differences in the vertical and horizontal directions on the screen between the initially calculated position of the sound source (the position of the sound source mark 912) and the finally set position of the sound source (the position of the sound source mark 932) (step S809), and stores the calculated differences in the recording unit 110 as a correction value (step S810). Next, the control unit 109 checks whether shooting is continuing (step S811); if it is continuing (step S811; No), it returns to detecting the sound source direction, and if an instruction to end shooting has been received (step S811; Yes), the process ends.
[0053]
When a plurality of sound sources are detected in the sound source detection step of FIG. 8 (step S801), or a plurality of faces are detected in step S804, the corresponding numbers of sound source marks and face area marks are displayed. In that case, combinations of on-screen sound source positions and face positions may be detected automatically, as sketched below. Alternatively, a sound source mark and a face area mark may each be selected, one at a time, by touch operation on the display unit 111 or by key operation, to set a pair consisting of an on-screen sound source position and a face position. Thereafter, the processing from step S807 to step S810 is performed on each combination of sound source position and face position to calculate a correction value for the sound source direction. The sound source mark 912 and the like in FIG. 9 need not be drawn; only the on-screen position of the sound source may be corrected automatically. Thereafter, as in the flowchart of FIG. 6 described in the first embodiment, the process of correcting the sound source direction based on the correction value is performed.
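One plausible automatic pairing rule for multiple detections, which the patent leaves open, is greedy nearest-neighbour matching within the step S807 distance bound; everything here is an assumption for illustration.

```python
import math

def pair_sources_with_faces(source_positions, mouth_positions, max_dots=30):
    """Greedily pair each sound source mark with the nearest unmatched
    mouth position within max_dots; unmatched sources are left alone."""
    pairs, remaining = [], list(mouth_positions)
    for s in source_positions:
        if not remaining:
            break
        nearest = min(remaining,
                      key=lambda m: math.hypot(m[0] - s[0], m[1] - s[1]))
        if math.hypot(nearest[0] - s[0], nearest[1] - s[1]) <= max_dots:
            pairs.append((s, nearest))
            remaining.remove(nearest)
    return pairs
```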
[0054]
According to the present embodiment, even if the device has deteriorated through long-term use, the correction value of the sound source direction can be obtained automatically, and the portable device 101 can maintain performance for functions such as making the sound from a specific angle easier to hear, or automatically setting the camera focus on an object present in the sound source direction.
[0055]
Further, in the portable device 101 according to the first to third embodiments described above, since the correction value is stored in the recording unit 110, a correction value for the sound source direction determined at the time of shooting a moving image or photograph can be applied during playback.
Conversely, a correction value determined during playback of a captured moving image file can be applied at the time of shooting a moving image or photograph.
[0056]
Alternatively, different sound source direction correction values may be calculated for a plurality of captured moving image files and stored in association with the respective files. When playing back each captured moving image file, the correction value stored in association with it is read out and used to correct the sound source direction.
[0057]
Further, in the portable device 101 according to the first to third embodiments, there may be a single correction value for the sound source direction, or the screen of the display unit 111 may be divided into several areas, with a correction value set and applied for each divided area. For example, it is first determined whether the right side or the left side of the screen of the display unit 111 has been designated, and an area for storing a correction value for each side is kept. The correction value determined for the right side of the screen is applied to correct sound source localization results on the right side of the screen; similarly, the correction value determined for the left side of the screen is applied to correct sound source localization results on the left side. Different correction values may be held not only for the left and right of the screen but also for the top and bottom.
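The per-area scheme described above amounts to indexing stored correction values by the screen region the estimate falls in. A minimal sketch with a left/right and top/bottom split; the region granularity is an assumption, not fixed by the patent.

```python
def region_of(x, y, screen_w, screen_h):
    """Map an on-screen position to a correction-value slot: one value is
    kept per screen region (here quadrants combining left/right with
    top/bottom, per the text's examples)."""
    horiz = "left" if x < screen_w / 2 else "right"
    vert = "top" if y < screen_h / 2 else "bottom"
    return (horiz, vert)

# corrections[region_of(x, y, w, h)] would hold the (dx, dy) applied to
# localization results that fall in that region.
```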
[0058]
In addition, in the portable device 101 according to the first to third embodiments, the correction value need not be expressed as a number of dots on the image. For example, the corrected position of the sound source on the image may be converted back into three-dimensional space, and the angles from the X-axis, Y-axis, and Z-axis directions obtained. The differences from the pre-correction sound source direction in three-dimensional space are then calculated and stored as a correction value for each axis. When correcting the position of a sound source, after its direction in three-dimensional space is calculated, the correction values are added in each of the X-axis, Y-axis, and Z-axis directions.
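Storing the correction as per-axis angles, as described above, can be sketched as follows; the parameterization (angle of the direction vector from each axis) is one reading of the text, not a specification.

```python
import numpy as np

def axis_angles(direction):
    """Angles of a 3-D direction vector from the X, Y and Z axes."""
    v = np.asarray(direction, dtype=float)
    v = v / np.linalg.norm(v)
    return np.arccos(np.clip(v, -1.0, 1.0))  # radians, one angle per axis

# Correction value per axis:
#   delta = axis_angles(corrected_direction) - axis_angles(estimated_direction)
# and at correction time the deltas are added back to the estimated angles.
```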
[0059]
Further, besides mobile phones, the present invention can be applied to any device equipped with a sound data processing function, such as electronic cameras, video cameras, PDAs, notebook computers, wearable personal computers, calculators, and electronic dictionaries.
[0060]
101: portable device; 102: imaging unit; 103, 104, 105, 106, 310, 320: microphone; 107: key input unit; 108: codec unit; 109: control unit; 110: recording unit; 111: display unit; 112: speaker; 311, 321: sound data; 510, 520, 530, 540, 910, 920, 930: video; 511: dog; 512, 522, 532, 542, 912, 922, 932: sound source mark; 523: message; 911: human face; 913: face area mark; 923: mouth area mark