JP2011120165

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2011120165
An object is to improve usability for the user of an imaging device. A digital camera (10) performs zoom processing that moves an optical lens in response to a change of the optical zoom magnification and, at the same time, performs predetermined processing on the sound collected by the microphones (5L and 5R). By performing the directivity changing process electrically, the directivity toward the voice of the main subject 4 can be enhanced, and the LCD 38 displays a directivity mark indicating the current directivity of the microphones 5L and 5R. The user can thus recognize the change in directivity of the microphones 5L and 5R, which further increases the user's satisfaction with his or her operation. [Selected figure] Figure 5
Imaging device
[0001]
The present invention relates to an imaging apparatus that, in an imaging mode for capturing a subject, performs predetermined processing on the voice emitted from the subject and collected by a plurality of microphones.
[0002]
Imaging devices that have a function of capturing a moving image, collecting the sound around the device while the moving image is shot, and recording the moving image and the sound in synchronization are in practical use.
10-05-2019
1
Patent Document 1 (Japanese Patent Application Laid-Open No. 5-333452) describes a recorder-integrated photographing device that makes it easy to organize information by associating the visual information obtained by photographing with the related sound information. Specifically, a recorder and a camera are built into the device body, and a unidirectional microphone is provided on the body so that it can be tilted and rotated relative to it. A cursor in the finder is set at a position corresponding to the pointing direction of the microphone; when the user operates a button to turn the microphone, the cursor indicates the directivity in accordance with the turning. The document thus discloses a technique that can improve the collection efficiency of the required audio information.
[0003]
However, although the microphone of this prior-art recorder-integrated imaging device has directivity and can be turned, recent imaging devices, particularly digital cameras, are made compact so that users can carry them conveniently. In such a device, a structure that makes a directional microphone rotatable is costly, enlarges the device, and does not satisfy the specifications required of recent imaging devices.
[0004]
Therefore, digital cameras that employ a plurality of microphones capable of collecting sound on a plurality of channels are in practical use. When users shoot a moving image of a subject with such a digital camera, they often apply zoom processing to enlarge the subject before recording, and the moving image and the sound are recorded in synchronization as a moving image file. Later, when the moving image file is played back, the main subject is displayed large and is clearly the focus of attention at a glance; however, the reproduced sound contains not only the sound emitted from the main subject but also the sound from other sound sources, which the user finds very unnatural.
[0005]
Therefore, when capturing a moving image of a subject that has undergone zoom processing using a plurality of microphones capable of multi-channel sound collection, it is desirable to control the directivity of the sound so that the sound emitted from the main subject can be heard well in accordance with the zoom processing. It is further desirable to provide the user with information indicating the directivity of the currently recorded sound in accordance with the zoom processing, thereby increasing the user's awareness of his or her own operation and the satisfaction with it.
[0006]
SUMMARY OF THE INVENTION The present invention solves the above problems by providing an imaging device that, when a moving image of a subject enlarged by zoom processing is captured using a plurality of microphones capable of multi-channel sound collection, controls the directivity of the sound in accordance with the zoom processing so that the sound emitted from the main subject can be heard well, and further provides the user with information indicating the directivity of the currently recorded sound according to the zoom processing.
[0007]
An imaging apparatus according to a first aspect of the present invention comprises: imaging means for capturing an image of a subject and outputting an image signal; a plurality of microphones for collecting sound on a plurality of channels; display control means for outputting an image based on the image signal to display means; zoom processing control means for executing zoom processing in which the image output to the display means is enlarged; zoom magnification changing means for receiving an instruction to change the zoom magnification of the zoom processing; directivity control means for controlling the directivity of the sound obtained from the plurality of microphones; mark display means for superimposing, on the video signal corresponding to the image, a directivity mark indicating the directivity of the sound obtained from the plurality of microphones so that it is displayed at a predetermined position on the display means; and changing means for changing, in accordance with the instruction to change the zoom magnification, both the directivity of the sound obtained from the plurality of microphones and the directivity mark.
[0008]
An imaging apparatus according to a second aspect is according to the first aspect, wherein the directivity control means controls the directivity by electrically processing the sound collected from the plurality of microphones.
[0009]
An imaging apparatus according to a third aspect is according to the first or second aspect, wherein the changing means increases the sensitivity to the sound obtained from the direction of the subject to be enlarged when the zoom magnification change instruction is a change in the enlargement direction, and lowers the sensitivity to the sound obtained from the direction of the subject to be reduced when the instruction is a change in the reduction direction.
[0010]
According to the imaging apparatus of the present invention, when a moving image of a subject enlarged by zoom processing is captured using a plurality of microphones capable of multi-channel sound collection, the directivity of the sound can be controlled in accordance with the zoom processing so that the sound emitted from the subject can be heard well, and the user can further be provided with information indicating the directivity of the currently recorded sound according to the zoom processing.
[0011]
FIG. 1 is an illustrative view showing the positional relationship between the digital camera 10 according to the present embodiment and the sound sources. FIG. 2 is a block diagram showing part of the circuit configuration of the digital camera 10. FIG. 3 is a block diagram showing part of the circuit configuration of the microphone sound processing unit 42. FIG. 4 is a diagram for explaining the method by which the direction determination unit 102 calculates the direction from which a sound arrives. FIG. 5 is an illustrative view showing an example of the screen displayed on the LCD 38. FIG. 6 is an illustrative view showing an example of the directivity patterns of the microphones 5L and 5R. FIG. 7 is a flowchart showing part of the operation of the digital camera 10. FIG. 8 is a lookup table showing an example of the relationship between the optical zoom magnification and the register value.
[0012]
Hereinafter, an embodiment of a digital camera as an example of the imaging device of the
present invention will be specifically described with reference to the drawings.
[0013]
FIG. 1 is an illustrative view showing the positional relationship between the digital camera 10 of this embodiment, which images the main subject 4, and the sound sources P, Q, and R whose sounds are collected.
[0014]
The microphones 5L and 5R of the digital camera 10 collect, on two channels (Lch and Rch), the sound from a sound source P at the main subject 4 located on the optical axis of the lens group 16 including the optical lens of the digital camera 10, from a sound source Q located to the left of the optical axis and not belonging to the main subject 4, and from a sound source R located to the right of the optical axis and likewise not belonging to the main subject 4.
[0015]
FIG. 2 shows a block diagram of the digital camera 10.
[0016]
The digital camera 10 includes a lens group 16 including an optical lens and a diaphragm (not shown); the optical image of the main subject 4 is captured by a CMOS imager unit 18 through the lens group 16 and the diaphragm, which are controlled by a motor driver (not shown). Although this embodiment describes a mode in which the CMOS imager unit 18 is adopted as the image sensor, a CCD imager may be adopted instead.
[0017]
Then, a digital imaging signal for one frame is output from the CMOS imager unit 18 by a capture pulse supplied from a timing generator (not shown) connected to the CPU 22. In the CMOS imager unit 18, the charge accumulated in each pixel is amplified, read out over a signal line, and subjected to correlated double sampling, gain adjustment, clamping, and A/D conversion. The resulting digital imaging signal has one of the R, G, and B color signals for each pixel and is temporarily stored in the SDRAM 32 through the bus 40 under the control of the CPU 22.
[0018]
The digital imaging signal temporarily stored in the SDRAM 32 is input to the signal processing circuit 20 under the control of the CPU 22. The signal processing circuit 20 performs color separation on the input digital imaging signal and further converts it into Y, U, and V signals by YUV conversion. The converted digital image signal is then stored again in the SDRAM 32 via the bus 40. In this embodiment, the processing from the output of the digital imaging signal by the CMOS imager unit 18 to its conversion into a digital image signal by the signal processing circuit 20 and storage in the SDRAM 32 is defined as imaging processing.
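As a rough sketch of the YUV conversion step: the patent does not specify which conversion matrix the signal processing circuit 20 uses, so the BT.601 coefficients below are an assumed, common choice.

```python
def rgb_to_yuv(r, g, b):
    """Convert one pixel from R, G, B to Y, U, V.

    BT.601 luma plus scaled color-difference signals; this matrix is an
    assumption, not necessarily the one used by the signal processing
    circuit 20.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # scaled B - Y
    v = 0.877 * (r - y)                    # scaled R - Y
    return y, u, v
```

For a neutral white pixel (r = g = b), both color-difference signals U and V are zero, as expected.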
[0019]
Further, the digital image signal stored in the SDRAM 32 is output to the LCD 38 through the superimposing circuit 44 as necessary under the control of the CPU 22. The LCD 38 includes an LCD driver (not shown), which converts the Y, U, and V signals into RGB signals so that the LCD 38 displays an image based on the digital image signal. In this embodiment, the processing from the digital image signal stored in the SDRAM 32 to its display on the LCD 38 is defined as display processing. A mode in which the LCD 38 is adopted as the display device is described, but a display device such as an organic EL display may be adopted instead.
[0020]
The operation unit 24 includes a mode selection button 24a, a still image capture button 24b, a moving image capture button 24c, a menu button 24d, a playback start button 24e, and a zoom button 24f. By operating the mode selection button 24a, the user can select either the shooting mode or the playback mode of the digital camera 10.
[0021]
When the shooting mode is selected by operating the mode selection button 24a, the above-described imaging processing is repeatedly executed at a predetermined resolution and frame rate, and the display processing is executed sequentially, so that a through image is displayed on the LCD 38.
[0022]
When the still image capture button 24b is pressed while the through image is displayed, the above-described imaging processing is executed at the resolution for still image capture. The digital image signal thus stored in the SDRAM 32 is input to the compression/decompression processing circuit 26 under the control of the CPU 22, subjected to JPEG compression, and stored again in the SDRAM 32 as compressed still image data.
[0023]
Then, the external memory card control circuit 30, controlled by the CPU 22, records the compressed still image data stored in the SDRAM 32 on the external memory card 36 as a still image file with a JPEG file structure (a plurality of segments such as a start marker, an end marker, header information including thumbnail image data, and the still image data). The sequence from the imaging processing to the recording of the still image file is defined as still image capture processing. In this embodiment, the external memory card 36 is adopted as the recording medium, but an internal memory may be provided inside the digital camera 10 instead.
[0024]
Further, when the moving image capture button 24c is pressed while the through image is displayed, the above-described imaging processing is repeatedly executed at the resolution and frame rate for moving image capture. The CPU 22 also controls the microphone sound processing unit 42 to perform predetermined processing, described later, on the analog audio signals output from the microphones 5L and 5R and to sequentially output digital audio signals to the compression/decompression processing circuit 26. As described with reference to FIG. 1, the sounds from the sound sources P, Q, and R are collected by the microphones 5L and 5R.
[0025]
At the same time, the digital image signals stored in the SDRAM 32 by the imaging processing are sequentially input to the compression/decompression processing circuit 26 under the control of the CPU 22, where the digital audio signals and digital image signals are subjected to MPEG-4 compression, and the compressed video/audio data is stored again in the SDRAM 32.
[0026]
Then, the external memory card control circuit 30, controlled by the CPU 22, sequentially transfers the compressed moving image/audio data stored in the SDRAM 32 to the external memory card 36 and records it as a moving image file with an MP4 file structure (header information including thumbnail image data, followed by the compressed video data and audio data). The sequence from the imaging processing to the recording of the moving image file is defined as moving image capture processing. Note that a real-time recording image is displayed on the LCD 38 by sequentially executing the display processing during the moving image capture processing as well.
[0027]
While the through image or the real-time recording image is displayed, the CPU 22 moves the optical lens included in the lens group 16 by controlling the motor drive unit (not shown) in response to operation of the zoom button 24f.
[0028]
The zoom button 24f supports a wide-angle operation, which moves the optical lens toward the wide-angle side, and a telephoto operation, which moves it toward the telephoto side. When the telephoto operation is performed, the image based on the optical image of the main subject 4 is enlarged on the LCD 38 (hereinafter, enlargement processing); when the wide-angle operation is performed, the main subject 4 is reduced on the LCD 38 (hereinafter, reduction processing). The wide-angle operation is mainly used to return from a magnification enlarged by the telephoto operation to the original magnification.
[0029]
In this embodiment, operating the zoom button 24f changes the magnification of the enlargement processing (hereinafter, the optical zoom magnification) linearly from 1x to 5x. The telephoto and wide-angle operations are defined as optical zoom operations, and the above-described processing based on them is defined as imaging zoom processing.
[0030]
In this embodiment, when the telephoto operation is performed during moving image capture processing, the CPU 22 causes the microphone sound processing unit 42 to execute, in accordance with the optical zoom magnification, a process of increasing the sensitivity to the sound from the sound source P relative to the sounds from the sound sources Q and R collected by the microphones 5L and 5R, that is, an electrical directivity changing process; the digital audio signal subjected to this process is sequentially output to the compression/decompression processing circuit 26. Then, as described above, a moving image file based on the digital image signal and the digital audio signal is recorded on the external memory card 36. Further, the CPU 22 superimposes on the digital image signal a directivity mark indicating the directivity of the microphones 5L and 5R after the directivity changing process according to the optical zoom magnification, and causes the LCD 38 to display it as the real-time recording image.
[0031]
Further, when the playback mode is selected by operating the mode selection button 24a and the playback start button 24e is pressed, the CPU 22 controls the external memory card control circuit 30 to read the header information of the most recent still image file or moving image file recorded on the external memory card 36 and determines whether the file can be played back, that is, whether the file is not corrupted and whether it is a JPEG file or an MP4 file. If the file can be played back, its format is analyzed, and the compressed still image data or compressed moving image/audio data is temporarily stored in the SDRAM 32 and decompressed by the compression/decompression processing circuit 26. An image based on the decompressed digital image signal is then displayed on the LCD 38. When a moving image file is played back, the decompressed digital audio signal is also output to the D/A conversion circuit 46, converted into an analog audio signal, and output.
[0032]
Next, the electrical directivity changing process mentioned above will be described in detail with reference to FIGS. 3 and 4.
[0033]
FIG. 3 is a block diagram showing an outline of the internal configuration of the microphone sound processing unit 42. The microphone sound processing unit 42 separates and extracts the sounds from the plurality of sound sources (in FIG. 1, the three sound sources P, Q, and R) collected by the microphones 5L and 5R. It then detects the current optical zoom magnification notified by the CPU 22 and, in accordance with that magnification, increases the sensitivity to the sound from the sound source P relative to the separated sounds from the other sources. As a result, when the viewer plays back the moving image file, the sound from the main subject 4 is heard more clearly.
[0034]
This will be described in more detail with reference to FIG. 3. The digital audio signals output from the ADCs 6L and 6R are time-domain signals; if the time elapsed from a reference time is t (t an integer), each digital audio signal can be expressed as a function of t. Hereinafter, the digital audio signals output from the ADCs 6L and 6R are referred to as the original signals Li(t) and Ri(t), respectively.
[0035]
The FFT (Fast Fourier Transform) units 101L and 101R perform a discrete Fourier transform on the original signals Li(t) and Ri(t), respectively, to generate frequency spectra; that is, the time-domain digital audio signals output from the ADCs 6L and 6R are converted into frequency-domain signals. Each frequency spectrum can therefore be expressed as a function of frequency f (f a positive integer). Hereinafter, the frequency spectra output from the FFT units 101L and 101R are referred to as L(f) and R(f), respectively.
[0036]
In the digital camera 10 of this embodiment, the ADCs 6L and 6R each convert the analog audio signal into a digital audio signal at a sampling frequency of, for example, 48 kHz. The digital camera 10 treats 1024 samples of the digital audio signal, that is, about 21.3 msec (1024 x 1/48 kHz), as one frame, and processes the audio signal frame by frame.
[0037]
The FFT units 101L and 101R perform the discrete Fourier transform on the digital audio signal one frame at a time. The frequency band of the digital audio signal is subdivided into M bands (M an integer of 2 or more) at a sampling interval of Δf, and a frequency spectrum is calculated for each subdivided band. For example, if the entire frequency band of the digital audio signal is ΔF, the number of subdivided bands is M = ΔF/Δf. Ideally, by narrowing the sampling interval Δf, each subdivided band contains only the component of the digital audio signal from a single sound source; that is, the digital audio signal in each subdivided band can be regarded as the component based on the sound emitted from one of the plurality of sound sources.
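The framing and per-frame transform described above can be sketched as follows, with numpy's FFT standing in for the hardware FFT units 101L and 101R; the 48 kHz sampling rate and 1024-sample frame come from the text.

```python
import numpy as np

FS = 48_000   # sampling frequency of the ADCs 6L/6R (Hz)
FRAME = 1024  # samples per frame, about 21.3 ms (1024 / 48 kHz)

def frame_spectra(li, ri):
    """Transform one 1024-sample frame of the original signals Li(t)
    and Ri(t) into the frequency spectra L(f) and R(f)."""
    L = np.fft.rfft(li[:FRAME])
    R = np.fft.rfft(ri[:FRAME])
    return L, R
```

With a real-input FFT, one 1024-sample frame yields 513 frequency bins, i.e. the subdivided bands f0 ... fm-1 of the text.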
[0038]
When the subdivided bands are f0, f1, f2, ..., fm-1 (m an integer of 1 or more), the frequency spectra L(f) and R(f) are divided into the subdivided bands f0 to fm-1. Hereinafter, the frequency spectra of the subdivided bands constituting L(f) and R(f) are written L(f0), L(f1), L(f2), ... L(fm-1) and R(f0), R(f1), R(f2), ... R(fm-1), respectively.
[0039]
The direction determination unit 102 calculates, from the frequency spectrum of each subdivided band output from the FFT units 101L and 101R, the phase difference with which the digital audio signal in that band reaches the microphones 5L and 5R, and determines the arrival direction of the signal in each subdivided band based on that phase difference.
[0040]
FIG. 4 is a diagram for explaining the method by which the direction determination unit 102 calculates the direction from which the audio signal arrives.
Assume a two-dimensional coordinate plane with orthogonal X and Y axes crossing at the origin O. With the origin O as reference, the positive X direction is the right, the negative X direction the left, the positive Y direction the front, and the negative Y direction the rear. The microphones 5L and 5R are placed at different positions on the X axis, symmetric about the Y axis, with a distance D between them; D is, for example, on the order of several mm. Now suppose the sound contained in the subdivided band of f0 Hz is emitted from a sound source H and arrives at the origin O at an incident angle θ (rad), taken positive counterclockwise about the origin. The incident angles at the microphones 5L and 5R can then also be approximated as θ (rad). Letting Δφ (rad) be the phase difference with which the sound reaches the microphones 5L and 5R, Δφ can be calculated from the frequency spectra L(f0) and R(f0) output from the FFT units 101L and 101R.
[0041]
Specifically, if the real part of the frequency spectrum L(f0) calculated by the discrete Fourier transform is L_r(f0) and its imaginary part is L_i(f0), the phase φl of L(f0) is
[0042]
φl = tan⁻¹( L_i(f0) / L_r(f0) )
[0043]
and can be calculated as above.
[0044]
Similarly, if the real part of the frequency spectrum R(f0) is R_r(f0) and its imaginary part is R_i(f0), the phase φr of R(f0) is
[0045]
φr = tan⁻¹( R_i(f0) / R_r(f0) )
[0046]
and can be calculated as above.
[0047]
Here, since the phase difference Δφ can be calculated as Δφ = φr − φl, it can be calculated by the following equation (1).
[0048]
Δφ = φr − φl = tan⁻¹( R_i(f0) / R_r(f0) ) − tan⁻¹( L_i(f0) / L_r(f0) )   (1)
[0049]
Further, letting the speed of sound be C (mm/sec) and the distance between the microphones 5L and 5R be D (mm), Δφ can also be calculated from the following equation (2).
[0050]
Δφ = 2π f0 D sin θ / C   (2)
[0051]
Therefore, from the above equations (1) and (2), the incident angle θ can be calculated by the following equation (3).
[0052]
θ = sin⁻¹( C Δφ / (2π f0 D) )   (3)
[0053]
From the above, the incident angle θ, that is, the direction from which the sound contained in the subdivided band f0 Hz arrives, can be calculated. The direction determination unit 102 calculates the incident angle of the sound contained in every subdivided band in this way. Hereinafter, the incident angle of the sound contained in a subdivided band is simply referred to as the incident angle of the subdivided band.
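The phase-difference calculation can be checked numerically with a small sketch. The spacing D = 5 mm is an assumed value (the text only says "about several mm"), and the angle convention follows the text: Δφ = φr − φl, with Δφ = 2πf0·D·sinθ/C.

```python
import numpy as np

C = 343_000.0  # speed of sound (mm/s)
D = 5.0        # microphone spacing (mm); assumed, text says "several mm"

def incident_angle(L_f0, R_f0, f0):
    """Incident angle theta (rad) of the band-f0 component, computed
    from the phase difference between the spectra L(f0) and R(f0)."""
    phi_l = np.arctan2(L_f0.imag, L_f0.real)  # phase of L(f0)
    phi_r = np.arctan2(R_f0.imag, R_f0.real)  # phase of R(f0)
    dphi = phi_r - phi_l                      # phase difference
    # dphi = 2*pi*f0*D*sin(theta)/C, inverted to recover theta
    return np.arcsin(C * dphi / (2 * np.pi * f0 * D))

# a band component arriving at theta = 0.3 rad should be recovered:
f0 = 1000.0
dphi_true = 2 * np.pi * f0 * D * np.sin(0.3) / C
theta = incident_angle(np.exp(0j), np.exp(1j * dphi_true), f0)
```

Note that at such small spacings the phase difference stays well within ±π for audio frequencies, so no phase unwrapping is needed in this sketch.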
[0054]
In this embodiment, the direction determination unit 102 outputs to the zoom magnification determination unit 103 all pairs consisting of the frequency spectrum of a subdivided band of L(f) and the incident angle of that band. Alternatively, the frequency spectrum of each subdivided band of R(f) may be paired with the incident angle and output.
[0055]
For example, suppose the subdivided bands f0, f1, f2, f3, f4, f5, f6, f7, f8, and f9 have incident angles θ0, θ1, θ0, θ0, θ1, θ1, θ1, θ2, θ2, and θ2 (rad), respectively. The direction determination unit 102 then outputs (L(f0), θ0), (L(f1), θ1), (L(f2), θ0), (L(f3), θ0), (L(f4), θ1), (L(f5), θ1), (L(f6), θ1), (L(f7), θ2), (L(f8), θ2), and (L(f9), θ2) to the zoom magnification determination unit 103.
[0056]
The zoom magnification determination unit 103 extracts, from the pairs of subdivided-band spectrum and incident angle output by the direction determination unit 102, the spectra of the subdivided bands for each incident angle and combines them to generate a composite frequency spectrum; that is, a composite frequency spectrum is generated for each arrival direction of the sound. Hereinafter, the composite frequency spectrum generated by combining the spectra of the subdivided bands arriving from the incident angle θ is referred to as L(θ).
[0057]
For example, when the zoom magnification determination unit 103 receives (L(f0), θ0), (L(f1), θ1), (L(f2), θ0), (L(f3), θ0), (L(f4), θ1), (L(f5), θ1), (L(f6), θ1), (L(f7), θ2), (L(f8), θ2), and (L(f9), θ2) from the direction determination unit 102, it extracts the spectra L(f0), L(f2), and L(f3) of the subdivided bands with incident angle θ0 and combines them to generate the composite frequency spectrum L(θ0).
[0058]
Similarly, L(f1), L(f4), L(f5), and L(f6) are extracted and combined into the composite frequency spectrum L(θ1) for the incident angle θ1, and L(f7), L(f8), and L(f9) are extracted and combined into the composite frequency spectrum L(θ2) for the incident angle θ2.
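The grouping performed by the zoom magnification determination unit 103 can be sketched as follows. The text does not define "combining" precisely; accumulating the band spectra of each group is one simple interpretation, assumed here.

```python
def composite_spectra(pairs):
    """Group (band spectrum, incident angle) pairs by angle and combine
    each group into one composite spectrum L(theta) per direction."""
    groups = {}
    for spec, theta in pairs:
        groups.setdefault(theta, []).append(spec)
    # summing each group is an assumed stand-in for the synthesis
    # actually performed by the zoom magnification determination unit
    return {theta: sum(specs) for theta, specs in groups.items()}
```

Applied to the example above, the bands with angle θ0 would form L(θ0), those with θ1 would form L(θ1), and those with θ2 would form L(θ2).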
[0059]
Here, suppose that the composite frequency spectra are generated based on the sounds from the sound sources P, Q, and R shown in FIG. 1. Let L(θP) be the composite frequency spectrum from the sound source P, L(θQ) that from the sound source Q, and L(θR) that from the sound source R. Since the sound source P is located in the direction of the optical axis of the lens group 16 of the digital camera 10, θP ≈ 90°. The zoom magnification determination unit 103 can therefore recognize that the sound source P is the sound source at the main subject 4. It can also recognize that the sound source Q is located in a direction shifted counterclockwise by some angle from the sound source P on the optical axis, and that the sound source R is located in a direction shifted clockwise by some angle from it.
[0060]
The zoom magnification determination unit 103 notifies the gain adjustment unit 104 of the composite frequency spectra L(θP), L(θQ), and L(θR). The CPU 22 determines a register value according to the zoom magnification, notifies the gain adjustment unit 104 of it, and the value is set in the register 104a provided in the gain adjustment unit 104.
[0061]
In accordance with the register value of the register 104a, the gain adjustment unit 104 amplifies the frequency spectrum components corresponding to the composite frequency spectrum L(θP) by a first amplification factor, those corresponding to L(θQ) by a second amplification factor, and those corresponding to L(θR) by a third amplification factor.
[0062]
This will be described concretely using the directivity pattern diagrams of the microphones 5L (Lch) and 5R (Rch) shown in FIGS. 6(A) to 6(C). For example, when a zoom operation corresponding to an optical zoom magnification of 1x to less than 2.5x is performed, the first, second, and third amplification factors are set as shown in FIG. 6(A), so that the directivity of the microphone 5L spans approximately 90° counterclockwise about the optical axis direction of the lens group 16 of the digital camera 10, and the directivity of the microphone 5R spans approximately 90° clockwise about the same axis.
[0063]
When a zoom operation corresponding to an optical zoom magnification of 2.5x or more and less than 4x is performed, the amplification factors are set as shown in FIG. 6(B), so that the directivity of the microphone 5L spans approximately 60° counterclockwise about the optical axis direction of the lens group 16, and the directivity of the microphone 5R spans approximately 60° clockwise about the same axis.
[0064]
When a zoom operation corresponding to an optical zoom magnification of 4× or more is performed, the first, second, and third amplification factors are set such that, as shown in FIG. 6(C), the directivity of the microphone 5L points approximately 30° counterclockwise from the optical axis direction of the lens group 16 of the digital camera 10, and the directivity of the microphone 5R points approximately 30° clockwise from that optical axis direction.
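The three zoom ranges above map onto three directivity angles. A minimal sketch of that mapping, with the function name invented for illustration (the patent itself only states the ranges and angles):

```python
# Illustrative mapping from optical zoom magnification to the directivity
# angle described for FIGS. 6(A)-(C): below 2.5x -> 90 deg, 2.5x to below
# 4x -> 60 deg, 4x and above -> 30 deg. The angle is measured from the
# optical axis: counterclockwise for microphone 5L, clockwise for 5R.
def directivity_angle(zoom):
    if zoom < 2.5:
        return 90
    elif zoom < 4.0:
        return 60
    else:
        return 30

print(directivity_angle(1.0))  # 90
print(directivity_angle(3.0))  # 60
print(directivity_angle(5.0))  # 30
```

Narrowing the angle as the zoom increases is what keeps the audio "aimed" at the subject the lens is framing.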
[0065]
The IFFT (Inverse Fast Fourier Transform) units 105L and 105R perform an inverse Fourier transform on the gain-adjusted frequency spectra L(f) and R(f), respectively, converting them into time-domain signals Lo(t) and Ro(t), which are output to the bus 40.
[0066]
The directivity changing process described above is only one example; the directivity may be changed by other methods.
[0067]
Further, in the digital camera 10 according to the present embodiment, when the electrical directivity changing process is performed in response to a change of the optical zoom magnification as described above, the resulting change in the directivity of the microphones 5L and 5R can be indicated to the user.
Specifically, the CPU 22 controls the superimpose circuit 44 to superimpose a directivity mark representing the directivity of the microphones 5L and 5R on the digital image signal, which is output to the LCD 38 as a real-time recording image.
More specifically, when the CPU 22 detects the current optical zoom magnification, it determines the register value by referring to the look-up table shown in FIG. expressing the relationship between the optical zoom magnification and the register value, and sets that value in a register 44a provided in the superimpose circuit 44.
The superimpose circuit 44 performs a directivity mark superimposition process, superimposing the directivity mark corresponding to the register value of the register 44a on the digital image signal, and outputs the result to the bus 40.
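The look-up-table step can be sketched as below. The concrete register values and mark names are invented for illustration (the patent's figure defines the actual table); only the structure, zoom magnification resolved to a register value, which selects a directivity mark 200/300/400, follows the text.

```python
# Hypothetical sketch of the CPU's look-up: resolve the optical zoom
# magnification to a register value via a table, then let the superimpose
# circuit pick the directivity mark from that value. Upper bounds follow
# the three zoom ranges used for FIGS. 6(A)-(C).
ZOOM_TO_REGISTER = [(2.5, 0), (4.0, 1), (float("inf"), 2)]  # (upper bound, value)
REGISTER_TO_MARK = {0: "mark_200", 1: "mark_300", 2: "mark_400"}

def register_value(zoom):
    for upper, value in ZOOM_TO_REGISTER:
        if zoom < upper:
            return value

print(REGISTER_TO_MARK[register_value(1.0)])  # mark_200
print(REGISTER_TO_MARK[register_value(3.0)])  # mark_300
print(REGISTER_TO_MARK[register_value(5.0)])  # mark_400
```

One register value drives both the gain adjustment (register 104a) and the mark selection (register 44a), which keeps the displayed mark consistent with the actual directivity.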
[0068]
FIGS. 5(A) to 5(C) show screens of the real-time recording image displayed on the LCD 38, in which a mark indicating the directivity is superimposed on the digital image signal in response to the change of the optical zoom magnification.
[0069]
FIG. 5(A) shows the screen of the real-time recording image displayed on the LCD 38 when the optical zoom magnification is 1×; a directivity mark 200, which lets the user visualize the directivity of the microphones 5L and 5R shown in FIG. , is superimposed on the digital image signal.
[0070]
FIG. 5(B) shows the screen of the real-time recording image displayed on the LCD 38 when the optical zoom magnification is 3×; a directivity mark 300, which lets the user visualize the directivity of the microphones 5L and 5R shown in FIG. , is superimposed on the digital image signal.
[0071]
FIG. 5(C) shows the screen of the real-time recording image displayed on the LCD 38 when the optical zoom magnification is 5×; a directivity mark 400, which lets the user visualize the directivity of the microphones 5L and 5R shown in FIG. , is superimposed on the digital image signal.
[0072]
FIG. 7 is a flowchart showing an example of the flow of the directivity changing process and the directivity mark superimposition process.
These processes are realized by the CPU 22 executing a program stored in a flash memory (not shown).
[0073]
First, when the mode is switched to the moving image capturing mode by operating the mode selection button 24a of the digital camera 10, the CPU 22 detects the optical zoom magnification in step S1.
Next, in step S3, a register value corresponding to the optical zoom magnification is determined using the look-up table shown in FIG.
Next, in step S5, the determined value is set in the register 104a provided in the gain adjustment unit 104 of the microphone audio processing unit 42 and in the register 44a provided in the superimpose circuit 44.
Next, in step S7, it is determined whether there is an instruction to change the optical zoom magnification.
If the determination is NO, step S7 is repeated until it becomes YES.
If the determination in step S7 is YES, the process returns to step S1 and the above processing is repeated.
[0074]
As described above, the digital camera 10 according to the present embodiment executes the zoom processing that moves the optical lens in response to a change of the optical zoom magnification and, at the same time, performs predetermined processing, namely the electrical directivity changing process, on the sound collected by the microphones 5L and 5R, thereby enhancing the directivity toward the voice of the main subject 4.
In addition, a directivity mark indicating the current directivity of the microphones 5L and 5R is displayed on the LCD 38, so that the user can recognize the change in directivity of the microphones 5L and 5R, further increasing the user's satisfaction with his or her operation.
[0075]
In the present embodiment, the CPU 22 executes the directivity changing process and the directivity mark superimposition process in response to changes of the optical zoom magnification, but it may instead execute them in response to changes of the electronic zoom magnification.
[0076]
Further, in the present embodiment, the directivity changing process and the directivity mark superimposition process are executed in response to changes of the optical zoom magnification while the moving image capturing process is being executed; however, the processes may also be executed when the optical zoom magnification is changed while a through image is displayed, so that the directivity mark is displayed on the LCD 38.
[0077]
5L: Microphone, 5R: Microphone, 10: Digital camera, 16: Lens group, 18: CMOS imager unit, 20: Signal processing circuit, 22: CPU, 24: Operation unit, 30: External memory card control circuit, 32: SDRAM, 36: External memory card, 38: LCD, 42: Microphone audio processing unit, 44: Superimpose circuit, 46: D/A conversion circuit, 48: Speaker