Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2018101987
Abstract: The presence of a drone and its position can be easily determined from a captured
image. In the unmanned air vehicle detection system 5, an omnidirectional camera CA images a
surveillance area 8, and a microphone array MA picks up the sound of the surveillance area. The
monitoring device 10 detects an unmanned air vehicle appearing in the surveillance area using
the sound data collected by the microphone array MA. When displaying the image data of the
surveillance area captured by the omnidirectional camera CA on the monitor 50, the signal
processing unit in the monitoring device 10 superimposes an identification mark, which
represents the detected unmanned air vehicle as visual information, on the image data of the
surveillance area. [Selected figure] Figure 1
Sound source display system for monitoring area and sound source display method
[0001]
The present invention relates to a sound source display system of a surveillance area and a sound
source display method.
[0002]
Conventionally, there have been known flying object monitoring apparatuses capable of
detecting the presence of a flying object and its flight direction using a plurality of sound
detection units that detect, for each direction, the sounds generated in a monitoring area
(see, for example, Patent Document 1).
When the processing device of the flying object monitoring apparatus detects a flying object and
its flight direction through sound detection by the microphones, the monitoring camera is
pointed in the direction in which the flying object is flying. The processing device then displays
the video captured by the surveillance camera on the display device.
[0003]
JP-A-2006-168421
[0004]
However, in the configuration of Patent Document 1, the image displayed on the display device is
the captured image itself in which the flying object appears, and even when the user views the
flying object on the display device, the user cannot easily determine whether that flying object is
the unmanned air vehicle of interest.
For example, various flying objects other than the unmanned air vehicle targeted by the user may
appear in the image captured by the surveillance camera. In this case, it has been difficult to
easily grasp whether the targeted unmanned air vehicle is present and, if it is, where it is located
relative to its surroundings.
[0005]
The present invention has been devised in view of the above-described conventional situation,
and an object thereof is to provide a sound source display system and a sound source display
method for a monitoring area that display a sound source of the monitoring area using an image
captured by a camera.
[0006]
According to the present invention, there is provided a sound source display system for a
monitoring area, comprising: a camera for imaging the monitoring area; a microphone array for
picking up sound of the monitoring area; a display unit for displaying image data of the
monitoring area imaged by the camera; and a signal processing unit which, using the sound data
collected by the microphone array, derives a sound parameter specifying the magnitude of the
sound in the monitoring area for each predetermined unit of pixels constituting the image data of
the monitoring area, converts the sound parameter stepwise into different visual information in
accordance with comparisons between the derived sound parameter and a plurality of threshold
values relating to the magnitude of the sound, superimposes the resulting sound source image
information on the image data of the monitoring area for each predetermined unit of pixels, and
displays the result on the display unit.
[0007]
Further, according to the present invention, there is provided a sound source display method in a
sound source display system for a monitoring area including a camera and a microphone array,
wherein the camera captures an image of the monitoring area, the microphone array picks up
sound of the monitoring area, a sound parameter specifying the magnitude of the sound in the
monitoring area is derived for each predetermined unit of pixels constituting the image data of
the monitoring area using the sound data collected by the microphone array, sound source image
information, in which the sound parameter is converted stepwise into different visual
information in accordance with comparisons between the sound parameter and a plurality of
threshold values relating to the magnitude of the sound, is superimposed on the image data of
the monitoring area for each predetermined unit of pixels, and the result is displayed on a
display unit.
[0008]
According to the present invention, it is possible to display the sound source of the monitoring
area using the image captured by the camera.
[0009]
FIG. 1 is a diagram showing an example of a schematic configuration of the unmanned air vehicle detection system of the present embodiment.
FIG. 2 is a diagram showing an example of the external appearance of a sound source detection unit.
FIG. 3 is a block diagram showing in detail an example of the internal configuration of a microphone array.
FIG. 4 is a block diagram showing in detail an example of the internal configuration of an omnidirectional camera.
FIG. 5 is a block diagram showing in detail an example of the internal configuration of a PTZ camera.
FIG. 6 is a block diagram showing in detail an example of the internal configuration of a monitoring device.
FIG. 7 is a timing chart showing an example of the pattern of the detection sound signal of an unmanned air vehicle registered in a memory.
FIG. 8 is a timing chart showing an example of the frequency change of the detection sound signal obtained as a result of frequency analysis processing.
FIG. 9 is a sequence chart showing an example of the detection operation of the unmanned air vehicle in the unmanned air vehicle detection system of this embodiment.
FIG. 10 is a flow chart showing an example of the details of the unmanned air vehicle detection determination procedure.
FIG. 11 is a diagram showing an example of how the pointing direction is sequentially scanned in the monitoring area and a flying object is detected.
FIG. 12 is a diagram showing an example of a display screen of a monitor when an unmanned air vehicle is not detected.
FIG. 13 is a diagram showing an example of a display screen of a monitor when an unmanned air vehicle is detected.
FIG. 14 is a diagram showing an example of the display screen of the monitor when the unmanned air vehicle is detected and the PTZ camera changes the optical axis direction in conjunction with the detection.
FIG. 15 is a diagram showing another example of the display screen of the monitor when the unmanned air vehicle is detected and the PTZ camera changes the optical axis direction in conjunction with the detection.
[0010]
Hereinafter, an embodiment (hereinafter, referred to as “the present embodiment”) which
specifically discloses an unmanned air vehicle detection system and an unmanned air vehicle
detection method according to the present invention will be described in detail with reference to
the drawings as appropriate.
However, more detailed description than necessary may be omitted.
For example, detailed description of already well-known matters and redundant description of
substantially the same configuration may be omitted.
This is to avoid unnecessary redundancy in the following description and to facilitate
understanding by those skilled in the art.
It is to be understood that the attached drawings and the following description are provided to
enable those skilled in the art to fully understand the present disclosure, and they are not
intended to limit the claimed subject matter.
[0011]
FIG. 1 is a diagram showing an example of a schematic configuration of the unmanned air vehicle
detection system 5 of the present embodiment.
The unmanned air vehicle detection system 5 detects, as a detection target, an unmanned air
vehicle dn (see, for example, FIG. 14) that the user is looking for. The unmanned air vehicle dn is,
for example, a drone flying autonomously using a GPS (Global Positioning System) function, a
radio-controlled helicopter wirelessly steered by a third party, or the like. Such an unmanned air
vehicle dn is used, for example, for aerial imaging of a target, transportation of materials, and the
like.
[0012]
In this embodiment, a multi-copter drone equipped with a plurality of rotors (in other words,
rotary blades) is illustrated as the unmanned air vehicle dn. In a multi-copter drone, generally,
when the number of rotor blades is two, a harmonic at twice a specific fundamental frequency,
and further harmonics at integer multiples of that frequency, are generated. Similarly, when the
number of rotor blades is three, a harmonic at three times the fundamental frequency and
further harmonics at integer multiples thereof are generated. The same applies when the number
of rotor blades is four or more.
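As a rough illustration of this harmonic structure, the sketch below computes the tone series for a rotor with a given blade count; the rotation frequency and all names are assumed for illustration and are not taken from the patent:

```python
# A rotor with k blades turning at a base rotational frequency f0 produces
# tones at the blade-passing frequency k*f0 and at integer multiples of it.

def rotor_harmonics(base_freq_hz: float, blade_count: int, n_harmonics: int = 4) -> list[float]:
    """Return the first n_harmonics tone frequencies for one rotor."""
    fundamental = base_freq_hz * blade_count  # blade-passing frequency
    return [fundamental * k for k in range(1, n_harmonics + 1)]

# Example: a two-blade rotor spinning at 100 Hz emits tones near
# 200, 400, 600, 800 Hz; a three-blade rotor near 300, 600, 900, 1200 Hz.
print(rotor_harmonics(100.0, 2))
print(rotor_harmonics(100.0, 3))
```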
[0013]
The unmanned air vehicle detection system 5 is configured to include a plurality of sound
source detection units UD, a monitoring device 10, and a monitor 50. The plurality of sound
source detection units UD are each connected to the monitoring device 10 via a network NW.
Each sound source detection unit UD includes a microphone array MA, an omnidirectional
camera CA, and a PTZ camera CZ. These are collectively referred to as the sound source detection
unit UD unless the individual sound source detection units need to be distinguished. Similarly,
the notation microphone array MA, omnidirectional camera CA, and PTZ camera CZ is used
unless individual microphone arrays, omnidirectional cameras, and PTZ cameras need to be
distinguished.
[0014]
In the sound source detection unit UD, the microphone array MA non-directionally picks up
sound from all directions in the sound collection area in which it is installed. The microphone
array MA has a housing 15 (see FIG. 2) in which a cylindrical opening having a predetermined
width is formed at the center. The sounds to be collected by the microphone array MA broadly
include, for example, mechanical operating sounds such as those of a drone and voices emitted by
humans; they are not limited to sounds in the audible frequency range (that is, 20 Hz to 23 kHz)
and may also include low-frequency sounds below the audible range and ultrasonic sounds
above it.
[0015]
The microphone array MA includes a plurality of omnidirectional microphones M1 to Mn (see
FIG. 3). The microphones M1 to Mn are arranged concentrically at predetermined intervals (e.g.,
uniform intervals) around the opening provided in the housing 15, along its circumferential
direction. As each microphone, for example, an electret condenser microphone (ECM: Electret
Condenser Microphone) is used. The microphone array MA transmits the sound data obtained by
the sound collection of the microphones M1 to Mn to the monitoring device 10 via the network
NW. The arrangement of the microphones M1 to Mn described above is merely an example, and
other arrangements may be used.
[0016]
In addition, the microphone array MA has a plurality of amplifiers PA1 to PAn (see FIG. 3) that
respectively amplify the output signals of the plurality of microphones M1 to Mn (for example,
n = 32). The analog signal output from each amplifier is converted into a digital signal by the
corresponding one of the A/D converters A1 to An (see FIG. 3) described later. The number of
microphones in the microphone array is not limited to 32 and may be another number (for
example, 16, 64, or 128).
[0017]
Inside the opening formed at the center of the housing 15 (see FIG. 2) of the microphone array
MA, an omnidirectional camera CA substantially matching the volume of the opening is
accommodated. That is, the microphone array MA and the omnidirectional camera CA are
disposed integrally (see FIG. 2). The omnidirectional camera CA is a camera equipped with a
fisheye lens capable of capturing an omnidirectional image of the imaging area, which is also the
sound collection space. In the present embodiment, the sound collection area and the imaging
area are both described as a common monitoring area, but their spatial sizes (for example,
volumes) need not be the same. For example, the volume of the sound collection area may be
larger or smaller than the volume of the imaging area. The point is that the sound collection area
and the imaging area share a common volume. The omnidirectional camera CA functions, for
example, as a surveillance camera capable of imaging the imaging area in which the sound source
detection unit UD is installed. That is, the omnidirectional camera CA has, for example, an angle
of view of 180° in the vertical direction and 360° in the horizontal direction, and images, for
example, a monitoring area 8 (see FIG. 11) that forms a hemisphere.
[0018]
In each sound source detection unit UD, the omnidirectional camera CA and the microphone
array MA are coaxially arranged by fitting the omnidirectional camera CA inside the opening of
the housing 15. When the optical axis of the omnidirectional camera CA and the central axis of
the housing of the microphone array MA coincide in this way, the imaging area and the sound
collection area around the axial direction become approximately the same, and the position of a
subject in the image and the position of a sound source to be collected can be expressed in the
same coordinate system (for example, coordinates given as (horizontal angle, vertical angle)).
Each sound source detection unit UD is attached, for example, so that the upward direction
becomes the sound collecting surface and the imaging surface, in order to detect the unmanned
air vehicle dn flying in from above (see FIG. 2).
[0019]
The monitoring device 10 can form directivity (that is, perform beamforming) with the main
beam pointed in an arbitrary direction, based on the user's operation, for the omnidirectional
sound collected by the microphone array MA, and can thereby emphasize the sound in that
direction. Techniques related to the directivity control processing of sound data for beamforming
the sound picked up by the microphone array MA are known, as shown in, for example,
Reference Patent Documents 1 and 2 below.
[0020]
(Reference Patent Document 1) JP-A-2014-143678
(Reference Patent Document 2) JP-A-2015-029241
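As context for the directivity control cited above, the following is a minimal delay-and-sum beamforming sketch. It illustrates the general known technique only; the array geometry, sampling rate, and function names are assumptions, not the implementation of Reference Patent Documents 1 and 2.

```python
# Minimal delay-and-sum beamformer: advance each microphone signal by its
# geometric delay toward the chosen azimuth, then average. Far-field model
# with integer-sample alignment; all names and values are illustrative.
import numpy as np

def delay_and_sum(signals: np.ndarray, mic_xy: np.ndarray,
                  azimuth_rad: float, fs: int, c: float = 343.0) -> np.ndarray:
    """signals: (n_mics, n_samples); mic_xy: (n_mics, 2) positions in metres."""
    direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    delays = mic_xy @ direction / c               # relative arrival times (s)
    shifts = np.round(delays * fs).astype(int)    # delays in whole samples
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, -s)                   # align each channel, then sum
    return out / len(signals)
```

Summing aligned channels reinforces sound arriving from the steered direction while other directions partially cancel, which is the emphasis effect described above.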
[0021]
The monitoring device 10 generates an omnidirectional image by processing the image captured
by the omnidirectional camera CA (hereinafter sometimes abbreviated as the "captured image").
Note that the omnidirectional image may be generated not by the monitoring device 10 but by
the omnidirectional camera CA.
[0022]
The monitoring device 10 outputs to the monitor 50 and displays various images (see FIG. 15)
based on the calculated sound-pressure values of the sound collected by the microphone array
MA and on the captured image taken by the omnidirectional camera CA. For example, the
monitoring device 10 displays on the monitor 50 the omnidirectional image GZ1 together with
an identification mark mk1 (see FIG. 13) obtained by converting the detected unmanned air
vehicle dn into visual information within the omnidirectional image GZ1. The monitoring device
10 is configured using, for example, a PC (Personal Computer) or a server. Here, visual
information means, for example, information represented in the omnidirectional image GZ1 to a
degree that allows the user to clearly distinguish it from other objects when viewing the
omnidirectional image GZ1.
[0023]
The monitor 50 displays the omnidirectional image GZ1 captured by the omnidirectional camera
CA. The monitor 50 also generates and displays a composite image in which the identification
mark mk is superimposed on the omnidirectional image GZ1. The monitor 50 may be configured
as an integral unit with the monitoring device 10.
[0024]
In FIG. 1, the plurality of sound source detection units UD and the monitoring device 10 each
have a communication interface and are connected to one another for data communication via
the network NW. The network NW may be a wired network (for example, an intranet, the
Internet, or a wired LAN (Local Area Network)) or a wireless network (for example, a wireless
LAN). The sound source detection unit UD and the monitoring device 10 may also be connected
directly without the network NW. The monitoring device 10 and the monitor 50 are installed in a
monitoring room RM in which a user such as a member of the surveillance staff resides.
[0025]
FIG. 2 is a view showing the appearance of the sound source detection unit UD. The sound source
detection unit UD has a support 70 that mechanically supports the above-mentioned microphone
array MA, omnidirectional camera CA, PTZ camera CZ, and the like. The support 70 is a combined
structure including a tripod 71, two rails 72 fixed to the top plate 71a of the tripod 71, and a first
mounting plate 73 and a second mounting plate 74 attached near the two ends of the two rails
72, respectively.
[0026]
The first mounting plate 73 and the second mounting plate 74 are mounted so as to straddle the
two rails 72 and lie in substantially the same plane. Further, the first mounting plate 73 and the
second mounting plate 74 are slidable on the two rails 72, and can be adjusted and fixed at
positions farther apart from or closer to each other.
[0027]
The first mounting plate 73 is a disk-shaped plate. An opening 73a is formed at the center of the
first mounting plate 73. The housing 15 of the microphone array MA is accommodated and fixed
in the opening 73a. On the other hand, the second mounting plate 74 is a substantially
rectangular plate. An opening 74a is formed in a portion near the outer edge of the second
mounting plate 74. The PTZ camera CZ is accommodated and fixed in the opening 74a.
[0028]
As shown in FIG. 2, the optical axis L1 of the omnidirectional camera CA accommodated in the
housing 15 of the microphone array MA and the optical axis L2 of the PTZ camera CZ attached
to the second mounting plate 74 are set parallel to each other in the initial installation state.
[0029]
The tripod 71 is supported on the ground contact surface by its three legs 71b; the position of
the top plate 71a can be moved in the vertical direction with respect to the ground contact
surface by manual operation, and the orientation of the top plate 71a can be adjusted in the pan
and tilt directions.
Thus, the sound collection area of the microphone array MA (in other words, the imaging area of
the omnidirectional camera CA) can be set in an arbitrary direction.
[0030]
FIG. 3 is a block diagram showing in detail an example of the internal configuration of the
microphone array MA. The microphone array MA shown in FIG. 3 is configured to include the
plurality of microphones M1 to Mn (for example, n = 32), a plurality of amplifiers PA1 to PAn
that respectively amplify the output signals of the microphones M1 to Mn, a plurality of A/D
converters A1 to An that respectively convert the analog signals output from the amplifiers into
digital signals, a compression processing unit 25, and a transmission unit 26.
[0031]
The compression processing unit 25 generates audio data packets based on the digital audio
signals output from the A / D converters A1 to An. The transmitting unit 26 transmits the packet
of the audio data generated by the compression processing unit 25 to the monitoring device 10
via the network NW.
[0032]
Thus, the microphone array MA amplifies the output signals of the microphones M1 to Mn by
the amplifiers PA1 to PAn, and converts the output signals to digital audio signals by the A / D
converters A1 to An. Thereafter, the microphone array MA generates a packet of audio data in
the compression processing unit 25 and transmits the packet of audio data to the monitoring
device 10 via the network NW.
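A minimal sketch of this capture chain (amplify, digitise, packetise, transmit); the gain, the 16-bit PCM packet layout, and the sequence-number header are illustrative assumptions, not the patent's format:

```python
# Amplify the microphone outputs (PA1..PAn), quantise them (A/D converters
# A1..An), and packetise the result for the transmission unit.
import struct
import numpy as np

N_MICS = 32   # the embodiment mentions n = 32 microphones

def capture_packet(analog_block: np.ndarray, seq: int, gain: float = 20.0) -> bytes:
    """analog_block: (N_MICS, n_samples) analog samples in volts; gain is
    chosen so the amplified signal spans roughly [-1, 1]."""
    assert analog_block.shape[0] == N_MICS
    amplified = analog_block * gain                          # amplifiers PA1..PAn
    digital = np.clip(amplified * 32767, -32768, 32767)      # A/D quantisation
    pcm = digital.astype('<i2').tobytes()                    # 16-bit PCM payload
    header = struct.pack('<II', seq, analog_block.shape[1])  # seq, samples/channel
    return header + pcm   # compression (unit 25) is omitted from this sketch

# The transmission unit 26 would then send each packet to the monitoring
# device over the network NW, e.g. via a UDP socket (endpoint assumed).
```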
[0033]
FIG. 4 is a block diagram showing an example of the internal configuration of the omnidirectional
camera CA in detail. The omnidirectional camera CA shown in FIG. 4 is configured to include a
CPU 41, a communication unit 42, a power management unit 44, an image sensor 45, a memory
46, and a network connector 47. Note that, in FIG. 4, the illustration of the fisheye lens provided
in the front stage (that is, the right side of FIG. 4) of the image sensor 45 is omitted.
[0034]
The CPU 41 performs signal processing for integrally controlling the operation of each part of
the omnidirectional camera CA, input / output processing of data with other parts, arithmetic
processing of data, and storage processing of data. Instead of the CPU 41, a processor such as a
micro processing unit (MPU) or a digital signal processor (DSP) may be provided.
[0035]
For example, the CPU 41 generates cut-out image data obtained by cutting out an image of a
specific range (direction) among omnidirectional image data according to a designation of a user
who operates the monitoring device 10 and stores the cut-out image data in the memory 46.
[0036]
The image sensor 45 is configured using, for example, a CMOS (Complementary Metal Oxide
Semiconductor) sensor or a CCD (Charge Coupled Device) sensor, and acquires omnidirectional
image data by imaging, on its light receiving surface, the optical image of the reflected light from
the imaging area collected by the fisheye lens (not shown).
[0037]
The memory 46 includes a ROM 46z in which a program defining the operation of the
omnidirectional camera CA and setting data are stored, a work area that stores omnidirectional
image data, cut-out image data obtained by cutting out a partial range of the omnidirectional
image data, and work data, and a memory card 46x that is detachably connected to the
omnidirectional camera CA and in which various data are stored.
[0038]
The communication unit 42 is a network interface (I / F) that controls data communication with
the network NW connected via the network connector 47.
[0039]
The power management unit 44 supplies DC power to each part of the omnidirectional camera
CA.
In addition, the power management unit 44 may supply DC power to devices connected to the
network NW via the network connector 47.
[0040]
The network connector 47 is a connector that transmits omnidirectional image data or
two-dimensional panoramic image data to the monitoring device 10 via the network NW, and can
also be supplied with power via a network cable.
[0041]
FIG. 5 is a block diagram showing in detail an example of the internal configuration of the PTZ
camera CZ.
Parts similar to those of the omnidirectional camera CA are given reference numerals
corresponding to those in FIG. 4, and their description is omitted.
The PTZ camera CZ is a camera whose optical axis direction (sometimes referred to as the
imaging direction) can be adjusted by an angle-of-view change instruction from the monitoring
device 10.
[0042]
Similar to the omnidirectional camera CA, the PTZ camera CZ includes a CPU 51, a
communication unit 52, a power management unit 54, an image sensor 55, a memory 56, and a
network connector 57, and further includes an imaging direction control unit 58 and a lens drive
motor 59.
When receiving an angle-of-view change instruction from the monitoring device 10, the CPU 51
notifies the imaging direction control unit 58 of the instruction.
[0043]
The imaging direction control unit 58 controls the imaging direction of the PTZ camera CZ in at
least one of the pan direction and the tilt direction in accordance with the angle-of-view change
instruction notified from the CPU 51, changes the zoom magnification as necessary, and outputs
a control signal to the lens drive motor 59.
The lens drive motor 59 drives the imaging lens in accordance with this control signal, changing
the imaging direction (the direction of the optical axis L2) and adjusting the focal length of the
imaging lens to change the zoom magnification.
[0044]
FIG. 6 is a block diagram showing an example of the internal configuration of the monitoring
apparatus 10 in detail. The monitoring device 10 illustrated in FIG. 6 has a configuration
including at least a communication unit 31, an operation unit 32, a signal processing unit 33, a
speaker device 37, a memory 38, and a setting management unit 39.
[0045]
The signal processing unit 33 is configured using, for example, a CPU (Central Processing Unit),
an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor), and performs control
processing for integrally controlling the operation of each unit of the monitoring device 10, data
input/output processing with the other units, data arithmetic processing, and data storage
processing. The signal processing unit 33 includes a directivity processing unit 63, a frequency
analysis unit 64, an object detection unit 65, a detection result determination unit 66, a scan
control unit 67, a detection direction control unit 68, a sound source direction detection unit 34,
and an output control unit 35. The monitoring device 10 is connected to the monitor 50.
[0046]
The sound source direction detection unit 34 estimates the sound source direction using, for
example, a known whitened cross-correlation technique (the CSP (Cross-power Spectrum Phase
analysis) method) applied to the sound data of the monitoring area 8 collected by the
microphone array MA. In the CSP method, the sound source direction detection unit 34 divides
the monitoring area 8 shown in FIG. 11 into a plurality of blocks and, when sound is collected by
the microphone array MA, determines for each block whether there is a sound exceeding a
threshold such as a sound pressure or volume threshold, whereby the sound source position in
the monitoring area 8 can be roughly estimated.
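To make the cited CSP estimate concrete, here is a two-microphone sketch in the GCC-PHAT style: the whitened cross-power spectrum peaks at the inter-microphone time delay, from which a coarse direction can be derived. It is a sketch under stated assumptions, not the patent's block-wise search:

```python
# Whitened cross-correlation (CSP / GCC-PHAT): divide the cross-power
# spectrum by its magnitude so only phase remains, then locate the peak.
import numpy as np

def csp_delay(x: np.ndarray, y: np.ndarray, fs: int) -> float:
    """Return the estimated arrival-time difference (seconds) of y vs x."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)  # phase-only correlation
    lag = int(np.argmax(np.abs(csp)))
    if lag > n // 2:
        lag -= n                                            # wrap negative lags
    return lag / fs
```

With the microphone spacing known, the delay maps to an arrival angle, which is the rough per-block localisation the paragraph describes.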
[0047]
The setting management unit 39 holds in advance a coordinate conversion formula relating to
the coordinates of a position designated by the user on the screen of the monitor 50, on which
the omnidirectional image data captured by the omnidirectional camera CA is displayed. This
coordinate conversion formula converts the coordinates (that is, (horizontal angle, vertical
angle)) of the position designated by the user on the omnidirectional image data into coordinates
of the direction viewed from the PTZ camera CZ, based on, for example, the physical distance
between the installation position of the omnidirectional camera CA (see FIG. 2) and the
installation position of the PTZ camera CZ (see FIG. 2).
[0048]
Using the above-mentioned coordinate conversion formula held by the setting management unit
39, the signal processing unit 33 calculates coordinates (θMAh, θMAv) indicating the pointing
direction from the installation position of the PTZ camera CZ (see FIG. 2) toward the actual
sound source position corresponding to the position designated by the user. θMAh is the
horizontal angle of the direction from the installation position of the PTZ camera CZ toward the
actual sound source position corresponding to the position designated by the user, and θMAv is
the vertical angle of that direction. As shown in FIG. 2, since the distance between the
omnidirectional camera CA and the PTZ camera CZ is known and the optical axes L1 and L2 are
parallel, the calculation of the above-mentioned coordinate conversion formula can be realized
by, for example, known geometrical calculation. The sound source position is the actual sound
source position corresponding to the position designated from the operation unit 32, by the
user's finger or a stylus pen, on the image data displayed on the monitor 50.
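A sketch of one way such a geometric conversion can be computed, assuming the baseline lies along the horizontal axis and assuming a working distance to the source; the patent does not give this exact formula:

```python
# Place the target at an assumed range along the direction seen from
# camera CA, shift the origin by the CA-CZ baseline, and read off the
# angles seen from the PTZ camera. Baseline and range are illustrative.
import numpy as np

def ca_to_ptz_angles(theta_h: float, theta_v: float,
                     baseline_m: float = 0.30, range_m: float = 50.0):
    """theta_h, theta_v: horizontal/vertical angles (rad) from camera CA.
    Returns (thetaMAh, thetaMAv): the same direction seen from the PTZ
    camera, displaced by baseline_m along the horizontal x axis."""
    z = range_m * np.cos(theta_v)          # distance along the shared optical axis
    x = z * np.tan(theta_h)                # horizontal offset of the target
    y = range_m * np.sin(theta_v)          # vertical offset of the target
    xz = x - baseline_m                    # origin moved to the PTZ camera
    return np.arctan2(xz, z), np.arctan2(y, np.hypot(xz, z))
```

For targets far away compared with the baseline, the two viewing angles nearly coincide, which is why the parallel-axis arrangement keeps the conversion simple.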
[0049]
Incidentally, as shown in FIG. 2, in the present embodiment, the omnidirectional camera CA and
the microphone array MA are arranged such that the optical axis of the omnidirectional camera
CA and the central axis of the housing of the microphone array MA are coaxial. Therefore, the
coordinates of the designated position derived by the omnidirectional camera CA according to
the user's designation on the monitor 50, on which the omnidirectional image data is displayed,
can be regarded as identical to the emphasis direction (also referred to as the directivity
direction) of the sound viewed from the microphone array MA. In other words, when the user
makes a designation on the monitor 50 on which the omnidirectional image data is displayed, the
monitoring device 10 transmits the coordinates of the designated position on the omnidirectional
image data to the omnidirectional camera CA. Using the coordinates of the designated position
transmitted from the monitoring device 10, the omnidirectional camera CA calculates the
coordinates (horizontal angle, vertical angle) indicating the direction of the sound source
position corresponding to the designated position as viewed from the omnidirectional camera
CA. Since this calculation process in the omnidirectional camera CA is a known technique, its
description is omitted. The omnidirectional camera CA transmits the calculated coordinates
indicating the direction of the sound source position to the monitoring device 10. The monitoring
device 10 can use the coordinates (horizontal angle, vertical angle) calculated by the
omnidirectional camera CA as the coordinates (horizontal angle, vertical angle) indicating the
direction of the sound source position as viewed from the microphone array MA.
[0050]
However, when the omnidirectional camera CA and the microphone array MA are not coaxially
arranged, the setting management unit 39 needs to convert the coordinates derived by the
omnidirectional camera CA into coordinates of the direction viewed from the microphone array
MA, according to the method described in, for example, JP-A-2015-029241.
[0051]
The setting management unit 39 also holds a first threshold th1 and a second threshold th2 to be
compared with the sound pressure p calculated for each pixel by the signal processing unit 33.
Here, the sound pressure p is used as an example of a sound parameter related to a sound source
and represents the magnitude of the sound collected by the microphone array MA; it is
distinguished from the volume, which represents the loudness of the sound output from the
speaker device 37. The first threshold th1 and the second threshold th2 are values to be
compared with the sound pressure of the sound generated in the monitoring area 8, and are set
to, for example, predetermined values for discriminating the sound emitted by the unmanned air
vehicle dn. A plurality of thresholds can be set; in the present embodiment, two values are set, for
example a first threshold th1 and a larger second threshold th2 (first threshold th1 < second
threshold th2). Three or more thresholds may also be set.
[0052]
Further, as described later, the region R1 (see FIG. 15) of pixels for which a sound pressure larger
than the second threshold th2 is obtained is drawn, for example, in red on the monitor 50 on
which the omnidirectional image data is displayed. The region B1 of pixels for which a sound
pressure larger than the first threshold th1 and smaller than or equal to the second threshold th2
is obtained is drawn, for example, in blue. The region N1 of pixels whose sound pressure is equal
to or less than the first threshold th1 is drawn colorless, that is, its appearance is no different
from that of the displayed omnidirectional image data.
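A minimal sketch of this two-threshold colouring, assuming normalised sound-pressure values and illustrative RGBA colours:

```python
# Pixels above th2 become red (region R1), pixels between th1 and th2
# become blue (region B1), and pixels at or below th1 stay transparent
# (region N1). Threshold values are assumed for illustration.
import numpy as np

TH1, TH2 = 0.4, 0.7   # first and second thresholds (assumed, normalised)

def colour_overlay(pressure: np.ndarray) -> np.ndarray:
    """pressure: (H, W) normalised sound-pressure map.
    Returns an (H, W, 4) RGBA overlay; alpha 0 means 'colourless'."""
    overlay = np.zeros(pressure.shape + (4,), dtype=np.uint8)
    red = pressure > TH2
    blue = (pressure > TH1) & ~red
    overlay[red] = (255, 0, 0, 160)    # region R1
    overlay[blue] = (0, 0, 255, 160)   # region B1
    return overlay                     # region N1 stays transparent
```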
[0053]
The communication unit 31 receives omnidirectional image data or cutout video data transmitted
by the omnidirectional camera CA and audio data transmitted by the microphone array MA, and
outputs the data to the signal processing unit 33.
[0054]
The operation unit 32 is a user interface (UI: User Interface) for notifying the signal processing
unit 33 of the content of the user's input operation, and is configured of, for example, a pointing
device such as a mouse, or a keyboard.
The operation unit 32 may also be arranged in correspondence with the screen of the monitor 50
and configured using a touch panel or touch pad that allows direct input operation with the
user's finger or a stylus pen.
[0055]
When the user designates the red region R1 of the sound pressure heat map MP (see FIG. 15)
displayed on the monitor 50, the operation unit 32 acquires coordinate data indicating the
designated position and outputs it to the signal processing unit 33. The signal processing unit 33
reads the sound data corresponding to the coordinate data of the designated position from the
memory 38, forms directivity from the microphone array MA toward the sound source position
corresponding to the designated position, and then outputs the sound from the speaker device
37. Thus, the user can clearly confirm the sound not only at the unmanned air vehicle dn but also
at any other designated position.
[0056]
The memory 38 is configured of a ROM and a RAM. The memory 38 holds, for example, various
data including sound data of predetermined sections, setting information, programs, and the like.
The memory 38 also has a pattern memory in which the sound patterns unique to each
unmanned air vehicle dn are registered. Furthermore, the memory 38 stores the data of the
sound pressure heat map MP. In addition, an identification mark mk (see FIG. 13) that
schematically represents the position of the unmanned air vehicle dn is registered in the memory
38. The identification mark mk used here is, as an example, a star-shaped symbol. The
identification mark mk is not limited to a star and may be a circle or a quadrangle, or a symbol or
character reminiscent of an unmanned air vehicle, such as a "卍" shape. The display mode of the
identification mark mk may also be changed between daytime and nighttime; for example, it may
be a star shape during the daytime and, during the nighttime, a square that is not easily mistaken
for a star. The identification mark mk may also be changed dynamically; for example, a
star-shaped symbol may be displayed blinking or rotated to further alert the user.
[0057]
FIG. 7 is a timing chart showing an example of the detection sound pattern of the unmanned air
vehicle dn registered in the memory 38. The detection sound pattern shown in FIG. 7 is a
combination of frequency patterns and includes sounds of four frequencies f1, f2, f3, and f4
generated by the rotation of the four rotors mounted on a multicopter-type unmanned air vehicle
dn. The signal of each frequency is the signal of a different sound generated, for example, with
the rotation of the plurality of blades pivotally supported by each rotor.
[0058]
In FIG. 7, the hatched frequency regions are the regions where the sound pressure is high. The
detection sound pattern may include not only the sounds of a plurality of frequencies and their
sound pressures but also other sound information, for example a sound pressure ratio
representing the ratio of the sound pressures at the respective frequencies. Here, as an example,
the detection of the unmanned air vehicle dn is determined based on whether the sound pressure
at each frequency included in the detection sound pattern exceeds a threshold.
[0059]
The directivity processing unit 63 performs the directivity forming process (beamforming)
described above using the sound signals (also referred to as sound data) collected by the
omnidirectional microphones M1 to Mn, and extracts the sound data of an arbitrary direction as
the directivity direction. The directivity processing unit 63 can also extract sound data with a
range of directions as the directivity range. Here, the directivity range is a range including a
plurality of adjacent directivity directions, and is intended to cover a certain extent of directions
as compared with a single directivity direction.
[0060]
The frequency analysis unit 64 performs frequency analysis processing on the sound data
extracted in the directivity direction by the directivity processing unit 63. In this frequency
analysis processing, the frequencies included in the sound data of the directivity direction and
their sound pressures are detected.
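A minimal sketch of such a frequency analysis using an FFT; the deliberately simple peak picking and all parameters are assumptions:

```python
# Window the beamformed sound data, take its spectrum, and report the
# strongest bins as (frequency, level) pairs.
import numpy as np

def analyse(sound: np.ndarray, fs: int, n_peaks: int = 4):
    """Return up to n_peaks (frequency_hz, level) pairs, by bin order."""
    spectrum = np.abs(np.fft.rfft(sound * np.hanning(len(sound))))
    freqs = np.fft.rfftfreq(len(sound), 1.0 / fs)
    order = np.argsort(spectrum)[::-1][:n_peaks]   # strongest bins first
    return [(freqs[i], spectrum[i]) for i in sorted(order)]
```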
[0061]
FIG. 8 is a timing chart showing an example of the frequency change of the detection sound
signal obtained as a result of the frequency analysis processing. In FIG. 8, four frequencies f11,
f12, f13, and f14 and the sound pressure of each frequency are obtained as the detection sound
signal (that is, detection sound data). The irregular fluctuation of each frequency in the figure is
caused, for example, by slight variations in rotor speed that occur when the unmanned air
vehicle dn controls its own attitude.
[0062]
The object detection unit 65 performs detection processing of the unmanned air vehicle dn. In
the detection processing of the unmanned air vehicle dn, the object detection unit 65 compares
the detection sound pattern (see FIG. 8) (frequencies f11 to f14) obtained as a result of the
frequency analysis processing with the detection sound pattern (see FIG. 7) (frequencies f1 to f4)
registered in advance in the pattern memory of the memory 38. The object detection unit 65
then determines whether the two detection sound patterns approximate each other.
[0063]
Whether the two patterns approximate each other is determined, for example, as follows. If the
sound pressures of at least two of the four frequencies f1, f2, f3, and f4 that are included in the
detected sound data each exceed the threshold, the object detection unit 65 determines that the
sound patterns approximate each other and detects the unmanned air vehicle dn. The unmanned
air vehicle dn may also be detected when other conditions are satisfied.
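A minimal sketch of this approximation test, folding in the frequency tolerance mentioned later in [0093]; the threshold, tolerance, and match count are assumed values:

```python
# Count the registered frequencies f1..f4 that reappear in the analysed
# sound (within a tolerance) with a level above the threshold; declare a
# detection when enough of them match.
def drone_detected(measured: list[tuple[float, float]],
                   registered: list[float],
                   th1: float, tol_hz: float = 20.0,
                   min_matches: int = 2) -> bool:
    """measured: (frequency_hz, level) pairs from the frequency analysis."""
    matches = 0
    for f_reg in registered:
        for f, level in measured:
            if abs(f - f_reg) <= tol_hz and level > th1:
                matches += 1
                break
    return matches >= min_matches

# e.g. drone_detected(analyse(beamformed, fs), [f1, f2, f3, f4], th1=0.4)
# where f1..f4 are the registered pattern frequencies (placeholders here).
```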
[0064]
When it is determined that the unmanned air vehicle dn does not exist, the detection result
determination unit 66 instructs the detection direction control unit 68 to shift to detection of the
unmanned air vehicle dn in the next pointing direction. The detection result determination unit
66 notifies the output control unit 35 of the detection result of the unmanned air vehicle dn
when it is determined that the unmanned air vehicle dn is present as a result of the scanning of
the pointing direction. The detection result includes information on the detected unmanned air
vehicle dn. The information of the unmanned air vehicle dn includes, for example, identification
information of the unmanned air vehicle dn, and position information (for example, direction
information) of the unmanned air vehicle dn in the sound collecting space.
[0065]
The detection direction control unit 68 controls the direction in which the unmanned air vehicle
dn is to be detected in the sound collection space, based on the instruction from the detection
result determination unit 66. For example, the detection direction control unit 68 sets, as the
detection direction, an arbitrary direction within the directivity range BF1 that includes the
sound source position estimated by the sound source direction detection unit 34 within the
entire sound collection space.
[0066]
The scan control unit 67 instructs the directivity processing unit 63 to perform beamforming
with the detection direction set by the detection direction control unit 68 as the directivity
direction.
[0067]
The directivity processing unit 63 performs beamforming on the directivity direction instructed
from the scan control unit 67.
In the initial setting, the directivity processing unit 63 sets the initial position in the directivity
range BF1 (see FIG. 11) including the sound source position estimated by the sound source
direction detection unit 34 as the directivity direction BF2. The pointing direction BF2 is set one
after another in the pointing range BF1 by the detection direction control unit 68.
[0068]
The output control unit 35 calculates the sound pressure for each pixel constituting the
omnidirectional image data, based on the omnidirectional image data captured by the
omnidirectional camera CA and the sound data collected by the microphone array MA. Since this
sound pressure calculation processing is a known technique, its detailed description is omitted.
The output control unit 35 thereby generates a sound pressure heat map MP in which the
calculated sound pressure value is assigned to the position of the corresponding pixel for each of
the pixels constituting the omnidirectional image data. Furthermore, the output control unit 35
generates the sound pressure heat map MP shown in FIG. 15 by performing color conversion
processing on the sound pressure values of the generated map.
[0069]
Although the output control unit 35 has been described as generating the sound pressure heat
map MP by assigning the sound pressure values calculated in pixel units to the corresponding
pixel positions, the sound pressure need not be calculated for every pixel. The sound pressure
heat map MP may instead be generated by calculating the average of the sound pressure values
in units of pixel blocks consisting of a predetermined number (for example, four) of pixels, and
assigning that average value to the corresponding block of pixels.
[0070]
Further, the output control unit 35 controls the operations of the monitor 50 and the speaker
device 37, outputs and displays on the monitor 50 the omnidirectional image data or cut-out
video data transmitted from the omnidirectional camera CA, and outputs to the speaker device
37, as audio, the sound data transmitted from the microphone array MA.
When the unmanned air vehicle dn is detected, the output control unit 35 also outputs to the
monitor 50 the identification mark mk representing the unmanned air vehicle dn so that it is
superimposed and displayed on the omnidirectional image.
[0071]
Further, the output control unit 35 emphasizes the sound data in the directivity direction by
performing directivity formation processing on the sound data collected by the microphone
array MA, using the coordinates indicating the direction of the sound source position derived by
the omnidirectional camera CA. The directivity formation processing of sound data is a known
technique described in, for example, JP-A-2015-029241.
[0072]
The speaker device 37 outputs the sound data collected by the microphone array MA, or the
sound data that was collected by the microphone array MA and for which directivity has been
formed by the signal processing unit 33. The speaker device 37 may be configured as a device
separate from the monitoring device 10.
[0073]
The operation of the unmanned air vehicle detection system 5 having the above configuration
will now be described.
[0074]
FIG. 9 is a sequence diagram showing an example of the detection operation of the unmanned air
vehicle in the unmanned air vehicle detection system 5 of this embodiment.
When each device of the unmanned air vehicle detection system 5 (for example, the monitor 50,
the monitoring device 10, the PTZ camera CZ, the omnidirectional camera CA, and the
microphone array MA) is powered on, the unmanned air vehicle detection system 5 starts
operating.
[0075]
In the initial operation, the monitoring device 10 sends an image transmission request to the PTZ
camera CZ (T1). In response to this request, the PTZ camera CZ, having been powered on, starts
imaging processing (T2). Similarly, the monitoring device 10 sends an image transmission
request to the omnidirectional camera CA (T3), and in response, the omnidirectional camera CA
starts imaging processing (T4). Furthermore, the monitoring device 10 sends a sound
transmission request to the microphone array MA (T5), and in response, the microphone array
MA starts sound collection processing (T6).
[0076]
When the initial operation ends, the PTZ camera CZ transmits the captured image data (for
example, a still image or moving image) obtained by imaging to the monitoring device 10 via the
network NW (T7). The monitoring device 10 converts the captured image data transmitted from
the PTZ camera CZ into display data such as NTSC data (T8) and outputs the display data to the
monitor 50 (T9). When the display data is input, the monitor 50 displays the PTZ image GZ2 (see
FIG. 12 and the like) from the PTZ camera CZ on its screen.
[0077]
Similarly, the omnidirectional camera CA transmits the omnidirectional image data (for example,
a still image or moving image) obtained by imaging to the monitoring device 10 via the network
NW (T10). The monitoring device 10 converts the omnidirectional image data transmitted from
the omnidirectional camera CA into display data such as NTSC data (T11) and outputs the display
data to the monitor 50 (T12). When the display data is input, the monitor 50 displays the
omnidirectional image GZ1 (see FIG. 12 and the like) from the omnidirectional camera CA on its
screen.
[0078]
In addition, the microphone array MA encodes the sound data obtained by sound collection and
transmits it to the monitoring device 10 via the network NW (T13). In the monitoring device 10,
the sound source direction detection unit 34 estimates the sound source position in the
monitoring area 8 (T14). The estimated sound source position is used as the reference position of
the directivity range BF1 required for setting the initial directivity direction when the monitoring
device 10 detects the unmanned air vehicle dn.
[0079]
The monitoring device 10 performs detection determination of the unmanned air vehicle dn
(T15). Details of the detection determination process of the unmanned air vehicle dn will be
described later.
[0080]
When the unmanned air vehicle dn is detected as a result of the detection determination
processing, the output control unit 35 in the monitoring device 10 superimposes and displays, on
the omnidirectional image GZ1 displayed on the screen of the monitor 50, the identification mark
mk representing the unmanned air vehicle dn present in the directivity direction determined in
step T15 (T16).
[0081]
The output control unit 35 transmits to the PTZ camera CZ the information on the directivity
direction obtained in step T15, together with a request to change the imaging direction of the
PTZ camera CZ to that directivity direction (in other words, an angle-of-view change instruction)
(T17).
When the PTZ camera CZ receives the information on the directivity direction (that is, the
angle-of-view change instruction), the imaging direction control unit 58 drives the lens drive
motor 59 based on that information, changes the optical axis L2 of the imaging lens of the PTZ
camera CZ, and thereby changes the imaging direction to the directivity direction (T18). At the
same time, the imaging direction control unit 58 changes the zoom magnification of the imaging
lens of the PTZ camera CZ to a preset value or to a value corresponding to the proportion of the
captured image occupied by the unmanned air vehicle dn.
[0082]
On the other hand, when the unmanned air vehicle dn is not detected as a result of the detection
determination processing in step T15, the processes of T16, T17, and T18 are not performed.
[0083]
Thereafter, the processing of the unmanned air vehicle detection system 5 returns to step T7, and
the same processing is repeated until a predetermined event such as a power-off operation is
detected.
[0084]
FIG. 10 is a flowchart showing an example of the details of the unmanned air vehicle detection
determination procedure in step T15 of FIG. 9.
In the sound source detection unit UD, the directivity processing unit 63 sets, as the initial
position of the directivity direction BF2, the directivity range BF1 based on the sound source
position estimated by the sound source direction detection unit 34 (S21).
[0085]
FIG. 11 is a diagram showing an example of how the directivity direction BF2 is sequentially
scanned in the monitoring area 8 and the unmanned air vehicle dn is detected.
Note that the initial position is not limited to the directivity range BF1 based on the sound source
position of the monitoring area 8 estimated by the sound source direction detection unit 34; an
arbitrary position designated by the user may be set as the initial position and the monitoring
area scanned sequentially from there. Since the initial position is not fixed, even when the sound
source included in the directivity range BF1 based on the estimated sound source position is not
an unmanned air vehicle, an unmanned air vehicle flying in another direction can be detected
early.
[0086]
The directivity processing unit 63 determines whether the sound data collected by the
microphone array MA and converted into digital values by the A/D converters A1 to An is
temporarily stored in the memory 38 (S22). If it is not stored, the processing of the directivity
processing unit 63 returns to step S21.
[0087]
When the sound data collected by the microphone array MA is temporarily stored in the memory
38 (S22, YES), the directivity processing unit 63 performs beamforming on an arbitrary
directivity direction BF2 within the directivity range BF1 of the monitoring area 8 and extracts
the sound data of this directivity direction BF2 (S23).
[0088]
The frequency analysis unit 64 detects the frequencies of the extracted sound data and their
sound pressures (S24).
[0089]
The object detection unit 65 detects the unmanned air vehicle by comparing the pattern of the
detection sound registered in the pattern memory of the memory 38 with the pattern of the
detection sound obtained as a result of the frequency analysis process (S25).
[0090]
The detection result determination unit 66 notifies the output control unit 35 of the result of the
comparison, and notifies the detection direction control unit 68 of the detection direction shift
(S26).
[0091]
For example, the object detection unit 65 compares the detection sound pattern obtained as a
result of the frequency analysis processing with the four frequencies f1, f2, f3, and f4 registered
in the pattern memory of the memory 38.
As a result of the comparison, when at least two frequencies are common to both detection
sound patterns and the sound pressures at these frequencies are greater than the first threshold
th1, the object detection unit 65 determines that the two detection sound patterns approximate
each other and that the unmanned air vehicle dn exists.
[0092]
Here, it is assumed that at least two frequencies coincide, but the object detection unit 65 may
also determine that the patterns approximate each other when only one frequency coincides,
provided that the sound pressure at that frequency is greater than the first threshold th1.
[0093]
Further, the object detection unit 65 may set an allowable frequency error for each frequency and
determine the presence or absence of the above approximation by treating frequencies within
this error range as the same frequency.
[0094]
Further, in addition to the comparison of the frequencies and sound pressures, the object
detection unit 65 may add to the determination condition that the sound pressure ratios of the
sounds of the respective frequencies substantially match.
In this case, since the determination condition becomes stricter, the sound source detection unit
UD can more reliably identify the detected unmanned air vehicle dn as the pre-registered target
object (unmanned air vehicle dn), and the detection accuracy of the unmanned air vehicle dn can
be improved.
[0095]
As a result of step S26, the detection result determination unit 66 determines whether the
unmanned air vehicle dn is present or absent (S27).
[0096]
When the unmanned air vehicle dn is present, the detection result determination unit 66 notifies
the output control unit 35 that the unmanned air vehicle dn is present (detection result of the
unmanned air vehicle dn) (S28).
[0097]
On the other hand, when there is no unmanned air vehicle dn in step S27, the scan control unit
67 moves the directivity direction BF2 to be scanned in the monitoring area 8 to the next,
different direction (S29).
The notification of the detection result of the unmanned air vehicle dn may be performed
collectively after the omnidirectional scan is completed, rather than at the timing when the
detection processing of one directivity direction is finished.
[0098]
The order in which the directivity direction BF2 is sequentially moved in the monitoring area 8
may be, for example, a spiral order from the outer circumference toward the inner circumference,
or from the inner circumference toward the outer circumference, within the directivity range BF1
of the monitoring area 8 or over the entire range of the monitoring area 8.
[0099]
In addition, instead of scanning the directivity direction continuously like a single stroke, the
detection direction control unit 68 may set positions in the monitoring area 8 in advance and
move the directivity direction BF2 among those positions in an arbitrary order.
Thereby, the monitoring device 10 can start the detection processing from, for example, a
position where the unmanned air vehicle dn is likely to intrude, making the detection processing
more efficient.
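A sketch of one possible scan ordering consistent with this description; the ring and step counts are invented for illustration:

```python
# Generate directivity directions ring by ring: the outer circumference of
# the hemispherical monitoring area first (low elevation), the zenith
# last. Reversing the loop gives the inner-to-outer order.
import numpy as np

def spiral_directions(n_rings: int = 6, per_ring: int = 12):
    """Yield (horizontal_angle, vertical_angle) pairs in radians."""
    for ring in range(n_rings):
        elevation = (np.pi / 2) * ring / (n_rings - 1)   # 0 (horizon) .. pi/2
        for step in range(per_ring):
            azimuth = 2 * np.pi * step / per_ring
            yield azimuth, elevation
```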
[0100]
The scan control unit 67 determines whether the scan of all directions in the monitoring area 8
has been completed (S30).
If the scan of all directions has not been completed (S30, NO), the processing of the directivity
processing unit 63 returns to step S23 and the same operations are performed. That is, the
directivity processing unit 63 beamforms the directivity direction BF2 of the position moved to in
step S29 and extracts the sound data of that directivity direction BF2.
As a result, the sound source detection unit UD continues searching for unmanned air vehicles dn
that may exist even after one unmanned air vehicle dn has been detected, so that a plurality of
unmanned air vehicles dn can be detected.
[0101]
On the other hand, when the omnidirectional scan is completed in step S30 (S30, YES), the
directivity processing unit 63 erases the sound data collected by the microphone array MA and
temporarily stored in the memory 38 (S31).
[0102]
After the sound data is erased, the signal processing unit 33 determines whether to end the
detection processing of the unmanned air vehicle dn (S32).
The detection processing of the unmanned air vehicle dn is ended in response to a predetermined
event. For example, the number of times the unmanned air vehicle dn is not detected may be
stored in the memory 38, and the detection processing of the unmanned air vehicle dn may be
ended when this number reaches or exceeds a predetermined number. The signal processing unit
33 may also end the detection processing of the unmanned air vehicle dn based on the elapse of
time measured by a timer or on the user's operation of the UI (User Interface) (not shown) of the
operation unit 32. The detection processing may also be ended when the power supply of the
monitoring device 10 is turned off.
[0103]
In the processing of step S24, the frequency analysis unit 64 analyzes the frequencies and also
measures their sound pressures. When the sound pressure level measured by the frequency
analysis unit 64 gradually increases with time, the detection result determination unit 66 may
determine that the unmanned air vehicle dn is approaching the sound source detection unit UD.
[0104]
For example, when the sound pressure level of a predetermined frequency measured at time t11 is smaller than the sound pressure level of the same frequency measured at a later time t12, the sound pressure is increasing with the passage of time, and it may be determined that the unmanned air vehicle dn is approaching. Alternatively, the sound pressure level may be measured three or more times, and the approach of the unmanned air vehicle dn may be determined based on the transition of statistical values (for example, variance, average, maximum, or minimum values).
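Both variants of this approach determination can be sketched as follows; the window split and the rise margin are illustrative assumptions:

```python
from statistics import mean

def approaching_two_samples(level_t11, level_t12):
    """Judge approach: the sound pressure level of the same frequency
    measured at the later time t12 exceeds the level measured at t11."""
    return level_t12 > level_t11

def approaching_by_trend(levels, min_rise_db=3.0):
    """Variant with three or more measurements: judge approach from the
    transition of statistics, here a rise in the mean level between the
    first and second half of the window (min_rise_db is a hypothetical
    margin)."""
    if len(levels) < 3:
        return False
    half = len(levels) // 2
    early, late = levels[:half], levels[half:]
    return mean(late) - mean(early) >= min_rise_db
```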
[0105]
In addition, when the measured sound pressure level is greater than the third threshold th3, which is the alert level, the detection result determination unit 66 may determine that the unmanned air vehicle dn has intruded into the alert area.
[0106]
The third threshold th3 is, for example, a value larger than the second threshold th2.
The alert area is, for example, the same area as the monitoring area 8 or an area included in the
monitoring area 8 and narrower than the monitoring area 8. The alert area is, for example, an
area in which the intrusion of the unmanned air vehicle dn is restricted. In addition, the approach
determination and the intrusion determination of the unmanned air vehicle dn may be performed
by the detection result determination unit 66.
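Under the ordering th2 < th3 described here, the intrusion judgment reduces to a single comparison; the numeric values in this sketch are purely illustrative:

```python
TH2 = 60.0  # illustrative second threshold th2 [dB]
TH3 = 80.0  # illustrative third threshold th3 (alert level), TH3 > TH2

def judge_intrusion(level_db):
    """Determine that the unmanned air vehicle dn has intruded into the
    alert area when the measured level exceeds the third threshold th3."""
    return level_db > TH3
```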
[0107]
FIG. 12 is a view showing an example of the display screen of the monitor 50 when the unmanned air vehicle dn is not detected. On the display screen of the monitor 50, an omnidirectional image GZ1 by the omnidirectional camera CA and a PTZ image GZ2 by the PTZ camera CZ are displayed side by side for comparison. In the omnidirectional image GZ1, three buildings bL1, bL2, bL3 and a chimney pL appear, but no unmanned air vehicle dn appears. Although the omnidirectional image GZ1 and the PTZ image GZ2 are displayed side by side here, only one of them may be selected and displayed, or the two images may be displayed alternately, switching at fixed intervals.
[0108]
FIG. 13 is a view showing an example of the display screen of the monitor 50 when the unmanned air vehicle dn is detected. In the omnidirectional image GZ1 by the omnidirectional camera CA displayed on the display screen of the monitor 50, in addition to the three buildings bL1, bL2, bL3 and the chimney pL, an identification mark mk representing the unmanned air vehicle dn flying over them is drawn with a star symbol. On the other hand, in the PTZ image GZ2 by the PTZ camera CZ, the three buildings bL1, bL2, bL3 and the chimney pL still appear, but the unmanned air vehicle dn does not. That is, FIG. 13 shows the state after the monitoring apparatus 10 has displayed the identification mark mk in procedure T16 of FIG. 9 and requested the imaging direction from the PTZ camera CZ in procedure T17, but before the PTZ camera CZ has rotated its lens to change the optical axis direction for the imaging of procedure T18; the displayed PTZ image GZ2 therefore still shows the view from before the change.
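Although procedures T17 and T18 are left abstract in the text, converting a detected pointing direction into a pan/tilt/zoom request could look like the following sketch; the PtzCommand interface is entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PtzCommand:
    pan_deg: float   # rotation about the vertical axis
    tilt_deg: float  # elevation of the optical axis
    zoom: float      # zoom magnification

def ptz_command_from_detection(azimuth_deg, elevation_deg, zoom=10.0):
    """Procedure T17 (sketch): ask the PTZ camera CZ to turn its optical
    axis toward the direction in which the unmanned air vehicle dn was
    detected, clamping the tilt to a plausible range."""
    return PtzCommand(pan_deg=azimuth_deg % 360.0,
                      tilt_deg=max(0.0, min(90.0, elevation_deg)),
                      zoom=zoom)
```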
[0109]
FIG. 14 is a view showing an example of the display screen of the monitor 50 when the unmanned air vehicle dn is detected and the PTZ camera CZ changes the optical axis direction in conjunction with the detection. The PTZ image GZ2 by the PTZ camera CZ is an image zoomed up toward the unmanned air vehicle dn. In the PTZ image GZ2, the three buildings bL1, bL2, bL3 and the chimney pL are no longer within the angle of view, and the unmanned air vehicle dn appears zoomed up.
[0110]
That is, in FIG. 14, the monitoring apparatus 10 rotates the imaging lens of the PTZ camera CZ in procedure T18 of FIG. 9 to change the optical axis direction, and further zooms up the displayed PTZ image GZ2.
[0111]
Here, the identification mark mk is superimposed on the omnidirectional image GZ1 captured by the omnidirectional camera CA, whereas the unmanned air vehicle dn appears as it is in the PTZ image GZ2 captured by the PTZ camera CZ. This is because the image of the unmanned air vehicle dn might be difficult to distinguish even if it appeared as it is in the omnidirectional image GZ1. On the other hand, since the PTZ image GZ2 captured by the PTZ camera CZ is a zoomed-in image, when the image of the unmanned air vehicle dn appears on the display screen, the unmanned air vehicle dn is projected clearly. Therefore, it is also possible to identify the type of the unmanned air vehicle dn from its clearly projected outline. Thus, the sound source detection unit UD can appropriately display the unmanned air vehicle dn in consideration of the visibility of the image displayed on the display screen of the monitor 50.
[0112]
In addition, the omnidirectional image GZ1 may display the unmanned air vehicle dn itself as it is, without the identification mark mk, so that the omnidirectional image GZ1 and the PTZ image GZ2 have the same display or different displays. Alternatively, the identification mark mk may also be superimposed on the PTZ image GZ2.
[0113]
FIG. 15 is a view showing another example of the display screen of the monitor 50 when the unmanned air vehicle dn is detected and the PTZ camera CZ changes the optical axis direction in conjunction with the detection. The display screen of the monitor 50 shown in FIG. 15 is displayed, for example, when the user selects another display menu (not shown) via the operation unit 32 of the monitoring apparatus 10. The display screen of FIG. 15 shows that there are other sound sources whose sound pressure values, calculated for each pixel constituting the omnidirectional image data, are equivalent to the sound pressure value of the unmanned air vehicle dn. In the omnidirectional image GZ1, in addition to the identification mark mk representing the unmanned air vehicle dn, another identification mark mc representing another sound source is superimposed. The other identification mark mc is preferably drawn in a display form different from that of the identification mark mk; in FIG. 15, it is drawn with a circular symbol. Different display forms include symbols such as ellipses, triangles, question marks, and characters. Further, the other identification mark mc may be displayed dynamically, like the identification mark mk.
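One way to render the star-shaped identification mark mk and the circular marks mc described above is sketched below using OpenCV; the marker positions, sizes, and colors are assumptions, not part of the disclosure:

```python
import cv2  # OpenCV; images are NumPy arrays in BGR channel order

def draw_identification_marks(omni_image, drone_px, other_sources_px):
    """Superimpose the identification mark mk (star) for the detected
    unmanned air vehicle and the other identification marks mc (circles)
    for the remaining sound sources on the omnidirectional image GZ1.

    drone_px and the entries of other_sources_px are (x, y) pixel tuples.
    """
    img = omni_image.copy()
    # mk: star symbol at the drone position
    cv2.drawMarker(img, drone_px, color=(0, 0, 255),
                   markerType=cv2.MARKER_STAR, markerSize=30, thickness=2)
    # mc: circular symbols, in a display form different from mk
    for (x, y) in other_sources_px:
        cv2.circle(img, (x, y), radius=15, color=(255, 0, 0), thickness=2)
    return img
```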
[0114]
Furthermore, in the omnidirectional image GZ1, the output control unit 35 generates a sound pressure map representing the sound pressure for each pixel, and a sound pressure heat map MP, obtained by applying color conversion processing to the regions where the calculated sound pressure exceeds the thresholds, is superimposed. Here, in the sound pressure heat map MP, the region R1 where the sound pressure exceeds the second threshold th2 is drawn in red (the large dot group in the figure), and the region B1 where the sound pressure is greater than the first threshold th1 and not more than the second threshold th2 is drawn in blue (the small dots in the figure). Further, the region N1 where the sound pressure is equal to or less than the first threshold th1 is drawn transparent (nothing is displayed in the figure).
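A minimal sketch of the two-threshold color conversion behind the sound pressure heat map MP might look as follows; the threshold values, colors, and blending weight are illustrative assumptions:

```python
import numpy as np
import cv2

def sound_pressure_heat_map(omni_image, pressure_map, th1=40.0, th2=60.0):
    """Overlay the heat map MP: red where pressure > th2 (region R1),
    blue where th1 < pressure <= th2 (region B1), and fully transparent
    where pressure <= th1 (region N1)."""
    overlay = omni_image.copy()
    red = pressure_map > th2
    blue = (pressure_map > th1) & (pressure_map <= th2)
    overlay[red] = (0, 0, 255)   # BGR red for region R1
    overlay[blue] = (255, 0, 0)  # BGR blue for region B1
    # Blend only where a colored region exists; region N1 stays untouched.
    mask = red | blue
    blended = cv2.addWeighted(omni_image, 0.5, overlay, 0.5, 0.0)
    out = omni_image.copy()
    out[mask] = blended[mask]
    return out
```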
[0115]
In addition, since another identification mark mc representing the position of another sound source is drawn on the same omnidirectional image GZ1 as the identification mark mk representing the unmanned air vehicle dn, and the sound pressure heat map MP is also drawn there, the situation around the unmanned air vehicle dn can be understood well. For example, when a sound source that is not registered as the unmanned air vehicle dn is flying, the user points at the position of the sound source represented by the other identification mark mc on the display screen of the monitor 50, or points at the red region R1 of the sound pressure heat map MP. In response, the output control unit 35 of the monitoring apparatus 10 causes the PTZ camera CZ to zoom up on the position of the sound source or the red region R1, acquires the zoomed-up PTZ image GZ2, and displays it on the monitor 50, so unconfirmed sound sources can be checked quickly and accurately. As a result, even if there is an unregistered unmanned air vehicle dn, the user can detect it.
[0116]
A display form in which only the other identification mark mc is drawn, or a display form in which only the sound pressure heat map MP is drawn, may also be provided on the same omnidirectional image GZ1 as the identification mark mk. The user can arbitrarily select among these display forms.
[0117]
As described above, in the unmanned airborne vehicle detection system 5 of the present embodiment, the omnidirectional camera CA captures an image of the monitoring area 8 (imaging area). The microphone array MA picks up the sound of the monitoring area 8. The monitoring device 10 detects the unmanned air vehicle dn appearing in the monitoring area 8 using the sound data collected by the microphone array MA. The signal processing unit 33 in the monitoring device 10 superimposes the identification mark mk (first identification information), obtained by converting the unmanned air vehicle dn into visual information, on the captured image of the omnidirectional camera CA (that is, the omnidirectional image GZ1) of the monitoring area 8, and displays it on the monitor 50. Thereby, the unmanned airborne vehicle detection system 5 can quickly and accurately determine the presence and the position of the target unmanned air vehicle dn using the omnidirectional image GZ1 captured by the omnidirectional camera CA.
[0118]
Further, in the unmanned airborne vehicle detection system 5, the PTZ camera CZ, whose optical axis direction is adjustable, images the monitoring area 8. The signal processing unit 33 outputs to the PTZ camera CZ an instruction for adjusting the optical axis direction toward the direction corresponding to the detection result of the unmanned air vehicle dn. The monitor 50 displays the image (that is, the PTZ image GZ2) captured by the PTZ camera CZ whose optical axis direction has been adjusted based on this instruction. As a result, the unmanned airborne vehicle detection system 5 allows the observer, who is the user, to clearly recognize and identify the exact type of the unmanned air vehicle dn from the undistorted image of the unmanned air vehicle dn captured by the PTZ camera CZ.
[0119]
Further, the monitor 50 displays, side by side for comparison, the omnidirectional image GZ1 of the omnidirectional camera CA including the identification mark mk of the unmanned air vehicle dn and the captured image of the PTZ camera CZ (that is, the PTZ image GZ2). As a result, the observer who is the user can accurately grasp the type of the unmanned air vehicle dn and the surrounding area where it is present, for example, by alternately comparing the omnidirectional image GZ1 and the PTZ image GZ2.
[0120]
In addition, the signal processing unit 33 detects at least one other sound source in the monitoring area 8, converts the other sound source into visual information in the captured image of the omnidirectional camera, and displays it on the monitor 50 as another identification mark mc (second identification information) different from the identification mark mk. As a result, the supervisor who is the user can grasp an unconfirmed sound source that is not the target unmanned air vehicle dn. In addition, the user can accurately check whether the unconfirmed sound source is an unregistered unmanned air vehicle.
[0121]
Further, the signal processing unit 33 calculates the sound pressure value of each pixel in the captured image of the monitoring area 8, superimposes the sound pressure values of the pixels on the omnidirectional image data of the monitoring area 8 as the sound pressure heat map MP, and displays the result on the monitor 50 so that the regions are distinguishable by a plurality of different color gradations. As a result, the user can compare the sound pressure of the sound emitted by the unmanned air vehicle dn with the sound pressure of the surroundings, and can understand the sound pressure of the unmanned air vehicle relatively and visually.
[0122]
Although various embodiments have been described above with reference to the drawings, it goes without saying that the present invention is not limited to such examples. It will be apparent to those skilled in the art that various modifications and alterations can be conceived within the scope of the appended claims, and it is understood that these also naturally fall within the technical scope of the present invention.
[0123]
The present invention is useful as a sound source display system and a sound source display method of a monitoring area that display a sound source of the monitoring area using a captured image by a camera.
[0124]
5 unmanned air vehicle detection system
10 monitoring device
15 housing
25 compression processing unit
26 transmission unit
31 communication unit
32 operation unit
33 signal processing unit
34 sound source direction detection unit
35 output control unit
37 speaker device
38 memory
39 setting management unit
41, 51 CPU
42, 52 communication unit
44, 54 power management unit
45, 55 image sensor
46, 56 memory
46x, 56x memory card
46y, 56y RAM
46z, 56z ROM
47, 57 network connector
50 monitor
58 imaging direction control unit
59 lens drive motor
63 directivity processing unit
64 frequency analysis unit
65 object detection unit
66 detection result determination unit
67 scan control unit
68 detection direction control unit
70 support
71 tripod
71a top plate
71b leg
72 rail
73 first mounting plate
74 second mounting plate
A1 to An A/D converter
B1, R1, N1 area
BF1 pointing range
BF2 pointing direction
bL1 to bL3 building
CA omnidirectional camera
CZ PTZ camera
GZ1 omnidirectional image
GZ2 PTZ image
L1, L2 optical axis
M1 to Mn microphone unit
MA microphone array
mc other identification mark
mk identification mark
NW network
PA1 to PAn amplifier
pL chimney
UD sound source detection unit