Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JPH09284676
[0001]
The present invention relates to a video display apparatus, such as a head mounted display (HMD) or a television receiver, that reproduces and displays a video signal transmitted as a compressed digital signal, as used for example in satellite broadcasting, and/or outputs audio. More specifically, it relates to a video and audio processing method and apparatus that calculate the head rotation angle of the viewer, the amount of movement of the viewing direction, and the like, extract and reproduce in real time a corresponding portion of a video supplied with an aspect ratio of 16:9 or 4:3 in consideration of that amount of movement, and change the audio information in accordance with the amount of movement, thereby enhancing the sense of presence in real time at low cost.
[0002]
2. Description of the Related Art: In the field of video technology for AV equipment and AV systems, there have been video reproducing methods that change the video seen by a viewer in accordance with rotation of the viewer's head or the like.
[0003]
In addition, the MPEG2 format, a data compression method for processing moving-picture video, has been standardized worldwide, and data encoders and decoders conforming to this standard have been commercialized.
The MPEG decoder has a PAN & SCAN function, which designates the aspect ratio of the displayed image by writing region designation information representing that aspect ratio into the header portion of the MPEG2 stream that constitutes the video signal.
[0004]
This function makes it possible to extract (cut out) image information with an aspect ratio of 4:3 from compressed image information with an aspect ratio of 16:9 in the MPEG2 format.
[0005]
On the other hand, in the field of audio technology for AV equipment, Japanese Patent Application Laid-Open No. 7-95698, "Audio playback device", filed by the same applicant as the present invention, discloses a method in which, when sound picked up in stereo is reproduced through headphones, the left and right outputs of the headphones are corrected according to impulse responses stored in advance in a memory, in accordance with the rotation angle of the head.
[0006]
Furthermore, as sensors for detecting the amount of movement and the rotation angle of the human body, various sensors are available, such as a digital angle detector using the horizontal component of geomagnetism, as described in the specification of the above-mentioned JP-A-7-95698, and sensors using gyros.
[0007]
However, the above-described conventional video processing method in principle requires a sensor for detecting the rotation angle of the head, and the display of the AV device, based on the rotation angle information from the sensor, must process a special video signal corresponding to an image several to several tens of times larger than the screen in synchronization with the rotational speed.
A large memory capacity and a high processing speed are therefore required, which is an obstacle to the downsizing, weight reduction and cost reduction of AV equipment. This has been a first problem.
[0008]
The second problem is that a shift occurs between the image and the localization of the sound image.
That is, consider the case where a fixed video reproducing apparatus in which the image does not change with the movement of the head, such as a television receiver having an ordinary CRT, is used in combination with headphones in which the localization of the sound image changes with the movement of the head. In this case, only the localization of the sound image changes in accordance with the movement of the head, and, depending on the accuracy of the gyro sensor that detects the head rotation angle, a shift may occur between the position of the image and the localization of the sound image.
[0009]
In addition, even when a head mounted display in which the image does not change with the movement of the head is used in combination with the above headphones, the image does not change when the head is moved, so the position of the image no longer matches the localization of the sound image.
Note that the localization of a sound image refers to the position of the virtual sound source formed in the head of the viewer when an audio signal, originally adjusted to be reproduced from an audio output device such as a speaker, is listened to through headphones; it means, for example, front right, front left, front center, and so on.
[0010]
An object of the present invention is to solve the above-mentioned problems and to realize, at low cost, an AV apparatus free from the shift between the image and the localization of the sound image.
[0011]
SUMMARY OF THE INVENTION: In order to solve the above problems, the video and audio processing method synchronized with the movement of the body according to the present invention assumes a video display device that displays a predetermined video on a screen and a viewer who views the display of the screen from a certain direction, and provides means for detecting the amount of movement and/or the rotation angle of the whole body or a part of the body of the viewer. According to the detected amount of movement and/or amount of change of the rotation angle, video processing is performed to extract and display a part of the video signal of the original image, and at the same time audio processing is performed so that the audio can be heard from a position corresponding to the extracted and displayed image.
[0012]
Further, the video display device may be a head-mounted image display device comprising a display unit that forms an image in front of the eyeballs when supported on the head of the viewer, a headphone unit that outputs audio, and a sensor unit that detects the amount of change of the rotation angle of the head. The display unit performs video processing that extracts and displays a part of the video signal of the original image according to the amount of change of the rotation angle detected by the sensor unit, and audio processing is performed so that the audio can be heard from a position corresponding to the extracted and displayed video.
[0013]
According to the above configuration, the video seen by the viewer and the localization of the sound image formed in the head are harmonized, enabling video and audio output with an enhanced sense of presence.
[0014]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS: The appearance of three preferred embodiments of an AV system, which is a video display apparatus embodying the video and audio processing method synchronized with the movement of the body according to the present invention, is shown in FIGS. 1 to 3.
FIGS. 1(A) and 1(B) show a top view and a side view, respectively.
[0015]
The video display apparatus according to the first embodiment of the present invention is, as shown in FIG. 1, an AV system 100 configured as a head-mounted video display apparatus, that is, a head mounted display (HMD). The AV system 100 has a headset structure fixed to the head of the viewer 2 by a supporting device 1 such as a headband, and comprises a video display device 3 provided in front of both eyes of the viewer 2 for displaying a predetermined image on a screen and forming the image, a stereo headphone 4 having sounding bodies corresponding to the left and right ears of the viewer 2, and a gyro sensor 5 for detecting the movement of the head of the viewer 2.
[0016]
The video displayed on the video display device 3 is a partial area of a video area that is long in the horizontal direction (hereinafter referred to as the original video). When the viewer 2 rotates the head, for example, to the right, the area of the original video displayed on the video display device 3 moves to the right by the amount of movement accompanying the rotation of the head.
[0017]
In general, there are two methods for extracting an image to be displayed from an original image
as described below.
[0018]
(1) In the first method of extracting the image to be displayed from the original image, as shown in FIGS. 4A, 4B, and 4C, blanking portions 9 and 10 are provided at the left and right of the original image, so that the image of the desired area is displayed.
That is, if the change of the rotation angle of the head corresponds to a movement to the left, as shown in FIG. 4A, the width of the blanking 10 at the right end is made large and the width of the blanking 9 at the left end is made small.
[0019]
If there is no change in the rotation angle of the head and no movement to the left or right, as shown in FIG. 4B, the widths of the blankings 9 and 10 at the left and right ends are made equal.
[0020]
Also, if the change of the rotation angle of the head corresponds to a movement to the right, as shown in FIG. 4C, the width of the blanking 9 at the left end is made large and the width of the blanking 10 at the right end is made small.
[0021]
In this way, by changing the widths of the blankings 9 and 10 at the left and right ends of the screen according to the amount of movement of the rotation angle of the head, a screen display that responds to the amount of movement of the rotation angle of the head can be realized.
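As an illustration only (not part of the patent disclosure), the following Python sketch shows one way the widths of the left and right blankings 9 and 10 could be derived from a signed horizontal offset obtained from the head rotation; the names and numbers are assumptions.

# Illustrative sketch (not from the patent): deriving the left/right blanking
# widths of FIG. 4 from a signed horizontal offset of the viewing direction.
# Positive offset_px = head turned right, so the left blanking widens (cf. FIG. 6A).

def blanking_widths(offset_px: int, max_blank_px: int) -> tuple[int, int]:
    """Return (left_blanking, right_blanking) in pixels.

    offset_px    : horizontal displacement computed from the head rotation,
                   clamped to [-max_blank_px/2, +max_blank_px/2].
    max_blank_px : total width reserved for the two blanking bars 9 and 10.
    """
    half = max_blank_px // 2
    offset_px = max(-half, min(half, offset_px))
    left = half + offset_px      # blanking 9
    right = half - offset_px     # blanking 10
    return left, right

# Example: no rotation gives equal bars (FIG. 4B); a rightward turn widens the left bar.
print(blanking_widths(0, 160))    # (80, 80)
print(blanking_widths(60, 160))   # (140, 20)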
[0022]
(2) In the second method of extracting the image to be displayed from the original image, as shown in FIGS. 5A, 5B and 5C, the image of the desired area is extracted (cut out) from the original image and displayed; the figures show three display examples of this extraction display.
This method is based on PAN & SCAN, a standard function of the MPEG2 format for compressed transmission and reception signals used in satellite broadcasting, described later. A control signal indicating the start and end positions of the digital video signal with an aspect ratio of 16:9, divided into 256 sections, is included, and the image is extracted and displayed on the basis of this control signal according to the amount of movement of the rotation angle of the head.
It is a matter of course that a time from the start and a number of pixels may be used instead of including the start and end positions in the control signal.
[0023]
That is, if the rotation angle of the head corresponds to a movement to the left, as shown in FIG. 5A, a control signal is selected from among the control signals indicating the start and end positions into which the original image is divided in 256 sections, so that the right video portion 10A is not displayed on the screen.
As a result, the image of the left portion of the original image is displayed on the screen.
[0024]
On the other hand, when no movement of the rotation angle of the head is detected, as shown in FIG. 5B, the left and right video portions 9A and 10A are cut off, and the central portion of the original image is extracted and displayed.
[0025]
Also, if the rotation angle of the head corresponds to a movement to the right, as shown in FIG. 5C, the video portion 9A at the left of the original image is cut off, and the right portion of the original image is extracted and displayed.
[0026]
In this manner, as in the first method of changing the widths of the blankings 9 and 10 shown in FIG. 4, if the left and right ends of the original image are cut off in accordance with the amount of movement of the rotation angle of the head and the remaining image is extracted (cut out) and displayed, the screen display can be changed so that the displayed screen responds to the rotational movement of the head.
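The paragraphs above state that the original 16:9 video is divided into 256 start/end positions and that the extraction window follows the head rotation, but give no numerical mapping. The following Python sketch is one hedged illustration of such a mapping; the assumed field of view and the 4:3 window width in division units are not taken from the patent.

# Illustrative sketch (assumed numbers): mapping the head rotation angle to one
# of the 256 start positions into which the 16:9 original is divided, and from
# it the end position of a 4:3 window, as in the extraction display of FIG. 5.

def extraction_window(angle_deg: float, fov_deg: float = 30.0,
                      divisions: int = 256) -> tuple[int, int]:
    """Return (start_index, end_index) in 0..divisions-1.

    angle_deg : head rotation angle, negative = left, positive = right.
    fov_deg   : assumed angular range that pans the window across the original.
    """
    # Width of a 4:3 window inside a 16:9 original, in division units:
    # (4/3) / (16/9) = 3/4 of the full width.
    window = round(divisions * 3 / 4)
    slack = divisions - window                # leftover room to pan
    # Normalise the clamped angle to 0..1 across the assumed range.
    t = (max(-fov_deg, min(fov_deg, angle_deg)) + fov_deg) / (2 * fov_deg)
    start = round(t * slack)
    return start, start + window - 1

print(extraction_window(0.0))     # centred window (cf. FIG. 5B)
print(extraction_window(30.0))    # right-shifted window (cf. FIG. 5C)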
[0027]
Next, as shown in FIG. 2, the video display apparatus according to the second embodiment of the present invention is an AV system 200 comprising a viewer-integrated video display apparatus. As in the case of FIG. 1, a stereo headphone 4 mounted on the head of the viewer 2 and a screen-type video display device 6 are provided.
The video display device 6 is fixed to one end of a structure 7.
The viewer 2 sits on a viewing unit, which is a sofa fixed to the other end of the structure 7, facing the screen of the video display device 6; that is, the viewing unit is integrated with the video display device 6 serving as the display unit and rotates in conjunction with it.
[0028]
The structure 7 is configured to rotate in the direction of the arrow about the fulcrum 7a, and the rotation angle in the left-right direction is detected.
Therefore, when the structure 7 is rotated, the video display device 6 and the viewer 2 move together while the viewer keeps watching the image without rotating the head. When the structure 7 is rotated, for example, to the right, the image area displayed on the video display device 6 moves to the right within the original image by the amount of movement accompanying the rotation of the entire body.
In this way, with the head of the viewer positioned in front of the screen, by rotating the body of the viewer integrally in the left-right direction and extracting the image from the original image as shown in FIGS. 4 and 5, a realistic screen can be displayed even when the entire body is physically moved in the left-right direction without the head itself being rotated.
By performing audio processing synchronized with this changing screen display, a localization of the sound image that matches the changed screen display can be obtained.
[0029]
Here, as described in the above-mentioned "Problem to be solved by the invention", the localization of the sound image refers to the position of the virtual sound source formed in the viewer's head when an audio signal, originally adjusted to be reproduced from an audio output device such as a speaker, is listened to through headphones, for example front right, front left, front center, and so on.
[0030]
In the above embodiment, the rotation angle about the fulcrum 7a of the structure 7 is detected, but the present invention is not limited to this; for example, a stereo headphone 4 with an integrated gyro sensor may of course be used.
[0031]
Next, as shown in FIG. 3, the video display apparatus according to the third embodiment of the present invention is an AV system 300 configured with a stationary video display apparatus, for example a television receiver. As in the case of FIG. 1, this stationary AV system 300 comprises a stereo headphone 4 and a gyro sensor 5 mounted on the head of the viewer 2, and an ordinary television receiver 8, which is the video display device of the present invention, having a display unit for displaying an image on a screen and an audio output unit for outputting audio.
[0032]
The gyro sensor 5 corresponds to a sensor unit for detecting the amount of change of rotation in the viewing direction of a viewer positioned in front of the display unit of the television receiver 8. It is formed integrally with the stereo headphone 4, but it may instead be worn on another part of the body as long as the viewing direction of the viewer can be detected.
[0033]
In this case, for example, an image with an aspect ratio of 16:9, corresponding to the entire screen of the television receiver 8, is used as the original image, and blankings 9 and 10 of variable width are provided on the left and right of this original image (see FIG. 4), or the screen is extracted (see FIG. 5), so that a partial area of the original image, for example an image with an aspect ratio of 4:3, is extracted and displayed.
[0034]
Thus, the image area to be extracted from the original image is changed in accordance with the amount of rotational movement of the head of the viewer 2 shown in FIGS. 1 and 3, or, in the case shown in FIG. 2 where the viewer and the screen move left and right together without the screen display or the viewing direction of the viewer changing, by changing the widths of the left and right blankings 9 and 10 in accordance with the amount of movement.
For example, as shown in FIG. 6A, when the viewer 2 rotates the head to the right, the width of the blanking 9 on the left side is increased, and the width of the blanking 10 on the right side is eliminated or narrowed. When the head of the viewer 2 faces straight forward, as shown in FIG. 6B, the widths of the left and right blankings 9 and 10 are made equal, and when the viewer 2 rotates the head to the left, as shown in FIG. 6C, the width of the left blanking 9 is eliminated or narrowed and the width of the right blanking 10 is increased.
In this way, the displayed image can be made to appear to move.
[0035]
In any of the AV system 100 of the first embodiment (see FIG. 1), the AV system 200 of the second embodiment (see FIG. 2), and the AV system 300 of the third embodiment (see FIG. 3), the video and audio processing method synchronized with the movement of the body according to the present invention detects the movement of the whole or a part of the body of the viewer 2 with a sensor, and the output of this sensor is input to a microcomputer. The microcomputer calculates body movement information such as the direction of movement or rotation, the speed, and the amount of movement (rotation angle).
Note that this microcomputer is provided at an appropriate position on either the video display device 3, 6 or 8 side or the stereo headphone 4 side.
The body movement information calculated by the microcomputer is supplied in common to the headphone 4, which outputs the processed audio, and to the video display devices 3, 6 and 8, which perform the video processing.
[0036]
The video display devices 3, 6 and 8 calculate the image area to be extracted from the original image based on the body movement information, and output an area designation signal specifying the calculated image area.
The video area designated by the area designation signal is extracted from the original video and displayed on the screen.
[0037]
At the same time, the stereo headphone 4 localizes the sound image based on the above-mentioned body movement information, and the audio signal is processed so as to form a sound image in harmony with the image displayed on the video display device 3, 6 or 8.
[0038]
Next, of the video and audio processing method synchronized with the movement of the body according to the present invention, the video processing part and the configuration for performing it will be described in detail below.
[0039]
The basic configuration for realizing the video processing method comprises, as shown in FIG. 7, a synchronization signal input unit 11, a storage/control device 12, a video information input unit 13, two storage devices 14 and 15, and a video processing unit 18.
[0040]
The synchronization signal input unit 11 extracts a synchronization signal from the digital video signal of the original video and outputs it to the storage/control device 12.
The storage/control device 12 performs control so that the video information is written into one of the storage devices 14 and 15 each time the synchronization signal is input.
Further, simultaneously with the synchronization signal, the storage/control device 12 receives the area designation signal, which controls the area to be written into the storage device 14 or 15, based on the area information from the video processing unit 18, which calculates the amount of rotational movement of the rotation angle and the like.
[0041]
The video information input unit 13 receives the video information of the digital video signal that is the original video, and outputs the video information to the storage devices 14 and 15.
When the storage devices 14 and 15 receive the write control signal from the storage/control device 12, they write either all of the video information from the video information input unit 13 or only the video signal of the area designated by the storage/control device 12.
Whether all or only part of the information is written depends on the configuration and will be described later.
[0042]
The video processing unit 18 is configured to detect the amount of rotational movement of the rotation angle of the head and to process the video based on the detected amount of movement.
[0043]
The video processing unit 18 generates a read control signal that causes, among the video information stored in the storage devices 14 and 15, either the video information of the area designated by the area designation signal from the storage/control device 12 or the entire stored content to be output.
Whether all or only part of the content is output depends on the configuration and will be described later.
[0044]
The basic configuration of the video processing method shown in FIG. 7, configured as described above, operates as follows.
That is, when a synchronization signal is input, the storage/control device 12 sends a write control signal to the storage device 14 to record the video information.
Based on the area designation signal generated simultaneously with the next synchronization signal, the storage/control device 12 sends a read control signal to the storage device 14 to output its video information, and at the same time sends a write control signal to the storage device 15 so that the new video information is written to the storage device 15. The above is repeated for each synchronization signal. In the embodiment shown in FIG. 7, the display video signal is extracted by designating an area within the video information recorded in the storage devices 14 and 15. However, the video signal already extracted by area designation may of course be written to the storage devices 14 and 15 instead. This point will be described in the section "1. Processing of General Digital Video Signal".
[0045]
Next, based on this basic configuration, transmission and reception using digital signals as used in satellite broadcasting and the like will be described in two parts: 1. Processing of a general digital video signal; and 2. Processing of a video signal in the MPEG2 format.
[0046]
1. Processing of a General Digital Video Signal: As shown in FIG. 8, a video processing apparatus for realizing a general method of processing digital video signals comprises the storage/control device 12, the storage devices 14 and 15, a video signal conversion device 16 commonly connected to the outputs of the storage devices 14 and 15, a synchronization signal detection device 17, and a control microcomputer 18, which is the video processing unit that performs the video processing. Note that the storage/control device 12 may be included in the control microcomputer 18, as indicated by the dotted line in FIG. 8.
[0047]
The control microcomputer 18, which is the video processing unit, comprises a synchronization calculation unit 19, an angle calculation unit 20, an extraction start/end timing calculation unit 21, and an angle change adjustment unit 22 based on the screen size and the video viewpoint distance. The control microcomputer 18 of this configuration has an operation speed very fast compared with the period of the synchronization signal of the video signal, and can perform all required calculations between two adjacent synchronization signals.
[0048]
The synchronization calculation unit 19 receives the synchronization signal detected by the
synchronization signal detection unit 17 and outputs the input synchronization signal to the
storage / control unit 12 and the extraction start / end timing calculation unit 21.
[0049]
The angle calculation unit 20 receives information such as position, direction, speed and angle from the gyro sensor 5, calculates the rotation angle of the head of the viewer 2 and the viewpoint movement distance, and outputs the calculation result, for example, to the headphone 4 (see FIG. 1) via the extraction start/end timing calculation unit 21 and an audio processing unit (not shown).
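The patent does not state how the angle calculation unit 20 derives the rotation angle from the gyro sensor output. Assuming a rate gyro, one common approach is to integrate the angular velocity over time, as in this illustrative Python sketch.

# Illustrative sketch (assumption: a rate gyro): one common way a block like the
# angle calculation unit 20 can obtain the head rotation angle is to integrate
# the angular velocity reported by the gyro sensor at each sample.

class AngleCalculator:
    def __init__(self):
        self.angle_deg = 0.0          # rotation angle relative to the start pose

    def update(self, angular_velocity_dps: float, dt_s: float) -> float:
        """Accumulate the yaw angle from one gyro sample (deg/s over dt_s)."""
        self.angle_deg += angular_velocity_dps * dt_s
        return self.angle_deg

calc = AngleCalculator()
for rate in (0.0, 15.0, 15.0, 0.0):   # deg/s samples taken every 10 ms
    angle = calc.update(rate, 0.010)
print(round(angle, 3))                 # accumulated head rotation in degrees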
[0050]
The extraction start/end timing calculation unit 21 creates, based on the angle signal calculated by the angle calculation unit 20, an area designation signal that specifies the image area to be extracted from the original image, that is, the timing of the start and end of extraction, and outputs this area designation signal to the storage/control device 12 and the storage devices 14 and 15.
Since the control of writing to and reading from the storage devices 14 and 15 has been described with reference to FIG. 7, its description is omitted here.
[0051]
The angle change adjustment unit 22 based on the screen size and the video viewpoint distance derives a correction value for the angle change from the screen size and the viewer's viewpoint distance, and outputs the correction value to the extraction start/end timing calculation unit 21.
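No formula is given for the correction performed by the angle change adjustment unit 22. A plausible geometric reading, shown in the Python sketch below purely as an assumption, is that the head rotation angle is converted into a horizontal shift on the screen using the viewpoint distance and the physical screen width.

# Illustrative sketch (the patent gives no formula): one plausible correction of
# the kind block 22 performs is to convert the head rotation angle into a pixel
# offset that depends on the viewing distance and the physical screen width.

import math

def angle_to_pixel_offset(angle_deg: float, viewpoint_distance_m: float,
                          screen_width_m: float, screen_width_px: int) -> int:
    """Horizontal shift on the screen corresponding to a head rotation.

    A rotation by angle_deg sweeps the line of sight across the screen by
    d * tan(angle); dividing by the physical width and scaling by the pixel
    width gives the shift of the extraction window in pixels.
    """
    shift_m = viewpoint_distance_m * math.tan(math.radians(angle_deg))
    return round(shift_m / screen_width_m * screen_width_px)

# Example: the same 10 degree turn produces a larger shift when sitting farther
# from the same screen, which is why the correction depends on both quantities.
print(angle_to_pixel_offset(10.0, 1.0, 0.8, 1920))   # closer viewing distance
print(angle_to_pixel_offset(10.0, 2.0, 0.8, 1920))   # farther viewing distance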
[0052]
The video signal of the area designated by the area designation signal output from the control microcomputer 18, which is the video processing unit configured as described above, is output from the storage devices 14 and 15 to the video signal conversion device 16. There are the following two procedures for this, and either may be used.
[0053]
That is, (1) the storage / control device 12 stores the video information of the original video in
the storage device 14 or 15 as it is.
The storage device 14 or 15 outputs the video information of the area designated by the area
designation signal from the control microcomputer 18.
[0054]
Alternatively, (2) the storage / control device 12 causes the storage device 14 or 15 to store the
video information for the video area designated by the area designation information from the
control microcomputer 18.
The storage device 14 or 15 outputs the stored content as it is.
[0055]
In either case, the video information output from the storage devices 14 and 15 contains a reduced amount of information compared with the original video, so either the video signal conversion device 16 supplements the amount of information, or the speed at which the information is read from the storage devices 14 and 15 is changed before output.
At this time, depending on the recording method, the synchronization signal is superimposed or reconstructed.
[0056]
Since the information output from the storage devices 14 and 15 is a digital signal, the video
signal conversion device 16 is provided with a D / A conversion function if necessary. The
display video signal output from the video signal conversion device 16 passes through a video
signal processing circuit (not shown), and is sent to the display for display.
[0057]
2. Processing of a Video Signal in the MPEG2 Format: As shown in FIG. 9, the processing apparatus for a video signal in the MPEG2 format comprises an MPEG2 decoder 23, a D/A converter 24, and a control microcomputer 25, which is the video processing unit that performs the video processing.
[0058]
Similarly to the control microcomputer 18 shown in FIG. 8 described above, the control microcomputer 25 for performing the video processing receives information such as position, direction, speed and angle from the gyro sensor 5, calculates the rotation angle of the head of the viewer 2 or the viewpoint movement distance, and outputs the calculation result, for example, to the headphone 4 (see FIG. 1) via the extraction start/end timing calculation unit 21 and an audio processing unit (not shown). The control microcomputer 25 is composed of the angle calculation unit 20, an extraction position calculation adjustment signal generation unit 26, and the angle change adjustment unit 22 based on the screen size and the video viewpoint distance. The extraction position calculation adjustment signal generation unit 26 calculates the extraction start position of the video signal within the original video, based on the angle information from the gyro sensor 5 and the like calculated by the angle calculation unit 20, and outputs it to the MPEG2 decoder 23 as an area designation signal. The extraction start position can be selected from any of the start/end positions obtained by dividing the original video into 256 sections.
[0059]
The processing of the video signal in the MPEG2 format shown in FIG. 9, configured as above, operates as follows. The MPEG2-format video information contained in the information obtained by reproducing package media or by broadcast reception is input to the MPEG2 decoder 23, and the audio information is supplied to the audio processing portion described later, for example to the headphone 4 (see FIG. 1).
[0060]
In the header portion of the MPEG2 stream input to the MPEG2 decoder 23, area designation information for cutting out an image with an aspect ratio of 4:3 from image information with an aspect ratio of 16:9 is written. In the present invention, the content of this area designation information is not used; instead, the area designation signal created by the control microcomputer 25 is written into the field of the area designation information.
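As a hedged illustration of this step, the Python sketch below computes a 4:3 extraction window from the head rotation angle and hands it to a decoder object; the set_display_window call is a hypothetical stand-in for however a concrete MPEG2 decoder exposes its area designation, not a real API.

# Illustrative sketch (hypothetical decoder interface): replacing the pan & scan
# area designation carried in the stream with the value computed by the control
# microcomputer 25, so the decoder 23 cuts the 4:3 window that follows the head
# rotation instead of the broadcaster's default window.

def apply_head_tracking_pan_scan(decoder, angle_deg: float,
                                 fov_deg: float = 30.0) -> None:
    window = 192                                   # 4:3 window = 3/4 of 256 divisions
    slack = 256 - window
    t = (max(-fov_deg, min(fov_deg, angle_deg)) + fov_deg) / (2 * fov_deg)
    start = round(t * slack)
    # 'set_display_window' is not a real API; it only marks where a concrete
    # decoder's area designation field would be overwritten.
    decoder.set_display_window(start_index=start, end_index=start + window - 1)

class DummyDecoder:
    """Minimal stand-in so the sketch is runnable."""
    def set_display_window(self, start_index: int, end_index: int) -> None:
        print(f"decode region: divisions {start_index}..{end_index} of 256")

apply_head_tracking_pan_scan(DummyDecoder(), angle_deg=-20.0)   # head turned left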
[0061]
The video signal of the area designated by the area designation signal is output from the MPEG2
decoder 23.
This video signal is converted to an analog signal by the D / A converter 24 and is sent to a
display unit (not shown) as a display video signal. As a result, an image of the designated area is
displayed.
[0062]
Next, an audio processing portion synchronized with the movement of the body according to the
present invention will be described.
[0063]
The configuration of the audio processing portion is that of the invention disclosed in JP-A-7-95698, with the information on the movement or rotation of the viewer supplied from the angle calculation unit 20 (see FIGS. 8 and 9) of the video processing portion.
The detailed description is left to that specification; a greatly simplified block diagram is shown in FIG. 10, and only the main points will be described here.
[0064]
In FIG. 10, an audio processing apparatus, for example the headphone 4, is schematically configured from an address control unit 30, a memory 31, a convolution integrator 32, a memory 33, a D/A converter 34, and a sounding body 35.
[0065]
The address control unit 30 receives the angle signal output from the angle calculation unit 20 (see FIGS. 8 and 9) and designates an address of the memory 31 based on the angle signal.
The angle and the address are associated on a one-to-one basis.
In this way, the means for detecting the amount of movement and/or the amount of change in the rotation angle of the whole or a part of the viewer's body is standardized, so that the user does not perceive variations in the detected amount caused by variations of components such as the gyro sensor or by temperature.
[0066]
The memory 31 stores a correspondence table of angles and impulse responses.
The impulse response represents the virtual sound source position with respect to the reference
direction of the viewer's head. Therefore, an impulse response signal representing a virtual sound
source position matching the direction of the image according to the movement angle of the head
is output from the memory 31.
[0067]
The impulse response signal is supplied to the convolution integrator 32 and the memory 33.
The convolution integrator 32 and the memory 33 generate an audio signal representing a sound image localized according to the angle of the head by convolving the impulse response signal with the input audio signal, and output it to the D/A converter 34.
[0068]
The audio signal is converted to an analog signal by the D/A converter 34, and is then supplied to the sounding body 35 and delivered as sound to the ear of the viewer.
[0069]
Again, as shown in FIGS. 4 and 5, for the image extracted and displayed from the original image, a movement position Q with respect to the current screen is calculated from the amount of movement of the rotation angle of the head relative to, for example, the central position P at which the original image is displayed on the screen, and based on this movement position Q the image is displayed with the left and right ends cut away from the original image.
[0070]
The cut out image portion may be displayed as blankings 9 and 10 (see FIG. 4) or may be
extracted and displayed (see FIG. 5).
[0071]
That is, if the rotation angle of the head corresponds to a movement to the left, the movement position Q lies to the left of the center position P, and the image is obtained by cutting away the image portions 10 and 10A at the right end.
If there is no movement of the rotation angle of the head and the head faces the front of the screen, the center position P and the movement position Q coincide, and the image is obtained by cutting away the image portions 9, 9A, 10 and 10A at both ends.
If the rotation angle of the head corresponds to a movement to the right, the movement position Q lies to the right of the center position P, and the image is obtained by cutting away the image portions 9 and 9A at the left end.
[0072]
The central position P is not only the central position of the extracted video but also the central position on which the audio processing is based, and the movement position Q likewise indicates the localization of the sound image with respect to the central position P of the video.
Therefore, by making the positions P and Q shown in FIGS. 4 and 5 coincide for the video and the audio, it is possible to obtain audio that follows the changed image.
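As an illustration of this point, the Python sketch below derives both the video crop offset (the displacement of Q from P) and the direction used for the sound image from the same head angle, so that picture and sound remain consistent; the constants and the sign convention are illustrative assumptions, not values from the patent.

# Illustrative sketch: driving both the video extraction and the sound image
# localization from the same movement of position Q relative to the central
# position P, so that the picture and the perceived sound direction agree.

def video_and_audio_targets(head_angle_deg: float,
                            max_angle_deg: float = 30.0,
                            max_offset_px: int = 240):
    """Return (crop_offset_px, sound_azimuth_deg) for one head angle.

    crop_offset_px    : how far position Q lies from P on the screen (signed).
    sound_azimuth_deg : direction assumed for the virtual sound source, driven
                        by the same quantity so the sound follows the cropped view.
    """
    a = max(-max_angle_deg, min(max_angle_deg, head_angle_deg))
    crop_offset_px = round(a / max_angle_deg * max_offset_px)
    sound_azimuth_deg = a
    return crop_offset_px, sound_azimuth_deg

for angle in (-30.0, 0.0, 15.0):
    print(angle, video_and_audio_targets(angle))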
[0073]
Although only the right audio channel is shown in FIG. 10, a convolution integrator 32, a memory 33, a D/A converter 34, and a sounding body 35 are also provided for the left audio channel, and the impulse response signal from the memory 31 is supplied to them as well.
[0074]
Thus, in the present invention, the screen display extracted from the original video is changed according to the movement of a part of the body, for example the eyeballs, or of the whole body, and the sound is changed in accordance with the changed screen. As long as the device is equipped with headphones capable of changing the sound image position, such as the products named ORBIT or Virtualphone, and an MPEG2 decoder chip, the invention is not limited to the embodiments described above, and the system can be built with only a small degree of modification.
It is also possible to construct a virtual reality space.
[0075]
Furthermore, an operation screen may be provided at the edge of the screen of a screen display device, for example a television receiver, and the state of the eyeballs looking at this operation screen can be recognized by detecting the rotation angle of the head.
[0076]
Then, by attaching a sensor that detects the viewpoint position on the screen, the operation screen attached to the edge of the screen for setting or changing the display can easily be operated by the movement of the eyeballs.
[0077]
Moreover, since the screen and sound processing are synchronized with the movement of the
body, it is possible to reduce the adverse effect on the body.
[0078]
Furthermore, if screen and audio processing synchronized with the movement of the body is performed, the angle information used for moving the video and the sound image is used effectively, and it can also be applied to media such as DSS (Digital Satellite System), DVD (Digital Video Disc) and DVB (Digital Video Broadcasting), and to the control of games and the like.
[0079]
As described above, the video and audio processing method and the video display device synchronized with the movement of the body according to the present invention extract and display the image to be displayed from the original video according to the movement of a part of the body, for example the eyeballs, or of the whole body. This has the effect that the screen can be changed according to the movement of the body, for example the movement of the eyeballs, and an image with an enhanced sense of reality can be created.
[0080]
Also, by changing the audio so that it follows the changed screen, there is the effect that, for example, a virtual reality space consisting of realistic audio and video in which the video and the audio coincide can easily be constructed.
[0081]
Furthermore, since the present invention can be realized without making the structure particularly complicated, for example in an AV apparatus equipped with existing headphones capable of changing the sound image position and an MPEG2 decoder chip, the shift between the image and the sound image can be eliminated at low cost.
[0082]
Brief description of the drawings
[0083]
FIG. 1 is an explanatory view showing the first embodiment of the video and audio processing method synchronized with the movement of the body according to the present invention.
[0084]
FIG. 2 is an explanatory view showing the second embodiment.
[0085]
FIG. 3 is an explanatory view showing the third embodiment.
[0086]
FIG. 4 is an explanatory view showing a display example of the blanking display.
[0087]
FIG. 5 is an explanatory view showing a display example of the extraction display.
[0088]
FIG. 6 is an explanatory view showing the relationship between the rotation of the head of the viewer and the blanking display.
[0089]
FIG. 7 is an explanatory view showing the basic configuration of the video processing method common to the embodiments of the video and audio processing method synchronized with the movement of the body according to the present invention.
[0090]
FIG. 8 is an explanatory view showing the configuration of the processing method for a general digital video signal.
[0091]
FIG. 9 is an explanatory view showing the configuration of the processing method for a video signal in the MPEG2 format.
[0092]
FIG. 10 is an explanatory view showing the basic configuration of the audio processing method common to the embodiments of the video and audio processing method synchronized with the movement of the body according to the present invention.
[0093]
Explanation of reference numerals
[0094]
1: support section, 2: viewer, 3: video display device, 4: stereo headphone, 5: gyro sensor, 6: video display device, 7: structure, 7a: fulcrum, 8: television receiver, 9: left blanking, 10: right blanking, 11: synchronization signal input unit, 12: storage/control device, 13: video information input unit, 14, 15: storage device, 16: video signal conversion device, 17: synchronization signal detection device, 18, 25: control microcomputer (video processing unit), 19: synchronization calculation unit, 20: angle calculation unit, 21: extraction start/end timing calculation unit, 22: angle change adjustment unit based on screen size and video viewpoint distance, 23: MPEG2 decoder, 24: D/A converter, 26: extraction position calculation adjustment signal generation unit, 30: address control unit, 31, 33: memory, 32: convolution integrator, 34: D/A converter, 35: sounding body, 100, 200, 300: AV systems of the embodiments.