A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network
Computational recognition of sign languages aims to
enable greater social and digital inclusion of deaf people by having
computers interpret their language. This article presents a model for
recognizing two global parameters of sign languages: hand
configuration and hand movement. Hand motion is captured with
infrared technology and the hand joints are reconstructed in a virtual
three-dimensional space. A Multilayer Perceptron (MLP) neural
network classifies hand configurations, and Dynamic Time Warping
(DTW) recognizes hand motion. Beyond the recognition method
itself, we provide a dataset of hand configurations and motion
captures built with the help of professionals fluent in sign languages.
Although the technique can translate signs from any sign dictionary,
Brazilian Sign Language (Libras) was used as a case study. The model
presented in this paper achieved a recognition rate of 80.4%.
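The DTW step above aligns two motion sequences that may differ in speed. A minimal sketch of the classic DTW recurrence over 1-D scalar sequences follows; the paper operates on 3-D joint trajectories, and the function name here is illustrative:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i][j] holds the minimal accumulated distance for aligning
    a[:i] with b[:j]; each cell extends the cheapest of the three
    neighboring alignments (insertion, deletion, match).
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW warps the time axis, a sequence and a slowed-down copy of it (repeated samples) score a distance of zero, which is exactly the property needed to match the same sign performed at different speeds.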
MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks
Studies estimate that there will be 266,120 new cases
of invasive breast cancer and 40,920 breast cancer induced deaths
in 2018 alone. Despite the pervasiveness of this
affliction, the current process to obtain an accurate breast cancer
prognosis is tedious and time consuming. It usually requires a
trained pathologist to manually examine histopathological images and
identify the features that characterize various cancer severity levels.
We propose MITOS-RCNN: a region based convolutional neural
network (RCNN) geared for small object detection to accurately
grade one of the three factors that characterize tumor belligerence
described by the Nottingham Grading System: mitotic count. Other
computational approaches to mitotic figure counting and detection
do not demonstrate ample recall or precision to be clinically viable.
Our models outperformed all previous participants in the ICPR 2012
challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14
challenge along with recently published works. Our model achieved
an F-measure score of 0.955, a 6.11% improvement over the most
accurate of the previously proposed models.
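The F-measure reported above is the harmonic mean of precision and recall. For reference, its standard computation from raw detection counts (the function name is ours):

```python
def f_measure(tp, fp, fn):
    """Precision, recall and F1 from true positive, false positive
    and false negative counts of a detector."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true objects that are found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

The harmonic mean penalizes imbalance: a detector with high recall but poor precision (or vice versa) cannot achieve a high F1, which is why the metric is preferred for clinical viability claims like the one above.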
Spectrum of Dry Eye Disease in Computer Users of Manipur India
Computer and video display users may complain of asthenopia, burning, dry eyes, and similar symptoms. The management of dry eye is often not matched to its severity. With systematic evaluation and grading, dry eye disease is a condition that can be managed at all levels of ophthalmic care. In the present study, the different causes of dry eye and the prevalence of dry eye disease among computer users in Manipur, India were determined in 600 individuals (300 cases and 300 controls). Individuals between 15 and 50 years of age who had used computers for more than 3 hours a day for 1 year or more were included. Tear break-up time (TBUT) and Schirmer's test were conducted. The results show that 33 (20.4%) of 164 males and 47 (30.3%) of 136 females had dry eye. Possible explanations for the observed results are discussed.
Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision
In this study, a machine vision and singulation system was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. To separate paddy from rice in the image, it was necessary to set a threshold. Therefore, sample images of paddy and rice were taken and their RGB values were extracted using MATLAB. The mean and standard deviation of the data were then determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and the rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operating the kernel singulation system.
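The blue-to-red ratio thresholding described above can be sketched per pixel as follows. The paper used MATLAB; this is a NumPy equivalent, and which side of the 0.75 threshold corresponds to paddy versus rice is our assumption, as is the function name:

```python
import numpy as np

def kernel_mask(rgb, threshold=0.75):
    """Classify pixels by their blue-to-red ratio.

    rgb: H x W x 3 array (R, G, B channel order).
    Returns a boolean mask of pixels whose B/R ratio exceeds the
    threshold (assumed here to mark one kernel class).
    """
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = b / np.maximum(r, 1e-6)   # guard against division by zero
    return ratio > threshold
```

In practice the threshold would be chosen from the measured mean and standard deviation of the two classes, as the study describes.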
Detecting Tomato Flowers in Greenhouses Using Computer Vision
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and varying flower sizes. The algorithm is designed to be deployed on a drone that flies through greenhouses to accomplish tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers pollinated since the last visit to the row. The algorithm is designed to handle the real-world difficulties of a greenhouse, including varying lighting conditions, shadowing, and occlusion, while respecting the computational limitations of the drone's simple processor. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on the hue, saturation, and value channels is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times throughout the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses were performed on the acquisition angle, the time of day, the cameras, and the thresholding types. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle, and acquiring images in the afternoon yielded the best precision and recall.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision and recall, and the best F1 score. With these values, the average precision and recall over all the images were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
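A per-pixel version of the hue-range test (0.12-0.18, which covers yellow on the 0-1 hue scale) might look like the sketch below. The saturation and value floors are our own illustrative additions to suppress dark noise, not values from the paper:

```python
import colorsys

def is_flower_pixel(r, g, b, hue_lo=0.12, hue_hi=0.18,
                    sat_min=0.3, val_min=0.3):
    """Return True if an RGB pixel (components in [0, 1]) falls in the
    yellow hue band used for tomato-flower segmentation.

    The hue band comes from the paper; the saturation/value floors are
    assumed extras to reject dark or washed-out pixels."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_lo <= h <= hue_hi and s >= sat_min and v >= val_min
```

Pure yellow (1, 1, 0) has hue 1/6 ≈ 0.167 and passes the test, while red or green pixels fall outside the band.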
Human Motion Capture: New Innovations in the Field of Computer Vision
Human motion capture has become one of the major
areas of interest in the field of computer vision. Some of the major
application areas that have been rapidly evolving include
advanced human interfaces, virtual reality, and security/surveillance
systems. This study provides a brief overview of the techniques and
applications used for markerless human motion capture, which
analyzes human motion in the form of mathematical formulations.
The major contribution of this research is that it classifies the
computer-vision-based techniques of human motion capture
according to a taxonomy, and then breaks them down into four
systematically different categories: tracking, initialization, pose
estimation, and recognition. Detailed descriptions, including the
relationships between techniques, are given for tracking and pose
estimation, and the subcategories of each process are further
described. The various hypotheses used by researchers in this
domain are surveyed and the evolution of these techniques is
explained. The survey concludes that most researchers have focused
on using mathematical body models for markerless motion capture.
Stereo Motion Tracking
Motion tracking and stereo vision are complicated,
albeit well-understood, problems in computer vision. Existing
software that combines the two approaches to perform stereo motion
tracking typically employs complicated and computationally expensive
procedures. The purpose of this study is to create a simple and
effective solution that combines the two approaches. The
study explores a strategy to combine two techniques:
two-dimensional motion tracking using a Kalman filter, and depth
detection of objects using stereo vision. In conventional approaches,
objects in the scene of interest are observed using a single camera.
For stereo motion tracking, however, the scene of interest is
observed using video feeds from two calibrated cameras. Using two
simultaneous measurements from the two cameras, the depth of the
object from the plane containing the cameras is calculated.
The approach attempts to capture the entire three-dimensional spatial
information of each object in the scene and represent it through a
software estimator object. In discrete intervals, the estimator tracks
object motion in the plane parallel to the plane containing the cameras,
and updates the perpendicular distance of the object from that plane
as its depth. The ability to efficiently track
the motion of objects in three-dimensional space using a simplified
approach could prove to be an indispensable tool in a variety of
surveillance scenarios, ranging from high-security scenes such as
bank vaults, prisons, or other detention facilities to low-cost
applications in supermarkets and car parking lots.
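The depth computation from two calibrated cameras follows the standard rectified-stereo relation Z = fB/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity between the two image measurements. A minimal sketch (the variable names are ours):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from its horizontal image coordinates in a
    rectified stereo pair: Z = f * B / d.

    x_left, x_right: pixel column of the point in each camera image.
    focal_px: focal length in pixels; baseline_m: camera separation in meters.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity
```

In a stereo tracker, this depth value is what the estimator would update as the object's perpendicular distance from the camera plane at each interval.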
High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform
Real time image and video processing is a demand in
many computer vision applications, e.g. video surveillance, traffic
management and medical imaging. The processing of those video
applications requires high computational power. Thus, the optimal
solution is the collaboration of CPU and hardware accelerators. In
this paper, a Canny edge detection hardware accelerator is proposed.
Edge detection is one of the basic building blocks of video and image
processing applications. It is a common block in the pre-processing
phase of the image and video processing pipeline. Our approach
targets offloading the Canny edge detection algorithm from the
processing system (PS) to programmable logic (PL), taking
advantage of the High Level Synthesis (HLS) tool flow to accelerate the
implementation on the Zynq platform. The resulting implementation
enables up to a 100x performance improvement through hardware
acceleration: CPU utilization drops and the frame rate reaches
60 fps for a 1080p full-HD input video stream.
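For reference, the gradient stage at the heart of Canny can be sketched in software as below. Full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which are omitted here for brevity; the threshold fraction is an illustrative choice:

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Gradient stage of Canny edge detection: 3x3 Sobel filtering
    followed by a simple threshold on the gradient magnitude.

    img: 2-D grayscale array. Returns a boolean edge mask; border
    pixels are left unmarked.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    mag = np.hypot(gx, gy)                  # gradient magnitude
    return mag > threshold * max(mag.max(), 1e-12)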
Development of a Computer Vision System for the Blind and Visually Impaired Person
Eyes are an essential and conspicuous organ of the human body. They are outward and inward portals of the body that allow us to see the outside world and provide glimpses into one's inner thoughts and feelings. Blindness and visual impairment may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. This study emphasizes innovative tools that serve as aids to blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and an Arduino microcontroller board. The prototype provides advanced gesture recognition, voice recognition, obstacle detection, and indoor navigation. Open Computer Vision (OpenCV) performs the image analysis and gesture tracking needed to transform Kinect data into the desired output. Such a computer vision device provides greater accessibility for those with vision impairments.
Mouse Pointer Tracking with Eyes
In this article, we present our research work in
human-machine interaction. The research consists in manipulating
the workspace with the eyes. We present some of our results, in
particular the detection of the eyes and the recognition of mouse
actions. With this system, a handicapped user becomes able to
interact with the machine in a more intuitive way in diverse
applications and contexts. To test our application, we chose to work
in real time on videos captured by a camera placed in front of the user.
Forces Association-Based Active Contour
A welded structure must be inspected to guarantee that the weld quality meets the design requirements for safety and reliability. However, X-ray image analysis and defect recognition with computer vision techniques are very complex. Most difficulties lie in finding small, irregular defects in poor-contrast images, which requires pre-processing to extract and classify features against strong background noise. This paper addresses the design of a methodology to extract defects from the noisy background of radiographs with image processing. Based on the use of active contours, this methodology appears to give good results.
A Real-time Computer Vision System for Vehicle Tracking and Collision Detection
Recent developments in automotive technology focus on economy, comfort, and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators concerned with driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented, intended for use in collision detection and driver alerting. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect moving vehicles in the image; a vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The height of the vehicle's license plate in the image is determined using a plate recognition algorithm, and the distance of the moving object is calculated from the plate height. Once the distance of the moving vehicle is determined, its relative speed and the Time-to-Collision are calculated using distances measured in successive images. Results obtained in road tests are discussed in order to validate the proposed method.
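The distance and Time-to-Collision computations described above reduce to a pinhole-camera similar-triangles relation and a finite-difference speed estimate. A hedged sketch follows; the names, and the assumption that the closing speed is constant between frames, are ours:

```python
def distance_from_plate(plate_height_m, plate_height_px, focal_px):
    """Pinhole model: the apparent size of a known-height license plate
    scales inversely with distance, so Z = f * H / h."""
    return focal_px * plate_height_m / plate_height_px

def time_to_collision(d_prev, d_curr, dt):
    """Time-to-Collision from two successive distance measurements
    taken dt seconds apart, assuming constant closing speed."""
    closing_speed = (d_prev - d_curr) / dt   # > 0 when the gap is shrinking
    if closing_speed <= 0:
        return float("inf")                  # not closing: no collision predicted
    return d_curr / closing_speed
```

A driver-alert system would compare the TTC against a reaction-time margin and warn when it falls below the threshold.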
Hand Gesture Recognition using Blob Detection for Immersive Projection Display System
We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field, using hand gesture recognition with computer vision techniques. A background image was subtracted from the current image frame captured by a webcam, and we converted the color space of the image into HSV space. We then masked skin regions using a skin color range threshold and applied a noise reduction operation. We made blobs from the image, and gestures were recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for CAVE without cumbersome devices.
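The pipeline of background subtraction, HSV conversion, and skin-color thresholding might be sketched as below. The specific difference and skin-hue thresholds are illustrative guesses, not values from the paper:

```python
import colorsys
import numpy as np

def gesture_mask(frame, background, diff_thresh=30,
                 hue_max=0.14, sat_min=0.2, val_min=0.2):
    """Foreground mask via background subtraction, then keep only
    skin-colored pixels in HSV space.

    frame, background: H x W x 3 uint8 RGB images.
    The hue/saturation/value bounds are assumed, illustrative values.
    """
    # Foreground: pixels whose summed channel difference exceeds the threshold.
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    fg = diff > diff_thresh
    mask = np.zeros(fg.shape, bool)
    for i, j in zip(*np.nonzero(fg)):
        r, g, b = frame[i, j] / 255.0
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        mask[i, j] = h <= hue_max and s >= sat_min and v >= val_min
    return mask
```

Connected regions of the resulting mask are the "blobs" from which gestures would then be recognized.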
Offline Handwritten Signature Recognition
Biometrics, which refers to identifying an individual
based on his or her physiological or behavioral characteristics, has
the capability to reliably distinguish between an authorized person
and an imposter. Signature verification systems can be categorized as
offline (static) and online (dynamic). This paper presents a
neural-network-based recognition system for offline handwritten
signatures, trained on low-resolution scanned signature images.
Optical Fish Tracking in Fishways using Neural Networks
One of the main issues in computer vision is extracting the movement of one or several points or objects of interest in an image or video sequence in order to conduct any kind of study or control process. Different techniques for solving this problem have been applied in numerous areas, such as surveillance systems, traffic analysis, motion capture, image compression, and navigation systems, where the specific characteristics of each scenario determine the approach to the problem. This paper puts forward a computer vision based algorithm to analyze fish trajectories under high-turbulence conditions in artificial structures called vertical slot fishways, designed to allow the upstream migration of fish past obstructions in rivers. The algorithm calculates the position of the fish at every instant from images recorded with a camera, using neural networks to detect fish in the images. Laboratory tests have been carried out in a full-scale fishway model with live fish, allowing the reconstruction of the fish trajectory and the measurement of the fish's velocities and accelerations. These data can provide useful information for designing more effective vertical slot fishways.
Visual Hull with Imprecise Input
Imprecision is a long-standing problem in CAD design
and high-accuracy image-based reconstruction applications. The visual
hull, the closed shape equivalent to the silhouettes of the objects
of interest, is an important concept in image-based reconstruction.
We extend the domain-theoretic framework, a robust geometric
model that captures imprecision, to analyze the imprecision in
the output shape when the input vertices are given imprecisely.
Under this framework, we present an efficient algorithm to generate
the 2D partial visual hull, which represents the exact information of
the visual hull under only basic imprecision assumptions. We also show
how the visual-hull-from-polyhedra problem can be solved efficiently
in the context of imprecise input.
Fast 3D Collision Detection Algorithm using 2D Intersection Area
There has been much research on detecting collisions between real and virtual objects in 3D space. In general, these techniques need huge computing power, so many approaches are built on cloud, network, or distributed computing. For this reason, this paper proposes a novel, fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses four cameras and a coarse-and-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes in all camera views; the result of this step is the set of virtual sensors that may be in collision in 3D space. To decide on collisions accurately, in the fine step the system performs collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can help many other research areas and application fields such as HCI, augmented reality, intelligent spaces, and so on.
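The coarse step's logic, that two objects can only collide in 3D if their silhouettes overlap in every camera view (while the converse need not hold), can be sketched as:

```python
import numpy as np

def may_collide(real_masks, virtual_masks):
    """Coarse collision test over per-view silhouette masks.

    real_masks, virtual_masks: parallel lists of boolean H x W arrays,
    one pair per camera view. Returns True only if the silhouettes
    overlap in every view; a False result safely rules out collision,
    while a True result must still be confirmed by the fine 3D step.
    """
    return all((r & v).any() for r, v in zip(real_masks, virtual_masks))
```

Only candidate pairs that pass this cheap 2D test need the expensive visual-hull check in the fine step.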
View-Point Insensitive Human Pose Recognition using Neural Network and CUDA
Although a great deal of research has been done on
human pose recognition, the view-point of the cameras is still a critical
problem for the overall recognition system. In this paper, view-point
insensitive human pose recognition is proposed. The aims of the
proposed system are view-point insensitivity and real-time processing.
The recognition system consists of a feature extraction module, a neural
network, and real-time feed-forward calculation. First, a histogram-based
method is used to extract features from the silhouette image; it is
well suited to representing the shape of a human pose. To reduce the
dimension of the feature vector, Principal Component Analysis (PCA) is
used. Second, real-time processing is implemented using the Compute
Unified Device Architecture (CUDA), which improves
the speed of the feed-forward calculation of the neural network. We
demonstrate the effectiveness of our approach with experiments on
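The PCA step used to reduce the feature dimension can be sketched via the singular value decomposition, a standard formulation (the function name and the toy data in the usage are ours):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.

    X: n_samples x n_features array. The data is centered, the SVD of
    the centered matrix is taken (rows of Vt are the principal
    directions), and the data is projected onto the first k of them.
    """
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

For silhouette histograms this turns a long, redundant feature vector into a compact one, which is what keeps the neural network's feed-forward pass fast enough for real time.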
Hand Gesture Recognition Based on Combined Features Extraction
Hand gesture recognition is an active area of research in the vision
community, mainly for the purposes of sign language recognition and
human-computer interaction. In this paper, we propose a system to
recognize alphabet characters (A-Z) and numbers (0-9) in real time
from stereo color image sequences using Hidden Markov Models
(HMMs). Our system is based on three main stages: automatic
segmentation and preprocessing of the hand regions, feature extraction,
and classification. In the automatic segmentation and preprocessing
stage, color and a 3D depth map are used to detect the hands, whose
trajectory is then tracked using the mean-shift algorithm
and a Kalman filter. In the feature extraction stage, 3D combined
features of location, orientation, and velocity with respect to Cartesian
coordinates are used, and k-means clustering is employed to build the
HMM codewords. In the final stage, classification, the Baum-Welch
algorithm is used to fully train the HMM parameters.
The gestures for alphabets and numbers are recognized using a Left-Right
Banded model in conjunction with the Viterbi algorithm. Experimental
results demonstrate that our system can successfully recognize hand
gestures with a 98.33% recognition rate.
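Decoding with the Viterbi algorithm rests on a standard dynamic-programming recursion over HMM states. A generic sketch for a discrete-observation HMM follows; it does not encode the Left-Right Banded transition structure, and it works in plain probabilities rather than log-space for brevity:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete HMM.

    obs: list of observation-symbol indices.
    start_p: (n_states,) initial probabilities.
    trans_p: (n_states, n_states) transition matrix.
    emit_p: (n_states, n_symbols) emission matrix.
    All parameters are numpy arrays.
    """
    V = [start_p * emit_p[:, obs[0]]]   # best path probability per state
    back = []                           # backpointers per step
    for o in obs[1:]:
        scores = V[-1][:, None] * trans_p        # extend every path
        back.append(scores.argmax(axis=0))       # best predecessor per state
        V.append(scores.max(axis=0) * emit_p[:, o])
    path = [int(np.argmax(V[-1]))]
    for bp in reversed(back):                    # trace backpointers
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

For gesture classification, one trained HMM per symbol would score the observation sequence and the highest-likelihood model wins; the Viterbi path itself shows how the gesture aligns with the model's states.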
Non-contact Gaze Tracking with Head Movement Adaptation based on Single Camera
With advances in computer vision, non-contact gaze tracking systems are becoming much easier to operate and more comfortable to use; the technique proposed in this paper is specifically designed to achieve these goals. For convenience of operation, the proposal targets a system with a simple configuration composed of a fixed wide-angle camera and dual infrared illuminators. To enhance the usability of this single-camera system, a self-adjusting method called the Real-time gaze Tracking Algorithm with head movement Compensation (RTAC) is developed to estimate the gaze direction under natural head movement while simplifying the calibration procedure. In actual evaluations, an average accuracy of about 1° is achieved over a field of 20×15×15 cm3.
A Robust Method for Hand Tracking Using Mean-shift Algorithm and Kalman Filter in Stereo Color Image Sequences
Real-time hand tracking is a challenging task in many
computer vision applications such as gesture recognition. This paper
proposes a robust method for hand tracking in a complex environment
using mean-shift analysis and a Kalman filter in conjunction with a 3D
depth map. The depth information, obtained by passive stereo
measurement based on cross-correlation and the known calibration
data of the cameras, solves the overlapping problem between the
hands and the face. Mean-shift analysis uses the gradient of the
Bhattacharyya coefficient as a similarity function to derive the
candidate region most similar to a given hand target model, and a
Kalman filter is then used to estimate the position of the hand target.
The results of hand tracking, tested on various video sequences, are
robust to changes in shape as well as partial occlusion.
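The Kalman filter's predict/correct cycle can be shown in its simplest form. This 1-D constant-position variant is our simplification (a tracker would run one such estimator per image coordinate, or a joint constant-velocity model); the noise parameters are illustrative:

```python
class Kalman1D:
    """Scalar Kalman filter with a constant-position motion model.

    q: process noise variance (how much the state may drift per step).
    r: measurement noise variance (how noisy the detector is).
    """
    def __init__(self, x0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict: state unchanged, uncertainty grows by the process noise.
        p_pred = self.p + self.q
        # Correct: blend prediction and measurement by the Kalman gain.
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * (z - self.x)
        self.p = (1 - k) * p_pred
        return self.x
```

Fed with the noisy positions produced by mean-shift, the estimate converges toward the true position while smoothing out frame-to-frame jitter.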
A Cooperative Multi-Robot Control Using Ad Hoc Wireless Network
In this paper, a Cooperative Multi-robot for Carrying
Targets (CMCT) algorithm is proposed. The multi-robot team
consists of three robots: one supervisor and two workers that
carry boxes in a 100×100 m2 store. Each robot has
a self-recharging mechanism. The CMCT minimizes the robots'
working time for carrying many boxes during the day by working in
parallel: the supervisor detects the required variables while the
other robots work with the previous variables. It works with
straightforward mechanical models using simple cosine laws. It
finds each robot's shortest path to the target position while
avoiding obstacles using a proposed CMCT path planning
(CMCT-PP) algorithm, and it prevents collisions between robots
while they move. The robots interact over an ad hoc wireless network.
Simulation results show that the proposed system, consisting of the
CMCT algorithm and its accompanying CMCT-PP algorithm,
achieves a large improvement in time and distance while performing
the required tasks compared with existing algorithms.
Edge Detection in Digital Images Using Fuzzy Logic Technique
The fuzzy technique is an operator introduced to
simulate, at a mathematical level, the compensatory behavior in
processes of decision making or subjective evaluation; this paper
introduces such operators in the context of computer vision.
A novel method based on fuzzy logic reasoning is proposed for edge
detection in digital images without determining a threshold value.
The proposed approach begins by segmenting the images into regions
using a floating 3×3 binary matrix. The edge pixels are mapped to a
range of values distinct from each other. To assess the robustness of
the proposed method, results for different captured images are
compared with those obtained with the linear Sobel operator. The
method consistently improved the smoothness and straightness of
straight lines and the roundness of curved lines, while at the same
time corners became sharper and better defined.
Vision Based Hand Gesture Recognition
With the development of ubiquitous computing,
current user interaction approaches with keyboard, mouse, and pen
are not sufficient. Due to the limitations of these devices, the usable
command set is also limited. Direct use of the hands as an input device is
an attractive method for providing natural human-computer
interaction, which has evolved from text-based interfaces through 2D
graphical interfaces and multimedia-supported interfaces to fully
fledged multi-participant Virtual Environment (VE) systems.
Imagine the human-computer interaction of the future: a 3D
application where you can move and rotate objects simply by moving
and rotating your hand, all without touching any input device. In this
paper, a review of vision-based hand gesture recognition is presented.
The existing approaches are categorized into 3D-model-based
approaches and appearance-based approaches, highlighting their
advantages and shortcomings and identifying the open issues.
A Novel Computer Vision Method for Evaluating Deformations of Fibers Cross Section in False Twist Textured Yarns
Over the past five decades, textured polyester yarns produced by the
false twist method have been the most important and mass-produced
man-made fibers. Many parameters of the fiber cross section affect
the physical and mechanical properties of textured yarns: surface
area, perimeter, equivalent diameter, large diameter, small diameter,
convexity, stiffness, eccentricity, and hydraulic diameter. These
parameters were evaluated using digital image processing techniques.
To find trends between production criteria and the evaluated
cross-section parameters, three criteria of the production line were
adjusted and different types of yarn were produced: temperature,
drafting ratio, and D/Y ratio. Finally, the relations between the
production criteria and the cross-section parameters were analyzed.
The results showed that the presented technique can recognize and
measure the parameters of the fiber cross section with acceptable
accuracy. The optimum adjustment conditions were also estimated
from the results of the image analysis.
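Several of the cross-section parameters listed above follow directly from the measured area and perimeter of a segmented region. Definitions vary in the literature, so the formulas below are common conventions rather than necessarily the exact ones used in the paper, and the convex-hull area is assumed to be measured separately:

```python
import math

def cross_section_params(area, perimeter, convex_area):
    """Shape parameters of a fiber cross section from region measurements
    (all in consistent pixel units).

    equivalent_diameter: diameter of the circle with the same area.
    hydraulic_diameter:  4A/P, the standard hydraulics definition.
    convexity:           area over convex-hull area (often called solidity);
                         equals 1.0 for convex shapes.
    """
    equivalent_diameter = 2.0 * math.sqrt(area / math.pi)
    hydraulic_diameter = 4.0 * area / perimeter
    convexity = area / convex_area
    return equivalent_diameter, hydraulic_diameter, convexity
```

A circular cross section is the sanity check: all three quantities reduce to the diameter, the diameter, and 1.0 respectively.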
Detecting and Measuring Fabric Pills Using Digital Image Analysis
In this paper, a novel method is presented for evaluating fabric pills
using digital image processing techniques. The work provides a
technique for detecting pills and for measuring their heights, surfaces,
and volumes. Measuring the intensity of defects by human vision is
an inaccurate method for quality control, and this problem motivated
the use of digital image processing techniques for detecting defects
of the fabric surface. Previous systems were limited to measuring the
surface of defects, but in the presented method the height and the
volume of defects are also measured, which leads to more accurate
quality control. An algorithm was developed to first find the pills and
then measure their average intensity using three criteria: height,
surface, and volume. The results showed a meaningful relation
between the number of rotations and the quality of the pilled fabrics.
Panoramic Sensor Based Blind Spot Accident Prevention System
Many automotive accidents are due to blind spots and driver inattentiveness. The blind spot is the area that is invisible from the driver's viewpoint without head rotation. Several methods are available for assisting drivers. The simplest are rear mirrors and wide-angle lenses, but these require human attention, so their accuracy depends on the driver. Another, automated, approach makes use of sensors such as sonar or radar to gather range information, which is then processed and used to detect impending collisions. The disadvantages of such systems are low angular resolution and limited sensing volumes. This paper presents a panoramic sensor based automotive vehicle monitoring system.
3D Star Skeleton for Fast Human Posture Representation
In this paper, we propose an improved 3D star skeleton
technique, a skeletonization well suited to human posture
representation that reflects the 3D information of the posture.
Moreover, the proposed technique is simple and can therefore be
performed in real time. Existing skeleton construction techniques,
such as distance transformation, Voronoi diagrams, and thinning,
focus on the precision of the skeleton information; they are therefore
not applicable to real-time posture recognition, since they are
computationally expensive and highly susceptible to boundary noise.
Although a 2D star skeleton was proposed to address these problems,
it has limitations in describing the 3D information of the posture. To
represent human posture effectively, the constructed skeleton should
take the 3D information of the posture into account. The proposed 3D
star skeleton contains 3D data of the human body and focuses on
human action and posture recognition. Our 3D star skeleton uses eight
projection maps, which hold 2D silhouette information and depth data
of the human surface, and the extremal points can be extracted as the
features of the 3D star skeleton without searching the whole boundary
of the object. In terms of execution time, our 3D star skeleton is
therefore faster than the "greedy" 3D star skeleton, which uses all
boundary points on the surface. Moreover, our method offers a more
accurate skeleton of the posture than the existing star skeleton, since
the 3D data of the object are taken into account. Additionally, we
build a codebook, a collection of representative 3D star skeletons
for 7 postures, to recognize what posture of constructed
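The star skeleton construction that the paper builds on takes the extremal points as local maxima of the centroid-to-boundary distance. The 2D base case can be sketched as below; the paper's contribution extends this idea to eight 3D projection maps, which this sketch does not attempt:

```python
import math

def star_skeleton_2d(boundary, centroid):
    """Extremal points of a 2D star skeleton.

    boundary: ordered list of (x, y) points along a closed contour.
    centroid: (x, y) center of the silhouette.
    Returns the boundary points where the centroid-to-boundary distance
    is a strict local maximum (head, hands, feet for a human silhouette).
    """
    d = [math.dist(p, centroid) for p in boundary]
    n = len(d)
    extremal = []
    for i in range(n):
        # Compare against circular neighbors; d[i - 1] wraps via negative index.
        if d[i] > d[i - 1] and d[i] > d[(i + 1) % n]:
            extremal.append(boundary[i])
    return extremal
```

Because only the distance signal is scanned, no skeleton branches need to be grown through the interior of the shape, which is what makes star skeletons cheap enough for real-time use.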
Real-time 3D Feature Extraction without Explicit 3D Object Reconstruction
For communication between humans and computers in interactive
computing environments, gesture recognition has been studied
vigorously, and many studies have proposed efficient recognition
algorithms using 2D camera images. However, these methods have a
limitation: the extracted features cannot fully represent the object in
the real world. Although many studies have used 3D features instead
of 2D features for more accurate gesture recognition, problems such
as the processing time needed to generate 3D objects remain unsolved
in related research. We therefore propose a method to extract 3D
features without explicit 3D object reconstruction. The method uses a
modified GPU-based visual hull generation algorithm that disables
unnecessary processes, such as texture calculation, to generate three
kinds of 3D projection maps as the 3D feature: the nearest boundary,
the farthest boundary, and the thickness of the object projected onto
the base plane. In the experimental results section, we present results
of the proposed method on eight human postures (T shape, both hands
up, right hand up, left hand up, hands front, stand, sit, and bend) and
compare the computational time of the proposed method with that of
the previous
View-Point Insensitive Human Pose Recognition using Neural Network
This paper proposes a view-point insensitive human
pose recognition system using a neural network. The recognition
system consists of a silhouette image capturing module, a data-driven
database, and a neural network. The advantages of our system are,
first, that multiple view-point silhouette images of a 3D human model
can be captured automatically; this automatic capture module helps
reduce the time-consuming task of database construction. Second, we
develop a huge feature database to provide view-point insensitivity in
pose recognition. Third, we use a neural network to recognize human
pose from multiple views, because every pose from each model has
similar feature patterns even though each model has a different
appearance and view-point. To construct the database, we create 3D
human models using 3D modeling tools. The contour shape is used to
convert each silhouette image into a 12-dimensional feature vector.
This extraction task is processed semi-automatically, with the benefit
that capturing images and converting them to silhouettes in a real
capture environment is unnecessary. We demonstrate the effectiveness
of our approach with experiments in a virtual environment.