% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);

% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Rectangle', bbox);
figure; imshow(videoFrame); title('Detected face');

% Convert the first box into a list of 4 points.
% This is needed to visualize the rotation of the object.
bboxPoints = bbox2points(bbox(1, :));

To track the face over time, this example uses the Kanade-Lucas-Tomasi (KLT) algorithm. While it is possible to run the cascade object detector on every frame, doing so is computationally expensive. The detector may also fail to find the face when the subject turns or tilts their head.
This limitation comes from the type of trained classification model used for detection. The example detects the face only once, and then the KLT algorithm tracks the face across the video frames.
Identify Facial Features to Track

The KLT algorithm tracks a set of feature points across the video frames. Once the detection locates the face, the next step in the example identifies feature points in the face region that can be reliably tracked. This example uses the standard "good features to track" proposed by Shi and Tomasi.

Initialize a Tracker to Track the Points

With the feature points identified, you can now use the vision.PointTracker System object to track them. For each point in the previous frame, the point tracker attempts to find the corresponding point in the current frame.
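The code for these two steps does not appear in the text above. A minimal sketch, assuming the functions the text names (detectMinEigenFeatures implements the Shi-Tomasi criterion; the MaxBidirectionalError value is an illustrative choice, not taken from the text), might look like:

```matlab
% Detect Shi-Tomasi "good features to track" inside the face bounding box.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);

% Create a point tracker; MaxBidirectionalError enables the bidirectional
% error constraint used later (the value 2 is an assumed example setting).
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Initialize the tracker with the detected point locations and first frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);
```

The bidirectional error check tracks each point forward and then backward in time, and discards points whose round trip does not return close to the starting location.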
Then the estimateGeometricTransform function is used to estimate the translation, rotation, and scale between the old points and the new points. This transformation is applied to the bounding box around the face.
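To make the similarity transform concrete, here is a small standalone illustration (the rotation, scale, and translation values are invented for the example, not taken from the tutorial): it builds an affine2d object and applies it to the corners of a box with transformPointsForward, just as the tracking loop does with the estimated transform.

```matlab
% Standalone illustration: apply a similarity transform (rotation,
% scale, translation) to box corner points.
theta = 10 * pi / 180;   % 10-degree rotation (example value)
s     = 1.05;            % 5 percent scale-up (example value)
t     = [3 -2];          % translation in pixels (example value)

% Similarity transform matrix in MATLAB's row-vector convention [x y 1]*T.
T = [ s*cos(theta)  s*sin(theta)  0
     -s*sin(theta)  s*cos(theta)  0
      t(1)          t(2)          1 ];
xform = affine2d(T);

corners = [0 0; 100 0; 100 100; 0 100];   % corners of a 100x100 box
movedCorners = transformPointsForward(xform, corners);
```

Because only four parameters (angle, scale, and a 2-D translation) are estimated, a 'similarity' fit is more robust to noisy point matches than a full affine or projective fit.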
Create a point tracker and enable the bidirectional error constraint to make it more robust in the presence of noise and clutter. Note that the loop below also uses a video player object for display, whose creation is implied but not shown in the original text.

% Create a video player object for displaying video frames.
videoPlayer = vision.VideoPlayer();

oldPoints = points;

while ~isDone(videoFileReader)
    % Get the next frame.
    videoFrame = step(videoFileReader);

    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, videoFrame);
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);

    if size(visiblePoints, 1) >= 2  % need at least 2 points

        % Estimate the geometric transformation between the old points
        % and the new points, and eliminate outliers.
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

        % Apply the transformation to the bounding box points.
        bboxPoints = transformPointsForward(xform, bboxPoints);

        % Insert a bounding box around the object being tracked.
        bboxPolygon = reshape(bboxPoints', 1, []);
        videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, ...
            'LineWidth', 2);

        % Display tracked points.
        videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
            'Color', 'white');

        % Reset the points.
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
end

% Clean up.
release(videoFileReader);
release(videoPlayer);
release(pointTracker);

Summary

In this example, you created a simple face tracking system that automatically detects and tracks a single face.
Try changing the input video and see if you are still able to detect and track a face. Make sure the person is facing the camera in the initial frame for the detection step.

References

Viola, Paul A. and Jones, Michael J. "Rapid Object Detection using a Boosted Cascade of Simple Features," IEEE CVPR, 2001.

Lucas, Bruce D. and Kanade, Takeo. "An Iterative Image Registration Technique with an Application to Stereo Vision," IJCAI, 1981.

Shi, Jianbo and Tomasi, Carlo. "Good Features to Track," IEEE CVPR, 1994.