
EBU7240 Computer Vision
Detection1: Pedestrian detection
Semester 1, 2021
Changjae Oh


Outline
• Overview
• Dalal-Triggs (pedestrian detection)
̶ Histogram of Oriented Gradients
̶ Learning with SVM

Object Detection
• Focus on object search: “Where is it?”
• Build templates that differentiate object patch from background patch
Object or Non-Object?

Challenges in modeling the object class
Illumination
Object pose
Occlusions
Intra-class appearance
[K. Grauman, B. Leibe]

Challenges in modeling the non-object class
True Detections
Bad Localization
Confused with Similar Object
Misc. Background
Confused with Dissimilar Objects

Object Detection Design challenges
• How to efficiently search for likely objects
̶ Even simple models require searching hundreds of thousands of positions and scales.
• Feature design and scoring
̶ How should appearance be modeled?
̶ What features correspond to the object?
• How to deal with different viewpoints?
̶ Often train different models for a few different viewpoints

General Process of Object Detection
Specify Object Model → Generate Hypotheses → Score Hypotheses → Resolve Detections
(This slide: Specify Object Model — what are the object parameters?)

Specifying an object model
1. Statistical template in bounding box
̶ Object is some (x, y, w, h) in the image
̶ Features are defined with respect to bounding-box coordinates
(Figure: image, template, visualization. Images from Felzenszwalb)

Specifying an object model
2. Articulated parts model
̶ Object is a configuration of parts
̶ Each part is detectable
(Images from Felzenszwalb)

Specifying an object model
3. Hybrid template/parts model
(Figure: detections and template visualizations)

Specifying an object model
4. 3D-ish model
̶ Object is a collection of 3D planar patches under affine transformation

Specifying an object model
5. Deformable 3D model
̶ Object is a parameterized space of shape/pose/deformation of a class of 3D objects

Why not just pick the most complex model?
• Inference is harder
̶ More parameters
̶ Harder to 'fit' (infer / optimize the fit)
̶ Longer computation
Leveraging MoCap Data for Human Mesh Recovery, arXiv 2021

General Process of Object Detection
Specify Object Model → Generate Hypotheses → Score Hypotheses → Resolve Detections
(This slide: Generate Hypotheses — propose an alignment of the model to the image)

Generating hypotheses
1. 2D template model / sliding window
̶ Test patch at each location and scale

Generating hypotheses
1. 2D template model / sliding window
̶ Test patch at each location and scale
Note – Template did not change size

Each window is separately classified
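For concreteness, here is a minimal sliding-window sketch in Python (not from the lecture); the window size, stride, and scale step are illustrative choices, and any classifier can be applied to each yielded patch:

import numpy as np

def sliding_windows(image, win_h=128, win_w=64, stride=8, scale_step=1.25):
    """Yield (x, y, scale, patch) for a fixed-size template over an image pyramid.

    Coordinates are in the downscaled image; multiply by `scale` to map back."""
    scale = 1.0
    img = image
    while img.shape[0] >= win_h and img.shape[1] >= win_w:
        for y in range(0, img.shape[0] - win_h + 1, stride):
            for x in range(0, img.shape[1] - win_w + 1, stride):
                yield x, y, scale, img[y:y + win_h, x:x + win_w]
        # The template never changes size: downscale the image instead.
        scale *= scale_step
        new_h, new_w = int(image.shape[0] / scale), int(image.shape[1] / scale)
        if new_h < win_h or new_w < win_w:
            break
        # Nearest-neighbour resize keeps the sketch dependency-free.
        rows = (np.arange(new_h) * scale).astype(int)
        cols = (np.arange(new_w) * scale).astype(int)
        img = image[rows[:, None], cols]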

Generating hypotheses
2. Region-based proposal
̶ Arbitrary bounding box + image ‘cut’ segmentation

General Process of Object Detection
Specify Object Model → Generate Hypotheses → Score Hypotheses → Resolve Detections
(This slide: Score Hypotheses — mainly gradient-based features, usually based on a summary representation, many classifiers)

General Process of Object Detection
Specify Object Model → Generate Hypotheses → Score Hypotheses → Resolve Detections
(This slide: Resolve Detections — resolve each proposed object based on the whole set)

Resolving detection scores
1. Non-max suppression
(Figure: three overlapping detections with scores 0.8, 0.8, and 0.1; only the highest-scoring box in each overlapping group is kept.)
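For reference, a minimal greedy non-max suppression sketch in numpy; the [x1, y1, x2, y2] box format and the IoU threshold are assumptions, since the slide only illustrates the idea of keeping the strongest overlapping box:

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection-over-union of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # Drop everything that overlaps the kept box too much
        order = order[1:][iou < iou_thresh]
    return keep

With the figure's three overlapping boxes (scores 0.8, 0.8, 0.1), this keeps only the strongest one.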

Resolving detection scores
2. Context/reasoning
̶ Via geometry
̶ Via known information or prior distributions
(Hoiem et al. 2006)

Dalal-Triggs: Person detection with HOG & linear SVM
N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR, June 2005

Statistical Template
• Object model = sum of scores of features at fixed positions
(Figure: two example windows scored against a threshold of 7.5)
̶ Window 1: feature scores sum to −0.5, not > 7.5 → Non-object
̶ Window 2: feature scores sum to 10.5 > 7.5 → Object

The Dalal-Triggs pedestrian detector
1. Extract a fixed-size (64×128 pixel) window at each position and scale
2. Compute HOG (histogram of oriented gradients) features within each window
3. Score the window with a linear SVM classifier
4. Perform non-maximum suppression to remove overlapping detections with lower scores
N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005
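OpenCV ships a pretrained detector in exactly this spirit; a minimal usage sketch (not the authors' original code; the image path is hypothetical):

import cv2

img = cv2.imread("street.jpg")           # hypothetical input image path
hog = cv2.HOGDescriptor()                # default 64x128 window, 8x8 cells, 9 bins
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Steps 1-3: sliding window over scales, HOG features, linear SVM scoring
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)

# Step 4 is a separate non-max suppression pass over rects/weights.
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)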


Preprocessing (Dalal-Triggs experiments)
• Tested with grayscale input
̶ Keeping colour gives slightly better performance vs. grayscale
• Gamma normalization and compression (square root, log)
̶ Very slightly better performance vs. no adjustment

Gradient computation
• The simple centred derivative mask [−1, 0, 1] outperforms the uncentered and cubic-corrected alternatives

Histogram of Oriented Gradients
• Orientation: 9 bins (histograms over unsigned angles, 0–180°)
• Histograms computed over k × k pixel cells
̶ Votes weighted by gradient magnitude
̶ Bilinear interpolation between cells
(Figure: 4 bins shown)

Normalize with respect to surrounding cells
Rectangular HOG (R-HOG)
How to normalize?
– Concatenate all cell responses from the neighboring block into a vector.
– Normalize the vector (e.g. L2 norm: v ← v / √(‖v‖² + ε²)).
– Extract the responses for the cell of interest.
– Do this 4×, once for each neighbouring 2×2 block the cell belongs to.
ε is a small constant (to avoid division by zero on empty bins).
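A minimal numpy sketch of one block normalization, assuming 'cells' holds the per-cell orientation histograms and using the L2 norm above:

import numpy as np

def normalize_block(cells, y, x, eps=1e-5):
    """L2-normalize the 2x2 block of cell histograms whose top-left cell is (y, x).

    cells is assumed to be a (n_cells_y, n_cells_x, 9) array of orientation
    histograms; the result is the 36-dimensional block descriptor."""
    v = cells[y:y + 2, x:x + 2, :].reshape(-1)      # 2 x 2 x 9 = 36 values
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)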

Descriptor size: # features = 15 × 7 (block positions) × 9 (orientations) × 4 (normalizations by neighboring cells) = 3780
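This count can be sanity-checked with scikit-image's hog function (whose defaults differ slightly from the original paper, e.g. in the block normalization scheme):

import numpy as np
from skimage.feature import hog

window = np.zeros((128, 64))                  # one 64x128 detection window
features = hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')
print(features.shape)                         # (3780,) = 15 x 7 blocks x 4 cells x 9 bins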

(Figure: training data classes — pedestrian (positive) and non-pedestrian (negative) example windows, and a visualization of the learned SVM weights.)
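A minimal training sketch with scikit-image HOG features and scikit-learn's LinearSVC; the window lists are hypothetical stand-ins, and the value of C is an arbitrary choice:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Hypothetical stand-in data: lists of 64x128 grayscale crops
# (pedestrian-centred windows and background windows respectively).
pos_windows = [np.random.rand(128, 64) for _ in range(10)]
neg_windows = [np.random.rand(128, 64) for _ in range(10)]

def describe(windows):
    return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), block_norm='L2-Hys')
                     for w in windows])

X = np.vstack([describe(pos_windows), describe(neg_windows)])
y = np.concatenate([np.ones(len(pos_windows)), np.zeros(len(neg_windows))])

clf = LinearSVC(C=0.01)                  # linear SVM; C is an arbitrary choice here
clf.fit(X, y)
score = clf.decision_function(X[:1])     # signed score used to accept/reject a window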

Strengths/Weaknesses of the Statistical Template Approach
• Strengths
̶ Works very well for non-deformable objects with canonical orientations: faces, cars, pedestrians
̶ Fast detection
• Weaknesses
̶ Not so good for highly deformable objects or "stuff"
̶ Not robust to occlusion
̶ Requires lots of training data

EBU7240 Computer Vision
Detection2: Face detection
Semester 1, 2021
Changjae Oh

Outline
• Overview
• Viola-Jones (face detection)
̶ Boosting for learning
̶ Decision trees

Consumer application: Apple iPhoto
• Things iPhoto thinks are faces

Challenges of face detection
• Sliding window = tens of thousands of location/scale evaluations
̶ A one-megapixel image has ~10⁶ pixels, and a comparable number of candidate face locations
• Faces are rare: 0–10 per image
̶ For computational efficiency, spend as little time as possible on non-face windows
• For a 1 Mpix image, to avoid having a false positive in every image, the false positive rate per window has to be less than 10⁻⁶

The Viola/Jones Face Detector
• A seminal approach to real-time object detection.
• Training is slow, but detection is very fast
• Key ideas:
1. Integral images for fast feature evaluation
2. Boosting for feature selection
3. Attentional cascade for fast non-face window rejection
P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. CVPR 2001. P. Viola and M. Jones. Robust real-time face detection. IJCV 57(2), 2004.

1. Integral images for fast feature evaluation
• The integral image computes a value at each pixel (x,y) that is the sum of all pixel values above and to the left of (x,y), inclusive.
• This can quickly be computed in one pass through the image.
• ‘Summed area table’

Computing the integral image
(Figure: the region already computed and the current pixel.)

Computing the integral image
• Cumulative row sum: s(x, y) = s(x−1, y) + i(x, y)
• Integral image: ii(x, y) = ii(x, y−1) + s(x, y)
Python: ii = np.cumsum(np.cumsum(i, axis=0), axis=1)  (a bare np.cumsum(i) would flatten the image)

Computing sum within a rectangle
• Let A, B, C, D be the values of the integral image at the corners of a rectangle
• The sum of the original image values within the rectangle can be computed as:
̶ sum = A − B − C + D
• Only 3 additions are required for any size of rectangle!
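A small numpy sketch of the trick, with a zero-padded integral image so the formula needs no border special-casing (0-based indexing, so the corner labels differ from the figure):

import numpy as np

def integral_image(img):
    """Padded integral image: ii[y, x] = sum of img[:y, :x]."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))        # zero row/column avoids border cases

def rect_sum(ii, y1, x1, y2, x2):
    """Sum of img[y1:y2, x1:x2] from four integral-image lookups."""
    return ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]

# Quick check on a toy image
img = np.arange(16).reshape(4, 4)
assert rect_sum(integral_image(img), 1, 1, 3, 3) == img[1:3, 1:3].sum()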

Integral Images
• ii = cumsum(cumsum(im, 1), 2)
ii(x,y) = Sum of the values in the grey region
SUM within Rectangle D is ii(4) – ii(2) – ii(3) + ii(1)

Integral Images
• Find the integral image of the figure below and compute the sum of pixels in the grey region based on the integral image.
• ii(4) − ii(2) − ii(3) + ii(1) = 42 − 10 − 4 + 1 = 29

Features that are fast to compute
• "Haar-like features" (reminiscent of Haar wavelets)
̶ Differences of sums of intensity
̶ Computed at different positions and scales within the sliding window
̶ Two-rectangle features, three-rectangle features, etc.
(Image: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=801361)

Image Features
“Rectangle filters”
Value = ∑(pixels in white area) – ∑(pixels in black area)
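Reusing rect_sum and the padded integral image from the earlier sketch, a two-rectangle Haar-like feature might look like this (the layout and sign convention are illustrative):

def haar_two_rect_horizontal(ii, y, x, h, w):
    """Left (white) half minus right (black) half of an (h x 2w) patch at (y, x)."""
    white = rect_sum(ii, y, x,     y + h, x + w)
    black = rect_sum(ii, y, x + w, y + h, x + 2 * w)
    return white - black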

(Figure: computing a rectangle feature from the integral image.)

But these features are rubbish…!
• Yes, individually they are 'weak classifiers'
̶ Jargon: 'feature', 'classifier', and 'learner' are used interchangeably here.
• But what if we combine thousands of them…
(Two-rectangle features, three-rectangle features, etc.)

How many features are there?
• For a 24×24 detection region, the number of possible rectangle features is ~160,000!

How many features are there?
• For a 24×24 detection region, the number of possible rectangle features is ~160,000!
• At test time, it is impractical to evaluate the entire feature set.
• Can we learn a 'strong classifier' using just a small subset of all possible features?

Boosting for feature selection
• Initially, weight each training example equally (in the figure, weight = size of point).
• In each boosting round:
̶ Find the weak classifier, trained for each feature, that achieves the lowest weighted training error.
̶ Raise the weights of the training examples misclassified by the current weak classifier.
(Figure: Weak Classifier 1)

Boosting illustration
• The same round repeats: the weights of the misclassified examples are increased, the next weak classifier with the lowest weighted error is selected, and so on.
(Figure sequence: Weights Increased → Weak Classifier 2 → Weights Increased → Weak Classifier 3)

Boosting illustration
• Compute the final classifier as a linear combination of all weak classifiers.
• The weight of each classifier is directly proportional to its accuracy.
• Exact formulas for re-weighting and combining weak learners depend on the particular boosting scheme (e.g., AdaBoost).
Y. Freund and R. Schapire, A short introduction to boosting, Journal of Japanese Society for Artificial Intelligence, 14(5):771-780, September 1999.
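A compact discrete-AdaBoost-style loop as one concrete instance of such a scheme; train_stump is a hypothetical weak-learner trainer (for example the stump sketch shown later), and labels are assumed to be in {−1, +1}:

import numpy as np

def adaboost(X, y, train_stump, n_rounds=10):
    """X: (N, D) features, y: labels in {-1, +1}, train_stump(X, y, w) -> weak classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # initially, weight each example equally
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = train_stump(X, y, w)            # weak classifier with lowest weighted error
        pred = h(X)                         # predictions in {-1, +1}
        err = np.sum(w * (pred != y)) / np.sum(w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)      # raise weights of misclassified examples
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)

    def strong(Xq):                         # final classifier: weighted vote
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, learners)))
    return strong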

Boosting for face detection
• First two features selected by boosting:
̶ The first feature measures the difference in intensity between the region of the eyes and a region across the upper cheeks. The feature capitalizes on the observation that the eye region is often darker than the cheeks.
̶ The second feature compares the intensities in the eye regions to the intensity across the bridge of the nose.
(Figure: the two features are shown in the top row and then overlaid on a typical training face in the bottom row.)

Feature selection with boosting
• Create a large pool of features (180K)
• Select discriminative features that work well together
• Final strong learner applied to each window:
H(x) = sign( Σₜ αₜ hₜ(x) ),  where hₜ is a weak learner and αₜ its learner weight
• "Weak learner" = feature + threshold + 'polarity':
hₜ(x) = +1 if pₜ fₜ(x) > pₜ θₜ, else −1
where fₜ(x) is the value of the rectangle feature, θₜ a threshold, and the 'polarity' pₜ flips which of the black/white regions counts as positive
• Choose the weak learner that minimizes error on the weighted training set, then reweight
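A brute-force sketch of picking the threshold and polarity for one rectangle feature on the weighted training set (quadratic in the number of unique feature values, which is fine for illustration but not how an efficient implementation would do it):

import numpy as np

def best_stump_for_feature(f, y, w):
    """f: (N,) rectangle-feature values, y: labels in {-1, +1}, w: example weights.

    Returns (weighted error, threshold, polarity) of the best stump for this feature."""
    best = (np.inf, None, None)
    for theta in np.unique(f):
        for polarity in (+1, -1):
            pred = np.where(polarity * f > polarity * theta, 1, -1)
            err = np.sum(w * (pred != y))
            if err < best[0]:
                best = (err, theta, polarity)
    return best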

Boosting: Pros and Cons
• Advantages of boosting
̶ Integrates classifier training with feature selection
̶ Complexity of training is linear instead of quadratic in the number of training examples
̶ Flexibility in the choice of weak learners and boosting scheme
̶ Testing is fast
• Disadvantages
̶ Needs many training examples
̶ Training is slow

Cascade for Fast Detection
Stage 1: h1(x) > t1?  →  Stage 2: h2(x) > t2?  →  …  →  Stage N: hN(x) > tN?
• Fast classifiers early in the cascade reject many negative examples but detect almost all positive examples.
• Slower classifiers come later, but most examples never get there.
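A minimal sketch of evaluating such a cascade on one window; 'stages' is a hypothetical list of (score function, threshold) pairs, cheapest first:

def cascade_classify(window, stages):
    """stages: list of (score_fn, threshold) pairs, cheapest first."""
    for score_fn, threshold in stages:
        if score_fn(window) <= threshold:
            return False          # rejected early: most windows stop here
    return True                   # survived every stage: face candidate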

Attentional cascade
• Chain classifiers that are progressively more complex and have lower false positive rates
(Figure: receiver operating characteristic; cascade diagram: IMAGE SUB-WINDOW → Classifier 1 →T→ Classifier 2 →T→ Classifier 3 →T→ …; a rejection at any stage sends the window to NON-FACE.)

Attentional cascade
• The detection rate and the false positive rate of the cascade are found by multiplying the respective rates of the individual stages
• A detection rate of 0.9 and a false positive rate on the order of 10⁻⁶ can be achieved by a 10-stage cascade if each stage has a detection rate of 0.99 (0.99¹⁰ ≈ 0.9) and a false positive rate of about 0.30 (0.3¹⁰ ≈ 6×10⁻⁶)
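A two-line check of the quoted numbers:

print(0.99 ** 10)   # ~0.904  -> overall detection rate
print(0.30 ** 10)   # ~5.9e-6 -> overall false positive rate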

Training the cascade
• Set target detection and false positive rates for each stage
• Keep adding features to the current stage until its target rates have been met
̶ Need to lower the boosting threshold to maximize detection (as opposed to minimizing total classification error)
̶ Test on a validation set
• If the overall false positive rate is not low enough, then add another stage
• Use false positives from the current stage as the negative training examples for the next stage (see the sketch below)
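A rough sketch of that bootstrapping loop; train_stage is hypothetical, and each trained stage is treated as a callable that accepts or rejects a window:

def train_cascade(pos, neg, train_stage, n_stages):
    """train_stage(pos, neg) -> a stage, i.e. a callable that accepts/rejects a window."""
    stages = []
    for _ in range(n_stages):
        stage = train_stage(pos, neg)
        stages.append(stage)
        # Only negatives that the cascade so far still (wrongly) accepts are kept
        # as the negative training set for the next stage.
        neg = [x for x in neg if all(s(x) for s in stages)]
        if not neg:
            break
    return stages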

The implemented system
• Training data
̶ 5000 faces: all frontal, rescaled to 24×24 pixels
̶ 9500 non-face images (300 million non-faces)
̶ Faces are normalized for scale and translation
̶ Many variations: across individuals, illumination, pose

Viola-Jones details
• 38 stages with 1, 10, 25, 50, … features
̶ 6061 features used in total, out of 180K candidates
̶ 10 features evaluated on average per sub-window
• Training examples
̶ 4916 positive examples
̶ 10000 negative examples collected after each stage
• Scanning
̶ Scale the detector rather than the image
̶ Scale step = 1.25 (factor between two consecutive scales)
̶ Translation step = 1 × scale (number of pixels between two consecutive windows)
• Non-max suppression: average the coordinates of overlapping boxes
• Train 3 classifiers and take a vote

Viola-Jones results
• Speed: 15 FPS (in 2001)
• MIT + CMU face dataset

Boosting for face detection
• A 200-feature classifier can yield a 95% detection rate and a false positive rate of 1 in 14084
̶ Not good enough!
(Figure: receiver operating characteristic (ROC) curve)

Output of Face Detector on Test Images

Summary: Viola/Jones detector
• Rectangle features
• Integral images for fast computation
• Boosting for feature selection
• Attentional cascade for fast rejection of negative windows
