Misc. ./apps ./doc ./platforms typos
Found via `codespell -q 3 --skip="./3rdparty" -I ../opencv-whitelist.txt`
luzpaz committed Feb 8, 2018
1 parent 090ee46 commit d47b1f3
Showing 54 changed files with 99 additions and 99 deletions.
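For reference, a minimal sketch of how such a scan could be reproduced locally (assuming codespell is installed, the command runs from the repository root, and the whitelist file sits one directory above the checkout, as the relative path in the commit message suggests):

```sh
# Scan the working tree for common misspellings.
#   -q 3    quiet level 3: suppress warnings about wrong encodings and binary files
#   --skip  exclude the vendored ./3rdparty directory
#   -I      ignore words listed in the given file (the project whitelist)
codespell -q 3 --skip="./3rdparty" -I ../opencv-whitelist.txt
```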

apps/annotation/opencv_annotation.cpp (2 changes: 1 addition & 1 deletion)
@@ -43,7 +43,7 @@

/*****************************************************************************************************
USAGE:
- ./opencv_annotation -images <folder location> -annotations <ouput file>
+ ./opencv_annotation -images <folder location> -annotations <output file>
Created by: Puttemans Steven - February 2015
Adapted by: Puttemans Steven - April 2016 - Vectorize the process to enable better processing

apps/interactive-calibration/parametersController.cpp (2 changes: 1 addition & 1 deletion)
@@ -89,7 +89,7 @@ bool calib::parametersController::loadFromParser(cv::CommandLineParser &parser)

if(!checkAssertion(mCapParams.squareSize > 0, "Distance between corners or circles must be positive"))
return false;
- if(!checkAssertion(mCapParams.templDst > 0, "Distance betwen parts of dual template must be positive"))
+ if(!checkAssertion(mCapParams.templDst > 0, "Distance between parts of dual template must be positive"))
return false;

if (parser.has("v")) {

apps/traincascade/HOGfeatures.cpp (2 changes: 1 addition & 1 deletion)
@@ -146,7 +146,7 @@ void CvHOGEvaluator::Feature::write(FileStorage &fs) const
//}

//cell[0] and featComponent idx writing. By cell[0] it's possible to recover all block
- //All block is nessesary for block normalization
+ //All block is necessary for block normalization
void CvHOGEvaluator::Feature::write(FileStorage &fs, int featComponentIdx) const
{
fs << CC_RECT << "[:" << rect[0].x << rect[0].y <<

apps/traincascade/old_ml.hpp (10 changes: 5 additions & 5 deletions)
@@ -54,7 +54,7 @@
#include <iostream>

// Apple defines a check() macro somewhere in the debug headers
- // that interferes with a method definiton in this header
+ // that interferes with a method definition in this header
#undef check

/****************************************************************************************\
@@ -1243,9 +1243,9 @@ struct CvGBTreesParams : public CvDTreeParams
// weak - array[0..(class_count-1)] of CvSeq
// for storing tree ensembles
// orig_response - original responses of the training set samples
- // sum_response - predicitons of the current model on the training dataset.
+ // sum_response - predictions of the current model on the training dataset.
// this matrix is updated on every iteration.
- // sum_response_tmp - predicitons of the model on the training set on the next
+ // sum_response_tmp - predictions of the model on the training set on the next
// step. On every iteration values of sum_responses_tmp are
// computed via sum_responses values. When the current
// step is complete sum_response values become equal to
@@ -1270,7 +1270,7 @@ struct CvGBTreesParams : public CvDTreeParams
// matrix has the same size as train_data. 1 - missing
// value, 0 - not a missing value.
// class_labels - output class labels map.
- // rng - random number generator. Used for spliting the
+ // rng - random number generator. Used for splitting the
// training set.
// class_count - count of output classes.
// class_count == 1 in the case of regression,
@@ -1536,7 +1536,7 @@ class CvGBTrees : public CvStatModel
// type - defines which error is to compute: train (CV_TRAIN_ERROR) or
// test (CV_TEST_ERROR).
// OUTPUT
- // resp - vector of predicitons
+ // resp - vector of predictions
// RESULT
// Error value.
*/

apps/traincascade/old_ml_precomp.hpp (2 changes: 1 addition & 1 deletion)
@@ -257,7 +257,7 @@ CvMat* icvGenerateRandomClusterCenters( int seed,
/* Fills the <labels> using <probs> by choosing the maximal probability. Outliers are
fixed by <oulier_tresh> and have cluster label (-1). Function also controls that there
weren't "empty" clusters by filling empty clusters with the maximal probability vector.
- If probs_sums != NULL, filles it with the sums of probabilities for each sample (it is
+ If probs_sums != NULL, fills it with the sums of probabilities for each sample (it is
useful for normalizing probabilities' matrice of FCM) */
void icvFindClusterLabels( const CvMat* probs, float outlier_thresh, float r,
const CvMat* labels );

doc/js_tutorials/js_assets/js_fourier_transform_dft.html (2 changes: 1 addition & 1 deletion)
@@ -61,7 +61,7 @@ <h2>Image DFT Example</h2>
planes.push_back(plane1);
cv.merge(planes, complexI);

- // in-place dft transfrom
+ // in-place dft transform
cv.dft(complexI, complexI);

// compute log(1 + sqrt(Re(DFT(img))**2 + Im(DFT(img))**2))

@@ -27,7 +27,7 @@ navigator.mediaDevices.getUserMedia({ video: true, audio: false })
video.play();
})
.catch(function(err) {
- console.log("An error occured! " + err);
+ console.log("An error occurred! " + err);
});
@endcode


doc/js_tutorials/js_imgproc/js_pyramids/js_pyramids.markdown (16 changes: 8 additions & 8 deletions)
@@ -10,15 +10,15 @@ Goal
Theory
------

- Normally, we used to work with an image of constant size. But in some occassions, we need to work
- with images of different resolution of the same image. For example, while searching for something in
- an image, like face, we are not sure at what size the object will be present in the image. In that
- case, we will need to create a set of images with different resolution and search for object in all
- the images. These set of images with different resolution are called Image Pyramids (because when
- they are kept in a stack with biggest image at bottom and smallest image at top look like a
- pyramid).
+ Normally, we used to work with an image of constant size. But on some occasions, we need to work
+ with (the same) images in different resolution. For example, while searching for something in
+ an image, like face, we are not sure at what size the object will be present in said image. In that
+ case, we will need to create a set of the same image with different resolutions and search for object
+ in all of them. These set of images with different resolutions are called **Image Pyramids** (because
+ when they are kept in a stack with the highest resolution image at the bottom and the lowest resolution
+ image at top, it looks like a pyramid).

- There are two kinds of Image Pyramids. 1) Gaussian Pyramid and 2) Laplacian Pyramids
+ There are two kinds of Image Pyramids. 1) **Gaussian Pyramid** and 2) **Laplacian Pyramids**

Higher level (Low resolution) in a Gaussian Pyramid is formed by removing consecutive rows and
columns in Lower level (higher resolution) image. Then each pixel in higher level is formed by the

@@ -53,7 +53,7 @@ results for images with varying illumination.

We use the function: **cv.adaptiveThreshold (src, dst, maxValue, adaptiveMethod, thresholdType, blockSize, C)**
@param src source 8-bit single-channel image.
- @param dst dstination image of the same size and the same type as src.
+ @param dst destination image of the same size and the same type as src.
@param maxValue non-zero value assigned to the pixels for which the condition is satisfied
@param adaptiveMethod adaptive thresholding algorithm to use.
@param thresholdType thresholding type that must be either cv.THRESH_BINARY or cv.THRESH_BINARY_INV.

@@ -28,7 +28,7 @@ BackgroundSubtractorMOG2
------------------------

It is a Gaussian Mixture-based Background/Foreground Segmentation Algorithm. It is based on two
- papers by Z.Zivkovic, "Improved adaptive Gausian mixture model for background subtraction" in 2004
+ papers by Z.Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction" in 2004
and "Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction"
in 2006. One important feature of this algorithm is that it selects the appropriate number of
gaussian distribution for each pixel. It provides better adaptibility to varying scenes due illumination

doc/pattern_tools/svgfig.py (2 changes: 1 addition & 1 deletion)
@@ -1857,7 +1857,7 @@ class Poly:
piecewise-linear segments joining the (x,y) points
"bezier"/"B" d=[(x, y, c1x, c1y, c2x, c2y), ...]
Bezier curve with two control points (control points
- preceed (x,y), as in SVG paths). If (c1x,c1y) and
+ precede (x,y), as in SVG paths). If (c1x,c1y) and
(c2x,c2y) both equal (x,y), you get a linear
interpolation ("lines")
"velocity"/"V" d=[(x, y, vx, vy), ...]

@@ -16,7 +16,7 @@ Apart from OpenCV, Python also provides a module **time** which is helpful in me
execution. Another module **profile** helps to get detailed report on the code, like how much time
each function in the code took, how many times the function was called etc. But, if you are using
IPython, all these features are integrated in an user-friendly manner. We will see some important
- ones, and for more details, check links in **Additional Resouces** section.
+ ones, and for more details, check links in **Additional Resources** section.

Measuring Performance with OpenCV
---------------------------------

doc/py_tutorials/py_feature2d/py_fast/py_fast.markdown (2 changes: 1 addition & 1 deletion)
@@ -82,7 +82,7 @@ Non-maximum Suppression.

It is several times faster than other existing corner detectors.

- But it is not robust to high levels of noise. It is dependant on a threshold.
+ But it is not robust to high levels of noise. It is dependent on a threshold.

FAST Feature Detector in OpenCV
-------------------------------

doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown (4 changes: 2 additions & 2 deletions)
@@ -25,7 +25,7 @@ used.
Second param is boolean variable, crossCheck which is false by default. If it is true, Matcher
returns only those matches with value (i,j) such that i-th descriptor in set A has j-th descriptor
in set B as the best match and vice-versa. That is, the two features in both sets should match each
- other. It provides consistant result, and is a good alternative to ratio test proposed by D.Lowe in
+ other. It provides consistent result, and is a good alternative to ratio test proposed by D.Lowe in
SIFT paper.

Once it is created, two important methods are *BFMatcher.match()* and *BFMatcher.knnMatch()*. First
@@ -164,7 +164,7 @@ Second dictionary is the SearchParams. It specifies the number of times the tree
should be recursively traversed. Higher values gives better precision, but also takes more time. If
you want to change the value, pass search_params = dict(checks=100).

- With these informations, we are good to go.
+ With this information, we are good to go.
@code{.py}
import numpy as np
import cv2 as cv

@@ -39,7 +39,7 @@ grayscale image. Then you specify number of corners you want to find. Then you s
level, which is a value between 0-1, which denotes the minimum quality of corner below which
everyone is rejected. Then we provide the minimum euclidean distance between corners detected.

- With all these informations, the function finds corners in the image. All corners below quality
+ With all this information, the function finds corners in the image. All corners below quality
level are rejected. Then it sorts the remaining corners based on quality in the descending order.
Then function takes first strongest corner, throws away all the nearby corners in the range of
minimum distance and returns N strongest corners.

@@ -35,7 +35,7 @@ different scale. It is OK with small corner. But to detect larger corners we nee
For this, scale-space filtering is used. In it, Laplacian of Gaussian is found for the image with
various \f$\sigma\f$ values. LoG acts as a blob detector which detects blobs in various sizes due to
change in \f$\sigma\f$. In short, \f$\sigma\f$ acts as a scaling parameter. For eg, in the above image,
- gaussian kernel with low \f$\sigma\f$ gives high value for small corner while guassian kernel with high
+ gaussian kernel with low \f$\sigma\f$ gives high value for small corner while gaussian kernel with high
\f$\sigma\f$ fits well for larger corner. So, we can find the local maxima across the scale and space
which gives us a list of \f$(x,y,\sigma)\f$ values which means there is a potential keypoint at (x,y) at
\f$\sigma\f$ scale.
@@ -66,7 +66,7 @@ the intensity at this extrema is less than a threshold value (0.03 as per the pa
rejected. This threshold is called **contrastThreshold** in OpenCV

DoG has higher response for edges, so edges also need to be removed. For this, a concept similar to
- Harris corner detector is used. They used a 2x2 Hessian matrix (H) to compute the pricipal
+ Harris corner detector is used. They used a 2x2 Hessian matrix (H) to compute the principal
curvature. We know from Harris corner detector that for edges, one eigen value is larger than the
other. So here they used a simple function,

@@ -79,7 +79,7 @@ points.
### 3. Orientation Assignment

Now an orientation is assigned to each keypoint to achieve invariance to image rotation. A
- neigbourhood is taken around the keypoint location depending on the scale, and the gradient
+ neighbourhood is taken around the keypoint location depending on the scale, and the gradient
magnitude and direction is calculated in that region. An orientation histogram with 36 bins covering
360 degrees is created. (It is weighted by gradient magnitude and gaussian-weighted circular window
with \f$\sigma\f$ equal to 1.5 times the scale of keypoint. The highest peak in the histogram is taken
@@ -89,7 +89,7 @@ with same location and scale, but different directions. It contribute to stabili
### 4. Keypoint Descriptor

Now keypoint descriptor is created. A 16x16 neighbourhood around the keypoint is taken. It is
- devided into 16 sub-blocks of 4x4 size. For each sub-block, 8 bin orientation histogram is created.
+ divided into 16 sub-blocks of 4x4 size. For each sub-block, 8 bin orientation histogram is created.
So a total of 128 bin values are available. It is represented as a vector to form keypoint
descriptor. In addition to this, several measures are taken to achieve robustness against
illumination changes, rotation etc.

@@ -26,7 +26,7 @@ and location.
![image](images/surf_boxfilter.jpg)

For orientation assignment, SURF uses wavelet responses in horizontal and vertical direction for a
- neighbourhood of size 6s. Adequate guassian weights are also applied to it. Then they are plotted in
+ neighbourhood of size 6s. Adequate gaussian weights are also applied to it. Then they are plotted in
a space as given in below image. The dominant orientation is estimated by calculating the sum of all
responses within a sliding orientation window of angle 60 degrees. Interesting thing is that,
wavelet response can be found out using integral images very easily at any scale. For many

@@ -27,7 +27,7 @@ commands in your Python terminal :
>>> print( flags )
@endcode
@note For HSV, Hue range is [0,179], Saturation range is [0,255] and Value range is [0,255].
- Different softwares use different scales. So if you are comparing OpenCV values with them, you need
+ Different software use different scales. So if you are comparing OpenCV values with them, you need
to normalize these ranges.

Object Tracking

doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.markdown (4 changes: 2 additions & 2 deletions)
@@ -124,8 +124,8 @@ got with corresponding values in newly added mask image. Check the code below:*
# newmask is the mask image I manually labelled
newmask = cv.imread('newmask.png',0)

- # whereever it is marked white (sure foreground), change mask=1
- # whereever it is marked black (sure background), change mask=0
+ # wherever it is marked white (sure foreground), change mask=1
+ # wherever it is marked black (sure background), change mask=0
mask[newmask == 0] = 0
mask[newmask == 255] = 1


doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.markdown (18 changes: 9 additions & 9 deletions)
@@ -12,15 +12,15 @@ In this chapter,
Theory
------

- Normally, we used to work with an image of constant size. But in some occassions, we need to work
- with images of different resolution of the same image. For example, while searching for something in
- an image, like face, we are not sure at what size the object will be present in the image. In that
- case, we will need to create a set of images with different resolution and search for object in all
- the images. These set of images with different resolution are called Image Pyramids (because when
- they are kept in a stack with biggest image at bottom and smallest image at top look like a
- pyramid).
-
- There are two kinds of Image Pyramids. 1) Gaussian Pyramid and 2) Laplacian Pyramids
+ Normally, we used to work with an image of constant size. But on some occasions, we need to work
+ with (the same) images in different resolution. For example, while searching for something in
+ an image, like face, we are not sure at what size the object will be present in said image. In that
+ case, we will need to create a set of the same image with different resolutions and search for object
+ in all of them. These set of images with different resolutions are called **Image Pyramids** (because
+ when they are kept in a stack with the highest resolution image at the bottom and the lowest resolution
+ image at top, it looks like a pyramid).
+
+ There are two kinds of Image Pyramids. 1) **Gaussian Pyramid** and 2) **Laplacian Pyramids**

Higher level (Low resolution) in a Gaussian Pyramid is formed by removing consecutive rows and
columns in Lower level (higher resolution) image. Then each pixel in higher level is formed by the

@@ -104,7 +104,7 @@ Template Matching with Multiple Objects
---------------------------------------

In the previous section, we searched image for Messi's face, which occurs only once in the image.
- Suppose you are searching for an object which has multiple occurances, **cv.minMaxLoc()** won't
+ Suppose you are searching for an object which has multiple occurrences, **cv.minMaxLoc()** won't
give you all the locations. In that case, we will use thresholding. So in this example, we will use
a screenshot of the famous game **Mario** and we will find the coins in it.
@code{.py}

@@ -239,7 +239,7 @@ from matplotlib import pyplot as plt
# simple averaging filter without scaling parameter
mean_filter = np.ones((3,3))

- # creating a guassian filter
+ # creating a gaussian filter
x = cv.getGaussianKernel(5,10)
gaussian = x*x.T


@@ -139,7 +139,7 @@ some, they are not.
Additional Resources
--------------------

- -# CMM page on [Watershed Tranformation](http://cmm.ensmp.fr/~beucher/wtshed.html)
+ -# CMM page on [Watershed Transformation](http://cmm.ensmp.fr/~beucher/wtshed.html)

Exercises
---------

@@ -16,7 +16,7 @@ Most of you will have some old degraded photos at your home with some black spot
on it. Have you ever thought of restoring it back? We can't simply erase them in a paint tool
because it is will simply replace black structures with white structures which is of no use. In
these cases, a technique called image inpainting is used. The basic idea is simple: Replace those
- bad marks with its neighbouring pixels so that it looks like the neigbourhood. Consider the image
+ bad marks with its neighbouring pixels so that it looks like the neighbourhood. Consider the image
shown below (taken from [Wikipedia](http://en.wikipedia.org/wiki/Inpainting)):

![image](images/inpaint_basics.jpg)
@@ -28,8 +28,8 @@ First algorithm is based on the paper **"An Image Inpainting Technique Based on
Method"** by Alexandru Telea in 2004. It is based on Fast Marching Method. Consider a region in the
image to be inpainted. Algorithm starts from the boundary of this region and goes inside the region
gradually filling everything in the boundary first. It takes a small neighbourhood around the pixel
- on the neigbourhood to be inpainted. This pixel is replaced by normalized weighted sum of all the
- known pixels in the neigbourhood. Selection of the weights is an important matter. More weightage is
+ on the neighbourhood to be inpainted. This pixel is replaced by normalized weighted sum of all the
+ known pixels in the neighbourhood. Selection of the weights is an important matter. More weightage is
given to those pixels lying near to the point, near to the normal of the boundary and those lying on
the boundary contours. Once a pixel is inpainted, it moves to next nearest pixel using Fast Marching
Method. FMM ensures those pixels near the known pixels are inpainted first, so that it just works