Emgu.CV.World

The Affine3 matrix, double precision. Create an empty Affine3, double precision matrix. Create a new identity matrix. Rotate the Affine3 matrix by a Rodrigues vector: the three components of the Rodrigues vector; returns the rotated Affine3 matrix. Translate the Affine3 matrix by the given value: the three components of the translation vector; returns the translated Affine3 matrix. Get the 3x3 matrix's value as a double vector (of size 9). Release the unmanaged memory associated with this Affine3 model.

Library to invoke OpenCV functions.

Creates video writer structure. Name of the output video file. 4-character code of the codec used to compress the frames: for example, CV_FOURCC('P','I','M','1') is the MPEG-1 codec, CV_FOURCC('M','J','P','G') is the motion-jpeg codec, etc. Framerate of the created video stream. Size of video frames. If != 0, the encoder will expect and encode color frames, otherwise it will work with grayscale frames. Returns the video writer.

Finishes writing to the video file and releases the structure. Pointer to the video file writer structure.

Writes/appends one frame to the video file. Video writer structure. The written frame. Returns true on success, false otherwise.

Allocates and initializes the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Index of the camera to be used. If there is only one camera or it does not matter what camera to use, -1 may be passed. Returns a pointer to the capture structure.

Allocates and initializes the CvCapture structure for reading the video stream from the specified file. After the allocated structure is no longer used, it should be released by the cvReleaseCapture function. Name of the video file. Returns a pointer to the capture structure.

The function cvReleaseCapture releases the CvCapture structure allocated by cvCreateFileCapture or cvCreateCameraCapture. Pointer to the video capturing structure.

Grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of cvGrabFrame and cvRetrieveFrame in one call. Video capturing structure. The output frame. Returns true if a frame is read. The returned image should not be released or modified by the user.

Grab a frame. Video capturing structure. Returns true on success.

Get the frame grabbed with cvGrabFrame(..). This function may apply some frame processing like frame decompression, flipping etc. Video capturing structure. The output image. The frame retrieve flag. Returns true on success. The returned image should not be released or modified by the user.

Retrieves the specified property of a camera or video file. Video capturing structure. Property identifier. Returns the specified property of the camera or video file.

Sets the specified property of video capturing. Video capturing structure. Property identifier. Value of the property. Returns true on success.

Check to make sure all the unmanaged libraries are loaded. Returns true if the libraries are loaded.

String marshaling type. Represents a bool value in C++. Represents an int value in C++. OpenCV's calling convention.
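The capture and writer functions above are also exposed through Emgu CV's managed VideoCapture and VideoWriter classes. A minimal sketch of a capture-and-record loop, assuming an Emgu CV 3.x/4.x style API (the capture class was named Capture in some older releases); "out.avi" and the camera index 0 are illustrative:

using System.Drawing;
using Emgu.CV;

using (VideoCapture capture = new VideoCapture(0))       // camera index; -1 picks any camera
using (Mat frame = new Mat())
using (VideoWriter writer = new VideoWriter(
    "out.avi",                                           // output video file
    VideoWriter.Fourcc('M', 'J', 'P', 'G'),              // 4-character codec code (motion-jpeg)
    30,                                                  // framerate of the created stream
    new Size(capture.Width, capture.Height),             // frame size
    true))                                               // color frames
{
    // Each Read is a grab plus retrieve; the writer appends one frame per call
    for (int i = 0; i < 100 && capture.Read(frame); i++)
        writer.Write(frame);
}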
Attempts to load opencv modules from the specific location. The directory where the unmanaged modules will be loaded; if it is null, the default location will be used. The names of opencv modules, e.g. "opencv_cxcore.dll" on Windows. Returns true if all the modules have been loaded successfully. If the directory is null, the default location on Windows is the dll's path appended by either "x64" or "x86", depending on the application's current mode.

Get the module format string. On Windows, "{0}.dll" will be returned; on Linux, "lib{0}.so" will be returned; otherwise "{0}" is returned.

Attempts to load opencv modules from the specific location. The names of opencv modules, e.g. "opencv_cxcore.dll" on Windows. Returns true if all the modules have been loaded successfully.

Static constructor to set up the opencv environment.

Get the corresponding depth type. The opencv depth type. Returns the equivalent depth type.

Get the corresponding opencv depth type. The element type. Returns the equivalent opencv depth type.

This function performs the same as the MakeType macro. The type of depth. The number of channels. Returns an integer that represents a mat type.

Check if the sizes of the C structures match those of C#. Returns true if the sizes match.

Finds the perspective transformation H=||h_ij|| between the source and the destination planes. Point coordinates in the original plane. Point coordinates in the destination plane. FindHomography method. The maximum allowed reprojection error to treat a point pair as an inlier; the parameter is only used in RANSAC-based homography estimation, e.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3. Optional output mask set by a robust method (CV_RANSAC or CV_LMEDS); note that the input mask values are ignored. Returns the 3x3 homography matrix if found, null if not found.

Finds the perspective transformation H=||h_ij|| between the source and the destination planes. Point coordinates in the original plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates), where N is the number of points. Point coordinates in the destination plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates). The type of the method. The maximum allowed re-projection error to treat a point pair as an inlier; the parameter is only used in RANSAC-based homography estimation, e.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3. The optional output mask set by a robust method (RANSAC or LMEDS). Output 3x3 homography matrix; the homography matrix is determined up to a scale, thus it is normalized to make h33=1.

Converts a rotation vector to a rotation matrix or vice versa. A rotation vector is a compact representation of a rotation matrix: the direction of the rotation vector is the rotation axis and the length of the vector is the rotation angle around the axis. The input rotation vector (3x1 or 1x3) or rotation matrix (3x3). The output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively. Optional output Jacobian matrix, 3x9 or 9x3, holding partial derivatives of the output array components w.r.t. the input array components.
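A hedged sketch of both calls with made-up point data; note the method enum is named HomographyMethod in Emgu CV 3.x and RobustEstimationAlgorithm in 4.x, so adjust to your version:

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

PointF[] srcPts = { new PointF(0, 0), new PointF(100, 0), new PointF(100, 100), new PointF(0, 100) };
PointF[] dstPts = { new PointF(8, 12), new PointF(110, 6), new PointF(115, 118), new PointF(2, 111) };

// RANSAC-based estimation; 3 is the maximum reprojection error (pixels) for an inlier
using (Mat h = CvInvoke.FindHomography(srcPts, dstPts, RobustEstimationAlgorithm.Ransac, 3))
{
    // h is the 3x3 homography, normalized so that h33 = 1 (null/empty if not found)
}

// Rodrigues: 3x1 rotation vector <-> 3x3 rotation matrix
using (Matrix<double> rvec = new Matrix<double>(new double[] { 0, 0, Math.PI / 2 }))
using (Mat rmat = new Mat())
{
    CvInvoke.Rodrigues(rvec, rmat);   // rmat now holds the 3x3 rotation matrix
}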
Calculates an essential matrix from the corresponding points in two images. Array of N (N >= 5) 2D points from the first image; the point coordinates should be floating-point (single or double precision). Array of the second image points of the same size and format as points1. Camera matrix K=[[fx 0 cx][0 fy cy][0 0 1]]; note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. Method for computing the essential matrix: RANSAC for the RANSAC algorithm, LMEDS for the LMedS algorithm. Parameter used for the RANSAC or LMedS methods only; it specifies a desirable level of confidence (probability) that the estimated matrix is correct. Parameter used for RANSAC: it is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix; it can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points; the array is computed only in the RANSAC and LMedS methods. Returns the essential matrix.

Calculates the fundamental matrix using one of the four methods listed above and returns the number of fundamental matrices found (1 or 3), or 0 if no matrix is found. Array of N points from the first image; the point coordinates should be floating-point (single or double precision). Array of the second image points of the same size and format as points1. Method for computing the fundamental matrix. Parameter used for RANSAC: it is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix; it can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. Parameter used for the RANSAC or LMedS methods only; it specifies a desirable level of confidence (probability) that the estimated matrix is correct. The optional pointer to the output array of N elements, every element of which is set to 0 for outliers and to 1 for the "inliers", i.e. points that comply well with the estimated epipolar geometry; the array is computed only in the RANSAC and LMedS methods, for other methods it is set to all 1's. Returns the calculated fundamental matrix.

For every point in one of the two images of a stereo pair, the function cvComputeCorrespondEpilines finds the equation of the line that contains the corresponding point (i.e. the projection of the same 3D point) in the other image. Each line is encoded by a vector of 3 elements l=[a,b,c]^T, so that l^T*[x, y, 1]^T=0, or a*x + b*y + c = 0. From the fundamental matrix definition (see the cvFindFundamentalMatrix discussion), line l2 for a point p1 in the first image (which_image=1) can be computed as l2=F*p1, and the line l1 for a point p2 in the second image (which_image=2) can be computed as l1=F^T*p2. Line coefficients are defined up to a scale; they are normalized (a^2+b^2=1) and stored into correspondent_lines. The input points: 2xN, Nx2, 3xN or Nx3 array (where N is the number of points); a multi-channel 1xN or Nx1 array is also acceptable. Index of the image (1 or 2) that contains the points. Fundamental matrix. Computed epilines, 3xN or Nx3 array.

Converts points from Euclidean to homogeneous space. Input vector of N-dimensional points. Output vector of N+1-dimensional points.

Converts points from homogeneous to Euclidean space. Input vector of N-dimensional points. Output vector of N-1-dimensional points.
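A sketch tying the fundamental matrix and epiline computations together, assuming Emgu's CvInvoke.FindFundamentalMat/ComputeCorrespondEpilines signatures as I recall them; the point loaders are hypothetical placeholders for your own matching step:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

PointF[] pts1 = LoadMatchedPoints(1);   // hypothetical helpers returning matched
PointF[] pts2 = LoadMatchedPoints(2);   // correspondences from the two views

using (VectorOfPointF vp1 = new VectorOfPointF(pts1))
using (VectorOfPointF vp2 = new VectorOfPointF(pts2))
using (Mat f = CvInvoke.FindFundamentalMat(vp1, vp2, FmType.Ransac, 3, 0.99))
using (Mat lines = new Mat())
{
    // Epilines in image 2 for the points of image 1: l2 = F * p1,
    // each line stored as (a, b, c) with a^2 + b^2 = 1
    CvInvoke.ComputeCorrespondEpilines(vp1, 1, f, lines);
}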
Transforms a 1-channel disparity map to a 3-channel image, a 3D surface. Disparity map. 3-channel, 16-bit integer or 32-bit floating-point image: the output map of 3D points. The reprojection 4x4 matrix; it can be arbitrary, e.g. the one computed by cvStereoRectify. Indicates whether the function should handle missing values (i.e. points where the disparity was not computed): if handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute) are transformed to 3D points with a very large Z value (currently set to 10000). The optional output array depth: if it is -1, the output image will have CV_32F depth; ddepth can also be set to CV_16S, CV_32S or CV_32F.

Returns the new camera matrix based on the free scaling parameter. Input camera matrix. Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements; if the vector is NULL/empty, zero distortion coefficients are assumed. Original image size. Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). Image size after rectification; by default, it is set to imageSize. Output rectangle that outlines the all-good-pixels region in the undistorted image. Indicates whether in the new camera matrix the principal point should be at the image center or not; by default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. Returns the new camera matrix based on the free scaling parameter.

Finds an initial camera matrix from 3D-2D point correspondences. Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. Vector of vectors of the projections of the calibration pattern points. Image size in pixels, used to initialize the principal point. If the aspect ratio is zero or negative, both fx and fy are estimated independently; otherwise, fx=fy*aspectRatio. Returns an initial camera matrix for the camera calibration process. Currently, the function only supports planar calibration patterns, which are patterns where each object point has z-coordinate = 0.

Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians: matrices of partial derivatives of image points as functions of all the input parameters, w.r.t. the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points). The array of object points. The rotation vector, 1x3 or 3x1. The translation vector, 1x3 or 3x1. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]; if it is IntPtr.Zero, all distortion coefficients are considered 0's. The output array of image points, 2xN or Nx2, where N is the total number of points in the view. Aspect ratio. Optional output 2Nx(10+<numDistCoeffs>) Jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients; in the old interface different components of the Jacobian are returned via different output parameters. Returns the array of image points which is the projection of the object points.
Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians: matrices of partial derivatives of image points as functions of all the input parameters, w.r.t. the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points). The array of object points, 3xN or Nx3, where N is the number of points in the view. The rotation vector, 1x3 or 3x1. The translation vector, 1x3 or 3x1. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]; if it is IntPtr.Zero, all distortion coefficients are considered 0's. The output array of image points, 2xN or Nx2, where N is the total number of points in the view. Aspect ratio. Optional output 2Nx(10+<numDistCoeffs>) Jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients; in the old interface different components of the Jacobian are returned via different output parameters.

Estimates intrinsic camera parameters and extrinsic parameters for each of the views. The 3D location of the object points: the first index is the index of the image, the second index is the index of the point. The 2D image location of the points: the first index is the index of the image, the second index is the index of the point. The size of the image, used only to initialize the intrinsic camera matrix. The output 3xM or Mx3 array of rotation vectors (compact representation of rotation matrices, see cvRodrigues2). The output 3xM or Mx3 array of translation vectors. Calibration type. The termination criteria. The output camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized. The output 4x1 or 1x4 vector of distortion coefficients [k1, k2, p1, p2]. Returns the final reprojection error.

Estimates intrinsic camera parameters and extrinsic parameters for each of the views. The joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views. The joint matrix of corresponding image points, 2xN or Nx2, where N is the total number of points in all views. Size of the image, used only to initialize the intrinsic camera matrix. The output camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized. The output 4x1 or 1x4 vector of distortion coefficients [k1, k2, p1, p2]. The output 3xM or Mx3 array of rotation vectors (compact representation of rotation matrices, see cvRodrigues2). The output 3xM or Mx3 array of translation vectors. Different flags. The termination criteria. Returns the final reprojection error.
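A hedged calibration sketch; the parameter order follows the Emgu 3.x CvInvoke.CalibrateCamera overload as I recall it, and the two helpers that produce the per-view point sets are hypothetical:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// One entry per view: 3D pattern points and their detected 2D projections
// (BuildObjectPoints/DetectImagePoints are hypothetical helpers)
MCvPoint3D32f[][] objectPoints = BuildObjectPoints();
PointF[][] imagePoints = DetectImagePoints();

using (Mat cameraMatrix = new Mat(3, 3, DepthType.Cv64F, 1))
using (Mat distCoeffs = new Mat(8, 1, DepthType.Cv64F, 1))
{
    Mat[] rvecs, tvecs;   // one rotation/translation vector per view
    double reprojErr = CvInvoke.CalibrateCamera(
        objectPoints, imagePoints, new Size(640, 480),
        cameraMatrix, distCoeffs,
        CalibType.Default,
        new MCvTermCriteria(30, 1e-6),
        out rvecs, out tvecs);
    // reprojErr is the final reprojection error returned by the function
}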
Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, the image frame resolution in pixels and the physical aperture size. The matrix of intrinsic parameters. Image size in pixels. Aperture width in real-world units (optional input parameter); set it to 0 if not used. Aperture height in real-world units (optional input parameter); set it to 0 if not used. Field of view angle in x direction in degrees. Field of view angle in y direction in degrees. Focal length in real-world units. The principal point in real-world units. The pixel aspect ratio ~ fy/fx.

Estimates extrinsic camera parameters for each view using known intrinsic parameters. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error. The array of object points. The array of corresponding image points. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]; if it is IntPtr.Zero, all distortion coefficients are considered 0's. The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2). The output 3x1 or 1x3 translation vector. Use the input rotation and translation parameters as a guess. Method for solving a PnP problem. Returns the extrinsic parameters.

Estimates extrinsic camera parameters for each view using known intrinsic parameters. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error. The array of object points, 3xN or Nx3, where N is the number of points in the view. The array of corresponding image points, 2xN or Nx2, where N is the number of points in the view. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]; if it is IntPtr.Zero, all distortion coefficients are considered 0's. The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2). The output 3x1 or 1x3 translation vector. Use the input rotation and translation parameters as a guess. Method for solving a PnP problem.

Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points; VectorOfPoint3D32f can also be passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points; VectorOfPointF can also be passed here. Input camera matrix. Input vector of distortion coefficients of 4, 5, 8 or 12 elements; if the vector is null/empty, zero distortion coefficients are assumed. Output rotation vector. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Number of iterations. Inlier threshold value used by the RANSAC procedure; the parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier. The probability that the algorithm produces a useful result. Output vector that contains indices of inliers in objectPoints and imagePoints. Method for solving a PnP problem.
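A minimal pose-estimation sketch, assuming the CvInvoke.SolvePnP overload that takes managed point arrays and reusing cameraMatrix/distCoeffs from a prior calibration such as the sketch above; the two point-producing helpers are hypothetical:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

MCvPoint3D32f[] objectPts = GetModelPoints();   // hypothetical: known 3D points
PointF[] imagePts = GetDetectedPoints();        // hypothetical: their 2D projections

using (Mat rvec = new Mat())
using (Mat tvec = new Mat())
{
    // cameraMatrix/distCoeffs come from a prior calibration
    bool ok = CvInvoke.SolvePnP(
        objectPts, imagePts, cameraMatrix, distCoeffs,
        rvec, tvec,
        false,                       // do not use rvec/tvec as an initial guess
        SolvePnpMethod.Iterative);   // minimizes the back-projection error
}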
Estimates the transformation between the two cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the two cameras is fixed, and if we computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (that can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other: given (R1, T1) it should be possible to compute (R2, T2), since we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T. The 3D location of the object points: the first index is the index of the image, the second index is the index of the point. The 2D image location of the points for camera 1: the first index is the index of the image, the second index is the index of the point. The 2D image location of the points for camera 2: the first index is the index of the image, the second index is the index of the point. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. Size of the image, used only to initialize the intrinsic camera matrix. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. The optional output essential matrix. The optional output fundamental matrix. Termination criteria for the iterative optimization algorithm. The calibration flags.
Estimates the transformation between the two cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the two cameras is fixed, and if we computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (that can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other: given (R1, T1) it should be possible to compute (R2, T2), since we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T. The joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views. The joint matrix of corresponding image points in the views from the 1st camera, 2xN or Nx2, where N is the total number of points in all views. The joint matrix of corresponding image points in the views from the 2nd camera, 2xN or Nx2, where N is the total number of points in all views. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]; if CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. Size of the image, used only to initialize the intrinsic camera matrix. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. The optional output essential matrix. The optional output fundamental matrix. Termination criteria for the iterative optimization algorithm. The calibration flags.

Computes the rectification transformations without knowing the intrinsic parameters of the cameras and their relative position in space, hence the suffix "Uncalibrated". Another related difference from cvStereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations, encoded by the homography matrices H1 and H2. The function implements the algorithm of [Hartley99]. Note that while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately by using cvCalibrateCamera2, and then the images can be corrected using cvUndistort2. The array of 2D points. The array of 2D points. Fundamental matrix; it can be computed from the same set of point pairs points1 and points2 using cvFindFundamentalMat. Size of the image. The rectification homography matrix for the first image. The rectification homography matrix for the second image. If the parameter is greater than zero, then all the point pairs that do not comply with the epipolar geometry well enough (that is, the points for which fabs(points2[i]^T*F*points1[i]) > threshold) are rejected prior to computing the homographies.
Computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, that makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. On input the function takes the matrices computed by cvStereoCalibrate, and on output it gives 2 rotation matrices and also 2 projection matrices in the new coordinates. The function is normally called after cvStereoCalibrate, which computes both camera matrices, the distortion coefficients, R and T. The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. The vector of distortion coefficients for the first camera, 4x1, 1x4, 5x1 or 1x5. The vector of distortion coefficients for the second camera, 4x1, 1x4, 5x1 or 1x5. Size of the image used for stereo calibration. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. 3x3 rectification transform (rotation matrix) for the first camera. 3x3 rectification transform (rotation matrix) for the second camera. 3x4 projection matrix in the new (rectified) coordinate system. 3x4 projection matrix in the new (rectified) coordinate system. The optional output disparity-to-depth mapping matrix, 4x4, see cvReprojectImageTo3D. The operation flags; use ZeroDisparity for the default. Use -1 for the default. Use Size.Empty for the default. The valid pixel ROI for image1. The valid pixel ROI for image2.

Finds subpixel-accurate positions of the chessboard corners. Source chessboard view; it must be an 8-bit grayscale or color image. Pointer to the output array of corners (PointF) detected. Region size. Returns true if successful.

Attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners. Source chessboard view; it must be an 8-bit grayscale or color image. The number of inner corners per chessboard row and column. Pointer to the output array of corners (PointF) detected. Various operation flags. Returns true if all the corners have been found and they have been placed in a certain order (row by row, left to right in every row); otherwise, if the function fails to find all the corners or to reorder them, it returns false. The coordinates detected are approximate; to determine their positions more accurately, the user may use the function cvFindCornerSubPix.

Filters off small noise blobs (speckles) in the disparity map. The input 16-bit signed disparity image. The disparity value used to paint off the speckles. The maximum speckle size to consider it a speckle; larger blobs are not affected by the algorithm. Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value. The optional temporary buffer to avoid memory allocation within the function.

Draws the individual chessboard corners detected (as red circles) in case the board was not found (pattern_was_found=0), or the colored corners connected with lines when the board was found (pattern_was_found != 0). The destination image; it must be an 8-bit color image. The number of inner corners per chessboard row and column. The array of corners detected. Indicates whether the complete board was found (!=0) or not (=0). One may just pass the return value of cvFindChessboardCorners here.

Reconstructs points by triangulation. 3x4 projection matrix of the first camera. 3x4 projection matrix of the second camera. 2xN array of feature points in the first image; it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 2xN array of corresponding points in the second image; it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 4xN array of reconstructed points in homogeneous coordinates.

Refines coordinates of corresponding points. 3x3 fundamental matrix. 1xN array containing the first set of points. 1xN array containing the second set of points. The optimized points1. The optimized points2.
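A sketch of the chessboard detection pipeline described above (detect, refine to subpixel accuracy, visualize); "board.png" and the 9x6 pattern are illustrative, and the ImreadModes member names vary slightly across Emgu versions:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Size patternSize = new Size(9, 6);   // inner corners per row and column
using (Mat gray = CvInvoke.Imread("board.png", ImreadModes.Grayscale))
using (Mat color = CvInvoke.Imread("board.png", ImreadModes.Color))
using (VectorOfPointF corners = new VectorOfPointF())
{
    bool found = CvInvoke.FindChessboardCorners(
        gray, patternSize, corners,
        CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage);

    if (found)   // refine the approximate corners to subpixel accuracy
        CvInvoke.CornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
            new MCvTermCriteria(30, 0.01));

    // red circles if not found, colored connected corners if found
    CvInvoke.DrawChessboardCorners(color, patternSize, corners, found);
}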
The default exception callback to handle errors thrown by OpenCV. An error handler which will ignore any error and continue.

A custom error handler for OpenCV: the numeric code for the error status; the name of the function where the error occurred; a description of the error; the source file name where the error occurred; the line number in the source where the error occurred; an arbitrary pointer that is transparently passed to the error handler.

A custom error handler for OpenCV: the numeric code for the error status; the name of the function where the error occurred; a description of the error; the source file name where the error occurred; the line number in the source where the error occurred; an arbitrary pointer that is transparently passed to the error handler.

Defines an error callback that can be registered using the cvRedirectError function: the numeric code for the error status; the name of the function where the error occurred; a description of the error; the source file name where the error occurred; the line number in the source where the error occurred; an arbitrary pointer that is transparently passed to the error handler.

Sets a new error handler that can be one of the standard handlers or a custom handler that has a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision. The new error handler. Arbitrary pointer that is transparently passed to the error handler. Pointer to the previously assigned user data pointer.

Sets a new error handler that can be one of the standard handlers or a custom handler that has a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision. Pointer to the new error handler. Arbitrary pointer that is transparently passed to the error handler. Pointer to the previously assigned user data pointer.

Sets the specified error mode. The error mode.

Returns the current error mode.

Returns the current error status: the value set with the last cvSetErrStatus call. Note that in Leaf mode the program terminates immediately after an error occurs, so to always get control after the function call, one should call cvSetErrMode and set the Parent or Silent error mode. Returns the current error status.

Sets the error status to the specified value. Mostly, the function is used to reset the error status (set it to CV_StsOk) to recover after an error. In other cases it is more natural to call cvError or CV_ERROR. The error status.

Returns the textual description for the specified error status code. In case of an unknown status the function returns a NULL pointer. The error status. Returns the textual description for the specified error status code.

Initializes the CvMat header so that it points to the same data as the original array but has a different shape: a different number of channels, a different number of rows, or both. Input array. Output header to be filled. New number of channels; new_cn = 0 means that the number of channels remains unchanged. New number of rows; new_rows = 0 means that the number of rows remains unchanged unless it needs to be changed according to the new_cn value. Returns the changed destination array.

Fills the destination array with the source array tiled: dst(i,j)=src(i mod rows(src), j mod cols(src)). So the destination array may be larger as well as smaller than the source array. Source array, image or matrix. Destination array, image or matrix. Flag to specify how many times the src is repeated along the vertical axis. Flag to specify how many times the src is repeated along the horizontal axis.
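In practice, Emgu CV installs an exception-throwing handler by default, so OpenCV errors surface as managed exceptions rather than terminating the process. A hedged sketch (assuming CvException lives in Emgu.CV.Util; the mismatched matrix sizes below deliberately trigger an OpenCV error):

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

try
{
    using (Mat a = new Mat(2, 2, DepthType.Cv8U, 1))
    using (Mat b = new Mat(3, 3, DepthType.Cv8U, 1))
    using (Mat sum = new Mat())
        CvInvoke.Add(a, b, sum);   // size mismatch: OpenCV raises an error
}
catch (CvException e)
{
    // The default handler converted the OpenCV error into a managed exception
    Console.WriteLine(e.Message);
}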
This function is the opposite of cvSplit. If the destination array has N channels, then if the first N input channels are not IntPtr.Zero, they are all copied to the destination array; otherwise, if only a single source channel of the first N is not IntPtr.Zero, this particular channel is copied into the destination array; otherwise an error is raised. The rest of the source channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to insert a single channel into the image. Input vector of matrices to be merged; all the matrices in mv must have the same size and the same depth. Output array of the same size and the same depth as mv[0]; the number of channels will be the total number of channels in the matrix array.

The function cvMixChannels is a generalized form of cvSplit and cvMerge and some forms of cvCvtColor. It can be used to change the order of the planes, add/remove an alpha channel, extract or insert a single plane or multiple planes, etc. The array of input arrays. The array of output arrays. The array of pairs of indices of the planes copied: from_to[k*2] is the 0-based index of the input plane, and from_to[k*2+1] is the index of the output plane, where the continuous numbering of the planes over all the input and over all the output arrays is used. When from_to[k*2] is negative, the corresponding output plane is filled with 0's. Unlike many other new-style C++ functions in OpenCV, mixChannels requires the output arrays to be pre-allocated before calling the function.

Extracts the specific channel from the image. The source image. The channel. 0-based index of the channel to be extracted.

Inserts the specific channel into the image. The source channel. The destination image where the channel will be inserted. 0-based index of the channel to be inserted.

Shuffles the matrix by swapping randomly chosen pairs of matrix elements on each iteration (where each element may contain several components in the case of multi-channel arrays). The input/output matrix; it is shuffled in-place. Pointer to the MCvRNG random number generator; use 0 if not sure. The relative parameter that characterizes the intensity of the shuffling performed. The number of iterations (i.e. pairs swapped) is round(iter_factor*rows(mat)*cols(mat)), so iter_factor=0 means that no shuffling is done, iter_factor=1 means that the function swaps rows(mat)*cols(mat) random pairs, etc.

Inverts every bit of every array element. The source array. The destination array. The optional mask for the operation; use null to ignore.

Calculates the per-element maximum of two arrays: dst(I)=max(src1(I), src2(I)). All the arrays must have a single channel, the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.

Returns the number of non-zero elements in arr: result = sum_I (arr(I)!=0). In the case of IplImage both ROI and COI are supported. The image. Returns the number of non-zero elements in the image.

Finds the locations of the non-zero pixels. The source array. The output array where the locations of the pixels are stored.

Computes the PSNR image/video quality metric. The first source image. The second source image. Returns the quality metric.
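A short sketch of the channel operations above using Emgu's managed wrappers (Split, ExtractChannel, Merge); "image.png" is an illustrative file name:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

using (Mat bgr = CvInvoke.Imread("image.png", ImreadModes.Color))   // 3-channel image
using (VectorOfMat channels = new VectorOfMat())
using (Mat green = new Mat())
using (Mat merged = new Mat())
{
    CvInvoke.Split(bgr, channels);            // three single-channel Mats: B, G, R
    CvInvoke.ExtractChannel(bgr, green, 1);   // just the 0-based channel 1 (G)
    CvInvoke.Merge(channels, merged);         // recombine into a 3-channel image
}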
Calculates the per-element minimum of two arrays: dst(I)=min(src1(I), src2(I)). All the arrays must have a single channel, the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.

Adds one array to another: dst(I)=src1(I)+src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size). The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. Optional depth type of the output array.

Subtracts one array from another: dst(I)=src1(I)-src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size). The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. Optional depth of the output array.

Divides one array by another: dst(I)=scale*src1(I)/src2(I) if src1!=IntPtr.Zero; dst(I)=scale/src2(I) if src1==IntPtr.Zero. All the arrays must have the same type and the same size (or ROI size). The first source array; if the pointer is IntPtr.Zero, the array is assumed to be all 1's. The second source array. The destination array. Optional scale factor. Optional depth of the output array.

Calculates the per-element product of two arrays: dst(I)=scale*src1(I)*src2(I). All the arrays must have the same type and the same size (or ROI size). The first source array. The second source array. The destination array. Optional scale factor. Optional depth of the output array.

Calculates the per-element bit-wise logical conjunction of two arrays: dst(I)=src1(I) & src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.

Calculates the per-element bit-wise disjunction of two arrays: dst(I)=src1(I)|src2(I). In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.

Calculates the per-element bit-wise exclusive-or of two arrays: dst(I)=src1(I)^src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Mask, 8-bit single channel array; specifies elements of the destination array to be changed.

Copies selected elements from the input array to the output array: dst(I)=src(I) if mask(I)!=0. If any of the passed arrays is of IplImage type, then its ROI and COI fields are used. Both arrays must have the same type, the same number of dimensions and the same size. The function can also copy sparse arrays (the mask is not supported in this case). The source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.

Initializes a scaled identity matrix: arr(i,j)=value if i=j, 0 otherwise. The matrix to initialize (not necessarily square). The value to assign to the diagonal elements.
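A minimal sketch of masked arithmetic with the functions above; the sizes and fill values are arbitrary:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat a = new Mat(240, 320, DepthType.Cv8U, 1))
using (Mat b = new Mat(240, 320, DepthType.Cv8U, 1))
using (Mat mask = new Mat(240, 320, DepthType.Cv8U, 1))
using (Mat dst = new Mat())
{
    a.SetTo(new MCvScalar(200));
    b.SetTo(new MCvScalar(100));
    mask.SetTo(new MCvScalar(255));          // non-zero mask: every pixel is updated

    CvInvoke.Add(a, b, dst, mask);           // saturates at 255 for 8-bit data
    CvInvoke.Subtract(dst, b, dst, mask);    // subtract b back out again
}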
Initializes the matrix as follows: arr(i,j)=(end-start)*(i*cols(arr)+j)/(cols(arr)*rows(arr)). The matrix to initialize; it should be single-channel, 32-bit integer or floating-point. The lower inclusive boundary of the range. The upper exclusive boundary of the range.

Calculates either the magnitude, the angle, or both of every 2D vector (x(I),y(I)): magnitude(I)=sqrt(x(I)^2+y(I)^2), angle(I)=atan(y(I)/x(I)). The angles are calculated with ~0.1 degree accuracy. For the (0,0) point the angle is set to 0. The array of x-coordinates. The array of y-coordinates. The destination array of magnitudes; may be set to IntPtr.Zero if it is not needed. The destination array of angles; may be set to IntPtr.Zero if it is not needed; the angles are measured in radians (0..2π) or in degrees (0..360°). The flag indicating whether the angles are measured in radians or in degrees.

Calculates either the x-coordinate, the y-coordinate or both of every vector magnitude(I)*exp(angle(I)*j), j=sqrt(-1): x(I)=magnitude(I)*cos(angle(I)), y(I)=magnitude(I)*sin(angle(I)). Input floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in which case the function assumes that all the magnitudes are =1; if it is not empty, it must have the same size and type as angle. Input floating-point array of angles of 2D vectors. Output array of x-coordinates of 2D vectors; it has the same size and type as angle. Output array of y-coordinates of 2D vectors; it has the same size and type as angle. The flag indicating whether the angles are measured in radians or in degrees.

Raises every element of the input array to p: dst(I)=src(I)^p if p is integer, dst(I)=abs(src(I))^p otherwise. That is, for a non-integer power exponent the absolute values of the input array elements are used. However, it is possible to get true values for negative inputs using some extra operations, as the following sample, computing the cube root of array elements, shows:

CvSize size = cvGetSize(src);
CvMat* mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvCmpS(src, 0, mask, CV_CMP_LT);           /* find negative elements */
cvPow(src, dst, 1./3);
cvSubRS(dst, cvScalarAll(0), dst, mask);   /* negate the results of negative inputs */
cvReleaseMat(&mask);

For some values of the power, such as integer values, 0.5 and -0.5, specialized faster algorithms are used. The source array. The destination array; should be the same type as the source. The exponent of the power.

Calculates the exponent of every element of the input array: dst(I)=exp(src(I)). Maximum relative error is 7e-6. Currently, the function converts denormalized values to zeros on output. The source array. The destination array; it should have double type or the same type as the source.

Calculates the natural logarithm of the absolute value of every element of the input array: dst(I)=log(abs(src(I))) for src(I)!=0, and dst(I)=C for src(I)=0, where C is a large negative number (-700 in the current implementation). The source array. The destination array; it should have double type or the same type as the source.
Finds the real roots of a cubic equation: coeffs[0]*x^3 + coeffs[1]*x^2 + coeffs[2]*x + coeffs[3] = 0 (if coeffs is a 4-element vector), or x^3 + coeffs[0]*x^2 + coeffs[1]*x + coeffs[2] = 0 (if coeffs is a 3-element vector). The equation coefficients, an array of 3 or 4 elements. The output array of real roots; should have 3 elements, padded with zeros if there is only one root. Returns the number of real roots found.

Finds all real and complex roots of any degree polynomial with real coefficients. The (degree + 1)-length array of equation coefficients (CV_32FC1 or CV_64FC1). The degree-length output array of real or complex roots (CV_32FC2 or CV_64FC2). The maximum number of iterations.

Solves the linear system (src1)*(dst) = (src2). The source matrix on the LHS. The source matrix on the RHS. The result. The method for solving the equation. Returns 0 if src1 is singular and the CV_LU method is used.

Sorts each matrix row or each matrix column in ascending or descending order, so you should pass two operation flags to get the desired behaviour. Input single-channel array. Output array of the same size and type as src. Operation flags.

Sorts each matrix row or each matrix column in ascending or descending order, so you should pass two operation flags to get the desired behaviour. Instead of reordering the elements themselves, it stores the indices of the sorted elements in the output array. Input single-channel array. Output integer array of the same size as src. Operation flags.

Performs a forward or inverse transform of a 1D or 2D floating-point array. In the case of real (single-channel) data, the packed format, borrowed from IPL, is used to represent the result of a forward Fourier transform or the input for an inverse Fourier transform. Source array, real or complex. Destination array of the same size and same type as the source. Transformation flags. Number of nonzero rows in the source array (in the case of a forward 2D transform), or the number of rows of interest in the destination array (in the case of an inverse 2D transform); if the value is negative, zero, or greater than the total number of rows, it is ignored. The parameter can be used to speed up 2D convolution/correlation when computing them via DFT; see the sample below.

Returns the minimum number N that is greater than or equal to size0, such that the DFT of a vector of size N can be computed fast. In the current implementation N=2^p x 3^q x 5^r for some p, q, r. Vector size. Returns the minimum number N that is greater than or equal to size0, such that the DFT of a vector of size N can be computed fast.

Performs per-element multiplication of two CCS-packed or complex matrices that are the results of a real or complex Fourier transform. The first source array. The second source array. The destination array of the same type and the same size as the sources. Operation flags; currently, the only supported flag is DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum. Optional flag that conjugates the second input array before the multiplication (true) or not (false).

Performs a forward or inverse transform of a 1D or 2D floating-point array. Source array, real 1D or 2D array. Destination array of the same size and same type as the source. Transformation flags.

Calculates the part of the line segment which is entirely within the rectangle. The rectangle. First ending point of the line segment; it is modified by the function. Second ending point of the line segment; it is modified by the function. Returns false if the line segment is completely outside the rectangle and true otherwise.
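A round-trip DFT sketch using the functions above, with the vector padded to the optimal fast size; the DxtType member names are as I recall them in Emgu's CvEnum and may differ slightly by version:

using Emgu.CV;
using Emgu.CV.CvEnum;

int n = 1000;
int optimalN = CvInvoke.GetOptimalDFTSize(n);   // smallest fast size >= n (2^p * 3^q * 5^r)

using (Mat signal = new Mat(1, optimalN, DepthType.Cv32F, 1))
using (Mat spectrum = new Mat())
using (Mat restored = new Mat())
{
    CvInvoke.Dft(signal, spectrum, DxtType.Forward, 0);     // forward transform
    CvInvoke.Dft(spectrum, restored, DxtType.InvScale, 0);  // inverse transform with scaling
}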
Calculates the absolute difference between two arrays: dst(I)_c = abs(src1(I)_c - src2(I)_c). All the arrays must have the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.

Calculates the weighted sum of two arrays as follows: dst(I)=src1(I)*alpha+src2(I)*beta+gamma. All the arrays must have the same type and the same size (or ROI size). The first source array. Weight of the first array elements. The second source array. Weight of the second array elements. Scalar added to each sum. The destination array. Optional depth of the output array, when both input arrays have the same depth.

Performs a range check for every element of the input array: dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0 for single-channel arrays; dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0 && lower(I)_1 <= src(I)_1 <= upper(I)_1 for two-channel arrays, etc. dst(I) is set to 0xff (all '1' bits) if src(I) is within the range and to 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size). The source image. The lower values, stored in an image of the same type and size as the source. The upper values, stored in an image of the same type and size as the source. The resulting mask.

Returns the calculated norm. Multiple-channel arrays are treated as single-channel, that is, the results for all channels are combined. The first source image. The second source image; if it is null, the absolute norm of arr1 is calculated, otherwise the absolute or relative norm of arr1-arr2 is calculated. Type of norm. The optional operation mask. Returns the calculated norm.

Returns the calculated norm. Multiple-channel arrays are treated as single-channel, that is, the results for all channels are combined. The first source image. Type of norm. The optional operation mask. Returns the calculated norm.

Creates the header and allocates data. Image width and height. Bit depth of image elements. Number of channels per element (pixel); can be 1, 2, 3 or 4. The channels are interleaved, for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ... Returns a pointer to the IplImage.

Allocates, initializes, and returns the structure IplImage. Image width and height. Bit depth of image elements. Number of channels per element (pixel); can be 1, 2, 3 or 4. The channels are interleaved, for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ... Returns the structure IplImage.

Initializes the image header structure, a pointer to which is passed by the user, and returns the pointer. Image header to initialize. Image width and height. Image depth. Number of channels. IPL_ORIGIN_TL or IPL_ORIGIN_BL. Alignment for image rows, typically 4 or 8 bytes.

Assigns user data to the array header. Array header. User data. Full row length in bytes.

Releases the header. Pointer to the deallocated header.

Initializes an already allocated CvMat structure. It can be used to process raw data with OpenCV matrix functions. Pointer to the matrix header to be initialized. Number of rows in the matrix. Number of columns in the matrix. Type of the matrix elements. Optional data pointer assigned to the matrix header. Full row width in bytes of the data assigned; by default, the minimal possible step is used, i.e. no gaps are assumed between subsequent rows of the matrix.
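A common use of the range check above is color thresholding; a hedged sketch using ScalarArray to supply the per-channel bounds (the HSV bounds below are arbitrary illustrative values for green):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat bgr = CvInvoke.Imread("image.png", ImreadModes.Color))
using (Mat hsv = new Mat())
using (Mat mask = new Mat())
using (ScalarArray lower = new ScalarArray(new MCvScalar(35, 50, 50)))    // per-channel lower bounds
using (ScalarArray upper = new ScalarArray(new MCvScalar(85, 255, 255)))  // per-channel upper bounds
{
    CvInvoke.CvtColor(bgr, hsv, ColorConversion.Bgr2Hsv);
    CvInvoke.InRange(hsv, lower, upper, mask);   // mask(I) = 0xff where all channels are in range
}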
Sets the channel of interest to a given value. Value 0 means that all channels are selected, 1 means that the first channel is selected, etc. If ROI is NULL and coi != 0, ROI is allocated. Image header. Channel of interest starting from 1; if 0, the COI is unset.

Returns the channel of interest of the image (it returns 0 if all the channels are selected). Image header. Returns the channel of interest of the image (0 if all the channels are selected).

Releases the image ROI. After that the whole image is considered selected. Image header.

Sets the image ROI to a given rectangle. If ROI is NULL and the value of the parameter rect is not equal to the whole image, ROI is allocated. Image header. ROI rectangle.

Returns the image ROI (if no ROI is set, the whole image is considered selected). Image header. Returns the image ROI rectangle.

Allocates a header for the new matrix and the underlying data, and returns a pointer to the created matrix. Matrices are stored row by row. All the rows are aligned by 4 bytes. Number of rows in the matrix. Number of columns in the matrix. Type of the matrix elements. Returns a pointer to the created matrix.

Initializes a CvMatND structure allocated by the user. Pointer to the array header to be initialized. Number of array dimensions. Array of dimension sizes. Type of array elements. Optional data pointer assigned to the matrix header. Returns a pointer to the array header.

Decrements the matrix data reference counter and releases the matrix header. Double pointer to the matrix.

The function allocates a multi-dimensional sparse array. Initially the array contains no elements, that is, Get or GetReal returns zero for every index. Number of array dimensions. Array of dimension sizes. Type of array elements. Returns a pointer to the array header.

The function releases the sparse array and clears the array pointer upon exit. Reference to the pointer to the array.

Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The assigned value.

Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The assigned value.

Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The third zero-based component of the element index. The assigned value.

Assigns the new value to the particular element of a single-channel array. Input array. Array of the element indices. The assigned value.

Clears (sets to zero) the particular element of a dense array or deletes the element of a sparse array. If the element does not exist, the function does nothing. Input array. Array of the element indices.

Assigns the new value to the particular element of an array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The assigned value.

Flips the array in one of 3 different ways (row and column indices are 0-based). Source array. Destination array. Specifies how to flip the array.

Rotates a 2D array in multiples of 90 degrees. Input array. Output array of the same type as src; the size is the same for ROTATE_180, and the rows and cols are switched for ROTATE_90 and ROTATE_270. An enum to specify how to rotate the array.
Returns the header corresponding to a specified rectangle of the input array. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. ROI is taken into account by the function, so the sub-array of the ROI is actually extracted. Input array. Pointer to the resultant sub-array header. Zero-based coordinates of the rectangle of interest. Returns the resultant sub-array header.

Returns the header corresponding to a specified row span of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the starting row (inclusive) of the span. Zero-based index of the ending row (exclusive) of the span. Index step in the row span: the function extracts every delta_row-th row from start_row up to (but not including) end_row. Returns the header corresponding to the specified row span of the input array.

Returns the header corresponding to a specified row of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the selected row. Returns the header corresponding to the specified row of the input array.

Returns the header corresponding to a specified column span of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the starting column (inclusive) of the span. Zero-based index of the ending column (exclusive) of the span. Returns the header corresponding to the specified column span of the input array.

Returns the header corresponding to a specified column of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the selected column. Returns the header corresponding to the specified column of the input array.

Returns the header corresponding to a specified diagonal of the input array. Input array. Pointer to the resulting sub-array header. Array diagonal: zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main one, 1 corresponds to the diagonal below the main one, etc. Returns a pointer to the resulting sub-array header.

Returns the number of rows (CvSize::height) and number of columns (CvSize::width) of the input matrix or image. In the case of an image the size of the ROI is returned. Array header. Returns the number of rows and columns of the input matrix or image; in the case of an image the size of the ROI is returned.

Draws a simple or filled circle with a given center and radius. The circle is clipped by the ROI rectangle. Image where the circle is drawn. Center of the circle. Radius of the circle. Color of the circle. Thickness of the circle outline if positive; otherwise indicates that a filled circle is to be drawn. Line type. Number of fractional bits in the center coordinates and radius value.

Divides a multi-channel array into separate single-channel arrays. Two modes are available for the operation. If the source array has N channels, then if the first N destination channels are not IntPtr.Zero, they are all extracted from the source array; otherwise, if only a single destination channel of the first N is not IntPtr.Zero, this particular channel is extracted; otherwise an error is raised. The rest of the destination channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to extract a single channel from the image. Input multi-channel array. Output array or vector of arrays.

Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by the ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees. Image. Center of the ellipse. Lengths of the ellipse axes. Rotation angle. Starting angle of the elliptic arc. Ending angle of the elliptic arc. Ellipse color. Thickness of the ellipse arc. Type of the ellipse boundary. Number of fractional bits in the center coordinates and axes' values.
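A small drawing sketch using the circle and ellipse functions above on a blank canvas; the coordinates and colors are arbitrary:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat canvas = new Mat(300, 400, DepthType.Cv8U, 3))
{
    canvas.SetTo(new MCvScalar(0, 0, 0));   // black background

    // positive thickness: 2-pixel outline, antialiased boundary
    CvInvoke.Circle(canvas, new Point(100, 150), 60,
        new MCvScalar(0, 255, 0), 2, LineType.AntiAlias);

    // negative thickness: fill the ellipse sector (here the whole 0..360 ellipse)
    CvInvoke.Ellipse(canvas, new Point(280, 150), new Size(90, 45), 30, 0, 360,
        new MCvScalar(0, 0, 255), -1);
}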
Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees. Image Center of the ellipse Length of the ellipse axes Rotation angle Starting angle of the elliptic arc Ending angle of the elliptic arc Ellipse color Thickness of the ellipse arc Type of the ellipse boundary Number of fractional bits in the center coordinates and axes' values Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees. Image The box that defines the ellipse area Ellipse color Thickness of the ellipse arc Type of the ellipse boundary Number of fractional bits in the center coordinates and axes' values Fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of src as follows: dst(I)=lut[src(I)+DELTA] where DELTA=0 if src has depth CV_8U, and DELTA=128 if src has depth CV_8S Source array of 8-bit elements Destination array of arbitrary depth and of the same number of channels as the source array Look-up table of 256 elements; should have the same depth as the destination array. In case of multi-channel source and destination arrays, the table should either have a single channel (in this case the same table is used for all channels), or the same number of channels as the source/destination array This function has several different purposes and thus has several synonyms. It copies one array to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel arrays are processed independently. The type conversion is done with rounding and saturation, that is, if a result of scaling + conversion cannot be represented exactly by a value of the destination array element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate cvConvert synonym. If the source and destination arrays have equal types, this is also a special case that can be used to scale and shift a matrix or an image; it corresponds to the cvScale synonym. Source array Destination array Scale factor Value added to the scaled source array elements Similar to cvCvtScale but it stores absolute values of the conversion results: dst(I)=abs(src(I)*scale + (shift,shift,...)) The function supports only destination arrays of 8u (8-bit unsigned integer) type; for other types the function can be emulated by a combination of cvConvertScale and cvAbs functions. Source array Destination array (should have 8u depth). ScaleAbs factor Value added to the scaled source array elements Calculates the average value M of array elements, independently for each channel: N = sum_{I: mask(I)!=0} 1, M_c = 1/N * sum_{I: mask(I)!=0} arr(I)_c If the array is IplImage and COI is set, the function processes the selected channel only and stores the average to the first scalar component (S0). The array The optional operation mask average (mean) of array elements The function cvAvgSdv calculates the average value and standard deviation of array elements, independently for each channel If the array is IplImage and COI is set, the function processes the selected channel only and stores the average and standard deviation to the first components of the output scalars (M0 and S0). The array Pointer to the mean value Pointer to the standard deviation The optional operation mask
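A hedged sketch of the scale-and-take-absolute-value conversion described above, using a hypothetical 8-bit input gray:

    // dst = |gray * 1.5 + 10|, saturated to 8-bit unsigned
    Mat brightened = new Mat();
    CvInvoke.ConvertScaleAbs(gray, brightened, 1.5, 10);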
Calculates a mean and standard deviation of array elements. Input array that should have from 1 to 4 channels so that the results can be stored in MCvScalar Calculated mean value Calculated standard deviation Optional operation mask Calculates sum S of array elements, independently for each channel: S_c = sum_I arr(I)_c If the array is IplImage and COI is set, the function processes the selected channel only and stores the sum to the first scalar component (S0). The array The sum of array elements Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The function can be used to compute horizontal and vertical projections of a raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes The input matrix The output single-row/single-column vector that accumulates all of the matrix rows/columns The dimension index along which the matrix is reduced. The reduction operation type Optional depth type of the output array Releases the header and the image data. Double pointer to the header of the deallocated image Draws contour outlines or filled contours. Image where the contours are to be drawn. Like in any other drawing function, the contours are clipped with the ROI All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours Maximal level for drawn contours. If 0, only the contour is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level. Thickness of the lines the contours are drawn with. If it is negative, the contour interiors are drawn Type of the contour segments Optional information about hierarchy. It is only needed if you want to draw only some of the contours Shift all the point coordinates by the specified value. It is useful in case the contours were retrieved from some image ROI and the ROI offset then needs to be taken into account during the rendering. Fills a convex polygon interior. This function is much faster than cvFillPoly and can fill not only convex polygons but any monotonic polygon, i.e. a polygon whose contour intersects every horizontal line (scan line) at most twice Image Array of pointers to a single polygon Polygon color Type of the polygon boundaries Number of fractional bits in the vertex coordinates Fills the area bounded by one or more polygons. Image. Array of polygons where each polygon is represented as an array of points. Polygon color Type of the polygon boundaries. Number of fractional bits in the vertex coordinates. Optional offset of all points of the contours. Renders the text in the image with the specified font and color. The printed text is clipped by ROI rectangle. Symbols that do not belong to the specified font are replaced with the rectangle symbol. Input image String to print Coordinates of the bottom-left corner of the first letter Font type. Font scale factor that is multiplied by the font-specific base size. Text color Thickness of the lines used to draw a text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
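A minimal sketch of the text-rendering call above; the FontFace enum name is assumed and img is a hypothetical image:

    // Stamp a white label at (10, 30)
    CvInvoke.PutText(img, "hello", new Point(10, 30),
                     FontFace.HersheySimplex, 1.0,
                     new MCvScalar(255, 255, 255), 2);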
Calculates the width and height of a text string. Input text string. Font to use Font scale factor that is multiplied by the font-specific base size. Thickness of lines used to render the text. Y-coordinate of the baseline relative to the bottom-most text point. The size of a box that contains the specified text. Finds minimum and maximum element values and their positions. The extremums are searched over the whole array, selected ROI (in case of IplImage) or, if mask is not IntPtr.Zero, in the specified array region. If the array has more than one channel, it must be IplImage with COI set. In case of multi-dimensional arrays min_loc->x and max_loc->x will contain raw (linear) positions of the extremums The source array, single-channel or multi-channel with COI set Pointer to returned minimum value Pointer to returned maximum value Pointer to returned minimum location Pointer to returned maximum location The optional mask that is used to select a subarray. Use IntPtr.Zero if not needed Copies the source 2D array into the interior of the destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate a border type that is different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most other filtering functions in OpenCV, internally use the replication border type, while the user may need a zero border or a border filled with 1's or 255's The source image The destination image Type of the border to create around the copied source image rectangle Value of the border pixels if bordertype=CONSTANT Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. Return the particular array element Input array. Must have a single channel The first zero-based component of the element index the particular array element Return the particular array element Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index the particular array element Return the particular array element Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index The third zero-based component of the element index the particular array element Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays, though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index the particular element of single-channel array
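A minimal sketch of the min/max search described above, on a hypothetical single-channel gray image:

    double minVal = 0, maxVal = 0;
    Point minLoc = new Point(), maxLoc = new Point();
    CvInvoke.MinMaxLoc(gray, ref minVal, ref maxVal, ref minLoc, ref maxLoc);
    // minLoc/maxLoc now hold the pixel coordinates of the extremums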
Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays, though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index the particular element of single-channel array Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays, though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index The third zero-based component of the element index the particular element of single-channel array Enables or disables the optimized code. true if [use optimized]; otherwise, false. The function can be used to dynamically turn on and off optimized code (code that uses SSE2, AVX, and other instructions on the platforms that support it). It sets a global flag that is further checked by OpenCV functions. Since the flag is not checked in the inner OpenCV loops, it is only safe to call the function at the very top level of your application where you can be sure that no other OpenCV function is currently executed. Returns full configuration time cmake output. Returned value is raw cmake output including version control system revision, compiler version, compiler flags, enabled modules and third party libraries, etc. Output format depends on target architecture. Fills the array with normally distributed random numbers. Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels. Mean value (expectation) of the generated random numbers. Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix. Fills the array with normally distributed random numbers. Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels. Mean value (expectation) of the generated random numbers. Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix. Generates a single uniformly-distributed random number or an array of random numbers. Output array of random numbers; the array must be pre-allocated. Inclusive lower boundary of the generated random numbers. Exclusive upper boundary of the generated random numbers. Generates a single uniformly-distributed random number or an array of random numbers. Output array of random numbers; the array must be pre-allocated. Inclusive lower boundary of the generated random numbers. Exclusive upper boundary of the generated random numbers. Computes eigenvalues and eigenvectors of a symmetric matrix The input symmetric square matrix, modified during the processing The output matrix of eigenvectors, stored as subsequent rows The output vector of eigenvalues, stored in descending order (the order of eigenvalues and eigenvectors is synchronized, of course) Currently the function is slower than cvSVD yet less accurate, so if A is known to be positively defined (for example, it is a covariance matrix) it is recommended to use cvSVD to find eigenvalues and eigenvectors of A, especially if eigenvectors are not required. To calculate the largest eigenvector/-value set lowindex = highindex = 1. For legacy reasons this function always returns a square matrix the same size as the source matrix with eigenvectors and a vector the length of the source matrix with eigenvalues. The selected eigenvectors/-values are always in the first highindex - lowindex + 1 rows.
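A hedged sketch of filling an array with Gaussian noise; the scalar (MCvScalar) overload of the random-fill wrapper is assumed here:

    // Fill a 32-bit float matrix with N(0, 1) samples
    Mat noise = new Mat(64, 64, DepthType.Cv32F, 1);
    CvInvoke.Randn(noise, new MCvScalar(0.0), new MCvScalar(1.0));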
Normalizes the input array so that its norm or value range takes a certain value(s). The input array The output array; in-place operation is supported The minimum/maximum value of the output array or the norm of the output array The maximum/minimum value of the output array The normalization type The operation mask. Makes the function consider and normalize only certain array elements Optional depth type for the dst array Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T The first source array. The second source array. The scalar The third source array (shift). Can be null, if there is no shift. The scalar The destination array. The Gemm operation type Performs matrix transformation of every element of array src and stores the results in dst Both source and destination arrays should have the same depth and the same size or selected ROI size. transmat and shiftvec should be real floating-point matrices. The first source array The destination array transformation 2x2 or 2x3 floating-point matrix. Transforms every element of src in the following way: (x, y) -> (x'/w, y'/w), where (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise The source points 3x3 floating-point transformation matrix. The destination points Transforms every element of src (by treating it as 2D or 3D vector) in the following way: (x, y, z) -> (x'/w, y'/w, z'/w) or (x, y) -> (x'/w, y'/w), where (x', y', z', w') = mat4x4 * (x, y, z, 1) or (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise The source three-channel floating-point array The destination three-channel floating-point array 3x3 or 4x4 floating-point transformation matrix. Calculates the product of src and its transposition. The function evaluates dst=scale*(src-delta)*(src-delta)^T if order=0, and dst=scale*(src-delta)^T*(src-delta) otherwise. The source matrix The destination matrix Order of multipliers An optional array, subtracted from src before multiplication An optional scaling Optional depth type of the output array Returns the sum of diagonal elements of the matrix src1. The matrix The sum of diagonal elements of the matrix src1 Transposes matrix src1: dst(i,j)=src(j,i) Note that no complex conjugation is done in case of complex matrix. Conjugation should be done separately: look at the sample code in cvXorS for an example The source matrix The destination matrix Returns the determinant of the square matrix mat. The direct method is used for small matrices and Gaussian elimination is used for larger matrices. For symmetric positive-definite matrices it is also possible to run SVD with U=V=NULL and then calculate the determinant as a product of the diagonal elements of W The pointer to the matrix determinant of the square matrix mat Finds the inverse or pseudo-inverse of a matrix. This function inverts the matrix src and stores the result in dst. When the matrix src is singular or non-square, the function calculates the pseudo-inverse matrix (the dst matrix) so that norm(src*dst - I) is minimal, where I is an identity matrix. The input floating-point M x N matrix. The output matrix of N x M size and the same type as src. Inversion method In case of the DECOMP_LU method, the function returns a non-zero value if the inverse has been successfully calculated and 0 if src is singular. In case of the DECOMP_SVD method, the function returns the inverse condition number of src (the ratio of the smallest singular value to the largest singular value) and 0 if src is singular. The SVD method calculates a pseudo-inverse matrix if src is singular. Similarly to DECOMP_LU, the method DECOMP_CHOLESKY works only with non-singular square matrices that should also be symmetrical and positively defined. In this case, the function stores the inverted matrix in dst and returns non-zero. Otherwise, it returns 0.
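A hedged sketch of the normalization wrapper described above (the alpha/beta parameter order is assumed), stretching a hypothetical gray image to the full 8-bit range:

    // Min-max stretch into [0, 255], in-place
    CvInvoke.Normalize(gray, gray, 0, 255, NormType.MinMax, DepthType.Cv8U);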
Decomposes matrix A into a product of a diagonal matrix and two orthogonal matrices: A=U*W*V^T where W is a diagonal matrix of singular values that can be coded as a 1D vector of singular values, and U and V. All the singular values are non-negative and sorted (together with U and V columns) in descending order. The SVD algorithm is numerically robust and its typical applications include: 1. accurate eigenvalue problem solution when matrix A is a square, symmetric and positively defined matrix, for example, when it is a covariance matrix. W in this case will be a vector of eigenvalues, and U=V is a matrix of eigenvectors (thus, only one of U or V needs to be calculated if the eigenvectors are required) 2. accurate solution of poorly conditioned linear systems 3. least-squares solution of overdetermined linear systems. This and the previous are done by the cvSolve function with the CV_SVD method 4. accurate calculation of different matrix characteristics such as rank (number of non-zero singular values), condition number (ratio of the largest singular value to the smallest one), determinant (the absolute value of the determinant is equal to the product of singular values). All the things listed in this item do not require calculation of U and V matrices. Source MxN matrix Resulting singular value matrix (MxN or NxN) or vector (Nx1). Optional left orthogonal matrix (MxM or MxN). If CV_SVD_U_T is specified, the number of rows and columns in the sentence above should be swapped Optional right orthogonal matrix (NxN) Operation flags Performs a singular value back substitution. Singular values Left singular vectors Transposed matrix of right singular vectors. Right-hand side of a linear system Found solution of the system. Calculates the covariance matrix of a set of vectors. Samples stored either as separate matrices or as rows/columns of a single matrix. Output covariance matrix of the type ctype and square size. Input or output (depending on the flags) array as the average value of the input vectors. Operation flags Type of the matrix Calculates the weighted distance between two vectors and returns it The first 1D source vector The second 1D source vector The inverse covariance matrix the Mahalanobis distance Performs Principal Component Analysis of the supplied dataset. Input samples stored as the matrix rows or as the matrix columns. Optional mean value; if the matrix is empty, the mean is computed from the data. The eigenvectors. Maximum number of components that PCA should retain; by default, all the components are retained. Performs Principal Component Analysis of the supplied dataset. Input samples stored as the matrix rows or as the matrix columns. Optional mean value; if the matrix is empty, the mean is computed from the data. The eigenvectors. Percentage of variance that PCA should retain. Using this parameter will let the PCA decide how many components to retain, but it will always keep at least 2.
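A hedged sketch of the PCA wrappers above; samples is a hypothetical floating-point matrix with one sample per row:

    // Compute a 3-component PCA basis, then project the samples onto it
    Mat mean = new Mat(), eigenvectors = new Mat(), projected = new Mat();
    CvInvoke.PCACompute(samples, mean, eigenvectors, 3);
    CvInvoke.PCAProject(samples, mean, eigenvectors, projected);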
Projects vector(s) to the principal component subspace. Input vector(s); must have the same dimensionality and the same layout as the input data used at the PCA phase The mean. The eigenvectors. The result. Reconstructs vectors from their PC projections. Coordinates of the vectors in the principal component subspace The mean. The eigenvectors. The result. Fills output variables with low-level information about the array data. All output parameters are optional, so some of the pointers may be set to NULL. If the array is IplImage with ROI set, parameters of ROI are returned. Array header Output pointer to the whole image origin or ROI origin if ROI is set Output full row length in bytes Output ROI size Returns matrix header for the input array that can be matrix - CvMat, image - IplImage or multi-dimensional dense array - CvMatND* (the latter case is allowed only if allowND != 0). In the case of matrix the function simply returns the input pointer. In the case of IplImage* or CvMatND* it initializes the header structure with parameters of the current image ROI and returns a pointer to this temporary structure. Because COI is not supported by CvMat, it is returned separately. Input array Pointer to CvMat structure used as a temporary buffer Optional output parameter for storing COI If non-zero, the function accepts multi-dimensional dense arrays (CvMatND*) and returns a 2D matrix (if CvMatND has two dimensions) or 1D matrix (when CvMatND has 1 dimension or more than 2 dimensions). The array must be continuous Returns matrix header for the input array Returns image header for the input array that can be matrix - CvMat*, or image - IplImage*. Input array. Pointer to IplImage structure used as a temporary buffer. Returns image header for the input array Checks that every array element is neither NaN nor Infinity. If CV_CHECK_RANGE is set, it also checks that every element is greater than or equal to minVal and less than maxVal. The array to check. The operation flags, CHECK_NAN_INFINITY or combination of CHECK_RANGE - if set, the function checks that every value of the array is within [minVal,maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity. CHECK_QUIET - if set, the function does not raise an error if an element is invalid or out of range The inclusive lower boundary of valid values range. It is used only if CHECK_RANGE is set. The exclusive upper boundary of valid values range. It is used only if CHECK_RANGE is set. Returns nonzero if the check succeeded, i.e. all elements are valid and within the range, and zero otherwise. In the latter case, if the CV_CHECK_QUIET flag is not set, the function raises a runtime error. Get or set the number of threads that are used by parallelized OpenCV functions When the argument is zero or negative, and at the beginning of the program, the number of threads is set to the number of processors in the system, as returned by the function omp_get_num_procs() from OpenMP runtime. Returns the index, from 0 to cvGetNumThreads()-1, of the thread that called the function. It is a wrapper for the function omp_get_thread_num() from OpenMP runtime. The retrieved index may be used to access local-thread data inside the parallelized code fragments. Returns the number of logical CPUs available for the process. Compares the corresponding elements of two arrays and fills the destination mask array: dst(I)=src1(I) op src2(I), dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size) The first image to compare with The second image to compare with dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise. The comparison operator type
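A minimal sketch of the element-wise comparison above, building a mask of pixels brighter than a scalar; the ScalarArray helper is assumed to wrap a constant as an input array:

    // mask(I) = 0xff where gray(I) > 128, else 0
    Mat mask = new Mat();
    using (ScalarArray threshold = new ScalarArray(128))
    {
        CvInvoke.Compare(gray, threshold, mask, CmpType.GreaterThan);
    }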
Converts CvMat, IplImage, or CvMatND to Mat. Input CvMat, IplImage, or CvMatND. When true (default value), CvMatND is converted to 2-dimensional Mat, if it is possible (see the discussion below); if it is not possible, or when the parameter is false, the function will report an error When false (default value), no data is copied and only the new header is created; in this case, the original array should not be deallocated while the new matrix header is used; if the parameter is true, all the data is copied and you may deallocate the original array right after the conversion. Parameter specifying how the IplImage COI (when set) is handled. If coiMode=0 and COI is set, the function reports an error. If coiMode=1, the function never reports an error. Instead, it returns the header to the whole original image and you will have to check and process COI manually. The Mat header Horizontally concatenate two images The first image The second image The result image Vertically concatenate two images The first image The second image The result image Swaps two matrices The Mat to be swapped The Mat to be swapped Swaps two matrices The UMat to be swapped The UMat to be swapped Check if we have OpenCL Get or set if OpenCL should be used Finishes OpenCL queue. Get the OpenCL platform summary as a string An OpenCL platform summary Set the default OpenCL device The name of the OpenCL device Gets a value indicating whether this device has an OpenCL compatible GPU device. true if it has an OpenCL compatible GPU device; otherwise, false. Implements the k-means algorithm that finds centers of cluster_count clusters and groups the input samples around the clusters. On output, labels(i) contains a cluster index for the sample stored in the i-th row of the samples matrix Floating-point matrix of input samples, one row per sample Output integer vector storing cluster indices for every sample Specifies maximum number of iterations and/or accuracy (distance the centers move by between the subsequent iterations) The number of attempts. Use 2 if not sure Flags, use 0 if not sure Pointer to array of centers, use IntPtr.Zero if not sure Number of clusters to split the set by. The grab cut algorithm for segmentation The 8-bit 3-channel image to be segmented Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values: 0 (GC_BGD) defines an obvious background pixel. 1 (GC_FGD) defines an obvious foreground (object) pixel. 2 (GC_PR_BGD) defines a possible background pixel. 3 (GC_PR_FGD) defines a possible foreground pixel. The rectangle to initialize the segmentation Temporary array for the background model. Do not modify it while you are processing the same image. Temporary array for the foreground model. Do not modify it while you are processing the same image. The number of iterations The initialization type
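A hedged sketch of the k-means wrapper described above; samples is a hypothetical CV_32F matrix with one row per sample, and the flag enum name is assumed:

    // Split the samples into 3 clusters, 2 attempts, k-means++ seeding
    Mat labels = new Mat(), centers = new Mat();
    MCvTermCriteria criteria = new MCvTermCriteria(10, 1.0);   // max 10 iterations or eps = 1.0
    CvInvoke.Kmeans(samples, 3, labels, criteria, 2, KMeansInitType.PPCenters, centers);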
Calculates the square root of each source array element. In the case of multi-channel arrays each channel is processed independently. The function accuracy is approximately the same as that of the built-in std::sqrt. The source floating-point array The destination array; will have the same size and the same type as src Apply a color map to the image The source image. This function expects Image<Bgr, Byte> or Image<Gray, Byte>. If the wrong image type is given, the original image will be returned. The destination image The type of color map Check that every array element is neither NaN nor +/- infinity. The functions also check that each value is between minVal and maxVal. In the case of multi-channel arrays each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and then the functions either return false (when quiet=true) or throw an exception. The array to check The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception This will be filled with the position of the first outlier The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range If quiet, return true if all values are in range Converts NaN's to the given number The array where NaN needs to be converted The value to convert to Computes an optimal affine transformation between two 3D point sets. First input 3D point set. Second input 3D point set. Output 3D affine transformation matrix. Output vector indicating which points are inliers. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Computes an optimal affine transformation between two 3D point sets. First input 3D point set. Second input 3D point set. Output 3D affine transformation matrix 3 x 4 Output vector indicating which points are inliers. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Finds the global minimum and maximum in an array Input single-channel array. The returned minimum value The returned maximum value The returned minimum location The returned maximum location The extremums are searched across the whole array if mask is IntPtr.Zero. Otherwise, search is performed in the specified array region. Applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image The source image The destination image Convolution kernel, single-channel floating point matrix. If you want to apply different kernels to different channels, split the image using cvSplit into separate color planes and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center The optional value added to the filtered pixels before storing them in dst The pixel extrapolation method.
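A minimal sketch of the linear-filter call above, applying a 3x3 averaging kernel to a hypothetical src; Matrix<float> is assumed usable as the kernel input:

    // Build a uniform 3x3 kernel and convolve with it
    Matrix<float> kernel = new Matrix<float>(3, 3);
    kernel.SetValue(1.0f / 9.0f);                       // every tap = 1/9
    Mat dst = new Mat();
    CvInvoke.Filter2D(src, dst, kernel, new Point(-1, -1));   // anchor at kernel center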
The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst. Source image. Destination image of the same size and the same number of channels as src. Destination image depth Coefficients for filtering each row. Coefficients for filtering each column. Anchor position within the kernel. The value (-1,-1) means that the anchor is at the kernel center. Value added to the filtered results before storing them. Pixel extrapolation method Performs linear blending of two images: dst(i, j)=weights1(i, j) x src1(i, j) + weights2(i, j) x src2(i, j) It has a type of CV_8UC(n) or CV_32FC(n), where n is a positive integer. It has the same type and size as src1. It has a type of CV_32FC1 and the same size as src1. It has a type of CV_32FC1 and the same size as src1. It is created if it does not have the same size and type as src1. Contrast Limited Adaptive Histogram Equalization (CLAHE) The source image Clip Limit, use 40 for default Tile grid size, use (8, 8) for default The destination image This function retrieves the OpenCV structure sizes in unmanaged code The structure that will hold the OpenCV structure sizes Finds centers in the grid of circles Source chessboard view The number of inner circles per chessboard row and column Various operation flags The feature detector. Use a SimpleBlobDetector for default The centers of circles detected if the chessboard pattern is found, otherwise null is returned Finds centers in the grid of circles Source chessboard view The number of inner circles per chessboard row and column Various operation flags The feature detector. Use a SimpleBlobDetector for default output array of detected centers. True if grid found. The file name of the cvextern library The file name of the cvextern library The file name of the opencv_ffmpeg library The List of the opencv modules Creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names. If a window with the same name already exists, the function does nothing. Name of the window which is used as window identifier and appears in the window caption Flags of the window. Waits for key event infinitely (delay <= 0) or for "delay" milliseconds. Delay in milliseconds. The code of the pressed key or -1 if no key was pressed until the specified timeout has elapsed Shows the image in the specified window Name of the window Image to be shown Destroys the window with a given name Name of the window to be destroyed Destroys all of the HighGUI windows. Loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported: Windows bitmaps - BMP, DIB; JPEG files - JPEG, JPG, JPE; Portable Network Graphics - PNG; Portable image format - PBM, PGM, PPM; Sun rasters - SR, RAS; TIFF files - TIFF, TIF; OpenEXR HDR images - EXR; JPEG 2000 images - jp2. The name of the file to be loaded The image loading type The loaded image The function imreadmulti loads a multi-page image from the specified file into a vector of Mat objects. Name of file to be loaded. Read flags Null if the reading fails, otherwise, an array of Mat from the file
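A minimal sketch of the load/show/wait workflow above; the file name is a placeholder and the read-mode enum name is assumed (it differs between Emgu versions):

    // Load an image, display it, block until a key press
    using (Mat picture = CvInvoke.Imread("input.jpg", ImreadModes.Color))
    {
        CvInvoke.Imshow("preview", picture);
        CvInvoke.WaitKey(0);                 // delay <= 0 waits indefinitely
        CvInvoke.DestroyWindow("preview");
    }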
Saves the image to the specified file. The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use the universal cvSave to save the image to XML or YAML format The name of the file to be saved to The image to be saved The parameters true if success Decode image stored in the buffer The buffer The image loading type The output placeholder for the decoded matrix. Decode image stored in the buffer The buffer The image loading type The output placeholder for the decoded matrix. Encode image and store the result as a byte vector. The image format The image Output buffer resized to fit the compressed image. The pointer to the array of integers, which contains the parameters for encoding; use IntPtr.Zero for default Implements a particular case of application of line iterators. The function reads all the image points lying on the line between pt1 and pt2, including the ending points, and stores them into the buffer Image to sample the line from The starting point of the line The ending point of the line Buffer to store the line points; must have enough size to store max( |pt2.x-pt1.x|+1, |pt2.y-pt1.y|+1 ) points in case of 8-connected line and |pt2.x-pt1.x|+|pt2.y-pt1.y|+1 in case of 4-connected line The line connectivity, 4 or 8 Extracts pixels from src: dst(x, y) = src(x + center.x - (width(dst)-1)*0.5, y + center.y - (height(dst)-1)*0.5) where the values of pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multiple-channel images is processed independently. Whereas the rectangle center must be inside the image, the whole rectangle may be partially occluded. In this case, the replication border mode is used to get pixel values beyond the image boundaries. Source image Size of the extracted patch. Extracted rectangle Depth of the extracted pixels. By default, they have the same depth as the source image. Floating point coordinates of the extracted rectangle center within the source image. The center must be inside the image. Resizes the image src down to or up to the specified size Source image. Destination image Output image size; if it equals zero, it is computed as: dsize=Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero. Scale factor along the horizontal axis Scale factor along the vertical axis Interpolation method Resize an image such that it fits in a given frame The source image The result image The size of the frame The interpolation method If true, it will not try to scale up the image to fit the frame Applies an affine transformation to an image. Source image Destination image 2x3 transformation matrix Size of the output image. Interpolation method Warp method Pixel extrapolation method A value used to fill outliers Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used The 2x3 rotation matrix that defines the Affine transform
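A minimal sketch of the resize call described above, halving a hypothetical src:

    // Downscale to half size with bilinear interpolation
    Mat small = new Mat();
    CvInvoke.Resize(src, small, new Size(src.Width / 2, src.Height / 2), 0, 0, Inter.Linear);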
Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Pointer to an array of PointF, Coordinates of 3 triangle vertices in the source image. Pointer to an array of PointF, Coordinates of the 3 corresponding triangle vertices in the destination image The destination 2x3 matrix Calculates the rotation matrix Center of the rotation in the source image. The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be at the top-left corner). Isotropic scale factor Pointer to the destination 2x3 matrix Pointer to the destination 2x3 matrix Applies a perspective transformation to an image Source image Destination image 3x3 transformation matrix Size of the output image Interpolation method Warp method Pixel extrapolation method value used in case of a constant border Calculates the matrix of a perspective transform such that: (t_i x'_i, t_i y'_i, t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3. Coordinates of 4 quadrangle vertices in the source image Coordinates of the 4 corresponding quadrangle vertices in the destination image The perspective transform matrix Calculates the matrix of a perspective transform such that: (t_i x'_i, t_i y'_i, t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3. Coordinates of 4 quadrangle vertices in the source image Coordinates of the 4 corresponding quadrangle vertices in the destination image The 3x3 Homography matrix Applies a generic geometrical transformation to an image. Source image Destination image The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps() for details on converting a floating point representation to fixed-point for speed. The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. Interpolation method (see resize()). The method 'Area' is not supported by this function. Pixel extrapolation method A value used to fill outliers Inverts an affine transformation Original affine transformation Output reverse affine transformation. Returns the default new camera matrix. Input camera matrix. Camera view image size in pixels. Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not. The default new camera matrix. The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc. Source image Destination image The transformation center, where the output precision is maximal Magnitude scale parameter Interpolation method Warp method The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc. Source image Destination image The transformation center, where the output precision is maximal Maximum radius Interpolation method Warp method Performs the downsampling step of Gaussian pyramid decomposition. First it convolves the source image with the specified filter and then downsamples the image by rejecting even rows and columns. The source image. The destination image, should have 2x smaller width and height than the source. Border type Performs the up-sampling step of Gaussian pyramid decomposition. First it upsamples the source image by injecting even zero rows and columns and then convolves the result with the specified filter multiplied by 4 for interpolation. So the destination image is four times larger than the source image. The source image. The destination image, should have 2x larger width and height than the source. Border type
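A minimal sketch combining the rotation-matrix and affine-warp calls above, on a hypothetical src:

    // Rotate 30 degrees counter-clockwise about the image center
    Mat rot = new Mat(), rotated = new Mat();
    CvInvoke.GetRotationMatrix2D(new PointF(src.Width / 2f, src.Height / 2f), 30, 1.0, rot);
    CvInvoke.WarpAffine(src, rotated, rot, src.Size);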
The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown to the previously built pyramid layers, starting from dst[0]==src. Source image. Check pyrDown for the list of supported types. Destination vector of maxlevel+1 images of the same type as src. dst[0] will be the same as src. dst[1] is the next pyramid layer, a smoothed and down-sized src, and so on. 0-based index of the last (the smallest) pyramid layer. It must be non-negative. Pixel extrapolation method Implements one of the variants of watershed, the non-parametric marker-based segmentation algorithm described in [Meyer92]. Before passing the image to the function, the user has to outline roughly the desired regions in the image markers with positive (>0) indices, i.e. every region is represented as one or more connected components with the pixel values 1, 2, 3 etc. Those components will be "seeds" of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0's. On the output of the function, each pixel in markers is set to one of the values of the "seed" components, or to -1 at boundaries between the regions. Note that it is not necessary for every two neighboring connected components to be separated by a watershed boundary (-1's pixels), for example, when such tangent components exist in the initial marker image. The input 8-bit 3-channel image The input/output Int32 depth single-channel image (map) of markers. Finds the minimum area rectangle that contains both input rectangles inside First rectangle Second rectangle The minimum area rectangle that contains both input rectangles inside Fits a line to a 2D or 3D point set Input vector of 2D or 3D points, stored in std::vector or Mat. The distance used for fitting Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen Sufficient accuracy for the radius (distance between the coordinate origin and the line), 0.01 would be a good default Sufficient accuracy for the angle, 0.01 would be a good default Output line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line. Fits a line to a 2D or 3D point set Input vector of 2D points. The distance used for fitting Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen Sufficient accuracy for the radius (distance between the coordinate origin and the line), 0.01 would be a good default Sufficient accuracy for the angle, 0.01 would be a good default A normalized vector collinear to the line A point on the line. Finds out if there is any intersection between two rotated rectangles. First rectangle Second rectangle The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as VectorOfPointF or Mat as Mx1 of type CV_32FC2. The intersect type Calculates the vertices of the input 2D box. The box The four vertices of the rectangle. Calculates the vertices of the input 2D box. The box The output array of four vertices of the rectangle.
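A hedged sketch of the second line-fitting overload above (the out-parameter form is assumed from its parameter list); points is a hypothetical PointF[]:

    // Fit a 2D line by least squares (L2 distance)
    PointF direction = new PointF(), pointOnLine = new PointF();
    CvInvoke.FitLine(points, out direction, out pointOnLine, DistType.L2, 0, 0.01, 0.01);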
Fits an ellipse around a set of 2D points. Input 2D point set The ellipse that fits best (in least-squares sense) to a set of 2D points The function calculates the ellipse that fits a set of 2D points. The Approximate Mean Square (AMS) method is used. Input 2D point set The rotated rectangle in which the ellipse is inscribed The function calculates the ellipse that fits a set of 2D points. The Direct least square (Direct) method by [58] is used. Input 2D point set The rotated rectangle in which the ellipse is inscribed Finds the convex hull of a 2D point set using Sklansky's algorithm The points to find the convex hull from Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards. The convex hull of the points The function cvConvexHull2 finds the convex hull of a 2D point set using Sklansky's algorithm. Input 2D point set Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves. Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards. Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector The default morphology value. Erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken: dst=erode(src,element): dst(x,y)=min_{(x',y') in element} src(x+x',y+y') The function supports the in-place mode. Erosion can be applied several (iterations) times. In case of color image each channel is processed independently. Source image. Destination image Structuring element used for erosion. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used. Number of times erosion is applied. Pixel extrapolation method Border value in case of a constant border, use Constant for default Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center. Dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken The function supports the in-place mode. Dilation can be applied several (iterations) times. In case of color image each channel is processed independently Source image Destination image Structuring element used for dilation. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used Number of times dilation is applied Pixel extrapolation method Border value in case of a constant border Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
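A hedged sketch of a single erosion pass with the default 3x3 element, on a hypothetical src; the default-border property name is assumed:

    // One erosion pass; null element => 3x3 rectangle
    Mat eroded = new Mat();
    CvInvoke.Erode(src, eroded, null, new Point(-1, -1), 1,
                   BorderType.Constant, CvInvoke.MorphologyDefaultBorderValue);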
Blurs an image using a Gaussian filter. input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. output image of the same size and type as src. Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros and then they are computed from sigma*. Gaussian kernel standard deviation in X direction. Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX; if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY. Pixel extrapolation method Blurs an image using the normalized box filter. input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. Output image of the same size and type as src. Blurring kernel size. Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center. Border mode used to extrapolate pixels outside of the image. Blurs an image using the median filter. Input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F, for larger aperture sizes, it can only be CV_8U. Destination array of the same size and type as src. Aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ... Blurs an image using the box filter. Input image. Output image of the same size and type as src. The output image depth (-1 to use src.depth()). Blurring kernel size. Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center. Specifying whether the kernel is normalized by its area or not. Border mode used to extrapolate pixels outside of the image. Calculates the normalized sum of squares of the pixel values overlapping the filter. For every pixel (x, y) in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel (x, y). The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel. input image output image of the same size and type as src the output image depth (-1 to use src.depth()) kernel size kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center flag, specifying whether the kernel is to be normalized by its area or not. border mode used to extrapolate pixels outside of the image Applies the bilateral filter to an image. Source 8-bit or floating-point, 1-channel or 3-channel image. Destination image of the same size and type as src. Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace. Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color. Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace. Border mode used to extrapolate pixels outside of the image.
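A minimal sketch of the Gaussian blur described above, on a hypothetical src:

    // 5x5 Gaussian smoothing; sigma = 0 lets it be derived from the kernel size
    Mat smoothed = new Mat();
    CvInvoke.GaussianBlur(src, smoothed, new Size(5, 5), 0);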
The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to the noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate the first x- or y- image derivative. The first case corresponds to
 
              |-1  0  1|
              |-2  0  2|
              |-1  0  1|
kernel and the second one corresponds to
              |-1 -2 -1|
              | 0  0  0|
              | 1  2  1|
or
              | 1  2  1|
              | 0  0  0|
              |-1 -2 -1|
kernel, depending on the image origin (origin field of IplImage structure). No scaling is done, so the destination image usually has numbers that are larger in absolute value than those of the source image. To avoid overflow, the function requires a 16-bit destination image if the source image is 8-bit. The result can be converted back to 8-bit using the cvConvertScale or cvConvertScaleAbs functions. Besides 8-bit images the function can process 32-bit floating-point images. Both source and destination must be single-channel images of equal size or ROI size
Source image. Destination image output image depth; the following combinations of src.depth() and ddepth are supported: src.depth() = CV_8U, ddepth = -1/CV_16S/CV_32F/CV_64F src.depth() = CV_16U/CV_16S, ddepth = -1/CV_32F/CV_64F src.depth() = CV_32F, ddepth = -1/CV_32F/CV_64F src.depth() = CV_64F, ddepth = -1/CV_64F when ddepth=-1, the destination image will have the same depth as the source; in the case of 8-bit input images it will result in truncated derivatives. Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7. Pixel extrapolation method Optional scale factor for the computed derivative values Optional delta value that is added to the results prior to storing them in dst
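A minimal sketch of the Sobel call, taking the first x-derivative of a hypothetical 8-bit gray image:

    // 16-bit signed output avoids the overflow noted above for 8-bit input
    Mat gradX = new Mat();
    CvInvoke.Sobel(gray, gradX, DepthType.Cv16S, 1, 0, 3);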
Calculates the first order image derivative in both x and y using a Sobel operator. Equivalent to calling: Sobel(src, dx, CV_16SC1, 1, 0, 3); Sobel(src, dy, CV_16SC1, 0, 1, 3); input image. output image with first-order derivative in x. output image with first-order derivative in y. size of Sobel kernel. It must be 3. pixel extrapolation method Calculates the first x- or y- image derivative using the Scharr operator. input image. output image of the same size and the same number of channels as src. output image depth order of the derivative x. order of the derivative y. optional scale factor for the computed derivative values; by default, no scaling is applied optional delta value that is added to the results prior to storing them in dst. pixel extrapolation method Calculates the Laplacian of the source image by summing the second x- and y- derivatives calculated using the Sobel operator: dst(x,y) = d^2 src/dx^2 + d^2 src/dy^2 Specifying aperture_size=1 gives the fastest variant that is equal to convolving the image with the following kernel:
              |0  1  0|
              |1 -4  1|
              |0  1  0|
Similarly to the cvSobel function, no scaling is done and the same combinations of input and output formats are supported. Source image. Destination image. Should have type of float Desired depth of the destination image. Aperture size used to compute the second-derivative filters. Optional scale factor for the computed Laplacian values. By default, no scaling is applied. Optional delta value that is added to the results prior to storing them in dst. Pixel extrapolation method. Finds the edges on the input and marks them in the output image edges using the Canny algorithm. The smallest of threshold1 and threshold2 is used for edge linking, the largest - to find initial segments of strong edges. Input image Image to store the edges found by the function The first threshold The second threshold. Aperture parameter for Sobel operator a flag, indicating whether a more accurate norm should be used to calculate the image gradient magnitude (L2gradient=true), or whether the default norm is enough (L2gradient=false). The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined. Input vector of 2D points true if input is convex Finds the intersection of two convex polygons The first convex polygon The second convex polygon The intersection of the convex polygons Handle nested polygons (when true, an intersection is found even if one of the polygons is fully enclosed in the other) Determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive, negative or zero value, correspondingly Input contour The point tested against the contour If != 0, the function estimates the distance from the point to the nearest contour edge When measureDist = false, the return value is >0 (inside), <0 (outside) and =0 (on edge), respectively. When measureDist = true, it is a signed distance between the point and the nearest contour edge
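A minimal sketch of the Canny edge detection described above, on a hypothetical gray image:

    // 2:1 high-to-low threshold ratio; low threshold links edges, high seeds them
    Mat edges = new Mat();
    CvInvoke.Canny(gray, edges, 50, 100);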
Find the bounding rectangle for the specific array of points The collection of points The bounding rectangle for the array of points Finds a rotated rectangle of the minimum area enclosing the input 2D point set. Input vector of 2D points a circumscribed rectangle of the minimal area for the 2D point set Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm. It returns nonzero if the resultant circle contains all the input points and zero otherwise (i.e. the algorithm failed) Sequence or array of 2D points The minimal circumscribed circle for the 2D point set Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm. It returns nonzero if the resultant circle contains all the input points and zero otherwise (i.e. the algorithm failed) Sequence or array of 2D points The minimal circumscribed circle for the 2D point set Finds a triangle of minimum area enclosing a 2D point set and returns its area. Input vector of 2D points with depth CV_32S or CV_32F Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F. The triangle's area Approximates a polygonal curve(s) with the specified precision. Input vector of 2D points Result of the approximation. The type should match the type of the input curve. Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation. If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed. Returns the up-right bounding rectangle for a 2d point set Input 2D point set, stored in std::vector or Mat. The up-right bounding rectangle for the 2d point set Calculates the area of the whole contour or a contour section. Input vector of 2D points (contour vertices), stored in std::vector or Mat. Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine the orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned. The area of the whole contour or contour section Calculates a contour perimeter or a curve length Sequence or array of the curve points Indicates whether the curve is closed or not. Contour perimeter or a curve length Applies fixed-level thresholding to a single-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image (cvCmpS could be also used for this purpose) or for removing noise, i.e. filtering out pixels with too small or too large values. There are several types of thresholding the function supports that are determined by threshold_type Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Threshold value Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Thresholding type
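The fixed-level thresholding just described is a one-call operation in C#; a hedged sketch (placeholder file name, Emgu.CV 3.x-style enums):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class ThresholdSketch
    {
        static void Run()
        {
            using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
            using (Mat binary = new Mat())
            {
                // CV_THRESH_BINARY: pixels above 128 become 255, the rest 0.
                CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.Binary);
            }
        }
    }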
Transforms a grayscale image to a binary image. The threshold is calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is a mean of the block_size x block_size pixel neighborhood, subtracted by param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is a weighted sum (gaussian) of the block_size x block_size pixel neighborhood, subtracted by param1. Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Adaptive_method Thresholding type. Must be one of CV_THRESH_BINARY, CV_THRESH_BINARY_INV The size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ... Constant subtracted from the mean or weighted mean. It may be negative. Retrieves contours from the binary image and returns the number of retrieved contours. The pointer firstContour is filled by the function. It will contain a pointer to the first most outer contour or IntPtr.Zero if no contours are detected (if the image is completely black). Other contours may be reached from firstContour using h_next and v_next links. The sample in the cvDrawContours discussion shows how to use contours for connected component detection. Contours can also be used for shape analysis and object recognition - see squares.c in the OpenCV sample directory The function modifies the source image content The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. Retrieval mode Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation). Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context The number of contours Retrieves contours from the binary image as a contour tree. The pointer firstContour is filled by the function. It is provided as a convenient way to obtain the hierarchy value as int[,]. The function modifies the source image content The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content Detected contours. Each contour is stored as a vector of points. Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation). Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context The contour hierarchy Convert raw data to bitmap The pointer to the raw data The step The size of the image The source image color type The number of channels The source image depth type Try to create a Bitmap that shares the data with the image The Bitmap
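As an illustration of the contour retrieval described above, the following C# sketch (placeholder file name; assuming the Emgu.CV 3.x VectorOfVectorOfPoint container) finds the outer contours of a binary image and measures each one:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class ContourSketch
    {
        static void Run()
        {
            using (Mat binary = CvInvoke.Imread("mask.png", ImreadModes.Grayscale))
            using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
            {
                // RetrType.External keeps only the outermost contours;
                // pass a Mat instead of null to receive the hierarchy.
                CvInvoke.FindContours(binary, contours, null,
                    RetrType.External, ChainApproxMethod.ChainApproxSimple);
                for (int i = 0; i < contours.Size; i++)
                {
                    double area = CvInvoke.ContourArea(contours[i]);
                    // ... filter or draw contours by area here ...
                }
            }
        }
    }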
Converts input image from one color space to another. The function ignores the colorModel and channelSeq fields of the IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout). The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image The destination image of the same data type as the source one. The number of channels may be different Source color type. Destination color type Converts input image from one color space to another. The function ignores the colorModel and channelSeq fields of the IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout). The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image The destination image of the same data type as the source one. The number of channels may be different Color conversion operation that can be specified using CV_src_color_space2dst_color_space constants number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code. Finds circles in a grayscale image using some modification of the Hough transform The input 8-bit single-channel grayscale image The storage for the circles detected. It can be a memory storage (in this case a sequence of circles is created in the storage and returned by the function) or a single row/single column matrix (CvMat*) of type CV_32FC3, to which the circles' parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of circles detected. If circle_storage is a matrix and the actual number of circles exceeds the matrix size, the maximum possible number of circles is returned. Every circle is encoded as 3 floating-point numbers: center coordinates (x,y) and the radius Currently, the only implemented method is CV_HOUGH_GRADIENT Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2 - the accumulator will have half the width and height, etc Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed The first method-specific parameter. In case of CV_HOUGH_GRADIENT it is the higher threshold of the two passed to the Canny edge detector (the lower one will be half of it). The second method-specific parameter. In case of CV_HOUGH_GRADIENT it is the accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first Minimal radius of the circles to search for Maximal radius of the circles to search for. By default the maximal radius is set to max(image_width, image_height). Pointer to the sequence of circles Finds circles in a grayscale image using the Hough transform 8-bit, single-channel, grayscale input image. Detection method to use. Currently, the only implemented method is CV_HOUGH_GRADIENT , which is basically 21HT Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half as big width and height. Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
First method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the higher threshold of the two passed to the Canny() edge detector (the lower one is half of it). Second method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first. Minimum circle radius. Maximum circle radius. The circles detected Finds lines in a binary image using the standard Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Output vector of lines. Each line is represented by a two-element vector (rho, theta) Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold) For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive. For the multi-scale Hough transform, it is a divisor for the distance resolution theta Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians Accumulator threshold parameter. Only those lines are returned that get enough votes Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. The found line segments Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Output vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2) Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians Accumulator threshold parameter. Only those lines are returned that get enough votes Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics including 7 Hu invariants. Image (1-channel or 3-channel with COI set) or polygon (CvSeq of points or a vector of points) (For images only) If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1s The moment This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results to result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must be not greater than the source image and the same data type as the image A map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be (W-w+1)x(H-h+1). Specifies the way the template must be compared with image regions Mask of searched template. It must have the same datatype and size as templ. It is not set by default.
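Since the (W-w+1)x(H-h+1) result layout above trips up many first-time users, here is a hedged C# sketch that locates the best template match (placeholder file names; Emgu.CV 3.x-style API):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class MatchTemplateSketch
    {
        static void Run()
        {
            using (Mat image = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
            using (Mat templ = CvInvoke.Imread("patch.png", ImreadModes.Grayscale))
            using (Mat result = new Mat())
            {
                // One 32-bit float score per template offset.
                CvInvoke.MatchTemplate(image, templ, result,
                    TemplateMatchingType.CcoeffNormed);

                double minVal = 0, maxVal = 0;
                Point minLoc = Point.Empty, maxLoc = Point.Empty;
                CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal,
                    ref minLoc, ref maxLoc);
                // For CcoeffNormed the best match is at maxLoc.
            }
        }
    }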
Compares two shapes. The 3 implemented methods all use Hu moments First contour or grayscale image Second contour or grayscale image Comparison method Method-specific parameter (is not used now) The result of the comparison Returns a structuring element of the specified size and shape for morphological operations. Element shape Size of the structuring element. Anchor position within the element. The value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted. The structuring element Performs advanced morphological transformations. Source image. Destination image. Structuring element. Type of morphological operation. Number of times erosion and dilation are applied. Pixel extrapolation method. Anchor position within the kernel. Negative values mean that the anchor is at the kernel center. Border value in case of a constant border. The algorithm normalizes brightness and increases contrast of the image The input 8-bit single-channel image The output image of the same size and the same data type as src Calculates a histogram of a set of arrays. Source arrays. They all should have the same depth, CV_8U or CV_32F , and the same size. Each of them can have an arbitrary number of channels. List of the channels used to compute the histogram. Optional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i] . The non-zero mask elements mark the array elements counted in the histogram. Output histogram Array of histogram sizes in each dimension. Array of the dims arrays of the histogram bin boundaries in each dimension. Accumulation flag. If it is set, the histogram is not cleared in the beginning when it is allocated. This feature enables you to compute a single histogram from several sets of arrays, or to update the histogram in time. Calculates the back projection of a histogram. Source arrays. They all should have the same depth, CV_8U or CV_32F , and the same size. Each of them can have an arbitrary number of channels. Number of source images. Input histogram that can be dense or sparse. Destination back projection array that is a single-channel array of the same size and depth as images[0] . Array of arrays of the histogram bin boundaries in each dimension. Optional scale factor for the output back projection. Compares two histograms. First compared histogram. Second compared histogram of the same size as H1 . Comparison method The distance between the histograms Retrieves the spatial moment, which in case of image moments is defined as: M_{x_order,y_order} = sum_{x,y} (I(x,y) * x^{x_order} * y^{y_order}) where I(x,y) is the intensity of the pixel (x, y). The moment state x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The spatial moment Retrieves the central moment, which in case of image moments is defined as: mu_{x_order,y_order} = sum_{x,y} (I(x,y) * (x-x_c)^{x_order} * (y-y_c)^{y_order}), where x_c=M10/M00, y_c=M01/M00 - coordinates of the gravity center Reference to the moment state structure x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The central moment
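The histogram functions described above (CalcHist, CompareHist) can be exercised with a short C# sketch; the file name is a placeholder and the single-channel setup is the simplest possible case:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class HistogramSketch
    {
        static void Run()
        {
            using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
            using (VectorOfMat images = new VectorOfMat(gray))
            using (Mat hist = new Mat())
            {
                // Channel 0, no mask, 256 bins over the value range [0, 256).
                CvInvoke.CalcHist(images, new int[] { 0 }, null, hist,
                    new int[] { 256 }, new float[] { 0, 256 }, false);
                // Comparing a histogram with itself gives perfect correlation (1.0).
                double score = CvInvoke.CompareHist(hist, hist,
                    HistogramCompMethod.Correl);
            }
        }
    }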
Retrieves the normalized central moment, which in case of image moments is defined as: eta_{x_order,y_order} = mu_{x_order,y_order} / M00^{(y_order+x_order)/2+1}, where mu_{x_order,y_order} is the central moment Reference to the moment state structure x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The normalized central moment Adds the whole image or its selected region to the accumulator sum Input image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently). Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point. Optional operation mask Adds the input image or its selected region, raised to power 2, to the accumulator sqsum Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently) Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point Optional operation mask Adds the product of 2 images or their selected regions to the accumulator acc First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently) Second input image, the same format as the first one Accumulator of the same number of channels as input images, 32-bit or 64-bit floating-point Optional operation mask Calculates the weighted sum of the input image and the accumulator acc so that acc becomes a running average of the frame sequence: acc(x,y) = (1-alpha) * acc(x,y) + alpha * image(x,y) if mask(x,y) != 0, where alpha regulates the update speed (how fast the accumulator forgets about previous frames). Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently). Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point. Weight of the input image Optional operation mask Calculates seven Hu invariants Pointer to the moment state structure Pointer to Hu moments structure. Runs the Harris edge detector on the image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates a 2x2 gradient covariance matrix M over a block_size x block_size neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image. Input image Image to store the Harris detector responses. Should have the same size as image Neighborhood size Aperture parameter for the Sobel operator (see cvSobel). In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing. Harris detector free parameter. Pixel extrapolation method. Iterates to find the sub-pixel accurate location of corners, or radial saddle points Input image Initial coordinates of the input corners and refined coordinates on output Half sizes of the search window. For example, if win=(5,5) then a 5*2+1 x 5*2+1 = 11 x 11 search window is used Half size of the dead region in the middle of the search zone over which the summation in the formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The criteria may specify either of or both the maximum number of iterations and the required accuracy
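A sketch of the corner-refinement flow just described; GoodFeaturesToTrack is assumed here only as a convenient way to obtain coarse corners (it is a separate Emgu.CV wrapper, not part of CornerSubPix), and the file name is a placeholder:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    static class CornerRefineSketch
    {
        static void Run()
        {
            using (Mat gray = CvInvoke.Imread("checkerboard.png", ImreadModes.Grayscale))
            {
                PointF[] coarse = CvInvoke.GoodFeaturesToTrack(gray, 100, 0.01, 10);
                using (VectorOfPointF corners = new VectorOfPointF(coarse))
                {
                    // win = (5,5) gives an 11x11 search window; stop after
                    // 30 iterations or once a corner moves less than 0.01 px.
                    CvInvoke.CornerSubPix(gray, corners, new Size(5, 5),
                        new Size(-1, -1), new MCvTermCriteria(30, 0.01));
                }
            }
        }
    }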
Calculates one or more integral images for the source image. Using these integral images, one may calculate the sum, mean, and standard deviation over an arbitrary up-right or rotated rectangular region of the image in constant time. This makes it possible to do fast blurring or fast block correlation with a variable window size, etc. In case of multi-channel images, sums for each channel are accumulated independently. The source image, WxH, 8-bit or floating-point (32f or 64f) image. The integral image, (W+1)x(H+1), 32-bit integer or double precision floating-point (64f). The integral image for squared pixel values, (W+1)x(H+1), double precision floating-point (64f). The integral for the image rotated by 45 degrees, (W+1)x(H+1), the same data type as sum. Desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F. Desired depth of the integral image of squared pixel values, CV_32F or CV_64F. Calculates the distance to the closest zero pixel for all non-zero pixels of the source image Source 8-bit single-channel (binary) image. Output image with calculated distances (32-bit floating-point, single-channel). Type of distance Size of the distance transform mask; can be 3 or 5. In case of CV_DIST_L1 or CV_DIST_C the parameter is forced to 3, because a 3x3 mask gives the same result as 5x5 yet it is faster. The optional output 2d array of labels of integer type and the same size as src and dst. Can be null if not needed Type of the label array to build. If labelType==CCOMP then each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label. If labelType==PIXEL then each zero pixel (and all the non-zero pixels closest to it) gets its own label. Fills a connected component with the given color. Input 1- or 3-channel, 8-bit or floating-point image. It is modified by the function unless the CV_FLOODFILL_MASK_ONLY flag is set. The starting point. New value of the repainted domain pixels. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component or the seed pixel, to add the pixel to the component. In case of 8-bit color images it is a packed value. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component or the seed pixel, to add the pixel to the component. In case of 8-bit color images it is a packed value. The operation flags. Lower bits contain connectivity value, 4 (by default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Upper bits can be 0 or a combination of the following flags: CV_FLOODFILL_FIXED_RANGE - if set, the difference between the current pixel and the seed pixel is considered, otherwise the difference between neighbor pixels is considered (the range is floating). CV_FLOODFILL_MASK_ONLY - if set, the function does not fill the image (new_val is ignored), but fills the mask (that must be non-NULL in this case). Operation mask, should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. If not IntPtr.Zero, the function uses and updates the mask, so the user takes responsibility for initializing the mask content.
Floodfilling can't go across non-zero pixels in the mask, for example, an edge detector output can be used as a mask to stop filling at edges. Or it is possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: because the mask is larger than the filled image, a pixel in the mask that corresponds to the (x,y) pixel in the image will have coordinates (x+1,y+1). Output parameter set by the function to the minimum bounding rectangle of the repainted domain. Flood fill connectivity Filters the image using the meanshift algorithm Source image Result image The spatial window radius. The color window radius. Maximum level of the pyramid for the segmentation. Use 1 as default value Termination criteria: when to stop meanshift iterations. Use new MCvTermCriteria(5, 1) as default value Converts image transformation maps from one representation to another. The first input map of type CV_16SC2 , CV_32FC1 , or CV_32FC2 . The second input map of type CV_16UC1 , CV_32FC1 , or none (empty matrix), respectively. The first output map that has the type dstmap1type and the same size as src . The second output map. Depth type of the first output map that should be CV_16SC2 , CV_32FC1 , or CV_32FC2. The number of channels in the dst map. Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation. Transforms the image to compensate radial and tangential lens distortion. The input (distorted) image The output (corrected) image The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. Camera matrix of the distorted image. By default it is the same as cameraMatrix, but you may additionally scale and shift the result by using some different matrix This function is an extended version of cvInitUndistortMap. That is, in addition to the correction of lens distortion, the function can also apply an arbitrary perspective transformation R and finally it can scale and shift the image according to the new camera matrix The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1] The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5 The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used The new camera matrix A'=[fx' 0 cx'; 0 fy' cy'; 0 0 1] Depth type of the first output map that can be CV_32FC1 or CV_16SC2 . The first output map. The second output map. Undistorted image size. Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation. The observed point coordinates The ideal point coordinates, after undistortion and reverse perspective transformation. The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1] The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5. The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used. The new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used.
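Tying together the undistortion entries above, a hedged C# sketch; the camera matrix and distortion coefficients below are made-up placeholders that would normally come from camera calibration:

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class UndistortSketch
    {
        static void Run()
        {
            using (Mat distorted = CvInvoke.Imread("frame.png", ImreadModes.Color))
            using (Mat corrected = new Mat())
            // 3x3 camera matrix [fx 0 cx; 0 fy cy; 0 0 1].
            using (Matrix<double> cameraMatrix = new Matrix<double>(new double[,]
                { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } }))
            // 4x1 distortion coefficients [k1, k2, p1, p2].
            using (Matrix<double> distCoeffs = new Matrix<double>(new double[]
                { -0.2, 0.05, 0, 0 }))
            {
                CvInvoke.Undistort(distorted, corrected, cameraMatrix, distCoeffs);
            }
        }
    }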
Computes the 'minimal work' distance between two weighted point configurations. First signature, a size1 x dims + 1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra 'dummy' point is added to either signature1 or signature2 Used metric. CV_DIST_L1, CV_DIST_L2 , and CV_DIST_C stand for one of the standard metrics. CV_DIST_USER means that a pre-calculated cost matrix cost is used. User-defined size1 x size2 cost matrix. Also, if a cost matrix is used, the lower boundary lowerBound cannot be calculated because it needs a metric function. Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of the point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). Resultant size1 x size2 flow matrix The 'minimal work' distance between two weighted point configurations. The function is used to detect translational shifts that occur between two images. The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. Source floating point array (CV_32FC1 or CV_64FC1) Source floating point array (CV_32FC1 or CV_64FC1) Floating point array with windowing coefficients to reduce edge effects (optional). Signal power within the 5x5 centroid around the peak, between 0 and 1 The translational shifts that occur between two images This function computes Hanning window coefficients in two dimensions. Destination array to place Hann coefficients in The window size specifications Created array type Draws the line segment between pt1 and pt2 points in the image. The line is clipped by the image or ROI rectangle. For non-antialiased lines with integer coordinates the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering. The image First point of the line segment Second point of the line segment Line color Line thickness. Type of the line: 8 (or 0) - 8-connected line. 4 - 4-connected line. CV_AA - antialiased line. Number of fractional bits in the point coordinates Draws an arrow segment pointing from the first point to the second one. Image The point the arrow starts from. The point the arrow points to. Line color. Line thickness. Type of the line. Number of fractional bits in the point coordinates. The length of the arrow tip in relation to the arrow length Draws a single or multiple polygonal curves Image Array of points Indicates whether the polylines must be drawn closed. If !=0, the function draws the line from the last vertex of every contour to the first vertex. Polyline color Thickness of the polyline edges Type of the line segments, see cvLine description Number of fractional bits in the vertex coordinates Draws a single or multiple polygonal curves Image Array of pointers to polylines Indicates whether the polylines must be drawn closed. If !=0, the function draws the line from the last vertex of every contour to the first vertex. Polyline color Thickness of the polyline edges Type of the line segments, see cvLine description Number of fractional bits in the vertex coordinates
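The drawing primitives above share the same color/thickness/line-type conventions; a small C# sketch (colors are in BGR order, as the CvtColor discussion earlier implies):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static class DrawingSketch
    {
        static void Run()
        {
            using (Mat canvas = new Mat(240, 320, DepthType.Cv8U, 3))
            {
                canvas.SetTo(new MCvScalar(0, 0, 0));              // black background
                CvInvoke.Line(canvas, new Point(10, 10), new Point(300, 200),
                    new MCvScalar(0, 255, 0), 2);                  // green, 2 px thick
                CvInvoke.Rectangle(canvas, new Rectangle(50, 50, 100, 80),
                    new MCvScalar(0, 0, 255), -1);                 // negative = filled
            }
        }
    }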
Draws a rectangle specified by a CvRect structure Image The rectangle to be drawn Line color Thickness of the lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. Type of the line Number of fractional bits in the point coordinates Computes the connected components labeled image of a boolean image The boolean image The connected components labeled image of the boolean image 4 or 8 way connectivity Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image N, the total number of labels [0, N-1] where 0 represents the background label. Computes the connected components labeled image of a boolean image The boolean image The connected components labeled image of the boolean image Statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes. The data type is CV_32S Centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F. 4 or 8 way connectivity Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image N, the total number of labels [0, N-1] where 0 represents the background label. Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. weights level weights Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. reject levels level weights Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. found weights found scales detect threshold, use 0 for default win det size, use (64, 128) for default
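A minimal use of the connected-components labeling described above (placeholder file name; the default 8-way connectivity and CV_32S label type are assumed):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class LabelSketch
    {
        static void Run()
        {
            using (Mat binary = CvInvoke.Imread("mask.png", ImreadModes.Grayscale))
            using (Mat labels = new Mat())
            {
                // Returns N; labels holds values 0..N-1, where 0 is the background.
                int n = CvInvoke.ConnectedComponents(binary, labels);
            }
        }
    }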
Solves a given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). What we mean here by “linear programming problem” (or LP problem, for short) can be formulated as: Maximize c·x subject to: Ax <= b and x >= 0 This row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may be also submitted, in the latter case it is understood to correspond to c^T. m-by-n+1 matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers. The solution will be returned here as a column-vector - it corresponds to x in the formulation above. It will contain 64-bit floating point numbers. The return codes The primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function to minimize some functional). As image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented. This array should contain one or more noised versions of the image that is to be restored. Here the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary. Corresponds to lambda in the formulas above. As it is enlarged, the smooth (blurred) images are treated more favorably than detailed (but maybe more noised) ones. Roughly speaking, as it becomes smaller, the result will be more blurred but more severe outliers will be removed. Number of iterations that the algorithm will run. Of course, the more iterations the better, but it is hard to quantitatively refine this statement, so just use the default and increase it if the results are poor. Reconstructs the selected image area from the pixels near the area boundary. The function may be used to remove dust and scratches from a scanned photo, or to remove undesirable objects from still images or video. The input 8-bit 1-channel or 3-channel image The inpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted The output image of the same format and the same size as input The inpainting method The radius of the circular neighborhood of each point inpainted that is considered by the algorithm Perform image denoising using the Non-local Means Denoising algorithm: http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise is expected to be Gaussian white noise. Input 8-bit 1-channel, 2-channel or 3-channel image. Output image with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details, a smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used to compute weights. Should be odd. Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time.
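For the grayscale denoiser just described, a hedged one-call sketch (placeholder file name; h = 10 is merely a common starting point, not a library default):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class DenoiseSketch
    {
        static void Run()
        {
            using (Mat noisy = CvInvoke.Imread("noisy.png", ImreadModes.Grayscale))
            using (Mat clean = new Mat())
            {
                // h = 10 (filter strength), 7x7 template patch, 21x21 search window.
                CvInvoke.FastNlMeansDenoising(noisy, clean, 10f, 7, 21);
            }
        }
    }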
Perform image denoising using the Non-local Means Denoising algorithm (modified for color images): http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise is expected to be Gaussian white noise. The function converts the image to the CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the fastNlMeansDenoising function. Input 8-bit 1-channel, 2-channel or 3-channel image. Output image with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details, a smaller h value preserves details but also preserves some noise. The same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors. Size in pixels of the template patch that is used to compute weights. Should be odd. Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications. Input 8-bit 3-channel image Output 8-bit 3-channel image Edge preserving filters Range between 0 and 200 Range between 0 and 1 This filter enhances the details of a particular image. Input 8-bit 3-channel image Output image with the same size and type as src Range between 0 and 200 Range between 0 and 1 Pencil-like non-photorealistic line drawing Input 8-bit 3-channel image Output 8-bit 1-channel image Output image with the same size and type as src Range between 0 and 200 Range between 0 and 1 Range between 0 and 0.1 Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. Input 8-bit 3-channel image. Output image with the same size and type as src. Range between 0 and 200. Range between 0 and 1. Given an original color image, two differently colored versions of this image can be mixed seamlessly. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src . R-channel multiply factor. Multiplication factor is between 0.5 and 2.5. G-channel multiply factor. Multiplication factor is between 0.5 and 2.5. B-channel multiply factor. Multiplication factor is between 0.5 and 2.5. Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver locally modifies the apparent illumination of an image. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Value ranges between 0-2. Value ranges between 0-2. By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny Edge Detector is used. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Range from 0 to 100. Value > 100 The size of the Sobel kernel to be used. Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications Input 8-bit 3-channel image. Output 8-bit 1-channel image. Output 8-bit 3-channel image.
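The non-photorealistic filters above all take a (sigmaS, sigmaR) pair; a sketch for Stylization under the same assumptions as the previous examples (placeholder file name):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class StylizationSketch
    {
        static void Run()
        {
            using (Mat photo = CvInvoke.Imread("photo.png", ImreadModes.Color))
            using (Mat styled = new Mat())
            {
                // sigmaS in [0, 200] sets the smoothing extent;
                // sigmaR in [0, 1] sets how strongly edges are preserved.
                CvInvoke.Stylization(photo, styled, 60f, 0.45f);
            }
        }
    }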
Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, ones that are restricted to a region manually selected (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by novel content Input 8-bit 3-channel image. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Point in dst image where the object is placed. Output image with the same size and type as dst. Cloning method Implements the CAMSHIFT object tracking algorithm ([Bradski98]). First, it finds an object center using cvMeanShift and, after that, calculates the object size and orientation. Back projection of object histogram Initial search window Criteria applied to determine when the window search should be finished Circumscribed box for the object, contains object size and orientation Iterates to find the object center given its back projection and the initial position of the search window. The iterations are made until the search window center moves by less than the given value and/or until the function has done the maximum number of iterations. Back projection of object histogram Initial search window Criteria applied to determine when the window search should be finished. The number of iterations made Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK. 8-bit input image. Output pyramid. Window size of the optical flow algorithm. Must be not less than the winSize argument of calcOpticalFlowPyrLK. It is needed to calculate the required padding for pyramid levels. 0-based maximal pyramid level number. Set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients then calcOpticalFlowPyrLK will calculate them internally. The border mode for pyramid layers. The border mode for gradients. Put ROI of input image into the pyramid if possible. You can pass false to force data copying. Number of levels in the constructed pyramid. Can be less than maxLevel. Updates the motion history image as follows: mhi(x,y) = timestamp if silhouette(x,y) != 0; 0 if silhouette(x,y) = 0 and mhi(x,y) < timestamp - duration; mhi(x,y) otherwise. That is, MHI pixels where motion occurs are set to the current timestamp, while the pixels where motion happened far ago are cleared. Silhouette mask that has non-zero pixels where the motion occurs. Motion history image, that is updated by the function (single-channel, 32-bit floating-point) Current time in milliseconds or other units. Maximal duration of motion track in the same units as timestamp. Calculates the derivatives Dx and Dy of mhi and then calculates gradient orientation as: orientation(x,y) = arctan(Dy(x,y)/Dx(x,y)) where both Dx(x,y) and Dy(x,y) signs are taken into account (as in the cvCartToPolar function). After that mask is filled to indicate where the orientation is valid (see delta1 and delta2 description). Motion history image Mask image; marks pixels where motion gradient data is correct. Output parameter. Motion gradient orientation image; contains angles from 0 to ~360. The function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2). The function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2). Aperture size of the derivative operators used by the function: CV_SCHARR, 1, 3, 5 or 7 (see cvSobel). Finds all the motion segments and marks them in segMask with individual values each (1,2,...). It also returns a sequence of CvConnectedComp structures, one per motion component.
After that, the motion direction for every component can be calculated with cvCalcGlobalOrientation using the extracted mask of the particular component (using cvCmp) Motion history image Image where the mask found should be stored, single-channel, 32-bit floating-point Current time in milliseconds or other units Segmentation threshold; recommended to be equal to the interval between motion history "steps" or greater Vector containing ROIs of motion connected components. Calculates the general motion direction in the selected region and returns the angle between 0 and 360. At first the function builds the orientation histogram and finds the basic orientation as a coordinate of the histogram maximum. After that the function calculates the shift relative to the basic orientation as a weighted sum of all orientation vectors: the more recent the motion, the greater the weight. The resultant angle is a circular sum of the basic orientation and the shift. Motion gradient orientation image; calculated by the function cvCalcMotionGradient. Mask image. It may be a conjunction of the valid gradient mask, obtained with cvCalcMotionGradient, and the mask of the region whose direction needs to be calculated. Motion history image. Current time in milliseconds or other units; it is better to store the time passed to cvUpdateMotionHistory before and reuse it here, because running cvUpdateMotionHistory and cvCalcMotionGradient on large images may take some time. Maximal duration of motion track in milliseconds, the same as in cvUpdateMotionHistory The angle Calculates optical flow for a sparse feature set using the iterative Lucas-Kanade method in pyramids First frame, at time t Second frame, at time t + dt Array of points for which the flow needs to be found Size of the search window of each pyramid level Maximal pyramid level number. If 0 , pyramids are not used (single level), if 1 , two levels are used, etc Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped Flags Array of 2D points containing calculated new positions of input features in the second image Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise Array of double numbers containing the difference between patches around the original and moved points The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, so it allows to remove bad points and get a performance boost. Implements the sparse iterative version of the Lucas-Kanade optical flow in pyramids ([Bouguet00]). It calculates the coordinates of the feature points on the current video frame given their coordinates on the previous frame. The function finds the coordinates with sub-pixel accuracy. Both parameters prev_pyr and curr_pyr comply with the following rules: if the image pointer is 0, the function allocates the buffer internally, calculates the pyramid, and releases the buffer after processing. Otherwise, the function calculates the pyramid and stores it in the buffer unless the flag CV_LKFLOW_PYR_A[B]_READY is set. The image should be large enough to fit the Gaussian pyramid data. After the function call both pyramids are calculated and the readiness flag for the corresponding image can be set in the next call (i.e., typically, for all the image pairs except the very first one CV_LKFLOW_PYR_A_READY is set).
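A sparse Lucas-Kanade tracking sketch for the pyramid method described above. This assumes the PointF[]-based convenience overload of CalcOpticalFlowPyrLK exposed by Emgu.CV 3.x (and again uses GoodFeaturesToTrack only to seed features); treat the exact signature as an assumption:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static class SparseFlowSketch
    {
        static void Run()
        {
            using (Mat prev = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale))
            using (Mat next = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale))
            {
                PointF[] prevPts = CvInvoke.GoodFeaturesToTrack(prev, 200, 0.01, 10);
                PointF[] nextPts;  // new positions in the second frame
                byte[] status;     // 1 if the flow for the feature was found
                float[] trackErr;  // patch difference around original/moved points
                CvInvoke.CalcOpticalFlowPyrLK(prev, next, prevPts,
                    new Size(21, 21), 3, new MCvTermCriteria(30, 0.03),
                    out nextPts, out status, out trackErr);
            }
        }
    }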
First frame, at time t. Second frame, at time t + dt . Array of points for which the flow needs to be found. Array of 2D points containing calculated new positions of input features Size of the search window of each pyramid level. Maximal pyramid level number. If 0 , pyramids are not used (single level), if 1 , two levels are used, etc. Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise. Array of double numbers containing the difference between patches around the original and moved points. Optional parameter; can be NULL Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped. Miscellaneous flags The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, so it allows to remove bad points and get a performance boost. Computes dense optical flow using Gunnar Farneback's algorithm The first 8-bit single-channel input image The second input image of the same size and the same type as prevImg The computed flow image for x-velocity; will have the same size as prevImg The computed flow image for y-velocity; will have the same size as prevImg Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is half the size of the previous one The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used The averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field The number of iterations the algorithm does at each pyramid level Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, poly_n=5 or 7 Standard deviation of the Gaussian that is used to smooth derivatives that are used as a basis for the polynomial expansion. For poly_n=5 you can set poly_sigma=1.1, for poly_n=7 a good value would be poly_sigma=1.5 The operation flags Computes dense optical flow using Gunnar Farneback's algorithm The first 8-bit single-channel input image The second input image of the same size and the same type as prevImg The computed flow image; will have the same size as prevImg and type CV_32FC2 Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is half the size of the previous one The number of pyramid layers, including the initial image.
levels=1 means that no extra layers are created and only the original images are used The averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field The number of iterations the algorithm does at each pyramid level Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, poly_n=5 or 7 Standard deviation of the Gaussian that is used to smooth derivatives that are used as a basis for the polynomial expansion. For poly_n=5 you can set poly_sigma=1.1, for poly_n=7 a good value would be poly_sigma=1.5 The operation flags Finds the geometric transform (warp) between two images in terms of the ECC criterion single-channel template image; CV_8U or CV_32F array. single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage. floating-point 2×3 or 3×3 mapping matrix (warp). Specifies the type of motion. Use Affine for default Specifies the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Default values can use 50 iterations and 0.001 eps. An optional mask to indicate valid values of inputImage. The final enhanced correlation coefficient, that is the correlation coefficient between the template image and the final warped input image. Estimates the rigid transformation between 2 point sets. The points from the source image The corresponding points from the destination image Indicates if full affine should be performed If successful, the 2x3 rotation matrix that defines the affine transform; otherwise null is returned. Estimates the rigid transformation between 2 images or 2 point sets. First image or 2D point set (as a 2 channel Matrix<float>) Second image or 2D point set (as a 2 channel Matrix<float>) Indicates if full affine should be performed The resulting Matrix<double> that represents the affine transformation Release the InputArray Pointer to the input array Release the input / output array Pointer to the input output array Release the input / output array Pointer to the input / output array Read point cloud from file The point cloud file The color of the points The normal of the points The points Write point cloud to file The point cloud file name The point cloud The color The normals The Cascade Classifier Create a cascade classifier Create a CascadeClassifier from the specific file The name of the file that contains the CascadeClassifier Load the cascade classifier from a file node The file node. The file may contain a new cascade classifier only. True if the classifier can be imported. Finds rectangular regions in the given image that are likely to contain objects the cascade has been trained for and returns those regions as a sequence of rectangles. The function scans the image several times at different scales. Each time it considers overlapping regions in the image. It may also apply some heuristics to reduce the number of analyzed regions, such as Canny pruning.
After it has proceeded and collected the candidate rectangles (regions that passed the classifier cascade), it groups them and returns a sequence of average rectangles for each large enough group. The image where the objects are to be detected from The factor by which the search window is scaled between the subsequent scans, for example, 1.1 means increasing the window by 10% Minimum number (minus 1) of neighbor rectangles that makes up an object. All the groups of a smaller number of rectangles than min_neighbors-1 are rejected. If min_neighbors is 0, the function does not do any grouping at all and returns all the detected candidate rectangles, which may be useful if the user wants to apply a customized grouping procedure. Use 3 for default. Minimum window size. Use Size.Empty for default, where it is set to the size of samples the classifier has been trained on (~20x20 for face detection) Maximum window size. Use Size.Empty for default, where the parameter will be ignored. The objects detected, one array per channel Get if the cascade is old format Get the original window size Release the CascadeClassifier Object and all the memory associated with it A convolution kernel The center of the convolution kernel Create a convolution kernel with the specific number of rows and columns The number of rows for the convolution kernel The number of columns for the convolution kernel Create a convolution kernel using the specific matrix and center The values for the convolution kernel The center of the kernel Create a convolution kernel using the specific floating point matrix The values for the convolution kernel Create a convolution kernel using the specific floating point matrix and center The values for the convolution kernel The center for the convolution kernel Get a flipped copy of the convolution kernel The type of the flipping The flipped copy of this image The center of the convolution kernel Obtain the transpose of the convolution kernel A transposed convolution kernel Wrapped CvArr The type of elements in this CvArray The size of the elements in the CvArray, it is the cached value of Marshal.SizeOf(typeof(TDepth)). The pinned GCHandle to _array; Get or set the Compression Ratio for serialization. A number between 0 - 9. 0 means no compression at all, while 9 means best compression Get the size of element in bytes The pointer to the internal structure Get the size of the array Get the width (#Cols) of the cvArray. If ROI is set, the width of the ROI Get the height (#Rows) of the cvArray. If ROI is set, the height of the ROI Get the number of channels of the array The number of rows for this array The number of cols for this array Get or Set an Array of bytes that represent the data in this array Should only be used for serialization & deserialization Get the underneath managed array Allocate data for the array The number of rows The number of columns The number of channels of this cvArray Sum of diagonal elements of the matrix The norm of this Array Calculates and returns the Euclidean dot product of two arrays. src1 dot src2 = sumI(src1(I)*src2(I)) In case of multiple channel arrays the results for all channels are accumulated. In particular, cvDotProduct(a,a), where a is a complex vector, will return ||a||^2. The function can process multi-dimensional arrays, row by row, layer by layer and so on. The other Array to apply dot product with src1 dot src2 Check that every array element is neither NaN nor +- inf. The functions also check that each value is between the specified minimum and maximum values. In the case of multi-channel arrays each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and then the functions return false.
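Returning to the CascadeClassifier.DetectMultiScale description above, a hedged face-detection sketch; the cascade XML name is a placeholder for one of the files shipped with OpenCV:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    static class FaceDetectSketch
    {
        static void Run()
        {
            using (CascadeClassifier cascade =
                new CascadeClassifier("haarcascade_frontalface_default.xml"))
            using (Mat gray = CvInvoke.Imread("people.png", ImreadModes.Grayscale))
            {
                // Grow the search window by 10% per scan; require 3 neighbors.
                Rectangle[] faces = cascade.DetectMultiScale(gray, 1.1, 3);
            }
        }
    }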
Check that every array element is neither NaN nor +- inf. The function also checks that each value is between the specified lower and upper boundaries; in the case of multi-channel arrays each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and the function returns false. The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range This will be filled with the position of the first outlier True if all values are in range Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The function can be used to compute horizontal and vertical projections of a raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes The destination single-row/single-column vector that accumulates all the matrix rows/columns The dimension index along which the matrix is reduced. The reduction operation type The type of depth of the reduced array Copy the current array to the destination array The destination Array Set the elements of the Array to the specified value, using the specified mask The value to be set The mask for the operation Set the elements of the Array to the specified value, using the specified mask The value to be set The mask for the operation Inplace fills Array with uniformly distributed random numbers the inclusive lower boundary of random numbers range the exclusive upper boundary of random numbers range Inplace fills Array with normally distributed random numbers the mean value of random numbers the standard deviation of random numbers Initializes scaled identity matrix The value on the diagonal Set the values to zero Initialize the identity matrix Inplace multiply elements of the Array by the specified scale The scale to be multiplied Inplace elementwise multiply the current Array with another array The other array to be elementwise multiplied with Free the _dataHandle if it is set Inplace compute the elementwise minimum value The value to compare with Inplace elementwise minimize the current Array with another array The other array to be elementwise minimized with this array Inplace compute the elementwise maximum value The value to be compared with Inplace elementwise maximize the current Array with another array The other array to be elementwise maximized with this array Inplace And operation with another array The other array to perform the AND operation with Inplace Or operation with another array The other array to perform the OR operation with Inplace compute the complement for all array elements Save the CvArray as an image The name of the image to save Get the xml schema the xml schema Function to call when deserializing this object from XML The xml reader Function to call when serializing this object to XML The xml writer A function used for runtime serialization of the object Serialization info Streaming context A function used for runtime deserialization of the object Serialization info Streaming context The Mat header that represents this CvArr Get the Mat header that represents this CvArr The unmanaged pointer to the input array. The unmanaged pointer to the output array. The unmanaged pointer to the input output array. Get the umat representation of this mat The UMat
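The in-place CvArray helpers above can be exercised through Matrix<TDepth>, which derives from CvArray<TDepth> (a sketch; method names assume Emgu's managed API):

    // using Emgu.CV; using Emgu.CV.Structure;
    Matrix<float> a = new Matrix<float>(3, 3);
    a.SetRandUniform(new MCvScalar(0), new MCvScalar(1)); // uniform random fill in [0, 1)
    a.SetIdentity();                                      // overwrite with the identity matrix
    a._Mul(2.0);                                          // in-place multiply every element by 2
    double normSq = a.DotProduct(a);                      // cvDotProduct(a, a) = sum of squared elements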
A Uniform Multi-dimensional Dense Histogram Creates a uniform 1-D histogram of the specified size The number of bins in this 1-D histogram. The upper and lower boundary of the bin Creates a uniform multi-dimension histogram of the specified size The length of this array is the dimension of the histogram. The values of the array contain the number of bins in each dimension. The total number of bins equals the product of all numbers in the array the upper and lower boundaries of the bins Clear this histogram Project the images to the histogram bins The type of depth of the image images to project If it is true, the histogram is not cleared in the beginning. This feature allows the user to compute a single histogram from several images, or to update the histogram online. Can be null if not needed. The operation mask, determines what pixels of the source images are counted Project the matrices to the histogram bins The type of depth of the image Matrices to project If it is true, the histogram is not cleared in the beginning. This feature allows the user to compute a single histogram from several images, or to update the histogram online. Can be null if not needed. The operation mask, determines what pixels of the source images are counted Backproject the histogram into a gray scale image Source images, all are of the same size and type Destination back projection image of the same type as the source images The type of depth of the image Backproject the histogram into a matrix Source matrices, all are of the same size and type Destination back projection matrix of the same type as the source matrices The type of depth of the matrix Get the size of the bin dimensions Get the ranges of this histogram Gets the bin values. The bin values File Storage Node class. The node is used to store each and every element of the file storage opened for reading. When an XML/YAML file is read, it is first parsed and stored in the memory as a hierarchical collection of nodes. Each node can be a “leaf” that contains a single number or a string, or be a collection of other nodes. There can be named collections (mappings) where each element has a name and is accessed by that name, and ordered collections (sequences) where elements do not have names but rather are accessed by index. The type of the file node can be determined using the FileNode::type method. Note that file nodes are only used for navigating file storages opened for reading. When a file storage is opened for writing, no data is stored in memory after it is written. Type of the file storage node Empty node an integer Floating-point number Synonym for Real Text string in UTF-8 encoding Synonym for Str Integer of size size_t. Typically used for storing complex dynamic structures where some elements reference the others The sequence Mapping The type mask Compact representation of a sequence or mapping. Used only by YAML writer A registered object (e.g. a matrix) Empty structure (sequence or mapping) The node has a name (i.e. it is an element of a mapping) Reads a Mat from the node The Mat where the result is read into The default mat. Gets a value indicating whether this instance is empty. true if this instance is empty; otherwise, false. Gets the type of the node. The type of the node. Release the unmanaged resources Reads the string from the node The string from the node Reads the int from the node. The int from the node. Reads the float from the node. The float from the node. Reads the double from the node. The double from the node.
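The DenseHistogram workflow described above, in a minimal sketch (gray is a hypothetical Image<Gray, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    using (DenseHistogram hist = new DenseHistogram(256, new RangeF(0, 256)))
    {
        hist.Calculate(new Image<Gray, byte>[] { gray }, false, null); // accumulate = false, no mask
        using (Image<Gray, byte> backProjection = hist.BackProject(new Image<Gray, byte>[] { gray }))
        {
            // each pixel of backProjection now reflects how often its bin occurs in gray
        }
    }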
XML/YAML file storage class that encapsulates all the information necessary for writing or reading data to/from a file. File storage mode Open the file for reading Open the file for writing Open the file for appending Read data from source or write data to the internal buffer Mask for format flags Auto format XML format YAML format JSON format Write raw data in Base64 by default. (consider using WriteBase64) enable both Write and Base64 Initializes a new instance of the class. Name of the file to open or the text string to read the data from. Extension of the file (.xml or .yml/.yaml) determines its format (XML or YAML respectively). Also you can append .gz to work with compressed files, for example myHugeMatrix.xml.gz. If both FileStorage::WRITE and FileStorage::MEMORY flags are specified, source is used just to specify the output file format (e.g. mydata.xml, .yml etc.). Mode of operation. Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Writes the specified Mat to the node with the specified name The Mat to be written to the file storage The name of the node. Writes the specified value to the node with the specified name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specified name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specified name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specified name The value to be written to the file storage The name of the node. Gets a value indicating whether this instance is opened. true if the object is associated with the current file; otherwise, false. Closes the file and releases all the memory buffers Call this method after all I/O operations with the storage are finished. If the storage was opened for writing data and FileStorage.Mode.Write was specified The string that represents the text in the FileStorage Gets the top-level mapping. Zero-based index of the stream. In most cases there is only one stream in the file. However, YAML supports multiple streams and so there can be several. The top-level mapping Gets the first element of the top-level mapping. The first element of the top-level mapping. Gets the specified element of the top-level mapping. Name of the node. The specified element of the top-level mapping. Gets the FileNode with the specified node name. The FileNode. Name of the node. Release the unmanaged resources Similar to the << operator in C++; since we cannot overload the << operator in C# where the second parameter is not an int, this function is used instead. The string value to insert.
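A round trip through the FileStorage API above might look like this (a sketch; the file name data.yml and node name "m" are hypothetical):

    // using Emgu.CV; using Emgu.CV.CvEnum;
    using (Mat m = Mat.Eye(3, 3, DepthType.Cv64F, 1))
    using (FileStorage fs = new FileStorage("data.yml", FileStorage.Mode.Write))
        fs.Write(m, "m"); // serialize the Mat under the node name "m"

    using (FileStorage fs = new FileStorage("data.yml", FileStorage.Mode.Read))
    using (Mat restored = new Mat())
    using (Mat defaultMat = new Mat())
        fs.GetNode("m").ReadMat(restored, defaultMat); // read it back via the top-level mapping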
A HOG descriptor Create a new HOGDescriptor Create a new HOGDescriptor using the specific parameters. Block size in cells. Use (16, 16) for default. Cell size. Use (8, 8) for default. Block stride. Must be a multiple of cell size. Use (8,8) for default. Do gamma correction preprocessing or not. Use true for default. L2-Hys normalization method shrinkage. Number of bins. Gaussian smoothing window parameter. Detection window size. Must be aligned to block size and block stride. Must match the size of the training image. Use (64, 128) for default. Return the default people detector The default people detector Set the SVM detector The SVM detector Performs object detection with increasing detection window. The image to search in Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Window stride. Must be a multiple of block stride. Coefficient of the detection window increase. After detection some objects could be covered by many rectangles. This coefficient regulates the similarity threshold. 0 means don't perform grouping. Should be an integer if not using meanshift grouping. Use 2.0 for default If true, it will use meanshift grouping. The regions where positives are found The image Window stride. Must be a multiple of block stride. Use Size.Empty for default Padding. Use Size.Empty for default Locations for the computation. Can be null if not needed The descriptor vector Release the unmanaged memory associated with this HOGDescriptor Get the size of the descriptor Apply a converter and compute the result for each channel of the image; for a single channel image, apply the converter directly, for a multiple channel image, make a copy of each channel to a temporary image and apply the converter The return type The source image The converter that accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel Apply a converter and compute the result for each channel of the image; for a single channel image, apply the converter directly, for a multiple channel image, make a copy of each channel to a temporary image and apply the converter The source image The converter that accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel IImage interface Convert this image into Bitmap, when available, data is shared with this image. The Bitmap, when available, data is shared with this image The size of this image Returns the min / max location and values for the image Returns the min / max location and values for the image Split current IImage into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Get the pointer to the unmanaged memory Save the image to the specific file The file name of the image Get the number of channels for this image An Image is a wrapper to IplImage of OpenCV. Color type of this image (either Gray, Bgr, Bgra, Hsv, Hls, Lab, Luv, Xyz, Ycc, Rgb or Rgba) Depth of this image (either Byte, SByte, Single, double, UInt16, Int16 or Int32) The dimension of color Create an empty Image Create image from the specific multi-dimensional data, where the 1st dimension is # of rows (height), the 2nd dimension is # cols (width) and the 3rd dimension is the channel The multi-dimensional data where the 1st dimension is # of rows (height), the 2nd dimension is # cols (width) and the 3rd dimension is the channel Create an Image from unmanaged data. The width of the image The height of the image Size of aligned image row in bytes Pointer to aligned image data, where each row should be 4-aligned The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter, however, the memory should not be released until the related Image is released. Allocate the image from the image header. This should be only a header to the image. When the image is disposed, the cvReleaseImageHeader will be called on the pointer.
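Pedestrian detection with the built-in people detector, tying together the HOGDescriptor members described above (a sketch assuming Emgu CV 3.x, where DetectMultiScale returns MCvObjectDetection[]; img is a hypothetical Image<Bgr, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    using (HOGDescriptor hog = new HOGDescriptor())
    {
        hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
        MCvObjectDetection[] results = hog.DetectMultiScale(img);
        foreach (MCvObjectDetection detection in results)
            img.Draw(detection.Rect, new Bgr(0, 255, 0), 2);
    }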
Read image from a file the name of the file that contains the image Load the specific file using Bitmap Load the specific file using OpenCV Obtain the image from the specific Bitmap The bitmap which will be converted to the image Create a blank Image of the specified width, height and color. The width of the image The height of the image The initial color of the image Create a blank Image of the specified width and height. The width of the image The height of the image Create a blank Image of the specific size The size of the image Get or Set the data for this matrix. The Get function has O(1) complexity. The Set function makes a copy of the data If the image contains Byte and width is not a multiple of 4. The second dimension of the array might be larger than the Width of this image. This is necessary since the length of a row needs to be 4-aligned for OpenCV optimization. The Set function always makes a copy of the specific value. If the image contains Byte and width is not a multiple of 4. The second dimension of the array created might be larger than the Width of this image. Allocate data for the array The number of rows The number of columns The number of channels of this image Create a multi-channel image from multiple gray scale images The image channels to be merged into a single image Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info streaming context The IplImage structure Get or Set the region of interest for this image. To clear the ROI, set it to System.Drawing.Rectangle.Empty Get the number of channels for this image Get the underlying managed array Get the equivalent opencv depth type for this image Indicates if the region of interest has been set Get the average value on this image The average color of the image Get the average value on this image, using the specific mask The mask for finding the average value The average color of the masked area Get the sum for each color channel The sum for each color channel Set every pixel of the image to the specific color The color to be set Set every pixel of the image to the specific color, using a mask The color to be set The mask for setting color Copy the masked area of this image to destination the destination to copy to the mask for the copy Make a copy of the image using a mask, if ROI is set, only copy the ROI the mask for copying A copy of the image Make a copy of the specific ROI (Region of Interest) from the image The roi to be copied The roi region on the image Get a copy of the boxed region of the image The boxed region of the image A copy of the boxed region of the image Make a copy of the image, if ROI is set, only copy the ROI A copy of the image Create an image of the same size The initial pixel in the image equals zero The image of the same size Make a clone of the current image. All image data as well as the COI and ROI are cloned A clone of the current image. All image data as well as the COI and ROI are cloned
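Creating an image and working with its ROI, as described above (a minimal sketch):

    // using System.Drawing; using Emgu.CV; using Emgu.CV.Structure;
    Image<Bgr, byte> img = new Image<Bgr, byte>(640, 480, new Bgr(255, 0, 0)); // 640x480, all blue
    img.ROI = new Rectangle(100, 100, 200, 150); // subsequent operations act on the ROI only
    Bgr average = img.GetAverage();              // average color of the ROI
    img.ROI = Rectangle.Empty;                   // clear the ROI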
Get a subimage whose image data is shared with the current image. The rectangle area of the sub-image A subimage whose image data is shared with the current image Draw a Rectangle of the specific color and thickness The rectangle to be drawn The color of the rectangle If thickness is less than 1, the rectangle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw a 2D Cross using the specific color and thickness The 2D Cross to be drawn The color of the cross Must be > 0 Draw a line segment using the specific color and thickness The line segment to be drawn The color of the line segment The thickness of the line segment Line type Number of fractional bits in the center coordinates and radius value Draw a line segment using the specific color and thickness The line segment to be drawn The color of the line segment The thickness of the line segment Line type Number of fractional bits in the center coordinates and radius value Draw a convex polygon using the specific color and thickness The convex polygon to be drawn The color of the polygon If thickness is less than 1, the polygon is filled up Fill the convex polygon with the specific color The array of points that define the convex polygon The color to fill the polygon with Line type Number of fractional bits in the center coordinates and radius value Draw the polyline defined by the array of 2D points A polyline defined by its points if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw the polylines defined by the array of array of 2D points An array of polylines each represented by an array of points if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw a Circle of the specific color and thickness The circle to be drawn The color of the circle If thickness is less than 1, the circle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw an Ellipse of the specific color and thickness The ellipse to be drawn The color of the ellipse If thickness is less than 1, the ellipse is filled up Line type Number of fractional bits in the center coordinates and radius value Draw the text using the specific font on the image The text message to be drawn Font type. Font scale factor that is multiplied by the font-specific base size. The location of the bottom left corner of the font The color of the text Thickness of the lines used to draw a text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
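A few of the Draw overloads above in use (img is a hypothetical Image<Bgr, byte>):

    // using System.Drawing; using Emgu.CV; using Emgu.CV.Structure;
    img.Draw(new Rectangle(10, 10, 80, 60), new Bgr(0, 0, 255), 2);           // red outline, thickness 2
    img.Draw(new CircleF(new PointF(160, 120), 30f), new Bgr(0, 255, 0), -1); // thickness < 1: filled circle
    img.Draw(new LineSegment2D(new Point(0, 0), new Point(320, 240)), new Bgr(255, 0, 0), 1);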
Draws contour outlines in the image if thickness>=0 or fills area bounded by the contours if thickness<0 All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours Maximal level for drawn contours. If 0, only the contour is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level. Thickness of lines the contours are drawn with. If it is negative the contour interiors are drawn Type of the contour segments Optional information about hierarchy. It is only needed if you want to draw only some of the contours Shift all the point coordinates by the specified value. It is useful when the contours were retrieved from an image ROI and the ROI offset needs to be taken into account during the rendering. Draws contour outlines in the image if thickness>=0 or fills area bounded by the contours if thickness<0 The input contour stored as a point vector. Color of the contours Thickness of lines the contours are drawn with. If it is negative the contour interiors are drawn Type of the contour segments Shift all the point coordinates by the specified value. It is useful when the contours were retrieved from an image ROI and the ROI offset needs to be taken into account during the rendering. Apply Probabilistic Hough transform to find line segments. The current image must be a binary image (e.g. the edges as a result of the Canny edge detector) Distance resolution in pixel-related units. Angle resolution measured in radians A line is returned by the function if the corresponding accumulator value is greater than threshold Minimum width of a line Minimum gap between lines The line segments detected for each of the channels Apply Canny Edge Detector followed by Probabilistic Hough transform to find line segments in the image The threshold to find initial segments of strong edges The threshold used for edge linking Distance resolution in pixel-related units. Angle resolution measured in radians A line is returned by the function if the corresponding accumulator value is greater than threshold Minimum width of a line Minimum gap between lines The line segments detected for each of the channels First apply Canny Edge Detector on the current image, then apply Hough transform to find circles The higher threshold of the two passed to the Canny edge detector (the lower one is half of it). Accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2, the accumulator will have half the width and height, etc Minimal radius of the circles to search for Maximal radius of the circles to search for Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed The circles detected for each of the channels
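The Hough helpers above return one result array per channel; a sketch on a single-channel image (gray is a hypothetical binary Image<Gray, byte>, e.g. a Canny output):

    // using System; using Emgu.CV; using Emgu.CV.Structure;
    LineSegment2D[][] lines = gray.HoughLinesBinary(
        1, Math.PI / 180, 50, 30, 10);              // rho, theta, threshold, min width, max gap
    CircleF[][] circles = gray.HoughCircles(
        new Gray(180), new Gray(120), 2, 20, 5, 0); // Canny threshold, accumulator threshold, dp, min dist, radius range
    // lines[0] / circles[0] hold the detections for the only channel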
Get or Set the specific channel of the current image. For the Get operation, a copy of the specific channel is returned. For the Set operation, the specific channel is copied to this image. The channel to get from the current image, zero based index The specific channel of the current image Get or Set the color in the specified row (y direction) and column (x direction) The zero-based row (y direction) of the pixel The zero-based column (x direction) of the pixel The color in the specified row and column Get or Set the color at the specified location the location of the pixel the color at the location Return parameters based on ROI The Pointer to the IplImage The address of the pointer that points to the start of the Bytes taken into consideration ROI ROI.Width * ColorType.Dimension The number of bytes in a row taken into consideration ROI The number of rows taken into consideration ROI The width step required to jump to the next row Apply a converter and compute the result for each channel of the image. For a single channel image, apply the converter directly. For a multiple channel image, set the COI for the specific channel before applying the converter The return type The converter that accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel If the image has only one channel, apply the action directly on the IntPtr of this image and the second image, otherwise, make a copy of each channel of this image to a temporary one, apply the action on it and another temporary image and copy the resulting image back to image2 The type of the depth of the image The function which accepts the src IntPtr, dest IntPtr and index of the channel as input The destination image Calculates the image derivative by convolving the image with the appropriate kernel The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to the noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate the first x- or y- image derivative. Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7. In all cases except 1, an aperture_size x aperture_size separable kernel will be used to calculate the derivative. The result of the Sobel edge detector Calculates Laplacian of the source image by summing second x- and y- derivatives calculated using the Sobel operator. Specifying aperture_size=1 gives the fastest variant that is equal to convolving the image with the following kernel: |0 1 0| |1 -4 1| |0 1 0| Aperture size The Laplacian of the image Find the edges on this image and mark them in the returned image. The threshold to find initial segments of strong edges The threshold used for edge linking The edges found by the Canny edge detector Find the edges on this image and mark them in the returned image. The threshold to find initial segments of strong edges The threshold used for edge linking The aperture size, use 3 for default a flag, indicating whether a more accurate norm should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default norm is enough ( L2gradient=false ). The edges found by the Canny edge detector
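The derivative and edge detectors above, chained on a grayscale image (gray is a hypothetical Image<Gray, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, float> dx = gray.Sobel(1, 0, 3);   // first x-derivative with a 3x3 kernel
    Image<Gray, float> lap = gray.Laplace(3);      // Laplacian with aperture size 3
    Image<Gray, byte> edges = gray.Canny(100, 50); // strong-edge threshold 100, linking threshold 50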
Iterates to find the sub-pixel accurate location of corners, or radial saddle points Coordinates of the input corners, the values will be modified by this function call Half sizes of the search window. For example, if win=(5,5) then a 5*2+1 x 5*2+1 = 11 x 11 search window is used Half size of the dead region in the middle of the search zone over which the summation in formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The criteria may specify either or both of the maximum number of iterations and the required accuracy Refined corner coordinates The function slides through the image, compares overlapped patches of size wxh with templ using the specified method and returns the comparison results Searched template; must not be greater than the source image and must have the same data type as the image Specifies the way the template must be compared with image regions The comparison result: width = this.Width - template.Width + 1; height = this.Height - template.Height + 1 Perform an elementwise AND operation with another image and return the result The second image for the AND operation The result of the AND operation Perform an elementwise AND operation with another image, using a mask, and return the result The second image for the AND operation The mask for the AND operation The result of the AND operation Perform a binary AND operation with some color The color for the AND operation The result of the AND operation Perform a binary AND operation with some color using a mask The color for the AND operation The mask for the AND operation The result of the AND operation Perform an elementwise OR operation with another image and return the result The second image for the OR operation The result of the OR operation Perform an elementwise OR operation with another image, using a mask, and return the result The second image for the OR operation The mask for the OR operation The result of the OR operation Perform an elementwise OR operation with some color The value for the OR operation The result of the OR operation Perform an elementwise OR operation with some color using a mask The color for the OR operation The mask for the OR operation The result of the OR operation Perform an elementwise XOR operation with another image and return the result The second image for the XOR operation The result of the XOR operation Perform an elementwise XOR operation with another image, using a mask, and return the result The second image for the XOR operation The mask for the XOR operation The result of the XOR operation Perform a binary XOR operation with some color The value for the XOR operation The result of the XOR operation Perform a binary XOR operation with some color using a mask The color for the XOR operation The mask for the XOR operation The result of the XOR operation Compute the complement image The complement image Find the elementwise maximum value The second image for the Max operation An image where each pixel is the maximum of this image and the parameter image Find the elementwise maximum value The value to compare with An image where each pixel is the maximum of this image and the specified value Find the elementwise minimum value The second image for the Min operation An image where each pixel is the minimum of this image and the parameter image Find the elementwise minimum value The value to compare with An image where each pixel is the minimum of this image and the specified value Checks that image elements lie between two scalars The inclusive lower limit of color value The inclusive upper limit of color value res[i,j] = 255 if lower <= this[i,j] <= upper, 0 otherwise
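Locating the best match from the template matching described above (img and template are hypothetical Image<Bgr, byte> instances; enum naming assumes Emgu CV 3.x):

    // using System.Drawing; using Emgu.CV; using Emgu.CV.CvEnum; using Emgu.CV.Structure;
    using (Image<Gray, float> result = img.MatchTemplate(template, TemplateMatchingType.CcoeffNormed))
    {
        double[] minValues, maxValues;
        Point[] minLocations, maxLocations;
        result.MinMax(out minValues, out maxValues, out minLocations, out maxLocations);
        // for CcoeffNormed the best match's top-left corner is maxLocations[0]
    }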
Checks that image elements lie between values defined by two images of same size and type The inclusive lower limit of color value The inclusive upper limit of color value res[i,j] = 255 if lower[i,j] <= this[i,j] <= upper[i,j], 0 otherwise Compare the current image with the other image and return the comparison mask The other image to compare with The comparison type The result of the comparison as a mask Compare the current image with the specified value and return the comparison mask The value to compare with The comparison type The result of the comparison as a mask Compare two images, returns true if each of the pixels is equal, false otherwise The other image to compare with true if each of the pixels of the two images is equal, false otherwise Use grabcut to perform background foreground segmentation. The initial rectangle region for the foreground The number of iterations to run GrabCut The background foreground mask where 2 indicates background and 3 indicates foreground Elementwise subtract another image from the current image The second image to be subtracted from the current image The result of elementwise subtracting img2 from the current image Elementwise subtract another image from the current image, using a mask The image to be subtracted from the current image The mask for the subtract operation The result of elementwise subtracting img2 from the current image, using the specific mask Elementwise subtract a color from the current image The color value to be subtracted from the current image The result of elementwise subtracting color 'val' from the current image result = val - this The value from which this image is subtracted val - this result = val - this, using a mask The value from which this image is subtracted The mask for subtraction val - this, with mask Elementwise add another image with the current image The image to be added to the current image The result of elementwise adding img2 to the current image Elementwise add with the current image, using a mask The image to be added to the current image The mask for the add operation The result of elementwise adding img2 to the current image, using the specific mask Elementwise add a color to the current image The color value to be added to the current image The result of elementwise adding the color to the current image Elementwise multiply another image with the current image and the specified scale The image to be elementwise multiplied to the current image The scale to be multiplied this .* img2 * scale Elementwise multiply with the current image The image to be elementwise multiplied to the current image this .* img2 Elementwise multiply the current image with the specified scale The scale to be multiplied The scaled image Accumulate to the current image using the specific mask The image to be added to the current image the mask Accumulate to the current image The image to be added to the current image Return the weighted sum such that: res = this * alpha + img2 * beta + gamma img2 in: res = this * alpha + img2 * beta + gamma alpha in: res = this * alpha + img2 * beta + gamma beta in: res = this * alpha + img2 * beta + gamma gamma in: res = this * alpha + img2 * beta + gamma this * alpha + img2 * beta + gamma Update Running Average. this = (1-alpha)*this + alpha*img Input image, 1- or 3-channel, Byte or Single (each channel of multi-channel image is processed independently). The weight of the input image Update Running Average. this = (1-alpha)*this + alpha*img, using the mask Input image, 1- or 3-channel, Byte or Single (each channel of multi-channel image is processed independently). The weight of the input image The mask for the running average
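Two of the per-pixel operations above in use (hsv, imgA and imgB are hypothetical images of matching size):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, byte> mask = hsv.InRange(new Hsv(0, 100, 100), new Hsv(10, 255, 255)); // 255 where in range
    Image<Bgr, byte> blend = imgA.AddWeighted(imgB, 0.7, 0.3, 0); // res = imgA*0.7 + imgB*0.3 + 0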
Computes absolute difference between this image and the other image The other image to compute absolute difference with The image that contains the absolute difference value Computes absolute difference between this image and the specific color The color to compute absolute difference with The image that contains the absolute difference value Raises every element of the input array to p dst(I)=src(I)^p, if p is an integer dst(I)=abs(src(I))^p, otherwise The exponent of power The power image Calculates exponent of every element of the input array: dst(I)=exp(src(I)) Maximum relative error is ~7e-6. Currently, the function converts denormalized values to zeros on output. The exponent image Calculates natural logarithm of absolute value of every element of the input array Natural logarithm of absolute value of every element of the input array Sample the pixel values on the specific line segment The line to obtain samples The values on the (Eight-connected) line Sample the pixel values on the specific line segment The line to obtain samples The sampling type The values on the line, the first dimension is the index of the point, the second dimension is the index of color channel Scale the image to the specific size The width of the returned image. The height of the returned image. The type of interpolation The resized image Scale the image to the specific size The width of the returned image. The height of the returned image. The type of interpolation if true, the scale is preserved and the resulting image has the maximum width (height) possible that is <= the specified width (height), if false, this function is equivalent to Resize(int width, int height) The resized image Scale the image to the specific size: width *= scale; height *= scale The scale to resize The type of interpolation The scaled image Rotate the image the specified angle cropping the result to the original size The angle of rotation in degrees. The color with which to fill the background The image rotated by the specific angle Transforms source image using the specified matrix 2x3 transformation matrix Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The result of the transformation Transforms source image using the specified matrix 2x3 transformation matrix The width of the resulting image the height of the resulting image Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The result of the transformation Transforms source image using the specified matrix 3x3 transformation matrix Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The depth type of the transformation matrix, should be either float or double The result of the transformation Transforms source image using the specified matrix 3x3 transformation matrix The width of the resulting image the height of the resulting image Interpolation type Warp type Border type A value used to fill outliers The depth type of the transformation matrix, should be either float or double The result of the transformation Rotate this image by the specified angle The angle of rotation in degrees. The color with which to fill the background If set to true the image is cropped to its original size, possibly losing corners information. If set to false the result image has a different size than the original and all rotation information is preserved The rotated image
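Scaling and rotating per the descriptions above (img is a hypothetical Image<Bgr, byte>; Inter naming assumes Emgu CV 3.x):

    // using Emgu.CV; using Emgu.CV.CvEnum; using Emgu.CV.Structure;
    Image<Bgr, byte> half = img.Resize(0.5, Inter.Linear);      // width *= 0.5; height *= 0.5
    Image<Bgr, byte> turned = img.Rotate(30, new Bgr(0, 0, 0)); // 30 degrees, black background, cropped to original size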
Rotate this image by the specified angle The angle of rotation in degrees. Positive means clockwise. The color with which to fill the background If set to true the image is cropped to its original size, possibly losing corners information. If set to false the result image has a different size than the original and all rotation information is preserved The center of rotation The interpolation method The rotated image Convert the image to log polar, simulating the human foveal vision The transformation center, where the output precision is maximal Magnitude scale parameter interpolation type Warp type The converted image Convert the current image to the specific color and depth The type of color to be converted to The type of pixel depth to be converted to Image of the specific color and depth Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The color type of the source image The color depth of the source image The sourceImage Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The sourceImage Convert the current image to the specific depth, at the same time scale and shift the values of the pixel The value to be multiplied with the pixel The value to be added to the pixel The type of depth to convert to Image of the specific depth, val = val * scale + shift The Get property provides a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! For other types of image this property has the same effect as ToBitmap() Take extra caution not to use the Bitmap after the Image object is disposed The Set property converts the bitmap to this Image type. Utility function for the Bitmap Set property Convert this image into Bitmap, the pixel values are copied over to the Bitmap For better performance on Image<Gray, Byte> and Image<Bgr, Byte>, consider using the Bitmap property This image in Bitmap format, the pixel data are copied over to the Bitmap Create a Bitmap image of certain size The width of the bitmap The height of the bitmap This image in Bitmap format of the specific size Performs downsampling step of Gaussian pyramid decomposition. First it convolves this image with the specified filter and then downsamples the image by rejecting even rows and columns. The downsampled image Performs up-sampling step of Gaussian pyramid decomposition. First it upsamples this image by injecting even zero rows and columns and then convolves the result with the specified filter multiplied by 4 for interpolation. So the resulting image is four times larger than the source image. The upsampled image Compute the image pyramid The number of levels for the pyramid; Level 0 refers to the current image, level n is computed by calling the PyrDown() function on level n-1 The image pyramid Use inpaint to recover the intensity of the pixels whose locations are defined by the mask on this image The inpainting mask. Non-zero pixels indicate the area that needs to be inpainted The radius of circular neighborhood of each point inpainted that is considered by the algorithm The inpainted image
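Conversion and the Gaussian pyramid steps above (img is a hypothetical Image<Bgr, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, byte> gray = img.Convert<Gray, byte>(); // change color space and/or depth in one call
    Image<Gray, byte> down = gray.PyrDown();            // half width and height
    Image<Gray, byte> up = down.PyrUp();                // back to the original size, blurred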
Perform advanced morphological transformations using erosion and dilation as basic operations. Structuring element Anchor position within the kernel. Negative values mean that the anchor is at the kernel center. Type of morphological operation Number of times erosion and dilation are applied Border type Border value The result of the morphological operation Perform inplace advanced morphological transformations using erosion and dilation as basic operations. Structuring element Anchor position within the kernel. Negative values mean that the anchor is at the kernel center. Type of morphological operation Number of times erosion and dilation are applied Border type Border value Erodes this image using a 3x3 rectangular structuring element. Erosion is applied several (iterations) times The number of erode iterations The eroded image Dilates this image using a 3x3 rectangular structuring element. Dilation is applied several (iterations) times The number of dilate iterations The dilated image Erodes this image inplace using a 3x3 rectangular structuring element. Erosion is applied several (iterations) times The number of erode iterations Dilates this image inplace using a 3x3 rectangular structuring element. Dilation is applied several (iterations) times The number of dilate iterations Perform a generic action based on each element of the image The action to be applied to each element of the image Perform a generic operation based on the elements of the two images The depth of the second image The second image to perform action on An action such that the first parameter is a single channel of a pixel from the first image, the second parameter is the corresponding channel of the corresponding pixel from the second image Compute the element of a new image based on the value as well as the x and y positions of each pixel on the image Compute the element of the new image based on the element of this image Compute the element of the new image based on the elements of the two images Compute the element of the new image based on the elements of the three images Compute the element of the new image based on the elements of the four images Release all unmanaged memory associated with the image Perform an elementwise AND operation on the two images The first image to AND The second image to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise OR operation with another image and return the result The first image to apply bitwise OR operation The second image to apply bitwise OR operation The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Compute the complement image The image to be inverted The complement image Elementwise add two images The first image to be added The second image to be added The sum of the two images
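A morphological opening built from the Erode/Dilate members above (bin is a hypothetical binary Image<Gray, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, byte> opened = bin.Erode(2).Dilate(2); // two iterations of each 3x3 operation removes small speckles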
Elementwise add an image with a value The image to be added The value to be added The image plus the value Elementwise add an image with a value The image to be added The value to be added The image plus the value Elementwise add an image with a color The image to be added The color to be added The image plus the color Elementwise add an image with a color The image to be added The color to be added The image plus the color Elementwise subtract one image from another The image to be subtracted The second image, from which the first image is subtracted The result of elementwise subtracting img2 from img1 Elementwise subtract a color from an image The image to be subtracted from The color to be subtracted The result of elementwise subtracting the color from the image Elementwise subtract an image from a color The image to be subtracted The color to be subtracted from color - image Elementwise subtract an image from a value The image to be subtracted The value to be subtracted from value - image Elementwise subtract a value from an image The image to be subtracted from The value to be subtracted image - value Multiply an image with a scale The image The multiplication scale image * scale Multiply an image with a scale The image The multiplication scale image * scale Perform the convolution of an image with a kernel The image The kernel Result of the convolution Divide an image by a scale The image The division scale image / scale Divide a scale by an image The image The scale scale / image Summation over a pixel param1 x param2 neighborhood with subsequent scaling by 1/(param1 x param2) The width of the window The height of the window The result of blur Summation over a pixel param1 x param2 neighborhood. If scale is true, the result is subsequently scaled by 1/(param1 x param2) The width of the window The height of the window If true, the result is subsequently scaled by 1/(param1 x param2) The result of blur Finding the median of the specified neighborhood The size (width & height) of the window The result of median smooth Applying bilateral 3x3 filtering Color sigma Space sigma The size of the bilateral kernel The result of bilateral smooth Perform Gaussian Smoothing in the current image and return the result The size of the Gaussian kernel The smoothed image Perform Gaussian Smoothing in the current image and return the result The width of the Gaussian kernel The height of the Gaussian kernel The standard deviation of the Gaussian kernel in the horizontal dimension The standard deviation of the Gaussian kernel in the vertical dimension The smoothed image Perform Gaussian Smoothing inplace for the current image The size of the Gaussian kernel Perform Gaussian Smoothing inplace for the current image The width of the Gaussian kernel The height of the Gaussian kernel The standard deviation of the Gaussian kernel in the horizontal dimension The standard deviation of the Gaussian kernel in the vertical dimension Performs a convolution using the specified kernel The convolution kernel The optional value added to the filtered pixels before storing them in dst The pixel extrapolation method. The result of the convolution Calculates integral images for the source image The integral image Calculates integral images for the source image The integral image The integral image for squared pixel values The integral image Calculates one or more integral images for the source image The integral image The integral image for squared pixel values The integral for the image rotated by 45 degrees
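The smoothing variants above, side by side (img is a hypothetical Image<Bgr, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Bgr, byte> box = img.SmoothBlur(5, 5);    // 5x5 box filter, scaled by 1/25
    Image<Bgr, byte> med = img.SmoothMedian(5);     // 5x5 median filter
    Image<Bgr, byte> gauss = img.SmoothGaussian(5); // 5x5 Gaussian kernel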
Transforms grayscale image to binary image. Threshold calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is the mean of the pixel neighborhood, minus param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is the weighted sum (Gaussian) of the pixel neighborhood, minus param1. Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Adaptive method Thresholding type. Must be one of CV_THRESH_BINARY, CV_THRESH_BINARY_INV The size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ... Constant subtracted from mean or weighted mean. It may be negative. The result of the adaptive threshold The base threshold method shared by public threshold functions Threshold the image such that: dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise The threshold value dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise Threshold the image such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise The threshold value The image such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise Threshold the image such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise The threshold value The image such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise Threshold the image such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise The image such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise Threshold the image such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise The threshold value The maximum value of the pixel on the result The image such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise Threshold the image inplace such that: dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise The threshold value Threshold the image inplace such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise The threshold value Threshold the image inplace such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise The threshold value Threshold the image inplace such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise The threshold value The maximum value of the pixel on the result Threshold the image inplace such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise The threshold value The maximum value of the pixel on the result Calculates the average value and standard deviation of array elements, independently for each channel The avg color The standard deviation for each channel The operation mask Calculates the average value and standard deviation of array elements, independently for each channel The avg color The standard deviation for each channel Count the non-zero elements for each channel Count the non-zero elements for each channel Returns the min / max location and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Return a flipped copy of the current image The type of the flipping The flipped copy of this image Inplace flip the image The type of the flipping The flipped copy of this image Concatenate the current image with another image vertically. The other image to concatenate A new image that is the vertical concatenation of this image and the other image Concatenate the current image with another image horizontally. The other image to concatenate A new image that is the horizontal concatenation of this image and the other image
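The binary threshold above in its most common form (gray is a hypothetical Image<Gray, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, byte> bin = gray.ThresholdBinary(new Gray(128), new Gray(255));
    // dst(x,y) = 255 if src(x,y) > 128; 0 otherwise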
Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics including 7 Hu invariants. If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1's spatial and central moments up to the third order Gamma corrects this image inplace. The image must have a depth type of Byte. The gamma value Split current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format. The algorithm inplace normalizes brightness and increases contrast of the image. For color images, an HSV representation of the image is first obtained and the V (value) channel is histogram normalized This function loads the image data from Mat The Mat This function loads the image data from the iplImage pointer The pointer to the iplImage Get the managed image from an unmanaged IplImagePointer The pointer to the iplImage The managed image from the iplImage pointer Get the jpeg representation of the image A byte array that contains the image as jpeg data Get the size of the array Constants used by the image class Offset of roi The stereo matcher interface Pointer to the stereo matcher A Map is similar to an Image, except that the location of the pixels is defined by its area and resolution The color of this map The depth of this map Get the area of this map as a rectangle Get the resolution of this map as a 2D point Create a new Image Map defined by the Rectangle area. The center (0.0, 0.0) of this map is defined by the center of the rectangle. The resolution of x (y), (e.g. a value of 0.5 means each cell in the map is 0.5 unit in x (y) dimension) The initial color of the map Create a new Image Map defined by the Rectangle area. The center (0.0, 0.0) of this map is defined by the center of the rectangle. The initial value of the map is 0.0 The resolution of x (y), (e.g. a value of 0.5 means each cell in the map is 0.5 unit in x (y) dimension) Map a point to a position in the internal image Map a point to a position in the internal image Map an image point to a Map point The point on the image The point on the map Get a copy of the map in the specific area the area of the map to be retrieved The area of the map
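Splitting channels and equalizing one of them, per the members above (img is a hypothetical Image<Bgr, byte>):

    // using Emgu.CV; using Emgu.CV.Structure;
    Image<Gray, byte>[] channels = img.Split(); // one gray image per B, G, R channel
    channels[0]._EqualizeHist();                // in-place histogram equalization of the blue channel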
Get or Set the region of interest for this map. To clear the ROI, set it to System.Drawing.RectangleF.Empty Draw a rectangle in the map The rectangle to draw The color for the rectangle The thickness of the rectangle, any value less than or equal to 0 will result in a filled rectangle Draw a line segment in the map The line to be drawn The color for the line The thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw a Circle of the specific color and thickness The circle to be drawn The color of the circle If thickness is less than 1, the circle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw a convex polygon of the specific color and thickness The convex polygon to be drawn The color of the convex polygon If thickness is less than 1, the polygon is filled up Draw the text using the specific font on the image The text message to be drawn Font type. Font scale factor that is multiplied by the font-specific base size. The location of the bottom left corner of the font The color of the text Thickness of the lines used to draw a text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner. Draw the polyline defined by the array of 2D points the points that define the poly line if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info streaming context The equivalent of cv::Mat Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime deserialization of the object Serialization info Streaming context A function used for runtime serialization of the object Serialization info streaming context Gets or sets the data as byte array. The bytes. Copy data from this Mat to the managed array The type of managed data array The managed array where data will be copied to. Copy data from managed array to this Mat The type of managed data array The managed array where data will be copied from An optional parent object to keep reference to Create an empty cv::Mat Create a mat of the specific type. Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Create a mat of the specific type. Size of the Mat Mat element type Number of channels Create a Mat header from existing data Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. Create a multi-dimension Mat using existing data. The sizes of each dimension The type of data The pointer to the unmanaged data The steps
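Creating and filling a Mat as described above (a minimal sketch):

    // using Emgu.CV; using Emgu.CV.CvEnum; using Emgu.CV.Structure;
    Mat m = new Mat(3, 3, DepthType.Cv64F, 1); // 3x3, double precision, single channel
    m.SetTo(new MCvScalar(0));                 // fill every element with 0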
Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. Create multi-dimension mat using existing data. The sizes of each dimension The type of data The pointer to the unmanaged data The steps Create a Mat header from existing data Size of the Mat Mat element type Number of channels Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. Load the Mat from file The name of the file File loading method Create a mat header for the specific ROI The mat where the new Mat header will share data from The region of interest Create a mat header for the specific ROI The mat where the new Mat header will share data from The region of interest The region of interest Convert this Mat to UMat Access type Usage flags The UMat Allocates new array data if needed. New number of rows. New number of columns. New matrix element depth type. New matrix number of channels The size of this matrix The number of rows The number of columns Pointer to the beginning of the raw data Gets the binary data from the specific indices. The indices. Indices of length greater than 2 are not implemented Step The size of the elements in this matrix Copy the data in this cv::Mat to an output array The output array to copy to Operation mask. Its non-zero elements indicate which matrix elements need to be copied. Converts an array to another data type with optional scaling. Output matrix; if it does not have a proper size or type before the operation, it is reallocated. Desired output matrix type or, rather, the depth since the number of channels is the same as in the input; if rtype is negative, the output matrix will have the same type as the input. Optional scale factor. Optional delta added to the scaled values. Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. A new mat header that has a different shape Release all the unmanaged memory associated with this object. Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Get the width of the mat Get the height of the mat. Get the minimum and maximum value across all channels of the mat The range that contains the minimum and maximum values Convert this Mat to Image The type of Color The type of Depth The image The Get property provides a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! For other types of images this property has the same effect as ToBitmap() Take extra caution not to use the Bitmap after the Mat object is disposed The Set property converts the bitmap to this Image type. Set the mat to the specific value The value to set to Optional mask Set the mat to the specific value The value to set to Optional mask Returns an identity matrix of the specified size and type. Number of rows. Number of columns. Mat element type Number of channels An identity matrix of the specified size and type.
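The SetTo, ConvertTo and Reshape members above compose naturally; a brief hedged C# sketch (sizes and values are arbitrary):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat m = new Mat(4, 4, DepthType.Cv8U, 1))
using (Mat f = new Mat())
{
    m.SetTo(new MCvScalar(42));                   // fill every element with 42
    m.ConvertTo(f, DepthType.Cv32F, 1.0 / 255);   // scale while converting to float
    Mat row = f.Reshape(0, 1);                    // same data viewed as 1 x 16, no copy
}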
Extracts a diagonal from a matrix. The method makes a new header for the specified matrix diagonal. The new matrix is represented as a single-column matrix. Similarly to Mat::row and Mat::col, this is an O(1) operation. Index of the diagonal, with the following values: d=0 is the main diagonal; d < 0 is a diagonal from the lower half. For example, d=-1 means the diagonal is set immediately below the main one; d > 0 is a diagonal from the upper half. For example, d=1 means the diagonal is set immediately above the main one. A diagonal from a matrix Transposes a matrix. The transpose of the matrix. Returns a zero array of the specified size and type. Number of rows. Number of columns. Mat element type Number of channels A zero array of the specified size and type. Returns an array of all 1's of the specified size and type. Number of rows. Number of columns. Mat element type Number of channels An array of all 1's of the specified size and type. Returns the min / max location and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Creates a matrix header for the specified matrix row. A 0-based row index. A matrix header for the specified matrix row. Creates a matrix header for the specified matrix column. A 0-based column index. A matrix header for the specified matrix column. Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format. Make a clone of the current Mat A clone of the current Mat Split current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Compares two Mats and checks if they are equal The other mat to compare with True if the two Mats are equal Computes a dot-product of two vectors. Another dot-product operand The dot-product of two vectors. Computes a cross-product of two 3-element vectors. Another cross-product operand. Cross-product of two 3-element vectors. Get an array of the size of the dimensions. e.g. if the mat is 9x10x11, the array of {9, 10, 11} will be returned. True if the data is continuous True if the matrix is a submatrix of another matrix Depth type True if the Mat is empty Number of channels The method removes one or more rows from the bottom of the matrix Adds elements to the bottom of the matrix The method returns the number of array elements (a number of pixels if the array represents an image) The matrix dimensionality Matrix data allocator. Base class for Mat that handles the matrix data allocation and deallocation Get the managed data used by the Mat Release resource associated with this object
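A short C# sketch of the Eye/Ones factories and the dot and cross products documented above (values are arbitrary; note the cross product requires 3-element vectors):

using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat eye = Mat.Eye(3, 3, DepthType.Cv64F, 1))
using (Mat v = Mat.Ones(3, 1, DepthType.Cv64F, 1))
{
    double d = v.Dot(v);          // 3.0: the sum of elementwise products
    using (Mat c = v.Cross(v))
    {
        // the cross product of parallel vectors is the zero vector
    }
}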
A MatND is a wrapper to cvMatND of OpenCV. The type of depth Create an N-dimensional matrix The size for each dimension Constructor used to deserialize runtime serialized object The serialization info The streaming context This function is not implemented for MatND Not implemented Not implemented Not implemented This function is not implemented for MatND Get the underlying managed array Get the depth representation for openCV Release the matrix and all the memory associated with it A function used for runtime serialization of the object Serialization info Streaming context A function used for runtime deserialization of the object Serialization info Streaming context Not Implemented The XmlReader Not Implemented The XmlWriter The MCvMatND structure Convert this matrix to a different depth The depth type to convert to Matrix of a different depth Check if the two MatND are equal The other MatND to compare to True if the two MatND are equal A Matrix is a wrapper to cvMat of OpenCV. Depth of this matrix (either Byte, SByte, Single, Double, UInt16, Int16 or Int32) The default constructor which allows Data to be set later on Create a Matrix (only the header is allocated) using the pinned/unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The pinned/unmanaged data, the data must not be released before the Matrix is Disposed The step (row stride in bytes) The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a Matrix (only the header is allocated) using the pinned/unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The number of channels The pinned/unmanaged data, the data must not be released before the Matrix is Disposed The step (row stride in bytes) The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a Matrix (only the header is allocated) using the pinned/unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The pinned/unmanaged data, the data must not be released before the Matrix is Disposed The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a matrix of the specific size The number of rows (height) The number of cols (width) Create a matrix of the specific size The size of the matrix Create a matrix of the specific size and channels The number of rows The number of cols The number of channels Create a matrix using the specific data. The data will be used as the Matrix data storage.
You need to make sure that the data object lives as long as this Matrix object Create a matrix using the specified data the data for this matrix Get the underlying managed array Get or Set the data for this matrix Get the number of channels for this matrix The MCvMat structure format Returns the determinant of the square matrix Return the sum of the elements in this matrix Return a matrix of the same size with all elements equal to 0 A matrix of the same size with all elements equal to 0 Make a copy of this matrix A copy of this matrix Get a reshaped matrix which also shares the same data with the current matrix the new number of channels The new number of rows A reshaped matrix which also shares the same data with the current matrix Convert this matrix to a different depth The depth type to convert to the scaling factor to apply during conversion (defaults to 1.0 -- no scaling) the shift factor to apply during conversion (defaults to 0.0 -- no shifting) Matrix of a different depth Returns the transpose of this matrix The transpose of this matrix Get or Set the value at the specific row and column the row of the element the col of the element The element at the specific row and column Allocate data for the array The number of rows The number of columns The number of channels for this matrix Get a submatrix corresponding to a specified rectangle the rectangle area of the sub-matrix A submatrix corresponding to a specified rectangle Get the specific row of the matrix the index of the row to be retrieved the specific row of the matrix Return the matrix corresponding to a specified row span of the input array Zero-based index of the starting row (inclusive) of the span Zero-based index of the ending row (exclusive) of the span Index step in the row span. That is, the function extracts every delta_row-th row from start_row and up to (but not including) end_row A matrix corresponding to a specified row span of the input array Get the specific column of the matrix the index of the column to be retrieved the specific column of the matrix Get the matrix corresponding to a specified column span of the input array Zero-based index of the starting column (inclusive) of the span Zero-based index of the ending column (exclusive) of the span the specific column span of the matrix Return the specific diagonal elements of this matrix Array diagonal. Zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main etc., 1 corresponds to the diagonal below the main etc The specific diagonal elements of this matrix Return the main diagonal element of this matrix The main diagonal element of this matrix Return the matrix without a specified row span of the input array Zero-based index of the starting row (inclusive) of the span Zero-based index of the ending row (exclusive) of the span The matrix without a specified row span of the input array Return the matrix without a specified column span of the input array Zero-based index of the starting column (inclusive) of the span Zero-based index of the ending column (exclusive) of the span The matrix without a specified column span of the input array Concatenate the current matrix with another matrix vertically. If this matrix is n1 x m and the other matrix is n2 x m, the resulting matrix is (n1+n2) x m. The other matrix to concatenate A new matrix that is the vertical concatenation of this matrix and the other matrix
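A brief C# sketch of the Matrix<T> members just described (indexer, transpose and vertical concatenation; the method name ConcateVertical follows this reference, and sizes are arbitrary):

using Emgu.CV;

using (Matrix<double> a = new Matrix<double>(2, 3))
using (Matrix<double> b = new Matrix<double>(2, 3))
{
    a[0, 0] = 1.0;                                  // row 0, column 0
    using (Matrix<double> t = a.Transpose())        // 3 x 2
    using (Matrix<double> v = a.ConcateVertical(b)) // 4 x 3
    {
    }
}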
Concatenate the current matrix with another matrix horizontally. If this matrix is n x m1 and the other matrix is n x m2, the resulting matrix is n x (m1 + m2). The other matrix to concatenate A matrix that is the horizontal concatenation of this matrix and the other matrix Returns the min / max locations and values for the matrix Elementwise add another matrix to the current matrix The matrix to be added to the current matrix The result of elementwise adding mat2 to the current matrix Elementwise add a color to the current matrix The value to be added to the current matrix The result of elementwise adding the value to the current matrix Elementwise subtract another matrix from the current matrix The matrix to be subtracted from the current matrix The result of elementwise subtracting mat2 from the current matrix Elementwise subtract a color from the current matrix The value to be subtracted from the current matrix The result of elementwise subtracting the value from the current matrix result = val - this The value from which this matrix is subtracted val - this Multiply the current matrix with a scale The scale to be multiplied The scaled matrix Multiply the current matrix with another matrix The matrix to be multiplied Result matrix of the multiplication Elementwise add two matrices The Matrix to be added The Matrix to be added The elementwise sum of the two matrices Elementwise add a matrix with a value The Matrix to be added The value to be added The matrix plus the value + The Matrix to be added The value to be added The matrix plus the value - The Matrix to be subtracted The value to be subtracted - - The Matrix to be subtracted The matrix to subtract - - The Matrix to be subtracted The value to be subtracted - * The Matrix to be multiplied The value to be multiplied * * The matrix to be multiplied The value to be multiplied * / The Matrix to be divided The value to be divided / * The Matrix to be multiplied The Matrix to be multiplied * Constructor used to deserialize runtime serialized object The serialization info The streaming context Release the matrix and all the memory associated with it This function compares the current matrix with the other matrix and returns the comparison mask The other matrix to compare with Comparison type The comparison mask Get all channels for the multi channel matrix Each individual channel of this matrix Return true if every element of this matrix equals the corresponding element in the other matrix The other matrix to compare with true if every element of this matrix equals the corresponding element in the other matrix Get the size of the array Create a sparse matrix The type of elements in this matrix Create a sparse matrix of the specific dimension The dimension of the sparse matrix Get or Set the value at the specific row and column the row of the element the col of the element The element at the specific row and column Release the unmanaged memory associated with this sparse matrix Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige. Create a StereoBM object the linear size of the blocks compared by the algorithm. The size should be odd (as the block is centered at the current pixel). Larger block size implies a smoother, though less accurate disparity map. Smaller block size gives a more detailed disparity map, but there is a higher chance for the algorithm to find a wrong correspondence. the disparity search range. For each pixel the algorithm will find the best disparity from 0 (default minimum disparity) to the specified number of disparities. The search range can then be shifted by changing the minimum disparity. Release the stereo state and all the memory associated with it Pointer to the stereo matcher Extension methods for StereoMatcher Computes the disparity map for the specified stereo pair The stereo matcher Left 8-bit single-channel image.
Right image of the same size and the same type as the left one. Output disparity map. It has the same size as the input images. Some algorithms, like StereoBM or StereoSGBM, compute a 16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output a 32-bit floating-point disparity map This is a variation of "Stereo Processing by Semiglobal Matching and Mutual Information" by Heiko Hirschmuller. We match blocks rather than individual pixels, thus the algorithm is called SGBM (Semi-global block matching) The SGBM mode This is the default mode, the algorithm is single-pass, which means that you consider only 5 directions instead of 8 Run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes, which is large for 640x480 stereo and huge for HD-size pictures. Create a stereo disparity solver using the StereoSGBM algorithm (a combination of H. Hirschmuller + K. Konolige approaches) Minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly. Maximum disparity minus minimum disparity. The value is always greater than zero. In the current implementation, this parameter must be divisible by 16. Matched block size. It must be an odd number >= 1. Normally, it should be somewhere in the 3..11 range. Use 0 for default. The first parameter controlling the disparity smoothness. It is the penalty on the disparity change by plus or minus 1 between neighbor pixels. A reasonably good value is 8*number_of_image_channels*SADWindowSize*SADWindowSize. Use 0 for default The second parameter controlling the disparity smoothness. It is the penalty on the disparity change by more than 1 between neighbor pixels. The algorithm requires P2 > P1. A reasonably good value is 32*number_of_image_channels*SADWindowSize*SADWindowSize. Use 0 for default Maximum allowed difference (in integer pixel units) in the left-right disparity check. Set it to a non-positive value to disable the check. Truncation value for the prefiltered image pixels. The algorithm first computes the x-derivative at each pixel and clips its value by the [-preFilterCap, preFilterCap] interval. The result values are passed to the Birchfield-Tomasi pixel cost function. Margin in percentage by which the best (minimum) computed cost function value should "win" the second best value to consider the found match correct. Normally, a value within the 5-15 range is good enough. Maximum size of smooth disparity regions to consider their noise speckles and invalidate. Set it to 0 to disable speckle filtering. Otherwise, set it somewhere in the 50-200 range Maximum disparity variation within each connected component. If you do speckle filtering, set the parameter to a positive value, it will be implicitly multiplied by 16. Normally, 1 or 2 is good enough. Set it to HH to run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes, which is large for 640x480 stereo and huge for HD-size pictures. By default, it is set to false. Release the unmanaged memory associated with this stereo solver Pointer to the StereoMatcher
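As a usage illustration for the block-matching stereo matcher and the Compute extension method described above, a hedged C# sketch (the file names are placeholders; the inputs must be 8-bit single-channel images, and 64/21 are arbitrary but valid choices for the disparity range and block size):

using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> left = new Image<Gray, byte>("left.png"))
using (Image<Gray, byte> right = new Image<Gray, byte>("right.png"))
using (StereoBM matcher = new StereoBM(64, 21))
using (Mat disparity = new Mat())
{
    matcher.Compute(left, right, disparity);
    // StereoBM produces a 16-bit fixed-point disparity map (4 fractional bits)
}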
Planar Subdivision, can be used to compute the Delaunay triangulation or Voronoi diagram. Start the Delaunay triangulation in the specific region of interest. The region of interest of the triangulation Create a planar subdivision from the given points. The ROI is computed as the minimum bounding Rectangle for the input points If true, any exception during insert will be ignored The points to be inserted to this planar subdivision Insert a collection of points to this planar subdivision The points to be inserted to this planar subdivision If true, any exception during insert will be ignored Insert a point to the triangulation. The point to be inserted Locates the input point within the subdivision The point to locate The output edge the point falls onto or to the right of Optional output vertex double pointer the input point coincides with The type of location for the point Finds the subdivision vertex that is the closest to the input point. It is not necessarily one of the vertices of the facet containing the input point, though the facet (located using cvSubdiv2DLocate) is used as a starting point. Input point The nearest subdivision vertex The location type of the point Obtains the list of Voronoi Facets The list of Voronoi Facets Returns the triangles subdivision of the current planar subdivision. The triangles might contain virtual points that do not belong to the inserted points; if you do not want those points, set the flag to false The triangles subdivision in the current planar subdivision Release unmanaged resources A Voronoi Facet Create a Voronoi facet using the specific point and vertices The point this facet associates with The points that define the contour of this facet The point this facet associates with Get or set the vertices of this facet
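A short C# sketch of the planar subdivision workflow described above (the point coordinates are arbitrary):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

PointF[] points = { new PointF(0, 0), new PointF(10, 0), new PointF(5, 8) };
using (Subdiv2D subdivision = new Subdiv2D(points))
{
    subdivision.Insert(new PointF(5, 3));  // add one more point
    Triangle2DF[] triangles = subdivision.GetDelaunayTriangles();
}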
The Image which contains a time stamp which specifies when this image was created Create an empty Image Create a blank Image of the specified width, height, depth and color. The width of the image The height of the image The initial color of the image Create an empty Image of the specified width and height The width of the image The height of the image The time this image is captured The equivalent of cv::Mat, should only be used if you know what you are doing. In most cases you should use the Matrix class instead Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime deserialization of the object Serialization info Streaming context A function used for runtime serialization of the object Serialization info streaming context Allocation usage. Default Buffer allocation policy is platform and usage specific Buffer allocation policy is platform and usage specific Buffer allocation policy is platform and usage specific It is not equal to: AllocateHostMemory | AllocateDeviceMemory Get or Set the raw image data Create an empty cv::UMat Create a umat of the specific type. Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Allocation Usage Create a umat of the specific type. Size of the UMat Mat element type Number of channels Allocation Usage Get the UMat header for the specific ROI of the parent The parent UMat The region of interest Create a umat header for the specific ROI The umat where the new UMat header will share data from The region of interest The region of interest Allocates new array data if needed. New number of rows. New number of columns. New matrix element depth type. New matrix number of channels Allocation Usage Read a UMat from file. The name of the file The read mode The size of this matrix The number of rows The number of columns The size of the elements in this matrix Copy the data in this umat to the other mat Operation mask. Its non-zero elements indicate which matrix elements need to be copied. The input array to copy to Sets all or some of the array elements to the specified value. Assigned scalar converted to the actual array type. Operation mask of the same size as the umat. Sets all or some of the array elements to the specified value. Assigned scalar value. Operation mask of the same size as the umat. Return the Mat representation of the UMat Release all the unmanaged memory associated with this object. Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. A new mat header that has a different shape Convert this Mat to Image The type of Color The type of Depth The image The Get property provides a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! For other types of images this property has the same effect as ToBitmap() Take extra caution not to use the Bitmap after the Image object is disposed The Set property converts the bitmap to this Image type. Returns the min / max location and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Converts an array to another data type with optional scaling. Output matrix; if it does not have a proper size or type before the operation, it is reallocated. Desired output matrix type or, rather, the depth since the number of channels is the same as in the input; if rtype is negative, the output matrix will have the same type as the input. Optional scale factor. Optional delta added to the scaled values. Split current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format. Make a clone of the current UMat. A clone of the current UMat. Indicates whether the current object is equal to another object of the same type. An object to compare with this object. true if the current object is equal to the parameter; otherwise, false. Copy data from this Mat to the managed array The type of managed data array The managed array where data will be copied to. Copy data from managed array to this Mat The type of managed data array The managed array where data will be copied from Computes the dot product of two mats The matrix to compute the dot product with The dot product Creates a matrix header for the specified matrix row. A 0-based row index. A matrix header for the specified matrix row.
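Because UMat mirrors the Mat surface described above, existing CvInvoke calls accept it directly; a minimal sketch (assuming the OpenCL transparent API, which falls back to the CPU when no device is available):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (UMat src = new UMat(480, 640, DepthType.Cv8U, 1))
using (UMat blurred = new UMat())
{
    src.SetTo(new MCvScalar(128));
    // may execute on the GPU when OpenCL is available, on the CPU otherwise
    CvInvoke.GaussianBlur(src, blurred, new Size(5, 5), 1.5);
}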
Creates a matrix header for the specified matrix column. A 0-based column index. A matrix header for the specified matrix column. True if the data is continuous True if the matrix is a submatrix of another matrix Depth type True if the matrix is empty Number of channels The method returns the number of array elements (a number of pixels if the array represents an image) The matrix dimensionality Create a video writer that writes images to a video format Create a video writer using the specific information. On Windows, it will open a codec selection dialog. On Linux, it will use the default codec for the specified filename The name of the video file to be written to frame rate per second the size of the frame true if this is a color video, false otherwise Create a video writer using the specific information The name of the video file to be written to Compression code. Usually computed using CvInvoke.CV_FOURCC. On Windows use -1 to open a codec selection dialog. On Linux, use CvInvoke.CV_FOURCC('I', 'Y', 'U', 'V') for the default codec for the specific file name. frame rate per second the size of the frame true if this is a color video, false otherwise Write a single frame to the video writer The frame to be written to the video writer Generate the 4-character code of the codec used to compress the frames. For example, CV_FOURCC('P','I','M','1') is the MPEG-1 codec, CV_FOURCC('M','J','P','G') is the motion-jpeg codec etc. C1 C2 C3 C4 The integer value calculated from the FourCC code Release the video writer and all the memory associated with it Returns true if the video writer has been successfully initialized. Sets a property in the VideoWriter. Property identifier Value of the property. The value of the specific property Returns the specified VideoWriter property. Property identifier. The VideoWriter property Current quality (0..100%) of the encoded video stream. Can be adjusted dynamically in some codecs. (Read-only): Size of the just-encoded video frame. Note that the encoding order may be different from the representation order. Number of stripes for parallel encoding. -1 for auto detection.
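A hedged C# sketch of the video writer described above (codec, frame rate and frame size are arbitrary choices; Fourcc is the 4-character-code helper documented here):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (VideoWriter writer = new VideoWriter("output.avi",
    VideoWriter.Fourcc('M', 'J', 'P', 'G'), 30, new Size(640, 480), true))
using (Mat frame = new Mat(480, 640, DepthType.Cv8U, 3))
{
    writer.Write(frame);  // append one (blank) frame
}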
Camera calibration functions Estimates intrinsic camera parameters and extrinsic parameters for each of the views The 3D location of the object points. The first index is the index of the image, second index is the index of the point The 2D image location of the points. The first index is the index of the image, second index is the index of the point The size of the image, used only to initialize the intrinsic camera matrix The intrinsic parameters, which might contain some initial values. The values will be modified by this function. Calibration type The termination criteria The output array of extrinsic parameters. The final reprojection error Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2), respectively (that can be done with cvFindExtrinsicCameraParams2), obviously, those poses will relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T The 3D location of the object points. The first index is the index of the image, second index is the index of the point The 2D image location of the points for camera 1. The first index is the index of the image, second index is the index of the point The 2D image location of the points for camera 2. The first index is the index of the image, second index is the index of the point The intrinsic parameters for camera 1, which might contain some initial values. The values will be modified by this function. The intrinsic parameters for camera 2, which might contain some initial values. The values will be modified by this function. Size of the image, used only to initialize the intrinsic camera matrix Different flags The extrinsic parameters which contain: R - The rotation matrix between the 1st and the 2nd cameras' coordinate systems; T - The translation vector between the cameras' coordinate systems. The essential matrix Termination criteria for the iterative optimization algorithm The fundamental matrix Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error. The array of object points The array of corresponding image points The intrinsic parameters Method for solving a PnP problem The extrinsic parameters Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note, that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points) The array of object points. Extrinsic parameters Intrinsic parameters Optional matrix supplied in the following order: dpdrot, dpdt, dpdf, dpdc, dpddist The array of image points which is the projection of the object points Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T = map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used The 2x3 rotation matrix that defines the Affine transform Estimate rigid transformation between 2 point sets. The points from the source image The corresponding points from the destination image Indicates if full affine should be performed On success, the 2x3 rotation matrix that defines the Affine transform. Otherwise null is returned.
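The affine-transform computation documented above takes three point correspondences; a minimal C# sketch (the coordinates are arbitrary):

using System.Drawing;
using Emgu.CV;

PointF[] src = { new PointF(0, 0), new PointF(1, 0), new PointF(0, 1) };
PointF[] dst = { new PointF(10, 10), new PointF(12, 10), new PointF(10, 13) };
using (Mat affine = CvInvoke.GetAffineTransform(src, dst))
{
    // affine is the 2x3 matrix mapping the source triangle onto the destination
}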
Extrinsic camera parameters Get or Set the Rodrigues rotation vector Get or Set the translation vector (as a 3 x 1 matrix) Get the 3 x 4 extrinsic matrix: [[r11 r12 r13 t1] [r21 r22 r23 t2] [r31 r32 r33 t3]] Create the extrinsic camera parameters Create the extrinsic camera parameters using the specific rotation and translation matrix The rotation vector The translation vector Return true if the two extrinsic camera parameters are equal The other extrinsic camera parameters to compare with True if the two extrinsic camera parameters are equal Intrinsic camera parameters Get or Set the DistortionCoeffs (as a 5x1 (default), 4x1 or 8x1 matrix). The ordering of the distortion coefficients is the following: (k1, k2, p1, p2[, k3 [,k4, k5, k6]]). That is, the first 2 radial distortion coefficients are followed by 2 tangential distortion coefficients and then, optionally, by the remaining radial distortion coefficients. Such ordering is used to keep backward compatibility with previous versions of OpenCV Get or Set the intrinsic matrix (3x3) Create the intrinsic camera parameters Create the intrinsic camera parameters The number of distortion coefficients. Should be either 4, 5 or 8. Pre-computes the undistortion map - coordinates of the corresponding pixel in the distorted image for every pixel in the corrected image. Then, the map (together with input and output images) can be passed to the cvRemap function. The width of the image The height of the image The output array of x-coordinates of the map The output array of y-coordinates of the map Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, image frame resolution in pixels and the physical aperture size Image width in pixels Image height in pixels Aperture width in real-world units (optional input parameter). Set it to 0 if not used Aperture height in real-world units (optional input parameter). Set it to 0 if not used Field of view angle in x direction in degrees Field of view angle in y direction in degrees Focal length in real-world units The principal point in real-world units The pixel aspect ratio ~ fy/fx Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation. The observed point coordinates Optional rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If null, the identity matrix is used. Optional new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If null, the identity matrix is used. Transforms the image to compensate radial and tangential lens distortion. The camera matrix and distortion parameters can be determined using cvCalibrateCamera2. For every pixel in the output image the function computes the coordinates of the corresponding location in the input image using the formulae in the section beginning. Then, the pixel value is computed using bilinear interpolation.
If the resolution of the images is different from what was used at the calibration stage, fx, fy, cx and cy need to be adjusted appropriately, while the distortion coefficients remain the same The color type of the image The depth of the image The distorted image The corrected image Return true if the two intrinsic camera parameters are equal The other intrinsic camera parameters to compare with True if the two intrinsic camera parameters are equal A unit quaternion that defines a rotation in 3D Create a quaternion with the specific values The W component of the quaternion: the value for cos(rotation angle / 2) The X component of the vector: rotation axis * sin(rotation angle / 2) The Y component of the vector: rotation axis * sin(rotation angle / 2) The Z component of the vector: rotation axis * sin(rotation angle / 2) The W component of the quaternion: the value for cos(rotation angle / 2) The X component of the vector: rotation axis * sin(rotation angle / 2) The Y component of the vector: rotation axis * sin(rotation angle / 2) The Z component of the vector: rotation axis * sin(rotation angle / 2) Set the value of the quaternion using Euler angles Rotation around the x-axis (roll) in radians Rotation around the y-axis (pitch) in radians Rotation around the z-axis (yaw) in radians Get the equivalent Euler angles Rotation around the x-axis (roll) in radians Rotation around the y-axis (pitch) in radians Rotation around the z-axis (yaw) in radians Get or set the equivalent axis-angle representation. (x,y,z) is the rotation axis and |(x,y,z)| is the rotation angle in radians Fill the (3x3) rotation matrix with values such that it represents this quaternion The (3x3) rotation matrix whose values will be set to represent this quaternion Rotate the points and save the result. In-place operation is supported. The points to be rotated The result of the rotation, should be the same size as the input, which can be reused for in-place rotation Rotate the specific point and return the result The point to be rotated The rotated point Get the rotation axis of the quaternion Get the rotation angle in radians Multiply the current quaternion with the other one The other rotation A composition of the two rotations Perform quaternion linear interpolation The other quaternion to interpolate with If 0.0, the result is the same as this quaternion. If 1.0 the result is the same as the other quaternion The linearly interpolated quaternion Computes the multiplication of two quaternions The quaternion to be multiplied The quaternion to be multiplied The multiplication of two quaternions Get the quaternion that represents a rotation of 0 degrees. Compute the conjugate of the quaternion Check if this quaternion equals the other one The quaternion to be compared True if the two quaternions are equal, false otherwise Get the string representation of the Quaternions The string representation A (2x3) 2D rotation matrix. This Matrix defines an Affine Transform Create an empty (2x3) 2D rotation matrix Create a (2x3) 2D rotation matrix Center of the rotation in the source image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the top-left corner). Isotropic scale factor. Set the values of the rotation matrix Center of the rotation in the source image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the top-left corner). Isotropic scale factor.
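A short C# sketch applying the (2x3) rotation matrix described above with a standard affine warp (a sketch only; image size and angle are arbitrary, and WarpAffine is the usual OpenCV warp exposed through CvInvoke):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> src = new Image<Bgr, byte>(640, 480))
using (RotationMatrix2D rot = new RotationMatrix2D(new PointF(320, 240), 30, 1.0))
using (Image<Bgr, byte> dst = src.CopyBlank())
{
    // rotate 30 degrees counter-clockwise about the image center
    CvInvoke.WarpAffine(src, dst, rot, src.Size);
}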
Rotate the points, the values of the input will be changed. The points to be rotated, their values will be modified Rotate the points, the values of the input will be changed. The points to be rotated, their values will be modified Rotate the line segments, the values of the input will be changed. The line segments to be rotated Rotate the single channel Nx2 matrix where N is the number of 2D points. The value of the matrix is changed after rotation. The depth of the points, must be double or float The N 2D-points to be rotated Return a clone of the Matrix A clone of the Matrix Create a rotation matrix for rotating an image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the image centre). The rotation center The source image size The minimum size of the destination image The rotation matrix that rotates the source image to the destination image. A (3x1) Rodrigues rotation vector. The rotation vector is a compact representation of a rotation matrix. The direction of the rotation vector is the rotation axis and the length of the vector is the rotation angle around the axis. Constructor used to deserialize the 3D rotation vector The serialization info The streaming context Create a 3D rotation vector (3x1 Matrix). Create a rotation vector using the specific values The values of the (3 x 1) Rodrigues rotation vector Get or Set the (3x3) rotation matrix represented by this rotation vector. The interface that is used for WCF to provide an image capture service Capture a Bgr image frame A Bgr image frame Capture a Bgr image frame that is half width and half height A Bgr image frame that is half width and half height The interface to request a duplex image capture Request a frame from the server Request a frame from the server which is half width and half height The interface for DuplexCaptureCallback Function to call when an image is received The image received Capture images from either a camera or a video file. the type of flipping The type of capture source Capture from camera Capture from file using HighGUI Get the type of the capture module Get and set the flip type Get or Set if the captured image should be flipped horizontally Get or Set if the captured image should be flipped vertically The width of this capture The height of this capture Create a capture using the specific camera The capture type Create a capture using the default camera Create a capture using the specific camera The index of the camera to create the capture from, starting from 0 Create a capture from a file or a video stream The name of a file, or a URL pointing to a stream. Release the resources for this capture Obtain the capture property The index for the property The value of the specific property Sets the specified property of video capturing Property identifier Value of the property True if success Grab a frame True on success The event to be called when an image is grabbed An exception handler. If provided, it will be used to handle exceptions in the capture thread. Pause the grab process if it is running. Stop the grabbing thread Retrieve a Gray image frame after Grab() The output image The channel to retrieve the image from True if the frame can be retrieved Similar to the C++ implementation of cv::Capture >> Mat The matrix the image will be read into. Capture a Bgr image frame A Bgr image frame. If no more frames are available, null will be returned.
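A hedged C# sketch of the capture workflow documented above (camera index 0 is assumed; per this reference QueryFrame returns null when no more frames are available):

using Emgu.CV;

using (Capture capture = new Capture(0))  // the default camera
{
    Mat frame = capture.QueryFrame();
    if (frame != null)
        frame.Save("frame.png");  // process or store the grabbed frame
}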
Capture a Bgr image frame that is half width and half height. Mainly used by WCF when sending images to remote locations in a bandwidth-conservative scenario Internally, this is a cvQueryFrame operation followed by a cvPyrDown A Bgr image frame that is half width and half height Query a frame duplexly over WCF Query a small frame duplexly over WCF True if the camera is opened Wrapped AGAST detector Agast feature type AGAST_5_8 AGAST_7_12d AGAST_7_12s OAST_9_16 Create AGAST using the specific values Release the unmanaged resources associated with this object Wrapped AKAZE detector Type of the extracted descriptor The kaze upright The kaze Modified-Local Difference Binary (M-LDB), upright Modified-Local Difference Binary (M-LDB) Create AKAZE using the specific values Type of the extracted descriptor Size of the descriptor in bits. 0 -> Full size Number of channels in the descriptor (1, 2, 3) Detector response threshold to accept a point Default number of sublevels per scale level Maximum octave evolution of the image Diffusivity type Release the unmanaged resources associated with this object The match distance type Manhattan distance (city block distance) Squared Euclidean distance Euclidean distance Hamming distance functor - counts the bit differences between two strings - useful for the Brief descriptor; the bit count of A exclusive-OR'ed with B. Hamming distance functor - counts the bit differences between two strings - useful for the Brief descriptor; the bit count of A exclusive-OR'ed with B. Wrapped BFMatcher Create a BFMatcher of the specific distance type The distance type Specify whether or not cross check is needed. Use false for default. Release the unmanaged resource associated with the BFMatcher Class to compute an image descriptor using the bag of visual words. Such a computation consists of the following steps: 1. Compute descriptors for a given image and its key points set. 2. Find the nearest visual words from the vocabulary for each key point descriptor. 3. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the vocabulary in the given image. Descriptor extractor that is used to compute descriptors for an input image and its key points. Descriptor matcher that is used to find the nearest word of the trained vocabulary for each key point descriptor of the image. Sets a visual vocabulary. The vocabulary Computes an image descriptor using the set visual vocabulary. Image, for which the descriptor is computed Key points detected in the input image. The output image descriptors. Release all the unmanaged memory associated with this object Kmeans-based class to train visual vocabulary using the bag of visual words approach. Create a new BOWKmeans trainer Number of clusters to split the set by. Specifies the maximum number of iterations and/or accuracy (distance the centers move by between subsequent iterations). Use an empty termcrit for default. The number of attempts. Use 3 for default Kmeans initialization flag. Use PPCenters for default. Get the number of descriptors Add the descriptors to the trainer The descriptors to be added to the trainer Cluster the descriptors and return the cluster centers The cluster centers Release all the unmanaged memory associated with this object
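The brute-force matcher above is typically paired with the k-nearest match described next; a hedged C# sketch (the file names are placeholders, and using ORBDetector as the binary-descriptor source is an assumption here, since it is documented later in this reference):

using Emgu.CV;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.CV.Util;

using (Image<Gray, byte> model = new Image<Gray, byte>("model.png"))
using (Image<Gray, byte> scene = new Image<Gray, byte>("scene.png"))
using (ORBDetector orb = new ORBDetector())
using (VectorOfKeyPoint modelKp = new VectorOfKeyPoint())
using (VectorOfKeyPoint sceneKp = new VectorOfKeyPoint())
using (Mat modelDesc = new Mat())
using (Mat sceneDesc = new Mat())
using (BFMatcher matcher = new BFMatcher(DistanceType.Hamming))
using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
{
    orb.DetectAndCompute(model, null, modelKp, modelDesc, false);
    orb.DetectAndCompute(scene, null, sceneKp, sceneDesc, false);
    matcher.Add(modelDesc);
    matcher.KnnMatch(sceneDesc, matches, 2, null);  // 2 nearest neighbours per query
}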
BRISK: Binary Robust Invariant Scalable Keypoints Create a BRISK keypoint detector and descriptor extractor. Feature parameters. The number of octave layers. Pattern scale Release the unmanaged resources associated with this object Descriptor matcher The pointer to the Descriptor matcher Find the k-nearest match An n x m matrix of descriptors to be queried for nearest neighbours. n is the number of descriptors and m is the size of the descriptor Number of nearest neighbors to search for Can be null if not needed. An n x 1 matrix. If 0, the query descriptor in the corresponding row will be ignored. Matches. Each matches[i] is k or fewer matches for the same query descriptor. Add the model descriptors The model descriptors Reset the native pointer upon object disposal FAST (Features from Accelerated Segment Test) keypoint detector. Detects corners using the FAST algorithm by E. Rosten ("Machine learning for high-speed corner detection", 2006). One of the three neighborhoods as defined in the paper The type5_8 The type7_12 The type9_16 Create a FAST detector with the specific parameters Threshold on the difference between the intensity of the center pixel and the pixels on a circle around this pixel. Specify if non-maximum suppression should be used. One of the three neighborhoods as defined in the paper Release the unmanaged memory associated with this detector. The feature 2D base class The pointer to the Feature2D object The pointer to the Algorithm object. Get the pointer to the Feature2D object The pointer to the Feature2D object Detect keypoints in an image and compute the descriptors on the image from the keypoint locations. The image The optional mask, can be null if not needed The detected keypoints will be stored in this vector The descriptors from the keypoints If true, the method will skip the detection phase and will compute descriptors for the provided keypoints Reset the pointers Detect the features in the image The result vector of keypoints The image from which the features will be detected The optional mask. Detect the keypoints from the image The image to extract keypoints from The optional mask. An array of key points Compute the descriptors on the image from the given keypoint locations. The image to compute descriptors from The keypoints where the descriptor computation is performed The descriptors from the given keypoints Get the number of elements in the descriptor. The number of elements in the descriptor Library to invoke Features2D functions Tools for features 2D Draw the keypoints found on the image. The image The keypoints to be drawn The color used to draw the keypoints The drawing type The image with the keypoints drawn Draw the matched keypoints between the model image and the observed image. The model image The keypoints in the model image The observed image The keypoints in the observed image The color for the match correspondence lines The color for highlighting the keypoints The mask for the matches. Use null for all matches. The drawing type The image where the model and observed images are displayed side by side. Matches are drawn as indicated by the flag Matches. Each matches[i] is k or fewer matches for the same query descriptor. Define the Keypoint draw type Two source images, matches and single keypoints will be drawn. For each keypoint only the center point will be drawn (without the circle around the keypoint with keypoint size and orientation). Single keypoints will not be drawn. For each keypoint the circle around the keypoint with keypoint size and orientation will be drawn. Eliminate the matched features whose scale and rotation do not agree with the majority's scale and rotation.
The number of bins for rotation, a good value might be 20 (which means each bin covers 18 degrees) This determines the difference in scale for neighborhood bins, a good value might be 1.5 (which means matched features in bin i+1 are scaled 1.5 times larger than matched features in bin i) The keypoints from the model image The keypoints from the observed image This is both input and output. This matrix indicates which row is valid for the matches. Matches. Each matches[i] is k or fewer matches for the same query descriptor. The number of non-zero elements in the resulting mask Recover the homography matrix using RANSAC. If the matrix cannot be recovered, null is returned. The model keypoints The observed keypoints The maximum allowed reprojection error to treat a point pair as an inlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range 1 to 10. The mask matrix whose value might be modified by the function. As input, if the value is 0, the corresponding match will be ignored when computing the homography matrix. If the value is 1 and RANSAC determines the match is an outlier, the value will be set to 0. The homography matrix; if it cannot be found, null is returned Matches. Each matches[i] is k or fewer matches for the same query descriptor. Filter the matched features, such that if a match is not unique, it is rejected. The distance ratio at which a match is considered unique; a good number will be 0.8 This is both input and output. This matrix indicates which row is valid for the matches. Matches. Each matches[i] is k or fewer matches for the same query descriptor. This matcher trains flann::Index_ on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. Create a Flann based matcher. The type of index parameters The search parameters Release the unmanaged memory associated with this Flann based matcher. Wrapping class for feature detection using the goodFeaturesToTrack() function. Create a Good Feature to Track detector The function first calculates the minimal eigenvalue for every source image pixel using the cvCornerMinEigenVal function and stores them in eig_image. Then it performs non-maxima suppression (only local maxima in a 3x3 neighborhood remain). The next step is rejecting the corners with the minimal eigenvalue less than quality_level × max(eig_image(x,y)). Finally, the function ensures that all the corners found are distanced enough from one another by considering the corners (the strongest corners are considered first) and checking that the distance between the newly considered feature and the features considered earlier is larger than min_distance. So, the function removes the features that are too close to the stronger features The maximum number of features to be detected. Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners. Limit, specifying the minimum possible distance between returned corners; Euclidean distance is used. Size of the averaging block, passed to the underlying cvCornerMinEigenVal or cvCornerHarris used by the function. If true, will use the Harris corner detector. K Release the unmanaged memory associated with this detector.
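The filtering helpers above are usually chained after a KnnMatch; a hedged C# sketch (matches, modelKp and sceneKp are assumed to come from an earlier matching step such as the BFMatcher sketch; the thresholds follow the suggested values in this reference):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;

// matches, modelKp and sceneKp come from a previous KnnMatch step
using (Mat mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1))
{
    mask.SetTo(new MCvScalar(255));  // initially mark every match as valid
    Features2DToolbox.VoteForUniqueness(matches, 0.8, mask);
    int nonZero = Features2DToolbox.VoteForSizeAndOrientation(
        modelKp, sceneKp, matches, mask, 1.5, 20);
    Mat homography = (nonZero >= 4)
        ? Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(
              modelKp, sceneKp, matches, mask, 2)
        : null;  // null when the homography cannot be recovered
}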
Wrapped KAZE detector The diffusivity PM G1 PM G2 Weickert Charbonnier Create KAZE using the specific values Release the unmanaged resources associated with this object MSER detector Create an MSER detector using the specific parameters In the code, it compares (size_{i}-size_{i-delta})/size_{i-delta} Prune the area which is bigger than max_area Prune the area which is smaller than min_area Prune the area that has a similar size to its children Trace back to cut off MSER with diversity < min_diversity For color images, the evolution steps The area threshold to cause re-initialization Ignore too small margin The aperture size for edge blur Release the unmanaged memory associated with this detector. Detect MSER regions input image (8UC1, 8UC3 or 8UC4, must be greater than or equal to 3x3) resulting list of point sets resulting bounding boxes Wrapped ORB detector The score type Harris Fast Create an ORBDetector using the specific values The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. The level at which the image is given. If 1, that means we will also look at the image scaled up by the pyramid scale factor. How far from the boundary the points should be. How many random points are used to produce each cell of the descriptor (2, 3, 4 ...). Type of the score to use. Patch size. FAST threshold Release the unmanaged resources associated with this object Simple Blob detector Create a simple blob detector Release the unmanaged memory associated with this detector. Parameters for the simple blob detector Create parameters for the simple blob detector and use default values. Release all the unmanaged memory associated with this simple blob detector parameter. Threshold step Min threshold Max threshold Min dist between blobs Filter by color Blob color Filter by area Min area Max area Filter by circularity Min circularity Max circularity Filter by inertia Min inertia ratio Max inertia ratio Filter by convexity Min Convexity Max Convexity Min Repeatability The Kmeans center initialization types Random The index parameters interface Gets the pointer to the index parameter. The index parameter pointer. Flann index Create a flann index A row by row matrix of descriptors The index parameter Perform k-nearest-neighbours (KNN) search A row by row matrix of descriptors to be queried for nearest neighbours The result of the indices of the k-nearest neighbours The square of the Euclidean distance between the neighbours Number of nearest neighbors to search for The number of times the tree(s) in the index should be recursively traversed. A higher value for this parameter would give better search precision, but also take more time.
If automatic configuration was used when the index was created, the number of checks required to achieve the specified precision was also computed, in which case this parameter is ignored The search epsilon If set to true, the search result is sorted The number of points in the search radius Release the unmanaged memory associated with this Flann Index Create index for 3D points Create a flann index for 3D points The IPosition3D array The index parameters Find the approximate nearest position in 3D The position to start the search from The square distance of the nearest neighbour The index with the nearest 3D position Release the resource used by this object When passing an object of this type, the index will perform a linear, brute-force search. Initializes a new instance of the class. Release all the memory associated with this IndexParam When passing an object of this type the index constructed will consist of a set of randomized kd-trees which will be searched in parallel. Initializes a new instance of the class. The number of parallel kd-trees to use. Good values are in the range [1..16] Release all the memory associated with this IndexParam When using a parameters object of this type the index created uses multi-probe LSH (by Multi-Probe LSH: Efficient Indexing for High-Dimensional Similarity Search by Qin Lv, William Josephson, Zhe Wang, Moses Charikar, Kai Li., Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB). Vienna, Austria. September 2007) Initializes a new instance of the class. The number of hash tables to use (between 10 and 30 usually). The size of the hash key in bits (between 10 and 20 usually). The number of bits to shift to check for neighboring buckets (0 is regular LSH, 2 is recommended). Release all the memory associated with this IndexParam When passing an object of this type the index constructed will be a hierarchical k-means tree. Initializes a new instance of the class. The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are CENTERS_RANDOM (picks the initial cluster centers randomly), CENTERS_GONZALES (picks the initial centers using Gonzales’ algorithm) and CENTERS_KMEANSPP (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007) This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree. When cb_index is zero the next kmeans domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. Release all the memory associated with this IndexParam When using a parameters object of this type the index created combines the randomized kd-trees and the hierarchical k-means tree. Initializes a new instance of the class. The number of parallel kd-trees to use. Good values are in the range [1..16] The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step.
The possible values are CENTERS_RANDOM (picks the initial cluster centers randomly), CENTERS_GONZALES (picks the initial centers using Gonzales’ algorithm) and CENTERS_KMEANSPP (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007) This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree. When cb_index is zero the next kmeans domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. Release all the memory associated with this IndexParam When passing an object of this type the index created is automatically tuned to offer the best performance, by choosing the optimal index type (randomized kd-trees, hierarchical kmeans, linear) and parameters for the dataset provided. Initializes a new instance of the class. Is a number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest-neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application. Specifies the importance of the index build time compared to the nearest-neighbor search time. In some applications it’s acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it’s required that the index be built as fast as possible even if that leads to slightly longer search times. Is used to specify the trade-off between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage. Is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such cases, using just a fraction of the data helps speed up this algorithm while still giving good approximations of the optimum parameters. Release all the memory associated with this IndexParam Hierarchical Clustering Index Parameters Initializes a new instance of the class. Release all the memory associated with this IndexParam Search parameters Initializes a new instance of the class.
how many leaves to visit when searching for neighbors (-1 for unlimited) Search for eps-approximate neighbors Only for radius search, require neighbors sorted by distance Release all the memory associated with this IndexParam A geodetic coordinate that is defined by its latitude, longitude and altitude Indicates the origin of the Geodetic Coordinate Create a geodetic coordinate using the specific values Latitude in radian Longitude in radian Altitude in meters Latitude (phi) in radian Longitude (lambda) in radian Altitude (height) in meters Compute the sum of two GeodeticCoordinates The first coordinate to be added The second coordinate to be added The sum of the two GeodeticCoordinates Compute the difference of two GeodeticCoordinates The first coordinate The coordinate to be subtracted The difference of the two GeodeticCoordinates Compute the product of a GeodeticCoordinate and a scale The coordinate The scale to be multiplied The scaled coordinate Check if this Geodetic coordinate equals the other coordinate The other coordinate to be compared with True if the two coordinates are equal Convert radian to degree radian degree Convert degree to radian degree radian Type for cvNorm if arr2 is NULL, norm = ||arr1||_C = max_I abs(arr1(I)); if arr2 is not NULL, norm = ||arr1-arr2||_C = max_I abs(arr1(I)-arr2(I)) if arr2 is NULL, norm = ||arr1||_L1 = sum_I abs(arr1(I)); if arr2 is not NULL, norm = ||arr1-arr2||_L1 = sum_I abs(arr1(I)-arr2(I)) if arr2 is NULL, norm = ||arr1||_L2 = sqrt( sum_I arr1(I)^2); if arr2 is not NULL, norm = ||arr1-arr2||_L2 = sqrt( sum_I (arr1(I)-arr2(I))^2 ) It is used in combination with either CV_C, CV_L1 or CV_L2 It is used in combination with either CV_C, CV_L1 or CV_L2 norm = ||arr1-arr2||_C/||arr2||_C norm = ||arr1-arr2||_L1/||arr2||_L1 norm = ||arr1-arr2||_L2/||arr2||_L2 Type used for cvReduce function The output is the sum of all the matrix rows/columns The output is the mean vector of all the matrix rows/columns The output is the maximum (column/row-wise) of all the matrix rows/columns The output is the minimum (column/row-wise) of all the matrix rows/columns Type used for cvReduce function The matrix is reduced to a single row The matrix is reduced to a single column The dimension is chosen automatically by analysing the dst size Type used for cvCmp function src1(I) "equal to" src2(I) src1(I) "greater than" src2(I) src1(I) "greater or equal" src2(I) src1(I) "less than" src2(I) src1(I) "less or equal" src2(I) src1(I) "not equal to" src2(I) CV Capture property identifier Turn the feature off (not controlled manually nor automatically) Set automatically when a value of the feature is set by the user DC1394 mode auto DC1394 mode one push auto Film current position in milliseconds or video capture timestamp 0-based index of the frame to be decoded/captured next Position in relative units (0 - start of the file, 1 - end of the file) Width of frames in the video stream Height of frames in the video stream Frame rate 4-character code of codec Number of frames in video file Format Mode Brightness Contrast Saturation Hue Gain Exposure Convert RGB White balance blue u Rectification Monochrome Sharpness Exposure control done by camera, user can adjust reference level using this feature Gamma Temperature Trigger Trigger delay White balance red v Zoom Focus GUID ISO SPEED MAX DC1394 Backlight Pan Tilt Roll Iris Settings Buffer size Auto focus Sar num Sar den property for highgui class CvCapture_Android only readonly, tricky property, returns const char* indeed readonly, tricky property, returns const char* indeed OpenNI depth generator OpenNI image generator OpenNI IR generator OpenNI map generators Properties of cameras available through OpenNI
interfaces Properties of cameras available through OpenNI interfaces, in mm. Properties of cameras available through OpenNI interfaces, in mm. Properties of cameras available through OpenNI interfaces, in pixels. Flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). Flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). Approx frame sync Max buffer size Circle buffer Max time duration Generator present OpenNI2 Sync OpenNI2 Mirror Openni image generator present Image generator output mode Depth generator present Depth generator baseline, in mm. Depth generator focal length, in pixels. Openni generator registration Openni generator registration on Openni IR generator present Properties of cameras available through GStreamer interface. Default is 1 IP for enabling multicast master mode. 0 to disable multicast. FrameStartTriggerMode: Determines how a frame is initiated Horizontal sub-sampling of the image Vertical sub-sampling of the image Horizontal binning factor Vertical binning factor Pixel format Change image resolution by binning or skipping. Output data format Horizontal offset from the origin to the area of interest (in pixels). Vertical offset from the origin to the area of interest (in pixels). Defines source of trigger. Generates an internal trigger. PRM_TRG_SOURCE must be set to TRG_SOFTWARE. Selects general purpose input Set general purpose input mode Get general purpose level Selects general purpose output Set general purpose output mode Selects camera signaling LED Define camera signaling LED functionality Calculates White Balance (must be called during acquisition) Automatic white balance Automatic exposure/gain Exposure priority (0.5 - exposure 50%, gain 50%). Maximum limit of exposure in AEAG procedure Maximum limit of gain in AEAG procedure Average intensity of output signal AEAG should achieve (in %) Image capture timeout in milliseconds Exposure time in microseconds Sets the number of times of exposure in one frame. Gain selector for the Gain parameter; allows selecting different types of gains. Gain in dB Change image downsampling type. Binning engine selector. Vertical Binning - number of vertical photo-sensitive cells to combine together. Horizontal Binning - number of horizontal photo-sensitive cells to combine together. Binning pattern type. Decimation engine selector. Vertical Decimation - vertical sub-sampling of the image - reduces the vertical resolution of the image by the specified vertical decimation factor. Horizontal Decimation - horizontal sub-sampling of the image - reduces the horizontal resolution of the image by the specified horizontal decimation factor. Decimation pattern type. Selects which test pattern generator is controlled by the TestPattern feature. Selects which test pattern type is generated by the selected generator. Output data format. Change sensor shutter type (CMOS sensor). Number of taps Automatic exposure/gain ROI offset X Automatic exposure/gain ROI offset Y Automatic exposure/gain ROI Width Automatic exposure/gain ROI Height Correction of bad pixels White balance red coefficient White balance green coefficient White balance blue coefficient Width of the Image provided by the device (in pixels). Height of the Image provided by the device (in pixels).
Selects Region in Multiple ROI whose parameters are set by width, height, ..., region mode Activates/deactivates Region selected by Region Selector Set/get bandwidth (datarate) (in Megabits) Sensor output data bit depth. Device output data bit depth. Bit depth of data returned by function xiGetImage Device output data packing (or grouping) enabled. Packing could be enabled if output_data_bit_depth > 8 and packing capability is available. Data packing type. Some cameras support only a specific packing type. Returns 1 for cameras that support cooling. Start camera cooling. Set sensor target temperature for cooling. Camera sensor temperature Camera housing temperature Camera housing back side temperature Camera sensor board temperature Mode of color management system. Enable applying of CMS profiles to xiGetImage (see XI_PRM_INPUT_CMS_PROFILE, XI_PRM_OUTPUT_CMS_PROFILE). Returns 1 for color cameras. Returns color filter array type of RAW data. Luminosity gamma Chromaticity gamma Sharpness Strength Color Correction Matrix element [0][0] Color Correction Matrix element [0][1] Color Correction Matrix element [0][2] Color Correction Matrix element [0][3] Color Correction Matrix element [1][0] Color Correction Matrix element [1][1] Color Correction Matrix element [1][2] Color Correction Matrix element [1][3] Color Correction Matrix element [2][0] Color Correction Matrix element [2][1] Color Correction Matrix element [2][2] Color Correction Matrix element [2][3] Color Correction Matrix element [3][0] Color Correction Matrix element [3][1] Color Correction Matrix element [3][2] Color Correction Matrix element [3][3] Set default Color Correction Matrix Selects the type of trigger. Sets number of frames acquired by burst. This burst is used only if trigger is set to FrameBurstStart Enable/Disable debounce to selected GPI Debounce time (x * 10us) Debounce time (x * 10us) Debounce polarity (pol = 1 t0 - falling edge, t1 - rising edge) Status of lens control interface. This shall be set to XI_ON before any Lens operations. Current lens aperture value in stops. Examples: 2.8, 4, 5.6, 8, 11 Lens current focus movement value to be used by XI_PRM_LENS_FOCUS_MOVE in motor steps. Moves lens focus motor by steps set in XI_PRM_LENS_FOCUS_MOVEMENT_VALUE. Lens focus distance in cm. Lens focal distance in mm. Selects the current feature which is accessible by XI_PRM_LENS_FEATURE. Allows access to lens feature value currently selected by XI_PRM_LENS_FEATURE_SELECTOR. Return device model id Return device serial number The alpha channel of RGB32 output image format. Buffer size in bytes sufficient for output image returned by xiGetImage Current format of pixels on transport layer. Sensor clock frequency in Hz. Sensor clock frequency index. Sensors with selected frequencies can set the frequency only by this index. Number of output channels from sensor used for data transfer. Define framerate in Hz Select counter Counter status Type of sensor frames timing. Calculate and return available interface bandwidth (in Megabits) Data move policy Activates LUT. Control the index (offset) of the coefficient to access in the LUT. Value at entry LUTIndex of the LUT Specifies the delay in microseconds (us) to apply after the trigger reception before activating it. Defines how the time stamp reset engine will be armed Defines which source will be used for timestamp reset. Writing this parameter will trigger settings of engine (arming) Returns 1 if the camera is connected and works properly.
Acquisition buffer size in buffer_size_unit. Default bytes. Acquisition buffer size unit in bytes. Default 1. E.g. Value 1024 means that buffer_size is in KiBytes Acquisition transport buffer size in bytes Queue of field/frame buffers Number of buffers to commit to low level GetImage returns most recent frame Resets the camera to default state. Correction of column FPN Correction of row FPN Current sensor mode. Allows selecting the sensor mode by one integer. Setting of this parameter affects: image dimensions and downsampling. Enable High Dynamic Range feature. The number of kneepoints in the PWLR. Position of first kneepoint (in % of XI_PRM_EXPOSURE) Position of second kneepoint (in % of XI_PRM_EXPOSURE) Value of first kneepoint (% of sensor saturation) Value of second kneepoint (% of sensor saturation) Last image black level counts. Can be used for offline processing to recall it. Returns hardware revision number. Set debug level Automatic bandwidth calculation. File number. Size of file. Size of free camera FFS. Size of used camera FFS. Setting of key enables file operations on some cameras. Selects the current feature which is accessible by XI_PRM_SENSOR_FEATURE_VALUE. Allows access to sensor feature value currently selected by XI_PRM_SENSOR_FEATURE_SELECTOR. Android flash mode Android focus mode Android white balance Android anti banding Android focal length Android focus distance near Android focus distance optimal Android focus distance far Android exposure lock Android white balance lock iOS device focus iOS device exposure iOS device flash iOS device white-balance iOS device torch Smartek Giganetix Ethernet Vision: frame offset X Smartek Giganetix Ethernet Vision: frame offset Y Smartek Giganetix Ethernet Vision: frame width max Smartek Giganetix Ethernet Vision: frame height max Smartek Giganetix Ethernet Vision: frame sens width Smartek Giganetix Ethernet Vision: frame sens height Intelperc Profile Count Intelperc Profile Idx Intelperc Depth Low Confidence Value Intelperc Depth Saturation Value Intelperc Depth Confidence Threshold Intelperc Depth Focal Length Horz Intelperc Depth Focal Length Vert Intelperc Depth Generator Intelperc Image Generator Intelperc Generators Mask The named window type The user can resize the window (no constraint) / also used to switch a fullscreen window to a normal size The user cannot resize the window, the size is constrained by the image displayed Window with opengl support Change the window to fullscreen The image expands as much as it can (no ratio constraint) the ratio of the image is respected contour approximation method output contours in the Freeman chain code. All other methods output polygons (sequences of vertices). translate all the points from the chain code into points; compress horizontal, vertical, and diagonal segments, that is, the function leaves only their ending points; apply one of the flavors of the Teh-Chin chain approximation algorithm use a completely different contour retrieval algorithm via linking of horizontal segments of 1s.
Only LIST retrieval mode can be used with this method Color Conversion code Convert BGR color to BGRA color Convert RGB color to RGBA color Convert BGRA color to BGR color Convert RGBA color to RGB color Convert BGR color to RGBA color Convert RGB color to BGRA color Convert RGBA color to BGR color Convert BGRA color to RGB color Convert BGR color to RGB color Convert RGB color to BGR color Convert BGRA color to RGBA color Convert RGBA color to BGRA color Convert BGR color to GRAY color Convert RGB color to GRAY color Convert GRAY color to BGR color Convert GRAY color to RGB color Convert GRAY color to BGRA color Convert GRAY color to RGBA color Convert BGRA color to GRAY color Convert RGBA color to GRAY color Convert BGR color to BGR565 color Convert RGB color to BGR565 color Convert BGR565 color to BGR color Convert BGR565 color to RGB color Convert BGRA color to BGR565 color Convert RGBA color to BGR565 color Convert BGR565 color to BGRA color Convert BGR565 color to RGBA color Convert GRAY color to BGR565 color Convert BGR565 color to GRAY color Convert BGR color to BGR555 color Convert RGB color to BGR555 color Convert BGR555 color to BGR color Convert BGR555 color to RGB color Convert BGRA color to BGR555 color Convert RGBA color to BGR555 color Convert BGR555 color to BGRA color Convert BGR555 color to RGBA color Convert GRAY color to BGR555 color Convert BGR555 color to GRAY color Convert BGR color to XYZ color Convert RGB color to XYZ color Convert XYZ color to BGR color Convert XYZ color to RGB color Convert BGR color to YCrCb color Convert RGB color to YCrCb color Convert YCrCb color to BGR color Convert YCrCb color to RGB color Convert BGR color to HSV color Convert RGB color to HSV color Convert BGR color to Lab color Convert RGB color to Lab color Convert BayerBG color to BGR color Convert BayerGB color to BGR color Convert BayerRG color to BGR color Convert BayerGR color to BGR color Convert BayerBG color to RGB color Convert BayerGB color to RGB color Convert BayerRG color to RGB color Convert BayerGR color to RGB color Convert BGR color to Luv color Convert RGB color to Luv color Convert BGR color to HLS color Convert RGB color to HLS color Convert HSV color to BGR color Convert HSV color to RGB color Convert Lab color to BGR color Convert Lab color to RGB color Convert Luv color to BGR color Convert Luv color to RGB color Convert HLS color to BGR color Convert HLS color to RGB color Convert BayerBG pattern to BGR color using VNG Convert BayerGB pattern to BGR color using VNG Convert BayerRG pattern to BGR color using VNG Convert BayerGR pattern to BGR color using VNG Convert BayerBG pattern to RGB color using VNG Convert BayerGB pattern to RGB color using VNG Convert BayerRG pattern to RGB color using VNG Convert BayerGR pattern to RGB color using VNG Convert BGR to HSV Convert RGB to HSV Convert BGR to HLS Convert RGB to HLS Convert HSV color to BGR color Convert HSV color to RGB color Convert HLS color to BGR color Convert HLS color to RGB color Convert sBGR color to Lab color Convert sRGB color to Lab color Convert sBGR color to Luv color Convert sRGB color to Luv color Convert Lab color to sBGR color Convert Lab color to sRGB color Convert Luv color to sBGR color Convert Luv color to sRGB color Convert BGR color to YUV Convert RGB color to YUV Convert YUV color to BGR Convert YUV color to RGB Convert BayerBG to GRAY Convert BayerGB to GRAY Convert BayerRG to GRAY Convert BayerGR to GRAY Convert YUV420i to RGB Convert YUV420i to BGR Convert YUV420sp to RGB Convert
YUV420sp to BGR Convert YUV420i to RGBA Convert YUV420i to BGRA Convert YUV420sp to RGBA Convert YUV420sp to BGRA Convert YUV (YV12) to RGB Convert YUV (YV12) to BGR Convert YUV (iYUV) to RGB Convert YUV (iYUV) to BGR Convert YUV (i420) to RGB Convert YUV (i420) to BGR Convert YUV (420p) to RGB Convert YUV (420p) to BGR Convert YUV (YV12) to RGBA Convert YUV (YV12) to BGRA Convert YUV (iYUV) to RGBA Convert YUV (iYUV) to BGRA Convert YUV (i420) to RGBA Convert YUV (i420) to BGRA Convert YUV (420p) to RGBA Convert YUV (420p) to BGRA Convert YUV 420 to Gray Convert YUV NV21 to Gray Convert YUV NV12 to Gray Convert YUV YV12 to Gray Convert YUV (iYUV) to Gray Convert YUV (i420) to Gray Convert YUV (420sp) to Gray Convert YUV (420p) to Gray Convert YUV (UYVY) to RGB Convert YUV (UYVY) to BGR Convert YUV (Y422) to RGB Convert YUV (Y422) to BGR Convert YUV (UYNV) to RGB Convert YUV (UYNV) to BGR Convert YUV (UYVY) to RGBA Convert YUV (UYVY) to BGRA Convert YUV (Y422) to RGBA Convert YUV (Y422) to BGRA Convert YUV (UYNV) to RGBA Convert YUV (UYNV) to BGRA Convert YUV (YUY2) to RGB Convert YUV (YUY2) to BGR Convert YUV (YVYU) to RGB Convert YUV (YVYU) to BGR Convert YUV (YUYV) to RGB Convert YUV (YUYV) to BGR Convert YUV (YUNV) to RGB Convert YUV (YUNV) to BGR Convert YUV (YUY2) to RGBA Convert YUV (YUY2) to BGRA Convert YUV (YVYU) to RGBA Convert YUV (YVYU) to BGRA Convert YUV (YUYV) to RGBA Convert YUV (YUYV) to BGRA Convert YUV (YUNV) to RGBA Convert YUV (YUNV) to BGRA Convert YUV (UYVY) to Gray Convert YUV (YUY2) to Gray Convert YUV (Y422) to Gray Convert YUV (UYNV) to Gray Convert YUV (YVYU) to Gray Convert YUV (YUYV) to Gray Convert YUV (YUNV) to Gray Alpha premultiplication Alpha premultiplication Convert RGB to YUV_I420 Convert BGR to YUV_I420 Convert RGB to YUV_IYUV Convert BGR to YUV_IYUV Convert RGBA to YUV_I420 Convert BGRA to YUV_I420 Convert RGBA to YUV_IYUV Convert BGRA to YUV_IYUV Convert RGB to YUV_YV12 Convert BGR to YUV_YV12 Convert RGBA to YUV_YV12 Convert BGRA to YUV_YV12 Convert BayerBG to BGR (Edge-Aware Demosaicing) Convert BayerGB to BGR (Edge-Aware Demosaicing) Convert BayerRG to BGR (Edge-Aware Demosaicing) Convert BayerGR to BGR (Edge-Aware Demosaicing) Convert BayerBG to RGB (Edge-Aware Demosaicing) Convert BayerGB to RGB (Edge-Aware Demosaicing) Convert BayerRG to RGB (Edge-Aware Demosaicing) Convert BayerGR to RGB (Edge-Aware Demosaicing) The max number, do not use Fonts Hershey simplex Hershey plain Hershey duplex Hershey complex Hershey triplex Hershey complex small Hershey script simplex Hershey script complex Flags used for GEMM function Do not apply transpose to either matrix transpose src1 transpose src2 transpose src3 Hough detection type Inpaint type Navier-Stokes based method. The method by Alexandru Telea Edge preserving filter flag Recurs filter Norm conv filter Interpolation types Nearest-neighbor interpolation Bilinear interpolation Resampling using pixel area relation. It is the preferred method for image decimation that gives moire-free results. In case of zooming it is similar to CV_INTER_NN method Bicubic interpolation Lanczos interpolation over 8x8 neighborhood Bit exact bilinear interpolation Smooth type (simple blur with no scaling) - summation over a pixel param1xparam2 neighborhood. If the neighborhood size may vary, one may precompute the integral image with the cvIntegral function (simple blur) - summation over a pixel param1xparam2 neighborhood with subsequent scaling by 1/(param1xparam2).
(Gaussian blur) - convolving image with param1xparam2 Gaussian kernel. (median blur) - finding median of param1xparam1 neighborhood (i.e. the neighborhood is square). (bilateral filter) - applying bilateral 3x3 filtering with color sigma=param1 and space sigma=param2. Information about bilateral filtering can be found cvLoadImage type If set, return the loaded image as is (with alpha channel, otherwise it gets cropped). If set, always convert image to the single channel grayscale image. If set, always convert image to the 3 channel BGR color image. If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit. If set, the image is read in any possible color format. If set, use the gdal driver for loading the image. If set, always convert image to the single channel grayscale image and the image size reduced 1/2. If set, always convert image to the 3 channel BGR color image and the image size reduced 1/2. If set, always convert image to the single channel grayscale image and the image size reduced 1/4. If set, always convert image to the 3 channel BGR color image and the image size reduced 1/4. If set, always convert image to the single channel grayscale image and the image size reduced 1/8. If set, always convert image to the 3 channel BGR color image and the image size reduced 1/8. Flags for Imwrite function For JPEG, it can be a quality from 0 to 100 (the higher the better). Default value is 95. Enable JPEG features, 0 or 1, default is False. Enable JPEG features, 0 or 1, default is False. JPEG restart interval, 0 - 65535, default is 0 - no restart. Separate luma quality level, 0 - 100, default is 0 - don't use. Separate chroma quality level, 0 - 100, default is 0 - don't use. For PNG, it can be the compression level from 0 to 9. A higher value means a smaller size and longer compression time. Default value is 3. One of cv::ImwritePNGFlags, default is IMWRITE_PNG_STRATEGY_DEFAULT. Binary level PNG, 0 or 1, default is 0. For PPM, PGM, or PBM, it can be a binary format flag, 0 or 1. Default value is 1. For WEBP, it can be a quality from 1 to 100 (the higher the better). By default (without any parameter) and for quality above 100 the lossless compression is used. OpenCV depth type default Byte SByte UInt16 Int16 Int32 float double contour retrieval mode retrieve only the extreme outer contours retrieve all the contours and puts them in the list retrieve all the contours and organizes them into a two-level hierarchy: top level are external boundaries of the components, second level are boundaries of the holes retrieve all the contours and reconstructs the full hierarchy of nested contours The bit to shift for SEQ_ELTYPE The mask of CV_SEQ_ELTYPE The bits to shift for SEQ_KIND The bits to shift for SEQ_FLAG Sequence element type (x,y) freeman code: 0..7 unspecified type of sequence elements =6 pointer to element of other sequence index of element of some other sequence next_o, next_d, vtx_o, vtx_d first_edge, (x,y) vertex of the binary tree connected component (x,y,z) The kind of sequence available generic (unspecified) kind of sequence dense sequence subtypes dense sequence subtypes sparse sequence (or set) subtypes sparse sequence (or set) subtypes Sequence flag close sequence Sequence type for point sets CV_TERMCRIT Iteration Epsilon Types of thresholding value = value > threshold ? max_value : 0 value = value > threshold ? 0 : max_value value = value > threshold ? threshold : value value = value > threshold ?
value : 0 value = value > threshold ? 0 : value use Otsu algorithm to choose the optimal threshold value; combine the flag with one of the above CV_THRESH_* values Methods for comparing two arrays R(x,y) = sum_{x',y'} [T(x',y') - I(x+x',y+y')]^2 R(x,y) = sum_{x',y'} [T(x',y') - I(x+x',y+y')]^2 / sqrt( sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2 ) R(x,y) = sum_{x',y'} [T(x',y') * I(x+x',y+y')] R(x,y) = sum_{x',y'} [T(x',y') * I(x+x',y+y')] / sqrt( sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2 ) R(x,y) = sum_{x',y'} [T'(x',y') * I'(x+x',y+y')], where T'(x',y') = T(x',y') - 1/(w*h) * sum_{x'',y''} T(x'',y'') and I'(x+x',y+y') = I(x+x',y+y') - 1/(w*h) * sum_{x'',y''} I(x+x'',y+y'') R(x,y) = sum_{x',y'} [T'(x',y') * I'(x+x',y+y')] / sqrt( sum_{x',y'} T'(x',y')^2 * sum_{x',y'} I'(x+x',y+y')^2 ) IPL_DEPTH indicates if the value is signed 1bit unsigned 8bit unsigned (Byte) 16bit unsigned 32bit float (Single) 8bit signed 16bit signed 32bit signed double Enumeration used by cvFlip No flipping Flip horizontally Flip vertically Enumeration used by cvCheckArr Checks that every element is neither NaN nor Infinity If set, the function checks that every value of the array is within the [minVal,maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity If set, the function does not raise an error if an element is invalid or out of range Type of floodfill operation The default type If set, the difference between the current pixel and seed pixel is considered, otherwise the difference between neighbor pixels is considered (the range is floating). If set, the function does not fill the image (new_val is ignored), but fills the mask (which must be non-NULL in this case). The type for cvSampleLine 8-connected 4-connected The type of line for drawing Filled 8-connected 4-connected Anti-alias Distance transform algorithm flags Connected component The pixel Defines for Distance Transform User defined distance distance = |x1-x2| + |y1-y2| Simple euclidean distance distance = max(|x1-x2|,|y1-y2|) L1-L2 metric: distance = 2*(sqrt(1+x*x/2) - 1) distance = c^2*(|x|/c - log(1+|x|/c)), c = 1.3998 distance = c^2/2*(1 - exp(-(x/c)^2)), c = 2.9846 distance = |x|<c ? x^2/2 : c*(|x|-c/2), c = 1.345 The types for cvMulSpectrums The default type Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc Conjugate the second argument of cvMulSpectrums Flag used for cvDFT Do forward 1D or 2D transform. The result is not scaled Do inverse 1D or 2D transform. The result is not scaled. CV_DXT_FORWARD and CV_DXT_INVERSE are mutually exclusive, of course Scale the result: divide it by the number of array elements. Usually, it is combined with CV_DXT_INVERSE, and one may use a shortcut Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc Inverse and scale Flag used for cvDCT Do forward 1D or 2D transform. The result is not scaled Do inverse 1D or 2D transform. The result is not scaled. CV_DXT_FORWARD and CV_DXT_INVERSE are mutually exclusive, of course Do forward or inverse transform of every individual row of the input matrix.
This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc Calculates fundamental matrix given a set of corresponding points For the 7-point algorithm, N == 7. For the 8-point algorithm, N >= 8. For the LMedS algorithm, N >= 8. For the RANSAC algorithm, N >= 8. CV_FM_LMEDS_ONLY | CV_FM_8POINT CV_FM_RANSAC_ONLY | CV_FM_8POINT General enumeration Error codes Types for WarpAffine Neither FILL_OUTLIERS nor CV_WRAP_INVERSE_MAP Fill all the destination image pixels. If some of them correspond to outliers in the source image, they are set to fillval. Indicates that the matrix is an inverse transform from the destination image to the source and, thus, can be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from map_matrix. Types of Adaptive Threshold indicates that "Mean minus C" should be used for adaptive threshold. indicates that "Gaussian minus C" should be used for adaptive threshold. Shape of the Structuring Element A rectangular element. A cross-shaped element. An elliptic element. A user-defined element. PCA Type the vectors are stored as rows (i.e. all the components of a certain vector are stored continuously) the vectors are stored as columns (i.e. values of a certain vector component are stored continuously) use pre-computed average vector cvInvert method Gaussian elimination with optimal pivot element chosen In case of the LU method the function returns the src1 determinant (src1 must be square). If it is 0, the matrix is not inverted and src2 is filled with zeros. Singular value decomposition (SVD) method In case of the SVD methods the function returns the inverse condition number of src1 (ratio of the smallest singular value to the largest singular value) and 0 if src1 is all zeros. The SVD methods calculate a pseudo-inverse matrix if src1 is singular Eig method for a symmetric positive-definite matrix QR decomposition Normal cvCalcCovarMatrix method types Calculates the covariance matrix for a set of vectors transpose([v1-avg, v2-avg,...]) * [v1-avg,v2-avg,...] [v1-avg, v2-avg,...] * transpose([v1-avg,v2-avg,...]) Do not calc average (i.e. mean vector) - use the input vector instead (useful for calculating covariance matrix by parts) Scale the covariance matrix coefficients by number of the vectors All the input vectors are stored in a single matrix, as its rows All the input vectors are stored in a single matrix, as its columns Type for cvSVD The default type enables modification of matrix src1 during the operation. It speeds up the processing. indicates that only a vector of singular values w is to be processed, while u and vt will be set to empty matrices when the matrix is not square, by default the algorithm produces u and vt matrices of sufficiently large size for the further A reconstruction; if, however, the FULL_UV flag is specified, u and vt will be full-size square orthogonal matrices. Type for cvCalcOpticalFlowPyrLK The default type Uses initial estimations, stored in nextPts; if the flag is not set, then prevPts is copied to nextPts and is considered the initial estimate. Use minimum eigen values as an error measure (see minEigThreshold description); if the flag is not set, then the L1 distance between patches around the original and a moved point, divided by number of pixels in a window, is used as an error measure.
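The pyramidal Lucas-Kanade flags above are easiest to read next to a call. A minimal sketch of sparse point tracking, assuming the CvInvoke.CalcOpticalFlowPyrLK overload that takes managed point arrays (available overloads vary between Emgu versions):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    public static class LkSketch
    {
        // Tracks prevPts from prev to next; status[i] != 0 means point i was
        // found, and trackError[i] holds the error measure described above.
        public static PointF[] Track(Mat prev, Mat next, PointF[] prevPts,
            out byte[] status, out float[] trackError)
        {
            PointF[] nextPts;
            CvInvoke.CalcOpticalFlowPyrLK(
                prev, next, prevPts,
                new Size(21, 21),              // search window at each pyramid level
                3,                             // maximal pyramid level number
                new MCvTermCriteria(30, 0.01), // stop after 30 iterations or eps < 0.01
                out nextPts, out status, out trackError);
            return nextPts;
        }
    }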
Various camera calibration flags The default value intrinsic_matrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (image_size is used here), and focal distances are computed in some least-squares fashion The optimization procedure considers only one of fx and fy as an independent variable and keeps the aspect ratio fx/fy the same as it was set initially in intrinsic_matrix. In this case the actual initial values of (fx, fy) are either taken from the matrix (when CV_CALIB_USE_INTRINSIC_GUESS is set) or estimated somehow (in the latter case fx, fy may be set to arbitrary values, only their ratio is used) The principal point is not changed during the global optimization; it stays at the center or at the other location specified (when CV_CALIB_USE_INTRINSIC_GUESS is set as well) Tangential distortion coefficients are set to zeros and do not change during the optimization The focal length is fixed (both fx and fy are fixed) The 1st distortion coefficient (k1) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 2nd distortion coefficient (k2) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 3rd distortion coefficient (k3) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 4th distortion coefficient (k4) is fixed (see above) The 5th distortion coefficient (k5) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 6th distortion coefficient (k6) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed Rational model Thin prism model Fix S1, S2, S3, S4 Tilted model Fix Taux Tauy Use QR instead of SVD decomposition for solving. Faster but potentially less precise Only for stereo: Fix intrinsic Only for stereo: Same focal length For stereo rectification: Zero disparity For stereo rectification: use LU instead of SVD decomposition for solving. Much faster but potentially less precise Type of chessboard calibration Default type Use adaptive thresholding to convert the image to black-and-white, rather than a fixed threshold level (computed from the average image brightness) Normalize the image using cvNormalizeHist before applying fixed or adaptive thresholding. Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads that are extracted at the contour retrieval stage If it is on, then this check is performed before the main algorithm and if a chessboard is not found, the function returns 0 instead of wasting 0.3-1s on doing the full search.
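As a sketch of how the chessboard calibration flags above are typically combined (assuming the CvInvoke.FindChessboardCorners wrapper and CalibCbType flag names used by recent Emgu versions):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    public static class ChessboardSketch
    {
        // Looks for a 9x6 inner-corner chessboard pattern in a grayscale image.
        public static bool FindCorners(Mat gray, VectorOfPointF corners)
        {
            // FastCheck runs the cheap pre-test described above, so the call
            // returns quickly when no chessboard is visible at all.
            return CvInvoke.FindChessboardCorners(
                gray, new Size(9, 6), corners,
                CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage | CalibCbType.FastCheck);
        }
    }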
Type of circles grid calibration symmetric grid asymmetric grid Clustering IO type for eigen object related functions No callback input callback output callback both callback orientation clockwise counter clockwise Stereo Block Matching Prefilter type No prefilter XSobel Type of cvFindHomography method regular method using all the point pairs Least-Median robust method RANSAC-based robust method Type used by cvMatchShapes I_1(A,B)=sum_{i=1..7} abs(1/m^A_i - 1/m^B_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively I_2(A,B)=sum_{i=1..7} abs(m^A_i - m^B_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively I_3(A,B)=sum_{i=1..7} abs(m^A_i - m^B_i)/abs(m^A_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively The result type of cvSubdiv2DLocate. One of the input arguments is invalid. Point is outside the subdivision reference rectangle Point falls into some facet Point coincides with one of the subdivision vertices Point falls onto the edge Type used in cvStereoRectify Shift one of the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) in order to maximise the useful image area Makes the principal points of each camera have the same pixel coordinates in the rectified views The type for the CopyMakeBorder function Used by some cuda methods, will pass the value -1 to the function Border is filled with the fixed value, passed as the last parameter of the function The pixels from the top and bottom rows, the left-most and right-most columns are replicated to fill the border Reflect Wrap Reflect 101 Transparent The default border interpolation type. do not look outside of ROI The types for haar detection The default type where no optimization is done. If it is set, the function uses a Canny edge detector to reject some image regions that contain too few or too many edges and thus cannot contain the searched object. The particular threshold values are tuned for face detection and in this case the pruning speeds up the processing For each scale factor used the function will downscale the image rather than "zoom" the feature coordinates in the classifier cascade. Currently, the option can only be used alone, i.e. the flag cannot be set together with the others If it is set, the function finds the largest object (if any) in the image. That is, the output sequence will contain one (or zero) element(s) It should be used only when CV_HAAR_FIND_BIGGEST_OBJECT is set and min_neighbors > 0. If the flag is set, the function does not look for candidates of a smaller size as soon as it has found the object (with enough neighbor candidates) at the current scale. Typically, when min_neighbors is fixed, the mode yields a less accurate (a bit larger) object rectangle than the regular single-object mode (flags=CV_HAAR_FIND_BIGGEST_OBJECT), but it is much faster, up to an order of magnitude.
A greater value of min_neighbors may be specified to improve the accuracy Specify if it is back or front Back Front The file storage operation type The storage is open for reading The storage is open for writing The storage is open for append Histogram comparison method Correlation Chi-Square Intersection Bhattacharyya distance Synonym for Bhattacharyya Alternative Chi-Square The available flags for Farneback optical flow computation Default Use the input flow as the initial flow approximation Use a Gaussian winsize x winsize filter instead of a box filter of the same size for optical flow estimation. Usually, this option gives more accurate flow than with a box filter, at the cost of lower speed (and normally winsize for a Gaussian window should be set to a larger value to achieve the same level of robustness) Grabcut initialization type Initialize with rectangle Initialize with mask Eval CvCapture type. This is equivalent to the CV_CAP_ macros. Auto detect Platform native Platform native Platform native IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers QuickTime Unicap drivers DirectShow (via videoInput) PvAPI, Prosilica GigE SDK OpenNI (for Kinect) OpenNI (for Asus Xtion) Android XIMEA Camera API AVFoundation framework for iOS (OS X Lion will have the same API) Smartek Giganetix GigEVisionSDK Microsoft Media Foundation (via videoInput) Microsoft Windows Runtime using Media Foundation Intel Perceptual Computing SDK OpenNI2 (for Kinect) OpenNI2 (for Asus Xtion and Occipital Structure sensors) gPhoto2 connection GStreamer FFMPEG OpenCV Image Sequence (e.g. img_%02d.jpg) KMeans initialization type Chooses random centers for k-Means initialization Uses the user-provided labels for K-Means initialization Uses k-Means++ algorithm for initialization The type of color map Autumn Bone Jet Winter Rainbow Ocean Summer Spring Cool Hsv Pink Hot The return value for solveLP function Problem is unbounded (target function can achieve arbitrarily high values) Problem is unfeasible (there are no points that satisfy all the constraints imposed) There is only one maximum for the target function There are multiple maxima for the target function - an arbitrary one is returned Morphology operation type Erode Dilate Open Close Gradient Tophat Blackhat Hit or miss. Only supported for CV_8UC1 binary images. Access type Read Write Read and write Mask Fast Rectangle intersect type No intersection There is a partial intersection One of the rectangles is fully enclosed in the other Method for solving a PnP problem Iterative F.Moreno-Noguer, V.Lepetit and P.Fua "EPnP: Efficient Perspective-n-Point Camera Pose Estimation" X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem" A Direct Least-Squares (DLS) Method for PnP Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation Seamless clone method The power of the method is fully expressed when inserting objects with complex outlines into a new background The classic method, color-based selection and alpha masking might be time consuming and often leaves an undesirable halo. Seamless cloning, even averaged with the original image, is not effective. Mixed seamless cloning based on a loose selection proves effective. Monochrome transfer Connected components algorithm output formats The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.
The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. The horizontal size of the bounding box. The vertical size of the bounding box. The total area (in pixels) of the connected component. The rotation type Rotate 90 degrees clockwise Rotate 180 degrees clockwise Rotate 270 degrees clockwise Flags for sorting each matrix row is sorted independently each matrix column is sorted independently; this flag and SortEveryRow are mutually exclusive. each matrix row is sorted in the ascending order. each matrix row is sorted in the descending order; this flag and SortAscending are also mutually exclusive. Motion type for the FindTransformECC function sets a translational motion model; warpMatrix is 2x3 with the first 2x2 part being the unity matrix and the remaining two parameters being estimated. sets a Euclidean (rigid) transformation as motion model; three parameters are estimated; warpMatrix is 2x3. sets an affine motion model (DEFAULT); six parameters are estimated; warpMatrix is 2x3. sets a homography as a motion model; eight parameters are estimated; warpMatrix is 3x3. Fisheye Camera model Fisheye calibration flag. Default flag cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (imageSize is used), and focal distances are computed in a least-squares fashion. Extrinsic will be recomputed after each iteration of intrinsic optimization. The functions will check validity of condition number. Skew coefficient (alpha) is set to zero and stays zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Fix intrinsic Projects points using fisheye model. The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. Array of object points, 1xN/Nx1 3-channel (or vector<Point3f>), where N is the number of points in the view. Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or vector<Point2f>. rotation vector translation vector Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). The skew coefficient. Optional output 2Nx15 jacobian matrix of derivatives of image points with respect to components of the focal lengths, coordinates of the principal point, distortion coefficients, rotation vector, translation vector, and the skew. In the old interface different components of the jacobian are returned via different output parameters. Distorts 2D points using fisheye model. Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view. Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). The skew coefficient. Undistorts 2D points using fisheye model. Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view. Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4).
Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Computes undistortion and rectification maps for image transform by cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Undistorted image size. Type of the first output map that can be CV_32FC1 or CV_16SC2. See convertMaps() for details. The first output map. The second output map. Transforms an image to compensate for fisheye lens distortion. The function is simply a combination of fisheye::initUndistortRectifyMap (with unity R) and remap (with bilinear interpolation). Image with fisheye lens distortion. Output image with compensated fisheye lens distortion. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Camera matrix of the distorted image. By default, it is the identity matrix, but you may additionally scale and shift the result by using a different matrix. The function transforms an image to compensate for radial and tangential lens distortion. Estimates new camera matrix for undistortion or rectification. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Sets the new focal length in range between the min focal length and the max focal length. Balance is in range of [0, 1] Divisor for new focal length. Stereo rectification for fisheye camera model. First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D). Operation flags that may be zero or ZeroDisparity. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap. When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion. Sets the new focal length in range between the min focal length and the max focal length. Balance is in range of [0, 1]. Divisor for new focal length. Performs camera calibration. vector of vectors of calibration pattern points in the calibration pattern coordinate space.
vector of vectors of the projections of calibration pattern points. imagePoints.size() must be equal to objectPoints.size(), and imagePoints[i].size() must be equal to objectPoints[i].size() for each i. Size of the image used only to initialize the intrinsic camera matrix. Output 3x3 floating-point camera matrix. If UseIntrinsicGuess is specified, some or all of fx, fy, cx, cy must be initialized before calling the function. Output vector of distortion coefficients (k1,k2,k3,k4). Output vector of rotation vectors (see Rodrigues) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1). Output vector of translation vectors estimated for each pattern view. Different flags Termination criteria for the iterative optimization algorithm. Performs stereo calibration. Vector of vectors of the calibration pattern points. Vector of vectors of the projections of the calibration pattern points, observed by the first camera. Vector of vectors of the projections of the calibration pattern points, observed by the second camera. Input/output first camera matrix. If FixIntrinsic is specified, some or all of the matrix components must be initialized. Input/output vector of distortion coefficients (k1,k2,k3,k4) of 4 elements. Input/output second camera matrix. The parameter is similar to the first camera matrix. Input/output lens distortion coefficients for the second camera. The parameter is similar to the first camera's distortion coefficients. Size of the image used only to initialize intrinsic camera matrix. Output rotation matrix between the 1st and the 2nd camera coordinate systems. Output translation vector between the coordinate systems of the cameras. Fish eye calibration flags Termination criteria for the iterative optimization algorithm. Attribute used by ImageBox to generate Operation Menu Get or Set the exposable value; if true, this function will be displayed in the Operation Menu of ImageBox The category of this function The size for each generic parameter Options The options for generic parameters Constructor A generic parameter for the Operation class The selected generic parameter type The types that can be used Create a generic parameter for the Operation class The selected generic parameter type The types that can be used A collection of reflection functions that can be applied to ColorType objects Get the display color for each channel The color The display color for each channel Get the names of the channels The color The names of the channels A collection of reflection functions that can be applied to IImage objects Get all the methods that belong to the IImage and Image class with ExposableMethodAttribute set true.
The IImage object to be reflected for methods marked with ExposableMethodAttribute All the methods that belong to the IImage and Image class with ExposableMethodAttribute set true Get the color type of the image The image to apply reflection on The color type of the image Get the depth type of the image The image to apply reflection on The depth type of the image Get the color at the specific location of the image The image to obtain pixel value from The location to sample a pixel The color at the specific location A class that can be used for writing geotiff The color type of the image to be written The depth type of the image to be written Create a tiff writer to save an image The file name to be saved Write the image to the tiff file The image to be written Write the geo information into the tiff file Model Tie Point, an array of size 6 Model pixel scale, an array of size 3 Release the writer and write all data onto disk. A writer for writing GeoTiff The color type of the image to be written The depth type of the image to be written Create a TileTiffWriter. The name of the file to be written to The size of the image The tile size in pixels Write a tile into the tile tiff The starting row for the tile The starting col for the tile The tile to be written Get the equivalent size for a tile of data as it would be returned in a call to TIFFReadTile or as it would be expected in a call to TIFFWriteTile. Get the number of bytes of a row of data in a tile. Get tile size in pixels. Write the whole image as tile tiff The image to be written This class contains ocl runtime information Create an empty OclDevice object Get the default OclDevice. Do not dispose this device. Release all the unmanaged memory associated with this OclDevice Get the native device pointer Set the native device pointer Get the string representation of this oclDevice A string representation of this oclDevice Indicates if this is an NVidia device Indicates if this is an Intel device Indicates if this is an AMD device The AddressBits Indicates if the linker is available Indicates if the compiler is available Indicates if the device is available The maximum work group size The max compute unit The local memory size The maximum memory allocation size The device major version number The device minor version number The device half floating point configuration The device single floating point configuration The device double floating point configuration True if the device uses unified memory The global memory size The image2d max width The image2d max height The ocl device type The device name The device version The device vendor name The device driver version The device extensions The device OpenCL version The device OpenCL C version Ocl Device Type Default Cpu Gpu Accelerator DGpu IGpu All Floating point configuration Denorm inf, nan round to nearest round to zero round to infinity FMA soft float Correctly rounded divide sqrt Class that contains ocl functions. Convert the DepthType to a string that represents the OpenCL value type.
The depth type The number of channels A string that represents the OpenCL value type Get all the platform info as a vector The vector of Platform info cv::ocl::Image2D Create an OclImage2D object from UMat The UMat from which to get image properties and data Flag to enable the use of normalized channel data types Flag indicating that the image should alias the src UMat. If true, changes to the image or src will be reflected in both objects. Release the unmanaged memory associated with this OclImage2D An opencl kernel Create an opencl kernel Create an opencl kernel The name of the kernel The program source code The build options Optional error message container that can be passed to this function True if the kernel can be created Release the opencl kernel Set the parameters for the kernel The index of the parameter The ocl image The next index value to be set Set the parameters for the kernel The index of the parameter The umat The next index value to be set Set the parameters for the kernel The index of the parameter The value The next index value to be set Set the parameters for the kernel The index of the parameter The value The next index value to be set Set the parameters for the kernel The index of the parameter The value The next index value to be set Set the parameters for the kernel The index of the parameter The kernel arg The next index value to be set Set the parameters for the kernel The index of the parameter The data The size of the data in number of bytes The next index value to be set Execute the kernel The global size The local size If true, the code is run synchronously (blocking) Optional Opencl queue True if the execution is successful Indicates if the kernel is empty The pointer to the native kernel OpenCL kernel arg KernelArg flags Local Read only Write only Read write Constant Ptr only No size Create the OCL kernel arg The flags The UMat wscale iwscale obj sz Release the unmanaged memory associated with this object This class contains ocl platform information Release all the unmanaged memory associated with this OclInfo Get the OclDevice with the specific index The index of the ocl device The ocl device with the specific index Get the string that represents this oclPlatformInfo object A string that represents this oclPlatformInfo object The platform name The platform version The platform vendor The number of devices Open CL kernel program source code Create OpenCL program source code The source code Get the source code as String Release the unmanaged memory associated with this object An OpenCL Queue OpenCL queue Wait for the queue to finish Release the unmanaged memory associated with this object. A raw data storage The type of elements in the storage The file info Create a binary File Storage The file name of the storage Create a binary File Storage The file name of the storage The data will be read in chunks of this size internally. Can be used to speed up the file read. A good number is 4096 Create a binary File Storage with the specific data The file name of the storage, all data in the existing file will be replaced The data which will be stored in the storage Append the samples to the end of the storage The samples to be appended to the storage The file name of the storage Delete all data in the existing storage, if there is any. Estimate the number of elements in this storage as the size of the storage divided by the size of the elements An estimation of the number of elements in this storage Get a copy of the first element in the storage. 
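A sketch of building and running a trivial kernel through the Ocl wrappers documented above. The member names (Create, Set, Run) follow the documentation, but the exact overloads and the KernelArg constructor shape are assumptions to verify.

using System;
using Emgu.CV;
using Emgu.CV.Ocl;
using Emgu.CV.Util;

string source = @"
__kernel void fill(__global uchar* data)
{
    int i = get_global_id(0);
    data[i] = (uchar)(i & 0xff);
}";

using (ProgramSource ps = new ProgramSource(source))
using (Kernel kernel = new Kernel())
using (CvString errMsg = new CvString())
using (UMat buffer = new UMat(1, 256, Emgu.CV.CvEnum.DepthType.Cv8U, 1))
{
    if (kernel.Create("fill", ps, "", errMsg))       // compile with empty build options
    {
        kernel.Set(0, new KernelArg(KernelArg.Flags.ReadWrite, buffer));
        IntPtr[] globalSize = { new IntPtr(256) };
        kernel.Run(globalSize, null, true, null);    // synchronous run, default queue
    }
}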
If the storage is empty, a default value will be returned A copy of the first element in the storage. If the storage is empty, a default value will be returned Get the subsampled data in this storage The subsample rate The sub-sampled data in this storage Get the data in this storage The data in this storage The default exception to be thrown when an error is encountered in OpenCV The numeric code for error status The corresponding error string for the Status code The name of the function where the error is encountered A description of the error The source file name where the error is encountered The line number in the source where the error is encountered The default exception to be thrown when an error is encountered in OpenCV The numeric code for error status The name of the function where the error is encountered A description of the error The source file name where the error is encountered The line number in the source where the error is encountered Utilities class The ColorPalette of Grayscale for Bitmap Format8bppIndexed Convert the color palette to four lookup tables The color palette to transform Lookup table for the B channel Lookup table for the G channel Lookup table for the R channel Lookup table for the A channel Convert arrays of data to matrix Arrays of data A two-dimensional matrix that represents the array Convert arrays of points to matrix Arrays of points A two-dimensional matrix that represents the points Compute the minimum and maximum value from the points The points The minimum x,y,z values The maximum x,y,z values Copy a generic vector to the unmanaged memory The data type of the vector The source vector Pointer to the destination unmanaged memory Specify the number of bytes to copy. If this is -1, the number of bytes equals the number of bytes in the array The number of bytes copied Copy a jagged two-dimensional array to the unmanaged memory The data type of the jagged two-dimensional array The source array Pointer to the destination unmanaged memory Copy a jagged two-dimensional array from the unmanaged memory The data type of the jagged two-dimensional array The destination array Pointer to the source unmanaged memory memcpy function the destination of memory copy the source of memory copy the number of bytes to be copied Given the source and destination color type, compute the color conversion code for the CvInvoke.cvCvtColor function The source color type. Must be a type inherited from IColor The destination color type. Must be a type inherited from IColor The color conversion code for the CvInvoke.cvCvtColor function A DataLogger for unmanaged code to log data back to managed code, using callback. Create a MessageLogger and register the callback function The log level. The event that will be raised when the unmanaged code sends over data Log some data Pointer to some unmanaged data The logLevel. The Log function only logs when the given log level is greater than or equal to the DataLogger's logLevel Release the DataLogger and all the unmanaged memory associated with it. A generic version of the DataLogger The supported types include System.String and System.ValueType Create a new DataLogger The log level. The event that will be raised when the unmanaged code sends over data Log some data The data to be logged The logLevel. 
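The generic DataLogger above can be exercised from pure managed code. This sketch assumes the event is named OnDataReceived; check the actual member names in Emgu.CV.Util.

using System;
using Emgu.CV.Util;

using (DataLogger<string> logger = new DataLogger<string>(1))
{
    logger.OnDataReceived += (sender, e) => Console.WriteLine("data received");
    logger.Log("visible", 2);  // 2 >= logger's level 1, so the event fires
    logger.Log("dropped", 0);  // 0 < 1, filtered out per the rule above
}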
The Log function only logs when the given log level is greater than or equal to the DataLogger's logLevel Pointer to the unmanaged object Implicit operator for IntPtr The DataLogger The unmanaged pointer for this DataLogger Release the unmanaged memory associated with this DataLogger The event that will be raised when the unmanaged code sends over data Cache the sizes of various headers in bytes The size of PointF The size of RangeF The size of PointF The size of MCvMat The size of IplImage The size of MCvPoint3D32f The size of MCvMatND This class can be used to initiate TBB. Only useful if it is compiled with TBB support Initialize the TBB task scheduler Release the TBB task scheduler Wrapped class of the C++ standard vector of Byte. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Byte Create a standard vector of Byte of the specific size The size of the vector Create a standard vector of Byte with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Byte An array of Byte Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of ColorPoint. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of ColorPoint Create a standard vector of ColorPoint of the specific size The size of the vector Create a standard vector of ColorPoint with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of ColorPoint An array of ColorPoint Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of CvString. 
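The vector wrappers above and below all share the same lifecycle; a short sketch with VectorOfByte, using only members documented in this section:

using Emgu.CV.Util;

using (VectorOfByte v = new VectorOfByte(new byte[] { 1, 2, 3 }))
{
    v.Push(new byte[] { 4, 5 });  // append more values
    byte[] all = v.ToArray();     // { 1, 2, 3, 4, 5 }
    byte third = v[2];            // indexer: item at index 2
    int count = v.Size;           // 5
    v.Clear();                    // size becomes 0
}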
Create an empty standard vector of CvString Create a standard vector of CvString of the specific size The size of the vector Create a standard vector of CvString with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Convert the standard vector to an array of String An array of String Wrapped class of the C++ standard vector of DMatch. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of DMatch Create a standard vector of DMatch of the specific size The size of the vector Create a standard vector of DMatch with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of DMatch An array of DMatch Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Double. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Double Create a standard vector of Double of the specific size The size of the vector Create a standard vector of Double with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Double An array of Double Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Float. 
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Float Create a standard vector of Float of the specific size The size of the vector Create a standard vector of Float with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Float An array of Float Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Int. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Int Create a standard vector of Int of the specific size The size of the vector Create a standard vector of Int with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Int An array of Int Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of KeyPoint. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of KeyPoint Create a standard vector of KeyPoint of the specific size The size of the vector Create a standard vector of KeyPoint with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of KeyPoint An array of KeyPoint Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Remove keypoints within borderPixels of an image edge. 
Image size Border size in pixels Remove keypoints of sizes out of range. Minimum size Maximum size Remove keypoints from an image by mask for pixels of this image. The mask Wrapped class of the C++ standard vector of Mat. Create an empty standard vector of Mat Create a standard vector of Mat of the specific size The size of the vector Create a standard vector of Mat with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Convert a CvArray to cv::Mat and push it into the vector The type of depth of the cvArray The cvArray to be pushed into the vector Convert a group of CvArray to cv::Mat and push them into the vector The type of depth of the cvArray The values to be pushed to the vector Wrapped class of the C++ standard vector of OclPlatformInfo. Create an empty standard vector of OclPlatformInfo Create a standard vector of OclPlatformInfo of the specific size The size of the vector Create a standard vector of OclPlatformInfo with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Point. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Point Create a standard vector of Point of the specific size The size of the vector Create a standard vector of Point with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Point An array of Point Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Point3D32F. 
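A sketch of the keypoint filters just described, applied after feature detection; the filter method names follow the documentation and should be checked against VectorOfKeyPoint.

using System.Drawing;
using Emgu.CV.Util;

using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
{
    // ... run a feature detector that fills 'keypoints' ...
    keypoints.FilterByImageBorder(new Size(640, 480), 16); // drop points near the border
    keypoints.FilterByKeypointSize(4f, 64f);               // keep sizes within [4, 64]
}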
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Point3D32F Create a standard vector of Point3D32F of the specific size The size of the vector Create a standard vector of Point3D32F with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Point3D32F An array of Point3D32F Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of PointF. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of PointF Create a standard vector of PointF of the specific size The size of the vector Create a standard vector of PointF with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of PointF An array of PointF Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Rect. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Rect Create a standard vector of Rect of the specific size The size of the vector Create a standard vector of Rect with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Rect An array of Rect Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of Triangle2DF. 
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Triangle2DF Create a standard vector of Triangle2DF of the specific size The size of the vector Create a standard vector of Triangle2DF with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of Triangle2DF An array of Triangle2DF Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of UMat. Create an empty standard vector of UMat Create a standard vector of UMat of the specific size The size of the vector Create a standard vector of UMat with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of VectorOfDMatch. Create an empty standard vector of VectorOfDMatch Create a standard vector of VectorOfDMatch of the specific size The size of the vector Create a standard vector of VectorOfDMatch with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfDMatch Convert the standard vector to arrays of DMatch Arrays of DMatch Wrapped class of the C++ standard vector of VectorOfInt. 
Create an empty standard vector of VectorOfInt Create a standard vector of VectorOfInt of the specific size The size of the vector Create a standard vector of VectorOfInt with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfInt Convert the standard vector to arrays of int Arrays of int Wrapped class of the C++ standard vector of VectorOfPoint. Create an empty standard vector of VectorOfPoint Create a standard vector of VectorOfPoint of the specific size The size of the vector Create a standard vector of VectorOfPoint with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfPoint Convert the standard vector to arrays of Point Arrays of Point Wrapped class of the C++ standard vector of VectorOfPoint3D32F. Create an empty standard vector of VectorOfPoint3D32F Create a standard vector of VectorOfPoint3D32F of the specific size The size of the vector Create a standard vector of VectorOfPoint3D32F with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfPoint3D32F Convert the standard vector to arrays of Point3D32F Arrays of Point3D32F Wrapped class of the C++ standard vector of VectorOfPointF. 
Create an empty standard vector of VectorOfPointF Create a standard vector of VectorOfPointF of the specific size The size of the vector Create a standard vector of VectorOfPointF with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfPointF Convert the standard vector to arrays of PointF Arrays of PointF Use zlib included in OpenCV to perform in-memory binary compression and decompression Compress the data using the specific compression level The data to be compressed The compression level, 0-9 where 0 means no compression at all The compressed bytes Uncompress the data The compressed data The estimated size of the uncompressed data. Must be large enough to hold the decompressed data. The decompressed data Wrapper for cv::String. This class supports UTF-8 chars. Create a CvString from System.String The System.String object to be converted to CvString Create an empty CvString Get the string representation of the CvString The string representation of the CvString Gets the length of the string The length of the string Release all the unmanaged resources associated with this object. Interface to the algorithm class Return the pointer to the algorithm object The pointer to the algorithm object Extension methods to the IAlgorithm interface Reads algorithm parameters from a file storage. The algorithm. The node from file storage. Stores algorithm parameters in a file storage The algorithm. The storage. Save the algorithm to file The algorithm The file name where this algorithm will be saved to Save the algorithm to a string The algorithm The file format, can be .xml or .yml The algorithm as a yml string Clear the algorithm Returns true if the Algorithm is empty. e.g. in the very beginning or after unsuccessful read. The algorithm Returns true if the Algorithm is empty. e.g. in the very beginning or after unsuccessful read. Loads algorithm from the file The algorithm Name of the file to read. The optional name of the node to read (if empty, the first top-level node will be used) Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Loads algorithm from a String The algorithm The string variable containing the model you want to load. The optional name of the node to read (if empty, the first top-level node will be used) Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. The algorithm Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. This is the proxy class for passing read-only input arrays into OpenCV functions. The unmanaged pointer to the input array. 
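A hypothetical round trip through the zlib helpers described above; the class name (here assumed to be ZlibCompression in Emgu.CV.Util) is not given in the documentation, so treat it as a placeholder to look up in your version.

using System.Text;
using Emgu.CV.Util;

byte[] raw = Encoding.UTF8.GetBytes("payload payload payload payload");
byte[] packed = ZlibCompression.Compress(raw, 9);                 // 9 = max compression
byte[] unpacked = ZlibCompression.Uncompress(packed, raw.Length); // size hint must be large enough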
InputArrayOfArrays Extension methods for IInputArrays Determines whether the specified input array is an UMat. The array True if it is an UMat This type is very similar to InputArray except that it is used for input/output function parameters. The unmanaged pointer to the input/output array This is the proxy class for passing read-only input arrays into OpenCV functions. Input array type kind shift Fixed type Fixed size Kind mask None Mat Matx StdVector StdVectorVector StdVectorMat Expr Opengl buffer Cuda Host Mem Cuda GpuMat UMat StdVectorUMat StdBoolVector StdVectorCudaGpuMat Create an InputArray from an existing unmanaged InputArray pointer The unmanaged pointer to the InputArray The parent object to keep reference to Get an empty input array An empty input array Get the Mat from the input array The index, in case this is a VectorOfMat The Mat Get the UMat from the input array The index, in case this is a VectorOfUMat The UMat Get the size of the input array The optional index The size of the input array Return true if the input array is empty True if the input array is empty Get the depth type The optional index The depth type Get the number of dimensions The optional index The dimensions Get the number of channels The optional index The number of channels Copy this Input array to another. The destination array. The optional mask. Release all the unmanaged memory associated with this InputArray True if the input array is a Mat True if the input array is an UMat True if the input array is a vector of Mat True if the input array is a vector of UMat True if the input array is a Matx The type of the input array Get the GpuMat from the input array The GpuMat This type is very similar to InputArray except that it is used for input/output function parameters. Create an InputOutputArray from an existing unmanaged inputOutputArray pointer The pointer to the existing inputOutputArray The parent object to keep reference to Get an empty InputOutputArray An empty InputOutputArray Release all the memory associated with this InputOutputArray This type is very similar to InputArray except that it is used for output function parameters. The unmanaged pointer to the output array OutputArrayOfArrays This type is very similar to InputArray except that it is used for output function parameters. Create an OutputArray from an existing unmanaged outputArray pointer The pointer to the unmanaged outputArray The parent object to keep reference to Get an empty output array An empty output array Release the unmanaged memory associated with this output array. 
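Since Mat implements IInputArray, the proxy above can be inspected directly; this sketch uses only members documented in this section.

using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat m = new Mat(4, 4, DepthType.Cv8U, 3))
using (InputArray ia = m.GetInputArray())
{
    bool isMat = ia.IsMat;                    // true: backed by a cv::Mat
    DepthType depth = ia.GetDepth();          // DepthType.Cv8U
    int channels = ia.GetChannels();          // 3
    System.Drawing.Size size = ia.GetSize();  // 4x4
}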
True if the output array is fixed size True if the output array is fixed type True if the output array is needed An implementation of IInputArray intended to convert data to IInputArray Create an InputArray from MCvScalar The MCvScalar to be converted to InputArray Create an InputArray from a double value The double value to be converted to InputArray Convert double scalar to InputArray The double scalar The InputArray Convert MCvScalar to InputArray The MCvScalar The InputArray Release all the unmanaged memory associated with this InputArray The pointer to the input array CvBlob Blob Moments Moment 00 Moment 10 Moment 01 Moment 11 Moment 20 Moment 02 Central moment 11 Central moment 20 Central moment 02 Normalized central moment 11 Normalized central moment 20 Normalized central moment 02 Hu moment 1 Hu moment 2 Get the contour that defines the blob The contour of the blob Get the blob label The minimum bounding box of the blob Get the Blob Moments The centroid of the blob The number of pixels in this blob Pointer to the blob Implicit operator for IntPtr The CvBlob The unmanaged pointer for this object Wrapper for the CvBlob detection functions. The Ptr property points to the label image of the cvb::cvLabel function. Algorithm based on the paper "A linear-time component-labeling algorithm using contour tracing technique" of Fu Chang, Chun-Jen Chen and Chi-Jen Lu. Detect blobs from input image. The input image The storage for the detected blobs Number of pixels that have been labeled. Calculates mean color of a blob in an image. The blob. The original image Average color Blob rendering type Render each blob with a different color. Render centroid. Render bounding box. Render angle. Print blob data to log out. Print blob data to std out. The default rendering type Draw the blobs on the image The binary mask. The blobs. Drawing type. The alpha value. 1.0 for solid color and 0.0 for transparent The image with the blobs drawn Get the binary mask for the blobs listed in the CvBlobs The blobs The binary mask for the specific blobs Release all the unmanaged memory associated with this Blob detector CvBlobs Create a new CvBlobs Release all the unmanaged resources used by this CvBlobs Filter blobs by area. Those blobs whose areas are not in range will be erased from the input list of blobs. Minimum area Maximum area Adds the specified label and blob to the dictionary. The label of the blob The blob Determines whether the CvBlobs contains the specified label. The label (key) to be located True if the CvBlobs contains an element with the specific label Get a collection containing the labels in the CvBlobs Removes the blob with the specific label The label of the blob True if the element is successfully found and removed; otherwise, false. Gets the blob associated with the specified label. The blob label When this method returns, contains the blob associated with the specified label, if the label is found; otherwise, null. This parameter is passed uninitialized. True if the blobs contains a blob with the specific label; otherwise, false Get a collection containing the blobs in the CvBlobs. Get the blob with the specific label. Set function is not implemented The label for the blob Adds the specified label and blob to the CvBlobs. The structure representing the label and blob to add to the CvBlobs Removes all keys and values Determines whether the CvBlobs contains a specific label and CvBlob. The label and blob to be located True if the specific label and blob is found in the CvBlobs; otherwise, false. 
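A sketch of labeling blobs in a binary mask with the detector above; the types live in Emgu.CV.Cvb, and Detect takes a grayscale image plus a CvBlobs collection to fill.

using Emgu.CV;
using Emgu.CV.Cvb;
using Emgu.CV.Structure;

using (Image<Gray, byte> binary = new Image<Gray, byte>("mask.png"))
using (CvBlobs blobs = new CvBlobs())
using (CvBlobDetector detector = new CvBlobDetector())
{
    uint labeledPixels = detector.Detect(binary, blobs); // number of labeled pixels
    blobs.FilterByArea(100, 10000);                      // erase out-of-range blobs
    foreach (var pair in blobs)
        System.Console.WriteLine("blob " + pair.Key + " at " + pair.Value.Centroid);
}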
Copies the elements to the array, starting at the specific arrayIndex. The one-dimensional array that is the destination of the elements copied from the CvBlobs. The array must have zero-based indexing. The zero-based index in the array at which copying begins. Gets the number of label/Blob pairs contained in the collection Always false Removes a key and value from the dictionary. The structure representing the key and value to be removed True if the key and value are successfully found and removed; otherwise false. Returns an enumerator that iterates through the collection. An enumerator that can be used to iterate through the collection Returns a pointer to CvBlobs Pointer to CvBlobs CvTrack Track identification number Label assigned to the blob related to this track X min X max Y min Y max Get the minimum bounding rectangle for this track Centroid Indicates how many frames the object has been in the scene Indicates the number of frames the track has been active since the last inactive period. Indicates the number of frames the track has been missing. Compares CvTrack for equality The other track to compare with True if the two CvTrack are equal; otherwise false. Blobs tracking Tracking based on: A. Senior, A. Hampapur, Y-L Tian, L. Brown, S. Pankanti, R. Bolle. Appearance Models for Occlusion Handling. Second International workshop on Performance Evaluation of Tracking and Surveillance Systems & CVPR'01. December, 2001. (http://www.research.ibm.com/peoplevision/PETS2001.pdf) Create a new CvTracks Release all the unmanaged resources used by this CvTracks Updates list of tracks based on current blobs. List of blobs Distance Max distance to determine when a track and a blob match Inactive Max number of frames a track can be inactive Active If a track becomes inactive but it has been active less than thActive frames, the track will be deleted. Adds the specified id and track to the dictionary. The id of the track The track Determines whether the CvTracks contains the specified id. The id (key) to be located True if the CvTracks contains an element with the specific id Get a collection containing the ids in the CvTracks. Removes the track with the specific id The id of the track True if the element is successfully found and removed; otherwise, false. Gets the track associated with the specified id. The track id When this method returns, contains the track associated with the specified id, if the id is found; otherwise, an empty track. This parameter is passed uninitialized. True if the tracks contains a track with the specific id; otherwise, false Get a collection containing the tracks in the CvTracks. Get or Set the Track with the specific id. The id of the Track Adds the specified id and track to the CvTracks. The structure representing the id and track to add to the CvTracks Removes all keys and values Determines whether the CvTracks contains a specific id and CvTrack. The id and CvTrack to be located True if the element is found in the CvTracks; otherwise, false. Copies the elements to the array, starting at the specific arrayIndex. The one-dimensional array that is the destination of the elements copied from the CvTracks. The array must have zero-based indexing. The zero-based index in the array at which copying begins. Gets the number of id/track pairs contained in the collection. Always false. Removes a key and value from the dictionary. The structure representing the key and value to be removed True if the key and value are successfully found and removed; otherwise false. Returns an enumerator that iterates through the collection. 
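Feeding per-frame blob detections into CvTracks; the Update parameters map onto the distance/inactive/active thresholds documented above (the numeric choices here are arbitrary).

using Emgu.CV.Cvb;

using (CvTracks tracks = new CvTracks())
using (CvBlobs blobs = new CvBlobs())
{
    // per frame: detect blobs (see the blob sketch above), then match them to tracks
    tracks.Update(blobs, 200.0, 5, 10);
    // 200.0 -> max blob/track matching distance
    // 5     -> max frames a track may remain inactive
    // 10    -> tracks active fewer than this many frames are deleted once inactive
}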
An enumerator that can be used to iterate through the collection Returns a pointer to CvTracks Pointer to CvTracks Defines a Bgr (Blue Green Red) color The MCvScalar representation of the color intensity Create a BGR color using the specific values The blue value for this color The green value for this color The red value for this color Create a Bgr color using the System.Drawing.Color System.Drawing.Color Get or set the intensity of the blue color channel Get or set the intensity of the green color channel Get or set the intensity of the red color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Bgra (Blue Green Red Alpha) color The MCvScalar representation of the color intensity Create a BGRA color using the specific values The blue value for this color The green value for this color The red value for this color The alpha value for this color Get or set the intensity of the blue color channel Get or set the intensity of the green color channel Get or set the intensity of the red color channel Get or set the intensity of the alpha color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Gray color The MCvScalar representation of the color intensity Create a Gray color with the given intensity The intensity for this color The intensity of the gray color The intensity of the gray color Returns the hash code for this color the hash code Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Hls (Hue Lightness Saturation) color The MCvScalar representation of the color intensity Create a Hls color using the specific values The hue value for this color ( 0 < hue < 180 ) The saturation for this color The lightness for this color Get or set the intensity of the hue color channel ( 0 < hue < 180 ) Get or set the intensity of the lightness color channel Get or set the intensity of the saturation color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a HSV (Hue Saturation Value) color The MCvScalar representation of the color intensity Create an HSV color using the specific values The hue value for this color ( 0 < hue < 180 ) The saturation value for this color The value for this color Get or set the intensity of the hue color channel ( 0 < hue < 180 ) Get or set the intensity of the saturation color channel Get or set the intensity of the value color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a CIE Lab color The MCvScalar representation of the color intensity Create a CIE Lab color using the specific values The z value for this color The y value for this color The x value 
for this color Get or set the intensity of the x color channel Get or set the intensity of the y color channel Get or set the intensity of the z color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a CIE Luv color The MCvScalar representation of the color intensity Create a CIE Luv color using the specific values The z value for this color The y value for this color The x value for this color The intensity of the x color channel The intensity of the y color channel The intensity of the z color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Rgb (Red Green Blue) color The MCvScalar representation of the color intensity Create an RGB color using the specific values The blue value for this color The green value for this color The red value for this color Create a Rgb color using the System.Drawing.Color System.Drawing.Color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Bgr565 (Blue Green Red) color The MCvScalar representation of the color intensity Create a Bgr565 color using the specific values The blue value for this color The green value for this color The red value for this color Create a Bgr565 color using the System.Drawing.Color System.Drawing.Color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Rgba (Red Green Blue Alpha) color The MCvScalar representation of the color intensity Create an RGBA color using the specific values The blue value for this color The green value for this color The red value for this color The alpha value for this color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Get or set the intensity of the alpha color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Xyz color (CIE XYZ.Rec 709 with D65 white point) The MCvScalar representation of the color intensity Create a Xyz color using the specific values The z value for this color The y value for this color The x value for this color Get or set the intensity of the z color channel Get or set the intensity of the y color channel Get or set the intensity of the x color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the 
dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color Defines a Ycc color (YCrCb JPEG) The MCvScalar representation of the color intensity Create a Ycc color using the specific values The Y value for this color The Cr value for this color The Cb value for this color Get or set the intensity of the Y color channel Get or set the intensity of the Cr color channel Get or set the intensity of the Cb color channel Return true if the two colors are equal The other color to compare with True if the two colors are equal Get the dimension of this color Get or Set the equivalent MCvScalar value Represent this color as a String The string representation of this color A line segment A point on the line Another point on the line A point on the line Another point on the line Create a line segment with the specific starting point and end point The first point on the line segment The second point on the line segment The direction of the line, the norm of which is 1 Determine which side of the line the 2D point is at the point 1 if on the right hand side; 0 if on the line; -1 if on the left hand side; Get the exterior angle between this line and the other line The other line The exterior angle between the two lines Get the length of the line segment A line segment A point on the line Another point on the line A point on the line Another point on the line Create a line segment with the specific start point and end point The first point on the line segment The second point on the line segment Get the length of the line segment The direction of the line, the norm of which is 1 Obtain the Y value from the X value using first degree interpolation The X value The Y value Determine which side of the line the 2D point is at the point 1 if on the right hand side; 0 if on the line; -1 if on the left hand side; Get the exterior angle between this line and the other line The other line The exterior angle between the two lines A 3D line segment A point on the line Another point on the line A point on the line Another point on the line Create a line segment with the specific start point and end point The first point on the line segment The second point on the line segment Get the length of the line segment A circle Create a circle with the specific center and radius The center of this circle The radius of this circle Get or Set the center of the circle The radius of the circle The area of the circle Compare this circle with the other circle The other circle to be compared True if the two circles are equal A point with Bgr color information The position in meters The blue color The green color The red color A 2D cross The center of this cross The size of this cross Construct a cross The center of the cross The width of the cross The height of the cross Get the horizontal line segment of this cross Get the vertical line segment of this cross A solid resembling a cube, with the rectangular faces not all equal; a rectangular parallelepiped. 
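The color structs above plug into Image<TColor, TDepth> as its color type parameter, and the geometry structs carry their own helpers; a combined sketch (Convert wraps the underlying cvCvtColor call):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Color conversion between the color types defined above.
using (Image<Bgr, byte> bgr = new Image<Bgr, byte>("photo.jpg"))
using (Image<Hsv, byte> hsv = bgr.Convert<Hsv, byte>())
{
    double hue = hsv[10, 20].Hue;  // row 10, column 20; 0 <= hue < 180 as documented
}

// Line segment helpers documented above.
LineSegment2D a = new LineSegment2D(new Point(0, 0), new Point(10, 0));
LineSegment2D b = new LineSegment2D(new Point(0, 0), new Point(10, 10));
int side = a.Side(new Point(5, 5));          // 1, 0 or -1 per the documentation
double angle = a.GetExteriorAngleDegree(b);  // exterior angle between the two lines
double length = a.Length;                    // 10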
The coordinate of the upper corner The coordinate of the lower corner Check if the specific point is in the Cuboid The point to be checked True if the point is in the cuboid Get the centroid of this cuboid This is used to hold the sizes of the OpenCV structures The size of CvPoint The size of CvPoint2D32f The size of CvPoint3D32f The size of CvSize The size of CvSize2D32f The size of CvScalar The size of CvRect The size of CvBox2D The size of CvMat The size of CvMatND The size of CvTermCriteria The size of IplImage An ellipse The RotatedRect representation of this ellipse Create an ellipse with specific parameters The center of the ellipse The width and height of the ellipse The rotation angle in radians for the ellipse Create an ellipse from the specific RotatedRect The RotatedRect representation of this ellipse Result of cvHaarDetectObjects Bounding rectangle for the object (average rectangle of a group) Number of neighbor rectangles in the group Managed structure equivalent to CvMat CvMat signature (CV_MAT_MAGIC_VAL), element type and flags full row length in bytes underlying data reference counter Header reference count data pointers number of rows number of columns Width Height Get the number of channels Constants used by the MCvMat structure Offset of roi Managed structure equivalent to CvMatND CvMatND signature (CV_MATND_MAGIC_VAL), element type and flags number of array dimensions underlying data reference counter Header reference count data pointers pairs (number of elements, distance between elements in 
bytes) for every dimension The MatND Dimension Number of elements in this dimension Distance between elements in bytes for this dimension spatial and central moments spatial moments (m00, m10, m01, m20, m11, m02, m30, m21, m12, m03) central moments (mu20, mu11, mu02, mu30, mu21, mu12, mu03) m00 != 0 ? 1/sqrt(m00) : 0 The Gravity Center of this Moment Retrieves the spatial moment, which in case of image moments is defined as: M_{x_order,y_order}=sum_{x,y}(I(x,y) * x^{x_order} * y^{y_order}) where I(x,y) is the intensity of the pixel (x, y). x order of the retrieved moment, x_order >= 0 y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The spatial moment of the specific order Retrieves the central moment, which in case of image moments is defined as: mu_{x_order,y_order}=sum_{x,y}(I(x,y)*(x-x_c)^{x_order} * (y-y_c)^{y_order}), where x_c=M10/M00, y_c=M01/M00 - coordinates of the gravity center x order of the retrieved moment, x_order >= 0. y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The central moment Retrieves normalized central moment, which in case of image moments is defined as: eta_{x_order,y_order}=mu_{x_order,y_order} / M00^{(y_order+x_order)/2+1}, where mu_{x_order,y_order} is the central moment x order of the retrieved moment, x_order >= 0. y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The normalized central moment Get the HuMoments The Hu moments computed from this moment Structure contains the bounding box and confidence level for detected object Bounding box for a detected object Confidence level The class identifier Managed Structure equivalent to CvPoint2D64f x-coordinate y-coordinate Create a MCvPoint2D64f structure with the specific x and y coordinates x-coordinate y-coordinate Compute the sum of two points The first point to be added The second point to be added The sum of two points Subtract one point from the other The first point The point to be subtracted The difference of the two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Returns true if the two points are equal. 
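A sketch of computing moments and reading them back through the accessors documented above; CvInvoke.Moments is assumed as the entry point that fills the managed structure (the ImreadModes enum name varies across Emgu versions).

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat binary = CvInvoke.Imread("mask.png", ImreadModes.Grayscale))
{
    MCvMoments m = CvInvoke.Moments(binary, true);  // treat input as binary
    double area = m.GetSpatialMoment(0, 0);         // M00
    MCvPoint2D64f center = m.GravityCenter;         // (M10/M00, M01/M00)
    double mu20 = m.GetCentralMoment(2, 0);
    double nu20 = m.GetNormalizedCentralMoment(2, 0);
}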
Managed Structure equivalent to CvPoint3D32f x-coordinate y-coordinate z-coordinate Create a MCvPoint3D32f structure with the specific x, y and z coordinates x-coordinate y-coordinate z-coordinate Return the cross product of two 3D points the other 3D point The cross product of the two 3D points Return the dot product of two 3D points the other 3D point The dot product of the two 3D points Return the norm of this 3D point Get the normalized point The implicit operator to convert MCvPoint3D32f to MCvPoint3D64f The point to be converted The converted point Subtract one point from the other The point to subtract from The value to be subtracted The subtraction of one point from the other Compute the sum of two 3D points The first point to be added The second point to be added The sum of the two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Return true if the locations of the two points are equal The other point to compare with True if the locations of the two points are equal Managed Structure equivalent to CvPoint3D64f x-coordinate y-coordinate z-coordinate Create a MCvPoint3D64f structure with the specific x, y and z coordinates x-coordinate y-coordinate z-coordinate Return the cross product of two 3D points the other 3D point The cross product of the two 3D points Return the dot product of two 3D points the other 3D point The dot product of the two 3D points Compute the sum of two 3D points The first point to be added The second point to be added The sum of the two points Subtract one point from the other The first point The point to be subtracted The difference of the two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Check if the other point equals this point The point to be compared True if the two points are equal Managed structure equivalent to CvScalar The scalar value The scalar value The scalar value The scalar value The scalar values as a vector (of size 4) The scalar values as an array Create a new MCvScalar structure using the specific values v0 Create a new MCvScalar structure using the specific values v0 v1 Create a new MCvScalar structure using the specific values v0 v1 v2 Create a new MCvScalar structure using the specific values v0 v1 v2 v3 Return the code to generate this MCvScalar from a specific language The programming language to generate code from The code to generate this MCvScalar from the specific language Return true if the two MCvScalar are equal The other MCvScalar to compare with True if the two MCvScalar are equal Managed structure equivalent to CvSlice Start index End index Create a new MCvSlice using the specific start and end index start index end index Get the equivalent of CV_WHOLE_SEQ Managed structure equivalent to CvTermCriteria CV_TERMCRIT value Maximum iteration Epsilon Create the termination criteria using the constraint of maximum iteration The maximum number of iterations allowed Create the termination criteria using only the constraint of epsilon The epsilon value Create the termination criteria using the constraint of maximum iteration as well as epsilon The maximum number of iterations allowed The epsilon value OpenCV's DMatch structure Query descriptor index Train descriptor index Train image index Distance
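The 3D point operations documented above (cross product, dot product, and the arithmetic operators) can be exercised directly; a small sketch using only the MCvPoint3D32f struct:

```csharp
using Emgu.CV.Structure;

class PointDemo
{
    static void Main()
    {
        MCvPoint3D32f a = new MCvPoint3D32f(1, 0, 0);
        MCvPoint3D32f b = new MCvPoint3D32f(0, 1, 0);

        MCvPoint3D32f cross = a.CrossProduct(b); // (0, 0, 1): perpendicular to a and b
        double dot = a.DotProduct(b);            // 0: the vectors are orthogonal
        MCvPoint3D32f sum = a + b;               // operator + described above
        MCvPoint3D32f scaled = a * 2.0f;         // scale operator

        System.Console.WriteLine($"{cross.Z} {dot} {sum.X} {scaled.X}");
    }
}
```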
Managed structure equivalent to IplImage sizeof(IplImage) version (=0) Most OpenCV functions support 1, 2, 3 or 4 channels ignored by OpenCV pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U, IPL_DEPTH_16S, IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV 0 - interleaved color channels, 1 - separate color channels. cvCreateImage can only create interleaved images 0 - top-left origin, 1 - bottom-left origin (Windows bitmaps style) Alignment of image rows (4 or 8). OpenCV ignores it and uses widthStep instead image width in pixels image height in pixels image ROI. When it is not NULL, this specifies the image region to process must be NULL in OpenCV ditto ditto image data size in bytes (=image->height*image->widthStep in case of interleaved data) pointer to aligned image data size of aligned image row in bytes border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV pointer to the very origin of image data (not necessarily aligned) - it is needed for correct image deallocation OpenCV's KeyPoint class The location of the keypoint Size of the keypoint Orientation of the keypoint Response of the keypoint octave class id The range used to set up the histogram Return the full range. Create a range of the specific min/max value The start value of this range The max value of this range The start value of this range The end value of this range Return true if the two Ranges are equal The other Range to compare with True if the two Ranges are equal The range used to set up the histogram Create a range of the specific min/max value The min value of this range The max value of this range The minimum value of this range The maximum value of this range Return true if the two RangeF are equal The other RangeF to compare with True if the two RangeF are equal Managed structure equivalent to CvBox2D The center of the box The size of the box The angle between the horizontal axis and the first side (i.e. width) in degrees. Positive value means counter-clockwise rotation Create a RotatedRect structure with the specific parameters The center of the box The size of the box The angle of the box in degrees. Positive value means counter-clockwise rotation Shift the box by the specific amount The x value to be offset The y value to be offset Represent an uninitialized RotatedRect Get the 4 vertices of this Box.
The vertices of this RotatedRect Get the minimum enclosing rectangle for this Box The minimum enclosing rectangle for this Box Returns true if the two boxes are equal The other box to compare with True if the two boxes are equal Convert a RectangleF to RotatedRect The rectangle The equivalent RotatedRect A 2D triangle One of the vertices of the triangle One of the vertices of the triangle One of the vertices of the triangle Create a triangle using the specific vertices The first vertex The second vertex The third vertex Get the area of this triangle Returns the centroid of this triangle Compare two triangles and return true if they are equal The other triangle to compare with True if the two triangles are equal, false otherwise Get the vertices of this triangle The vertices of this triangle A 3D triangle One of the vertices of the triangle One of the vertices of the triangle One of the vertices of the triangle Get the area of this triangle Get the normal of this triangle Returns the centroid of this triangle Create a triangle using the specific vertices The first vertex The second vertex The third vertex Compare two triangles and return true if they are equal The other triangle to compare with True if the two triangles are equal, false otherwise Attribute used to specify color information The code which is used for color conversion The code which is used for color conversion The code which is used for color conversion A color type The equivalent MCvScalar value Get the dimension of the color type The base class for algorithms that align images of the same scene with different exposures The pointer to the native AlignExposures object Aligns images. vector of input images vector of aligned images vector of exposure time values for each image 256x1 matrix with inverse camera response function for each pixel value, it should have the same number of channels as images. Reset the pointer that points to the AlignExposures object. This algorithm converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations. Create an AlignMTB object logarithm to the base 2 of maximal shift in each dimension. Values of 5 and 6 are usually good enough (31 and 63 pixels shift respectively). range for exclusion bitmap that is constructed to suppress noise around the median value. if true cuts images, otherwise fills the new regions with zeros. Release the unmanaged memory associated with this AlignMTB object The base class for camera response calibration algorithms. The pointer to the CalibrateCRF object Recovers inverse camera response. Vector of input images 256x1 matrix with inverse camera response function Vector of exposure time values for each image Reset the pointer that points to the CalibrateCRF object. Inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother. Creates CalibrateDebevec object. Number of pixel locations to use Smoothness term weight. Greater values produce smoother results, but can alter the response. If true, sample pixel locations are chosen at random; otherwise they form a rectangular grid. Release the unmanaged memory associated with this CalibrateCRF object Inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels.
Creates CalibrateRobertson object. maximal number of Gauss-Seidel solver iterations. Get the difference between the results of two successive steps of the minimization. Release the unmanaged memory associated with this CalibrateCRF object The base class for algorithms that can merge an exposure sequence into a single image. The pointer to the unmanaged MergeExposure object Merges images. Vector of input images Result image Vector of exposure time values for each image 256x1 matrix with inverse camera response function for each pixel value, it should have the same number of channels as images. Reset the native pointer to the MergeExposure object The resulting HDR image is calculated as a weighted average of the exposures considering exposure values and camera response. Creates MergeDebevec object. Release the MergeDebevec object Pixels are weighted using contrast, saturation and well-exposedness measures, then images are combined using Laplacian pyramids. The resulting image weight is constructed as a weighted average of contrast, saturation and well-exposedness measures. The resulting image doesn't require tonemapping and can be converted to an 8-bit image by multiplying by 255, but it's recommended to apply gamma correction and/or linear tonemapping. Creates MergeMertens object. contrast measure weight. saturation measure weight well-exposedness measure weight Merges images. Vector of input images Result image Release the unmanaged memory associated with this MergeMertens object The resulting HDR image is calculated as a weighted average of the exposures considering exposure values and camera response Creates MergeRobertson object. Release the unmanaged memory associated with this MergeRobertson object Base class for tonemapping algorithms - tools that are used to map HDR images to 8-bit range. The pointer to the unmanaged Tonemap object The pointer to the unmanaged Algorithm object The pointer to the unmanaged Algorithm object Default constructor that creates an empty Tonemap The pointer to the unmanaged object The pointer to the tonemap object Creates a simple linear mapper with gamma correction. positive value for gamma correction. Gamma value of 1.0 implies no correction, gamma equal to 2.2f is suitable for most displays. Generally gamma > 1 brightens the image and gamma < 1 darkens it. Tonemaps image. Source image - 32-bit 3-channel Mat destination image - 32-bit 3-channel Mat with values in [0, 1] range Release the unmanaged memory associated with this Tonemap Positive value for gamma correction. Gamma value of 1.0 implies no correction, gamma equal to 2.2f is suitable for most displays. Generally gamma > 1 brightens the image and gamma < 1 darkens it. Adaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in logarithmic domain. Since it's a global operator the same function is applied to all the pixels, it is controlled by the bias parameter. Creates TonemapDrago object. gamma value for gamma correction. positive saturation enhancement value. 1.0 preserves saturation, values greater than 1 increase saturation and values less than 1 decrease it. value for bias function in [0, 1] range. Values from 0.7 to 0.9 usually give best results, default value is 0.85. Release the unmanaged memory associated with this TonemapDrago Positive saturation enhancement value. 1.0 preserves saturation, values greater than 1 increase saturation and values less than 1 decrease it. Value for bias function in [0, 1] range. Values from 0.7 to 0.9 usually give best results, default value is 0.85.
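The calibration, merge, and tonemap classes above compose into the usual HDR pipeline: recover the inverse camera response with CalibrateDebevec, merge the bracketed exposures with MergeDebevec, then compress the result with a tonemapper. A minimal sketch, assuming the Emgu.CV 3.x wrappers of the photo module; the file names, exposure times, and the CalibrateDebevec argument list (samples, lambda, random, in the order described above) are assumptions:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

class HdrDemo
{
    static void Main()
    {
        // Three bracketed exposures of the same scene (placeholder names)
        // and their exposure times in seconds (a 3x1 CV_32F matrix).
        using (VectorOfMat images = new VectorOfMat(
            CvInvoke.Imread("exposure_short.jpg"),
            CvInvoke.Imread("exposure_mid.jpg"),
            CvInvoke.Imread("exposure_long.jpg")))
        using (Matrix<float> times = new Matrix<float>(new float[] { 1f / 500f, 1f / 60f, 1f / 8f }))
        using (Mat response = new Mat())  // 256x1 inverse camera response
        using (Mat hdr = new Mat())       // 32-bit float radiance map
        using (Mat ldr = new Mat())       // tonemapped result in [0, 1]
        using (CalibrateDebevec calibrate = new CalibrateDebevec(70, 10f, false))
        using (MergeDebevec merge = new MergeDebevec())
        using (TonemapDrago tonemap = new TonemapDrago(1.0f, 1.0f, 0.85f))
        {
            calibrate.Process(images, response, times);   // recover the response curve
            merge.Process(images, hdr, times, response);  // weighted-average merge
            tonemap.Process(hdr, ldr);                    // map HDR to [0, 1]
            ldr.ConvertTo(ldr, DepthType.Cv8U, 255.0);    // scale to 8-bit for saving
            CvInvoke.Imwrite("hdr_result.png", ldr);
        }
    }
}
```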
This algorithm decomposes an image into two layers: a base layer and a detail layer using a bilateral filter, and compresses contrast of the base layer, thus preserving all the details. This implementation uses the regular bilateral filter from opencv. Creates TonemapDurand object. gamma value for gamma correction. resulting contrast on logarithmic scale, i.e. log(max / min), where max and min are maximum and minimum luminance values of the resulting image. saturation enhancement value. bilateral filter sigma in color space bilateral filter sigma in coordinate space Release the unmanaged memory associated with this TonemapDurand Positive saturation enhancement value. 1.0 preserves saturation, values greater than 1 increase saturation and values less than 1 decrease it. Resulting contrast on logarithmic scale, i.e. log(max / min), where max and min are maximum and minimum luminance values of the resulting image. Bilateral filter sigma in color space Bilateral filter sigma in coordinate space This is a global tonemapping operator that models the human visual system. The mapping function is controlled by the adaptation parameter, which is computed using light adaptation and color adaptation. Creates TonemapReinhard object. gamma value for gamma correction result intensity in [-8, 8] range. Greater intensity produces brighter results. light adaptation in [0, 1] range. If 1 adaptation is based only on pixel value, if 0 it's global, otherwise it's a weighted mean of these two cases. chromatic adaptation in [0, 1] range. If 1 channels are treated independently, if 0 adaptation level is the same for each channel. Release the unmanaged memory associated with this TonemapReinhard Result intensity in [-8, 8] range. Greater intensity produces brighter results. Light adaptation in [0, 1] range. If 1 adaptation is based only on pixel value, if 0 it is global, otherwise it is a weighted mean of these two cases. Chromatic adaptation in [0, 1] range. If 1 channels are treated independently, if 0 adaptation level is the same for each channel. This algorithm transforms an image to contrast using gradients on all levels of a Gaussian pyramid, transforms contrast values to HVS response and scales the response. After this the image is reconstructed from new contrast values. Creates TonemapMantiuk object gamma value for gamma correction. contrast scale factor. HVS response is multiplied by this parameter, thus compressing dynamic range. Values from 0.6 to 0.9 produce best results. saturation enhancement value. Release the unmanaged memory associated with this TonemapMantiuk Saturation enhancement value. Contrast scale factor. HVS response is multiplied by this parameter, thus compressing dynamic range. Values from 0.6 to 0.9 produce best results. Interface for all widgets Get the pointer to the widget object Interface for all widget3D Get the pointer to the widget3D object Interface for all widget2D Get the pointer to the widget2D object Represents a 3D visualizer window. Create a new 3D visualizer window The name of the window Show a widget in the window A unique id for the widget. The widget to be displayed in the window. Pose of the widget. Removes a widget from the window. The id of the widget that will be removed. Sets pose of a widget in the window. The id of the widget whose pose will be set. The new pose of the widget. The window renders and starts the event loop. Starts the event loop for a given time. Amount of time in milliseconds for the event loop to keep running. If true, window renders.
Returns whether the event loop has been stopped. Set the background color Release the unmanaged memory associated with this Viz3d object This 3D Widget defines an arrow. Constructs a WArrow. Start point of the arrow. End point of the arrow. Thickness of the arrow. Thickness of the arrow head is also adjusted accordingly. Color of the arrow. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WArrow object This 3D Widget defines a circle. Constructs a default planar circle centred at the origin with plane normal along the z-axis. Radius of the circle. Thickness of the circle. Color of the circle. Constructs a repositioned planar circle. Radius of the circle. Center of the circle. Normal of the plane in which the circle lies. Thickness of the circle. Color of the circle. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCircle object This 3D Widget defines a point cloud. Constructs a WCloud. Set of points which can be of type: CV_32FC3, CV_32FC4, CV_64FC3, CV_64FC4. Set of colors. It has to be of the same size as the cloud. Constructs a WCloud. Set of points which can be of type: CV_32FC3, CV_32FC4, CV_64FC3, CV_64FC4. A single Color for the whole cloud. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCloud This 3D Widget defines a cone. Constructs a default cone oriented along the x-axis with the center of its base located at the origin. Length of the cone. Radius of the cone. Resolution of the cone. Color of the cone. Constructs a repositioned planar cone. Radius of the cone. Center of the cone base. Tip of the cone. Resolution of the cone. Color of the cone. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCone object This 3D Widget represents a coordinate system. Constructs a WCoordinateSystem. Determines the size of the axes. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCoordinateSystem object This 3D Widget defines a cube. Constructs a WCube. Specifies the minimum point of the bounding box. Specifies the maximum point of the bounding box. If true, the cube is represented as a wireframe. Color of the cube. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCube object This 3D Widget defines a cylinder. Constructs a WCylinder. A point1 on the axis of the cylinder. A point2 on the axis of the cylinder. Radius of the cylinder. Resolution of the cylinder. Color of the cylinder. Get the pointer to the Widget3D object Get the pointer to the Widget object Release the unmanaged memory associated with this WCylinder object This 2D Widget represents a text overlay. Constructs a WText. Text content of the widget. Position of the text. Font size. Color of the text. Get the pointer to the widget2D object Get the pointer to the widget object. Release the unmanaged memory associated with this WText object
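The Viz3d window and the widgets above combine in a few lines. A sketch, assuming an Emgu build with the viz module enabled; the two-argument ShowWidget call and the WCube/WCoordinateSystem constructor parameter orders are assumptions based on the descriptions above (some builds also take the optional pose argument mentioned there):

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

class VizDemo
{
    static void Main()
    {
        using (Viz3d window = new Viz3d("demo"))
        using (WCoordinateSystem axes = new WCoordinateSystem(1.0))
        using (WCube cube = new WCube(
            new MCvPoint3D64f(-0.5, -0.5, -0.5),  // minimum point of the bounding box
            new MCvPoint3D64f(0.5, 0.5, 0.5),     // maximum point of the bounding box
            true,                                  // render as wireframe
            new MCvScalar(0, 255, 0)))             // color
        {
            window.ShowWidget("axes", axes);
            window.ShowWidget("cube", cube);
            window.Spin(); // render and run the event loop until the window is closed
        }
    }
}
```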
A collection of points Fit an ellipse to the points collection The points to be fitted An ellipse Convert a series of points to LineSegment2D the array of points if true, the last line segment is defined by the last point of the array and the first point of the array array of LineSegment2D Convert a series of System.Drawing.Point to LineSegment2D the array of points if true, the last line segment is defined by the last point of the array and the first point of the array array of LineSegment2D Find the bounding rectangle for the specific array of points The collection of points The bounding rectangle for the array of points Re-project pixels on a 1-channel disparity map to an array of 3D points. Disparity map The re-projection 4x4 matrix, can be arbitrary, e.g. the one computed by cvStereoRectify The reprojected 3D points Generate a random point cloud around the ellipse. The region where the point cloud will be generated. The axes of the ellipse correspond to the std of the random point cloud. The number of points to be generated A random point cloud around the ellipse Interface to the BackgroundSubtractor class Pointer to the native BackgroundSubtractor object A static class that provides extension methods for BackgroundSubtractor Update the background model The image that is used to update the background model Use -1 for default The background subtractor The output foreground mask Computes a background image. The output background image The background subtractor Sometimes the background image can be very blurry, as it contains the average background statistics. K-nearest neighbors - based Background/Foreground Segmentation Algorithm. Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create a K-nearest neighbors - based Background/Foreground Segmentation Algorithm. Length of the history. Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update. If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false. Release all the unmanaged memory associated with this background model. The number of last frames that affect the background model The number of data samples in the background model The threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to a data sample. The number of neighbours, the k in the kNN. K is the number of samples that need to be within dist2Threshold in order to decide that a pixel is matching the kNN background model. If true, the algorithm detects shadows and marks them. Shadow value is the value used to mark shadows in the foreground mask. Default value is 127. Value 0 in the mask always means background, 255 means foreground. A shadow is detected if a pixel is a darker version of the background. The shadow threshold (Tau in the paper) is a threshold defining how much darker the shadow can be. Tau = 0.5 means that if a pixel is more than twice as dark as the background, it is not a shadow. The class implements the following algorithm: "Improved adaptive Gaussian mixture model for background subtraction" Z.Zivkovic, International Conference on Pattern Recognition, UK, August, 2004.
http://www.zoranz.net/Publications/zivkovic2004ICPR.pdf Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create an "Improved adaptive Gaussian mixture model for background subtraction". The length of the history. The maximum allowed number of mixture components. The actual number is determined dynamically per pixel. If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false. Release all the unmanaged memory associated with this background model. The number of last frames that affect the background model If true, the algorithm detects shadows and marks them. Shadow value is the value used to mark shadows in the foreground mask. Default value is 127. Value 0 in the mask always means background, 255 means foreground. A shadow is detected if a pixel is a darker version of the background. The shadow threshold (Tau in the paper) is a threshold defining how much darker the shadow can be. Tau = 0.5 means that if a pixel is more than twice as dark as the background, it is not a shadow. The number of Gaussian components in the background model If a foreground pixel keeps a semi-constant value for about backgroundRatio * history frames, it's considered background and added to the model as a center of a new component. It corresponds to the TB parameter in the paper. The main threshold on the squared Mahalanobis distance to decide if the sample is well described by the background model or not. Related to Cthr from the paper. The variance threshold for the pixel-model match used for new mixture component generation. Threshold for the squared Mahalanobis distance that helps decide when a sample is close to the existing components (corresponds to Tg in the paper). If a pixel is not close to any component, it is considered foreground or added as a new component. 3 sigma => Tg=3*3=9 is the default. A smaller Tg value generates more components. A higher Tg value may result in a small number of components but they can grow too large. The initial variance of each Gaussian component The minimum variance The maximum variance Dense Optical flow Gets the dense optical flow pointer. The dense optical flow pointer. Extension methods for IDenseOpticalFlow Calculates an optical flow. First 8-bit single-channel input image. Second input image of the same size and the same type as prev. Computed flow image that has the same size as prev and type CV_32FC2 The dense optical flow object Dual TV L1 Optical Flow Algorithm. Create Dual TV L1 Optical Flow. Release the unmanaged resources Gets the dense optical flow pointer. The pointer to the dense optical flow object. Return the pointer to the algorithm object Time step of the numerical scheme Weight parameter for the data term, attachment parameter Weight parameter for (u - v)^2, tightness parameter Coefficient for additional illumination variation term Number of scales used to create the pyramid of images Number of warpings per scale Stopping criterion threshold used in the numerical scheme, which is a trade-off between precision and running time Inner iterations (between outlier filtering) used in the numerical scheme Outer iterations (number of inner loops) used in the numerical scheme Use initial flow Step between scales (less than 1) Median filter kernel size (1 = no filter) (3 or 5) Class computing a dense optical flow using Gunnar Farneback's algorithm. Create a FarnebackOpticalFlow object Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is twice smaller than the previous
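A typical use of the background subtractors described above is a capture loop that feeds every frame into the model and displays the foreground mask. A minimal sketch, assuming the Emgu 3.x class layout (the capture class is named Capture in older builds, and the model-update call is exposed as the Update extension described above in some versions rather than Apply):

```csharp
using Emgu.CV;
using Emgu.CV.VideoSurveillance;

class Mog2Demo
{
    static void Main()
    {
        using (VideoCapture capture = new VideoCapture(0)) // default camera
        using (BackgroundSubtractorMOG2 subtractor =
            new BackgroundSubtractorMOG2(500, 16, true))   // history, varThreshold, shadows
        using (Mat frame = new Mat())
        using (Mat foreground = new Mat())
        {
            while (capture.Read(frame) && !frame.IsEmpty)
            {
                // Update the model and get the foreground mask
                // (0 = background, 127 = shadow, 255 = foreground).
                subtractor.Apply(frame, foreground);
                CvInvoke.Imshow("foreground", foreground);
                if (CvInvoke.WaitKey(30) >= 0) break;
            }
        }
    }
}
```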
The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used The averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field The number of iterations the algorithm does at each pyramid level Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, poly_n=5 or 7 Standard deviation of the Gaussian that is used to smooth derivatives that are used as a basis for the polynomial expansion. For poly_n=5 you can set poly_sigma=1.1, for poly_n=7 a good value would be poly_sigma=1.5 The operation flags Fast Pyramids Release the unmanaged resources Gets the dense optical flow pointer. The pointer to the dense optical flow object. Return the pointer to the algorithm object The class implements a standard Kalman filter. However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get extended Kalman filter functionality. Initializes a new instance of the class. Dimensionality of the state. Dimensionality of the measurement. Dimensionality of the control vector. Type of the created matrices that should be Cv32F or Cv64F Perform the predict operation using the optional control input The control. The predicted state. Updates the predicted state from the measurement. The measured system parameters Release the unmanaged resources Predicted state (x'(k)): x'(k)=A*x(k-1)+B*u(k) Corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k)) State transition matrix (A) Control matrix (B) (not used if there is no control) Measurement matrix (H) Process noise covariance matrix (Q) Measurement noise covariance matrix (R) Priori error estimate covariance matrix (P'(k)): P'(k)=A*P(k-1)*A^T + Q Kalman gain matrix (K(k)): K(k)=P'(k)*H^T*inv(H*P'(k)*H^T+R) Posteriori error estimate covariance matrix (P(k)): P(k)=(I-K(k)*H)*P'(k) Sparse Optical flow Gets the sparse optical flow pointer. The sparse optical flow pointer. Extension methods for ISparseOpticalFlow Calculates a sparse optical flow. The sparse optical flow First input image. Second input image of the same size and the same type as prevImg. Vector of 2D points for which the flow needs to be found. Output vector of 2D points containing the calculated new positions of input features in the second image. Output status vector. Each element of the vector is set to 1 if the flow for the corresponding features has been found. Otherwise, it is set to 0. Optional output vector that contains error response for each point (inverse confidence). The class can calculate an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. Create a SparsePyrLKOpticalFlow object size of the search window at each pyramid level. 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level), if set to 1, two levels are used, and so on; if pyramids are passed to input then the algorithm will use as many levels as the pyramids have but no more than maxLevel. specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon). operation flags the algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations, divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, allowing bad points to be removed for a performance boost.
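The Kalman filter members above follow the predict/correct cycle directly. A 1D constant-velocity sketch (the matrix contents and measurement value are illustrative; the Matrix&lt;T&gt;.Mat property and CopyTo calls are assumed to be available as in Emgu 3.x):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

class KalmanDemo
{
    static void Main()
    {
        // 1D constant-velocity model: state = [position; velocity],
        // measurement = position only.
        using (KalmanFilter kf = new KalmanFilter(2, 1, 0, DepthType.Cv32F))
        using (Matrix<float> a = new Matrix<float>(new float[,] { { 1, 1 }, { 0, 1 } }))
        using (Matrix<float> h = new Matrix<float>(new float[,] { { 1, 0 } }))
        using (Matrix<float> z = new Matrix<float>(new float[,] { { 3.2f } }))
        {
            a.Mat.CopyTo(kf.TransitionMatrix);   // A: position += velocity
            h.Mat.CopyTo(kf.MeasurementMatrix);  // H: we observe position only

            Mat prediction = kf.Predict();       // x'(k) = A*x(k-1) + B*u(k)
            Mat corrected = kf.Correct(z.Mat);   // x(k) = x'(k) + K(k)*(z(k) - H*x'(k))
            // corrected now holds the posteriori state estimate.
        }
    }
}
```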
Release the unmanaged resources Pointer to the unmanaged SparseOpticalFlow object Return the pointer to the algorithm object DIS optical flow algorithm. This class implements the Dense Inverse Search (DIS) optical flow algorithm. It includes three presets with preselected parameters to provide a reasonable trade-off between speed and quality. However, even the slowest preset is still relatively fast; use DeepFlow if you need better quality and don't care about speed. More details about the algorithm can be found at: Till Kroeger, Radu Timofte, Dengxin Dai, and Luc Van Gool. Fast optical flow using dense inverse search. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. Preset Ultra fast Fast Medium Create an instance of DIS optical flow algorithm. Algorithm preset Release the unmanaged memory associated with this Optical flow algorithm. Pointer to cv::Algorithm Pointer to native cv::DenseOpticalFlow Finest level of the Gaussian pyramid on which the flow is computed (zero level corresponds to the original image resolution). The final flow is obtained by bilinear upscaling. Size of an image patch for matching (in pixels). Normally, default 8x8 patches work well enough in most cases. Stride between neighbor patches. Must be less than patch size. Lower values correspond to higher flow quality. Maximum number of gradient descent iterations in the patch inverse search stage. Higher values may improve quality in some cases. Number of fixed point iterations of variational refinement per scale. Set to zero to disable variational refinement completely. Higher values will typically result in smoother and higher-quality flow. Weight of the smoothness term Weight of the color constancy term Weight of the gradient constancy term Whether to use mean-normalization of patches when computing patch distance. It is turned on by default as it typically provides a noticeable quality boost because of increased robustness to illumination variations. Turn it off if you are certain that your sequence doesn't contain any changes in illumination. Whether to use spatial propagation of good optical flow vectors. This option is turned on by default, as it tends to work better on average and can sometimes help recover from major errors introduced by the coarse-to-fine scheme employed by the DIS optical flow algorithm. Turning this option off can make the output flow field a bit smoother, however. The motion history class For help on using this class, take a look at the Motion Detection example The motion mask. Do not dispose this image. Create a motion history object In seconds, the duration of motion history you want to keep In seconds. Any change that happens over a time interval greater than this will not be considered In seconds. Any change that happens over a time interval smaller than this will not be considered. Create a motion history object In seconds, the duration of motion history you want to keep In seconds. Any change that happens over a time interval larger than this will not be considered In seconds. Any change that happens over a time interval smaller than this will not be considered.
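Putting the sparse Lucas-Kanade class above to work means seeding it with trackable points and calling the Calc extension described earlier. A sketch, assuming the Emgu 3.x signatures for GoodFeaturesToTrack and the SparsePyrLKOpticalFlow constructor (the parameter order is taken from the description above; the file names are placeholders):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

class LkDemo
{
    static void Main()
    {
        using (Mat prev = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale))
        using (Mat next = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale))
        using (VectorOfPointF prevPts = new VectorOfPointF())
        using (VectorOfPointF nextPts = new VectorOfPointF())
        using (VectorOfByte status = new VectorOfByte())
        using (VectorOfFloat error = new VectorOfFloat())
        using (SparsePyrLKOpticalFlow lk = new SparsePyrLKOpticalFlow(
            new Size(21, 21),              // search window at each pyramid level
            3,                             // 0-based maximal pyramid level
            new MCvTermCriteria(30, 0.01), // stop after 30 iterations or tiny motion
            LKFlowFlag.Default,
            1e-4))                         // minEigThreshold
        {
            // Seed the tracker with corners from the first frame.
            PointF[] corners = CvInvoke.GoodFeaturesToTrack(prev, 200, 0.01, 10);
            prevPts.Push(corners);

            lk.Calc(prev, next, prevPts, nextPts, status, error);
            // status[i] == 1 where feature i was tracked into the second frame.
        }
    }
}
```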
The start time of the motion history Update the motion history with the specific image and current timestamp The image to be added to history Update the motion history with the specific image and the specific timestamp The foreground of the image to be added to history The time when the image is captured Get a sequence of motion components A sequence of motion components Given a rectangular area of the motion, output the angle of the motion and the number of pixels that are considered to be motion pixels The rectangular area of the motion The orientation of the motion Number of motion pixels within the silhouette ROI The foreground mask used to calculate the motion info. Release unmanaged resources Release any images associated with this object DeepFlow optical flow algorithm implementation. Create an instance of DeepFlow optical flow algorithm. Release the unmanaged memory associated with this Object Pointer to the unmanaged cv::Algorithm Pointer to the unmanaged cv::DenseOpticalFlow PCAFlow algorithm. Creates an instance of PCAFlow Release the memory associated with this PCA Flow algorithm Pointer to cv::Algorithm Pointer to native cv::DenseOpticalFlow This class implements variational refinement of the input flow field, i.e. it uses the input flow to initialize the minimization of the following functional: E(U) = ∫_Ω [δ·Ψ(E_I) + γ·Ψ(E_G) + α·Ψ(E_S)] dx, where E_I, E_G, E_S are the color constancy, gradient constancy and smoothness terms respectively, and Ψ(s^2) = sqrt(s^2 + ε^2) is a robust penalizer to limit the influence of outliers. See: Thomas Brox, Andres Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In Computer Vision-ECCV 2004, pages 25–36. Springer, 2004. Create an instance of Variational Refinement. Release the unmanaged memory associated with this Optical flow algorithm. Pointer to the unmanaged cv::Algorithm Pointer to the unmanaged cv::DenseOpticalFlow Number of outer (fixed-point) iterations in the minimization procedure. Number of inner successive over-relaxation (SOR) iterations in the minimization procedure to solve the respective linear system. Relaxation factor in SOR Weight of the smoothness term Weight of the color constancy term Weight of the gradient constancy term Abstract base class for histogram cost algorithms. Release the histogram cost extractor A norm based cost extraction. Create a norm based cost extraction. Distance type Number of dummies Default cost An EMD based cost extraction. Create an EMD based cost extraction. Distance type Number of dummies Default cost A Chi based cost extraction. Create a Chi based cost extraction. Number of dummies Default cost An EMD-L1 based cost extraction. Create an EMD-L1 based cost extraction. Number of dummies Default cost Library to invoke functions that belong to the shape module Implementation of the Shape Context descriptor and matching algorithm proposed by Belongie et al. in “Shape Matching and Object Recognition Using Shape Contexts” (PAMI 2002). The number of iterations The number of angular bins in the shape context descriptor. The number of radial bins in the shape context descriptor. The value of the inner radius. The value of the outer radius. Rotation Invariant The weight of the shape context distance in the final distance value. The weight of the appearance cost in the final distance value. The weight of the Bending Energy in the final distance value. Standard Deviation.
Create a shape context distance extractor The histogram cost extractor, use ChiHistogramCostExtractor as default The shape transformer, use ThinPlateSplineShapeTransformer as default Establish the number of angular bins for the Shape Context Descriptor used in the shape matching pipeline. Establish the number of radial bins for the Shape Context Descriptor used in the shape matching pipeline. Set the inner radius of the shape context descriptor. Set the outer radius of the shape context descriptor. Iterations Release the memory associated with this shape context distance extractor Abstract base class for shape distance algorithms. Pointer to the unmanaged ShapeDistanceExtractor Compute the shape distance between two shapes defined by their contours. Contour defining first shape Contour defining second shape The shape distance between two shapes defined by their contours. Compute the shape distance between two shapes defined by their contours. Contour defining first shape Contour defining second shape The shape distance between two shapes defined by their contours. Release all memory associated with this ShapeDistanceExtractor A simple Hausdorff distance measure between shapes defined by contours, according to the paper “Comparing Images using the Hausdorff distance.” by D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge. (PAMI 1993). Create Hausdorff distance extractor The norm used to compute the Hausdorff value between two shapes. It can be L1 or L2 norm. The rank proportion (or fractional value) that establishes the Kth ranked value of the partial Hausdorff distance. It has been shown experimentally that 0.6 is a good value to compare shapes. Release the memory associated with this Hausdorff distance extractor Abstract base class for shape transformation algorithms. Get the pointer to the unmanaged shape transformer Definition of the transformation described in the paper “Principal Warps: Thin-Plate Splines and Decomposition of Deformations”, by F.L. Bookstein (PAMI 1989). Create a thin plate spline shape transformer The regularization parameter for relaxing the exact interpolation requirements of the TPS algorithm. Get the pointer to the native ShapeTransformer Release the unmanaged memory associated with this ShapeTransformer object Wrapper class for the OpenCV Affine Transformation algorithm. Create an affine transformer Full affine Release the unmanaged memory associated with this ShapeTransformer object Get the pointer to the native ShapeTransformer Finds features in the given image. Pointer to the unmanaged FeaturesFinder object Get the pointer to the unmanaged FeaturesFinder object ORB features finder. Creates an ORB features finder Use (3, 1) for default grid size The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. Release all the unmanaged memory associated with this FeaturesFinder This class wraps the functional calls to the opencv_stitching module Entry points to the OpenCV Stitching module. AKAZE features finder. Creates an AKAZE features finder Type of the extracted descriptor Size of the descriptor in bits. 0 -> Full size Number of channels in the descriptor (1, 2, 3) Detector response threshold to accept point Default number of sublevels per scale level Maximum octave evolution of the image Diffusivity type Release all the unmanaged memory associated with this FeaturesFinder Image Stitching. The stitcher status is Ok. Error, need more images. Error, homography estimation failed. Error, camera parameters adjustment failed.
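The shape-matching pieces above fit together as: pick a histogram cost extractor, pick a shape transformer, build a ShapeContextDistanceExtractor, and compare two contours. A sketch, assuming the constructor parameter order given above (the numeric arguments are the usual OpenCV defaults, and the contour-picking helper is written here just for the example):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Shape;
using Emgu.CV.Util;

class ShapeDemo
{
    // Helper for this sketch: binarize an image and keep its largest contour.
    static VectorOfPoint LargestContour(string file)
    {
        using (Mat img = CvInvoke.Imread(file, ImreadModes.Grayscale))
        using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.Threshold(img, img, 128, 255, ThresholdType.Binary);
            CvInvoke.FindContours(img, contours, null,
                RetrType.External, ChainApproxMethod.ChainApproxSimple);
            int best = 0;
            for (int i = 1; i < contours.Size; i++)
                if (CvInvoke.ContourArea(contours[i]) > CvInvoke.ContourArea(contours[best]))
                    best = i;
            return new VectorOfPoint(contours[best].ToArray());
        }
    }

    static void Main()
    {
        using (ChiHistogramCostExtractor cost = new ChiHistogramCostExtractor(25, 0.2f))
        using (ThinPlateSplineShapeTransformer tps = new ThinPlateSplineShapeTransformer(0))
        using (ShapeContextDistanceExtractor extractor =
            new ShapeContextDistanceExtractor(cost, tps, 12, 4, 0.2f, 2f, 3))
        using (VectorOfPoint c1 = LargestContour("shape_a.png"))
        using (VectorOfPoint c2 = LargestContour("shape_b.png"))
        {
            float distance = extractor.ComputeDistance(c1, c2);
            System.Console.WriteLine($"shape distance: {distance}");
        }
    }
}
```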
Wave correction kind horizontal Vertical Stitch mode Mode for creating photo panoramas. Expects images under perspective transformation and projects the resulting pano to a sphere. Mode for composing scans. Expects images under affine transformation; does not compensate exposure by default. Creates a stitcher with the default parameters. If true, the stitcher will try to use GPU for processing when available Creates a Stitcher configured in one of the stitching modes. Scenario for stitcher operation. This is usually determined by the source of the images to stitch and their transformation. If true, the stitcher will try to use GPU for processing when available Compute the panoramic image given the images The input images. This can be, for example, a VectorOfMat The panoramic image The stitching status Set the features finder for this stitcher. The features finder Set the warper creator for this stitcher. The warper creator Release memory associated with this stitcher Get or Set a flag to indicate if the stitcher should apply wave correction The wave correction type. Get or set the pano confidence threshold Get or Set the compositing resolution Get or Set the seam estimation resolution Get or set the registration resolution The work scale Creates a rotation warper. Pointer to the unmanaged WarperCreator object Pointer to the unmanaged RotationWarper object Get a pointer to the unmanaged WarperCreator object Reset the unmanaged pointer associated with this object Builds the projection maps according to the given camera data. Source image size Camera intrinsic parameters Camera rotation matrix Projection map for the x axis Projection map for the y axis Projected image minimum bounding box Projects the image. Source image Camera intrinsic parameters Camera rotation matrix Interpolation mode Border extrapolation mode Projected image Project image top-left corner Warper that maps an image onto the z = 1 plane. Construct an instance of the plane warper class. Projected image scale multiplier Release the unmanaged memory associated with this warper Warper that maps an image onto the unit sphere located at the origin. Construct an instance of the spherical warper class. Radius of the projected sphere, in pixels. An image spanning the whole sphere will have a width of 2 * scale * PI pixels. Release the unmanaged memory associated with this warper
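End to end, the Stitcher above needs only the input images and an output Mat. A minimal sketch (the Stitcher.Status enum nesting and the tryUseGpu constructor flag are assumed from the descriptions above; the file names are placeholders):

```csharp
using Emgu.CV;
using Emgu.CV.Stitching;
using Emgu.CV.Util;

class StitchDemo
{
    static void Main()
    {
        using (Stitcher stitcher = new Stitcher(false)) // tryUseGpu = false
        using (VectorOfMat images = new VectorOfMat(
            CvInvoke.Imread("left.jpg"),
            CvInvoke.Imread("middle.jpg"),
            CvInvoke.Imread("right.jpg")))
        using (Mat panorama = new Mat())
        {
            Stitcher.Status status = stitcher.Stitch(images, panorama);
            if (status == Stitcher.Status.Ok)
                CvInvoke.Imwrite("panorama.jpg", panorama);
        }
    }
}
```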
Fisheye Warper Create a fisheye warper Projected image scale multiplier Release the unmanaged memory associated with this warper Stereographic Warper Create a stereographic warper Projected image scale multiplier Release the unmanaged memory associated with this warper Compressed rectilinear warper Create a compressed rectilinear warper Projected image scale multiplier Release the unmanaged memory associated with this warper Panini warper Create a Panini warper Projected image scale multiplier Release the unmanaged memory associated with this warper Panini portrait warper Create a panini portrait warper Projected image scale multiplier Release the unmanaged memory associated with this warper Mercator warper Create a Mercator Warper Projected image scale multiplier Release the unmanaged memory associated with this warper Transverse mercator warper Create a transverse mercator warper Projected image scale multiplier Release the unmanaged memory associated with this warper Create a video frame source The pointer to the frame source Create a video frame source from a video file The name of the file If true, it will try to create the video frame source using gpu Create a frame source using the specific camera The index of the camera to create capture from, starting from 0 Get the next frame Release all the unmanaged memory associated with this frame source Super resolution The type of optical flow algorithms used for super resolution BTVL BTVL using gpu Create a super resolution solver for the given frameSource The type of optical flow algorithm to use The frameSource Release all the unmanaged memory associated with this object Use the Capture class as a FrameSource Create a Capture frame source The capture object that will be converted to a FrameSource Release the unmanaged memory associated with this CaptureFrameSource A FrameSource that can be used by the Video Stabilizer Get or Set the capture type The unmanaged pointer to the frameSource Retrieve the next frame from the FrameSource Release the unmanaged memory associated with this FrameSource Gaussian motion filter Create a Gaussian motion filter The radius, use 15 for default. The standard deviation, use -1.0f for default Release all the unmanaged memory associated with this object A one pass video stabilizer Create a one pass stabilizer The capture object to be stabilized Set the Motion Filter The motion filter Release the unmanaged memory associated with the stabilizer A two pass video stabilizer Create a two pass video stabilizer. The capture object to be stabilized. Should not be a camera stream. Release the unmanaged memory Neural network Possible activation functions Identity Sigmoid symmetric Gaussian Training method for ANN_MLP Back-propagation algorithm Batch RPROP algorithm The simulated annealing algorithm. Create a neural network using the specific parameters Release the memory associated with this neural network Sets the layer sizes. Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer. The last element - number of elements in the output layer. Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is SigmoidSym The first parameter of the activation function. The second parameter of the activation function. Sets training method and common parameters. The training method.
param1 passed to setRpropDW0 for ANN_MLP::RPROP and to setBackpropWeightScale for ANN_MLP::BACKPROP and to initialT for ANN_MLP::ANNEAL. param2 passed to setRpropDWMin for ANN_MLP::RPROP and to setBackpropMomentumScale for ANN_MLP::BACKPROP and to finalT for ANN_MLP::ANNEAL. Termination criteria of the training algorithm BPROP: Strength of the weight gradient term BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations) RPROP: Initial value Delta_0 of update-values Delta_{ij} RPROP: Increase factor RPROP: Decrease factor RPROP: Update-values lower limit RPROP: Update-values upper limit ANNEAL: Update initial temperature. ANNEAL: Update final temperature. ANNEAL: Update cooling ratio. ANNEAL: Update iteration per step. This class contains functions to call into the machine learning library Release the ANN_MLP model The ANN_MLP model to be released Create a normal Bayes classifier The normal Bayes classifier Release the memory associated with the Bayes classifier The classifier to release Create a KNearest classifier The KNearest classifier Release the KNearest classifier The classifier to release Create a default EM model Pointer to the EM model Release the EM model Given the EM model, predict the probability of the samples The EM model The input samples The prediction results, which should have the same number of rows as the samples The result. Create a default SVM model Pointer to the SVM model Release the SVM model and all the memory associated with it The SVM model to be released Get the default parameter grid for the specific SVM type The SVM type The parameter grid reference, values will be filled in by the function call The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The SVM model The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times cGrid gammaGrid pGrid nuGrid coefGrid degreeGrid If true and the problem is 2-class classification then the method creates more balanced cross-validation subsets, that is, proportions between classes in the subsets are close to the proportion in the whole training dataset. The method retrieves a given support vector The SVM model The output support vectors Create a default decision tree Pointer to the decision tree Release the decision tree model The decision tree model to be released Create a default random tree Pointer to the random tree Release the random tree model The random tree model to be released Create a default boost classifier Pointer to the boost classifier Release the boost classifier The boost classifier to be released Create a default SVMSGD model Pointer to the SVMSGD model Release the SVMSGD model and all the memory associated with it The SVMSGD model to be released Boost Tree Boost Type Discrete AdaBoost. Real AdaBoost. It is a technique that utilizes confidence-rated predictions and works well with categorical data. LogitBoost. It can produce good regression fits. Gentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data.
Create a default Boost classifier Release the Boost classifier and all memory associated with it Cluster possible values of a categorical variable into K less than or equal to maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold If true then surrogate splits will be built If true then pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees Decision Trees Create a default decision tree Release the decision tree and all the memory associated with it Cluster possible values of a categorical variable into K less than or equal to maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold If true then surrogate splits will be built If true then pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees Expectation Maximization model The type of the mixture covariance matrices A covariance matrix of each mixture is a scaled identity matrix, μk*I, so the only parameter to be estimated is μk. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (e.g. in case when the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with cov_mat_type=COV_MAT_DIAGONAL A covariance matrix of each mixture may be an arbitrary diagonal matrix with positive diagonal elements, that is, non-diagonal elements are forced to be 0's, so the number of free parameters is d for each matrix. This is the most commonly used option, yielding good estimation results A covariance matrix of each mixture may be an arbitrary symmetric positive-definite matrix, so the number of free parameters in each matrix is about d^2/2. It is not recommended to use this option, unless there is a pretty accurate initial estimation of the parameters and/or a huge number of training samples The default Create an Expectation Maximization model Estimate the Gaussian mixture parameters from a samples set. This variation starts with the Expectation step. You need to provide initial means of mixture components. Optionally you can pass initial weights and covariance matrices of mixture components. Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to an inner matrix of such type for further computing. Initial means of mixture components. It is a one-channel matrix of nclusters x dims size. If the matrix does not have CV_64F type it will be converted to an inner matrix of such type for further computing. The vector of initial covariance matrices of mixture components. Each of the covariance matrices is a one-channel matrix of dims x dims size. If the matrices do not have CV_64F type they will be converted to inner matrices of such type for further computing. Initial weights of mixture components. It should be a one-channel floating-point matrix with 1 x nclusters or nclusters x 1 size.
The optional output matrix that contains a likelihood logarithm value for each sample. It has nsamples x 1 size and CV_64FC1 type. The optional output "class label" (indices of the most probable mixture component for each sample). It has nsamples x 1 size and CV_32SC1 type. The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has nsamples x nclusters size and CV_64FC1 type. Estimate the Gaussian mixture parameters from a samples set. This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the Maximum Likelihood Estimate of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure, and optionally computes the output "class label" for each sample. The trained model can be used further for prediction, just like any other classifier. Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to an inner matrix of such type for further computing. The probs0. The optional output matrix that contains a likelihood logarithm value for each sample. It has nsamples x 1 size and CV_64FC1 type. The optional output "class label" for each sample (indices of the most probable mixture component for each sample). It has nsamples x 1 size and CV_32SC1 type. The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has nsamples x nclusters size and CV_64FC1 type. Predict the probability of the samples The input samples The prediction results, which should have the same number of rows as the samples Release the memory associated with this EM model The number of mixtures The type of the mixture covariance matrices Termination criteria of the procedure. EM algorithm stops either after a certain number of iterations (term_crit.num_iter), or when the parameters change too little (no more than term_crit.epsilon) from iteration to iteration The KNearest classifier Create a default KNearest classifier Release the classifier and all the memory associated with it Default number of neighbors to use in the predict method Whether a classification or regression model should be trained Parameter for the KDTree implementation Algorithm type ML implements logistic regression, which is a probabilistic classification technique. Specifies the kind of training method used. Batch method Mini-batch method; set MiniBatchSize to a positive integer when using this method. Specifies the kind of regularization to be applied. Regularization disabled. L1 norm L2 norm Initializes a new instance of the class.
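The k-means-initialized variant described above needs only the samples; labels and posteriors come back through the optional outputs. A toy sketch, assuming the Emgu wrapper exposes trainEM as TrainEM and the mixture count as a ClustersNumber property:

```csharp
using Emgu.CV;
using Emgu.CV.ML;

class EmDemo
{
    static void Main()
    {
        // Four 2D samples forming two obvious clusters (toy data).
        using (Matrix<float> samples = new Matrix<float>(new float[,] {
            { 0.1f, 0.2f }, { 0.2f, 0.1f },    // near the origin
            { 5.0f, 5.1f }, { 5.2f, 4.9f } })) // near (5, 5)
        using (EM em = new EM())
        using (Mat logLikelihoods = new Mat()) // nsamples x 1, CV_64FC1
        using (Mat labels = new Mat())         // nsamples x 1, CV_32SC1
        using (Mat probs = new Mat())          // nsamples x nclusters, CV_64FC1
        {
            em.ClustersNumber = 2;
            em.TrainEM(samples, logLikelihoods, labels, probs);
            // labels[i] is the index of the most probable mixture component
            // for sample i; probs holds the posterior per component.
        }
    }
}
```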
Return the pointer to the StatModel object Return the pointer to the algorithm object Release the unmanaged resources Learning rate Number of iterations Kind of regularization to be applied Kind of training method to be applied Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent Termination criteria of the algorithm The flags for the neural network training function The data layout type Feature vectors are stored as cols Feature vectors are stored as rows Boosting type Discrete AdaBoost Real AdaBoost LogitBoost Gentle AdaBoost Splitting criteria, used to choose optimal splits during a weak tree construction Use the default criteria for the particular boosting method, see below Use Gini index. This is the default option for Real AdaBoost; may also be used for Discrete AdaBoost Use misclassification rate. This is the default option for Discrete AdaBoost; may also be used for Real AdaBoost Use least squares criteria. This is the default and the only option for LogitBoost and Gentle AdaBoost Variable type Numerical or Ordered Categorical A Normal Bayes Classifier Create a normal Bayes classifier Release the memory associated with this classifier Random trees Create a random tree Release the random tree and all memory associated with it Cluster possible values of a categorical variable into K less than or equal to maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold If true then surrogate splits will be built If true then pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees If true then variable importance will be calculated The size of the randomly selected subset of features at each tree node that is used to find the best split(s) The termination criteria that specifies when the training algorithm stops Interface for statistical models in OpenCV ML. Return the pointer to the StatModel object The pointer to the StatModel object A statistical model Trains the statistical model. The stat model. The training samples. Type of the layout. Vector of responses associated with the training samples. Trains the statistical model. The model. The train data. The flags. Predicts response(s) for the provided sample(s) The model. The input samples, floating-point matrix. The optional output matrix of results. The optional flags, model-dependent. Response for the provided sample Wrapped CvParamGrid structure used by SVM Minimum value Maximum value Step Support Vector Machine Type of SVM n-class classification (n>=2), allows imperfect separation of classes with penalty multiplier C for outliers n-class classification with possible imperfect separation. Parameter nu (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of C one-class SVM. All the training data are from the same class, SVM builds a boundary that separates the class from the rest of the feature space Regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than p. For outliers the penalty multiplier C is used Regression; nu is used instead of p. SVM kernel type Custom SVM kernel type No mapping is done, linear discrimination (or regression) is done in the original feature space. It is the fastest option.
Polynomial kernel: d(x,y) = (gamma*(x·y)+coef0)^degree. Radial-basis-function kernel; a good choice in most cases: d(x,y) = exp(-gamma*|x-y|^2). The sigmoid function is used as a kernel: d(x,y) = tanh(gamma*(x·y)+coef0). Exponential Chi2 kernel, similar to the RBF kernel Histogram intersection kernel. A fast kernel. K(xi,xj)=min(xi,xj). The type of SVM parameters C Gamma P NU COEF DEGREE Create a Support Vector Machine Release all the memory associated with the SVM Get the default parameter grid for the specific SVM type The SVM type The default parameter grid for the specific SVM type The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times Grid for C (cGrid). Grid for gamma. Grid for p. Grid for nu. Grid for coef0. Grid for degree. If true and the problem is 2-class classification then the method creates more balanced cross-validation subsets, that is, the proportions between classes in the subsets are close to the proportions in the whole training dataset. Retrieves all the support vectors. All the support vectors as a floating-point matrix, where support vectors are stored as matrix rows. Type of an SVM formulation Parameter gamma of a kernel function Parameter coef0 of a kernel function Parameter degree of a kernel function Parameter C of an SVM optimization problem Parameter nu of an SVM optimization problem Parameter epsilon of an SVM optimization problem Initialize with one of the predefined kernels Termination criteria of the iterative SVM training procedure which solves a partial case of the constrained quadratic optimization problem Type of an SVM kernel Support Vector Machine SVMSGD type. ASGD is often the preferable choice. Stochastic Gradient Descent Average Stochastic Gradient Descent Margin type General case, suits the case of non-linearly separable sets, allows outliers. More accurate for the case of linearly separable sets. Create a Support Vector Machine Set the optimal parameters for the given model type SVMSGD type Margin type marginRegularization of an SVMSGD optimization problem initialStepSize of an SVMSGD optimization problem stepDecreasingPower of an SVMSGD optimization problem Termination criteria of the training algorithm. Train data Creates training data from in-memory arrays. Matrix of samples. It should have CV_32F type. Type of the layout. Matrix of responses. If the responses are scalar, they should be stored as a single row or as a single column. The matrix should have type CV_32F or CV_32S (in the former case the responses are considered as ordered by default; in the latter case - as categorical).
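As a hedged illustration of the SVM types, kernels and TrainData usage just described, the sketch below builds a C-SVC with an RBF kernel and trains it on placeholder data; the property and enum names (SvmType.CSvc, SvmKernelType.Rbf, DataLayoutType.Row) follow the Emgu.CV.ML wrappers and may vary slightly between versions.

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.ML;
using Emgu.CV.ML.MlEnum;
using Emgu.CV.Structure;

// Hedged sketch: n-class classification with penalty multiplier C (C-SVC)
// and the radial-basis-function kernel. "samples"/"responses" are placeholders:
// CV_32F feature rows and CV_32S class labels respectively.
using (Mat samples = new Mat(100, 2, DepthType.Cv32F, 1))
using (Mat responses = new Mat(100, 1, DepthType.Cv32S, 1))
using (SVM svm = new SVM())
using (TrainData td = new TrainData(samples, DataLayoutType.Row, responses))
{
    svm.Type = SVM.SvmType.CSvc;
    svm.SetKernel(SVM.SvmKernelType.Rbf);
    svm.C = 1.0;     // penalty multiplier for outliers
    svm.Gamma = 0.5; // RBF kernel parameter
    svm.TermCriteria = new MCvTermCriteria(100, 1e-6);
    svm.Train(td);   // train on the row-layout samples
}
```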
Vector specifying which variables to use for training. It can be an integer vector (CV_32S) containing 0-based variable indices or a byte vector (CV_8U) containing a mask of active variables. Vector specifying which samples to use for training. It can be an integer vector (CV_32S) containing 0-based sample indices or a byte vector (CV_8U) containing a mask of training samples. Optional vector with weights for each sample. It should have CV_32F type. Optional vector of type CV_8U and size <number_of_variables_in_samples> + <number_of_variables_in_responses>, containing types of each input and output variable. Release the unmanaged resources Entry points to the Open CV bioinspired module Creates a 4-dimensional blob from an image. Optionally resizes and crops the image from the center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels. Input image (with 1- or 3-channels). Multiplier for image values. Spatial size for output image Scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true. Flag which indicates that swapping the first and last channels in a 3-channel image is necessary. Flag which indicates whether image will be cropped after resize or not 4-dimensional Mat with NCHW dimensions order. Creates a 4-dimensional blob from a series of images. Optionally resizes and crops the images from the center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels. Input images (all with 1- or 3-channels). Multiplier for image values. Spatial size for output image Scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true. Flag which indicates that swapping the first and last channels in a 3-channel image is necessary. Flag which indicates whether image will be cropped after resize or not Input image is resized so one side after resize is equal to the corresponding dimension in size and the other one is equal or larger. Then, a crop from the center is performed. Reads a network model stored in Darknet model files. Path to the .cfg file with text description of the network architecture. Path to the .weights file with learned network. Network object that is ready to do forward passes; throws an exception in failure cases. Reads a network model stored in Caffe framework's format. Buffer containing the content of the .prototxt file Buffer containing the content of the .caffemodel file Net object. Reads a network model stored in Caffe framework's format. Path to the .prototxt file with text description of the network architecture. Path to the .caffemodel file with learned network. Net object. Reads a network model stored in TensorFlow framework's format. Path to the .pb file with binary protobuf description of the network architecture Path to the .pbtxt file that contains text graph definition in protobuf format. The resulting Net object is built by text graph using weights from a binary one, which lets us make it more flexible. Net object. Reads a network model stored in TensorFlow framework's format. Buffer containing the content of the pb file Buffer containing the content of the pbtxt file Net object. Convert all weights of a Caffe network to half precision floating point Path to the origin model from the Caffe framework containing single-precision floating-point weights (usually has .caffemodel extension). Path to the destination model with updated weights. Performs non-maximum suppression given boxes and corresponding scores. A set of bounding boxes to apply NMS.
A set of corresponding confidences. A threshold used to filter boxes by score. A threshold used in non-maximum suppression. The kept indices of bboxes after NMS. A coefficient in adaptive threshold. If >0, keep at most top_k picked indices. This class allows one to create and manipulate comprehensive artificial neural networks. Default constructor. Sets the new value for the layer output blob. Descriptor of the updating layer output blob. Input blob Runs a forward pass for the whole network. Name of the layer whose output is needed. Blob for the first output of the specified layer Release the memory associated with this network. Returns true if there are no layers in the network. Return the LayerNames Ask the network to use a specific computation backend where it is supported. Ask the network to make computations on a specific target device. Enables or disables layer fusion in the network. An interface for the convex polygon Get the vertices of this convex polygon The vertices of this convex polygon An interface for the convex polygon Get the vertices of this convex polygon The vertices of this convex polygon A deformable parts model detector Create a new DPM detector with the specified files and classes Is the detector empty? Get the class names Get the number of classes Perform detection on the image Dispose Provide interfaces to the Open CV DPM functions A DPM detection rectangle Detection score Class of the detection Create a detection Provide interfaces to the Open CV Saliency functions Compute the saliency. The Saliency object The image. The computed saliency map. True if the saliency map is computed, false otherwise Compute a binary map of a given saliency map The saliency map obtained through one of the specialized algorithms The binary map The StaticSaliency object True if the binary map is successfully computed A Fast Self-tuning Background Subtraction Algorithm. This background subtraction algorithm is inspired by the work of B. Wang and P. Dudek [2] [2] B. Wang and P. Dudek "A Fast Self-tuning Background Subtraction Algorithm", in Proc. of IEEE Workshop on Change Detection, 2014 Image width Image height This function allows the correct initialization of all data structures that will be used by the algorithm. Constructor Pointer to the unmanaged MotionSaliency object Pointer to the unmanaged Saliency object Pointer to the unmanaged Algorithm object Release the unmanaged memory associated with this object Objectness algorithms based on [3] [3] Cheng, Ming-Ming, et al. "BING: Binarized normed gradients for objectness estimation at 300fps." IEEE CVPR. 2014 W NSS Constructor Pointer to the unmanaged Objectness object Pointer to the unmanaged Saliency object Pointer to the unmanaged Algorithm object Release the unmanaged memory associated with this object Return the list of the rectangles' objectness values. Set the correct path from which the algorithm will load the trained model.
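A hedged sketch of the Net workflow described above (read a model, set a 4-dimensional blob as input, run a forward pass) follows; the file names, input size and mean values are placeholders for a real Caffe model, and the exact optional-parameter lists of the DnnInvoke helpers may differ between Emgu versions.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Dnn;
using Emgu.CV.Structure;

// Hedged sketch: classify an image with a Caffe model. "deploy.prototxt",
// "weights.caffemodel" and "input.jpg" are placeholders.
using (Net net = DnnInvoke.ReadNetFromCaffe("deploy.prototxt", "weights.caffemodel"))
using (Mat image = CvInvoke.Imread("input.jpg"))
using (Mat blob = DnnInvoke.BlobFromImage(
    image, 1.0, new Size(224, 224),              // scalefactor and spatial size
    new MCvScalar(104, 117, 123), false, false)) // mean, swapRB, crop
{
    net.SetInput(blob);              // feed the NCHW blob
    using (Mat prob = net.Forward()) // forward pass for the whole network
    {
        // "prob" holds the first output of the last layer, e.g. class scores.
    }
}
```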
Base interface for Saliency algorithms Pointer to the unmanaged Saliency object Base interface for StaticSaliency algorithms Pointer to the unmanaged StaticSaliency object Base interface for MotionSaliency algorithms Pointer to the unmanaged MotionSaliency object Base interface for Objectness algorithms Pointer to the unmanaged Objectness object Simulate the behavior of pre-attentive visual search Constructor Pointer to the unmanaged StaticSaliency object Pointer to the unmanaged Saliency object Pointer to the unmanaged Algorithm object Release the unmanaged memory associated with this object The Fine Grained Saliency approach from Sebastian Montabone and Alvaro Soto. Human detection using a mobile platform and novel features derived from a visual saliency mechanism. In Image and Vision Computing, Vol. 28 Issue 3, pages 391–402. Elsevier, 2010. This method calculates saliency based on center-surround differences. High resolution saliency maps are generated in real time by using integral images. Constructor Pointer to the unmanaged StaticSaliency object Pointer to the unmanaged Saliency object Pointer to the unmanaged Algorithm object Release the unmanaged memory associated with this object Class implementing BoostDesc (Learning Image Descriptors with Boosting). See: T. Trzcinski, M. Christoudias, V. Lepetit and P. Fua. Boosting Binary Keypoint Descriptors. In Computer Vision and Pattern Recognition, 2013. T. Trzcinski, M. Christoudias and V. Lepetit. Learning Image Descriptors with Boosting. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2013. The type of descriptor BGM is the base descriptor where each binary dimension is computed as the output of a single weak learner. BGM_HARD refers to the same BGM but uses a different type of gradient binning: with the ASSIGN_HARD binning type the gradient is assigned to the nearest orientation bin. BGM_BILINEAR refers to the same BGM but uses a different type of gradient binning: with the ASSIGN_BILINEAR binning type the gradient is assigned to the two neighbouring bins. LBGM (alias FP-Boost) is the floating point extension where each dimension is computed as a linear combination of the weak learner responses. BINBOOST and subvariants are the binary extensions of LBGM where each bit is computed as a thresholded linear combination of a set of weak learners. BINBOOST and subvariants are the binary extensions of LBGM where each bit is computed as a thresholded linear combination of a set of weak learners. BINBOOST and subvariants are the binary extensions of LBGM where each bit is computed as a thresholded linear combination of a set of weak learners. Create an instance of Boost Descriptor. Type of descriptor to use. Sample patterns using keypoints orientation. Adjust the sampling window of detected keypoints: 6.25f is default and fits for KAZE, SURF detected keypoints window ratio; 6.75f should be the scale for SIFT detected keypoints window ratio; 5.00f should be the scale for AKAZE, MSD, AGAST, FAST, BRISK keypoints window ratio; 0.75f should be the scale for ORB keypoints ratio; 1.50f was the default in the original implementation. Release all the unmanaged resources associated with BoostDesc This class wraps the functional calls to the OpenCV XFeatures2D modules BRIEF Descriptor Create a BRIEF descriptor extractor. The size of the descriptor. It can be 16, 32 or 64 bytes. Release all the unmanaged resources associated with BRIEF
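To show where a BRIEF extractor fits, here is a hedged sketch that detects keypoints with FAST and then computes 32-byte BRIEF descriptors for them; the wrapper class names (FastFeatureDetector, BriefDescriptorExtractor) and the grayscale imread flag are assumed from the Emgu.CV.Features2D and XFeatures2D namespaces and may vary between versions.

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Util;
using Emgu.CV.XFeatures2D;

// Hedged sketch: FAST keypoints + BRIEF descriptors. "input.png" is a placeholder.
using (Mat image = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (FastFeatureDetector fast = new FastFeatureDetector())
using (BriefDescriptorExtractor brief = new BriefDescriptorExtractor(32))
using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
using (Mat descriptors = new Mat())
{
    fast.DetectRaw(image, keypoints);             // find corner-like keypoints
    brief.Compute(image, keypoints, descriptors); // one 32-byte row per keypoint
}
```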
A SURF detector using Cuda Create a Cuda SURF detector The interest operator threshold. The number of octaves to process. The number of layers in each octave. True to generate 128-element descriptors, false for 64-element descriptors. Max features = featuresRatio * img.size().area(). If set to true, the orientation is not computed for the keypoints Detect keypoints in the CudaImage The image where keypoints will be detected from The optional mask, can be null if not needed The keypoints GpuMat that will have 1 row. keypoints.at<float[6]>(1, i) contains the i'th keypoint format: (x, y, size, response, angle, octave) Detect keypoints in the CudaImage The image where keypoints will be detected from The optional mask, can be null if not needed An array of keypoints Obtain the keypoints array from GpuMat The keypoints obtained from DetectKeyPointsRaw The vector of keypoints Obtain a GpuMat from the keypoints array The keypoints array A GpuMat that represents the keypoints Compute the descriptor given the image and the point location The image where the descriptor will be computed from The optional mask, can be null if not needed The keypoint where the descriptor will be computed from. The order of the keypoints might be changed unless the GPU_SURF detector is UP-RIGHT. The image features found at the keypoint location Return the size of the descriptor (64/128) Release the unmanaged resources associated with the Detector Daisy descriptor. Create a DAISY descriptor extractor Radius of the descriptor at the initial scale. Amount of radial range division quantity. Amount of angular range division quantity. Amount of gradient orientations range division quantity. Descriptors normalization type. Optional 3x3 homography matrix used to warp the grid of daisy, but sampling keypoints remains unwarped on the image Switch to disable interpolation for speed improvement at minor quality loss Sample patterns using keypoints orientation, disabled by default. Normalization type Will not do any normalization (default) Histograms are normalized independently for L2 norm equal to 1.0 Descriptors are normalized for L2 norm equal to 1.0 Descriptors are normalized for L2 norm equal to 1.0 but no individual one is bigger than 0.154 as in SIFT Release all the unmanaged resources associated with DAISY The FREAK (Fast Retina Keypoint) keypoint descriptor: A. Alahi, R. Ortiz, and P. Vandergheynst. FREAK: Fast Retina Keypoint. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. CVPR 2012 Open Source Award Winner. The algorithm proposes a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. FREAKs are in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK. They are competitive alternatives to existing keypoints in particular for embedded applications. Create a Freak descriptor extractor. Enable orientation normalization Enable scale normalization Scaling of the description pattern Number of octaves covered by the detected keypoints.
Release all the unmanaged resources associated with FREAK Class implementing the Harris-Laplace feature detector Create a HarrisLaplaceFeatureDetector The number of octaves in the scale-space pyramid The threshold for the Harris cornerness measure The threshold for the Difference-of-Gaussians scale selection The maximum number of corners to consider The number of intermediate scales per octave Release all the unmanaged resources associated with HarrisLaplaceFeatureDetector Class for computing the LATCH descriptor. If you find this code useful, please add a reference to the following paper in your work: Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015 LATCH is a binary descriptor based on learned comparisons of triplets of image patches. Create a LATCH descriptor extractor The size of the descriptor - can be 64, 32, 16, 8, 4, 2 or 1 Whether or not the descriptor should compensate for orientation changes. The size of half of the mini-patches size. For example, if we would like to compare triplets of patches of size 7x7, then the half_ssd_size should be (7-1)/2 = 3. Release all the unmanaged resources associated with LATCH The locally uniform comparison image descriptor: An image descriptor that can be computed very fast, while being about as robust as, for example, SURF or BRIEF. Create a locally uniform comparison image descriptor. Kernel for descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth Kernel for blurring the image prior to descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth Release all the unmanaged resources associated with LUCID Class implementing the MSD (Maximal Self-Dissimilarity) keypoint detector, described in "Federico Tombari and Luigi Di Stefano. Interest points via maximal self-dissimilarities. In Asian Conference on Computer Vision – ACCV 2014, 2014". The algorithm implements a novel interest point detector stemming from the intuition that image patches which are highly dissimilar over a relatively large extent of their surroundings hold the property of being repeatable and distinctive. This concept of "contextual self-dissimilarity" reverses the key paradigm of recent successful techniques such as the Local Self-Similarity descriptor and the Non-Local Means filter, which build upon the presence of similar - rather than dissimilar - patches. Moreover, it extends to contextual information the local self-dissimilarity notion embedded in established detectors of corner-like interest points, thereby achieving enhanced repeatability, distinctiveness and localization accuracy. Create an MSD (Maximal Self-Dissimilarity) keypoint detector. Patch radius Search area radius NMS radius NMS scale radius Saliency threshold KNN Scale factor N scales Compute orientation Release all the unmanaged resources associated with MSDDetector Class implementing PCT (position-color-texture) signature extraction as described in: Martin Krulis, Jakub Lokoc, and Tomas Skopal. Efficient extraction of clustering-based feature signatures using GPU architectures. Multimedia Tools Appl., 75(13):8071–8103, 2016. The algorithm is divided into a feature sampler and a clusterizer. The feature sampler produces samples at a given set of coordinates. The clusterizer then produces clusters of these samples using the k-means algorithm. The resulting set of clusters is the signature of the input image. A signature is an array of SIGNATURE_DIMENSION-dimensional points. Used dimensions are: weight, x, y position; lab color, contrast, entropy.
Color resolution of the greyscale bitmap represented in allocated bits (i.e., value 4 means that 16 shades of grey are used). The greyscale bitmap is used for computing contrast and entropy values. Size of the texture sampling window used to compute contrast and entropy. (The center of the window is always in the pixel selected by the x,y coordinates of the corresponding feature sample.) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Weights (multiplicative constants) that linearly stretch individual axes of the feature space. (x,y = position. L,a,b = color in CIE Lab space. c = contrast. e = entropy) Number of iterations of the k-means clustering. We use a fixed number of iterations, since the modified clustering is pruning clusters (not iteratively refining k clusters). Maximal number of generated clusters. If the number is exceeded, the clusters are sorted by their weights and the smallest clusters are cropped. This parameter multiplied by the index of iteration gives the lower limit for cluster size. Clusters containing fewer points than specified by the limit have their centroid dismissed and points are reassigned. Threshold Euclidean distance between two centroids. If two cluster centers are closer than this distance, one of the centroids is dismissed and points are reassigned. Remove centroids in k-means whose weight is less than or equal to the given threshold. Distance function selector used for measuring distance between two points in k-means. Point distributions supported by the random point generator. Generate numbers uniformly. Generate points in a regular grid. Generate points with normal (Gaussian) distribution. Creates PCTSignatures algorithm using sample and seed count. It generates its own sets of sampling points and clusterization seed indexes. Number of points used for image sampling. Number of initial clusterization seeds. Must be less than or equal to initSampleCount Distribution of generated points. Creates PCTSignatures algorithm using pre-generated sampling points and number of clusterization seeds. It uses the provided sampling points and generates its own clusterization seed indexes. Sampling points used in image sampling. Number of initial clusterization seeds. Must be less than or equal to initSamplingPoints.size(). Creates PCTSignatures algorithm using pre-generated sampling points and clusterization seeds indexes. Sampling points used in image sampling. Indexes of initial clusterization seeds. Its size must be less than or equal to initSamplingPoints.size(). Release the unmanaged memory associated with this PCTSignatures object Computes the signature of a given image. Input image of CV_8U type. Output computed signature. Draws signature in the source image and outputs the result.
Signatures are visualized as a circle with radius based on signature weight and color based on signature color. Contrast and entropy are not visualized. Source image. Image signature. Output result. Determines the maximal radius of the signature in the output image. Border thickness of the visualized signature. Class implementing Signature Quadratic Form Distance (SQFD). See also: Christian Beecks, Merih Seran Uysal, Thomas Seidl. Signature quadratic form distance. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 438-445. ACM, 2010. Lp distance function selector. L0_25 L0_5 L1 L2 L2 squared L5 L infinity Similarity function selector: -d(c_i, c_j); e^(-alpha * d^2(c_i, c_j)); 1 / (alpha + d(c_i, c_j)). Creates the algorithm instance using the selected distance function, similarity function and similarity function parameter. Distance function selector. Similarity function selector. Parameter of the similarity function. Computes the Signature Quadratic Form Distance of two signatures. The first signature. The second signature. The Signature Quadratic Form Distance of two signatures Computes the Signature Quadratic Form Distance between the reference signature and each of the other image signatures. The signature to measure distance of other signatures from. Vector of signatures to measure distance from the source signature. Output vector of measured distances. Release the unmanaged memory associated with this PCTSignaturesSQFD object Wrapped SIFT detector Create a SIFT using the specific values The desired number of features. Use 0 for an unrestricted number of features The number of octave layers. Use 3 for default Contrast threshold; use 0.04 as default Edge threshold; use 10.0 as default The sigma of the Gaussian applied to the input image at octave #0; use 1.6 as default Release the unmanaged resources associated with this object StarDetector Create a star detector with the specific parameters Maximum size of the features. The following values of the parameter are supported: 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128 Threshold for the approximated Laplacian, used to eliminate weak features. The larger it is, the fewer features will be retrieved Another threshold for the Laplacian to eliminate edges. The larger the threshold, the more points you get. Another threshold for the feature size to eliminate edges. The larger the threshold, the more points you get. Release the unmanaged memory associated with this detector. Class for extracting Speeded Up Robust Features from an image Create a SURF detector using the specific values Only features with keypoint.hessian larger than that are extracted. A good default value is ~300-500 (can depend on the average local contrast and sharpness of the image). The user can further filter out some features based on their hessian values and other characteristics false means basic descriptors (64 elements each), true means extended descriptors (128 elements each) The number of octaves to be used for extraction. With each next octave the feature size is doubled The number of layers within each octave False means that the detector computes the orientation of each feature. True means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting upright=true. Release the unmanaged memory associated with this detector.
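A hedged end-to-end sketch of the SURF extractor described above follows: detect keypoints and compute their descriptors in one DetectAndCompute call. The hessian threshold of 400 and the file name are placeholder choices, and the API names are assumed from the Emgu wrappers.

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;
using Emgu.CV.XFeatures2D;

// Hedged sketch: Speeded Up Robust Features on a grayscale image.
using (Mat image = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
using (SURF surf = new SURF(400)) // only keypoints with hessian > 400 are kept
using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
using (Mat descriptors = new Mat())
{
    // false = do not use provided keypoints; detect them instead
    surf.DetectAndCompute(image, null, keypoints, descriptors, false);
    // "descriptors" has one 64-element row per keypoint (128 if extended).
}
```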
Class implementing VGG (Oxford Visual Geometry Group) descriptor trained end to end using the "Descriptor Learning Using Convex Optimisation" (DLCO) apparatus See: K. Simonyan, A. Vedaldi, and A. Zisserman. Learning local feature descriptors using convex optimisation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014. The VGG descriptor type 120-dimensional float 80-dimensional float 64-dimensional float 48-dimensional float Create an instance of VGG. Type of descriptor to use. Gaussian kernel value for image blur. Use image sample intensity normalization. Sample patterns using keypoints orientation. Adjust the sampling window of detected keypoints to 64.0f (VGG sampling window): 6.25f is default and fits for KAZE, SURF detected keypoints window ratio; 6.75f should be the scale for SIFT detected keypoints window ratio; 5.00f should be the scale for AKAZE, MSD, AGAST, FAST, BRISK keypoints window ratio; 0.75f should be the scale for ORB keypoints ratio. Clamp descriptors to 255 and convert to uchar CV_8UC1 Release all the unmanaged resources associated with VGG Implementation of bio-inspired features (BIF) from the paper: Guo, Guodong, et al. "Human age estimation using bio-inspired features." Computer Vision and Pattern Recognition, 2009. CVPR 2009. Create an instance of bio-inspired features The number of filter bands used for computing BIF. The number of image rotations. Computes features from the input image. Input image (CV_32FC1) Feature vector (CV_32FC1) Release the unmanaged memory associated with this BIF Class that contains entry points for the Face module. A function to load the trained model before the fitting process. The facemark object A string representing the filename of a trained model. Trains a Facemark algorithm using the given dataset. The facemark object Input image. Represents the regions of interest of the detected faces. Each face is stored in a cv::Rect container. The detected landmark points for each face. True if successful Utility to draw the detected facial landmark points. The input image to be processed. Contains the data of points which will be drawn. The color of points in BGR format Parameters for the FacemarkAAM model Create the parameters with the default values. Release the unmanaged memory associated with this object. Filename where the trained model will be saved M N Number of iterations Show the training print-out Flag to save the trained model or not The maximum value of M The maximum value of N The Facemark AAM model Pointer to the unmanaged Facemark object Pointer to the unmanaged Algorithm object Create an instance of the FacemarkAAM model The model parameters Release all the unmanaged memory associated with this Facemark Parameters for the FacemarkLBF model Create the parameters with the default values. Release the unmanaged memory associated with this object.
Offset for the loaded face landmark points Show the training print-out Number of landmark points Multiplier to augment the training data Number of refinement stages Number of trees in the model for each landmark point refinement The depth of the decision tree; defines the size of the feature Overlap ratio for training the LBF feature Flag to save the trained model or not Filename of the face detector model Filename where the trained model will be saved The FacemarkLBF model Pointer to the unmanaged Facemark object Pointer to the unmanaged Algorithm object Create an instance of the FacemarkLBF model The model parameters Release all the unmanaged memory associated with this Facemark Face Recognizer Train the face recognizer with the specific images and labels The images used in the training. This can be a VectorOfMat The labels of the images. This can be a VectorOfInt Train the face recognizer with the specific images and labels The images used in the training. The labels of the images. Predict the label of the image The image on which the prediction will be based The prediction label The prediction result The label The distance Save the FaceRecognizer to a file The file name to be saved to Load the FaceRecognizer from the file The file where the FaceRecognizer will be loaded from Release the unmanaged memory associated with this FaceRecognizer Eigen face recognizer Create an EigenFaceRecognizer The number of components The distance threshold Fisher face recognizer Create a FisherFaceRecognizer The number of components The distance threshold LBPH face recognizer Create an LBPH face recognizer Radius Neighbors Grid X Grid Y The distance threshold Updates a FaceRecognizer with given data and associated labels. The training images, that means the faces you want to learn. The data has to be given as a VectorOfMat. The labels corresponding to the images Update the face recognizer with the specific images and labels The images used for updating the face recognizer The labels of the images Interface to the Facemark class Return the pointer to the Facemark object The pointer to the Facemark object Base class for the 1st and 2nd stages of the Neumann and Matas scene text detection algorithm Release all the unmanaged memory associated with this ERFilter Takes an image as input and returns the selected regions in a vector of ERStat; only distinctive ERs which correspond to characters are selected by a sequential classifier Single channel image CV_8UC1 Output for the 1st stage and Input/Output for the 2nd. The selected Extremal Regions are stored here. The grouping method Only perform grouping horizontally. Perform grouping in any orientation. Find groups of Extremal Regions that are organized as text blocks. The image where ER grouping is to be performed on Array of single channel images from which the regions were extracted Vector of ER's retrieved from the ERFilter algorithm from each channel The XML or YAML file with the classifier model (e.g. trained_classifier_erGrouping.xml) The minimum probability for accepting a group. The grouping methods The output of the algorithm that indicates the text regions Extremal Region Filter for the 1st stage classifier of the N&M algorithm Create an Extremal Region Filter for the 1st stage classifier of the N&M algorithm The file name of the classifier Threshold step in subsequent thresholds when extracting the component tree. The minimum area (% of image size) allowed for retrieved ER's. The maximum area (% of image size) allowed for retrieved ER's.
The minimum probability P(er|character) allowed for retrieved ER's. Whether non-maximum suppression is done over the branch probabilities. The minimum probability difference between local maxima and local minima ERs. Extremal Region Filter for the 2nd stage classifier of the N&M algorithm Create an Extremal Region Filter for the 2nd stage classifier of the N&M algorithm The file name of the classifier The minimum probability P(er|character) allowed for retrieved ER's. computeNMChannels operation modes A combination of red (R), green (G), blue (B), lightness (L), and gradient magnitude (Grad). In the N&M algorithm, the combination of intensity (I), hue (H), saturation (S), and gradient magnitude channels (Grad) is used in order to obtain high localization recall. This class wraps the functional calls to the OpenCV Text modules Converts MSER contours (vector of point) to ERStat regions. Source image CV_8UC1 from which the MSERs were extracted. Input vector with all the contours (vector of Point). Output where the ERStat regions are stored. Compute the different channels to be processed independently in the N&M algorithm. Source image. Must be RGB CV_8UC3. Output vector of Mat where computed channels are stored. Mode of operation The ERStat structure represents a class-specific Extremal Region (ER). An ER is a 4-connected set of pixels with all its grey-level values smaller than the values in its outer boundary. A class-specific ER is selected (using a classifier) from all the ER's in the component tree of the image. Seed point Threshold (max grey-level value) Area Perimeter Euler number Bounding box Order 1 raw moments to derive the centroid Order 1 raw moments to derive the centroid Order 2 central moments to construct the covariance matrix Order 2 central moments to construct the covariance matrix Order 2 central moments to construct the covariance matrix Pointer owner to horizontal crossings Pointer to horizontal crossings Median of the crossings at three different height levels Hole area ratio Convex hull ratio Number of inflexion points Get the pixel list. Probability that the ER belongs to the class we are looking for Pointer to the parent ERStat Pointer to the child ERStat Pointer to the next ERStat Pointer to the previous ERStat Whether or not the region is a local maximum of the probability Pointer to the ERStat that is the max probability ancestor Pointer to the ERStat that is the min probability ancestor Get the center of the region The source image width The center of the region Wrapped class of the C++ standard vector of ERStat. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of ERStat Create a standard vector of ERStat of the specific size The size of the vector Create a standard vector of ERStat with the initial values The initial values Push an array of values into the standard vector The value to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of ERStat An array of ERStat Get the size of the vector Clear the vector The pointer to the first element on the vector. In case of an empty vector, IntPtr.Zero will be returned.
Get the item in the specific index The index The item in the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Wrapped class of the C++ standard vector of VectorOfERStat. Create an empty standard vector of VectorOfERStat Create a standard vector of VectorOfERStat of the specific size The size of the vector Create a standard vector of VectorOfERStat with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item in the specific index The index The item in the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Create the standard vector of VectorOfERStat Convert the standard vector to arrays of int Arrays of int Background subtraction based on counting. About as fast as MOG2 on a high end system. More than twice as fast as MOG2 on cheap hardware (benchmarked on Raspberry Pi3). Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Creates a CNT Background Subtractor. Number of frames with same pixel color to consider stable Determines if we're giving a pixel credit for being stable for a long time Maximum allowed credit for a pixel in history Determines if we're parallelizing the algorithm Release all the unmanaged memory associated with this background model. Background Subtractor module based on the algorithm given in: Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg, “Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation”, American Control Conference, Montreal, June 2012. Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create a background subtractor module based on GMG Number of frames used to initialize the background models. Threshold value, above which it is marked foreground, else background. Release all the unmanaged memory associated with this background model. Implementation of a different yet better algorithm which is called GSOC, as it was implemented during GSOC and did not originate from any paper. Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Creates an instance of the BackgroundSubtractorGSOC algorithm. Whether to use camera motion compensation. Number of samples to maintain at each point of the frame. Probability of replacing the old sample - how fast the model will update itself. Probability of propagating to neighbors. How many positives the sample must get before it will be considered as a possible replacement. Scale coefficient for threshold. Bias coefficient for threshold. Blinking suppression decay factor. Blinking suppression multiplier. Strength of the noise removal for background points. Strength of the noise removal for foreground points. Release all the unmanaged memory associated with this background model.
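A hedged sketch of driving any of these background subtractors follows, using the counting-based (CNT) variant described above together with a VideoCapture loop; the BgSegm namespace and constructor defaults are assumed from the Emgu wrappers.

```csharp
using Emgu.CV;
using Emgu.CV.BgSegm;

// Hedged sketch: foreground segmentation with the counting-based subtractor.
using (VideoCapture capture = new VideoCapture(0)) // default camera
using (BackgroundSubtractorCNT subtractor = new BackgroundSubtractorCNT())
using (Mat frame = new Mat())
using (Mat foregroundMask = new Mat())
{
    while (capture.Read(frame) && !frame.IsEmpty)
    {
        // Update the background model and obtain the foreground mask.
        subtractor.Apply(frame, foregroundMask);
    }
}
```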
Background Subtraction using Local SVD Binary Pattern. More details about the algorithm can be found at: L. Guo, D. Xu, and Z. Qiang. Background subtraction using local svd binary pattern. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1159–1167, June 2016. Camera motion compensation mode None Use LK camera compensation Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Creates an instance of the BackgroundSubtractorLSBP algorithm. Whether to use camera motion compensation. Number of samples to maintain at each point of the frame. LSBP descriptor radius. Lower bound for T-values. Upper bound for T-values. Increase step for T-values. Decrease step for T-values. Scale coefficient for threshold values. Increase/Decrease step for threshold values. Strength of the noise removal for background points. Strength of the noise removal for foreground points. Threshold for LSBP binary string. Minimal number of matches for a sample to be considered as foreground. Release all the unmanaged memory associated with this background model. Gaussian Mixture-based Background/Foreground Segmentation Algorithm. The class implements the following algorithm: "An improved adaptive background mixture model for real-time tracking with shadow detection" P. KadewTraKuPong and R. Bowden, Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, 2001. http://personal.ee.surrey.ac.uk/Personal/R.Bowden/publications/avbs01/avbs01.pdf Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create an "Improved adaptive Gaussian mixture model for background subtraction". The length of the history. The maximum number of Gaussian mixtures. Background ratio Noise strength (standard deviation of the brightness of each color channel). 0 means some automatic value. Release all the unmanaged memory associated with this background model. Class that contains entry points for the Contrib module. Interface for realizations of the Domain Transform filter. The three modes for filtering 2D signals in the article. NC IC RF Create an instance of DTFilter and produce initialization routines. Guided image (used to build the transformed distance, which describes the edge structure of the guided image). Parameter in the original article, it's similar to the sigma in the coordinate space in bilateralFilter. Parameter in the original article, it's similar to the sigma in the color space in bilateralFilter. One of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article. Optional number of iterations used for filtering, 3 is quite enough. Produce the domain transform filtering operation on the source image. Filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. Destination image. Optional depth of the output image. dDepth can be set to Default, which will be equivalent to src.depth(). Release the unmanaged memory associated with this object Library to invoke XImgproc functions Extended Image Processing Applies the joint bilateral filter to an image. Joint 8-bit or floating-point, 1-channel or 3-channel image. Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as the joint image. Destination image of the same size and type as src. Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace. Filter sigma in the color space.
A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color. Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace. Border type Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. Source image whose depth is 8-bit UINT or 32-bit FLOAT Destination image of the same size and type as src. Radius of the kernel to be used for filtering. It should be a positive integer Number of iterations of the algorithm. It should be a positive integer Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means a sharper transition. When the value is negative, it is automatically calculated. Range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper. For more details about this filter see: Hojin Cho, Hyunjoon Lee, Henry Kang, and Seungyong Lee. Bilateral texture filtering. ACM Transactions on Graphics, 33(4):128:1–128:8, July 2014. Applies the rolling guidance filter to an image Source 8-bit or floating-point, 1-channel or 3-channel image. Destination image of the same size and type as src. Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace. Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color. Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace. Number of iterations of joint edge-preserving filtering applied on the source image. Border type Simple one-line Fast Global Smoother filter call. Image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels. Source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels. Destination image. Parameter defining the amount of regularization Parameter that is similar to the color space sigma in bilateralFilter. Internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts. Number of iterations used for filtering, 3 is usually enough. Global image smoothing via L0 gradient minimization. Source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth. Destination image. Parameter defining the smooth term weight. Parameter defining the increasing factor of the weight of the gradient data term. Simple one-line Adaptive Manifold Filter call. Joint (also called guided) image or array of images with any number of channels. Filtering image with any number of channels. Output image. Spatial standard deviation. Color space standard deviation, it is similar to the sigma in the color space in bilateralFilter.
Optional, specifies whether to perform the outlier adjustment operation or not (Eq. 9 in the original paper). Simple one-line Guided Filter call. Guided image (or array of images) with up to 3 channels; if it has more than 3 channels then only the first 3 channels will be used. Filtering image with any number of channels. Output image. Radius of the Guided Filter. Regularization term of the Guided Filter. eps^2 is similar to the sigma in the color space in bilateralFilter. Optional depth of the output image. Simple one-line Domain Transform filter call. Guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. Filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels. Output image Parameter in the original article, it's similar to the sigma in the coordinate space in bilateralFilter. Parameter in the original article, it's similar to the sigma in the color space in bilateralFilter. Dt filter mode Optional number of iterations used for filtering, 3 is quite enough. Niblack threshold The source image The output result Value that defines which local binarization algorithm should be used. Block size delta Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Computes the estimated covariance matrix of an image using the sliding window formulation. The source image. Input image must be of a complex type. The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols). The number of rows in the window. The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix. Applies the weighted median filter to an image. Joint 8-bit, 1-channel or 3-channel image. Source 8-bit or floating-point, 1-channel or 3-channel image. Destination image. Radius of the filtering kernel, should be a positive integer. Filter range standard deviation for the joint image. The type of weight definition A 0-1 mask that has the same size as I. This mask is used to ignore the effect of some pixels. If the pixel value on the mask is 0, the pixel will be ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling. For more details about this implementation, please see: Qi Zhang, Li Xu, and Jiaya Jia. 100+ times faster weighted median filter (wmf). In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2830–2837. IEEE, 2014. Applies the Paillou filter to an image. Source 8-bit or 16-bit, 1-channel or 3-channel image. Result CV_32F image with the same number of channels as op. See paper See paper For more details about this implementation, please see: Philippe Paillou. Detecting step edges in noisy sar images: a new linear operator. IEEE transactions on geoscience and remote sensing, 35(1):191–196, 1997. Applies the Paillou filter to an image. Source 8-bit or 16-bit, 1-channel or 3-channel image. Result CV_32F image with the same number of channels as op. See paper See paper For more details about this implementation, please see: Philippe Paillou.
Detecting step edges in noisy sar images: a new linear operator. IEEE transactions on geoscience and remote sensing, 35(1):191–196, 1997. Applies the Y Deriche filter to an image. Source 8-bit or 16-bit, 1-channel or 3-channel image. Result CV_32FC image with the same number of channels as _op. See paper See paper For more details about this implementation, please see the original paper. Applies the X Deriche filter to an image. Source 8-bit or 16-bit, 1-channel or 3-channel image. Result CV_32FC image with the same number of channels as _op. See paper See paper For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf Applies a binary blob thinning operation, to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the technique of Zhang-Suen. Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values. Destination image of the same size and the same type as src. The function can work in-place. Value that defines which thinning algorithm should be used. Performs anisotropic diffusion on an image. Grayscale source image. Destination image of the same size and the same number of channels as src. The amount of time to step forward by on each iteration (normally, it's between 0 and 1). Sensitivity to the edges The number of iterations Graph Based Segmentation Algorithm. The class implements the algorithm described in Pedro F Felzenszwalb and Daniel P Huttenlocher. Efficient graph-based image segmentation. volume 59, pages 167–181. Springer, 2004. Creates a graph-based segmentor. The sigma parameter, used to smooth the image The k parameter of the algorithm The minimum size of segments Segment an image and store output in dst. The input image. Any number of channels (1 (e.g. Gray), 3 (e.g. RGB), 4 (e.g. RGB-D)) can be provided The output segmentation. It's a CV_32SC1 Mat with the same number of cols and rows as the input image, with a unique, sequential id for each pixel. Release the unmanaged memory associated with this object. Selective search segmentation algorithm The class implements the algorithm described in: Jasper RR Uijlings, Koen EA van de Sande, Theo Gevers, and Arnold WM Smeulders. Selective search for object recognition. International journal of computer vision, 104(2):154–171, 2013. Selective search segmentation algorithm Set an image used by the switch* functions to initialize the class. The image Initialize the class with the 'Single strategy' parameters The k parameter for the graph segmentation The sigma parameter for the graph segmentation Initialize the class with the 'Selective search fast' parameters The k parameter for the first graph segmentation The increment of the k parameter for all graph segmentations The sigma parameter for the graph segmentation Initialize the class with the 'Selective search quality' parameters The k parameter for the first graph segmentation The increment of the k parameter for all graph segmentations The sigma parameter for the graph segmentation Add a new image in the list of images to process. The image Based on all images, graph segmentations and strategies, computes all possible rects and returns them. The list of rects. The first ones are more relevant than the last ones. Release the unmanaged memory associated with this object. Class implementing the edge detection algorithm from Piotr Dollár and C Lawrence Zitnick. Structured forests for fast edge detection.
In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 1841–1848. IEEE, 2013. Name of the file where the model is stored. Optional object inheriting from RFFeatureGetter; you need it only if you would like to train your own forest, pass NULL otherwise The function detects edges in src and draws them to dst. The algorithm underlying this function is much more robust to the presence of texture than common approaches, e.g. Sobel Source image (RGB, float, in [0;1]) to detect edges from Destination image (grayscale, float, in [0;1]) where edges are drawn Release the unmanaged memory associated with this object. Helper class for the training part of [P. Dollar and C. L. Zitnick. Structured Forests for Fast Edge Detection, 2013]. Create a default RFFeatureGetter Release the unmanaged memory associated with this RFFeatureGetter. Class implementing the LSC (Linear Spectral Clustering) superpixels algorithm described in "Zhengqin Li and Jiansheng Chen. Superpixel segmentation using linear spectral clustering. June 2015." LSC (Linear Spectral Clustering) produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images The function initializes a SuperpixelLSC object for the input image. Image to segment Chooses an average superpixel size measured in pixels Chooses the enforcement of the superpixel compactness factor Calculates the actual amount of superpixels on a given segmentation computed and stored in the SuperpixelLSC object Returns the segmentation labeling of the image. Each label represents a superpixel, and each pixel is assigned to one superpixel label. A CV_32SC1 integer array containing the labels of the superpixel segmentation. The labels are in the range [0, NumberOfSuperpixels]. Returns the mask of the superpixel segmentation stored in the SuperpixelLSC object. Return: CV_8U1 image mask where -1 indicates that the pixel is a superpixel border, and 0 otherwise. If false, the border is only one pixel wide, otherwise all pixels at the border are masked. Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelLSC object. This function can be called again without the need of initializing the algorithm with createSuperpixelLSC(). This saves the computational cost of allocating memory for all the structures of the algorithm. Number of iterations. A higher number improves the result. Release the unmanaged memory associated with this object. Class implementing the SEEDS (Superpixels Extracted via Energy-Driven Sampling) superpixels algorithm described in Michael Van den Bergh, Xavier Boix, Gemma Roig, Benjamin de Capitani, and Luc Van Gool. Seeds: Superpixels extracted via energy-driven sampling. In Computer Vision–ECCV 2012, pages 13–26. Springer, 2012. The function initializes a SuperpixelSEEDS object for the input image. Image width Image height Number of channels of the image. Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number. Number of block levels. The more levels, the more accurate the segmentation, but it needs more memory and CPU time. Enable 3x3 shape smoothing term if >0.
A larger value leads to smoother shapes. prior must be in the range [0, 5]. Number of histogram bins. If true, iterate each block level twice for higher accuracy. The function computes the superpixels segmentation of an image with the parameters initialized with the function createSuperpixelSEEDS(). Returns the segmentation labeling of the image. Each label represents a superpixel, and each pixel is assigned to one superpixel label. Return: A CV_32UC1 integer array containing the labels of the superpixel segmentation. The labels are in the range [0, NumberOfSuperpixels]. Returns the mask of the superpixel segmentation stored in the SuperpixelSEEDS object. Return: CV_8UC1 image mask where -1 indicates that the pixel is a superpixel border, and 0 otherwise. If false, the border is only one pixel wide, otherwise all pixels at the border are masked. Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelSEEDS object. This function can be called again for other images without the need of initializing the algorithm with createSuperpixelSEEDS(). This saves the computational cost of allocating memory for all the structures of the algorithm. Input image. Supported formats: CV_8U, CV_16U, CV_32F. The image size and number of channels must match the values used to initialize the object with createSuperpixelSEEDS(). It should be in HSV or Lab color space. Lab is a bit better, but also slower. Number of pixel level iterations. A higher number improves the result. Release the unmanaged memory associated with this object. Class implementing the SLIC (Simple Linear Iterative Clustering) superpixels algorithm described in Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Susstrunk. Slic superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell., 34(11):2274–2282, nov 2012. The algorithm to use SLIC segments the image using a desired region_size SLICO will choose an adaptive compactness factor. The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. Image to segment Chooses the algorithm variant to use Chooses an average superpixel size measured in pixels Chooses the enforcement of the superpixel smoothness factor Calculates the actual number of superpixels on a given segmentation computed and stored in the SuperpixelSLIC object. Returns the segmentation labeling of the image. Each label represents a superpixel, and each pixel is assigned to one superpixel label. A CV_32SC1 integer array containing the labels of the superpixel segmentation. The labels are in the range [0, NumberOfSuperpixels]. Returns the mask of the superpixel segmentation stored in the SuperpixelSLIC object. CV_8UC1 image mask where -1 indicates that the pixel is a superpixel border, and 0 otherwise. If false, the border is only one pixel wide, otherwise all pixels at the border are masked. Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelSLIC object. This function can be called again without the need of initializing the algorithm with createSuperpixelSLIC(). This saves the computational cost of allocating memory for all the structures of the algorithm. Number of iterations. A higher number improves the result. Release the unmanaged memory associated with this object.
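A sketch of the SLIC workflow just described, under the assumption that the wrapper exposes it as a class named SuperpixelSLIC (spelled SupperpixelSLIC in some Emgu.CV releases) with Iterate, GetLabels and GetLabelContourMask members; all names should be verified against your version:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.XImgproc; // assumed namespace

// Assumed members: constructor(image, algorithm, regionSize, ruler),
// Iterate, GetLabels, GetLabelContourMask.
using (Mat image = CvInvoke.Imread("scene.jpg", ImreadModes.Color))
using (SuperpixelSLIC slic = new SuperpixelSLIC(
    image, SuperpixelSLIC.Algorithm.SLICO, 25 /* region size */, 10.0f /* ruler */))
using (Mat labels = new Mat())
using (Mat contourMask = new Mat())
{
    slic.Iterate(10);                            // more iterations improve the result
    slic.GetLabels(labels);                      // CV_32SC1 superpixel id per pixel
    slic.GetLabelContourMask(contourMask, true); // thick superpixel borders
}
```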
Domain Transform filter type NC IC RF Weight type exp(-|I1-I2|^2/(2*sigma^2)) (|I1-I2|+sigma)^-1 (|I1-I2|^2+sigma^2)^-1 dot(I1,I2)/(|I1|*|I2|) (min(r1,r2)+min(g1,g2)+min(b1,b2))/(max(r1,r2)+max(g1,g2)+max(b1,b2)) unweighted Thinning type Thinning technique of Zhang-Suen Thinning technique of Guo-Hall LocalBinarizationMethods type Classic Niblack binarization. Sauvola's technique. Wolf's technique. NICK's technique. Class that contains entry points for the XPhoto module. The function implements simple DCT-based denoising, link: http://www.ipol.im/pub/art/2011/ys-dct/. Source image Destination image Expected noise standard deviation Size of the block side where the DCT is computed Inpaint type Shift map The function implements different single-image inpainting algorithms Source image; it could be of any type and any number of channels from 1 to 4. In the case of 3- and 4-channel images the function expects them in the CIELab color space or a similar one, where the first color component shows intensity, while the second and third show colors. Nonetheless, you can try any color space. Mask (CV_8UC1), where non-zero pixels indicate the valid image area, while zero pixels indicate the area to be inpainted Destination image Algorithm type Implements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms. Input three-channel image in the BGR color space (either CV_8UC3 or CV_16UC3) Output image of the same size and type as src. Gain for the B channel Gain for the G channel Gain for the R channel Performs image denoising using the Block-Matching and 3D-filtering algorithm with several computational optimizations. The noise is expected to be Gaussian white noise. Input 8-bit or 16-bit 1-channel image. Output image of the first step of BM3D with the same size and type as src. Output image of the second step of BM3D with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used for block-matching. Should be a power of 2. Size in pixels of the window that is used to perform block-matching. Affects performance linearly: a greater searchWindowSize means greater denoising time. Must be larger than templateWindowSize. Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. Value expressed in Euclidean distance. Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. Value expressed in Euclidean distance. Maximum size of the 3D group for collaborative filtering. Sliding step to process every next reference block. Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. The Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero. Norm used to calculate the distance between blocks. L2 is slower than L1 but yields more accurate results. Step of BM3D to be executed. Possible variants are: step 1, step 2, both steps. Type of the orthogonal transform used in the collaborative filtering step. Currently only the Haar transform is supported.
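As a rough illustration of the DCT-based denoising entry point documented earlier in this block, the following sketch assumes the wrapper exposes it as XPhotoInvoke.DctDenoising (name and signature are assumptions to verify):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.XPhoto; // assumed namespace for the xphoto wrappers

// Assumed entry point: XPhotoInvoke.DctDenoising(src, dst, sigma, blockSize).
using (Mat noisy = CvInvoke.Imread("noisy.png", ImreadModes.Color))
using (Mat denoised = new Mat())
{
    // sigma: expected noise standard deviation;
    // blockSize: side of the block where the DCT is computed (illustrative values).
    XPhotoInvoke.DctDenoising(noisy, denoised, 15.0, 16);
}
```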
Performs image denoising using the Block-Matching and 3D-filtering algorithm with several computational optimizations. The noise is expected to be Gaussian white noise. Input 8-bit or 16-bit 1-channel image. Output image with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used for block-matching. Should be a power of 2. Size in pixels of the window that is used to perform block-matching. Affects performance linearly: a greater searchWindowSize means greater denoising time. Must be larger than templateWindowSize. Block matching threshold for the first step of BM3D (hard thresholding), i.e. the maximum distance for which two blocks are considered similar. Value expressed in Euclidean distance. Block matching threshold for the second step of BM3D (Wiener filtering), i.e. the maximum distance for which two blocks are considered similar. Value expressed in Euclidean distance. Maximum size of the 3D group for collaborative filtering. Sliding step to process every next reference block. Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. The Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero. Norm used to calculate the distance between blocks. L2 is slower than L1 but yields more accurate results. Step of BM3D to be executed. Only BM3D_STEP1 and BM3D_STEPALL are allowed. BM3D_STEP2 is not allowed as it requires the basic estimate to be present. Type of the orthogonal transform used in the collaborative filtering step. Currently only the Haar transform is supported. Gray-world white balance algorithm. This algorithm scales the values of pixels based on a gray-world assumption, which states that the average of all channels should result in a gray image. It adds a modification which thresholds pixels based on their saturation value and only uses pixels below the provided threshold in finding average pixel values. Saturation is calculated using the following for a 3-channel RGB image per pixel I and is in the range [0, 1]: Saturation[I] = (max(R,G,B) − min(R,G,B)) / max(R,G,B) A threshold of 1 means that all pixels are used to white-balance, while a threshold of 0 means no pixels are used. Lower thresholds are useful in white-balancing saturated images. Currently supports images of type CV_8UC3 and CV_16UC3. Maximum saturation for a pixel to be included in the gray-world assumption Creates a gray-world white balancer Release all the unmanaged memory associated with this white balancer
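A minimal sketch of applying the gray-world balancer just described, assuming the wrapper exposes a GrayworldWB class with a SaturationThreshold property and the BalanceWhite method of the white balancer base class documented below (names are assumptions to verify):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.XPhoto; // assumed namespace

// Assumed members: GrayworldWB.SaturationThreshold, WhiteBalancer.BalanceWhite.
using (Mat src = CvInvoke.Imread("photo.jpg", ImreadModes.Color))
using (Mat balanced = new Mat())
using (GrayworldWB wb = new GrayworldWB())
{
    wb.SaturationThreshold = 0.95f; // ignore nearly saturated pixels
    wb.BalanceWhite(src, balanced);
}
```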
More sophisticated learning-based automatic white balance algorithm. As with GrayworldWB, this algorithm works by applying different gains to the input image channels, but their computation is a bit more involved compared to the simple gray-world assumption. More details about the algorithm can be found in: Dongliang Cheng, Brian Price, Scott Cohen, and Michael S Brown. Effective learning-based illuminant estimation using simple features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1000–1008, 2015. To mask out saturated pixels this function uses only pixels that satisfy the following condition: max(R,G,B) / range_max_val < saturation_thresh Currently supports images of type CV_8UC3 and CV_16UC3. Maximum possible value of the input image (e.g. 255 for 8-bit images, 4095 for 12-bit images) Threshold that is used to determine saturated pixels, i.e. pixels where at least one of the channels exceeds saturation_threshold x range_max_val are ignored. Defines the size of one dimension of a three-dimensional RGB histogram that is used internally by the algorithm. It often makes sense to increase the number of bins for images with higher bit depth (e.g. 256 bins for a 12-bit image). Create a learning-based white balancer. Release all the unmanaged memory associated with this white balancer A simple white balance algorithm that works by independently stretching each of the input image channels to the specified range. For increased robustness it ignores the top and bottom p% of pixel values. Input image range minimum value Input image range maximum value Output image range minimum value Output image range maximum value Percent of top/bottom values to ignore Creates a simple white balancer Release all the unmanaged memory associated with this white balancer BM3D denoising transform types Un-normalized Haar transform BM3D steps Execute all steps of the algorithm Execute only the first step of the algorithm Execute only the second step of the algorithm The base class for auto white balance algorithms. Pointer to the native white balancer object Applies white balancing to the input image. Input image White balancing result Reset the pointer to the native white balancer object This class is used to track multiple objects using the specified tracker algorithm. The MultiTracker is a naive implementation of multiple object tracking. It processes the tracked objects independently, without any optimization across the tracked objects. Constructor. If trackerType is given, it will be set as the default algorithm for all trackers. Add a new object to be tracked. The defaultAlgorithm will be used for the newly added tracker. The tracker to use for tracking the image Input image A rectangle that represents the ROI of the tracked object True if successfully added Update the current tracking status. The result will be saved in the internal storage. Input image The tracking result, representing a list of ROIs of the tracked objects. True if the update is successful Release the unmanaged memory associated with this multi-tracker. This is a real-time object tracking based on a novel on-line version of the AdaBoost algorithm. The classifier uses the surrounding background as negative examples in the update step to avoid the drifting problem. Create a Boosting Tracker The number of classifiers to use in an OnlineBoosting algorithm Search region parameters to use in an OnlineBoosting algorithm Search region parameters to use in an OnlineBoosting algorithm The initial iterations Number of features; a good value would be 10*numClassifiers + iterationInit Release all the unmanaged memory associated with this Boosting Tracker Median Flow tracker implementation. The tracker is suitable for very smooth and predictable movements when the object is visible throughout the whole sequence. It's quite accurate for this type of problem (in particular, it was shown by the authors to outperform MIL). During the implementation period the code at http://www.aonsquared.co.uk/node/5, courtesy of the author Arthur Amarra, was used for reference purposes. Create a median flow tracker Points in grid, use 10 for default. Win size, use (3, 3) for default Max level, use 5 for default. Termination criteria, use count = 20 and eps = 0.3 for default Win size NCC, use (30, 30) for default Max median length of displacement difference Release the unmanaged resources associated with this tracker
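A minimal tracking loop using the median flow tracker just documented, built from the default values its constructor documentation suggests. The Init/Update pattern comes from the Tracker base class described further below; the constructor parameter order, the final displacement-difference value of 10.0, and the VideoCapture usage are assumptions to verify:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Tracking; // assumed namespace for the tracking wrappers

// Defaults taken from the documentation above: 10 grid points, (3,3) window,
// 5 pyramid levels, termination criteria count=20 eps=0.3, (30,30) NCC window.
// The 10.0 max median displacement difference is an illustrative value.
using (VideoCapture capture = new VideoCapture("video.mp4"))
using (TrackerMedianFlow tracker = new TrackerMedianFlow(
    10, new Size(3, 3), 5, new MCvTermCriteria(20, 0.3), new Size(30, 30), 10.0))
{
    Mat frame = capture.QueryFrame();
    Rectangle roi = new Rectangle(100, 100, 50, 50); // initial target location
    tracker.Init(frame, roi);

    while ((frame = capture.QueryFrame()) != null)
    {
        // Update returns true when the target was located in the current frame.
        if (tracker.Update(frame, out roi))
            CvInvoke.Rectangle(frame, roi, new MCvScalar(0, 255, 0));
    }
}
```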
The MIL algorithm trains a classifier in an online manner to separate the object from the background. Multiple Instance Learning avoids the drift problem for robust tracking. The original code can be found at http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml Creates a MIL Tracker Radius for gathering positive instances during init Negative samples to use during init Size of the search window Radius for gathering positive instances during tracking Positive samples to use during tracking Negative samples to use during tracking Features Release all the unmanaged memory associated with this tracker TLD is a novel tracking framework that explicitly decomposes the long-term tracking task into tracking, learning and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. Creates a TLD tracker Release the unmanaged resources associated with this tracker KCF is a novel tracking framework that utilizes properties of the circulant matrix to enhance the processing speed. This tracking method is an implementation of @cite KCF_ECCV which is extended to KCF with color-names features (@cite KCF_CN). The original paper of KCF is available at http://home.isr.uc.pt/~henriques/circulant/index.html as well as the matlab implementation. For more information about KCF with color-names features, please refer to http://www.cvl.isy.liu.se/research/objrec/visualtracking/colvistrack/index.html. Feature type to be used in the tracking: grayscale, colornames, compressed color-names The modes available now: - "GRAY" -- Use grayscale values as the feature - "CN" -- Color-names feature Grayscale Color Custom Creates a KCF Tracker Detection confidence threshold Gaussian kernel bandwidth Regularization Linear interpolation factor for adaptation Spatial bandwidth (proportional to target) Compression learning rate Activate the resize feature to improve the processing speed Split the training coefficients into two matrices Wrap around the kernel values Activate the PCA method to compress the features Threshold for the ROI size Feature size after compression Compressed descriptors of TrackerKCF::MODE Non-compressed descriptors of TrackerKCF::MODE Release the unmanaged resources associated with this tracker GOTURN is a kind of tracker based on Convolutional Neural Networks (CNN). While taking all the advantages of CNN trackers, GOTURN is much faster, as it is trained offline and requires no online fine-tuning. The GOTURN tracker addresses the problem of single target tracking: given a bounding box label of an object in the first frame of the video, we track that object through the rest of the video. NOTE: The current method of GOTURN does not handle occlusions; however, it is fairly robust to viewpoint changes, lighting changes, and deformations. The inputs of GOTURN are two RGB patches representing the Target and Search patches resized to 227x227. The outputs of GOTURN are predicted bounding box coordinates, relative to the Search patch coordinate system, in the format X1,Y1,X2,Y2. The original paper is here: http://davheld.github.io/GOTURN/GOTURN.pdf as well as the original author's implementation: https://github.com/davheld/GOTURN#train-the-tracker The implementation of the training algorithm is placed separately due to third-party dependencies: https://github.com/Auron-X/GOTURN_Training_Toolkit The GOTURN architecture goturn.prototxt and trained model goturn.caffemodel are accessible on the opencv_extra GitHub repository.
Create a GOTURN tracker Release the unmanaged resources associated with this tracker MOSSE Visual Object Tracking using Adaptive Correlation Filters. Note that this tracker works with grayscale images; if BGR images are passed, they will be converted internally. Create a MOSSE tracker Release the unmanaged resources associated with this tracker Discriminative Correlation Filter Tracker with Channel and Spatial Reliability Creates a CSRT tracker Release the unmanaged resources associated with this tracker Long-term tracker The native pointer to the tracker Initialize the tracker with a known bounding box surrounding the target. The initial frame The initial bounding box Update the tracker, find the new most likely bounding box for the target. The current frame The bounding box that represents the new target location if true was returned; not modified otherwise True means that the target was located; false means that the tracker could not locate the target in the current frame. Note that the latter does not imply that the tracker has failed; the target may simply be missing from the frame (say, out of sight) Release the unmanaged memory associated with this tracker A 2D plot Create a 2D plot from data The data to be plotted Create a 2D plot for data The data for the X-axis The data for the Y-axis Render the plot to the resulting Mat The output plot Set the line color The plot line color Set the background color The background color Set the axis color The axis color Set the plot grid color The plot grid color Set the plot text color The plot text color Set the plot size The width The height Release unmanaged memory associated with this plot2d. Min X Min Y Max X Max Y Plot line width Entry points for the cv::plot functions Entry points for the Aruco module. Draw a canonical marker image. Dictionary of markers indicating the type of markers Identifier of the marker that will be returned. It has to be a valid id in the specified dictionary. Size of the image in pixels Output image with the marker Width of the marker border. Performs marker detection in the input image. Only markers included in the specific dictionary are searched. For each detected marker, it returns the 2D position of its corners in the image and its corresponding identifier. Note that this function does not perform pose estimation. Input image Indicates the type of markers that will be searched Vector of detected marker corners. For each marker, its four corners are provided (e.g. VectorOfVectorOfPointF). For N detected markers, the dimensions of this array are Nx4. The order of the corners is clockwise. Vector of identifiers of the detected markers. The identifier is of type int (e.g. VectorOfInt). For N detected markers, the size of ids is also N. The identifiers have the same order as the markers in the imgPoints array. Marker detection parameters Contains the imgPoints of those squares whose inner code does not have a correct codification. Useful for debugging purposes.
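A sketch of the marker detection and drawing entry points just described, assuming the wrapper exposes them as ArucoInvoke.DetectMarkers and ArucoInvoke.DrawDetectedMarkers together with the Dictionary and DetectorParameters types documented below (names are assumptions to verify):

```csharp
using Emgu.CV;
using Emgu.CV.Aruco;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Assumed entry points: ArucoInvoke.DetectMarkers / DrawDetectedMarkers,
// Dictionary(PredefinedDictionaryName), DetectorParameters.GetDefault().
using (Mat image = CvInvoke.Imread("markers.jpg", ImreadModes.Color))
using (Dictionary dict = new Dictionary(Dictionary.PredefinedDictionaryName.Dict4X4_100))
using (VectorOfVectorOfPointF corners = new VectorOfVectorOfPointF())
using (VectorOfInt ids = new VectorOfInt())
{
    DetectorParameters parameters = DetectorParameters.GetDefault();
    // Pass null for the rejected candidates output when it is not needed.
    ArucoInvoke.DetectMarkers(image, dict, corners, ids, parameters, null);
    if (ids.Size > 0)
        ArucoInvoke.DrawDetectedMarkers(image, corners, ids, new MCvScalar(0, 255, 0));
}
```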
Given the pose estimation of a marker or board, this function draws the axes of the world coordinate system, i.e. the system centered on the marker/board. Useful for debugging purposes. Input/output image. It must have 1 or 3 channels. The number of channels is not altered. Input 3x3 floating-point camera matrix Vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6],[s1,s2,s3,s4]]) of 4, 5, 8 or 12 elements Rotation vector of the coordinate system that will be drawn. Translation vector of the coordinate system that will be drawn. Length of the painted axes in the same unit as tvec (usually in meters) This function receives the detected markers and returns their pose estimation with respect to the camera individually. So for each marker, one rotation and translation vector is returned. The returned transformation is the one that transforms points from each marker coordinate system to the camera coordinate system. The marker coordinate system is centered on the middle of the marker, with the Z axis perpendicular to the marker plane. The coordinates of the four corners of the marker in its own coordinate system are: (-markerLength/2, markerLength/2, 0), (markerLength/2, markerLength/2, 0), (markerLength/2, -markerLength/2, 0), (-markerLength/2, -markerLength/2, 0) Vector of already detected marker corners. For each marker, its four corners are provided (e.g. VectorOfVectorOfPointF). For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. The length of the markers' side. The returned translation vectors will be in the same unit. Normally, the unit is meters. Input 3x3 floating-point camera matrix Vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6],[s1,s2,s3,s4]]) of 4, 5, 8 or 12 elements Array of output rotation vectors. Each element in rvecs corresponds to the specific marker in imgPoints. Array of output translation vectors (e.g. VectorOfPoint3D32F). Each element in tvecs corresponds to the specific marker in imgPoints. Refine not-detected markers based on the already detected markers and the board layout. Input image Layout of markers in the board. Vector of already detected marker corners. Vector of already detected marker identifiers. Vector of rejected candidates during the marker detection process Optional input 3x3 floating-point camera matrix Optional vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6],[s1,s2,s3,s4]]) of 4, 5, 8 or 12 elements Minimum distance between the corners of the rejected candidate and the reprojected marker in order to consider it as a correspondence. (default 10) Rate of allowed erroneous bits with respect to the error correction capability of the used dictionary. -1 ignores the error correction step. (default 3) Consider the four possible corner orders in the rejectedCorners array. If set to false, only the provided corner order is considered (default true). Optional array that returns the indices of the recovered candidates in the original rejectedCorners array. Marker detection parameters Draw detected markers in an image. Input/output image. It must have 1 or 3 channels. The number of channels is not altered. Positions of marker corners on the input image (e.g. std::vector<std::vector<cv::Point2f>>). For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. Vector of identifiers for the markers in markersCorners. Optional, if not provided, ids are not painted. Color of marker borders. The rest of the colors (text color and first corner color) are calculated based on this one to improve visualization. Calibrate a camera using aruco markers. Vector of detected marker corners in all frames. The corners should have the same format returned by detectMarkers List of identifiers for each marker in corners Number of markers in each frame so that corners and ids can be split Marker Board layout Size of the image used only to initialize the intrinsic camera matrix. Output 3x3 floating-point camera matrix.
Output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6],[s1,s2,s3,s4]]) of 4, 5, 8 or 12 elements Output vector of rotation vectors (see Rodrigues) estimated for each board view (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the board pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the board pattern in the k-th pattern view (k = 0..M-1). Output vector of translation vectors estimated for each pattern view. Flags Different flags for the calibration process Termination criteria for the iterative optimization algorithm. The final re-projection error. Interpolate the position of ChArUco board corners Vector of already detected marker corners. For each marker, its four corners are provided (e.g. VectorOfVectorOfPointF). For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. List of identifiers for each marker in corners Input image necessary for corner refinement. Note that markers are not detected and should be sent in the corners and ids parameters. Layout of the ChArUco board. Interpolated chessboard corners Interpolated chessboard corner identifiers Optional 3x3 floating-point camera matrix Optional vector of distortion coefficients, (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]]) of 4, 5, 8 or 12 elements Number of adjacent markers that must be detected to return a charuco corner The number of interpolated corners. Draws a set of ChArUco corners on an image Input/output image. It must have 1 or 3 channels. The number of channels is not altered. Vector of detected charuco corners List of identifiers for each corner in charucoCorners Color of the square surrounding each corner Pose estimation for a ChArUco board given some of its corners Vector of detected charuco corners List of identifiers for each corner in charucoCorners Layout of the ChArUco board. Input 3x3 floating-point camera matrix Vector of distortion coefficients, 4, 5, 8 or 12 elements Output vector (e.g. cv::Mat) corresponding to the rotation vector of the board Output vector (e.g. cv::Mat) corresponding to the translation vector of the board. Defines whether the initial guess for rvec and tvec will be used or not. If the pose estimation is valid, returns true, else returns false. Detect ChArUco Diamond markers Input image necessary for corner subpixel refinement. List of detected marker corners from the detectMarkers function. List of marker ids in markerCorners. Rate between square and marker length: squareMarkerLengthRate = squareLength / markerLength. The real units are not necessary. Output list of detected diamond corners (4 corners per diamond). The order is the same as in marker corners: top left, top right, bottom right and bottom left. Similar format to the corners returned by detectMarkers (e.g. VectorOfVectorOfPointF). Ids of the diamonds in diamondCorners. The id of each diamond is in fact of type Vec4i, so each diamond has 4 ids, which are the ids of the aruco markers composing the diamond. Optional camera calibration matrix. Optional camera distortion coefficients. Draw a set of detected ChArUco Diamond markers Input/output image. It must have 1 or 3 channels. The number of channels is not altered. Positions of diamond corners in the same format returned by detectCharucoDiamond() (e.g. VectorOfVectorOfPointF).
For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. Vector of identifiers for diamonds in diamondCorners, in the same format returned by detectCharucoDiamond() (e.g. VectorOfMat). Optional, if not provided, ids are not painted. Color of marker borders. The rest of the colors (text color and first corner color) are calculated based on this one. Draw a ChArUco Diamond marker Dictionary of markers indicating the type of markers. List of 4 ids for each ArUco marker in the ChArUco marker. Size of the chessboard squares in pixels. Size of the markers in pixels. Output image with the marker. The size of this image will be 3*squareLength + 2*marginSize. Minimum margins (in pixels) of the marker in the output image Width of the marker borders. Parameters for the detectMarker process Type of corner refinement method Default corners Refine the corners using the subpixel method Refine the corners using the contour points Minimum window size for adaptive thresholding before finding contours (default 3) Maximum window size for adaptive thresholding before finding contours (default 23). Increments from adaptiveThreshWinSizeMin to adaptiveThreshWinSizeMax during the thresholding (default 10). Constant for adaptive thresholding before finding contours (default 7) Determines the minimum perimeter for a marker contour to be detected. This is defined as a rate with respect to the maximum dimension of the input image (default 0.03). Determines the maximum perimeter for a marker contour to be detected. This is defined as a rate with respect to the maximum dimension of the input image (default 4.0). Minimum accuracy during the polygonal approximation process to determine which contours are squares. Minimum distance between corners for detected markers relative to their perimeter (default 0.05) Minimum distance of any corner to the image border for detected markers (in pixels) (default 3) Minimum mean distance between two marker corners to be considered similar, so that the smaller one is removed. The rate is relative to the smaller perimeter of the two markers (default 0.05). Corner refinement method Window size for the corner refinement process (in pixels) (default 5). Maximum number of iterations for the stop criteria of the corner refinement process (default 30). Minimum error for the stop criteria of the corner refinement process (default: 0.1) Number of bits of the marker border, i.e. marker border width (default 1). Number of bits (per dimension) for each cell of the marker when removing the perspective (default 8). Width of the margin of pixels on each cell not considered for the determination of the cell bit. Represents the rate with respect to the total size of the cell, i.e. perspectiveRemovePixelPerCell (default 0.13) Maximum number of accepted erroneous bits in the border (i.e. number of allowed white bits in the border). Represented as a rate with respect to the total number of bits per marker (default 0.35). Minimum standard deviation in pixel values during the decoding step to apply Otsu thresholding (otherwise, all the bits are set to 0 or 1 depending on whether the mean is higher than 128 or not) (default 5.0) Error correction rate with respect to the maximum error correction capability for each dictionary. (default 0.6). Get the detector parameters with default values The default detector parameters Dictionary/Set of markers. It contains the inner codification. Create a Dictionary using predefined values The name of the predefined dictionary Generates a new customizable marker dictionary.
Number of markers in the dictionary Number of bits per dimension of each marker Generates a new customizable marker dictionary. Number of markers in the dictionary Number of bits per dimension of each marker Include the markers of this dictionary at the beginning (optional) The name of the predefined dictionary Dict4X4_50 Dict4X4_100 Dict4X4_250 Dict4X4_1000 Dict5X5_50 Dict5X5_100 Dict5X5_250 Dict5X5_1000 Dict6X6_50 Dict6X6_100 Dict6X6_250 Dict6X6_1000 Dict7X7_50 Dict7X7_100 Dict7X7_250 Dict7X7_1000 Standard ArUco Library Markers. 1024 markers, 5x5 bits, 0 minimum distance Release the unmanaged resource Board of markers Pointer to native IBoard Planar board with grid arrangement of markers More common type of board. All markers are placed in the same plane in a grid arrangement. Create a GridBoard object. Number of markers in the X direction Number of markers in the Y direction Marker side length (normally in meters) Separation between two markers (same unit as markerLength) Dictionary of markers indicating the type of markers. The first markersX*markersY markers in the dictionary are used. Id of the first marker in the dictionary to use on the board. Draw a GridBoard. Size of the output image in pixels. Output image with the board. The size of this image will be outSize and the board will be in the center, keeping the board proportions. Minimum margins (in pixels) of the board in the output image Width of the marker borders. Release the unmanaged resource associated with this GridBoard Pointer to native IBoard A ChArUco board is a planar board where the markers are placed inside the white squares of a chessboard. The benefit of ChArUco boards is that they provide both ArUco marker versatility and chessboard corner precision, which is important for calibration and pose estimation. ChArUco board Number of chessboard squares in the X direction Number of chessboard squares in the Y direction Chessboard square side length (normally in meters) Marker side length (same unit as squareLength) Dictionary of markers indicating the type of markers. Draw a ChArUco board Size of the output image in pixels. Output image with the board. The size of this image will be outSize and the board will be in the center, keeping the board proportions. Minimum margins (in pixels) of the board in the output image Width of the marker borders. Release the unmanaged resource associated with this ChArUco board Pointer to native IBoard The module brings implementation of the image processing algorithms based on fuzzy mathematics. Function type Linear Sinus Inpaint algorithm One-step algorithm. Algorithm automatically increasing the radius of the basic function. Iterative algorithm running in more steps using partial computations. Creates a kernel from basic functions. Basic function used in axis x. Basic function used in axis y. Final 32-bit kernel derived from A and B. Number of kernel channels. Creates a kernel from general functions. Function type Radius of the basic function. Final 32-bit kernel. Number of kernel channels. Image inpainting. Input image. Mask used for unwanted area marking. Output 32-bit image. Radius of the basic function. Function type Algorithm type Image filtering. Input image. Final 32-bit kernel. Output 32-bit image. Class implementing both functionalities for detection of lines and computation of their binary descriptor. Default constructor Line detection. Input image Vector that will store extracted lines for one or more images Mask matrix to detect only KeyLines of interest Descriptors computation.
Input image Vector containing lines for which descriptors must be computed Computed descriptors will be stored here When true, original non-binary descriptors are returned Release unmanaged memory associated with this binary descriptor Entry points for the LineDescriptor module The lines extraction methodology described in the following is mainly based on: R. Grompone Von Gioi, Jeremie Jakubowicz, Jean-Michel Morel, and Gregory Randall. LSD: A fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):722–732, 2010. Default constructor Detect lines inside an image. Input image Vector that will store extracted lines for one or more images Scale factor used in pyramid generation Number of octaves inside the pyramid Mask matrix to detect only KeyLines of interest Release the unmanaged memory associated with this object. A class to represent a line. Orientation of the line Object ID, that can be used to cluster keylines by the line they represent Octave (pyramid layer) from which the keyline has been extracted Coordinates of the midpoint The response, by which the strongest keylines have been selected. It's represented by the ratio between the line's length and the maximum of the image's width and height Minimum area containing the line Line's extremes in the original image Line's extremes in the original image Line's extremes in the original image Line's extremes in the original image Line's extremes in the image it was extracted from Line's extremes in the image it was extracted from Line's extremes in the image it was extracted from Line's extremes in the image it was extracted from The length of the line Number of pixels covered by the line Wrapped class of the C++ standard vector of KeyLine. Constructor used to deserialize a runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of KeyLine Create a standard vector of KeyLine of the specific size The size of the vector Create a standard vector of KeyLine with the initial values The initial values Push an array of values into the standard vector The value to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of KeyLine An array of KeyLine Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. WaldBoost detector. Create an instance of WBDetector. Read the detector from a FileNode. FileNode for input Write the detector to a FileStorage. FileStorage for output Train the WaldBoost detector. Path to a directory with cropped positive samples Path to a directory with negative (background) images Detect objects on an image using the WaldBoost detector. Input image for detection Bounding box coordinates output vector Confidence values for the bounding boxes output vector Release all the unmanaged memory associated with this WBDetector. Class that contains entry points for the XObjdetect module.
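A sketch of the LSD-based line detector documented earlier in this block, assuming the wrapper exposes LSDDetector, VectorOfKeyLine and an MKeyLine struct (class and member names are assumptions to verify):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.LineDescriptor; // assumed namespace

// Assumed members: LSDDetector.Detect(image, keylines, scale, numOctaves [, mask]),
// VectorOfKeyLine indexer returning an MKeyLine.
using (Mat gray = CvInvoke.Imread("building.jpg", ImreadModes.Grayscale))
using (LSDDetector detector = new LSDDetector())
using (VectorOfKeyLine lines = new VectorOfKeyLine())
{
    detector.Detect(gray, lines, 2, 1); // scale factor 2, 1 octave (illustrative)
    for (int i = 0; i < lines.Size; i++)
    {
        MKeyLine line = lines[i]; // extremes, length, octave, etc.
    }
}
```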
A wrapper class which allows the Gipsa/Listic Labs model to be used. This retina model allows spatio-temporal image processing (applied on still images and video sequences). As a summary, these are the retina model properties: 1. It applies a spectral whitening (mid-frequency detail enhancement); 2. high frequency spatio-temporal noise reduction; 3. low frequency luminance reduction (luminance range compression); 4. local logarithmic luminance compression allows details to be enhanced in low light conditions. USE: this model can be used basically for spatio-temporal video effects but also for: _using the getParvo method output matrix: texture analysis with enhanced signal-to-noise ratio and enhanced details, robust against input image luminance ranges; _using the getMagno method output matrix: motion analysis, also with the previously cited properties. For more information, refer to the following papers: Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011 Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891. The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author: _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and the reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007 _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling, which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. More information is in the above-cited Jeanny Herault's book. Create a retina model The input frame size Create a retina model The input frame size Specifies whether (true) color is processed or not (false); if not, gray-level images are processed Specifies which kind of color sampling will be used Activate retina log sampling; if true, the 2 following parameters can be used Only useful if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, a reduction of the output is allowed without precision leak) Only useful if param useRetinaLogSampling=true, specifies the strength of the log scale that is applied Get or Set the Retina parameters. Method which allows the retina to be applied to an input image. After run, the encapsulated retina module is ready to deliver its outputs using the dedicated accessors GetParvo and GetMagno. The input image to be processed Accessors of the details channel of the retina (models foveal vision) The details channel of the retina. Accessors of the motion channel of the retina (models peripheral vision) The motion channel of the retina. Clear all retina buffers (equivalent to opening the eyes after a long period of eye closure). Release all unmanaged memory associated with the retina model.
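A minimal sketch of the retina workflow just described: feed a frame with Run, then read the foveal (Parvo) and peripheral (Magno) channels. The member names Run, GetParvo and GetMagno are assumptions to verify against your Emgu.CV version:

```csharp
using Emgu.CV;
using Emgu.CV.Bioinspired; // assumed namespace
using Emgu.CV.CvEnum;

// Assumed members: Retina(Size), Run(input), GetParvo(output), GetMagno(output).
using (Mat frame = CvInvoke.Imread("frame.jpg", ImreadModes.Color))
using (Retina retina = new Retina(frame.Size))
using (Mat parvo = new Mat())
using (Mat magno = new Mat())
{
    retina.Run(frame);
    retina.GetParvo(parvo); // details channel / foveal vision
    retina.GetMagno(magno); // motion channel / peripheral vision
}
```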
The retina color sampling method. Each pixel position is either R, G or B in a random choice Color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3 GBRGBRGBR... Standard Bayer sampling Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters Specifies whether (true) color is processed or not (false); if not, gray-level images are processed Normalise output. Use true for default Photoreceptors local adaptation sensitivity. Use 0.7 for default Photoreceptors temporal constant. Use 0.5 for default Photoreceptors spatial constant. Use 0.53 for default Horizontal cells gain. Use 0.0 for default Hcells temporal constant. Use 1.0 for default Hcells spatial constant. Use 7.0 for default Ganglion cells sensitivity. Use 0.7 for default Inner Plexiform Layer Magnocellular channel (IplMagno) Normalise output ParasolCells_beta. Use 0.0 for default ParasolCells_tau. Use 0.0 for default ParasolCells_k. Use 7.0 for default Amacrin cells temporal cut frequency. Use 1.2 for default V0 compression parameter. Use 0.95 for default LocalAdaptintegration_tau. Use 0.0 for default LocalAdaptintegration_k. Use 7.0 for default Retina parameters Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters Inner Plexiform Layer Magnocellular channel (IplMagno) Entry points to the OpenCV bioinspired module Computes the average hash value of the input image. This is a fast image hashing algorithm, but it only works in simple cases. Create an average hash object. Release all the unmanaged resources associated with AverageHash The module brings implementations of different image hashing algorithms. Image hash based on block mean. Block Mean Hash mode: mode 0 uses fewer blocks and generates a 16*16/8 uchar hash value; mode 1 uses overlapped blocks (step sizes/2) and generates a 31*31/8 + 1 uchar hash value Create a Block Mean Hash object The hash mode Release all the unmanaged resources associated with BlockMeanHash Image hash based on color moments. Create a Color Moment Hash object Release all the unmanaged resources associated with ColorMomentHash The Image Hash base class The pointer to the ImgHashBase object Get the pointer to the ImgHashBase object The pointer to the ImgHashBase object Reset the pointers Computes the hash of the input image Input image to compute the hash value for Hash of the image Compare the hash value between inOne and inTwo Hash value one Hash value two Indicates the similarity between inOne and inTwo; the meaning of the value varies from algorithm to algorithm Marr-Hildreth Operator Based Hash, slowest but more discriminative. Create a Marr-Hildreth operator based hash. Scale factor for the Marr wavelet. Level of scale factor Release all the unmanaged resources associated with MarrHildrethHash Slower than average hash, but tolerant of minor modifications Create a PHash object Release all the unmanaged resources associated with PHash Image hash based on Radon transform Create an image hash based on the Radon transform Sigma Number of angle lines Release all the unmanaged resources associated with RadialVarianceHash
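A sketch of the compute/compare pattern shared by the hash classes above, shown with PHash; the Compute and Compare member names come from the base class documentation, while the namespace is an assumption to verify:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.ImgHash; // assumed namespace

// Compute a hash Mat per image, then Compare the two hash values.
using (Mat imgA = CvInvoke.Imread("a.jpg", ImreadModes.Color))
using (Mat imgB = CvInvoke.Imread("b.jpg", ImreadModes.Color))
using (PHash hasher = new PHash())
using (Mat hashA = new Mat())
using (Mat hashB = new Mat())
{
    hasher.Compute(imgA, hashA);
    hasher.Compute(imgB, hashB);
    // For PHash the comparison behaves like a Hamming distance:
    // smaller values mean more similar images.
    double distance = hasher.Compare(hashA, hashB);
}
```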
Class implementing two-dimensional phase unwrapping. This algorithm belongs to the quality-guided phase unwrapping methods. First, it computes a reliability map from second differences between a pixel and its eight neighbours. Reliability values lie between 0 and 16*pi*pi. Then, this reliability map is used to compute the reliabilities of "edges". An edge is an entity defined by two pixels that are connected horizontally or vertically. Its reliability is found by adding the reliabilities of the two pixels connected through it. Edges are sorted in a histogram based on their reliability values. This histogram is then used to unwrap pixels, starting from the highest quality pixel. Create a HistogramPhaseUnwrapping instance Phase map width. Phase map height. Bins in the histogram are not of equal size; the ones before the "histThresh" value are smaller. Default value is 3*pi*pi. Number of bins between 0 and "histThresh". Default value is 10. Number of bins between "histThresh" and 32*pi*pi (highest edge reliability value). Default value is 5. Release the unmanaged resources associated with the HistogramPhaseUnwrapping Get the reliability map computed from the wrapped phase map. Image where the reliability map is stored. Unwraps a 2D phase map. The wrapped phase map that needs to be unwrapped. The unwrapped phase map. Optional parameter used when some pixels do not hold any phase information in the wrapped phase map. Provides interfaces to the OpenCV PhaseUnwrapping functions Contrast Limited Adaptive Histogram Equalization Create the Contrast Limited Adaptive Histogram Equalization Threshold for contrast limiting. Use 40.0 for default Size of the grid for histogram equalization. The input image will be divided into equally sized rectangular tiles. This parameter defines the number of tiles in row and column. Use (8, 8) for default Equalizes the histogram of a grayscale image using Contrast Limited Adaptive Histogram Equalization. Source image Destination image Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged memory associated with this object This class wraps the functional calls to the opencv_gpu module Get the compute capability of the device The device The major version of the compute capability The minor version of the compute capability Get the number of multiprocessors on the device The device The number of multiprocessors on the device Get the device name Return true if Cuda is found on the system Get the OpenCL platform summary as a string An OpenCL platform summary Get the number of Cuda enabled devices The number of Cuda enabled devices Set the current Gpu Device The id of the device to be set as current Get the current Cuda device id The current Cuda device id Create a GpuMat from the specific region of a GpuMat. The data is shared between the two GpuMat. The gpuMat to extract regions from. The column range. Use MCvSlice.WholeSeq for all columns. The row range. Use MCvSlice.WholeSeq for all rows. Pointer to the GpuMat Resize the GpuMat The input GpuMat The resulting GpuMat The interpolation type Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Reshape the src GpuMat The source GpuMat The resulting GpuMat, as input it should be an empty GpuMat. The new number of channels The new number of rows Returns a header corresponding to a specified rectangle of the input GpuMat. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. Input GpuMat Zero-based coordinates of the rectangle of interest. Pointer to the resultant sub-array header. Shifts a matrix to the left (c = a << scalar) The matrix to be shifted. The scalar to shift by. The result of the shift Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
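A sketch of the typical upload/process/download round trip behind the GpuMat functions in this block, using the transpose routine documented below; the CudaInvoke member names are assumptions to verify:

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.CvEnum;

// Assumed members: CudaInvoke.HasCuda, GpuMat.Upload/Download,
// CudaInvoke.Transpose(src, dst, stream).
if (CudaInvoke.HasCuda)
{
    using (Mat cpuSrc = CvInvoke.Imread("image.png", ImreadModes.Grayscale))
    using (GpuMat gpuSrc = new GpuMat())
    using (GpuMat gpuDst = new GpuMat())
    using (Mat cpuDst = new Mat())
    {
        gpuSrc.Upload(cpuSrc);                  // copy host -> device
        // A null stream means a synchronous (blocking) call, as documented below.
        CudaInvoke.Transpose(gpuSrc, gpuDst, null);
        gpuDst.Download(cpuDst);                // copy device -> host
    }
}
```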
Shifts a matrix to the right (c = a >> scalar) The matrix to be shifted. The scalar to shift by. The result of the shift Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Adds one matrix to another (c = a + b). The first matrix to be added. The second matrix to be added. The sum of the two matrices The optional mask that is used to select a subarray. Use null if not needed Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Subtracts one matrix from another (c = a - b). The matrix where the subtraction takes place The matrix to be subtracted The result of a - b The optional mask that is used to select a subarray. Use null if not needed Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Computes the element-wise product of the two GpuMats: c = scale * a * b. The first GpuMat to be element-wise multiplied. The second GpuMat to be element-wise multiplied. The element-wise multiplication of the two GpuMats The scale Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Computes the element-wise quotient of the two GpuMats (c = scale * a / b). The first GpuMat The second GpuMat The element-wise quotient of the two GpuMats The scale Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Computes the weighted sum of two arrays (dst = alpha*src1 + beta*src2 + gamma) The first source GpuMat The weight for the first source GpuMat The second source GpuMat The weight for the second source GpuMat The constant to be added The result Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Computes the element-wise absolute difference of two GpuMats (c = abs(a - b)). The first GpuMat The second GpuMat The result of the element-wise absolute difference. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the absolute value of each pixel in an image The source GpuMat, supports depth of Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the square of each pixel in an image The source GpuMat, supports depth of byte, UInt16, Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the square root of each pixel in an image The source GpuMat, supports depth of byte, UInt16, Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Transposes a matrix. Source matrix. 1-, 4-, 8-byte element sizes are supported for now. Destination matrix. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Compares elements of two GpuMats (c = a <cmpop> b). Supports CV_8UC4, CV_32FC1 types The first GpuMat The second GpuMat The result of the comparison. The type of comparison Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Resizes the image. The source image. Has to be GpuMat<Byte>.
If a stream is used, the GpuMat has to be either single channel or 4 channels. The destination image. The interpolation type. Supports INTER_NEAREST, INTER_LINEAR. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Scale factor along the horizontal axis. If it is zero, it is computed as: (double)dsize.width/src.cols Scale factor along the vertical axis. If it is zero, it is computed as: (double)dsize.height/src.rows Destination image size. If it is zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero. Copies each plane of a multi-channel GpuMat to a dedicated GpuMat The multi-channel gpuMat Pointer to an array of single channel GpuMat pointers Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Makes a multi-channel GpuMat out of several single-channel GpuMats Pointer to an array of single channel GpuMat pointers The multi-channel gpuMat Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Computes the exponent of each matrix element (b = exp(a)) The source GpuMat. Supports Byte, UInt16, Int16 and float type. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the power of each matrix element: dst(i,j) = pow(src(i,j), power) if src.type() is integer; dst(i,j) = pow(fabs(src(i,j)), power) otherwise. Supports all depths except CV_64F The source GpuMat The power The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the natural logarithm of the absolute value of each matrix element: b = log(abs(a)) The source GpuMat. Supports Byte, UInt16, Int16 and float type. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the magnitude of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the squared magnitude of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the angle (angle(i)) of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type If true, the output angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts Cartesian coordinates to polar The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type The destination GpuMat.
Supports only floating-point type If true, the output angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts polar coordinates to Cartesian The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type If true, the input angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Finds minimum and maximum element values and their positions. The extremums are searched over the whole GpuMat or, if mask is not IntPtr.Zero, in the specified GpuMat region. The source GpuMat, single-channel Pointer to the returned minimum value Pointer to the returned maximum value Pointer to the returned minimum location Pointer to the returned maximum location The optional mask that is used to select a subarray. Use null if not needed Finds global minimum and maximum matrix elements and returns their values with locations. Single-channel source image. The output min and max values The output min and max locations Optional mask to select a sub-matrix. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs the downsampling step of the Gaussian pyramid decomposition. The source CudaImage. The destination CudaImage, should have 2x smaller width and height than the source. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs the up-sampling step of the Gaussian pyramid decomposition. The source CudaImage. The destination image, should have 2x larger width and height than the source. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the mean value and standard deviation The GpuMat. Supports only CV_8UC1 type The mean value The standard deviation Computes the norm of the difference between two GpuMats The GpuMat. Supports only CV_8UC1 type If IntPtr.Zero, the norm operation is applied to the first GpuMat only. Otherwise, this is the second GpuMat, of type CV_8UC1 The norm type. Supports NORM_INF, NORM_L1, NORM_L2. The norm of the first GpuMat if the second one is IntPtr.Zero; otherwise the norm of the difference between the two GpuMats. Returns the norm of a matrix. Source matrix. Any matrices except 64F are supported. Norm type. NORM_L1, NORM_L2, and NORM_INF are supported for now. Optional operation mask; it must have the same size as src1 and CV_8UC1 type. The norm of a matrix Returns the norm of a matrix. Source matrix. Any matrices except 64F are supported. The GpuMat to store the result Norm type. NORM_L1, NORM_L2, and NORM_INF are supported for now. Optional operation mask; it must have the same size as src1 and CV_8UC1 type. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Returns the difference of two matrices. Source matrix. Any matrices except 64F are supported. Second source matrix (if any) with the same size and type as src1. The GpuMat where the result will be stored Norm type. NORM_L1, NORM_L2, and NORM_INF are supported for now. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Returns the sum of absolute values for matrix elements.
Returns the sum of absolute values for matrix elements. Source image of any depth except for CV_64F. Optional operation mask; it must have the same size as src and CV_8UC1 type. The sum of absolute values for matrix elements. Returns the sum of absolute values for matrix elements. Source image of any depth except for CV_64F. The GpuMat where the result will be stored. Optional operation mask; it must have the same size as src1 and CV_8UC1 type. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Returns the squared sum of matrix elements. Source image of any depth except for CV_64F. Optional operation mask; it must have the same size as src1 and CV_8UC1 type. The squared sum of matrix elements. Returns the squared sum of matrix elements. Source image of any depth except for CV_64F. The GpuMat where the result will be stored Optional operation mask; it must have the same size as src1 and CV_8UC1 type. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Counts non-zero array elements Single-channel source image. The number of non-zero GpuMat elements Counts non-zero array elements Single-channel source image. A GpuMat to hold the result Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Normalizes the norm or value range of an array. Input array. Output array of the same size as src. Norm value to normalize to or the lower range boundary in case of the range normalization. Upper range boundary in case of the range normalization; it is not used for the norm normalization. Normalization type (NORM_MINMAX, NORM_L2, NORM_L1 or NORM_INF). Optional depth of the output array. Optional operation mask. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Reduces GpuMat to a vector by treating the GpuMat rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The input GpuMat Destination vector. Its size and type are defined by the dim and dtype parameters Dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row. 1 means that the matrix is reduced to a single column. The reduction operation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Optional depth of the output array. Flips the GpuMat<Byte> in one of 3 different ways (row and column indices are 0-based). The source GpuMat. Supports 1, 3 and 4 channel GpuMat with Byte, UInt16, int or float depth Destination GpuMat. The same size and type as the source Specifies how to flip the GpuMat. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise exclusive or of two GpuMats: dst(I)=src1(I)^src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
Calculates per-element bit-wise logical or of two GpuMats: dst(I)=src1(I) | src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise logical and of two GpuMats: dst(I)=src1(I) & src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise logical not dst(I)=~src(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes per-element minimum of two GpuMats (dst = min(src1, src2)) The first GpuMat The second GpuMat The result GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes per-element maximum of two GpuMats (dst = max(src1, src2)) The first GpuMat The second GpuMat The result GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Applies fixed-level thresholding to single-channel array. The function is typically used to get bi-level (binary) image out of grayscale image or for removing noise, i.e. filtering out pixels with too small or too large values. There are several types of thresholding the function supports that are determined by thresholdType Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Threshold value Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Thresholding type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T The first source array. The second source array. The scalar The third source array (shift). Can be IntPtr.Zero, if there is no shift. The scalar The destination array. The gemm operation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
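A short sketch of the fixed-level thresholding entry above. The CudaInvoke.Threshold parameter order follows this documentation's parameter list and is an assumption; verify it against your Emgu CV version.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.CvEnum;

public static class GpuThresholdExample
{
    public static void Binarize(GpuMat gray, GpuMat binary)
    {
        // Pixels above 127 become 255, the rest 0 (binary thresholding).
        CudaInvoke.Threshold(gray, binary, 127, 255, ThresholdType.Binary, null);
    }
}
```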
Warps the image using affine transformation The source GpuMat The destination GpuMat The 2x3 transformation matrix (pointer to CvArr) Supports NN, LINEAR, CUBIC The border mode, use BORDER_TYPE.CONSTANT for default. The border value, use new MCvScalar() for default. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). The size of the destination image Warps the image using perspective transformation The source GpuMat The destination GpuMat The 3x3 transformation matrix (pointer to CvArr) Supports NN, LINEAR, CUBIC The border mode, use BORDER_TYPE.CONSTANT for default. The border value, use new MCvScalar() for default. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). The size of the destination image DST[x,y] = SRC[xmap[x,y],ymap[x,y]] with bilinear interpolation. The source GpuMat. Supports CV_8UC1, CV_8UC3 source types. The destination GpuMat. Supports CV_8UC1, CV_8UC3 source types. The xmap. Supports CV_32FC1 map type. The ymap. Supports CV_32FC1 map type. Interpolation type. Border mode. Use BORDER_CONSTANT for default. The value of the border. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Rotates an image around the origin (0,0) and then shifts it. Source image. Supports 1, 3 or 4 channel images with Byte, UInt16 or float depth Destination image with the same type as src. Must be pre-allocated Angle of rotation in degrees Shift along the horizontal axis Shift along the vertical axis The size of the destination image Interpolation method. Only INTER_NEAREST, INTER_LINEAR, and INTER_CUBIC are supported. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Copies a 2D array to a larger destination array and pads borders with the given constant. Source image. Destination image with the same type as src. The size is Size(src.cols+left+right, src.rows+top+bottom). Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Border Type Border value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the integral image and integral for the squared image The source GpuMat, supports only CV_8UC1 source type The sum GpuMat, supports only CV_32S source type, but will contain unsigned int values Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes squared integral image The source GpuMat, supports only CV_8UC1 source type The sqsum GpuMat, supports only CV_32F source type. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
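A hedged sketch of the affine warp described above: rotate 30 degrees about the image center. The 2x3 matrix is built on the CPU with CvInvoke.GetRotationMatrix2D; the CudaInvoke.WarpAffine parameter order is assumed from the entry above.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class GpuWarpExample
{
    public static void Rotate30(GpuMat src, GpuMat dst)
    {
        Size size = src.Size;
        using (Mat rotation = new Mat())
        {
            // 2x3 affine matrix computed on the CPU, applied on the GPU.
            CvInvoke.GetRotationMatrix2D(
                new PointF(size.Width / 2f, size.Height / 2f), 30, 1.0, rotation);
            CudaInvoke.WarpAffine(
                src, dst, rotation, size,
                Inter.Linear, BorderType.Constant, new MCvScalar(), null);
        }
    }
}
```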
Performs a forward or inverse discrete Fourier transform (1D or 2D) of floating point matrix. Param dft_size is the size of the DFT transform. If the source matrix is not continuous, then an additional copy will be done, so to avoid copying ensure the source matrix is continuous. If you want to use a preallocated output, ensure it is continuous too, otherwise it will be reallocated. Being implemented via CUFFT, the real-to-complex transform result contains only non-redundant values in CUFFT's format. The result as a full complex matrix for such kind of transform cannot be retrieved. For the complex-to-real transform it is assumed that the source matrix is packed in CUFFT's format. The source GpuMat The resulting GpuMat of the DFT, must be pre-allocated and continuous. If single channel, the result is real. If two channel, the result is complex Size of a discrete Fourier transform. DFT flags Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs a per-element multiplication of two Fourier spectrums and scales the result. First spectrum. Second spectrum with the same size and type. Destination spectrum. Mock parameter used for CPU/CUDA interfaces similarity, simply add a 0 value. Scale constant. Optional flag to specify if the second spectrum needs to be conjugated before the multiplication. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs a per-element multiplication of two Fourier spectrums. First spectrum. Second spectrum with the same size and type. Destination spectrum. Mock parameter used for CPU/CUDA interfaces similarity. Optional flag to specify if the second spectrum needs to be conjugated before the multiplication. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the GpuMat Pointer to the GpuMat Create an empty GpuMat Pointer to an empty GpuMat Convert a CvArr to a GpuMat Pointer to a CvArr Pointer to the GpuMat Get the GpuMat size: width == number of columns, height == number of rows The GpuMat The size of the matrix Get the GpuMat type The GpuMat The GpuMat type Create a GpuMat of the specified size Pointer to the native cv::Mat The number of rows (height) The number of columns (width) The type of GpuMat Pointer to the GpuMat Create a GpuMat of the specified size. The allocated data is continuous within this GpuMat. The number of rows (height) The number of columns (width) The type of GpuMat Pointer to the GpuMat Performs blocking upload of data to GpuMat. The destination gpuMat The CvArray to be uploaded to GPU Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Downloads data from device to host memory. Blocking calls. The source GpuMat The CvArray where data will be downloaded to Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Copy the source GpuMat to destination GpuMat, using an optional mask. The GpuMat to be copied from The GpuMat to be copied to The optional mask, use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking).
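A minimal round trip through GPU memory using the upload/download calls documented above (blocking when no Stream is supplied).

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuRoundTripExample
{
    public static Mat RoundTrip(Mat host)
    {
        using (GpuMat device = new GpuMat())
        {
            device.Upload(host);      // host -> device
            Mat back = new Mat();
            device.Download(back);    // device -> host
            return back;              // should equal the input
        }
    }
}
```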
This function has several different purposes and thus has several synonyms. It copies one GpuMat to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel GpuMats are processed independently. The type conversion is done with rounding and saturation, that is if a result of scaling + conversion can not be represented exactly by a value of destination GpuMat element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate convertTo synonym. Source GpuMat Destination GpuMat The depth type of the destination GpuMat Scale factor Value added to the scaled source GpuMat elements Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Changes shape of GpuMat without copying data. The GpuMat to be reshaped. The result GpuMat. New number of channels. newCn = 0 means that the number of channels remains unchanged. New number of rows. newRows = 0 means that the number of rows remains unchanged unless it needs to be changed according to newCn value. A GpuMat of different shape Converts image from one color space to another The source GpuMat The destination GpuMat The color conversion code Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts an image from Bayer pattern to RGB or grayscale. Source image (8-bit or 16-bit single channel). Destination image. Color space conversion code (see the description below). Number of channels in the destination image. If the parameter is 0, the number of the channels is derived automatically from src and the code. Stream for the asynchronous version. Swap channels. The image where the channels will be swapped Integer array describing how channel values are permuted. The n-th entry of the array contains the number of the channel that is stored in the n-th channel of the output image. E.g. Given an RGBA image, aDstOrder = [3,2,1,0] converts this to ABGR channel order. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Routines for correcting image color gamma Source image (3- or 4-channel 8 bit). Destination image. True for forward gamma correction or false for inverse gamma correction. Stream for the asynchronous version. Composites two images using alpha opacity values contained in each image. First image. Supports CV_8UC4, CV_16UC4, CV_32SC4 and CV_32FC4 types. Second image. Must have the same size and the same type as img1. Destination image Flag specifying the alpha-blending operation Stream for the asynchronous version Calculates histogram for one channel 8-bit image. Source image with CV_8UC1 type. Destination histogram with one row, 256 columns, and the CV_32SC1 type. Stream for the asynchronous version. Equalizes the histogram of a grayscale image. Source image with CV_8UC1 type. Destination image. Stream for the asynchronous version. Calculates histogram with evenly distributed bins for single channel source. The source GpuMat. Supports CV_8UC1, CV_16UC1 and CV_16SC1 types. Histogram with evenly distributed bins. A GpuMat<int> type. The size of histogram (number of levels) The lower level The upper level Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Histogram with evenly distributed bins Calculates a histogram with bins determined by the levels array Source image. CV_8U, CV_16U, or CV_16S depth and 1 or 4 channels are supported. For a four-channel image, all channels are processed separately. Destination histogram with one row, (levels.cols-1) columns, and the CV_32SC1 type. Number of levels in the histogram. Stream for the asynchronous version.
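A short sketch of GPU color conversion per the CvtColor entry above. The dcn parameter set to 0 lets the destination channel count be derived from the conversion code; the exact CudaInvoke.CvtColor overload is an assumption.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.CvEnum;

public static class GpuColorExample
{
    public static void BgrToGray(GpuMat bgr, GpuMat gray)
    {
        // dcn = 0: channel count derived from src and the conversion code.
        CudaInvoke.CvtColor(bgr, gray, ColorConversion.Bgr2Gray, 0, null);
    }
}
```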
Performs linear blending of two images. First image. Supports only CV_8U and CV_32F depth. Second image. Must have the same size and the same type as img1. Weights for first image. Must have the same size as img1. Supports only CV_32F type. Weights for second image. Must have the same size as img2. Supports only CV_32F type. Destination image. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Applies bilateral filter to the image. The source image The destination image; should have the same size and the same type as src The diameter of each pixel neighborhood, that is used during filtering. Filter sigma in the color space. Larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color Filter sigma in the coordinate space. Larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace, otherwise d is proportional to sigmaSpace. Pixel extrapolation method, use DEFAULT for default Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift filtering for each point of the source image. It maps each point of the source image into another point, and as the result we have new color and new position of each point. Source CudaImage. Only CV_8UC4 images are supported for now. Destination CudaImage, containing color of mapped points. Will have the same size and type as src. Spatial window radius. Color window radius. Termination criteria. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift procedure and stores information about processed points (i.e. their colors and positions) into two images. Source CudaImage. Only CV_8UC4 images are supported for now. Destination CudaImage, containing color of mapped points. Will have the same size and type as src. Destination CudaImage, containing position of mapped points. Will have the same size as src and CV_16SC2 type. Spatial window radius. Color window radius. Termination criteria. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift segmentation of the source image and eliminates small segments. Source CudaImage. Only CV_8UC4 images are supported for now. Segmented Image. Will have the same size and type as src. Note that this is an Image type and not CudaImage type Spatial window radius. Color window radius. Minimum segment size. Smaller segments will be merged. Termination criteria. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results to result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must be not greater than the source image and the same data type as the image A map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be W-w+1xH-h+1. Pointer to cv::gpu::TemplateMatching Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking).
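A hedged sketch of GPU mean-shift filtering as documented above. The wrapper is assumed to be CudaInvoke.MeanShiftFiltering and the source must be an 8UC4 image per the entry; the signature should be verified.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.Structure;

public static class MeanShiftExample
{
    public static void Smooth(GpuMat bgra, GpuMat result)
    {
        // Stop after 5 iterations or when movement falls below 1.0.
        MCvTermCriteria criteria = new MCvTermCriteria(5, 1.0);
        CudaInvoke.MeanShiftFiltering(bgra, result, 10, 20, criteria, null);
    }
}
```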
Calculates a dense optical flow. The dense optical flow object First input image. Second input image of the same size and the same type as the first. Computed flow image that has the same size as I0 and type CV_32FC2. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates a sparse optical flow. The sparse optical flow First input image. Second input image of the same size and the same type as the first. Vector of 2D points for which the flow needs to be found. Output vector of 2D points containing the calculated new positions of input features in the second image. Output status vector. Each element of the vector is set to 1 if the flow for the corresponding features has been found. Otherwise, it is set to 0. Optional output vector that contains error response for each point (inverse confidence). Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). The Cuda device information Query the information of the gpu device that is currently in use. Query the information of the cuda device with the specific id. The device id The id of the device The name of the device The compute capability The number of streaming multiprocessors Get the amount of free memory at the moment Get the amount of total memory Indicates if the device has the specific feature Checks whether the Cuda module can be run on the given device GPU feature Cuda compute 1.0 Cuda compute 1.1 Cuda compute 1.2 Cuda compute 1.3 Cuda compute 2.0 Cuda compute 2.1 Global Atomic Shared Atomic Native double Release the unmanaged resource related to the GpuDevice A CudaImage is very similar to the Emgu.CV.Image except that it is being used for GPU processing Color type of this image (either Gray, Bgr, Bgra, Hsv, Hls, Lab, Luv, Xyz, Ycc, Rgb or Rgba) Depth of this image (either Byte, SByte, Single, double, UInt16, Int16 or Int32) Create an empty CudaImage Create the CudaImage from the unmanaged pointer. The unmanaged pointer to the GpuMat. It is the user's responsibility that the Color type and depth match between the managed class and unmanaged pointer. If true, upon object disposal, we will call the release function on the unmanaged pointer Create a GPU image from a regular image The image to be converted to GPU image Create a CudaImage of the specific size The number of rows (height) The number of columns (width) Indicates if the data should be continuous Create a CudaImage of the specific size The number of rows (height) The number of columns (width) Create a CudaImage of the specific size The size of the image Create a CudaImage from the specific region of another CudaImage. The data is shared between the two CudaImages The CudaImage where the region is extracted from The column range. Use MCvSlice.WholeSeq for all columns. The row range. Use MCvSlice.WholeSeq for all rows. Convert the current CudaImage to a regular Image. A regular image Convert the current CudaImage to the specific color and depth The type of color to be converted to The type of pixel depth to be converted to CudaImage of the specific color and depth Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The color type of the source image The color depth of the source image The sourceImage Create a clone of this CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). A clone of this CudaImage
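A minimal sketch of moving a regular Image to the GPU as a CudaImage and converting color there, per the CudaImage entries above. The Convert and ToImage method names are assumed from those entries.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.Structure;

public static class CudaImageExample
{
    public static Image<Gray, byte> ToGrayOnGpu(Image<Bgr, byte> input)
    {
        using (CudaImage<Bgr, byte> gpuImage = new CudaImage<Bgr, byte>(input))
        using (CudaImage<Gray, byte> gpuGray = gpuImage.Convert<Gray, byte>())
        {
            return gpuGray.ToImage(); // download back to host memory
        }
    }
}
```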
Resize the CudaImage. The calling GpuMat should be a GpuMat<Byte>. If stream is specified, it has to be either 1 or 4 channels. The new size The interpolation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). A CudaImage of the new size Returns a CudaImage corresponding to a specified rectangle of the current CudaImage. The data is shared with the current matrix. In other words, it allows the user to treat a rectangular part of input array as a stand-alone array. Zero-based coordinates of the rectangle of interest. A CudaImage that represents the region of the current CudaImage. The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the ith row of the CudaImage. The data is shared with the current Image. The row to be extracted The ith row of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the [start, end) rows of the CudaImage. The data is shared with the current Image. The inclusive starting row to be extracted The exclusive ending row to be extracted The [start, end) rows of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the ith column of the CudaImage. The data is shared with the current Image. The column to be extracted The ith column of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the [start, end) columns of the CudaImage. The data is shared with the current Image. The inclusive starting column to be extracted The exclusive ending column to be extracted The [start, end) columns of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Convert the current CudaImage to its equivalent Bitmap representation Gpu look up table Create the look up table It should be either a 1 or 3 channel matrix of 1x256 Transform the image using the lookup table The image to be transformed The transformation result Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged memory associated with this look up table A GpuMat, use the generic version if possible. The non generic version is good for use as buffer in stream calls. Create an empty GpuMat Create a GpuMat of the specified size The number of rows (height) The number of columns (width) The number of channels The type of depth Indicates if the data should be continuous Allocates new GpuMat data unless the GpuMat already has specified size and type The number of rows The number of cols The depth type The number of channels. Create a GpuMat from the specific pointer Pointer to the unmanaged gpuMat True if we need to call the Release function during object disposal Create a GpuMat from a CvArray of the same depth type The CvArray to be converted to GpuMat Create a GpuMat from the specific region of another GpuMat. The data is shared between the two GpuMats The matrix where the region is extracted from The column range. The row range.
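A hedged sketch of the GPU look-up-table transform described above, assuming the wrapper class is CudaLookUpTable; the table here inverts 8-bit intensities.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuLutExample
{
    public static void Invert(GpuMat image8U, GpuMat inverted)
    {
        // Build a 1x256 single-channel table that maps v -> 255 - v.
        using (Matrix<byte> lut = new Matrix<byte>(1, 256))
        {
            for (int i = 0; i < 256; i++) lut.Data[0, i] = (byte)(255 - i);
            using (CudaLookUpTable gpuLut = new CudaLookUpTable(lut))
                gpuLut.Transform(image8U, inverted, null); // null = blocking call
        }
    }
}
```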
Release the unmanaged memory associated with this GpuMat Get the GpuMat size: width == number of columns, height == number of rows Get the type of the GpuMat Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Upload data to GpuMat The CvArray to be uploaded to GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Downloads data from device to host memory. The destination CvArray where the GpuMat data will be downloaded to. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Convert the GpuMat to Mat The Mat that contains the same data as this GpuMat Copies scalar value to every selected element of the destination GpuMat: arr(I)=value if mask(I)!=0 Fill value Operation mask, 8-bit single channel GpuMat; specifies elements of destination GpuMat to be changed. Can be IntPtr.Zero if not used Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Copy the source GpuMat to destination GpuMat, using an optional mask. The output array to be copied to The optional mask, use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). This function has several different purposes and thus has several synonyms. It copies one GpuMat to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel GpuMats are processed independently. The type conversion is done with rounding and saturation, that is if a result of scaling + conversion can not be represented exactly by a value of destination GpuMat element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate convertTo synonym. Destination GpuMat Result type Scale factor Value added to the scaled source GpuMat elements Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Changes shape of GpuMat without copying data. New number of channels. newCn = 0 means that the number of channels remains unchanged. New number of rows. newRows = 0 means that the number of rows remains unchanged unless it needs to be changed according to newCn value. A GpuMat of different shape Returns a GpuMat corresponding to the ith row of the GpuMat. The data is shared with the current GpuMat. The row to be extracted The ith row of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the [start, end) rows of the GpuMat. The data is shared with the current GpuMat. The inclusive starting row to be extracted The exclusive ending row to be extracted The [start, end) rows of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the ith column of the GpuMat. The data is shared with the current GpuMat. The column to be extracted The ith column of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the [start, end) columns of the GpuMat. The data is shared with the current GpuMat.
The inclusive starting column to be extracted The exclusive ending column to be extracted The [start, end) columns of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns true if the two GpuMats are equal The other GpuMat to be compared with True if the two GpuMats are equal Makes multi-channel array out of several single-channel arrays An array of single channel GpuMat where each item in the array represents a single channel of the GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Split current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of single channel GpuMat where each item in the array represents a single channel of the original GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Split current GpuMat into an array of single channel GpuMat where each element in the array represents a single channel of the original GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). An array of single channel GpuMat where each element in the array represents a single channel of the original GpuMat Get the Bitmap from this GpuMat Returns the min / max location and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Save the GpuMat to a file The file name Make a clone of the GpuMat A clone of the GPU Mat True if the data is continuous Depth type True if the matrix is empty Number of channels Similar to CvArray but uses GPU for processing The type of element in the matrix Create a GpuMat from the unmanaged pointer The unmanaged pointer to the GpuMat If true, will call the release function on the unmanaged pointer Create an empty GpuMat Create a GpuMat from a CvArray of the same depth type The CvArray to be converted to GpuMat Create a GpuMat of the specified size The number of rows (height) The number of columns (width) The number of channels Indicates if the data should be continuous Create a GpuMat of the specified size The size of the GpuMat The number of channels Convert this GpuMat to a Matrix The matrix that contains the same values as this GpuMat Returns a GpuMat corresponding to a specified rectangle of the current GpuMat. The data is shared with the current matrix. In other words, it allows the user to treat a rectangular part of input array as a stand-alone array. Zero-based coordinates of the rectangle of interest. A GpuMat that represents the region of the current matrix. The parent GpuMat should never be released before the returned GpuMat that represents the subregion Encapsulates a Cuda Stream. Provides an interface for async copying. Passed to each function that supports async kernel execution. Reference counting is enabled Create a new Cuda Stream Wait for the completion Check if the stream is completed Release the stream Gives information about what GPU archs this OpenCV GPU module was compiled for Check if the GPU module is built with the specific feature set. The feature set to be checked. True if the GPU module is built with the specific feature set. Check if the GPU module is targeted for the specific device version The major version The minor version True if the GPU module is targeted for the specific device version.
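A short sketch of asynchronous execution with a Cuda Stream, as the Stream entries above describe: queue work against the stream, then wait for completion.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class CudaStreamExample
{
    public static void AsyncExp(Mat host, Mat result)
    {
        using (Stream stream = new Stream())
        using (GpuMat src = new GpuMat())
        using (GpuMat dst = new GpuMat())
        {
            src.Upload(host);                    // blocking upload
            CudaInvoke.Exp(src, dst, stream);    // queued, returns immediately
            stream.WaitForCompletion();          // block until the kernel finishes
            dst.Download(result);
        }
    }
}
```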
Check if the GPU module is targeted for the specific PTX version The major version The minor version True if the GPU module is targeted for the specific PTX version. Check if the GPU module is targeted for the specific BIN version The major version The minor version True if the GPU module is targeted for the specific BIN version. Check if the GPU module is targeted for equal or less PTX version The major version The minor version True if the GPU module is targeted for equal or less PTX version. Check if the GPU module is targeted for equal or greater device version The major version The minor version True if the GPU module is targeted for equal or greater device version. Check if the GPU module is targeted for equal or greater PTX version The major version The minor version True if the GPU module is targeted for equal or greater PTX version. Check if the GPU module is targeted for equal or greater BIN version The major version The minor version True if the GPU module is targeted for equal or greater BIN version. Wrapped class of the C++ standard vector of GpuMat. Create an empty standard vector of GpuMat Create a standard vector of GpuMat of the specific size The size of the vector Create a standard vector of GpuMat with the initial values The initial values Get the size of the vector Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Get the item in the specific index The index The item in the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes. Gaussian Mixture-based Background/Foreground Segmentation Algorithm. Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create a Gaussian Mixture-based Background/Foreground Segmentation model Updates the background model Next video frame. The learning rate, use -1.0f for default value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). The foregroundMask Release all the unmanaged resources associated with this object Gaussian Mixture-based Background/Foreground Segmentation Algorithm. Pointer to the unmanaged Algorithm object Pointer to the unmanaged BackgroundSubtractor object Create a Gaussian Mixture-based Background/Foreground Segmentation model Updates the background model Next video frame. The output foreground mask The learning rate, use -1.0f for default value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged resources associated with this object Box filter Create a Box filter. Size of the kernel The center of the kernel. Use (-1, -1) for the default kernel center. The border type. The border value. The source image depth type The number of channels in the source image The destination image depth type The number of channels in the destination image
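A hedged sketch of the Gaussian-mixture background subtractor entries above; the concrete class is assumed to be CudaBackgroundSubtractorMOG2 and the Apply signature follows the parameter list here.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuBackgroundExample
{
    public static void Segment(GpuMat frame, GpuMat foregroundMask,
                               CudaBackgroundSubtractorMOG2 subtractor)
    {
        // learningRate = -1 selects the default, automatically chosen rate.
        subtractor.Apply(frame, foregroundMask, -1, null);
    }
}
```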
BoxMax filter Create a BoxMax filter. Size of the kernel The center of the kernel. Use (-1, -1) for the default kernel center. The border type. The border value. The depth type of the source image The number of channels of the source image BoxMin filter Create a BoxMin filter. Size of the kernel The center of the kernel. Use (-1, -1) for the default kernel center. The border type. The border value. The depth of the source image The number of channels in the source image A vertical 1D box filter. Creates a vertical 1D box filter. Input image depth. Input image channel. Output image depth. Output image channel. Kernel size. Anchor point. The default value (-1) means that the anchor is at the kernel center. Pixel extrapolation method. Default border value. A generalized Deriv operator. Creates a generalized Deriv operator. Source image depth. Source image channels. Destination array depth. Destination array channels. Derivative order with respect to x. Derivative order with respect to y. Aperture size. Flag indicating whether to normalize (scale down) the filter coefficients or not. Optional scale factor for the computed derivative values. By default, no scaling is applied. Pixel extrapolation method in the vertical direction. Pixel extrapolation method in the horizontal direction. Base Cuda filter class Release all the unmanaged memory associated with this gpu filter Apply the cuda filter The source CudaImage where the filter will be applied to The destination CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Gaussian filter Create a Gaussian filter. The size of the kernel This parameter may specify Gaussian sigma (standard deviation). If it is zero, it is calculated from the kernel size. In case of non-square Gaussian kernel the parameter may be used to specify a different (from param3) sigma in the vertical direction. Use 0 for default The row border type. The column border type. The depth type of the source image The number of channels in the source image The depth type of the destination image The number of channels in the destination image Laplacian filter Create a Laplacian filter. Either 1 or 3 Optional scale. Use 1.0 for default The border type. The border value. The depth type of the source image The number of channels in the source image The depth type of the destination image The number of channels in the destination image Applies arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image Create a Gpu LinearFilter Convolution kernel, single-channel floating point matrix (e.g. Emgu.CV.Matrix). If you want to apply different kernels to different channels, split the gpu image into separate color planes and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center Border type. Use REFLECT101 as default. The border value The depth type of the source image The number of channels in the source image The depth type of the dest image The number of channels in the dest image
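A hedged sketch of creating and applying a Cuda Gaussian filter; the CudaGaussianFilter constructor arguments follow the parameter list above, but the exact overload should be checked against your Emgu CV release.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.CvEnum;

public static class GpuGaussianExample
{
    public static void Blur(GpuMat graySrc, GpuMat grayDst)
    {
        using (CudaGaussianFilter gaussian = new CudaGaussianFilter(
            DepthType.Cv8U, 1,      // source depth and channels
            DepthType.Cv8U, 1,      // destination depth and channels
            new Size(5, 5), 1.5))   // kernel size and sigma
        {
            gaussian.Apply(graySrc, grayDst, null);
        }
    }
}
```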
Median filtering for each point of the source image. Create a median filter Type of the source image. Only 8U images are supported for now. Type of the source image. Only single channel images are supported for now. Size of the kernel used for the filtering. Uses a (windowSize x windowSize) filter. Specifies the parallel granularity of the workload. This parameter should be used by GPU experts when optimizing performance. Morphology filter Create a Morphology filter. Type of morphological operation 2D 8-bit structuring element for the morphological operation. Anchor position within the structuring element. Negative values mean that the anchor is at the center. Number of times erosion and dilation to be applied. The depth type of the source image The number of channels in the source image A horizontal 1D box filter. Creates a horizontal 1D box filter. Input image depth. Only 8U type is supported for now. Input image channel. Only single channel type is supported for now. Output image depth. Only 32F type is supported for now. Output image channel. Only single channel type is supported for now. Kernel size. Anchor point. The default value (-1) means that the anchor is at the kernel center. Pixel extrapolation method. Default border value. A vertical or horizontal Scharr operator. Creates a vertical or horizontal Scharr operator. Source image depth. Source image channels. Destination array depth. Destination array channels. Order of the derivative x. Order of the derivative y. Optional scale factor for the computed derivative values. By default, no scaling is applied. Pixel extrapolation method in the vertical direction. For details, see borderInterpolate. Pixel extrapolation method in the horizontal direction. SeparableLinearFilter Create a SeparableLinearFilter Source array depth Source array channels Destination array depth Destination array channels Horizontal filter coefficients. Support kernels with size <= 32. Vertical filter coefficients. Support kernels with size <= 32. Anchor position within the kernel. Negative values mean that anchor is positioned at the aperture center. Pixel extrapolation method in the vertical direction Pixel extrapolation method in the horizontal direction Sobel filter Create a Sobel filter. The depth of the source image The number of channels of the source image The depth of the destination image The number of channels of the destination image Order of the derivative x Order of the derivative y Size of the extended Sobel kernel Optional scale, use 1 for default. The row border type. The column border type. Cascade Classifier for object detection using Cuda Canny edge detector using Cuda. The first threshold, used for edge linking The second threshold, used to find initial segments of strong edges Aperture parameter for Sobel operator, use 3 for default Use false for default Finds the edges on the input and marks them in the output image edges using the Canny algorithm. Input image Image to store the edges found by the function Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged memory associated with this Canny edge detector. Base CornernessCriteria class Release all the unmanaged memory associated with this gpu filter Apply the cuda filter The source CudaImage where the filter will be applied to The destination CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Cuda implementation of GoodFeaturesToTrackDetector Create the Cuda implementation of GoodFeaturesToTrackDetector Find the good features to track Release all the unmanaged memory associated with this detector
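A hedged sketch of the Cuda Canny detector entries above; the wrapper class name CudaCannyEdgeDetector and its Detect method are assumptions consistent with these entries.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuCannyExample
{
    public static void Edges(GpuMat gray, GpuMat edges)
    {
        // Low/high thresholds 50/150, Sobel aperture 3, L1 gradient norm.
        using (CudaCannyEdgeDetector canny = new CudaCannyEdgeDetector(50, 150, 3, false))
        {
            canny.Detect(gray, edges, null);
        }
    }
}
```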
Runs the Harris edge detector on image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates 2x2 gradient covariation matrix M over block_size x block_size neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image. Create a Cuda Harris Corner detector The depth of the source image The number of channels in the source image Neighborhood size Harris detector free parameter. Border type, use REFLECT101 for default Base class for circles detector algorithm. Create hough circles detector Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half as big width and height. Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed. The higher threshold of the two passed to the Canny edge detector (the lower one is half as large). The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Minimum circle radius. Maximum circle radius. Maximum number of output circles. Finds circles in a grayscale image using the Hough transform. 8-bit, single-channel grayscale input image. Output vector of found circles. Each vector is encoded as a 3-element floating-point vector. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Finds circles in a grayscale image using the Hough transform. 8-bit, single-channel grayscale input image. Circles detected Release the unmanaged memory associated with this circle detector. Base class for lines detector algorithm. Create a hough lines detector Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold). Performs lines sort by votes. Maximum number of output lines. Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image Output vector of lines. Each line is represented by a two-element vector. The first element is the distance from the coordinate origin (top-left corner of the image). The second element is the line rotation angle in radians. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged memory associated to this line detector. Base class for line segments detector algorithm. Create a hough segment detector Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. Maximum number of output lines. Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image Output vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the ending points of each detected line segment. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged memory associated with this segment detector
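A hedged sketch of GPU probabilistic Hough segment detection per the entries above; the class name CudaHoughSegmentDetector and its Detect signature are assumptions.

```csharp
using System;
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuHoughExample
{
    public static void Segments(GpuMat binaryEdges, GpuMat segments)
    {
        // rho = 1 px, theta = 1 degree, min length 50, max gap 5, up to 4096 lines.
        using (CudaHoughSegmentDetector detector = new CudaHoughSegmentDetector(
            1f, (float)(Math.PI / 180), 50, 5, 4096))
        {
            detector.Detect(binaryEdges, segments);
        }
    }
}
```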
Bayer Demosaicing (Malvar, He, and Cutler) BayerBG2BGR_MHT BayerGB2BGR_MHT BayerRG2BGR_MHT BayerGR2BGR_MHT BayerBG2RGB_MHT BayerGB2RGB_MHT BayerRG2RGB_MHT BayerGR2RGB_MHT BayerBG2GRAY_MHT BayerGB2GRAY_MHT BayerRG2GRAY_MHT BayerGR2GRAY_MHT Alpha composite types Over In Out Atop Xor Plus Over Premul In Premul Out Premul Atop Premul Xor Premul Plus Premul Premul Implementation for the minimum eigen value of a 2x2 derivative covariation matrix (the cornerness criteria). Creates implementation for the minimum eigen value of a 2x2 derivative covariation matrix (the cornerness criteria). Input source depth. Only 8U and 32F are supported for now. Input source type. Only single channel is supported for now. Neighborhood size. Aperture parameter for the Sobel operator. Pixel extrapolation method. Only BORDER_REFLECT101 and BORDER_REPLICATE are supported for now. Cuda template matching filter. Create a Cuda template matching filter Specifies the way the template must be compared with image regions The block size The depth type of the image that will be used in the template matching The number of channels of the image that will be used in the template matching This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results to result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must be not greater than the source image and the same data type as the image A map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be W-w+1xH-h+1. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the buffer Brox optical flow Create the Brox optical flow solver Flow smoothness Gradient constancy importance Pyramid scale factor Number of lagged non-linearity iterations (inner loop) Number of warping iterations (number of pyramid levels) Number of linear system solver iterations Release all the unmanaged memory associated with this optical flow solver. Pointer to the unmanaged DenseOpticalFlow object Pointer to the unmanaged Algorithm object PyrLK optical flow Create the PyrLK optical flow solver Window size. Use 21x21 for default The maximum number of pyramid levels. The number of iterations. Whether or not to use the initial flow in the input matrix. Release all the unmanaged memory associated with this optical flow solver. Pointer to the unmanaged DenseOpticalFlow object Pointer to the unmanaged Algorithm object Farneback optical flow Release all the unmanaged memory associated with this optical flow solver. Pointer to the unmanaged DenseOpticalFlow object Pointer to the unmanaged Algorithm object DualTvl1 optical flow Initializes a new instance of the class. Release all the unmanaged memory associated with this optical flow solver. Pointer to the DenseOpticalFlow object Pointer to the algorithm object Sparse PyrLK optical flow Create the PyrLK optical flow solver Window size. Use 21x21 for default The maximum number of pyramid levels. The number of iterations. Whether or not to use the initial flow in the input matrix. Release all the unmanaged memory associated with this optical flow solver.
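A hedged sketch of running a Cuda dense optical flow solver. The CudaFarnebackOpticalFlow class and the Calc extension on the dense optical flow interface are assumptions consistent with the dense-flow entries above.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuFlowExample
{
    public static void DenseFlow(GpuMat prevGray, GpuMat nextGray, GpuMat flow)
    {
        using (CudaFarnebackOpticalFlow farneback = new CudaFarnebackOpticalFlow())
        {
            // flow receives a CV_32FC2 field with the same size as the inputs.
            farneback.Calc(prevGray, nextGray, flow, null);
        }
    }
}
```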
Pointer to the unmanaged SparseOpticalFlow object Pointer to the unmanaged Algorithm object Cuda Dense Optical flow Pointer to cv::cuda::denseOpticalFlow Interface to provide access to the cuda::SparseOpticalFlow class. Pointer to the native cuda::sparseOpticalFlow object. Descriptor matcher Pointer to the native cv::Algorithm Find the k-nearest match An n x m matrix of descriptors to be queried for nearest neighbors. n is the number of descriptors and m is the size of the descriptor Number of nearest neighbors to search for Can be null if not needed. An n x 1 matrix. If 0, the query descriptor in the corresponding row will be ignored. Matches. Each matches[i] is k or fewer matches for the same query descriptor. Parameter used when the mask (or masks) is not empty. If compactResult is false, the matches vector has the same size as queryDescriptors rows. If compactResult is true, the matches vector does not contain matches for fully masked-out query descriptors. Train set of descriptors. This set is not added to the train descriptors collection stored in the class object. Add the model descriptors The model descriptors Release all the unmanaged memory associated with this matcher A Brute force matcher using Cuda Create a CudaBruteForceMatcher using the specific distance type The distance type A FAST detector using Cuda Create a fast detector with the specific parameters Threshold on difference between intensity of center pixel and pixels on circle around this pixel. Use 10 for default. Specify if non-maximum suppression should be used. The maximum number of keypoints to be extracted. The detector type Release the unmanaged resource associated with the Detector An ORB detector using Cuda Create an ORBDetector using the specific values The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. The level at which the image is given. If 1, that means we will also look at the image scaleFactor times bigger. How far from the boundary the points should be. How many random points are used to produce each cell of the descriptor (2, 3, 4 ...). Type of the score to use. Patch size. Blur for descriptor Fast threshold Release the unmanaged resource associated with the Detector The feature 2D base class Get the pointer to the Feature2DAsync object The pointer to the Feature2DAsync object Class that contains extension methods for Feature2DAsync Detect keypoints in an image and compute the descriptors on the image from the keypoint locations. The Feature2DAsync object The image The optional mask, can be null if not needed The detected keypoints will be stored in this vector The descriptors from the keypoints If true, the method will skip the detection phase and will compute descriptors for the provided keypoints Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Detect the features in the image The Feature2DAsync object The result vector of keypoints The image from which the features will be detected The optional mask. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
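A hedged sketch combining the Cuda ORB detector and brute-force matcher entries above. The class names CudaORBDetector and CudaBruteForceMatcher follow this documentation, but the exact KnnMatch overload is an assumption.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.Features2D;
using Emgu.CV.Util;

public static class GpuMatchExample
{
    public static VectorOfVectorOfDMatch Match(GpuMat img1, GpuMat img2)
    {
        VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch();
        using (CudaORBDetector orb = new CudaORBDetector(500))
        using (CudaBruteForceMatcher matcher = new CudaBruteForceMatcher(DistanceType.Hamming))
        using (VectorOfKeyPoint kp1 = new VectorOfKeyPoint())
        using (VectorOfKeyPoint kp2 = new VectorOfKeyPoint())
        using (GpuMat desc1 = new GpuMat())
        using (GpuMat desc2 = new GpuMat())
        {
            orb.DetectAndCompute(img1, null, kp1, desc1, false);
            orb.DetectAndCompute(img2, null, kp2, desc2, false);
            // k = 2 nearest neighbors per query descriptor, no mask.
            matcher.KnnMatch(desc1, desc2, matches, 2);
        }
        return matches;
    }
}
```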
Compute the descriptors on the image from the given keypoint locations. The Feature2DAsync object The image to compute descriptors from The keypoints where the descriptor computation is performed The descriptors from the given keypoints Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts keypoints array from internal representation to standard vector. The Feature2DAsync object GpuMat representation of the keypoints. Vector of keypoints Disparity map refinement using joint bilateral filtering given a single color image. Qingxiong Yang, Liang Wang, Narendra Ahuja http://vision.ai.uiuc.edu/~qyang6/ Create a GpuDisparityBilateralFilter Number of disparities. Use 64 as default Filter radius, use 3 as default Number of iterations, use 1 as default Apply the filter to the disparity image The input disparity map The image The output disparity map, should have the same size as the input disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged resources associated with the filter. Use Block Matching algorithm to find stereo correspondence Create a stereoBM The number of disparities. Must be multiple of 8. Use 64 for default The SAD window size. Use 19 for default Computes disparity map for the input rectified stereo pair. The left single-channel, 8-bit image The right image of the same size and the same type The disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the stereo state and all the memory associated with it A Constant-Space Belief Propagation Algorithm for Stereo Matching. Qingxiong Yang, Liang Wang, Narendra Ahuja. http://vision.ai.uiuc.edu/~qyang6/ A Constant-Space Belief Propagation Algorithm for Stereo Matching The number of disparities. Use 128 as default The number of BP iterations on each level. Use 8 as default. The number of levels. Use 4 as default The number of active disparity on the first level. Use 4 as default. Computes disparity map for the input rectified stereo pair. The left single-channel, 8-bit image The right image of the same size and the same type The disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged memory Cascade Classifier for object detection using Cuda Create a Cuda cascade classifier using the specific file The file to create the classifier from Create a Cuda cascade classifier using the specific file storage The file storage to create the classifier from Detects objects of different sizes in the input image. Matrix of type CV_8U containing an image where objects should be detected. Buffer to store detected objects (rectangles). Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts objects array from internal representation to standard vector. Objects array in internal representation. Resulting array. Release all unmanaged resources associated with this object Parameter specifying how much the image size is reduced at each image scale Parameter specifying how many neighbors each candidate rectangle should have to retain it The maximum number of objects If true, only return the largest object The maximum object size The minimum object size The classifier size
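A hedged sketch of GPU block-matching stereo using the CudaStereoBM entries above; the FindStereoCorrespondence signature is an assumption from these entries.

```csharp
using Emgu.CV;
using Emgu.CV.Cuda;

public static class GpuStereoExample
{
    public static void Disparity(GpuMat leftGray, GpuMat rightGray, GpuMat disparity)
    {
        // 64 disparities and a 19-pixel SAD window, the defaults noted above.
        using (CudaStereoBM stereo = new CudaStereoBM(64, 19))
        {
            stereo.FindStereoCorrespondence(leftGray, rightGray, disparity, null);
        }
    }
}
```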
A HOG descriptor The descriptor format Row by row Col by col Create a new HOGDescriptor using the specific parameters Block size in cells. Use (16, 16) for default. Cell size. Use (8, 8) for default. Block stride. Must be a multiple of cell size. Use (8, 8) for default. Number of bins. Detection window size. Must be aligned to block size and block stride. Must match the size of the training image. Use (64, 128) for default. Returns coefficients of the classifier trained for people detection (for default window size). The default people detector Set the SVM detector The SVM detector Performs object detection with increasing detection window. The CudaImage to search in The regions where positives are found Performs object detection with a multi-scale window. Source image. Detected objects boundaries. Optional output array for confidences. Release the unmanaged memory associated with this HOGDescriptor Flag to specify whether the gamma correction preprocessing is required or not Gaussian smoothing window parameter Maximum number of detection window increases Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping. See groupRectangles. Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Coefficient of the detection window increase. L2-Hys normalization method shrinkage. The descriptor format Returns the number of coefficients required for the classification. Window stride. It must be a multiple of block stride. Returns the block histogram size.
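A minimal sketch of people detection with the Cuda HOG descriptor, using the documented defaults for the constructor arguments. The class name CudaHOG and the return type of DetectMultiScale are assumptions that may differ between Emgu CV releases.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Cuda;
using Emgu.CV.Structure;

using (CudaHOG hog = new CudaHOG(
    new Size(64, 128),  // detection window size (matches the training image)
    new Size(16, 16),   // block size in cells
    new Size(8, 8),     // block stride, a multiple of cell size
    new Size(8, 8),     // cell size
    9))                 // number of bins
using (GpuMat image = new GpuMat(CvInvoke.Imread("street.png")))
{
    hog.SetSVMDetector(hog.GetDefaultPeopleDetector());
    MCvObjectDetection[] people = hog.DetectMultiScale(image);
}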
Library to invoke Tesseract OCR functions

The tesseract page iterator Returns orientation for the block the iterator points to. Returns the baseline of the current object at the given level. The baseline is the line that passes through (x1, y1) and (x2, y2). WARNING: with vertical text, baselines may be vertical! Returns null if there is no baseline at the current position. Page iterator level The baseline of the current object at the given level Release the page iterator The orientation Page orientation Writing direction Textline order After rotating the block so the text orientation is upright, how many radians does one have to rotate the block anti-clockwise for it to be level? -Pi/4 <= deskew_angle <= Pi/4

Page orientation Up Right Down Left Writing direction Left to right Right to left Top to bottom Textline order Left to right Right to left Top to bottom Page iterator level Block of text/image/separator line. Paragraph within a block. Line within a paragraph. Word within a textline. Symbol/character within a word.

Leptonica Pix image structure Create a Pix object by copying data from Mat The Mat to create the Pix object from Release all the unmanaged memory associated with this Pix

The tesseract OCR engine Get the tesseract version as String Get the tesseract version Create a default tesseract engine. Needed to call the Init function to load language files in a later stage. Get the OpenCL device pointer Pointer to the opencl device Create a tesseract OCR engine. The datapath must be the name of the parent directory of tessdata and must end in / . Any name after the last / will be stripped. The language is (usually) an ISO 639-3 string or NULL will default to eng. It is entirely safe (and eventually will be efficient too) to call Init multiple times on the same instance to change language, or just to reset the classifier. The language may be a string of the form [~]<lang>[+[~]<lang>]* indicating that multiple languages are to be loaded. E.g. hin+eng will load Hindi and English. Languages may specify internally that they want to be loaded with one or more other languages, so the ~ sign is available to override that. E.g. if hin were set to load eng by default, then hin+~eng would force loading only hin. The number of loaded languages is limited only by memory, with the caveat that loading additional languages will impact both speed and accuracy, as there is more work to do to decide on the applicable language, and there is more chance of hallucinating incorrect words. OCR engine mode Create a tesseract OCR engine. The datapath and language arguments follow the same rules described for the constructor above. OCR engine mode This can be used to specify a white list for OCR. e.g. specify "1234567890" to recognize digits only. Note that the white list currently seems to only work with OcrEngineMode.OEM_TESSERACT_ONLY Check whether a word is valid according to Tesseract's language model The word to be checked. 0 if the word is invalid, non-zero if valid Gets or sets the page seg mode. The page seg mode.
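A minimal sketch of creating the engine with the constructor just described, restricting recognition to digits via the white-list argument. The tessdata path is a placeholder, and the OcrEngineMode member naming varies across versions (older releases spell it OEM_TESSERACT_ONLY).

using Emgu.CV.OCR;

using (Tesseract ocr = new Tesseract(@"./tessdata/", "eng", OcrEngineMode.TesseractOnly, "1234567890"))
{
    // The engine now recognizes digits only; see the white-list note above.
}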
Initialize the OCR engine using the specific dataPath and language name. The datapath and language arguments follow the same rules described for the constructor above. OCR engine mode Release the unmanaged resource associated with this class Set the image for optical character recognition The image where detection took place Set the image for optical character recognition The image where detection took place Recognize the image from SetAndThresholdImage, generating Tesseract internal structures. Returns 0 on success. Set the variable to the specific value. The name of the tesseract variable. e.g. use "tessedit_char_blacklist" to black list characters and "tessedit_char_whitelist" to white list characters. The full list of options can be found in the Tesseract OCR source code "tesseractclass.h" The value to be set Get all the text in the image All the text in the image Make a TSV-formatted string from the internal data structures. pageNumber is 0-based but will appear in the output as 1-based. A TSV-formatted string from the internal data structures. The recognized text is returned as coded in the same format as a box file used in training. pageNumber is 0-based but will appear in the output as 1-based. The recognized text is returned as coded in the same format as a box file used in training. The recognized text is returned coded as UNLV format Latin-1 with specific reject and suspect codes pageNumber is 0-based but will appear in the output as 1-based. The recognized text is returned coded as UNLV format Latin-1 with specific reject and suspect codes The recognized text pageNumber is 0-based but will appear in the output as 1-based. The recognized text Make a HTML-formatted string with hOCR markup from the internal data structures. pageNumber is 0-based but will appear in the output as 1-based. A HTML-formatted string with hOCR markup from the internal data structures. Detect all the characters in the image. All the characters in the image

This represents a character that is detected by the OCR engine The text The cost. The lower it is, the more confident the result The region where the character is detected.

Turn a single image into symbolic text. The pix is the image processed. Metadata used by side-effect processes, such as reading a box file or formatting as hOCR. Metadata used by side-effect processes, such as reading a box file or formatting as hOCR. retryConfig is useful for debugging. If not NULL, you can fall back to an alternate configuration if a page fails for some reason. Terminates processing if any single page takes too long. Set to 0 for unlimited time. Responsible for creating the output. For example, use the TessTextRenderer if you want plaintext output, or the TessPDFRender to produce searchable PDF. Returns true if successful, false on error. Runs page layout analysis in the mode set by SetPageSegMode. May optionally be called prior to Recognize to get access to just the page layout results. Returns an iterator to the results. Returns NULL on error or an empty page. The returned iterator must be deleted after use. WARNING! This class points to data held within the TessBaseAPI class, and therefore can only be used while the TessBaseAPI class still exists and has not been subjected to a call of Init, SetImage, Recognize, Clear, End, DetectOS, or anything else that changes the internal PAGE_RES.
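A minimal sketch of the recognition flow just described: set the image, call Recognize, then read back the full text and the per-character results. It assumes an engine ocr created as above; per this document Recognize returns 0 on success, though some releases expose it as void.

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.OCR;

using (Mat page = CvInvoke.Imread("receipt.png", ImreadModes.Grayscale))
{
    ocr.SetImage(page);
    if (ocr.Recognize() == 0)  // 0 indicates success
    {
        string text = ocr.GetUTF8Text();                   // all text in the image
        Tesseract.Character[] chars = ocr.GetCharacters(); // text, cost and region per character
    }
}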
Get the OCR Engine Mode When Tesseract/LSTM is initialized we can choose to instantiate/load/run only the Tesseract part, only the LSTM part, or both along with the combiner. The preference of which engine to use is stored in tessedit_ocr_engine_mode. Run Tesseract only - fastest Run just the LSTM line recognizer. Run the LSTM recognizer, but allow fallback to Tesseract when things get difficult. Specify this mode when calling init_*(), to indicate that any of the above modes should be automatically inferred from the variables in the language-specific config, command-line configs, or if not specified in any of the above should be set to the default OEM_TESSERACT_ONLY.

Tesseract page segmentation mode Orientation and script detection (OSD) only. Automatic page segmentation with orientation and script detection (OSD). Automatic page segmentation, but no OSD, or OCR. Fully automatic page segmentation, but no OSD. Assume a single column of text of variable sizes. Assume a single uniform block of vertically aligned text. Assume a single uniform block of text. (Default.) Treat the image as a single text line. Treat the image as a single word. Treat the image as a single word in a circle. Treat the image as a single character. Find as much text as possible in no particular order. Sparse text with orientation and script detection. Treat the image as a single text line, bypassing hacks that are Tesseract-specific. Number of enum entries.

This structure is primarily used for PInvoke The length The cost The region

Interface to the TesseractResultRenderer Pointer to the unmanaged TessResultRenderer

Renders tesseract output into searchable PDF Create a PDF renderer dataDir is the location of the TESSDATA. We need it because we load a custom PDF font from this location. Release the unmanaged memory associated with this Renderer Pointer to the unmanaged TessResultRenderer

Wrapper class for the C++ standard vector of TesseractResult. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of TesseractResult Create a standard vector of TesseractResult of the specific size The size of the vector Create a standard vector of TesseractResult with the initial values The initial values Push an array of values into the standard vector The value to be pushed to the vector Push multiple values from the other vector into this vector The other vector, from which the values will be pushed to the current vector Convert the standard vector to an array of TesseractResult An array of TesseractResult Get the size of the vector Clear the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item in the specific index The index The item in the specific index Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray The size of the item in this Vector, counted as size in bytes.
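A sketch of producing a searchable PDF with the renderer and the ProcessPage method described earlier, again assuming an engine ocr created as above. The PDFRenderer constructor arguments (output base name, tessdata directory, text-only flag) and the exact ProcessPage signature are assumptions drawn from this document and may differ in your release.

using Emgu.CV;
using Emgu.CV.OCR;

using (Mat page = CvInvoke.Imread("page.png"))
using (Pix pix = new Pix(page))  // copy the Mat into a Leptonica Pix
using (PDFRenderer renderer = new PDFRenderer("out", @"./tessdata/", false))
{
    // pageNumber is 0-based here but appears as 1-based in the output;
    // timeout 0 means unlimited time, null retryConfig means no fallback config
    bool success = ocr.ProcessPage(pix, 0, "page.png", null, 0, renderer);
}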
An abstract class that wraps around a disposable object Track whether Dispose has been called. The dispose function that implements the IDisposable interface Dispose(bool disposing) executes in two distinct scenarios. If disposing equals true, the method has been called directly or indirectly by a user's code. Managed and unmanaged resources can be disposed. If disposing equals false, the method has been called by the runtime from inside the finalizer and you should not reference other objects. Only unmanaged resources can be disposed. Release the managed resources. This function will be called during the disposal of the current object. Override this function if you need to call the Dispose() function on any managed IDisposable object created by the current object Release the unmanaged resources Destructor

A generic EventArgs The type of arguments Create a generic EventArgs with the specific value The value The value of the EventArgs A generic EventArgs The type of the first value The type of the second value Create a generic EventArgs with two values The first value The second value The first value The second value

Implement this interface if the object can output code to generate itself. Return the code to generate the object itself from the specific language The programming language to output code The code to generate the object from the specific language

An object that can be interpolated The index that will be used for interpolation Interpolate based on this point and the other point with the given index The other point The interpolation index The interpolated point

A pinned array of the specific type The type of the array Create a pinned array of the specific type The size of the array Get the address of the pinned array A pointer to the address of the pinned array Get the array Release the GCHandle Dispose the unmanaged data

Provides information about the current platform Get the type of the current operating system Get the type of the current runtime environment

Utility functions for Emgu Convert an object to an xml document The type of the object to be converted The object to be serialized An xml document that represents the object Convert an object to an xml document The type of the object to be converted The object to be serialized Other types that it must know ahead of time to serialize the object An xml document that represents the object Convert an xml document to an object The type of the object to be converted to The xml document The object representation as a result of the deserialization of the xml document Convert an xml document to an object The type of the object to be converted to The xml document Other types that it must know ahead of time to deserialize the object The object representation as a result of the deserialization of the xml document Convert an xml string to an object The type of the object to be converted to The xml document as a string The object representation as a result of the deserialization of the xml string Similar to the Marshal.SizeOf function The type The size of T in bytes Merges two byte vectors into one The first byte vector to be merged The second byte vector to be merged The bytes that are a concatenation of a and b Call a command from the command line The name of the executable The arguments to the executable The standard output Use reflection to find the base type. If no such type exists, null is returned The type to search from The name of the base class to search The base type Convert some generic vector to a vector of bytes The type of the input vector Array of data The byte vector Perform first-degree interpolation given the sorted data and the interpolation indexes The sorted data that will be interpolated from The indexes of the interpolation result Get subsamples with the specific rate The source from which the subsamples will be derived The subsample rate Subsamples at the specific rate
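A minimal sketch of the XML serialization round trip described above, assuming these helpers live on Emgu.Util.Toolbox and that XmlSerialize returns an XDocument; the Point payload is just an arbitrary serializable example.

using System.Drawing;
using System.Xml.Linq;
using Emgu.Util;

Point original = new Point(3, 4);
XDocument doc = Toolbox.XmlSerialize<Point>(original);  // object -> xml document
Point restored = Toolbox.XmlDeserialize<Point>(doc);    // xml document -> object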
Join multiple index-ascending IInterpolatables together as a single index-ascending IInterpolatable. The type of objects that will be joined The enumerables, each should be stored in index ascending order A single enumerable sorted in index ascending order

Maps the specified executable module into the address space of the calling process. The name of the dll The handle to the library Decrements the reference count of the loaded dynamic-link library (DLL). When the reference count reaches zero, the module is unmapped from the address space of the calling process and the handle is no longer valid The handle to the library If the function succeeds, the return value is true. If the function fails, the return value is false. Adds a directory to the search path used to locate DLLs for the application The directory to be searched for DLLs True on success

Type of operating system Windows Linux Mac OSX iOS devices. iPhone, iPad, iPod Touch Android devices Windows Phone devices

The runtime environment .Net runtime Windows Store app runtime Mono runtime

The type of programming language C# C++

An Unmanaged Object is a disposable object with a Ptr property pointing to the unmanaged object A pointer to the unmanaged object Pointer to the unmanaged object Implicit operator for IntPtr The UnmanagedObject The unmanaged pointer for this object
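A minimal sketch of extending the UnmanagedObject pattern described above: a protected pointer field holds the native resource, and the dispose override releases it. NativeCreate and NativeRelease are hypothetical PInvoke placeholders, not part of Emgu.

using System;
using System.Runtime.InteropServices;
using Emgu.Util;

public class MyNativeWrapper : UnmanagedObject
{
    public MyNativeWrapper()
    {
        _ptr = NativeCreate();  // allocate the unmanaged object
    }

    protected override void DisposeObject()  // invoked by Dispose() and the finalizer
    {
        if (_ptr != IntPtr.Zero)
            NativeRelease(ref _ptr);  // free the native resource and zero the pointer
    }

    [DllImport("mynative")]  // hypothetical native library
    private static extern IntPtr NativeCreate();

    [DllImport("mynative")]
    private static extern void NativeRelease(ref IntPtr ptr);
}

Because the base class implements the disposable pattern documented earlier, callers can rely on a using block, an explicit Dispose() call, or the finalizer to release the native memory, and the implicit IntPtr operator lets the wrapper be passed directly to PInvoke functions.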