Introduction To Computer Vision Using OpenCV
The name OpenCV has become synonymous with computer vision, but what is OpenCV? OpenCV is a collection of software algorithms put together in a library to be used by industry and academia for computer vision applications and research (Figure 1). OpenCV started at Intel in the mid-1990s as a method to demonstrate how to accelerate certain algorithms in hardware. In 2000, Intel released OpenCV to the open source community as a beta version, followed by v1.0 in 2006. In 2008, Willow Garage took over support for OpenCV and immediately released v1.1.

Figure 1: OpenCV, an algorithm library (courtesy Willow Garage)

Willow Garage dates from 2006. The company has been in the news a lot lately, subsequent to the unveiling of its PR2 robot (Figure 2). Gary Bradski began working on OpenCV when he was at Intel; as a senior scientist at Willow Garage, he aggressively continues his work on the library.

Figure 2: Willow Garage's PR2 robot

OpenCV v2.0, released in 2009, contained many improvements and upgrades. Initially, OpenCV was primarily a C library. The majority of algorithms were written in C, and the primary method of using the library was via a C API. OpenCV v2.0 migrated toward C++ and a C++ API. Subsequent versions of OpenCV added Python support, along with Windows, Linux, iOS, and Android OS support, transforming OpenCV (currently at v2.3) into a cross-platform tool. OpenCV v2.3 contains more than 2,500 algorithms; the original OpenCV only had 500. And to assure quality, many of the algorithms provide their own unit tests.

So, what can you do with OpenCV v2.3? Think of OpenCV as a box of 2,500 different food items. The chef's job is to combine the food items into a meal. OpenCV in itself is not the full meal; it contains the pieces required to make a meal. But here's the good news: OpenCV includes a bunch of recipes to provide examples of what it can do.
Experimenting with OpenCV, no programming experience necessary

BDTI has created the OpenCV Executable Demo Package, an easy-to-use tool that allows anyone with a Windows computer and a web camera to experiment with some of the algorithms in OpenCV v2.3. You can download the installer zip file here. After the download is complete, double-click on the zip file to uncompress its contents, then double-click on the setup.exe file. The installer will place various prebuilt OpenCV applications on your computer. You can run the examples directly from your Start menu (Figure 3). Just click on:

START - BDTi_OpenCV_Executable_Demo_Package - The example you want to run

Figure 3: OpenCV examples included with the BDTI-developed tutorial tool

Examples named xxxxxxSample.bat will use a video clip as an input (example clips are provided with the installation), while examples named xxxxxWebCamera.bat will use a web camera as an input. Keep an eye on www.embedded- for additional examples in the future.

Computer vision involves classifying groups of pixels in an image or video stream as either background or a unique feature. Each of the following examples demonstrates various algorithms that separate unique features from the background using different techniques. Some of them use code derived from the book OpenCV 2 Computer Vision Application Programming Cookbook by Robert Laganiere (ISBN-10: 1849513244, ISBN-13: 978-1849513241) (Figure 4).

Figure 4: The source of the code used in some of the examples discussed in this article
Motion detection

As the name implies, motion detection uses the change of pixels between frames to classify pixels as unique features (Figure 5). The algorithm considers pixels that do not change between frames as being stationary and therefore part of the background. Motion detection, or background subtraction, is a very practical and easy-to-implement algorithm. In its simplest form, the algorithm looks for differences between two frames of video by subtracting one frame from the next. White pixels are moving, black pixels are stationary.

Figure 5: The user interface for the motion detection example

This example adds an additional element to the simple frame subtraction algorithm: a running average of the frames. The frame averaging routine runs over a time period specified by the LearnRate parameter. The higher the LearnRate, the longer the running average. By setting LearnRate to 0, you disable the running average, and the algorithm simply subtracts one frame from the next. The Threshold parameter sets the level required for a pixel to be considered moving. The algorithm subtracts the current frame from the previous frame, giving a result. If the result is greater than the threshold, the algorithm displays a white pixel and considers that pixel to be moving. A minimal code sketch of this approach appears after the parameter descriptions below.

LearnRate: Regulates the update speed (how fast the accumulator forgets about earlier images).
Threshold: The minimum value for a pixel difference to be considered moving.
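For readers who want to see how this maps onto the OpenCV API, here is a minimal sketch of running-average background subtraction built from cv::accumulateWeighted(), cv::absdiff(), and cv::threshold(). This is not the demo's actual source code: the variable names are invented, and the mapping of the demo's LearnRate onto accumulateWeighted()'s alpha argument is an assumption (in the API, a larger alpha makes the accumulator forget faster, so alpha corresponds inversely to LearnRate as described above).

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // open the default web camera
    if (!cap.isOpened()) return -1;

    double alpha = 0.05;                     // accumulator weight (assumed stand-in for LearnRate)
    double threshold = 30.0;                 // minimum pixel difference to count as "moving"

    cv::Mat frame, gray, accum, diff, mask;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);
        if (accum.empty()) gray.copyTo(accum);

        cv::absdiff(gray, accum, diff);      // difference against the running average
        cv::threshold(diff, mask, threshold, 255, cv::THRESH_BINARY);
        cv::accumulateWeighted(gray, accum, alpha);   // update the running average

        mask.convertTo(mask, CV_8U);
        cv::imshow("Motion", mask);          // white pixels are moving, black are stationary
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}

Setting alpha toward 0 here lengthens the average's memory, which is the rough analogue of raising the demo's LearnRate.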
Line detection

Line detection classifies straight edges in an image as features (Figure 6). The algorithm relegates to the background anything in the image that it does not recognize as a straight edge, thereby ignoring it. Edge detection is another fundamental component in computer vision.

Figure 6: The user interface for the line detection example

Image processing determines an edge by sensing close-proximity pixels of differing intensity. For example, a black pixel next to a white pixel defines a hard edge. A gray pixel next to a black (or white) pixel defines a soft edge. The Threshold parameter sets a minimum limit on how hard an edge has to be in order for it to be classified as an edge. A Threshold of 255 would require a white pixel to be next to a black pixel to qualify as an edge. As the Threshold value decreases, softer edges in the image appear in the display.
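A common way to express this edge-hardness test in OpenCV is the Canny edge detector. The sketch below is illustrative only: the article does not say which edge detector the demo uses, and the mapping of the demo's single Threshold onto Canny's two hysteresis thresholds is an assumption.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("input.jpg"); // hypothetical input image
    if (image.empty()) return -1;

    cv::Mat gray, edges;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    int threshold = 125;                     // edge hardness limit (assumed stand-in for Threshold)
    cv::Canny(gray, edges, threshold, threshold * 2);  // hysteresis thresholding

    cv::imshow("Edges", edges);              // white pixels mark detected edges
    cv::waitKey(0);
    return 0;
}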
After the algorithm detects an edge, it must make a difficult decision: is this edge part of a straight line? The Hough transform, employed to make this decision, attempts to group pixels classified as edges into a straight line. It uses the MinLength and MaxGap parameters to decide (i.e., classify, in computer science lingo) whether a group of edge pixels forms a straight continuous line or is ignored background information (edge pixels not part of a continuous straight line are considered background, and therefore not a feature). A sketch of this grouping step follows the parameter descriptions below.

Threshold: Sets the minimum difference between adjoining groups of pixels to be classified as an edge.
MinLength: The minimum number of continuous edge pixels required to classify a potential feature as a straight line.
MaxGap: The maximum allowable number of missing edge pixels that still enables classification of a potential feature as a continuous straight line.
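OpenCV exposes exactly this grouping step as the probabilistic Hough transform, cv::HoughLinesP(), whose minLineLength and maxLineGap arguments correspond naturally to the demo's MinLength and MaxGap. The sketch below is an assumption about how such a demo could be assembled, not the demo's actual source; note also that HoughLinesP()'s own threshold argument is an accumulator vote count, a different quantity from the edge-hardness Threshold described above.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("input.jpg"); // hypothetical input image
    if (image.empty()) return -1;

    cv::Mat gray, edges;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 125, 250);        // start from an edge map

    double minLength = 100.0;                // demo's MinLength (assumed value)
    double maxGap = 20.0;                    // demo's MaxGap (assumed value)
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, minLength, maxGap);

    // Draw each detected segment over the original image.
    for (size_t i = 0; i < lines.size(); i++)
        cv::line(image,
                 cv::Point(lines[i][0], lines[i][1]),
                 cv::Point(lines[i][2], lines[i][3]),
                 cv::Scalar(0, 0, 255), 2);

    cv::imshow("Lines", image);
    cv::waitKey(0);
    return 0;
}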
Optical flow

Optical flow describes how a group of pixels in the current frame changes position in the next frame of a video sequence (Figure 7). The group of pixels is a feature. Optical flow estimation finds use in tracking features in an image, by predicting where the features will appear next. Many optical flow estimation algorithms exist; this particular example uses the Lucas-Kanade approach. The algorithm's first step involves finding good features to track between frames; specifically, it looks for groups of pixels containing corners, which can be located reliably from one frame to the next.
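A minimal sketch of pyramidal Lucas-Kanade tracking with OpenCV follows, using cv::goodFeaturesToTrack() to pick corner features and cv::calcOpticalFlowPyrLK() to estimate where each one moves. As with the earlier sketches, this illustrates the technique under assumed parameter values; it is not the demo's actual source.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // open the default web camera
    if (!cap.isOpened()) return -1;

    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, nextPts;

    cap >> frame;
    if (frame.empty()) return -1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    // Pick corner-like features that are easy to track between frames.
    cv::goodFeaturesToTrack(prevGray, prevPts, 100, 0.01, 10);

    while (cap.read(frame) && !prevPts.empty())
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<uchar> status;           // 1 if the feature was found in the new frame
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        // Draw a flow vector for every feature tracked into the new frame.
        for (size_t i = 0; i < nextPts.size(); i++)
            if (status[i])
                cv::line(frame, prevPts[i], nextPts[i], cv::Scalar(0, 255, 0), 2);

        cv::imshow("Optical flow", frame);
        if (cv::waitKey(30) >= 0) break;

        prevGray = gray.clone();
        prevPts = nextPts;                   // in practice, re-detect features periodically
    }
    return 0;
}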