- OpenCV 4 with Python Blueprints
- Dr. Menua Gevorgyan Arsen Mamikonyan Michael Beyeler
Understanding hand region segmentation
The automatic detection of an arm—and later, the hand region—could be designed to be arbitrarily complicated, perhaps by combining information about the shape and color of an arm or hand. However, using skin color as a determining feature to find hands in visual scenes might fail terribly in poor lighting conditions or when the user is wearing gloves. Instead, we choose to recognize the user's hand by its shape in the depth map.
Allowing hands of all sorts to be present anywhere in the image would unnecessarily complicate the task of this chapter, so we make two simplifying assumptions:
- We will instruct the user of our app to place their hand in front of the center of the screen, orienting their palm roughly parallel to the orientation of the Kinect sensor so that it is easier to identify the corresponding depth layer of the hand.
- We will also instruct the user to sit roughly 1 to 2 meters away from the Kinect and to slightly extend their arm in front of their body so that the hand will end up in a slightly different depth layer than the arm. However, the algorithm will still work even if the full arm is visible.
In this way, it will be relatively straightforward to segment the image based on the depth layer alone. Otherwise, we would first have to come up with a hand detection algorithm, which would unnecessarily complicate our task. If you feel adventurous, feel free to tackle that on your own.
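To make the idea concrete, the depth-layer segmentation described above can be sketched as follows. This is a minimal illustration, not the book's actual implementation: the function name, patch size, and tolerance are our own assumptions, and the depth values are taken to be in millimeters as delivered by a Kinect-style sensor.

```python
import numpy as np


def segment_hand_by_depth(depth, tolerance_mm=100):
    """Sketch of depth-layer segmentation (names and parameters are
    illustrative assumptions, not the book's code).

    Samples a small patch at the image center, where the user was
    instructed to hold their hand, estimates the hand's depth layer,
    and keeps only pixels close to that depth.
    """
    h, w = depth.shape
    # Small patch around the image center.
    center = depth[h // 2 - 10:h // 2 + 10, w // 2 - 10:w // 2 + 10]
    # Median is a robust estimate of the most prominent center depth;
    # zero readings are invalid on many depth sensors, so skip them.
    hand_depth = np.median(center[center > 0])
    # Binary mask: pixels whose depth lies within the tolerance
    # of the estimated hand layer.
    diff = np.abs(depth.astype(np.int32) - int(hand_depth))
    return (diff < tolerance_mm).astype(np.uint8) * 255
```

Because the user extends their arm slightly, the arm falls into a deeper layer than the palm and is largely excluded by the tolerance band, which is exactly why the second assumption above helps.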
Let's see how to find the most prominent depth of the image center region in the next section.