
Finding the most prominent depth of the image center region

Once the hand is placed roughly in the center of the screen, we can start finding all image pixels that lie on the same depth plane as the hand. This is done by following these steps:

  1. First, we simply need to determine the most prominent depth value of the center region of the image. The simplest approach would be to look only at the depth value of the center pixel, like this:
height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]
  2. Then, create a mask in which all pixels at a depth of center_pixel_depth are white and all others are black, as follows:
import numpy as np 
 
depth_mask = np.where(depth == center_pixel_depth,
                      255, 0).astype(np.uint8)
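As a quick sanity check, the two snippets above can be run on a small synthetic depth array (a made-up stand-in for a real Kinect frame; the values and array size here are purely illustrative):

```python
import numpy as np

# Synthetic 6 x 6 depth frame: background at 900, a "hand" patch at 700
depth = np.full((6, 6), 900, dtype=np.uint16)
depth[2:4, 2:4] = 700

# Read the depth value of the center pixel
height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]

# White wherever the depth matches the center pixel exactly, black elsewhere
depth_mask = np.where(depth == center_pixel_depth,
                      255, 0).astype(np.uint8)
print(center_pixel_depth)  # 700
print(depth_mask[3, 3], depth_mask[0, 0])  # 255 0
```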

However, this approach will not be very robust, because there is the chance that it will be compromised by the following:

  • Your hand will not be placed perfectly parallel to the Kinect sensor.
  • Your hand will not be perfectly flat.
  • The Kinect sensor values will be noisy.

Therefore, different regions of your hand will have slightly different depth values.
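A small simulation (with made-up noise values) illustrates how fragile exact equality is in the presence of sensor noise:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated hand region: true depth 700 plus a few units of sensor noise
hand = 700 + rng.integers(-3, 4, size=(21, 21))

center_pixel_depth = hand[10, 10]
# Exact equality only selects pixels whose noisy reading happens to
# coincide with the center pixel's reading
exact_mask = hand == center_pixel_depth
print(exact_mask.mean())  # only a small fraction of the hand is matched
```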

The segment_arm method takes a slightly better approach—it looks at a small neighborhood in the center of the image and determines the median depth value. This is done by following these steps:

  1. First, we find the center region (for example, 21 x 21 pixels) of the image frame, like this:
def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # find the center (21 x 21 pixels) region of the image frame
    center_half = 10  # half-width of the center region
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
  2. Then, we determine the median depth value, med_val, as follows:
med_val = np.median(center) 

We can now compare med_val with the depth value of all pixels in the image and create a mask in which all pixels whose depth values are within a particular range [med_val-abs_depth_dev, med_val+abs_depth_dev] are white, and all other pixels are black.

However, for reasons that will become clear in a moment, let's paint the pixels gray instead of white, like this:

frame = np.where(abs(frame - med_val) <= abs_depth_dev,
                 128, 0).astype(np.uint8)
  3. The result will look like this:

You will note that the segmentation mask is not smooth. In particular, it contains holes at points where the depth sensor failed to make a prediction. Let's learn how to apply morphological closing to smoothen the segmentation mask, in the next section.
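Putting the snippets above together, the segmentation so far can be exercised on a synthetic depth frame (a stand-in for real Kinect data; the frame size, depth values, and noise level are assumptions for illustration):

```python
import numpy as np

def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # find the center (21 x 21 pixels) region of the image frame
    center_half = 10  # half-width of the center region
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
    # median is robust to noisy readings within the center region
    med_val = np.median(center)
    # gray (128) wherever the depth is within abs_depth_dev of the median
    return np.where(abs(frame - med_val) <= abs_depth_dev,
                    128, 0).astype(np.uint8)

# Synthetic 100 x 100 depth frame: background at 900, noisy "hand" at ~700
rng = np.random.default_rng(1)
frame = np.full((100, 100), 900, dtype=np.int32)
frame[30:70, 30:70] = 700 + rng.integers(-5, 6, size=(40, 40))

mask = segment_arm(frame)
print(mask[50, 50], mask[0, 0])  # 128 0
```

Note how the tolerance band [med_val - abs_depth_dev, med_val + abs_depth_dev] keeps the noisy hand pixels in the mask, whereas the exact-equality check from the first approach would not.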
