Accessing the Kinect 3D sensor

The easiest way to access a Kinect sensor is by using an OpenKinect module called freenect. For installation instructions, take a look at the preceding section.

The freenect module has functions such as sync_get_depth() and sync_get_video(), used to obtain images synchronously from the depth sensor and camera sensor respectively. For this chapter, we will need only the Kinect depth map, which is a single-channel (grayscale) image in which each pixel value is the estimated distance from the camera to a particular surface in the visual scene.

Here, we will design a function that will read a frame from the sensor and convert it to the desired format, and return the frame together with a success status, as follows:

def read_frame() -> Tuple[bool, np.ndarray]:

The function consists of the following steps:

  1. Grab a frame; terminate the function if a frame was not acquired, like this:
    depth, timestamp = freenect.sync_get_depth() 
    if depth is None: 
        return False, None 

The sync_get_depth function returns both the depth map and a timestamp. By default, the map is in an 11-bit format: the lower 10 bits encode the estimated distance, while the highest bit is set to 1 when the distance estimation failed.
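To make the 11-bit layout concrete, here is a small sketch using synthetic raw values (these are made-up readings for illustration; no sensor is required):

```python
import numpy as np

# Hypothetical 11-bit raw readings (not real sensor data)
raw = np.array([0b10000000000,   # invalid: highest (11th) bit set
                0b00111110100,   # valid reading, depth = 500
                0b01111111111],  # valid reading, depth = 1023 (farthest)
               dtype=np.uint16)

invalid = (raw >> 10) & 1   # 1 where the distance estimation failed
depth = raw & (2**10 - 1)   # the lower 10 bits carry the depth value

print(invalid)  # [1 0 0]
print(depth)    # [   0  500 1023]
```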

  1. It is a good idea to standardize the data to 8-bit precision, since the 11-bit format cannot be visualized with cv2.imshow right away, and since we might later want to use a different sensor that returns data in a different format, as follows:
np.clip(depth, 0, 2**10-1, depth) 
depth >>= 2 

In the previous code, we first clip the values to 1,023 (that is, 2**10-1) so that they fit in 10 bits; this assigns undetected distances to the farthest possible point. Next, we shift the values 2 bits to the right so that the distance fits in 8 bits.
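The effect of the clip-and-shift step can be checked on a few sample values (chosen here purely for illustration):

```python
import numpy as np

# Sample raw readings; 2047 stands in for an undetected distance
depth = np.array([100, 512, 1023, 2047], dtype=np.uint16)

np.clip(depth, 0, 2**10 - 1, depth)  # 2047 is clipped to 1023 (farthest)
depth >>= 2                          # 10-bit range -> 8-bit range

print(depth)  # [ 25 128 255 255]
```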

  1. Finally, we convert the image into an 8-bit unsigned integer array and return the result, as follows:
return True, depth.astype(np.uint8) 

Now, the depth image can be visualized as follows:

cv2.imshow("depth", read_frame()[1]) 

Let's see how to use OpenNI-compatible sensors in the next section.
