Kinect .NET SDK – Getting Started
Today (16/06/2011) Microsoft released the Kinect .NET SDK. I first saw the news from Guy Burstein, and I had to download it and give it a try right away. It's amazing!
As you can see from the picture below, I built a simple demo to get started with the Kinect SDK. In this post I'll walk step by step through using the Kinect .NET SDK for video (NUI); in the next post I'll also cover audio with Kinect.
Download Demo Project
Step 1: Prepare your environment
In order to work with the Kinect .NET SDK you need to meet the following requirements:
Hardware Requirements
- Computer with a dual-core, 2.66-GHz or faster processor
- Windows 7–compatible graphics card that supports Microsoft® DirectX® 9.0c capabilities
- 2 GB of RAM
- Kinect for Xbox 360® sensor—retail edition, which includes special USB/power cabling
Step 2: Create New WPF Project
Add a reference to Microsoft.Research.Kinect.Nui (located under C:\Program Files (x86)\Microsoft Research KinectSDK) and make sure the solution targets the x86 platform, because this Beta SDK includes only x86 libraries.
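The code in the next steps writes to three Image controls named image, imageCmyk32, and depth, plus a TextBlock named frameRate. A minimal window for the demo might look like the sketch below (the class name, handler names, and layout are my assumptions; the demo project may differ):

```xml
<Window x:Class="KinectDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Kinect Getting Started"
        Loaded="WindowLoaded" Closed="WindowClosed">
    <StackPanel Orientation="Horizontal">
        <!-- Raw color stream, the same bytes shown as CMYK, and the depth view -->
        <Image Name="image" Width="320" Height="240" />
        <Image Name="imageCmyk32" Width="320" Height="240" />
        <Image Name="depth" Width="320" Height="240" />
        <TextBlock Name="frameRate" VerticalAlignment="Bottom" />
    </StackPanel>
</Window>
```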
An application must initialize the Kinect sensor by calling Runtime.Initialize before calling any other methods on the Runtime object. Runtime.Initialize initializes the internal frame-capture engine, which starts a thread that retrieves data from the Kinect sensor and signals the application when a frame is ready. It also initializes the subsystems that collect and process the sensor data. The Initialize method throws InvalidOperationException if it fails to find a Kinect sensor, so the call to Runtime.Initialize appears in a try/catch block.
Create a Window Loaded event handler and call InitializeNui from it:
private void InitializeNui()
{
    // Declare _kinectNui as a Runtime object, which represents the Kinect sensor instance
    _kinectNui = new Runtime();

    try
    {
        // An application must initialize the Kinect sensor by calling Runtime.Initialize
        // before calling any other methods on the Runtime object
        _kinectNui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);

        // To stream color images:
        //  • The options must include UseColor.
        //  • Valid image resolutions are Resolution1280x1024 and Resolution640x480.
        //  • Valid image types are Color, ColorYuv, and ColorYuvRaw.
        _kinectNui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.ColorYuv);

        // To stream depth and player index data:
        //  • The options must include UseDepthAndPlayerIndex.
        //  • Valid resolutions for depth and player index data are Resolution320x240 and Resolution80x60.
        //  • The only valid image type is DepthAndPlayerIndex.
        _kinectNui.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);

        lastTime = DateTime.Now;

        // Set up the event handlers that the runtime calls when a video or depth frame is ready
        _kinectNui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(NuiVideoFrameReady);
        _kinectNui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
    }
    catch (InvalidOperationException ex)
    {
        // Initialize throws InvalidOperationException if it fails to find a Kinect sensor
        MessageBox.Show(ex.Message);
    }
}
And a Window Closed event handler to shut the sensor down:
private void WindowClosed(object sender, EventArgs e)
{
    // Release the Kinect sensor so other applications can use it
    _kinectNui.Uninitialize();
}
Step 3: Show Video
Both the video and depth streams return a PlanarImage, and we just need to create a new BitmapSource from it and display it in the UI.
Video Frame Ready Event Handler
void NuiVideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage Image = e.ImageFrame.Image;

    // Display the raw frame as Bgr32, and the same bytes reinterpreted
    // as Cmyk32 for a second, false-color view
    image.Source = BitmapSource.Create(Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null, Image.Bits, Image.Width * Image.BytesPerPixel);
    imageCmyk32.Source = BitmapSource.Create(Image.Width, Image.Height, 96, 96, PixelFormats.Cmyk32, null, Image.Bits, Image.Width * Image.BytesPerPixel);
}
Depth Frame Ready Event Handler
Depth is different because the frame you get back is 16 bits per pixel, and we need to convert it to 32 bits before display. I've used the same conversion method as the SDK samples.
void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var Image = e.ImageFrame.Image;
    var convertedDepthFrame = convertDepthFrame(Image.Bits);

    depth.Source = BitmapSource.Create(Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null, convertedDepthFrame, Image.Width * 4);

    // Simple fps counter: count frames and update the display once per second
    ++totalFrames;
    var cur = DateTime.Now;
    if (cur.Subtract(lastTime) > TimeSpan.FromSeconds(1))
    {
        int frameDiff = totalFrames - lastFrames;
        lastFrames = totalFrames;
        lastTime = cur;
        frameRate.Text = frameDiff.ToString() + " fps";
    }
}

// Channel offsets within a 32-bit Bgr32 pixel, the reusable output buffer,
// and the fps counters used above
const int RED_IDX = 2, GREEN_IDX = 1, BLUE_IDX = 0;
byte[] depthFrame32 = new byte[320 * 240 * 4];
int totalFrames, lastFrames;
DateTime lastTime;

// Converts a 16-bit depth frame, which includes a player index in the low
// 3 bits of each pixel, into a 32-bit grayscale frame for display
byte[] convertDepthFrame(byte[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16 += 2, i32 += 4)
    {
        int player = depthFrame16[i16] & 0x07; // 0 = no player; could be used to color each player differently
        int realDepth = (depthFrame16[i16 + 1] << 5) | (depthFrame16[i16] >> 3);

        // transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in most significant bit)
        byte intensity = (byte)(255 - (255 * realDepth / 0x0fff));

        depthFrame32[i32 + RED_IDX] = intensity;
        depthFrame32[i32 + BLUE_IDX] = intensity;
        depthFrame32[i32 + GREEN_IDX] = intensity;
    }
    return depthFrame32;
}
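To make the bit twiddling in the conversion concrete, here is the same unpacking applied to one sample pixel (the two byte values are made up for illustration):

```csharp
// One depth pixel arrives as two bytes, low byte first:
// the low 3 bits carry the player index, the upper 13 bits the depth in millimeters
byte low = 0x5A, high = 0x32;

int player = low & 0x07;                   // 0x5A & 0x07 = 2: pixel belongs to player 2
int realDepth = (high << 5) | (low >> 3);  // (0x32 << 5) | (0x5A >> 3) = 1600 | 11 = 1611 mm

// closer objects produce a smaller realDepth and therefore a brighter pixel
byte intensity = (byte)(255 - (255 * realDepth / 0x0fff));  // 255 - (410805 / 4095) = 155
```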
Download Demo Project