Windows Phone Mango–What’s New? (“Camera” – Part 4 of 8)

May 24, 2011

All Windows Phone 7 devices are equipped with a camera; the minimum required camera resolution is 5 megapixels. For developers, access to the camera enables many scenarios such as image recognition, video chat, augmented reality and others. From the beginning, Windows Phone 7 RTM devices supported camera scenarios via launchers and choosers: the phone shell API provided the developer with CameraCaptureTask, which could be used to take a picture and use it in the application. That scenario is based on the phone operating system's camera interface and is not covered in this post. In this post I'll cover live camera input.

To use live input from the phone's camera we need to initialize an instance of the PhotoCamera class from the Microsoft.Devices namespace. Also, to use the camera in your application, declare the ID_CAP_ISV_CAMERA capability in WMAppManifest.xml:

<Capability Name="ID_CAP_ISV_CAMERA" />

The PhotoCamera class represents the basic camera functionality for a Windows Phone still camera application and enables the developer to configure the camera's resolution, flash and focus settings. Note that developing a camera-enabled application requires a physical device. The Windows Phone Emulator supports the camera APIs but provides neither hardware button emulation nor camera visual feedback. The emulator shows a white rectangle with a small black rectangle traveling around it:


To use the PhotoCamera object it must be assigned as a source to some drawing surface. In our sample we use a VideoBrush object to paint the live camera feed. The main surface of our application is covered by a Rectangle object whose Fill property is assigned the VideoBrush:

<Rectangle x:Name="rectPreview" Width="780" Height="460" Margin="10">
    <Rectangle.Fill>
        <VideoBrush x:Name="previewVideo" />
    </Rectangle.Fill>
</Rectangle>

In the next code snippet we initialize the PhotoCamera object and assign it as the source of the VideoBrush named previewVideo (this is done in the OnNavigatedTo event handler):

protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    photoCamera = new PhotoCamera();
    photoCamera.Initialized += new EventHandler<CameraOperationCompletedEventArgs>(photoCamera_Initialized);

    //Assign the camera as the source of the VideoBrush
    previewVideo.SetSource(photoCamera);

    base.OnNavigatedTo(e);
}

The PhotoCamera class takes some time to initialize, so most of the functionality becomes available only at the Initialized event we subscribed to in the previous code snippet. The event handler takes care of the rest of the initialization procedures:

void photoCamera_Initialized(object sender, CameraOperationCompletedEventArgs e)
{
    if (photoCamera.IsFlashModeSupported(FlashMode.Auto))
        photoCamera.FlashMode = FlashMode.Auto;

    //Select the lowest available resolution
    photoCamera.Resolution = photoCamera.AvailableResolutions.ElementAt(0);
    //Match the preview resolution to the camera resolution
    photoCamera.PreviewBufferResolution = photoCamera.AvailableResolutions.ElementAt(0);

    photoCamera.AutoFocusCompleted += new EventHandler<CameraOperationCompletedEventArgs>(photoCamera_AutoFocusCompleted);
    photoCamera.ButtonFullPress += new EventHandler(photoCamera_ButtonFullPress);
    photoCamera.ButtonHalfPress += new EventHandler(photoCamera_ButtonHalfPress);
    photoCamera.ButtonRelease += new EventHandler(photoCamera_ButtonRelease);
    photoCamera.CaptureCompleted += new EventHandler<CameraOperationCompletedEventArgs>(photoCamera_CaptureCompleted);
    photoCamera.CaptureImageAvailable += new EventHandler<ContentReadyEventArgs>(photoCamera_CaptureImageAvailable);
}


This code snippet sets the flash to work automatically and sets the desired capture resolution. Usually the camera supports more than one resolution. A higher resolution means more pixels packed into the final picture, but also a bigger resulting file. Our sample processes the live video feed, so a higher resolution also means more pixels to process each frame. To minimize the number of processed pixels and the final image size, the code selects the first available camera resolution. The PhotoCamera object exposes the AvailableResolutions collection, whose contents depend on the physical camera component installed. The collection is sorted so that lower resolutions are at the beginning of the list and higher resolutions are at the end.
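Conversely, since the collection is sorted in ascending order, an application that wants the best picture quality could pick the last element instead. A minimal sketch (using LINQ's Last(), which requires a using System.Linq directive):

```csharp
using System.Linq;

//Select the highest available resolution instead of the lowest
//(trade-off: better image quality, bigger files, more pixels per frame)
photoCamera.Resolution = photoCamera.AvailableResolutions.Last();
```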

Now let's understand when those events are fired:

Event                    When fired
AutoFocusCompleted       Fired when the auto-focus sequence completes
ButtonFullPress          Fired when the hardware shutter button receives a full press
ButtonHalfPress          Fired when the hardware shutter button receives a half press
ButtonRelease            Fired when the hardware shutter button is released
CaptureCompleted         Fired when the capture sequence is complete
CaptureImageAvailable    Fired when the capture sequence is complete and an image is available

Let's see how those events are used in our sample. When the user presses the hardware button halfway down, we want to start the auto-focus sequence:

void photoCamera_ButtonHalfPress(object sender, EventArgs e)
{
    photoCamera.Focus();
}

When auto-focus completes and the AutoFocusCompleted event fires, our sample shows an image at the central part of the screen, much like regular point-and-shoot cameras do:


void photoCamera_AutoFocusCompleted(object sender, CameraOperationCompletedEventArgs e)
{
    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        imgFocus.Visibility = Visibility.Visible;
    });
}

If the user presses the hardware button fully, the ButtonFullPress event fires and our sample begins the image capture sequence:

void photoCamera_ButtonFullPress(object sender, EventArgs e)
{
    photoCamera.CaptureImage();
}

The CaptureImage function starts the asynchronous capture process. When the capture process is over, the CaptureCompleted and CaptureImageAvailable events fire. In the CaptureCompleted event handler the sample hides the auto-focus indicator:

void photoCamera_CaptureCompleted(object sender, CameraOperationCompletedEventArgs e)
{
    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        imgFocus.Visibility = Visibility.Collapsed;
    });
}

When CaptureImageAvailable fires, the sample saves the captured image into the phone's media library:

void photoCamera_CaptureImageAvailable(object sender, ContentReadyEventArgs e)
{
    MediaLibrary mediaLibrary = new MediaLibrary();
    string fileName = string.Format("{0:yyyy-MM-dd-HH-mm-ss}.jpg", DateTime.Now);
    mediaLibrary.SavePicture(fileName, e.ImageStream);
}

The event arguments expose the captured image as a Stream. Our sample uses XNA's MediaLibrary class to save the picture. To use it, add a reference to the Microsoft.Xna.Framework assembly and the following using statement:

using Microsoft.Xna.Framework.Media;

Last but not least, if the user releases the hardware button before taking a picture (pressing it all the way down), the ButtonRelease event fires. Our sample cancels the auto-focus sequence and hides the auto-focus indicator if it is already shown:

void photoCamera_ButtonRelease(object sender, EventArgs e)
{
    photoCamera.CancelFocus();

    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        imgFocus.Visibility = Visibility.Collapsed;
    });
}

When leaving the page, don't forget to unsubscribe from the PhotoCamera events and release the object.
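The post doesn't show the cleanup code; a minimal sketch, assuming the same field and handler names used above, could run in OnNavigatedFrom (PhotoCamera implements IDisposable, so Dispose releases the camera hardware):

```csharp
protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
{
    if (photoCamera != null)
    {
        //Unsubscribe from every event subscribed in photoCamera_Initialized
        photoCamera.Initialized -= photoCamera_Initialized;
        photoCamera.AutoFocusCompleted -= photoCamera_AutoFocusCompleted;
        photoCamera.ButtonFullPress -= photoCamera_ButtonFullPress;
        photoCamera.ButtonHalfPress -= photoCamera_ButtonHalfPress;
        photoCamera.ButtonRelease -= photoCamera_ButtonRelease;
        photoCamera.CaptureCompleted -= photoCamera_CaptureCompleted;
        photoCamera.CaptureImageAvailable -= photoCamera_CaptureImageAvailable;

        //Release the camera hardware
        photoCamera.Dispose();
        photoCamera = null;
    }

    base.OnNavigatedFrom(e);
}
```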

In addition to using the various camera events, it is possible to receive raw bits from the camera and process the data live. The PhotoCamera class provides a number of functions to copy the current viewfinder frame into an array for further processing. The viewfinder frame can be copied as ARGB, YUV or YCbCr pixel data. In our sample we use the ARGB buffer to create a negative image effect. A negative image is a total inversion of a positive image, in which light areas appear dark and vice versa. A color negative is additionally color-reversed, with red areas appearing cyan, greens appearing magenta and blues appearing yellow.

The negative effect is created by subtracting each color channel value from its maximum value (255 in the case of ARGB). The pixel processing is done as shown in the following code snippet:

int[] pixelData = new int[photoCamera.PreviewBufferResolution.Width * photoCamera.PreviewBufferResolution.Height];

//Fill the array with the current viewfinder frame
photoCamera.GetPreviewBufferArgb32(pixelData);

int[] target = new int[pixelData.Length];

for (int i = 0; i < pixelData.Length; i++)
    target[i] = Negate(pixelData[i]);
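The Negate helper called above isn't shown in the original listing; a minimal sketch, assuming the 0xAARRGGBB pixel layout produced by GetPreviewBufferArgb32, could look like this:

```csharp
//Inverts the R, G and B channels of a 32-bit ARGB pixel,
//leaving the alpha channel (the highest byte) untouched.
static int Negate(int pixel)
{
    int a = pixel & unchecked((int)0xFF000000); //preserve alpha
    int r = 255 - ((pixel >> 16) & 0xFF);
    int g = 255 - ((pixel >> 8) & 0xFF);
    int b = 255 - (pixel & 0xFF);
    return a | (r << 16) | (g << 8) | b;
}
```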

In this code snippet we create an array big enough to accommodate the viewfinder's data and execute the GetPreviewBufferArgb32 function to fill the array. The resulting target array is used to fill an instance of WriteableBitmap and present the effect on screen:

//Copy to WriteableBitmap and redraw it with the new pixels
target.CopyTo(previewWriteableBitmap.Pixels, 0);
previewWriteableBitmap.Invalidate();
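The post doesn't show how this per-frame processing is driven. One possible way (an assumption for illustration, not necessarily what the hosted sample does) is to poll the preview buffer with a DispatcherTimer, with previewWriteableBitmap assumed to be the source of an Image control on the page:

```csharp
//Sketch: grab and negate the viewfinder frame roughly 15 times per second
DispatcherTimer timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(66) };
timer.Tick += (s, args) =>
{
    //Copy the current viewfinder frame into the preallocated buffer
    photoCamera.GetPreviewBufferArgb32(pixelData);

    //Negate each pixel directly into the bitmap's pixel buffer
    for (int i = 0; i < pixelData.Length; i++)
        previewWriteableBitmap.Pixels[i] = Negate(pixelData[i]);

    //Force the bitmap to redraw with the new pixels
    previewWriteableBitmap.Invalidate();
};
timer.Start();
```

A DispatcherTimer keeps the processing on the UI thread, which is required because WriteableBitmap may only be touched from there; a heavier effect would need a background thread plus a dispatch back to the UI for the final copy.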


The resulting application shows a negative live image preview on the phone's screen. The sample is hosted here.

Stay tuned for part 5 – "Background Agents".

