iOS 5 Face Detection with Core Image

by Bob McCune on March 22, 2012

One of the great new features of iOS 5 is the addition of the Core Image (CI) framework. The framework has been available for a few years on the Mac, but now its advanced image processing and filtering capabilities are available on the iOS platform as well. One particularly cool capability of Core Image, on both iOS 5 and Lion, is its ability to detect faces in an image. Thanks to some recent inspiration from Jon Steinmetz over at Pixel Research Labs, I have put together this tutorial showing how to use this feature.

Getting Started

The face detection API is surprisingly simple to use. It really boils down to two classes: CIDetector and CIFaceFeature. CIDetector performs the analysis of an image and returns a collection of CIFaceFeature objects describing the face(s) it found. You begin by creating a new instance of CIDetector using its detectorOfType:context:options: class method.

CIDetector can currently only be configured to perform face detection, so you'll always pass the string constant CIDetectorTypeFace for the type argument. The context and options arguments are optional, but you will typically provide an options dictionary describing the accuracy level to use. This is configured with the key CIDetectorAccuracy and a value of either CIDetectorAccuracyLow or CIDetectorAccuracyHigh. The high accuracy algorithm can produce far more accurate results, but takes significantly longer to perform the analysis. Depending on what you need to accomplish, you may find the low accuracy setting produces acceptable results.
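
A minimal sketch of creating a detector configured for high accuracy:

    #import <CoreImage/CoreImage.h>

    // Build the options dictionary specifying the desired accuracy level.
    NSDictionary *options =
        [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                    forKey:CIDetectorAccuracy];

    // Create a face detector; the context argument may be nil.
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:options];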

Analyzing the Image

With a properly configured detector in hand, you're ready to analyze an image by calling the detector's featuresInImage: method. The Core Image framework doesn't know anything about UIImage, so you can't pass it an image of that type directly. However, UIKit provides a category on CIImage that makes it easy to create a CIImage from a UIImage.

The featuresInImage: method returns a collection of CIFaceFeature objects describing the features of the detected faces. Specifically, each instance defines a face rectangle and points for the left eye, right eye, and mouth. Only the center point of each feature is provided, so you'll have to perform some additional calculations if you need to know a feature's shape, angle, or relative location.
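
Putting those pieces together, a minimal sketch of the analysis (image is assumed to be an existing UIImage, and detector is the detector created above):

    // Create a CIImage from the UIImage via the UIKit category on CIImage.
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];

    // Analyze the image; returns an array of CIFaceFeature objects.
    NSArray *features = [detector featuresInImage:ciImage];

    for (CIFaceFeature *feature in features) {
        // Note: Core Image coordinates are flipped relative to UIKit;
        // the origin is in the lower-left corner of the image.
        NSLog(@"Face bounds: %@", NSStringFromCGRect(feature.bounds));
        if (feature.hasLeftEyePosition) {
            NSLog(@"Left eye: %@", NSStringFromCGPoint(feature.leftEyePosition));
        }
        if (feature.hasRightEyePosition) {
            NSLog(@"Right eye: %@", NSStringFromCGPoint(feature.rightEyePosition));
        }
        if (feature.hasMouthPosition) {
            NSLog(@"Mouth: %@", NSStringFromCGPoint(feature.mouthPosition));
        }
    }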

Visualizing the Results

The following results show the face detection API in action, illustrating the differences between the low and high accuracy settings along with the approximate times it took to run the detection. The locations of the detected features were not significantly different between the two settings, but notice that the high accuracy setting took more than 10x longer to compute on an iPhone 4. It will likely require a fair amount of testing against a representative set of images to determine the appropriate accuracy setting for your app.

CIDetectorAccuracyLow: ~253ms on iPhone 4
CIDetectorAccuracyHigh: ~2810ms on iPhone 4

I have put together a sample app containing images of several iconic faces. Flip through the images and run the analysis to see the face detection in action. You can run the sample in the Simulator, but I'd recommend running it on a device so you can get a realistic sense of the performance. Enjoy!

Download iOS 5 Sample App: Faces

Creating Custom Xcode 4 File Templates

by Bob McCune on March 4, 2012

When Xcode 4 was released about a year ago, it introduced an entirely new Project and File Template syntax. This was initially frustrating, as the new syntax invalidated all of the custom templates I had written for Xcode 3. However, the new template format has proven more flexible and capable than the old one. Now that Xcode 4.3 is officially available, I thought it would be a good time to discuss creating custom file templates. In this post I'll explain the file template layout, structure, and syntax, and then finish up by creating some custom templates to simplify building singleton objects and generating GHUnit test cases.

Getting Started

The easiest way to become familiar with Xcode's template format is to study the default templates included with Xcode itself. Unfortunately, Apple doesn't document the syntax, but a lot can be learned simply by experimenting with the existing templates. The repackaging of Xcode 4.3 into a single app bundle makes it a little less obvious where to find these files. They still exist on disk, but instead of living under the /Developer folder you'll find them inside the Xcode.app bundle itself:
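
Assuming a standard install in /Applications, the stock file templates live at a path along these lines:

    /Applications/Xcode.app/Contents/Developer/Library/Xcode/Templates/File Templates/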

A funny thing happened on the way to the ARC

by Bob McCune on February 28, 2012

As I was preparing to post the QuartzDemos project on GitHub, I decided to modernize the project and convert it to use Automatic Reference Counting (ARC). Overall this was fairly easy to do, despite some of the slightly confusing rules related to bridging between Objective-C objects and Core Foundation types. Everything was looking good in the Simulator, but I decided I should give it a quick once-over on the device. I expected everything to work fine, so I launched my first demo and... crash! Hmm, probably just a debugger anomaly. So I fired it up again and... crash! The debugging begins...

I quickly found that the crashes were caused by the way I was converting between UIColor and CGColor types. There were many places where I was using a UIColor to produce a CGColorRef, e.g.:
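
A minimal sketch of the problematic pattern, assuming a typical gradient setup (the color values and stops here are illustrative):

    // Each UIColor below is a temporary object. Under ARC nothing holds a
    // strong reference to it, so the compiler is free to dispose of it as
    // soon as the CGColor is extracted, leaving a dangling CGColorRef.
    CGColorRef startColor = [UIColor colorWithWhite:0.9f alpha:1.0f].CGColor;
    CGColorRef endColor = [UIColor colorWithWhite:0.2f alpha:1.0f].CGColor;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat locations[] = {0.0f, 1.0f};
    NSArray *colors = [NSArray arrayWithObjects:(__bridge id)startColor,
                                                (__bridge id)endColor, nil];

    // By the time the gradient is created, the CGColorRefs may already be gone.
    CGGradientRef gradient =
        CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, locations);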

This code worked fine under manual memory management, as the UIColor instance gets added to the current autorelease pool, which means you can count on it being around for the duration of the current run loop. That behavior changes under ARC. Since the UIColor is never assigned anywhere, the compiler assumes it's no longer needed and immediately disposes of it. I was originally planning to go into more detail about why this happens, but I noticed a tweet this morning that Mark Dalrymple, from The Big Nerd Ranch, beat me to the punch. Since Mark already did the heavy lifting, I'll refer you to his post for the details. Instead, I'll cut to the chase and explain how to fix this scenario.

Once I realized the cause of the problem, implementing a solution was fairly straightforward. I converted the code above to the following:
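
A sketch of the corrected version, under the same assumptions as above:

    // Strong UIColor references keep the colors alive for the rest of this
    // scope; their CGColors are extracted only at the point of use.
    UIColor *startColor = [UIColor colorWithWhite:0.9f alpha:1.0f];
    UIColor *endColor = [UIColor colorWithWhite:0.2f alpha:1.0f];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat locations[] = {0.0f, 1.0f};
    NSArray *colors = [NSArray arrayWithObjects:(__bridge id)startColor.CGColor,
                                                (__bridge id)endColor.CGColor, nil];

    // The CGColors are still valid here because the UIColors are still alive.
    CGGradientRef gradient =
        CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, locations);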

Instead of defining the start and end colors as CGColorRefs, I define them as UIColor references and defer getting their underlying Core Graphics equivalents until they're actually needed. This extends the lifetime of the UIColor instances long enough to resolve the problem I was running into.

For more examples of this solution in action I’ll refer you to the QuartzDemos project on GitHub.

Drawing with Quartz on iOS

by Bob McCune on February 27, 2012

A couple of weeks ago I gave a presentation entitled “Drawing with Quartz on iOS” to the Minnesota CocoaHeads group. We had a big crowd that night and I hope you all enjoyed it. The slide deck is available on SlideShare and the QuartzDemos project is available on GitHub.

Quartz is an essential API for Mac and iOS developers to know, but I don't believe it is widely understood. There are many useful resources on the Internets (Ray Wenderlich's and Andy Finnell's sites immediately come to mind), but the information on the topic is largely scattered. I'm hoping this presentation and sample project help consolidate some of this information and make it easier for developers to learn the framework.

Feel free to reach out if you have any questions or comments.

Screen Capture in iOS Apps

by Bob McCune on September 8, 2011

I occasionally come across the need to grab the contents of a view as an image. This usually comes up when I need to perform some non-stock, animated transition between views, but there are a variety of reasons why it might be useful. Thanks to the Core Animation framework's CALayer class, this is easy to do.

All UIView instances have an underlying instance of a CALayer. The layer is responsible for rendering the view’s contents and performing any view-related animations. CALayer defines a method called renderInContext: which allows you to render the layer, and its sublayers, into a given graphics context:
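
    - (void)renderInContext:(CGContextRef)ctx;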

Before you can access any layer-specific APIs, you'll need to make sure you're linking against the QuartzCore framework. Xcode's default templates don't link against this framework, so you'll need to select Target Name > Build Phases > Link Binary With Libraries and add QuartzCore.framework.

Additionally, you’ll need to add the following import to your code wherever you are calling the layer’s properties or methods:

With the necessary project configuration out of the way, the next question is where to get a graphics context into which to render the view's contents. One can be created using UIKit's UIGraphicsBeginImageContextWithOptions function, which creates a new bitmap-based graphics context for us.

This function takes a CGSize (your view's size), a BOOL indicating whether your view is opaque, and a CGFloat specifying the scale factor; passing 0.0 for the scale uses the device's main screen scale. If you're rendering a fully opaque, rectangular view, you can pass YES for the opaque argument so the alpha channel can be discarded from the context.

Now that we’ve created a graphics context we can use the UIGraphicsGetCurrentContext() and UIGraphicsGetImageFromCurrentImageContext() functions to get reference to this new context and retrieve the rendered image data from it. Finally, we’ll want to call the UIGraphicsEndImageContext() function to clean up this context and remove it from the stack. Putting all this together we end up with the following:

To see this code in action I’ve put together a simple demo app. You can tap on Archer, Lana, or the background to capture the contents of the view and write the image to your Photo Library.

Note: Before running the demo, be sure to open the “Photos” app on the Simulator so it can initialize its database, or the images won't be written. Enjoy!

Download Demo

Quick GitHub PSA

by Bob McCune on September 6, 2011

My Core Animation Demos project was previously available on GitHub under my personal account, but I have since moved it to my business account. The new location of the CA demos is:

https://github.com/tapharmonic/Core-Animation-Demos

I haven’t made any major changes to code at this point, however, I did start an ARC branch to house my ongoing ARC changes.
