iOS 5 Face Detection with Core Image

by Bob McCune on March 22, 2012

One of the great new features of iOS 5 is the addition of the Core Image (CI) framework. This framework has been available for a few years on the Mac, but now its advanced image processing and filtering capabilities are available on the iOS platform as well. One particularly cool feature available in Core Image, on both iOS 5 and Lion, is its ability to detect faces in an image. Thanks to some recent inspiration from Jon Steinmetz over at Pixel Research Labs, I have put together this tutorial showing how to use this great new feature.

Getting Started

The face detection API is surprisingly simple to use. It really boils down to two classes: CIDetector and CIFaceFeature. CIDetector performs the analysis of an image and returns a collection of CIFaceFeature objects describing the face(s) it found. You begin by creating a new instance of CIDetector using its detectorOfType:context:options: class method.

CIDetector can currently only be configured to perform face detection, so you’ll always pass the string constant CIDetectorTypeFace for the type argument. The context and options arguments are optional, but you will typically provide an options dictionary specifying the accuracy level to use. You configure this with the key CIDetectorAccuracy and a value of either CIDetectorAccuracyLow or CIDetectorAccuracyHigh. The high accuracy algorithm can produce far more accurate results, but takes significantly longer to perform the analysis. Depending on what you need to accomplish, you may find the low accuracy setting produces acceptable results.
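Putting that together, creating a high accuracy detector looks something like this (a minimal sketch; pass a real CIContext instead of nil if you have one handy):

NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                    forKey:CIDetectorAccuracy];

// The type must be CIDetectorTypeFace; it's currently the only supported type.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:options];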

Analyzing the Image

With a properly configured detector in hand, you’re ready to analyze an image. You call the detector’s featuresInImage: method, passing it the image to analyze. The Core Image framework doesn’t know anything about UIImage, so you can’t pass it an image of this type directly; however, UIKit provides a category on CIImage that makes it easy to create a CIImage from a UIImage.

The featuresInImage: method returns a collection of CIFaceFeature objects describing the features of the detected faces. Specifically, each instance defines a face rectangle and points for the left eye, right eye, and mouth. It only defines the center point of each feature, so you’d have to perform some additional calculations if you need to know a feature’s shape, angle, or relative location.
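In practice the analysis looks something like this (a minimal sketch; the image name is a made-up placeholder and detector is the instance created above):

UIImage *image = [UIImage imageNamed:@"face.jpg"]; // hypothetical image name
CIImage *ciImage = [[CIImage alloc] initWithImage:image]; // UIKit's category on CIImage

NSArray *features = [detector featuresInImage:ciImage];
for (CIFaceFeature *feature in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(feature.bounds));
    if (feature.hasLeftEyePosition) {
        NSLog(@"Left eye: %@", NSStringFromCGPoint(feature.leftEyePosition));
    }
    if (feature.hasRightEyePosition) {
        NSLog(@"Right eye: %@", NSStringFromCGPoint(feature.rightEyePosition));
    }
    if (feature.hasMouthPosition) {
        NSLog(@"Mouth: %@", NSStringFromCGPoint(feature.mouthPosition));
    }
}

One thing to keep in mind: Core Image uses a coordinate system with its origin in the lower left, so you’ll need to flip the y-axis before drawing these rectangles and points in a UIKit view.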

Visualizing the Results

The following images show examples of the face detection API in action. The images illustrate the differences between the low and high accuracy settings, along with the approximate times it took to run the detection. The location of the detected features is not significantly different between the two images, but you’ll notice the high accuracy setting took more than 10x longer to compute on an iPhone 4. It will likely require a fair amount of testing against a representative set of images to determine the appropriate accuracy setting for your app.

CIDetectorAccuracyLow: ~253 ms on iPhone 4

CIDetectorAccuracyHigh: ~2810 ms on iPhone 4

I have put together a sample app containing images of several iconic faces. Flip through the images and run the analysis to see the face detection in action. You can run the sample in the simulator, but I’d recommend running it on your device so you can get a realistic sense of the performance. Enjoy!

Download iOS 5 Sample App: Faces

6 Comments

Wendell August 29, 2012 at 3:57 pm

This works for an existing photo, but what if you wanted to access the camera to make your application react to the quantity of faces, proximity of faces, and location of faces? Do you have to take a snapshot, scan it to find the face(s), and then delete it?

Bob McCune August 29, 2012 at 4:25 pm

You would need to use AVFoundation to interact with the camera in this way. Look at the 2012 WWDC session videos for how to do this.
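Roughly, the idea is to attach an AVCaptureVideoDataOutput to a capture session and run the detector on each frame as it arrives; no snapshot files are involved. A minimal sketch of the delegate callback (assuming a session is already configured, self is the sample buffer delegate, and self.detector is a CIDetector created once up front, ideally with low accuracy for real-time use):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    // Wrap the raw frame in a CIImage without copying the pixel data.
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // React to the quantity and location of the detected faces.
    NSArray *faces = [self.detector featuresInImage:frame];
    NSLog(@"%lu face(s) in frame", (unsigned long)[faces count]);
}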

Alejandro Ramirez December 14, 2012 at 9:13 pm

I noticed that this program only works as long as the pictures are exactly the same size as the ones included in the project’s resource files.

How can I handle different image sizes?

Bob McCune December 14, 2012 at 9:26 pm

You’d just need to adjust the image view frames and resize the scrollview’s contentSize property as necessary.
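Something along these lines (untested; scrollView and imageViews are stand-ins for whatever the sample actually names them):

CGSize pageSize = self.scrollView.bounds.size;

[self.imageViews enumerateObjectsUsingBlock:^(UIImageView *imageView, NSUInteger idx, BOOL *stop) {
    // Lay each image view out as a full-size page, regardless of the image's own dimensions.
    imageView.frame = CGRectMake(idx * pageSize.width, 0, pageSize.width, pageSize.height);
    imageView.contentMode = UIViewContentModeScaleAspectFit;
}];

self.scrollView.contentSize = CGSizeMake([self.imageViews count] * pageSize.width, pageSize.height);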

Noor January 26, 2013 at 4:32 am

Hi Bob,

Nice tutorial. I am wondering if we can provide different detector types by supplying our own custom type.
Example:
CIDetector *detector =
[CIDetector detectorOfType:CIDetectorTypePetFace context:nil options:options];

CIDetector *detector =
[CIDetector detectorOfType:CIDetectorTypePlant context:nil options:options];

Or some other way. In essence, I want to detect something other than faces, which could be my own custom-defined thing.

Thanks,
Noor

Bob McCune January 26, 2013 at 4:36 am

No, custom detectors aren’t supported. The only supported type is CIDetectorTypeFace.