How to run JavaCV (with sample face recognition) on Android ARM device – Netbeans and nbandroid

I promised some tutorials, so here we go.

Here is a simple tutorial on how to detect faces on an Android device using the NetBeans IDE with nbandroid, and JavaCV, a wrapper for OpenCV and FFmpeg.

Android developers already working in NetBeans can skip to step 6.

  1. Download and install Netbeans – http://www.netbeans.org/
  2. Download and install the nbandroid plugin for NetBeans – http://nbandroid.org/wiki/index.php/Installation
  3. Download Android SDK – http://developer.android.com/sdk/index.html
    1. Extract it
    2. Run SDK manager
    3. Download the desired SDK platform (an ARM system image is essential, since JavaCV supports only ARM) – I suggest 4.0.3; also update what you can
  4. Download the drivers for your phone, or create an Android Virtual Device (AVD)
    1. When creating the AVD, make sure you select your notebook webcam (or another camera) as the back camera. On Windows, only 512 MB of RAM is available for an AVD.
    2. Start AVD
  5. Run Netbeans
  6. Create a new Android project and select the desired SDK (make sure the classpath to the SDK is set properly)
    1. Tools -> Options -> Miscellaneous -> Android -> SDK location (For example C:\adt-bundle-windows-x86_64\sdk)
  7. For testing purposes, set the package name to com.googlecode.javacv.facepreview
  8. Download JavaCV
    1. Download javacv-bin from https://code.google.com/p/javacv/downloads/list
    2. Download javacv-cppjars
    3. Extract both zip files
  9. Copy the libraries javacpp.jar and javacv.jar from javacv-bin into the project libs folder (create it if it does not exist)
    1. For example C:\Users\DeathX\Documents\NetBeansProjects\AndroidApplication1\libs
  10. Create an armeabi folder under the libs folder
    1. For example C:\Users\DeathX\Documents\NetBeansProjects\AndroidApplication1\libs\armeabi
  11. Extract the .so files from javacv-bin/javacv-android-arm.jar (only the .so files, no folder hierarchy!) into libs\armeabi
  12. Extract the .so files from javacv-cppjars/ffmpeg-1.2-android-arm.jar and javacv-cppjars/opencv-2.4.5-android-arm.jar (only the .so files, no folder hierarchy!) into libs\armeabi
    1. The result will look like this: …\AndroidApplication1\libs\armeabi\libavcodec.so
  13. Copy the sample file FacePreview.java from javacv-bin/samples/ into your project
    1. …\AndroidApplication1\src\com\googlecode\javacv\facepreview\FacePreview.java
  14. Download haarcascade_frontalface_alt.xml from https://warai.googlecode.com/files/haarcascade_frontalface_alt.xml and place it in the same folder as FacePreview.java (the sample loads it from Java resources at runtime – see the excerpt after this list)
  15. Open FacePreview.java in NetBeans; the comment block at the top of the file contains an AndroidManifest.xml, so copy it
  16. Open your project’s Android manifest (under Important Files) and replace its contents with the one you just copied (don’t forget to delete the leading stars * on each line)
  17. Clean and build the project
  18. Run the project
  19. Have fun 🙂
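
A note on steps 13 and 14: the FacePreview sample loads the Haar cascade from the Java resources packaged next to the class, which is why haarcascade_frontalface_alt.xml has to sit in the same folder as FacePreview.java. Here is a trimmed excerpt from the sample’s FaceView constructor (the full file appears in the comments below):

[java]
// Extract the cascade from the Java resources next to this class into the
// app's cache directory, then load it with OpenCV's cvLoad().
File classifierFile = Loader.extractResource(getClass(),
"/com/googlecode/javacv/facepreview/haarcascade_frontalface_alt.xml",
context.getCacheDir(), "classifier", ".xml");
classifier = new CvHaarClassifierCascade(cvLoad(classifierFile.getAbsolutePath()));
[/java]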

33 Comments

  1. Senthil

    Hi:

    Your tutorial above was pretty straightforward. Thanks a lot, and it worked great.

    Just wanted to know if you have any pointers on how to do face recognition in Android?

    Thanks:
    Senthil

  2. drndos (Post author)

    Hello,
    I am working on a tutorial for actual face recognition but I don’t know when I will release it yet.
    Here are some pointers:
    – Look at the FaceRecognition.java file located in the JavaCV samples. It contains working face recognition.
    – When porting to Android, don’t forget to add the permission to write to the SD card (WRITE_EXTERNAL_STORAGE)
    – In Android you can access files like this: imgListFile = new BufferedReader(new FileReader(new File(context.getExternalFilesDir(null), filename)));
    This resolves to the path: "/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/" + filename

    – You have to resize every one of the images to the same size to make it work
    You can resize it with:
    IplImage newImage = cvCreateImage(cvSize(266, 320), IPL_DEPTH_8U, 1);
    cvResize(grayImage, newImage, CV_INTER_CUBIC);

    Or you can crop it somehow; I am still working on this part.

    – You don’t really need the storeEigenfaceImages() method, so feel free to delete it for better performance; it is only useful for debugging

    You can pass the IplImage taken with the camera in FacePreview to FaceRecognition, as in the sketch below
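
    To make those pointers concrete, here is a minimal sketch of feeding a detected face to the recognizer, assuming grayImage is the grayscale frame from FacePreview, faceRect is a detected CvRect, and faceRecognition is a FaceRecognition instance (all three names are placeholders); the 266x320 size matches the resize above:

    [java]
    // Crop the detected face out of the grayscale preview frame.
    cvSetImageROI(grayImage, faceRect); // restrict operations to the face region
    IplImage face = cvCreateImage(cvSize(faceRect.width(), faceRect.height()), IPL_DEPTH_8U, 1);
    cvCopy(grayImage, face, null); // copies only the ROI
    cvResetImageROI(grayImage);

    // Normalize the crop to the same size as the training images.
    IplImage resized = cvCreateImage(cvSize(266, 320), IPL_DEPTH_8U, 1);
    cvResize(face, resized, CV_INTER_CUBIC);

    // Nearest-neighbor eigenface match against the trained data.
    String name = faceRecognition.recognizeImage(resized);
    [/java]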

    1. Andy

      Hi,

      Can you provide more information about how to do the face recognition? Just like the tutorial, step by step... or you can email me. Thanks a lot.

      Andy

    2. aliu

      Hi,
      Sir, can you now provide me a complete tutorial and code for face recognition (happy, sad, etc.) on Android?

  3. widi

    Hi,
    I just ran the FacePreview.java sample on my device; it works fine and can detect a face in landscape mode. But when I change to portrait mode, it runs but can’t detect a face. How can I run it in portrait mode?
    Thanks

  4. drndos (Post author)

    Hello,
    If you want to run it in full portrait mode, do the following:
    Change this line in the Android manifest:
    [xml language=""]
    android:screenOrientation="landscape">
    [/xml]
    to:
    [xml language=""]
    android:screenOrientation="portrait">
    [/xml]

    Open FacePreview.java and find:
    [java language=""]
    mCamera = Camera.open();
    [/java]
    On a new line after it, add:
    [java language=""]
    mCamera.setDisplayOrientation(90);
    [/java]

    Find:
    [java language=""]
    cvClearMemStorage(storage);
    [/java]
    On a new line after it, add:
    [java language=""]
    IplImage grayImage2 = rotateImage(grayImage,90);
    [/java]

    Find:
    [java language=""]
    faces = cvHaarDetectObjects(grayImage, classifier, storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
    [/java]
    Change grayImage to grayImage2, so the detection call becomes:
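    [java language=""]
    faces = cvHaarDetectObjects(grayImage2, classifier, storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
    [/java]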

    Find:
    [java language=""]
    @Override
    protected void onDraw(Canvas canvas) {
    [/java]

    BEFORE it, add:
    [java language=""]
    private IplImage rotateImage(final IplImage src, float angleDegrees)
    {
    // Create a map_matrix, where the left 2×2 matrix
    // is the transform and the right 2×1 is the dimensions.
    float[] m = new float[6];
    CvMat M = CvMat.create(2, 3, CV_32F);
    int w = src.width();
    int h = src.height();
    float angleRadians = angleDegrees * ((float)Math.PI / 180.0f);
    m[0] = (float)( Math.cos(angleRadians) );
    m[1] = (float)( Math.sin(angleRadians) );
    m[3] = -m[1];
    m[4] = m[0];
    m[2] = w*0.5f;
    m[5] = h*0.5f;
    M.put(0, m[0]);
    M.put(1, m[1]);
    M.put(2, m[2]);
    M.put(3, m[3]);
    M.put(4, m[4]);
    M.put(5, m[5]);

    // Make a spare image for the result
    CvSize sizeRotated = new CvSize();
    sizeRotated.width(Math.round(w));
    sizeRotated.height(Math.round(h));

    // Rotate
    IplImage imageRotated = cvCreateImage( sizeRotated, src.depth(), src.nChannels());

    // Transform the image
    cvGetQuadrangleSubPix(src, imageRotated, M);

    return imageRotated;
    }
    [/java]

    The whole file then looks like this:
    [java language=""]
    /*
    * Copyright (C) 2010,2011,2012 Samuel Audet
    *
    * FacePreview – A fusion of OpenCV’s facedetect and Android’s CameraPreview samples,
    * with JavaCV + JavaCPP as the glue in between.
    *
    * This file was based on CameraPreview.java that came with the Samples for
    * Android SDK API 8, revision 1 and contained the following copyright notice:
    *
    * Copyright (C) 2007 The Android Open Source Project
    *
    * Licensed under the Apache License, Version 2.0 (the "License");
    * you may not use this file except in compliance with the License.
    * You may obtain a copy of the License at
    *
    * http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    *
    *
    * IMPORTANT – Make sure the AndroidManifest.xml file looks like this:
    *
    * <?xml version="1.0" encoding="utf-8"?>
    * <manifest xmlns:android="http://schemas.android.com/apk/res/android"
    * package="com.googlecode.javacv.facepreview"
    * android:versionCode="1"
    * android:versionName="1.0" >
    * <uses-sdk android:minSdkVersion="4" />
    * <uses-permission android:name="android.permission.CAMERA" />
    * <uses-feature android:name="android.hardware.camera" />
    * <application android:label="@string/app_name">
    * <activity
    * android:name="FacePreview"
    * android:label="@string/app_name"
    * android:screenOrientation="landscape">
    * <intent-filter>
    * <action android:name="android.intent.action.MAIN" />
    * <category android:name="android.intent.category.LAUNCHER" />
    * </intent-filter>
    * </activity>
    * </application>
    * </manifest>
    */

    package sk.drndos.ar;

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.ImageFormat;
    import android.graphics.Paint;
    import android.hardware.Camera;
    import android.hardware.Camera.Size;
    import android.os.Bundle;
    import android.util.Log;
    import android.view.Surface;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import android.view.View;
    import android.view.Window;
    import android.view.WindowManager;
    import android.view.WindowManager.LayoutParams;
    import android.widget.FrameLayout;
    import java.io.File;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.List;
    import com.googlecode.javacpp.Loader;
    import com.googlecode.javacv.cpp.opencv_objdetect;

    import static com.googlecode.javacv.cpp.opencv_core.*;
    import static com.googlecode.javacv.cpp.opencv_imgproc.*;
    import static com.googlecode.javacv.cpp.opencv_objdetect.*;
    import static com.googlecode.javacv.cpp.opencv_highgui.*;

    // ———————————————————————-

    // ———————————————————————-

    public class MainActivity extends Activity {
    private FrameLayout layout;
    private FaceView faceView;
    private Preview mPreview;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
    // Hide the window title.
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().addFlags(LayoutParams.FLAG_KEEP_SCREEN_ON);
    super.onCreate(savedInstanceState);

    getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);

    // Create our Preview view and set it as the content of our activity.
    try {
    layout = new FrameLayout(this);
    faceView = new FaceView(this);
    mPreview = new Preview(this, faceView);
    layout.addView(mPreview);
    layout.addView(faceView);
    setContentView(layout);
    } catch (IOException e) {
    e.printStackTrace();
    new AlertDialog.Builder(this).setMessage(e.getMessage()).create().show();
    }
    }
    }
    class FaceView extends View implements Camera.PreviewCallback {
    public static final int SUBSAMPLING_FACTOR = 4;

    private IplImage grayImage;
    private CvHaarClassifierCascade classifier;
    private CvMemStorage storage;
    private CvSeq faces;

    public FaceView(MainActivity context) throws IOException {
    super(context);

    // Load the classifier file from Java resources.
    File classifierFile = Loader.extractResource(getClass(),
    "/sk/drndos/ar/haarcascade_frontalface_alt.xml",
    context.getCacheDir(), "classifier", ".xml");
    if (classifierFile == null || classifierFile.length() <= 0) {
    throw new IOException("Could not extract the classifier file from Java resource.");
    }

    // Preload the opencv_objdetect module to work around a known bug.
    Loader.load(opencv_objdetect.class);
    classifier = new CvHaarClassifierCascade(cvLoad(classifierFile.getAbsolutePath()));
    classifierFile.delete();
    if (classifier.isNull()) {
    throw new IOException("Could not load the classifier file.");
    }
    storage = CvMemStorage.create();
    }

    public void onPreviewFrame(final byte[] data, final Camera camera) {
    try {
    Camera.Size size = camera.getParameters().getPreviewSize();
    processImage(data, size.width, size.height);
    camera.addCallbackBuffer(data);
    } catch (RuntimeException e) {
    // The camera has probably just been released, ignore.
    }
    }

    protected void processImage(byte[] data, int width, int height) {
    // First, downsample our image and convert it into a grayscale IplImage
    int f = SUBSAMPLING_FACTOR;
    if (grayImage == null || grayImage.width() != width/f || grayImage.height() != height/f) {
    grayImage = IplImage.create(width/f, height/f, IPL_DEPTH_8U, 1);
    }
    int imageWidth = grayImage.width();
    int imageHeight = grayImage.height();
    int dataStride = f*width;
    int imageStride = grayImage.widthStep();
    ByteBuffer imageBuffer = grayImage.getByteBuffer();
    for (int y = 0; y < imageHeight; y++) {
    int dataLine = y*dataStride;
    int imageLine = y*imageStride;
    for (int x = 0; x < imageWidth; x++) {
    imageBuffer.put(imageLine + x, data[dataLine + f*x]);
    }
    }

    Log.w("FACEPREVIEW","Rotating");

    cvClearMemStorage(storage);
    IplImage grayImage2 = rotateImage(grayImage,90);
    faces = cvHaarDetectObjects(grayImage2, classifier, storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
    postInvalidate();
    }
    private IplImage rotateImage(final IplImage src, float angleDegrees)
    {
    // Create a map_matrix, where the left 2×2 matrix
    // is the transform and the right 2×1 is the dimensions.
    float[] m = new float[6];
    CvMat M = CvMat.create(2, 3, CV_32F);
    int w = src.width();
    int h = src.height();
    float angleRadians = angleDegrees * ((float)Math.PI / 180.0f);
    m[0] = (float)( Math.cos(angleRadians) );
    m[1] = (float)( Math.sin(angleRadians) );
    m[3] = -m[1];
    m[4] = m[0];
    m[2] = w*0.5f;
    m[5] = h*0.5f;
    M.put(0, m[0]);
    M.put(1, m[1]);
    M.put(2, m[2]);
    M.put(3, m[3]);
    M.put(4, m[4]);
    M.put(5, m[5]);

    // Make a spare image for the result
    CvSize sizeRotated = new CvSize();
    sizeRotated.width(Math.round(w));
    sizeRotated.height(Math.round(h));

    // Rotate
    IplImage imageRotated = cvCreateImage( sizeRotated, src.depth(), src.nChannels());

    // Transform the image
    cvGetQuadrangleSubPix(src, imageRotated, M);

    return imageRotated;
    }

    @Override
    protected void onDraw(Canvas canvas) {
    Paint paint = new Paint();
    paint.setColor(Color.RED);
    paint.setTextSize(20);
    Log.w("FACEPREVIEW","Working");
    String s = "FacePreview – This side up.";
    float textWidth = paint.measureText(s);
    canvas.drawText(s, (getWidth()-textWidth)/2, 20, paint);

    if (faces != null) {
    paint.setStrokeWidth(2);
    paint.setStyle(Paint.Style.STROKE);
    float scaleX = (float)getWidth()/grayImage.height();
    float scaleY = (float)getHeight()/grayImage.width();
    int total = faces.total();
    for (int i = 0; i < total; i++) {
    CvRect r = new CvRect(cvGetSeqElem(faces, i));
    int x = r.y(), y = r.x(), w = r.width(), h = r.height();
    canvas.drawRect(x*scaleX, y*scaleY, (x+w)*scaleX, (y+h)*scaleY, paint);
    }
    }
    }
    }

    // ———————————————————————-

    class Preview extends SurfaceView implements SurfaceHolder.Callback {
    SurfaceHolder mHolder;
    Camera mCamera;
    Camera.PreviewCallback previewCallback;

    Preview(Context context, Camera.PreviewCallback previewCallback) {
    super(context);
    this.previewCallback = previewCallback;
    // Install a SurfaceHolder.Callback so we get notified when the
    // underlying surface is created and destroyed.
    mHolder = getHolder();
    mHolder.addCallback(this);
    mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    public void surfaceCreated(SurfaceHolder holder) {
    // The Surface has been created, acquire the camera and tell it where
    // to draw.
    mCamera = Camera.open();
    mCamera.setDisplayOrientation(90);
    try {
    mCamera.setPreviewDisplay(holder);
    } catch (IOException exception) {
    mCamera.release();
    mCamera = null;
    // TODO: add more exception handling logic here
    }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
    // Surface will be destroyed when we return, so stop the preview.
    // Because the CameraDevice object is not a shared resource, it’s very
    // important to release it when the activity is paused.
    mCamera.stopPreview();
    mCamera.release();
    mCamera = null;
    }

    private Size getOptimalPreviewSize(List<Size> sizes, int w, int h) {
    final double ASPECT_TOLERANCE = 0.05;
    double targetRatio = (double) w / h;
    if (sizes == null) return null;

    Size optimalSize = null;
    double minDiff = Double.MAX_VALUE;

    int targetHeight = h;

    // Try to find a preview size that matches the aspect ratio and the requested size
    for (Size size : sizes) {
    double ratio = (double) size.width / size.height;
    if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
    if (Math.abs(size.height - targetHeight) < minDiff) {
    optimalSize = size;
    minDiff = Math.abs(size.height - targetHeight);
    }
    }

    // Cannot find one matching the aspect ratio; ignore the requirement
    if (optimalSize == null) {
    minDiff = Double.MAX_VALUE;
    for (Size size : sizes) {
    if (Math.abs(size.height - targetHeight) < minDiff) {
    optimalSize = size;
    minDiff = Math.abs(size.height - targetHeight);
    }
    }
    }
    return optimalSize;
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    // Now that the size is known, set up the camera parameters and begin
    // the preview.
    Camera.Parameters parameters = mCamera.getParameters();

    List<Size> sizes = parameters.getSupportedPreviewSizes();
    Size optimalSize = getOptimalPreviewSize(sizes, w, h);
    parameters.setPreviewSize(optimalSize.width, optimalSize.height);

    mCamera.setParameters(parameters);
    if (previewCallback != null) {
    mCamera.setPreviewCallbackWithBuffer(previewCallback);
    Camera.Size size = parameters.getPreviewSize();
    byte[] data = new byte[size.width*size.height*
    ImageFormat.getBitsPerPixel(parameters.getPreviewFormat())/8];
    mCamera.addCallbackBuffer(data);
    }
    mCamera.startPreview();
    }

    }
    [/java]

    1. widi

      Wow, it works, thank you sir!

      But it’s a little slow to detect a face. How can I improve the detection?

      1. drndos (Post author)

        Hello, try increasing the subsampling factor from 4 to 5, so detection runs on a smaller image:
        [java]
        public static final int SUBSAMPLING_FACTOR = 5;
        [/java]

    2. BraunX

      Thanks for your tutorial, it worked great!
      Just one addition for the portrait mode. I think that in rotateImage() this code is correct:

      // Make a spare image for the result
      CvSize sizeRotated = new CvSize();
      sizeRotated.width(Math.round(h));
      sizeRotated.height(Math.round(w));

      i.e., I swapped the ‘h’ and ‘w’ variables in the code above.

  5. LJ

    Can you show me how to do the face matching? I want to learn about JavaCV.

    1. drndos (Post author)

      Hello,
      Sure, I can provide you with a working example and you can modify it to your needs.
      FacePreview.java:
      [java]
      /*
      * Copyright (C) 2010,2011,2012 Samuel Audet
      *
      * FacePreview – A fusion of OpenCV’s facedetect and Android’s CameraPreview samples,
      * with JavaCV + JavaCPP as the glue in between.
      *
      * This file was based on CameraPreview.java that came with the Samples for
      * Android SDK API 8, revision 1 and contained the following copyright notice:
      *
      * Copyright (C) 2007 The Android Open Source Project
      *
      * Licensed under the Apache License, Version 2.0 (the "License");
      * you may not use this file except in compliance with the License.
      * You may obtain a copy of the License at
      *
      * http://www.apache.org/licenses/LICENSE-2.0
      *
      * Unless required by applicable law or agreed to in writing, software
      * distributed under the License is distributed on an "AS IS" BASIS,
      * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      * See the License for the specific language governing permissions and
      * limitations under the License.
      *
      *
      * IMPORTANT – Make sure the AndroidManifest.xml file looks like this:
      *
      * <?xml version="1.0" encoding="utf-8"?>
      * <manifest xmlns:android="http://schemas.android.com/apk/res/android"
      * package="com.googlecode.javacv.facepreview"
      * android:versionCode="1"
      * android:versionName="1.0" >
      * <uses-sdk android:minSdkVersion="4" />
      * <uses-permission android:name="android.permission.CAMERA" />
      * <uses-feature android:name="android.hardware.camera" />
      * <application android:label="@string/app_name">
      * <activity
      * android:name="FacePreview"
      * android:label="@string/app_name"
      * android:screenOrientation="landscape">
      * <intent-filter>
      * <action android:name="android.intent.action.MAIN" />
      * <category android:name="android.intent.category.LAUNCHER" />
      * </intent-filter>
      * </activity>
      * </application>
      * </manifest>
      */

      package com.googlecode.javacv.facepreview;

      import android.app.Activity;
      import android.app.AlertDialog;
      import android.content.Context;
      import android.graphics.Canvas;
      import android.graphics.Color;
      import android.graphics.ImageFormat;
      import android.graphics.Paint;
      import android.hardware.Camera;
      import android.hardware.Camera.Size;
      import android.os.Bundle;
      import android.os.PowerManager;
      import android.util.Log;
      import android.view.SurfaceHolder;
      import android.view.SurfaceView;
      import android.view.View;
      import android.view.View.OnClickListener;
      import android.view.Window;
      import android.view.WindowManager;
      import android.widget.FrameLayout;
      import java.io.File;
      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.util.List;
      import com.googlecode.javacpp.Loader;
      import com.googlecode.javacv.cpp.opencv_objdetect;

      import static com.googlecode.javacv.cpp.opencv_core.*;
      import static com.googlecode.javacv.cpp.opencv_imgproc.*;
      import static com.googlecode.javacv.cpp.opencv_objdetect.*;
      import static com.googlecode.javacv.cpp.opencv_highgui.*;
      import java.io.FileOutputStream;
      import java.io.InputStream;
      import java.io.OutputStream;

      // ———————————————————————-

      public class FacePreview extends Activity {
      private final static String CLASS_LABEL = "FacePreview";
      private final static String LOG_TAG = CLASS_LABEL;
      private FrameLayout layout;
      private FaceView faceView;
      private Preview mPreview;
      public FaceRecognition faceRecognition;
      @Override
      protected void onCreate(Bundle savedInstanceState) {
      // Hide the window title.
      requestWindowFeature(Window.FEATURE_NO_TITLE);

      super.onCreate(savedInstanceState);
      faceRecognition = new FaceRecognition();
      //faceRecognition.learn("data/some-training-faces.txt");
      faceRecognition.context=this;
      getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);

      // Create our Preview view and set it as the content of our activity.
      try {
      layout = new FrameLayout(this);
      faceView = new FaceView(this);
      mPreview = new Preview(this, faceView);
      layout.addView(mPreview);
      layout.addView(faceView);
      setContentView(layout);
      } catch (IOException e) {
      e.printStackTrace();
      new AlertDialog.Builder(this).setMessage(e.getMessage()).create().show();
      }
      layout.setOnClickListener(new OnClickListener() {
      @Override
      public void onClick(View v) {
      faceRecognition.learn("all10.txt");

      }
      });
      }

      }

      // ———————————————————————-

      class FaceView extends View implements Camera.PreviewCallback {
      public static final int SUBSAMPLING_FACTOR = 2;

      private IplImage grayImage;
      private CvHaarClassifierCascade classifier;
      private CvMemStorage storage;
      private CvSeq faces;

      public FaceView(FacePreview context) throws IOException {
      super(context);

      // Load the classifier file from Java resources.
      File classifierFile = Loader.extractResource(getClass(),
      "/com/googlecode/javacv/facepreview/haarcascade_frontalface_alt.xml",
      context.getCacheDir(), "classifier", ".xml");
      if (classifierFile == null || classifierFile.length() <= 0) {
      throw new IOException("Could not extract the classifier file from Java resource.");
      }

      // Preload the opencv_objdetect module to work around a known bug.
      Loader.load(opencv_objdetect.class);
      classifier = new CvHaarClassifierCascade(cvLoad(classifierFile.getAbsolutePath()));
      classifierFile.delete();
      if (classifier.isNull()) {
      throw new IOException("Could not load the classifier file.");
      }
      storage = CvMemStorage.create();
      }

      public void onPreviewFrame(final byte[] data, final Camera camera) {
      try {
      Camera.Size size = camera.getParameters().getPreviewSize();
      processImage(data, size.width, size.height);
      camera.addCallbackBuffer(data);
      } catch (RuntimeException e) {
      // The camera has probably just been released, ignore.
      }
      }
      private String name = "";
      protected void processImage(byte[] data, int width, int height) {
      // First, downsample our image and convert it into a grayscale IplImage
      int f = SUBSAMPLING_FACTOR;
      if (grayImage == null || grayImage.width() != width/f || grayImage.height() != height/f) {
      grayImage = IplImage.create(width/f, height/f, IPL_DEPTH_8U, 1);
      }
      int imageWidth = grayImage.width();
      int imageHeight = grayImage.height();
      int dataStride = f*width;
      int imageStride = grayImage.widthStep();
      ByteBuffer imageBuffer = grayImage.getByteBuffer();
      for (int y = 0; y < imageHeight; y++) {
      int dataLine = y*dataStride;
      int imageLine = y*imageStride;
      for (int x = 0; x < imageWidth; x++) {
      imageBuffer.put(imageLine + x, data[dataLine + f*x]);
      }
      }

      cvClearMemStorage(storage);

      faces = cvHaarDetectObjects(grayImage, classifier, storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING);

      postInvalidate();
      }

      @Override
      protected void onDraw(Canvas canvas) {
      Paint paint = new Paint();
      paint.setColor(Color.RED);
      paint.setTextSize(20);

      if (faces != null) {
      paint.setStrokeWidth(2);
      paint.setStyle(Paint.Style.STROKE);
      float scaleX = (float)getWidth()/grayImage.width();
      float scaleY = (float)getHeight()/grayImage.height();
      int total = faces.total();
      for (int i = 0; i < total; i++) {
      CvRect r = new CvRect(cvGetSeqElem(faces, i));
      int x = r.x(), y = r.y(), w = r.width(), h = r.height();
      canvas.drawRect(x*scaleX, y*scaleY, (x+w)*scaleX, (y+h)*scaleY, paint);
      Log.w("FACEPREVIEW","RECT x: "+x+" y: "+y+" width: "+w+" height: "+h);
      Log.w("FACEPREVIEW","GRAYIMAGE width: "+grayImage.width()+" height: "+grayImage.height());

      int width = (int)((float)r.height()/(float)266*(float)320);
      int x1 = r.x()-(Math.abs(width-r.width())/2);
      r = r.x(x1).width(width);
      Log.w("FACEPREVIEW","R1 x: "+r.x()+" y: "+r.y()+" width: "+r.width()+" height: "+r.height());
      cvSetImageROI(grayImage, r);

      IplImage tmp = cvCreateImage(cvSize((int)(r.width()), (int)(r.height())), IPL_DEPTH_8U,1);
      cvCopy(grayImage, tmp, null);

      IplImage newImage = cvCreateImage(cvSize(266, 320), IPL_DEPTH_8U, 1);
      cvResize(tmp, newImage, CV_INTER_CUBIC);
      cvReleaseImage(tmp);
      name= ((FacePreview)this.getContext()).faceRecognition.recognizeImage(newImage);
      cvResetImageROI(grayImage);

      }
      }
      String s = "FacePreview – This side up.";
      if(name!=null)
      {
      s=name;
      }
      float textWidth = paint.measureText(s);
      canvas.drawText(s, (getWidth()-textWidth)/2, 20, paint);
      }
      protected void onClick()
      {
      ((FacePreview)this.getContext()).faceRecognition.learn("all10.txt");
      }
      }

      // ———————————————————————-

      class Preview extends SurfaceView implements SurfaceHolder.Callback {
      SurfaceHolder mHolder;
      Camera mCamera;
      Camera.PreviewCallback previewCallback;

      Preview(Context context, Camera.PreviewCallback previewCallback) {
      super(context);
      this.previewCallback = previewCallback;

      // Install a SurfaceHolder.Callback so we get notified when the
      // underlying surface is created and destroyed.
      mHolder = getHolder();
      mHolder.addCallback(this);
      mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
      }

      public void surfaceCreated(SurfaceHolder holder) {
      // The Surface has been created, acquire the camera and tell it where
      // to draw.
      mCamera = Camera.open();

      try {
      mCamera.setPreviewDisplay(holder);
      mCamera.autoFocus(new Camera.AutoFocusCallback() {
      public void onAutoFocus(boolean bln, Camera camera) {
      Log.w("FACEPRWVIEW", "focusing");
      }
      });
      } catch (IOException exception) {
      mCamera.release();
      mCamera = null;
      // TODO: add more exception handling logic here
      }
      }

      public void surfaceDestroyed(SurfaceHolder holder) {
      // Surface will be destroyed when we return, so stop the preview.
      // Because the CameraDevice object is not a shared resource, it’s very
      // important to release it when the activity is paused.
      mCamera.stopPreview();
      mCamera.release();
      mCamera = null;
      }

      private Size getOptimalPreviewSize(List<Size> sizes, int w, int h) {
      final double ASPECT_TOLERANCE = 0.05;
      double targetRatio = (double) w / h;
      if (sizes == null) return null;

      Size optimalSize = null;
      double minDiff = Double.MAX_VALUE;

      int targetHeight = h;

      // Try to find a preview size that matches the aspect ratio and the requested size
      for (Size size : sizes) {
      double ratio = (double) size.width / size.height;
      if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
      if (Math.abs(size.height - targetHeight) < minDiff) {
      optimalSize = size;
      minDiff = Math.abs(size.height - targetHeight);
      }
      }

      // Cannot find one matching the aspect ratio; ignore the requirement
      if (optimalSize == null) {
      minDiff = Double.MAX_VALUE;
      for (Size size : sizes) {
      if (Math.abs(size.height - targetHeight) < minDiff) {
      optimalSize = size;
      minDiff = Math.abs(size.height - targetHeight);
      }
      }
      }
      return optimalSize;
      }

      public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
      // Now that the size is known, set up the camera parameters and begin
      // the preview.
      Camera.Parameters parameters = mCamera.getParameters();

      List<Size> sizes = parameters.getSupportedPreviewSizes();
      Size optimalSize = getOptimalPreviewSize(sizes, w, h);
      parameters.setPreviewSize(optimalSize.width, optimalSize.height);

      mCamera.setParameters(parameters);
      if (previewCallback != null) {
      mCamera.setPreviewCallbackWithBuffer(previewCallback);
      Camera.Size size = parameters.getPreviewSize();
      byte[] data = new byte[size.width*size.height*
      ImageFormat.getBitsPerPixel(parameters.getPreviewFormat())/8];
      mCamera.addCallbackBuffer(data);
      }
      mCamera.startPreview();
      }

      }
      [/java]
      FaceRecognition.java
      [java]
      package com.googlecode.javacv.facepreview;

      /*
      * FaceRecognition.java
      *
      * Created on Dec 7, 2011, 1:27:25 PM
      *
      * Description: Recognizes faces.
      *
      * Copyright (C) Dec 7, 2011, Stephen L. Reed, Texai.org. (Fixed April 22, 2012, Samuel Audet)
      *
      * This file is a translation from the OpenCV example http://www.shervinemami.info/faceRecognition.html, ported
      * to Java using the JavaCV library. Notable changes are the addition of the Java Logging framework and the
      * installation of image files in a data directory child of the working directory. Some of the code has
      * been expanded to make debugging easier. Expected results are 100% recognition of the lower3.txt test
      * image index set against the all10.txt training image index set. See http://en.wikipedia.org/wiki/Eigenface
      * for a technical explanation of the algorithm.
      *
      * stephenreed@yahoo.com
      *
      * FaceRecognition is free software: you can redistribute it and/or modify
      * it under the terms of the GNU General Public License as published by
      * the Free Software Foundation, either version 2 of the License, or
      * (at your option) any later version (subject to the "Classpath" exception
      * as provided in the LICENSE.txt file that accompanied this code).
      *
      * FaceRecognition is distributed in the hope that it will be useful,
      * but WITHOUT ANY WARRANTY; without even the implied warranty of
      * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
      * GNU General Public License for more details.
      *
      * You should have received a copy of the GNU General Public License
      * along with JavaCV. If not, see <http://www.gnu.org/licenses/>.
      *
      */
      import android.util.Log;
      import com.googlecode.javacpp.FloatPointer;
      import com.googlecode.javacpp.Pointer;
      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.io.IOException;
      import java.util.ArrayList;
      import java.util.List;
      import java.util.logging.Logger;
      import static com.googlecode.javacv.cpp.opencv_core.*;
      import static com.googlecode.javacv.cpp.opencv_highgui.*;
      import static com.googlecode.javacv.cpp.opencv_legacy.*;
      import java.io.File;

      /** Recognizes faces.
      *
      * @author reed
      */
      public class FaceRecognition {

      /** the logger */
      private static final Logger LOGGER = Logger.getLogger(FaceRecognition.class.getName());
      /** the number of training faces */
      private int nTrainFaces = 0;
      /** the training face image array */
      IplImage[] trainingFaceImgArr;
      /** the test face image array */
      IplImage[] testFaceImgArr;
      /** the person number array */
      CvMat personNumTruthMat;
      /** the number of persons */
      int nPersons;
      /** the person names */
      final List<String> personNames = new ArrayList<String>();
      /** the number of eigenvalues */
      int nEigens = 0;
      /** eigenvectors */
      IplImage[] eigenVectArr;
      /** eigenvalues */
      CvMat eigenValMat;
      /** the average image */
      IplImage pAvgTrainImg;
      /** the projected training faces */
      CvMat projectedTrainFaceMat;

      /** Constructs a new FaceRecognition instance. */
      public FaceRecognition() {
      }

      /** Trains from the data in the given training text index file, and stores the trained data into the file 'data/facedata.xml'.
      *
      * @param trainingFileName the given training text index file
      */
      public void learn(final String trainingFileName) {
      int i;

      // load training data
      LOGGER.info("===========================================");
      LOGGER.info("Loading the training images in " + trainingFileName);
      trainingFaceImgArr = loadFaceImgArray(trainingFileName);
      nTrainFaces = trainingFaceImgArr.length;
      LOGGER.info("Got " + nTrainFaces + " training images");
      if (nTrainFaces < 3) {
      LOGGER.severe("Need 3 or more training faces\n"
      + "Input file contains only " + nTrainFaces);
      return;
      }

      // do Principal Component Analysis on the training faces
      doPCA();

      LOGGER.info("projecting the training images onto the PCA subspace");
      // project the training images onto the PCA subspace
      projectedTrainFaceMat = cvCreateMat(
      nTrainFaces, // rows
      nEigens, // cols
      CV_32FC1); // type, 32-bit float, 1 channel

      // initialize the training face matrix – for ease of debugging
      for (int i1 = 0; i1 < nTrainFaces; i1++) {
      for (int j1 = 0; j1 < nEigens; j1++) {
      projectedTrainFaceMat.put(i1, j1, 0.0);
      }
      }

      LOGGER.info("created projectedTrainFaceMat with " + nTrainFaces + " (nTrainFaces) rows and " + nEigens + " (nEigens) columns");
      if (nTrainFaces < 5) {
      LOGGER.info("projectedTrainFaceMat contents:\n" + oneChannelCvMatToString(projectedTrainFaceMat));
      }

      final FloatPointer floatPointer = new FloatPointer(nEigens);
      for (i = 0; i < nTrainFaces; i++) {
      cvEigenDecomposite(
      trainingFaceImgArr[i], // obj
      nEigens, // nEigObjs
      eigenVectArr, // eigInput (Pointer)
      0, // ioFlags
      null, // userData (Pointer)
      pAvgTrainImg, // avg
      floatPointer); // coeffs (FloatPointer)

      if (nTrainFaces < 5) {
      LOGGER.info("floatPointer: " + floatPointerToString(floatPointer));
      }
      for (int j1 = 0; j1 < nEigens; j1++) {
      projectedTrainFaceMat.put(i, j1, floatPointer.get(j1));
      }
      }
      if (nTrainFaces < 5) {
      LOGGER.info("projectedTrainFaceMat after cvEigenDecomposite:\n" + projectedTrainFaceMat);
      }

      // store the recognition data as an xml file
      storeTrainingData();

      // Save all the eigenvectors as images, so that they can be checked.
      //storeEigenfaceImages();
      }
      /** Recognizes the face in the given image and returns the matched person's name.
      *
      * @param image the image containing the face to recognize
      */
      public String recognizeImage(final IplImage image) {
      LOGGER.info("===========================================");
      int i = 0;
      int nTestFaces = 0; // the number of test images
      CvMat trainPersonNumMat; // the person numbers during training
      float[] projectedTestFace;
      String answer;
      int nCorrect = 0;
      int nWrong = 0;
      double timeFaceRecognizeStart;
      double tallyFaceRecognizeTime;
      float confidence = 0.0f;

      // load test images and ground truth for person number
      testFaceImgArr = loadImage(image);
      nTestFaces = testFaceImgArr.length;

      LOGGER.info(nTestFaces + " test faces loaded");

      // load the saved training data
      trainPersonNumMat = loadTrainingData();
      if (trainPersonNumMat == null) {
      return null;
      }

      // project the test images onto the PCA subspace
      projectedTestFace = new float[nEigens];
      timeFaceRecognizeStart = (double) cvGetTickCount(); // Record the timing.
      int totalNearest=0;
      for (i = 0; i < nTestFaces; i++) {
      int iNearest;
      int nearest;
      int truth;

      // project the test image onto the PCA subspace
      cvEigenDecomposite(
      testFaceImgArr[i], // obj
      nEigens, // nEigObjs
      eigenVectArr, // eigInput (Pointer)
      0, // ioFlags
      null, // userData
      pAvgTrainImg, // avg
      projectedTestFace); // coeffs

      //LOGGER.info("projectedTestFace\n" + floatArrayToString(projectedTestFace));

      final FloatPointer pConfidence = new FloatPointer(confidence);
      iNearest = findNearestNeighbor(projectedTestFace, new FloatPointer(pConfidence));
      confidence = pConfidence.get();
      truth = personNumTruthMat.data_i().get(i);
      nearest = trainPersonNumMat.data_i().get(iNearest);

      if (nearest == truth) {
      answer = "Correct";
      nCorrect++;
      } else {
      answer = "WRONG!";
      nWrong++;
      }
      totalNearest=nearest-1;
      LOGGER.info("nearest = " + nearest + ", Truth = " + truth + " (" + answer + "). Confidence = " + confidence);
      }
      tallyFaceRecognizeTime = (double) cvGetTickCount() - timeFaceRecognizeStart;
      if (nCorrect + nWrong > 0) {
      LOGGER.info("TOTAL ACCURACY: " + (nCorrect * 100 / (nCorrect + nWrong)) + "% out of " + (nCorrect + nWrong) + " tests.");
      LOGGER.info("TOTAL TIME: " + (tallyFaceRecognizeTime / (cvGetTickFrequency() * 1000.0 * (nCorrect + nWrong))) + " ms average.");
      }
      return personNames.get(totalNearest);
      }
      public IplImage[] loadImage(IplImage image){
      IplImage[] faceImgArr;
      BufferedReader imgListFile;
      String imgFilename;
      int iFace = 0;
      int nFaces = 0;
      int i;

      nFaces=1;
      LOGGER.info("nFaces: " + nFaces);

      faceImgArr = new IplImage[1];
      personNumTruthMat = cvCreateMat(
      1, // rows
      nFaces, // cols
      CV_32SC1); // type, 32-bit signed, one channel

      // initialize the person number matrix – for ease of debugging
      for (int j1 = 0; j1 < nFaces; j1++) {
      personNumTruthMat.put(0, j1, 0);
      }

      personNames.clear(); // Make sure it starts as empty.
      nPersons = 0;

      String personName;
      String sPersonName;
      int personNumber;
      personNumber = nFaces+1;
      personName = "Niekto";
      sPersonName = personName;
      LOGGER.info("Got " + iFace + " " + personNumber + " " + personName);

      // Check if a new person is being loaded.
      if (personNumber > nPersons) {
      // Allocate memory for the extra person (or possibly multiple), using this new person’s name.
      personNames.add(sPersonName);
      nPersons = personNumber;
      LOGGER.info("Got new person " + sPersonName + " -> nPersons = " + nPersons + " [" + personNames.size() + "]");
      }

      // Keep the data
      personNumTruthMat.put(
      0, // i
      iFace, // j
      personNumber); // v

      // load the face image
      faceImgArr[iFace] = image;

      if (faceImgArr[iFace] == null) {
      throw new RuntimeException("Can’t load image from camera");
      }

      final StringBuilder stringBuilder = new StringBuilder();
      stringBuilder.append("People: ");
      if (nPersons > 0) {
      stringBuilder.append("<").append(personNames.get(0)).append(">");
      }
      for (i = 1; i < nPersons && i < personNames.size(); i++) {
      stringBuilder.append(", <").append(personNames.get(i)).append(">");
      }
      LOGGER.info(stringBuilder.toString());

      return faceImgArr;
      }
      /** Recognizes the face in each of the test images given, and compares the results with the truth.
      *
      * @param szFileTest the index file of test images
      */
      public void recognizeFileList(final String szFileTest) {
      LOGGER.info("===========================================");
      LOGGER.info("recognizing faces indexed from " + szFileTest);
      int i = 0;
      int nTestFaces = 0; // the number of test images
      CvMat trainPersonNumMat; // the person numbers during training
      float[] projectedTestFace;
      String answer;
      int nCorrect = 0;
      int nWrong = 0;
      double timeFaceRecognizeStart;
      double tallyFaceRecognizeTime;
      float confidence = 0.0f;

      // load test images and ground truth for person number
      testFaceImgArr = loadFaceImgArray(szFileTest);
      nTestFaces = testFaceImgArr.length;

      LOGGER.info(nTestFaces + " test faces loaded");

      // load the saved training data
      trainPersonNumMat = loadTrainingData();
      if (trainPersonNumMat == null) {
      return;
      }

      // project the test images onto the PCA subspace
      projectedTestFace = new float[nEigens];
      timeFaceRecognizeStart = (double) cvGetTickCount(); // Record the timing.

      for (i = 0; i < nTestFaces; i++) {
      int iNearest;
      int nearest;
      int truth;

      // project the test image onto the PCA subspace
      cvEigenDecomposite(
      testFaceImgArr[i], // obj
      nEigens, // nEigObjs
      eigenVectArr, // eigInput (Pointer)
      0, // ioFlags
      null, // userData
      pAvgTrainImg, // avg
      projectedTestFace); // coeffs

      //LOGGER.info("projectedTestFace\n" + floatArrayToString(projectedTestFace));

      final FloatPointer pConfidence = new FloatPointer(confidence);
      iNearest = findNearestNeighbor(projectedTestFace, new FloatPointer(pConfidence));
      confidence = pConfidence.get();
      truth = personNumTruthMat.data_i().get(i);
      nearest = trainPersonNumMat.data_i().get(iNearest);

      if (nearest == truth) {
      answer = "Correct";
      nCorrect++;
      } else {
      answer = "WRONG!";
      nWrong++;
      }
      LOGGER.info("nearest = " + nearest + ", Truth = " + truth + " (" + answer + "). Confidence = " + confidence);
      }
      tallyFaceRecognizeTime = (double) cvGetTickCount() - timeFaceRecognizeStart;
      if (nCorrect + nWrong > 0) {
      LOGGER.info("TOTAL ACCURACY: " + (nCorrect * 100 / (nCorrect + nWrong)) + "% out of " + (nCorrect + nWrong) + " tests.");
      LOGGER.info("TOTAL TIME: " + (tallyFaceRecognizeTime / (cvGetTickFrequency() * 1000.0 * (nCorrect + nWrong))) + " ms average.");
      }
      }
      public FacePreview context;
      /** Reads the names & image filenames of people from a text file, and loads all those images listed.
      *
      * @param filename the training file name
      * @return the face image array
      */
      private IplImage[] loadFaceImgArray(final String filename) {
      IplImage[] faceImgArr;
      BufferedReader imgListFile;
      String imgFilename;
      int iFace = 0;
      int nFaces = 0;
      int i;
      try {
      // open the input file
      imgListFile = new BufferedReader(new FileReader(new File(context.getExternalFilesDir(null), filename)));

      // count the number of faces
      while (true) {
      final String line = imgListFile.readLine();
      if (line == null || line.isEmpty()) {
      break;
      }
      nFaces++;
      }
      LOGGER.info("nFaces: " + nFaces);
      imgListFile = new BufferedReader(new FileReader(new File(context.getExternalFilesDir(null), filename)));

      // allocate the face-image array and person number matrix
      faceImgArr = new IplImage[nFaces];
      personNumTruthMat = cvCreateMat(
      1, // rows
      nFaces, // cols
      CV_32SC1); // type, 32-bit signed, one channel

      // initialize the person number matrix – for ease of debugging
      for (int j1 = 0; j1 < nFaces; j1++) {
      personNumTruthMat.put(0, j1, 0);
      }

      personNames.clear(); // Make sure it starts as empty.
      nPersons = 0;

      // store the face images in an array
      for (iFace = 0; iFace < nFaces; iFace++) {
      String personName;
      String sPersonName;
      int personNumber;

      // read person number (beginning with 1), their name and the image filename.
      final String line = imgListFile.readLine();
      if (line.isEmpty()) {
      break;
      }
      final String[] tokens = line.split(" ");
      personNumber = Integer.parseInt(tokens[0]);
      personName = tokens[1];
      imgFilename = "/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/"+tokens[2];
      sPersonName = personName;
      LOGGER.info("Got " + iFace + " " + personNumber + " " + personName + " " + imgFilename);

      // Check if a new person is being loaded.
      if (personNumber > nPersons) {
      // Allocate memory for the extra person (or possibly multiple), using this new person’s name.
      personNames.add(sPersonName);
      nPersons = personNumber;
      LOGGER.info("Got new person " + sPersonName + " -> nPersons = " + nPersons + " [" + personNames.size() + "]");
      }

      // Keep the data
      personNumTruthMat.put(
      0, // i
      iFace, // j
      personNumber); // v

      // load the face image
      faceImgArr[iFace] = cvLoadImage(
      imgFilename, // filename
      CV_LOAD_IMAGE_GRAYSCALE); // isColor

      if (faceImgArr[iFace] == null) {
      throw new RuntimeException("Can’t load image from " + imgFilename);
      }
      }

      imgListFile.close();

      } catch (IOException ex) {
      throw new RuntimeException(ex);
      }

      LOGGER.info("Data loaded from ‘" + filename + "’: (" + nFaces + " images of " + nPersons + " people).");
      final StringBuilder stringBuilder = new StringBuilder();
      stringBuilder.append("People: ");
      if (nPersons > 0) {
      stringBuilder.append("<").append(personNames.get(0)).append(">");
      }
      for (i = 1; i < nPersons && i < personNames.size(); i++) {
      stringBuilder.append(", <").append(personNames.get(i)).append(">");
      }
      LOGGER.info(stringBuilder.toString());

      return faceImgArr;
      }

      /** Does the Principal Component Analysis, finding the average image and the eigenfaces that represent any image in the given dataset. */
      private void doPCA() {
      int i;
      CvTermCriteria calcLimit;
      CvSize faceImgSize = new CvSize();

      // set the number of eigenvalues to use
      nEigens = nTrainFaces - 1;

      LOGGER.info("allocating images for principal component analysis, using " + nEigens + (nEigens == 1 ? " eigenvalue" : " eigenvalues"));

      // allocate the eigenvector images
      faceImgSize.width(trainingFaceImgArr[0].width());
      faceImgSize.height(trainingFaceImgArr[0].height());
      eigenVectArr = new IplImage[nEigens];
      for (i = 0; i < nEigens; i++) {
      eigenVectArr[i] = cvCreateImage(
      faceImgSize, // size
      IPL_DEPTH_32F, // depth
      1); // channels
      }

      // allocate the eigenvalue array
      eigenValMat = cvCreateMat(
      1, // rows
      nEigens, // cols
      CV_32FC1); // type, 32-bit float, 1 channel

      // allocate the averaged image
      pAvgTrainImg = cvCreateImage(
      faceImgSize, // size
      IPL_DEPTH_32F, // depth
      1); // channels

      // set the PCA termination criterion
      calcLimit = cvTermCriteria(
      CV_TERMCRIT_ITER, // type
      nEigens, // max_iter
      1); // epsilon

      LOGGER.info("computing average image, eigenvalues and eigenvectors");
      // compute average image, eigenvalues, and eigenvectors
      cvCalcEigenObjects(
      nTrainFaces, // nObjects
      trainingFaceImgArr, // input
      eigenVectArr, // output
      CV_EIGOBJ_NO_CALLBACK, // ioFlags
      0, // ioBufSize
      null, // userData
      calcLimit,
      pAvgTrainImg, // avg
      eigenValMat.data_fl()); // eigVals

      LOGGER.info("normalizing the eigenvectors");
      cvNormalize(
      eigenValMat, // src (CvArr)
      eigenValMat, // dst (CvArr)
      1, // a
      0, // b
      CV_L1, // norm_type
      null); // mask
      }

      /** Stores the training data to the file ‘data/facedata.xml’. */
      private void storeTrainingData() {
      CvFileStorage fileStorage;
      int i;

      LOGGER.info("writing data/facedata.xml");

      // create a file-storage interface
      fileStorage = cvOpenFileStorage(
      "/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/facedata.xml", // filename
      null, // memstorage
      CV_STORAGE_WRITE, // flags
      null); // encoding
      LOGGER.info("OPENED file and writing");
      // Store the person names. Added by Shervin.
      cvWriteInt(
      fileStorage, // fs
      "nPersons", // name
      nPersons); // value

      for (i = 0; i < nPersons; i++) {
      String varname = "personName_" + (i + 1);
      cvWriteString(
      fileStorage, // fs
      varname, // name
      personNames.get(i), // string
      0); // quote
      }

      // store all the data
      cvWriteInt(
      fileStorage, // fs
      "nEigens", // name
      nEigens); // value

      cvWriteInt(
      fileStorage, // fs
      "nTrainFaces", // name
      nTrainFaces); // value

      cvWrite(
      fileStorage, // fs
      "trainPersonNumMat", // name
      personNumTruthMat); // value

      cvWrite(
      fileStorage, // fs
      "eigenValMat", // name
      eigenValMat); // value

      cvWrite(
      fileStorage, // fs
      "projectedTrainFaceMat", // name
      projectedTrainFaceMat);

      cvWrite(fileStorage, // fs
      "avgTrainImg", // name
      pAvgTrainImg); // value

      for (i = 0; i < nEigens; i++) {
      String varname = "eigenVect_" + i;
      cvWrite(
      fileStorage, // fs
      varname, // name
      eigenVectArr[i]); // value
      }
      LOGGER.info("Face data written");
      // release the file-storage interface
      cvReleaseFileStorage(fileStorage);
      }

      /** Opens the training data from the file ‘data/facedata.xml’.
      *
      * @param pTrainPersonNumMat
      * @return the person numbers during training, or null if not successful
      */
      private CvMat loadTrainingData() {
      LOGGER.info("loading training data");
      CvMat pTrainPersonNumMat = null; // the person numbers during training
      CvFileStorage fileStorage;
      int i;

      // create a file-storage interface
      fileStorage = cvOpenFileStorage(
      "/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/facedata.xml", // filename
      null, // memstorage
      CV_STORAGE_READ, // flags
      null); // encoding
      if (fileStorage == null) {
      LOGGER.severe("Can’t open training database file ‘data/facedata.xml’.");
      return null;
      }

      // Load the person names.
      personNames.clear(); // Make sure it starts as empty.
      nPersons = cvReadIntByName(
      fileStorage, // fs
      null, // map
      "nPersons", // name
      0); // default_value
      if (nPersons == 0) {
      LOGGER.severe("No people found in the training database ‘data/facedata.xml’.");
      return null;
      } else {
      LOGGER.info(nPersons + " persons read from the training database");
      }

      // Load each person’s name.
      for (i = 0; i < nPersons; i++) {
      String sPersonName;
      String varname = "personName_" + (i + 1);
      sPersonName = cvReadStringByName(
      fileStorage, // fs
      null, // map
      varname,
      "");
      personNames.add(sPersonName);
      }
      LOGGER.info("person names: " + personNames);

      // Load the data
      nEigens = cvReadIntByName(
      fileStorage, // fs
      null, // map
      "nEigens",
      0); // default_value
      nTrainFaces = cvReadIntByName(
      fileStorage,
      null, // map
      "nTrainFaces",
      0); // default_value
      Pointer pointer = cvReadByName(
      fileStorage, // fs
      null, // map
      "trainPersonNumMat"); // name
      pTrainPersonNumMat = new CvMat(pointer);

      pointer = cvReadByName(
      fileStorage, // fs
      null, // map
      "eigenValMat"); // name
      eigenValMat = new CvMat(pointer);

      pointer = cvReadByName(
      fileStorage, // fs
      null, // map
      "projectedTrainFaceMat"); // name
      projectedTrainFaceMat = new CvMat(pointer);

      pointer = cvReadByName(
      fileStorage,
      null, // map
      "avgTrainImg");
      pAvgTrainImg = new IplImage(pointer);

      eigenVectArr = new IplImage[nTrainFaces];
      for (i = 0; i < nEigens; i++) { // storeTrainingData() saved only nEigens eigenvectors (eigenVect_0 .. eigenVect_{nEigens-1})
      String varname = "eigenVect_" + i;
      pointer = cvReadByName(
      fileStorage,
      null, // map
      varname);
      eigenVectArr[i] = new IplImage(pointer);
      }

      // release the file-storage interface
      cvReleaseFileStorage(fileStorage);

      LOGGER.info("Training data loaded (" + nTrainFaces + " training images of " + nPersons + " people)");
      final StringBuilder stringBuilder = new StringBuilder();
      stringBuilder.append("People: ");
      if (nPersons > 0) {
      stringBuilder.append("<").append(personNames.get(0)).append(">");
      }
      for (i = 1; i < nPersons; i++) {
      stringBuilder.append(", <").append(personNames.get(i)).append(">");
      }
      LOGGER.info(stringBuilder.toString());

      return pTrainPersonNumMat;
      }

          /** Saves all the eigenvectors as images, so that they can be checked. */
          private void storeEigenfaceImages() {
              // Store the average image to a file.
              LOGGER.info("Saving the image of the average face as 'out_averageImage.bmp'");
              cvSaveImage("/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/out_averageImage.bmp", pAvgTrainImg);

              // Create a large image made of many eigenface images.
              // Must also convert each eigenface image to a normal 8-bit UCHAR image instead of a 32-bit float image.
              LOGGER.info("Saving the " + nEigens + " eigenvector images as 'out_eigenfaces.bmp'");

              if (nEigens > 0) {
                  // Put all the eigenfaces next to each other.
                  int COLUMNS = 8; // Put up to 8 images on a row.
                  int nCols = Math.min(nEigens, COLUMNS);
                  int nRows = 1 + (nEigens / COLUMNS); // Put the rest on new rows.
                  int w = eigenVectArr[0].width();
                  int h = eigenVectArr[0].height();
                  CvSize size = cvSize(nCols * w, nRows * h);
                  final IplImage bigImg = cvCreateImage(
                          size, // size
                          IPL_DEPTH_8U, // depth, 8-bit greyscale UCHAR image
                          1); // channels
                  for (int i = 0; i < nEigens; i++) {
                      // Get the eigenface image.
                      IplImage byteImg = convertFloatImageToUcharImage(eigenVectArr[i]);
                      // Paste it into the correct position.
                      int x = w * (i % COLUMNS);
                      int y = h * (i / COLUMNS);
                      CvRect ROI = cvRect(x, y, w, h);
                      cvSetImageROI(
                              bigImg, // image
                              ROI); // rect
                      cvCopy(
                              byteImg, // src
                              bigImg, // dst
                              null); // mask
                      cvResetImageROI(bigImg);
                      cvReleaseImage(byteImg);
                  }
                  cvSaveImage(
                          "/mnt/sdcard/Android/data/com.googlecode.javacv.facepreview/files/out_eigenfaces.bmp", // filename
                          bigImg); // image
                  cvReleaseImage(bigImg);
              }
          }

          /** Converts the given float image to an unsigned character image.
           *
           * @param srcImg the given float image
           * @return the unsigned character image
           */
          private IplImage convertFloatImageToUcharImage(IplImage srcImg) {
              IplImage dstImg;
              if ((srcImg != null) && (srcImg.width() > 0 && srcImg.height() > 0)) {
                  // Spread the 32-bit floating point pixels to fit within the 8-bit pixel range.
                  CvPoint minloc = new CvPoint();
                  CvPoint maxloc = new CvPoint();
                  double[] minVal = new double[1];
                  double[] maxVal = new double[1];
                  cvMinMaxLoc(srcImg, minVal, maxVal, minloc, maxloc, null);
                  // Deal with NaN and extreme values, since the DFT seems to give some NaN results.
                  if (minVal[0] < -1e30) {
                      minVal[0] = -1e30;
                  }
                  if (maxVal[0] > 1e30) {
                      maxVal[0] = 1e30;
                  }
                  if (maxVal[0] - minVal[0] == 0.0f) {
                      maxVal[0] = minVal[0] + 0.001; // remove potential divide-by-zero errors
                  }
                  // Convert the format.
                  dstImg = cvCreateImage(cvSize(srcImg.width(), srcImg.height()), 8, 1);
                  cvConvertScale(srcImg, dstImg, 255.0 / (maxVal[0] - minVal[0]), -minVal[0] * 255.0 / (maxVal[0] - minVal[0]));
                  return dstImg;
              }
              return null;
          }

          /** Finds the most likely person based on a detection. Returns the index, and stores the confidence value into pConfidencePointer.
           *
           * @param projectedTestFace the projected test face
           * @param pConfidencePointer a pointer that receives the confidence value
           * @return the index of the nearest training face
           */
          private int findNearestNeighbor(float projectedTestFace[], FloatPointer pConfidencePointer) {
              double leastDistSq = Double.MAX_VALUE;
              int i = 0;
              int iTrain = 0;
              int iNearest = 0;

              LOGGER.info("................");
              LOGGER.info("find nearest neighbor from " + nTrainFaces + " training faces");
              for (iTrain = 0; iTrain < nTrainFaces; iTrain++) {
                  //LOGGER.info("considering training face " + (iTrain + 1));
                  double distSq = 0;

                  for (i = 0; i < nEigens; i++) {
                      //LOGGER.debug("  projected test face distance from eigenface " + (i + 1) + " is " + projectedTestFace[i]);

                      float projectedTrainFaceDistance = (float) projectedTrainFaceMat.get(iTrain, i);
                      float d_i = projectedTestFace[i] - projectedTrainFaceDistance;
                      distSq += d_i * d_i; // / eigenValMat.data_fl().get(i); // Mahalanobis distance (might give better results than Euclidean distance)
                      //if (iTrain < 5) {
                      //    LOGGER.info("  ** projected training face " + (iTrain + 1) + " distance from eigenface " + (i + 1) + " is " + projectedTrainFaceDistance);
                      //    LOGGER.info("  distance between them " + d_i);
                      //    LOGGER.info("  distance squared " + distSq);
                      //}
                  }

                  if (distSq < leastDistSq) {
                      leastDistSq = distSq;
                      iNearest = iTrain;
                      LOGGER.info("  training face " + (iTrain + 1) + " is the new best match, least squared distance: " + leastDistSq);
                  }
              }

              // Return the confidence level based on the Euclidean distance,
              // so that similar images should give a confidence between 0.5 and 1.0,
              // and very different images should give a confidence between 0.0 and 0.5.
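              // Illustrative numeric check of the formula below (our numbers, not from the original
              // sample): an exact match (leastDistSq == 0) gives confidence 1.0, and the confidence
              // drops to 0.5 once leastDistSq reaches 127.5 * 127.5 * nTrainFaces * nEigens.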
              float pConfidence = (float) (1.0f - Math.sqrt(leastDistSq / (float) (nTrainFaces * nEigens)) / 255.0f);
              pConfidencePointer.put(pConfidence);

              LOGGER.info("training face " + (iNearest + 1) + " is the final best match, confidence " + pConfidence);
              return iNearest;
          }

          /** Returns a string representation of the given float array.
           *
           * @param floatArray the given float array
           * @return a string representation of the given float array
           */
          private String floatArrayToString(final float[] floatArray) {
              final StringBuilder stringBuilder = new StringBuilder();
              boolean isFirst = true;
              stringBuilder.append('[');
              for (int i = 0; i < floatArray.length; i++) {
                  if (isFirst) {
                      isFirst = false;
                  } else {
                      stringBuilder.append(", ");
                  }
                  stringBuilder.append(floatArray[i]);
              }
              stringBuilder.append(']');

              return stringBuilder.toString();
          }

          /** Returns a string representation of the given float pointer.
           *
           * @param floatPointer the given float pointer
           * @return a string representation of the given float pointer
           */
          private String floatPointerToString(final FloatPointer floatPointer) {
              final StringBuilder stringBuilder = new StringBuilder();
              boolean isFirst = true;
              stringBuilder.append('[');
              for (int i = 0; i < floatPointer.capacity(); i++) {
                  if (isFirst) {
                      isFirst = false;
                  } else {
                      stringBuilder.append(", ");
                  }
                  stringBuilder.append(floatPointer.get(i));
              }
              stringBuilder.append(']');

              return stringBuilder.toString();
          }

          /** Returns a string representation of the given one-channel CvMat object.
           *
           * @param cvMat the given CvMat object
           * @return a string representation of the given CvMat object
           */
          public String oneChannelCvMatToString(final CvMat cvMat) {
              // Preconditions
              if (cvMat.channels() != 1) {
                  throw new RuntimeException("illegal argument - CvMat must have one channel");
              }

              final int type = cvMat.type();
              StringBuilder s = new StringBuilder("[ ");
              for (int i = 0; i < cvMat.rows(); i++) {
                  for (int j = 0; j < cvMat.cols(); j++) {
                      if (type == CV_32FC1 || type == CV_32SC1) {
                          s.append(cvMat.get(i, j));
                      } else {
                          throw new RuntimeException("illegal argument - CvMat must have one channel and a type of float or signed integer");
                      }
                      if (j < cvMat.cols() - 1) {
                          s.append(", ");
                      }
                  }
                  if (i < cvMat.rows() - 1) {
                      s.append("\n  ");
                  }
              }
              s.append(" ]");
              return s.toString();
          }

      }
      [/java]
      Android manifest:
      [xml]
      <?xml version="1.0" encoding="utf-8"?>
      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.googlecode.javacv.facepreview"
          android:versionCode="1"
          android:versionName="1.0" >
          <uses-sdk android:minSdkVersion="4" />
          <uses-permission android:name="android.permission.CAMERA" />
          <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
          <uses-permission android:name="android.permission.WAKE_LOCK" />
          <uses-feature android:name="android.hardware.camera" />
          <application android:label="@string/app_name">
              <activity
                  android:name="FacePreview"
                  android:label="@string/app_name"
                  android:screenOrientation="landscape">
                  <intent-filter>
                      <action android:name="android.intent.action.MAIN" />
                      <category android:name="android.intent.category.LAUNCHER" />
                  </intent-filter>
              </activity>
          </application>
      </manifest>
      [/xml]
      You will need haarcascade_frontalface_alt.xml.
      Then create the folder Android/data/com.googlecode.javacv.facepreview/files/ on your SD card.
      Copy your training pictures into it (all pictures must have the same size, 266×320) and create a text file all10.txt that lists, on each line, a person's ID number, name, and image filename, as in the example below (a parsing sketch follows the listing):
      [text]
      1 Filip IMG_5437.JPG
      1 Filip IMG_5438.JPG
      1 Filip IMG_5439.JPG
      1 Filip IMG_5441.JPG
      1 Filip IMG_8738.JPG
      2 Adam IMG_8733.JPG
      3 Jozef P1050186.JPG
      3 Jozef P1050028.JPG
      3 Jozef P1050029.JPG
      [/text]
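
      For reference, here is a minimal sketch of how such a list can be read on Android. The helper name readTrainingList and its structure are illustrative, not part of FaceRecognition.java; it only assumes the file layout above and an Android Context, as in the file-access snippet from the comments:
      [java]
      import android.content.Context;

      import java.io.BufferedReader;
      import java.io.File;
      import java.io.FileReader;
      import java.io.IOException;

      // Assumed helper (not part of FaceRecognition.java): reads lines like
      // "1 Filip IMG_5437.JPG" from all10.txt in the app's external files directory.
      static void readTrainingList(Context context) throws IOException {
          File listFile = new File(context.getExternalFilesDir(null), "all10.txt");
          BufferedReader reader = new BufferedReader(new FileReader(listFile));
          String line;
          while ((line = reader.readLine()) != null) {
              if (line.trim().isEmpty()) {
                  continue; // skip blank lines
              }
              String[] parts = line.trim().split("\\s+"); // ID, name, filename
              int personId = Integer.parseInt(parts[0]);
              String personName = parts[1];
              String imageFileName = parts[2];
              // load the image from the same folder (e.g. with cvLoadImage) and feed
              // personId / personName / imageFileName into the training step
          }
          reader.close();
      }
      [/java]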

      Reply
      1. drndos (Post author)

        Run the application and tap the screen; the program will learn from the images and then you can recognize people.

        Reply
        1. widi

          It uses the eigenface algorithm.
          How can I do it with Fisherfaces?

          Reply
          1. drndos (Post author)

            Here is C++ code with a tutorial; it should not be a problem to port it to Java:
            http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html

            Use Linear Discriminant Analysis (LDA) instead of Principal Component Analysis (PCA).
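
            If your javacv build bundles opencv_contrib, a rough Fisherface sketch could look like this. It is an untested illustration, not code from the tutorial: the training directory and the "label-name.jpg" file naming are made up, and it assumes the opencv_contrib / opencv_highgui bindings shipped with javacv 0.4-era releases:
            [java]
            import static com.googlecode.javacv.cpp.opencv_contrib.*;
            import static com.googlecode.javacv.cpp.opencv_highgui.*;

            import com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer;
            import com.googlecode.javacv.cpp.opencv_core.IplImage;
            import com.googlecode.javacv.cpp.opencv_core.MatVector;

            import java.io.File;

            public class FisherFaceSketch {
                public static void main(String[] args) {
                    // Grayscale training images named "<label>-whatever.jpg" (naming is made up).
                    File[] imageFiles = new File("trainingDir").listFiles();
                    MatVector images = new MatVector(imageFiles.length);
                    int[] labels = new int[imageFiles.length];
                    for (int i = 0; i < imageFiles.length; i++) {
                        images.put(i, cvLoadImage(imageFiles[i].getAbsolutePath(), CV_LOAD_IMAGE_GRAYSCALE));
                        labels[i] = Integer.parseInt(imageFiles[i].getName().split("-")[0]);
                    }
                    // LDA-based recognizer instead of the PCA/eigenface code above.
                    FaceRecognizer recognizer = createFisherFaceRecognizer();
                    recognizer.train(images, labels);
                    IplImage testImage = cvLoadImage("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
                    System.out.println("predicted label: " + recognizer.predict(testImage));
                }
            }
            [/java]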

        2. Anoop

          I copied your face matching code. When I ran it on the device, it crashed as soon as a face was detected. What could the probable issue be?

          Reply
      2. Anoop

        The code runs until it detects a face. Whenever a face is detected in the camera preview, it crashes; the error shown in logcat is OpenCV Bad Argument, Different Sizes of Images, etc.

        Reply
        1. drndos (Post author)

          Hello, the learning pictures must be 266×320.
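
          If you cannot resize the photos by hand, scaling them in code before training should also work. This is a minimal sketch (not from the tutorial code), assuming src is an already-loaded IplImage and the usual opencv_core / opencv_imgproc static imports:
          [java]
          // Scale an already-loaded image to the 266x320 size the training step expects.
          IplImage resized = cvCreateImage(cvSize(266, 320), src.depth(), src.nChannels());
          cvResize(src, resized, CV_INTER_LINEAR); // bilinear interpolation
          [/java]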

          Reply
  6. Tin

    Hi Pro,

    I want to build an application, “Encrypt Data Using Biometrics (Face)”.
    Can I reuse FacePreview to detect the face and FaceRecognition to extract features from it, and then use the extracted features to protect the encryption key?

    How can I do that?
    Thanks

    Reply
    1. drndos (Post author)

      Hello, the problem with this approach is that every picture is different. FaceRecognition actually only computes the distance between two faces and chooses the lowest one.
      There must be another approach to this; I am currently looking for it 🙂

      Reply
  7. Jose Luis

    Hello! I followed your steps, but I get “Application has stopped unexpectedly”, and logcat shows: Fatal exception : main …… at com.googlecode.javacvpp.Loader.load….. I am leaving a link where you can check my project. I hope for your help! Thanks https://www.dropbox.com/sh/dmz00l96h8vnsaw/WPukzC03CL

    Reply
  8. Andy

    Hello drndos, I followed your steps to build and run the app, but it shows “Unfortunately, MainActivity has stopped” in my AVD. When I created the Android project from NetBeans, there was a file named MainActivity.java; should I remove this file or just leave it?

    Reply
  9. Denny

    Hi Pro,

    I have followed the steps exactly as you posted, but I couldn't run the application on my AVD. The error message was “Unfortunately, MainActivity has stopped”.

    Your reply would be greatly appreciated.

    Reply
  10. Mahesh

    Thanks! At last I got something that actually works. Good job 🙂

    Reply
  11. André

    Hi, I can't solve the errors with this function:
    cvEigenDecomposite(
        trainingFaceImgArr[i], // obj
        nEigens, // nEigObjs
        eigenVectArr, // eigInput (Pointer)
        0, // ioFlags
        null, // userData (Pointer)
        pAvgTrainImg, // avg
        floatPointer); // coeffs (FloatPointer)

    I get this message from Eclipse:
    The method cvEigenDecomposite(opencv_core.IplImage, int, Pointer, int, Pointer, opencv_core.IplImage, FloatPointer) in the type opencv_legacy is not applicable for the arguments (opencv_core.IplImage, int, opencv_core.IplImage[], int, null, opencv_core.IplImage, FloatPointer).

    Could you help me, please?

    Reply
  12. vishnu

    Thank you for such a great tutorial. How do I set the app to use only the front camera?

    Reply
  13. reem

    This link does not work:
    http://www.nbandroid.org/p/installation.html
    It says “Sorry, the page you were looking for in this blog does not exist.”

    Reply
    1. drndos (Post author)

      Thank you, fixed

      Reply
  14. Ahsan

    Sir,
    I am working on face recognition in Android with Eclipse; how can we use it?

    Reply
  15. Dave

    Thanks!!! This helps a lot.

    Reply
  16. Sachin Tibrewal

    Thanks a ton!!! You really have my blessings.

    Reply
  17. aru

    How do I create the txt file containing the ID, name, and image filenames?

    Reply
  18. Danny

    In Eclipse I get an exception on both the Android emulator and a physical device: “Could not extract the classifier file from Java resource”. How can I fix it?

    Reply
  19. Babar

    Thanks, did you find any way for actual facial recognition?

    Reply
