Face Reading through Face API Cognitive Service

Posted by Rajnilari2015 in the Azure category, for the Beginner level.

This is a step-by-step article in which we will use the Face API to read facial expressions.


 Download source code for Face Reading through Face API Cognitive Service

Introduction

Microsoft has come up with Cognitive Services: a set of machine learning APIs and services developed to solve problems in the field of Artificial Intelligence (AI).

A few words from Microsoft docs

Microsoft Cognitive Services (formerly Project Oxford) are a set of APIs, SDKs and services available to developers to make their applications more intelligent, engaging and discoverable. Microsoft Cognitive Services expands on Microsoft’s evolving portfolio of machine learning APIs and enables developers to easily add intelligent features – such as emotion and video detection; facial, speech and vision recognition; and speech and language understanding – into their applications. Our vision is for more personal computing experiences and enhanced productivity aided by systems that increasingly can see, hear, speak, understand and even begin to reason.

The Cognitive Services APIs are classified as:

  1. Vision API [ Computer Vision API, Face API ]
  2. Speech API [ Bing Speech API, Speaker Recognition API ]
  3. Language API [ Bing Spell Check API v7, Text Analytics API ]
  4. Knowledge API [ Custom Decision Service ]
  5. Search API [ Bing Search APIs v7, Bing Autosuggest API v7, Bing Custom Search API, Bing Entity Search API ]

In this article, we will use the Face API, step by step, to read facial expressions.

Let's do the experiment

Step 1

Let us first get the Face API key(s). Create a Face resource from the Azure portal; once it is deployed, the key and the endpoint URL are available on the resource's page.

Step 2

Open Visual Studio 2017 and create a new WPF application (named WpfApp2 here, to match the XAML namespace below).

Step 3

Make the design as under. Note that the Button's Click and the Image's MouseMove events are wired to handlers that we will implement in the code-behind (a sketch of both appears at the end of Step 5).

<Window x:Class="WpfApp2.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfApp2"
        mc:Ignorable="d"
        Title="MainWindow" Height="463.039" Width="525">
    <Grid>
        <Button Content="UploadPhoto" HorizontalAlignment="Left" Margin="110,113,0,0" VerticalAlignment="Top" Width="100" Click="btnUploadPhoto_Click"/>

        <Image x:Name="imgFacePhoto" Stretch="Uniform" Margin="0,0,-702.333,49.667" MouseMove="ImgFacePhoto_MouseMove" />

        <StatusBar VerticalAlignment="Bottom">
            <StatusBarItem>
                <TextBlock Name="txtFacialDescription" />
            </StatusBarItem>
        </StatusBar>
        <TextBlock Name="txtDescription" HorizontalAlignment="Left" Margin="10,157,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="240" Width="412"/>

    </Grid>
</Window>

Step 4

From the NuGet Package Manager Console, run the command below:

PM > Install-Package Microsoft.ProjectOxford.Face
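
This package provides the client types used throughout the rest of the article. As a minimal sketch (the exact list depends on your project template), the code-behind in MainWindow.xaml.cs is assumed to start with using directives along these lines:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using Microsoft.ProjectOxford.Common.Contract; // EmotionScores
using Microsoft.ProjectOxford.Face;            // IFaceServiceClient, FaceServiceClient
using Microsoft.ProjectOxford.Face.Contract;   // Face, HeadPose, FacialHair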

Step 5

In the code-behind, the most important part is the client declaration:

 private readonly IFaceServiceClient faceServiceClient =
           new FaceServiceClient("Service Key", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0");

The above line creates the client that connects to, authenticates with, and queries the web service at the remote endpoint; replace "Service Key" with the key obtained in Step 1. The client exposes a method DetectAsync that accepts the image as a stream, along with the information being sought, and returns a Task of an array of Face objects:

Task<Contract.Face[]> DetectAsync(Stream imageStream, bool returnFaceId = true, bool returnFaceLandmarks = false, IEnumerable<FaceAttributeType> returnFaceAttributes = null);
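
Under the hood, DetectAsync wraps the Face API v1.0 detect REST endpoint. As a rough, self-contained sketch of the equivalent raw call (assumptions: a local test image named photo.jpg, and your own key in place of "Service Key"):

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class FaceDetectRestSketch
{
    static async Task Main()
    {
        const string url =
            "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect" +
            "?returnFaceId=true&returnFaceLandmarks=false" +
            "&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup";

        using (var client = new HttpClient())
        {
            // The Face API authenticates requests through this subscription-key header.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "Service Key");

            using (var content = new ByteArrayContent(File.ReadAllBytes("photo.jpg")))
            {
                // The image is uploaded as a binary body.
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

                HttpResponseMessage response = await client.PostAsync(url, content);
                Console.WriteLine(await response.Content.ReadAsStringAsync()); // raw JSON array of faces
            }
        }
    }
}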

The following code snippet shows how DetectAsync is invoked through the SDK:

private async Task<Face[]> UploadAndDetectFaces(string imageFilePath)
{
    // The face attributes we want the service to return for the image.
    IEnumerable<FaceAttributeType> faceAttributes =
        new FaceAttributeType[]
        {
            FaceAttributeType.Age,
            FaceAttributeType.Gender,
            FaceAttributeType.HeadPose,
            FaceAttributeType.Smile,
            FaceAttributeType.FacialHair,
            FaceAttributeType.Glasses,
            FaceAttributeType.Emotion,
            FaceAttributeType.Hair,
            FaceAttributeType.Makeup
        };

    try
    {
        // Read the image as a stream.
        using (Stream imageFileStream = File.OpenRead(imageFilePath))
        {
            // Call the Face API.
            Face[] faces = await faceServiceClient.DetectAsync(
                imageFileStream,
                returnFaceId: true,
                returnFaceLandmarks: false,
                returnFaceAttributes: faceAttributes); // the attributes requested above

            return faces;
        }
    }
    catch (Exception e)
    {
        // Display the error and return an empty result.
        MessageBox.Show(e.Message);
        return new Face[0];
    }
}

Now we have received the facial information. The next task is to interpret it, using the helper methods described below.

//Return the head-pose axis (Pitch, Roll or Yaw) with the largest value
private string getHeadPosition(HeadPose headPose)
{
	Dictionary<string, double> dict = new Dictionary<string, double>();
	dict.Add("Pitch", headPose.Pitch);
	dict.Add("Roll", headPose.Roll);
	dict.Add("Yaw", headPose.Yaw);
	return dict.Aggregate((l, r) => l.Value > r.Value ? l : r).Key;           
}

//How the user is smiling: map the smile confidence score (0 to 1) to a description
private string readSmile(double smile)
{
	return
	 (smile) < 0.6 ? "Normal" :
	 (smile) >= 0.6 && (smile) < 0.8 ? "Somewhat smiling" :
	 (smile) >= 0.8 && (smile) < 0.9 ? "Smiling" : "Very happy";
}

//Information about facial hair; Beard, Moustache and Sideburns carry confidence scores from 0 to 1
private string readFacialHairs(FacialHair facialHair)
{           
	List<string> lstFacialHairs = new List<string>();
	if (facialHair.Beard > 0) lstFacialHairs.Add("Beard");
	if (facialHair.Moustache > 0) lstFacialHairs.Add("Moustache");
	if (facialHair.Sideburns > 0) lstFacialHairs.Add("Sideburns");

	return
		(lstFacialHairs.Count == 0) ? "No Facial Hairs" :
		(lstFacialHairs.Count == 1) ? lstFacialHairs.Single() : lstFacialHairs.Aggregate((a, b) => a + " and " + b);
	 
}

//Read the person's dominant emotion, i.e. the one with the highest confidence score
private string readEmotions(EmotionScores emotion)
{
	Dictionary<string, double> dict = new Dictionary<string, double>();
	dict.Add("Anger", emotion.Anger);
	dict.Add("Contempt", emotion.Contempt);
	dict.Add("Disgust", emotion.Disgust);
	dict.Add("Fear", emotion.Fear);
	dict.Add("Happiness", emotion.Happiness);
	dict.Add("Neutral", emotion.Neutral);
	dict.Add("Sadness", emotion.Sadness);
	dict.Add("Surprise", emotion.Surprise);
	return dict.Aggregate((l, r) => l.Value > r.Value ? l : r).Key;
}

The above methods are called as under. We also read additional information such as age, gender, glasses, hair color, and eye and lip makeup.

private string FaceDescription(Face face)
{
	StringBuilder sb = new StringBuilder();

	sb.AppendLine("Face Reading Information");
	sb.AppendLine("------------------------");

	//Read Age
	sb.AppendLine("Age : " + face.FaceAttributes.Age + " years");

	//Read Gender
	sb.AppendLine("Gender : " + face.FaceAttributes.Gender);

	//Read Head Position
	sb.AppendLine("Head Position : " + getHeadPosition(face.FaceAttributes.HeadPose));

	//Read Smile 
	sb.AppendLine("Smiling behaviour : " + readSmile(face.FaceAttributes.Smile));

	//Read Facial hairs 
	sb.AppendLine("Facial hairs : " + readFacialHairs(face.FaceAttributes.FacialHair));

	//Read glasses.
	sb.AppendLine("Glasses : " + face.FaceAttributes.Glasses);

	//Read Emotions 
	sb.AppendLine("Emotions : " + readEmotions(face.FaceAttributes.Emotion));

	//Read Hair Color
	var hairColor = face.FaceAttributes.Hair.HairColor.ToList().FirstOrDefault();
	if(hairColor !=null)
		sb.AppendLine("Hair Color : " + hairColor.Color);

	//Read makeup
	if(face.FaceAttributes.Makeup.EyeMakeup)
		sb.AppendLine("Eye makeup detected");

	if (face.FaceAttributes.Makeup.LipMakeup)
		sb.AppendLine("Lip makeup detected");

	// Return the built string.
	return sb.ToString();
}
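
The XAML in Step 3 wires up btnUploadPhoto_Click and ImgFacePhoto_MouseMove, which the listings above do not show. Here is a minimal sketch of how they might tie everything together; the file filter, the faces field, and the status-bar text are assumptions rather than part of the attached source:

private Face[] faces = new Face[0]; // the last detection result, reused on mouse-over

private async void btnUploadPhoto_Click(object sender, RoutedEventArgs e)
{
    // Let the user pick a photo from disk.
    var openDlg = new Microsoft.Win32.OpenFileDialog { Filter = "Image files|*.jpg;*.jpeg;*.png" };
    if (openDlg.ShowDialog() != true) return;

    // Display the photo and send it to the Face API.
    imgFacePhoto.Source = new System.Windows.Media.Imaging.BitmapImage(new Uri(openDlg.FileName));
    faces = await UploadAndDetectFaces(openDlg.FileName);
    txtDescription.Text = faces.Length > 0 ? FaceDescription(faces[0]) : "No face detected.";
}

private void ImgFacePhoto_MouseMove(object sender, System.Windows.Input.MouseEventArgs e)
{
    // Surface a short summary of the first detected face in the status bar.
    if (faces.Length > 0)
        txtFacialDescription.Text = faces[0].FaceAttributes.Gender + ", about " + faces[0].FaceAttributes.Age + " years";
}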

Step 6

Now let's run the application.

When the mouse pointer hovers over the first image, we get the information below.

When the mouse pointer hovers over the second image, we get the information below.

Reference

  1. Azure Cognitive Services
  2. Azure Face API

Conclusion

In this article, we demonstrated face reading through the Face API Cognitive Service with an example. Hope this helps. Thanks for reading. The zipped source code is attached.

Disclaimer: The images used in this article are for demo purposes only. They might be copyrighted content of their respective owners.


About the Author

Rajnilari2015
Full Name: Niladri Biswas (RNA Team)
Member Level: Platinum
Member Status: Member, Microsoft_MVP, MVP
Member Since: 3/17/2015 2:41:06 AM
Country: India
-- Thanks & Regards, RNA Team



Comments or Responses

Posted by: Annaeverson on: 3/15/2018 | Points: 25
That is really cool, that you have added examples and images)
Posted by: Annaeverson on: 8/7/2018 | Points: 25
Wow! It's really amazing
Posted by: Collinsjordan on: 9/19/2018 | Points: 25
It was useful for me, can I do it for myself?
Posted by: Jordandavid on: 11/3/2018 | Points: 25
Amazingly accommodating post. This is my first time i visit here. I found such an extensive number of captivating stuff in your blog especially its trade. Genuinely it's unprecedented article. Keep it up.
Posted by: Edgefindings on: 10/31/2019 | Points: 25
Face recognition is a risky tech...

