Face API - v1.0 - Cognitive Services APIs Reference - Microsoft
Face - Detect. Detect human faces in an image, return face rectangles, and optionally faceIds, landmarks, and attributes. No image will be stored.
POST Face - Detect
POST Face - FindSimilar
POST Face - Group
POST Face - Identify
POST Face - Verify
POST FaceList - AddFace
PUT FaceList - Create
DELETE FaceList - Delete
DELETE FaceList - DeleteFace
GET FaceList - Get
GET FaceList - List
PATCH FaceList - Update
POST LargeFaceList - AddFace
PUT LargeFaceList - Create
DELETE LargeFaceList - Delete
DELETE LargeFaceList - DeleteFace
GET LargeFaceList - Get
GET LargeFaceList - GetFace
GET LargeFaceList - GetTrainingStatus
GET LargeFaceList - List
GET LargeFaceList - ListFace
POST LargeFaceList - Train
PATCH LargeFaceList - Update
PATCH LargeFaceList - UpdateFace
PUT LargePersonGroup - Create
DELETE LargePersonGroup - Delete
GET LargePersonGroup - Get
GET LargePersonGroup - GetTrainingStatus
GET LargePersonGroup - List
POST LargePersonGroup - Train
PATCH LargePersonGroup - Update
POST LargePersonGroupPerson - AddFace
POST LargePersonGroupPerson - Create
DELETE LargePersonGroupPerson - Delete
DELETE LargePersonGroupPerson - DeleteFace
GET LargePersonGroupPerson - Get
GET LargePersonGroupPerson - GetFace
GET LargePersonGroupPerson - List
PATCH LargePersonGroupPerson - Update
PATCH LargePersonGroupPerson - UpdateFace
PUT PersonGroup - Create
DELETE PersonGroup - Delete
GET PersonGroup - Get
GET PersonGroup - GetTrainingStatus
GET PersonGroup - List
POST PersonGroup - Train
PATCH PersonGroup - Update
POST PersonGroupPerson - AddFace
POST PersonGroupPerson - Create
DELETE PersonGroupPerson - Delete
DELETE PersonGroupPerson - DeleteFace
GET PersonGroupPerson - Get
GET PersonGroupPerson - GetFace
GET PersonGroupPerson - List
PATCH PersonGroupPerson - Update
PATCH PersonGroupPerson - UpdateFace
POST Snapshot - Apply
DELETE Snapshot - Delete
GET Snapshot - Get
GET Snapshot - GetOperationStatus
GET Snapshot - List
POST Snapshot - Take
PATCH Snapshot - Update
API definition
Swagger
WADL
Face API - v1.0
This API is currently available in:
Australia East - australiaeast.api.cognitive.microsoft.com
Brazil South - brazilsouth.api.cognitive.microsoft.com
Canada Central - canadacentral.api.cognitive.microsoft.com
Central India - centralindia.api.cognitive.microsoft.com
Central US - centralus.api.cognitive.microsoft.com
East Asia - eastasia.api.cognitive.microsoft.com
East US - eastus.api.cognitive.microsoft.com
East US 2 - eastus2.api.cognitive.microsoft.com
France Central - francecentral.api.cognitive.microsoft.com
Japan East - japaneast.api.cognitive.microsoft.com
Japan West - japanwest.api.cognitive.microsoft.com
Korea Central - koreacentral.api.cognitive.microsoft.com
North Central US - northcentralus.api.cognitive.microsoft.com
North Europe - northeurope.api.cognitive.microsoft.com
South Africa North - southafricanorth.api.cognitive.microsoft.com
South Central US - southcentralus.api.cognitive.microsoft.com
Southeast Asia - southeastasia.api.cognitive.microsoft.com
UK South - uksouth.api.cognitive.microsoft.com
West Central US - westcentralus.api.cognitive.microsoft.com
West Europe - westeurope.api.cognitive.microsoft.com
West US - westus.api.cognitive.microsoft.com
West US 2 - westus2.api.cognitive.microsoft.com
UAE North - uaenorth.api.cognitive.microsoft.com
Face - Detect

Detect human faces in an image, return face rectangles, and optionally faceIds, landmarks, and attributes.

No image will be stored. Only the extracted face feature(s) will be stored on server. The faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, and Face - FindSimilar. The stored face features will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.

Optional parameters include faceId, landmarks, and attributes. Attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some of the results returned for specific attributes may not be highly accurate.

JPEG, PNG, GIF (the first frame), and BMP formats are supported. The allowed image file size is from 1 KB to 6 MB.

The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions larger than 1920x1080 pixels will need a proportionally larger minimum face size.

Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.
For optimal results when querying Face - Identify, Face - Verify, and Face - FindSimilar ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels (100 pixels between eyes).

Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model.

'detection_01': The default detection model for Face - Detect. Recommended for near-frontal face detection. For scenarios with exceptionally large angle (head-pose) faces, occluded faces, or wrong image orientation, the faces in such cases may not be detected.
'detection_02': Detection model released in May 2019 with improved accuracy especially on small, side, and blurry faces. Face attributes and landmarks are disabled if you choose this detection model.
'detection_03': Detection model released in February 2021 with improved accuracy especially on small faces. Face attributes (mask and headPose only) and landmarks are supported if you choose this detection model.
Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, or FindSimilar are needed, please specify the recognition model with the 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01'; if the latest model is needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. For more details, please refer to How to specify a recognition model.

'recognition_01': The default recognition model for Face - Detect. All faceIds created before March 2019 are bonded with this recognition model.
'recognition_02': Recognition model released in March 2019.
'recognition_03': Recognition model released in May 2020.
'recognition_04': Recognition model released in February 2021. 'recognition_04' is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'.
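As an illustration of how the model-selection parameters above fit into a detect call, here is a minimal Python sketch that builds the request without sending it. The helper name `build_detect_request` is ours, and the endpoint and subscription key are placeholders; the resulting pieces can be passed to any HTTP client.

```python
# Sketch: assemble a Face - Detect request with explicit model choices.
# build_detect_request is an illustrative helper, not part of the API.

def build_detect_request(endpoint, subscription_key, image_url,
                         recognition_model="recognition_04",
                         detection_model="detection_03",
                         return_face_id=True):
    """Return (url, headers, params, body) for a Face - Detect call."""
    url = f"https://{endpoint}/face/v1.0/detect"
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    params = {
        # Booleans are sent as lowercase strings in the query string.
        "returnFaceId": str(return_face_id).lower(),
        "recognitionModel": recognition_model,
        "detectionModel": detection_model,
    }
    body = {"url": image_url}
    return url, headers, params, body

url, headers, params, body = build_detect_request(
    "westus.api.cognitive.microsoft.com", "{subscription key}",
    "http://example.com/1.jpg")
```

With e.g. the `requests` library, the actual call would then be `requests.post(url, headers=headers, params=params, json=body)`.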
Http Method
POST
Select the testing console in the region where you created your resource:
West US
West US 2
East US
East US 2
West Central US
South Central US
West Europe
North Europe
Southeast Asia
East Asia
Australia East
Brazil South
Canada Central
Central India
UK South
Japan East
Central US
France Central
Korea Central
Japan West
North Central US
South Africa North
UAE North
Norway East
West US 3
Jio India West
Request URL
https://{endpoint}/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel][&faceIdTimeToLive]
Request parameters

returnFaceId
(optional)
boolean
Return faceIds of the detected faces or not. The default value is true.

returnFaceLandmarks
(optional)
boolean
Return face landmarks of the detected faces or not. The default value is false.

returnFaceAttributes
(optional)
string
Analyze and return one or more specified face attributes in a comma-separated string like "returnFaceAttributes=age,gender". Supported face attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Face attribute analysis has additional computational and time cost.

recognitionModel
(optional)
string
The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include "recognition_01", "recognition_02", "recognition_03" and "recognition_04". The default value is "recognition_01". "recognition_04" is recommended since its accuracy is improved on faces wearing masks compared with "recognition_03", and its overall accuracy is improved compared with "recognition_01" and "recognition_02".

returnRecognitionModel
(optional)
boolean
Return 'recognitionModel' or not. The default value is false.

detectionModel
(optional)
string
The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include "detection_01", "detection_02" and "detection_03". The default value is "detection_01".

faceIdTimeToLive
(optional)
integer
The number of seconds for the faceId being cached. The supported range is from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours).
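The enumerated and numeric constraints in the table above can be checked client-side before a request is sent, which turns a 400 BadArgument round trip into a local error. A small illustrative helper (the function and its name are ours, not part of the API):

```python
# Illustrative client-side validation mirroring the parameter table above.
# The Face service performs the authoritative validation server-side.

VALID_RECOGNITION_MODELS = {"recognition_01", "recognition_02",
                            "recognition_03", "recognition_04"}
VALID_DETECTION_MODELS = {"detection_01", "detection_02", "detection_03"}

def validate_detect_params(recognition_model="recognition_01",
                           detection_model="detection_01",
                           face_id_time_to_live=86400):
    """Raise ValueError for values the detect endpoint would reject."""
    if recognition_model not in VALID_RECOGNITION_MODELS:
        raise ValueError(f"'recognitionModel' is invalid: {recognition_model!r}")
    if detection_model not in VALID_DETECTION_MODELS:
        raise ValueError(f"'detectionModel' is invalid: {detection_model!r}")
    if not 60 <= face_id_time_to_live <= 86400:
        raise ValueError("faceIdTimeToLive must be between 60 and 86400 seconds")
    return True
```

The defaults match the documented ones, so calling the helper with no arguments succeeds.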
Request headers

Content-Type
string
Media type of the body sent to the API.

Ocp-Apim-Subscription-Key
string
Subscription key which provides access to this API. Found in your Cognitive Services accounts.
Request body

To detect faces in an image specified by URL (or by binary data).

JSON fields in the request body:

Fields	Type	Description
url	String	URL of input image.

application/json
{
  "url": "http://example.com/1.jpg"
}

application/octet-stream
[binary data]
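The two body variants above must be paired with the matching Content-Type (see also Response 415 below). A brief Python sketch of preparing either form; the helper names are ours:

```python
import json

# Sketch: prepare the two Face - Detect request-body variants.
# json_body / binary_body are illustrative helpers, not API names.

def json_body(image_url):
    """Body and Content-Type for an image specified by URL."""
    return json.dumps({"url": image_url}), "application/json"

def binary_body(image_bytes):
    """Body and Content-Type for a locally supplied image."""
    return image_bytes, "application/octet-stream"

body, content_type = json_body("http://example.com/1.jpg")
```

For a local file, `binary_body(open("photo.jpg", "rb").read())` (the path is a placeholder) yields the octet-stream form.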
Response 200

A successful call returns an array of face entries ranked by face rectangle size in descending order. An empty response indicates no faces detected. A face entry may contain the following values depending on input parameters:

Fields
Type
Description

faceId
String
Unique faceId of the detected face, created by the detection API; it will expire 24 hours after the detection call. To return this, the "returnFaceId" parameter must be true.

recognitionModel
String
The 'recognitionModel' associated with this faceId. This is only returned when 'returnRecognitionModel' is explicitly set as true.

faceRectangle
Object
A rectangle area for the face location on the image.

faceLandmarks
Object
An array of 27-point face landmarks pointing to the important positions of face components. To return this, the "returnFaceLandmarks" parameter must be true.

faceAttributes
Object
Face attributes:
age: an estimated "visual age" number in years. It is how old a person looks rather than the actual biological age.
gender: male or female.
smile: smile intensity, a number between [0,1].
facialHair: return lengths in three facial hair areas: moustache, beard and sideburns. The length is a number between [0,1]; 0 for no facial hair in this area, 1 for long or very thick facial hair in this area.
headPose: 3-D roll/yaw/pitch angles for face direction.
glasses: glasses type. Values include 'NoGlasses', 'ReadingGlasses', 'Sunglasses', 'SwimmingGoggles'.
emotion: emotion intensity, including neutral, anger, contempt, disgust, fear, happiness, sadness and surprise.
hair: group of hair values indicating whether the hair is visible, bald, and hair color if hair is visible.
makeup: whether eye and lip areas are made-up or not.
accessories: accessories around the face, including 'headwear', 'glasses' and 'mask'. An empty array means no accessories detected. Note this is after a face is detected; a large mask could result in no face being detected.
blur: whether the face is blurry or not. Level returns 'Low', 'Medium' or 'High'. Value returns a number between [0,1]; the larger, the blurrier.
exposure: face exposure level. Level returns 'GoodExposure', 'OverExposure' or 'UnderExposure'.
noise: noise level of face pixels. Level returns 'Low', 'Medium' or 'High'. Value returns a number between [0,1]; the larger, the noisier.
occlusion: whether each facial area is occluded, including forehead, eyes and mouth.
mask: whether the face is wearing a mask. Mask type returns 'noMask', 'faceMask', 'otherMaskOrOcclusion', or 'uncertain'. Value returns a boolean 'noseAndMouthCovered' indicating whether the nose and mouth are covered.
qualityForRecognition: the overall image quality regarding whether the image being used in the detection is of sufficient quality to attempt face recognition on. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios. The attribute is only available when using any combination of detection models detection_01 or detection_03, and recognition models recognition_03 or recognition_04.
application/json
[
{
"faceId":"c5c24a82-6845-4031-9d5d-978df9175426",
"recognitionModel":"recognition_03",
"faceRectangle":{
"width":78,
"height":78,
"left":394,
"top":54
},
"faceLandmarks":{
"pupilLeft":{
"x":412.7,
"y":78.4
},
"pupilRight":{
"x":446.8,
"y":74.2
},
"noseTip":{
"x":437.7,
"y":92.4
},
"mouthLeft":{
"x":417.8,
"y":114.4
},
"mouthRight":{
"x":451.3,
"y":109.3
},
"eyebrowLeftOuter":{
"x":397.9,
"y":78.5
},
"eyebrowLeftInner":{
"x":425.4,
"y":70.5
},
"eyeLeftOuter":{
"x":406.7,
"y":80.6
},
"eyeLeftTop":{
"x":412.2,
"y":76.2
},
"eyeLeftBottom":{
"x":413.0,
"y":80.1
},
"eyeLeftInner":{
"x":418.9,
"y":78.0
},
"eyebrowRightInner":{
"x":4.8,
"y":69.7
},
"eyebrowRightOuter":{
"x":5.5,
"y":68.5
},
"eyeRightInner":{
"x":441.5,
"y":75.0
},
"eyeRightTop":{
"x":446.4,
"y":71.7
},
"eyeRightBottom":{
"x":447.0,
"y":75.3
},
"eyeRightOuter":{
"x":451.7,
"y":73.4
},
"noseRootLeft":{
"x":428.0,
"y":77.1
},
"noseRootRight":{
"x":435.8,
"y":75.6
},
"noseLeftAlarTop":{
"x":428.3,
"y":89.7
},
"noseRightAlarTop":{
"x":442.2,
"y":87.0
},
"noseLeftAlarOutTip":{
"x":424.3,
"y":96.4
},
"noseRightAlarOutTip":{
"x":446.6,
"y":92.5
},
"upperLipTop":{
"x":437.6,
"y":105.9
},
"upperLipBottom":{
"x":437.6,
"y":108.2
},
"underLipTop":{
"x":436.8,
"y":111.4
},
"underLipBottom":{
"x":437.3,
"y":114.5
}
},
"faceAttributes":{
"age":71.0,
"gender":"male",
"smile":0.88,
"facialHair":{
"moustache":0.8,
"beard":0.1,
"sideburns":0.02
},
"glasses":"sunglasses",
"headPose":{
"roll":2.1,
"yaw":3,
"pitch":1.6
},
"emotion":{
"anger":0.575,
"contempt":0,
"disgust":0.006,
"fear":0.008,
"happiness":0.394,
"neutral":0.013,
"sadness":0,
"surprise":0.004
},
"hair":{
"bald":0.0,
"invisible":false,
"hairColor":[
{"color":"brown","confidence":1.0},
{"color":"blond","confidence":0.88},
{"color":"black","confidence":0.48},
{"color":"other","confidence":0.11},
{"color":"gray","confidence":0.07},
{"color":"red","confidence":0.03}
]
},
"makeup":{
"eyeMakeup":true,
"lipMakeup":false
},
"occlusion":{
"foreheadOccluded":false,
"eyeOccluded":false,
"mouthOccluded":false
},
"accessories":[
{"type":"headWear","confidence":0.99},
{"type":"glasses","confidence":1.0},
{"type":"mask","confidence":0.87}
],
"blur":{
"blurLevel":"Medium",
"value":0.51
},
"exposure":{
"exposureLevel":"GoodExposure",
"value":0.55
},
"noise":{
"noiseLevel":"Low",
"value":0.12
},
"qualityForRecognition":"high"
}
}
]
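Tying the response schema back to the quality guidance earlier (faces of at least 200x200 pixels, roughly 100 pixels between the eyes, and adequate qualityForRecognition), a face entry like the sample above can be screened before being used for Identify or Verify. A small illustrative filter; the helper name and thresholds-as-parameters are ours:

```python
import math

# Illustrative post-processing of a Face - Detect face entry.
# is_good_for_recognition is our helper, not part of the API.

def is_good_for_recognition(face, min_side=200, min_eye_dist=100.0):
    """Apply the documented guidance: face rectangle at least min_side on
    each side, at least min_eye_dist pixels between pupils (when landmarks
    are present), and qualityForRecognition of at least 'medium'."""
    rect = face["faceRectangle"]
    if min(rect["width"], rect["height"]) < min_side:
        return False
    landmarks = face.get("faceLandmarks")
    if landmarks:
        left, right = landmarks["pupilLeft"], landmarks["pupilRight"]
        eye_dist = math.hypot(right["x"] - left["x"], right["y"] - left["y"])
        if eye_dist < min_eye_dist:
            return False
    quality = face.get("faceAttributes", {}).get("qualityForRecognition")
    return quality in ("medium", "high")
```

Note that the 78x78 face in the sample response above would be rejected by this check, matching the guidance that such faces are suboptimal for Identify/Verify/FindSimilar.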
Response 400

Error code and message returned in JSON:

Error Code	Error Message Description
BadArgument	JSON parsing error. Bad or unrecognizable request JSON body.
BadArgument	Invalid argument returnFaceAttributes. Supported values are: age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, noise and mask in a comma-separated format.
BadArgument	'recognitionModel' is invalid.
BadArgument	'detectionModel' is invalid.
BadArgument	'returnFaceAttributes' is not supported by detection_02.
BadArgument	'returnLandmarks' is not supported by detection_02.
InvalidURL	Invalid image format or URL. Supported formats include JPEG, PNG, GIF (the first frame) and BMP.
InvalidURL	Failed to download image from the specified URL. Remote server error returned.
InvalidImage	Decoding error, image format unsupported.
InvalidImageSize	Image size is too small. The valid image file size should be larger than or equal to 1 KB.
InvalidImageSize	Image size is too big. The valid image file size should be no larger than 6 MB.
application/json
{
"error":{
"code":"BadArgument",
"message":"Request body is invalid."
}
}
Response 401

Error code and message returned in JSON:

Error Code	Error Message Description
Unspecified	Invalid subscription key or user/plan is blocked.
application/json
{
"error":{
"code":"Unspecified",
"message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Response 403
application/json
{
"error":{
"statusCode":403,
"message":"Out of call volume quota. Quota will be replenished in 2 days."
}
}
Response 408

Operation exceeds maximum execution time.
application/json
{
"error":{
"code":"OperationTimeOut",
"message":"Request Timeout."
}
}
Response 415

Unsupported media type error. Content-Type is not in the allowed types:
For an image URL, Content-Type should be application/json
For a local image, Content-Type should be application/octet-stream
application/json
{
"error":{
"code":"BadArgument",
"message":"Invalid Media Type."
}
}
Response 429
application/json
{
"error":{
"statusCode":429,
"message":"Rate limit is exceeded. Try again in 26 seconds."
}
}
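A client will typically branch on these status codes; on 429 it can back off for the interval suggested in the message. Below is an illustrative parser for that interval. The helper and regex are ours, and the "Try again in N seconds" wording is taken from the sample above rather than a guaranteed contract, so a caller should fall back to a default delay when parsing fails.

```python
import re

# Illustrative extraction of the retry hint from a 429 error body.
# Assumes the "Try again in N seconds." wording shown above, which the
# service does not formally guarantee.

def retry_after_seconds(error_body, default=None):
    """Return the suggested retry delay in seconds, or `default`."""
    message = error_body.get("error", {}).get("message", "")
    match = re.search(r"Try again in (\d+) seconds", message)
    return int(match.group(1)) if match else default

body_429 = {"error": {"statusCode": 429,
                      "message": "Rate limit is exceeded. Try again in 26 seconds."}}
```

With the sample body above, `retry_after_seconds(body_429)` yields 26; an unrecognized message yields the supplied default.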
Code samples
Curl
C#
Java
JavaScript
ObjC
PHP
Python
Ruby
@ECHO OFF

curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes={string}&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" ^
-H "Content-Type: application/json" ^
-H "Ocp-Apim-Subscription-Key: {subscription key}" ^
--data-ascii "{body}"
using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Web;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            MakeRequest();
            Console.WriteLine("Hit ENTER to exit...");
            Console.ReadLine();
        }

        static async void MakeRequest()
        {
            var client = new HttpClient();
            var queryString = HttpUtility.ParseQueryString(string.Empty);

            // Request headers
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");

            // Request parameters
            queryString["returnFaceId"] = "true";
            queryString["returnFaceLandmarks"] = "false";
            queryString["returnFaceAttributes"] = "{string}";
            queryString["recognitionModel"] = "recognition_04";
            queryString["returnRecognitionModel"] = "false";
            queryString["detectionModel"] = "detection_03";
            queryString["faceIdTimeToLive"] = "86400";
            var uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?" + queryString;

            HttpResponseMessage response;

            // Request body
            byte[] byteData = Encoding.UTF8.GetBytes("{body}");

            using (var content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
                response = await client.PostAsync(uri, content);
            }
        }
    }
}