Refman.net
Contents
4.1 Build Environment
4.2
4.3 Redistribution
4.4 Exception Handling
4.5
Tutorial - Overview
Namespace Index
9.1 Package List
10 Class Index
10.1 Class Hierarchy
11 Class Index
11.1 Class List
12 Namespace Documentation
12.10 Package Cognitec.FRsdk.Portrait
12.11 Package Cognitec.FRsdk.Portrait.Feature
12.12 Package Cognitec.FRsdk.Verification
13 Class Documentation
14 Example Documentation
14.1 acquisition.cs
14.2 enroll.cs
14.3 eyesfind.cs
14.4 facefind.cs
14.5 identify.cs
14.6 tracklife.cs
14.7 trackrec.cs
14.8 verify.cs
Chapter 4
4.1
Build Environment
For developing .NET applications, the following parts of the directory tree of the installed FaceVACS-SDK distribution are relevant:

- <install root>          ... the root directory of the FaceVACS-SDK installation
  |
  +- examples             ... location for the examples
  |  |
  |  +- cs                ... C# sources, solution and project files
  |  |
  |  +- x86_32            ... 32 bit binaries, together with noipp runtime lib
  |
  +- lib                  ... the location of the FaceVACS-SDK libraries,
     |                        libfrsdk-<version>.{lib|dll} and libfrsdknet-<version>.dll
     |
     +- x86_32            ... for 32 bit platforms
     |  |
     |  +- msc_7.1_crtdll ... for Visual C++ 7.1 (Visual C++ .NET 2003, .NET Framework 1.1)
     |  |
     |  +- msc_8.0_crtdll ... for Visual C++ 8.0 (Visual C++ .NET 2005, .NET Framework 2.0)
     |
     +- x86_64            ... for 64 bit platforms
        |
        +- msc_8.0_crtdll ... for Visual C++ 8.0 (Visual C++ .NET 2005, .NET Framework 2.0)
The examples can be compiled from the command line, for example:

cd <install root>\examples\cs
csc /reference:..\..\lib\x86_32\msc_8.0_crtdll\libfrsdknet-<version>.dll /out:enroll.exe enroll.cs
Additionally, one can use the provided solution/project files together with the Microsoft Visual Studio .NET environment. The bin directory contains the precompiled example binaries together with a FaceVACS-SDK runtime lib (noipp). To run the example applications, simply call them from the bin directory. If no command line parameters are given, a short help description explains the usage.
4.2
Applications built with an assembly reference have to resolve this reference at runtime. The .NET framework offers two different solutions: either a private assembly or a shared assembly. Regardless of which form is used, the delivered FaceVACS-SDK.NET assembly libfrsdknet-<version>.dll references a FaceVACS-SDK library built for Visual C++ 7.1 and .NET Framework 1.1, or Visual C++ 8.0 and .NET Framework 2.0. This referenced library has to be available in the search path or in the same directory where the FaceVACS-SDK.NET assembly was installed. See the C++ Userguide for the available types of this library (debug/nodebug/ipp/noipp, ...).
Copyright © 2009 by Cognitec Systems GmbH
4.2.1
Private Assembly
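In the private-assembly case, the FaceVACS-SDK.NET assembly is deployed alongside the application executable, so the runtime resolves it from the application's base directory. A hypothetical copy step, using the placeholder paths from the directory layout in section 4.1:

```shell
REM Hypothetical private-assembly deployment sketch (paths are the
REM placeholders from section 4.1, not literal directories): copy the
REM .NET assembly and the native FaceVACS-SDK library it references
REM next to the application binary.
copy "<install root>\lib\x86_32\msc_8.0_crtdll\libfrsdknet-<version>.dll" "<application install root>\bin"
copy "<install root>\lib\x86_32\msc_8.0_crtdll\libfrsdk-<version>.dll" "<application install root>\bin"
```

No registration step is needed for a private assembly; the shared-assembly alternative in the next subsection requires the global assembly cache instead.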
4.2.2
Shared Assembly
To make the assembly globally available, it has to be placed into the global assembly cache. Example using gacutil:
cd <install root>\lib\dotnet\msc_8.0_crtdll
gacutil /i libfrsdknet.dll
4.3
Redistribution
For a complete description of what to deploy, see the Redistribution/Run-Time Environment section of the C++ Userguide.
- <application install root>   ... the root directory of the application installation
  |
  +- frsdk.cfg                 (can reside anywhere, or compiled in)
  |
  +- bin
  |  |
  |  +- <your application>
  |  |
  |  +- <runtime FaceVACS-SDK DLLs/shared libraries>
  |  |
  |  +- <.NET runtime library (libfrsdknet.dll)>
  |
  +- etc
     |
     +- portrait
     |  |
     |  +- *.dat
     |
     +- cara
     |  |
     |  +- *.dat
     |
     +- cmp
     |  |
     |  +- *.dat
     |
     +- ojo
        |
        +- *.dat
Additionally, the libfrsdknet-<version>.dll assembly has to be copied to the bin location of the distributed application directory tree. This directory must also contain the dependent runtime libraries.
4.4
Exception Handling
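Errors from the library surface as ordinary .NET exceptions, so they are handled with regular try/catch blocks. A minimal sketch, assuming the exception types listed in the Cognitec.FRsdk package (chapter 9) derive from System.Exception and that the specific types can be caught before the general one:

```csharp
using System;
using Cognitec.FRsdk; // namespace name taken from the package list in chapter 9

class ErrorHandlingSketch
{
    // Wraps an arbitrary call into the FaceVACS-SDK.NET assembly.
    static void Run(Action sdkCall)
    {
        try
        {
            sdkCall();
        }
        catch (FeatureDisabled e)
        {
            // thrown when a disabled FaceVACS-SDK feature is requested or accessed
            Console.Error.WriteLine("feature disabled: " + e.Message);
        }
        catch (LimitExceeded e)
        {
            // thrown when a configured limit of FaceVACS-SDK is exceeded
            Console.Error.WriteLine("limit exceeded: " + e.Message);
        }
        catch (Cognitec.FRsdk.Exception e)
        {
            // general library error (see section 9.1); LicenseSignatureMismatch
            // is a further type listed there
            Console.Error.WriteLine("FRsdk error: " + e.Message);
        }
    }
}
```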
4.5
Here is a brief listing of the main differences between the .NET and the C++ API.
4.5.1
4.5.2
Reference counting as used by the C++ SDK is not necessary in the .NET library, because .NET manages the lifetime of its objects itself.
Chapter 5
Tutorial - Overview
Currently there is no dedicated tutorial available for .NET. Since the interfaces closely mirror the C++ ones, please refer to the C++ tutorial.
Chapter 6
Namespace Index
6.1
Package List
Chapter 7
Class Index
7.1
Class Hierarchy
Chapter 8
Class Index
8.1
Class List
Here are the classes, structs, unions and interfaces with brief descriptions:
Analyzer (Portrait Characteristics Analyzer; creates Portrait characteristics from annotated images)
AnnotatedImage (Annotated image)
Bmp (Bitmap (BMP) image implementation)
CaptureDevice (Abstract image capturing device)
Characteristics (Portrait Characteristics)
Compliance (Compliance assessment results)
Configuration (Opaque configuration object of the FaceVACS-SDK library)
Configuration.ProtectedItem (Key/value pair of a protected key)
Creator (Extracts a Full Frontal Image from the source image)
Exception (An object of this type is thrown if exceptions occur in the library)
FacialMatchingEngine (Low-level match facilities)
FeatureDisabled (An object of this type is thrown at any time if requesting or accessing a disabled FaceVACS-SDK feature)
Feedback (The feedback from the verification process)
Feedback (The feedback for the identification process)
Feedback (The feedback for the enrollment procedure)
Finder (Face finder)
Finder (Eyes finder)
FIR (FIR - Facial Identification Record)
FIRBuilder (Builds FIRs from serialized representations; use Enrollment.Processor to build FIRs from primary biometric data (face images))
Image (Abstract image)
ImageManipulation (Vignetting)
ImagePropertiesFeedback (Feedback for getting image properties on image loading)
Jpeg (JPEG image support)
Jpeg2000 (JPEG 2000 image support)
Jpeg.Properties (Properties of a JPEG image)
LenseDistortionCorrector (Lens distortion correction)
LicenseSignatureMismatch (License signature mismatch)
LimitExceeded (An object of this type is thrown at any time if a configured limit of FaceVACS-SDK is exceeded)
Location (The Face.Location describes an image location where a face was found)
Location (The Eyes.Location describes a location in the image where eyes within a face have been found)
Match (Named score for a match of two FIRs)
Pgm (PGM/PPM image format support)
Png (PNG (Portable Network Graphics) image format support)
Population (An ordered (in the order of additions by add()) set of named FIRs which represents the population used for identifications)
Position (Continuous two-dimensional coordinates)
Processor (This class represents the interface to the enrollment process)
Processor (This class represents the interface to the verification process)
Processor (This class represents the interface to the identification process)
Rgb (RGB color model)
Sample (A compound input data type containing an image and, optionally, eyes annotation and/or shape data)
SampleEvaluator (Evaluates the quality of a sample (image))
SampleQuality (A description of the biometric quality of a sample (image))
Score (Represents the comparison result between a FIR and the biometric evidence)
ScoreMappings (FAR - FRR score mappings)
Set (Feature assessment results)
Shape (Shape image I/O support)
ShapeImage (Abstract image)
Test (Test for features in a portrait)
Test (Compliance assessment)
Tracker (The Face Tracker locates and tracks faces across a sequence of images efficiently by analyzing the spatial and temporal dependencies between faces in subsequent images)
TrackerLocation (The location of a face being tracked by the face tracker)
WinCaptureDevice (Win32 DirectShow video capture device)
WinCaptureDevice.VideoFormat (Opaque type representing a video format (resolution, bits per pixel, etc.))
Chapter 9
Namespace Documentation
9.1
Package Cognitec.FRsdk
FaceVACS-SDK.NET.
Classes
struct Exception
An object of this type is thrown if exceptions occur in the library.
struct LicenseSignatureMismatch
License signature mismatch.
struct FeatureDisabled
An object of this type is thrown at any time if requesting or accessing a disabled
FaceVACS-SDK feature.
struct LimitExceeded
An object of this type is thrown at any time if a configured limit of FaceVACS-SDK is
exceeded.
class Configuration
Opaque configuration object of the FaceVACS SDK library.
struct Rgb
rgb color model
struct Position
continuous two-dimensional coordinates
interface Image
Abstract image.
interface ShapeImage
Abstract image.
struct Jpeg
JPEG image support.
struct Jpeg2000
JPEG 2000 image support.
struct Pgm
PGM/PPM image format support.
struct Png
PNG (Portable Network Graphics) image format support.
struct Bmp
Bitmap (BMP) image implementation.
struct Shape
Shape Image I/O support.
struct ImageManipulation
Vignetting.
class LenseDistortionCorrector
Lens distortion correction.
interface CaptureDevice
Abstract image capturing device.
class WinCaptureDevice
Win32 DirectShow video capture device.
struct AnnotatedImage
annotated image
struct Sample
Sample A compound input data type containing an image and, optionally, eyes annotation and/or shape data.
struct Score
This class represents a score for representing the comparison result between a FIR
and the biometric evidence.
class ScoreMappings
FAR - FRR Score mappings.
struct SampleQuality
A description of the biometric quality of a sample (image).
class SampleEvaluator
Evaluate the quality of a sample (image).
class FIR
FIR - Facial Identification Record.
class FIRBuilder
Building FIRs from serialized representations. Use Enrollment.Processor to build FIRs from primary biometric data (face images).
class Population
An ordered (in the order of additions by add() ) set of named FIRs which represents
the population used for identifications.
struct Match
a named score for a match of two FIRs
class FacialMatchingEngine
Low level match facilities.
Packages
package ImageIO
additional image properties, image I/O support
package Face
The face finding facility.
package Eyes
The eyes finding facility.
package Portrait
Portrait characteristics and feature tests.
package Enrollment
the namespace for the enrollment facility
package Verification
the namespace for the verification facility
package Identification
the namespace for the identification facility
9.1.1
Detailed Description
FaceVACS-SDK.NET.
Namespace for the .NET FaceVACS-SDK.
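The packages above fit together along one typical pipeline: images are loaded (Jpeg, Png, Bmp, ...), faces and eyes are located (Face.Finder, Eyes.Finder), samples are turned into FIRs (Enrollment.Processor), and FIRs are compared against a Population (Verification, Identification). As orientation only; the constructor shown below is an assumption, not the verified API:

```csharp
// Hypothetical orientation sketch: the class names come from the package
// list above, but every signature here is an assumption, not verified API.
using Cognitec.FRsdk;

class PipelineSketch
{
    static void Sketch()
    {
        // Opaque configuration object of the FaceVACS SDK library (section 9.1);
        // a file-based constructor taking frsdk.cfg is assumed.
        var cfg = new Configuration("frsdk.cfg");

        // A Population is an ordered set of named FIRs used for identification;
        // add() is named in the class list, its exact parameters are not shown
        // in this manual extract, so the call is left as a comment:
        var population = new Population(cfg);
        // population.add(...);

        // FIRs themselves are built from face images by Enrollment.Processor,
        // or restored from serialized form by FIRBuilder (see chapter 10 and
        // the enroll.cs / identify.cs examples in chapter 14).
    }
}
```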
9.2
Package Cognitec.FRsdk.Enrollment
Classes
interface Feedback
The feedback for the enrollment procedure.
class Processor
this class represents the interface to the enrollment process
9.2.1
Detailed Description
9.3
Package Cognitec.FRsdk.Eyes
Classes
struct Location
The Eyes.Location describes a location in the image where eyes within a face have
been found.
class Finder
eyes finder
9.3.1
Detailed Description
9.4
Package Cognitec.FRsdk.Face
Classes
struct Location
The Face.Location describes an image location where a face was found.
class Finder
face finder
struct TrackerLocation
The location of a face being tracked by the face tracker.
class Tracker
The Face Tracker locates and tracks faces across a sequence of images in an efficient
way by analyzing the spatial and temporal dependencies between faces in subsequent
images.
9.4.1
Detailed Description
9.5
Package Cognitec.FRsdk.Identification
Classes
interface Feedback
The feedback for the identification process.
class Processor
this class represents the interface to the identification process
9.5.1
Detailed Description
9.6
Package Cognitec.FRsdk.ImageIO
Classes
class ImagePropertiesFeedback
feedback for getting image properties on image loading
Enumerations
enum ImageColorMode
image color modes
9.6.1
Detailed Description
9.6.2
9.6.2.1
enum ImageColorMode
9.7
Packages
package FullFrontal
Compliance with ISO 19794-5 Full Frontal Image type.
package TokenFace
Support for Token Face Image type as defined in ISO 19794-5 9.2.
9.7.1
Detailed Description
9.8
Classes
class Creator
Extract a Full Frontal Image from the source image.
class Compliance
Compliance assessment results.
class Test
Compliance assessment.
9.8.1
Detailed Description
9.9
Support for Token Face Image type as defined in ISO 19794-5 9.2.
9.9.1
Detailed Description
Support for Token Face Image type as defined in ISO 19794-5 9.2.
According to ISO 19794-5, the Token Face Image is used to store the face information extracted from any other image source.
9.10
Package Cognitec.FRsdk.Portrait
Classes
class Characteristics
Portrait Characteristics.
class Analyzer
Portrait Characteristics Analyzer; creates Portrait characteristics from annotated images.
Packages
package Feature
Test for features in the portrait.
9.10.1
Detailed Description
9.11
Package Cognitec.FRsdk.Portrait.Feature
Classes
class Set
Feature assessment results.
class Test
Test for features in a portrait.
Enumerations
enum Gender
Gender.
enum Ethnicity
Ethnicity.
9.11.1
Detailed Description
9.11.2
9.11.2.1
enum Gender
Gender.
9.11.2.2
enum Ethnicity
Ethnicity.
9.12
Package Cognitec.FRsdk.Verification
Classes
interface Feedback
The feedback from the verification process.
class Processor
this class represents the interface to the verification process
9.12.1
Detailed Description
Chapter 10
Class Documentation
10.1
10.1.1
Detailed Description
10.1.2
10.1.2.1
10.2
annotated image
Public Attributes
Image image
the image
Eyes.Location annotation
the annotation
10.2.1
Detailed Description
annotated image
An image with annotated eye positions
10.2.2
10.2.2.1
10.2.3
10.2.3.1
Image image
the image
10.2.3.2
Eyes.Location annotation
the annotation
10.3
10.3.1
Detailed Description
10.3.2
10.3.2.1
static Image load (IntPtr bi, IntPtr img, string name) [static]
load the Bmp image from file; the image will get the name of the file
10.3.2.3
write bitmap info to buffer; buffer size has to be at least the size returned by getBitmapInfoSize().
Note that the bitmap format stored in info is the format of the FRsdk.Image color representation, which can differ from that used when constructing an image from bitmap data. To access bitmap data, use img.colorRepresentation().
10.4
CaptureDevice
WinCaptureDevice
10.4.1
Detailed Description
10.4.2
10.4.2.1
Image capture ()
10.5
Portrait Characteristics.
uint height ()
the height of the portrait image
Position eye0 ()
Coordinate of Feature Point 12.2 (Right eye center).
Position eye1 ()
Coordinate of Feature Point 12.1 (Left eye center).
float eyeDistance ()
Get the eye distance in pixels.
Position faceCenter ()
Coordinate of the center of the line connecting Feature Points 12.1 and 12.2 (Center
of left and right eye) See ISO standard 5.6.4.
uint numberOfFaces ()
Try to detect all faces within the given image, ignores given position.
float eye0Open ()
Returns the confidence for the person's eye0 being open.
float eye1Open ()
Returns the confidence for the person's eye1 being open.
float eye0GazeFrontal ()
Returns the confidence for the person's eye0 looking frontally at the camera.
float eye1GazeFrontal ()
Returns the confidence for the person's eye1 looking frontally at the camera.
float eye0Tinted ()
Returns a value indicating how tinted the left eye and its surroundings are.
float eye1Tinted ()
Returns a value indicating how tinted the right eye and its surroundings are.
float eye1Red ()
Returns the redness of the eye pupils.
float glasses ()
Returns a measure for the probability that the person in the portrait wears glasses. See ISO standard A.3.2.4.
float exposure ()
Returns the average gray value within the facial region.
uint grayScaleDensity ()
Gray scale density (number of different gray values) within facial region.
float naturalSkinColour ()
Returns the natural colours ratio ( 0.0 - 1.0) within face region.
float hotSpots ()
Returns the percentage of hot spot pixels ( 0.0 - 1.0) within face region.
float backgroundUniformity ()
Background is not normative according to ISO standard section 7.2.6, but according to A 2.4.3 the background uniformity is tested by this function.
float widthOfHead ()
Horizontal distance, in pixels, between the points where the external ears connect to the head.
float lengthOfHead ()
Vertical distance between the base of the chin and the crown, in pixels.
float chin ()
Returns the estimated distance (in pixels) of the chin plane from the eyes plane.
float crown ()
Returns the estimated distance (in pixels) of the crown plane from the eyes plane.
float ear0 ()
Returns the estimated distance (in pixels) between the center of the face and the left bounding plane of the face region, marked by coordinate point 10.10 (left ear to head connection, see ISO standard 5.6.3).
float ear1 ()
Returns the estimated distance (in pixels) between the center of the face and the left bounding plane of the face region, marked by coordinate point 10.10 (left ear to head connection, see ISO standard 5.6.3).
float poseAngleRoll ()
Returns the tangent of the Pose Angle - Roll.
float deviationFromFrontalPose ()
Returns a measure for the deviation from frontal pose.
float isMale ()
Returns a measure for the probability that the image contains a portrait of a male
person.
float isChild ()
Returns a measure for the probability that the image contains a portrait of a child (a person aged 0 to 7 years).
float isToddler ()
Returns a measure for the probability that the image of a child contains a portrait of a toddler (a person aged 0 to 4 years).
float isInfant ()
Returns a measure for the probability that the image of a toddler contains a portrait of an infant (a person aged 0 to 1 year).
float isBelow26 ()
Returns a measure for the probability that the person in the given image is below age 26.
float isBelow36 ()
Returns a measure for the probability that the person in the given image is below age 36.
float mouthClosed ()
Returns a measure of how open the mouth is.
float deviationFromUniformLighting ()
Returns a measure for the deviation from uniform lighting in the face area.
10.5.1
Detailed Description
Portrait Characteristics.
An instance of this class is produced by analyzing a face portrait using Portrait.Analyzer. It provides various measures important for determining compliance with ISO 19794-5.
10.5.2
10.5.2.1
uint width ()
uint height ()
Position eye0 ()
Position eye1 ()
float eyeDistance ()
Position faceCenter ()
Coordinate of the center of the line connecting Feature Points 12.1 and 12.2 (Center of
left and right eye) See ISO standard 5.6.4.
10.5.2.7
uint numberOfFaces ()
Try to detect all faces within the given image, ignores given position.
10.5.2.8
float eye0Open ()
float eye1Open ()
10.5.2.10
float eye0GazeFrontal ()
Returns the confidence for the person's eye0 looking frontally at the camera.
The higher the returned value, the more frontal the gaze. See ISO standard 7.2.3.
10.5.2.11
float eye1GazeFrontal ()
Returns the confidence for the person's eye1 looking frontally at the camera.
The higher the returned value, the more frontal the gaze. See ISO standard 7.2.3.
10.5.2.12
float eye0Tinted ()
Returns a value indicating how tinted the left eye and its surroundings are.
A higher value means more tinted. See ISO standard 7.2.11.
10.5.2.13
float eye1Tinted ()
Returns a value indicating how tinted the right eye and its surroundings are.
A higher value means more tinted. See ISO standard 7.2.11.
10.5.2.14
float eye0Red ()
10.5.2.15
float eye1Red ()
10.5.2.16
float glasses ()
Returns a measure for the probability that the person in the portrait wears glasses. See ISO standard A.3.2.4.
10.5.2.17
float exposure ()
uint grayScaleDensity ()
Gray scale density (number of different gray values) within facial region.
The facial region used is the area enclosed by the 2 semiellipses uniquely defined by
crown(), ear0(), ear1() and chin(), ear0(), ear1(), respectively. See ISO standard 7.4.2.1
10.5.2.19
float naturalSkinColour ()
Returns the natural colours ratio ( 0.0 - 1.0) within face region.
For details refer to ISO standard section 7.3.4.
10.5.2.20
float hotSpots ()
Returns the percentage of hot spot pixels ( 0.0 - 1.0) within face region.
For details refer to ISO standard section 7.2.10 and 7.2.11.
10.5.2.21
float backgroundUniformity ()
Background is not normative according to ISO standard section 7.2.6, but according to A 2.4.3 the background uniformity is tested by this function.
10.5.2.22
float widthOfHead ()
Horizontal distance, in pixels, between the points where the external ears connect to the head.
See ISO 19794-5 8.3.4.
10.5.2.23
float lengthOfHead ()
Vertical distance between the base of the chin and the crown, in pixels.
See ISO 19794-5 8.3.5.
10.5.2.24
float chin ()
Returns the estimated distance (in pixels) of the chin plane from the eyes plane.
The chin is defined as the central forward portion of the lower jaw (see ISO standard 4.1). In the context of face recognition it is used to determine the face region, so the chin can be seen as a lower limit plane of the face region. The eyes plane is defined by the eye positions, and the chin plane is parallel to the eyes plane.
10.5.2.25
float crown ()
Returns the estimated distance (in pixels) of the crown plane from the eyes plane.
The crown is defined as the top of the head, if it can be seen (see ISO standard 4.6). In the context of face recognition it is used to determine the face region, so the crown can be seen as an upper limit plane of the face region. The eyes plane is defined by the eye positions, and the crown plane is parallel to the eyes plane.
10.5.2.26
float ear0 ()
Returns the estimated distance (in pixels) between the center of the face and the left bounding plane of the face region, marked by coordinate point 10.10 (left ear to head connection, see ISO standard 5.6.3).
10.5.2.27
float ear1 ()
Returns the estimated distance (in pixels) between the center of the face and the left bounding plane of the face region, marked by coordinate point 10.10 (left ear to head connection, see ISO standard 5.6.3).
10.5.2.28
float poseAngleRoll ()
10.5.2.29
float deviationFromFrontalPose ()
10.5.2.30
float isMale ()
Returns a measure for the probability that the image contains a portrait of a male person.
10.5.2.31
float isChild ()
Returns a measure for the probability that the image contains a portrait of a child (a person aged 0 to 7 years).
float isToddler ()
Returns a measure for the probability that the image of a child contains a portrait of a toddler (a person aged 0 to 4 years).
10.5.2.33
float isInfant ()
Returns a measure for the probability that the image of a toddler contains a portrait of an infant (a person aged 0 to 1 year).
10.5.2.34
float isBelow26 ()
Returns a measure for the probability that the person in the given image is below age 26.
10.5.2.35
float isBelow36 ()
Returns a measure for the probability that the person in the given image is below age 36.
10.5.2.36
float mouthClosed ()
float deviationFromUniformLighting ()
Returns a measure for the deviation from uniform lighting in the face area.
Higher absolute values mean higher deviation. The implementation returns a value within the range [-1..1]. Negative values mean a darker left side and a brighter right side; positive values the opposite. See ISO standard 7.2.7.
10.6
bool goodVerticalFacePosition ()
Test the vertical position of the face.
bool horizontallyCenteredFace ()
Test whether the face is centered in the image.
bool widthOfHead ()
Width of the head compared to image width.
bool widthOfHeadBestPractice ()
According to section A 3.2.2, best practice is an image-width to face-width ratio between 1.4 and 2.0.
bool lengthOfHead ()
Length of head is limited to the range of 60% to 90% of the image height.
bool lengthOfHeadBestPractice ()
Best practice reduces the range of face length to 70% to 80% of the image height.
bool resolution ()
Resolution of the full image shall be at least 180 pixels for the width of the head, or 90 pixels from eye center to eye center (see ISO standard 8.4.1).
bool resolutionBestPractice ()
Best-practice recommendations are stricter.
bool imageWidthToHeightBestPractice ()
Paragraph A 3.2.1 of the ISO standard describes a best-practice ratio between image height and width.
bool goodGrayScaleProfile ()
Grayscale density of 7 bits, i.e. 128 intensity values.
bool hasNaturalSkinColour ()
Natural colours in the face region: returns true if the face region has natural colours, otherwise false.
bool isBackgroundUniformBestPractice ()
Background uniformity.
bool goodExposure ()
See ISO 19794-5 7.3.2.
bool isFrontal ()
The face is considered frontal if the rotation of the head is less than +/-5 degrees from frontal for yaw and pitch, and if the roll angle of the head is less than +/-8 degrees.
bool isFrontalBestPractice ()
The face is considered frontal if the rotation of the head is less than +/-5 degrees from
frontal in every direction (roll, pitch and yaw).
bool isLightingUniform ()
Returns true if lighting is equally distributed in the face area.
bool eyesOpenBestPractice ()
Returns true if the person's eyes are open.
bool eyesGazeFrontalBestPractice ()
Returns true if the person's eyes are looking frontally at the camera.
bool eyesNotRedBestPractice ()
Returns true if neither eye's pupil is detected as red.
bool noTintedGlasses ()
According to 7.2.11 and best practice recommendations, glasses should not be tinted.
bool isSharp ()
Returns true if the face area (from chin to crown and from left to right ear) fits the
focus and depth of field characteristics (see ISO 19794 5 section 7.3.3).
bool mouthClosedBestPractice ()
Returns true if the mouth is closed according to ISO 19794 5 section 7.2.3.
bool isCompliant ()
Returns true if the image is compliant with the ISO 19794 5 requirements only.
bool isBestPractice ()
The test comprises isCompliant() and additionally all checks according to best practice,
represented by the member functions of this class with BestPractice in their name.
10.6.1
Detailed Description
10.6.2
10.6.2.1
bool onlyOneFaceVisible ()
Only one face has to be visible in the image according to ISO 19794 5 7.2.4.
10.6.2.2
bool goodVerticalFacePosition ()
bool horizontallyCenteredFace ()
bool widthOfHead ()
bool widthOfHeadBestPractice ()
According to section A 3.2.2 best practice is a range of image width to face width ratio
between 1.4 and 2.0.
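The documented range check can be sketched without the SDK. The following helper is purely illustrative (the class and method names are hypothetical, not part of FaceVACS-SDK) and only shows the arithmetic of the A 3.2.2 best-practice criterion:

```csharp
using System;

// Hypothetical helper, not SDK code: checks the A 3.2.2 best-practice
// range for the image-width to face-width ratio.
static class HeadWidthCheck
{
    // Returns true if imageWidth / faceWidth lies within [1.4, 2.0].
    public static bool WidthOfHeadBestPractice(double imageWidth, double faceWidth)
    {
        if (faceWidth <= 0) return false;   // guard against division by zero
        double ratio = imageWidth / faceWidth;
        return ratio >= 1.4 && ratio <= 2.0;
    }
}
```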
10.6.2.6
bool lengthOfHead ()
Length of head is limited to the range of 60% to 90% of the image height.
See ISO 19794 5 8.3.5.
10.6.2.7
bool lengthOfHeadBestPractice ()
Best practice reduces the range of face length to 70% to 80% of the image height.
See ISO 19794 5 A 3.2.3.
Copyright © 2009 by Cognitec Systems GmbH
bool resolution ()
Resolution of the full images shall be at least 180 pixels for the width of the head or
90 pixels from eye center to eye center (see ISO standard 8.4.1).
10.6.2.9
bool resolutionBestPractice ()
bool imageWidthToHeightBestPractice ()
Paragraph A3.2.1 of ISO standard describes a best practice of ratio between image
height and width.
It should be between 1.25 and 1.34.
10.6.2.11
bool goodGrayScaleProfile ()
bool hasNaturalSkinColour ()
Natural colours in face region. Returns true if the face region has natural colors, otherwise false.
See ISO 19794 5 standard 7.3.4
10.6.2.13
bool noHotSpots ()
bool isBackgroundUniformBestPractice ()
background uniformity.
Returns true if the background is uniform. See ISO Standard A 2.4.3.
10.6.2.15
bool goodExposure ()
10.6.2.16
bool isFrontal ()
The face is considered frontal if the rotation of the head is less than +/-5 degrees from
frontal for yaw and pitch and if the roll angle of the head is less than +/-8 degrees.
See ISO 19794 5 7.2.2.
10.6.2.17
bool isFrontalBestPractice ()
The face is considered frontal if the rotation of the head is less than +/-5 degrees from
frontal in every direction (roll, pitch and yaw).
See ISO 19794 5 7.2.2 and A 2.2.
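The two frontal-pose tests differ only in the roll tolerance. This SDK-free sketch (hypothetical names, not SDK code) encodes the angle thresholds stated above:

```csharp
using System;

// Illustrative only: the compliant test allows +/-5 degrees for yaw and
// pitch but +/-8 degrees for roll; the best-practice test restricts all
// three axes to +/-5 degrees.
static class FrontalPoseCheck
{
    public static bool IsFrontal(double yaw, double pitch, double roll) =>
        Math.Abs(yaw) < 5.0 && Math.Abs(pitch) < 5.0 && Math.Abs(roll) < 8.0;

    public static bool IsFrontalBestPractice(double yaw, double pitch, double roll) =>
        Math.Abs(yaw) < 5.0 && Math.Abs(pitch) < 5.0 && Math.Abs(roll) < 5.0;
}
```

A head rolled by 7 degrees is thus compliant but fails the best-practice test.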
10.6.2.18
bool isLightingUniform ()
10.6.2.19
bool eyesOpenBestPractice ()
10.6.2.20
bool eyesGazeFrontalBestPractice ()
Returns true if the person's eyes are looking frontally at the camera.
See ISO standard 7.2.3.
10.6.2.21
bool eyesNotRedBestPractice ()
10.6.2.22
bool noTintedGlasses ()
bool isSharp ()
Returns true if the face area (from chin to crown and from left to right ear) fits the focus
and depth of field characteristics (see ISO 19794 5 section 7.3.3).
10.6.2.24
bool mouthClosedBestPractice ()
bool isCompliant ()
Returns true if the image is compliant with the ISO 19794 5 requirements only.
If it fails, only the member functions without BestPractice in their name need to be
checked in order to find the reason why the test failed.
10.6.2.26
bool isBestPractice ()
The test comprises isCompliant() and additionally all checks according to best practice,
represented by the member functions of this class with BestPractice in their name.
True is returned if all checks pass, otherwise false is returned.
10.7
Configuration (System.IO.Stream s)
Constructor for creating from given stream.
string licenseInformation ()
returns a string which contains the license information
ProtectedItem[ ] protectedItems ()
returns a list of configuration items (key/value pairs) which are protected by the license key.
Classes
struct ProtectedItem
key value pair of a protected key
10.7.1
Detailed Description
10.7.2
10.7.2.1
Configuration (string s)
Configuration (System.IO.Stream s)
10.7.3
10.7.3.1
string licenseInformation ()
Configuration item value access: returns the string representation of the configuration
item's value. The key has to be a valid configuration item name in dotted notation, e.g.
FRSDK.ComparisonAlgorithm
10.7.3.3
Set configuration item to given value: the key has to be a valid configuration item name,
the value has to be the string representation of the item's value, e.g.
B2ComparisonAlgorithm, 0.5 or 42. The changes will become persistent if the
Configuration object was created from a configFilename. Otherwise the changes will
apply to the current object only. Note that most classes use the configuration at object
construction time only. In these cases later changes might have no effect.
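The dotted-key/string-value semantics described above can be illustrated without the SDK. The sketch below is hypothetical (it is not the Configuration class) and only mimics how items are addressed by dotted names and stored as string representations:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only, not SDK code: configuration items are addressed by
// dotted keys such as "FRSDK.ComparisonAlgorithm" and hold the string
// representation of their value.
static class ConfigSketch
{
    static readonly Dictionary<string, string> items = new Dictionary<string, string>();

    public static void Set(string dottedKey, string value)
    {
        if (dottedKey.IndexOf('.') < 0)
            throw new ArgumentException("key must be in dotted notation");
        items[dottedKey] = value;   // overwrites any earlier value
    }

    public static string Get(string dottedKey) => items[dottedKey];
}
```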
10.7.3.4
ProtectedItem [ ] protectedItems ()
returns a list of configuration items (key/value pairs) which are protected by the license
key.
Changing keys or values from that list in the FaceVACS-SDK configuration file will
cause LicenseSignatureMismatch exceptions.
10.8
Public Attributes
string key
key
string value
value
10.8.1
Detailed Description
10.8.2
10.8.2.1
string key
key
10.8.2.2
string value
value
10.9
10.9.1
Detailed Description
10.9.2
10.9.2.1
Extracts the Full Frontal Image from an annotated image; headLengthToImageHeightRatio and verticalHeadToImagePositionRatio can be given at runtime.
They describe the ratio of the head to the image dimensions of the cropped area.
10.10
Exception
FeatureDisabled
10.10.1
LicenseSignatureMismatch
LimitExceeded
Detailed Description
10.11
Match[ ] bestMatches (FIR fir, Population population, Score threshold, uint maxMatches)
Calculates best matches of the comparison between fir and the FIRs in the population.
10.11.1
Detailed Description
10.11.2
10.11.2.1
FacialMatchingEngine (Configuration c)
10.11.3
10.11.3.1
10.11.3.2
Calculate the scores between fir and the FIRs in population (One-To-Many Matching).
Scores in the list returned have the same order as the FIRs within the Population.
MT-safe
It is safe to call this function concurrently from different threads. The population
has to be kept unchanged during function execution.
10.11.3.3
Calculates best matches of the comparison between fir and the FIRs in the population.
Matches returned are sorted by score value in descending order. Size of the match list
returned is controlled by both a score threshold and a maximum size.
MT-safe
It is safe to call this function concurrently from different threads. The population
has to be kept unchanged during function execution.
Parameters:
threshold threshold for match decision
maxMatches the maximum size of the FRsdk.Matches to be returned in the feedback
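The contract of bestMatches() — filter by threshold, sort descending by score, cap at maxMatches — can be sketched without the SDK. Names below are hypothetical; this is not the FacialMatchingEngine implementation:

```csharp
using System;
using System.Linq;

// SDK-free sketch of the bestMatches() contract: matches are filtered by
// a score threshold, sorted by score in descending order, and the result
// is capped at maxMatches entries.
static class MatchSelection
{
    public static (string name, float score)[] BestMatches(
        (string name, float score)[] candidates, float threshold, uint maxMatches)
    {
        return candidates
            .Where(m => m.score >= threshold)       // threshold for match decision
            .OrderByDescending(m => m.score)        // best match first
            .Take((int)maxMatches)                  // maximum size of the result
            .ToArray();
    }
}
```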
10.12
Exception
FeatureDisabled
10.12.1
Detailed Description
10.13
void eyesNotFound ()
Called if no eyes have been found in the current image, may happen if the image does
not contain a face.
void sampleQualityTooLow ()
Called if the sample quality of the current image is too low for verification processing.
void success ()
Called if the verification was successful, i.e.
void failure ()
Called if verification failed, i.e.
void end ()
Called at the end of the verification procedure; this function will be called in any case.
10.13.1
Detailed Description
10.13.2
10.13.2.1
void start ()
10.13.2.2
10.13.2.3
Called if eyes have been found in the current image; the location l indicates the position
they have been found at.
10.13.2.4
void eyesNotFound ()
Called if no eyes have been found in the current image, may happen if the image does
not contain a face.
10.13.2.5
Informs about sample quality of the current image; will only be called if eyes could be
found in the image.
10.13.2.6
void sampleQualityTooLow ()
Called if the sample quality of the current image is too low for verification processing.
10.13.2.7
Called if the current image could be compared with the given FIR.
Parameters:
s the score obtained.
10.13.2.8
void success ()
10.13.2.9
void failure ()
void end ()
Called at the end of the verification procedure; this function will be called in any case.
10.14
void eyesNotFound ()
Called if no eyes have been found in the current image; this may happen if the image
does not contain a face.
void sampleQualityTooLow ()
Called if the sample quality of the current image is too low for identification processing.
void end ()
Called at the end of identification procedure.
10.14.1
Detailed Description
10.14.2
10.14.2.1
void start ()
10.14.2.2
Called if eyes have been found in the current image; the location l indicates the position
they have been found at.
10.14.2.4
void eyesNotFound ()
Called if no eyes have been found in the current image; this may happen if the image
does not contain a face.
10.14.2.5
Informs about sample quality of the current image; will only be called if eyes could be
found in the image.
10.14.2.6
void sampleQualityTooLow ()
Called if the sample quality of the current image is too low for identification processing.
10.14.2.7
Called if at least one of the input images matches with the given FIR population.
The best match is the first one and the worst the last one. The FIRs are referenced by
names.
10.14.2.8
void end ()
10.15
void eyesNotFound ()
Called if no eyes have been found in the current image.
void sampleQualityTooLow ()
Called during stream enrollment if the sample quality of the current image is too low
for enrollment processing.
void failure ()
Called if the enrollment was not successful (due to failure conditions).
void end ()
Called at the end of the enrollment procedure.
10.15.1
Detailed Description
10.15.2
10.15.2.1
void start ()
Called if eyes have been found in the current image; the Eyes.Location indicates the
position they have been found at.
10.15.2.4
void eyesNotFound ()
void sampleQualityTooLow ()
Called during stream enrollment if the sample quality of the current image is too low
for enrollment processing.
10.15.2.7
void failure ()
10.15.2.9
void end ()
10.16
face finder
10.16.1
Detailed Description
face finder
This class represents an interface to the face finding procedure.
10.16.2
10.16.2.1
Finder (Configuration c)
10.16.3
10.16.3.1
Returns:
array of Face.Location for the faces found.
Searching is focused on faces in the eye distance range
minRelativeEyeDistance = 0.1,
maxRelativeEyeDistance = 0.4 (relative to the image width) and within the given
search box
x1 = INT_MIN / 2,
y1 = INT_MIN / 2,
x2 = INT_MAX / 2 - 1,
y2 = INT_MAX / 2 - 1, spanned by (x1, y1, x2, y2).
The search process can result in faces that are slightly smaller or bigger than suggested
by these numbers.
Returns:
array of Face.Location for the faces found.
Searching is focused on faces in the given eye distance range (relative to the image
width) and within the given search box spanned by (x1, y1, x2, y2). The given
search box is clipped by the boundaries of the image, so the default settings for the
search box denote that the entire image has to be used as search area. (See your
compiler's <limits.h> for the INT_MIN and INT_MAX definitions.) Also note that
minRelativeEyeDistance and maxRelativeEyeDistance are hints for the finding engine.
The search process can result in faces that are slightly smaller or bigger than suggested
by these numbers.
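The clipping and eye-distance rules above can be sketched independently of the SDK. The helper names below are hypothetical; only the documented defaults (0.1, 0.4, and the INT_MIN/INT_MAX-based search box) are taken from the text:

```csharp
using System;

// SDK-free sketch: the search box is clipped to the image boundaries, and
// candidate faces are kept only if their eye distance, relative to the
// image width, lies within [minRelative, maxRelative].
static class FaceSearch
{
    public static (int x1, int y1, int x2, int y2) ClipBox(
        int x1, int y1, int x2, int y2, int imgWidth, int imgHeight)
    {
        return (Math.Max(x1, 0), Math.Max(y1, 0),
                Math.Min(x2, imgWidth - 1), Math.Min(y2, imgHeight - 1));
    }

    public static bool EyeDistanceInRange(
        double eyeDistancePx, int imgWidth,
        double minRelative = 0.1, double maxRelative = 0.4)
    {
        double rel = eyeDistancePx / imgWidth;
        return rel >= minRelative && rel <= maxRelative;
    }
}
```

With the default box (INT_MIN / 2 … INT_MAX / 2 - 1) the clipped result is always the entire image.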
10.17
eyes finder
10.17.1
Detailed Description
eyes finder
This class represents an interface to the eye finding procedure.
MT-safe
It is safe to call the find method concurrently from different threads.
10.17.2
10.17.2.1
Finder (Configuration c)
10.17.3
10.17.3.1
10.18
string version ()
returns the version id string
10.18.1
Detailed Description
10.18.2
10.18.2.1
uint size ()
10.18.2.2
string version ()
10.18.2.3
10.19
10.19.1
Detailed Description
10.19.2
10.19.2.1
FIRBuilder (Configuration c)
10.19.3
10.19.3.1
Creates a FIR from a stream containing a platform independent representation created
with FIR.serialize().
10.19.3.3
Creates a FIR from a platform dependent representation created using FIR.writeTo(),
starting from p. The memory must remain valid during the lifetime of the FIR object;
after construction, p points to the first byte after the FIR's data.
10.20
Abstract image.
Inherited by NetImage.
uint width ()
returns the width of the image in pixels
uint height ()
returns the height of the image in pixels
IntPtr grayScaleRepresentation ()
Returns a pointer to an array of size width() x height() x sizeof( byte) containing the
gray scale representation of the image.
IntPtr colorRepresentation ()
Returns a pointer to an array of size width() x height() x sizeof( uint) containing the
color representation of the image.
string name ()
returns the name of the image, or an empty string
10.20.1
Detailed Description
Abstract image.
10.20.2
10.20.2.1
bool isColor ()
10.20.2.2
uint width ()
10.20.2.3
uint height ()
IntPtr grayScaleRepresentation ()
Returns a pointer to an array of size width() x height() x sizeof( byte) containing the
gray scale representation of the image.
The pointer has to remain valid during the whole lifetime of the Image object.
10.20.2.5
IntPtr colorRepresentation ()
Returns a pointer to an array of size width() x height() x sizeof( uint) containing the
color representation of the image.
The pointer has to remain valid during the whole lifetime of the Image object. Note
that the order of colors per pixel is BGRA (blue green red alpha) instead of RGB.
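Since the per-pixel byte order is BGRA rather than RGB, reading a pixel out of the buffer has to swap the channel positions. This sketch (hypothetical helper, not SDK code) shows the layout on a plain byte array standing in for the colorRepresentation() buffer:

```csharp
using System;

// Sketch of reading one pixel from a BGRA buffer: each pixel occupies
// 4 bytes in B, G, R, A order (not RGB).
static class BgraPixel
{
    public static (byte r, byte g, byte b, byte a) Unpack(byte[] buffer, int pixelIndex)
    {
        int i = pixelIndex * 4;
        byte blue  = buffer[i + 0];
        byte green = buffer[i + 1];
        byte red   = buffer[i + 2];
        byte alpha = buffer[i + 3];
        return (red, green, blue, alpha);
    }
}
```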
10.20.2.6
string name ()
10.21
Vignetting.
Public Types
enum SmoothingFunction
There are three ways to blend the margin to the target color: Gaussian blends the
intensity of each color channel with a half Gaussian function, Linear blends proportionally
to the border distance, and Fixed sets all pixels of the margin to the border color.
10.21.1
Detailed Description
Vignetting.
10.21.2
10.21.2.1
enum SmoothingFunction
There are three ways to blend the margin to the target color: Gaussian blends the
intensity of each color channel with a half Gaussian function, Linear blends proportionally
to the border distance, and Fixed sets all pixels of the margin to the border color.
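The Linear smoothing function can be sketched for a single channel. This is an assumption-laden illustration (the SDK's exact blending formula is not documented here): a pixel at distance 0 from the border takes the border color, and one at distance ≥ margin keeps its original value, with linear interpolation in between:

```csharp
using System;

// Hypothetical sketch of a linear margin blend, per color channel:
// t = 0 at the border (pure border color), t = 1 at the inner edge of
// the margin (pure original color).
static class LinearVignette
{
    public static byte Blend(byte original, byte borderColor, int distance, int margin)
    {
        if (distance >= margin) return original;
        double t = (double)distance / margin;
        return (byte)Math.Round(borderColor + t * (original - borderColor));
    }
}
```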
10.21.3
10.21.3.1
static Image vignetting (Image img, Rgb rgb, int margin, int radius,
SmoothingFunction sf ) [static]
10.22
10.22.1
Detailed Description
10.22.2
10.22.2.1
ImagePropertiesFeedback ()
10.22.3
10.22.3.1
The given parameter tells about the original coding/compression of the image data.
10.22.3.2
10.23
Classes
struct Properties
contains properties of a JPEG image
10.23.1
Detailed Description
Color and gray scale images, with quality parameter.
MT-safe
It is safe to call the member functions concurrently from different threads.
10.23.2
10.23.2.1
Constructs an image representation from the given file which must be in jpeg format.
10.23.2.2
Constructs an image representation from the given stream which is in jpeg format.
10.23.2.3
Saves the image to a file given by name with the given JPEG quality (1..100). Note that
the JPEG quality influences the resulting file size; for details see the ostream based
save() function description above.
Returns the number of bytes written to the file.
10.23.2.6
Saves the image to the ostream with a size constraint given by maxSize (bytes).
Returns the achieved JPEG image Properties.
10.24
10.24.1
Detailed Description
10.24.2
10.24.2.1
Loads an image representation from the given file which must be in jpeg 2000 format.
10.24.2.2
Loads an image representation from memory which is expected to contain jpeg 2000
format.
10.25
Public Attributes
int fileSize
the file size
int quality
the JPEG quality
10.25.1
Detailed Description
10.25.2
10.25.2.1
int fileSize
int quality
10.26
10.26.1
Detailed Description
10.26.2
10.26.2.1
Parameters:
k Parameter controlling undistortion.
img w width of images to be processed
img h height of images to be processed
dcx (optical axis) relative to image center, should be 0 in most cases
dcy (optical axis) relative to image center, should be 0 in most cases
10.26.3
10.26.3.1
Parameters:
img The Image to be undistorted. Dimensions (w,h) of the image must match
those used to instantiate this LenseDistortionCorrector
10.27
Exception
LicenseSignatureMismatch
10.27.1
Detailed Description
10.28
Exception
LimitExceeded
10.28.1
Detailed Description
10.29
Public Attributes
Position pos
the face position
float width
the width of the found face
float confidence
confidence for the face
float rotationAngle
rotation angle for the face
10.29.1
Detailed Description
10.29.2
10.29.2.1
Position pos
10.29.2.2
float width
10.29.2.3
float confidence
float rotationAngle
10.30
The Eyes.Location describes a location in the image where eyes within a face have
been found.
Public Attributes
Position first
position of first eye
Position second
position of second eye
float firstConfidence
confidence for first eye
float secondConfidence
confidence for second eye
10.30.1
Detailed Description
The Eyes.Location describes a location in the image where eyes within a face have
been found.
The positions represent the center (located at half the distance between the left and right
eye corner) of the first and the second eye, respectively. First and second eye positions
are defined relative to the image's coordinate system (and correspond to the usual
way images are displayed). The first eye is by definition the one with the lowest
x-coordinate and the second is the other one. Whether the first eye (following this
definition) is at the same time the person's actual left eye will depend on how the image
is acquired and/or transformed. For common devices (cameras, scanners, image processing
libraries, displays etc.), the first eye definition in our context will correspond to the
person's right eye. On the contrary, if one of these devices mirrors the image, then the
first eye in this context will correspond to the person's left eye.
(WARNING: The definition in this context has nothing to do with the ISO/IEC 19794-5
Token Frontal face image type).
The usual range of the confidence is [0...6]: high values indicate good confidence,
values near 0 indicate bad confidence. Values in the range 2...4 are usual.
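The first/second eye convention can be captured in a few lines. This SDK-free sketch (hypothetical names) simply orders two eye positions by x-coordinate, matching the definition above:

```csharp
using System;

// Sketch of the first/second eye convention: the first eye is, by
// definition, the one with the lower x-coordinate in image coordinates.
static class EyeOrdering
{
    public static ((float x, float y) first, (float x, float y) second) Order(
        (float x, float y) a, (float x, float y) b)
    {
        return a.x <= b.x ? (a, b) : (b, a);
    }
}
```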
10.30.2
10.30.2.1
Position first
Position second
float firstConfidence
float secondConfidence
10.31
Public Attributes
string name
name
Score score
score
10.31.1
Detailed Description
10.31.2
10.31.2.1
string name
name
10.31.2.2
Score score
score
10.32
10.32.1
Detailed Description
10.32.2
10.32.2.1
Saves the image to filename; gray scale images will be stored in pgm-format, color
images in ppm-format.
10.33
10.33.1
Detailed Description
10.33.2
10.33.2.1
Writes the image as a PNG image to the given stream; the stream will not be closed.
The intention of the PNG image format is to be lossless. Compression is in the range
from 0 to 9: 0 means no compression and fast reading/writing, 9 means high compression
and slower reading/writing. The default level is 6, which is a compromise between
speed and compression.
10.34
An ordered (in the order of additions by add() ) set of named FIRs which represents
the population used for identifications.
10.34.1
Detailed Description
An ordered (in the order of additions by add() ) set of named FIRs which represents
the population used for identifications.
10.34.2
10.34.2.1
Population (Configuration c)
10.34.3
10.34.3.1
10.35
Public Attributes
float x
the x coordinate
float y
the y coordinate
10.35.1
Detailed Description
10.35.2
10.35.2.1
float x
the x coordinate
10.35.2.2
float y
the y coordinate
10.36
10.36.1
Detailed Description
10.36.2
10.36.2.1
Processor (Configuration c)
10.36.3
10.36.3.1
Stream enrollment.
The processor sequentially captures an intensity image, runs face finding, eye finding
and preprocessing. To locate the face in each image the Face.Finder is used. The
minimum number of enrollable images, the maximum number of images to be processed
as well as the maximum time to be used are configurable. Enrollable are faces where
- face and eyes could have been found,
- the image quality check confirms Good for Enrollment,
- the final preprocessing step succeeds.
If either the maximum time configured is exceeded or the maximum number of images
to be processed is reached, the enrollment fails. Otherwise, a single FIR is generated
out of the images. Note that this function will fail anyway if Shape image usage has
been configured to be mandatory.
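The termination rule of stream enrollment can be sketched as a loop. This is an illustrative simplification (hypothetical names, time limit omitted), not the Enrollment.Processor implementation:

```csharp
using System;

// SDK-free sketch: enrollment succeeds once enough enrollable images were
// collected, and fails when the maximum number of processed images is
// reached first. (The configurable time limit is omitted for brevity.)
static class EnrollmentLoop
{
    // enrollable[i] says whether image i passed face/eye finding,
    // the quality check and preprocessing.
    public static bool Enroll(bool[] enrollable, int minEnrollable, int maxImages)
    {
        int good = 0;
        for (int processed = 0; processed < enrollable.Length && processed < maxImages; processed++)
        {
            if (enrollable[processed]) good++;
            if (good >= minEnrollable) return true;   // enough material for a FIR
        }
        return false;   // limit reached before enough enrollable images were seen
    }
}
```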
10.36.3.2
Merges two FIRs to create a new one which combines the biometric features of both.
When using this function, take into account that the capacity of an FIR is limited and
that using an FIR merged from a set of FIRs can result in lower biometric performance
than would result from combining results obtained from the single FIRs.
10.37
void process (CaptureDevice capDev, FIR fir, Score threshold, Feedback fb)
Stream verification.
10.37.1
Detailed Description
10.37.2
10.37.2.1
Processor (Configuration c)
10.37.3
10.37.3.1
Stream verification.
The processor continuously fetches images from the CaptureDevice and tries to verify
the face in the image against the FIR passed upon Processor construction. Processing
terminates
- once a sample could have been verified,
- after a configurable timeout if none of the samples was successfully verified so far,
- after a configurable number of samples has been processed,
whatever happens first.
To locate the faces in the captured samples the Face.Finder is used.
Parameters:
fir FIR to be used for verification
threshold the threshold for verification success decision
fb the feedback for observing processing and returning results to the application
10.37.3.2
10.38
void process (CaptureDevice capDev, Score threshold, Feedback fb, uint maxMatches)
Stream identification.
10.38.1
Detailed Description
10.38.2
10.38.2.1
10.38.3
10.38.3.1
Stream identification.
The processor continuously grabs images from the capture device and tries to identify
the face in the image, if one is present. The process is cancelled after a configurable
timeout if no face was found or no person could be identified. To locate the faces in
the captured sample images the Face.Finder is used.
Parameters:
threshold threshold for match decision
fb the feedback to observe processing and returning results to the application
maxMatches the maximum size of the FRsdk.Matches to be returned in the feedback
10.38.3.2
10.39
Public Attributes
unsigned char b
blue color part
unsigned char g
green color part
unsigned char r
red color part
unsigned char a
alpha channel part
10.39.1
Detailed Description
10.39.2
10.39.2.1
unsigned char b
unsigned char g
unsigned char r
unsigned char a
10.40
Sample: a compound input data type containing an image and, optionally, eyes annotation and/or shape data.
Sample (AnnotatedImage a)
Construct a sample from annotated Image.
Public Attributes
Image image
the Image
ShapeImage shapeimage
the optional ShapeImage
Eyes.Location annotation
the annotation
10.40.1
Detailed Description
Sample: a compound input data type containing an image and, optionally, eyes annotation and/or shape data.
10.40.2
10.40.2.1
Sample (AnnotatedImage a)
10.40.3
10.40.3.1
Image image
the Image
10.40.3.2
ShapeImage shapeimage
Eyes.Location annotation
the annotation
10.41
10.41.1
Detailed Description
10.41.2
10.41.2.1
Parameters:
img the Image to evaluate
eloc location of the eyes
10.42
Public Attributes
bool goodForEnrollment
the sample is good for Enrollment
bool goodForVerification
the sample is good for Verification ( 1-to-1 match) and Identification ( 1-to-n match)
float quality
the quality in the range of [0,1], higher values meaning better quality
string hint
a textual description of what was wrong in case of a bad quality, otherwise empty.
10.42.1
Detailed Description
10.42.2
10.42.2.1
10.42.3
10.42.3.1
bool goodForEnrollment
10.42.3.2
bool goodForVerification
the sample is good for Verification ( 1-to-1 match) and Identification ( 1-to-n match)
10.42.3.3
float quality
the quality in the range of [0,1], higher values meaning better quality
10.42.3.4
string hint
a textual description of what was wrong in case of a bad quality, otherwise empty.
The string is a comma separated composition of one or more of the following substrings:
- "Eye distance too small for Verification" or "Eye distance too small for Enrollment" (only one of these two will occur in the hint string)
- "dynamic range too low"
- "histogram shape too far from ideal"
- "noise level too high"
- "sharpness too low"
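Because the hint is a comma separated composition, a simple split recovers the individual complaints. The helper below is hypothetical, not part of the SDK:

```csharp
using System;

// Sketch of consuming the hint string: split on commas and trim each
// substring to obtain the individual quality complaints.
static class QualityHint
{
    public static string[] Parse(string hint)
    {
        if (string.IsNullOrEmpty(hint)) return Array.Empty<string>();
        return Array.ConvertAll(hint.Split(','), s => s.Trim());
    }
}
```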
10.43
This class represents a score for representing the comparison result between a FIR and
the biometric evidence.
Public Attributes
float value
the float representation of the score
10.43.1
Detailed Description
This class represents a score for representing the comparison result between a FIR and
the biometric evidence.
The range is between 0.0f and 1.0f. Applications may use requestFAR() or
requestFRR() to obtain score values corresponding to requested values of FAR
or FRR. Score values can be used as thresholds for verifications and identifications and
will be communicated as results of these operations.
10.43.2
10.43.2.1
float value
10.44
10.44.1
Detailed Description
10.44.2
10.44.2.1
ScoreMappings (Configuration c)
10.44.3
10.44.3.1
10.44.3.3
Get a value for the False Acceptance Rate that would be achieved if a given score was
used as the threshold.
Parameters:
threshold score used as threshold
10.44.3.4
Get a value for the False Rejection Rate that would be achieved if a given score was
used as the threshold.
Parameters:
threshold score used as threshold
10.45
Gender gender ()
Returns the gender of the person.
Ethnicity ethnicity ()
Returns the ethnicity of the person.
bool isChild ()
Returns true if the image contains a portrait of a child (person in the age of 0 - 7
years).
bool isToddler ()
Returns true if the image of a child contains a portrait of a toddler (person in the
age of 0 - 4 years).
bool isInfant ()
Returns true if the image of a toddler contains a portrait of an infant (person in the
age of 0 - 1 year).
bool isBelow26 ()
Returns true if the person in the given image of an adult is below 26 years old.
bool isBelow36 ()
Returns true if the person in the given image of an adult is below 36 years old.
10.45.1
Detailed Description
10.45.2
10.45.2.1
bool wearsGlasses ()
Gender gender ()
Ethnicity ethnicity ()
bool isChild ()
Returns true if the image contains a portrait of a child (person in the age of 0 - 7 years).
10.45.2.5
bool isToddler ()
Returns true if the image of a child contains a portrait of a toddler (person in the age
of 0 - 4 years).
10.45.2.6
bool isInfant ()
Returns true if the image of an toddler contains a portrait of an infant (person in the
age of 0 - 1 year).
10.45.2.7
bool isBelow26 ()
bool isBelow36 ()
10.46
10.46.1
Detailed Description
10.46.2
10.46.2.1
10.47
Abstract image.
Inherited by NetShapeImage.
uint height ()
returns the height of the image in pixels
IntPtr vertices ()
Returns a pointer to an array of size width() x height() x sizeof( Vertex) containing the
vertices of the shape.
IntPtr mask ()
Returns a pointer to an array of size width() x height() x sizeof( bool) containing the
validity mask of the ShapeImage.
10.47.1
Detailed Description
Abstract image.
10.47.2
10.47.2.1
uint width ()
10.47.2.2
uint height ()
10.47.2.3
IntPtr vertices ()
Returns a pointer to an array of size width() x height() x sizeof( Vertex) containing the
vertices of the shape.
The pointer has to remain valid during the whole lifetime of the ShapeImage object.
c 2009 by Cognitec Systems GmbH
Copyright
10.47.2.4
IntPtr mask ()
Returns a pointer to an array of size width() x height() x sizeof( bool) containing the
validity mask of the ShapeImage.
Only vertices with a corresponding mask value of true are considered for processing.
The pointer has to remain valid during the whole lifetime of the ShapeImage object.
10.48
10.48.1
Detailed Description
10.48.2
10.48.2.1
10.49
Compliance assessment.
Boundaries boundaries ()
Get struct containing the boundaries used to assess portrait characteristics.
10.49.1
Detailed Description
Compliance assessment.
An instance of this class can be used to assess Portrait.Characteristics of a portrait to
determine the compliance with the ISO 19794 5 Full Frontal Image requirements.
MT-safe
It is safe to call the class member functions concurrently from different threads.
10.49.2
10.49.2.1
Boundaries boundaries ()
10.50
The Face Tracker locates and tracks faces across a sequence of images in an efficient
way by analyzing the spatial and temporal dependencies between faces in subsequent
images.
10.50.1
Detailed Description
The Face Tracker locates and tracks faces across a sequence of images in an efficient
way by analyzing the spatial and temporal dependencies between faces in subsequent
images.
10.50.2
10.50.2.1
Tracker (Configuration c)
10.50.3
10.50.3.1
Processes an image (usually a frame from a video stream) and returns the tracked faces
with their eyes positions in this image.
captureTime is the capture time of the given frame img in milliseconds relative to a
user defined reference point in the past (e.g. the capture time of the first frame). If the
capture time of an image passed to processImage() is equal to or lower than the capture
time of an image passed earlier, the behavior is undefined.
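Since out-of-order capture times lead to undefined behavior, callers may want to guard the precondition themselves. The wrapper below is a hypothetical sketch (not SDK code) that drops any frame whose capture time is not strictly increasing:

```csharp
using System;

// Hypothetical guard enforcing the documented precondition: captureTime
// passed to processImage() must be strictly increasing.
sealed class CaptureTimeGuard
{
    private long lastCaptureTime = long.MinValue;

    // Returns true if the frame may be forwarded to the tracker.
    public bool Accept(long captureTimeMs)
    {
        if (captureTimeMs <= lastCaptureTime) return false;  // drop out-of-order frame
        lastCaptureTime = captureTimeMs;
        return true;
    }
}
```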
10.50.3.2
10.51
Public Attributes
string id
a unique ID that corresponds to the face being tracked
Eyes.Location eyesLocation
an Eyes.Location that corresponds to the face being tracked
10.51.1
Detailed Description
10.51.2
10.51.2.1
string id
Eyes.Location eyesLocation
10.52
CaptureDevice
WinCaptureDevice
bool configure ()
call up device configuration dialog provided by the device driver
bool videoFormatDialog ()
call up video format dialog provided by the device driver
VideoFormat getVideoFormat ()
retrieve current video format
Classes
class VideoFormat
Opaque type representing a video format (resolution, bits per pixel, etc).
10.52.1
Detailed Description
10.52.2
10.52.2.1
10.52.3
10.52.3.1
capture an image
Implements CaptureDevice.
10.52.3.2
bool configure ()
bool videoFormatDialog ()
VideoFormat getVideoFormat ()
10.53
Opaque type representing a video format (resolution, bits per pixel, etc).
10.53.1
Detailed Description
Opaque type representing a video format (resolution, bits per pixel, etc).
Since video format settings are highly device dependent, a video format created from
one video device might be inappropriate for a different one. Even a video format
created from a given device might be inappropriate to the same device under certain
conditions (e.g. a changed frame rate).
10.53.2
10.53.2.1
VideoFormat (System.IO.Stream i)
10.53.3
10.53.3.1
write to ostream
Chapter 11
Example Documentation
11.1
acquisition.cs
using System;
using Cognitec.FRsdk;
using Face = Cognitec.FRsdk.Face;
using Eyes = Cognitec.FRsdk.Eyes;
using Portrait = Cognitec.FRsdk.Portrait;
using FullFrontal = Cognitec.FRsdk.ISO_19794_5.FullFrontal;
using TokenFace = Cognitec.FRsdk.ISO_19794_5.TokenFace;
using Feature = Cognitec.FRsdk.Portrait.Feature;
class EyesFinding
{
class AcquisitionError : System.Exception
{
public AcquisitionError (string msg_)
{
msg = msg_;
}
AcquisitionError () {}
public string what ()
{
return msg;
}
private string msg;
}
class ImgPropFeedback: Cognitec.FRsdk.ImageIO.Feedback
{
public ImgPropFeedback() {}
public void compressionMode( ref Cognitec.FRsdk.ImageIO.ImageColorMode cm)
{
Console.WriteLine( "ImageColorMode: ");
Console.WriteLine( cm);
}
public void pixelDepth( uint pd )
{
Console.WriteLine("pixelDepth: ");
Console.WriteLine(pd);
}
}
try // try loading jpeg image
{
img = Jpeg.load (args[1]);
}
catch {}
try // try loading jpeg2000 image
{
img = Jpeg2000.load (args[1]);
}
catch {}
try // try loading bmp image
{
img = Bmp.load (args[1]);
}
catch {}
try // try loading png image
{
System.IO.FileStream imageFile =
new System.IO.FileStream
(args[1], System.IO.FileMode.Open, System.IO.FileAccess.Read);
img = Png.load (imageFile);
}
catch {}
try // try loading pgm/ppm image
{
img = Pgm.load (args[1]);
}
catch {}
if (img == null) throw new AcquisitionError
("<image file> contains no recognized image file format");
// find faces
Face.Location[] faceLocations = faceFinder.find (img);
if (faceLocations.Length == 0) throw new AcquisitionError
("Unable to locate face!");
Face.Location faceLoc = faceLocations[0];
// print face location
Console.WriteLine
("Face location: [x={0} y={1} width={2}] (confidence={3})",
faceLoc.pos.x, faceLoc.pos.y, faceLoc.width, faceLoc.confidence);
// find eyes
Eyes.Location[] eyesLocations = eyesFinder.find (img, faceLoc);
if (eyesLocations.Length == 0) throw new AcquisitionError
("No Eyes found!");
Eyes.Location eyesLoc = eyesLocations[0];
// print eye locations
Console.WriteLine( "Eye locations: first [x={0} y={1}] (confidence={2})",
eyesLoc.first.x, eyesLoc.first.y, eyesLoc.firstConfidence);
Console.WriteLine( "               second [x={0} y={1}] (confidence={2})",
eyesLoc.second.x, eyesLoc.second.y, eyesLoc.secondConfidence);
// annotated image
AnnotatedImage annotatedImage = new AnnotatedImage (img, eyesLoc);
// analyze portrait
Portrait.Characteristics portraitCharacteristics =
portraitAnalyzer.analyze (annotatedImage);
Console.WriteLine( "Left Eye Open: (confidence={0})",
portraitCharacteristics.eye0Open ());
Console.WriteLine( "Right Eye Open: (confidence={0})",
portraitCharacteristics.eye1Open ());
Console.WriteLine( "Left Eye Red: (confidence={0})",
portraitCharacteristics.eye0Red ());
Console.WriteLine ("Right Eye Red: (confidence={0})",
portraitCharacteristics.eye1Red ());
Console.WriteLine ("Left Eye Tinted: (confidence={0})",
portraitCharacteristics.eye0Tinted ());
Console.WriteLine ("Right Eye Tinted: (confidence={0})",
portraitCharacteristics.eye1Tinted ());
Console.WriteLine( "Left Eye Gaze Frontal: (confidence={0})",
portraitCharacteristics.eye0GazeFrontal());
Console.WriteLine( "Right Eye Gaze Frontal: (confidence={0})",
portraitCharacteristics.eye1GazeFrontal());
Console.WriteLine( "Exposure: {0}",
portraitCharacteristics.exposure());
Console.WriteLine( "Sharpness: {0}",
portraitCharacteristics.sharpness());
Console.WriteLine( "Natural skin color: {0}",
portraitCharacteristics.naturalSkinColour());
Console.WriteLine( "Hot spots: (confidence={0})",
portraitCharacteristics.hotSpots());
// test features
Feature.Set features = featureTest.assess (portraitCharacteristics);
if (features.wearsGlasses ())
Console.WriteLine
("Feature test: Person with glasses. (confidence={0})",
portraitCharacteristics.glasses());
else
Console.WriteLine
("Feature test: Person without glasses. (confidence={0})",
portraitCharacteristics.glasses ());
}
// create a Token Face Image according to ISO 19794-5
AnnotatedImage iso19794Img = tfcreator.extract (annotatedImage);
AnnotatedImage[] annotatedImages = new AnnotatedImage [1];
annotatedImages[0] = iso19794Img;
if (args.Length > 2)
{
// save tokenface in cbeff format
System.IO.FileStream tokenOut =
new System.IO.FileStream
(args[2], System.IO.FileMode.CreateNew, System.IO.FileAccess.Write);
TokenFace.IO.write (tokenOut, annotatedImages);
if (args.Length > 3)
{
Bmp.save (iso19794Img.image, args[3]); // save bmp file
// System.IO.FileStream imageOut = new System.IO.FileStream (args[3], System.IO.FileMode
// Jpeg.save (iso19794Img.image, imageOut, 100); // save jpeg file
}
}
Console.WriteLine ("Acquisition process done.");
}
catch (AcquisitionError aE)
{
Console.WriteLine (aE.what ());
}
catch (System.Exception ex)
{
Console.WriteLine ("\n--- Exception ---\n{0}", ex.Message );
return 1;
}
return 0;
}
}
11.2 enroll.cs
using System;
using Cognitec.FRsdk;
using Eyes = Cognitec.FRsdk.Eyes;
using Enrollment = Cognitec.FRsdk.Enrollment;
// Implementation of Enrollment.Feedback
class EnrollmentFeedback: Enrollment.Feedback
{
public EnrollmentFeedback( string firFilename_)
{ firFilename = firFilename_; }
public void start() { Console.WriteLine("start"); }
public void processingImage( Image img)
{ Console.WriteLine( "processing image[{0}]", img.name()); }
public void eyesFound( Eyes.Location eyes)
{
Console.WriteLine
( "found eyes at [[first x={0} y={1}] [second x={2} y={3}]]",
eyes.first.x, eyes.first.y, eyes.second.x, eyes.second.y);
}
public void eyesNotFound()
{ Console.WriteLine( "eyes not found"); }
public void sampleQualityTooLow()
{ Console.WriteLine( "sample quality too low"); }
public void sampleQuality( float f )
{ Console.WriteLine( "sample quality: {0}", f); }
public void success( FIR fir)
{
Console.WriteLine
( "successful enrollment, FIR[ filename, id, size] = " +
"[\"{0}\", \"{1}\", {2}]", firFilename, fir.version(), fir.size());
// write the fir
fir.serialize
( new System.IO.FileStream( firFilename, System.IO.FileMode.Create));
}
public void failure() { Console.WriteLine( "failure"); }
public void end() { Console.WriteLine( "end"); }
private String firFilename;
};
class EnrollmentExample
{
public static int Main( string[] args)
{
if( args.Length < 3) {
Console.WriteLine
( "usage:\n" +
"enroll {config file} {fir} {jpeg images}...\n" +
"\tconfig file ... the frsdk config file\n" +
"\tfir         ... a filename for a FIR\n" +
"\tjpeg image ... one or more jpeg images for enrollment." );
return 1;
}
try {
// initialisation of configuration
Configuration cfg = new Configuration( args[0]);
// prepare array for enrollment images
Sample[] enrollmentSamples = new Sample[ args.Length - 2];
// collect image files for enrollment images
for( int i = 2; i < args.Length; i++) {
Sample s = new Sample(Jpeg.load( args[i]));
enrollmentSamples[ i-2] = s;
}
// create an enrollment processor
Enrollment.Processor proc = new Enrollment.Processor( cfg);
// create the needed interaction instances
Enrollment.Feedback feedback = new EnrollmentFeedback( args[1]);
Console.WriteLine( "start processing ...");
// do the enrollment
proc.process( enrollmentSamples, feedback);
} catch ( System.Exception ex) {
Console.WriteLine
( "\nException ---\n{0}\n{1}", ex.Message, ex.StackTrace );
return 1;
}
return 0;
}
}
11.3 eyesfind.cs
Example showing the usage of face and eyes finders and portrait analyzer.
// Copyright (c) 2004 Cognitec Systems GmbH
//
// $Revision: 1.12 $
//
using System;
using Cognitec.FRsdk;
using Face = Cognitec.FRsdk.Face;
using Eyes = Cognitec.FRsdk.Eyes;
using Portrait = Cognitec.FRsdk.Portrait;
using FullFrontal = Cognitec.FRsdk.ISO_19794_5.FullFrontal;
using Feature = Cognitec.FRsdk.Portrait.Feature;
class EyesFinding
{
private static int usage ()
{
Console.WriteLine
( "Usage: eyesfind <frsdk configuration file> <jpeg file>");
return 1;
}
public static int Main( string[] args)
{
if( args.Length != 2) return usage ();
try {
// initialisation of configuration
Configuration cfg = new Configuration( args[0]);
// initialization for single face finding (default)
float minrEyeDist = 0.1F;
// face finder instantiation
Face.Finder faceFinder = new Face.Finder( cfg);
// eyes finder instantiation
Eyes.Finder eyesFinder = new Eyes.Finder( cfg);
// portrait analyzer instantiation
Portrait.Analyzer portraitAnalyzer = new Portrait.Analyzer( cfg);
// Feature assessment instantiation
Feature.Test featureTest = new Feature.Test( cfg);
// ISO 19794-5 Full Frontal image assessment instantiation
FullFrontal.Test fullFrontalTest = new FullFrontal.Test( cfg);
// load the jpeg image
Image img = Jpeg.load( args[1]);
// find faces
Face.Location[] faceLocations = faceFinder.find (img, minrEyeDist, 0.4F, int.MinValue / 2, int.Min
Console.WriteLine ("number of faces found: {0}", faceLocations.Length);
Console.WriteLine ("----------------------");
// iterate over face locations
foreach( Face.Location faceLoc in faceLocations) {
// print face location
Console.WriteLine
( "Face location: [x={0} y={1} width={2} conf={3}]",
faceLoc.pos.x, faceLoc.pos.y, faceLoc.width, faceLoc.confidence);
// find eyes
Eyes.Location[] eyesLocations = eyesFinder.find( img, faceLoc);
// iterate over eyes locations
foreach( Eyes.Location eyesLoc in eyesLocations) {
// print eye locations
Console.WriteLine
( "Eye locations: [[first x={0} y={1} conf={2}]" +
" [second x={3} y={4} conf={5}]]",
eyesLoc.first.x, eyesLoc.first.y, eyesLoc.firstConfidence,
eyesLoc.second.x, eyesLoc.second.y, eyesLoc.secondConfidence);
}
// do we have eyes found
if( eyesLocations.Length > 0) {
// analyze portrait
Portrait.Characteristics portraitCharacteristics =
portraitAnalyzer.analyze
( new AnnotatedImage( img, eyesLocations[0]));
Console.WriteLine
( "Left Eye Open: {0}",
portraitCharacteristics.eye0Open());
Console.WriteLine
( "Right Eye Open: {0}",
portraitCharacteristics.eye1Open());
Console.WriteLine
( "Exposure: {0}",
portraitCharacteristics.exposure());
Console.WriteLine
( "Sharpness: {0}",
portraitCharacteristics.sharpness());
// test features
Feature.Set features = featureTest.assess( portraitCharacteristics);
if( features.wearsGlasses())
Console.WriteLine
( "Feature test: Person with glasses. ({0})",
portraitCharacteristics.glasses());
else
Console.WriteLine
( "Feature test: Person without glasses. ({0})",
portraitCharacteristics.glasses());
// test compliance with ISO 19794-5
FullFrontal.Compliance compliance = fullFrontalTest.assess
( portraitCharacteristics);
if( compliance.isCompliant()) {
Console.WriteLine
( "Image compliant with ISO 19794-5 requirements");
} else {
if( !compliance.goodVerticalFacePosition())
Console.WriteLine( "Bad vertical face position!");
if( !compliance.horizontallyCenteredFace())
Console.WriteLine( "Face not centered horizontally!");
if( !compliance.widthOfHead())
Console.WriteLine( "Bad sizing (Width)!");
if( !compliance.lengthOfHead())
Console.WriteLine( "Bad sizing (Height)!");
if( !compliance.goodExposure())
Console.WriteLine( "Bad exposure!");
if( !compliance.isFrontal())
Console.WriteLine( "Face not frontal!");
Console.WriteLine
( "Image not compliant with ISO 19794-5 requirements");
}
}
else Console.WriteLine ("--- no eyes found! ---");
Console.WriteLine ("----------------------");
}
} catch ( System.Exception ex) {
Console.WriteLine( "\nException ---\n{0}", ex.Message );
return 1;
}
return 0;
}
}
11.4 facefind.cs
// find faces
Face.Location[] faceLocations = faceFinder.find (img, minrEyeDist, 0.4F, int.MinValue /
Console.WriteLine ("number of found faces: {0}", faceLocations.Length);
// print face locations
foreach( Face.Location faceLoc in faceLocations) {
Console.WriteLine
( "Face location: [x={0} y={1} width={2} conf={3}]",
faceLoc.pos.x, faceLoc.pos.y, faceLoc.width, faceLoc.confidence);
}
}
catch ( System.Exception ex) {
Console.WriteLine( "\nException ---\n{0}", ex.Message );
return 1;
}
return 0;
}
}
11.5 identify.cs
using System;
using Cognitec.FRsdk;
using Eyes = Cognitec.FRsdk.Eyes;
using Identification = Cognitec.FRsdk.Identification;
// Implementation of Identification.Feedback
class IdentificationFeedback: Identification.Feedback
{
public void start() { Console.WriteLine("start"); }
public void processingImage( Image img)
{ Console.WriteLine( "processing image[{0}]", img.name()); }
public void eyesFound( Eyes.Location eyes)
{
Console.WriteLine
( "found eyes at [[first x={0} y={1}] [second x={2} y={3}]]",
eyes.first.x, eyes.first.y, eyes.second.x, eyes.second.y);
}
public void eyesNotFound()
{ Console.WriteLine( "eyes not found"); }
public void sampleQualityTooLow()
{ Console.WriteLine( "sample quality too low"); }
public void sampleQuality( float f )
{ Console.WriteLine( "sample quality: {0}", f); }
public void matches( Match[] matches)
{
foreach( Match match in matches) {
Console.WriteLine
( "match on fir[{0}] got Score[{1}]",
match.name, match.score.value);
}
}
public void failure() { Console.WriteLine( "failure"); }
public void end() { Console.WriteLine( "end"); }
};
class IdentificationExample
{
public static int Main( string[] args)
{
if( args.Length < 3) {
Console.WriteLine
( "usage:\n" +
"identify {config file} {jpeg image} {fir} ...\n" +
"\tconfig file ... the frsdk config file\n" +
"\tjpeg image ... a jpeg image file for processing\n" +
"\tfir         ... one or more FIR files for the identification" +
" population.");
return 1;
}
try {
// initialisation of configuration
Configuration cfg = new Configuration( args[0]);
// prepare array for identification images
Sample[] identificationSamples = new Sample[ 1];
Sample s = new Sample(Jpeg.load( args[1]));
identificationSamples[ 0] = s;
// build the fir population for identification
FIRBuilder firBuilder = new FIRBuilder( cfg);
Population population = new Population( cfg);
for( int j = 2; j < args.Length; j++) {
string filename = args[ j];
Console.WriteLine( "[{0}]", filename);
population.append
( firBuilder.build
( new System.IO.FileStream( filename, System.IO.FileMode.Open)),
filename);
}
// request Score
ScoreMappings sm = new ScoreMappings( cfg);
Score score = sm.requestFAR( 0.001f);
// create an identification processor
Identification.Processor proc =
new Identification.Processor( cfg, population);
// create the needed interaction instances
Identification.Feedback feedback = new IdentificationFeedback();
Console.WriteLine( "start processing ...");
// do the identification
proc.process( identificationSamples, score, feedback, 3);
} catch ( System.Exception ex) {
Console.WriteLine
( "\nException ---\n{0}\n{1}", ex.Message, ex.StackTrace );
return 1;
}
return 0;
}
}
11.6 tracklife.cs
Example showing the usage of the tracker if frames with a varying frame rate are
available.
// Copyright (c) 2008 Cognitec Systems GmbH
//
// $Revision: 1.1 $
//
using System;
using Cognitec.FRsdk;
using Face = Cognitec.FRsdk.Face;
using Eyes = Cognitec.FRsdk.Eyes;
class LifeFaceTracking
{
public static int Main( string[] args)
{
if( args.Length < 2) {
Console.WriteLine
( "Usage: tracklife <frsdk configuration file> <device name>");
return 1;
}
try {
// initialisation of configuration
Configuration cfg = new Configuration( args[0]);
// face tracker instantiation
Face.Tracker faceTracker = new Face.Tracker( cfg);
// capturing device instantiation
WinCaptureDevice capDev = new WinCaptureDevice( cfg, args[1]);
while( true) {
// grab image
Image img = capDev.capture();
// get the capture time in milliseconds (Ticks are 100 ns units)
uint captureTime = (uint)( DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond);
// track objects in frame
Face.TrackerLocation[] l = faceTracker.processImage( img, captureTime);
// print time and locations
Console.Write( "{0} ", captureTime);
foreach( Face.TrackerLocation i in l) {
Eyes.Location e = i.eyesLocation;
Console.Write
( "{0} : [x={1} y={2}] ",
i.id, (e.first.x + e.second.x) /2, (e.first.y + e.second.y) /2);
}
Console.WriteLine();
}
} catch ( System.Exception ex) {
Console.WriteLine( "\nException ---\n{0}", ex.Message );
return 1;
}
return 0;
}
}
11.7 trackrec.cs
Example showing the usage of the tracker if frames with a constant frame rate are
available.
// Copyright (c) 2008 Cognitec Systems GmbH
//
// $Revision: 1.1 $
//
using System;
using Cognitec.FRsdk;
using Face = Cognitec.FRsdk.Face;
using Eyes = Cognitec.FRsdk.Eyes;
string fn = args[ a];
// load image
Image img = loadImage( fn);
// track objects of frame
Face.TrackerLocation[] l = faceTracker.processFrame( img);
// print filename and locations
Console.Write( "{0} ", fn);
foreach( Face.TrackerLocation i in l) {
Eyes.Location e = i.eyesLocation;
Console.Write
( "{0} : [x={1} y={2}] ",
i.id, (e.first.x + e.second.x) /2, (e.first.y + e.second.y) /2);
}
Console.WriteLine();
}
} catch ( System.Exception ex) {
Console.WriteLine( "\nException ---\n{0}", ex.Message );
return 1;
}
return 0;
}
}
11.8 verify.cs
using System;
using Cognitec.FRsdk;
using Eyes = Cognitec.FRsdk.Eyes;
using Verification = Cognitec.FRsdk.Verification;
// Implementation of Verification.Feedback
class VerificationFeedback: Verification.Feedback
{
public void start() { Console.WriteLine("start"); }
public void processingImage( Image img)
{ Console.WriteLine( "processing image[{0}]", img.name()); }
public void eyesFound( Eyes.Location eyes)
{
Console.WriteLine
( "found eyes at [[first x={0} y={1}] [second x={2} y={3}]]",
eyes.first.x, eyes.first.y, eyes.second.x, eyes.second.y);
}
public void eyesNotFound()
{ Console.WriteLine( "eyes not found"); }
public void sampleQualityTooLow()
{ Console.WriteLine( "sample quality too low"); }
public void sampleQuality( float f )
{ Console.WriteLine( "sample quality: {0}", f); }
public void match( Score s)
{ Console.WriteLine( "match got score: {0}", s.value); }
public void success()
{ Console.WriteLine( "successful verification." ); }
public void failure() { Console.WriteLine( "failure"); }
public void end() { Console.WriteLine( "end"); }
};
class VerificationExample
{
public static int Main( string[] args)
{
if( args.Length < 3) {
Console.WriteLine
( "usage:\n" +
"verify {config file} {fir} {jpeg images}...\n" +
"\tconfig file ... the frsdk config file\n" +
"\tfir         ... a filename for a FIR\n" +
"\tjpeg image ... one or more jpeg image files for verification." );
return 1;
}
try {
// initialisation of configuration
Configuration cfg = new Configuration( args[0]);
// prepare array for verification images
Sample[] verificationSamples = new Sample[ args.Length - 2];
// collect image files for verification images
for( int i = 2; i < args.Length; i++) {
Sample s = new Sample(Jpeg.load( args[i]));
verificationSamples[ i-2] = s;
}
FIRBuilder firBuilder = new FIRBuilder( cfg);
// get the FIR to verify against
Console.WriteLine( "reading fir {0} ... ", args[1]);
FIR fir = firBuilder.build
( new System.IO.FileStream( args[1], System.IO.FileMode.Open));
Console.WriteLine( "done");
// request Score
ScoreMappings sm = new ScoreMappings( cfg);
Score score = sm.requestFAR( 0.001f);
Console.WriteLine( "required success score: {0}", score.value);
// create a verification processor
Verification.Processor proc = new Verification.Processor( cfg);
// create the needed interaction instances
Verification.Feedback feedback = new VerificationFeedback();
Console.WriteLine( "start processing ...");
// do the verification
proc.process( verificationSamples, fir, score, feedback);
} catch ( System.Exception ex) {
Console.WriteLine
( "\nException ---\n{0}\n{1}", ex.Message, ex.StackTrace );
return 1;
}
return 0;
}
}