
Score-Level Fusion in Multi-Perspective Finger Vein Recognition


Nevertheless, true finger vein recognition with multiple perspectives (evaluating more than two different views around the finger) has not been explored until now, except in our previous work [2]. Our multi-perspective finger vein capture device can capture images around the long axis of the finger (360°).

Fig. 10.1 Multi-perspective finger vein set-up exhibiting three different perspectives based on three image sensors and three illuminator modules

Fusion in Finger Vein Recognition

Single Modality (Finger Vein Only) Fusion

The authors of [30] proposed a result-level fusion scheme combining pixel-based and super-pixel-based finger vein features, while the authors of [33] tested different preprocessing cascades to improve the individual performance of single finger vein feature extraction schemes.

Table 10.1 Related work in single modality finger vein fusion, ordered according to fusion level and year of publication

Multi-modality Fusion Including Finger Veins

They were able to achieve a minimum EER of 0.27% using score-level fusion compared to a minimum EER of 0.47% for the single features. Therefore, we created a multi-perspective finger vein dataset using our self-designed multi-perspective finger vein capture device, described in Sects.

Table 10.2 Related work in finger vein fusion, multi-modality fusion involving finger veins, ordered according to fusion level and year of publication

Experimental Analysis

Finger Vein Dataset

Finger Vein Recognition Toolchain

The final binary output image is obtained by thresholding the locus space using its mean as the threshold. The curvatures are obtained from the eigenvalues of the Hessian matrix at each pixel.
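This curvature-and-thresholding step can be sketched as follows. The snippet is a simplified stand-in (assuming a normalized grayscale ROI and scikit-image's Hessian utilities), not the exact MC/PC implementation used in the chapter.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def vein_curvature_map(roi, sigma=3):
    """Per-pixel curvature response from the eigenvalues of the Hessian.

    roi: 2-D float array (normalized finger vein ROI). Large positive values
    indicate dark, line-like (vein) structures; this is a simplified stand-in
    for the locus-space accumulation used by the MC/PC feature extractors.
    """
    H = hessian_matrix(roi, sigma=sigma, order='rc')
    eig1, eig2 = hessian_matrix_eigvals(H)   # eig1 >= eig2 at every pixel
    return np.maximum(eig1, 0)               # keep ridge-like (dark line) responses

def binarise(curvature):
    # Threshold the response map with its mean, as described above.
    return (curvature > curvature.mean()).astype(np.uint8)
```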

Fig. 10.7 ROI extraction process (images originally published in [2], © 2018 IEEE)

Score-Level Fusion Strategy and Toolkit

The final comparison score is determined as the ratio of the number of matched key points to the sum of the numbers of detected key points in both images. BOSARIS allows the costs of a miss and of a false alarm to be set in advance as a target for the training phase of the fusion.
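The key-point score described above can be sketched roughly as follows, assuming SIFT key points matched with Lowe's ratio test via OpenCV; the matcher settings of the actual toolkit are not specified here.

```python
import cv2

def keypoint_comparison_score(img1, img2, ratio=0.75):
    """Score = matched key points / (key points in image 1 + key points in image 2)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good) / (len(kp1) + len(kp2))
```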

Evaluation Protocol

Based on the results from the two individual fusion strategies, we determine the best possible combinations/fusions of perspectives and feature extraction methods. The final results are evaluated on the pooled comparison scores (genuine and impostor) of all five test runs.
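As an illustration of this evaluation step, a generic sketch for estimating the EER from the pooled genuine and impostor scores is given below; it is not the chapter's exact evaluation code.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER from pooled genuine and impostor comparison scores.

    Assumes that higher scores indicate more similar (genuine-like) pairs.
    """
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    fmr = np.array([(impostor >= t).mean() for t in thresholds])   # false match rate
    fnmr = np.array([(genuine < t).mean() for t in thresholds])    # false non-match rate
    idx = int(np.argmin(np.abs(fmr - fnmr)))
    return (fmr[idx] + fnmr[idx]) / 2.0
```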

Single Perspective Performance Results

In addition, vein extraction algorithms include some features related to finger texture. The RPD given in Eq. (10.1) is calculated relative to the smallest EER (EER_FT_min) achieved for a particular feature extraction method, where EER_FT_perspective is the EER of the current perspective.
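Under that definition, a plausible reconstruction of Eq. (10.1), assuming the RPD is expressed as a percentage, is:

```latex
\mathrm{RPD} \;=\; \frac{\mathrm{EER}_{FT_{\mathrm{perspective}}} - \mathrm{EER}_{FT_{\min}}}{\mathrm{EER}_{FT_{\min}}} \cdot 100\,\%
```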

Fig. 10.8 Recognition performance for the different perspectives: EER (top) and relative performance degradation in relation to the best performing view (bottom)

Multi-perspective Fusion Results

Table 10.5 details the best results in terms of EER, FMR1000 and ZeroFMR for each feature extraction method. We additionally analyzed how often the palmar and dorsal views occur among the top 25 results for each feature extraction method.

Fig. 10.11 Recognition performance for two-view fusion. Top row: MC (left), PC (right), bottom row: GF (left), SIFT (middle) and DTFPM (right)

Multi-algorithm Fusion Results

Considering single feature extraction methods, MC or PC is included in more than 70% of the best results. Combinations of either MC or PC with SIFT or DTFPM account for 98% of the best results for fusion of two feature extraction methods.

Table 10.8 Estimated χ² from the EER for multi-perspective fusion. The best result per number of involved views is highlighted in bold font

Combined Multi-perspective and Multi-algorithm Fusion

The results presented in Fig. 10.11 and Table 10.5 show that the best results are achieved with the fusion of the palmar and dorsal views. For the sake of completeness, we also calculated the results of the best 3-, 4- and 5-MAF combinations with the palmar and dorsal view.

Table 10.11 Performance results: Fusion of vein pattern based with key-point based features for both the palmar and the dorsal view

Results Discussion

The best result, an EER of 0.12%, was achieved using MC features in a fusion of the palmar and dorsal views. By using the best performing perspectives of the two-perspective (palmar and dorsal) approach and combining them with a vein pattern based method (MC, PC or GF) and a key-point based method (SIFT or DTFPM), we were able to achieve an EER of 0.04% using MC and SIFT.

Conclusion and Future Work

This proposed finger vein capture device configuration achieves an EER of 0.04%, which is a performance increase by a factor of 11 compared to the best single-view, single-feature performance. In particular, a finger vein capture device that captures the palmar and dorsal views and fuses MC and SIFT features in a combined fusion offers the best trade-off between the considerations mentioned above and is, therefore, our favoured design choice.

Lu Y, Yoon S, Park DS (2013) Finger vein recognition based on score-level matching fusion of Gabor features.
Yang W, Huang X, Liao Q (2012) Fusion of finger vein and finger dorsal texture for personal identification based on comparative competitive coding.

Sclera and Retina Biometrics

Retinal Vascular Characteristics

Introduction

  • Anatomy of the Retina
  • History of Retinal Recognition
  • Medical and Biometric Examination and Acquisition Tools
    • Medical Devices
    • Biometric Devices
    • Device EYRINA
  • Recognition Schemes
  • Achieved Results Using Our Scheme
  • Limitations

The third generation can already find the eye in front of the camera, move the optical system to the center of the image (aligning the optical axis of the eye and the camera), take pictures of the retina (in the visible spectrum) and shoot a short video (in the infrared spectrum). We had to calculate the distance between the center of the blind spot (hereafter CBS) and the center of the yellow spot (hereafter CYS).

Fig. 11.1 Anatomy of the human eye [42]

Eye Diseases

  • Automatic Detection of Druses and Exudates
  • Testing

A retinal detachment (see Figure 11.16 left) occurs when tears appear in the retina, allowing the vitreous to get under the retina and lift it up. Detection of druses and exudates works with the green channel of the original image (Figure 11.17 left).
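A minimal sketch of the green-channel step, assuming an RGB fundus image stored as a NumPy array; the detector described in the chapter adds further morphology and vessel masking that is omitted here.

```python
import numpy as np

def bright_lesion_candidates(rgb_fundus, percentile=98):
    """Rough candidate mask for bright lesions (druses/exudates).

    rgb_fundus: H x W x 3 uint8 array. Bright lesions show high contrast in
    the green channel, so the mask keeps the brightest green-channel pixels.
    """
    green = rgb_fundus[:, :, 1].astype(np.float32)
    threshold = np.percentile(green, percentile)
    return green >= threshold
```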

Fig. 11.13 (Left) Hard and soft exudates [46] and (right) haemorrhage and micro-aneurysms [47]

Biometric Information Amounts in the Retina

  • Theoretical Determination of Biometric Information in Retina
  • Used Databases and Applications
  • Results

In this equation, we are particularly interested in the position of the points, then the angle at which. First we marked the border of the blind spot and then the center of the yellow spot.

Fig. 11.23 Unfolding interest area

Synthetic Retinal Images

  • Vascular Bed Layer
  • Layers
  • Background Layers
  • Generating a Vascular Bed
  • Testing
  • Generating Synthetic Images Via Neural Network

The lightest and least transparent color corresponds to the smallest distance from the center of the vessel. First, the positions of the leftmost and rightmost ends of the thick blood vessels (type 1) are calculated, then the positions of the left/right wider, weaker blood vessels branching off the type 1 vessels (type 2), and finally the positions of the remaining vessels (type 3).

Fig. 11.25 (Left) Artery texture; (middle) vein texture; (right) resulting vascular bed texture

Vascular Biometric Graph Comparison

Theory and Performance

Introduction

The purpose of this chapter is to provide a single resource for biometric researchers to learn and use the current state of the art in Biometric Graph Comparison for vascular modalities. A preliminary investigation of the benchmarking performance of this approach has given encouraging results for retinal databases, where there is an intrinsic alignment in the images [5].

The Biometric Graph

  • The Biometric Graph
    • Vascular Graphs
  • Biometric Graph Extraction

To construct the biometric graph from a two-dimensional biometric image, the vessel skeleton is extracted from the image and the feature points are found. Two feature points are connected by an edge if the skeleton path between them contains no other feature points; otherwise they are not connected.
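A rough sketch of this construction, assuming a binary vessel skeleton and taking feature points to be skeleton pixels with one neighbour (endpoints) or three or more neighbours (bifurcations/crossings); this is a common convention, not necessarily the chapter's exact rule.

```python
import numpy as np
import networkx as nx
from scipy.ndimage import convolve

OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

def biometric_graph(skeleton):
    """Build a spatial graph from a binary vessel skeleton (2-D bool array).

    Feature points are endpoints (1 neighbour) and branch points (>= 3
    neighbours); two feature points are joined by an edge when the skeleton
    path between them contains no other feature point. Node coordinates are
    offset by the 1-pixel pad added below.
    """
    sk = np.pad(np.asarray(skeleton, bool), 1)            # border avoids bounds checks
    kernel = np.ones((3, 3), int); kernel[1, 1] = 0
    nbrs = convolve(sk.astype(int), kernel, mode='constant')
    features = set(map(tuple, np.argwhere(sk & ((nbrs == 1) | (nbrs >= 3)))))

    g = nx.Graph()
    g.add_nodes_from(features)
    for start in features:                                # walk away from each feature point
        for dy, dx in OFFSETS:
            prev, cur, length = start, (start[0] + dy, start[1] + dx), 1
            if not sk[cur]:
                continue
            while cur not in features:
                step = [(cur[0] + oy, cur[1] + ox) for oy, ox in OFFSETS
                        if sk[cur[0] + oy, cur[1] + ox] and (cur[0] + oy, cur[1] + ox) != prev]
                if not step:                              # dead end without a feature point
                    break
                prev, cur, length = cur, step[0], length + 1
            if cur in features and cur != start:
                g.add_edge(start, cur, length=length)
    return g
```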

Figure 12.1 shows typical vascular pattern images from the databases of each of the four modalities we have investigated and their corresponding Biometric Graphs, extracted as above.

The Biometric Graph Comparison Algorithm

  • BGR-Biometric Graph Registration
    • BGR Algorithm Outline
    • Other Approaches to Registration of BGs
  • BGC-Biometric Graph Comparison
    • BGC Algorithm Outline

The average of the medians of the edge lengths in the two graphs is chosen as the threshold. The edge-induced MCS is the most connected, with the richest structure of the four.

Fig. 12.2 This figure shows the Maximum Common Subgraph between the palm vessel graphs in a and b resulting from applying BGC with the structure S being c vertices, d edges, e claws and f two-claws

Results

  • Vascular Databases
  • Comparison of Graph Topology Across Databases
    • BG Statistics
    • Proximity Graphs
  • Comparison of MCS Topology in BGC
  • Comparison of BGC Performance Across Databases

Gouru [16], in his work on facial vessels (the vascular pattern under the skin of the face), uses a database collected by the University of Houston and extracts BGs. For the BGs from the SNIR and SFIR databases in [21], we have the seven measurements d_v, d_e, |V_c1|, |V_c1| + |V_c2|, σ²_D, D_max and, for the first time, the average degree μ_D of the vertices in the MCS.

Table 12.2 Vessel image databases used for BGC

Anchors for a BGC Approach to Template Protection

  • Dissimilarity Vector Templates for Biometric Graphs
  • Anchors for Registration
  • The Search for Anchors
  • Queries and Discoveries for Anchors
  • Results
  • Conclusion

Once an anchor is found, it must be reliably re-found in a new pattern from the same subject. Figure 12.8a, c shows the distribution of the anchor overlap measure for the palm and wrist databases.

Fig. 12.4 An example of a dissimilarity vector for a retina graph g in ESRID from a set of cohort graphs in VARIA

Deep Sclera Segmentation and Recognition

Introduction

One feature that presents itself as a particularly viable option in this context is the scleral vasculature. Specifically, we first present a new technique for segmenting the vascular structure of the sclera based on a cascaded SegNet assembly [15].

Related Work

  • Ocular Biometrics
  • Sclera Recognition
  • Existing Datasets

Unlike the described techniques, our approach uses supervised segmentation models (which usually perform better), made possible by the manual labels of the scleral vasculature that come with the SBVPI dataset (introduced later in this chapter) and, to the best of our knowledge, are not available with any other existing ocular imaging dataset. With ScleraNET, we present a model for computing the first learned image descriptor for sclera recognition.

Table 13.1 Comparison of the main characteristics of existing datasets for ocular biometrics

Methods

  • Overview
  • Region-Of-Interest (ROI) Extraction
    • The Two-Step Segmentation Procedure
    • The SegNet Architecture
    • Model Training and Output Generation
  • ScleraNET for Recognition
    • ScleraNET Architecture
    • Learning Objective and Model Training
    • Identity Inference with ScleraNET

The vascular structure of the sclera is first segmented from the input image using a two-step procedure. In the initial segmentation step, a binary mask of the sclera region is generated from a SegNet model.
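The cascade can be summarised schematically as below, with `sclera_model` and `vessel_model` standing in as hypothetical, already-trained SegNet-style networks that map an image to a per-pixel probability map; input sizes, thresholds and post-processing from the chapter are not reproduced.

```python
import numpy as np

def segment_vasculature(image, sclera_model, vessel_model, threshold=0.5):
    """Two-step ROI extraction: (1) segment the sclera, (2) segment the
    vasculature inside the sclera mask only.
    """
    sclera_prob = sclera_model.predict(image)        # step 1: sclera probability map
    sclera_mask = sclera_prob > threshold

    masked = image * sclera_mask[..., None]          # keep only the sclera region
    vessel_prob = vessel_model.predict(masked)       # step 2: vasculature probabilities
    return vessel_prob * sclera_mask                 # suppress responses outside the sclera
```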

Fig. 13.1 Block diagram of the proposed sclera recognition approach. The vascular structure of the sclera is first segmented from the input image x using a two-step procedure

The Sclera Blood Vessels, Periocular and Iris (SBVPI) Dataset

  • Dataset Description
  • Available Annotations

The images show (from left to right): a sample SBVPI image, the iris markup, the sclera markup and the vascular structure markup. In particular, all 1858 images contain a pixel-level markup of the sclera and iris regions, as illustrated in Fig. 13.6.

Fig. 13.4 An example image from the SBVPI dataset with a zoomed in region that shows the vascular patterns of the sclera

Experiments and Results

  • Performance Metrics
  • Experimental Protocol and Training Details
    • Segmentation Experiments
    • Recognition Experiments
  • Evaluation of Sclera Segmentation Models
  • Evaluation of Vasculature Segmentation Models
  • Recognition Experiments

We show some examples of the segmentation results produced by the tested segmentation models in Fig. 13.8. An example of the probability map generated with the SegNet model is shown in Fig. 13.11.

Table 13.4 Segmentation results generated based on binary segmentation masks. For the CNN-based models, the masks are produced by thresholding the generated probability maps with a value of Δ that ensures the highest possible F1-score, whereas the USS
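That threshold selection can be illustrated with a simple grid search over candidate values of Δ, keeping the one that maximizes the F1-score against a ground-truth mask (a generic sketch, not the chapter's exact procedure):

```python
import numpy as np

def best_f1_threshold(prob_map, gt_mask, num_steps=100):
    """Return the threshold Δ on a probability map that maximizes the F1-score."""
    gt = gt_mask.astype(bool)
    best_delta, best_f1 = 0.0, -1.0
    for delta in np.linspace(0.0, 1.0, num_steps + 1):
        pred = prob_map >= delta
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom > 0 else 0.0
        if f1 > best_f1:
            best_delta, best_f1 = delta, f1
    return best_delta, best_f1
```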

Conclusion

Miyazawa K, Ito K, Aoki T, Kobayashi K, Nakajima H (2008) An effective approach for iris recognition using phase-based image matching.
Gangwar A, Joshi A (2016) DeepIrisNet: deep iris representation with applications in iris recognition and cross-sensor iris recognition.

Security and Privacy in Vascular Biometrics

Presentation Attack Detection for Finger Recognition

Introduction

The first studies on the vulnerability of finger vein recognition systems to presentation attacks were only conducted in 2014 [76]. However, the applications of PAD based on finger veins are not limited to finger vein recognition.

Presentation Attack Detection

This approach is currently being followed in the BATL project [6] within the US Odin research program [55]: among other sensors, finger vein images are used to detect fingerprint presentation attacks. We will then describe the multimodal sensor developed in the BATL project and the proposed approach for finger vein-based PAD to detect fingerprint PAIs (Sect. 14.4).

Related Works

  • Finger Vein Presentation Attack Detection
  • Fingerprint Presentation Attack Detection

As for fingerprints, they compare their approach with other state-of-the-art methods on the LivDet 2009 fingerprint database, which includes three different PAI types. They evaluate several different test scenarios and outperform other state-of-the-art approaches on LivDet datasets.

Table 14.2 Summary of the most relevant methodologies for software-based fingerprint presentation attack detection

Proposed Finger Vein Presentation Attack Detection

  • Multimodal Finger Capture Device
    • Finger Photo Sensor
    • Finger Vein Sensor
  • Presentation Attack Detection Algorithm

Example images as captured by the camera are shown in Fig. 14.2 for both the finger vein and the finger photo recordings. An example of the four selected PLBP images of the bona fide sample shown in Fig. 14.4 is given in Fig. 14.8.
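Assuming PLBP denotes LBP maps computed on successive levels of a Gaussian pyramid (the usual reading of the acronym), such images could be produced along the following lines; the pyramid depth and LBP parameters here are illustrative assumptions, not the values used in the chapter.

```python
from skimage.feature import local_binary_pattern
from skimage.transform import pyramid_gaussian

def plbp_images(gray, levels=4, radius=1, n_points=8):
    """LBP images computed on successive Gaussian-pyramid levels of `gray`."""
    maps = []
    for img in pyramid_gaussian(gray, max_layer=levels - 1):
        maps.append(local_binary_pattern(img, n_points, radius, method='uniform'))
    return maps
```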

Fig. 14.1 Sensor diagram: a box, with a slot in the middle to place the finger, encloses all the components: a single camera, two sets of LEDs for visible (VIS) and NIR illumination and the light guide necessary for the finger vein capture (more details in

Experimental Evaluation

  • Experimental Set-Up
  • Results

The corresponding graphs with the APCER and BPCER for each pyramid level are presented in Fig. 14.11. Comparing these samples with the bona fide samples from Fig. 14.8, we can see the large similarities for the transparent overlays in (a) and (c).
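For reference, the two ISO/IEC 30107-3 error rates shown in those graphs can be computed from PAD scores as in the sketch below (assuming higher scores indicate bona fide presentations; in practice the APCER is usually reported per PAI species rather than pooled over all attacks).

```python
import numpy as np

def apcer_bpcer(attack_scores, bona_fide_scores, threshold):
    """APCER: attack presentations wrongly accepted as bona fide.
       BPCER: bona fide presentations wrongly rejected as attacks."""
    attack_scores = np.asarray(attack_scores, float)
    bona_fide_scores = np.asarray(bona_fide_scores, float)
    apcer = float((attack_scores >= threshold).mean())
    bpcer = float((bona_fide_scores < threshold).mean())
    return apcer, bpcer
```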

Table 14.4 Listing of all PAI species and the number of samples in parenthesis

Summary and Conclusions

In: Proceedings of the International Workshop on Ubiquitous Implicit Biometrics and Health Signal Monitoring for Person-Centered Applications (UBIO). In: Proceedings of the International Conference on Signal Image Technology and Internet Systems (SITIS), p. 628–632.

On the Recognition Performance of BioHash-Protected Finger Vein

Introduction

