Biometric Authentication and Digital Identity Verification

Explore top LinkedIn content from expert professionals.

  • Matt Marino

    President at WinkPay | Building & Scaling Revenue Orgs from $0 to $120M | Intrapreneur | 3 Acquisitions | Data, Analytics, AI

    7,009 followers

    That's not me in the picture.

    Last week, Sam Altman sounded the alarm: AI-powered impersonation and payment fraud are about to spike. At Wink we're already shipping defenses built for that reality:

    🛡️ Passive Liveness Detection
    No awkward "blink twice" prompts. Our computer-vision models silently track micro-expressions, depth, and skin texture right from the device camera (POS, kiosk, mobile, laptop). Deepfake photos and video clones don't stand a chance.

    🔒 Multimodal Authentication
    Spoof one factor? Maybe. Spoof three, all at once? Good luck.
    - Device signals (secure enclave, geolocation, IP reputation)
    - Face + voice match (stops voice-only deepfakes)
    - Palm biometrics (prints, veins, hand geometry) for high-risk flows

    🌀 Continuous & Adaptive Checks
    Identity isn't a single gate; it's a real-time guardrail. We rescore every interaction mid-session, flagging odd micro-movements or voice anomalies before fraudsters can cash out.

    Takeaway: AI bad actors move fast, but layered, passive, continuous biometrics move faster. If your fraud defenses still rely on static checks, now's the moment to upgrade.

    #ai #fraudprevention #biometrics #fintech #security
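
    The post describes layering but not how the layers combine. Below is a minimal sketch of one way a layered, multimodal decision could work; Wink has not published its implementation, so every name, weight, and threshold here is an illustrative assumption.

    ```python
    # Hypothetical layered authentication scoring. All weights and
    # thresholds are assumptions for illustration, not Wink's actual logic.
    from dataclasses import dataclass

    @dataclass
    class FactorScores:
        face_match: float    # 0..1 similarity from a face model
        voice_match: float   # 0..1 similarity from a voice model
        liveness: float      # 0..1 passive-liveness confidence
        device_trust: float  # 0..1 from device/IP reputation signals

    def decide(s: FactorScores, approve_at: float = 0.85,
               step_up_at: float = 0.60) -> str:
        """Combine independent factors so that spoofing any single
        modality drags the composite score below the approval bar."""
        if s.liveness < 0.5:          # liveness is a hard gate:
            return "deny"             # a likely deepfake fails outright
        composite = (0.35 * s.face_match + 0.25 * s.voice_match +
                     0.20 * s.liveness + 0.20 * s.device_trust)
        if composite >= approve_at:
            return "approve"
        if composite >= step_up_at:
            return "step_up"          # e.g. escalate to palm biometrics
        return "deny"

    # "Continuous & adaptive" then amounts to re-running decide() as
    # fresh frames and audio arrive mid-session.
    print(decide(FactorScores(0.95, 0.90, 0.92, 0.80)))  # approve
    print(decide(FactorScores(0.95, 0.20, 0.92, 0.80)))  # step_up
    ```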

  • Frances Zelazny

    Co-Founder & CEO, Anonybit | Strategic Advisor | Startups and Scaleups | Enterprise SaaS | Marketing, Business Development, Strategy | CHIEF | Women in Fintech Power List 100 | SIA Women in Security Forum Power 100

    10,533 followers

    Last week, two major announcements seemed to rock the identity world.

    The first: a finance worker was tricked into paying $26M after a video call with deepfake creations of his CFO and other management team members.

    The second: an underground website claims to use neural networks to generate realistic photos of fake IDs for $15.

    That these happened should not surprise anyone. In fact, as iProov revealed in a recent report, deepfake face-swap attacks on ID verification systems were up 704% in 2023, and I am sure the numbers so far in 2024 are only getting worse. Deepfakes, injection attacks, fake IDs: it is all happening.

    Someone asked me if the identity industry is now worthless because of these developments, and the answer is absolutely not. There is no reason to be alarmist. Thinking through these cases, it becomes obvious that the problem is poor system design and authentication methodology:

    - Storing personal data in central honeypots that are impossible to protect
    - Enabling the use of that data for creating synthetic identities and bypassing security controls
    - Using passwords, one-time codes, and knowledge questions for authentication
    - Not having proper controls for high-risk, high-value, privileged-access transactions

    Layering capabilities like the following closes these gaps:

    - Decentralized biometrics can help an enterprise maintain a secure repository of identities that can be checked every time someone registers an account (for example, for duplicates, synthetic identities, and blocked identities). If you just check a document for validity and don't run a selfie comparison against the document, or check the selfie against an existing repository, you could be exposing yourself to downstream fraud.
    - Liveness detection and injection detection can eliminate the risk of presentation attacks and deepfakes at onboarding and at any point in the authentication journey.
    - Biometrics should be used to validate a transaction, and two or more people should be required to approve a transaction above a certain amount and/or to a new payee. Adding a new payee or changing account details can also require strong authentication. And by strong authentication, I mean biometrics, not one-time codes, knowledge questions, or other factors that can be phished out of you. (See the policy sketch after this post.)

    It goes back to why we designed the Anonybit solution the way we did. (See my blog from July on the topic.) Essentially, if you agree that:

    - Personal data should not be stored in centralized honeypots
    - Biometrics augmented with liveness and injection detection should be the primary form of authentication
    - The same biometric collected at onboarding should be used across the user journey

    Then Anonybit will make sense to you. Let's talk.

    #digitalidentity #scams #deepfakes #generativeai #fraudprevention #identitymanagement #biometricsecurity #privacymatters #innovation #privacyenhancingtechnologies
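
    The transaction controls in that last bullet are concrete enough to sketch. Here is a minimal, hypothetical version of such a policy; the $10,000 threshold, field names, and flow are assumptions for illustration, not Anonybit's actual product or API.

    ```python
    # Hypothetical transaction-control policy: biometric verification only
    # (no phishable OTPs or knowledge questions), with dual approval for
    # high-value transfers or new payees. The threshold is an assumption.
    from dataclasses import dataclass, field

    DUAL_APPROVAL_THRESHOLD = 10_000.00  # assumed policy limit

    @dataclass
    class Transaction:
        amount: float
        payee_is_new: bool
        # IDs of users who passed a liveness-checked biometric match
        biometric_approvals: set = field(default_factory=set)

    def approvals_required(tx: Transaction) -> int:
        # High-value or new-payee transfers need two distinct approvers.
        if tx.amount > DUAL_APPROVAL_THRESHOLD or tx.payee_is_new:
            return 2
        return 1

    def may_execute(tx: Transaction) -> bool:
        return len(tx.biometric_approvals) >= approvals_required(tx)

    # The deepfake-CFO scenario: one (possibly impersonated) approver
    # is not enough to release a $26M transfer to a new payee.
    tx = Transaction(amount=26_000_000, payee_is_new=True,
                     biometric_approvals={"cfo"})
    print(may_execute(tx))   # False
    tx.biometric_approvals.add("controller")
    print(may_execute(tx))   # True: two independent biometric approvals
    ```

    The design point: even a perfect real-time deepfake of one executive cannot satisfy a policy that requires two independent, biometrically verified approvers.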

  • "On the Internet, nobody knows you're a dog" Peter Steiner, published in The New Yorker on July 5, 1993. If your IT Help Desk gets a call from a hacker pretending to be one of your employees - how will you KNOW who is on the other end? I ask this question to give a metaphor of forensic evidence in court, and the associated assurance of the method used to identify the person. Say that you want to identify a suspect's identity and you have these evidence options for verification: 1) Fiber strands found in the crime scene compared to the suspect's clothing 2) Tire marks found in the crime scene compared to suspect's car tires 3) Shoe prints found in the crime scene compared to the suspect's shoes 4) Hair found in the crime scene compared to the suspect's hair 5) Fingerprints found in the crime scene compared the suspect's fingerprints 6) DNA found in the crime scene compared to the suspect's DNA While all the methods are useful, and some have been used for many years, there is something different about DNA evidence. It is authoritative, and has the source of the identity. Every other option is a proxy, a facsimile to the real identity. I am asking this question to open our collective awareness, and the discussion - about how will we know who is calling the IT Help Desk with certainty, before providing network access? before providing privileged access? With the latest #Deepfakes and #GenAI ability to completely mimic the person's look, voice, video animation - how will we overcome it without upgrading our tool set to meet the new AI deepfake challenge? My stance is, after being a practitioner, a fraud investigator, a vendor - that we have to move to what works, and to use more authoritative sources than ever before. Why? Because it is becoming futile to tell the difference of a real image, from a well-trained #AI image that is transmitted on the wire. We need to take the cyber battle to where AI CAN NOT. Use an IDV strategy that AI is not able to guess/generate the information, as it was not trained on it. https://lnkd.in/gUFEuzes