Identifying Security Risks in Hiring

Explore top LinkedIn content from expert professionals.

  • Troy Fine

    SOC 2 Auditor | Cybersecurity Compliance

    38,251 followers

    If you hire remote workers, you should be doing a deep dive on your recruiting, hiring, and onboarding processes to understand how you confirm the identity of each person you hire.

    An estimated several dozen “laptop farms” have popped up across the U.S. as part of a scam to infiltrate American companies. Americans are being scammed into operating dozens of laptops meant to be used by legitimate remote workers living in the U.S. What the employers and the farmers don’t realize is that the workers are North Koreans living abroad and using stolen U.S. identities. Once they get a job, they coordinate with an American who provides “American cover”: accepting delivery of the computer, setting up the online connections, and helping facilitate paychecks. Meanwhile, the North Koreans log into the laptops from overseas every day through remote-access software. CrowdStrike recently identified about 150 cases of North Korean workers on customer networks and has found laptop farms in at least eight states.

    While the primary goal for these workers may be cashing paychecks from American companies, many are also interested in stealing data for espionage or ransom. With the speed of AI advancement, this risk is only going to increase for remote-first companies.

    Get your Security, HR, and Legal teams together to start discussing how to mitigate this risk, and consider investigating recent hires where this could have occurred. One possible mitigation is to require new hires in certain high-risk roles to come onsite during their first week to complete onboarding and receive their company laptop; recruiters should communicate this mandatory onsite onboarding during the hiring process and confirm the candidate’s availability. The I-9 verification should also be done during this onboarding. I would also recommend heightened monitoring of new hires’ devices to catch red flags indicating suspicious or malicious behavior.

    It’s easy to overlook this risk and assume it would be obvious that you had hired someone in North Korea, but these scams are getting sophisticated, and AI is only going to make them harder to detect. Link to article: https://lnkd.in/e3iAmshM

  • Jessie (Bolton) Van Wagoner

    CEO, Bolt Resources | Business Resilience & Workforce Innovation Leader | ISC2 Partner | NIST-NICE Community Council Member | NTXISSA Board | Speaker | Podcast Host | Power Connector | Lover of Humans 💞

    12,414 followers

    The FBI just exposed a nationwide operation involving 29 U.S.-based “laptop farms”: physical setups used by North Korean operatives to pose as remote IT workers and gain employment at over 100 American companies.

    These weren’t cyberattacks. They were intentional infiltrations of the U.S. workforce. The operatives used stolen identities, manipulated hiring systems, and exploited remote-work loopholes to appear as legitimate contractors. Millions of dollars were funneled directly to the DPRK regime. Export-controlled U.S. military technology was accessed and, in some cases, stolen.

    The most alarming part? They didn’t hack in. They were hired in. They passed interviews. They used fake identities. They bypassed background checks. They embedded themselves into remote teams. This should be a wake-up call for every hiring manager, HR leader, CIO, and CISO across the country.

    What this FBI operation revealed about today’s hiring systems:
    ❌ Remote IT hiring risks are growing and largely underestimated
    ❌ Identity verification often stops after onboarding
    ❌ Speed-to-hire still outweighs long-term trust and risk mitigation
    ❌ Insider threats in remote work are harder to detect without oversight
    ❌ HR and security still operate in silos, and attackers exploit the gap

    This is no longer just a cybersecurity workforce issue; it’s a talent acquisition and identity risk issue across industries. If your organization is hiring remote workers without continuous identity verification, your workforce may already be compromised. Trust used to be built in person. In today’s remote-first world, it has to be engineered into your hiring process, or you’re leaving the door wide open.

    What companies can do now:
    ✔ Reevaluate hiring platforms for identity and access control gaps
    ✔ Integrate your CISO or security team into hiring decisions
    ✔ Train recruiters to recognize red flags highlighted by the FBI and DOJ
    ✔ Stop relying solely on automation to vet identity and intent
    ✔ Build a cybersecurity hiring strategy that includes continuous workforce vetting

    Trust is now part of your attack surface. Your hiring practices are either protecting your organization or exposing it. If you’re unsure where to begin, this is exactly the kind of challenge I help solve. Let’s talk. #cybersecurity #talentstrategy #remoteworkforce #cyberrisk #BoltResources

  • Yang Mou

    CEO @ Fonzi AI

    4,377 followers

    We've seen a lot of fake candidates for remote engineering roles, with either completely fabricated identities or fabricated experience and education. Luckily, we've been able to reliably detect most of them with AI before they waste a recruiter's time on the phone. Fonzi AI looks for a number of different anomalies and combines the signals into an overall fraud determination that a human verifies. Here are some of the signals we currently look at:
    ‣ Missing LinkedIn photo – Fake identities are often missing a photo on LinkedIn, although we've also seen a lot of AI-generated images more recently.
    ‣ Few LinkedIn connections – Fake identities are often recently created LinkedIn accounts with few connections.
    ‣ Mismatch between resume and LinkedIn profile – The timelines on a resume and a LinkedIn profile are sometimes very different.
    ‣ Incorrect technologies for a company – Fraudulent resumes get technical details wrong, like using PHP at Google or Angular at Facebook.
    ‣ Redundant technologies – Fraudulent resumes often keyword-stuff with unlikely tech stacks, like using React, Angular, and Vue all at the same startup.
    ‣ Working at a company before it was founded – A red flag where someone claims to have worked at a company before it existed.
    ‣ Using a technology before it was invented – Similarly, a red flag where someone claims to have been using a technology before it was released.
    ‣ Suspicious email address – A weaker, but non-zero, signal. Email addresses with strings of four random numbers or the word "dev" added to the name seem to be popular patterns for frauds.
    ‣ Suspicious location – Similarly, they tend to autogenerate U.S. locations, so a small town nowhere near the candidate's college or previous jobs is suspicious.
    No single signal flags a resume as potentially fraudulent, but most fraudulent resumes contain multiple anomalies.

    In general, we're pretty sensitive to false positives and have done a lot of manual verification and tuning of the signals. It's also a constant game of cat and mouse: as fraudulent resumes evade our detection, we update the signals to catch the new patterns.
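The combine-weak-signals approach described above can be sketched as a simple weighted anomaly score. This is a minimal illustration, not Fonzi AI's actual model: the signal names, weights, and threshold here are all hypothetical.

```python
# Hypothetical weights per detected anomaly -- a hard impossibility (working at
# a company before it was founded) counts for more than a weak hint (email).
SIGNAL_WEIGHTS = {
    "missing_linkedin_photo": 1,
    "few_linkedin_connections": 1,
    "resume_linkedin_timeline_mismatch": 2,
    "wrong_tech_for_company": 3,
    "redundant_tech_stack": 2,
    "employed_before_company_founded": 4,
    "tech_used_before_release": 4,
    "suspicious_email": 1,
    "suspicious_location": 1,
}

# Hypothetical threshold: one weak signal alone never triggers review,
# but several anomalies in combination do.
REVIEW_THRESHOLD = 4

def fraud_score(signals):
    """Sum the weights of the anomalies detected for one candidate."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def needs_human_review(signals):
    """True when the combined anomaly score warrants manual verification."""
    return fraud_score(signals) >= REVIEW_THRESHOLD
```

The threshold encodes the point made above: a single weak data point (a missing photo, an odd email) is not enough, but a cluster of anomalies, or one outright impossibility, routes the resume to a human for verification.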

  • Balaji Kummari

    Co-founder, CEO @ scale.jobs | Techstars’24

    4,175 followers

    Ever wonder what really happened to those 250 job applications that ghosted you? You’re not alone.

    I spent the last month tracking companies that posted "We're hiring!" on LinkedIn. What I discovered made me angry: four in 10 companies posting job openings have ZERO intention of hiring. Not eventually. Not maybe. Never.

    Last week, I spoke with a client (a product manager) who'd been interviewing for 4 months. She'd made it to the final rounds six times. Each time, radio silence. She started digging. Three of those companies never removed the posting. One reposted it monthly for a year. She wasn't being rejected. She was being played.

    On Greenhouse alone, 18-22% of job listings are fake. And surprisingly, 85% of companies with fake jobs still interviewed candidates. Think about that. Real people taking time off work. Buying new interview clothes. Practicing with friends. Dreaming about finally leaving their toxic job. All for a role that was never real.

    Why this deception? Companies want their overworked teams to think relief is coming. They want competitors to think they're growing. They want to collect resumes "just in case." Some even want current employees to feel replaceable: a fear tactic disguised as opportunity. Seven in 10 hiring managers believe posting fake jobs is morally acceptable.

    So how do you protect yourself? Red flags I've identified:
    - Requirements so vague they could fit anyone (or no one)
    - Posted 60+ days ago but still "actively recruiting"
    - No salary range in pay-transparent states
    - Identical role reposted every few weeks
    - Zero response after multiple follow-ups

    How you can stay ahead of the curve:
    - Message employees at the company and ask if they know anyone in that role
    - Look for the job on the company's actual careers page
    - Track reposting patterns using LinkedIn's date filters
    - Check if the hiring manager's title even exists

    This is personal for me. Scale.jobs exists because I've been there: sending applications into the void, wondering if anyone even read them. Now we verify every single job before our assistants apply. Your hope isn't something to be toyed with. The job market is hard enough without phantom positions. What's the most obvious ghost job you've encountered? Drop it below. Let's help each other.
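The red flags listed above can be expressed as a simple checklist over posting metadata. This is an illustrative sketch only: the field names (`first_posted`, `repost_dates`, and so on) are hypothetical and do not correspond to any real job board's API.

```python
from datetime import date

def ghost_job_flags(posting, today):
    """Return the ghost-job red flags a posting matches.

    `posting` is a dict of hypothetical metadata fields; this is an
    illustrative checklist, not a real job-board integration.
    """
    flags = []
    if (today - posting["first_posted"]).days >= 60:
        flags.append("posted 60+ days ago but still 'actively recruiting'")
    if posting.get("pay_transparency_state") and not posting.get("salary_range"):
        flags.append("no salary range in a pay-transparent state")
    if len(posting.get("repost_dates", [])) >= 3:
        flags.append("identical role reposted every few weeks")
    if not posting.get("on_company_careers_page", True):
        flags.append("job missing from the company's own careers page")
    return flags
```

As with the candidate-fraud signals earlier in this page, no single flag proves a listing is fake, but a posting that trips several of these checks at once deserves skepticism before you invest interview time.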

  • Ravi Sandepudi

    CEO & Co-Founder of Effectiv (Exited to Socure) | Head of Platform @ Socure | Former Trust & Safety Lead @ Google | Employee 1 @ Simility (Exited to Paypal) | Building Highly Scalable and Configurable Fraud Engines

    5,402 followers

    We just discovered something unsettling in our hiring pipeline that every tech company needs to know about.

    We had a candidate with a stellar resume and Ivy League credentials who crushed the technical interviews. Everything looked perfect until background verification raised some flags. We started digging deeper and reached out to friends at other tech companies. It turns out everyone's seeing the same pattern: a sudden influx of premium candidates with mixed-and-matched elite university backgrounds, all targeting deep tech roles.

    The real kicker came when we connected with Applicant Tracking platforms. They see the whole network, thousands of companies, and they confirmed what we suspected: coordinated application spikes, all hitting tech companies hard.

    We traced our suspicious candidates back through their digital footprints and found connections to North Korea. Not just one or two isolated cases: an entire ring operating with sophisticated coordination. Some applicants weren't North Korean but showed identical application behaviors. Same patterns, same targeting, same red flags during verification.

    This changes everything about how we think about hiring security. We're not dealing with resume padding anymore. We're facing organized, potentially state-sponsored infiltration attempts targeting our most sensitive technical positions. Every CISO and VP of HR in tech needs to update their threat models. The hiring process has become an attack vector.
