Generative AI in the Real World: Measuring Skills with Kian Katanforoosh

How do we measure skills in an age of AI? That question affects everything from hiring to productive teamwork. Join Kian Katanforoosh, founder and CEO of Workera, and Ben Lorica for a discussion of how we can use AI to assess skills more effectively. How do we get beyond pass/fail exams to true measures of a person’s ability?

Check out other episodes of this podcast or the full-length version of this episode on the O’Reilly learning platform.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Timestamps

  • 0:00: Introduction
  • 0:28: Can you give a sense of how big the market for skills verification is?
  • 0:42: It’s extremely large. Anything that touches skills data is on the rise. When you extrapolate from university admissions across the rest of someone’s career, you realize that there are many times when they need to validate their skills.
  • 1:59: Roughly what’s the breakdown between B2B and B2C?
  • 2:04: Workera is exclusively B2B and federal. However, there are also assessments focused on B2C. Workera has free assessments for consumers.
  • 3:00: Five years ago, there were tech companies working on skill assessment. What were prior solutions before the rise of generative AI?
  • 3:27: Historically, assessments have been used for summative purposes. Pass/fail, high stakes, the goal is to admit or reject you. We provided the use of assessments for people to know where they stand, compare themselves to the market, and decide what to study next. That takes different technology.
  • 4:50: Generative AI became much more prominent with the rise of ChatGPT. What changed?
  • 5:09: Skills change faster than ever, so you need to update them much more frequently. The half-life of skills used to be over 10 years; today, it’s estimated to be around 2.5 years in digital fields. Writing a quiz is easy. Writing a good assessment is extremely hard. Validity means that what you intend to measure is what you’re actually measuring. AI can help.
  • 6:39: AI can help with modeling the competencies you want to measure.
  • 6:57: AI can help streamline the creation of an assessment.
  • 7:22: AI can help test the assessment with synthetic users.
  • 7:42: AI can help with monitoring after the assessment. There are a lot of things that can go wrong.
  • 8:25: Five years ago in programming, people used tests to filter people out. That has changed; people will use coding assistants on the job. Why shouldn’t I be able to use a coding assistant when I’m doing an assessment?
  • 9:16: You should be able to use it. The assessment has to change. The previous generation of assessments focused on syntax. Do you care if you forgot a semicolon? Assessments should focus on other cognitive levels, such as analyzing and synthesizing information.

On May 8, O’Reilly Media will be hosting Coding with AI: The End of Software Development as We Know It—a live virtual tech conference spotlighting how AI is already supercharging developers, boosting productivity, and providing real value to their organizations. If you’re in the trenches building tomorrow’s development practices today and interested in speaking at the event, we’d love to hear from you by March 12. You can find more information and our call for presentations here. Just want to attend? Register for free here.

Post topics: AI & ML, Generative AI in the Real World
Post tags: Commentary