From the course: LLM Evaluations and Grounding Techniques

Training LLMs on time-sensitive data

- So far, we've learned the basics of hallucinations. In this chapter, we'll dive into what can cause LLMs to give incorrect responses. To start us off, let's talk about time-sensitive data. LLMs are trained on data from a certain point in time, and the vendors usually share this in the model description. For example, looking at the GPT-4o model description, we can see that it was trained on data up to October 2023, signified by the knowledge cutoff. If we head over to the Anthropic website, we can see that Claude 3.5 Sonnet was trained on data up to April 2024. So if we ask either of these models about something newer, like the Euro 2024 football tournament, the LLM will either make up an answer or say that it can't respond. Now let's go ahead and try this out on the ChatGPT website. Let's type out, "Who won the 2024 Euro Cup championship?" Let's add, "Don't use internet knowledge," and hit Enter. So there we go. We got a refusal to respond about this tournament. Now let's go…
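The cutoff problem described above can be sketched as a simple pre-flight check before trusting a model's answer. This is a hypothetical helper, not part of any vendor SDK; the model names and cutoff dates are the ones cited in the video, with end-of-month dates assumed since only the month is published:

```python
from datetime import date

# Knowledge cutoffs cited in the video (month-level; exact days assumed).
KNOWLEDGE_CUTOFFS = {
    "gpt-4o": date(2023, 10, 31),            # trained on data up to October 2023
    "claude-3-5-sonnet": date(2024, 4, 30),  # trained on data up to April 2024
}

def is_after_cutoff(model: str, event_date: date) -> bool:
    """Return True if the event happened after the model's knowledge cutoff,
    i.e. the model cannot know about it from its training data alone."""
    return event_date > KNOWLEDGE_CUTOFFS[model]

# The Euro 2024 final was played on 14 July 2024 -- after both cutoffs,
# so both models must either refuse or hallucinate an answer.
euro_2024_final = date(2024, 7, 14)
print(is_after_cutoff("gpt-4o", euro_2024_final))             # True
print(is_after_cutoff("claude-3-5-sonnet", euro_2024_final))  # True
```

A check like this is one reason production systems fall back to retrieval or web search for time-sensitive questions rather than relying on the model's parametric knowledge.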
