In-context learning (ICL)
In-context learning (ICL) is the practice of providing examples directly in the input prompt to steer a large language model (LLM) for a specific conversation. Nothing learned this way persists in the model's weights or training data, which lets the model generalize to new tasks or adapt to the input in real time.
Typically, LLMs are pretrained on vast datasets, giving them a wealth of knowledge to draw on. ICL lets a model pick up a new format or behavior from just a handful of examples, using few-shot or many-shot prompting techniques.
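To make this concrete, here is a minimal sketch in Python of how a few-shot prompt is assembled. The sentiment-classification task and examples are illustrative, not from any real dataset; the point is that the "training" lives entirely in the prompt string.

```python
# Minimal few-shot ICL sketch: the examples exist only in the prompt,
# so nothing is written back to the model's weights or training data.
# The task and examples below are illustrative assumptions.

few_shot_examples = [
    ("The movie was a delightful surprise.", "positive"),
    ("I want my two hours back.", "negative"),
    ("It was fine, nothing special.", "neutral"),
]

def build_few_shot_prompt(examples, query):
    """Prepend labeled examples so the model infers the task format."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The unanswered final slot is what the model is asked to complete.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(few_shot_examples, "An instant classic.")
print(prompt)  # Send this string to any LLM completion endpoint.
```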
Why is in-context learning (ICL) important in large language models and AI applications?
When context is critical, as in agentic workflows, custom chatbots, and dynamic RAG systems, ICL supports task flexibility without retraining the model. Developers can adjust model outputs on demand, with no fine-tuning required, as in the sketch below.
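As a rough sketch of what this looks like at runtime, the hypothetical helper below swaps in-context examples per request so a single model serves multiple tasks. `EXAMPLE_BANK` and `select_examples` are placeholders for a real example store and retriever, such as an embedding-similarity lookup in a RAG pipeline.

```python
# Hypothetical sketch: change the in-context examples per request so one
# model serves many tasks without retraining. `EXAMPLE_BANK` and
# `select_examples` are stand-ins for a real store and retriever.

EXAMPLE_BANK = {
    "summarize": [("Long article text...", "One-sentence summary...")],
    "translate": [("Bonjour le monde.", "Hello, world.")],
}

def select_examples(task: str):
    """Placeholder retriever: look up examples for the requested task."""
    return EXAMPLE_BANK.get(task, [])

def build_prompt(task: str, user_input: str) -> str:
    """Inject the selected examples into the prompt at request time."""
    parts = [f"Task: {task}"]
    for source, target in select_examples(task):
        parts.append(f"Input: {source}\nOutput: {target}")
    parts.append(f"Input: {user_input}\nOutput:")
    return "\n\n".join(parts)

# The same model handles both tasks; only the in-context examples change.
print(build_prompt("translate", "Merci beaucoup."))
```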
Secure your agentic AI and AI-native application journey with Straiker