- Where Open Education Meets Generative AI: OELMs – improving learning
- I am absolutely ready to predict that the large publishers will begin creating bundles of proprietary supplemental materials designed specifically for use with proprietary language models.
- We should take the initiative now to ensure that instructors who want to use LLMs as course materials have access to high-quality, openly licensed options from the start.
- Those options should include both the models themselves and the additional resources necessary to use them easily and effectively.
- ensure that generative AI tools can move us forward on affordability, access, and equity instead of backward
- Open Educational Language Models (OELMs) bring together a collection of openly licensed components that allow an openly licensed language model to be used easily and effectively in support of teaching and learning
- But because the model weights are open, we have the opportunity to revise and remix them
- Because the model weights are open, we can change the way learners and teachers interact with them in order to increase access, affordability, and equity. Because the model weights are open, we have significantly greater agency
- An OELM includes a comprehensive collection of pre-written prompts
- For teachers, these activities might include lesson planning, designing an active learning exercise for use in class, differentiating instruction, revising or remixing OER, and drafting feedback on student work.
- What about students? Students should have access to the prompts as well...
- When a teacher or learner submits a prompt, relevant information is retrieved from the collection of OER and added to the prompt before it is sent to the model (a sketch of this retrieval flow follows the list).
- The model then uses the information retrieved from the OER, augmenting its general knowledge about the topic, as the basis for its response to the user.
- A specially designed collection of open content that can be used to steer the model’s behavior
- This can be embedded in the system prompt (a prompt the user doesn’t see but which steers model behavior in the background) or used for fine-tuning (see the prompt-steering sketch after the list).
- In the OELM context, fine-tuning is the process by which a model can be made to behave more pedagogically
- Each of these four components – the model weights, content for fine-tuning, content for RAG, and pre-written prompts – can be openly licensed, providing teachers, learners, and others with permission to engage in the 5R activities.
- Retain, Reuse, Revise, Remix, and Redistribute
- Think of the model weights as the core textbook and the other components as the supplemental materials necessary for widespread adoption.
- And just like with traditional OER, the ability to copy, edit, and share prompts and other OELM components means they can be localized to best meet the needs of individual learners.
- The foundational R in the 5Rs framework is Retain
- Then you can take that copy you downloaded and revise, remix, reuse, and redistribute it to meet your needs and the needs of others around you.
- Small models are a key to this strategy over the medium to long term.
- Advances in running models locally are important because people without reliable access to the internet are currently unable to take advantage of generative AI in support of teaching and learning (a sketch of retaining and running a small open model locally follows).
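
To make the retrieval-augmented flow described above concrete, here is a minimal sketch of the idea: before a prompt reaches the model, relevant passages are pulled from an openly licensed OER collection and prepended to it. The passages, the toy keyword-overlap retriever, and the function names are illustrative assumptions standing in for a real vector search and a real OER library.

```python
# Sketch of the RAG step described above: retrieve relevant OER passages and
# add them to the user's prompt before it is sent to the model.
# Toy keyword-overlap scoring stands in for a real embedding/vector search.

OER_PASSAGES = [  # illustrative openly licensed snippets
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Cellular respiration releases energy by breaking down glucose in the mitochondria.",
    "The water cycle moves water through evaporation, condensation, and precipitation.",
]

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by how many words they share with the query (toy retriever)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def augment(prompt: str) -> str:
    """Prepend retrieved OER context to the prompt before it reaches the model."""
    context = "\n".join(retrieve(prompt, OER_PASSAGES))
    return (
        "Use the following openly licensed material as the basis for your answer.\n\n"
        f"{context}\n\nQuestion: {prompt}"
    )

if __name__ == "__main__":
    # The augmented prompt would then be sent to an openly licensed model.
    print(augment("How does photosynthesis store energy?"))
```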
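
The pre-written prompt collection and the system-prompt steering mentioned above could be wired together roughly as follows. The pedagogy text, the template keys, and the message format are hypothetical illustrations, not pieces of any actual OELM.

```python
# Sketch of two other OELM components: a library of pre-written prompts for
# teachers and a system prompt built from open content that steers behavior.
# All names and templates here are illustrative assumptions.

PEDAGOGY_GUIDE = (  # open content embedded in the hidden system prompt
    "You are a teaching assistant. Ask guiding questions before giving answers "
    "and ground every explanation in the provided openly licensed materials."
)

PROMPT_LIBRARY = {  # openly licensed, so it can be retained, revised, remixed, and shared
    "lesson_plan": "Draft a 50-minute lesson plan on {topic} for {level} students.",
    "active_learning": "Design an in-class active learning exercise about {topic}.",
    "feedback": "Draft formative feedback on this student work: {work}",
}

def build_messages(template_key: str, **fields) -> list[dict]:
    """Combine the steering system prompt with a filled-in pre-written prompt."""
    return [
        {"role": "system", "content": PEDAGOGY_GUIDE},
        {"role": "user", "content": PROMPT_LIBRARY[template_key].format(**fields)},
    ]

# Example: a teacher drafting a lesson plan; `messages` would then be passed
# to an openly licensed chat model.
messages = build_messages("lesson_plan", topic="photosynthesis", level="9th-grade")
print(messages)
```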
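
Finally, a sketch of what "Retain" plus local use of a small open model could look like in practice: download your own copy of openly licensed weights, then run them on your own machine. The specific checkpoint named here is only one example of a small, openly licensed model, and the code assumes the huggingface_hub and transformers libraries are installed.

```python
# Sketch of Retain + local use: keep your own copy of openly licensed model
# weights and run them locally. The model id is only an example of a small
# openly licensed checkpoint; any similar open-weights model would work.

from huggingface_hub import snapshot_download
from transformers import pipeline

# Retain: download and keep a local copy of the weights.
local_dir = snapshot_download("HuggingFaceTB/SmolLM2-360M-Instruct")

# Reuse: run the retained copy on your own machine; after the initial download
# no network is required, which matters for learners without reliable internet.
generator = pipeline("text-generation", model=local_dir)
result = generator("Explain photosynthesis to a 9th grader:", max_new_tokens=80)
print(result[0]["generated_text"])
```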