HiggsBroson

joined 1 year ago
[email protected] 2 points 11 months ago (3 children)

You can finetune LLMs on smaller datasets, or with RLHF (reinforcement learning from human feedback), where people rate the model's responses and the model is "rewarded" or "penalized" based on those ratings for a given output. This retrains the LLM to produce outputs that people prefer.
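The idea can be sketched with a toy example. This is not a real RLHF pipeline (those train a reward model and update the LLM's weights with an RL algorithm like PPO); it's just a minimal, hypothetical illustration of how positive/negative human ratings can shift a "policy" toward preferred outputs:

```python
import math

# Toy "policy": a distribution over two canned responses.
# All names here are made up for illustration.
responses = ["helpful answer", "rude answer"]
logits = {r: 0.0 for r in responses}

def probs():
    # Softmax over the current logits gives the sampling probabilities.
    exps = {r: math.exp(l) for r, l in logits.items()}
    total = sum(exps.values())
    return {r: e / total for r, e in exps.items()}

def feedback_update(response, rating, lr=0.5):
    # A positive rating ("reward") raises the response's logit;
    # a negative rating ("penalty") lowers it.
    logits[response] += lr * rating

# Humans rate two outputs: +1 for the helpful one, -1 for the rude one.
feedback_update("helpful answer", +1)
feedback_update("rude answer", -1)

p = probs()
# After feedback, the policy assigns more probability to the rewarded response.
```

In a real system the "policy" is the LLM itself, the ratings train a separate reward model, and the update step is an RL optimization over billions of parameters, but the direction of the nudge is the same.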