Mode Expansion Through Prompting

26 Mar 2023 - Jack Hullis

Mode collapse occurs when a model learns to generate only a limited set of outputs, usually as a consequence of finetuning through RLHF. This is problematic because it greatly reduces model creativity, which in some cases leads to a decrease in overall model performance. Mode expansion is a technique for reversing this: it uses prompting to restore output diversity. If we can figure out how to prompt a model effectively towards mode expansion, we can potentially increase its creativity and diversity, which could lead to better performance.

Mode Collapse

Mode collapse is a serious problem that occurs when finetuning LLMs. We use finetuning to shape our language models so that they generate outputs that better fit our requests. However, this comes with a trade-off against model creativity. When we finetune our models, we are essentially telling them what we want them to generate: we are pointing their responses in a single specified direction.

Whilst finetuning can produce a model that is very good at generating a limited set of outputs, it can also decrease performance across the wider range of outputs the model was previously capable of. For example, a model which has been finetuned to produce more evocative poems might, as a trade-off, lose some of the creativity that it picked up during pretraining.
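One way to make "reduced output diversity" concrete is to measure it. The sketch below is not from this post; it is a minimal, illustrative distinct-n style metric over a batch of sampled completions, where lower values suggest the model is repeating itself.

```python
# Illustrative sketch: a distinct-n ratio over sampled completions.
# Lower values indicate more repeated phrasing, i.e. less diversity.

def distinct_n(samples: list[str], n: int = 2) -> float:
    """Fraction of n-grams across all samples that are unique."""
    ngrams = []
    for text in samples:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)


# Hypothetical usage: compare samples from a base model and a finetuned one.
base_samples = ["the moon sails over silver hills", "a quiet river hums at dusk"]
tuned_samples = ["roses are red violets are blue", "roses are red violets are blue"]

print(distinct_n(base_samples))   # closer to 1.0: more diverse
print(distinct_n(tuned_samples))  # lower: repeated phrasing
```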

Mode Expansion

Mode expansion can be thought of as the opposite of mode collapse. It occurs when a model is encouraged to generate a wider range of outputs. Instead of modifying the model's weights, we can do this through careful prompting that encourages output diversity. For example, we can prompt a model to generate a poem that is both unique and evocative.
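As a rough illustration of prompting for diversity, here is a minimal sketch assuming the pre-1.0 `openai` Python package and an API key in the environment. The model name, prompt wording, and sampling settings are my own illustrative choices, not something the post prescribes.

```python
# Minimal sketch of prompting for mode expansion, assuming the pre-1.0
# `openai` Python package with an API key set in OPENAI_API_KEY.
import openai

prompt = (
    "Write a short poem about the sea. Make it unique and evocative: "
    "avoid the rhymes and imagery you would expect from a typical poem."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",           # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,                  # higher temperature encourages variety
    n=3,                              # sample several completions to compare
)

for choice in response.choices:
    print(choice.message.content)
    print("---")
```

Sampling several completions at once makes it easy to eyeball (or measure, as above) whether the prompt actually widened the range of outputs.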

However, this isn’t always reliable. A finetuned model might have learnt to pay less attention to its instructions. Luckily, we can combat this by finetuning the model to follow instructions more closely, which increases the impact that the prompt has on the model’s output.

Altman has since said that this technique was used when finetuning GPT-4, and the results have been promising. GPT-4 shows a notable increase in prompt obedience, so inputs can be used to steer the model’s outputs more precisely towards the desired result.

Challenges

Mode expansion is a promising technique for increasing model creativity and diversity. However, there are some challenges and limitations to address. One is that mode expansion can decrease the accuracy of the model’s outputs: if knowledge is limited but creativity is encouraged, inaccuracies and hallucinations become more likely. This works against one of the goals of finetuning, which is to increase the accuracy of the model’s outputs. By encouraging prompt obedience, however, we can hope to reduce the impact of this.
