The entire game of prompting LLMs is carefully controlling their context: the inputs (and subsequent outputs) that make it into the current conversation with the model.
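Concretely, with chat-style APIs the model is stateless between calls: the "conversation" is just a list of messages you assemble and resend on every request, so controlling context means controlling that list. Here is a minimal sketch of that loop in plain Python, where `call_model` is a hypothetical stand-in for whatever provider SDK you actually use, and the turn cap is a crude stand-in for a real token budget:

```python
# Context management sketch: the conversation is nothing more than a
# list of messages that you rebuild and resend on every call.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in: send `messages` to an LLM, return its reply."""
    raise NotImplementedError("wire this up to your provider's SDK")

MAX_TURNS = 10  # assumption: a crude cap in place of real token counting

messages = [
    {"role": "system", "content": "You are a concise assistant."}
]

def ask(user_input: str) -> str:
    # Everything appended here is context the model sees on the next call...
    messages.append({"role": "user", "content": user_input})
    reply = call_model(messages)
    # ...including the model's own outputs, which feed back in as inputs.
    messages.append({"role": "assistant", "content": reply})
    # Controlling context means deciding what stays: drop the oldest
    # user/assistant pair once past the cap, keeping the system message.
    while len(messages) > 1 + 2 * MAX_TURNS:
        del messages[1:3]
    return reply
```

In practice the trimming step is where the interesting decisions live: real systems replace the crude turn cap with token counting, summarization of older turns, or retrieval of only the relevant history.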