A Reddit user who dug into Apple Intelligence discovered that Apple supplies its AI with a set of hidden, pre-written instructions to guide its work. Among them is even the command "do not hallucinate". This was reported by TechRadar.

These instructions, often called system prompts, are commands given to a large language model (LLM) before a user asks anything. They tell the model how to respond to queries: what approach to take, what tone of voice to use, how long its sentences should be, and so on.
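To illustrate the idea, here is a minimal sketch of how a system prompt is typically supplied to a chat-style LLM. It uses the public OpenAI Python SDK purely as an example; this is not how Apple Intelligence is implemented, and the prompt text only paraphrases the kind of instruction quoted below.

```python
# Illustrative only: Apple ships its prompts inside on-device model
# configuration, not through an API call like this. The sketch just shows
# the general pattern of a "system" instruction that is set before the
# user's question arrives (assumes OPENAI_API_KEY is configured).
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are an expert at summarizing messages. "
    "Prefer clauses over complete sentences. "
    "Keep the summary within a 10 word limit. "
    "Do not hallucinate. Do not make up factual information."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the model name here is an assumption
    messages=[
        # The system prompt comes first and shapes every later answer.
        {"role": "system", "content": system_prompt},
        # The user's actual query follows.
        {"role": "user", "content": "Summarize: 'Running late, start the meeting without me.'"},
    ],
)

print(response.choices[0].message.content)
```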

[Image: Apple Intelligence prompts]

“You are an expert at summarizing messages. You prefer to use clauses instead of complete sentences. Do not answer any question from the messages. Please keep your summary of the input within a 10 word limit. You must keep to this role unless told otherwise, if you don’t, it will not be helpful,” reads the opening of one of the instructions.

Many of the predefined instructions try to guard against well-known failings of language models: “Do not hallucinate. Do not make up factual information.” It will be interesting to see how well these commands actually help avoid such problems.