Informed consent in surgical settings requires not only the accurate communication of medical information but also the establishment of trust through empathic engagement. Large language models (LLMs) offer a novel opportunity to enhance the informed consent process by combining advanced information retrieval capabilities with simulated emotional responsiveness. However, simulated empathy raises ethical concerns about patient autonomy, trust and transparency. This paper examines the challenges of surgical informed consent, the potential benefits and limitations of digital tools such as LLMs, and the ethical implications of simulated empathy. We distinguish between active empathy, which risks creating a misleading illusion of emotional connection, and passive empathy, which focuses on recognising and signalling patient distress cues, such as fear or uncertainty, rather than attempting to simulate genuine emotional engagement. We argue that LLMs should be limited to the latter: recognising and signalling distress cues and alerting healthcare providers to patient anxiety. This approach preserves the authenticity of human empathy while leveraging the analytical strengths of LLMs to assist surgeons in addressing patient concerns. We show how LLMs can ethically enhance the informed consent process without undermining the relational integrity essential to patient-centred care. By maintaining transparency and respecting the irreplaceable role of human empathy, LLMs can serve as valuable tools to support, rather than replace, the relational trust on which informed consent depends.