INLG 2025 Workshop: LLM Reasoning in Medicine: Challenges, Opportunities, and Future
(Full-day, in-person)
Workshop Scope
Recent breakthroughs in Large Language Models (LLMs) have opened new horizons for Natural Language Generation (NLG) in healthcare. In medicine, LLMs offer significant potential, from generating clinical notes and summarising patient records to supporting decision-making, ultimately promising to enhance efficiency, accessibility, and quality of care. However, applying LLMs in medicine is a double-edged sword. The specialised nature of medical treatment and the severe consequences of errors or hallucinations necessitate new approaches to the reasoning, evaluation, and deployment of LLMs.
This workshop will provide a dedicated forum to explore the current state and limitations of LLM reasoning in medical contexts, identify key challenges in generating accurate, explainable, and trustworthy medical outputs, and foster interdisciplinary collaboration to develop robust evaluation metrics and benchmarks for medical NLG. We will also discuss the ethical, legal, and societal implications of deploying LLMs in healthcare and chart future research directions for integrating domain knowledge and reasoning capabilities into LLMs for medicine.
Topics of Interest
Topics of interest include, but are not limited to:
- Novel architectures for LLM reasoning in medical applications
- Evaluation methods and benchmarks for medical NLG
- Datasets for LLM reasoning in medicine
- Reasoning algorithms and prompting techniques for medical LLMs
- Multi-modal reasoning combining text with images, signals, or structured data
- Case studies of LLM applications in clinical decision support, explainable AI in healthcare, and patient communication
- Analysis of LLM reasoning in medical contexts
- Mitigation strategies for hallucination in medical LLMs
- Domain adaptation for medical LLMs
- Safety and ethical considerations in deploying LLMs in healthcare
- Integration of domain knowledge and logical inference into LLMs for medicine
- Explainability and trustworthiness in medical AI
- Regulatory considerations for LLMs in healthcare
Special Track: Neuro-Symbolic Clinical Reasoning
We particularly encourage papers that combine neural and symbolic approaches to improve transparency and reduce data dependence in medical LLM reasoning. Submission instructions are identical to the main track; please select “Neuro-Symbolic” when uploading.
Submission Guidelines
- Long papers: up to 8 pages (excluding references/appendices) describing completed work with evaluation.
- Short papers: up to 4 pages. May report preliminary results, negative findings, position pieces, or demos.
- One extra page will be granted for camera-ready revisions.
- Supplementary material (code, data, appendices) is optional but encouraged and must be anonymised.
- Templates: Submissions should follow the ACL Author Guidelines and policies for submission, review, and citation. Please use the ACL style files; LaTeX style files and Microsoft Word templates are available at https://acl-org.github.io/ACLPUB/formatting.html.
- Review is double-blind; please anonymise manuscripts.
- Submission site: Papers should be submitted directly through the LLMRM 2025 paper submission site.
- Multiple-submission policy: Non-archival submissions may be under review elsewhere; please indicate any parallel submissions.
Important Dates
- First call for papers: 30 July 2025
- Paper submission deadline: 26 August 2025
- Notification of acceptance: 24 September 2025
- Camera-ready papers due: 3 October 2025
- All deadlines are 23:59 AoE.
- Workshop: 30 October 2025 @ INLG 2025
Programme Structure (Full-Day)
- Keynote talks by leaders in NLG, clinical AI, and ethics
- Peer-reviewed oral and poster presentations
- Panel: “Promises & Pitfalls of LLMs in Medicine”
- Breakout discussions on hallucination mitigation, evaluation, and deployment
- Interactive poster & demo session
Organising Committee
- Dr. Changmeng Zheng, The Hong Kong Polytechnic University
- Mr. Jiatong Li, The Hong Kong Polytechnic University
- Mr. Qi Peng, The Hong Kong Polytechnic University
- Prof. Xiaoyong Wei, The Hong Kong Polytechnic University
- Prof. Qing Li, The Hong Kong Polytechnic University
Contact: changmeng.zheng@polyu.edu.hk
We look forward to your contributions and to a productive workshop that will advance the understanding of LLM reasoning in medicine, promote the development of shared resources, and build a community of practice for ongoing collaboration.