Implementation of GPT models for Text Generation in Healthcare Domain


  • Anirban Karak PES University
  • Kaustuv Kunal Great Learning
  • Narayana Darapaneni Northwestern University
  • Anwesh Reddy Paduri Great Learning



healthcare, text generation, GPT-2, PubMed dataset, medicine, NLP


INTRODUCTION: This paper highlights the potential of using generalized language models to extract structured texts from natural language descriptions of workflows in various industries, such as the healthcare domain.

OBJECTIVES: Despite the criticality of these workflows to the business, they are often not fully automated or formally specified. Instead, employees may rely on natural language documents that describe the procedures. Text generation methods offer a way to extract structured plans from these natural language documents, which can then be used by an automated system.

METHODS: This paper explores the effectiveness of using generalized language models, such as GPT-2, to perform text generation directly from these texts.

RESULTS: These models have already shown success in multiple text generation tasks, and the paper's initial results suggest that they could also be effective for text generation in the healthcare domain. In fact, the paper demonstrates that GPT-2 can generate results comparable to many current text generation methods.

CONCLUSION: This suggests that generalized language models can increase the efficiency and accuracy of text generation in domains where workflows are repetitive and sequential.




Virapat Kieuvongngam, Bowen Tan, and Yiming Niu. Automatic text summarization of COVID-19 medical research articles using BERT and GPT-2. arXiv preprint arXiv:2006.01997, 2020.

Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, Oxford Academic, 2022.

Yixuan Su and Nigel Collier. Contrastive search is what you need for neural text generation. arXiv preprint arXiv:2210.14140, 2022.

Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. Neural data-to-text generation with LM-based text augmentation. arXiv preprint arXiv:2102.03556, 2021.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR abs/1706.03762, 2017.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI, 2019.

Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019.

Zhen Huang, Shiyi Xu, Minghao Hu, Xinyi Wang, Jinyan Qiu, Yongquan Fu, Yuncai Zhao, Yuxing Peng, and Changjian Wang. Recent trends in deep learning based open-domain textual question answering systems. 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface’s transformers: State-of-the-art natural language processing, 2019.

Derek Miller. Leveraging BERT for extractive text summarization on lectures, 2019.

Dima Suleiman and Arafat Awajan. Deep learning based abstractive text summarization: approaches, datasets, evaluation measures, and challenges. Mathematical Problems in Engineering, 2020.




How to Cite

A. Karak, K. Kunal, N. Darapaneni, and A. R. Paduri, “Implementation of GPT models for Text Generation in Healthcare Domain”, EAI Endorsed Trans AI Robotics, vol. 3, Apr. 2024.