Large language model for generating synthetic electronic health records

Authors

  • Gabriel Constantin da Silva, UFCSPA
  • Silvio César Cazella, UFCSPA

DOI:

https://doi.org/10.59681/2175-4411.v16.iEspecial.2024.1275

Keywords:

Open Science, Large Language Model, Synthetic electronic health records

Abstract

Introduction: The use of health data in research is limited by ethical constraints, which challenges researchers to find ways of obtaining the material they need to develop their work. Method: A Large Language Model (LLM) tool was used to generate synthetic electronic health records (EHRs) of cardiology patients using the "few-shot prompting" and "chain-of-thought prompting" techniques. Objective: to create a comprehensive, accessible dataset to support the training of text-classification algorithms in medical scenarios. Results: 103 synthetic EHRs were generated, covering distinct cardiac diagnoses. Conclusion: The synthetic EHRs generated with the LLM showed the expected quality and are consistent with the content found in real EHRs. The dataset is available in the Zenodo repository for unrestricted use by the research community, in line with the concept of open science.
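
The abstract does not name the LLM tool, model or exact prompts used. As a rough illustration of the two techniques it mentions, the Python sketch below combines a few-shot prompt (hand-written example records the model can imitate) with a chain-of-thought instruction asking the model to reason about a plausible patient before writing the record. The OpenAI client, model name, prompt wording and example snippets are assumptions for illustration only, not the authors' setup.

# Minimal sketch of few-shot + chain-of-thought prompting for synthetic EHRs.
# The abstract does not specify the model or prompts; everything below
# (client, model name, example records, prompt text) is illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Few-shot examples: short, hand-written synthetic records the model can imitate.
FEW_SHOT_EXAMPLES = """\
Example 1 (diagnosis: acute myocardial infarction):
Male patient, 62 years old, retrosternal chest pain radiating to the left arm ...

Example 2 (diagnosis: atrial fibrillation):
Female patient, 74 years old, palpitations and fatigue for three days ...
"""

def generate_synthetic_ehr(diagnosis: str) -> str:
    """Generate one synthetic cardiology record for the given diagnosis."""
    prompt = (
        "You write entirely fictitious electronic health records for research use.\n\n"
        + FEW_SHOT_EXAMPLES
        + f"\nNow produce a new synthetic record for the diagnosis: {diagnosis}.\n"
        # Chain-of-thought style instruction: reason step by step before answering.
        "First reason step by step about a plausible patient profile, symptoms, "
        "exams and treatment, then write the final record."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper's model is not stated here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_synthetic_ehr("heart failure with reduced ejection fraction"))

Repeating a call of this kind over a list of target diagnoses is one straightforward way to build a record set like the one described in the Results (103 synthetic records across distinct cardiac diagnoses).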

Author Biographies

Gabriel Constantin da Silva, UFCSPA

Master's student, PPGTIG Saúde, UFCSPA, Porto Alegre (RS), Brazil.

Silvio César Cazella, UFCSPA

Professor (PhD), PPGTIG Saúde, UFCSPA, Porto Alegre (RS), Brazil.

Published

2024-11-19

How to Cite

da Silva, G. C., & Cazella, S. C. (2024). Large language model para geração de prontuários eletrônicos sintéticos. Journal of Health Informatics, 16(Especial). https://doi.org/10.59681/2175-4411.v16.iEspecial.2024.1275
