
Applications of machine learning in decision analysis for dose management for dofetilide


Authors: Andrew E. Levy aff001;  Minakshi Biswas aff001;  Rachel Weber aff002;  Khaldoun Tarakji aff003;  Mina Chung aff003;  Peter A. Noseworthy aff004;  Christopher Newton-Cheh aff005;  Michael A. Rosenberg aff001
Author affiliations: Division of Cardiology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States of America aff001;  Division of Biostatistics and Informatics, Colorado School of Public Health, Aurora, CO, United States of America aff002;  Center for Atrial Fibrillation, Section of Cardiac Pacing and Electrophysiology, Cleveland Clinic Foundation, Cleveland, OH, United States of America aff003;  Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States of America aff004;  Cardiovascular Research Center, Department of Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States of America aff005;  Colorado Center for Personalized Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States of America aff006
Published in: PLoS ONE 14(12)
Category: Research Article
DOI: https://doi.org/10.1371/journal.pone.0227324

Abstract

Background

Initiation of the antiarrhythmic medication dofetilide requires an FDA-mandated 3-day period of telemetry monitoring, because the risk of toxicity is highest during this window. Although a recommended dose management algorithm for dofetilide exists, real-world approaches to dosing the medication vary widely.

Methods and results

In this multicenter investigation, clinical data from the Antiarrhythmic Drug Genetic (AADGEN) study were examined for 354 patients undergoing dofetilide initiation. Univariate logistic regression identified a starting dofetilide dose of 500 mcg (OR 5.0, 95%CI 2.5–10.0, p<0.001) and sinus rhythm at the start of dofetilide loading (OR 2.8, 95%CI 1.8–4.2, p<0.001) as strong positive predictors of successful loading. Any dose adjustment during loading (OR 0.19, 95%CI 0.12–0.31, p<0.001) and a history of coronary artery disease (OR 0.33, 95%CI 0.19–0.59, p<0.001) were strong negative predictors of successful loading. Because any dose adjustment was a significant negative predictor of successful initiation, we applied multiple supervised approaches to predict the dose-adjustment decision, but none of them predicted dose adjustments better than a probabilistic guess. Principal component analysis and cluster analysis identified 8 clusters as a reasonable data reduction. These 8 clusters were then used to define patient states in a tabular reinforcement learning model trained on 80% of dosing decisions. Testing of this model on the remaining 20% of dosing decisions revealed good agreement with clinicians' decisions, with only 16/410 (3.9%) instances of disagreement.
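The pipeline described above — dimensionality reduction, clustering into 8 discrete patient states, and a tabular reinforcement learning model over dosing decisions — can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's code: the feature set, the three-action dosing space, and the reward scheme are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-decision clinical features (e.g., QTc, dose,
# rhythm); the real feature set is not specified in the abstract.
X = rng.normal(size=(400, 12))

# --- Data reduction: PCA via SVD, keeping the top 4 components ---
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_low = Xc @ Vt[:4].T

# --- k-means with k = 8 to define discrete patient states ---
k = 8
centers = X_low[rng.choice(len(X_low), size=k, replace=False)]
for _ in range(25):  # Lloyd iterations
    dists = np.linalg.norm(X_low[:, None, :] - centers[None, :, :], axis=2)
    states = dists.argmin(axis=1)
    for j in range(k):
        members = X_low[states == j]
        if len(members):
            centers[j] = members.mean(axis=0)

# --- Tabular Q-learning over (state, action) pairs ---
# Actions (assumed): 0 = continue current dose, 1 = reduce, 2 = discontinue.
n_actions = 3
Q = np.zeros((k, n_actions))
alpha, gamma = 0.1, 0.9

def q_update(s, a, r, s_next):
    """One temporal-difference update of the tabular Q function."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Replay synthetic transitions; the reward scheme is illustrative
# (+1 if the dosing decision was tolerated, -1 otherwise).
for t in range(len(states) - 1):
    a = int(rng.integers(n_actions))
    r = 1.0 if rng.random() < 0.8 else -1.0
    q_update(states[t], a, r, states[t + 1])

greedy_policy = Q.argmax(axis=1)  # recommended action per patient state
```

The key design point mirrored here is that the unsupervised step turns continuous clinical measurements into a small discrete state space, which is what makes a simple tabular (rather than function-approximation) Q-learning model feasible.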

Conclusions

Dose adjustments are a strong determinant of whether patients are able to successfully initiate dofetilide. A reinforcement learning algorithm informed by unsupervised learning was able to predict dosing decisions with 96.1% accuracy. Future studies will apply this algorithm prospectively as a data-driven decision aid.
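The reported accuracy follows directly from the disagreement count on the held-out 20% of dosing decisions:

```python
# 16 disagreements out of 410 held-out dosing decisions (from the Results).
disagreements, total = 16, 410
accuracy = 100 * (1 - disagreements / total)
print(round(accuracy, 1))  # 96.1
```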

Keywords:

Cardiology – Electrocardiography – Coronary heart disease – Machine learning algorithms – Machine learning – Decision making – Data processing – Dose prediction methods



Article published in PLOS ONE, 2019, Issue 12