
Machine Learning Summit in Healthcare

April 26th to 27th, 2022 – 1:00 PM to 5:00 PM

A Uniquely Interactive Experience

Join us for the Machine Learning in Healthcare Annual Gathering

Eight speakers and four workshops will explore applications of Machine Learning from both business and technical perspectives.

Attendees will have opportunities to meet with both academic researchers and industry practitioners active in the Healthcare sector and to gain new perspectives from each other’s work.

The Micro-Summit includes:

  • 8 Speakers
  • 4 Workshops
  • Access 6 hours of live-streamed content (incl. recordings)
  • Talks for beginners/intermediate & advanced
  • Case Studies, Executive Track – Business Alignment & Advanced Technical Research
  • Q+A with Speakers
  • Channels to share your work with the community

 

Join this new initiative to help push the AI community forward.

We’re Hosting

  • Breakout Sessions (All Levels)
  • Discussion Groups
  • Workshops
  • Virtual Platform

Chair

Azra Dhalla

Director, Health AI Implementation, Vector Institute

Andrea Smith

Director, Health Data Partnerships, Vector Institute

Speakers

Laleh Seyyed-Kalantari

Associate Scientist, Lunenfeld Tanenbaum Research Institute, Sinai Health System

Talk: We’re (not) Fine: Lack of Fairness in AI-based Medical Image Diagnostic Tools

Soroosh Shahtalebi

Postdoctoral Research Fellow, Vector Institute

Talk: Explainable Artificial Intelligence for Discrimination of Neurological Disorders

Vanessa Allen

Medical Microbiologist and Clinical Infectious Diseases Specialist, Sinai Health System

Talk: Machine Learning Supported Tick Identification to Combat Lyme Disease

Co-Presenter: Sina Akbarian

Sina Akbarian
Data Scientist, Klick

Talk: Machine Learning Supported Tick Identification to Combat Lyme Disease

Co-Presenter: Vanessa Allen

Moritz Steller

Applied AI Leader, Microsoft

Talk: Automated Patient Risk Adjustment and Medicare HCC Coding from Clinical Notes

Victor Sonck

Evangelist, ClearML

Talk: The Importance of Experiment Tracking and Data Traceability

Sedef Akinli Kocak

Senior Project Manager & Tech Translator, Vector Institute

Talk: Using Entity Extraction to Help Solve the Long COVID Puzzle

Co-Presenter: Elham Dolatabadi

Elham Dolatabadi

Machine Learning Scientist, Vector Institute

Talk: Using Entity Extraction to Help Solve the Long COVID Puzzle

Co-Presenter: Sedef Akinli Kocak

Dr. Amin Madani

Staff Surgeon, University Health Network; Assistant Professor of Surgery, University of Toronto; Director, Surgical Artificial Intelligence Research Academy, UHN

Talk: Artificial Intelligence and Augmentation of Surgical Performance

Alexander Wong

Professor and Canada Research Chair, University of Waterloo

Talk: Towards Trustworthy and Transparent Clinical Decision Support: From Explainability to Best Practices

Thomas Doyle

Associate Professor of Electrical and Computer Engineering, McMaster University

Talk: Human HUMAN.AI: Human Partnership with Medical Artificial Intelligence

Dr. Devin Singh

Pediatric ER Physician, Co-Founder (Hero AI), The Hospital for Sick Children & Hero AI

Talk: Clinical Automation in Emergency Medicine with Machine Learning Medical Directives

Workshop Facilitators

Kelci Miclaus

AI Solutions Director – Life Sciences, Dataiku

Workshop: How Machine Learning and Medical Imaging Transform Point-of-Care Systems

Co-Presenter: Nico Rode

Nico Rode

Sales Engineer, Dataiku

Workshop: How Machine Learning and Medical Imaging Transform Point-of-Care Systems

Co-Presenter: Kelci Miclaus

Donny Cheung

Technical Lead Manager, Google

Workshop: Building a Scalable Healthcare AI Service: A Case Study

Laleh Seyyed-Kalantari

Associate Scientist, Lunenfeld Tanenbaum Research Institute, Sinai Health System

Laleh is an associate scientist at the Lunenfeld Tanenbaum Research Institute, Toronto, Canada. She holds a PhD in electrical engineering from McMaster University and was previously an NSERC postdoctoral fellow at the Vector Institute and the University of Toronto. Her research focuses on developing AI-based medical image diagnostic tools, with novel contributions in theory and application and a particular focus on their fairness. Her ultimate goal is to remove barriers to the trustworthy deployment of AI-based medical image diagnostic tools in clinics, so that they benefit patients, provide fairness, and reduce the workload of clinical staff. She has received a number of highly competitive national, provincial, and institutional awards, including the NSERC Postdoctoral Fellowship (2018), the Research in Motion Ontario Graduate Scholarship (2015), the Ontario Graduate Scholarship and Queen Elizabeth II Graduate Scholarship in Science and Technology (2014), and the Ontario Graduate Scholarship (2013), among others.

Talk: We’re (not) Fine: Lack of Fairness in AI-based Medical Image Diagnostic Tools

Abstract: Artificial intelligence (AI)-based medical image diagnostic tools are trained to yield diagnostic labels from medical images. These tools have reached radiologist-level performance in diagnosis, which makes them a clear case for deployment given the global radiologist shortage. In developing these tools, a common practice is to optimize for and report performance on the general population. However, in doing so, state-of-the-art chest X-ray diagnostic tools trained on large public datasets fail to be fair. We define unfairness as a difference in performance against or in favor of a subpopulation on a predictive task (e.g., higher disease-diagnosis performance for White patients than for Black patients). We have shown that AI models not only can detect a patient’s race from their chest X-rays but also behave against underserved (e.g., Black) patients, misreporting them as healthy at a higher rate, which could lead to those patients being denied care at a higher rate upon deployment.

What You’ll Learn: Fairness analysis in AI diagnostic tools
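
As a rough, generic illustration of the kind of subgroup fairness analysis described above (not the speaker’s actual code or data), the Python sketch below compares false-negative rates, i.e. how often sick patients are reported healthy, across patient subgroups. The column names, threshold, and toy data are hypothetical.

    # Minimal sketch of a subgroup fairness check for a diagnostic classifier.
    # Assumes a pandas DataFrame with hypothetical columns: 'y_true' (1 = disease),
    # 'y_score' (model probability), and 'group' (subgroup label).
    import pandas as pd

    def underdiagnosis_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
        """False-negative rate per subgroup: sick patients the model calls healthy."""
        preds = (df["y_score"] >= threshold).astype(int)
        sick = df["y_true"] == 1
        missed = sick & (preds == 0)
        # FNR per group = missed sick patients / all sick patients in that group
        return missed.groupby(df["group"]).sum() / sick.groupby(df["group"]).sum()

    # Toy example:
    df = pd.DataFrame({
        "y_true":  [1, 1, 1, 1, 0, 0, 1, 1],
        "y_score": [0.9, 0.2, 0.8, 0.3, 0.1, 0.4, 0.7, 0.1],
        "group":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    })
    rates = underdiagnosis_rates(df)
    print(rates)                      # per-group false-negative rates
    print(rates.max() - rates.min())  # a simple unfairness gap

A large gap between groups is the kind of disparity the talk refers to as unfairness.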

Soroosh Shahtalebi

Postdoctoral Research Fellow, Vector Institute

Soroosh Shahtalebi is a postdoctoral research fellow at the Vector Institute, researching the notion of generalization and robustness in deep learning models, particularly in the case of distribution shifts. Previously, he was a postdoctoral researcher with the Quebec AI Institute (Mila) and a PhD student at Concordia University, Montreal, Canada. His research interests include the theory of generalization in machine learning models and its applications in healthcare and NLP.

Talk: Explainable Artificial Intelligence for Discrimination of Neurological Disorders

Abstract: Pathological hand tremor (PHT) is a common symptom of Parkinson’s disease (PD) and essential tremor (ET), which affects manual targeting, motor coordination, and movement kinetics. Effective treatment and management of the symptoms relies on the correct and timely diagnosis of the affected individuals, and the characteristics of PHT serve as an imperative metric for this purpose. Due to the overlapping features of the corresponding symptoms, however, a high level of expertise and specialized diagnostic methodologies are required to correctly distinguish PD from ET. In this work, we propose the data-driven NeurDNet model, which processes the kinematics of the hand in the affected individuals and classifies the patients into PD or ET. NeurDNet is trained on over 90 hours of hand motion signals consisting of 250 tremor assessments from 81 patients, recorded at the London Movement Disorders Centre, ON, Canada. NeurDNet outperforms its state-of-the-art counterparts, achieving an exceptional differential diagnosis accuracy of 95.55%. In addition, using explainability and interpretability measures for machine learning models, we obtain clinically viable and statistically significant insights into how the data-driven model discriminates between the two groups of patients.

What You’ll Learn: How to statistically validate a machine learning model by means of explainability analysis.
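
Purely as a generic illustration of explainability-driven model inspection (this is not the NeurDNet code, and the features below are invented stand-ins), the following sketch trains a simple classifier on synthetic data and uses SHAP values to see which inputs drive its decisions.

    # Generic sketch: which features drive a binary classifier's predictions?
    # Requires: pip install shap scikit-learn. Data and feature names are synthetic.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    feature_names = ["tremor_frequency", "amplitude", "regularity"]  # invented
    X = rng.normal(size=(400, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)       # (n_samples, n_features) for this model
    mean_abs = np.abs(shap_values).mean(axis=0)  # average impact of each feature
    for name, importance in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
        print(f"{name}: {importance:.3f}")

Statistical validation as described in the talk would go further, for example testing whether attribution patterns differ significantly between patient groups, but computing per-feature attributions looks like the above.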

Vanessa Allen

Medical Microbiologist and Clinical Infectious Diseases Specialist, Sinai Health System

Vanessa Allen is a Medical Microbiologist and Clinical Infectious Diseases Specialist at Sinai Health and the University Health Network. She is an Assistant Professor in the Department of Laboratory Medicine and Pathobiology at the University of Toronto.
Her contributions and research have focused on strategies to combat antimicrobial resistance primarily in the areas of bacterial sexually transmitted infections and enteric pathogens, including work in addressing multidrug resistant Neisseria gonorrhoeae, and the implementation and evaluation of the use of novel diagnostic methods such as genomics, machine learning and point of care testing to advance infectious diseases and public health prevention and response.
She served as the Chief of Microbiology and Laboratory Science at Public Health Ontario from May 2013 to September 2021.

Talk: Machine Learning Supported Tick Identification to Combat Lyme Disease
Co-Presenter: Sina Akbarian

Abstract: A priority for public health is to enhance the systematic collection, analysis, and interpretation of data essential to delivering improved public health programs and services. Production of data and the ability to convert that data into usable information at a large scale and in a timely manner can initiate appropriate public health action. In this talk, we will present the application of machine learning to the identification of ticks to enable early risk assessment and prevention of Lyme Disease. Conducted in collaboration with Vector and Public Health Ontario, this approach modernizes public health surveillance and response systems to harness the power and promise of user-generated data.

What You’ll Learn: How machine learning can be used to improve access to tools for risk assessment and prevention of Lyme Disease
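
At its core, the approach described above is an image-classification problem. The sketch below is purely illustrative of that general technique (transfer learning on labelled photos), not the presenters’ actual Public Health Ontario/Vector pipeline; the directory layout and class names are placeholders.

    # Illustrative transfer-learning sketch for classifying tick photos by species.
    # Requires torch and torchvision (>= 0.13 for the weights API used here).
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Hypothetical layout: ticks/train/<species_name>/*.jpg
    train_ds = datasets.ImageFolder("ticks/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new head

    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the head only
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()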

Sina Akbarian

Data Scientist, Klick

Sina is a Data Scientist working on developing computer vision and natural language processing systems and deploying them as cellphone and website applications. He received his M.A.Sc. in Biomedical Engineering from the University of Toronto with expertise in machine learning and deep learning.

Talk: Machine Learning Supported Tick Identification to Combat Lyme Disease
Co-Presenter: Vanessa Allen

Abstract: A priority for public health is to enhance the systematic collection, analysis, and interpretation of data essential to delivering improved public health programs and services. Production of data and the ability to convert that data into usable information at a large scale and in a timely manner can initiate appropriate public health action. In this talk, we will present the application of machine learning to the identification of ticks to enable early risk assessment and prevention of Lyme Disease. Conducted in collaboration with Vector and Public Health Ontario, this approach modernizes public health surveillance and response systems to harness the power and promise of user-generated data.

What You’ll Learn: How machine learning can be used to improve access to tools for risk assessment and prevention of Lyme Disease

Moritz Steller

Applied AI Leader, Microsoft

Moritz is a Senior Cloud Solutions Architect in AI at Microsoft with a long-term focus on AI/ML, AI Business Development and Strategy, and Analytics + Data Platform. He specializes in NLP, Cognitive Services, Forecasting, and AI Automation across Supply Chain, Healthcare, Finance and Insurance. As an Applied AI Leader, his passion is to revolutionize high-risk industries with innovative deep machine intelligence, deliver highly scalable customer-centric solutions, transform data into action, and enable the AI-driven enterprise.
Outside of Microsoft, Moritz supports an NLP Healthcare company and is a career advisor for AI/ML at Harvard University.
Moritz earned his master’s degree in Information Management Systems from Harvard University, Stanford’s Artificial Intelligence Professional Certificate, and several industry/solution certifications.

Talk: Automated Patient Risk Adjustment and Medicare HCC Coding from Clinical Notes

Abstract: Medicare risk adjustment is a rule-based calculation based on seven variables: the ICD codes of the patient’s diagnoses, age, gender, eligibility segment, entitlement reason, Medicaid status, and whether the patient is disabled. All variables are taken from structured data – claims or the EMR. NLP is applied to derive missed ICD codes, which would otherwise result in lower risk adjustment for the patient, hurting the revenue of the provider/ACO taking on that risk or causing payers insuring the patient to underestimate it.
A scalable data lakehouse environment helps healthcare organizations enable reporting capabilities using such automated patient risk adjustment and advanced Medicare HCC coding. Azure Databricks and Azure Synapse Analytics, embedded in the Azure cloud, deliver a secure, enterprise-ready environment that uses the capabilities of Spark NLP for Healthcare to achieve this goal for payers and at-risk providers.
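
To make the shape of the calculation concrete, here is a toy Python sketch of an HCC-style risk score. The ICD-to-HCC mapping and coefficients below are made up for illustration and are not actual CMS values; the point is only that codes missed in structured data (but recoverable from notes via NLP) directly lower the score.

    # Toy HCC-style risk score (illustrative mapping and coefficients only,
    # NOT the published CMS tables used in real risk adjustment).
    ICD_TO_HCC = {          # hypothetical ICD-10 -> HCC category mapping
        "E11.9": "HCC19",   # diabetes without complication (illustrative)
        "I50.9": "HCC85",   # heart failure (illustrative)
    }
    HCC_COEFFICIENTS = {"HCC19": 0.105, "HCC85": 0.331}  # made-up weights
    DEMOGRAPHIC_COEFFICIENT = 0.4                         # made-up age/sex term

    def risk_score(icd_codes: list[str]) -> float:
        """Sum demographic and condition coefficients for the HCCs a patient maps to."""
        hccs = {ICD_TO_HCC[code] for code in icd_codes if code in ICD_TO_HCC}
        return DEMOGRAPHIC_COEFFICIENT + sum(HCC_COEFFICIENTS[h] for h in hccs)

    claims_only = ["E11.9"]               # what structured claims contain
    claims_plus_nlp = ["E11.9", "I50.9"]  # extra code surfaced from a clinical note
    print(risk_score(claims_only))        # 0.505
    print(risk_score(claims_plus_nlp))    # 0.836 – missing the second code understates risk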

What You’ll Learn: This session focuses on the end-to-end delivery of automated Patient Risk Adjustment and Medicare HCC Coding from Clinical Notes.

Victor Sonck

Evangelist, ClearML

Victor started out as a Machine Learning engineer and is currently spreading the word about the importance of MLOps to anyone who’s willing to listen.

Talk: The Importance of Experiment Tracking and Data Traceability

Abstract: Data scientists are usually not trained to go further than their analyses; however, in order to get to a more mature AI infrastructure that can support more models in production, additional steps have to be taken. Experiment management and data versioning are very important first steps toward the “MLOps” way of working. Done properly, they can serve as a foundation to build more advanced systems on top, such as pipelines, remote workers, and advanced automation. When data scientists can incorporate this way of working into their day-to-day, they have a very powerful tool in hand to raise the success rate of their models and analyses.

What You’ll Learn: Learn the importance of experiment management and data versioning in any data analysis and AI workflow. Learn some of the advantages they bring to the table and how easy and painless it can be to add them to your current workflow. Learn how applying these principles can lead to more complex systems that fall under the umbrella of “MLOps”.
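
As a minimal sketch of what experiment tracking looks like in practice with ClearML (the presenter’s tool), the snippet below logs hyperparameters and a validation metric for one run. The project name, task name, parameters, and metric loop are placeholders; check the ClearML documentation for the full API.

    # Minimal ClearML experiment-tracking sketch (pip install clearml).
    from clearml import Task

    task = Task.init(project_name="healthcare-demo", task_name="baseline-model")

    params = {"learning_rate": 1e-3, "batch_size": 32, "epochs": 5}
    task.connect(params)  # hyperparameters become visible and editable in the ClearML UI

    logger = task.get_logger()
    for epoch in range(params["epochs"]):
        val_accuracy = 0.70 + 0.05 * epoch  # stand-in for a real validation metric
        logger.report_scalar(title="accuracy", series="validation",
                             value=val_accuracy, iteration=epoch)

    task.close()

Data versioning works analogously: datasets are registered once, given an ID, and referenced by that ID from experiments, so any result can be traced back to the exact data it was trained on.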

Sedef Akinli Kocak

Senior Project Manager & Tech Translator, Vector Institute

Sedef is Acting Director of Professional Development and a Senior Project Manager & Technical Translator at the Vector Institute for AI, where she engages Vector sponsors on applied AI projects. She has led several large-scale, multi-industry applied AI projects at Vector in areas such as Natural Language Processing, Computer Vision, and Privacy Enhancing Technologies. Sedef was recently nominated for the DMZ Women of the Year annual award for her inspirational work, accomplishments, and contributions to the Canadian tech ecosystem.

Talk: Using Entity Extraction to Help Solve the Long COVID Puzzle
Co-Presenter: Elham Dolatabadi

Abstract: From the outset of the COVID-19 pandemic, social media has provided a platform for sharing and discussing experiences in real time. This rich source of information may also prove useful to researchers for uncovering evolving insights into Long COVID. In order to leverage social media data, this study explores entity-extraction methods to provide insights about Long COVID and addresses the gap between state-of-the-art entity recognition models and the extraction of clinically relevant entities, which may be useful for gaining relevant insights from Twitter data.

What You’ll Learn: 
(1) How to utilize publicly available, user-generated conversations on social media to capture terms related to Long COVID symptoms, recoveries, and experiences
(2) How to utilize entity-extraction methods to provide insights into Long COVID experiences as expressed by patients
(3) How to compare the performance of state-of-the-art models on existing datasets, and how to use data augmentation to bridge the gap with human evaluation
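
As a generic illustration of entity extraction on social-media text (not the study’s models or data), the sketch below runs an off-the-shelf public NER checkpoint over a made-up tweet. A general-purpose model picks up entities such as locations but not symptom terms like “brain fog” – exactly the gap between standard NER and clinically relevant entities that the talk addresses.

    # Generic named-entity extraction over a made-up tweet (pip install transformers).
    # dslim/bert-base-NER is a general-purpose public checkpoint, not a clinical model.
    from transformers import pipeline

    ner = pipeline("token-classification",
                   model="dslim/bert-base-NER",
                   aggregation_strategy="simple")

    tweet = "Month 6 of long covid: still dealing with brain fog and fatigue in Toronto."
    for entity in ner(tweet):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))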

Elham Dolatabadi

Machine Learning Scientist, Vector Institute

Elham is an applied Machine Learning scientist on the AI engineering team at the Vector Institute. She is also an Assistant Professor (status only) at the Institute of Health Policy Management. Her work portfolio and research agenda focus mainly on the adoption of Machine Learning (ML) and Deep Learning technologies to meet real-world needs.

Talk: Using Entity Extraction to Help Solve the Long COVID Puzzle
Co-Presenter: Sedef Akinli Kocak

Abstract: From the outset of the COVID-19 pandemic, social media has provided a platform for sharing and discussing experiences in real time. This rich source of information may also prove useful to researchers for uncovering evolving insights into Long COVID. In order to leverage social media data, this study explores entity-extraction methods to provide insights about Long COVID and addresses the gap between state-of-the-art entity recognition models and the extraction of clinically relevant entities, which may be useful for gaining relevant insights from Twitter data.

What You’ll Learn: 
(1) How to utilize publicly available, user-generated conversations on social media to capture terms related to Long COVID symptoms, recoveries, and experiences
(2) How to utilize entity-extraction methods to provide insights into Long COVID experiences as expressed by patients
(3) How to compare the performance of state-of-the-art models on existing datasets, and how to use data augmentation to bridge the gap with human evaluation

Dr. Amin Madani

Staff Surgeon, University Health Network; Assistant Professor of Surgery, University of Toronto; Director, Surgical Artificial Intelligence Research Academy, UHN

Dr. Madani is an endocrine and acute care surgeon at the University Health Network (UHN) and a researcher at The Institute for Education Research. He has been in the Division of General Surgery at UHN since 2019 and is an Assistant Professor of Surgery at the University of Toronto. He attended medical school at Western University and completed his general surgery residency training at McGill University. During his training he also obtained his PhD in surgical education and innovation. He subsequently completed a clinical fellowship in endocrine surgery at Columbia University-New York Presbyterian Hospital, and he specializes in the surgical management of thyroid, parathyroid, and adrenal disease.
Dr. Madani’s research focus is surgical expertise and the design and development of technology (such as artificial intelligence and simulation) to optimize performance in the operating room. He is the director of the Surgical AI Research Academy, where he leads a team of surgeons, computer scientists, engineers, and game developers. Dr. Madani serves as the chair of the Surgical Data Science Task Force of the Society of American Gastrointestinal and Endoscopic Surgeons, as well as the founder and chair of the Global Surgical AI Collaborative, spearheading efforts to bring the surgical and AI communities together to disseminate surgical expertise around the world in a sustainable manner.

Talk: Artificial Intelligence and Augmentation of Surgical Performance

Abstract: Surgical complications are extremely common and one of healthcare’s biggest sources of morbidity, mortality, and costs. Evidence suggests that most adverse events in the operating room have root causes related to errors in mental processes, such as judgment, human visual perception, and pattern recognition. Recent advances in machine learning methodologies have made it possible to develop algorithms capable of advanced functions related to perception and cognition. In this presentation, we will explore how deep learning for computer vision can be used to provide real-time guidance to surgeons and augment their performance.

What You’ll Learn: 
– Various applications of surgical data science for improving surgical care
– Use cases on AI for computer vision in the surgical field
– Barriers for scaling the full potential of surgical data science to improve patient care

Alexander Wong

Professor and Canada Research Chair, University of Waterloo

Alexander Wong, P.Eng., is currently the Canada Research Chair in Artificial Intelligence and Medical Imaging, Member of the College of the Royal Society of Canada, co-director of the Vision and Image Processing Research Group, and a professor in the Department of Systems Design Engineering at the University of Waterloo. He has published over 650 refereed journal and conference papers, as well as patents, in various fields such as computational imaging, artificial intelligence, and computer vision.
In the area of computational imaging, his focus is on integrative computational imaging systems for biomedical imaging (inventor/co-inventor of Correlated Diffusion Imaging, Compensated Magnetic Resonance Imaging, Spectral Light-field Fusion Micro-tomography, Compensated Ultrasound Imaging, Coded Hemodynamic Imaging, High-throughput Computational Slits, Spectral Demultiplexing Imaging, and Parallel Epi-Spectropolarimetric Imaging). In the area of artificial intelligence, his focus is on operational artificial intelligence (co-inventor/inventor of Generative Synthesis, evolutionary deep intelligence, Deep Bayesian Residual Transform, Discovery Radiomics, and random deep intelligence via deep-structured fully-connected graphical models). He has received numerous awards, including three Outstanding Performance Awards, a Distinguished Performance Award, an Engineering Research Excellence Award, a Sandford Fleming Teaching Excellence Award, an Early Researcher Award from the Ministry of Economic Development and Innovation, Top 10 Outstanding AI Projects recognition from the United Nations’ AI Centre, a Best Paper Award at the NIPS Workshop on Transparent and Interpretable Machine Learning (2017), a Best Paper Award at the NIPS Workshop on Efficient Methods for Deep Neural Networks (2016), an Outstanding Paper Award at the CVPR Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (2021), two Best Paper Awards by the Canadian Image Processing and Pattern Recognition Society (CIPPRS) (2009 and 2014), a Distinguished Paper Award by the Society for Information Display (2015), three Best Paper Awards for the Conference of Computer Vision and Imaging Systems (CVIS) (2015, 2017, 2018), the Synaptive Best Medical Imaging Paper Award (2016), two Magna Cum Laude Awards and one Cum Laude Award from the Annual Meeting of the Imaging Network of Ontario, CIX Top 20 (2017), Technology in Motion Best Toronto Startup (2018), Top Ten Startup at AutoMobility LA (2018), AquaHacking Challenge First Prize (2017), Best Student Paper at the Ottawa Hockey Analytics Conference (2017), and the Alumni Gold Medal. He is the co-founder of DarwinAI, an award-winning AI company focused on accelerating deep learning development through explainable AI.

Talk: Towards Trustworthy and Transparent Clinical Decision Support: From Explainability to Best Practices

Abstract: Recent years have seen tremendous breakthroughs and advances in AI for healthcare and medicine, ranging from clinical decision support to treatment planning to drug discovery. However, there continues to be a tremendous barrier to widespread adoption of AI solutions in real-world clinical practice. One of the biggest challenges is trust and understanding by clinicians in the AI solutions themselves, especially given the critical nature of clinical care. In this talk, I will speak about the challenges and opportunities on the path to trustworthy and transparent clinical decision support, and about the latest innovations ranging from explainability to best practices in the context of clinical applications and use cases, which will hopefully help us get one step closer to widespread clinical acceptance and adoption of AI solutions for human-machine collaboration-based clinical workflows.

What You’ll Learn: You will learn about recent developments in AI for healthcare, the challenges with widespread adoption, and methods and strategies for improving trustworthiness and transparency in clinical decision support systems.

Thomas Doyle

Associate Professor of Electrical and Computer Engineering, McMaster University

Thomas E. Doyle is currently an Associate Professor with the Department of Electrical and Computer Engineering and a Member of the School of Biomedical Engineering, Faculty of Engineering, McMaster University. In addition, he is a Faculty Affiliate with the Vector Institute for Artificial Intelligence. From 2014 to 2019, he was the Director of the McMaster eHealth Graduate Program. His research interests include artificial intelligence and machine learning for human health and performance, human-AI partnership, and trust in autonomous medical advisory systems.

Talk: Human HUMAN.AI: Human Partnership with Medical Artificial Intelligence

Abstract: There is a wealth of research demonstrating the efficacy of AI for medical advisory systems. However, even some of the largest entities in medical AI research and development have struggled to achieve generalizable models. The promise of individualized predictive health models and a “doctor in a box” has not materialized. The vision of continuous wearable sensor monitoring for personal health quantification, elderly parent care, or remote medical support has suffered from a cacophony of standards and protocols. These efforts are further complicated by regulatory approval processes that are still struggling to define parameters for medical AI. Improved active autonomous medical and health advisory systems will broadly benefit human health and performance, whether the application is supporting an elderly person living alone, a medical resident training on a new procedure, an attending physician confirming a proposed treatment against best-practice literature, a surgical team managing a rapidly evolving medical event, or a crew of astronauts managing their health and performance on a long-duration deployment.
For partnership, AI must be given the tools to be considered an active team member. There is a consensus in the research literature that a team consists of two or more individuals who have specific roles, perform interdependent tasks, are adaptable, and share a common goal. By extension, a Human-AI team would be defined by the inclusion of an AI as a member of a team comprising at least one human: an AI that has a specific role, that performs functions interrelated in some manner with those of the human members of the team (i.e., not simply a passive tool to be referenced when needed), that is adaptable in its task and function, and that contributes to a specific goal common to those of the other team members.
This talk will present identified challenges to the goal of Human-AI partnership in the medical domain, related research from the Biomedic.AI lab, and future necessary directions to achieve such partnership.

What You’ll Learn: Attendees will learn about the burgeoning field of human-AI partnership in medicine, challenges, related research, and future directions.

Dr. Devin Singh

Pediatric ER Physician, Co-Founder (Hero AI), The Hospital for Sick Children & Hero AI

Dr. Devin Singh is a practicing Paediatric Emergency Medicine Physician at SickKids. He completed his undergraduate studies in medical sciences at Western University and went on to work for the Ontario provincial government as a business analyst. He then attended medical school at the University of Sydney, Australia, followed by paediatric residency and emergency medicine subspecialty training at SickKids Hospital. He also completed an additional fellowship in Clinical AI at the University of Toronto. His research focuses on the use of machine learning to solve some of healthcare’s largest problems. He is the Physician Lead for Clinical Artificial Intelligence and Machine Learning in the Division of Emergency Medicine at SickKids and has recently obtained a Master’s in Computer Science at the University of Toronto. He is also the co-founder of Hero AI, an innovative healthcare technology start-up dedicated to empowering patients and healthcare providers with AI.

Talk: Clinical Automation in Emergency Medicine with Machine Learning Medical Directives

Abstract: Importance: Increased wait times and long lengths of stay in emergency departments (EDs) are associated with poor patient outcomes. Systems to improve ED efficiency would be useful. Specifically, minimizing the time to diagnosis by developing novel workflows that expedite test ordering can help accelerate clinical decision-making.
Objective: To explore the use of machine learning–based medical directives (MLMDs) to automate diagnostic testing at triage for patients with common pediatric ED diagnoses.
Design, Setting, and Participants: Machine learning models trained on retrospective electronic health record data were evaluated in a decision analytical model study conducted at the ED of the Hospital for Sick Children, Toronto, Canada. Data were collected on all patients aged 0 to 18 years presenting to the ED from July 1, 2018, to June 30, 2019 (77 219 total patient visits).
Exposure: Machine learning models were trained to predict the need for urinary dipstick testing, electrocardiogram, abdominal ultrasonography, testicular ultrasonography, bilirubin level testing, and forearm radiographs.
Main Outcomes and Measures: Models were evaluated using area under the receiver operator curve, true-positive rate, false-positive rate, and positive predictive values. Model decision thresholds were determined to limit the total number of false-positive results and achieve high positive predictive values. The time difference between patient triage completion and test ordering was assessed for each use of MLMD. Error rates were analyzed to assess model bias. In addition, model explainability was determined using Shapley Additive Explanations values.
Results: Models obtained high area under the receiver operator curve (0.89-0.99) and positive predictive values (0.77-0.94) across each of the use cases. The proposed implementation of MLMDs would streamline care for 22.3% of all patient visits and make test results available earlier by 165 minutes (weighted mean) per affected patient. Model explainability for each MLMD demonstrated clinically relevant features having the most influence on model predictions. Models also performed with minimal to no sex bias.
Conclusions and Relevance: The findings of this study suggest the potential for clinical automation using MLMDs. When integrated into clinical workflows, MLMDs may have the potential to autonomously order common ED tests early in a patient’s visit with explainability provided to patients and clinicians.
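
One step in the study design described above is choosing a decision threshold that limits false positives and achieves a high positive predictive value. Purely as a generic illustration (synthetic scores, not the study’s data), the sketch below picks the lowest threshold whose precision meets a target PPV and reports how many visits would be auto-ordered at that threshold.

    # Generic sketch: choose the lowest decision threshold meeting a target PPV.
    # Labels and scores are synthetic; requires numpy and scikit-learn.
    import numpy as np
    from sklearn.metrics import precision_recall_curve

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=2000)
    y_score = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, size=2000), 0, 1)

    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    target_ppv = 0.90
    meets_target = precision[:-1] >= target_ppv   # precision has one extra trailing entry
    chosen = thresholds[meets_target][0] if meets_target.any() else None
    print("chosen threshold:", chosen)
    if chosen is not None:
        flagged = y_score >= chosen
        print("PPV at threshold:", y_true[flagged].mean())
        print("share of visits auto-ordered:", flagged.mean())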

What You’ll Learn: In this presentation we will discuss the latest research published in JAMA Open that focuses on enabling clinical automation in emergency departments through the use of a novel concept called machine learning medical directives. We will explore how these models were developed with a heavy focus on the translation and integration of machine learning models into clinical workflows.
In addition, I will discuss the intersection between academia and industry by highlighting the journey of Hero AI and how these two worlds can truly fuel each other in healthcare by hyper-focusing on a patient-centric approach.
Lastly, I will highlight important equity considerations and the risk of potentially expanding the digital divide with AI.

Kelci Miclaus

AI Solutions Director – Life Sciences, Dataiku

Dr. Kelci Miclaus is AI Solutions Director for Life Sciences at Dataiku. She joined Dataiku from Veeva Systems, where as Senior Director of Veeva Stats she led product and strategy for statistical computing solutions. Previously, she spent 15 years developing statistical/ML algorithms and led R&D product teams for the JMP Life Sciences division at SAS Institute. She holds a PhD in Statistics with a biomedical and genomic research focus from North Carolina State University. Kelci is a subject matter expert on the role of software, data platforms, and analytics/visualization/ML/AI in advancing and operationalizing discoveries in health and life sciences.

Workshop: How Machine Learning and Medical Imaging Transform Point-of-Care Systems
Co-Presenter: Nico Rode

Abstract: Traditionally, radiologists manually review results from imaging scans such as X-rays, CT scans, MRIs, and ultrasounds in order to draw conclusions about a patient’s medical condition. This is laborious and requires great skill and domain knowledge for accurate diagnoses. In recent years, thanks to advances in deep learning and image processing techniques, AI has shown promise in providing guidance from automated analyses of digital images and biomedical scans. But it also raises serious concerns about quality, accuracy, and responsibility in routine clinical care.
In this session, we discuss both the opportunities and challenges of using ML/AI in the healthcare sector amid the rise of virtual care, digital health devices, and point-of-care clinical decision support systems. We will then walk you through a project that takes two different approaches to identifying tuberculosis in patient chest X-ray images.

What You’ll Learn: 
– Key trends in Healthcare
– Custom image data processing to flatten images into tabular formats for analysis
– Comparing various deep learning algorithms to a simple logistic regression model for image analytic insights for disease prediction
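
As a rough sketch of the simple baseline mentioned above (flatten images into a tabular matrix, then fit a logistic regression), the snippet below shows the general idea; the file paths are placeholders and this is not the Dataiku workshop project itself.

    # Baseline sketch: flatten chest X-ray images into rows and fit logistic regression.
    # Requires numpy, pillow, scikit-learn; the directory layout is hypothetical.
    import numpy as np
    from pathlib import Path
    from PIL import Image
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def load_flattened(folder, label, size=(64, 64)):
        rows = []
        for path in Path(folder).glob("*.png"):
            img = Image.open(path).convert("L").resize(size)  # grayscale, fixed size
            rows.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        return np.stack(rows), np.full(len(rows), label)

    # Hypothetical layout: xrays/normal/*.png and xrays/tb/*.png
    X0, y0 = load_flattened("xrays/normal", 0)
    X1, y1 = load_flattened("xrays/tb", 1)
    X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))

A deep learning approach would instead feed the raw images to a convolutional network; comparing the two is the exercise the workshop walks through.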

Nico Rode

Sales Engineer, Dataiku

Nico Rode is a Solutions Engineer who helps to manage the technical relationship between Dataiku and organizations. He graduated with a degree in Computer Science. Most recently, he was a Senior Data Scientist at TIBCO Software. He designed, wrote, & deployed machine learning models for customers and helped to architect & implement TIBCO’s AutoML solution. He has worn many hats throughout his career but now sits comfortably at the intersection of Analytics, engineering, & customer success in his current role with Dataiku.

Workshop: How Machine Learning and Medical Imaging Transform Point-of-Care Systems
Co-Presenter: Kelci Miclaus

Abstract: Traditionally, radiologists manually review results from imaging scans such as X-rays, CT scans, MRIs, and ultrasounds in order to draw conclusions about a patient’s medical condition. This is laborious and requires great skill and domain knowledge for accurate diagnoses. In recent years, thanks to advances in deep learning and image processing techniques, AI has shown promise in providing guidance from automated analyses of digital images and biomedical scans. But it also raises serious concerns about quality, accuracy, and responsibility in routine clinical care.
In this session, we discuss both the opportunities and challenges of using ML/AI in the healthcare sector amid the rise of virtual care, digital health devices, and point-of-care clinical decision support systems. We will then walk you through a project that takes two different approaches to identifying tuberculosis in patient chest X-ray images.

What You’ll Learn: 
– Key trends in Healthcare
– Custom image data processing to flatten images into tabular formats for analysis
– Comparing various deep learning algorithms to a simple logistic regression model for image analytic insights for disease prediction

Donny Cheung

Technical Lead Manager, Google

Donny is the technical lead for Healthcare AI with Google Cloud AI and Industry Solutions, and works on building products that leverage cloud technology to apply advanced analytics and machine learning to healthcare data problems. Previously, he worked in Google’s Display Ads team, building global-scale machine learning applications to improve the quality and relevance of users’ display ad experiences. Prior to Google, he worked as a senior scientist at a medical device startup in Toronto, Canada, focusing on medical imaging device design and image reconstruction algorithms.
Donny holds a PhD in Mathematics from the University of Waterloo and held a postdoctoral fellowship at the University of Calgary in the Quantum Information Science group.

Workshop: Building a Scalable Healthcare AI Service: A Case Study

Abstract: Donny will talk through his experience building scalable cloud-based machine learning services using Google’s Cloud Healthcare Natural Language API as a case study.

What You’ll Learn: Gain a better understanding of the challenges in building healthcare software services that incorporate machine learning models.

2023 Technical Background

  • Expert: 19.2%
  • Advanced: 49.8%
  • Intermediate: 24.1%
  • Beginner: 6.9%

Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, and principles for achieving ROI.

Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.

What is an Ignite Talk?

Ignite is an innovative and fast-paced style used to deliver a concise presentation.

During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.

The result is a fun and engaging five-minute presentation.

