2018 TMLS Conference Program
November 20th Day 1
If you are a speaker, please consider our presentation guidelines
November 21st Day 2
NOVEMBER 19TH : BONUS WORKSHOP DAY
Abstracts: Select Abstracts Day 1
Business Talk - Agile for Data Science Teams - Jennifer Prendki, VP of Machine Learning at Figure Eight
Who is this presentation for? - Chief data officers, data executives, and data science managers
Prerequisite knowledge - Familiarity with Agile methodologies and data science management (useful but not required)
What you'll learn - Understand why managing data teams is different from managing engineering teams, and learn how to adapt the planning methods and techniques that work in software engineering for data science
Description: Since the publication of the Manifesto for Agile Software Development in 2001, Agile methodologies have been adopted by a majority of tech companies and have unquestionably revolutionized the tech industry and its culture. Agile's huge success is hardly a surprise: Agile development came as a breath of fresh air at a time when the tech industry was crippled by the many inefficiencies caused by its own success. Back then, the Agile mindset was a panacea for tech's growing pains. However, the tech industry is now facing a new revolution: big data, machine learning, and artificial intelligence. The methodologies that were so beneficial to software development seem inappropriate for data science teams, because data science is part engineering, part research.
The speaker demonstrates how, with a minimal amount of tweaking, data science managers can adapt Agile techniques and establish best practices to make their teams more efficient. She will start by discussing the Agile Manifesto in detail and reviewing the reasons for its major success in software engineering. She will then outline the different ways organizations set up their data science initiatives and explain how these teams differ from, or resemble, software engineering teams. She will conclude by detailing how to adapt traditional Agile methodologies into a powerful framework for data science managers, and will share tips on how to allocate resources, improve best practices, and tweak the use of planning and organization tools for the benefit of data teams.
Applied ML and Case Studies Talk - How can natural language processing be applied to identify toxic online conversations? - Isar Nejadgholi, Head of Machine Learning Research, IMRSV Data Labs
With the sheer volume of online content, we are plagued by our current inability to monitor it effectively. Social media platforms are ridden with verbal abuse, giving way to an increasingly unsafe and highly offensive online environment. With the threat of sanctions and user turnover, governments and social media platforms currently have huge incentives to create systems that accurately detect and remove abusive content. When considering possible solutions, the binary classification of online data as simply toxic and non-toxic content can be very problematic. Even with very low rates of misclassification, the removal of flagged conversations can impact a user's reputation or freedom of speech. Developing classifiers that can flag the type and likelihood of toxic content is a far better approach: it empowers users and online platforms to control their content based on provided metrics and calculated thresholds. While a multi-label classifier would yield a more powerful application, it is also a considerably more challenging natural language processing problem. Online conversational text contains shortenings, abbreviations, spelling mistakes, and ever-evolving slang, so huge annotated datasets are needed for models to learn all this variability across communities and online platforms.
In our work, we used the Wikimedia Toxicity dataset to train models that can flag toxicity types such as insult, identity hate, and threat. The speaker stacked multiple neural network models that learn sentence labels through recurrent and attention layers, reaching a 0.9862 ROC AUC score. They also pruned the stacked model for efficient real-time deployment, and studied how the model performs across subgroups of the data and on other publicly available datasets of online content.
Our results shed light on the discussion around how automatic labelling of online conversations can be used to make social media safer and more inclusive environments.
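The threshold-based control described above can be sketched in a few lines. This is an illustrative sketch only: the label names and threshold values below are hypothetical, not those used in the speaker's system, and the per-label probabilities would come from the trained multi-label classifier.

```python
# Hypothetical label names and per-label thresholds, for illustration
# only; a real system would calibrate thresholds on validation data.
DEFAULT_THRESHOLDS = {"insult": 0.5, "identity_hate": 0.3, "threat": 0.2}

def flag_toxicity(label_probs, thresholds=DEFAULT_THRESHOLDS):
    """Return the toxicity types whose predicted probability meets the
    platform's threshold, together with the score, so that platforms
    and users can act on the type and likelihood of toxicity rather
    than a single toxic/non-toxic verdict."""
    return {label: p for label, p in label_probs.items()
            if p >= thresholds.get(label, 0.5)}

# A comment scored 82% likely to be an insult and 25% likely to be a
# threat is flagged for both at the thresholds above:
flags = flag_toxicity({"insult": 0.82, "identity_hate": 0.10, "threat": 0.25})
```

Because the thresholds are inputs rather than baked into the model, each platform (or user) can tune its own trade-off between over- and under-flagging.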
Advanced Research/Technical Talk - A New Universal Deep Learning Approach for Natural Language Processing - Professor Hui Jiang, York University
Most NLP tasks rely on modelling variable-length sequences of words, not just isolated words. The conventional approach is to formulate these NLP tasks as sequence labelling problems and apply conditional random fields (CRF), convolutional neural networks (CNN), and recurrent neural networks (RNN). In this talk, the speaker will introduce a new, universal deep learning approach applicable to almost all NLP tasks, not limited to sequence labelling problems. The proposed method is built upon a simple but theoretically guaranteed lossless encoding method, named fixed-size ordinally-forgetting encoding (FOFE), which can almost uniquely encode any variable-length word sequence into a fixed-size representation. Next, simple feedforward neural networks are used as universal function approximators to map fixed-size FOFE codes to various NLP targets. This framework is appealing since it is elegant and well founded in theory, while being fairly easy and fast to train in practice. It is entirely data-driven, requires no feature engineering, and is equally applicable to a wide range of NLP tasks. The speaker will present recent work applying this approach to several important NLP tasks, such as word embedding, language modelling, named entity recognition (NER) and mention detection, coreference resolution, question answering (QA), and text categorization. Experiments have shown that the proposed approach yields strong performance in all examined tasks, including Google 1-billion-word language modelling, the KBP EDL contests, the Pronoun Disambiguation Problem (PDP) in the Winograd Schema Challenge, factoid knowledge-base QA, and word sense disambiguation (WSD).
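The FOFE code itself is a simple recursion: z_t = α·z_{t-1} + e_{w_t}, where e_{w_t} is the one-hot vector of the t-th word and α is a forgetting factor. A minimal sketch (the toy vocabulary size and α value are chosen for illustration; in the talk's framework these codes are fed into feedforward networks):

```python
def fofe_encode(word_ids, vocab_size, alpha=0.5):
    """Fixed-size Ordinally-Forgetting Encoding of a word-id sequence.

    Applies z_t = alpha * z_{t-1} + e_{w_t}, where e_{w_t} is the
    one-hot vector of the t-th word, and returns the final code z_T.
    The code has fixed size vocab_size regardless of sequence length,
    yet (for suitable alpha) different word orders yield different
    codes."""
    z = [0.0] * vocab_size
    for w in word_ids:
        z = [alpha * v for v in z]  # decay earlier words
        z[w] += 1.0                 # add the current word's one-hot
    return z

# Two orderings of the same two words produce distinct codes,
# illustrating that word order is preserved in the encoding:
code_ab = fofe_encode([0, 1], vocab_size=3)  # sequence "a b"
code_ba = fofe_encode([1, 0], vocab_size=3)  # sequence "b a"
```

A bag-of-words representation would map both sequences to the same vector; FOFE's ordinal decay is what keeps them apart while the output size stays fixed.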
Business Talk - Next Big Things in Technology, Media & Telecommunications, & The Intersection with Trends in Machine Learning - Duncan Stewart, Director of Research, Deloitte
Everybody needs to know what’s going to happen next. That’s true across all industries, but the areas of technology, media, and telecom seem to be changing faster than ever before. Duncan Stewart helps you get ahead of the trends. A globally recognized expert on the forecasting of consumer and enterprise technology, media, and telecommunications trends (TMT), he also regularly presents on marketing, technology and consumer trends, as well as the longer term TMT outlook.
Duncan is a member of Deloitte’s national TMT executive team, and co-author of Deloitte Research’s annual Predictions Report on trends in TMT. He has 28 years of experience in the TMT industry. As an analyst and portfolio manager, he has provided research or made investments in the entire Canadian technology and telecommunications sector, and won the Canadian Technology Fund Manager of the Year award in its inaugural year. In his time as an investor, he deployed a cumulative $2 billion of capital into global TMT markets.
Duncan has a high profile media presence and is frequently interviewed on technology, media, and telecommunications issues. He has been a technology columnist for The Globe and Mail, CBC Radio, and The National Post.
Advanced Research and Technical Talk - Adversarial Examples and Understanding Neural Network Representation Space - Nicholas Frosst, Researcher, Google Brain
Business Talk - Client Insight Analytics in the Cloud - Diane Reynolds, Chief Data Scientist, Client Insights, Financial Services Sector, IBM
Abstract: Diane Reynolds has worked in the financial services analytics sector for over 20 years, but one of her most interesting projects began 2 years ago when a small team set out to transform a client insight analysis tool based on traditional software tools into a fully cloud-based solution. In this talk, she will share her experience in moving from relational databases to big data, how she achieved multi-tenancy despite the prevalence of sensitive data, and how she leveraged a variety of machine learning platforms and algorithms. You'll walk away with insights into how to 'productize' your machine learning algorithms on the cloud.
Applied ML and Case Studies Talk - Automated Regulatory Compliance with Cognitive Computing - Dr. Rahmatullah Hafiz, Lead Researcher, Cognitive Computing, Exiger
Successful enforcement of regulatory compliance requires high precision in detecting financial crimes, including bribery, corruption, money laundering, terrorist financing, and fraud. Historically, the infrastructure developed by regulatory bodies has consisted primarily of manual investigation and due diligence. Existing systems also include hard-coded regulatory rules that are painstakingly maintained and curated by hand. This manual research and rule maintenance is not only expensive, time-consuming, and hard to sustain; it also introduces a costly volume of false positives.
Applied ML and Case Studies Talk - Generative Adversarial Networks for Live Makeup Augmentation - Irina Kezele, Director of AI at ModiFace
Real-time virtual makeup try-on is becoming an essential component of beauty e-commerce as well as the in-store shopping experience. It gives users easy tools to tune relevant attributes of the product (e.g. colour and glossiness) according to their personal preferences. Under the hood, traditional solutions to this problem involve two steps: detecting facial landmarks and using them to overlay augmented makeup. This approach has several limitations. First, errors in facial landmark detection cause incorrect makeup alignment. Second, correctly simulating all the physical properties of the augmented makeup is challenging. Third, blending the augmented makeup and the original image is not trivial. Due to these limitations and others, this solution cannot easily scale to a large number of products while maintaining realism, and it would therefore be convenient to have an end-to-end model that can learn a conditional makeup space on its own. We opt for the CycleGAN architecture with a number of modifications to allow for multi-directional image translation and to ensure greater stability in training. Given an input image without makeup and an encoding of the desired makeup, we conditionally train a generator to produce a realistic image with makeup while preserving all the other input image properties, such as personal identity, head pose, and facial expression. We show that the resulting approach does not suffer from the limitations of the aforementioned standard approach and is easily extendable to support an arbitrary number of makeup products.
Advanced Research/Technical Talk - K-FAC and Natural Gradients - Roger Grosse, Assistant Professor, Computer Science UofT, Founding Member Vector Institute
Roger Grosse is an Assistant Professor of Computer Science at the University of Toronto, and a founding member of the Vector Institute.
His group’s research focuses on machine learning, especially deep learning and Bayesian modeling. They aim to develop architectures and algorithms that train faster, generalize better, give calibrated uncertainty, and uncover the structure underlying a problem. They’re especially interested in scalable and flexible uncertainty models, so that intelligent agents can explore effectively and make robust decisions at test time. Towards these objectives, they also aim to automate the configuration of ML systems, from tuning of optimization and regularization hyperparameters to the design of models, architectures, and algorithms. Finally, they are starting to investigate the important and neglected problem of ensuring that AI systems remain aligned with human values.
Previously, he received his BS in symbolic systems from Stanford in 2008, his MS in computer science from Stanford in 2009, and his PhD in computer science from MIT in 2014, studying under Bill Freeman and Josh Tenenbaum. From 2014 to 2016, he was a postdoc at the University of Toronto, working with Ruslan Salakhutdinov. Along with Colorado Reed, he created Metacademy, a website which uses a dependency graph of concepts to create personalized learning plans for machine learning and related fields.
Business Panel: Insurance & AI - Fernando Moreira, SVP, Scotiabank; Dave Keirstead, AVP, Advanced Analytics, Manulife; Steve Holder, National Strategy Executive, Analytics & AI, SAS
Applied Machine Learning and Business Panel: Why Machine Learning Needs Design Thinking, Ramy Nasser, Consultant and former Director, Retail Innovation Lab, Mattel, Inc.
Advanced Research/Technical Talk - Neural Ordinary Differential Equations - David Duvenaud, Assistant Professor UofT, Founding Member of the Vector Institute
The speaker will introduce a new family of deep neural networks. Instead of specifying a sequence of hidden layers, he describes how to parameterize the continuous-time dynamics of the hidden state using a neural network. The outputs of these networks are computed by a differential equation solver. These continuous-depth models adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. They also let one build continuous-time analogs of recurrent neural networks, as well as a new class of generative models that allow exact reversibility without restricting network architecture.
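The core idea can be sketched with the simplest possible solver. Below, a fixed-step Euler integrator stands in for the adaptive solvers used in practice, and a hand-written linear decay function stands in for the learned dynamics network; both substitutions are for illustration only.

```python
def odeint_euler(f, h0, t0, t1, steps=1000):
    """Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step Euler.
    A real Neural ODE parameterizes f with a neural network and uses
    an adaptive solver, which is what lets the model trade numerical
    precision for speed on a per-input basis; fixed-step Euler is the
    simplest stand-in for exposition."""
    h, t = list(h0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        dh = f(h, t)
        h = [hi + dt * dhi for hi, dhi in zip(h, dh)]
        t += dt
    return h

def decay(h, t):
    # Toy "dynamics network": dh/dt = -h, whose exact solution at
    # t = 1 is h(0) * exp(-1), approximately 0.3679 * h(0).
    return [-hi for hi in h]

h1 = odeint_euler(decay, [1.0], 0.0, 1.0)
```

The continuous-depth view replaces "hidden state after layer k" with "hidden state at time t"; increasing `steps` buys precision at the cost of more function evaluations, the trade-off the abstract highlights.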
Business Talk: How Human-Like Robots Will Impact Work, Life and Society, Geordie Rose, Founder and CEO at Sanctuary.ai, D-Wave and Kindred.ai
Geordie founded D-Wave, the world’s first quantum computing company, and Kindred, the world’s first robotics company to use reinforcement learning in a production environment. He is now leading a new venture called Sanctuary.ai.
Throughout the entirety of human history, we have always dreamed about, and been captivated by, the idea of creating machines that are like us. Human-like robots have appeared in endless science fiction movies, in both utopian and – more often – dystopian narratives. The speaker will talk about advancements in robotics and AI that are enabling robots to become increasingly human-like.
Applied ML and Case Studies Talk - Walmart's Retail Journey: From Business Intelligence to Artificial Intelligence, Prakhar Mehrotra, Walmart Labs Senior Director, Machine Learning
The objective of this talk is to take the audience through the AI transformation journey of the world's biggest retailer. Currently, Walmart is tackling the two most important problems in brick-and-mortar retail: intelligent pricing and assortment optimization. The causal nature of these problems demands a causal-inference paradigm, and human decision-making requires highly interpretable models. The speaker will discuss how the machine learning group is trying to balance interpretability with accuracy.
Advanced Technical/Business Talk - Scalable Machine Learning for Data Cleaning - Ihab Ilyas, Professor of Computer Science, University of Waterloo; Cofounder, Tamr
Who is this presentation for?- CIOs, CDOs, VPs of data management, and any other senior IT leaders
Prerequisite knowledge -A basic understanding of data management and data management technologies
What you'll learn- Learn how to curate data at scale to enable transformational analytics and business outcomes
Description: Machine learning tools promise to help solve data curation problems. While the principles are well understood, the engineering details in configuring and deploying ML techniques are the biggest hurdle. Ihab Ilyas explains why leveraging data semantics and domain-specific knowledge is key in delivering the optimizations necessary for truly scalable ML curation solutions.
Ihab focuses on two main problems: entity consolidation, which is arguably the most difficult data curation challenge because it is notoriously complex and hard to scale, and using probabilistic inference to enrich data and suggest data repair for identified errors and anomalies. The problem statement in both cases sounds deceptively simple: find all the records from a collection of multiple data sources that refer to the same real-world entity or use trusted data sources to suggest how to correct errors. However, both problems have been challenging researchers and practitioners for decades due to the fundamentally combinatorial explosion in the space of solutions and the lack of ground truth.
There's a large body of work on this problem from both academia and industry. Techniques have included human curation, rules-based systems, and the automatic discovery of clusters using predefined thresholds on record similarity. Unfortunately, none of these techniques alone has been able to provide sufficient accuracy and scalability. Ihab provides deeper insight into the entity consolidation and data repair problems and discusses how machine learning, human expertise, and problem semantics collectively can deliver a scalable, high-accuracy solution.
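As a baseline illustration of the threshold-on-similarity clustering mentioned above (which, as noted, is insufficient on its own), entity consolidation can be sketched as union-find over pairwise record similarity. The similarity function and threshold below are stand-ins for a learned matcher, and the sample records are invented:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in for a learned record-matching model: plain string
    # similarity in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def consolidate(records, threshold=0.7):
    """Group records whose pairwise similarity meets a predefined
    threshold, using union-find; each resulting cluster approximates
    one real-world entity."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)  # union the two clusters

    clusters = {}
    for i, rec in enumerate(records):
        clusters.setdefault(find(i), []).append(rec)
    return list(clusters.values())

groups = consolidate(["Acme Corp", "ACME Corporation", "Globex Inc"])
```

The quadratic pairwise loop is exactly why this baseline fails to scale, and the fixed threshold is why it fails on accuracy; the talk's point is that ML, human expertise, and problem semantics are needed to move beyond it.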
Applied Machine Learning and Case Study Talk - The quest for the ideal antibody: revolutionizing product search in science - David Qixiang Chen, Co-founder & CTO, BenchSci
The antibody, an important part of the immune system, is a widely used reagent in biomedical experiments. Misuse of antibodies, often due to insufficient data, is responsible for up to 50% of failed experiments and incurs enormous costs in time and money for drug discovery. The best evidence of antibody use, and of other scientific products, is found in scientific publications. Existing publication search tools (PubMed, Google Scholar) are not designed for products. We decoded antibody experimental contexts from open- and closed-source publications with a combination of text mining, bioinformatics, and machine learning.
At the end of the day, scientists prefer to judge experimental outcomes by inspecting publication images. We linked antibody contexts to their figure images by identifying the correct product from among 4M antibodies, within 9M publications, across 300K contexts, and associating them with over 37M protein aliases. This complex task was computed using Spark, and the search is served on Elasticsearch. Deep neural nets were used to judge product/context usage relationships (embeddings, LSTM with attention) and to identify technique subpanels (CNN) in figures to fine-tune data accuracy.
This platform is well received by academic and industry scientists. Some of the largest pharmaceutical companies in the world are our customers, and their R&D scientists use BenchSci daily. Scientists tell us that we have reduced their search time from weeks to minutes, and BenchSci has proven to be a game-changer in scientific product search.
BenchSci's mission is to close the gap between idea and outcome in science. We accelerate the pace of discovery by removing roadblocks in the scientific iteration cycle. ML has proven indispensable: scaling data processing with a small team could only have been achieved through deep learning.
Advanced Technical and Research Talk - Alán Aspuru-Guzik, Professor, University of Toronto; Faculty Member, Vector Institute
Alán Aspuru-Guzik’s research lies at the interface of computer science with chemistry and physics. He works in the integration of robotics, machine learning and high-throughput quantum chemistry for the development of materials acceleration platforms. These “self-driving laboratories” promise to accelerate the rate of scientific discovery, with applications to clean energy and optoelectronic materials. Alán also develops quantum computer algorithms for quantum machine learning and has pioneered quantum algorithms for the simulation of matter. He is jointly appointed as a Professor of Chemistry and Computer Science at the University of Toronto. Previously, he was a full professor at Harvard University. Alán is also a co-founder of Zapata Computing and Kebotix, two early-stage ventures in quantum computing and self-driving laboratories respectively.
Alán's highlights include: Canada 150 Research Chair in Theoretical and Quantum Chemistry; Google Focus Research Award in Quantum Computing; CIFAR Senior Fellow, Biologically-Inspired Solar Energy Program; MIT Technology Review 35 under 35; Alfred P. Sloan Fellow; elected Fellow of the American Association for the Advancement of Science; Fellow of the American Physical Society.
Abstracts: Select Abstracts Day 2
Big Data - From Competition to Conglomeration - Prajakta Kharkar, Senior Economist, AI Partnerships Strategy, Data Valuation, Telus
Abstract: Today, everybody is in the data business. The competitive landscape has changed: everyone is a competitor, and we are all racing to monetize our information before anyone else. While privacy concerns exist, and individuals may not have a real choice in how companies in the data business generate insights about their behaviour, is sharing data such a bad idea after all? Could there, in fact, be benefits in doing so? Indeed, the growing trend among large companies is towards data sharing instead of data competition. The future, in my opinion, belongs to large data conglomerates.
Applied Machine Learning and Case Study Talk - Deeper understanding of sports: Teaching a machine to see, understand, describe, and predict the game in real-time - Mehrsan Javan, CTO and Co-founder, SPORTLOGiQ
After describing the big picture of AI for sports analytics, the speaker will focus on one real example of player performance evaluation with multi-agent reinforcement learning.
Advanced Research and Technical Talk - Machine Learning for Medical Imaging - Anne Martel, Professor, Medical Biophysics, UofT; Senior Scientist at Sunnybrook Research
Recent advances in machine learning in general, and deep learning in particular have transformed the field of medical image analysis. In applications ranging from image reconstruction, to the detection and diagnosis of disease, neural networks are outperforming more traditional methods of analysis. As well as having profound implications for clinicians and researchers, this has led to an explosion of new companies who are developing medical applications built on imaging data. This talk will provide a brief overview of the field and will provide some case studies in computer-aided diagnosis and survival prediction in breast cancer with MRI and digital pathology.
Applied Machine Learning and Case Study Talk - Challenges with applying ML in a high-transaction Big Data environment - Lubna Khader, Lead Data Scientist at Pelmorex
Pelmorex is a leading mobile advertising company in the Canadian market. We have built a large ad-serving platform that enables our clients to advertise the right service or product to the right person. Our platform handles up to 100K ad auctions per second and tries to make the best bidding decision for each one while meeting many business constraints and optimizing many metrics; in the process, it generates ~6TB of data a day. We rely heavily on machine learning to guide our decisions in real time, based on knowledge derived from the huge datasets that our bidding processes generate.
When universities teach machine learning, they usually cover the most popular algorithms, the math and logic behind them, and when they are best used, and they hand students packaged, prepared datasets to apply the algorithms to. All of this is great, but after working for 4.5 years on machine learning and Big Data projects, I found there is much more to applying ML than what I was taught. Finding an algorithm that works well on a prepared sample set takes a small fraction of the time needed to solve the real problem in a production environment.
Many popular algorithms break when the data grows to the scale we deal with at Pelmorex. At that point, we had to explore different ways to tweak the algorithm, or switch algorithms completely, sacrificing performance in favour of a simpler and lighter one. In other cases, the algorithm itself wasn't the bottleneck, but rather acquiring the necessary training data, or storing the results or intermediary data. Big Data has introduced various challenges to how traditional systems work, and using ML with Big Data is not trivial. In my talk, I will discuss the challenges my team and I have faced applying ML at Pelmorex, go through some techniques we've used to work around them, and emphasize the tradeoffs that need to be made when managing an ML project.
This information will help data science practitioners shift their perspective from focusing on the most fashionable ML algorithms to paying attention to the applicability of suitable algorithms and adapting them to work in real-life scenarios.
Panel: Ethics & AI: Pursuing Responsible Innovation - Laila Paszti, Attorney, GTC Law Group PC & Affiliates; Ozge Yeloglu, Chief Data Scientist, Microsoft Canada; Parinaz Sobhani, Director of Machine Learning, Georgian Partners; Richard Zuroff, Industry Solutions, Element.ai; Tyler Schnoebelen, Principal Product Manager, Integrate.ai
Business Talk - Beyond Research and Coding: Selling and Delivering AI, Co-presented by Steve Kludt, CTO for Canvass Analytics, and John Devins, VP of Sales for Canvass Analytics
In today’s competitive job market, most AI positions focus on one of two areas: Theory/Research or Development/Engineering. As we see AI products mature in the marketplace, there's an ever-growing need for a third area: customer engagement in the field.
Customer engagement is the lifeline to creating sustainable applications of AI, and there is growing demand for individuals with strong AI/ML backgrounds coupled with the ability to engage and communicate with users and customers on the tangible business benefits of AI.
Hear from Canvass Analytics as they describe some of the opportunities in technical sales and customer success for AI-based products, share real-world experiences in overcoming sales objections, and provide recommendations on how you can position yourself for success in these exciting, high-demand roles.
Applied Machine Learning and Case Study Talk - AI Explainability: Why it matters, why it's hard
Xavier Snelgrove, Applied Research Scientist, Element AI
As more machine learning algorithms move from research labs to the real world, questions of trust, accountability, and auditability are driving the push for explainability in AI. We are unlikely to trust a high-stakes algorithmic decision without a good understanding of why it came to that decision. Unfortunately, current state-of-the-art machine learning models are highly complex, and reasoning about their behaviour is difficult. Further complicating things, there are (infinitely?) many true explanations for any phenomenon, and the one we want is context- and audience-dependent. In this talk we will motivate AI explainability, discuss the technical and philosophical reasons why it is a difficult and interesting problem, and explain the technical workings of some recent methods.
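One simple member of the family of post-hoc explanation methods is perturbation ("occlusion") analysis: measure how much a model's score drops when each input feature is replaced by a baseline value. This sketch is generic background, not specific to any method covered in the talk, and the toy model and baseline value are assumptions:

```python
def occlusion_importance(predict, x, baseline=0.0):
    """Score each feature by how much the model's output drops when
    that feature is replaced with a baseline value. A basic instance
    of the perturbation/occlusion family of post-hoc explanations."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline   # "occlude" feature i
        importances.append(base_score - predict(perturbed))
    return importances

def model(x):
    # Toy stand-in for a trained model; weights feature 1 heavily.
    return 0.2 * x[0] + 5.0 * x[1] + 0.1 * x[2]

imps = occlusion_importance(model, [1.0, 1.0, 1.0])
```

Even this trivial method illustrates the talk's point about context-dependence: the explanation changes with the choice of baseline, which is itself a modelling decision.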
Business Panel: Future of Healthcare in AI - Wanda Peteanu, Director of Information Management, MHSc, CHE, AI + Healthcare Education, UHN; Linda Kaleis, Lead Data Scientist, MEMOTEXT Corporation; Gabriel Musso, Chief Scientific Officer at BioSymetrics Inc.
Applied ML Panel - Choosing your Technology Stack, and Making it AI-Capable, Rupinder Dhillon, Director Data Science and Machine Learning, Bell.
Unsupervised Video Object Segmentation for Deep Reinforcement Learning - Professor Pascal Poupart, David R. Cheriton School of Computer Science, UWaterloo; Research Director, Borealis AI; Faculty Member, Vector Institute
Deep reinforcement learning (RL) in visual domains is often sample inefficient since the agent is implicitly learning to extract useful information from raw images while optimizing its policy. Furthermore, the resulting policy is often a black box that is difficult to explain. The speaker will present a new technique for deep RL that automatically detects moving objects and uses the relevant information for action selection. The detection of moving objects is done in an unsupervised way by exploiting structure from motion. Over time, the agent identifies which objects are critical for decision making and gradually builds a policy based on relevant moving objects. This approach, which we call Motion-Oriented REinforcement Learning (MOREL), is demonstrated on a suite of Atari games where the ability to detect moving objects reduces the amount of interaction needed with the environment to obtain a good policy. Furthermore, the resulting policy is more interpretable than policies that directly map images to actions or values with a black box neural network. We can gain insight into the policy by inspecting the segmentation and motion of each object detected by the agent. This allows practitioners to confirm whether a policy is making decisions based on sensible information.
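The motion cue behind this idea can be shown with a drastically simplified stand-in: MOREL derives object masks from structure from motion, but plain frame differencing already conveys unsupervised moving-pixel detection. The threshold and toy frames below are illustrative only.

```python
def moving_object_mask(prev_frame, frame, threshold=0.5):
    """Mark pixels whose intensity changed by more than a threshold
    between consecutive frames. MOREL itself obtains object masks in
    an unsupervised way from structure from motion; plain frame
    differencing is only the simplest possible motion cue."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_prev, row_cur)]
            for row_prev, row_cur in zip(prev_frame, frame)]

# A 3x4 toy frame pair in which a vertical bar moves one pixel right:
f0 = [[0, 1, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 0]]
f1 = [[0, 0, 1, 0],
      [0, 0, 1, 0],
      [0, 0, 0, 0]]
mask = moving_object_mask(f0, f1)
```

The mask covers both the bar's old and new positions; an agent conditioning its policy on such masks attends to moving objects rather than raw pixels, which is the interpretability benefit the abstract describes.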
Business & Applied ML Panel: AI in Retail
Applied Machine Learning & Case Study Talk - Finding flights you didn't know you wanted: Large-scale recommendation at Hopper - Patrick Surry, Chief Data Scientist, Hopper
In 2015, Hopper first began testing a new recommendation algorithm, built on top of its core platform, which sent users notifications about deals for alternate origins, destinations, and dates. What they found is that users converted 2.5x better on those recommendations than on the trips for which they’d originally searched.
Soon, Hopper was selling more than $1.5M a week of airfare for flights that users had never even asked for and today roughly 20% of their sales come directly from these automated recommendations. Making personalized recommendations in the travel category means not just dealing with data at a scale and complexity unmatched by most companies using similar approaches, but also with significantly less frequent feedback. A slower feedback loop (people taking 2-3 flights per year vs. watching a movie every night), coupled with trillions of possible trips to recommend, makes training the algorithm a much more complex task.
Hopper collects about a trillion priced itinerary options per month, with an archive stretching back almost five years. This data drives their core price forecasting and now, in conjunction with their growing volume of user intent data, allows the development of much smarter personalized recommendation algorithms. Users have watched 55M trips on Hopper, with about 150,000 new trips currently being added per day. Watching a trip starts a conversational relationship driven through push notifications and in-app interaction. They've sent over two billion notifications to date, with about 90% of sales resulting directly from push notifications. The core conversation is focused on the best time to buy, but also involves suggesting potential alternatives and learning more about user preferences and intent, in the same way that a human travel agent might have interacted with a client. Feedback from these interactions is used to make future recommendations more relevant and personalized. This talk will explore Hopper's approach in more detail, explain how it has evolved over time, discuss key technical challenges in both scale and methodology, and outline ongoing research plans.
Applied Machine Learning & Case Study Talk - Learn how to leverage proprietary datasets while protecting privacy and intellectual property - Monica Holboke, CEO & Founder, CryptoNumerics
Data is the key ingredient in any AI strategy. But some of the most valuable datasets are inaccessible because they reside in silos created by privacy concerns, fear of losing intellectual property, regulatory requirements, and contractual obligations. This is most common in highly regulated industries such as healthcare and financial services. Siloed data hampers organizations' ability to accelerate their data-driven innovation. However, by employing state-of-the-art techniques in cryptography and numerical methods, organizations can collaborate on building statistical and machine learning models with decentralized, siloed data to generate superior insights while preserving privacy and IP.