Reproducibility & Data Version Control
for LangChain & LLM/OpenAI Models

FREE Virtual Workshop
Nov. 29th, 1PM EST


Presenter: Amit Kesarwani
Director of Solutions Engineering
lakeFS by Treeverse

Past Event Speakers

Advanced Technical/Research


Bénédicte Pierrejean

Senior Machine Learning Scientist, Ada
Talk: Automatic Evaluation of Dialogue Systems

Mandy Gu

Senior Software Development Manager, Wealthsimple
Talk: MLOps Behind LLMs in Production at Wealthsimple

Jay Dawani

Co-Founder & CEO, Lemurian Labs
Talk: The GenAI Datacenter: Rethinking Computer Architecture To Democratize AI

Business Strategy

Denys Linkov

Head of ML, Voiceflow
Talk: The Metrics, Automation and Technology Behind Using Gen AI for Customer Support

Tahsim Ahmed

Customer Experience (CX) & AI Assistants Lead, Voiceflow
Talk: The Metrics, Automation and Technology Behind Using Gen AI for Customer Support

Margaret Wu

Lead Investor, Georgian
Talk: A VC Perspective on Generative AI

Dr. Ehsan Amjadian

Head of AI Solution Acceleration & Innovation, RBC
Talk: AI & Embedded Finance

Sonaabh Sood

Director, Product Innovation & Delivery, RBC
Talk: AI & Embedded Finance

Alfred Poon

Director, Software Development, RBC
Talk: Enabling the Use of Large Language Models for RBC Applications

Case Study

Jess Leung

Staff ML Engineer, Recursion
Talk: Creating the World’s Premier Biological Foundation Model

Workshops

Amit Kesarwani

Director of Solutions Engineering, Treeverse
Virtual Workshop: Reproducibility and Data Version Control for LangChain and LLM/OpenAI Models

Rohit Saha

Applied Research Scientist, Georgian
Workshop: Exploring the Advantages of Using Open Source to Fine-tune and Deploy Your Own LLM

Kyryl Truskovskyi

Machine Learning Engineer, Georgian
Workshop: Exploring the Advantages of Using Open Source to Fine-tune and Deploy Your Own LLM

Maria Ponomarenko

MLOps Intern, Georgian
Workshop: Exploring the Advantages of Using Open Source to Fine-tune and Deploy Your Own LLM

Suhas Pai

NLP Researcher and Co-Founder/CTO, Hudson Labs (formerly Bedrock AI)
Workshop: How to Build Production-Ready Applications Using Open-Source LLMs

Christopher Parisien

Senior Manager, Applied Research, NVIDIA
Workshop: Adding Safety and Security to Chatbots with NeMo Guardrails

Benjamin Ye, CFA

ML Research Intern, Georgian
Workshop: Exploring the Advantages of Using Open Source to Fine-tune and Deploy Your Own LLM

Griffin Lacey

Manager, Solutions Architect, NVIDIA
Workshop: Adding Safety and Security to Chatbots with NeMo Guardrails

James Sutton

Lead Machine Learning Engineer, Union.ai
Workshop: Fine-Tuning Language Models with Declarative AI Orchestration


Agenda

This agenda is still subject to change.
Talk: Automatic Evaluation of Dialogue Systems

Presenter:
Béné Pierrejean, Senior Machine Learning Scientist, Ada

About the Speaker:
Bénédicte Pierrejean is a Senior ML Scientist in the Applied Machine Learning team at Ada. She has a PhD in Natural Language Processing and is passionate about improving customers’ experiences using ML.

Talk Track: Advanced Technical/Research

Talk Technical Level: 4/7

Talk Abstract:
LLMs are making a huge impact in a variety of fields and they are especially powerful for customer service. They make it easier to move away from structured dialogue flows and provide users with natural conversations. However, using LLMs for dialogue systems is not without challenges. One such challenge is evaluation. How do we ensure that the model we use produces content that is safe, accurate and relevant? How do we make sure we are driving conversations towards resolution?

This talk will focus on how we use LLMs for evaluation by simulating conversations between our production dialogue systems and users. We will see how this approach lets us reproduce realistic testing conditions and quickly assess the impact of any new change to our production pipelines.
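As a rough illustration of the simulated-user idea (not Ada’s actual pipeline; the bot function, persona, and model choice below are placeholders), a minimal simulation loop might look like this:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = ("You are playing a frustrated customer whose order arrived late. "
           "Reply with the customer's next message only.")

def call_production_bot(history: list[dict]) -> str:
    # Hypothetical stand-in for the production dialogue system under test.
    return "Sorry to hear that! Could you share your order number?"

def as_simulator_view(history: list[dict]) -> list[dict]:
    # Flip roles: the simulated user "speaks" as assistant and hears the
    # bot's replies as its interlocutor's (user) turns.
    flipped = {"user": "assistant", "assistant": "user"}
    return [{"role": flipped[m["role"]], "content": m["content"]} for m in history]

history: list[dict] = []
for _ in range(3):
    user_turn = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": PERSONA}] + as_simulator_view(history),
    ).choices[0].message.content
    history.append({"role": "user", "content": user_turn})
    history.append({"role": "assistant", "content": call_production_bot(history)})

print(history)  # transcript to score for safety, accuracy, and resolution
```

The resulting transcripts can then be graded (by humans or another LLM) before any pipeline change ships.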

What You’ll Learn:
Leveraging LLMs for evaluation, creative solutions

Talk: MLOps Behind LLMs in Production at Wealthsimple

Presenter:
Mandy Gu, Senior Software Development Manager, Wealthsimple

About the Speaker:
Mandy is a Senior Software Development Manager at Wealthsimple, where she leads Machine Learning & Data Engineering. Her teams’ mandate is to provide a simple and reliable platform that empowers the rest of the company to iterate quickly on machine learning applications and GenAI tools, and to leverage data assets to make better decisions. Previously, Mandy worked in the NLP space and as a data scientist.

Talk Track: Advanced Technical/Research

Talk Technical Level: 5/7

Talk Abstract:
LLMs have revolutionized the way we leverage and consume AI – new state-of-the-art models and approaches are being developed at a rapid pace. At Wealthsimple, our ML Engineering team strives to equip every employee with the tools they need to mine this AI gold rush. We accomplish this by building a platform around LLMs. This talk illustrates the stack we use to deploy self-hosted LLMs, how we are building towards fine-tuning, and the tools and integrations we developed on top of our LLM Platform.

What You’ll Learn:
ML Engineering for LLMs, and the leverage platforms provide. Our first self-hosted LLM took weeks to deploy; with our investments in the platform, the most recent one took less than an hour.

Talk: The GenAI Datacenter: Rethinking Computer Architecture To Democratize AI

Presenter:
Jay Dawani, Co-Founder & CEO, Lemurian Labs

About the Speaker:
Jay Dawani is co-founder & CEO of Lemurian Labs, a startup at the forefront of general-purpose accelerated computing, working to make AI development affordable and broadly accessible. Author of the influential book “Mathematics for Deep Learning”, he has held leadership positions at companies such as BlocPlay and Geometric Energy Corporation, spearheading projects involving quantum computing, the metaverse, blockchain, AI, space robotics, and more. Jay has also served as an advisor to the NASA Frontier Development Lab, SiaClassic, and many leading AI firms.

Talk Track: Advanced Technical/Research

Talk Technical Level: 3/7

Talk Abstract:
Generative AI workloads are breaking every aspect of the data center. As the capabilities of AI increase, so does demand for it. The conventional path of performance improvement in legacy processors has stagnated, providing diminishing returns. We can no longer expect continued acceleration from hardware alone; we need to rethink the software stack as well. This raises concerns about whether we may ever fully realize the potential of AI. In this talk we will share how combining our software-first methodology and novel data representation leads to breakthrough performance gains in general-purpose accelerated computing, pushing developers to the limits of physics.

What You’ll Learn:
We’ll show how we get to a 20X improvement in performance. We’ll also be offering early access to our virtual development kit.

Talk: The Metrics, Automation and Technology Behind Using Gen AI for Customer Support

Presenters:
Denys Linkov, Head of ML, Voiceflow | Tahsim Ahmed, Customer Experience (CX) & AI Assistants Lead, Voiceflow

About the Speakers:
Denys is the Head of Machine Learning at Voiceflow, a conversational AI company. He is a strong advocate of building practical, reliable, and real-time ML systems in both the Gen AI and NLU space. He is an active member of the ML community, leading discussion groups and mentorship sessions and answering questions on forums. He previously worked as a Senior Cloud Architect at a global bank.

Tahsim serves as the Customer Experience (CX) & AI Assistants Lead at Voiceflow, overseeing all Customer Operations & Product Support functions, where he masterminded ‘Tico’, a cutting-edge AI conversational support agent. Starting as a Customer Support Specialist, he swiftly advanced through four key roles, driving support for over 200,000 users with an AI-first approach to CX. Before Voiceflow, Tahsim spearheaded innovation at Canada’s largest retailer and championed Robotic Process Automation (RPA) at PwC, among other tech endeavors. His expertise fuses AI with an operational automation and utility lens, elevating user and stakeholder experiences and driving significant business metrics.

Talk Track: Business Strategy

Talk Technical Level: 4/7

Talk Abstract:
It’s been a year since ChatGPT came out, but how has it affected business metrics? In this talk we’ll cover the impact and ROI of Tico, Voiceflow’s Gen AI support assistant.

We’ll focus on two use cases, customer support and community management, going through business, customer support, and technical metrics. This talk will also include a review of Tico’s architecture and functionality from a technical perspective, including our implementation of NLU and RAG-based approaches.

What You’ll Learn:
The business impact of a well-structured Gen AI project

Talk: A VC Perspective on Generative AI

Presenter:
Margaret Wu, Lead Investor, Georgian

About the Speaker:
Margaret Wu, who goes by “Margo,” is a Lead Investor at Georgian and is involved in due diligence, deal selection, and board governance for growth-stage B2B software companies exploiting Applied AI. Prior to Georgian, she was a Senior Product Manager at Amazon, co-founded a biotech startup, and served as COO at OneSpout. Margaret began her career in technology consulting at Accenture and holds an MBA from Cornell University, as well as a BSc and BES from the University of Waterloo. She is currently an active board member at Xanadu, OysterHR, OpenWeb, Darrow, SPINS, Tealium, Top Hat, and Devo.

Talk Track: Business Strategy

Talk Technical Level: 1/7

Talk Abstract:
A view into the current fundraising environment and investor diligence for Generative AI companies, as well as different use cases for LLM experimentation across growth-stage startups.

What You’ll Learn:
– Insights into the characteristics that make a Generative AI company attractive to investors and the typical diligence steps to get there
– A snapshot of the current market for Generative AI technologies, what segments are hot, and which are saturated
– Insights into how growth-stage startups are leveraging Large Language Models (LLMs) and how these tools scale business operations or disrupt traditional models

Talk: AI & Embedded Finance

Presenters:
Dr. Ehsan Amjadian, Head of AI Solution Acceleration & Innovation, RBC | Sonaabh Sood, Director, Product Innovation & Delivery, RBC

About the Speakers:
Dr. Ehsan Amjadian is the Head of AI Solution Acceleration & Innovation at the Royal Bank of Canada (RBC). He has led numerous advanced AI products and initiatives from ideation to production and has filed multiple patents in the areas of Data Protection, Finance & Climate, and Computer Vision & its applications to Satellite Images.

He earned his Ph.D. in Deep Learning & Natural Language Processing from Carleton University, Canada and is presently an Adjunct Professor of Computer Science at the University of Waterloo. He is published in a variety of additional Artificial Intelligence and Computer Science domains including Recommender Engines, Information Extraction, Computer Vision, and Cybersecurity.

As Director for Product Innovation in RBC’s Solution Acceleration & Innovation team, Sonaabh Sood leads the creation of novel client experiences in banking, including the RBC Launch app – RBC’s beta innovation platform. He has also spearheaded the digitization of RBC’s lending business and capabilities, and holds patents for creating simple user experiences powered by deep insights and a multitude of information sources.

Prior to RBC, Sonaabh worked with UBS Investment Bank, Macquarie Bank, and Panasonic in Singapore. He holds a Computer Engineering degree from Nanyang Technological University, Singapore, and an MBA from Western University, Canada. In his spare time, Sonaabh likes to work on cars, read space sci-fi, and go trekking.

Talk Track: Business Strategy

Talk Technical Level: 4/7

Talk Abstract:
At RBC we strive to provide value to our clients by moving up the value chain, facilitating their journey, and improving their experience. In doing so, we employ cutting-edge, responsible artificial intelligence technologies. In this talk we’ll walk through some of our AI-powered products, their functionalities, and some of the technical challenges faced when building them in highly regulated domains such as financial services. The talk will cover capabilities such as Shopping, Insights/Calendar, NOMI, Offer Recommender, Autofill, and the like.

What You’ll Learn:
AI’s role in embedded finance
Embedded Finance
AI utilization in financial services

Talk: Enabling the Use of Large Language Models for RBC Applications

Presenter:
Alfred Poon, Director, Software Development, RBC

About the Speaker:
Alfred is Director of Software Development at RBC in the Lumina platform team. He has 20+ years of experience in Application Development, Data Engineering and Graph Technologies. He currently leads the Generative AI Platform Team, helping to bring Gen AI capabilities to the rest of RBC.

Talk Track: Advanced Technical/Research

Talk Technical Level: 4/7

Talk Abstract:
Come join us to build the next generation of AI technologies at RBC! The RBC Lumina Generative AI platform is an enterprise-approved gateway to LLMs at RBC. In this session participants will learn how this gateway enables them to: prevent data loss; protect against malicious activity; monitor usage and manage costs; choose a language model that’s fit for purpose; access safety benchmarks across LLMs; and write good prompts (Feedback & Versioning).

What You’ll Learn:
How an enterprise LLM gateway can prevent data loss, protect against malicious activity, monitor usage and costs, and help teams choose fit-for-purpose models and write good prompts

Talk: Creating the World’s Premier Biological Foundation Model

Presenter:
Jess Leung, Staff ML Engineer, Recursion

About the Speaker:
Jess Leung has been shipping machine learning to production throughout their career. They are currently a Staff Machine Learning Engineer at Recursion, where they lead the ML Platform team. Prior to Recursion, Jess held technical leadership roles, shipping products in a wide variety of domains including internet-scale platforms, e-commerce, life science solutions, public transportation services, and financial systems. Jess holds a B.Sc. in Electrical Engineering from Queen’s University.

Talk Track: Case Study

Talk Technical Level: 5/7

Talk Abstract:
Recursion has built the world’s most expansive biological foundation model, boasting billions of parameters and trained on hundreds of millions of high-resolution images. Dive into an exploration of Recursion’s groundbreaking approach that is reshaping and revolutionizing the drug discovery process. This talk will provide an in-depth look at the infrastructure and tooling necessary for building such models. We’ll share insights into the intricate process of efficient data management, large-scale model training, scaling inference, and effective use of our in-house supercomputer and public cloud environments.

What You’ll Learn:
What it takes (infrastructure, techniques, talent, culture, and practices) to train large deep learning models

Workshop: Reproducibility and Data Version Control for LangChain and LLM/OpenAI Models

Presenter:
Amit Kesarwani, Director of Solutions Engineering, Treeverse

About the Speaker:
Amit heads the solution architecture group at Treeverse, the company behind lakeFS, an open-source platform that delivers a Git-like experience for object-storage-based data lakes. Amit has 30+ years of experience as a technologist working with Fortune 100 companies as well as start-ups, designing and implementing technical solutions for complicated business problems. As an entrepreneur, he launched a cloud offering providing Data Warehouse as a Service.

Amit holds a Master’s certificate in Project Management from George Washington University and a bachelor’s degree in Computer Science and Technology from the Indian Institute of Technology (IIT), India. He is the inventor of the patent “System and Method for Managing and Controlling Data.”

Talk Track: Workshop

Talk Technical Level: 6/7

Talk Abstract:
In the last couple of years, Large Language Models (LLMs) have really skyrocketed in popularity and usefulness. Companies have harnessed this novel approach to machine learning and AI to build Foundation Models. A Foundation Model is an AI neural network — trained on mountains of raw data, generally with unsupervised learning — that can be adapted to accomplish a broad range of tasks.

Working with Foundation Models, however, differs from “traditional” ML. Instead of creating a new model from scratch using training and validation data for the task at hand, users of foundation models typically take an existing model, including its knowledge of the world, and “bend” it to fit a new task: adding more business or domain-specific knowledge to the existing model in order to adapt it to a new task. There are various techniques to achieve this, such as Fine Tuning, Prompt Engineering and Retrieval Augmented Generation.

To make an LLM application useful, this step is only one part of a sequence of required operations:
– Providing high-quality data to use when fine-tuning
– Converting the data into a format that the model can understand (also known as embedding)
– Indexing the data in a vector database, allowing efficient search
– Managing and optimizing prompts to ensure the model knows how to optimally use the data available to it
– Tuning the model and its parameters to ensure data is both trustworthy and up to date
– Wrapping the resulting model, embedding, parameters, and prompts in an application consumable by its intended users

That’s where LangChain comes into play. LangChain is a comprehensive library of open-source components that help abstract away a lot of the complexity of working with LLMs.

Foundation models generally learn from unlabeled datasets, saving the time and expense of manually describing each item in massive collections. However, as data grows, the challenge of efficiently managing and controlling large datasets becomes more pronounced. Also, reproducibility, a core problem in ML, is even harder when it comes to LLMs. But you can achieve reproducibility easily with LangChain and lakeFS.

lakeFS is an open source, scalable data version control system that works on top of existing object stores. It allows users to treat vast amounts of data, in any format, as if they were all hosted on a giant Git repository: branching, committing, traversing history – all without having to copy the data itself. LangChain now includes an official lakeFS document loader. Using the document loader, users can now easily read documents from any lakeFS repository and version, with little configuration or coding.
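As a minimal sketch of the loader in use (endpoint, credentials, repository, and path below are placeholders; check the current LangChain documentation for the import path and constructor signature):

```python
from langchain.document_loaders import LakeFSLoader

# Point the loader at a lakeFS installation (placeholder credentials/endpoint).
loader = LakeFSLoader(
    lakefs_access_key="AKIA...",
    lakefs_secret_key="...",
    lakefs_endpoint="https://my-org.lakefs.example",
)

# Pin the read to a repository, ref (branch or commit ID), and path, so the
# same code always loads the same document versions.
loader.set_repo("llm-docs")
loader.set_ref("main")  # use a commit ID for strict reproducibility
loader.set_path("raw/")
docs = loader.load()
```

Pinning the ref to a commit ID is what makes downstream chains reproducible: re-running the pipeline reads exactly the same document versions.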

What You’ll Learn:
In this workshop, we will demonstrate:
– How to use LangChain to define “chains” – pipelines consisting of the above-mentioned steps, from loading data, indexing it as OpenAI embeddings, and storing it in an in-memory vector database, through generating and managing prompts, to interacting with foundation models – making a relatively complex process much easier to design, implement, and deploy (a sketch of such a chain follows the stack list below)
– How to use the lakeFS LangChain document loader to build a real-world application that reads input data from lakeFS to ensure reproducibility
– How to use lakeFS to manage and use different versions of documents with LangChain and LLM models

We will be leveraging a technology stack of:
– LangChain
– OpenAI
– Meta’s FAISS
– lakeFS
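For illustration, a minimal chain over this stack might look like the following sketch. It assumes the `docs` loaded from lakeFS in the earlier snippet, an `OPENAI_API_KEY` in the environment, `faiss-cpu` installed, and illustrative chunking parameters:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Split the lakeFS documents into chunks sized for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Index the chunks as OpenAI embeddings in an in-memory FAISS vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Wire the retriever and a chat model into a question-answering chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What does the document say about data versioning?"))
```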

Workshop: Exploring the Advantages of Using Open Source to Fine-tune and Deploy Your Own LLM

Presenters:
Rohit Saha, Applied Research Scientist, Georgian | Kyryl Truskovskyi, Machine Learning Engineer, Georgian | Maria Ponomarenko, MLOps Intern, Georgian | Benjamin Ye, CFA, ML Research Intern, Georgian

About the Speakers:
Rohit is an Applied Research Scientist on Georgian’s R&D team, where he works with portfolio companies to accelerate their AI roadmaps. This spans scoping research problems, building ML models, and moving them into production. He has over 5 years of experience developing ML models across vision, language, and speech modalities. His latest project entails figuring out how businesses can leverage LLMs to address their needs. He holds a Master’s degree in Applied Computing from the University of Toronto, and spent 2 years at MIT and Brown working at the intersection of Computer Vision and domain adaptation.

Ben is an Applied Research Scientist Intern at Georgian, where he helps companies implement the latest techniques from the machine learning literature. He obtained his Bachelor’s from Ivey and his Master’s from Penn. Prior to Georgian, he worked as a quantitative investment researcher.

Maria is an MLOps intern at Georgian, where she applies her engineering skills and her knowledge of Natural Language Processing from her time as a research assistant at the University of Toronto. She is currently working on finding straightforward ways to put Large Language Models (LLMs) into production and on handling different parts of the machine learning lifecycle.

Kyryl has several years of experience in the field of Machine Learning. For the bulk of his career, he has helped build machine learning startups, from inception to product. He has also developed expertise in choosing and implementing state-of-the-art deep learning architectures and large-scale solutions based on them.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
Generative AI is poised to disrupt multiple industries as enterprises rush to incorporate AI in their product offerings. The primary driver of this technology has been the ever-increasing sophistication of Large Language Models (LLMs) and their capabilities. In the first innings of Generative AI, a handful of third-party vendors have led the development of foundational LLMs and their adoption by enterprises. However, development of open-source LLMs has made massive strides lately, to the point where they compete with or even outperform their closed-source counterparts. This competition presents a unique opportunity for enterprises that are still navigating the trenches of Generative AI and how best to utilize LLMs to build enduring products.

What You’ll Learn:
In this workshop, we will present notebooks that showcase (i) how open-source LLMs like Llama 2, Mistral, and Zephyr can be leveraged (fine-tuned) and how they compare to closed-source LLMs, and (ii) the best ways to deploy them on Ray and Hugging Face’s TGI. (GitHub: LLM-Finetuning-Hub)
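To give a flavor of the approach (a generic PEFT/LoRA sketch, not the presenters’ notebooks; the model name and hyperparameters below are illustrative), parameter-efficient fine-tuning of an open-source model might start like this:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"  # illustrative open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices on top of frozen base weights,
# so only a fraction of a percent of the parameters are updated.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% trainable
```

From here, a standard training loop (or the Hugging Face Trainer) fine-tunes the adapters, and the resulting model can be served behind Ray or TGI.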

Workshop: How to Build Production-Ready Applications Using Open-Source LLMs

Presenter:
Suhas Pai, NLP Researcher and Co-Founder/CTO, Hudson Labs (formerly Bedrock AI)

About the Speaker:
Suhas Pai is an NLP researcher and co-founder/CTO of Hudson Labs (formerly Bedrock AI), a Toronto-based startup, where he works on text ranking, representation learning, and productionizing LLMs. He is currently writing a book on designing Large Language Model applications with O’Reilly Media. Suhas has been active in the ML community, serving as Chair of the TMLS (Toronto Machine Learning Summit) conference since 2021 and as NLP lead at Aggregate Intellect (AISC). He was also co-lead of the Privacy working group at BigScience, as part of the BLOOM project.

Talk Track: Workshop

Talk Technical Level: 4/7

Talk Abstract:
In this workshop, we will explore the landscape of open-source LLMs and provide a playbook on how to effectively utilize them to build production-ready applications. We will show how to choose an LLM that best fits your task, among the plethora of choices available. We will demonstrate several fine-tuning techniques that enable you to adapt the LLM to your domain of interest. We will also discuss techniques to deal with reasoning limitations, hallucinations, bias and fairness issues, which are critical to ensure your applications are helpful and harmless when deployed in the real world.

What You’ll Learn:
Attendees will learn how to adapt open-source LLMs to build useful applications. They will learn various ways of addressing the limitations of LLMs and develop applications that go beyond demos and prototypes.

Workshop: Adding Safety and Security to Chatbots with NeMo Guardrails

Presenters:
Christopher Parisien, Senior Manager, Applied Research, NVIDIA | Griffin Lacey, Manager, Solutions Architect, NVIDIA

About the Speakers:
Christopher Parisien is a Senior Manager of Applied Research at NVIDIA, leading the development of NeMo Guardrails, a toolkit for safety and security in Large Language Models. Chris holds a PhD in Computational Linguistics from the University of Toronto, where he used AI models to explain the strange ways that children learn language. During his time in industry, he helped build the first generation of mainstream chatbots, developed systems to understand medical records, and served as Chief Technology Officer at NexJ Health, a patient-centred health platform. His current focus at NVIDIA is to bring trustworthy language models to large enterprises.

Griffin Lacey is a Solutions Architect Manager for NVIDIA in Canada. In his current role, he assists customers in designing and deploying their AI compute infrastructure. Prior to NVIDIA, Griffin was a deep learning researcher for the University of Guelph and Google. He holds Bachelor’s and Master’s of Engineering degrees from the University of Guelph, where his research focused on the efficiency of deep learning at both the hardware and software levels.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
NeMo Guardrails is an open-source toolkit for easily adding programmable rails to large language model (LLM)-based conversational systems. This toolkit helps ensure smart applications powered by LLMs are accurate, appropriate, on topic and secure. The software includes all the code, examples and documentation businesses need to add safety to AI apps that generate text.

NeMo Guardrails enables developers to set up three kinds of boundaries:
– Topical guardrails prevent apps from veering off into undesired areas. For example, they keep customer service assistants from answering questions about politics.
– Safety guardrails ensure apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that references are made only to credible sources.
– Security guardrails help prevent prompt injection attacks and restrict apps to making connections only to external third-party applications known to be safe.

In this workshop, participants will learn how to build a chat-based LLM application using NeMo Guardrails. Through detailed examples, we will design, build, and test a realistic application.
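To give a sense of the developer experience, here is a minimal sketch assuming a local `./config` directory holding the YAML model settings and Colang rail definitions (the directory and its contents are assumptions, not shown here):

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a rails configuration (YAML plus Colang rail definitions) from disk.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Rails intercept the exchange: a topical rail can politely deflect
# off-topic questions before they ever reach the underlying LLM.
response = rails.generate(messages=[
    {"role": "user", "content": "What do you think about the election?"}
])
print(response["content"])
```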

What You’ll Learn:
Adding safety and security to LLM applications can be squarely within the developer’s control.

Workshop: Fine-Tuning Language Models with Declarative AI Orchestration

Presenter:
James Sutton, Lead Machine Learning Engineer, Union.ai

About the Speaker:
James is a machine learning engineer at Union.ai, a contributor to Flyte and PyTorch, and has been working in MLOps for the past 7 years.

He has a B.Eng in Materials Engineering and previously worked as a laboratory researcher in failure analysis and material synthesis.

James is named on patent US 20190155660.

His research interests include Model Training Efficiency, GPU / Accelerator design, ML model failure modes, AI Alignment and open source contributions.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
In this interactive workshop, you’ll dive into the increasingly popular world of Language Models (LMs), which have become more user-friendly due to the widespread availability of essential datasets and ML frameworks. We acknowledge that while many organizations possess the computational prowess to train Large Language Models (LLMs) or foundation models, the broader machine learning community often struggles with the infrastructure needed for fine-tuning these models for specialized tasks. Even with tools like Google Colab and consumer-grade processing units at our disposal, setting up the necessary environment is a formidable task. Together, we’ll explore how to employ Flyte to declaratively set up the infrastructure, enabling you to configure and run training jobs on the compute resources required to fine-tune LMs with your proprietary data.

What You’ll Learn:
This workshop has two main learning goals. First, attendees will learn the main concepts behind Flyte, a workflow orchestrator for data and machine learning. Many of these concepts are orchestrator-agnostic, such as containerization for reproducibility, declarative infrastructure, and type-safety. Secondly, they will also learn how to leverage the latest deep learning frameworks that optimize memory and compute resources required to fine-tune language models in the most economical way.
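As a hedged sketch of the declarative style (task names, resource sizes, and URIs below are illustrative, not the workshop’s actual code), a Flyte workflow can declare the compute a fine-tuning step needs and let the orchestrator provision it:

```python
from flytekit import Resources, task, workflow

# Declare the hardware the step requires; Flyte provisions a matching
# container at execution time instead of you managing servers.
@task(requests=Resources(gpu="1", mem="24Gi"))
def fine_tune(base_model: str, dataset_uri: str) -> str:
    # Fine-tuning logic (e.g. a PEFT/LoRA training loop) would live here.
    return "s3://my-bucket/checkpoints/adapter"  # hypothetical artifact path

@workflow
def finetune_wf(base_model: str = "mistralai/Mistral-7B-v0.1",
                dataset_uri: str = "s3://my-bucket/data.jsonl") -> str:
    return fine_tune(base_model=base_model, dataset_uri=dataset_uri)
```

Because tasks are containerized and typed, the same workflow runs identically on a laptop-registered sandbox or a GPU cluster, which is the reproducibility point the workshop emphasizes.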

Sign Up for TMLS 2023 News Updates

Who Attends

Attendees
Data Practitioners
Researchers/Academics
Business Leaders

2023 Event Demographics

Highly Qualified Practitioners*
Currently Working in Industry*
Attendees Looking for Solutions
Currently Hiring
Attendees Actively Job-Searching

2023 Technical Background

Expert: 19.2%
Advanced: 49.8%
Intermediate: 24.1%
Beginner: 6.9%

2023 Attendees & Thought Leadership

Attendees
Speakers
Company Sponsors

Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, and principles for achieving ROI.

Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.

What is an Ignite Talk?

Ignite is an innovative and fast-paced style used to deliver a concise presentation.

During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.

The result is a fun and engaging five-minute presentation.

You can see all our speakers and full agenda here

Get our official conference app
For feature details, visit Whova