DeveloperWeek Management 2024 + AI DevSummit 2024 (+ DW...
Tensorflow & PyTorch & Open Source Frameworks
Wednesday, May 29
 

10:00am PDT

KEYNOTE (AI): Intuit -- Gen AI Unleashed: The Pioneering Role of Product Managers in Crafting the Future
Sharmila More, Intuit, Platform Product Management Leader

In today's digital age, Generative AI has significantly transformed many areas of software product development. While there is plenty of literature on how Gen AI can assist with PM tasks such as generating ideas, writing user stories, or defining product-market fit, very few articles offer insight into how it changes the product development lifecycle and how Product Managers can lead their teams in embracing this technology.
To analyze the impact of Gen AI and its intersection with the PM role, Product Managers first need to understand the basics of these technologies.
Some of the critical areas of product development that have been transformed by the introduction of Generative AI are:
Determining whether an LLM is even the appropriate solution to your customer problem, or whether a conventional engineering solution is the better fit.
A thorough assessment of the advantages and disadvantages of open-source vs. closed-source LLMs.
Understanding potential upfront costs, ongoing expenses, and the cost-to-benefit ratio of implementing a Generative AI solution.
Addressing data sensitivity concerns with compliance and security reviews in mind.
The effect of the unpredictability and experimentation inherent in LLMs on time-to-market.
Whether it is acceptable for your solution to produce probabilistic output.
Emphasizing the importance of responsible AI and adherence to ethical and legal protocols.
Product Managers must also devise metrics to measure the benefits of Generative AI in product development, with a view to continuous improvement and innovation in the PM function.
In conclusion, this session focuses on the emerging need for PMs to stay up to date on developments in LLMs and Generative AI, and on how they can use these technologies to their advantage, ultimately contributing to increased customer value and organizational growth.

Speakers

Sharmila More

Platform Product Management Leader, Intuit
I embarked upon my professional journey as a software engineer and have since retained a deep-rooted passion for technology. Over time, I have worked in various roles, spanning from Engineering Manager to IC to Technical Program Manager, and I am presently working as a Platform Product...


Wednesday May 29, 2024 10:00am - 10:25am PDT
AI DevSummit Main Stage

11:00am PDT

PRO TALK (AI): Building a Data Platform for Foundation Models Based on Open Standards
Rakesh Jain, IBM, Senior Technical Staff Member & Researcher

In this session, we will describe how we built our Data Management Platform on the open table format Apache Iceberg, serving terabytes of data. The primary use case is data preprocessing for foundation model training at large scale. In addition, the lakehouse was extended to support a model checkpoint store with controlled sharing, providing a full Data and Model Factory experience. We will talk about the various approaches we tried, focusing on data acquisition, governance, and preprocessing leading up to tokenization, as well as on maintaining lineage from data to models and back.
The platform has also been extended to support other aspects of foundation models, including fine-tuning, evaluation, and Retrieval Augmented Generation (RAG). We will also talk about the different strategies we adopted to handle both small and big data, so that we can provide a seamless experience to the different user bases of our Data Platform.
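By way of illustration only (this is not the IBM implementation described in the talk), the sketch below shows the kind of Iceberg table such a preprocessing pipeline might register and append to using PyIceberg; the catalog settings, namespace, and schema are invented for the example.

```python
# Illustrative sketch: registering an Iceberg table for preprocessed training
# text with PyIceberg. The catalog endpoint, namespace, and fields are made up;
# a production lakehouse would add partitioning, governance, and lineage metadata.
import pyarrow as pa
from pyiceberg.catalog import load_catalog
from pyiceberg.schema import Schema
from pyiceberg.types import LongType, NestedField, StringType

# Assumes a REST catalog is reachable at this (hypothetical) endpoint.
catalog = load_catalog("lakehouse", type="rest", uri="http://localhost:8181")
catalog.create_namespace("pretraining")

schema = Schema(
    NestedField(field_id=1, name="doc_id", field_type=StringType(), required=False),
    NestedField(field_id=2, name="source", field_type=StringType(), required=False),
    NestedField(field_id=3, name="text", field_type=StringType(), required=False),
    NestedField(field_id=4, name="token_count", field_type=LongType(), required=False),
)
table = catalog.create_table("pretraining.cleaned_docs", schema=schema)

# Append one preprocessed batch. Each append creates an Iceberg snapshot,
# which is the hook for tying a model checkpoint back to its exact input data.
batch = pa.table({
    "doc_id": ["doc-0001"],
    "source": ["web-crawl"],
    "text": ["example cleaned document"],
    "token_count": [4],
})
table.append(batch)
```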

Speakers

Rakesh Jain

Senior Technical Staff Member & Researcher, IBM
Rakesh Jain is Chief Architect and Researcher with IBM Research in San Jose, CA. He is an expert in building large-scale distributed platforms, data analytics, cloud automation, storage management, and high availability. He is also involved in the development of data and storage management...


Wednesday May 29, 2024 11:00am - 11:25am PDT
AI DevSummit Main Stage

3:00pm PDT

OPEN TALK (AI): Malicious Models: Defending Against Supply Chain Attacks on Machine Learning
Sam Washko, Protect AI, Software Engineer

In security, trust no one, especially not unvetted machine learning models! Machine learning is increasingly being democratized through the sharing of foundation models on hubs like Hugging Face. However, due to the open nature of model hubs, compromised artifacts are very easy to share and distribute, so supply chain attacks on ML systems are becoming a serious attack vector.

Most ML model formats are inherently vulnerable to Model Serialization Attacks (MSAs): the injection of malicious code that executes automatically when the model file is deserialized. MSAs are the Trojan horses of ML, capable of turning a seemingly innocuous model into a backdoor to your whole system. An attacker could easily download a popular model, inject malicious code, and upload it under a similar name to trick consumers. This problem is not purely theoretical: 3,354 public models on Hugging Face today are capable of arbitrary code execution upon deserialization, and 41% of them are not flagged as unsafe by Hugging Face.
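To make the mechanism concrete, here is a benign, self-contained sketch (not taken from the talk) using Python's pickle, one common serialization format for ML artifacts: merely loading the file runs whatever callable the object names, which is exactly the behavior an MSA exploits.

```python
# Benign demonstration of a Model Serialization Attack primitive: pickle lets
# an object specify, via __reduce__, a callable to run at load time, so simply
# deserializing the file executes code. The payload here is a harmless print();
# a real attack could run anything with the loader's privileges.
import pickle


class NotReallyAModel:
    def __reduce__(self):
        # The returned (callable, args) pair is invoked by pickle.load().
        return (print, ("arbitrary code ran during deserialization!",))


# "Publish" the artifact the way a model file would be shared...
with open("model.pkl", "wb") as f:
    pickle.dump(NotReallyAModel(), f)

# ...and merely loading it triggers the payload; no attribute access needed.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # prints the message: code execution on load
```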

So what can we do to protect against it? Use ModelScan, the open source tool I’ve been developing for the past year along with a few other talented researchers and engineers. Model scanning is our window into the black boxes that are model files. By scanning a model before deserialization, we can examine the operators and architecture it uses to determine whether it contains suspicious code, without actually unpacking it and becoming vulnerable to the attack. ModelScan can detect signs of MSAs in various model formats and categorize the potential severity of the attack.
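As a hedged usage sketch (the project's README is the authority if the interface has changed), scanning an artifact like the pickle file above might look like this, assuming the `-p/--path` flag documented for the ModelScan CLI:

```python
# Assumed usage: `pip install modelscan` provides a `modelscan` CLI; the `-p`
# path flag is taken from the project's documented examples and may differ
# across versions. The report flags unsafe operators and their severity.
import subprocess

result = subprocess.run(
    ["modelscan", "-p", "model.pkl"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```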

Often we think of cybersecurity as a concern mainly for big companies or governments, but it was important to us to make this tool open source because MSAs are a threat to everyone who uses community model hubs - from academics to small businesses to individuals learning and building personal projects. It’s clear that AI/ML is key to the future of technology, and as it becomes more accessible to everyone, the risks do as well. But with tools like ModelScan, we can stop the MSA Trojan horses at the gates and make ML more secure for everyone.

In this talk, attendees will learn how MSAs work, why they may be at risk, and what ModelScan looks for in suspicious models, as well as lessons learned writing an open source security tool.

Speakers

Sam Washko

Software Engineer, Protect AI
Sam Washko is a software engineer passionate about the intersection of security and software development. She works for Protect AI, developing tools for making machine learning systems more secure. She holds a BS in Computer Science from Duke University, and prior to joining Protect...


Wednesday May 29, 2024 3:00pm - 3:25pm PDT
AI DevSummit Expo Stage
 
Thursday, May 30
 

10:00am PDT

OPEN TALK (AI): DIY: How to Build a Feature Store at Home
Abhay Bothra, Fennel AI, Co-founder & CTO
Nikhil Garg, Fennel AI, Co-founder

Feature stores have become an integral part of any production ML platform, but building them remains incredibly hard. In this talk, we will look at the key components of a feature store, the open-source technologies that can be plumbed together to build one, key architectural gotchas, and common design patterns.

At the end of the talk, attendees should be able to go back with a more informed plan for building an in-house feature store.
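As a companion sketch (not the speakers' design), the toy code below shows the two pieces almost every feature store reduces to: an offline job that computes feature values and an online key-value lookup used at serving time. The class and feature names are invented, and an in-memory dict stands in for an online store such as Redis, Cassandra, or DynamoDB.

```python
# Toy sketch of the core feature-store loop: compute features offline,
# materialize them into an online store keyed by entity, then read them at
# request time. Real systems add versioning, TTLs, point-in-time-correct
# joins, and streaming updates; every name here is purely illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FeatureValue:
    value: float
    computed_at: datetime  # kept for freshness checks / TTL enforcement


class InMemoryOnlineStore:
    """Stands in for Redis/Cassandra/DynamoDB in a real deployment."""

    def __init__(self) -> None:
        self._data: dict[tuple[str, str], FeatureValue] = {}

    def write(self, entity_id: str, feature: str, value: float) -> None:
        self._data[(entity_id, feature)] = FeatureValue(
            value, datetime.now(timezone.utc)
        )

    def read(self, entity_id: str, features: list[str]) -> dict[str, float | None]:
        out: dict[str, float | None] = {}
        for name in features:
            fv = self._data.get((entity_id, name))
            out[name] = fv.value if fv else None
        return out


# "Offline" batch job: aggregate raw events into a per-user feature,
# then materialize the result into the online store.
events = [("u1", 12.5), ("u1", 7.0), ("u2", 3.2)]
totals: dict[str, float] = {}
for user, amount in events:
    totals[user] = totals.get(user, 0.0) + amount

store = InMemoryOnlineStore()
for user, total in totals.items():
    store.write(user, "spend_7d", total)

# Online path: the model service fetches features at request time.
print(store.read("u1", ["spend_7d"]))  # {'spend_7d': 19.5}
```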

Speakers

Abhay Bothra

Co-founder & CTO, Fennel AI
Abhay is the co-founder and CTO of Fennel. Prior to starting Fennel, Abhay was a tech lead at Meta and ThoughtSpot, where he helped solve some of the industry’s hardest distributed systems problems.

Nikhil Garg

Co-founder, Fennel AI
Nikhil Garg is the co-founder and CEO of Fennel, a startup building real-time data infrastructure for machine learning. Previously, he was at Meta, where he ran several teams behind open-source PyTorch and, before that, led an org of roughly 100 ML engineers working on personalization...


Thursday May 30, 2024 10:00am - 10:25am PDT
AI DevSummit Expo Stage
 
Wednesday, June 5
 

10:00am PDT

[Virtual] KEYNOTE (AI): Intuit -- Gen AI Unleashed: The Pioneering Role of Product Managers in Crafting the Future
Sharmila More, Intuit, Platform Product Management Leader

In today's digital age, Generative AI has significantly transformed many areas of software product development. While there is plenty of literature on how Gen AI can assist with PM tasks such as generating ideas, writing user stories, or defining product-market fit, very few articles offer insight into how it changes the product development lifecycle and how Product Managers can lead their teams in embracing this technology.
To analyze the impact of Gen AI and its intersection with the PM role, Product Managers first need to understand the basics of these technologies.
Some of the critical areas of product development that have been transformed by the introduction of Generative AI are:
Determining whether an LLM is even the appropriate solution to your customer problem, or whether a conventional engineering solution is the better fit.
A thorough assessment of the advantages and disadvantages of open-source vs. closed-source LLMs.
Understanding potential upfront costs, ongoing expenses, and the cost-to-benefit ratio of implementing a Generative AI solution.
Addressing data sensitivity concerns with compliance and security reviews in mind.
The effect of the unpredictability and experimentation inherent in LLMs on time-to-market.
Whether it is acceptable for your solution to produce probabilistic output.
Emphasizing the importance of responsible AI and adherence to ethical and legal protocols.
Product Managers must also devise metrics to measure the benefits of Generative AI in product development, with a view to continuous improvement and innovation in the PM function.
In conclusion, this session focuses on the emerging need for PMs to stay up to date on developments in LLMs and Generative AI, and on how they can use these technologies to their advantage, ultimately contributing to increased customer value and organizational growth.

Speakers

Sharmila More

Platform Product Management Leader, Intuit
I embarked upon my professional journey as a software engineer and have since retained a deep-rooted passion for technology. Over time, I have worked in various roles, spanning from Engineering Manager to IC to Technical Program Manager, and I am presently working as a Platform Product...


Wednesday June 5, 2024 10:00am - 10:25am PDT
VIRTUAL AI DevSummit Main Stage

11:00am PDT

[Virtual] PRO TALK (AI): Building a Data Platform for Foundation Models Based on Open Standards
Rakesh Jain, IBM, Senior Technical Staff Member & Researcher

In this session, we will describe how we built our Data Management Platform on the open table format Apache Iceberg, serving terabytes of data. The primary use case is data preprocessing for foundation model training at large scale. In addition, the lakehouse was extended to support a model checkpoint store with controlled sharing, providing a full Data and Model Factory experience. We will talk about the various approaches we tried, focusing on data acquisition, governance, and preprocessing leading up to tokenization, as well as on maintaining lineage from data to models and back.
The platform has also been extended to support other aspects of foundation models, including fine-tuning, evaluation, and Retrieval Augmented Generation (RAG). We will also talk about the different strategies we adopted to handle both small and big data, so that we can provide a seamless experience to the different user bases of our Data Platform.

Speakers

Rakesh Jain

Senior Technical Staff Member & Researcher, IBM
Rakesh Jain is Chief Architect and Researcher with IBM Research in San Jose, CA. He is an expert in building large-scale distributed platforms, data analytics, cloud automation, storage management, and high availability. He is also involved in the development of data and storage management...


Wednesday June 5, 2024 11:00am - 11:25am PDT
VIRTUAL AI DevSummit Main Stage

3:00pm PDT

[Virtual] OPEN TALK (AI): Malicious Models: Defending Against Supply Chain Attacks on Machine Learning
Sam Washko, Protect AI, Software Engineer

In security, trust no one, especially not unvetted machine learning models! Machine learning is increasingly being democratized through the sharing of foundation models on hubs like Hugging Face. However, due to the open nature of model hubs, compromised artifacts are very easy to share and distribute, so supply chain attacks on ML systems are becoming a serious attack vector.

Most ML model formats are inherently vulnerable to Model Serialization Attacks (MSAs): the injection of malicious code that executes automatically when the model file is deserialized. MSAs are the Trojan horses of ML, capable of turning a seemingly innocuous model into a backdoor to your whole system. An attacker could easily download a popular model, inject malicious code, and upload it under a similar name to trick consumers. This problem is not purely theoretical: 3,354 public models on Hugging Face today are capable of arbitrary code execution upon deserialization, and 41% of them are not flagged as unsafe by Hugging Face.

So what can we do to protect against it? Use ModelScan, the open source tool I’ve been developing for the past year along with a few other talented researchers and engineers. Model scanning is our window into the black boxes that are model files. By scanning a model before deserialization, we can examine the operators and architecture it uses to determine whether it contains suspicious code, without actually unpacking it and becoming vulnerable to the attack. ModelScan can detect signs of MSAs in various model formats and categorize the potential severity of the attack.

Often we think of cybersecurity as a concern mainly for big companies or governments, but it was important to us to make this tool open source because MSAs are a threat to everyone who uses community model hubs - from academics to small businesses to individuals learning and building personal projects. It’s clear that AI/ML is key to the future of technology, and as it becomes more accessible to everyone, the risks do as well. But with tools like ModelScan, we can stop the MSA Trojan horses at the gates and make ML more secure for everyone.

In this talk, attendees will learn how MSAs work, why they may be at risk, and what ModelScan looks for in suspicious models, as well as lessons learned writing an open source security tool.

Speakers

Sam Washko

Software Engineer, Protect AI
Sam Washko is a software engineer passionate about the intersection of security and software development. She works for Protect AI, developing tools for making machine learning systems more secure. She holds a BS in Computer Science from Duke University, and prior to joining Protect...


Wednesday June 5, 2024 3:00pm - 3:25pm PDT
VIRTUAL AI DevSummit Expo Stage
 
Thursday, June 6
 

10:00am PDT

[Virtual] OPEN TALK (AI): DIY: How to Build a Feature Store at Home
Abhay Bothra, Fennel AI, Co-founder & CTO
Nikhil Garg, Fennel AI, Co-founder

Feature stores have become an integral part of any production ML platform, but building them remains incredibly hard. In this talk, we will look at the key components of a feature store, the open-source technologies that can be plumbed together to build one, key architectural gotchas, and common design patterns.

At the end of the talk, attendees should be able to go back with a more informed plan for building an in-house feature store.

Speakers

Nikhil Garg

Co-founder, Fennel AI
Nikhil Garg is the co-founder and CEO of Fennel, a startup building real-time data infrastructure for machine learning. Previously, he was at Meta, where he ran several teams behind open-source PyTorch and, before that, led an org of roughly 100 ML engineers working on personalization...

Abhay Bothra

Co-founder & CTO, Fennel AI
Abhay is the co-founder and CTO of Fennel. Prior to starting Fennel, Abhay was a tech lead at Meta and ThoughtSpot, where he helped solve some of the industry’s hardest distributed systems problems.


Thursday June 6, 2024 10:00am - 10:25am PDT
VIRTUAL AI DevSummit Expo Stage
 
