WEBINAR
ON DEMAND

Safeguarding Sensitive Data in the Age of Generative AI

In this on-demand webinar, we discuss:
  • What Generative AI could mean for your business
  • Secure and scalable ways to protect sensitive data in AI and LLMs
  • How a privacy-by-engineering approach supports the ethical use of AI
Introduction
It’s no secret that generative AI and LLMs are the next big thing to revolutionize the tech world. But while these technologies offer transformative opportunities for businesses, they also raise significant concerns about the privacy and security of sensitive data: as LLMs learn, they remember everything, and you can’t make them “unlearn” it. This raises a pressing question: What are the implications of sharing sensitive data with generative AI models?
Join us for an insightful tech talk with Skyflow Head of Developer Relations, Sean Falconer, as we explore:
  • Privacy challenges raised by new AI technologies
  • Secure ways to leverage LLMs and generative AI without sacrificing customer privacy
  • Practical strategies for implementing privacy-by-engineering in AI systems (see the sketch after this list)
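
As a taste of the privacy-by-engineering approach covered in the talk, the sketch below shows one common pattern: de-identify sensitive values locally before a prompt is sent to an LLM, and re-identify them only in the response you keep. This is a minimal illustration in Python, assuming hypothetical redact/restore helpers and simple regex patterns; it is not Skyflow's API or the specific method presented in the webinar.

import re

# Minimal sketch: swap obvious PII for placeholder tokens before the prompt
# ever reaches the model, and keep a local mapping so the response can be
# re-identified afterwards. Patterns and helpers are illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with stable placeholders; return the mapping."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match, kind: str) -> str:
        token = f"<{kind}_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    prompt = EMAIL.sub(lambda m: _swap(m, "EMAIL"), prompt)
    prompt = SSN.sub(lambda m: _swap(m, "SSN"), prompt)
    return prompt, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

if __name__ == "__main__":
    prompt, mapping = redact("Email jane.doe@example.com about claim 123-45-6789.")
    print(prompt)  # Email <EMAIL_0> about claim <SSN_1>.
    # ...send `prompt` to the LLM, then call restore() on the response...

Because the model only ever sees placeholder tokens, the sensitive values never leave your environment and cannot be memorized or logged by the model provider.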
Speaker

Sean Falconer

Head of Developer Relations, Skyflow

Sean Falconer is Head of Developer Relations at Skyflow. Sean spends his time building, writing, speaking, and connecting with communities about engineering and data privacy. He has a wide range of interests and expertise, including full stack development, developer experience, and API design. Prior to Skyflow, Sean led developer relations for Google’s Business Communications product suite and was CTO and founder of a startup focused on mobile hiring tools.