On 2/13/2019, in Carlsbad, CA, Bryan Hall, Ryan Rusnak, and Igor Mameshin will address the CTO Colloquium with presentations on:
“Gathering, Analyzing and Learning from Data, the No Buzzwords Edition”
“Bridging the Gap Between Data Science and Software Engineering”
“Machine Learning on Kubernetes”
- 5:30 to 6:00 pm – Arrival, Food & Drinks
- 6:00 to 6:30 pm – Bryan Hall
- 6:30 to 7:00 pm – Ryan Rusnak
- 7:00 to 8:00 pm – Igor Mameshin
- 8:00 pm – Adjourn to the closest pub
On Gathering, Analyzing and Learning from Data, the No Buzzwords Edition: Bryan will share the pitfalls and lessons learned from building a data, analytics, and machine learning stack from the ground up, twice in the last five years. The talk focuses on the pieces that were vital to success, and on how to understand and explain them to win buy-in from your entire team.
On Bridging the Gap Between Data Science and Software Engineering: So your company is growing and now you have tons of data. Data is powerful, and there must be key business insights in that data, so you hired a data scientist. The data scientist found lots of good data and trained a neural network to spit out beautiful insights that your customers would be happy to pay for. Great! However, the data scientist doesn’t know how to code in your stack, and your software engineers don’t know how to serve up a TensorFlow model. What do you do? Most companies never make it across this chasm. Come learn how to cross it and start using your machine learning models in production.
On Machine Learning on Kubernetes: Understanding all the steps in the ML workflow will help CTOs make better hiring and technology decisions. In this session, you will learn how to build, train, and deploy machine learning models efficiently and at scale on Kubernetes. We will start with an overview of the machine learning workflow: prepare data, code your model, train and evaluate the model, deploy the model, get predictions, and monitor the ongoing predictions. We will then look under the hood to demystify what is happening at each stage. By the end of the demo, you will know how to deploy a trained model on Kubernetes and send prediction requests to it from a web application.
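To make the workflow stages concrete before the session, here is a minimal, stdlib-only sketch of those same stages (prepare data, code your model, train, evaluate, predict) on a toy dataset. The nearest-centroid "model" and all names are illustrative, not part of the talk's actual stack:

```python
from statistics import mean

def prepare_data():
    """Prepare data: return (train, test) splits of a toy 1-D dataset."""
    data = [(1.0, "a"), (1.2, "a"), (0.9, "a"),
            (5.0, "b"), (5.3, "b"), (4.8, "b")]
    return data[::2], data[1::2]   # even items train, odd items test

def train(train_set):
    """Code + train the model: a toy nearest-centroid classifier."""
    labels = {lbl for _, lbl in train_set}
    return {lbl: mean(x for x, l in train_set if l == lbl) for lbl in labels}

def predict(model, x):
    """Get a prediction: pick the label with the closest centroid."""
    return min(model, key=lambda lbl: abs(x - model[lbl]))

def evaluate(model, test_set):
    """Evaluate the model: fraction of test points classified correctly."""
    return sum(predict(model, x) == lbl for x, lbl in test_set) / len(test_set)

# Deploying and monitoring would wrap predict() behind a service endpoint
# and log each request; here we just run the stages in order.
train_set, test_set = prepare_data()
model = train(train_set)
accuracy = evaluate(model, test_set)
print(accuracy)  # 1.0 on this cleanly separable toy data
```

In the real workflow each stage becomes its own containerized step, which is exactly what a pipeline system like Kubeflow orchestrates.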
Using a Jupyter notebook, we will show how to:
- Interactively define a Kubeflow Pipeline using the Python SDK
- Train a model – this example pipeline trains a TensorFlow model on GitHub issue data, learning to predict issue titles from issue descriptions
- Deploy the model – launch a web app that interacts with the TF-Serving instance to get model predictions
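For a taste of the last step, here is a small stdlib-only sketch of how a web app could call a TF-Serving REST endpoint for predictions. TF-Serving's predict API takes a JSON body with an "instances" list at `/v1/models/<name>:predict`; the host, model name, and input field below are hypothetical stand-ins for the demo's actual deployment:

```python
import json
from urllib import request

def build_predict_request(host, model_name, instances):
    """Build an HTTP request for TF-Serving's REST predict endpoint."""
    url = f"http://{host}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Hypothetical example: ask an issue-title model for a prediction.
req = build_predict_request(
    "tf-serving.example.com:8501",          # assumed serving host
    "issue-titler",                         # assumed model name
    [{"issue_body": "App crashes when saving a file"}])
print(req.full_url)
# Sending it would look like:
#   predictions = json.load(request.urlopen(req))["predictions"]
```

The web app in the demo does essentially this on every user request, leaving model hosting entirely to the TF-Serving pods on the cluster.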
The tutorial is based on Kubeflow, an open-source machine learning stack that runs on Kubernetes. This demo is deployed on AWS, but it can run on any Kubernetes cluster, whether on GCP, Azure, or on-prem.
A hands-on session follows this talk. To participate, you will need a laptop with a web browser (Chrome or Safari), a text editor of your choice, and a Git client.