In this example, we'll show you how to use matcha to set up a default cloud environment on Azure and hook up a movie recommendation pipeline to run on that environment.
If you're wondering what on earth matcha is (besides the drink) then check out our main repository here and our documentation - don't forget to come back to try out this example!
There's a bit of setup required before unleashing matcha; the steps below will guide you through it.
Before you start, this example workflow requires the Azure CLI to be installed. See here for how to do that.
You will also need to ensure you have installed Docker and that the Docker daemon is running on your machine.
Finally, you also need to ensure that you have Terraform installed on your machine.
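Before going further, it's worth confirming those tools are actually on your PATH. Here's a minimal check using only the Python standard library (the tool names are the ones listed above):

```python
import shutil

# Check that each prerequisite CLI is discoverable on the PATH.
for tool in ["az", "docker", "terraform"]:
    status = "found" if shutil.which(tool) else "MISSING"
    print(f"{tool}: {status}")
```

If any of these report `MISSING`, install the corresponding tool before continuing.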
Clone this repo:

```bash
git clone git@github.com:fuzzylabs/matcha-examples.git
```

Go to the recommendation example directory:

```bash
cd matcha-examples/recommendation
```

Log into Azure via your terminal:

```bash
az login
```

Create a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```

The Python version being used must be 3.8+. We recommend making use of pyenv to manage your versions.
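If you're unsure whether your interpreter meets the 3.8+ requirement, a quick check from inside the activated virtual environment (a minimal sketch, standard library only):

```python
import sys

# The example requires Python 3.8 or newer.
if sys.version_info >= (3, 8):
    print(f"Python {sys.version_info.major}.{sys.version_info.minor} is fine")
else:
    print("Please switch to Python 3.8+ (e.g. via pyenv) before continuing")
```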
Install matcha:

```bash
pip install matcha-ml
```

Provision your default environment on Azure. You need to be in the `recommendation` directory before running this!

```bash
matcha provision
```

Once that's finished, crack on!
Set up the environment. This will install the requirements for the example (see `requirements.txt`) and set up ZenML:

```bash
./setup.sh
```

You may need to give the `setup.sh` file the correct permissions to run. If so, do the following:

```bash
chmod +x setup.sh
```
Once `setup.sh` has completed, do the following to run the training pipeline:

```bash
python run.py --train
```

Once training has finished, we can deploy our trained model by doing the following:

```bash
python run.py --deploy
```

We can also run both training and deployment with one command:

```bash
python run.py --train --deploy
```

[Optional] Run the tests:

```bash
python -m pytest tests
```

✅ You've trained a model
✅ You've deployed it
❓ And now you want to get predictions.
We've created a handy inference script which you can use to send a `user_id` and a `movie_id` to the deployed model and get a predicted rating back:
```bash
python inference.py --user 100 --movie 100
```

And the output should be something similar to:

```
User 100 is predicted to give the movie (100) a rating of: 4.2 out of 5
```

Alternatively, you can `curl` the endpoint with the following:

```bash
curl -XPOST -H 'Content-Type: application/json' -d '{"data": {"ndarray": [{"iid": "302", "uid": "196"}]}}' <endpoint_url>
```

The output will be the raw predictions sent back by the model!
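If you'd rather call the endpoint from Python, the same request can be built like this (a sketch: `endpoint_url` is a placeholder for the URL from your deployment, and actually sending the request assumes the `requests` package is installed):

```python
import json

# The same payload the curl command sends: `uid` is the user id and
# `iid` is the item (movie) id, both as strings.
payload = {"data": {"ndarray": [{"iid": "302", "uid": "196"}]}}
print(json.dumps(payload))

# To actually send it (requires `requests` and a live endpoint):
# import requests
# response = requests.post(endpoint_url, json=payload)  # endpoint_url is a placeholder
# print(response.json())
```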
Even though we've chosen a sensible default configuration for you, leaving the resources provisioned in this example running on Azure will run up a bill.
To deprovision the resources, run the following command:

```bash
matcha destroy
```