My Deep Learning Journey with Fast.AI: Chapter 2 – Deployment

Chapter 2 was even more straightforward than Chapter 1. Deployment is pretty simple: you save the model you trained (the computationally heavy part), then load and use it wherever you need it (both computationally light).
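
In fastai terms that's just learn.export() on the training machine and load_learner() wherever the model runs. A minimal sketch of that split (the filenames here are placeholders, not the ones from my notebook):

```python
from fastai.vision.all import load_learner, PILImage

# Training side (the computationally heavy part, done once):
#   learn.export('export.pkl')   # serializes the trained Learner to a pickle

# Deployment side (computationally light, done per prediction):
learn = load_learner('export.pkl')
img = PILImage.create('some_photo.jpg')         # placeholder image path
pred, pred_idx, probs = learn.predict(img)
print(f"{pred}: {float(probs[pred_idx]):.0%}")  # e.g. "smile: 97%"
```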

It did expose me to Hugging Face Spaces, which is a pretty cool prototyping site. I went back and re-honed my model (trimmed all the images down to just the faces, added a couple of epochs to the training, etc.), and via Gradio implemented a webcam input and text output that predicts whether the person is smiling or upset.
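
The Gradio side is only a few lines. Here's a minimal sketch of how I'd wire it up, assuming the exported model from above; the filename is a placeholder, and note that older Gradio versions spell the webcam argument source="webcam" rather than sources=["webcam"]:

```python
import gradio as gr
from fastai.vision.all import load_learner, PILImage

learn = load_learner('export.pkl')  # the exported model from above

def classify(img):
    # Gradio hands us a PIL image captured from the webcam component
    pred, pred_idx, probs = learn.predict(PILImage.create(img))
    return f"{pred} ({float(probs[pred_idx]):.0%} confident)"

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(sources=["webcam"], type="pil"),
    outputs="text",
)
demo.launch()
```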

Here’s a sample of my augmented data set, cropped to focus primarily on facial features (and to haunt my nightmares):
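
For context, the fastai recipe that produces this kind of augmented, cropped-in grid looks roughly like the following; the path, labels, and sizes are stand-ins rather than the exact values from my notebook:

```python
from fastai.vision.all import *

faces = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,                               # folder name is the label (smile / frown)
    item_tfms=RandomResizedCrop(224, min_scale=0.5),  # random crops zoom in on parts of the face
    batch_tfms=aug_transforms(),                       # flips, rotations, lighting changes
)
dls = faces.dataloaders(Path('faces'))
dls.train.show_batch(max_n=8, unique=True)            # grid of augmented versions of one image
```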

And an updated Jupyter Notebook: fastai_course/smile-or-frown-predictor.ipynb at main · JonathonCwik/fastai_course (github.com)

Try it out for yourself (note that you need decent lighting, a better camera helps, and being closer to the webcam seems to improve accuracy too): https://huggingface.co/spaces/JonathonCwik/SmileOrFrown

I do wonder if it’s sometimes matching on background details. I’d be curious whether you could use a model that detects faces first, then run this classifier only on the face region of the image, or whether that wouldn’t make any difference at all.
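
If I ever test that, the simplest version is probably a pre-trained face detector (OpenCV's Haar cascade, say) cropping each frame before it reaches the classifier. A rough sketch of the idea, not something I've actually run:

```python
import cv2
from fastai.vision.all import load_learner, PILImage

learn = load_learner('export.pkl')  # same exported classifier as above
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
)

def classify_face_only(frame_bgr):
    """Crop to the largest detected face before classifying,
    so the classifier never sees the background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face found"
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    crop = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
    pred, pred_idx, probs = learn.predict(PILImage.create(crop))
    return f"{pred} ({float(probs[pred_idx]):.0%})"
```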