This post serves as a collection of follow-up resources for my AppBuilders 2020 talk, Practical Machine Learning for iOS. If you have any questions, feel free to reach out to me (Twitter is preferred).
If you want a full transcript, the script for this talk is available here as a PDF.
Here are some useful links related to the talk, roughly in the order they might interest you:
- CreateML — Apple’s easy-to-use, task-based tool for creating machine learning models
- CoreML — Apple’s framework for using machine learning models in the CoreML .mlmodel format
Building a Sound Classifier:
- ESC-50 Sound Dataset — a labeled dataset of 2,000 environmental audio recordings across 50 categories
- Apple’s MLSoundClassifier — the CreateML task we train to build a sound classifier (a minimal training sketch follows this list)
- The CoreML Survival Guide — a book that deep dives into the internals of CoreML
- Apple’s SNAudioFileAnalyzer — part of the SoundAnalysis framework, used to run a sound classifier over an audio file (also sketched below)
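
To give you a rough idea of what the training side looks like, here’s a minimal sketch of training a sound classifier with CreateML (in a macOS Playground, say). The paths are placeholders, and it assumes you’ve sorted the ESC-50 audio files into one folder per label, which is the layout the labeledDirectories data source expects:

```swift
import CreateML
import Foundation

// Point CreateML at a folder containing one subfolder per sound class,
// e.g. Sounds/dog/, Sounds/rain/, Sounds/sea_waves/, ...
let trainingData = MLSoundClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/ESC-50-sorted")
)

// Train the classifier; CreateML handles the audio feature extraction for us.
let classifier = try MLSoundClassifier(trainingData: trainingData)

// Save the trained model so we can drop it into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/SoundClassifier.mlmodel"))
```

And on the inference side, a sketch of running that model over an audio file with SNAudioFileAnalyzer. SoundClassifier and audioFileURL are assumptions here: SoundClassifier is the class Xcode generates when you add the .mlmodel file to your project, and audioFileURL points at whatever recording you want to classify:

```swift
import CoreML
import SoundAnalysis

// An observer that receives classification results as the file is analyzed.
class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let best = result.classifications.first else { return }
        print("Heard \(best.identifier) (confidence: \(best.confidence))")
    }
}

// Wrap the generated CoreML model in a sound classification request...
let model = try SoundClassifier(configuration: MLModelConfiguration()).model
let request = try SNClassifySoundRequest(mlModel: model)

// ...and run it over the audio file.
let analyzer = try SNAudioFileAnalyzer(url: audioFileURL)
let observer = ResultsObserver()
try analyzer.add(request, withObserver: observer)
analyzer.analyze()
```
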
Building a Caption Generator:
- Apple’s model page — a great resource for getting pre-trained CoreML models
- Apple’s Vision framework — which makes it easy to run image-based CoreML models (a minimal sketch follows this list)
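
As a rough sketch of how those two fit together, here’s the general shape of classifying an image with Vision and a pre-trained CoreML model. MobileNetV2 stands in for whichever model you download from Apple’s model page (it’s the class Xcode generates from the .mlmodel file), and imageURL is assumed to point at an image on disk:

```swift
import CoreML
import Vision

// Wrap the generated CoreML model so Vision can drive it.
let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
let visionModel = try VNCoreMLModel(for: coreMLModel)

// Ask Vision to run the model and hand back classification observations.
let request = VNCoreMLRequest(model: visionModel) { request, _ in
    guard let observations = request.results as? [VNClassificationObservation],
          let best = observations.first else { return }
    print("This might be: \(best.identifier) (confidence: \(best.confidence))")
}

// Run the request against the image file.
let handler = VNImageRequestHandler(url: imageURL)
try handler.perform([request])
```

The nice part of going through Vision rather than calling the model directly is that Vision takes care of scaling and converting the input image into whatever format the model expects.
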
Some additional links that might be of interest:
- Apple’s Machine Learning Journal
- Data is Plural newsletter — a great newsletter that showcases all sorts of interesting datasets, many of them useful for machine learning tasks
- Turi Create — Apple’s Python framework for creating machine learning models (very similar in spirit to CreateML)
- Apple’s CoreML Tools — Python tools for converting models from other formats to CoreML’s format
- Fritz AI — a cool startup doing amazing things with mobile AI
- and their blog, which is well worth reading
And finally, we have a GitHub repository with the code that was shown in the talk, as well as the code repository for our book, Practical AI with Swift, which has a whole lot of great activities for you to work through across sound, vision, text, and more (even if you don’t have the book).