Your folders, files, and images are all where they need to be, and you’re ready to continue with the next step.
We’ll be using Xcode to open our file, so make sure you have it installed and up-to-date.
Loading Dataset Images

Finally, it's time to begin telling Turi what it needs to do via the Python file we just created.
If you double-click the file, it should open in Xcode by default, assuming you have it installed. If not, you can use another text editor or a Python IDE instead.
Import frameworks

At the top of the file, you'll need to import the Turi Create framework. If you want, you can create a name for reference by adding as <your name> to the import. For example, if you wanted to refer to it as tc in your code, adding as tc would allow you to call it tc instead of writing out turicreate. In this tutorial, I'll be using the full version, calling it turicreate to reduce ambiguity.
You’ll also need to deal with folder names and other OS-related tasks in order to classify your images.
This will require another Python library called os.
To import it, simply add one more import line at the top of the file.
Loading images

Here, we're storing all of the images in our dataset in a variable called data. Since our dog_breeds.py file is in the same directory as the images folder, we can simply put "images/" as the path.
Defining labels

Now that Turi Create has all of your images, you need to link the folder names to a label name. These label names are what will be returned by your Core ML model when it's used in an iOS or macOS app. Mapping each folder name to a "label" tells Turi Create that all of the images in the "cocker_spaniel" folder are indeed Cocker Spaniels, for example.
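The mapping itself is really just path manipulation: an image's label is the name of the folder it sits in. A small helper (the name folder_label is my own) shows the idea using the standard os module:

```python
import os

def folder_label(path):
    # 'images/cocker_spaniel/photo_001.jpg' -> 'cocker_spaniel'
    return os.path.basename(os.path.dirname(path))

print(folder_label('images/cocker_spaniel/photo_001.jpg'))
```

In the script itself, you would apply this over the SFrame's path column, along the lines of data['label'] = data['path'].apply(folder_label), so every row gets its breed name.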
Save as SFrame

In case you're not familiar with an SFrame: in simple terms, it's a dictionary of all your data (in this case, images) and all of the labels (in this case, dog breeds). Saving your SFrame lets you store your labeled images for use in the next step.
This is a fairly standard data type in the machine learning industry.
Training and Testing

After Turi Create has all of your labeled images in place, it's time to enter the home stretch and finally train your model.
We also need to split the data so that 80% is used for training, and 20% is saved for testing the model once it’s done training — we won’t have to test it manually.
Loading SFrame

Now, we need to load the SFrame we just created in the previous step. This is what we'll use to split into testing and training data later.
This assigns the data variable, now of type SFrame, to the SFrame that we saved in the previous step.
Now, we’ll need to split up the data into testing and training data.
As aforementioned, we’ll be doing an 80:20 split of testing to training data.
Splitting dataIt’s time to split the data.
After your SFrame code, add the following:This code randomly splits the data 80–20 and assigns it to two variables, testing and training, respectively.
Now, Turi will automatically test your model without you needing to manually supply test images and create an app.
If you need to make adjustments, you won't need to fully implement it first; instead, you can make them right in your Python file.
Training, testing, and exporting

Your hard work has finally paid off! Next, you'll tell Turi Create to train your model, while specifying the architecture you'd like to use. You're simply telling Turi to use your training data (specified earlier) to predict the labels (based on the folder structure from before), using resnet-50, which is one of the most accurate machine learning model architectures.
To use your testing data and ensure that your model is accurate, add an evaluation step. This runs the testing data you set aside through the model and stores the results in a variable. It prints out the accuracy, but you can print other metrics as well, given enough time with Turi Create's APIs.
Last but not least, you can save your model right into your file system with a one-liner, after you give it a useful name. Of course, you can also save your model in other formats, but for this example, I've saved it as a Core ML model.
Running and Output

For all you iOS developers out there — no, this isn't an Xcode project that keeps compiling automatically and complaining about errors.
In order for the code you just wrote to execute, we’ll need to do it via the terminal.
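Concretely, assuming the script is named dog_breeds.py and python points at the interpreter where Turi Create is installed, that looks like:

```shell
# Run from the folder that contains dog_breeds.py and images/
python dog_breeds.py
```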
Running the Python file

Running the Python file is easy! Make sure you're in the correct directory, and all you need to do is run the script with Python from a terminal window.

Output

After a couple of minutes of training, your images folder and dog_breeds.py file will be accompanied by an SFrame, a model folder, and a .mlmodel file, which is your Core ML model! You'll also see output in your Terminal window, which will look something like this:

Figure 4: Output after running Python

This gives you information about training and training accuracy, the number of images processed, and other useful details, which you can use to analyze your model without ever even using it.
Conclusion

I hope you enjoyed reading this tutorial as much as I enjoyed making it! Here are some ideas for where to go from here.
If you want to learn how to use your Core ML model in an iOS app, check out another one of my tutorials, Get Started With Image Recognition in Core ML. That tutorial will show you how to take your resulting dog_classifier.mlmodel model and implement it in a real-world iOS app.
It will also teach you to parse a live video feed and take individual frames for image classification.
If you have any questions or comments regarding this tutorial, don't hesitate to ask them down in the comments section below! I'm always eager to hear feedback, questions, or how you used your knowledge from this tutorial.