{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "view-in-github" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Training and inference on your own data using Google Drive" ] }, { "cell_type": "markdown", "metadata": { "id": "K5xp-A8Oc80Q" }, "source": [ "In this notebook we'll install SLEAP, import training data into Colab using [Google Drive](https://www.google.com/drive), and run training and inference." ] }, { "cell_type": "markdown", "metadata": { "id": "yX9noEb8m8re" }, "source": [ "## Install SLEAP\n", "Note: Before installing SLEAP check [SLEAP releases](https://github.com/talmolab/sleap/releases) page for the latest version." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "DUfnkxMtLcK3", "outputId": "988097ae-e996-4b81-eb06-ec85aa0b2d9d" }, "outputs": [], "source": [ "!pip uninstall -y opencv-python opencv-contrib-python\n", "!pip install sleap" ] }, { "cell_type": "markdown", "metadata": { "id": "iq7jrgUksLtR" }, "source": [ "## Import training data into Colab with Google Drive\n", "We'll first prepare and export the training data from SLEAP, then upload it to Google Drive, and then mount Google Drive into this Colab notebook." ] }, { "cell_type": "markdown", "metadata": { "id": "GwpEwrxYdLMR" }, "source": [ "### Create and export the training job package\n", "A self-contained **training job package** contains a .slp file with labeled data and images which will be used for training, as well as .json training configuration file(s).\n", "\n", "A training job package can be exported in the SLEAP GUI fron the \"Run Training..\" dialog under the \"Predict\" menu." ] }, { "cell_type": "markdown", "metadata": { "id": "ApaDWxW4dLMS" }, "source": [ "### Upload training job package to Google Drive\n", "To be consistent with the examples in this notebook, name the SLEAP project `colab` and create a directory called `sleap` in the root of your Google Drive. Then upload the exported training job package `colab.slp.training_job.zip` into `sleap` directory.\n", "\n", "If you place your training pckage somewhere else, or name it differently, adjust the paths/filenames/parameters below accordingly." ] }, { "cell_type": "markdown", "metadata": { "id": "3HCGgy4kdLMS", "tags": [] }, "source": [ "### Mount your Google Drive\n", "Mounting your Google Drive will allow you to accessed the uploaded training job package in this notebook. When prompted to log into your Google account, give Colab access and the copy the authorization code into a field below (+ hit 'return')." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "GBWjF4jpMG2N", "outputId": "612b3674-60b4-4cd3-d61b-90ab56312293" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Mounted at /content/drive/\n" ] } ], "source": [ "from google.colab import drive\n", "drive.mount('/content/drive/')" ] }, { "cell_type": "markdown", "metadata": { "id": "KQhv_gsdJzaq" }, "source": [ "Let's set your current working directory to the directory with your training job package and unpack it there. Later on the output from training (i.e., the models) and from interence (i.e., predictions) will all be saved in this directory as well." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "9umui-gI2rBz", "outputId": "a0c208c2-edaf-41cb-ad03-34c4c0e25770" }, "outputs": [], "source": [ "import os\n", "os.chdir(\"/content/drive/My Drive/sleap\")\n", "!unzip colab.slp.training_job.zip\n", "!ls" ] }, { "cell_type": "markdown", "metadata": { "id": "xZ-sr67av5uu" }, "source": [ "## Train a model\n", "\n", "Let's train a model with the training profile (.json file) and the project data (.slp file) you have exported from SLEAP.\n", "\n", "\n", "### Note on training profiles\n", "Depending on the pipeline you chose in the training dialog, the config filename(s) will be:\n", "\n", "- for a **bottom-up** pipeline approach: `multi_instance.json` (this is the pipeline we assume here),\n", "\n", "- for a **top-down** pipeline, you'll have a different profile for each of the models: `centroid.json` and `centered_instance.json`,\n", "\n", "- for a **single animal** pipeline: `single_instance.json`.\n", "\n", "\n", "### Note on training process\n", "When you start training, you'll first see the training parameters and then the training and validation loss for each training epoch.\n", "\n", "As soon as you're satisfied with the validation loss you see for an epoch during training, you're welcome to stop training by clicking the stop button. The version of the model with the lowest validation loss is saved during training, and that's what will be used for inference.\n", "\n", "If you don't stop training, it will run for 200 epochs or until validation loss fails to improve for some number of epochs (controlled by the early_stopping fields in the training profile)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QKf6qzMqNBUi" }, "outputs": [], "source": [ "!sleap-train multi_instance.json colab.pkg.slp" ] }, { "cell_type": "markdown", "metadata": { "id": "whOf8PaFxYbt" }, "source": [ "If instead of bottom-up you've chosen the top-down pipeline (with two training configs), you would need to invoke two separate training jobs in sequence:\n", "\n", "- `!sleap-train centroid.json colab.pkg.slp`\n", "- `!sleap-train centered_instance.json colab.pkg.slp`\n" ] }, { "cell_type": "markdown", "metadata": { "id": "nIsKUX661xFK" }, "source": [ "## Run inference to predict instances\n", "\n", "Once training finishes, you'll see a new directory (or two new directories for top-down training pipeline) containing all the model files SLEAP needs to use for inference.\n", "\n", "Here we'll use the created model files to run inference in two modes:\n", "\n", "- predicting instances in suggested frames from the exported .slp file\n", "\n", "- predicting and tracking instances in uploaded video\n", "\n", "You can also download the trained models for running inference from the SLEAP GUI on your computer (or anywhere else).\n", "\n", "### Predicting instances in suggested frames\n", "This mode of predicting instances is useful for accelerating the manual labeling work; it allows you to get early predictions on suggested frames and merge them back into the project for faster labeling.\n", "\n", "Here we assume you've trained a bottom-up model and that the model files were written in directory named `colab_demo.bottomup`; later in this notebook we'll also show how to run inference with the pair of top-down models instead." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!sleap-track \\\n", " -m colab_demo.bottomup \\\n", " --only-suggested-frames \\\n", " -o colab.predicted_suggestions.slp \\\n", " colab.pkg.slp" ] }, { "cell_type": "markdown", "metadata": { "id": "nIsKUX661xFK" }, "source": [ "Now, you can download the generated `colab.predicted_suggestions.slp` file and merge it into your labeling project (**File -> Merge into Project...** from the GUI) to get new predictions for your suggested frames." ] }, { "cell_type": "markdown", "metadata": { "id": "nIsKUX661xFK" }, "source": [ "### Predicting and tracking instances in uploaded video\n", "Let's first upload the video we want to run inference on and name it `colab_demo.mp4`. (If your video is not named `colab_demo.mp4`, adjust the names below accordingly.)\n", "\n", "For this demo we'll just get predictions for the first 200 frames (or you can adjust the --frames parameter below or remove it to run on the whole video)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "CLtjtq9E1Znr", "outputId": "7c6da613-08b5-4c79-8eeb-7b785774417c" }, "outputs": [], "source": [ "!sleap-track colab_demo.mp4 \\\n", " --frames 0-200 \\\n", " --tracking.tracker simple \\\n", " -m colab_demo.bottomup" ] }, { "cell_type": "markdown", "metadata": { "id": "nzObCUToEqwA" }, "source": [ "When inference is finished, it will save the predictions in a file which can be opened in the GUI as a SLEAP project file. The file will be in the same directory as the video and the filename will be `{video filename}.predictions.slp`.\n", "\n", "Let's inspect the predictions file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nPfmNMSt-vS7" }, "outputs": [], "source": [ "!sleap-inspect colab_demo.mp4.predictions.slp" ] }, { "cell_type": "markdown", "metadata": { "id": "LoJ2kNBK-w6k" }, "source": [ "You can copy this file from your Google Drive to a local drive and open it in the SLEAP GUI app (or open it directly if you have your Google Drive mounted on your local machine). If the video is in the same directory as the predictions file, SLEAP will automatically find it; otherwise, you'll be prompted to locate the video (since the path to the video on your local machine will be different than the path to the video on Colab)." ] }, { "cell_type": "markdown", "metadata": { "id": "qW-NoJOFvYHM" }, "source": [ "### Inference with top-down models\n", "\n", "If you trained the pair of models needed for top-down inference, you can call `sleap-track` with `-m path/to/model` for each model, like so:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QPKnMc1qvim7" }, "outputs": [], "source": [ "!sleap-track colab_demo.mp4 \\\n", " --frames 0-200 \\\n", " --tracking.tracker simple \\\n", " -m colab_demo.centered_instance \\\n", " -m colab_demo.centroid" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "Training_and_inference_using_Google_Drive.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.12" } }, "nbformat": 4, "nbformat_minor": 4 }