Feature Reference

Command Line Interfaces

GUI

sleap-label runs the GUI application. This is an entry-point for sleap.gui.app.
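For example, to open the GUI with an existing project (the filename below is just a placeholder):

    sleap-label labels.v001.slp

Running sleap-label with no arguments opens the application without loading a project.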

Training

sleap-train is the command-line interface for training. Use this for training on a remote machine, such as an HPC cluster or Colab notebook. This is an entry-point for sleap.nn.training.
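A typical invocation passes a training profile (the JSON configuration describing the model) followed by a labels file; both filenames below are placeholders:

    sleap-train baseline.centroid.json labels.v001.slp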

Inference and Tracking

sleap-track is the command-line interface for running inference using models which have already been trained. Use this for running inference on a remote machine such as an HPC cluster or Colab notebook. This is an entry-point for sleap.nn.inference.
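For example, to run inference on a video with trained models and save the results (paths are placeholders; -m may be given more than once, e.g., for a centroid + centered-instance pipeline):

    sleap-track video.mp4 -m models/centroid_model -m models/centered_instance_model -o predictions.slp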

You can also use sleap-track to run the cross-frame identity tracker (or re-run it with different parameters) without needing to re-run inference. Instead of specifying models, you specify a predictions dataset file (with the --labels argument) and the tracking parameters.
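For example, to re-run tracking on an existing predictions file (filenames are placeholders and the tracker setting is illustrative; see sleap-track --help for the full list of tracking parameters):

    sleap-track --labels predictions.slp --tracking.tracker simple -o predictions.retracked.slp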

If you specify how many identities there should be in each frame (i.e., the number of animals) with the --tracking.clean_instance_count argument, then we will use a heuristic method to connect “breaks” in the track identities, where one identity is lost and another is spawned. This can be used as part of the inference pipeline (if models are specified), as part of the tracking-only pipeline (if a predictions file is specified and no models are), or by itself on predictions with pre-tracked identities (if you specify --tracking.tracker none).
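For example, to apply this break-connecting heuristic on its own to predictions that already have track identities (filenames are placeholders; here we assume two animals per frame):

    sleap-track --labels predictions.slp --tracking.tracker none --tracking.clean_instance_count 2 -o predictions.cleaned.slp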

Dataset Files

sleap-convert allows you to convert between various dataset file formats. Among other things, it can export data from a SLEAP dataset into an HDF5 file that can easily be used for analysis (e.g., read into MATLAB). See sleap.io.convert for more information.
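One possible invocation for exporting a predictions file to an analysis HDF5 file is sketched below; the format name and paths are assumptions, so check sleap-convert --help for the exact options available in your version:

    sleap-convert --format analysis -o analysis.h5 predictions.slp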

sleap-inspect prints information about a SLEAP dataset file, such as a list of its videos and a count of labeled frames. If you’re inspecting a predictions dataset (i.e., the output from running sleap-track or inference in the GUI), it will also include details about how those predictions were created (i.e., the models, the version of SLEAP, and any inference parameters).
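For example, to print this summary for a predictions file (the filename is a placeholder):

    sleap-inspect predictions.slp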

Debugging

There’s also a script that outputs diagnostic information, which may help us if you need to contact us about problems installing or running SLEAP. If you were able to install the SLEAP Python package, you can run this script with sleap-diagnostic. Otherwise, you can download diagnostic.py and run python diagnostic.py.

Note

For more details about any command, run with the --help argument (e.g., sleap-track --help).

Application GUI

Mouse

Right-click (or control + click) on node: Toggle visibility

Right-click (or control + click) elsewhere on image: Add instance (with pop-up menu)

Alt + drag: Zoom into region

Alt + double-click: Zoom out

Alt + drag on node (or node label): Move entire instance

Alt + click and hold on node (or node label) + mouse wheel: Rotate entire instance

(On a Mac, substitute Option for Alt.)

Double-click on predicted instance: Create new editable instance from prediction

Double-click on editable instance: Any missing nodes (nodes added to the skeleton after this instance was created) will be added and marked as “non-visible”

Click on instance: Select that instance

Click elsewhere on image: Clear selection

Selection Keys

Number (e.g., 2) key: Select the instance corresponding to that number

Escape key: Deselect all instances

Seekbar

Shift + drag: Select a range of frames

Shift + click: Clear frame selection

Alt + drag: Zoom into a range of frames

Alt + click: Zoom out so that all frames are visible in seekbar

Labeling Suggestions

There are various methods to generate a list of “suggested” frames for labeling or proofreading.

The sample method is a quick way to get some number of frames from every video in your project. You can tell it how many samples (frames) to take from each video, and whether they should be evenly spaced throughout the video (the “stride” sampling method) or randomly distributed.

The image feature method uses various algorithms to give you visually distinctive frames, since you will be able to train more robust models if the frames you’ve labeled are more representative of the visual variations in your videos. Generating suggestions based on image features can be slow.

The prediction score method will identify frames that have more than some number of predicted instances and where the instance prediction score is below some threshold. This method can be useful when proofreading frame-by-frame prediction results. The instance score depends on your specific skeleton, so you’ll need to look at the instance scores you’re getting to decide an appropriate threshold.

The velocity method will identify frames where a predicted instance appears to move more than is typical in the video. This is based on the tracking results, so it can be useful for finding frames where the tracker incorrectly matched up two identities (since this will make the identity “jump”).