sleap.io.dataset#
A SLEAP dataset collects labeled video frames, together with required metadata.
This contains labeled frame data (user annotations and/or predictions), together with all the other data that is saved for a SLEAP project (videos, skeletons, etc.).
The most convenient way to load SLEAP labels files is to use the high level loader:
> import sleap
> labels = sleap.load_file(filename)
The Labels class provides additional functionality for loading SLEAP labels files. To load a labels dataset file from disk:
> labels = Labels.load_file(filename)
If you’re opening a dataset file created on a different computer (or if you’ve moved the video files), it’s likely that the paths to the original videos will not work. We automatically check for the videos in the same directory as the labels file, but if the videos aren’t there, you can tell load_file where to search for the videos. There are various ways to do this:
> Labels.load_file(filename, single_path_to_search)
> Labels.load_file(filename, [path_a, path_b])
> Labels.load_file(filename, callback_function)
> Labels.load_file(filename, video_search=...)
The callback_function can be created via make_video_callback() and has the option to make a callback with a GUI window so the user can locate the videos.
To save a labels dataset file, run:
> Labels.save_file(labels, filename)
If the filename has a supported extension (e.g., “.slp”, “.h5”, “.json”) then the file will be saved in the corresponding format. You can also specify the default extension to use if none is provided in the filename.
- class sleap.io.dataset.Labels(labeled_frames: List[LabeledFrame] = _Nothing.NOTHING, videos: List[Video] = _Nothing.NOTHING, skeletons: List[Skeleton] = _Nothing.NOTHING, nodes: List[Node] = _Nothing.NOTHING, tracks: List[Track] = _Nothing.NOTHING, suggestions: List[SuggestionFrame] = _Nothing.NOTHING, negative_anchors: Dict[Video, list] = _Nothing.NOTHING, provenance: Dict[str, Union[str, int, float, bool]] = _Nothing.NOTHING)[source]#
The Labels class collects the data for a SLEAP project. This class is the front end for all interactions with loading, writing, and modifying these labels. The actual storage backend for the data is mostly abstracted away from the main interface.
- labeled_frames#
A list of LabeledFrame objects.
- Type:
List[sleap.instance.LabeledFrame]
- videos#
A list of Video objects that these labels may or may not reference. The video for every LabeledFrame will be stored in the videos attribute, but some videos in this list may not have any associated labeled frames.
- Type:
List[sleap.io.video.Video]
- skeletons#
A list of Skeleton objects (again, these may or may not be referenced by an Instance in a labeled frame).
- Type:
List[sleap.skeleton.Skeleton]
- tracks#
A list of Track objects that instances can belong to.
- Type:
List[sleap.instance.Track]
- suggestions#
List that stores “suggested” frames for videos in project. These can be suggested frames for user to label or suggested frames for user to review.
- Type:
List[sleap.gui.suggestions.SuggestionFrame]
- negative_anchors#
Dictionary that stores center-points around which to crop as negative samples when training. The dictionary key is a Video; the value is a list of (frame index, x, y) tuples.
- Type:
Dict[sleap.io.video.Video, list]
- provenance#
Dictionary that denotes the origin of the Labels.
- Type:
Dict[str, Union[str, int, float, bool]]
- add_instance(frame: LabeledFrame, instance: Instance)[source]#
Add instance to frame, updating track occupancy.
- add_suggestion(video: Video, frame_idx: int)[source]#
Add a suggested frame to the labels.
- Parameters:
video – sleap.Video instance of the suggestion.
frame_idx – Index of the frame of the suggestion.
- add_video(video: Video)[source]#
Add a video to the labels if it is not already in it.
Video instances are added automatically when adding labeled frames, but this function allows for adding videos to the labels before any labeled frames are added.
- Parameters:
video – Video instance.
- append(value: LabeledFrame)[source]#
Add labeled frame to list of labeled frames.
- classmethod complex_merge_between(base_labels: Labels, new_labels: Labels, unify: bool = True) tuple [source]#
Merge frames and other data from one dataset into another.
Anything that can be merged cleanly is merged into base_labels.
Frames conflict exactly when both labels objects have a matching frame (same video and frame index), each containing instances not present in the other.
Frames can be merged cleanly if:
- the frame is in only one of the labels, or
- the frame is in both labels, but all instances perfectly match (which means they are redundant), or
- the frame is in both labels, with possibly some redundant instances, but only one version of the frame has additional instances not in the other.
- Parameters:
base_labels – The Labels object into which data is merged.
new_labels – The Labels object from which data is merged.
unify – Whether to replace objects in the new labels with corresponding objects from the base labels.
- Returns:
- Dictionary of merged instances; keys are Video, values are dictionaries in which keys are frame index (int) and values are lists of Instance objects.
- List of conflicting Instance objects from base.
- List of conflicting Instance objects from new.
- Return type:
tuple of three items
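The clean-merge rule above can be sketched in plain Python. This is a simplified model, not SLEAP's implementation: frames are keyed by (video, frame_idx), and a frame conflicts only when both sides have extra instances the other lacks.

```python
def can_merge_cleanly(base_instances, new_instances):
    """Simplified model of the clean-merge rule: a frame merges cleanly
    unless BOTH sides have instances the other lacks."""
    extra_base = base_instances - new_instances
    extra_new = new_instances - base_instances
    return not (extra_base and extra_new)


def merge_frames(base, new):
    """Merge two {(video, frame_idx): set_of_instances} dicts.

    Returns (merged, conflicts), where conflicts maps frame keys to the
    (extra_base, extra_new) instance sets that could not be reconciled.
    """
    merged, conflicts = {}, {}
    for key in base.keys() | new.keys():
        b = base.get(key, set())
        n = new.get(key, set())
        if can_merge_cleanly(b, n):
            merged[key] = b | n
        else:
            conflicts[key] = (b - n, n - b)
    return merged, conflicts
```

The real method returns the conflicting instances so they can be resolved later with finish_complex_merge.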
- copy() Labels [source]#
Return a full deep copy of the labels.
Notes
All objects will be re-created by serializing and then deserializing the labels. This may be slow and will create new instances of all data structures.
- export(filename: str)[source]#
Export labels to analysis HDF5 format.
This expects the labels to contain data for a single video (e.g., predictions).
- Parameters:
filename – Path to output HDF5 file.
Notes
This will write the contents of the labels out as a HDF5 file without complete metadata.
- The resulting file will have datasets:
/node_names: List of skeleton node names.
/track_names: List of track names.
/tracks: All coordinates of the instances in the labels.
/track_occupancy: Mask denoting which instances are present in each frame.
- export_csv(filename: str)[source]#
Export labels to CSV format.
- Parameters:
filename – Output path for the CSV format file.
Notes
This will write the contents of the labels out as a CSV file.
- export_nwb(filename: str, overwrite: bool = False, session_description: str = 'Processed SLEAP pose data', identifier: Optional[str] = None, session_start_time: Optional[datetime] = None)[source]#
Export all PredictedInstance objects in a Labels object to an NWB file.
Uses Labels.numpy to create a pynwb.NWBFile with a separate pynwb.ProcessingModule for each Video in the Labels object.
To access the pynwb.ProcessingModule for a specific Video, use the key 'SLEAP_VIDEO_{video_idx:03}_{video_fn.stem}' where isinstance(video_fn, pathlib.PurePath). Ex:
video: 'path_to_video/my_video.mp4'
video index: 3/5
key: '003_my_video'
Within each pynwb.ProcessingModule is an ndx_pose.PoseEstimation for each unique track in the Video.
The ndx_pose.PoseEstimation for each unique Track is stored under the key 'track{track_idx:03}' if tracks are set, or 'untrack{track_idx:03}' if untracked, where track_idx ranges from 0 to (number of tracks) - 1. Ex:
track_idx: 1
key: 'track001'
Each ndx_pose.PoseEstimation has an ndx_pose.PoseEstimationSeries for every Node in the Skeleton.
The ndx_pose.PoseEstimationSeries for a specific Node is stored under the key given by Node.name. Ex:
node name: 'head'
key: 'head'
- Parameters:
filename – Output path for the NWB format file.
labels – The Labels object to convert to an NWB format file.
overwrite – Boolean that overwrites an existing NWB file if True. If False, data will be appended to the existing NWB file.
session_description – Description for the entire project. Stored under the NWBFile "session_description" key. If appending data to a preexisting file, the session_description will not be used.
identifier – Unique identifier for the project. If no identifier is specified, a GUID will be generated. If appending data to a preexisting file, the identifier will not be used.
session_start_time – The datetime associated with the project. If no session_start_time is given, the current datetime will be used. If appending data to a preexisting file, the session_start_time will not be used.
- Returns:
A pynwb.NWBFile with a separate pynwb.ProcessingModule for each Video in the Labels object.
- extend_from(new_frames: Union[Labels, List[LabeledFrame]], unify: bool = False)[source]#
Merge data from another Labels object or LabeledFrame list.
- Parameters:
new_frames – The object from which to copy data.
unify – Whether to replace objects in the new frames with corresponding objects from the current Labels data.
- Returns:
bool, True if we added frames, False otherwise
- extract(inds, copy: bool = False) Labels [source]#
Extract labeled frames from indices and return a new Labels object.
- Parameters:
inds – Any valid indexing keys, e.g., a range, slice, list of label indices, numpy array, Video, etc. See __getitem__ for the full list.
copy – If True, create a new copy of all of the extracted labeled frames and associated labels. If False (the default), a shallow copy with references to the original labeled frames and other objects will be returned.
- Returns:
A new Labels object with the specified labeled frames. This will preserve the other data structures even if they are not found in the extracted labels.
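The copy semantics described here (deep copy versus shallow copy with shared references) follow standard Python behavior; a minimal illustration with plain lists of frame dicts (illustrative data, not SLEAP objects):

```python
import copy

frames = [{"frame_idx": 0, "instances": ["a"]},
          {"frame_idx": 1, "instances": ["b"]}]

# Shallow extract: the new list holds references to the same frame dicts,
# so mutating an extracted frame also mutates the original.
shallow = [frames[i] for i in (0,)]
shallow[0]["instances"].append("x")
assert frames[0]["instances"] == ["a", "x"]

# Deep extract: every frame is re-created, so the original is untouched.
deep = [copy.deepcopy(frames[i]) for i in (1,)]
deep[0]["instances"].append("y")
assert frames[1]["instances"] == ["b"]
```

This is why copy=False is cheap but unsafe for in-place modification, while copy=True is slower but isolates the extracted labels.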
- find(video: Video, frame_idx: Optional[Union[int, Iterable[int]]] = None, return_new: bool = False) List[LabeledFrame] [source]#
Search for labeled frames given video and/or frame index.
- Parameters:
video – A Video that is associated with the project.
frame_idx – The frame index (or indices) which we want to find in the video. If a range is specified, we'll return all frames with indices in that range. If not specified, we'll return all labeled frames for the video.
return_new – Whether to return a singleton of a new and empty LabeledFrame if none is found in the project.
- Returns:
List of LabeledFrame objects that match the criteria. Empty if no matches are found, unless return_new is True, in which case it contains a new LabeledFrame with video and frame_index set.
- find_first(video: Video, frame_idx: Optional[int] = None, use_cache: bool = False) Optional[LabeledFrame] [source]#
Find the first occurrence of a matching labeled frame.
Matches on frames for the given video and/or frame index.
- Parameters:
video – A Video instance that is associated with the labeled frames.
frame_idx – An integer specifying the frame index within the video.
use_cache – Boolean that determines whether Labels.find_first() should instead call Labels.find(), which uses the labels data cache. If True, use the labels data cache; else loop through all labels to search.
- Returns:
First LabeledFrame that matches the criteria, or None if none was found.
- find_last(video: Video, frame_idx: Optional[int] = None) Optional[LabeledFrame] [source]#
Find the last occurrence of a matching labeled frame.
Matches on frames for the given video and/or frame index.
- Parameters:
video – A Video instance that is associated with the labeled frames.
frame_idx – An integer specifying the frame index within the video.
- Returns:
Last LabeledFrame that matches the criteria, or None if none was found.
- find_track_occupancy(video: Video, track: Union[Track, int], frame_range=None) List[Instance] [source]#
Get instances for a given video, track, and range of frames.
- Parameters:
video – The Video.
track – The Track, or an int ("pseudo-track" index to instance list).
frame_range (optional) – If specified, only return instances on frames in the range. If None, return all instances for the given track.
- Returns:
List of Instance objects.
- static finish_complex_merge(base_labels: Labels, resolved_frames: List[LabeledFrame])[source]#
Finish conflicted merge from complex_merge_between.
- Parameters:
base_labels – The Labels that we're merging into.
resolved_frames – The list of frames to add into base_labels.
- frames(video: Video, from_frame_idx: int = -1, reverse=False)[source]#
Return an iterator over all labeled frames in a video.
- Parameters:
video – A Video that is associated with the project.
from_frame_idx – The frame index from which we want to start. Defaults to the first frame of the video.
reverse – Whether to iterate over frames in reverse order.
- Yields:
LabeledFrame
- get(key: Union[int, slice, integer, ndarray, list, range, Video, Tuple[Video, Union[integer, ndarray, int, list, range]]], *secondary_key: Union[int, slice, integer, ndarray, list, range], use_cache: bool = False, raise_errors: bool = False) Union[LabeledFrame, List[LabeledFrame]] [source]#
Return labeled frames matching the key, or None if not found.
This is a safe version of labels[...] that will not raise an exception if the item is not found.
- Parameters:
key – Indexing argument to match against. If key is a Video or a tuple of (Video, frame_index), frames that match the criteria will be searched for. If a scalar, list, range or array of integers is provided, the labels with those linear indices will be returned.
secondary_key – Numerical indexing argument(s) which supplement key. Only used when key is of type Video.
use_cache – Boolean that determines whether Labels.find_first() should instead call Labels.find(), which uses the labels data cache. If True, use the labels data cache; else loop through all labels to search.
raise_errors – Boolean that determines whether KeyErrors should be raised. If True, raises KeyErrors; else catches KeyErrors and returns None instead of raising.
- Raises:
KeyError – If the specified key could not be found.
- Returns:
A list with the matching LabeledFrame objects, or a single LabeledFrame if a scalar key was provided, or None if not found.
- get_next_suggestion(video, frame_idx, seek_direction=1)[source]#
Return a (video, frame_idx) tuple seeking from given frame.
- get_suggestions() List[SuggestionFrame] [source]#
Return all suggestions as a list of SuggestionFrame items.
- get_unlabeled_suggestion_inds() List[int] [source]#
Find labeled frames for unlabeled suggestions and return their indices.
This is useful for generating a list of example indices for inference on unlabeled suggestions.
- Returns:
List of indices of the labeled frames that correspond to the suggestions that do not have user instances.
If a labeled frame corresponding to a suggestion does not exist, an empty one will be created.
See also:
Labels.remove_empty_frames
- get_video_suggestions(video: Video, user_labeled: bool = True) List[int] [source]#
Return a list of suggested frame indices.
- Parameters:
video – Video to get suggestions for.
user_labeled – If True (the default), return frame indices for suggestions that already have user labels. If False, only suggestions with no user-labeled instances will be returned.
- Returns:
Indices of the suggested frames for the specified video.
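The user_labeled filter can be modeled with plain data. The names and data shapes here are illustrative, not SLEAP internals: given suggested frame indices and the set of indices that already have user labels, return either all suggestions or only the unlabeled ones.

```python
def video_suggestions(suggested, user_labeled_idxs, user_labeled=True):
    """Return suggested frame indices for one video.

    If user_labeled is True, keep every suggestion; otherwise drop
    suggestions whose frame index already has user-labeled instances.
    """
    if user_labeled:
        return list(suggested)
    return [idx for idx in suggested if idx not in user_labeled_idxs]
```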
- has_frame(lf: Optional[LabeledFrame] = None, video: Optional[Video] = None, frame_idx: Optional[int] = None, use_cache: bool = True) bool [source]#
Check if the labels contain a specified frame.
- Parameters:
lf – LabeledFrame to search for. If not provided, then video and frame_idx must not be None.
video – Video of the frame. Not necessary if lf is given.
frame_idx – Integer frame index of the frame. Not necessary if lf is given.
use_cache – If True (the default), use the label lookup cache for faster searching. If False, check every frame without the cache.
- Returns:
A bool indicating whether the specified LabeledFrame is contained in the labels. This will return True if there is a matching frame with the same video and frame index, even if they contain different instances.
Notes
The Video instance must be the same as the ones in these labels, so if comparing to Videos loaded from another file, be sure to load those labels with matching, i.e.: sleap.Labels.load_file(..., match_to=labels).
- property has_missing_videos: bool#
Return True if any of the video files in the labels are missing.
- insert(index, value: LabeledFrame)[source]#
Insert labeled frame at given index.
- instance_count(video: Video, frame_idx: int) int [source]#
Return number of instances matching video/frame index.
- instances(video: Optional[Video] = None, skeleton: Optional[Skeleton] = None)[source]#
Iterate over instances in the labels, optionally with filters.
- Parameters:
video – Only iterate through instances in this video
skeleton – Only iterate through instances with this skeleton
- Yields:
Instance – The next labeled instance
- property is_multi_instance: bool#
Returns True if there are multiple user instances in any frame.
- property labels#
Alias for labeled_frames.
- classmethod load_file(filename: str, video_search: Optional[Union[Callable, List[str]]] = None, *args, **kwargs)[source]#
Load file, detecting format from filename.
- classmethod make_video_callback(search_paths: Optional[List] = None, use_gui: bool = False, context: Optional[Dict[str, bool]] = None) Callable [source]#
Create a callback for finding missing videos.
The callback can be used while loading a saved project and allows the user to find videos which have been moved (or have paths from a different system).
The callback function returns True to signal “abort”.
- Parameters:
search_paths – If specified, this is a list of paths where we’ll automatically try to find the missing videos.
context – A dictionary containing a “changed_on_load” key with a boolean value. Used externally to determine if any filenames were updated.
- Returns:
The callback function.
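A non-GUI callback of the kind described above can be sketched as a closure over the search paths. The data shape here ({"backend": {"filename": ...}} dicts) is an illustrative assumption, but the contract matches the text: the callback patches missing filenames in place and returns True to signal abort.

```python
import os

def make_video_callback(search_paths):
    """Build a callback that patches missing video paths in place.

    The callback receives a list of video metadata dicts and returns
    True (abort) if any video could not be located.
    """
    def callback(video_dicts):
        missing = False
        for video in video_dicts:
            path = video["backend"]["filename"]
            if os.path.exists(path):
                continue
            basename = os.path.basename(path)
            for root in search_paths:
                candidate = os.path.join(root, basename)
                if os.path.exists(candidate):
                    video["backend"]["filename"] = candidate
                    break
            else:
                missing = True
        return missing  # True aborts loading

    return callback
```

The GUI variant would do the same but prompt the user instead of silently failing when no candidate is found.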
- static merge_container_dicts(dict_a: Dict, dict_b: Dict) Dict [source]#
Merge data from dict_b into dict_a.
- merge_matching_frames(video: Optional[Video] = None)[source]#
Merge LabeledFrame objects that are for the same video frame.
- Parameters:
video – Combine frames for this video; if None, do all videos.
- merge_nodes(base_node: str, merge_node: str)[source]#
Merge two nodes and update data accordingly.
- Parameters:
base_node – Name of skeleton node that will remain after merging.
merge_node – Name of skeleton node that will be merged into the base node.
Notes
This method can be used to merge two nodes that might have been named differently but that should be associated with the same node.
This is useful, for example, when merging a different set of labels where a node was named differently.
If the base_node is visible and has data, it will not be updated. Otherwise, it will be updated with the data from the merge_node on the same instance.
- numpy(video: Optional[Union[Video, int]] = None, all_frames: bool = True, untracked: bool = False, return_confidence: bool = False) ndarray [source]#
Construct a numpy array from instance points.
- Parameters:
video – Video or video index to convert to numpy arrays. If None (the default), uses the first video.
all_frames – If True (the default), allocate an array with the same number of frames as the video. If False, only return data between the first and last frame with data.
untracked – If False (the default), include only instances that have a track assignment. If True, include all instances in each frame in arbitrary order.
return_confidence – If False (the default), only return the points of the nodes. If True, return the points and scores of the nodes.
- Returns:
An array of tracks of shape (n_frames, n_tracks, n_nodes, 2) if return_confidence is False. Otherwise, the returned shape is (n_frames, n_tracks, n_nodes, 3).
Missing data will be replaced with np.nan.
If this is a single-instance project, a track does not need to be assigned.
Only predicted instances (NOT user instances) will be returned.
Notes
This method assumes that instances have tracks assigned and is intended to function primarily for single-video prediction results.
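A sketch of consuming an array of the shape described above, using fabricated data rather than a real prediction (this assumes numpy is available, as it is wherever SLEAP runs):

```python
import numpy as np

# Fake output: 3 frames, 2 tracks, 2 nodes, (x, y). Missing data is nan.
tracks = np.full((3, 2, 2, 2), np.nan)
tracks[0, 0] = [[1.0, 2.0], [3.0, 4.0]]  # track 0 present in frame 0
tracks[1, 0] = [[5.0, 6.0], [7.0, 8.0]]  # track 0 present in frame 1

# Per-frame centroid of track 0, ignoring missing nodes.
centroids = np.nanmean(tracks[:, 0], axis=1)

# Which frames does track 0 appear in? Any non-nan coordinate counts.
present = ~np.isnan(tracks[:, 0]).all(axis=(1, 2))
```

Note that np.nanmean emits a RuntimeWarning (and yields nan) for frames where the track is entirely missing, so downstream code should expect nan rows.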
- property predicted_instances: List[PredictedInstance]#
Return list of all predicted instances.
- remove(value: LabeledFrame)[source]#
Remove given labeled frame.
- remove_empty_instances(keep_empty_frames: bool = True)[source]#
Remove instances with no visible points.
- Parameters:
keep_empty_frames – If True (the default), frames with no remaining instances will not be removed.
Notes
This will modify the labels in place. If a copy is desired, call labels.copy() before this.
- remove_frame(lf: LabeledFrame, update_cache: bool = True)[source]#
Remove a given labeled frame.
- Parameters:
lf – Labeled frame instance to remove.
update_cache – If True, update the internal frame cache. If False, cache update can be postponed (useful when removing many frames).
- remove_frames(lfs: List[LabeledFrame])[source]#
Remove a list of frames from the labels.
- Parameters:
lfs – A sequence of labeled frames to remove.
- remove_instance(frame: LabeledFrame, instance: Instance, in_transaction: bool = False)[source]#
Remove instance from frame, updating track occupancy.
- remove_predictions(new_labels: Optional[Labels] = None)[source]#
Clear predicted instances from the labels.
Useful prior to merging operations to prevent overlapping instances from new predictions.
- Parameters:
new_labels – If not None, only predicted instances in frames that also contain predictions in the new labels will be removed. If not provided (the default), all predicted instances will be removed.
Notes
If providing new_labels, it must have been loaded using sleap.Labels.load_file(..., match_to=labels) to ensure that conflicting frames can be detected.
Labeled frames without any instances after clearing will also be removed from the dataset.
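The pruning behavior (drop predicted instances, then drop frames left empty) can be modeled with plain dicts; the function and field names here are hypothetical stand-ins, not SLEAP's implementation:

```python
def remove_predictions(frames, conflict_keys=None):
    """Drop predicted instances from a list of frame dicts.

    Each frame is {"key": (video, frame_idx), "user": [...], "pred": [...]}.
    If conflict_keys is given, only prune frames whose key is in it;
    frames left with no instances at all are removed, matching the Notes.
    """
    pruned = []
    for frame in frames:
        if conflict_keys is None or frame["key"] in conflict_keys:
            frame = {**frame, "pred": []}
        if frame["user"] or frame["pred"]:
            pruned.append(frame)
    return pruned
```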
- remove_suggestion(video: Video, frame_idx: int)[source]#
Remove a suggestion from the list by video and frame index.
- Parameters:
video – sleap.Video instance of the suggestion.
frame_idx – Index of the frame of the suggestion.
- remove_track(track: Track)[source]#
Remove a track from the labels, updating (but not removing) instances.
- remove_untracked_instances(remove_empty_frames: bool = True)[source]#
Remove instances that do not have a track assignment.
- Parameters:
remove_empty_frames – If True (the default), removes frames that do not contain any instances after removing untracked ones.
- remove_user_instances(new_labels: Optional[Labels] = None)[source]#
Clear user instances from the labels.
Useful prior to merging operations to prevent overlapping instances from new labels.
- Parameters:
new_labels – If not None, only user instances in frames that also contain user instances in the new labels will be removed. If not provided (the default), all user instances will be removed.
Notes
If providing new_labels, it must have been loaded using sleap.Labels.load_file(..., match_to=labels) to ensure that conflicting frames can be detected.
Labeled frames without any instances after clearing will also be removed from the dataset.
- remove_video(video: Video)[source]#
Remove a video from the labels and all associated labeled frames.
- Parameters:
video – Video instance to be removed.
- save(filename: str, with_images: bool = False, embed_all_labeled: bool = False, embed_suggested: bool = False)[source]#
Save the labels to a file.
- Parameters:
filename – Path to save the labels to, ending in .slp. If the filename does not end in .slp, the extension will be automatically appended.
with_images – If True, the image data for frames with labels will be embedded in the saved labels. This is useful for generating a single file to be used when training remotely. Defaults to False.
embed_all_labeled – If True, save image data for labeled frames without user-labeled instances (defaults to False). This is useful for selecting arbitrary frames to save by adding empty LabeledFrames to the dataset. Labeled frame metadata will be saved regardless.
embed_suggested – If True, save image data for frames in the suggestions (defaults to False). Useful for predicting on remaining suggestions after training. Suggestions metadata will be saved regardless.
Notes
This is an instance-level wrapper for the Labels.save_file class method.
- classmethod save_file(labels: Labels, filename: str, default_suffix: str = '', *args, **kwargs)[source]#
Save file, detecting format from filename.
- Parameters:
labels – The dataset to save.
filename – Path where we’ll save it. We attempt to detect format from the suffix (e.g., “.json”).
default_suffix – If we can’t detect valid suffix on filename, we can add default suffix to filename (and use corresponding format). Doesn’t need to have “.” before file extension.
- Raises:
ValueError – If cannot detect valid filetype.
- save_frame_data_hdf5(output_path: str, format: str = 'png', user_labeled: bool = True, all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[HDF5Video] [source]#
Write images for labeled frames from all videos to hdf5 file.
Note that this will make an HDF5 video, not an HDF5 labels dataset.
- Parameters:
output_path – Path to HDF5 file.
format – The image format to use for the data. Defaults to png.
user_labeled – Include labeled frames with user instances. Defaults to True.
all_labeled – Include all labeled frames, including those with user-labeled instances, predicted instances, or no instances. Defaults to False.
suggested – Include suggested frames even if they do not have instances. Useful for inference after training. Defaults to False.
progress_callback – If provided, this function will be called to report the progress of the frame data saving. This function should be a callable of the form fn(n, n_total), where n is the number of frames saved so far and n_total is the total number of frames that will be saved. This is called after each video is processed. If the function has a return value and it returns False, saving will be canceled and the output deleted.
- Returns:
A list of HDF5Video objects with the stored frames.
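The progress-callback protocol described above (fn(n, n_total), called after each video, returning False to cancel) can be exercised with a stand-in save loop; the function below is a sketch of the protocol, not SLEAP's writer:

```python
def save_videos(videos, progress_callback=None):
    """Stand-in for a save loop that honors the fn(n, n_total) protocol.

    `videos` is a list of per-video frame lists. The callback runs after
    each video; a return value of False cancels the save and discards
    any partial output (here, by returning None).
    """
    n_total = sum(len(v) for v in videos)
    saved, n = [], 0
    for frames in videos:
        saved.extend(frames)
        n += len(frames)
        if progress_callback is not None:
            if progress_callback(n, n_total) is False:
                return None  # canceled: output deleted
    return saved
```

Note that a callback returning None (no return value) does not cancel; only an explicit False does, matching the documented behavior.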
- save_frame_data_imgstore(output_dir: str = './', format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[ImgStoreVideo] [source]#
Write images for labeled frames from all videos to imgstore datasets.
This only writes frames that have been labeled. Videos without any labeled frames will be included as empty imgstores.
- Parameters:
output_dir – Path to directory which will contain imgstores.
format – The image format to use for the data. Use “png” for lossless, “jpg” for lossy. Other imgstore formats will probably work as well but have not been tested.
all_labeled – Include any labeled frames, not just the frames we'll use for training (i.e., those with Instance objects).
suggested – Include suggested frames even if they do not have instances. Useful for inference after training. Defaults to False.
progress_callback – If provided, this function will be called to report the progress of the frame data saving. This function should be a callable of the form fn(n, n_total), where n is the number of frames saved so far and n_total is the total number of frames that will be saved. This is called after each video is processed. If the function has a return value and it returns False, saving will be canceled and the output deleted.
- Returns:
A list of ImgStoreVideo objects with the stored frames.
- split(n: Union[float, int], copy: bool = True) Tuple[Labels, Labels] [source]#
Split labels randomly.
- Parameters:
n – Number or fraction of elements in the first split.
copy – If True (the default), return copies of the labels.
- Returns:
A tuple of (labels_a, labels_b) where both are sleap.Labels instances subsampled from these labels.
Notes
If there is only 1 labeled frame, this will return two copies of the same labels. For len(labels) > 1, splits are guaranteed to be mutually exclusive.
You can generate multiple splits by calling this repeatedly:
> # Generate a 0.8/0.1/0.1 train/val/test split.
> labels_train, labels_val_test = labels.split(n=0.8)
> labels_val, labels_test = labels_val_test.split(n=0.5)
- to_dict(skip_labels: bool = False) Dict[str, Any] [source]#
Serialize all labels to dicts.
Serializes the labels in the underlying list of LabeledFrames to a dict structure. This function returns a nested dict structure composed entirely of primitive Python types. It is used to create JSON and HDF5 serialized datasets.
- Parameters:
skip_labels – If True, skip labels serialization and just do the metadata.
- Returns:
version - The version of the dict/json serialization format.
skeletons - The skeletons associated with these underlying instances.
nodes - The nodes that the skeletons represent.
videos - The videos that the instances occur on.
labels - The labeled frames
tracks - The tracks associated with each instance.
suggestions - The suggested frames.
negative_anchors - The negative training sample anchors.
- Return type:
A dict containing the following top-level keys
- to_json()[source]#
Serialize all labels in the underlying list of LabeledFrames to JSON.
- Returns:
The JSON representation of the labels.
- to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None, user_labeled_only: bool = True) sleap.pipelines.Pipeline [source]#
Create a pipeline for reading the dataset.
- Parameters:
batch_size – If not None, the video frames will be batched into rank-4 tensors. Otherwise, single rank-3 images will be returned.
prefetch – If True, the pipeline will include prefetching.
frame_indices – Labeled frame indices to limit the pipeline reader to. If not specified (default), the pipeline will read all the labeled frames in the dataset.
user_labeled_only – If True (the default), will only read frames with user-labeled instances.
- Returns:
A sleap.pipelines.Pipeline that builds tf.data.Dataset for high-throughput I/O during inference.
See also: sleap.pipelines.LabelsReader
- track_set_instance(frame: LabeledFrame, instance: Instance, new_track: Track)[source]#
Set track on given instance, updating occupancy.
- track_swap(video: Video, new_track: Track, old_track: Optional[Track], frame_range: tuple)[source]#
Swap track assignment for instances in two tracks.
If you need to change the track to or from None, you'll need to use track_set_instance() for each specific instance you want to modify.
- Parameters:
video – The Video for which we want to swap tracks.
new_track – A Track for which we want to swap instances with another track.
old_track – The other Track for swapping.
frame_range – Tuple of (start, end) frame indexes. If you want to swap tracks on a single frame, use (frame index, frame index + 1).
- property unlabeled_suggestions: List[SuggestionFrame]#
Return suggestions without user labels.
- property user_labeled_frame_inds: List[int]#
Return a list of indices of frames with user labeled instances.
- property user_labeled_frames: List[LabeledFrame]#
Return all labeled frames with user (non-predicted) instances.
- with_user_labels_only(user_instances_only: bool = True, with_track_only: bool = False, copy: bool = True) Labels [source]#
Return a new Labels containing only user labels.
This is useful as a preprocessing step to train on only user-labeled data.
- Parameters:
user_instances_only – If True (the default), predicted instances will be removed from frames that also have user instances.
with_track_only – If True, remove instances without a track.
copy – If True (the default), create a new copy of all of the extracted labeled frames and associated labels. If False, a shallow copy with references to the original labeled frames and other objects will be returned. Warning: if returning a shallow copy, predicted and untracked instances will be removed from the original labels as well!
- Returns:
A new Labels with only the specified subset of frames and instances.
- class sleap.io.dataset.LabelsDataCache(labels: Labels)[source]#
Class for maintaining cache of data in labels dataset.
- add_instance(frame: LabeledFrame, instance: Instance)[source]#
Add an instance to the labels.
- find_fancy_frame_idxs(video, from_frame_idx, reverse)[source]#
Return a list of frame idxs, with optional start position/order.
- find_frames(video: Video, frame_idx: Optional[Union[int, Iterable[int]]] = None) Optional[List[LabeledFrame]] [source]#
Return list of LabeledFrames matching video/frame_idx, or None.
- get_filtered_frame_idxs(video: Optional[Video] = None, filter: str = '') Set[Tuple[int, int]] [source]#
Return list of (video_idx, frame_idx) tuples matching video/filter.
- get_frame_count(video: Optional[Video] = None, filter: str = '') int [source]#
Return (possibly cached) count of frames matching video/filter.
- get_track_occupancy(video: Video, track: Track) RangeList [source]#
Access track occupancy cache that adds video/track as needed.
- get_video_track_occupancy(video: Video) Dict[Track, RangeList] [source]#
Return track occupancy information for specified video.
- remove_frame(frame: LabeledFrame)[source]#
Remove frame and update cache as needed.
- remove_instance(frame: LabeledFrame, instance: Instance)[source]#
Remove an instance and update the cache as needed.
- track_swap(video: Video, new_track: Track, old_track: Optional[Track], frame_range: tuple)[source]#
Swap tracks and update cache as needed.
- update(new_frame: Optional[LabeledFrame] = None)[source]#
Build (or rebuild) various caches.
- update_counts_for_frame(frame: LabeledFrame)[source]#
Update the cached count. Should be called after a frame is modified.
- sleap.io.dataset.find_path_using_paths(missing_path: str, search_paths: List[str]) str [source]#
Find a path to a missing file given a set of paths to search in.
- Parameters:
missing_path – Path to the missing filename.
search_paths – List of paths to search in.
- Returns:
The corrected path if it was found, or the original missing path if it was not.
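A minimal sketch of this kind of path recovery, matching on the basename in each search path (the real function may be more sophisticated; this is an illustrative model, not SLEAP's implementation):

```python
import os

def find_path_using_paths(missing_path, search_paths):
    """Return a corrected path for a missing file, or the original
    missing path if no candidate is found in the search paths."""
    basename = os.path.basename(missing_path)
    for root in search_paths:
        # A search path may be the file itself or a containing folder.
        if os.path.basename(root) == basename and os.path.exists(root):
            return root
        candidate = os.path.join(root, basename)
        if os.path.exists(candidate):
            return candidate
    return missing_path
```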
- sleap.io.dataset.load_file(filename: str, detect_videos: bool = True, search_paths: Optional[Union[List[str], str]] = None, match_to: Optional[Labels] = None) Labels [source]#
Load a SLEAP labels file.
SLEAP labels files (.slp) contain all the metadata for a labeling project or the predicted labels from a video. This includes the skeleton, videos, labeled frames, user-labeled and predicted instances, suggestions and tracks.
sleap.io.dataset.Labels
for more detailed information.- Parameters:
filename – Path to a SLEAP labels (.slp) file.
detect_videos – If True, will attempt to detect missing videos by searching for their filenames in the search paths. This is useful when loading SLEAP labels files that were generated on another computer with different paths.
search_paths – A path or list of paths to search for the missing videos. This can be the direct path to the video file or its containing folder. If not specified, defaults to searching for the videos in the same folder as the labels.
match_to – If a sleap.Labels object is provided, attempt to match and reuse video and skeleton objects when loading. This is useful when comparing the contents across sets of labels.
- Returns:
The loaded Labels instance.
Notes
This is a convenience method to call sleap.Labels.load_file. See that class method for more functionality in the loading process.
The video files do not need to be accessible in order to load the labels, for example, when only the predicted instances or user labels are required.