sleap.io.video#

Video reading and writing interfaces for different formats.

class sleap.io.video.DummyVideo(filename: str = '', height: int = 2000, width: int = 2000, frames: int = 10000, channels: int = 1, dummy: bool = True)[source]#

Fake video backend that returns frames with all zeros.

This can be useful when you want to look at labels for a dataset but don’t have access to the real video.
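
A minimal sketch of substituting a placeholder for an unavailable video, assuming the standard backend get_frame method; the filename and dimensions below are illustrative:

>>> from sleap.io.video import DummyVideo
>>> dummy = DummyVideo(filename="missing_session.mp4", height=1024, width=1280, frames=5000)
>>> frame = dummy.get_frame(0)  # all-zero frame with shape (height, width, channels)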

class sleap.io.video.HDF5Video(filename: Optional[str] = None, dataset: Optional[str] = None, input_format: str = 'channels_last', convert_range: bool = True)[source]#

Video data stored as 4D datasets in HDF5 files.

Parameters:
  • filename – The name of the HDF5 file where the dataset with video data is stored.

  • dataset – The name of the HDF5 dataset where the video data is stored.

  • file_h5 – The h5.File object in which the underlying dataset is stored.

  • dataset_h5 – The h5.Dataset object in which the underlying data is stored.

  • input_format

    A string value equal to either “channels_last” or “channels_first”. This specifies whether the underlying video data is stored as:

    • ”channels_first”: shape = (frames, channels, height, width)

    • ”channels_last”: shape = (frames, height, width, channels)

  • convert_range – Whether we should convert data to [0, 255]-range
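
A minimal sketch of reading frames through this backend, assuming a hypothetical file frames.h5 containing a 4D dataset named "video":

>>> from sleap.io.video import HDF5Video
>>> backend = HDF5Video(filename="frames.h5", dataset="video", input_format="channels_last")
>>> img = backend.get_frame(0)  # ndarray with shape (height, width, channels)
>>> backend.close()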

property channels#

See Video.

check(attribute, value)[source]#

Called by attrs to validate the input format.

close()[source]#

Close the HDF5 file object (if it’s open).

property dtype#

See Video.

property embedded_frame_inds: List[int]#

Return list of frame indices with embedded images.

property enable_source_video: bool#

If set to True, will attempt to read from original video for frames not saved in the file.

property frames#

See Video.

get_frame(idx) ndarray[source]#

Get a frame from the underlying HDF5 video data.

Parameters:

idx – The index of the frame to get.

Returns:

The numpy.ndarray representing the video frame data.

property has_embedded_images: bool#

Return True if the file was saved with cached frame images.

property height#

See Video.

property last_frame_idx: int#

The index of the last frame.

Overrides the base Video class behavior for videos that store only selected frames indexed by their numbers in the original video, since the last frame index will not match the number of frames stored in the file.

matches(other: HDF5Video) bool[source]#

Check if attributes match those of another video.

Parameters:

other – The other video to compare with.

Returns:

True if attributes match, False otherwise.

reset()[source]#

Reloads the video.

property source_video: Video#

Return the source video if available, otherwise return None.

property source_video_available: bool#

Return True if the source file is available for reading uncached frames.

property width#

See Video.

class sleap.io.video.ImgStoreVideo(filename: Optional[str] = None, index_by_original: bool = True)[source]#

Video data stored as an ImgStore dataset.

See: loopbio/imgstore. This class is just a lightweight wrapper for reading such datasets as video sources for SLEAP.

Parameters:
  • filename – The name of the file or directory to the imgstore.

  • index_by_original – ImgStores are great for storing a collection of selected frames from a larger video. If index_by_original is set to True then the get_frame function will accept the original frame numbers from the original video. If False, then it will accept the frame index from the store directly. Defaults to True so that an ImgStoreVideo can replace another video in a dataset without having to update all the frame indices on LabeledFrame objects in the dataset.
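
A minimal sketch, assuming a hypothetical imgstore directory session_imgstore written from a larger video:

>>> from sleap.io.video import ImgStoreVideo
>>> store = ImgStoreVideo(filename="session_imgstore", index_by_original=True)
>>> img = store.get_frame(1200)  # 1200 is a frame number in the original video
>>> store.close()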

property channels#

See Video.

close()[source]#

Close the imgstore if it isn’t already closed.

Returns:

None

property dtype#

See Video.

property frames#

See Video.

get_frame(frame_number: int) ndarray[source]#

Get a frame from the underlying ImgStore video data.

Parameters:

frame_number – The number of the frame to get. If index_by_original is True (the default), this is the frame number from the original video; if False, it is the frame index within the imgstore itself (e.g., with 4 frames in the imgstore, a value from 0 to 3).

Returns:

The numpy.ndarray representing the video frame data.

property height#

See Video.

property imgstore#

Get the underlying ImgStore object for this Video.

Returns:

The imgstore that is backing this video object.

property last_frame_idx: int#

The index of the last frame.

Overrides the base Video class behavior for videos that store only selected frames indexed by their numbers in the original video, since the last frame index will not match the number of frames stored in the imgstore.

matches(other)[source]#

Check if attributes match.

Parameters:

other – The instance to compare with.

Returns:

True if attributes match, False otherwise

open()[source]#

Open the image store if it isn’t already open.

Returns:

None

reset()[source]#

Reloads the video.

property width#

See Video.

class sleap.io.video.MediaVideo(filename: str, grayscale: bool = _Nothing.NOTHING, bgr: bool = True, dataset: str = '', input_format: str = '')[source]#

Video data stored in traditional media formats readable by FFMPEG.

This class provides a bare-minimum, read-only interface on top of OpenCV's VideoCapture class.

Parameters:
  • filename – The name of the file (.mp4, .avi, etc)

  • grayscale – Whether the video is grayscale or not. “auto” means detect based on first frame.

  • bgr – Whether color channels are ordered as (blue, green, red).
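
A minimal sketch, assuming a hypothetical file session.mp4 readable by OpenCV:

>>> from sleap.io.video import MediaVideo
>>> backend = MediaVideo(filename="session.mp4", grayscale=False)
>>> img = backend.get_frame(0)  # ndarray with shape (height, width, channels)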

property channels#

See Video.

property dtype#

See Video.

property fps: float#

Returns frames per second of video.

property frames#

See Video.

get_frame(idx: int, grayscale: Optional[bool] = None) ndarray[source]#

See Video.

property height#

See Video.

matches(other: MediaVideo) bool[source]#

Check if attributes match those of another video.

Parameters:

other – The other video to compare with.

Returns:

True if attributes match, False otherwise.

reset(filename: Optional[str] = None, grayscale: Optional[bool] = None, bgr: Optional[bool] = None)[source]#

Reloads the video.

property width#

See Video.

class sleap.io.video.NumpyVideo(filename: Union[str, ndarray])[source]#

Video data stored as Numpy array.

Parameters:
  • filename – Either a file to load or a numpy array of the data.

  • numpy data shape – (frames, height, width, channels)
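
A minimal sketch of wrapping an in-memory array; the array below is illustrative:

>>> import numpy as np
>>> from sleap.io.video import NumpyVideo
>>> arr = np.zeros((10, 384, 384, 1), dtype="uint8")  # (frames, height, width, channels)
>>> backend = NumpyVideo(filename=arr)  # an array can be passed in place of a path
>>> img = backend.get_frame(3)  # ndarray with shape (384, 384, 1)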

property channels#

See Video.

property dtype#

See Video.

property frames#

See Video.

get_frame(idx)[source]#

See Video.

property height#

See Video.

property is_missing: bool#

Return True if the video comes from a file and is missing.

matches(other: NumpyVideo) ndarray[source]#

Check if attributes match those of another video.

Parameters:

other – The other video to compare with.

Returns:

True if attributes match, False otherwise.

reset()[source]#

Reload the video.

property width#

See Video.

class sleap.io.video.SingleImageVideo(filename: Optional[str] = None, filenames: Optional[List[str]] = _Nothing.NOTHING, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = _Nothing.NOTHING)[source]#

Video wrapper for individual image files.

Parameters:

filenames – Files to load as video.
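
A minimal sketch, assuming two hypothetical image files on disk:

>>> from sleap.io.video import SingleImageVideo
>>> backend = SingleImageVideo(filenames=["frame_000.png", "frame_001.png"])
>>> img = backend.get_frame(1)  # second image as a (height, width, channels) ndarray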

property channels#

See Video.

property dtype#

See Video.

property frames#

See Video.

get_frame(idx: int, grayscale: Optional[bool] = None) ndarray[source]#

See Video.

property height#

See Video.

matches(other: SingleImageVideo) bool[source]#

Check if attributes match those of another video.

Parameters:

other – The other video to compare with.

Returns:

True if attributes match, False otherwise.

reset(filename: Optional[str] = None, filenames: Optional[List[str]] = None, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = None)[source]#

Reloads the video.

property width#

See Video.

class sleap.io.video.Video(backend: Union[HDF5Video, NumpyVideo, MediaVideo, ImgStoreVideo, SingleImageVideo, DummyVideo])[source]#

The top-level interface to any Video data used by SLEAP.

This class provides a common interface for various supported video data backends. It provides the bare minimum of properties and methods that any video data needs to support in order to function with other SLEAP components. This interface currently only supports reading of video data; there is no write support. Unless one is creating a new video backend, this class should be instantiated via its various class methods for different formats. For example:

>>> video = Video.from_hdf5(filename="test.h5", dataset="box")
>>> video = Video.from_media(filename="test.mp4")

Or we can use auto-detection based on filename:

>>> video = Video.from_filename(filename="test.mp4")
Parameters:
  • backend – A backend is an object that implements the following basic required properties and methods:

    Properties:

    • frames: The number of frames in the video

    • channels: The number of channels in the video (e.g. 1 for grayscale, 3 for RGB)

    • width: The width of each frame in pixels

    • height: The height of each frame in pixels

    Methods:

    • get_frame(frame_index: int) -> np.ndarray: Get a single frame from the underlying video data with output shape=(height, width, channels).
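
As a hedged illustration of this contract, a hypothetical object satisfying the required interface might look like the following; ZerosBackend is not an actual SLEAP backend and exists only for illustration:

>>> import numpy as np
>>> class ZerosBackend:
...     """Hypothetical backend for illustration: every frame is all zeros."""
...     frames, height, width, channels = 100, 480, 640, 1
...     def get_frame(self, frame_index: int) -> np.ndarray:
...         return np.zeros((self.height, self.width, self.channels), dtype="uint8")
...
>>> ZerosBackend().get_frame(0).shape
(480, 640, 1)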

static cattr()[source]#

Return a cattr converter for serializing/deserializing Video objects.

Returns:

A cattr converter.
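
A minimal sketch of round-tripping a Video through the converter; the media filename is hypothetical, and structure/unstructure are the standard cattr converter methods:

>>> from sleap.io.video import Video
>>> converter = Video.cattr()
>>> video = Video.from_filename("session.mp4")
>>> video_dict = converter.unstructure(video)     # plain-Python representation
>>> restored = converter.structure(video_dict, Video)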

static fixup_path(path: str, raise_error: bool = False, raise_warning: bool = False) str[source]#

Try to locate video if the given path doesn’t work.

Given a path to a video, try to find it. This is an attempt to make the paths serialized with video objects portable across multiple computers. The default behavior is to store whatever path is stored on the backend object; if this is an absolute path, it is almost certainly wrong when the object is loaded on another computer. We therefore also try to find the video by looking in the current working directory.

Note that when loading videos during the process of deserializing a saved Labels dataset, it’s usually preferable to fix video paths using a video_search callback or path list.

Parameters:
  • path – The path to the video asset.

  • raise_error – Whether to raise error if we cannot find video.

  • raise_warning – Whether to raise warning if we cannot find video.

Raises:

FileNotFoundError – If file still cannot be found and raise_error is True.

Returns:

The fixed-up path.
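
A minimal sketch, assuming a stale absolute path recorded on another machine:

>>> from sleap.io.video import Video
>>> local_path = Video.fixup_path("/other/machine/videos/session.mp4", raise_warning=True)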

classmethod from_filename(filename: str, *args, **kwargs) Video[source]#

Create an instance of a video object, auto-detecting the backend.

Parameters:
  • filename

    The path to the video file. Currently supported types are:

    • Media Videos - AVI, MP4, etc. handled by OpenCV directly

    • HDF5 Datasets - .h5 files

    • Numpy Arrays - npy files

    • imgstore datasets - produced by loopbio's Motif recording system. See: loopbio/imgstore.

  • args – Additional arguments to pass to the detected backend class.

  • kwargs – Additional keyword arguments to pass to the detected backend class.

Returns:

A Video object with the detected backend.
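
A minimal sketch of auto-detection for two hypothetical files; extra keyword arguments are forwarded to the detected backend:

>>> from sleap.io.video import Video
>>> video = Video.from_filename("session.mp4")                 # MediaVideo backend
>>> video = Video.from_filename("frames.h5", dataset="video")  # HDF5Video backend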

classmethod from_hdf5(dataset: Union[str, Dataset], filename: Optional[Union[str, File]] = None, input_format: str = 'channels_last', convert_range: bool = True) Video[source]#

Create an instance of a video object from an HDF5 file and dataset.

This is a helper method that invokes the HDF5Video backend.

Parameters:
  • dataset – The name of the dataset or an h5.Dataset object. If filename is an h5.File, dataset must be a str giving the dataset name.

  • filename – The name of the HDF5 file or an open h5.File object.

  • input_format – Whether the data is oriented with “channels_first” or “channels_last”

  • convert_range – Whether we should convert data to [0, 255]-range

Returns:

A Video object with HDF5Video backend.

classmethod from_image_filenames(filenames: List[str], height: Optional[int] = None, width: Optional[int] = None, *args, **kwargs) Video[source]#

Create an instance of a SingleImageVideo from individual image file(s).

classmethod from_media(filename: str, *args, **kwargs) Video[source]#

Create an instance of a video object from a typical media file.

For example, mp4, avi, or other types readable by FFMPEG.

Parameters:
  • filename – The name of the file

  • args – Arguments to pass to MediaVideo

  • kwargs – Arguments to pass to MediaVideo

Returns:

A Video object with a MediaVideo backend

classmethod from_numpy(filename: Union[str, ndarray], *args, **kwargs) Video[source]#

Create an instance of a video object from a numpy array.

Parameters:
  • filename – The numpy array or the name of the file

  • args – Arguments to pass to NumpyVideo

  • kwargs – Arguments to pass to NumpyVideo

Returns:

A Video object with a NumpyVideo backend

get_frame(idx: int) ndarray[source]#

Return a single frame of video from the underlying video data.

Parameters:

idx – The index of the video frame

Returns:

The video frame with shape (height, width, channels)

get_frames(idxs: Union[int, Iterable[int]]) ndarray[source]#

Return a collection of video frames from the underlying video data.

Parameters:

idxs – An iterable object that contains the indices of frames.

Returns:

The requested video frames with shape (len(idxs), height, width, channels).

get_frames_safely(idxs: Iterable[int]) Tuple[List[int], ndarray][source]#

Return list of frame indices and frames which were successfully loaded.

Parameters:

idxs – An iterable object that contains the indices of frames.

Returns:

A tuple of (frame indices, frames), where
  • frame indices is a subset of the specified idxs, and

  • frames has shape (len(frame indices), height, width, channels).

If zero frames were loaded successfully, then frames is None.
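
A minimal sketch, assuming a hypothetical media file; indices that cannot be read are simply dropped from the returned list:

>>> from sleap.io.video import Video
>>> video = Video.from_filename("session.mp4")
>>> idxs, frames = video.get_frames_safely([0, 1, 2, 10**9])
>>> # idxs contains only the indices that could be read;
>>> # frames has shape (len(idxs), height, width, channels)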

classmethod imgstore_from_filenames(filenames: list, output_filename: str, *args, **kwargs) Video[source]#

Create an imgstore from a list of image files.

Parameters:
  • filenames – List of filenames for the image files.

  • output_filename – Filename for the imgstore to create.

Returns:

A Video object for the new imgstore.

property is_missing: bool#

Return True if the video is a file and is not present.

property last_frame_idx: int#

Return the index number of the last frame. Usually num_frames - 1.

property num_frames: int#

Return the number of frames in the video.

property shape: Tuple[Optional[int], Optional[int], Optional[int], Optional[int]]#

Return tuple of (frame count, height, width, channels).

to_hdf5(path: str, dataset: str, frame_numbers: Optional[List[int]] = None, format: str = '', index_by_original: bool = True)[source]#

Convert frames from arbitrary video backend to HDF5Video.

Used for building an HDF5 that holds all data needed for training.

Parameters:
  • path – Filename to HDF5 (which could already exist).

  • dataset – The HDF5 dataset in which to store video frames.

  • frame_numbers – A list of frame numbers from the video to save. If None, save the entire video.

  • format – If non-empty, encode images in this format before saving. Otherwise, save a numpy matrix of frames.

  • index_by_original – If index_by_original is set to True then the get_frame function will accept the original frame numbers from the original video. If False, then it will accept the frame index directly. Defaults to True so that the resulting video can replace another video in a dataset without having to update all the frame indices in the dataset.

Returns:

A new Video object that references the HDF5 dataset.
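
A minimal sketch of embedding a few frames into an HDF5 file; the filenames and dataset name are hypothetical:

>>> from sleap.io.video import Video
>>> video = Video.from_filename("session.mp4")
>>> hdf5_video = video.to_hdf5(path="frames.h5", dataset="video", frame_numbers=[0, 10, 20], format="png")
>>> img = hdf5_video.get_frame(10)  # indexed by original frame number by default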

to_imgstore(path: str, frame_numbers: Optional[List[int]] = None, format: str = 'png', index_by_original: bool = True) Video[source]#

Convert frames from arbitrary video backend to ImgStoreVideo.

This should facilitate conversion of any video to a loopbio imgstore.

Parameters:
  • path – Filename or directory name to store imgstore.

  • frame_numbers – A list of frame numbers from the video to save. If None, save the entire video.

  • format – By default a DirectoryImgStore with lossless PNG format will be created, unless frame_numbers is None, in which case it will default to the 'mjpeg/avi' video format.

  • index_by_original – ImgStores are great for storing a collection of selected frames from a larger video. If index_by_original is set to True then the get_frame function will accept the original frame numbers from the original video. If False, then it will accept the frame index from the store directly. Defaults to True so that an ImgStoreVideo can replace another video in a dataset without having to update all the frame indices on LabeledFrame objects in the dataset.

Returns:

A new Video object that references the imgstore.
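
A minimal sketch of writing selected frames to an imgstore; the paths are hypothetical:

>>> from sleap.io.video import Video
>>> video = Video.from_filename("session.mp4")
>>> store_video = video.to_imgstore(path="session_imgstore", frame_numbers=[0, 10, 20])
>>> img = store_video.get_frame(10)  # indexed by original frame number by default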

to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None) sleap.pipelines.Pipeline[source]#

Create a pipeline for reading the video.

Parameters:
  • batch_size – If not None, the video frames will be batched into rank-4 tensors. Otherwise, single rank-3 images will be returned.

  • prefetch – If True, pipeline will include prefetching.

  • frame_indices – Frame indices to limit the pipeline reader to. If not specified (default), pipeline will read the entire video.

Returns:

A sleap.pipelines.Pipeline that builds tf.data.Dataset for high throughput I/O during inference.

See also: sleap.pipelines.VideoReader
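
A minimal sketch, assuming a hypothetical media file and that the returned Pipeline exposes the usual make_dataset() method for building the tf.data.Dataset:

>>> from sleap.io.video import Video
>>> video = Video.from_filename("session.mp4")
>>> pipeline = video.to_pipeline(batch_size=4, prefetch=True)
>>> ds = pipeline.make_dataset()  # tf.data.Dataset yielding batches of frames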

sleap.io.video.available_video_exts() Tuple[str][source]#

Return tuple of supported video extensions.

Returns:

Tuple of supported video extensions.

sleap.io.video.load_video(filename: str, grayscale: Optional[bool] = None, dataset: Optional[str] = None, channels_first: bool = False, **kwargs) Video[source]#

Open a video from disk.

Parameters:
  • filename – Path to a video file. The video reader backend will be determined by the file extension. Supported extensions include: mp4, avi, h5, hdf5, and slp (for embedded images in a labels file). If the path to a folder is provided, images within that folder will be treated as video frames.

  • grayscale – Read frames as a single channel grayscale images. If None (the default), this will be auto-detected.

  • dataset – Name of the dataset that contains the video if loading a video stored in an HDF5 file. This has no effect for non-HDF5 inputs.

  • channels_first – If False (the default), assume the data in the HDF5 dataset are formatted in (frames, height, width, channels) order. If True, assume the data are in (frames, channels, height, width) order. This has no effect for non-HDF5 inputs.

Returns:

A sleap.Video instance with the appropriate backend for its format.

This enables numpy-like access to video data.

Example:

>>> video = sleap.load_video("centered_pair_small.mp4")
>>> video.shape
(1100, 384, 384, 1)
>>> imgs = video[0:3]
>>> imgs.shape
(3, 384, 384, 1)

See also

sleap.io.video.Video