
Keypoints

Built-in pipelines for preprocessing, augmenting and predicting keypoints.

[source]

KeypointNetInference

paz.pipelines.keypoints.KeypointNetInference(model, num_keypoints=None, radius=5)

Performs inference from a KeypointNetShared model.

Arguments

  • model: Keras model for predicting keypoints.
  • num_keypoints: Int or None. If None, num_keypoints is inferred from model.output_shape.
  • radius: Int. Radius used for drawing the predicted keypoints.
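
A minimal usage sketch, assuming a trained KeypointNetShared model is available; the model and the image path below are placeholders.

    from paz.backend.image import load_image
    from paz.pipelines.keypoints import KeypointNetInference

    model = ...  # placeholder: a trained KeypointNetShared Keras model
    pipeline = KeypointNetInference(model, num_keypoints=10, radius=5)
    inferences = pipeline(load_image('object.jpg'))  # hypothetical image path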

[source]

KeypointNetSharedAugmentation

paz.pipelines.keypoints.KeypointNetSharedAugmentation(renderer, size)

Wraps RenderTwoViews as a sequential processor so it can be used directly with a paz.GeneratingSequence.

Arguments

  • renderer: RenderTwoViews processor.
  • size: Image size.
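
A minimal training sketch, assuming a RenderTwoViews processor has already been built for your scene; the renderer, batch size and step count are placeholders.

    from paz.abstract import GeneratingSequence
    from paz.pipelines.keypoints import KeypointNetSharedAugmentation

    renderer = ...  # placeholder: a RenderTwoViews processor
    processor = KeypointNetSharedAugmentation(renderer, size=128)
    sequence = GeneratingSequence(processor, batch_size=32, num_steps=100)
    # sequence can now be passed to Keras' model.fit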

[source]

EstimateKeypoints2D

paz.pipelines.keypoints.EstimateKeypoints2D(model, num_keypoints, draw=True, radius=3, color=4)

Basic 2D keypoint prediction pipeline.

Arguments

  • model: Keras model for predicting keypoints.
  • num_keypoints: Int or None. If None, num_keypoints is inferred from model.output_shape.
  • draw: Boolean indicating if inferences should be drawn.
  • radius: Int. Radius used for drawing the predicted keypoints.
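
A minimal usage sketch; the model and the image path are placeholders, and the output keys follow the conventions of the other pipelines on this page.

    from paz.backend.image import load_image
    from paz.pipelines.keypoints import EstimateKeypoints2D

    model = ...  # placeholder: Keras model mapping an image to keypoints
    pipeline = EstimateKeypoints2D(model, num_keypoints=15, draw=True, radius=3)
    inferences = pipeline(load_image('face.jpg'))  # hypothetical image path
    image, keypoints = inferences['image'], inferences['keypoints']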

[source]

DetectKeypoints2D

paz.pipelines.detection.DetectKeypoints2D(detect, estimate_keypoints, offsets=[0, 0], radius=3)

Detect objects with detect and estimate 2D keypoints inside each detected box with estimate_keypoints.
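
A usage sketch composing a detector with a keypoint estimator; it assumes the readymade HaarCascadeFrontalFace and FaceKeypointNet2D32 pipelines are available in your PAZ version.

    from paz.backend.image import load_image
    from paz.pipelines.detection import DetectKeypoints2D, HaarCascadeFrontalFace
    from paz.pipelines.keypoints import FaceKeypointNet2D32

    # assumption: both readymade pipelines exist in your PAZ version
    pipeline = DetectKeypoints2D(HaarCascadeFrontalFace(), FaceKeypointNet2D32())
    inferences = pipeline(load_image('faces.jpg'))  # hypothetical image path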

[source]

GetKeypoints

paz.pipelines.keypoints.GetKeypoints(max_num_instance, keypoint_order, detection_thresh=0.2, tag_thresh=1)

Extract the top k keypoints from the heatmaps and group them by their tag values. Adjust and refine the keypoint locations by removing the margins.

Arguments

  • max_num_instance: Int. Maximum number of instances to be detected.
  • keypoint_order: List of length 17 (number of keypoints).
  • heatmaps: Numpy array of shape (1, num_keypoints, H, W).
  • tags: Numpy array of shape (1, num_keypoints, H, W, 2).

Returns

  • grouped_keypoints: Numpy array. Keypoints grouped by tag.
  • scores: Int. Score for each keypoint.
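
A minimal sketch with random arrays standing in for real network outputs; the shapes follow the argument descriptions above, and keypoint_order is a placeholder.

    import numpy as np
    from paz.pipelines.keypoints import GetKeypoints

    num_keypoints, H, W = 17, 128, 128
    heatmaps = np.random.uniform(0, 1, (1, num_keypoints, H, W))
    tags = np.random.uniform(0, 1, (1, num_keypoints, H, W, 2))
    get_keypoints = GetKeypoints(max_num_instance=30, keypoint_order=list(range(17)))
    grouped_keypoints, scores = get_keypoints(heatmaps, tags)  # assumed call signature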

[source]

TransformKeypoints

paz.pipelines.keypoints.TransformKeypoints(inverse=False)

Transform the keypoint coordinates.

Arguments

  • grouped_keypoints: Numpy array. Keypoints grouped by tag.
  • center: Tuple. Center of the input image.
  • scale: Float. Scaled input image dimension.
  • shape: Tuple/List.

Returns

  • transformed_keypoints: Keypoint locations with respect to the input image.
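
A minimal sketch; the call arguments mirror the list above, and all values are placeholders (the exact call signature may differ between PAZ versions).

    import numpy as np
    from paz.pipelines.keypoints import TransformKeypoints

    transform_keypoints = TransformKeypoints(inverse=True)
    grouped_keypoints = np.random.uniform(0, 128, (1, 17, 2))  # placeholder keypoints
    transformed_keypoints = transform_keypoints(
        grouped_keypoints, (64, 64), 1.0, (128, 128))  # center, scale, shape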

[source]

HigherHRNetHumanPose2D

paz.pipelines.keypoints.HigherHRNetHumanPose2D(dataset='COCO', data_with_center=False, max_num_people=30, with_flip=True, draw=True)

Estimate human pose 2D keypoints and draw a skeleton.

Arguments

  • model: HigherHRNet model with trained weights.
  • keypoint_order: List of length 17 (number of keypoints) specifying the order of the keypoints.
  • flipped_keypoint_order: List of length 17 (number of keypoints). Keypoint order for flipped images.
  • dataset: String. Name of the dataset used for training the model.
  • data_with_center: Boolean. True if the model is trained using the center.

Returns

Dictionary with the following keys:

  • image: Image with the skeleton drawn on it.
  • keypoints: Locations of the keypoints.
  • score: Score of the detection.
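
A typical usage sketch; the image path is a placeholder.

    from paz.backend.image import load_image, show_image
    from paz.pipelines.keypoints import HigherHRNetHumanPose2D

    pipeline = HigherHRNetHumanPose2D()
    inferences = pipeline(load_image('people.jpg'))  # hypothetical image path
    show_image(inferences['image'])  # image with the drawn skeletons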


[source]

DetNetHandKeypoints

paz.pipelines.keypoints.DetNetHandKeypoints(shape=(128, 128), draw=True, right_hand=False)

Estimate 2D and 3D keypoints with the minimal hand model and draw a skeleton.

Arguments

  • shape: List/tuple. Input image shape for the DetNet model.
  • draw: Boolean. Draw the hand skeleton if True.
  • right_hand: Boolean. If True, detect keypoints for the right hand; otherwise for the left hand.
  • input_image: Array. Input image.

Returns

  • image: Image with the hand skeleton drawn on it.
  • keypoints2D: Array [num_joints, 2]. 2D locations of the keypoints.
  • keypoints3D: Array [num_joints, 3]. 3D locations of the keypoints.
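
A minimal usage sketch; the image path is a placeholder and the output keys follow the Returns list above.

    from paz.backend.image import load_image
    from paz.pipelines.keypoints import DetNetHandKeypoints

    pipeline = DetNetHandKeypoints(draw=True, right_hand=False)
    inferences = pipeline(load_image('hand.jpg'))  # hypothetical image path
    keypoints2D = inferences['keypoints2D']  # [num_joints, 2]
    keypoints3D = inferences['keypoints3D']  # [num_joints, 3]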

[source]

MinimalHandPoseEstimation

paz.pipelines.keypoints.MinimalHandPoseEstimation(draw=True, right_hand=False)

Estimate 2D and 3D keypoints with the minimal hand model and draw a skeleton. Estimate absolute and relative joint angles for the minimal hand joints from the 3D keypoint locations.

Arguments

  • draw: Boolean. Draw the hand skeleton if True.
  • right_hand: Boolean. If True, detect keypoints for the right hand; otherwise for the left hand.

Returns

  • image: Image with the hand skeleton drawn on it.
  • keypoints2D: Array [num_joints, 2]. 2D locations of the keypoints.
  • keypoints3D: Array [num_joints, 3]. 3D locations of the keypoints.
  • absolute_angles: Array [num_joints, 4]. Absolute joint angles as quaternions.
  • relative_angles: Array [num_joints, 3]. Relative joint angles in axis-angle representation.
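
A minimal usage sketch; the image path is a placeholder and the output keys follow the Returns list above.

    from paz.backend.image import load_image
    from paz.pipelines.keypoints import MinimalHandPoseEstimation

    pipeline = MinimalHandPoseEstimation(right_hand=False)
    inferences = pipeline(load_image('hand.jpg'))  # hypothetical image path
    absolute_angles = inferences['absolute_angles']  # quaternions, [num_joints, 4]
    relative_angles = inferences['relative_angles']  # axis-angle, [num_joints, 3]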

[source]

DetectMinimalHand

paz.pipelines.keypoints.DetectMinimalHand(detect, estimate_keypoints, offsets=[0, 0], radius=3)

Detect hands with detect and estimate minimal hand keypoints inside each detected box with estimate_keypoints.
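
A usage sketch composing a hand detector with the keypoint estimator. SSD512HandDetection is an assumption; substitute whichever hand-detection pipeline your PAZ version provides.

    from paz.backend.image import load_image
    from paz.pipelines.detection import SSD512HandDetection  # assumed detector name
    from paz.pipelines.keypoints import DetectMinimalHand, MinimalHandPoseEstimation

    pipeline = DetectMinimalHand(SSD512HandDetection(), MinimalHandPoseEstimation())
    inferences = pipeline(load_image('hand.jpg'))  # hypothetical image path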

[source]

EstimateHumanPose3D

paz.pipelines.keypoints.EstimateHumanPose3D(input_shape=(32,), num_keypoints=16)

Estimate human pose 3D from 2D human pose.

Arguments

  • input_shape: Tuple.
  • num_keypoints: Int. Number of keypoints.

Returns

  • keypoints3D: 3D human pose keypoints.
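
A minimal sketch; the 2D input is a random placeholder shaped to match input_shape=(32,), i.e. 16 keypoints with x and y flattened.

    import numpy as np
    from paz.pipelines.keypoints import EstimateHumanPose3D

    pipeline = EstimateHumanPose3D(input_shape=(32,), num_keypoints=16)
    keypoints2D = np.random.uniform(-1, 1, (1, 32))  # placeholder normalized 2D pose
    inferences = pipeline(keypoints2D)  # assumed to return the 3D keypoints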

[source]

EstimateHumanPose

paz.pipelines.keypoints.EstimateHumanPose(solver, camera_intrinsics, args_to_joints3D=[0, 1, 2, 3, 6, 7, 8, 12, 13, 15, 17, 18, 19, 25, 26, 27], filter=True, draw=True, draw_pose=True)

Estimate 2D and 3D human keypoints from an image.

Arguments

  • estimate_keypoints_3D: 3D simple baseline model.
  • args_to_mean: Keypoint indices.
  • h36m_to_coco_joints2D: Human3.6M joint indices.

Returns

  • keypoints2D: 2D human keypoints.
  • keypoints3D: 3D human keypoints.
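
A usage sketch; the solver choice (scipy's least_squares) and the camera setup are assumptions, since the pipeline only requires a compatible solver and the camera intrinsics.

    from scipy.optimize import least_squares
    from paz.backend.camera import Camera
    from paz.backend.image import load_image
    from paz.pipelines.keypoints import EstimateHumanPose

    image = load_image('person.jpg')  # hypothetical image path
    camera = Camera()
    camera.intrinsics_from_HFOV(70, image.shape)  # approximate intrinsics from HFOV
    pipeline = EstimateHumanPose(least_squares, camera.intrinsics)
    inferences = pipeline(image)
    keypoints2D, keypoints3D = inferences['keypoints2D'], inferences['keypoints3D']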