Applications

Out-of-the-box high-level pipelines for inference. All of these pipelines can also be imported directly from paz.pipelines.

SSD512COCO

paz.pipelines.detection.SSD512COCO(score_thresh=0.6, nms_thresh=0.45, draw=True)

Single-shot inference pipeline with SSD512 trained on COCO.

Arguments

  • score_thresh: Float between [0, 1]. Detection score threshold.
  • nms_thresh: Float between [0, 1]. Non-maximum suppression threshold.
  • draw: Boolean. If True, predictions are drawn on the returned image.

Example

from paz.pipelines import SSD512COCO

detect = SSD512COCO()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.
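
Each entry in boxes2D exposes coordinates, score and class_name attributes (assuming the Box2D message layout from paz.abstract.messages). A minimal sketch for consuming the result, which applies equally to the other detection pipelines below:

for box2D in inferences['boxes2D']:
    print(box2D.class_name, box2D.score, box2D.coordinates)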

SSD300VOC

paz.pipelines.detection.SSD300VOC(score_thresh=0.6, nms_thresh=0.45, draw=True)

Single-shot inference pipeline with SSD300 trained on VOC.

Arguments

  • score_thresh: Float between [0, 1]. Detection score threshold.
  • nms_thresh: Float between [0, 1]. Non-maximum suppression threshold.
  • draw: Boolean. If True, predictions are drawn on the returned image.

Example

from paz.pipelines import SSD300VOC

detect = SSD300VOC()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.

SSD512YCBVideo

paz.pipelines.detection.SSD512YCBVideo(score_thresh=0.6, nms_thresh=0.45, draw=True)

Single-shot inference pipeline with SSD512 trained on YCBVideo.

Arguments

  • score_thresh: Float between [0, 1]. Detection score threshold.
  • nms_thresh: Float between [0, 1]. Non-maximum suppression threshold.
  • draw: Boolean. If True, predictions are drawn on the returned image.

Example

from paz.pipelines import SSD512YCBVideo

detect = SSD512YCBVideo()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.


SSD300FAT

paz.pipelines.detection.SSD300FAT(score_thresh=0.6, nms_thresh=0.45, draw=True)

Single-shot inference pipeline with SSD300 trained on FAT.

Arguments

  • score_thresh: Float between [0, 1]. Detection score threshold.
  • nms_thresh: Float between [0, 1]. Non-maximum suppression threshold.
  • draw: Boolean. If True, predictions are drawn on the returned image.

Example

from paz.pipelines import SSD300FAT

detect = SSD300FAT()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.


HigherHRNetHumanPose2D

paz.pipelines.keypoints.HigherHRNetHumanPose2D(dataset='COCO', data_with_center=False, max_num_people=30, with_flip=True, draw=True)

Estimate human pose 2D keypoints and draw a skeleton.

Arguments

  • model: HigherHRNet model with pre-trained weights.
  • keypoint_order: List of length 17 (the number of keypoints) specifying the order in which keypoints are predicted.
  • flipped_keypoint_order: List of length 17 giving the keypoint order after horizontal flipping.
  • dataset: String. Name of the dataset used for training the model.
  • data_with_center: Boolean. True if the model was trained with a center keypoint.
  • max_num_people: Int. Maximum number of people to be detected.
  • with_flip: Boolean. If True, the horizontally flipped image is also evaluated.
  • draw: Boolean. If True, the skeleton is drawn on the returned image.
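
Example

A minimal usage sketch in the style of the other pipelines (image is an RGB numpy array):

from paz.pipelines import HigherHRNetHumanPose2D

estimate_pose = HigherHRNetHumanPose2D()

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)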

Returns

Dictionary with the following keys:

  • image: contains the image with the skeleton drawn on it.
  • keypoints: locations of the keypoints.
  • score: scores of the detections.


DetectMiniXceptionFER

paz.pipelines.detection.DetectMiniXceptionFER(offsets=[0, 0], colors=[[255, 0, 0], [45, 90, 45], [255, 0, 255], [255, 255, 0], [0, 0, 255], [0, 255, 255], [0, 255, 0]])

Emotion classification and detection pipeline.

Example

from paz.pipelines import DetectMiniXceptionFER

detect = DetectMiniXceptionFER()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.

MiniXceptionFER

paz.pipelines.classification.MiniXceptionFER()

Mini Xception pipeline for classifying emotions from RGB faces.

Example

from paz.pipelines import MiniXceptionFER

classify = MiniXceptionFER()

# apply directly to an image (numpy-array)
inference = classify(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: class_names and scores.

FaceKeypointNet2D32

paz.pipelines.keypoints.FaceKeypointNet2D32(draw=True, radius=3)

KeypointNet2D model trained on the Kaggle Facial Keypoints Detection challenge.

Arguments

  • draw: Boolean indicating if inferences should be drawn.
  • radius: Int. Radius used for drawing the predicted keypoints.

Example

from paz.pipelines import FaceKeypointNet2D32

estimate_keypoints = FaceKeypointNet2D32()

# apply directly to an image (numpy-array)
inference = estimate_keypoints(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and keypoints. The corresponding values of these keys contain the image with the drawn inferences and a numpy array representing the keypoints.


HeadPoseKeypointNet2D32

paz.pipelines.pose.HeadPoseKeypointNet2D32(camera, offsets=[0, 0], radius=5, thickness=2)

Head pose estimation pipeline using a HaarCascade face detector and a pre-trained KeypointNet2D estimation model.

Arguments

  • camera: Instance of paz.backend.camera.Camera with camera intrinsics.
  • offsets: List of floats indicating the scaled offset to be added to the Box2D coordinates.
  • radius: Int. Radius of the keypoints to be drawn.
  • thickness: Int. Thickness of the drawn lines.

Example

from paz.backend.camera import Camera
from paz.pipelines import HeadPoseKeypointNet2D32

# a Camera with intrinsics is required; here they are approximated
# from the horizontal field of view
camera = Camera()
camera.intrinsics_from_HFOV(HFOV=70, image_shape=(480, 640))
estimate_pose = HeadPoseKeypointNet2D32(camera)

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)

Returns

A function that takes an RGB image and outputs the following inferences as keys of a dictionary: image, boxes2D, keypoints and poses6D.


HaarCascadeFrontalFace

paz.pipelines.detection.HaarCascadeFrontalFace(class_name='Face', color=[0, 255, 0], draw=True)

HaarCascade pipeline for detecting frontal faces.

Arguments

  • class_name: String indicating the class name.
  • color: List indicating the RGB color e.g. [0, 255, 0].
  • draw: Boolean. If False, the bounding boxes are not drawn.

Example

from paz.pipelines import HaarCascadeFrontalFace

detect = HaarCascadeFrontalFace()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image and boxes2D. The corresponding values of these keys contain the image with the drawn inferences and a list of paz.abstract.messages.Boxes2D.


SinglePowerDrillPIX2POSE6D

paz.pipelines.pose.SinglePowerDrillPIX2POSE6D(camera, epsilon=0.15, resize=False, draw=True)

Predicts the pose6D of the YCB 035_power_drill object from an image. Optionally, if a box2D message is given, it translates the predicted points2D to a new origin located at the box2D top-left corner.

Arguments

  • camera: PAZ Camera with intrinsic matrix.
  • epsilon: Float. Values below this threshold are replaced by 0.
  • resize: Boolean. If True, the RGB mask is resized before computing PnP.
  • draw: Boolean. If True, drawing functions are applied to the output image.
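
Example

A minimal usage sketch, assuming a paz.backend.camera.Camera with calibrated intrinsics (here approximated with intrinsics_from_HFOV, assuming that helper is available in your PAZ version):

from paz.backend.camera import Camera
from paz.pipelines import SinglePowerDrillPIX2POSE6D

camera = Camera()
camera.intrinsics_from_HFOV(HFOV=70, image_shape=(480, 640))
estimate_pose = SinglePowerDrillPIX2POSE6D(camera)

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)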

Returns

Dictionary with inferred points2D, points3D, pose6D and image.


MultiPowerDrillPIX2POSE6D

paz.pipelines.pose.MultiPowerDrillPIX2POSE6D(camera, offsets, epsilon=0.15, resize=False, draw=True)

Predicts poses6D of multiple instances of the YCB 035_power_drill object from an image.

Arguments

  • camera: PAZ Camera with intrinsic matrix.
  • offsets: List of length two containing floats e.g. (x_scale, y_scale).
  • epsilon: Float. Values below this threshold are replaced by 0.
  • resize: Boolean. If True, the RGB mask is resized before computing PnP.
  • draw: Boolean. If True, drawing functions are applied to the output image.
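
Example

A minimal usage sketch, following the same camera setup as in SinglePowerDrillPIX2POSE6D above; the offsets value is illustrative:

from paz.backend.camera import Camera
from paz.pipelines import MultiPowerDrillPIX2POSE6D

camera = Camera()
camera.intrinsics_from_HFOV(HFOV=70, image_shape=(480, 640))
estimate_pose = MultiPowerDrillPIX2POSE6D(camera, offsets=[0.25, 0.25])

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)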

Returns

Dictionary with inferred boxes2D, poses6D and image.


PIX2POSEPowerDrill

paz.pipelines.pose.PIX2POSEPowerDrill(camera, score_thresh=0.5, nms_thresh=0.45, offsets=[0.5, 0.5], epsilon=0.15, resize=False, draw=True)

PIX2POSE inference pipeline with SSD300 trained on FAT and UNET-VGG16 trained with domain randomization for the YCB object 035_power_drill.

Arguments

  • camera: PAZ Camera with intrinsic matrix.
  • score_thresh: Float between [0, 1] for the object detector.
  • nms_thresh: Float between [0, 1] indicating the non-maximum suppression threshold.
  • offsets: List of length two containing floats e.g. (x_scale, y_scale).
  • epsilon: Float. Values below this threshold are replaced by 0.
  • resize: Boolean. If True, the RGB mask is resized before computing PnP.
  • draw: Boolean. If True, predictions are drawn on the returned image.
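
Example

A minimal usage sketch, following the same camera setup as above:

from paz.backend.camera import Camera
from paz.pipelines import PIX2POSEPowerDrill

camera = Camera()
camera.intrinsics_from_HFOV(HFOV=70, image_shape=(480, 640))
estimate_pose = PIX2POSEPowerDrill(camera)

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)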

Returns

Dictionary with inferred boxes2D, poses6D and image.


PIX2YCBTools6D

paz.pipelines.pose.PIX2YCBTools6D(camera, score_thresh=0.45, nms_thresh=0.15, offsets=[0.25, 0.25], epsilon=0.15, resize=False, draw=True)

Predicts poses6D of multiple instances of the YCB tools: '035_power_drill', '051_large_clamp' and '037_scissors'.

Arguments

  • camera: PAZ Camera with intrinsic matrix.
  • score_thresh: Float between [0, 1] for filtering Boxes2D.
  • nms_thresh: Float between [0, 1] for non-maximum suppression filtering.
  • offsets: List of length two containing floats e.g. (x_scale, y_scale).
  • epsilon: Float. Values below this threshold are replaced by 0.
  • resize: Boolean. If True, the RGB mask is resized before computing PnP.
  • draw: Boolean. If True, drawing functions are applied to the output image.
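
Example

A minimal usage sketch, following the same camera setup as above:

from paz.backend.camera import Camera
from paz.pipelines import PIX2YCBTools6D

camera = Camera()
camera.intrinsics_from_HFOV(HFOV=70, image_shape=(480, 640))
estimate_pose = PIX2YCBTools6D(camera)

# apply directly to an image (numpy-array)
inferences = estimate_pose(image)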

Returns

Dictionary with inferred boxes2D, poses6D and image.


DetNetHandKeypoints

paz.pipelines.keypoints.DetNetHandKeypoints(shape=(128, 128), draw=True, right_hand=False)

Estimate 2D and 3D hand keypoints with the minimal-hand DetNet model and draw a skeleton.

Arguments

  • shape: List/tuple. Input image shape for the DetNet model.
  • draw: Boolean. Draw the hand skeleton if True.
  • right_hand: Boolean. If True, detect keypoints for the right hand; otherwise for the left hand.
  • input_image: Array. RGB image passed when the pipeline is called.
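
Example

A minimal usage sketch in the style of the other pipelines (image is an RGB numpy array):

from paz.pipelines import DetNetHandKeypoints

estimate_keypoints = DetNetHandKeypoints()

# apply directly to an image (numpy-array)
inferences = estimate_keypoints(image)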

Returns

  • image: contains the image with skeleton drawn on it.
  • keypoints2D: Array [num_joints, 2]. 2D location of keypoints.
  • keypoints3D: Array [num_joints, 3]. 3D location of keypoints.

MinimalHandPoseEstimation

paz.pipelines.keypoints.MinimalHandPoseEstimation(draw=True, right_hand=False)

Estimate 2D and 3D keypoints from minimal hand and draw a skeleton. Estimate absolute and relative joint angles for the minimal hand joints using the 3D keypoint locations.

Arguments

  • draw: Boolean. Draw the hand skeleton if True.
  • right_hand: Boolean. If True, detect keypoints for the right hand; otherwise for the left hand.
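
Example

A minimal usage sketch in the style of the other pipelines:

from paz.pipelines import MinimalHandPoseEstimation

estimate_keypoints = MinimalHandPoseEstimation()

# apply directly to an image (numpy-array)
inferences = estimate_keypoints(image)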

Returns

  • image: contains the image with skeleton drawn on it.
  • keypoints2D: Array [num_joints, 2]. 2D location of keypoints.
  • keypoints3D: Array [num_joints, 3]. 3D location of keypoints.
  • absolute_angles: Array [num_joints, 4]. Quaternion representation.
  • relative_angles: Array [num_joints, 3]. Axis-angle representation.

DetectMinimalHand

paz.pipelines.keypoints.DetectMinimalHand(detect, estimate_keypoints, offsets=[0, 0], radius=3)
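
Hand detection followed by minimal-hand keypoint estimation on each detected hand.

Arguments

  • detect: Hand detection pipeline.
  • estimate_keypoints: Hand keypoint estimation pipeline.
  • offsets: List of two elements. Each element must be between [0, 1].
  • radius: Int. Radius used for drawing the predicted keypoints.

Example

A minimal sketch; SSD512HandDetection is assumed to be available from paz.pipelines as the hand detector (any compatible detector works):

from paz.pipelines import (DetectMinimalHand, MinimalHandPoseEstimation,
                           SSD512HandDetection)

detect = DetectMinimalHand(SSD512HandDetection(), MinimalHandPoseEstimation())

# apply directly to an image (numpy-array)
inferences = detect(image)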

ClassifyHandClosure

paz.pipelines.classification.ClassifyHandClosure(draw=True, right_hand=False)

Pipeline to classify minimal hand closure status.

Example

from paz.pipelines import ClassifyHandClosure

classify = ClassifyHandClosure()

# apply directly to an image (numpy-array)
inference = classify(image)

Returns

A function that takes an RGB image and outputs an image with class status drawn on it.


SSD512MinimalHandPose

paz.pipelines.detection.SSD512MinimalHandPose(right_hand=False, offsets=[0.25, 0.25])

Hand detection and minimal hand pose estimation pipeline.

Arguments

  • right_hand: Boolean. True for right hand inference.
  • offsets: List of two elements. Each element must be between [0, 1].

Example

from paz.pipelines import SSD512MinimalHandPose

detect = SSD512MinimalHandPose()

# apply directly to an image (numpy-array)
inferences = detect(image)

Returns

A function that takes an RGB image and outputs the predictions as a dictionary with keys: image, boxes2D, Keypoints2D and Keypoints3D. The image value contains the input image with the drawn inferences; the remaining keys contain the detected hand boxes and the estimated 2D and 3D keypoints.