
Backend functionality for 2D keypoints

[source]

build_cube_points3D

paz.backend.keypoints.build_cube_points3D(width, height, depth)

Build the 3D points of a cube in the openCV coordinate system:

        4--------1
       /|       /|
      / |      / |
     3--------2  |
     |  8_____|__5
     | /      | /
     |/       |/
     7--------6

        Z (depth)
       /
      /_____X (width)
      |
      |
      Y (height)

Arguments

  • width: float, width of the 3D box.
  • height: float, height of the 3D box.
  • depth: float, depth of the 3D box.

Returns

Numpy array of shape (8, 3) corresponding to the 3D keypoints of a cube.
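The returned corners can be sketched by hand. A minimal illustration, assuming the cube is centered at the origin with half-extents along each axis (the helper name and the corner ordering here are illustrative, not paz's exact convention):

```python
import numpy as np

def build_cube_points3D_sketch(width, height, depth):
    # Half-extents along X (width), Y (height) and Z (depth).
    w, h, d = width / 2.0, height / 2.0, depth / 2.0
    # Eight corners of an axis-aligned box centered at the origin.
    return np.array([[+w, -h, +d], [+w, -h, -d],
                     [-w, -h, -d], [-w, -h, +d],
                     [+w, +h, +d], [+w, +h, -d],
                     [-w, +h, -d], [-w, +h, +d]])
```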


[source]

normalize_keypoints2D

paz.backend.keypoints.normalize_keypoints2D(points2D, height, width)

Transform points2D from image coordinates to normalized coordinates, i.e. [U, V] -> [-1, 1]. U and V have maximum values of W and H respectively.

Image plane

           width
    (0,0)---------> (U)
      |
      | height
      |
      v
     (V)

Arguments

  • points2D: Numpy array of shape (num_keypoints, 2).
  • height: Int. Height of the image.
  • width: Int. Width of the image.

Returns

Numpy array of shape (num_keypoints, 2).
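A plausible implementation of the [U, V] -> [-1, 1] mapping (the exact formula is an assumption; paz may differ in details such as pixel-center offsets):

```python
import numpy as np

def normalize_keypoints2D_sketch(points2D, height, width):
    # Map U in [0, width] and V in [0, height] linearly to [-1, 1].
    image_size = np.array([width, height], dtype=float)
    return 2.0 * (points2D / image_size) - 1.0
```

Under this formula, (0, 0) maps to (-1, -1), the image center to (0, 0), and (width, height) to (1, 1).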


[source]

denormalize_keypoints2D

paz.backend.keypoints.denormalize_keypoints2D(points2D, height, width)

Transform normalized points2D to image UV coordinates, i.e. [-1, 1] -> [U, V]. U and V have maximum values of W and H respectively.

Image plane

    (0,0)---------> (U)
      |
      |
      |
      v
     (V)

Arguments

  • points2D: Numpy array of shape (num_keypoints, 2).
  • height: Int. Height of the image.
  • width: Int. Width of the image.

Returns

Numpy array of shape (num_keypoints, 2).
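The inverse of a linear [-1, 1] normalization can be sketched the same way (assumed formula, not necessarily paz's exact code):

```python
import numpy as np

def denormalize_keypoints2D_sketch(points2D, height, width):
    # Map normalized coordinates in [-1, 1] back to [0, W] x [0, H].
    image_size = np.array([width, height], dtype=float)
    return (points2D + 1.0) * image_size / 2.0
```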


[source]

project_to_image

paz.backend.keypoints.project_to_image(rotation, translation, points3D, camera_intrinsics)

Project points3D to image plane using a perspective transformation.

Image plane

    (0,0)---------> (U)
      |
      |
      |
      v
     (V)

Arguments

  • rotation: Array (3, 3). Rotation matrix (Rco).
  • translation: Array (3). Translation (Tco).
  • points3D: Array (num_points, 3). Points 3D in object frame.
  • camera_intrinsics: Array of shape (3, 3). Diagonal elements represent the focal lengths and the last column the image center translation.

Returns

Array (num_points, 2) in UV image space.
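The perspective transformation itself can be sketched in a few lines of numpy (a pinhole-model sketch, not paz's exact implementation):

```python
import numpy as np

def project_to_image_sketch(rotation, translation, points3D, camera_intrinsics):
    # Object frame -> camera frame: p_cam = R @ p + t.
    points_camera = points3D @ rotation.T + translation
    # Apply the intrinsics: homogeneous image coordinates K @ p_cam.
    points_image = points_camera @ camera_intrinsics.T
    # Perspective divide by depth yields pixel coordinates (U, V).
    return points_image[:, :2] / points_image[:, 2:3]
```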


[source]

solve_PnP_RANSAC

paz.backend.keypoints.solve_PnP_RANSAC(object_points3D, image_points2D, camera_intrinsics, inlier_threshold=5, num_iterations=100)

Returns rotation (Roc) and translation (Toc) vectors that transform 3D points in object frame to camera frame.

         O------------O
        /|           /|
       / |          / |
      O------------O  |
      |  |    z    |  |
      |  O____|____|__O
      | /     |___y| /    object
      |/     x     |/     coordinates
      O------------O

         Z
        /
       /_____X   <------ Rco, Tco
       |
       |
       Y    camera coordinates

Arguments

  • object_points3D: Array (num_points, 3). Points 3D in object reference frame, represented as (O) in the diagram above.
  • image_points2D: Array (num_points, 2). Points 2D in camera UV space.
  • camera_intrinsics: Array of shape (3, 3). Diagonal elements represent the focal lengths and the last column the image center translation.
  • inlier_threshold: Reprojection error threshold (in pixels) below which a correspondence counts as an inlier.
  • num_iterations: Maximum number of RANSAC iterations.

Returns

Rotation vector in axis-angle form (3) and translation vector (3).
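The RANSAC scoring step, counting correspondences whose reprojection error falls below inlier_threshold, can be sketched as follows. count_inliers is a hypothetical helper, not part of paz, and it assumes a plain pinhole projection:

```python
import numpy as np

def count_inliers(rotation, translation, points3D, points2D,
                  camera_intrinsics, inlier_threshold=5):
    # Project the 3D points with a candidate pose (pinhole model).
    points_camera = points3D @ rotation.T + translation
    points_image = points_camera @ camera_intrinsics.T
    projected = points_image[:, :2] / points_image[:, 2:3]
    # Count correspondences whose pixel error is under the threshold.
    errors = np.linalg.norm(projected - points2D, axis=1)
    return int(np.sum(errors < inlier_threshold))
```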


[source]

arguments_to_image_points2D

paz.backend.keypoints.arguments_to_image_points2D(row_args, col_args)

Convert array arguments into UV coordinates.

Image plane

    (0,0)---------> (U)
      |
      |
      |
      v
     (V)

Arguments

  • row_args: Array (num_rows).
  • col_args: Array (num_cols).

Returns

Array (num_cols, num_rows) representing points2D in UV space.

Notes

Arguments are row arguments (V) and column arguments (U). Image points are in UV coordinates; thus we concatenate them in that order, i.e. [col_args, row_args].
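The note amounts to a single stacking operation; a minimal sketch:

```python
import numpy as np

def arguments_to_image_points2D_sketch(row_args, col_args):
    # Rows index V (vertical) and columns index U (horizontal),
    # so each UV point is the pair [col, row].
    return np.stack([col_args, row_args], axis=1)
```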


[source]

cascade_classifier

paz.backend.keypoints.cascade_classifier(path)

OpenCV Cascade classifier.

Arguments

  • path: String. Path to an OpenCV cascade classifier XML file.

Returns

OpenCV classifier with detectMultiScale for inference.


[source]

project_points3D

paz.backend.keypoints.project_points3D(points3D, pose6D, camera)

Projects 3D points into a specific pose.

Arguments

  • points3D: Numpy array of shape (num_points, 3).
  • pose6D: An instance of paz.abstract.Pose6D.
  • camera: An instance of paz.backend.Camera object.

Returns

Numpy array of shape (num_points, 2)


[source]

solve_PNP

paz.backend.keypoints.solve_PNP(points3D, points2D, camera, solver)

Calculates 6D pose from 3D points and 2D keypoints correspondences.

Arguments

  • points3D: Numpy array of shape (num_points, 3). 3D points known in advance.
  • points2D: Numpy array of shape (num_points, 2). Predicted 2D keypoints of object.
  • camera: Instance of paz.backend.Camera containing as properties: camera_intrinsics, a Numpy array of shape (3, 3) usually obtained from the openCV calibrateCamera function, and distortion, a Numpy array of shape (5) whose elements are also usually obtained from calibrateCamera.
  • solver: Flag, e.g. cv2.SOLVEPNP_UPNP.

Returns

A list containing success flag, rotation and translation components of the 6D pose.


[source]

translate_keypoints

paz.backend.keypoints.translate_keypoints(keypoints, translation)

Translate keypoints.

Arguments

  • keypoints: Numpy array of shape (num_keypoints, 2).
  • translation: A list of length two with the x, y translation values.

Returns

Numpy array


[source]

rotate_point2D

paz.backend.keypoints.rotate_point2D(point2D, rotation_angle)

Rotate keypoint.

Arguments

  • point2D: Keypoint [x, y].
  • rotation_angle: Int. Angle of rotation.

Returns

List containing the rotated x and y coordinates.
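A minimal sketch of a 2D rotation about the origin, assuming the angle is given in degrees and the rotation is counter-clockwise (paz's convention may differ):

```python
import numpy as np

def rotate_point2D_sketch(point2D, rotation_angle):
    # Standard 2D rotation matrix applied to [x, y].
    theta = np.deg2rad(rotation_angle)
    x, y = point2D
    rotated_x = x * np.cos(theta) - y * np.sin(theta)
    rotated_y = x * np.sin(theta) + y * np.cos(theta)
    return [rotated_x, rotated_y]
```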


[source]

transform_keypoint

paz.backend.keypoints.transform_keypoint(keypoint, transform)

Transform keypoint.

Arguments

  • keypoint: Keypoint [x, y].
  • transform: Array. Transformation matrix.

[source]

add_offset_to_point

paz.backend.keypoints.add_offset_to_point(keypoint_location, offset=0)

Add offset to keypoint location.

Arguments

  • keypoint_location: keypoint [y, x]
  • offset: Float.

[source]

translate_points2D_origin

paz.backend.keypoints.translate_points2D_origin(points2D, coordinates)

Translates points2D to a different origin

Arguments

  • points2D: Array (num_points, 2)
  • coordinates: Array (4) containing (x_min, y_min, x_max, y_max)

Returns

Translated points2D array (num_points, 2)


[source]

flip_keypoints_left_right

paz.backend.keypoints.flip_keypoints_left_right(keypoints, image_size=(32, 32))

Flip the detected 2D keypoints left to right.

Arguments

  • keypoints: Array.
  • image_size: List/tuple.

Returns

  • flipped_keypoints: Numpy array

[source]

compute_orientation_vector

paz.backend.keypoints.compute_orientation_vector(keypoints3D, parents)

Compute bone orientations from joint coordinates (child joint - parent joint). The returned vectors are normalized. For the root joint, it will be a zero vector.

Arguments

  • keypoints3D: Numpy array [num_keypoints, 3]. Joint coordinates.
  • parents: Parents of the keypoints from the kinematic chain.

Returns

Array [num_keypoints, 3]. The unit vectors from each parent joint to its child joint. For the root joint, it is a zero vector.
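The child-minus-parent computation can be sketched as follows, assuming the root joint lists itself as its own parent so that its bone vector is zero:

```python
import numpy as np

def compute_orientation_vector_sketch(keypoints3D, parents):
    # Bone vector per joint: child coordinates minus parent coordinates.
    vectors = keypoints3D - keypoints3D[parents]
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    # Keep the root's zero vector instead of dividing by zero.
    norms[norms == 0] = 1.0
    return vectors / norms
```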


[source]

rotate_keypoints3D

paz.backend.keypoints.rotate_keypoints3D(rotation_matrix, keypoints)

Rotates the keypoints using a rotation matrix.

Arguments

  • rotation_matrix: Array [N, 3, 3].
  • keypoints: Array [N, 3].

Returns

Rotated keypoints [N, 3]


[source]

flip_along_x_axis

paz.backend.keypoints.flip_along_x_axis(keypoints, axis=0)

Flip the keypoints along the x axis.

Arguments

  • keypoints: Array
  • axis: int/list

Returns

Flipped keypoints: Array


[source]

uv_to_vu

paz.backend.keypoints.uv_to_vu(keypoints)

Flips the uv coordinates to vu.

Arguments

  • keypoints: Array.

[source]

standardize

paz.backend.keypoints.standardize(data, mean, scale)

Takes the data, the mean, and the standard deviation, and returns the standardized data.

Arguments

  • data: nxd matrix to normalize
  • mean: Array of means
  • scale: standard deviation

Returns

standardized poses2D


[source]

destandardize

paz.backend.keypoints.destandardize(data, mean, scale)

Takes the standardized data, the mean, and the standard deviation, and returns the destandardized data.

Arguments

  • data: nxd matrix to unnormalize
  • mean: Array of means
  • scale: standard deviation

Returns

destandardized poses3D
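standardize and destandardize form a mutually inverse affine pair. A minimal sketch matching the documented behavior (not necessarily paz's exact code):

```python
import numpy as np

def standardize(data, mean, scale):
    # Subtract the mean and divide by the standard deviation,
    # broadcast over the n x d data matrix.
    return (data - mean) / scale

def destandardize(data, mean, scale):
    # Inverse operation: scale back up and re-add the mean.
    return data * scale + mean
```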


[source]

initialize_translation

paz.backend.keypoints.initialize_translation(joints2D, camera_intrinsics, ratio)

Computes initial 3D translation of root joint

Arguments

  • joints2D: 2D root joint from HigherHRNet.
  • camera_intrinsics: Camera intrinsic parameters.
  • ratio: Ratio of the sum of 3D bone lengths to 2D bone lengths.

Returns

Array of initial estimate of the global position of the root joint in 3D
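One common way to form this initial estimate is to set the depth from the bone-length ratio and back-project the 2D joint through the pinhole model. A sketch under those assumptions (not necessarily paz's exact formula):

```python
import numpy as np

def initialize_translation_sketch(joint2D, camera_intrinsics, ratio):
    focal_x = camera_intrinsics[0, 0]
    focal_y = camera_intrinsics[1, 1]
    center_u, center_v = camera_intrinsics[0, 2], camera_intrinsics[1, 2]
    # Depth estimate from the 3D-to-2D bone-length ratio.
    z = focal_x * ratio
    # Invert the pinhole projection for X and Y at that depth.
    x = (joint2D[0] - center_u) * z / focal_x
    y = (joint2D[1] - center_v) * z / focal_y
    return np.array([x, y, z])
```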


[source]

solve_least_squares

paz.backend.keypoints.solve_least_squares(solver, compute_joints_distance, initial_joints_translation, joints3D, poses2D, camera_intrinsics)

Solves the least-squares optimization.

Arguments

  • solver: from scipy.optimize import least_squares
  • compute_joints_distance: global_pose.compute_joints_distance
  • initial_joints_translation: Initial 3D translation of the root joint.
  • joints3D: 16 moving joints in 3D
  • poses2d: 2D poses
  • camera_intrinsics: camera intrinsic parameters

Returns

Optimal translation of the root joint for each person.


[source]

get_bones_length

paz.backend.keypoints.get_bones_length(poses2D, poses3D, start_joints, end_joints=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])

Computes the sum of bone lengths in 2D and 3D.

Arguments

  • poses2D: List of poses in 2D (Nx32).
  • poses3D: List of predicted poses in 3D (Nx16x3).

Returns

  • sum_bones2D: Array of sums of lengths of all bones in the 2D skeleton.
  • sum_bones3D: Array of sums of lengths of all bones in the 3D skeleton.
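The per-pose 3D sum is a short numpy reduction. A sketch for a single pose, assuming start_joints and end_joints index the two endpoints of each bone:

```python
import numpy as np

def sum_bone_lengths3D(pose3D, start_joints, end_joints):
    # One bone per (start, end) pair; sum the Euclidean lengths.
    bones = pose3D[end_joints] - pose3D[start_joints]
    return np.linalg.norm(bones, axis=1).sum()
```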


[source]

compute_reprojection_error

paz.backend.keypoints.compute_reprojection_error(initial_translation, keypoints3D, keypoints2D, camera_intrinsics)

Computes the distance between each person's joints.

Arguments

  • initial_translation: initial guess of position of joint
  • keypoints3D: 3D keypoints to be optimized (Nx16x3)
  • keypoints2D: 2D keypoints (Nx32)
  • camera_intrinsics: Camera intrinsic parameters.

Returns

  • person_sum: sum of L2 distances between each joint per person

[source]

merge_into_mean

paz.backend.keypoints.merge_into_mean(keypoints2D, args_to_mean)

Merges keypoints and takes the mean.

Arguments

  • keypoints2D: Keypoints2D (Nx17x2).
  • args_to_mean: Dict of joint indices.

Returns

  • keypoints2D: Keypoints2D after merging.

[source]

filter_keypoints

paz.backend.keypoints.filter_keypoints(keypoints, args_to_joints)

Filters keypoints.

Arguments

  • keypoints: points in camera coordinates
  • args_to_joints: Array of joints indices

Returns

filtered keypoints


[source]

filter_keypoints3D

paz.backend.keypoints.filter_keypoints3D(keypoints3D, args_to_joints3D)

Selects 16 moving joints (Neck/Nose excluded) from 32 predicted joints in 3D

Arguments

  • keypoints3D: Nx96 points in camera coordinates
  • args_to_joints3D: list of indices

Returns

  • filtered_joints_3D: Nx48 points (moving joints)

[source]

filter_keypoints2D

paz.backend.keypoints.filter_keypoints2D(keypoints2D, args_to_mean, h36m_to_coco_joints2D)

Selects 16 moving joints (Neck/Nose excluded) from 17 predicted joints in 2D

Arguments

  • keypoints2D: Nx17x2 points in camera coordinates.
  • args_to_mean: keypoints indices
  • h36m_to_coco_joints2D: human36m dataset list of joints indices

Returns

  • joints2D: Nx32 points (moving joints)

[source]

compute_optimized_pose3D

paz.backend.keypoints.compute_optimized_pose3D(keypoints3D, joint_translation, camera_intrinsics)

Compute the optimized 3D pose

Arguments

  • keypoints3D: 3D keypoints.
  • joint_translation: Numpy array of joint translations.
  • camera_intrinsics: Camera intrinsic parameters.

Returns

  • optimized_poses3D: Numpy array of optimized poses3D.

[source]

human_pose3D_to_pose6D

paz.backend.keypoints.human_pose3D_to_pose6D(poses3D)

Estimates human pose 6D of the root joint from the 3D pose of human joints.

Arguments

  • poses3D: Numpy array. 3D pose of human joints.

Returns

  • rotation_matrix: Numpy array. Rotation of the human root joint.
  • translation: List. Translation of the human root joint.