# Numerical Methods for Inverse Kinematics
Kris Hauser, ECE 383 / ME 442, Duke University

## Agenda

- Numerical methods for IK
  - Root finding
  - Optimization
- Other IK objective functions
  - Axis constraints
  - Hinge constraints

- Jacobians and null space motions

## Inverse Kinematics
Problem: given the desired position of a point on the robot (the end effector), what joint angles reach it?

## Solving a General Equation
Solve f(q) = 0, a vector-valued nonlinear function. The function can include rotation constraints and multiple IK targets.

[Figure: planar arm with joint angles q1, q2, q3, q4 reaching toward a target]

## Numerical Inverse Kinematics: General Idea
For the root-finding problem f(q) = 0, given a seed q0:
- Take steps q1, ..., qk such that the error ||f(q)|| shrinks towards 0

- Stop when the error is small enough or progress stops

Three numerical methods (of many):
1. Cyclic Coordinate Descent
2. Newton-Raphson root finding

3. Gradient descent and related methods

## Cyclic Coordinate Descent
- On each iteration, solve min ||f(q)|| over a single joint
- Derive fast analytical solutions for each joint

- Loop through the joints; repeat until convergence
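As a sketch of the per-joint analytical update, here is CCD on a planar 3-link arm. The link lengths, seed, target, and tolerances are my own illustrative choices, not from the slides.

```python
# Cyclic Coordinate Descent for a planar 3-link arm (illustrative sketch).
import numpy as np

L = [1.0, 1.0, 1.0]          # link lengths (arbitrary choice)

def fk_points(q):
    """Positions of each joint and the end effector for joint angles q."""
    pts = [np.zeros(2)]
    a = 0.0
    for qi, li in zip(q, L):
        a += qi
        pts.append(pts[-1] + li * np.array([np.cos(a), np.sin(a)]))
    return pts                # pts[0..2] = joints, pts[3] = end effector

def ccd(q, target, sweeps=100, tol=1e-6):
    q = np.array(q, dtype=float)
    for _ in range(sweeps):
        for i in reversed(range(len(q))):    # loop through the joints
            pts = fk_points(q)
            to_ee = pts[-1] - pts[i]         # joint i -> end effector
            to_tg = target - pts[i]          # joint i -> target
            # Analytic single-joint solution: rotate joint i so the
            # end effector swings toward the target.
            q[i] += np.arctan2(to_tg[1], to_tg[0]) - np.arctan2(to_ee[1], to_ee[0])
        if np.linalg.norm(fk_points(q)[-1] - target) < tol:
            break
    return q

target = np.array([1.0, 1.5])
q = ccd([0.1, 0.1, 0.1], target)
print(np.linalg.norm(fk_points(q)[-1] - target))  # small residual error
```

The greedy per-joint updates converge quickly for targets well inside the workspace, but can be slow near the workspace boundary.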

## Newton's Method
Find a step by solving the 1st-order Taylor expansion:

  f(x_k) + f'(x_k) (x_{k+1} - x_k) = 0  =>  x_{k+1} = x_k - f(x_k) / f'(x_k)

In a neighborhood of a root, the line tangent to the graph of f(x) crosses the x axis near the root. Iterate!

[Figure: tangent-line iterates x0, x1, x2 approaching the root of f(x)]
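The tangent-line iteration can be sketched in a few lines. The test function f(x) = x^2 - 2 (whose positive root is sqrt(2)), the seed, and the tolerances are my own illustrative choices.

```python
# Minimal 1D Newton's method, matching the tangent-line picture above.
def newton(f, df, x0, tol=1e-10, max_iters=50):
    x = x0
    for _ in range(max_iters):
        fx = f(x)
        if abs(fx) < tol:          # stop when the error is small enough
            break
        x = x - fx / df(x)         # step to the tangent line's x-intercept
    return x

# Find sqrt(2) as the root of f(x) = x^2 - 2, seeded near the root:
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.41421356
```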

## Newton-Raphson Method
Multi-dimensional root-finding technique. Find a step dx by solving the 1st-order Taylor expansion:

  f(x_k) + J_f(x_k) dx = 0

Pseudoinverse solution: dx = -J_f(x_k)^+ f(x_k), where J^+ denotes the pseudoinverse of the matrix J.

## Divergence
[Figure: a sequence of Newton iterates x1, ..., x5 on f(x) that overshoots the root and diverges]
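A minimal sketch of the pseudoinverse Newton-Raphson update, assuming a planar 2-link arm with an analytic Jacobian. The link lengths, seed, and target are illustrative choices, not from the slides.

```python
# Pseudoinverse Newton-Raphson IK for a planar 2-link arm (illustrative sketch).
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths (arbitrary choice)

def f(q, target):
    """IK error: end-effector position minus target."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y]) - target

def jacobian(q):
    """Analytic Jacobian of the end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def newton_raphson_ik(q0, target, iters=100, tol=1e-8):
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = f(q, target)
        if np.linalg.norm(err) < tol:
            break
        # Solve the 1st-order Taylor expansion f + J dq = 0: dq = -J^+ f(q)
        q = q - np.linalg.pinv(jacobian(q)) @ err
    return q

target = np.array([1.2, 0.8])
q = newton_raphson_ik([0.3, 0.3], target)
print(np.linalg.norm(f(q, target)))  # near 0
```

Note that this bare update takes the full Newton step; as discussed next, it is usually combined with a line search to guard against divergence.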

The Newton-Raphson step usually needs to be combined with a line search to prevent divergence.

## Pseudoinverses
A pseudoinverse A# of an m x n matrix A is an n x m matrix such that:
- A# A A# = A#
- A A# A = A

- If A is square and has an inverse, A# is A^-1
- A pseudoinverse can be derived for all matrices

Typically we talk about the Moore-Penrose pseudoinverse A^+, the unique pseudoinverse for which A A^+ and A^+ A are also symmetric. It has the properties:
- If the system Ax = b is underconstrained, x = A^+ b is the solution with minimum norm ||x||

- If the system is overconstrained, x = A^+ b is the solution that minimizes the residual norm ||Ax - b||

A^+ can be calculated in all major math packages, typically via the singular value decomposition A = U W V^T.

## Gradient-Descent Methods
Solve min_q g(q) = (1/2) ||f(q)||^2 using the gradient of g.

The gradient gives the direction of steepest ascent: it is orthogonal to the level sets (contours) of g and points in the direction of steepest increase.

Gradient descent: iteratively move in the direction of the negative gradient,

  q <- q - a grad g(q)

for some step size a > 0.

## Relation to Newton-Raphson
In column vector form, grad g(q) = J_f(q)^T f(q). Gradient descent takes steps using the negated Jacobian transpose:

  dq = -a J_f(q)^T f(q)

Newton-Raphson takes steps using the negated Jacobian pseudoinverse:

  dq = -J_f(q)^+ f(q)

## Line Search
Pick the step size a to lead to a decrease in the function value f(x - a grad f(x)).

[Figure: f(x - a grad f(x)) plotted against the step size a, minimized at a*]

Use your favorite univariate optimization method to find the step size.

## Gradient Descent Pseudocode
Input: f, starting value x1, termination tolerances eps_g, eps_x
For t = 1, 2, ..., maxIters:
  Compute the search direction d_t = -grad f(x_t)
  If ||d_t|| < eps_g: return "Converged to critical point", output x_t
  Find a_t so that f(x_t + a_t d_t) < f(x_t) using line search
  If ||a_t d_t|| < eps_x: return "Converged in x", output x_t
  Let x_{t+1} = x_t + a_t d_t
Return "Max number of iterations reached", output x_maxIters
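One possible Python rendering of this pseudocode, using a simple backtracking line search; the quadratic test function and tolerance values are my own choices.

```python
# Gradient descent with backtracking line search (sketch of the pseudocode above).
import numpy as np

def gradient_descent(f, grad, x1, eps_g=1e-8, eps_x=1e-12, max_iters=1000):
    x = np.array(x1, dtype=float)
    for _ in range(max_iters):
        d = -grad(x)                      # search direction d_t = -grad f(x_t)
        if np.linalg.norm(d) < eps_g:
            return x                      # converged to a critical point
        # Line search: halve the step size until f decreases
        a = 1.0
        while f(x + a * d) >= f(x) and a > 1e-16:
            a *= 0.5
        if np.linalg.norm(a * d) < eps_x:
            return x                      # converged in x
        x = x + a * d
    return x                              # max number of iterations reached

# Minimize f(x) = ||x - c||^2, whose unique minimizer is c:
c = np.array([2.0, -1.0])
xmin = gradient_descent(lambda x: np.sum((x - c) ** 2),
                        lambda x: 2.0 * (x - c),
                        x1=[0.0, 0.0])
print(xmin)  # approximately [2, -1]
```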

## Higher-Order Optimization Methods
- Assume the function is locally quadratic
- Newton's method applies root finding to grad g(q) = 0 (for IK, this requires calculating the Hessian of g)
- Quasi-Newton methods (e.g., BFGS) approximate the Hessian using differencing: the multi-dimensional analogue of the secant method

All optimization methods have a major drawback:

- Many local minima: they need a good initialization, or random restarts

## Null Space
A motion from q to q' that maintains f(q) = 0 is known as a motion in the null space.

A null space velocity dq must satisfy J(q) dq = 0, i.e., dq lies in the null space of J(q).
- For any vector y, (I - J^+ J) y lies in the null space (recall J^+ is the pseudoinverse of J)
- A basis of the null space can be found by the SVD:

  J = U W V^T
Let the last k diagonal entries of W be 0 and the first n - k be nonzero. Then J dq = 0 requires W V^T dq = 0:
- The first n - k entries of V^T dq must be zero
- The last k entries of V^T dq may be nonzero
=> The last k columns of V are a basis for the null space.
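A sketch of both constructions: extracting a null space basis from the SVD, and checking that the projector (I - J^+ J) maps an arbitrary vector into the null space. The 2 x 3 Jacobian is an arbitrary example standing in for a redundant arm (3 joints, 2 task dimensions).

```python
# Null space of a Jacobian via SVD, and the (I - J^+ J) projector.
import numpy as np

J = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])          # m=2 task dims, n=3 joints (redundant)

U, w, Vt = np.linalg.svd(J)              # J = U W V^T; w holds W's diagonal
r = int(np.sum(w > 1e-12))               # number of nonzero singular values
null_basis = Vt[r:].T                    # last n - r columns of V

# Every basis vector satisfies J dq = 0:
print(np.linalg.norm(J @ null_basis))    # ~0

# The projector (I - J^+ J) maps any y into the null space:
y = np.array([0.3, -1.0, 2.0])
dq = (np.eye(3) - np.linalg.pinv(J) @ J) @ y
print(np.linalg.norm(J @ dq))            # ~0
```

Moving the joints along `dq` (to first order) changes the configuration without moving the end effector.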

## Recap
- Two general ways of solving inverse kinematics: analytical and numerical
- With nonredundant manipulators, there are a finite number of solutions (usually more than one, without joint limits)
- With redundant manipulators, there are an infinite number of solutions

- Null space motions

## Next Class: Klampt Lab

- Robots, objects, and worlds
- Configurations
- Forward kinematics
- Inverse kinematics