Before a robot arm can perform any task — from simple pick-and-place to precision welding — you need to understand two core pillars: Kinematics (describing geometric motion) and Dynamics (describing forces and torques). This is the first post in the Traditional Manipulation Control series, where we build from mathematical foundations to real-world robot deployment.
Why Kinematics and Dynamics Matter
Imagine you are commanding a 6-DOF robot arm to grasp an object at position [x, y, z] in space. The first question: how much should each joint rotate? That is the Inverse Kinematics (IK) problem. The next question: how much torque is needed at each joint to move? That is Dynamics.
Without mastering these two concepts, you cannot:
- Plan accurate trajectories
- Avoid singularities
- Control contact forces during grasping
- Optimize speed and energy consumption
DH Parameters — The Universal Language for Describing Robots
Denavit-Hartenberg (DH) parameters provide a standardized way to describe robot arm geometry. Each joint is characterized by 4 parameters:
| Parameter | Symbol | Meaning |
|---|---|---|
| Link length | $a_i$ | Distance between joint axes along $x$-axis |
| Link twist | $\alpha_i$ | Angle between joint axes about $x$-axis |
| Link offset | $d_i$ | Distance along $z$-axis (joint variable for prismatic) |
| Joint angle | $\theta_i$ | Rotation about $z$-axis (joint variable for revolute) |
The transformation matrix from frame $i-1$ to frame $i$:
$$T_i = \mathrm{Rot}_z(\theta_i) \cdot \mathrm{Trans}_z(d_i) \cdot \mathrm{Trans}_x(a_i) \cdot \mathrm{Rot}_x(\alpha_i)$$
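The composition of the four elementary transforms can be written out directly in NumPy. A minimal sketch (the `dh_transform` helper is illustrative, not a library function; $\alpha_1 = \pi/2$ is the standard UR5e value for joint 1, which the table below omits since it lists only $a$ and $d$):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH transform from frame i-1 to frame i:
    Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [ 0,       sa,       ca,      d],
        [ 0,        0,        0,      1],
    ])

# Example: joint 1 of the UR5e (a = 0, d = 0.1625) at theta = 0.3 rad
T1 = dh_transform(theta=0.3, d=0.1625, a=0.0, alpha=np.pi / 2)
print(np.round(T1, 4))
```

The upper-left 3×3 block is a proper rotation matrix and the last column carries the translation, exactly the structure described for $T_0^n$ below.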
Comparing DH Parameters: UR5e vs Panda
| Joint | UR5e $a$ (m) | UR5e $d$ (m) | Panda $a$ (m) | Panda $d$ (m) |
|---|---|---|---|---|
| 1 | 0 | 0.1625 | 0 | 0.333 |
| 2 | -0.425 | 0 | 0 | 0 |
| 3 | -0.3922 | 0 | 0 | 0.316 |
| 4 | 0 | 0.1333 | 0.0825 | 0 |
| 5 | 0 | 0.0997 | -0.0825 | 0.384 |
| 6 | 0 | 0.0996 | 0 | 0 |
| 7 | — | — | 0.088 | 0.107 |
The UR5e is a 6-DOF robot whose wrist is not actually spherical (its last three axes do not intersect at a single point), yet its geometry still admits a closed-form IK with up to 8 solutions. The Panda is 7-DOF (redundant — the extra degree of freedom makes it more flexible, but its IK has infinitely many solutions).
Forward Kinematics (FK) — From Joints to Position
Forward Kinematics answers: "Given all joint angles, where is the end-effector?"
The formula is straightforward — chain-multiply transformation matrices:
$$T_0^n = T_1 \cdot T_2 \cdots T_n$$
The matrix $T_0^n$ (4×4) contains:
- Rotation matrix (3×3): end-effector orientation
- Translation vector (3×1): position [x, y, z]
Code: Forward Kinematics with roboticstoolbox-python
```python
import roboticstoolbox as rtb
import numpy as np

# Create the UR5 model shipped with the toolbox (geometry close to the UR5e)
ur5e = rtb.models.UR5()

# Set joint angles (radians)
q = np.array([0, -np.pi/4, np.pi/4, -np.pi/2, np.pi/2, 0])

# Forward Kinematics — compute the end-effector pose
T = ur5e.fkine(q)
print("End-effector pose:")
print(T)
print(f"\nPosition: x={T.t[0]:.4f}, y={T.t[1]:.4f}, z={T.t[2]:.4f}")

# Visualize the robot
ur5e.plot(q, block=True)
```
FK always has exactly one solution — this is a key difference from IK.
Inverse Kinematics (IK) — From Position to Joints
Inverse Kinematics is the reverse problem: "Given a desired end-effector pose, find the joint angles."
This problem is much harder because:
- Multiple solutions: A 6-DOF robot can have up to 16 IK solutions
- No solution: Position is outside the workspace
- Infinite solutions: 7-DOF (redundant) robots
Two Approaches to IK
| Method | Advantages | Disadvantages |
|---|---|---|
| Analytical | Fast, exact, finds all solutions | Only exists for special geometries (e.g., spherical wrist or other decoupled structures) |
| Numerical | General, works for any robot | Slower, may not converge, finds only 1 solution |
Analytical IK for Spherical Wrist Robots
Robots with a spherical wrist (last 3 axes intersecting at a single point, e.g., the classic PUMA 560) allow the problem to decompose:
- Position IK: Find $\theta_1, \theta_2, \theta_3$ from wrist center position
- Orientation IK: Find $\theta_4, \theta_5, \theta_6$ from end-effector orientation
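The full 6-DOF derivation is too long for this post, but the flavor of analytical IK shows clearly on a planar 2-link arm: the elbow angle follows from the law of cosines and the shoulder angle from simple geometry, giving the classic elbow-up/elbow-down solution pair. A minimal sketch (function names and link lengths are illustrative):

```python
import numpy as np

def ik_2link(x, y, L1=0.4, L2=0.3):
    """Analytical IK for a planar 2R arm; returns both elbow configurations."""
    r2 = x**2 + y**2
    c2 = (r2 - L1**2 - L2**2) / (2 * L1 * L2)   # law of cosines
    if abs(c2) > 1:
        return None                              # target outside the workspace
    s2 = np.sqrt(1 - c2**2)
    sols = []
    for sign in (+1, -1):                        # elbow-down / elbow-up
        q2 = np.arctan2(sign * s2, c2)
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        sols.append((q1, q2))
    return sols

def fk_2link(q1, q2, L1=0.4, L2=0.3):
    """Forward kinematics, used here to verify the IK solutions."""
    return (L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
            L1 * np.sin(q1) + L2 * np.sin(q1 + q2))

for q1, q2 in ik_2link(0.5, 0.2):
    print(np.round(fk_2link(q1, q2), 6))  # both recover the target (0.5, 0.2)
```

Note the structure: a finite, explicit set of solutions computed in closed form, with the out-of-workspace case detected exactly rather than by failed convergence.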
The IKFast algorithm (from OpenRAVE) can automatically generate analytical IK code for any robot with a suitable structure.
Numerical IK
```python
import roboticstoolbox as rtb
import numpy as np
from spatialmath import SE3

ur5e = rtb.models.UR5()

# Desired pose: [x=0.5, y=0.2, z=0.3], pointing straight down
T_desired = SE3(0.5, 0.2, 0.3) * SE3.Rx(np.pi)

# Levenberg-Marquardt method (numerical)
sol = ur5e.ikine_LM(T_desired)
print(f"IK solution found: {sol.success}")
print(f"Joint angles (deg): {np.degrees(sol.q)}")

# Verify: run FK to check
T_actual = ur5e.fkine(sol.q)
pos_error = np.linalg.norm(T_desired.t - T_actual.t)
print(f"Position error: {pos_error*1000:.2f} mm")
```
Jacobian Matrix — Bridging Joint and End-Effector Velocities
The Jacobian ($J$) is a 6×n matrix relating joint velocities ($\dot{q}$) to end-effector velocity ($v$):
$$v = J(q)\,\dot{q}$$
Where $v = [v_x, v_y, v_z, \omega_x, \omega_y, \omega_z]^T$ (linear + angular velocity).
What is the Jacobian Used For?
- Cartesian velocity control: $\dot{q} = J^{-1} v$
- Singularity detection: When $\det(J) \approx 0$, the robot loses degrees of freedom
- End-effector force mapping: $\tau = J^T F$ (mapping Cartesian forces at the end-effector to joint torques)
- Resolved-rate control: Controlling the robot along Cartesian trajectories
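The resolved-rate idea can be sketched end to end on a planar 2R arm with an analytic Jacobian (link lengths and gains are illustrative choices, not from any specific robot): invert $J$ at each step to turn a desired Cartesian velocity into joint velocities.

```python
import numpy as np

L1, L2 = 0.4, 0.3

def fk(q):
    """Tip position of a planar 2R arm."""
    q1, q2 = q
    return np.array([L1*np.cos(q1) + L2*np.cos(q1+q2),
                     L1*np.sin(q1) + L2*np.sin(q1+q2)])

def jacobian(q):
    """Analytic 2x2 Jacobian: d(tip position)/d(joint angles)."""
    q1, q2 = q
    return np.array([
        [-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
        [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)],
    ])

# Drive the tip toward a target with resolved-rate steps
q = np.array([0.5, 0.8])
target = np.array([0.45, 0.25])
dt, gain = 0.01, 2.0
for _ in range(500):
    v = gain * (target - fk(q))                # desired Cartesian velocity
    qdot = np.linalg.pinv(jacobian(q)) @ v     # q̇ = J⁺ v
    q = q + qdot * dt
print("final tip:", np.round(fk(q), 4))
```

The pseudoinverse is used instead of a plain inverse so the same loop survives a momentarily ill-conditioned $J$; the tip converges to the target as long as the path stays inside the workspace.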
Singularity — The Robot's "Dead Points"
Singularity occurs when the Jacobian becomes rank-deficient. At singularity:
- The robot loses the ability to move in one or more directions
- Joint velocities blow up ($J^{-1}$ does not exist)
Three common singularity types for 6-DOF robots:
- Shoulder singularity: Wrist center lies on joint 1 axis
- Elbow singularity: Arm is fully extended
- Wrist singularity: Two wrist axes align ($\theta_5 = 0$)
```python
# Compute the Jacobian and check for singularity
J = ur5e.jacob0(q)
manipulability = np.sqrt(np.linalg.det(J @ J.T))
print(f"Manipulability index: {manipulability:.4f}")
if manipulability < 0.01:
    print("WARNING: Near singularity!")
```
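When a manipulability check like the one above fires, a common fallback is the damped least-squares (DLS) inverse, $\dot{q} = J^T (J J^T + \lambda^2 I)^{-1} v$, which trades a little tracking accuracy for bounded joint velocities. A minimal NumPy comparison on a deliberately near-singular Jacobian (the numbers and the damping factor $\lambda$ are illustrative tuning choices):

```python
import numpy as np

# A nearly rank-deficient 2x2 Jacobian (arm almost fully extended)
J = np.array([[1.0, 0.999],
              [1.0, 1.001]])
v = np.array([0.1, 0.0])      # desired Cartesian velocity

# Plain pseudoinverse: joint velocities blow up near the singularity
qdot_pinv = np.linalg.pinv(J) @ v

# Damped least-squares: bounded joint velocities
lam = 0.1                     # damping factor (tuning choice)
qdot_dls = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), v)

print("pinv |qdot| =", np.linalg.norm(qdot_pinv))
print("DLS  |qdot| =", np.linalg.norm(qdot_dls))
```

The pseudoinverse solution here has a norm in the tens, while the DLS solution stays well below 1 — the damping term caps the amplification at roughly $1/(2\lambda)$.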
Dynamics — Force and Torque Modeling
While kinematics only considers the geometry of motion, dynamics considers the forces that cause motion. The general dynamics equation:
$$\tau = M(q)\ddot{q} + C(q, \dot{q})\dot{q} + g(q) + J^T F_{ext}$$
| Component | Meaning |
|---|---|
| $M(q)$ | Inertia matrix |
| $C(q, \dot{q})$ | Coriolis and centrifugal forces |
| $g(q)$ | Gravity forces |
| $J^T F_{ext}$ | External forces at end-effector |
Two Methods for Computing Dynamics
Lagrangian (Energy-based):
- Compute kinetic energy ($T$) and potential energy ($V$): $L = T - V$
- Apply Euler-Lagrange equations
- Advantage: Yields symbolic expressions
- Disadvantage: Computationally expensive for many DOFs
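To make the Lagrangian recipe concrete, here is the full derivation for the simplest possible arm: a single revolute link with point mass $m$ at distance $l$, angle $q$ measured from the downward vertical.

$$T = \tfrac{1}{2} m l^2 \dot{q}^2, \qquad V = -m g l \cos q, \qquad L = T - V$$

$$\tau = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = m l^2 \ddot{q} + m g l \sin q$$

Matching terms against the general equation: $M(q) = m l^2$, $C(q, \dot{q}) = 0$ (no Coriolis terms with one joint), and $g(q) = m g l \sin q$.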
Newton-Euler (Force-based):
- Forward pass: Compute velocities, accelerations from base to end-effector
- Backward pass: Compute forces, torques from end-effector to base
- Advantage: Computationally efficient $O(n)$, suitable for real-time
- Disadvantage: Harder to obtain symbolic expressions
```python
import roboticstoolbox as rtb
import numpy as np

ur5e = rtb.models.UR5()
q = np.array([0, -np.pi/4, np.pi/4, -np.pi/2, np.pi/2, 0])
qd = np.zeros(6)  # Zero joint velocities

# Gravity torque — torque needed to hold the robot stationary
tau_g = ur5e.gravload(q)
print(f"Gravity torques (Nm): {np.round(tau_g, 2)}")

# Inertia matrix
M = ur5e.inertia(q)
print(f"\nInertia matrix shape: {M.shape}")
print(f"Condition number: {np.linalg.cond(M):.1f}")

# Inverse dynamics — compute torques for a desired trajectory
qdd = np.array([0.1, 0.2, -0.1, 0.05, -0.05, 0.1])  # Desired acceleration
tau = ur5e.rne(q, qd, qdd)
print(f"\nRequired torques (Nm): {np.round(tau, 2)}")
```
Workspace Analysis — How Far Can the Robot Reach?
The workspace is the set of all positions the end-effector can reach. We distinguish:
- Reachable workspace: Positions reachable with at least one orientation
- Dexterous workspace: Positions reachable with any orientation
```python
import roboticstoolbox as rtb
import numpy as np
import matplotlib.pyplot as plt

ur5e = rtb.models.UR5()

# Monte Carlo workspace sampling
n_samples = 10000
positions = []
for _ in range(n_samples):
    q_random = ur5e.random_q()
    T = ur5e.fkine(q_random)
    positions.append(T.t)
positions = np.array(positions)

# Visualize the workspace
fig, axes = plt.subplots(1, 2, figsize=(14, 6))
axes[0].scatter(positions[:, 0], positions[:, 1], s=1, alpha=0.3, c='blue')
axes[0].set_xlabel('X (m)')
axes[0].set_ylabel('Y (m)')
axes[0].set_title('Workspace — Top View (XY)')
axes[0].set_aspect('equal')
axes[0].grid(True)

axes[1].scatter(positions[:, 0], positions[:, 2], s=1, alpha=0.3, c='red')
axes[1].set_xlabel('X (m)')
axes[1].set_ylabel('Z (m)')
axes[1].set_title('Workspace — Side View (XZ)')
axes[1].set_aspect('equal')
axes[1].grid(True)

plt.tight_layout()
plt.savefig('ur5e_workspace.png', dpi=150)
plt.show()
```
Practical Application: FK/IK in Pick-and-Place
A basic pick-and-place workflow using FK and IK:
- Camera detects the object -> Cartesian coordinates [x, y, z]
- IK -> compute joint angles that place the gripper at the grasp pose
- FK verification -> confirm the actual pose after the motion
- Jacobian -> control the slow approach phase (resolved-rate)
```python
# Simple pick-and-place workflow
import roboticstoolbox as rtb
import numpy as np
from spatialmath import SE3

ur5e = rtb.models.UR5()

# Pick pose (from camera)
pick_pos = SE3(0.4, 0.3, 0.1) * SE3.Rx(np.pi)
# Approach pose (10 cm above the pick pose)
approach_pos = SE3(0.4, 0.3, 0.2) * SE3.Rx(np.pi)
# Place pose
place_pos = SE3(-0.3, 0.4, 0.15) * SE3.Rx(np.pi)

# Solve IK for each pose (in production, check sol.success before using sol.q)
q_approach = ur5e.ikine_LM(approach_pos).q
q_pick = ur5e.ikine_LM(pick_pos).q
q_place = ur5e.ikine_LM(place_pos).q

# Check manipulability at each waypoint
for name, q in [("Approach", q_approach), ("Pick", q_pick), ("Place", q_place)]:
    J = ur5e.jacob0(q)
    m = np.sqrt(np.linalg.det(J @ J.T))
    print(f"{name}: manipulability = {m:.4f}")
```
References
- Craig, J.J. Introduction to Robotics: Mechanics and Control (4th ed.). Pearson, 2017.
- Siciliano, B., Sciavicco, L., Villani, L., Oriolo, G. Robotics: Modelling, Planning and Control. Springer, 2009. DOI: 10.1007/978-1-84628-642-1
- Diankov, R. Automated Construction of Robotic Manipulation Programs (IKFast). PhD Thesis, Carnegie Mellon University, 2010.
- Corke, P. Robotics, Vision and Control (3rd ed.). Springer, 2023. DOI: 10.1007/978-3-031-07262-8
Conclusion
Kinematics and dynamics are the indispensable foundation for working with robot arms. In this post, we covered:
- DH parameters — standardized geometric description
- FK and IK — converting between joint space and Cartesian space
- Jacobian — velocity mapping and singularity detection
- Dynamics — force modeling with Lagrangian and Newton-Euler methods
In the next post, we will use this knowledge to build trajectory planning — planning paths in Joint Space and Cartesian Space.
Related Posts
- Inverse Kinematics for 6-DOF Robots — A deeper dive into IK methods
- MoveIt2 Motion Planning — The most popular planning framework for ROS 2
- Robot Simulation with MuJoCo — Accurate dynamics simulation in MuJoCo