Technical Information

Determining the System Parameters

To determine the system's moments of inertia and center-of-mass position, a rough model was created in Autodesk Inventor, a parametric CAD package. Each part used in our initial design was measured and weighed, then modeled in Inventor. Once all parts were modeled, with materials chosen in Inventor so that each part was within a gram or two of its actual mass, an assembly of the Segbot was made. From this assembly, Inventor calculates the moments of inertia and the center of mass of the system as a whole.

[Insert screenshot from Inventor]

[Insert table of mass properties]

Derivation of Equations of Motion

A schematic of the segbot is shown below.       

Figure x: Segbot Schematic (Figure copied from Baloh and Parent, 2003)

The coordinates of the center of the wheel axle (in the x-y plane) are given by


Here we assume that there is no slip between the wheels and the floor. The locations of the centers of the left and right wheels of the segbot are then given by



Finally, the location of the center of mass of the arm of the segbot is given by


The Lagrangian (kinetic minus potential energy) is given by


where K is the kinetic energy of the segbot, V is the potential energy of the segbot, Ψ and X are the rotational and translational coordinates for the system, and J and M are the rotational inertia and mass matrices, respectively. The independent variables for the segbot are the wheel and arm angles. Ψ and X are given by



where η is the gear ratio between the wheels and the motors. The rotational inertia and mass matrices J and M are given by the diagonal matrices



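With these definitions, the kinetic energy is quadratic in the velocities, so the Lagrangian can be written compactly as follows (this is the generic form consistent with the definitions above, not the report's expanded expression):

```latex
L = K - V = \tfrac{1}{2}\,\dot{\Psi}^{T} J\, \dot{\Psi} + \tfrac{1}{2}\,\dot{X}^{T} M\, \dot{X} - V
```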
The equations of motion for the segbot can be found from


where τ is a vector giving the torques acting on the wheels. Calculating the derivatives on the left-hand side of (9), we get the equations of motion (in standard robot-equation form)



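For reference, the Euler-Lagrange equations and the standard robot-equation form they produce are (generic forms only; the report's specific entries appear in the matrices defined below):

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = \tau,
\qquad
H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau
```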
H(q), the inertia matrix, is given by



The Coriolis matrix is given by



The gravitational force vector is given by


Finally, the torque term τ is given by


where Kmotor is the gain from the PWM command output of the DSP to the torque output of the motor. From this term it is clear that the system is underactuated, because the torque on the pendulum arm is the sum of the torques applied to the two wheels. Reviewing the equations of motion, we see that the nonlinearity in the system arises from the pendulum action of the body and from the coupling between the wheel motion and the body rotation.

Controller Design

We designed a state-feedback controller based on a linearized version of the system equations. To find this controller we needed to put the system into the form

where .

So we take


So the system can be written as


Linearizing (16) about the operating point

and substituting in the values of the system parameters listed in table xx

we arrive at the following linearized system


Using LQR controller design, we arrived at the following state-space controller for the linearized system.


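As an illustration of the LQR step, the sketch below computes a state-feedback gain for a generic linearized system ẋ = Ax + Bu. All numeric entries in A, B, Q, and R are placeholder values for a cart-with-pendulum structure, not the segbot parameters from table xx.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized model x_dot = A x + B u with state
# [position, velocity, arm angle, arm rate]. The entries are
# illustrative placeholders, NOT the identified segbot parameters.
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -2.5, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 30.0, 0.0]])
B = np.array([[0.0], [1.2], [0.0], [-3.4]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])  # state weights (arm angle penalized most)
R = np.array([[1.0]])                 # input weight

# Solve the continuous-time algebraic Riccati equation, then K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The LQR closed loop A - B K is stabilizing: every eigenvalue
# should have a negative real part.
print(np.linalg.eigvals(A - B @ K).real.max() < 0)
```

The weighting matrices Q and R are the usual tuning knobs: heavier weights on the arm-angle states trade wheel travel for a stiffer upright response.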

If we set all desired positions and velocities to zero, the controller will direct the segbot to stay upright while holding a position on the floor.

Steering Control

Balancing the robot in one place was all well and good, but to steer the robot manually with our joystick we needed to map a desired velocity (Vdes) and heading (φdes) to the wheel and arm variables. To achieve steering and balance control simultaneously, we maintain the goals for the segbot body position of  and , and map from Vdes and φdes to , , and . Looking back to equation (1) and Figure (1), we have:



We can solve for , , , and as


We can use this control structure to drive at any heading with a set velocity, while stabilizing the segbot body.
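A minimal sketch of such a mapping, assuming standard differential-drive kinematics (the function name, wheel radius r, and track width w below are our own illustrative choices, not measured segbot values):

```python
def wheel_setpoints(v_des, phi_dot_des, r=0.05, w=0.30):
    """Map a desired forward velocity (m/s) and turn rate (rad/s) to
    left/right wheel angular-velocity setpoints (rad/s).

    Assumes differential-drive kinematics with wheel radius r and
    track width w; both values here are placeholders.
    """
    omega_left = (v_des - (w / 2.0) * phi_dot_des) / r
    omega_right = (v_des + (w / 2.0) * phi_dot_des) / r
    return omega_left, omega_right

# Driving straight ahead: both wheels get the same setpoint.
straight = wheel_setpoints(0.5, 0.0)

# Turning left (positive turn rate): the right wheel must spin faster.
left_turn = wheel_setpoints(0.5, 1.0)
print(straight, left_turn)
```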


Instead of driving at a set orientation , we can consider the input as an error input to the controller (17):



With this setup, the input defines an offset which makes the segbot turn to the left or right.

Wall Following Control

To achieve right-wall-following control, we modified the controller used in the lab: the algorithm used to calculate the “turn” setpoint from the left and front IR sensors instead calculates , which is then passed on to our state-space controller.


In short, if the front IR sensor does not detect an obstacle in front, we follow the right wall at a set distance . The desired turn offset value  is found from



If the front IR sensor does detect an obstacle, that is, if the front IR measurement is below the maximum value, then



The desired velocity Vdes is set by the user.
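A minimal sketch of this wall-following logic, with hypothetical gains, distances, and a simple proportional form (the lab algorithm and its actual constants are not reproduced here), assuming the IR sensors return range in meters:

```python
def turn_setpoint(front_ir, wall_ir, d_des=0.30, ir_max=0.80,
                  k_wall=2.0, k_front=1.5):
    """Compute a desired turn offset from IR range readings (meters).

    front_ir is the front-facing sensor; wall_ir is the side-facing
    sensor toward the wall being followed. The proportional form, gains,
    and sign convention (positive = turn left) are illustrative
    assumptions, not the report's exact algorithm.
    """
    if front_ir < ir_max:
        # Obstacle ahead: turn left, away from the followed wall.
        return k_front * (ir_max - front_ir)
    # No obstacle ahead: regulate the distance to the wall.
    # Too far from the wall -> negative offset -> turn toward it.
    return -k_wall * (wall_ir - d_des)

print(turn_setpoint(1.0, 0.30), turn_setpoint(0.40, 0.30))
```

The returned offset plays the role of the turn setpoint passed to the state-space controller above.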