To determine the system's moments of inertia and center-of-mass
position, a rough model was created in Autodesk Inventor, a parametric
CAD package. Each part in our initial design was measured
and weighed, and then modeled in Inventor. Once all parts were modeled,
and materials were chosen within Inventor so that each part was
within a gram or two of its actual mass, an assembly of the Segbot
was made. From the assembly, Inventor then calculated the moments
of inertia and the center of mass for the assembly as a whole.
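The composition Inventor performs, a mass-weighted average for the center of mass plus the parallel-axis theorem for the moment of inertia, can be sketched by hand for a planar case. The part values below are made up purely for illustration:

```python
# Composite center of mass and planar moment of inertia via the
# parallel-axis theorem. The part values below are illustrative only,
# not the Segbot's actual masses.

def assembly_properties(parts):
    """parts: list of (mass_kg, (x_m, y_m), I_own_kg_m2) tuples, where
    I_own is the part's inertia about its own center of mass (axis
    perpendicular to the xy plane)."""
    M = sum(m for m, _, _ in parts)
    cx = sum(m * p[0] for m, p, _ in parts) / M
    cy = sum(m * p[1] for m, p, _ in parts) / M
    # Parallel-axis theorem: I = I_own + m * d^2 for each part
    I = sum(I_own + m * ((p[0] - cx) ** 2 + (p[1] - cy) ** 2)
            for m, p, I_own in parts)
    return M, (cx, cy), I

# Two hypothetical parts: a 1 kg body and a 0.5 kg wheel offset 0.2 m.
parts = [(1.0, (0.0, 0.0), 0.01), (0.5, (0.2, 0.0), 0.002)]
M, com, I = assembly_properties(parts)
```

The assembly inertia is always at least the sum of the parts' own inertias, since the parallel-axis terms are non-negative.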
Derivation of Equations of Motion
A schematic of the segbot is shown below.
Figure x: Segbot Schematic
(Figure copied from Baloh and Parent, 2003)
The coordinates of the center of the wheel axle (in the xy plane)
are given by equation (1). Here we assume that there is no slip
between the wheels and the floor. The locations of the centers of
the left and right wheels of the segbot are then given by equation (2).
Finally, the location of the center of mass of the arm of the
segbot is given by
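The no-slip relations of equations (1)–(2) follow standard differential-drive geometry; a sketch of those relations, with assumed symbols r (wheel radius) and W (distance between the wheel centers):

```python
import math

# Standard no-slip differential-drive kinematics -- a sketch of the kind
# of relations in equations (1)-(2). The symbols r (wheel radius) and
# W (wheel separation) and their values are assumptions for illustration.

def axle_center_rates(psi_dot_l, psi_dot_r, phi, r=0.05, W=0.3):
    """Velocity of the axle midpoint and heading rate, given left/right
    wheel angular rates (rad/s) and the current heading phi (rad)."""
    v = r * (psi_dot_l + psi_dot_r) / 2.0      # forward speed
    phi_dot = r * (psi_dot_r - psi_dot_l) / W  # turn rate
    return v * math.cos(phi), v * math.sin(phi), phi_dot

def wheel_centers(x, y, phi, W=0.3):
    """Left and right wheel centers, offset W/2 perpendicular to the
    heading from the axle midpoint (x, y)."""
    ox, oy = -math.sin(phi) * W / 2.0, math.cos(phi) * W / 2.0
    return (x + ox, y + oy), (x - ox, y - oy)
```

With equal wheel rates the turn rate is zero and the robot drives straight along its heading.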
The Lagrangian (kinetic minus potential energy) is given by

L = K − V, with K = ½ Ψ̇ᵀ J Ψ̇ + ½ Ẋᵀ M Ẋ

where K is the kinetic energy of the segbot, V is the potential
energy of the segbot, Ψ and X are the rotational and
translational coordinates for the system, and J and M are the
rotational inertia and mass matrices, respectively. The independent
variables for the segbot are the wheel and arm angles; Ψ and
X are written in terms of these variables, where
η is the gear ratio between the wheels and the motors.
The rotational inertia and mass matrices J and M are diagonal.
The equations of motion for the scooter can be found from the
Euler-Lagrange equations

d/dt (∂L/∂q̇) − ∂L/∂q = τ (9)

where τ is a vector giving the torques acting on the wheels.
Calculating the derivatives on the left side of (9), we get
the equations of motion in standard robot equation form:

H(q) q̈ + C(q, q̇) q̇ + G(q) = τ (10)
Here H(q) is the inertia matrix, C(q, q̇) is the Coriolis
matrix, and G(q) is the gravitational force vector.
Finally, the torque term τ is given in terms of the motor commands,
where K_{motor} is the gain from the PWM command output
of the DSP to the torque output of the motor. From this term,
it is clear that the system is underactuated, because the torque
on the pendulum arm is the sum of the torques applied to the two
wheels. Reviewing the equations of motion, we see that the
nonlinearity in the system arises from the pendulum action of the
body and from the coupling between the wheel motion and the body rotation.
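Dynamics in this standard robot form can be simulated by solving for the accelerations at each step, q̈ = H⁻¹(τ − C q̇ − G). A minimal sketch on a single pendulum (a scalar stand-in with made-up parameters, not the full segbot model):

```python
import math

# Forward simulation of robot-form dynamics H(q)*qdd + C*qd + G(q) = tau,
# shown for a single pendulum (scalar case). Mass, length, and gravity
# values are assumptions for illustration, not segbot parameters.

m, l, g = 0.5, 0.3, 9.81

def qdd(q, qd, tau):
    H = m * l * l                 # inertia "matrix" (scalar here)
    C = 0.0                       # no Coriolis term for a single pendulum
    G = m * g * l * math.sin(q)   # gravity torque (q = 0 is hanging down)
    return (tau - C * qd - G) / H

def simulate(q0, qd0, tau=0.0, dt=1e-3, steps=1000):
    q, qd = q0, qd0
    for _ in range(steps):
        a = qdd(q, qd, tau)
        qd += a * dt              # semi-implicit Euler integration
        q += qd * dt
    return q, qd
```

A quick sanity check: with no applied torque the hanging equilibrium stays put, and a constant torque of m·g·l holds the pendulum horizontal with zero acceleration.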
Controller Design
We designed a state feedback controller based on a linearized
version of the system equations. To find this controller, we needed
to put the system into state-space form ẋ = f(x, u), taking the
state x to be the positions and velocities of the system, so that
the dynamics can be written as the nonlinear state equation (16).
Linearizing (16) about the upright operating point and substituting
in the values of the system parameters listed in table xx, we
arrive at the following linearized system:
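A linearization like this can also be checked numerically with central differences. A sketch, where f is a placeholder dynamics function standing in for the segbot's nonlinear state equation:

```python
import math

# Numerical linearization: A = df/dx, B = df/du by central differences.
# The function f below is a pendulum-like toy system used only to
# demonstrate the technique, not the segbot's dynamics.

def linearize(f, x0, u0, eps=1e-6):
    """Return A (n x n) and B (n x m) such that
    f(x, u) ~ f(x0, u0) + A (x - x0) + B (u - u0)."""
    n, m = len(x0), len(u0)
    def diff(k, wrt_state):
        xp, xm, up, um = list(x0), list(x0), list(u0), list(u0)
        if wrt_state:
            xp[k] += eps; xm[k] -= eps
        else:
            up[k] += eps; um[k] -= eps
        fp, fm = f(xp, up), f(xm, um)
        return [(a - b) / (2 * eps) for a, b in zip(fp, fm)]
    A_cols = [diff(k, True) for k in range(n)]
    B_cols = [diff(k, False) for k in range(m)]
    # Transpose the column lists into row-major matrices.
    A = [[A_cols[j][i] for j in range(n)] for i in range(n)]
    B = [[B_cols[j][i] for j in range(m)] for i in range(n)]
    return A, B

def f(x, u):
    theta, omega = x
    return [omega, -9.81 * math.sin(theta) + u[0]]

A, B = linearize(f, [0.0, 0.0], [0.0])
```

For this toy f, the Jacobians at the origin should match the hand linearization A = [[0, 1], [−9.81, 0]], B = [[0], [1]].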
Using LQR controller design, we arrived at the following
state-space controller for the linearized system:

(18)
If we set all desired positions and velocities to zero, the
controller directs the segbot to stay upright while holding its
position on the floor.
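An LQR gain can be computed by iterating the discrete-time Riccati recursion to a fixed point. A pure-Python sketch on a double-integrator toy model; the segbot's actual linearized matrices and LQR weights would replace these assumed values:

```python
import math

# Discrete-time LQR gain by Riccati iteration, sketched on a
# double-integrator toy model. A, B, Q, R below are assumed example
# values, not the segbot's matrices.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def dlqr(A, B, Q, R, iters=500):
    """Iterate P <- Q + A'P(A - BK) with K = (R + B'PB)^-1 B'PA.
    Assumes a single input, so (R + B'PB) is a scalar."""
    n = len(A)
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = mat_mul(transpose(B), P)             # 1 x n
        s = R[0][0] + mat_mul(BtP, B)[0][0]        # scalar R + B'PB
        K = [[v / s for v in mat_mul(BtP, A)[0]]]  # 1 x n gain row
        AmBK = [[A[i][j] - B[i][0] * K[0][j] for j in range(n)]
                for i in range(n)]
        AtPAmBK = mat_mul(mat_mul(transpose(A), P), AmBK)
        P = [[Q[i][j] + AtPAmBK[i][j] for j in range(n)]
             for i in range(n)]
    return K

dt = 0.05
A = [[1.0, dt], [0.0, 1.0]]   # discrete double integrator (toy model)
B = [[0.0], [dt]]
K = dlqr(A, B, [[1.0, 0.0], [0.0, 1.0]], [[1.0]])
```

The resulting closed loop A − BK is stable (spectral radius below one), which is the property the LQR design guarantees for a stabilizable system.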

Steering Control
Balancing the robot in one place is useful, but to steer the robot
manually with our joystick we needed to map a desired velocity
V_{des} and a desired heading φ_{des} to the wheel and arm
variables. To achieve steering and balance control, we keep the
goals for the segbot body position at zero and map from V_{des}
and φ_{des} to the desired wheel angles and angular rates. Looking
back to equation (1) and Figure 1, we have:

(19)
We can then solve (19) for the desired wheel angles and angular
rates, and use this control structure to drive at any heading with
a set velocity while stabilizing the segbot body.
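The mapping from a desired forward speed and turn rate to wheel-rate setpoints is the inverse of the no-slip kinematics; a sketch under assumed symbols (r wheel radius, W wheel separation, ω_des desired turn rate):

```python
# Map desired forward speed and turn rate to left/right wheel
# angular-rate setpoints (inverse differential-drive kinematics).
# r and W are assumed wheel radius and wheel separation values.

def wheel_setpoints(v_des, omega_des, r=0.05, W=0.3):
    psi_dot_l = (v_des - omega_des * W / 2.0) / r
    psi_dot_r = (v_des + omega_des * W / 2.0) / r
    return psi_dot_l, psi_dot_r
```

Driving straight gives equal wheel rates; turning in place gives equal and opposite rates.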

Instead of driving at a set orientation φ_{des}, we can treat the
heading input as an error input to the controller (17):

(21)
With this setup, the heading input defines an offset that makes
the segbot turn to the left or right.
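Forming the heading input as an error suggests wrapping it so the segbot turns the short way around; a small sketch of one common convention (not necessarily the report's implementation):

```python
import math

# Wrap a heading error into (-pi, pi] so the controller commands a
# turn the short way around. This is a common convention, shown as a
# sketch rather than the report's actual code.

def heading_error(phi_des, phi):
    e = (phi_des - phi) % (2 * math.pi)
    if e > math.pi:
        e -= 2 * math.pi
    return e
```

For example, a raw error of 3π/2 wraps to −π/2, a quarter turn the other way.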
Wall Following Control
To achieve right-wall-following control, we modified the controller
used in the lab: the algorithm that calculated the "turn" setpoint
from the left and front IR sensors instead calculates a desired
turn offset, which is then passed on to our state-space controller.

In short, if the front IR sensor does not detect an obstacle,
we follow the right wall at a set distance. The desired turn
offset value is found from

(22)
If the front IR sensor does detect an obstacle, that is, if the
front IR measurement is below its maximum value, the turn offset
is instead found from

(23)
The desired velocity V_{des} is set by the user.
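The two cases above can be combined in one turn-offset routine. A sketch with hypothetical gain, setpoint, and threshold values; the actual expressions are those of equations (22) and (23):

```python
# Turn-offset calculation for right-wall following -- a sketch with
# hypothetical gain/threshold values standing in for the report's
# expressions in equations (22)-(23).

FRONT_MAX = 80.0   # IR reading at or above this means "no obstacle" (assumed)
D_DES = 30.0       # desired distance to the right wall (assumed)
KP = 0.02          # proportional gain on the wall-distance error (assumed)
TURN_LEFT = 0.6    # fixed left-turn offset when blocked (assumed)

def turn_offset(front_ir, right_ir):
    if front_ir >= FRONT_MAX:
        # No obstacle ahead: hold the set distance to the right wall.
        # Sign convention (positive = turn away from the wall) is assumed.
        return KP * (right_ir - D_DES)
    # Obstacle ahead: command a fixed turn away from it.
    return TURN_LEFT
```

The offset returned here would feed the heading-error input of the state-space controller described above.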