## Geometric Kinematics

Oftentimes, behavioral robotics is approached with the perspective that it, as a field, is almost singularly subject to qualitative evaluation. That is to say, behaviors are typically designed in an ad hoc manner, and evaluated solely by empirical testing. I've even heard it said, by some active in the field, that behaviors cannot be proven to achieve the goal task, only carefully designed and then verified by experiment. Of course, as a mathematician, I can't stand this, and so I've set out to take an analytical approach to this problem, in the hopes of establishing a theoretically sound method of proving the effectiveness of a designed behavior (in the theoretical sense- tolerance in sensing and control does impede this somewhat).

To do this, I've focused principally on a two-pronged approach, comprising algorithm examination and geometric analysis. The investigation of the algorithm naturally applies to the representation of the behavior, and the geometry to the analysis of the motion of the physical robot chassis. Typically, I create three representations for the examination: a state machine (or similar) for the autonomous action of the robot, a model of the robot itself (the relationships between actuators and sensors), and a model of the world in which the agent acts. We'll see each of these repeatedly below.

Before I get into the details, I want to point out that this approach seeks to analyze and prove the efficacy of a behavior which is already designed, and perhaps guide the refinement of behaviors as well. In general, it's not a synthetic method, and I have not really tried to approach the problem of creating behaviors with these tools (that's the job of the Murin algorithm). It might be possible, but I don't really conceptualize the problems that way.

What we'll be doing here is an investigation of a suite of behaviors to illustrate all the different techniques and tools I use. In particular, they're mostly the behaviors used in the Mordax platform. By doing this I'll have a nice, contextualized way to explain all the different techniques and how the various conceptual objects work together to build a full picture of the agent, its body, and its environment.

Let's start by talking about the most basic behavior- line following. A basic line following platform looks something like this:

Looking at the labels on the model, s0 and s1 are sensors which are capable of distinguishing between the line and the floor ('line' means the indicator that guides the trajectory of the robot, 'floor' means everything else the sensors might encounter), d0 and d1 are drive wheels capable of moving forward and backward, preferably at various speeds, and c0 is a castor. Recalling from basic robot kinematics, this chassis grants 2 degrees of freedom (dr and dϴ).

Let's define a few terms here. Firstly, we will call the point about which the chassis rotates when there is any turning motion at all the center of rotation, or COR. When one wheel is moving forward at the same rate at which the other is moving backward, then the COR is at the midpoint between the two wheels, i.e. it bisects the axle line (the line from the rotation point of one wheel to the other).

By varying the relative speeds of each wheel, we can move the COR. When both wheels are moving forward or backwards, the COR is exterior to the robot (outside the space between the wheels- the COR is always on the projected axle line). When one wheel is moving backwards and the other forward, the COR is between the wheels: interior to the robot. We can use this information, along with the knowledge that the wheels will mark out circular paths, to calculate the location of the COR relative to the wheels by analyzing the paths taken by the wheels during a maneuver. A maneuver is an action taken by the robot during which the actuator states do not change (i.e. the motors have fixed speeds).

If we then draw lines from the COR to each sensor, we'll have what I will refer to as the sensor lines:

Red is the axle line, blue is the COR, and green are the sensor lines.

We will use these sensor lines extensively: by drawing an arc of a circle centered at the COR, at the radius of the sensor line, starting at the sensor's current position, we can see where the sensor will travel during the current maneuver. Behaviors are series of maneuvers, triggered by states, intended to result in a specific final disposition of the robot.

This shows the predicted path of s0 for a maneuver which places the COR as depicted and moves roughly 30 degrees about it.

Let's talk a little about the mathematical formalization of all these terms. The standard approach to robot kinematics is to use linear algebra, mostly rotation matrices, to compute the disposition of the robot during a maneuver. I, however, prefer the use of vector mathematics and geometry. I think that they provide a more intuitive approach and a more accessible set of analytical tools.

Starting with the basic notions- we take as given the standard kinematic assumption that wheel slipping is negligible, and that motion due to the wheel occurs parallel to the rotational plane of the wheel. As a result, the wheel is always tangential to the path drawn by the contact point (the point at which the wheel exerts friction on the floor). Because the wheel is always tangential to the path, and the motion of the wheels during a maneuver is constant, we can deduce that the paths during a maneuver are either straight lines or circular motion (as constant wheel velocities imply constant angular velocity of the orientation of the wheels).

Our first task is to locate the COR. We can break the problem up into two cases- when the COR is internal to the robot, and when it is external. Let r0 be the distance from d0 to the COR, and r1 be the distance from d1 to the COR. Recalling that the length of the axle line within the robot chassis is la, we can produce the constraints: la = r0 + r1 (COR internal) and r0 + la = r1 or r1 + la = r0 (COR external).

We can also use the fact that the wheel paths are circular to derive an additional constraint: because the axle line is fixed, the angular distance about the COR during a maneuver is the same for both wheels, and the path length over the duration of the maneuver is vi∙t, giving:

vi∙t = ri∙ϴC, and therefore v0/r0 = v1/r1

These two constraints are enough to derive a set of equations describing the location of the COR. Since we have three versions of the constraint due to the distance sum, we will start with two versions of the calculation:

ri = la/(1 + v1-i/vi) (COR internal)

ri = la/(1 - v1-i/vi) (COR external)

For the external case, note that ri is the greater distance. In these, we have assumed vi to be a magnitude velocity, but if we include the sign, we can use one form for both cases, since the only difference between the two equations is the sign of the ratio of velocities in the denominator. Here, if v1-i and vi have the same sign, the form is that of the external r, and when they are different, that of the internal.
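As a quick numeric check of the signed-velocity form (my own sketch, not part of the original derivation; the names `la`, `v_i`, and `v_other` are illustrative):

```python
# Illustrative only: locating the COR from signed wheel velocities using
# the unified form r_i = la / (1 - v_other / v_i). Assumes v_i != 0 and
# v_other != v_i (equal velocities mean straight-line motion, no finite COR).

def cor_distance(la, v_i, v_other):
    # Opposite signs -> result < la (COR interior, between the wheels);
    # same signs with |v_other| < |v_i| -> result > la (COR exterior).
    return la / (1.0 - v_other / v_i)

# Spin in place: equal and opposite speeds, so the COR bisects the axle line.
print(cor_distance(0.1, 1.0, -1.0))  # 0.05
# Gentle arc: both wheels forward, inner wheel at half speed.
print(cor_distance(0.1, 1.0, 0.5))   # 0.2
```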

By establishing unit vectors at di, each facing towards the other wheel, we can construct a vector which points from the COR to di:

**r**i = -(la/(1 - v1-i/vi))∙**r**iu

Note the negative sign, which makes the vector point from the COR to the wheel, which will be useful shortly.

Because of the way the unit vectors are defined, we can also write compact forms for r1-i:

**r**1-i = **r**i - la∙**r**1-iu

but note that **r**iu = -**r**1-iu, so:

**r**1-i = **r**i + la∙**r**iu

For both the external and internal cases, the expression is the same.

This result completes the description of the locations of the wheels with respect to the COR in terms of only la and the vi, which are independent parameters. Additionally, for clarity, let vr = v1-i/vi and α = 1/(1 - vr), so that **r**i can be compactly written as:

**r**i = -αla∙**r**iu

This α will show up consistently throughout this examination. Throughout this work, a subscript 'u' will denote a unit vector.

Next, we're going to locate the sensors. The abstract kinematic description needs to be pegged to the frame, not the COR, so to start, we mark the sensors' locations with respect to di (which wheel does not matter, as we will see).
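Here is a small illustrative sketch of the compact vector forms (my own, with **r**iu chosen along +x purely for demonstration); it checks that the two COR-to-wheel vectors differ by exactly the axle line:

```python
# Illustrative sketch of the compact COR-relative wheel vectors:
#   r_i   = -alpha * la * r_iu,  alpha = 1 / (1 - vr),  vr = v_1-i / v_i
#   r_1-i = r_i + la * r_iu
# r_iu (the unit vector at wheel i toward the other wheel) is taken along
# +x here purely for demonstration.

la, vr = 0.1, 0.5
alpha = 1.0 / (1.0 - vr)                                  # alpha = 2.0
r_iu = (1.0, 0.0)

r_i  = (-alpha * la * r_iu[0], -alpha * la * r_iu[1])     # (-0.2, 0.0)
r_1i = (r_i[0] + la * r_iu[0], r_i[1] + la * r_iu[1])     # (-0.1, 0.0)

# The two COR-to-wheel vectors must differ by exactly the axle line:
sep = ((r_1i[0] - r_i[0])**2 + (r_1i[1] - r_i[1])**2) ** 0.5
print(sep)  # la = 0.1
```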

At this point, we have a fundamental kinematic description of the robot- sensors and actuators. Let's say we choose (somewhat arbitrarily) d0 as the origin of the chassis coordinate system. Then we can locate each pertinent kinematic feature:

**d**0c = 0

**d**1c = la∙**r**0u

**s**0c = **s**0

**s**1c = la∙**r**0u + **s**1

Where a 'c' subscript denotes a location relative to the defined origin.

Now this description can be compacted, and we may describe a robot kinematically as <la,**s**0,**s**1> (assume the **s**i are relative to di). Furthermore, we may describe the robot in a maneuver by also specifying the ratio of its wheel velocities (vr) and the angle through which it travels around the COR (ϴC). This information is sufficient to specify the COR-relative location of all interesting kinematic components of the robot during a maneuver.

With this knowledge in mind, let's now examine a useful mathematical technique when dealing with vectors: the rotation operator. By way of explanation, I'm going to derive the form of the operator I prefer to use when dealing with vectors.

Consider a vector **k**, as below. Let's say we want to rotate **k** about O by ϴ (imagine **k** is a rod affixed to a pivot at O, with a handle sticking up out of O- we're going to grab the handle and turn **k** through ϴ radians). If we draw **k** as it will be after this rotation (**k**'), it will look like so:

We could express this new vector as the product of the appropriate rotation matrix with **k**, but let's instead consider a coordinate-independent form. We'll break the resultant vector into two parts- one parallel to **k** and one perpendicular to it, **k**a and **k**e, respectively, so **k**' = **k**a + **k**e. Note that |**k**'| = |**k**|.

This illustrates a very convenient right triangle wherein we can simply determine the measures of **k**a and **k**e using elementary trigonometry. If we now let **k**u represent the unit vector in the direction of **k**, we can express **k**a as:

**k**a = |**k**|cos(ϴ)**k**u

since **k**a is parallel to **k**. However, to express **k**e, we will need a unit vector which is perpendicular to **k**. We can, however, construct this vector easily using the cross product operator.

Recollect that the cross product of any two vectors will be mutually perpendicular to both. We will naturally use **k**u as one of the vectors, but what of the other? Note that we want the result to lie in the plane of movement. By definition, all vectors in the plane are perpendicular to **z**u, the unit vector normal to the plane. We will use this as the second vector in the cross product. In the metaphor of the rod and handle rotation, **z**u is the handle.

So we have a new vector **k**p = **k**u⨯**z**u. Note that **k**p is a unit vector, in-plane with **k** and perpendicular to **k**u. So, we can write:

**k**e = |**k**|sin(ϴ)**k**p = |**k**|sin(ϴ)(**k**u⨯**z**u)

Notice that from the right hand rule, we can deduce that this formalization regards a clockwise rotation as a positive ϴ, and a counterclockwise rotation as a negative ϴ.

We can now write **k**' in terms of **k**, the original vector; ϴ, the rotation angle; and **z**u, the normal vector defining the plane in which the rotation occurs:

**k**' = |**k**|cos(ϴ)**k**u + |**k**|sin(ϴ)(**k**u⨯**z**u)

or,

**k**' = |**k**|[cos(ϴ)**k**u + sin(ϴ)(**k**u⨯**z**u)]

I'm typically going to write this rotation operation as a single function: **k**' = Rot(**k**,ϴ,**z**u)- read as: **k**' is the rotation of **k** by ϴ radians about **z**u. To return to the discussion of robot kinematics, we will be using this operation to describe the path of various parts of the robot during a maneuver.

So now, with our Rot operator and our d0-centered kinematic description (<la,**s**0,**s**1>), let's formalize a robot maneuver fully. We calculate the **r**i vector from the COR to di, the wheel on the outside of the turn (i.e. the greater distance from the COR):

**r**i = -αla∙**r**iu

and then **r**1-i = **r**i - la∙**r**1-iu, but note that **r**iu = -**r**1-iu, so:

**r**1-i = **r**i + la∙**r**iu

From these, we can locate the sensors w.r.t. the COR very easily:

**s**iC = **r**i + **s**i

**s**1-iC = **r**1-i + **s**1-i

Let's say that the robot's motion during a maneuver moves through ϴC radians- which can be computed either from the final positions of any of the kinematic elements (which will be our normal approach), or from the time taken to complete the maneuver and the individual velocities of the wheels (essentially equivalent to inertial measurement). Then we can compute the resultant positions of the kinematic elements by applying a rotation to the COR-reference vectors:

**r**'i = Rot(-αla∙**r**iu,ϴC,**z**u)

**r**'1-i = Rot(**r**i + la∙**r**iu,ϴC,**z**u)

**s**'i = Rot(**r**i + **s**i,ϴC,**z**u) - **r**'i

**s**'1-i = Rot(**r**1-i + **s**1-i,ϴC,**z**u) - **r**'1-i

(Note that the **s**j vectors need to be re-cast w.r.t. their respective wheels, since the rotation operator does not translate the 'origin'.)

This is of course a direct result of the fact that the chassis is fixed and, consequently, whatever center one part of the chassis orbits circularly, all other points orbit that center circularly as well.

This reasoning also leads us to another technique- we can graphically plot the kinematic paths during a maneuver by drawing circles centered at the COR which pass through the initial points. If the robot has some sensing condition which will end the maneuver, then we can use this construction to locate the ϴC at which the maneuver terminates, and then construct the remaining kinematic placements from this. This is why we prefer, conceptually, the vector-type rotation operator: it syncs with the use of this tracing approach.

Let's look at an example. First, let's make an initial kinematic representation of the robot R = <la,**s**0,**s**1> (notice that R is the chassis, and M = <vr,ϴC> is a maneuver). Here is the robot:
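As a sanity check on the formalism, here is one possible implementation of the Rot operator and of the sensor update **s**'i = Rot(**r**i + **s**i,ϴC,**z**u) - **r**'i (a sketch of my own; the function names are hypothetical):

```python
# A possible implementation of the Rot operator (names hypothetical):
#   Rot(k, theta, z_u) = |k| [ cos(theta) k_u + sin(theta) (k_u x z_u) ]
# Vectors are 3-tuples with z = 0 in the plane of motion; z_u = (0, 0, 1).
# Per the right-hand-rule convention above, positive theta is clockwise.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rot(k, theta, z_u=(0.0, 0.0, 1.0)):
    mag = math.sqrt(k[0]**2 + k[1]**2 + k[2]**2)
    k_u = (k[0]/mag, k[1]/mag, k[2]/mag)
    k_p = cross(k_u, z_u)              # unit, in-plane, perpendicular to k_u
    c, s = math.cos(theta), math.sin(theta)
    return tuple(mag * (c*u + s*p) for u, p in zip(k_u, k_p))

# Sensor update for a maneuver: s'_i = Rot(r_i + s_i, thetaC, z_u) - r'_i
def sensor_after_maneuver(r_i, s_i, thetaC):
    s_iC = tuple(a + b for a, b in zip(r_i, s_i))   # sensor w.r.t. the COR
    r_i_new = rot(r_i, thetaC)
    return tuple(a - b for a, b in zip(rot(s_iC, thetaC), r_i_new))

# A quarter turn of the x unit vector comes out clockwise:
print(rot((1.0, 0.0, 0.0), math.pi / 2))   # ~ (0.0, -1.0, 0.0)
```

Note that rotating the COR-relative sensor vector and then subtracting the rotated wheel vector recovers the sensor's chassis-relative offset, which is exactly the re-casting step mentioned in the text.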

and let's also say that the robot is programmed to perform a maneuver <vr,ϴC> such that the COR is located relative to the robot like so:

And let's also say that there are two regions, which we'll schematically represent on paper like this:

which the sensors s0 and s1 can distinguish between. Let's define the behavior such that the robot will terminate movement when s1 detects region 2. This is important, because it supplies the key information needed to determine the disposition of the robot at the terminus of the action- it will be positioned at the first intersection of s1's path and some region 2 space.

Now assume then that the robot is placed in the following initial configuration and starts its maneuver:

What we're going to do is plot the path of s1 by drawing the circle at the COR with radius |s1C|. We can then find the final position of s1, as it will be the point on the circle where s1 first encounters Region 2, as per our definition of the behavior.

We also draw the path circles for the other kinematic elements.

We then draw the lines s1 and s1':

For each other element, we find the locations of the intersection of the element's path circle with these lines (in red below for s0):

and take the measure between them (in green).

Then, we take that measure, and starting from the initial point of that new element, mark the place on its path circle which is that distance from the initial position. That is the final position of the current element (shifted in green):

To see how this works, notice that because the yellow lines are marked at the radius of s0, the length of the green line marks out, at distance |**s**0|, the angular distance between s1' and s1. Therefore, by measuring that distance from s0 along s0's path circle, you mark the same angular distance along that path circle from s0, equivalent to rotating s0 by that angle. The line is shifted because you want the two different points to move through the same angular distance, but their angular positions are offset.

This process can be repeated for all the remaining kinematic parts, which reveals the disposition of the robot at the end of the maneuver (R' = <la,**s**0',**s**1'>). One may also shift two points, and use triangulation to reconstruct the remaining points from those, given that the positions of the other objects are known distances from the two shifted anchor points. The below illustration uses the rotation method on all elements, to illustrate the preservation of chassis-fixed positions. Using this deductive geometry, we can fully specify the final disposition of the robot after the maneuver, based entirely on the known parameters of the behavior- the conditions which terminate the action, and the 'movement' (in terms of the COR, as calculated or measured).
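The graphical construction can also be mirrored numerically (a hypothetical sketch of my own, not part of the original method): recover ϴC from s1's initial and final positions about the COR, then sweep any other COR-relative point through the same angle:

```python
# Numeric analogue of the graphical construction (names hypothetical):
# recover the maneuver angle thetaC from s1's initial and final positions
# (both relative to the COR), then sweep another COR-relative point, s0,
# through the same angle. Positive angles are clockwise, as in the text.
import math

def maneuver_angle(a, b):
    # Signed angle from a to b in the plane (clockwise positive).
    return -math.atan2(a[0]*b[1] - a[1]*b[0], a[0]*b[0] + a[1]*b[1])

def sweep(p, theta):
    # Rotate a COR-relative point clockwise by theta.
    c, s = math.cos(theta), math.sin(theta)
    return (p[0]*c + p[1]*s, -p[0]*s + p[1]*c)

s1, s1_new = (1.0, 0.0), (0.0, -1.0)       # a quarter turn, clockwise
thetaC = maneuver_angle(s1, s1_new)        # pi/2
print(sweep((2.0, 0.0), thetaC))           # ~ (0.0, -2.0)
```

This is the same chord-transfer idea: every chassis point sweeps the identical angular distance about the COR, just at its own radius.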

At this point, I'd like to address the placement of the robot in a global frame of reference. Up till now, we've been looking at maneuvers which are referenced against the COR, which is individual to a specific maneuver. This is useful analytically, but we may also want to track the robot's position as it progresses through multiple maneuvers.

This will require some additional information, in the form of an extra vector **g** of two elements- the global location of a fixed reference point on the chassis (call it **p**, and we'll let **p** point to d0 for simplicity) and a vector that represents the heading of the robot. For convenience, we'll use **r**0u (the unit vector in the **r**0 direction), though it could be any frame-locked vector (it does not need to be in the direction of travel, but does need to be fixed with respect to that vector).

Let us then say we have a robot maneuver M = <vr,ϴC>, with the robot's global disposition being **g** = <**p**,**r**0u>. As usual, we say the maneuver progresses through ϴC radians. To compensate for the rotation, we will recast **p** as **p**C, which will be a vector from the origin to the COR (here is why we chose **p** to point to d0):

**p**C = **p** - **r**0

We then compute all the normal rotations that generate **r**'i, **r**'1-i, **s**'i, and **s**'1-i. Then, once we have the new relative vectors, we can cast the position vector back to the robot's chassis:

**p**' = **p**C + **r**0'

or, more primitively:

**p**' = **p** - **r**0 + **r**0'

For the adjustment of **r**0u, simply note that as it points parallel to a line drawn from the COR to d0, we only have to apply the basic rotation, so that:

**r**0u' = Rot(**r**0u,ϴC,**z**u)

So, finally, we can write the robot's total disposition as:

R = <la,**s**0,**s**1>

**g** = <**p**,**r**0u>

and when the robot performs a maneuver M = <vr,ϴC> from position **g**, the transformation equations are:

**r**'i = Rot(-αla∙**r**iu,ϴC,**z**u)

**r**'1-i = Rot(**r**i + la∙**r**iu,ϴC,**z**u)

**s**'i = Rot(**r**i + **s**i,ϴC,**z**u) - **r**'i

**s**'1-i = Rot(**r**1-i + **s**1-i,ϴC,**z**u) - **r**'1-i

**p**' = **p** - **r**0 + **r**0'

**r**0u' = Rot(**r**0u,ϴC,**z**u)

Which completes the kinematic description of the robot. We will see later how this notation and method can be easily applied to other robot configurations and tasks, including variant sensor types, geometries, etc.

In other sections, we'll apply this method of analysis to a selection of low-level behaviors implemented in the Mordax robot project, and use that analysis to demonstrate the consistency and efficacy of these behaviors. This of course depends on assumptions regarding the accuracy and precision of the robot's instrumentation: if the robot cannot reliably perform a given maneuver, or detect a region, the predictions of the analysis naturally will not apply- but then consistent programming will be nearly impossible anyway.
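To tie the transformation equations together, here is a minimal pose-update sketch of my own (assuming wheel 0 as the chassis reference, r0u taken as the chassis-fixed axle unit vector at d0 used as the heading, vr = v1/v0, and excluding the straight-line case vr = 1, where α diverges; all names are hypothetical):

```python
# Minimal pose-update sketch for one maneuver (all names my own).
# Wheel 0 is the chassis reference (p points to d0), r0u is the heading
# vector, vr = v1/v0, and vr = 1 (straight-line motion) is excluded
# because alpha = 1/(1 - vr) diverges there.
import math

def rot2(k, theta):
    # Planar Rot: positive theta is clockwise, matching the text.
    c, s = math.cos(theta), math.sin(theta)
    return (k[0]*c + k[1]*s, -k[0]*s + k[1]*c)

def maneuver(p, r0u, la, vr, thetaC):
    alpha = 1.0 / (1.0 - vr)
    r0 = (-alpha * la * r0u[0], -alpha * la * r0u[1])   # COR -> d0
    r0_new = rot2(r0, thetaC)
    p_new = (p[0] - r0[0] + r0_new[0],                  # p' = p - r0 + r0'
             p[1] - r0[1] + r0_new[1])
    return p_new, rot2(r0u, thetaC)                     # new p, new heading

# Spin in place (vr = -1) through pi: d0 ends up where d1 was, and the
# heading reverses.
p, heading = maneuver((0.0, 0.0), (1.0, 0.0), 0.1, -1.0, math.pi)
print(p)        # ~ (0.1, 0.0)
print(heading)  # ~ (-1.0, 0.0)
```

Chaining calls to `maneuver` accumulates the global pose across a sequence of maneuvers, which is exactly the multi-maneuver tracking discussed above.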