Edges to Rubies The Complete SketchUp Tutorial
Appendix MM—Matrix Multiplication
If you look up matrix multiplication on the Internet, you'll likely come away confused. Matrix multiplication is, in fact, a procedure that can be explained with small words in simple sentences. After many false starts, I learned it from Stefan Waner at Hofstra. Here I'll try to make it easy, using lots of pictures.
Introduction
Matrix multiplication is an odd procedure, but a brilliant one. The mechanics, which I explain here, seem almost nonsensical. The results are useful, for one example, in solving systems of linear equations. For SketchUp's 3D modeling, matrix multiplication is indispensable. (See Appendix T for uses in rotating, scaling and moving geometry.)
No part of SketchUp Ruby programming requires any knowledge of this procedure. To multiply two transformations, say T1 and T2, the Ruby programmer simply writes T1 * T2 and leaves the details to SketchUp. This material is for those who want answers to burning questions such as, "Why isn't matrix multiplication commutative?"
The basic process underlying matrix multiplication involves combining each row in the first matrix with each column in the second matrix. A basic requirement is that the number of columns in the first matrix equals the number of rows in the second matrix. If you are interested in multiplying 4x4 transformation matrices, this requirement is met automatically.
For full generality I will mention here that the result is a matrix with as many rows as the first matrix and as many columns as the second. From now on, however, we'll work with 4x4 matrices and 4x4 results.
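The shape rule above can be sketched in plain Ruby, using nested arrays to stand in for matrices (a sketch for illustration only; SketchUp's own transformations handle all of this internally):

```ruby
foo = Array.new(2) { Array.new(3, 1) }   # a 2x3 matrix of ones
bar = Array.new(3) { Array.new(4, 1) }   # a 3x4 matrix of ones

# Compatibility: columns of the first must equal rows of the second.
compatible = foo[0].length == bar.length      # => true

# The product has the first matrix's row count and
# the second matrix's column count.
product_shape = [foo.length, bar[0].length]   # => [2, 4]
```

For 4x4 transformation matrices, both checks pass trivially: 4 columns always match 4 rows, and the product is again 4x4.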
Rows and Columns
Consider the matrix foo as a set of rows, foo_rows. Similarly, consider the matrix bar as a set of columns, bar_cols. Here we'll focus on the first row in foo and the first column in bar.
Reducing the Row/Column Pair
We want to reduce the first row of foo and the first column of bar to a single number, which we will place in the product matrix, this way:
[Illustration: the first row, foo_rows[0], pairs with the first column, bar_cols[0], to fill a single cell of foo*bar.]
Set foo_row equal to foo_rows[0] and set bar_col equal to bar_cols[0]. To reduce foo_row and bar_col to a single number, start with these multiplications:
- result[0] = foo_row[0] * bar_col[0]
- result[1] = foo_row[1] * bar_col[1]
- result[2] = foo_row[2] * bar_col[2]
- result[3] = foo_row[3] * bar_col[3]

Then sum: single_number = result[0] + result[1] + result[2] + result[3].
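The reduction above can be written directly in Ruby. This is a minimal sketch using plain arrays (the values are sample numbers, not anything from SketchUp):

```ruby
# One row of foo and one column of bar, as plain Ruby arrays.
foo_row = [1, 2, 3, 4]
bar_col = [5, 6, 7, 8]

# Multiply matching entries...
result = (0..3).map { |i| foo_row[i] * bar_col[i] }

# ...then sum the four products into a single number.
single_number = result.sum
# 1*5 + 2*6 + 3*7 + 4*8 = 5 + 12 + 21 + 32 = 70
```

Mathematicians call this reduction the "dot product" of the row and the column.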
Filling the Product Matrix
It's pretty easy to let the mathematician's "i"s and "j"s confuse us. Let's try one more illustration, instead:
[Illustration: foo_rows[2] pairs with bar_cols[1] to fill the cell in row 2, column 1 of foo*bar.]
Is that clear enough? Do it for every row in foo and every column in bar and you've multiplied the matrices: an odd procedure with miraculous results.
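The whole procedure fits in a few lines of Ruby. This is a hedged sketch, not SketchUp's implementation (SketchUp does the equivalent for you when you write T1 * T2); it reduces every row of foo against every column of bar:

```ruby
# Multiply matrix foo by matrix bar, both given as arrays of rows.
def matrix_multiply(foo, bar)
  bar_cols = bar.transpose                 # view bar as a set of columns
  foo.map do |foo_row|                     # one result row per row of foo
    bar_cols.map do |bar_col|              # one result entry per column of bar
      foo_row.zip(bar_col).sum { |a, b| a * b }
    end
  end
end

identity = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
scale = [[2, 0, 0, 0],
         [0, 2, 0, 0],
         [0, 0, 2, 0],
         [0, 0, 0, 1]]

# Multiplying by the identity leaves a matrix unchanged.
matrix_multiply(identity, scale) # => scale
```

Note that nothing in the code cares that the matrices are 4x4; it works for any compatible sizes.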
Why Is It Not Commutative?
The commutative property is the one that says A op B equals B op A. In normal arithmetic, addition and multiplication are commutative; subtraction and division are not: (3 * 4) == (4 * 3), but (3 / 4) != (4 / 3). Matrix multiplication is not commutative.
First, recall the requirement: the number of columns in the first matrix must match the number of rows in the second matrix. This means you can multiply an Rx3 matrix by a 3xC matrix. Unless C and R happen to be equal, you can't even perform the reverse multiplication.
Now let's consider the subset where you are multiplying square matrices of equal size. Take two 2x2 matrices, foo = [[a, b], [c, d]] and bar = [[e, f], [g, h]], and focus on just the first computation: the row 0, column 0 entry of foo*bar is ae + bg, while the same entry of bar*foo is ea + fc.
The first term (ae or ea) is the same, but the second term (bg in foo*bar, fc in bar*foo) is unrelated.
When you get a little leisure, maybe you'll find a class of matrices where the product is commutative. After all, there's no law that says bg can't be equal to fc. Good luck!