17 July 2014

Groups intro

Here's some discussion of the mathematical idea of a group, from a college-level textbook titled simply Algebra, by Michael Artin.  [From chapter 2 (Groups), © 1991 Prentice-Hall.]  Notice that the set of invertible n x n matrices is one example of a group, and matrix multiplication is one example of a law of composition:


"A group is a set on which a law of composition is defined, such that all elements have inverses.    For example, the set of nonzero real numbers forms a group Rx under multiplication, and the set of all real numbers forms a group R+ under addition.  The set of invertible n x n matrices, called the general linear group, is a very important example in which the law of composition is matrix multiplication.  We will see many more examples as we go along.


By a law of composition on a set S, we mean a rule for combining pairs a, b of elements of S to get another element, say p, of S.  The original models for this notion are addition and multiplication of real numbers.  Formally, a law of composition is a function of two variables on S, with values in S, or [in other words] it is a map

S × S → S

a, b ⇝ p .

Here, S × S denotes, as always, the product set of pairs (a, b) of elements of S.
Functional notation p = f(a,b) isn’t very convenient for laws of composition.  Instead, the element obtained by applying the law to a pair (a,b) is usually denoted using a notation resembling those used for multiplication or addition:

p = ab,   a × b,   a ∘ b,   a + b,         and so on,

a choice being made for the particular law in question.  We call the element p the product or the sum of a and b, depending on the notation chosen.


Our first example of a law of composition, and one of the two main examples, is matrix multiplication on the set S of n x n matrices.


We will use the product notation ab most frequently.  Anything we prove with product notation can be rewritten using another notation, such as addition.  It will continue to be valid, because the rewriting is just a change of notation.


It is important to note that the symbol ab is a notation for a certain element of S.  Namely, it is the element obtained by applying the given law of composition to the elements called a and b.  Thus if the law is multiplication of matrices and if

a = [ 1  3 ]     and     b = [ 1  0 ] ,
    [ 0  2 ]                 [ 2  1 ]

then ab denotes the matrix

[ 7  3 ]
[ 4  2 ] .


Once the product ab has been evaluated, the elements a and b cannot be recovered from it.


Let us consider a law of composition written multiplicatively as ab.  It will be called associative if the rule

(ab)c  =  a(bc)    (associative law)

holds for all a, b, c in S, and commutative if

ab  =  ba        (commutative law)

holds for all a,b in S.  Our example of matrix multiplication is associative but not commutative.


When discussing groups in general, we will use multiplicative notation.  It is customary to reserve additive notation a + b for commutative laws of composition, that is, when a + b = b + a for all a,b.  Multiplication carries no implication either way concerning commutativity.


In additive notation the associative law is (a + b) + c = a + (b + c), and in functional notation it is

f(f(a,b), c) = f(a, f(b,c)).

This ugly formula illustrates the fact that functional notation isn’t convenient for algebraic manipulation.


The associative law is more fundamental than the commutative law; one reason for this is that composition of functions, our second example of a law of composition, is associative."  (end of Artin quotation)
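Just to check the transcription of that 2 x 2 example, and Artin's claim that matrix multiplication is associative but not commutative, here's a little NumPy sketch (my own check, not from the book; it assumes NumPy is installed and uses Python's @ operator for matrix multiplication):

    import numpy as np

    a = np.array([[1, 3], [0, 2]])
    b = np.array([[1, 0], [2, 1]])
    c = np.array([[2, 1], [1, 1]])   # a third matrix, just for the associativity test

    print(a @ b)
    # [[7 3]
    #  [4 2]]   (matches the book's ab)

    print(np.array_equal((a @ b) @ c, a @ (b @ c)))   # True: associative
    print(np.array_equal(a @ b, b @ a))               # False: not commutative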

After showing how composition of functions is described mathematically and giving examples, Artin notes “Composition of functions is not commutative.”   
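That fact is easy to verify with a tiny Python experiment (my own example, not Artin's):

    # Composition of functions: (f o g)(x) means f(g(x)).

    def f(x):
        return x + 1

    def g(x):
        return 2 * x

    print(f(g(3)))   # f(6) = 7
    print(g(f(3)))   # g(4) = 8, so f o g is not the same as g o f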

That’s enough group discussion for now.  Oh, wait.  Three pages later, Artin gives the definition of a group, so I should write that down:

Definition.  A group is a set G together with a law of composition which is associative and has an identity element, and such that every element of G has an inverse.
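To see the definition in action, here's a toy Python check (my own sketch) of the axioms for the two-element set {1, -1} under ordinary multiplication:

    def op(a, b):
        return a * b

    G = {1, -1}

    # Closure: combining any two elements stays in G.
    closed = all(op(a, b) in G for a in G for b in G)

    # Associativity.
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in G for b in G for c in G)

    # Identity element: some e with e*a == a*e == a for all a.
    has_identity = any(all(op(e, a) == a == op(a, e) for a in G) for e in G)

    # Inverses: every a has some b with a*b equal to the identity (here, 1).
    has_inverses = all(any(op(a, b) == 1 for b in G) for a in G)

    print(closed, assoc, has_identity, has_inverses)   # True True True True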


15 July 2014

Matrices intro

Remember the commutative property that we all learned in Algebra I?  It's
                                       ab = ba

where (in Algebra I at least) a and b are real numbers.  That property bugged me when I started taking algebra.  Maybe the associative and distributive properties (of real numbers) also bothered me, but according to my memory of it, I was especially perplexed and put out with the commutative idea because I'd already learned that numbers could be multiplied in forward or backward order. Why did the textbook make such a big deal out of it?  The answer has to do with the emphasis on set theory in the New Math of the sixties and early seventies. 

What I didn't know is that there are things besides plain old real numbers that are noncommutative under multiplication.  I wish someone had told me that way back then.  (Yeh, don't we all wish someone had told us something way back when?)

For instance, two ARRAYS of real numbers multiplied together don't in general have the commutative or reversal-of-order property.  An array of numbers is written as a rectangular or square table called a matrix.  Actually, an array of any sort of mathematical animal such as variables, operators, or functions is a matrix.  
-------------------
Aside:  In some dialects, matrix is pronounced “mah-trix,” as opposed to the English-speaking version, “may-trix.”  When I moved into a university-owned apartment with a fellow graduate physics student in Columbia, South Carolina, at the turn of the 21st century, he asked me if I wanted to watch a movie he’d rented.  He was from Kazakhstan and spoke decent English, but I could not understand what movie he was talking about.  The nearest I could guess, he was saying “The Mattress.”  He must have finally gotten the box with the DVD (or VHS tape) in it and shown me: it was The Matrix.
------------------

A matrix is usually designated with a capital letter rather than a lower case letter. So let's let A and B be 2 x 2 matrices of consecutive real numbers, like so: 
A = [ 1  2 ]     and     B = [ 5  6 ]
    [ 3  4 ]                 [ 7  8 ] .
Multiplying A times B gives (row elements times column elements, then add the results, and repeat till finished)

[ 1  2 ]     [ 5  6 ]     [ 1*5+2*7   1*6+2*8 ]     [ 19  22 ]
[ 3  4 ]  x  [ 7  8 ]  =  [ 3*5+4*7   3*6+4*8 ]  =  [ 43  50 ] .
(The multiplications shown here are done first, then the addition.)   And here’s the “backwards” multiplication of these two matrices:

[ 5  6 ]     [ 1  2 ]     [ 5*1+6*3   5*2+6*4 ]     [ 23  34 ]
[ 7  8 ]  x  [ 3  4 ]  =  [ 7*1+8*3   7*2+8*4 ]  =  [ 31  46 ] .
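Here are the same two products done in NumPy, as a sanity check (my sketch; @ is matrix multiplication):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[5, 6],
                  [7, 8]])

    print(A @ B)
    # [[19 22]
    #  [43 50]]
    print(B @ A)
    # [[23 34]
    #  [31 46]]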

So we have AB ≠ BA: A and B don’t commute, or, writing it differently,

                                        AB – BA ≠ 0.

We can give a name to the difference AB – BA, and also get rid of the pesky “not-equal” sign.  We could just use a name or symbol such as C, but the difference of two matrices multiplied forwards and backwards has important properties in quantum mechanics, so we’ll use a special notation, called “the commutator”:

                                     AB – BA = [A, B].

In our consecutive numbers case [A, B] is equal to

[ 19  22 ]     [ 23  34 ]     [ -4  -12 ]
[ 43  50 ]  -  [ 31  46 ]  =  [ 12    4 ] .
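Since I'll be computing more of these, here's a little helper function (the name commutator is mine, not a NumPy built-in):

    import numpy as np

    def commutator(A, B):
        """Return the commutator [A, B] = AB - BA."""
        return A @ B - B @ A

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    print(commutator(A, B))
    # [[ -4 -12]
    #  [ 12   4]]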


This little example turned out to be more interesting than I thought it would be:  I discovered that the commutator of any eight consecutive numbers put into two 2 x 2 matrices is equal to this same -4, -12; 12, 4 matrix.  For instance, let

A = [ 57  58 ]     and     B = [ 61  62 ]
    [ 59  60 ]                 [ 63  64 ] .

Then

[A, B]  =  [ -4  -12 ]
           [ 12    4 ] .

This happens because the consecutive numbers are of the form n+1, n+2, …, n+8, where n can be any integer (in the example just above, n = 56).  The n’s can be put in their own matrix, which is merely added to the basic 1, 2, 3, 4, etc., matrices, like so:

[ n+1  n+2 ]     [ n  n ]     [ 1  2 ]
[ n+3  n+4 ]  =  [ n  n ]  +  [ 3  4 ] .


The “n” matrices cancel each other when the commutator is calculated, and so do the cross-terms that have a factor of n in them: those cross-terms combine into n times the commutator of the all-n matrix with the difference of the two base matrices, and since that difference is just 4 times the all-ones matrix, everything commutes and the terms vanish.  What’s left is only the commutator of the two matrices with 1, 2, …, 8 in them.  Very cool!
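If you don't trust my hand-waving, SymPy can do the cancellation symbolically (a sketch, assuming SymPy is installed); the n's drop out and the answer is the same for every n:

    import sympy as sp

    n = sp.symbols('n')
    A = sp.Matrix([[n + 1, n + 2],
                   [n + 3, n + 4]])
    B = sp.Matrix([[n + 5, n + 6],
                   [n + 7, n + 8]])

    # The n's cancel: the commutator is independent of n.
    print(sp.simplify(A * B - B * A))
    # Matrix([[-4, -12], [12, 4]])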

I’m writing about matrices because matrix multiplication of operators and wave functions (state vectors) is the bread and butter of quantum mechanics.  For instance, certain quantum mechanical matrices that commute, meaning AB = BA, or that “anti-commute,” meaning AB = -BA, are important for predicting the results of physical measurements.  Matrices that don’t commute are responsible for the theory behind the Heisenberg Uncertainty Principle--well, that's also important for physical measurements! I could just say "measurements" instead of "physical measurements." Or in general: experiments (as opposed to just thought experiments).
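For one concrete taste (my example, not a derivation): the Pauli matrices sigma_x and sigma_y, which show up all over quantum mechanics, anti-commute, and NumPy can check it:

    import numpy as np

    sigma_x = np.array([[0, 1],
                        [1, 0]])
    sigma_y = np.array([[0, -1j],
                        [1j,  0]])

    # AB = -BA, i.e., the anti-commutator AB + BA is zero.
    print(np.allclose(sigma_x @ sigma_y, -(sigma_y @ sigma_x)))   # True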

Matrices used in quantum mechanics can be transposed (rows and columns interchanged) and have their complex conjugate taken (replace i by -i in every complex entry).  Sometimes they can be inverted (the matrix analogue of taking a reciprocal).  When a matrix is equal to its transpose, it’s called symmetric.  When it’s equal to the negative of its transpose, it’s called anti-symmetric or skew-symmetric.  The most important matrices for quantum mechanics are the ones that equal their own complex-conjugate transpose (Hermitian matrices), and the ones whose complex-conjugate transpose equals their own inverse (unitary matrices).
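As a quick illustration (again my own example): the same Pauli matrix sigma_y from above happens to be both Hermitian and unitary, which NumPy can confirm:

    import numpy as np

    sigma_y = np.array([[0, -1j],
                        [1j,  0]])

    dagger = sigma_y.conj().T   # complex-conjugate transpose

    print(np.allclose(dagger, sigma_y))               # True: Hermitian
    print(np.allclose(sigma_y @ dagger, np.eye(2)))   # True: unitary (dagger is the inverse)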

Question:  is the above "consecutive number commutator" matrix,

[ -4  -12 ]
[ 12    4 ] ,

symmetric or maybe anti-symmetric?  It has a common factor of 4 in all its elements that can be taken out, giving 

4 times  [ -1  -3 ]
         [  3   1 ] ,

which really looks (even more) symmetric.  But if you take its transpose (interchange the -3 with the 3) you don't get the same matrix.  To get the same matrix, you'd have to have both 3's be positive or both be negative.  It's also not anti-symmetric, which you can check for yourself: adding the transpose to the original would give the zero matrix if the original matrix were anti-symmetric, and here the diagonal entries don't change under transposition, so they add up to -2 and 2 instead of canceling.
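Same check in NumPy (a sketch, using the unfactored commutator matrix from above):

    import numpy as np

    M = np.array([[-4, -12],
                  [12,   4]])

    print(np.array_equal(M.T, M))     # symmetric?       False
    print(np.array_equal(M.T, -M))    # anti-symmetric?  False
    print(M + M.T)
    # [[-8  0]
    #  [ 0  8]]   (nonzero diagonal, so it can't be anti-symmetric)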


Okay, that’s enough self-directed math review for today.