The First Question

Finally, let's tackle the first question. If we have a generalized Cohn matrix that represents a continued fraction, does that continued fraction have the required form? The answer is essentially yes. That was a surprise to me, and I hope it's a surprise to you as well. Here's a short sequence of operations that explains everything.
We start with a symmetrical expansion that corresponds to a symmetrical matrix, and we end with an expansion with the required form that corresponds to the standard parametrization of a generalized Cohn matrix. So, given a generalized Cohn matrix, we can run the sequence backward to get a symmetrical matrix, see that it corresponds to a symmetrical expansion, and then run the sequence forward to get the required form. That's the idea, anyway. As usual with continued fractions, there are plenty of details to consider. First, what if the symmetrical expansion starts and ends with 1, like so?
1 (a1−1) A B C D (a1−1) 1

In that case we get a different form. We can call the two forms T and B, since they're generalizations of T(u) and B(u).
We can create a one-to-one correspondence between the two forms by reading backward, so the two are equally common. Second, it really does work to run the sequence backward. In other words, although there are cases where the operations don't work (see Details About Operations), we don't encounter any of those cases here. Or, to put it another way, every generalized Cohn matrix can be obtained from a symmetrical expansion. A few notes about the (backward) operations:
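To make "corresponds to a symmetrical matrix" concrete, here is a minimal Python sketch of my own (the function name `cf_matrix` is mine, not the author's). It assumes the usual convention that an expansion is represented by the product of the matrices [[ai, 1], [1, 0]]; since each factor is symmetric, transposing the product reverses the factor order, so a palindromic ("symmetrical") expansion yields a symmetric matrix.

```python
# Sketch (my own, assuming the standard matrix convention): the expansion
# [a0; a1, ..., an] corresponds to the product of [[ai, 1], [1, 0]] factors.
# Each factor is symmetric, so transposing the product reverses the factor
# order; a palindromic expansion therefore yields a symmetric matrix.

def cf_matrix(terms):
    """Multiply out [[a, 1], [1, 0]] factors for the given expansion."""
    m = [[1, 0], [0, 1]]
    for a in terms:
        m = [[m[0][0] * a + m[0][1], m[0][0]],
             [m[1][0] * a + m[1][1], m[1][0]]]
    return m

palindrome = [1, 2, 3, 2, 1]          # reads the same backward
m = cf_matrix(palindrome)
print(m, m[0][1] == m[1][0])          # symmetric: off-diagonal entries agree
```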
Now that we've examined the most important features of this elephant, we can appreciate the big picture. In each row, the left side shows a possible symmetrical expansion (and its length) and the right side shows the form that we get when we run the sequence forward.
There are two groups of rows. In the two bottom rows we have forms T and B, the required forms, let's say. In the top rows we have forms for all the combinations of u = 1 and u = 2 and determinant ±1. And in between, we have two new forms of length 3. What should we call them? Should we think of them as special cases or as the smallest required forms? I don't know.

With that, we've reached the end of the generalization. To sum up: if we have a matrix that satisfies the generalized Cohn condition that the trace is a0 + 1 times the entry in the lower left corner, and if that matrix also represents a continued fraction, then except in a few special cases, that continued fraction has one of the required forms, the forms that allow us to prove that the middle part has reflection symmetry. T(u) satisfies those conditions, so that's the reason that the middle part of T(u) has reflection symmetry. I give most of the credit to the generalized Cohn condition.

The condition that the matrix represents a continued fraction seems strong in this situation because we used it to cut the infinite families of parameter values down to single values, but it's not actually very strong. Remember the argument from the middle of A Nice Correspondence about the eight ways of permuting and the eight ways of flipping? There, we only wanted expansions made of positive integers, and there were only 2×1 ways to obtain them. Here, though, we want all possible expansions, and there are 4×2 ways to obtain them, not just by transposing but also by taking the reciprocal and by negating. So, according to me, the matrices that represent continued fractions make up fully 1/8th of the group GL(2,Z).

Why, then, does the condition seem strong? We can see the reason if we project the matrices into the plane of the denominators.
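To illustrate the summary above, here is a hedged Python sketch of the generalized Cohn condition, trace = (a0 + 1) × (lower-left entry). The function name is my own; the two example matrices are the classic Cohn matrices for the Markov case a0 = 2, where the condition reads trace = 3 times the lower-left entry.

```python
# Sketch (my own naming) of the generalized Cohn condition from the text:
# trace(M) == (a0 + 1) * (lower-left entry of M).
def satisfies_cohn_condition(m, a0):
    return m[0][0] + m[1][1] == (a0 + 1) * m[1][0]

# In the Markov case a0 = 2, the condition reads trace = 3 * c.
# The classic Cohn matrices satisfy it:
A = [[2, 1], [1, 1]]   # trace 3 = 3 * 1
B = [[5, 2], [2, 1]]   # trace 6 = 3 * 2
print(satisfies_cohn_condition(A, 2), satisfies_cohn_condition(B, 2))  # True True
```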
The matrices that represent continued fractions fall into the wedge q_{n−1} > q_{n−2} > 0, while the infinite families of parameter values lead to matrices that turn into vertical lines of dots. Of course the intersection is finite! We can also see the reason that the matrices that represent continued fractions make up 1/8th of the group: there are eight wedges, and the 2×4 ways that we didn't use map them onto each other.

Now let's come gradually back down to earth. First, if we require that the matrix has determinant 1, that eliminates the two troublesome forms of length 3 and two of the four forms with u = 1 and u = 2, leaving us with precisely the four forms that we were already familiar with. Second, let's look at some examples. Here's a table of all the solutions of t² ≡ −1 mod u for 3 ≤ u < 100.
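As a quick sanity check of the wedge claim, here is a small Python sketch of my own. It assumes the standard convention that a matrix represents a continued fraction when it is the product of [[ai, 1], [1, 0]] factors; for expansions of length at least 3 with positive integer terms, the two denominator entries in the bottom row then satisfy larger > smaller > 0.

```python
# Sketch (my own, under the standard matrix convention): a matrix built
# from a continued fraction with positive integer terms has its bottom
# row inside the wedge described in the text, i.e. the two denominator
# entries satisfy (larger) > (smaller) > 0.
import itertools

def cf_matrix(terms):
    m = [[1, 0], [0, 1]]
    for a in terms:
        m = [[m[0][0] * a + m[0][1], m[0][0]],
             [m[1][0] * a + m[1][1], m[1][0]]]
    return m

# Check every expansion of length 3 with terms in 1..4.
ok = all(
    (lambda m: m[1][0] > m[1][1] > 0)(cf_matrix(t))
    for t in itertools.product(range(1, 5), repeat=3)
)
print(ok)  # True
```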
I like that table a lot. Here's the feature that seems strangest to me. On the one hand, the values of u are constrained. We know from Square Roots of −1 that they have to be products of evodd primes, optionally with one factor of 2. On the other hand, the symmetrical expansions don't seem very constrained. But, when we map them back to values of u, we get all of those products and nothing else. Strange! We were supposed to be coming back down to earth, though. Third, if we require that u is a Markov number and that a0 is 2, that brings us back to regular Cohn matrices. (Regular Cohn matrices that represent continued fractions.) And, fourth, if we require that the parameter t has one of the two values produced by the little algorithm, that brings us back to the actual matrices T(u) and B(u).
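The table described above can be regenerated with a short Python sketch (my code, not the author's). It lists, for each u in the stated range, every t with t² ≡ −1 (mod u); a classical fact, consistent with the constraint cited from Square Roots of −1, is that solutions exist exactly when u is a product of primes ≡ 1 mod 4, optionally times a single factor of 2.

```python
# Sketch (my own): all solutions of t*t ≡ -1 (mod u) for 3 <= u < 100.
# Classical fact: solutions exist exactly when u is a product of primes
# ≡ 1 (mod 4), optionally times a single factor of 2.
def square_roots_of_minus_one(u):
    return [t for t in range(u) if (t * t) % u == u - 1]

for u in range(3, 100):
    roots = square_roots_of_minus_one(u)
    if roots:
        print(u, roots)   # e.g. 5 [2, 3], 10 [3, 7], 13 [5, 8], ...
```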
December (2025)