
The First Question

Finally, let's tackle the first question. If we have a generalized Cohn matrix that represents a continued fraction, does that continued fraction have the required form? The answer is essentially yes. That was a surprise to me, and I hope it's a surprise to you as well.

Here's a short sequence of operations that explains everything.

0.  symmetrical expansion         a1 A B C D a1              u        t
                                                             t        ?

1.  take reciprocal               0 a1 A B C D a1            t        ?
                                                             u        t

2.  add a0                        a0 a1 A B C D a1           a0u + t  ?
                                                             u        t

3.  proper-improper conversion    a0 a1 A B C D (a1−1) 1     a0u + t  ?
                                                             u        u − t

We start with a symmetrical expansion that corresponds to a symmetrical matrix, and we end with an expansion with the required form that corresponds to the standard parametrization of a generalized Cohn matrix. So, given a generalized Cohn matrix, we can run the sequence backward to get a symmetrical matrix, see that it corresponds to a symmetrical expansion, and then run the sequence forward to get the required form.

That's the idea, anyway. As usual with continued fractions, there are plenty of details to consider.
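
Before the details, the sequence itself is easy to sanity-check at the matrix level. Here's a short Python sketch; the 2×2 layout and all of the function names are my own choices, and I run the sequence on the symmetrical expansion 5 5 (which has u = 26, t = 5) with a0 = 2.

```python
# Matrix of an expansion, built as a product of factors M(a) = [[a, 1], [1, 0]].
def mat(e):
    m = [[1, 0], [0, 1]]
    for a in e:
        m = [[m[0][0]*a + m[0][1], m[0][0]],
             [m[1][0]*a + m[1][1], m[1][0]]]
    return m

def reciprocal(m):                      # step 1: swap the rows
    return [m[1][:], m[0][:]]

def add(m, a0):                         # step 2: left-multiply by [[1, a0], [0, 1]]
    return [[m[0][0] + a0*m[1][0], m[0][1] + a0*m[1][1]],
            [m[1][0], m[1][1]]]

def convert(m):                         # step 3: proper-improper conversion replaces
    return [[m[0][0], m[0][0] - m[0][1]],   # the right column with left minus right
            [m[1][0], m[1][0] - m[1][1]]]

m = mat([5, 5])                         # [[26, 5], [5, 1]], a symmetric matrix
m = convert(add(reciprocal(m), 2))      # run the sequence forward with a0 = 2

assert m == mat([2, 5, 4, 1])           # the required form: a0 a1 (a1-1) 1
assert m[0][0] == 2*26 + 5              # top left entry is a0u + t
assert m[1] == [26, 26 - 5]             # bottom row is u and u - t
```

The two final asserts match the standard parametrization in the last row of the table: top left entry a0u + t, bottom row u and u − t.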

First, what if the symmetrical expansion starts and ends with 1, like so?

1 (a1−1) A B C D (a1−1) 1

In that case we get a different form. We can call the two forms T and B since they're generalizations of T(u) and B(u).

a0 a1 A B C D (a1−1) 1
a0 1 (a1−1) A B C D a1

We can create a one-to-one correspondence between the two forms by reading backward, so the two are equally common.
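
The mechanism behind that correspondence is visible at the matrix level: every factor M(a) = [[a, 1], [1, 0]] is symmetric, so reading an expansion backward transposes its matrix. A quick Python check (the matrix convention and the sample expansion are my own choices):

```python
# Matrix of an expansion, built as a product of factors M(a) = [[a, 1], [1, 0]].
def mat(e):
    m = [[1, 0], [0, 1]]
    for a in e:
        m = [[m[0][0]*a + m[0][1], m[0][0]],
             [m[1][0]*a + m[1][1], m[1][0]]]
    return m

e = [2, 5, 4, 1]                   # a sample expansion with form T
m, r = mat(e), mat(e[::-1])        # its matrix, and the matrix of the reversal
assert r == [[m[0][0], m[1][0]],
             [m[0][1], m[1][1]]]   # reversal = transposition
```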

Second, it really does work to run the sequence backward. In other words, although there are cases where the operations don't work (see Details About Operations), we don't encounter any of those cases here. Or, to put it another way, every generalized Cohn matrix can be obtained from a symmetrical expansion.

A few notes about the (backward) operations:

  • Proper-improper conversion changes the first coefficient of an expansion if and only if the expansion represents an integer. Here, that means if and only if u = 1.
  • For u = 2 and the normal case u ≥ 3, we already know that the first coefficient is a0. That doesn't change when we do the proper-improper conversion, and after that everything is straightforward.
  • For u = 1, the first coefficients of P(a0+1) and P(a0−1,1) become a0 when we do the proper-improper conversion.
  • For u = 1, the reciprocals are also interesting. P(a0+1) leads to something I mentioned in More About Symmetry, that the reciprocal of [0,1] is [1]. P(a0−1,1) leads to something new, that the reciprocal of [0] is the empty expansion. As a statement about expansions, that doesn't make much sense. As a statement about numbers, ditto. But, as a statement about matrices, it makes complete sense. To take the reciprocal of a matrix, we swap the rows, and when we swap the rows in P(0), we get the identity matrix, which corresponds to the empty expansion.
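
Both u = 1 curiosities can be verified mechanically. A tiny Python check (the matrix layout is my own convention):

```python
# Matrix of an expansion, built as a product of factors M(a) = [[a, 1], [1, 0]].
def mat(e):
    m = [[1, 0], [0, 1]]
    for a in e:
        m = [[m[0][0]*a + m[0][1], m[0][0]],
             [m[1][0]*a + m[1][1], m[1][0]]]
    return m

def reciprocal(m):                           # taking the reciprocal swaps the rows
    return [m[1][:], m[0][:]]

assert reciprocal(mat([0, 1])) == mat([1])   # reciprocal of [0,1] is [1]
assert reciprocal(mat([0])) == mat([])       # reciprocal of [0] is the empty expansion
```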

Now that we've examined the most important features of this elephant, we can appreciate the big picture. In each row, the left side shows a possible symmetrical expansion (and its length) and the right side shows the form that we get when we run the sequence forward.

0            -                           (a0−1) 1                   u = 1, det +1
1            2                           a0 1 1                     u = 2, det −1
1 (a1 ≥ 3)   a1                          a0 (a1−1) 1
2+ (a1 ≥ 2)  a1 A B C D a1               a0 a1 A B C D (a1−1) 1     T

1            1                           (a0+1)                     u = 1, det −1
2            1 1                         a0 2                       u = 2, det +1
3 (a1 ≥ 3)   1 (a1−2) 1                  a0 1 (a1−1)
4+ (a1 ≥ 2)  1 (a1−1) A B C D (a1−1) 1   a0 1 (a1−1) A B C D a1     B

There are two groups of rows. In the two bottom rows we have forms T and B, the required forms let's say. In the top half of each group we have forms for all four combinations of u = 1 or u = 2 and determinant ±1. And, in between, we have two new forms of length 3. What should we call them? Should we think of them as special cases or as the smallest required forms? I don't know.
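
All eight rows can be checked mechanically by running the short sequence of operations at the matrix level. Here's a Python sketch; the sample values a0 = 3, a1 = 5, and middle part 2 2 are my own choices, not anything canonical.

```python
# Matrix of an expansion, built as a product of factors M(a) = [[a, 1], [1, 0]].
def mat(e):
    m = [[1, 0], [0, 1]]
    for a in e:
        m = [[m[0][0]*a + m[0][1], m[0][0]],
             [m[1][0]*a + m[1][1], m[1][0]]]
    return m

def forward(m, a0):
    m = [m[1][:], m[0][:]]                      # 1. take reciprocal
    m = [[m[0][0] + a0*m[1][0],                 # 2. add a0
          m[0][1] + a0*m[1][1]], m[1]]
    return [[m[0][0], m[0][0] - m[0][1]],       # 3. proper-improper conversion
            [m[1][0], m[1][0] - m[1][1]]]

rows = [                                        # (symmetrical expansion, resulting form)
    ([],                 [2, 1]),               # length 0   -> (a0-1) 1
    ([2],                [3, 1, 1]),            # u = 2      -> a0 1 1
    ([5],                [3, 4, 1]),            # length 1   -> a0 (a1-1) 1
    ([5, 2, 2, 5],       [3, 5, 2, 2, 4, 1]),   # form T
    ([1],                [4]),                  # u = 1      -> (a0+1)
    ([1, 1],             [3, 2]),               # u = 2      -> a0 2
    ([1, 3, 1],          [3, 1, 4]),            # length 3   -> a0 1 (a1-1)
    ([1, 4, 2, 2, 4, 1], [3, 1, 4, 2, 2, 5]),   # form B
]
for sym, form in rows:
    assert forward(mat(sym), 3) == mat(form)
```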

With that we've reached the end of the generalization. To sum up, if we have a matrix that satisfies the generalized Cohn condition that the trace is (a0 + 1) times the entry in the lower left corner, and if that matrix also represents a continued fraction, then, except in a few special cases, that continued fraction has one of the required forms, the forms that allow us to prove that the middle part has reflection symmetry. T(u) satisfies those conditions, so that's the reason that the middle part of T(u) has reflection symmetry.

I give most of the credit to the generalized Cohn condition. The condition that the matrix represents a continued fraction seems strong in this situation because we used it to cut the infinite families of parameter values down to single values, but it's not actually very strong. Remember the argument from the middle of A Nice Correspondence about the eight ways of permuting and the eight ways of flipping? There, we only wanted expansions made of positive integers, and there were only 2×1 ways to obtain them. Here, though, we want all possible expansions, and there are 4×2 ways to obtain them, not just by transposing but also by taking the reciprocal and by negating. So, according to me, the matrices that represent continued fractions make up fully 1/8th of the group GL(2,Z). Why then does the condition seem strong? We can see the reason if we project the matrices into the plane of the denominators.

The matrices that represent continued fractions fall into the wedge qn−1 > qn−2 > 0, while the infinite families of parameter values lead to matrices that turn into vertical lines of dots. Of course the intersection is finite!

We can also see the reason that the matrices that represent continued fractions make up 1/8th of the group: there are eight wedges, and the 2×4 ways that we didn't use map them onto each other.

Now let's come gradually back down to earth.

First, if we require that the matrix has determinant 1, that eliminates the two troublesome forms of length 3 and two of the four forms with u = 1 and u = 2 and leaves us with precisely the four forms that we were already familiar with.

Second, let's look at some examples. Here's a table of all the solutions of t² ≡ −1 mod u for 3 ≤ u < 100.

    u     t

x   5     2     2 2
    10    3     3 3
x   13    5     2 1 1 2
    17    4     4 4
    25    7     3 1 1 3
    26    5     5 5
x   29    12    2 2 2 2
x   34    13    2 1 1 1 1 2
    37    6     6 6
    41    9     4 1 1 4
    50    7     7 7
    53    23    2 3 3 2
    58    17    3 2 2 3
    61    11    5 1 1 5
    65    8     8 8
          18    3 1 1 1 1 3
    73    27    2 1 2 2 1 2
    74    31    2 2 1 1 2 2
    82    9     9 9
    85    13    6 1 1 6
          38    2 4 4 2
x   89    34    2 1 1 1 1 1 1 2
    97    22    4 2 2 4

x   610   133   4 1 1 2 2 1 1 4
          233   2 1 1 1 1 1 1 1 1 1 1 2

Notes:

  • Of course t² ≡ −1 is the appropriate choice for determinant 1.
  • Of course the solutions come in pairs. If t is a solution, then so is −t ≡ u − t.
  • As before, the solutions with t < u/2 have form T while the paired solutions with t > u/2 have form B. I omitted the latter so that there's one row per pair.
  • The last column shows the symmetrical expansion that we get from u and t.
  • From the symmetrical expansion we can get u and t back. For example, the rational value of [5,5] is 26/5.
  • From the symmetrical expansion we can also run the short sequence of operations to get form T. For example, [5,5] leads to [a0,5,4,1]. Note that a0 isn't part of the symmetrical expansion!
  • The first column shows which values of u are Markov numbers.
  • I included u = 610 because it's the first Markov number with more than one pair of solutions.
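
The table is easy to reproduce. Here's a short Python sketch (the helper names are mine): for each u it searches for the solutions t below u/2 and reads off the symmetrical expansion as the continued fraction of u/t.

```python
# Continued fraction expansion of p/q by the Euclidean algorithm.
def expand(p, q):
    e = []
    while q:
        a = p // q
        e.append(a)
        p, q = q, p - a*q
    return e

# Solutions of t^2 = -1 (mod u) with t < u/2, for 3 <= u < 100.
table = [(u, t, expand(u, t))
         for u in range(3, 100)
         for t in range(1, (u + 1) // 2)
         if (t*t + 1) % u == 0]

for u, t, e in table:
    print(u, t, e)                   # 5 2 [2, 2], then 10 3 [3, 3], and so on

# And the extra pair of rows for 610:
print([(t, expand(610, t)) for t in range(1, 305) if (t*t + 1) % 610 == 0])
```

Running it reproduces every row above, including the second solutions at u = 65 and u = 85 and the two pairs at u = 610.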

I like that table a lot. Here's the feature that seems strangest to me. On the one hand, the values of u are constrained. We know from Square Roots of −1 that they have to be products of evodd primes, optionally with one factor of 2. On the other hand, the symmetrical expansions don't seem very constrained. But, when we map them back to values of u, we get all of those products and nothing else. Strange!

We were supposed to be coming back down to earth, though.

Third, if we require that u is a Markov number and that a0 is 2, that brings us back to regular Cohn matrices. (Regular Cohn matrices that represent continued fractions.)

And, fourth, if we require that the parameter t has one of the two values produced by the little algorithm, that brings us back to the actual matrices T(u) and B(u).

 


@ December (2025)