Shape injectivity of the earring space (Part II)

This is the sequel to Shape injectivity of the earring space (Part I).

We’re on our way to proving that the canonical homomorphism \phi:\pi_1(\mathbb{E},b_0)\to\varprojlim_{n}F_n from the earring group to the inverse limit of free groups is injective. Part I was mostly dedicated to proving that an inverse limit of trees contains no simple closed curve. I’d like to reiterate that the proof I’m detailing is almost completely self-contained; we only need to review two classical results from Continuum Theory, both of which are proved in Sam Nadler’s very readable book [2].

As a bonus, I hope you’ll find, in this two-part post, another excellent example of why general mathematical theory is worth developing. Our proof uses several existence/structure theorems that, by themselves, don’t tell you how to do anything practical. However, they can be used together to prove the Shape Injectivity Theorem, which does provide something concrete: a practical way to study and do calculations in the most fundamental class of groups with non-commutative infinite products.

Peano Continua and Dendrites

Definition: A Peano continuum is a connected, locally path-connected, compact metrizable space.

The following theorem is an important characterization of Peano continua (See [Nadler, 8.18]) that every topologist should keep in their back pocket.

Hahn-Mazurkiewicz Theorem: A space X is a Peano continuum if and only if it is Hausdorff and there is a continuous surjection [0,1]\to X.

Definition: A dendrite is a Peano continuum containing no simple closed curve.

Based on a proposition from Part I, we could equivalently define a dendrite to be a uniquely arc-wise connected Peano continuum. Intuitively, a dendrite is a one-dimensional Peano continuum without any holes.

First, consider the “arc hedgehog” space ah(\omega) which is a one-point union of a shrinking sequence of arcs of length 1/2^n. This space came up in an early post about the category of locally path-connected spaces. It’s easy to see that ah(\omega) is uniquely arc-wise connected and is therefore a dendrite.

The arc hedgehog dendrite

How complicated can a dendrite be? Start with ah(\omega), which has a single branch point: a point whose deletion leaves a subspace with at least 3 components. At the midpoint m of a segment of length 1/2^n, attach a copy of ah(\omega) scaled to have diameter 1/2^{n+1}. Continue the process inductively in a dense pattern to construct the following dendrite, called Wazewski’s Universal Dendrite.


Wazewski’s Universal Dendrite

Notice there are no open sets in the Universal Dendrite homeomorphic to an open interval. In fact, this dendrite contains a homeomorphic copy of every dendrite as a retract. Hence, as far as dendrites go, the universal dendrite is as complicated as a dendrite can be.

The second result we’ll need from continuum theory provides us with a convenient way of writing down a dendrite as an inverse limit.

Dendrite Structure Theorem [Nadler, 10.27]: Every dendrite is homeomorphic to the inverse limit of a sequence of trees T_n where T_1=[0,1], \overline{T_{n+1}\backslash T_n} is an arc, and the bonding retractions r_{n+1,n}:T_{n+1}\to T_n collapse the arc \overline{T_{n+1}\backslash T_n} to the attachment point \overline{T_{n+1}\backslash T_n}\cap T_n=\{p_n\}.

This structure theorem just tells you that some inverse system exists; it doesn’t help you find one. It’s a good exercise to figure out how you’d actually realize some dendrites as inverse limits of trees in the specified fashion. I recommend working out the arc-hedgehog space and Wazewski’s Universal Dendrite as examples. Hint: create an inverse system by enumerating the arcs by size.
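If you want to play with the hint, here is a minimal Python sketch of the arc hedgehog realized this way: T_n is the wedge of the first n arcs (enumerated by size), and each bonding retraction collapses the newest arc to the wedge point. The point encoding and the names `point`, `bonding`, and `thread` are my own illustrative choices, not part of the theorem.

```python
# A sketch of ah(omega) as an inverse limit of trees: T_n is the wedge of the
# first n arcs (arc k has length 1/2^k; the spaces are homeomorphic to the
# T_n in the structure theorem), and r_{n+1,n} collapses arc n+1 to the
# wedge point. Points are BASE (the wedge point) or (k, t) with 0 < t <= 2**-k.

BASE = ("base",)

def point(k, t):
    """A point at distance t from the wedge point along arc k (my encoding)."""
    assert 0 < t <= 2 ** -k
    return (k, t)

def bonding(n):
    """r_{n+1,n}: T_{n+1} -> T_n collapses arc n+1 to the wedge point."""
    def r(x):
        if x != BASE and x[0] == n + 1:
            return BASE
        return x
    return r

def thread(x, depth):
    """The compatible sequence in the inverse limit hitting x = (k, t) at level k."""
    k = x[0]
    return [BASE if n < k else x for n in range(1, depth + 1)]

# Check the inverse-limit compatibility condition for a point on the third arc.
xs = thread(point(3, 0.1), depth=6)
assert all(bonding(n)(xs[n]) == xs[n - 1] for n in range(1, 6))
```

Levels 1 and 2 of the thread sit at the wedge point, and from level 3 on the coordinates stabilize, exactly as the bonding retractions demand.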

We’ll use the Dendrite Structure Theorem to prove the last technical ingredient we’ll need. It appears as Exercise 10.51 in [2] and, I believe, is usually attributed to Borsuk.

Theorem: Dendrites are contractible.

Originally, I had posted an inverse-limit proof here that used the Dendrite Structure Theorem, but it was not correct. At some point, I’ll come back and post a corrected proof using inverse limits. There is also a way to prove dendrites are contractible using metrization theory. Every dendrite D admits a metric d such that for all distinct x,y\in D, there is an isometric embedding i_{x,y}:[0,d(x,y)]\to D onto the unique arc with endpoints x and y (this is called an \mathbb{R}-tree metric). We fix v\in D and define a contraction H:D\times [0,1]\to D by H(x,s)=i_{v,x}(s\cdot d(v,x)). Then H(x,1)=x and H(x,0)=v, and one can show without too much difficulty that H is continuous.
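For a concrete instance, here is a small numerical sketch of this contraction on the arc hedgehog ah(\omega) with its evident \mathbb{R}-tree metric (arc k has length 1/2^k and distinct arcs meet only at the wedge point v). The encoding and names are mine, chosen for illustration.

```python
# A hedged sketch of the contraction H(x, s) = i_{v,x}(s * d(v, x)) on ah(omega).
# A point is (k, t): distance t from the wedge point v along arc k; (k, 0)
# means v for every k. This is my own encoding, not standard notation.

def dist(x, y):
    """The R-tree metric on ah(omega): arcs meet only at the wedge point."""
    (i, s), (j, t) = x, y
    return abs(s - t) if i == j else s + t

def H(x, s):
    """Slide x toward the wedge point v: H(x, s) = i_{v,x}(s * d(v, x))."""
    k, t = x
    return (k, s * t)

v = (1, 0.0)                  # the wedge point (any arc index works at t = 0)
x = (3, 0.125)                # a point on the third arc

assert H(x, 1) == x           # H(x, 1) = x
assert dist(H(x, 0), v) == 0  # H(x, 0) is the wedge point
# H(x, .) traverses the unique arc from v to x isometrically in the parameter:
assert dist(v, H(x, 0.5)) == 0.5 * dist(v, x)
```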

 

Proof of the Shape Injectivity Theorem

Finally, we get to the point. Ok, remember the homomorphism \phi:\pi_1(\mathbb{E},b_0)\to \varprojlim_{n}F_n, \phi([\alpha])=([r_1\circ\alpha],[r_2\circ\alpha],\dots) from the first post? To show it’s injective, we’ll show \ker(\phi) is trivial. Suppose that \alpha:[0,1]\to \mathbb{E} is a loop based at b_0 such that r_n\circ\alpha:[0,1]\to X_n is null-homotopic for every n\in\mathbb{N}. We must show that \alpha is null-homotopic in \mathbb{E}.

Each space X_n is a wedge of circles and therefore has a universal covering space \widetilde{X}_n, which is an infinite tree. In particular, \widetilde{X}_n is the Cayley graph of F_n with respect to its free generating set. Let p_n:\widetilde{X}_n\to X_n be the universal covering map.

After making a choice of vertex basepoints \tilde{x}_n\in (p_{n})^{-1}(b_0), notice that the map r_{n+1,n}\circ p_{n+1}:\widetilde{X}_{n+1}\to X_n from the simply connected covering space has a unique lift s_{n+1,n}:(\widetilde{X}_{n+1},\tilde{x}_{n+1})\to (\widetilde{X}_{n},\tilde{x}_n) such that p_n\circ s_{n+1,n}=r_{n+1,n}\circ p_{n+1}.

This gives us an inverse system of based covering maps.

Consider the inverse limit \varprojlim_{n}p_n:\varprojlim_{n}\widetilde{X}_n\to \varprojlim_{n}X_n=\mathbb{E}. There are two things to be wary of: 1. the inverse limit of path-connected spaces is not always path-connected and 2. an inverse limit of covering maps is not usually a covering map. But these general failures are not a deal-breaker in our situation.

First, pick a path component of \varprojlim_{n}\widetilde{X}_n. Specifically, take \widetilde{\mathbb{E}} to be the path component containing the basepoint \tilde{b}_0=(\tilde{x}_1,\tilde{x}_2,\tilde{x}_3,\dots)\in \varprojlim_{n}\widetilde{X}_n. Let p:\widetilde{\mathbb{E}}\to \mathbb{E} be the restriction of \varprojlim_{n}p_n.


While the map p is not a traditional covering map, it is surjective and it enjoys all of the usual lifting properties of a covering map. It is a kind of “generalized covering map.” Also, the space \widetilde{\mathbb{E}} is not a dendrite. In fact, with the subspace topology inherited from the inverse limit, it’s not even locally path connected. However, if you apply the locally path-connected coreflection to it, the result is a true generalized universal covering in the sense of [1]. For our purposes, we only need to recognize that \widetilde{\mathbb{E}} is sitting inside an inverse limit of trees.

Lemma: \widetilde{\mathbb{E}} is uniquely arc-wise connected. In particular, it contains no simple closed curves.

Proof. The main theorem proved in Part I is that the limit of an inverse system of trees contains no simple closed curves. Since each covering space \widetilde{X}_n is a tree, this theorem applies and we conclude that \varprojlim_{n}\widetilde{X}_n contains no simple closed curve. In particular, the path component \widetilde{\mathbb{E}} contains no simple closed curve. \square

Recall that \alpha:[0,1]\to \mathbb{E} is a loop based at b_0 such that, for all n\in\mathbb{N}, the projection loop \alpha_n=r_n\circ\alpha:[0,1]\to X_n onto the wedge of n circles is null-homotopic in X_n. We are looking to build a null-homotopy of \alpha from a choice of null-homotopies of the approximating loops \alpha_n despite the fact that these null-homotopies might seem completely unrelated.


Let \widetilde{\alpha}_n:([0,1],0)\to (\widetilde{X}_n,\tilde{x}_n) be the unique lift of \alpha_n starting at \tilde{x}_n, i.e. so that p_n\circ\widetilde{\alpha}_n=\alpha_n. Since \alpha_n is null-homotopic in X_n, it must be the case that each \widetilde{\alpha}_n is actually a loop based at \tilde{x}_n. Moreover, the following equalities hold:

p_n\circ (s_{n+1,n}\circ\widetilde{\alpha}_{n+1})=r_{n+1,n}\circ p_{n+1}\circ\widetilde{\alpha}_{n+1}=r_{n+1,n}\circ\alpha_{n+1}=\alpha_n.

Since s_{n+1,n} preserves the basepoints (by construction) and \widetilde{\alpha}_n is the unique lift of \alpha_n starting at \tilde{x}_n, the equality above tells us that s_{n+1,n}\circ\widetilde{\alpha}_{n+1}=\widetilde{\alpha}_{n}. This means the lifted loops \widetilde{\alpha}_{n} agree with the bonding maps of the covering space inverse system. The universal property of the top inverse system hands us a unique loop \widetilde{\alpha}:[0,1]\to \varprojlim_{n}\widetilde{X}_n based at \tilde{b}_0 satisfying s_n\circ \widetilde{\alpha}=\widetilde{\alpha}_n, where s_n:\varprojlim_{n}\widetilde{X}_n\to\widetilde{X}_n denotes the n-th projection.


Now D=\widetilde{\alpha}([0,1]) is the continuous image of [0,1] in a Hausdorff space so, by the Hahn-Mazurkiewicz Theorem, D is a Peano continuum. Moreover, since D is path connected and contains \tilde{b}_0, we have D\subseteq \widetilde{\mathbb{E}}. Since \widetilde{\mathbb{E}} contains no simple closed curves, neither does D. Therefore, D is a dendrite. Finally, we apply the theorem (from earlier in this post) that all dendrites are contractible. Since \widetilde{\alpha} factors through the contractible space D, it is null-homotopic in \widetilde{\mathbb{E}}. We conclude that \alpha=p\circ\widetilde{\alpha} is null-homotopic in \mathbb{E}. This completes the proof that \ker(\phi) is trivial. \square

Concluding Thoughts

Where did the null-homotopy of \alpha come from? It would have required a super-technical effort to build an explicit homotopy, so we passed the hard work off to Borsuk’s Theorem that dendrites are contractible. Using Part I and the Hahn-Mazurkiewicz Theorem, we showed that the space D=\widetilde{\alpha}([0,1]) is a dendrite in the first place. One can then follow any proof of the contractibility of dendrites to finish the job.

Notice that we basically didn’t use anything specific about the earring space except that the universal covers of the approximating spaces X_n are trees! Hence, we could replace the earring with any inverse limit of graphs and the same proof would go through. So actually, we proved:

One-Dimensional Shape Injectivity Theorem: If (X,x_0)=\varprojlim_{n}(G_n,x_n) is an inverse limit of based graphs G_n, then the canonical induced homomorphism \phi:\pi_1(X,x_0)\to\varprojlim_{n}\pi_1(G_n,x_n) to the inverse limit of free groups is injective.

For example, any one-dimensional Peano continuum (including the Menger Curve and Sierpinski Carpet) can be written as the inverse limit of finite graphs and falls within the scope of this useful theorem.

If you feel comfortable with the end of the proof, then you can also prove as a quick exercise that every inverse limit of graphs is aspherical, i.e. has trivial higher homotopy groups!

References.

[1] H. Fischer and A. Zastrow, Generalized universal covering spaces and the shape group, Fund. Math. 197 (2007) 167-196.

[2] S. Nadler, Continuum Theory: An Introduction, Chapman & Hall/CRC Pure and Applied Mathematics. 1992.


Shape injectivity of the earring space (Part I)

One of my posts where I did some substantial hand-waving is my original post on the fundamental group of the earring space. I wrote about how to understand and work with this group, but I never gave a proof of the key fact that the earring group naturally injects into an inverse limit of free groups \varprojlim_{n}F_n. This is one of the two primary viewpoints that researchers take to study and apply the beautiful algebra of this group (and, more generally, fundamental groups of one-dimensional spaces). Seriously, I’m using this machinery like 1.) it’s going out of style and 2.) I understand fashion. The other approach avoids inverse limits by identifying the earring group as a group of reduced countable linear words over a countable alphabet. They’re logically equivalent, but sometimes one is more convenient than the other.

To be honest, I hesitated about writing this post. I say with confidence that there is no completely elementary proof. While I’ve read and understood many different proofs of shape injectivity, most of them are either super technical or they gloss over details by applying continuum and dimension theory. Some inquisitive and kind readers have given me the motivation to do it.

When trying to write this post, I dug deep into the literature trying to weasel out an almost entirely self-contained proof that a grad student would believe. After some reading, I worked things out and these posts are the result of that effort. This first post will mostly be used to set up the technical tools about arcs in inverse limits that we’ll need to prove shape injectivity.

Let C_n=\{(x,y)\in\mathbb{R}^2\mid (x-1/n)^2+y^2=(1/n)^2\} be the circle of radius 1/n centered at (1/n,0). Then \mathbb{E}=\bigcup_{n\in\mathbb{N}}C_n is the earring with basepoint b_0=(0,0). We need to set up a little more notation:

  • Let X_n=\bigcup_{k=1}^{n}C_k be the bouquet of the first n circles with free fundamental group F_n=\pi_1(X_n,b_0).
  • Let r_{n+1,n}:X_{n+1}\to X_n be the retraction, which collapses C_{n+1} to b_0 and is the identity elsewhere.
  • Let r_n:\mathbb{E}\to X_n be the retraction, which collapses the smaller copy of the earring \mathbb{E}_{\geq n+1}=\bigcup_{k=n+1}^{\infty}C_k to b_0 and is the identity elsewhere.

This gives an inverse system of retractions:

X_1\xleftarrow{r_{2,1}} X_2\xleftarrow{r_{3,2}} X_3\xleftarrow{r_{4,3}}\cdots

The closed mapping theorem should help convince you that the earring is homeomorphic to the inverse limit of this system of bouquets. Now apply \pi_1 to this inverse system: the maps r_n:\mathbb{E}\to X_n induce homomorphisms (r_n)_{\#}:\pi_1(\mathbb{E},b_0)\to F_n, which together induce a canonical homomorphism \phi:\pi_1(\mathbb{E},b_0)\to\varprojlim_{n}(F_n,(r_{n+1,n})_{\#}) defined by \phi([\alpha])=([r_1\circ\alpha],[r_2\circ\alpha],[r_3\circ\alpha],\dots).
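Concretely, for a loop that traverses finitely many of the circles, r_n just deletes the trips around the circles C_k with k>n. Here is a hedged Python sketch (the syllable encoding (k, exponent) and the function names are my own) checking the coherence that makes \phi land in the inverse limit: projecting to level n+1 and then down to level n agrees with projecting straight to level n.

```python
# A combinatorial sketch of the coordinates of phi. A loop that is a finite
# concatenation of trips around circles is recorded as a word of syllables
# (circle_index, exponent); this encoding is my own illustrative choice.

def reduce(word):
    """Freely reduce: merge adjacent syllables with equal index, drop zeros."""
    out = []
    for k, e in word:
        if out and out[-1][0] == k:
            _, e0 = out.pop()
            e += e0
        if e != 0:
            out.append((k, e))
    return out

def project(word, n):
    """[r_n o alpha]: delete trips around circles C_k with k > n, then reduce in F_n."""
    return reduce([(k, e) for k, e in word if k <= n])

w = [(1, 1), (3, 2), (2, -1), (3, -2), (2, 1), (1, -1)]
# Coherence of the coordinates of phi with the bonding maps (r_{n+1,n})_#:
for n in range(1, 4):
    assert project(project(w, n + 1), n) == project(w, n)
```

Deleting generators is exactly the retraction homomorphism F_{n+1}\to F_n on words, so the sequence of classes ([r_1\circ\alpha],[r_2\circ\alpha],\dots) really is a point of the inverse limit.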


The inverse limit of free groups, which we abbreviate as \varprojlim_{n}F_n, is precisely the first shape (or Čech) homotopy group \check{\pi}_1(\mathbb{E},b_0).

We noted above that \mathbb{E} is homeomorphic to \varprojlim_{n}X_n; basically, we just need to believe that \mathbb{E} is precisely the infinite wedge \bigvee_{n}S^1 viewed as a subspace of the infinite torus \prod_{n}S^1 with the product topology. It’s then very tempting to think that \phi is an isomorphism. However, \pi_1 doesn’t always preserve inverse limits!

Nevertheless, we can still understand and work with \pi_1(\mathbb{E},b_0) if we identify it as a subgroup of \varprojlim_{n}F_n. Hence, the motivation for showing that \phi is injective.

Shape Injectivity Theorem: \phi:\pi_1(\mathbb{E},b_0)\to\varprojlim_{n}(F_n,(r_{n+1,n})_{\#}) is injective.

Basically, this theorem says that a loop \alpha in \mathbb{E} is null-homotopic if and only if every projection r_n\circ\alpha is null-homotopic in the wedge of circles X_n. The contrapositive says that \alpha is not null-homotopic in \mathbb{E} iff there exists some n\geq 1 such that r_n\circ\alpha represents a non-trivial word in F_n.

Why this is not so obvious

Let \ell_n be the loop going once around C_n counterclockwise and let \ell_{n}^{-} be its reverse loop. Given a loop \alpha:[0,1]\to\mathbb{E}, we may assume that for each component (a,b) of [0,1]\backslash\alpha^{-1}(b_0), the restriction of \alpha to [a,b] is one of the paths \ell_n or \ell_{n}^{-}. Obviously, for each n\geq 1, the loops \ell_{n} and \ell_{n}^{-} can show up as subloops at most finitely many times or we would violate the uniform continuity of \alpha.

Suppose we know r_n\circ\alpha is null-homotopic in X_n. The primary difficulty is that the null-homotopies H_n:[0,1]^2\to X_n for r_n\circ\alpha might have nothing to do with each other. We just know one exists for each approximation level. There is no guarantee that we can “fix em up right” so that they agree with the bonding maps, i.e. satisfy r_{n+1,n}\circ H_{n+1}=H_n, and thus induce a null-homotopy of \alpha in \mathbb{E}.

Here is a more algebraic way to look at it. It doesn’t hurt to think of each projection loop r_n\circ\alpha as an unreduced finite word in the letters \{\ell_{k}^{\pm}\mid 1\leq k\leq n\}. Then \ell_{n}^{m} means a concatenation \ell_n\cdot\ell_n\cdots \ell_n of length m and \ell_{n}^{-m} means a concatenation \ell_{n}^{-}\cdot\ell_{n}^{-}\cdots \ell_{n}^{-} of length m. For instance, suppose \alpha is the loop described by the following projections.

  • r_1\circ\alpha\equiv \ell_{1}^{1}\ell_{1}^{-1}
  • r_2\circ\alpha\equiv\ell_{2}^{1}\ell_{2}^{-1}\ell_{1}^{1}\ell_{2}^{2}\ell_{2}^{-2}\ell_{1}^{-1}\ell_{2}^{3}\ell_{2}^{-3}
  • r_3\circ\alpha\equiv\ell_{3}^{1}\ell_{3}^{-1}\ell_{2}^{1}\ell_{3}^{2}\ell_{3}^{-2}\ell_{2}^{-1}\ell_{3}^{3}\ell_{3}^{-3}\ell_{1}^{1}\ell_{3}^{4}\ell_{3}^{-4}\ell_{2}^{2}\ell_{3}^{5}\ell_{3}^{-5}\ell_{2}^{-2}\ell_{3}^{6}\ell_{3}^{-6}\ell_{1}^{-1}\ell_{3}^{7}\ell_{3}^{-7}\ell_{2}^{3}\ell_{3}^{8}\ell_{3}^{-8}\ell_{2}^{-3}\ell_{3}^{9}\ell_{3}^{-9}
  • and so on: to obtain r_{n}\circ\alpha, insert an inverse pair of the form \ell_{n}^{k}\ell_{n}^{-k} between any two letters of the previous projection r_{n-1}\circ\alpha (and at both ends).

Notice that deleting the \ell_2’s from r_2\circ\alpha gives r_1\circ\alpha, deleting the \ell_3’s from r_3\circ\alpha gives r_2\circ\alpha, and so on. Moreover, each letter \ell_n is only used finitely many times, so this data does indeed describe a loop in \mathbb{E}. Notice that even though these finite projection words are getting pretty long, the homotopy class [r_n\circ\alpha] reduces to the trivial word in the free group F_n. After all, we just inserted inverse pairs that cancel!
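Here is a short Python sketch (using a syllable encoding (letter_index, exponent) of my own choosing) that builds the projection words by this insertion scheme and verifies both observations: deleting the \ell_m’s from r_m\circ\alpha returns r_{m-1}\circ\alpha, and each projection word freely reduces to the trivial word.

```python
# Build the projection words of the example loop and check the two claims.
# A word is a list of syllables (letter_index, exponent); encoding is mine.

def reduce(word):
    """Freely reduce a word: merge adjacent syllables with equal index, drop zeros."""
    out = []
    for k, e in word:
        if out and out[-1][0] == k:
            _, e0 = out.pop()
            e += e0
        if e != 0:
            out.append((k, e))
    return out

def next_level(word, m):
    """Insert an inverse pair ell_m^k ell_m^{-k} in each gap of the previous word."""
    out, k = [], 1
    for syll in word:
        out += [(m, k), (m, -k), syll]
        k += 1
    out += [(m, k), (m, -k)]
    return out

levels = [[(1, 1), (1, -1)]]          # r_1 o alpha = ell_1^1 ell_1^{-1}
for m in range(2, 6):
    levels.append(next_level(levels[-1], m))

for m in range(2, 6):
    w = levels[m - 1]
    # deleting the ell_m's from r_m o alpha gives r_{m-1} o alpha ...
    assert [s for s in w if s[0] != m] == levels[m - 2]
    # ... and each projection reduces to the trivial word in F_m
    assert reduce(w) == []
```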

But the infinite limit loop \alpha in \mathbb{E} is a “transfinite word” of dense order. It is null-homotopic (due to the main result of the post) but it’s much harder to come up with an explicit contraction because there is no finite reduction scheme that can do the job. Between any two of the inverse pairs that you wanted to cancel in the n-th projection, you actually had letters \ell_{n+1}^{\pm} up in the next level. You can’t just cancel in the n-th level, forget about what you just did, and then move on up to the n+1-st level.

The trouble is that the process of cancellation requires choice. Ok, there’s not much choice in cancelling the first word. But look at the second one.

r_2\circ\alpha\equiv\ell_{2}^{1}\ell_{2}^{-1}\ell_{1}^{1}\ell_{2}^{2}\ell_{2}^{-2}\ell_{1}^{-1}\ell_{2}^{3}\ell_{2}^{-3}

You could cancel all the \ell_2’s first and then the remaining \ell_1’s. Or you could cancel the middle \ell_2’s first, then the \ell_1’s, and then the remaining \ell_2’s. It may not seem like a big deal, but these are different homotopies! The only thing we have going for us is that we have some contracting reduction for each [r_n\circ\alpha]. This leaves us with an infinite sequence of reductions for the projection words – one for each level. We have no idea if these reductions match up or can be chosen so that the projection of a reduction of [r_n\circ\alpha] to the next level down is exactly the reduction for [r_{n-1}\circ\alpha]. Sure, once I choose a reduction/null-homotopy for [r_n\circ\alpha], I can project it down to make sure the reductions on lower levels match up, but then you have to start over and worry about [r_{n+1}\circ\alpha]. If you project down and fix all the lower levels and continue this process all the way up, you’re going to end up rearranging the reduction choice at each level infinitely many times. There is no guarantee this can be done “continuously.”

History

The first attempt to identify \pi_1(\mathbb{E},b_0) was by H.B. Griffiths [3]. However, there was a critical error in Griffiths’ proof of the injectivity. The error was observed and a correct proof finally given (30 years later!) by Morgan and Morrison [4]. Many years back when I read the original proof for the first time, I was a bit unsatisfied with how specific and technical it all was. Later on, I read the proof given by Eda and Kawamura in [2], which felt more intuitive because all I had to do was understand inverse limits and believe a little continuum theory. Bonus: It applies to all spaces with Lebesgue covering dimension 1, not just \mathbb{E}. The key idea is originally due to the work in [1] by Curtis and Fort from the 1950’s.

Trees and Inverse Limits

An important theme in wild topology is the idea of a space being “uniquely arc-wise connected.” Here, an “arc” in a space X refers to a subspace of X homeomorphic to [0,1]. The images of 0 and 1 in X are the endpoints of the arc. A “simple closed curve” in X is a homeomorphic copy of the unit circle S^1.

Definition: A space X is uniquely arc-wise connected if for all distinct points x,y\in X, there is a unique arc in X whose endpoints are x and y.

The next proposition gives another useful way to describe uniquely arc-wise connected spaces.

Proposition: If X is uniquely arc-wise connected, then X is path connected and contains no simple closed curves. The converse holds if X is weakly Hausdorff.

Proof. Since S^1 is not uniquely arc-wise connected, one direction is obvious. Now suppose X is weakly Hausdorff and not uniquely arc-wise connected. Then there are distinct arcs A,B\subseteq X sharing the same endpoints. Since A\neq B, without loss of generality, we may suppose there is a point a\in A\backslash B. Note that since X is weakly Hausdorff, A and B are closed. It follows that A\cap B is non-empty and closed in A and thus A\backslash (A\cap B) is open in A. Choosing a homeomorphism h: [0,1]\to A, let (c,d) be the component of h^{-1}(A\backslash (A\cap B)) containing h^{-1}(a). Now A_1=h([c,d]) is a subarc of A with endpoints \{h(c), h(d)\}\subseteq A\cap B. If B_1 is the subarc of B with endpoints h(c) and h(d), then we have A_1\cap B_1=\{h(c),h(d)\}. Now it’s clear that A_1\cup B_1 is a homeomorphic image of a circle, i.e. a simple closed curve. \square

The uniquely arc-wise connected spaces you’re most likely to already be familiar with are trees.

Definition: A simplicial tree is a connected, one-dimensional simplicial complex without any cycles. A (topological) tree is a space that is the geometric realization of a simplicial tree.

Basic algebraic topology tells us that trees are contractible and uniquely arc-wise connected. Since a tree T is simply connected, between any two points x,y\in T there is a single homotopy (rel. endpoints) class of paths from x to y. This means a parameterization \beta:[0,1]\to T of the unique arc from x to y is a reduced representative of the single homotopy class of paths from x to y, in the sense that it has no null-homotopic subloops. This reduced representative is unique up to reparameterization. A non-reduced path in a tree would have some null-homotopic zig-zags that we could “delete” by a homotopy to obtain a reduced representative. Of course, there could be infinitely many zig-zags, but since trees are semilocally simply connected, this is not much of an obstacle to overcome.
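The zig-zag deletion is easy to make precise combinatorially. Below is a tiny Python sketch (my own encoding) for simplicial edge paths: a path is a list of oriented edges, and cancelling each edge immediately followed by its reverse yields the reduced representative.

```python
# Reduced edge paths in a simplicial tree. An oriented edge is a pair (u, v)
# of vertex labels; a path is a list of edges. Encoding is my own choice.

def reduce_path(path):
    """Delete zig-zags: an edge immediately followed by its reverse cancels."""
    out = []
    for u, v in path:
        if out and out[-1] == (v, u):
            out.pop()       # cancel the backtrack just created
        else:
            out.append((u, v))
    return out

# A non-reduced path in the tree a - b - c - d that wanders before reaching d:
path = [("a", "b"), ("b", "c"), ("c", "b"), ("b", "c"), ("c", "d")]
assert reduce_path(path) == [("a", "b"), ("b", "c"), ("c", "d")]
```

Since cancelling a backtrack can expose a new one, the stack-based loop above keeps the partial result reduced at every step, which is why a single pass suffices.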

Now what about an inverse limit \varprojlim_{n}(T_n,f_{n+1,n}) of trees T_n? Informally, such an inverse limit “glues” together the trees T_n according to their bonding maps. The result should be one-dimensional and if f_{n+1,n} maps x,y\in T_{n+1} to the same point of T_n, then f_{n+1,n}  will send the unique arc connecting x and y to a finite topological subtree of T_n. So there should be no way for a simple closed curve to magically appear in the gluing process. We’ll prove exactly this using the simplest proof I could come up with.

Recall that an inverse limit \varprojlim_{n}(X_n,f_{n+1,n})=\{(x_n)\mid x_n\in X_n\text{ and }f_{n+1,n}(x_{n+1})=x_n\} is topologized as a subspace of \prod_{n\in\mathbb{N}}X_n. If f_n:X\to X_n are the projection maps, then a point x\in X is represented by the sequence x=(f_1(x),f_2(x),f_3(x),\dots). A basic open neighborhood of x is of the form X\cap \prod_{n\in\mathbb{N}}U_n where U_n is an open neighborhood of f_n(x) and there is an M such that U_n=X_n for n>M. Since the functions f_{n+1,n} are continuous and f_{n+1,n}(f_{n+1}(x))=f_n(x), we may replace U_2 with U_2\cap W_2 where W_2 is an open neighborhood of f_2(x) with f_{2,1}(W_2)\subseteq U_1. Continuing inductively and terminating at n=M, we can replace U_n with U_n\cap W_n where f_{n,n-1}(W_n)\subseteq U_{n-1}. In this way, we may take a basic open neighborhood X\cap \prod_{n\in\mathbb{N}}U_n of x to satisfy f_{n+1,n}(U_{n+1})\subseteq U_n for 1\leq n\leq M-1 and U_n=X_n for n>M.

Lemma: Suppose X=\varprojlim_{n}(X_n,f_{n+1,n}) is an inverse limit of Hausdorff spaces and f_n:X\to X_n are the projection maps. If A,B are disjoint compact subsets of X, then there exists an n\geq 1 such that f_n(A)\cap f_n(B)=\emptyset.

Proof. Suppose, to obtain a contradiction, that for every n\geq 1 there exist a_n\in A and b_n\in B such that f_n(a_n)=f_n(b_n). Notice that since the coordinates of each a_n and b_n must agree with the bonding maps f_{n+1,n}, this means f_k(a_n)=f_k(b_n) for all 1\leq k\leq n. Since A and B are compact, we may find subsequences \{a_{n_j}\} and \{b_{n_j}\} that converge to a\in A and b\in B respectively. We’re going to prove that \{b_{n_j}\} also converges to a. Consider a basic open neighborhood U=X\cap\prod_{n}U_n of a. Let N be the largest n such that U_n\neq X_n. Since \{a_{n_j}\}\to a, there exists a J\geq 1 such that a_{n_j}\in U for all j\geq J. We can choose J large enough so that n_J>N. Now pick any j\geq J. Since f_{n_j}(a_{n_j})=f_{n_j}(b_{n_j}), we have f_{k}(b_{n_j})=f_{k}(a_{n_j})\in U_k for all 1\leq k\leq n_j. Since N<n_J\leq n_j and U_k=X_k for k>N, this gives b_{n_j}\in U (for j\geq J) and we conclude that \{b_{n_j}\}\to a. However, this means \{b_{n_j}\} converges to both of the points a and b; an impossibility in a Hausdorff space. \square

Remark: Notice that if f_m(A)\cap f_m(B)=\emptyset, then it must also be the case that f_k(A)\cap f_k(B)=\emptyset for k\geq m. So for given A and B, we can choose m to be as large as we want.

Theorem: An inverse limit X=\varprojlim_{n}(T_n,f_{n+1,n}) of trees contains no simple closed curves.

Proof. Since topological trees are always Hausdorff, X is Hausdorff. Suppose, to obtain a contradiction, that f:S^1\to X is an embedding. For i=1,2,3,4, let S^{1}_{i} be the intersection of S^1 and the i-th quadrant of the plane (including the bounding axes). Now the sets A_i=f(S^{1}_{i}) are four (compact) arcs in X that meet at endpoints to form the simple closed curve f(S^1). Let x=f(1,0) and y=f(-1,0). Notice that A_1\cap A_3=\emptyset and A_2\cap A_4=\emptyset. Let \gamma_i be paths tracing the arcs A_i, oriented so that \gamma_1\cdot\gamma_2 and \gamma_4\cdot\gamma_3 are injective paths from x to y.


According to the previous Lemma (and the following Remark), if we denote the projection maps by f_n:X\to T_n, then we can find an m such that f_m(A_1)\cap f_m(A_3)=\emptyset and f_m(A_2)\cap f_m(A_4)=\emptyset. Note that f_m(x) and f_m(y) are distinct points in the tree T_m (as f_m(A_1) and f_m(A_3) are disjoint) and are thus connected in T_m by a unique arc. Let \beta:[0,1]\to T_m trace out this arc from f_m(x) to f_m(y). Now \beta is the reduced representative of both f_m\circ(\gamma_1\cdot\gamma_2)=(f_m\circ\gamma_1)\cdot(f_m\circ\gamma_2) and f_m\circ(\gamma_4\cdot\gamma_3)=(f_m\circ\gamma_4)\cdot(f_m\circ\gamma_3).

  • Considering the reduction of the path (f_m\circ\gamma_1)\cdot(f_m\circ\gamma_2) to \beta, we see that an initial segment \beta|_{[0,s]} has image in f_m(A_1) and the terminal segment \beta|_{[s,1]} has image in f_m(A_2).
  • Considering the reduction of the path (f_m\circ\gamma_4)\cdot(f_m\circ\gamma_3) to \beta, we see that an initial segment \beta|_{[0,t]} has image in f_m(A_4) and the terminal segment \beta|_{[t,1]} has image in f_m(A_3).
  1. If s<t, then \beta([s,t])\subseteq f_m(A_2)\cap f_m(A_4).
  2. If t<s, then \beta([t,s])\subseteq f_m(A_1)\cap f_m(A_3).
  3. If s=t, then \beta(s) lies in every f_m(A_i).

An illustration of the two cancellations to \beta that lead to Case 1. Note that a middle portion may cancel in the reduction to \beta (indicated by the dashed lines).

In any of these cases, even the degenerate ones where one of s or t is 0 or 1, we arrive at a contradiction. \square

Corollary: Every path-component of the limit of an inverse system of trees is uniquely arc-wise connected.

In Part II, the fact that an inverse limit of trees contains no simple closed curves will be a critical part of proving the Shape Injectivity Theorem.

References.

[1] M.L. Curtis and M.K. Fort, The fundamental group of one-dimensional spaces, Proc. Amer. Math. Soc. 10 (1959) 140-148.

[2] K. Eda and K. Kawamura, The fundamental groups of one-dimensional spaces, Topology and its Applications 87 (1998) 163-172.

[3] H.B. Griffiths, Infinite products of semigroups and local connectivity, Proc. London Math. Soc. (3), 6 (1956), 455-485.

[4] J.W. Morgan and I. Morrison, A van Kampen theorem for weak joins, Proceedings of the London Mathematical Society 53 (1986) 562-576.


What is an infinite word?

In this post, we’ll explore the idea of non-commutative infinitary operations on groups, that is, multiplying together infinitely many elements of a group. This idea arises very naturally in “wild” or “infinitary” algebraic topology. In fact, lately, this has been at the forefront of my approach to the subject. Instead of sets with binary (or other finitary) operations, homotopy groups are viewed simply as groups equipped with extra infinite product operations. It turns out that the well-definedness of infinite products as an operation is a crucial assumption of many important theorems.

Anyone who has taken a Calculus sequence or analysis course will be familiar with an infinite sum \sum_{n=1}^{\infty}a_n=a_1+a_2+a_3+\dots which extends the commutative operation of addition on the reals/complex numbers. Even in that situation, you need to take a little bit of care to distinguish absolutely and conditionally convergent series. But here we’re considering the properties of infinite products in non-commutative groups; think free groups where the elements are words. In this situation, the difference between conditional and absolute convergence becomes a little trickier, relating to “infinite” commutativity.

First, let’s review finite words.

Finite Words: The free group F(X) on a set X can be constructed as the group of reduced finite words in the “alphabet” X. In more detail, an element is a finite length word w=x_{1}^{n_1}x_{2}^{n_2}...x_{m}^{n_m} where the “letters” x_1,x_2,...,x_m are elements of X, the exponents n_k are non-zero integers (negatives are necessary because a group must have inverses), and the word is reduced in the sense that consecutive letters can’t be the same (otherwise we could combine or cancel them according to their exponents), i.e. x_{j}\neq x_{j+1} for j=1,\dots,m-1. The power x_{1}^{4} is meant to represent the 4-letter word x_{1}x_{1}x_{1}x_{1}. Hence the word w above has word length |n_1|+|n_2|+\dots +|n_m|. Multiplication in F(X) is given by concatenation (placing one word after another) and performing any necessary reduction at the place where the words are joined to get a reduced word. For example, if w_1=x_{1}^{2}x_{2}^{-3}x_{1}x_{3}x_{4}^{-1} and w_2=x_{4}x_{3}^{-2}x_{5}^{7}x_{1}^{-9}, the product would be

w_1w_2=x_{1}^{2}x_{2}^{-3}x_{1}x_{3}^{-1}x_{5}^{7}x_{1}^{-9}

since first, the inverses x_{4} and x_{4}^{-1} would cancel and then x_{3} and x_{3}^{-2} are consecutive and need to be combined.
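The concatenate-then-reduce multiplication is easy to sketch in code. The following Python snippet (the (generator_index, exponent) syllable encoding is my own choice) reproduces the example product.

```python
# Free-group multiplication as concatenation followed by reduction at the join.
# A word is a list of syllables (generator_index, exponent); encoding is mine.

def reduce(word):
    """Merge adjacent syllables with the same generator; drop zero exponents."""
    out = []
    for x, e in word:
        if out and out[-1][0] == x:
            _, e0 = out.pop()
            e += e0
        if e != 0:
            out.append((x, e))
    return out

def multiply(w1, w2):
    """The product in F(X): concatenate, then reduce at the place of joining."""
    return reduce(w1 + w2)

w1 = [(1, 2), (2, -3), (1, 1), (3, 1), (4, -1)]   # x1^2 x2^-3 x1 x3 x4^-1
w2 = [(4, 1), (3, -2), (5, 7), (1, -9)]           # x4 x3^-2 x5^7 x1^-9

# x4^-1 x4 cancels, then x3 x3^-2 combines to x3^-1:
assert multiply(w1, w2) == [(1, 2), (2, -3), (1, 1), (3, -1), (5, 7), (1, -9)]
```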

Here’s another way to look at it: Let [n]=\{1,2,...,n\} be the finite linearly ordered set on n elements and [0] be the empty set. Again, X is a set of letters and X^{-1}=\{x^{-1}\mid x\in X\} will denote a set of formal inverses. A word in X is a function w:[n]\to X\cup X^{-1} and we identify it with the n-letter word w:=w(1)w(2)\dots w(n). If n=0, we call w the empty word. A word is reduced if whenever w(j)=x, we have w(j+1)\neq x^{-1} (i.e. a letter and its inverse never appear consecutively). In any word, one can delete consecutive inverse pairs to obtain a unique reduced representative, which is independent of the order of reduction. The product of the words w_1:[n]\to X\cup X^{-1} and w_2:[m]\to X\cup X^{-1} is now the reduced representative of the concatenation w_1w_2:[n+m]\to X\cup X^{-1}.

Of course, this latter view is equivalent to the first one; however, it highlights an easily overlooked fact. The operation in free groups really comes from the ability to “add” linear orders together, that is, place them side by side and still have a linear order.
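The reduction and concatenation described above can be sketched in code. Here is a minimal Python sketch, assuming (my choice of encoding, not anything canonical) that a word is a list of (letter, exponent) pairs with non-zero integer exponents and distinct consecutive letters:

```python
# A minimal sketch of reduced words in a free group. A word is encoded as a
# list of (letter, exponent) pairs, e.g. x1^2 x2^-3 -> [("x1", 2), ("x2", -3)].

def reduce_word(pairs):
    """Combine or cancel consecutive pairs with the same letter (stack-based)."""
    stack = []
    for letter, exp in pairs:
        if stack and stack[-1][0] == letter:
            _, prev_exp = stack.pop()
            total = prev_exp + exp
            if total != 0:               # combine, e.g. x^1 * x^-2 = x^-1
                stack.append((letter, total))
            # total == 0: the two pairs cancel entirely; dropping them may
            # expose a new adjacency, which the stack handles automatically
        else:
            stack.append((letter, exp))
    return stack

def multiply(w1, w2):
    """Concatenate two reduced words and reduce at the junction."""
    return reduce_word(w1 + w2)

# The example from the text: w1 = x1^2 x2^-3 x1 x3 x4^-1, w2 = x4 x3^-2 x5^7 x1^-9
w1 = [("x1", 2), ("x2", -3), ("x1", 1), ("x3", 1), ("x4", -1)]
w2 = [("x4", 1), ("x3", -2), ("x5", 7), ("x1", -9)]
print(multiply(w1, w2))
# → [('x1', 2), ('x2', -3), ('x1', 1), ('x3', -1), ('x5', 7), ('x1', -9)]
```

The stack-based loop matches the verbal description: the x_4-pair cancels to nothing, which puts x_3 and x_3^{-2} next to each other so they combine to x_3^{-1}, reproducing the product w_1w_2 displayed above.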

Infinite products in groups

Infinite words and infinite products should extend their finite counterparts. However, we must be careful not to jump to any conclusions about how these should work. Let’s start with a naive view that will lead us in the right direction. We want to take a sequence g_1,g_2,g_3,\dots of elements in a group G and form a new infinite product element g_{\infty} that behaves like the notation g_1g_2g_3\dots suggests it should. In particular, we should also be able to form an infinite product element g_{n}g_{n+1}g_{n+2}\dots\in G out of the terminal subsequence g_{n},g_{n+1},g_{n+2},\dots so that

  • g_{1}^{-1}\left(g_1g_2g_3\dots\right)=g_2g_3g_4\cdots
  • g_{2}^{-1}g_{1}^{-1}\left(g_1g_2g_3\dots\right)=g_3g_4g_5\dots
  • g_{3}^{-1}g_{2}^{-1}g_{1}^{-1}\left(g_1g_2g_3\dots\right)=g_4g_5g_6\dots
  • and so on.

In other words, we need to have a sequence t_1,t_2,t_3,\dots\in G of tails where t_n represents g_ng_{n+1}g_{n+2}\cdots. Specifically, t_1=g_{\infty} is the desired infinite product itself. The above equations simply mean that the tails must satisfy the equations t_n=g_nt_{n+1} for all n\in\mathbb{N}. Hence, to find an infinite product value for g_1,g_2,g_3,\dots we should look for a tail sequence \{t_n\} and check these equations.

But this leaves us in an awkward situation. Given any sequence g_1,g_2,g_3,\dots in any group G and any given g\in G, I can set t_1=g and start inductively solving t_{n+1}=g_{n}^{-1}t_n to find a tail sequence t_1,t_2,t_3,\dots that realizes the arbitrary element g as the “infinite product.” We conclude that algebraic structure alone is not sufficient to make infinite products in groups well-defined. However, the equations do tell us that a tail sequence should uniquely determine an infinite product and, conversely, that the infinite product value should uniquely determine the entire tail sequence.
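This “anything goes” construction is easy to carry out concretely. Here is a sketch in the additive group (\mathbb{Z},+), where the tail equation t_n=g_nt_{n+1} reads t_n=g_n+t_{n+1}; the helper name tails is mine, not the post’s:

```python
# Realizing an ARBITRARY target value as the "infinite product" of a fixed
# sequence, by solving t_{n+1} = g_n^{-1} t_n backwards. In (Z, +) this is
# t_{n+1} = t_n - g_n.

def tails(g_seq, target, length):
    """First `length` tails realizing `target` as the 'infinite product'."""
    t = [target]
    for g_n in g_seq[: length - 1]:
        t.append(t[-1] - g_n)          # additive version of t_{n+1} = g_n^{-1} t_n
    return t

g_seq = [1] * 10                        # the constant sequence 1, 1, 1, ...
t = tails(g_seq, target=5, length=10)
print(t)  # → [5, 4, 3, 2, 1, 0, -1, -2, -3, -4]

# Every tail equation t_n = g_n + t_{n+1} holds...
assert all(t[n] == g_seq[n] + t[n + 1] for n in range(9))
# ...but the tails march off to -infinity rather than shrinking toward 0.
```

The point is exactly the one made above: the tail equations alone are always solvable, so a meaningful infinite product must also constrain the tails, e.g. by forcing them to shrink.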

Warning: Here is another way to understand why we should not expect to be able to create a well-defined infinite product element g_1g_2g_3\dots\in G out of every sequence g_1,g_2,g_3,\dots in G. Suppose g\neq e is a non-identity element and g_n=g for all n\in\mathbb{N}. If g_{\infty}=ggg\dots, then the tail t_2 is also the infinite product of the sequence g,g,g,\dots. Hence

g_{\infty} = ggg\dots = g (ggg\dots ) = g(g_{\infty})

and cancelling g_{\infty} on the right gives e=g; a contradiction.

Introducing a notion of convergence to the identity: Recalling how classical infinite sums and products in \mathbb{R} are defined, we quickly realize that such infinitary operations depend heavily on the topological structure of \mathbb{R}. Hence, we must add a notion of convergence in our group G. Using infinite sums as a guide, we should be able to form an infinite product of a sequence g_1,g_2,g_3,\dots, which is “shrinking” in some sense. So let’s suppose G is a group with identity element e. Moreover, G is equipped with a descending filtration of subgroups G_1\supseteq G_2\supseteq G_3\supseteq \cdots.

This extra structure will be the data that tells us when sequences are “converging” to e. A different choice of filtration might yield a different notion of convergence and hence a different infinite product operation.

Definition: A sequence g_1,g_2,g_3,\dots in G is shrinking if for every m\geq 1, there exists an N\geq 1 such that g_n\in G_m for all n\geq N.

Definition: Given shrinking sequences g_1,g_2,g_3,\dots and t_1,t_2,t_3,\dots such that t_{n}=g_{n}t_{n+1} for all n\in\mathbb{N}, we call t_1,t_2,t_3,\dots a tail sequence for g_1,g_2,g_3,\dots and g_{\infty}=t_1 an infinite product of the sequence g_1,g_2,g_3,\dots.

Definition: We say that infinite products are well-defined in the filtered group (G,\{G_m\}) if the infinite product value of a sequence g_1,g_2,g_3,\dots is unique.

Theorem: Infinite products in the filtered group (G,\{G_n\}) are well-defined if and only if \bigcap_{m\in\mathbb{N}}G_m=\{e\}.

Proof. If e\neq g\in \bigcap_{m\in\mathbb{N}}G_m, then the constant sequences e,e,e,\dots and g,g,g,\dots are both shrinking tail sequences for the sequence e,e,e,\dots. These two tail sequences define the infinite product value of the sequence e,e,e,\dots as e and g respectively. Hence infinite product values are not uniquely defined. For the converse, suppose infinite product values are not unique. Then we have a shrinking sequence g_1,g_2,g_3,\dots that allows for two shrinking tail sequences t_1,t_2,t_3,\dots and s_1,s_2,s_3,\dots such that s_1\neq t_1. Define g=s_{1}^{-1}t_1 and notice g\neq e. From our tail sequence equations, we have s_{n}s_{n+1}^{-1}=g_n=t_{n}t_{n+1}^{-1} for all n and so s_{n}^{-1}t_{n}=s_{n+1}^{-1}t_{n+1} for all n. In particular, g=s_{n}^{-1}t_{n} for all n\in\mathbb{N}. Fix m\in \mathbb{N}. Since \{s_n\} and \{t_n\} are shrinking sequences, both sequences are eventually in G_m. Since G_m is a subgroup, \{s_{n}^{-1}t_{n}\} is eventually in G_m. Since m was arbitrary, we see that g\in G_m for all m\in\mathbb{N}. Therefore, e\neq g\in \bigcap_{m\in\mathbb{N}}G_m. \square

So given a descending filtration which has trivial intersection, one can meaningfully define infinite products. If one uses a filtration with G_m=\{e\} for sufficiently large m, then the infinite product structure will be trivial, i.e. only sequences that are eventually e will have infinite products.
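Here is a small numerical check of a non-trivial case, under my own choice of example (not one from the post): the additive group (\mathbb{Z},+) with the filtration G_m=2^m\mathbb{Z}, whose intersection is \{0\}, so infinite product values are unique when they exist.

```python
# (Z, +) with filtration G_m = 2^m Z. The sequence g_n = -2^n is shrinking,
# and t_n = 2^n is a shrinking tail sequence for it (2^n = -2^n + 2^(n+1)),
# so the infinite "product" (sum) -2 - 4 - 8 - ... has the unique value t_1 = 2,
# matching the familiar 2-adic identity.

def in_G(x, m):
    """Membership in G_m = 2^m Z."""
    return x % (2 ** m) == 0

g = {n: -(2 ** n) for n in range(1, 20)}
t = {n: 2 ** n for n in range(1, 21)}

# The tail equations t_n = g_n + t_{n+1} all hold:
assert all(t[n] == g[n] + t[n + 1] for n in range(1, 20))

# Both sequences are shrinking: for each m, the terms are eventually in G_m.
assert all(in_G(g[n], m) for m in range(1, 10) for n in range(m, 20))
assert all(in_G(t[n], m) for m in range(1, 10) for n in range(m, 21))

print("infinite sum of -2, -4, -8, ... relative to {2^m Z} is", t[1])
```

Note that the value 2 depends on the filtration: it records convergence in the 2-adic sense, not convergence in \mathbb{R}.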

For full generality: To do this in more generality, so that it applies to (\mathbb{R},+), one should use a descending filtration of subsets \{G_n\} containing e such that for each n, we have G_mG_m\subseteq G_n for sufficiently large m. For complete generality, one should fix a subgroup S\subseteq G^{\omega} of shrinking sequences.

Another warning: We know, even in the nicest situation possible (infinite sums in \mathbb{R}), that just because a sequence g_1,g_2,g_3,\dots is shrinking (converging to 0) does not mean that it yields a “convergent” infinite sum, e.g. the harmonic series \sum\frac{1}{n}. Hence the necessity of tails really is apparent even in basic analysis.

Connection to Infinitary Homotopy Groups

The motivation here is topological in nature: given a first countable space X, a point x_0\in X, and a sequence of maps \alpha_k:([0,1]^n,\partial [0,1]^n)\to (X,x_0) converging to the constant loop at x_0, we can define the infinite concatenation \alpha_{\infty}:([0,1]^n,\partial [0,1]^n)\to (X,x_0) to be the map which is (after a suitable scaling) \alpha_k on \left[\frac{k-1}{k},\frac{k}{k+1}\right]\times [0,1]^{n-1} and \alpha_{\infty}(\{1\}\times [0,1]^{n-1})=x_0. Then we could consider the homotopy class [\alpha_{\infty}] to be an infinite product of the sequence [\alpha_k] in the n-th homotopy group \pi_n(X,x_0). What’s really happening here is that we have a well-defined infinitary operation on the space of maps ([0,1]^n,\partial [0,1]^n)\to (X,x_0) and we’re hoping it descends to a well-defined operation on homotopy classes.

If we want to use a filtration, we should assume X is first countable at x_0. If it isn’t, there may not be any natural infinite products. For example, if Y=\omega_1+1 is the first compact uncountable ordinal, then the reduced suspension \Sigma Y doesn’t allow for sequences like \alpha_k; the fundamental group is isomorphic to the free group F(\omega_1), where there are no meaningful infinite products. Let U_1\supseteq U_2\supseteq U_3\supseteq... be a countable neighborhood base at x_0 and let G_m be the image of the homomorphism \pi_n(U_m,x_0)\to\pi_n(X,x_0) induced by the inclusion U_m\to X. Now G_1\supseteq G_2\supseteq G_3\supseteq... is a descending filtration of G=\pi_n(X,x_0). According to the theorem above, infinite products in \pi_n(X,x_0) defined using infinite concatenations of maps at x_0 and those defined using the filtration agree and are unique if there aren’t any non-trivial classes in \pi_n(X,x_0) that have arbitrarily small representatives. You have such problematic classes in the fundamental group of the Harmonic Archipelago and its higher-dimensional analogues. In those groups, you have a notion of infinite product coming from a filtration, but they are not uniquely defined, i.e. you may have [\alpha_k]=[\beta_k] for all k\in\mathbb{N} but the infinite products \prod_{k=1}^{\infty}[\alpha_k]:=[\alpha_{\infty}] and \prod_{k=1}^{\infty}[\beta_k]:=[\beta_{\infty}] are not equal.
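The reparametrization behind the infinite concatenation is concrete enough to code. Here is a sketch for loops (the n=1 case), assuming each \alpha_k is given as a Python function [0,1]\to X; the helper infinite_concatenation is my own name for it:

```python
import math

# Infinite concatenation of loops alpha_1, alpha_2, ...: alpha_inf traverses
# alpha_k on the subinterval [(k-1)/k, k/(k+1)] and sends t = 1 to the
# basepoint x0, as in the text.

def infinite_concatenation(alphas, x0):
    """alphas: function k -> (the loop alpha_k); returns the map alpha_inf."""
    def alpha_inf(t):
        if t == 1.0:
            return x0                    # the limit endpoint of the concatenation
        # Find k with (k-1)/k <= t < k/(k+1); algebra gives k <= 1/(1-t).
        k = max(1, math.floor(1 / (1 - t)))
        if t < (k - 1) / k:              # guard against floating-point error
            k -= 1
        lo, hi = (k - 1) / k, k / (k + 1)
        s = (t - lo) / (hi - lo)         # rescale to the domain [0, 1] of alpha_k
        return alphas(k)(s)
    return alpha_inf

# Example: every alpha_k winds once around the unit circle in C.
loop = lambda k: (lambda s: complex(math.cos(2 * math.pi * s),
                                    math.sin(2 * math.pi * s)))
a = infinite_concatenation(loop, complex(1, 0))
print(a(0.0), a(0.5), a(1.0))  # t = 0 and t = 1/2 start alpha_1 and alpha_2
```

Note how the subintervals [(k-1)/k, k/(k+1)] tile [0,1): [0,1/2], [1/2,2/3], [2/3,3/4], and so on, shrinking toward 1, which is why continuity at t=1 requires the loops \alpha_k to converge to the constant loop at x_0.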

Beyond \omega-type products

So far, we have only considered infinite products g_1g_2g_3\dots ordered by \omega=\{1,2,3,\dots\}. However, if G is non-commutative, we are forced to expand the kinds of products we must be able to form.

Suppose our filtration satisfies \bigcap_{n\in\mathbb{N}}G_n=\{e\} and gives us a well-defined notion of “infinite product” in a group G. Then we have elements g_1g_2g_3\dots that behave like the notation suggests they should. However, a group has inverses, and inverting a product g_1g_2\cdots g_n requires reversing the order to g_{n}^{-1}\cdots g_{2}^{-1}g_{1}^{-1}. So if t_1,t_2,t_3,... is a tail sequence for g_1,g_2,g_3,\dots, then g_{\infty}^{-1}=t_{1}^{-1} satisfies t_{1}^{-1}=t_{n+1}^{-1}g_{n}^{-1}\cdots g_{2}^{-1}g_{1}^{-1} for all n\in\mathbb{N}. This means the inverse g_{\infty}^{-1} of the infinite product g_{\infty}=g_1g_2g_3\cdots behaves exactly as an element written ...g_{3}^{-1}g_{2}^{-1}g_{1}^{-1} should behave. Hence we have infinite products g_1g_2g_3\cdots on the right of order type \omega=\{1,2,3,\dots\} and infinite products \cdots h_3h_2h_1 on the left with the order type of the negative integers \{\dots,-3,-2,-1\}. Multiplying infinite products \cdots h_3h_2h_1 and g_1g_2g_3\cdots gives a product \cdots h_3h_2h_1g_1g_2g_3\cdots with the order type of the integers \mathbb{Z}. From here, you can probably see how things begin to take off into linear-order-land. There’s nothing stopping us from creating infinite products of infinite products, and infinite products of infinite products of infinite products, and so on. We could have products that look like

(g_{1,1}g_{1,2}g_{1,3}...)(g_{2,1}g_{2,2}g_{2,3}...)(g_{3,1}g_{3,2}g_{3,3}...)(g_{4,1}g_{4,2}g_{4,3}...)(g_{5,1}g_{5,2}g_{5,3}...)...

of order type \omega\cdot\omega, i.e. \mathbb{N}\times\mathbb{N} in the dictionary ordering. Hence we think of the above product as being represented by a function w:\mathbb{N}\times\mathbb{N}\to G, w(n,m)=g_{n,m}\in G. We also need to make sure the products respect the filtration, and one way to ensure this is to demand that for each k\in\mathbb{N}, only finitely many values g_{n,m} can lie in G_{k}\backslash G_{k+1}.
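To make the “word as a function on \mathbb{N}\times\mathbb{N}” viewpoint concrete, here is a small sketch, again using my running toy example G=\mathbb{Z} with G_k=2^k\mathbb{Z} (an assumption for illustration, not the earring group itself):

```python
from collections import Counter

# A word of order type omega * omega as a function on N x N in the dictionary
# ordering, with letters in G = Z filtered by G_k = 2^k Z. The filtration
# condition: for each k, only finitely many letters lie in G_k \ G_{k+1},
# i.e. have 2-adic "level" exactly k.

def level(x):
    """Largest k with x in G_k = 2^k Z (the 2-adic valuation of x)."""
    k = 0
    while x != 0 and x % 2 == 0:
        x //= 2
        k += 1
    return k

def dict_le(p, q):
    """Dictionary order on N x N: compare first coordinates, then second."""
    return p <= q          # Python tuples already compare lexicographically

# w(n, m) = 2^(n + m): letters shrink along each row and down the rows.
w = {(n, m): 2 ** (n + m) for n in range(1, 30) for m in range(1, 30)}

# Count letters at each exact level k: for this w, level n + m = k occurs
# once per pair (n, m) with n + m = k, so each count is finite.
counts = Counter(level(v) for v in w.values())
print(counts[5])  # → 4: the pairs (1,4), (2,3), (3,2), (4,1)
```

A “legal” word of order type \omega\cdot\omega is then exactly such a function whose level-counts are all finite, mirroring the finitely-many-letters-in-G_k\backslash G_{k+1} condition in the text.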

We could even form a product like

...(...x_{a-1,b-1}x_{a-1,b}x_{a-1,b+1}...)(...x_{a,b-1}x_{a,b}x_{a,b+1}...)(...x_{a+1,b-1}x_{a+1,b}x_{a+1,b+1}...)...

for x_{n,m}\in G, n,m\in\mathbb{Z}. This latter product has the order type of \mathbb{Z}\times \mathbb{Z} in the dictionary ordering and is represented by a function v:\mathbb{Z}\times\mathbb{Z}\to G, v(n,m)=x_{n,m}. Again, to respect the filtration, we should insist that for each k, only finitely many x_{n,m} lie in G_{k}\backslash G_{k+1}.

Ultimately, if we use a countable version of Hausdorff’s Theorem, given any countable scattered linear order L (one not containing a copy of the dense order \mathbb{Q}), we can create products of order type L. Warning: this definition of “scattered” is related to, but not equivalent to, the notion of “scattered space” in topology. Can we know how a product in G of order type L will reduce in G, i.e. what its value will be? Sure, but you’ll need to know all of the relations in G and keep track of values after each successive infinite product. Regardless of relations, the main takeaway is that it is worth considering iterated infinite products, and viewing words as filtration-respecting functions w:L\to G from a countable linear order into G.

 
