2.10 A spacetime orbifold in two dimensions, 3

A First Course in String Theory

.

(b) Draw a spacetime diagram, indicate the $\displaystyle{x^+}$ and $\displaystyle{x^-}$ axes, and sketch the family of curves

$\displaystyle{x^+ x^- = a^2}$,

where $\displaystyle{a > 0}$ is a real constant that labels the various curves.

~~~

— Me@2021-08-31 08:42:40 PM

.

.

2.10 A spacetime orbifold in two dimensions, 2

A First Course in String Theory

.

(a) Use the result of Problem 2.2, part (a), to recast (1) as

$\displaystyle{(x^+, x^-) \sim \left( e^{-\lambda} x^+, e^{\lambda} x^- \right)}$, where $\displaystyle{e^\lambda \equiv \sqrt{\frac{1+\beta}{1-\beta}}}$.

What is the range of $\lambda$? What is the orbifold fixed point? Assume now that $\beta > 0$, and thus $\lambda > 0$.

~~~

Range of $\displaystyle{\lambda}$:

\displaystyle{ \begin{aligned} 0 &< \beta < 1 \\ 1 &< \frac{1 + \beta}{1 - \beta} < \infty \\ 0 &< \ln \frac{1 + \beta}{1 - \beta} < \infty \\ 0 &< \frac{1}{2} \ln \frac{1 + \beta}{1 - \beta} < \infty \\ 0 &< \lambda < \infty \\ \end{aligned}}
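A quick numeric sanity check (my own addition, not part of the solution): $\displaystyle{\lambda = \frac{1}{2} \ln \frac{1+\beta}{1-\beta}}$ is positive and strictly increasing on $\displaystyle{0 < \beta < 1}$, sweeping the whole range $(0, \infty)$.

```python
import math

def rapidity(beta):
    # lambda = (1/2) * ln((1+beta)/(1-beta)), defined for -1 < beta < 1
    return 0.5 * math.log((1 + beta) / (1 - beta))

# positive and strictly increasing on 0 < beta < 1,
# approaching 0 as beta -> 0+ and +infinity as beta -> 1-
betas = [0.001, 0.1, 0.5, 0.9, 0.999999]
lams = [rapidity(b) for b in betas]
assert all(l > 0 for l in lams)
assert lams == sorted(lams)      # strictly increasing
assert rapidity(0.001) < 0.01    # near 0 for small beta
assert rapidity(0.999999) > 7    # grows without bound
```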

.

Fixed points:

\displaystyle{ \begin{aligned} \begin{bmatrix} (x^+)' \\ (x^-)' \end{bmatrix} &= \begin{bmatrix} e^{- \lambda} x^+ \\ e^{\lambda} x^- \\ \end{bmatrix} \\ \end{aligned}}

\displaystyle{ \begin{aligned} (x^+, x^-) &= (0, 0) \\ \end{aligned}}

— Me@2021-05-16 06:31:12 PM

.

.

2.10 A spacetime orbifold in two dimensions

A First Course in String Theory

.

Consider a two-dimensional world with coordinates $\displaystyle{x^0}$ and $\displaystyle{x^1}$.

A boost with velocity parameter $\displaystyle{\beta}$ along the $\displaystyle{x^1}$ axis is described by the first two equations in (2.36). We want to understand the two-dimensional space that emerges if we identify

$\displaystyle{(x^0, x^1) \sim ({x'}^0, {x'}^1)}$.

We are identifying spacetime points whose coordinates are related by a boost!

(a) Use the result of Problem 2.2, part (a), to recast (1) as

$\displaystyle{(x^+, x^-) \sim \left( e^{-\lambda} x^+, e^{\lambda} x^- \right)}$, where $\displaystyle{e^\lambda \equiv \sqrt{\frac{1+\beta}{1-\beta}}}$.

~~~

\displaystyle{ \begin{aligned} (x')^0 &= \gamma (x^0 - \beta x^1) \\ (x')^1 &= \gamma (- \beta x^0 + x^1) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \begin{bmatrix} (x^+)' \\ (x^-)' \end{bmatrix} &= \begin{bmatrix} \gamma (1-\beta) & 0 \\ 0 & \gamma (1+\beta) \\ \end{bmatrix} \begin{bmatrix} x^+ \\ x^- \end{bmatrix} \\ &= \begin{bmatrix} \frac{1}{\sqrt{1 - \beta^2}} (1-\beta) x^+ \\ \frac{1}{\sqrt{1 - \beta^2}} (1+\beta) x^- \\ \end{bmatrix} \\ &= \begin{bmatrix} e^{- \lambda} x^+ \\ e^{\lambda} x^- \\ \end{bmatrix} \\ \end{aligned}}
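A small Python sketch (my own check; the helper names are made up) confirming that conjugating the standard boost by the light-cone change of basis gives the diagonal matrix $\displaystyle{\mathrm{diag}(e^{-\lambda}, e^{\lambda})}$:

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def boost_lightcone(beta):
    """Boost matrix acting on light-cone coordinates (x+, x-).

    Built by conjugating the standard (x0, x1) boost with the basis
    change x+- = (x0 +- x1)/sqrt(2); expected to be diag(e^-lam, e^lam).
    """
    g = 1.0 / math.sqrt(1 - beta * beta)   # gamma
    L = [[g, -beta * g], [-beta * g, g]]   # boost on (x0, x1)
    s = 1 / math.sqrt(2)
    U = [[s, s], [s, -s]]                  # (x0, x1) -> (x+, x-); U is its own inverse
    return matmul(matmul(U, L), U)

beta = 0.6
lam = 0.5 * math.log((1 + beta) / (1 - beta))
M = boost_lightcone(beta)
assert abs(M[0][0] - math.exp(-lam)) < 1e-12   # (x+)' = e^{-lam} x+
assert abs(M[1][1] - math.exp(lam)) < 1e-12    # (x-)' = e^{+lam} x-
assert abs(M[0][1]) < 1e-12 and abs(M[1][0]) < 1e-12  # off-diagonal vanishes
```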

— Me@2021-05-04 10:48:28 PM

.

.

2.9 Lightlike compactification, d, 2

A First Course in String Theory

.

Represent your answer to part (c) in a spacetime diagram. Show two points related by the identification (2) and the space and time axes for the Lorentz frame $S'$ in which the compactification is standard.

~~~

[guess]

In case you would like to have a fundamental domain:

[guess]

— Me@2021-04-21 10:35:13 PM

.

.

2.9 Lightlike compactification, d

A First Course in String Theory

.

Represent your answer to part (c) in a spacetime diagram. Show two points related by the identification (2) and the space and time axes for the Lorentz frame $S'$ in which the compactification is standard.

~~~

Note 1:

The identifications $\displaystyle{ x \sim x + 0 }$ and $\displaystyle{ x \sim x + \infty }$ have the same meaning, which is that $x$ has no identification at all. In other words, there is NO non-zero real number $r$ such that

$\displaystyle{ x \sim x + r }$

.

Note 2:

\displaystyle{ \begin{aligned} \begin{bmatrix} ct \\ x \\ \end{bmatrix} &\sim \begin{bmatrix} ct - 2 \pi R \\ x + 2 \pi \sqrt{R^2 + R_S^2} \\ \end{bmatrix} \end{aligned} }

This identification must be done to both the space and time coordinates. In other words, it cannot be done to only one of $ct$ and $x$.

— Me@2021-04-13 12:02:13 PM

.

.

2.9 Lightlike compactification, c

A First Course in String Theory

.

… Find the velocity parameter of $\displaystyle{S'}$ with respect to $\displaystyle{S}$ and the compactification radius in the Lorentz frame $\displaystyle{S'}$.

~~~

\displaystyle{ \begin{aligned} \begin{bmatrix} c t' \\ x' \end{bmatrix} &= \begin{bmatrix} \gamma & -\beta \gamma \\ -\beta \gamma & \gamma \\ \end{bmatrix} \begin{bmatrix} c\,t \\ x \end{bmatrix} \\ \end{aligned} }

\displaystyle{ \begin{aligned} \begin{bmatrix} ct \\ x \\ \end{bmatrix} &\sim \begin{bmatrix} ct - 2 \pi R \\ x + 2 \pi \sqrt{R^2 + R_S^2} \\ \end{bmatrix} \end{aligned} }

.

\displaystyle{ \begin{aligned} \begin{bmatrix} c t' \\ x' \end{bmatrix} &\sim \begin{bmatrix} ct' \\ x' \\ \end{bmatrix} + \begin{bmatrix} \gamma & -\beta \gamma \\ -\beta \gamma & \gamma \\ \end{bmatrix} \begin{bmatrix} - 2 \pi R \\ 2 \pi \sqrt{R^2 + R_S^2} \\ \end{bmatrix} \\ \end{aligned} }

\displaystyle{ \begin{aligned} R + \beta \sqrt{R^2 + R_S^2} &= 0 \\ \beta^2 &= \frac{R^2}{R^2 + R_S^2} \\ \gamma &= \sqrt{\frac{R^2 + R_S^2}{R_S^2}} \\ \end{aligned} }

\displaystyle{ \begin{aligned} \begin{bmatrix} c t' \\ x' \end{bmatrix} &\sim \begin{bmatrix} ct' \\ x' \\ \end{bmatrix} + \begin{bmatrix} 0 \\ R_S \\ \end{bmatrix} 2 \pi \\ \end{aligned} }
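A numeric spot-check (my own; sample values of $R$ and $R_S$ are arbitrary). The first equation $\displaystyle{R + \beta \sqrt{R^2 + R_S^2} = 0}$ forces the negative root $\displaystyle{\beta = -R/\sqrt{R^2 + R_S^2}}$; boosting the identification vector with this $\beta$ should leave a purely spatial shift of $2 \pi R_S$.

```python
import math

# Boost the identification vector (-2*pi*R, 2*pi*sqrt(R^2 + Rs^2))
# with beta = -R/sqrt(R^2 + Rs^2): the time shift should vanish and
# the space shift should equal 2*pi*Rs.
R, Rs = 3.0, 4.0
beta = -R / math.sqrt(R * R + Rs * Rs)     # solves R + beta*sqrt(R^2+Rs^2) = 0
g = 1.0 / math.sqrt(1 - beta * beta)       # gamma

dt = -2 * math.pi * R
dx = 2 * math.pi * math.sqrt(R * R + Rs * Rs)
dt_p = g * (dt - beta * dx)                # boosted time shift
dx_p = g * (-beta * dt + dx)               # boosted space shift

assert abs(dt_p) < 1e-9                    # time component of the shift vanishes
assert abs(dx_p - 2 * math.pi * Rs) < 1e-9 # standard compactification radius Rs
```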

— Me@2021-04-07 07:02:14 AM

.

.

2.9 Lightlike compactification, b

A First Course in String Theory

.

Consider coordinates $\displaystyle{(ct', x')}$ related to $\displaystyle{(ct, x)}$ by a boost with velocity parameter $\displaystyle{\beta}$. Express the identifications in terms of the primed coordinates.

~~~

\displaystyle{ \begin{aligned} \begin{bmatrix} c t' \\ x' \end{bmatrix} &= \begin{bmatrix} \gamma & -\beta \gamma \\ -\beta \gamma & \gamma \\ \end{bmatrix} \begin{bmatrix} c\,t \\ x \end{bmatrix} \\ \end{aligned}}

\displaystyle{ \begin{aligned} \begin{bmatrix} ct \\ x \\ \end{bmatrix} &\sim \begin{bmatrix} ct - 2 \pi R \\ x + 2 \pi R \\ \end{bmatrix} \end{aligned}}

\displaystyle{ \begin{aligned} \begin{bmatrix} c t' \\ x' \end{bmatrix} &\sim \begin{bmatrix} c t' \\ x' \end{bmatrix} + \begin{bmatrix} - \gamma - \beta \gamma \\ \gamma + \beta \gamma \\ \end{bmatrix} 2 \pi R \\ \end{aligned}}

— Me@2021-03-30 08:38:16 PM

.

.

Lightlike compactification

A First Course in String Theory

.

2.9 Lightlike compactification

(a) Rewrite this identification using light-cone coordinates.

\begin{aligned} \begin{bmatrix} x \\ ct \end{bmatrix} &\sim \begin{bmatrix} x \\ ct \end{bmatrix} + 2 \pi \begin{bmatrix} R \\ -R \end{bmatrix} \end{aligned}

~~~

\begin{aligned} \begin{bmatrix} x^+ \\ x^- \end{bmatrix} &= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix} \begin{bmatrix} x^0 \\ x^1 \end{bmatrix} \\ \end{aligned}

\begin{aligned} \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix} \begin{bmatrix} x^0 \\ x^1 \\ \end{bmatrix} &\sim \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix} \begin{bmatrix} x^0 \\ x^1 \\ \end{bmatrix} + \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix} \begin{bmatrix} - 2 \pi R \\ 2 \pi R \\ \end{bmatrix} \\ \end{aligned}

\begin{aligned} \begin{bmatrix} x^+ \\ x^- \\ \end{bmatrix} &\sim \begin{bmatrix} x^+ \\ x^- \\ \end{bmatrix} + \frac{1}{\sqrt{2}} \begin{bmatrix} 0 \\ - 4 \pi R \\ \end{bmatrix} \\ \end{aligned}
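A quick check of the arithmetic (my own; the sample radius is arbitrary): the shift $\displaystyle{(\Delta x^0, \Delta x^1) = (-2 \pi R, 2 \pi R)}$ has vanishing $x^+$ component in light-cone coordinates.

```python
import math

# Apply x+- = (x0 +- x1)/sqrt(2) to the identification shift
# (x0, x1) -> (x0 - 2*pi*R, x1 + 2*pi*R).
R = 1.7                                  # arbitrary sample radius
s = 1 / math.sqrt(2)
d0, d1 = -2 * math.pi * R, 2 * math.pi * R
dplus = s * (d0 + d1)                    # shift of x+
dminus = s * (d0 - d1)                   # shift of x-
assert abs(dplus) < 1e-12                # x+ is untouched
assert abs(dminus - (-4 * math.pi * R) / math.sqrt(2)) < 1e-9
```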

— Me@2021-03-22 06:06:10 PM

.

.

Problem 2.8

A First Course in String Theory

.

2.8 Spacetime diagrams and Lorentz transformations

Show that the $\displaystyle{{x'}^0}$ and $\displaystyle{{x'}^1}$ axes … appear in the original spacetime diagram as oblique axes.

~~~

The Lorentz transform:

\displaystyle{ \begin{aligned} (x')^0 &= \gamma (x^0 - \beta x^1) \\ (x')^1 &= \gamma (- \beta x^0 + x^1) \\ \end{aligned}}

The inverse Lorentz transform:

\displaystyle{ \begin{aligned} x^0 &= \gamma ((x')^0 + \beta (x')^1) \\ x^1 &= \gamma (\beta (x')^0 + (x')^1) \\ \end{aligned}}

.

The $\displaystyle{(x')^0}$-direction unit vector is $\displaystyle{((x')^0, (x')^1) = (1, 0)}$.

When $\displaystyle{((x')^0, (x')^1) = (1,0)}$,

$\displaystyle{(x^0, x^1) = (\gamma, \gamma \beta)}$

.

When $\displaystyle{\beta > 0}$,

\displaystyle{ \begin{aligned} \tan \phi &= \frac{\gamma \beta}{\gamma} \\ &= \beta \\ \end{aligned}}

where $\displaystyle{\phi}$ is the angle between $\displaystyle{x^0}$-axis and $\displaystyle{(x')^0}$-axis.
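A numeric check (my own; the sample $\beta$ is arbitrary): mapping the unit vector along the $\displaystyle{(x')^0}$-axis back with the inverse boost gives the direction $\displaystyle{(\gamma, \gamma \beta)}$, so $\displaystyle{\tan \phi = \beta}$.

```python
import math

# Inverse Lorentz transform of ((x')^0, (x')^1) = (1, 0):
# x0 = gamma*((x')^0 + beta*(x')^1), x1 = gamma*(beta*(x')^0 + (x')^1)
beta = 0.5
g = 1.0 / math.sqrt(1 - beta * beta)
x0 = g * (1 + beta * 0)
x1 = g * (beta * 1 + 0)
assert abs(x0 - g) < 1e-12 and abs(x1 - g * beta) < 1e-12
assert abs(math.atan2(x1, x0) - math.atan(beta)) < 1e-12   # phi = arctan(beta)
```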

— Me@2021-03-17 03:42:23 PM

.

.

Problem 2.7

A First Course in String Theory

.

2.7 A more general construction for cones?

Consider the $\displaystyle{(x, y)}$ plane and the complex coordinate $\displaystyle{z = x + iy}$. We have seen that the identification $\displaystyle{z \sim e^{\frac{2 \pi i}{N}} z}$, with $\displaystyle{N}$ an integer greater than two, can be used to construct a cone.

Examine now the identification

$\displaystyle{z \sim e^{2 \pi i \frac{M}{N}} z, ~~~ N > M \ge 2,}$

where $\displaystyle{M}$ and $\displaystyle{N}$ are relatively prime integers (their greatest common divisor is one).

Determine a fundamental domain for identification.

Given two relatively prime integers $\displaystyle{a}$ and $\displaystyle{b}$, there exist integers $\displaystyle{m}$ and $\displaystyle{n}$ such that $\displaystyle{m a + n b = 1}$.

~~~

[guess]

Since $\displaystyle{M}$ and $\displaystyle{N}$ are relatively prime, there exist integers $\displaystyle{m}$ and $\displaystyle{n}$ such that $\displaystyle{m M + n N = 1}$.

So

$\displaystyle{\frac{M}{N} m = \frac{1 - nN}{N}}$.

Therefore,

$\displaystyle{\left[e^{2 \pi i \frac{M}{N}}\right]^m = e^{2 \pi i \frac{1 - nN}{N}} = e^{\frac{2 \pi i}{N}}}$, since $\displaystyle{e^{-2 \pi i n} = 1}$ for any integer $\displaystyle{n}$.

As a result, a fundamental domain is provided by the points z that satisfy $\displaystyle{0 \le \arg(z) < 2 \pi \frac{1}{N}}$.

[guess]
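The group-theoretic content of the guess above can be checked numerically (my own addition; the sample pair $(M, N)$ is arbitrary): when $\gcd(M, N) = 1$, the powers of $\displaystyle{e^{2 \pi i M/N}}$ reach every multiple of $\displaystyle{2 \pi / N}$, so the identification generates the full $\mathbb{Z}_N$ rotation group.

```python
import math

# The k-th power of the identification rotates by 2*pi*k*M/N, i.e. by
# (k*M mod N) steps of 2*pi/N. With gcd(M, N) = 1 every step is reached.
M, N = 3, 7                                   # sample relatively prime pair
assert math.gcd(M, N) == 1
angles = {(k * M) % N for k in range(N)}      # reachable multiples of 2*pi/N
assert angles == set(range(N))                # all N distinct rotations occur
```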

— Me@2021-03-09 04:58:02 PM

.

.

Problem 2.6c

A First Course in String Theory

.

2.5 Constructing $\displaystyle{T^2/\mathbb{Z}_3}$ orbifold

(c) Determine the three fixed points of the $\displaystyle{\mathbb{Z}_3}$ action on the torus. Show that the orbifold $\displaystyle{T^2/\mathbb{Z}_3}$ is topologically a two-dimensional sphere, naturally presented as a triangular pillowcase with seamed edges and corners at the fixed points.

~~~

[guess]

To find the fixed points, we consider the cases when

$\displaystyle{z + m + n e^{i \pi/3} = e^{2 \pi i/3} z}$,

where $\displaystyle{m,n \in \mathbb{Z}}$.

$\displaystyle{(e^{2 \pi i/3} - 1) z = m + n e^{i \pi/3}}$

$\displaystyle{z = \frac{m + n e^{i \pi/3}}{e^{2 \pi i/3} - 1}}$

.

When $\displaystyle{m, n = 0}$,

$\displaystyle{z = 0}$

.

When $\displaystyle{m = 0; n = 1}$,

$\displaystyle{z = \frac{e^{i \pi/3}}{e^{2 \pi i/3} - 1} = \frac{-i}{\sqrt{3}}}$

When $\displaystyle{m = 1; n = 0}$,

$\displaystyle{z = \frac{1}{e^{2 \pi i/3} - 1} = \frac{1}{\sqrt{3}} e^{- 5 i \pi/6}}$

.

When $\displaystyle{m = -2; n = 1}$,

$\displaystyle{z = \frac{-2 + 1 e^{i \pi/3}}{e^{2 \pi i/3} - 1} = 1}$

When $\displaystyle{m = -1; n = -1}$,

$\displaystyle{z = \frac{-1 - 1 e^{i \pi/3}}{e^{2 \pi i/3} - 1} = e^{i \pi/3}}$
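The fixed-point values above can be verified numerically (my own spot-check of the formula $\displaystyle{z = \frac{m + n e^{i \pi/3}}{e^{2 \pi i/3} - 1}}$):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # the Z_3 rotation
u = cmath.exp(1j * cmath.pi / 3)   # lattice vector e^{i pi/3}

def fixed_point(m, n):
    return (m + n * u) / (w - 1)

assert abs(fixed_point(0, 0)) < 1e-12
assert abs(fixed_point(0, 1) - (-1j / 3 ** 0.5)) < 1e-12
assert abs(fixed_point(1, 0) - cmath.exp(-5j * cmath.pi / 6) / 3 ** 0.5) < 1e-12
assert abs(fixed_point(-2, 1) - 1) < 1e-12
assert abs(fixed_point(-1, -1) - u) < 1e-12

# each z really is fixed up to a lattice shift: w*z = z + m + n*u
for m, n in [(0, 0), (0, 1), (1, 0), (-2, 1), (-1, -1)]:
    z = fixed_point(m, n)
    assert abs(w * z - z - (m + n * u)) < 1e-12
```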

.

In the fundamental domain, the 3 fixed points are:

$\displaystyle{z = 0}$

when $\displaystyle{z = R(z)}$;

$\displaystyle{z = 1}$

when $\displaystyle{T_2 \circ T_1^{-1} \circ T_1^{-1} (z) = R(z)}$;

$\displaystyle{z = e^{i \pi/3}}$

when $\displaystyle{T_2^{-1} \circ T_1^{-1} (z) = R(z)}$.

.

Duplicate the fundamental triangle to create a fundamental parallelogram.

If we label some of the edges as $B$ instead of $A$, the fundamental parallelogram will have a sphere topology $\displaystyle{ ABB^{-1}A^{-1} }$.

However, it is not exactly the same as a sphere topology, because a sphere topology would not have the $A=B$ identification.

[guess]

— Me@2021-02-23 03:44:57 PM

.

.

Problem 2.6

A First Course in String Theory

.

2.5 Constructing $\displaystyle{T^2/\mathbb{Z}_3}$ orbifold

(a) A fundamental domain, with its boundary, is the parallelogram with corners at $\displaystyle{z = 0, 1}$ and $\displaystyle{e^{i \pi/3}}$. Where is the fourth corner? Make a sketch and indicate the identifications on the boundary. The resulting space is an oblique torus.

(b) Consider now an additional $\displaystyle{\mathbb{Z}_3}$ identification

$\displaystyle{z \sim R(z) = e^{2 \pi i/3} z}$

To understand how this identification acts on the oblique torus, draw the short diagonal that divides the torus into two equilateral triangles. Describe carefully the $\displaystyle{{Z}_3}$ action on each of the two triangles (recall that the action of $\displaystyle{R}$ can be followed by arbitrary action with $\displaystyle{T_1}$, $\displaystyle{T_2}$, and their inverses).

[guess]

(a)

$\displaystyle{z = 1 + e^{\frac{i \pi}{3}}}$

(b)

[guess]

— Me@2021-02-11 06:03:36 PM

.

.

Fundamental polygon

A First Course in String Theory

.

2.5 Constructing simple orbifolds

(b) Consider a torus $\displaystyle{T^2}$, presented as the $\displaystyle{(x,y)}$ plane with the identifications $\displaystyle{x \sim x + 2}$ and $\displaystyle{y \sim y+2}$. Choose $\displaystyle{-1 < x, y \le 1}$ as the fundamental domain. The orbifold $\displaystyle{T^2/\mathbb{Z}_2}$ is defined by imposing the $\displaystyle{\mathbb{Z}_2}$ identification $\displaystyle{(x,y) \sim (-x,-y)}$.

Prove that there are four points on the torus that are left fixed by the $\displaystyle{\mathbb{Z}_2}$ transformation. Show that the orbifold $\displaystyle{T^2/\mathbb{Z}_2}$ is topologically a two-dimensional sphere, naturally presented as a square pillowcase with seamed edges.

~~~

To find the fixed points, we consider the cases when $\displaystyle{-x = x + 2m}$ and $\displaystyle{-y = y + 2n}$, where $\displaystyle{m,n \in \mathbb{Z}}$. Since the length of the interval is only 2, we can consider only the cases when $\displaystyle{m,n = 0, 1}$. Then the only solutions are

$\displaystyle{(0,0)}$
$\displaystyle{(0,1)}$
$\displaystyle{(1,0)}$
$\displaystyle{(1,1)}$
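A brute-force confirmation (my own; the grid spacing is an arbitrary choice) that these are the only fixed points in the fundamental domain:

```python
# Z_2 fixed points on the torus x ~ x + 2, y ~ y + 2 with (x, y) ~ (-x, -y).
grid = [i / 4 for i in range(-3, 5)]           # sample points in (-1, 1]
fixed = [(x, y) for x in grid for y in grid
         # (x, y) is fixed iff -x = x + 2m and -y = y + 2n for integers m, n,
         # i.e. iff 2x and 2y are even integers
         if (2 * x) % 2 == 0 and (2 * y) % 2 == 0]
assert sorted(fixed) == [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```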

— Me@2021-01-17 04:14:44 PM

.

— Wikipedia on Surface (topology)

.

The formula for this topology is $\displaystyle{ABB^{-1}A^{-1}ABB^{-1}A^{-1} = ABB^{-1}A^{-1} }$, which is a sphere.

— Me@2021-01-29 06:10:46 PM

.

.

Problem 2.5b1

A First Course in String Theory

.

2.5 Constructing simple orbifolds

~~~

The wikipedia page “Fundamental polygon”, specifically the subsection entitled “group generators”, has a serious mathematical error. You cannot derive a presentation for the fundamental group from the fundamental polygon using the side labels in the manner described on that page (and which you have copied), unless all of the vertices of the polygon are identified to the same point. In the picture you provided and which can be seen on that page, one opposite pair of vertices of the square is identified to one point on the sphere, the other opposite pair of vertices is identified to a different point on the sphere.

There is still a way to derive a presentation for the fundamental group from a fundamental polygon, but it is not the way described on the wikipedia page. In the sphere example of your question, you have to ignore one of the two letters $\displaystyle{A}$, $\displaystyle{B}$, keeping only the other letter. For example, ignoring $\displaystyle{A}$ and keeping $\displaystyle{B}$, you get a presentation $\displaystyle{ \langle B \mid B B^{-1} = 1 \rangle }$, which is a presentation of the trivial group. The way you tell which to ignore and which to keep is by taking the quotient of the boundary of the polygon which is a graph with vertices and edges, choosing a maximal tree in that graph, ignoring all edge labels in the maximal tree, and keeping all edge labels not in the maximal tree.

On that wikipedia page, the Klein bottle and the torus examples are correct and you do not have to ignore any edge labels: all vertices are identified to a single point and the maximal tree is just a point. The sphere and the projective plane examples are incorrect: the four vertices are identified to two separate points, the maximal tree has one edge, and you have to ignore one edge label. The example of a hexagon fundamental domain for the torus is also incorrect: the six vertices are identified to two separate points, the maximal tree has one edge, and you have to ignore one edge label.

— edited Jul 23 ’14 at 17:17

— answered Jul 23 ’14 at 17:11

— Lee Mosher

.

yes, i thought that the fundamental polygon is this quotient space. – user159356 Jul 23 ’14 at 17:28

That’s backward: in your example, the sphere is the quotient space of the fundamental polygon, not the other way around. – Lee Mosher Jul 23 ’14 at 17:30

— Mathematics Stack Exchange

.

.

2021.01.18 Monday ACHK

Problem 2.5a

A First Course in String Theory

.

2.5 Constructing simple orbifolds

(a) Consider a circle $\displaystyle{S^1}$, presented as the real line with the identification $\displaystyle{x \sim x + 2}$. Choose $\displaystyle{-1 < x \le 1}$ as the fundamental domain. The circle is the space $\displaystyle{-1 < x \le 1}$ with points $\displaystyle{x = \pm 1}$ identified. The orbifold $\displaystyle{S^1/\mathbb{Z}_2}$ is defined by imposing the (so-called) $\displaystyle{\mathbb{Z}_2}$ identification $\displaystyle{x \sim -x}$. Describe the action of this identification on the circle. Show that there are two points on the circle that are left fixed by the $\displaystyle{\mathbb{Z}_2}$ action. Find a fundamental domain for the two identifications. Describe the orbifold $\displaystyle{S^1/\mathbb{Z}_2}$ in simple terms.

~~~

Place the points $\displaystyle{x=0}$ and $\displaystyle{x=1}$ so that they form a horizontal diameter of the circle.

Then the action is a reflection of the lower semi-circle through the horizontal diameter to the upper semi-circle.

Point $\displaystyle{x=0}$ and point $\displaystyle{x=1}$ are the two fixed points.

A possible fundamental domain is $\displaystyle{0 \le x \le 1}$.

If a variable point $\displaystyle{x}$ moves from 0 to 1 and then keeps going, that point will actually go back and forth between 0 and 1.

— Me@2020-12-31 04:43:07 PM

.

.

Problem 2.4

A First Course in String Theory

.

2.4 Lorentz transformations as matrices

A matrix L that satisfies (2.46) is a Lorentz transformation. Show the following.

(b) If $\displaystyle{L}$ is a Lorentz transformation so is the inverse matrix $\displaystyle{L^{-1}}$.

(c) If $\displaystyle{L}$ is a Lorentz transformation so is the transpose matrix $\displaystyle{L^{T}}$.

~~~

(b)

\displaystyle{ \begin{aligned} (\mathbf{A}^\mathrm{T})^{-1} &= (\mathbf{A}^{-1})^\mathrm{T} \\ L^T \eta L &= \eta \\ \eta &= [L^T]^{-1} \eta L^{-1} \\ [L^T]^{-1} \eta L^{-1} &= \eta \\ [L^{-1}]^T \eta L^{-1} &= \eta \\ \end{aligned}}

.

(c)

\displaystyle{ \begin{aligned} L^T \eta L &= \eta \\ (L^T \eta L)^{-1} &= (\eta)^{-1} \\ L^{-1} \eta^{-1} (L^T)^{-1} &= \eta \\ L^{-1} \eta (L^T)^{-1} &= \eta \\ \eta &= L \eta L^T \\ L \eta L^T &= \eta \\ \end{aligned}}
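A numeric spot-check (my own, in a 1+1-dimensional block with $\displaystyle{\eta = \mathrm{diag}(-1, 1)}$; the helper names are made up): a sample boost, its inverse, and its transpose all satisfy the Lorentz condition.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_lorentz(M, eta):
    """Check M^T eta M == eta (the 2x2 version of (2.46))."""
    MT = [[M[j][i] for j in range(2)] for i in range(2)]
    P = matmul(matmul(MT, eta), M)
    return all(abs(P[i][j] - eta[i][j]) < 1e-9
               for i in range(2) for j in range(2))

beta = 0.8
g = 1.0 / math.sqrt(1 - beta * beta)
eta = [[-1.0, 0.0], [0.0, 1.0]]
L = [[g, -beta * g], [-beta * g, g]]          # a sample boost
Linv = [[g, beta * g], [beta * g, g]]         # same boost with -beta
LT = [[L[j][i] for j in range(2)] for i in range(2)]

assert is_lorentz(L, eta)
assert is_lorentz(Linv, eta)                  # (b): the inverse is Lorentz
assert is_lorentz(LT, eta)                    # (c): the transpose is Lorentz
```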

— Me@2020-12-21 04:24:33 PM

.

.

Problem 2.3b5

A First Course in String Theory

.

2.3 Lorentz transformations, derivatives, and quantum operators.

(b) Show that the objects $\displaystyle{\frac{\partial}{\partial x^\mu}}$ transform under Lorentz transformations in the same way as the $\displaystyle{a_\mu}$ considered in (a) do. Thus, partial derivatives with respect to conventional upper-index coordinates $\displaystyle{x^\mu}$ behave as a four-vector with lower indices – as reflected by writing it as $\displaystyle{\partial_\mu}$.

~~~

Denoting $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{L^{~\nu}_{\mu}}$ is misleading, because that presupposes that $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ is directly related to the matrix $\displaystyle{L}$.

To avoid this bug, instead, we denote $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{M ^\nu_{~\mu}}$. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

Using the Kronecker Delta and Einstein summation notation, we have

\displaystyle{ \begin{aligned} L^\mu_{~\nu} M^{\beta}_{~\mu} &= M^{\beta}_{~\mu} L^\mu_{~\nu} \\ &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

So

\displaystyle{ \begin{aligned} \sum_{\mu=0}^{3} L^\mu_{~\nu} M^{\beta}_{~\mu} &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

\displaystyle{ \begin{aligned} M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\ \end{aligned}}

In other words,

\displaystyle{ \begin{aligned} \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\ \end{aligned}}
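The identity can be spot-checked numerically (my own sketch, in a 1+1-dimensional block with $\displaystyle{\eta = \mathrm{diag}(-1, 1)}$; the sample boost is arbitrary):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

beta = 0.6
g = 1.0 / math.sqrt(1 - beta * beta)
L = [[g, -beta * g], [-beta * g, g]]          # sample boost L^mu_nu
eta = [[-1.0, 0.0], [0.0, 1.0]]               # eta equals its own inverse
LT = [[L[j][i] for j in range(2)] for i in range(2)]

# In matrix language, M^beta_mu = eta_{mu rho} L^rho_sigma eta^{beta sigma}
# is eta L^T eta (the transpose appears because the summed index of L is
# its column index here).
M = matmul(matmul(eta, LT), eta)
Linv = [[g, beta * g], [beta * g, g]]         # boost with -beta, i.e. L^{-1}
assert all(abs(M[i][j] - Linv[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```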

— Me@2020-11-23 04:27:13 PM

.

One defines (as a matter of notation),

${\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}$

and may in this notation write

${\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.}$

Now for a subtlety. The implied summation on the right hand side of

${\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }}$

is running over a row index of the matrix representing $\displaystyle{\Lambda^{-1}}$. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of $\displaystyle{\Lambda}$ acting on the column vector $\displaystyle{A_\mu}$. That is, in pure matrix notation,

${\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.}$

— Wikipedia on Lorentz transformation

.

So

\displaystyle{ \begin{aligned} M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\ \end{aligned}}

In other words,

\displaystyle{ \begin{aligned} \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\ \end{aligned}}

.

Denote $\displaystyle{[L^{-1}]^{\beta}_{~\mu}}$ as

\displaystyle{ \begin{aligned} N^{~\beta}_{\mu} \\ \end{aligned}}

In other words,

\displaystyle{ \begin{aligned} N^{~\beta}_{\mu} &= M^{\beta}_{~\mu} \\ [N^T] &= [M] \\ \end{aligned}}

.

The Lorentz transformation:

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')_\mu &= \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \\ \end{aligned}}

.

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')_\mu &= N^{~\nu}_{\mu} x_\nu \\ \end{aligned}}

.

\displaystyle{ \begin{aligned} x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\ (x')_\mu &= M^{\nu}_{~\mu} x_\nu \\ \end{aligned}}

.

\displaystyle{ \begin{aligned} x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\ (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\ \end{aligned}}

.

\displaystyle{ \begin{aligned} \frac{\partial}{\partial (x')^\mu} &= \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu} \\ &= \frac{\partial x^0}{\partial (x')^\mu} \frac{\partial}{\partial x^0} + \frac{\partial x^1}{\partial (x')^\mu} \frac{\partial}{\partial x^1} + \frac{\partial x^2}{\partial (x')^\mu} \frac{\partial}{\partial x^2} + \frac{\partial x^3}{\partial (x')^\mu} \frac{\partial}{\partial x^3} \\ \end{aligned}}

Now we consider $\displaystyle{f}$ as a function of $\displaystyle{x^{\mu}}$‘s:

$\displaystyle{f(x^0, x^1, x^2, x^3)}$

Since $\displaystyle{x^{\mu}}$‘s and $\displaystyle{(x')^{\mu}}$‘s are related by Lorentz transform, $\displaystyle{f}$ is also a function of $\displaystyle{(x')^{\mu}}$‘s, although indirectly.

$\displaystyle{f(x^0((x')^0, (x')^1, (x')^2, (x')^3), x^1((x')^0, ...), x^2((x')^0, ...), x^3((x')^0, ...))}$

For notational simplicity, we write $\displaystyle{f}$ as

$\displaystyle{f(x^\alpha((x')^\beta))}$

Since $\displaystyle{f}$ is a function of $\displaystyle{(x')^{\mu}}$‘s, we can differentiate it with respect to $\displaystyle{(x')^{\mu}}$‘s.

\displaystyle{ \begin{aligned} \frac{\partial}{\partial (x')^\mu} f(x^\alpha((x')^\beta))) &= \sum_{\nu = 0}^3 \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu} f(x^\alpha) \\ \end{aligned}}

Since

\displaystyle{ \begin{aligned} x^\nu &= [L^{-1}]^\nu_{~\beta} (x')^\beta \\ \end{aligned}},

\displaystyle{ \begin{aligned} \frac{\partial f}{\partial (x')^\mu} &= \sum_{\nu = 0}^3 \frac{\partial}{\partial (x')^\mu} \left[ \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} (x')^\beta \right] \frac{\partial f}{\partial x^\nu} \\ &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \frac{\partial (x')^\beta}{\partial (x')^\mu} \frac{\partial f}{\partial x^\nu} \\ &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \delta^\beta_\mu \frac{\partial f}{\partial x^\nu} \\ &= \sum_{\nu = 0}^3 [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\ &= [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\ \end{aligned}}

Therefore,

\displaystyle{ \begin{aligned} \frac{\partial}{\partial (x')^\mu} &= [L^{-1}]^\nu_{~\mu} \frac{\partial}{\partial x^\nu} \\ \end{aligned}}

It is the same as the Lorentz transform for covariant vectors:

\displaystyle{ \begin{aligned} (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\ \end{aligned}}

— Me@2020-11-23 04:27:13 PM

.

.

Kronecker delta in tensor component form

Problem 2.3b4

A First Course in String Theory

.

Continue the previous calculation:

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

The two cases can be grouped into one, by replacing the right hand sides with the Kronecker delta. However, there are 4 possible forms and I am not sure which one should be used.

$\displaystyle{\delta^i_{~j}}$
$\displaystyle{\delta_i^{~j}}$
$\displaystyle{\delta^{ij}}$
$\displaystyle{\delta_{ij}}$

So I do a little research on Kronecker delta in this post.

— Me@2020-10-21 03:40:36 PM

.

The inverse Lorentz transformation should satisfy $\displaystyle{\left( \Lambda^{-1} \right)^\beta_{~\mu} \Lambda^\mu_{~\nu} = \delta^\beta_{~\nu}}$, where $\displaystyle{\delta^\beta_{~\nu} \equiv \text{diag}(1,1,1,1)}$ is the Kronecker delta. Then, multiply by the inverse on both sides of Eq. 4 to find

\displaystyle{ \begin{aligned} \left( \Lambda^{-1} \right)^\beta_{~\mu} \left( \Delta x' \right)^\mu &= \delta^\beta_{~\nu} \Delta x^\nu \\ &= \Delta x^\beta \\ \end{aligned}}

(6)

The inverse $\displaystyle{\left( \Lambda^{-1} \right)^\beta_{~\mu}}$ is also written as $\displaystyle{\Lambda_\mu^{~\beta}}$. The notation is as follows: the left index denotes a row while the right index denotes a column, while the top index denotes the frame we’re transforming to and the bottom index denotes the frame we’re transforming from. Then, the operation $\displaystyle{\Lambda_\mu^{~\beta} \Lambda^\mu_{~\nu}}$ means sum over the index $\displaystyle{\mu}$ which lives in the primed frame, leaving unprimed indices $\displaystyle{\beta}$ and $\displaystyle{\nu}$ (so that the RHS of Eq. 6 is unprimed as it should be), where the sum is over a row of $\displaystyle{\Lambda_\mu^{~\beta}}$ and a column of $\displaystyle{\Lambda_{~\nu}^\mu}$ which is precisely the operation of matrix multiplication.

— Lorentz tensor redux

— Emily Nardoni

.

This one is WRONG:

$\displaystyle{(\Lambda^T)^{\mu}{}_{\nu} = \Lambda_{\nu}{}^{\mu}}$

This one is RIGHT:

$\displaystyle{(\Lambda^T)_{\nu}{}^{\mu} ~:=~ \Lambda^{\mu}{}_{\nu}}$

— Me@2020-10-23 06:30:57 PM

.

1. $\displaystyle{(\Lambda^T)_{\nu}{}^{\mu} ~:=~\Lambda^{\mu}{}_{\nu}}$

2. [Kronecker delta] is invariant in all coordinate systems, and hence it is an isotropic tensor.

3. Covariant, contravariant and mixed type of this tensor are the same, that is

$\displaystyle{\delta^i_{~j} = \delta_i^{~j} = \delta^{ij} = \delta_{ij}}$

— Introduction to Tensor Calculus

— Taha Sochi

.

Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the covariant and contravariant metric tensors being inverse to each other:

${\displaystyle g^{ij}g_{jk}=g_{kj}g^{ji}={\delta ^{i}}_{k}={\delta _{k}}^{i}}$

where $\displaystyle{\delta^i_{~k}}$ is the Kronecker delta or identity matrix. Since there are different choices of metric with different metric signatures (signs along the diagonal elements, i.e. tensor components with equal indices), the name and signature is usually indicated to prevent confusion.

— Wikipedia on Raising and lowering indices

.

So

${\displaystyle g^{ij}g_{jk}={\delta ^{i}}_{k}}$

and

${\displaystyle g_{kj}g^{ji}={\delta _{k}}^{i}}$

— Me@2020-10-19 05:21:49 PM

.

$\displaystyle{ T_{i}^{\; j} = \boldsymbol{T}(\boldsymbol{e}_i,\boldsymbol{e}^j) }$ and $\displaystyle{T_{j}^{\; i} = \boldsymbol{T}(\boldsymbol{e}_j,\boldsymbol{e}^i) }$ are both 1-covariant 2-contravariant coordinates of T. The only difference between them is the notation used for sub- and superscripts;

$\displaystyle{ T^{i}_{\; j} = \boldsymbol{T}(\boldsymbol{e}^i,\boldsymbol{e}_j) }$ and $\displaystyle{ T^{j}_{\; i} = \boldsymbol{T}(\boldsymbol{e}^j,\boldsymbol{e}_i) }$ are both 1-contravariant 2-covariant coordinates of T. The only difference between them is the notation used for sub- and superscripts.

— edited Oct 11 ’17 at 14:14

— answered Oct 11 ’17 at 10:58

— EditPiAf

— Tensor Notation Upper and Lower Indices

— Physics StackExchange

.

Rather, the dual basis one-forms are defined by imposing the following 16 requirements at each spacetime point:

$\displaystyle{\langle \tilde{e}^\mu(\mathbf{x}), \vec e_\nu(\mathbf{x}) \rangle = \delta^{\mu}_{~\nu}}$

where $\displaystyle{\delta^{\mu}_{~\nu}}$ is the Kronecker delta, $\displaystyle{\delta^{\mu}_{~\nu} = 1}$ if $\displaystyle{\mu = \nu}$ and $\displaystyle{\delta^{\mu}_{~\nu} = 0}$ otherwise, with the same values for each spacetime point. (We must always distinguish subscripts from superscripts; the Kronecker delta always has one of each.)

— Introduction to Tensor Calculus for General Relativity

— Edmund Bertschinger

.

However, since $\displaystyle{\delta_{~b}^a}$ is a tensor, we can raise or lower its indices using the metric tensor in the usual way. That is, we can get a version of $\displaystyle{\delta}$ with both indices raised or lowered, as follows:

[$\displaystyle{\delta^{ab} = \delta^a_{~c} g^{cb} = g^{ab}}$]

$\displaystyle{\delta_{ab} = g_{ac} \delta^c_{~b} = g_{ab}}$

In this sense, $\displaystyle{\delta^{ab}}$ and $\displaystyle{\delta_{ab}}$ are the upper and lower versions of the metric tensor. However, they can’t really be considered versions of the Kronecker delta any more, as they don’t necessarily satisfy [0 when $i \ne j$ and 1 when $i = j$]. In other words, the only version of $\delta$ that is both a Kronecker delta and a tensor is the version with one upper and one lower index: $\delta^a_{~b}$ [or $\delta^{~a}_{b}$].

— Kronecker Delta as a tensor

— physicspages

.

Continue the calculation for the Problem 2.3b:

Denoting $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{L^{~\nu}_{\mu}}$ is misleading, because that presupposes that $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ is directly related to the matrix $\displaystyle{L}$.

To avoid this bug, instead, we denote $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{M ^\nu_{~\mu}}$. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

Using the Kronecker Delta and Einstein summation notation, we have

\displaystyle{ \begin{aligned} L^\mu_{~\nu} M^{\beta}_{~\mu} &= M^{\beta}_{~\mu} L^\mu_{~\nu} \\ &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

Note: After tensor contraction, the remaining left index should be kept on the left and the remaining right on the right.
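As a numerical sanity check of the contraction $\displaystyle{M^{\beta}_{~\mu} L^\mu_{~\nu} = \delta^{\beta}_{~\nu}}$, here is a sketch assuming the Minkowski metric and a boost along $x^1$ (the velocity $\beta = 0.6$ is an arbitrary choice for illustration):

```python
import numpy as np

beta = 0.6                          # arbitrary boost velocity
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lorentz boost along x^1: x'^0 = gamma (x^0 - beta x^1), x'^1 = gamma (x^1 - beta x^0)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# M^nu_{~mu} = eta_{mu rho} L^rho_{~sigma} eta^{nu sigma}
M = np.einsum('mr,rs,ns->nm', eta, L, eta_inv)

# M^beta_{~mu} L^mu_{~nu} = delta^beta_{~nu}
assert np.allclose(np.einsum('bm,mn->bn', M, L), np.eye(4))
```

In matrix language this is just the Lorentz condition $\displaystyle{L^T \eta L = \eta}$ in disguise: $M$ acts as the inverse of $L$.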

— Me@2020-10-20 03:49:09 PM

.

.

Problem 2.3b3

Now we lower the indices, by expressing the upper-index coordinates (contravariant components) by lower-index coordinates (covariant components), in order to find the Lorentz transformation for the covariant components:

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ \eta^{\rho \mu} (x')_\rho &= L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \sum_\rho \eta^{\rho \mu} (x')_\rho &= \sum_\sigma \sum_\nu L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \end{aligned}}

After raising the indices, we lower the indices again:

\displaystyle{ \begin{aligned} \eta_{\alpha \mu} \eta^{\rho \mu} (x')_\rho &= \eta_{\alpha \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \eta_{\alpha \mu} \eta^{\mu \rho} (x')_\rho &= \eta_{\alpha \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \end{aligned}}

\displaystyle{ \begin{aligned} \delta_{\alpha}^{~\rho} (x')_\rho &= \eta_{\alpha \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ (x')_\alpha &= \eta_{\alpha \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \end{aligned}}
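This transformation rule for the covariant components can be spot-checked numerically; a sketch assuming the Minkowski metric, a boost along $x^1$ with $\beta = 0.6$, and arbitrary sample coordinates:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

x_up = np.array([1.0, 2.0, 3.0, 4.0])   # sample contravariant components x^mu
x_low = eta @ x_up                       # covariant components x_mu

# Route 1: transform x^mu with L, then lower the index with eta
lhs = eta @ (L @ x_up)

# Route 2: transform x_mu directly via eta_{alpha mu} L^mu_{~nu} eta^{sigma nu}
rhs = eta @ L @ np.linalg.inv(eta) @ x_low

assert np.allclose(lhs, rhs)
```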

Prove that $\displaystyle{\eta_{\alpha \mu} L^\mu_{~\nu} \eta^{\sigma \nu} = \left[L^{-1}\right]^\sigma_{~\alpha}}$.

By index renaming, \displaystyle{ \begin{aligned} (x')_\mu &= \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma} x_\nu \\ \end{aligned}}, the question becomes

Prove that $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma} = \left[L^{-1}\right]^\nu_{~\mu}}$.

Denote $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{L^{~\nu}_{\mu}}$. Then the question is simplified to

Prove that $\displaystyle{ L^{~\nu}_{\mu} = \left[L^{-1}\right]^\nu_{~\mu}}$.

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')_\mu &= \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \\ \end{aligned}}

\displaystyle{ \begin{aligned} (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( {L^{~\beta}_{\mu}} x_\beta \right) \\ \\ \end{aligned}}

\displaystyle{ \begin{aligned} \sum_{\mu = 0}^3 (x')^\mu (x')_\mu &= \sum_{\mu = 0}^3 \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 \left( L^\mu_{~\nu} x^\nu \right) \left( {L^{~\beta}_{\mu}} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \sum_{\mu = 0}^3 (x')^\mu (x')_\mu &= \sum_{\mu = 0}^3 \left( \sum_{\nu = 0}^3 L^\mu_{~\nu} x^\nu \right) \left( \sum_{\beta = 0}^3 {L^{~\beta}_{\mu}} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} &(x')^0 (x')_0 + (x')^1 (x')_1 + (x')^2 (x')_2 + (x')^3 (x')_3 \\ &= \sum_{\mu = 0}^3 \left( L^\mu_{~0} x^0 + L^\mu_{~1} x^1 + L^\mu_{~2} x^2 + L^\mu_{~3} x^3 \right) \left( L^{~0}_{\mu} x_0 + L^{~1}_{\mu} x_1 + L^{~2}_{\mu} x_2 + L^{~3}_{\mu} x_3 \right) \\ \end{aligned}}

The right hand side has 64 terms.

Since the spacetime interval is Lorentz-invariant, $\displaystyle{ (x')^\mu (x')_\mu = x^\mu x_\mu }$. So the left hand side can be replaced by $\displaystyle{ x^\mu x_\mu }$.

\displaystyle{ \begin{aligned} &x^0 x_0 + x^1 x_1 + x^2 x_2 + x^3 x_3 \\ &= \sum_{\mu = 0}^3 \left( L^\mu_{~0} x^0 + L^\mu_{~1} x^1 + L^\mu_{~2} x^2 + L^\mu_{~3} x^3 \right) \left( L^{~0}_{\mu} x_0 + L^{~1}_{\mu} x_1 + L^{~2}_{\mu} x_2 + L^{~3}_{\mu} x_3 \right) \\ \end{aligned}}

Note that the 4 terms on the left side also appear on the right hand side.

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( {L^{~\beta}_{\mu}} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( {L^{~\beta}_{\mu}} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} x^0 x_0 + x^1 x_1 + x^2 x_2 + x^3 x_3 &= \sum_{\mu = 0}^3 \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 L^\mu_{~\nu} L^{~\beta}_{\mu} x^\nu x_\beta \\ \end{aligned}}

Since this equation is true for any coordinates, it is an identity. By comparing coefficients, we have:

1. For any terms with $\displaystyle{\nu \ne \beta}$, such as $\displaystyle{\nu = 0}$ and $\displaystyle{\beta=1}$,

\displaystyle{ \begin{aligned} \sum_{\mu = 0}^3 L^\mu_{~0} L^{~1}_{\mu} x^0 x_1 &\equiv 0 \\ \left( \sum_{\mu = 0}^3 L^\mu_{~0} L^{~1}_{\mu} \right) x^0 x_1 &\equiv 0\\ \end{aligned}}

So

\displaystyle{ \begin{aligned} \sum_{\mu = 0}^3 L^\mu_{~0} L^{~1}_{\mu} &= 0 \\ \end{aligned}}

2. For any terms with $\displaystyle{\nu = \beta}$,

\displaystyle{ \begin{aligned} x^0 x_0 + x^1 x_1 + x^2 x_2 + x^3 x_3 &\equiv \sum_{\mu = 0}^3 \sum_{\nu = 0}^3 L^\mu_{~\nu} L^{~\nu}_{\mu} x^\nu x_\nu \\ x^0 x_0 + x^1 x_1 + x^2 x_2 + x^3 x_3 &\equiv \sum_{\mu = 0}^3 \left( L^\mu_{~0} L^{~0}_{\mu} x^0 x_0 + L^\mu_{~1} L^{~1}_{\mu} x^1 x_1 + L^\mu_{~2} L^{~2}_{\mu} x^2 x_2 + L^\mu_{~3} L^{~3}_{\mu} x^3 x_3 \right) \\ \end{aligned}}

So

\displaystyle{ \begin{aligned} \sum_{\mu = 0}^3 L^\mu_{~0} L^{~0}_{\mu} &= 1 \\ \sum_{\mu = 0}^3 L^\mu_{~1} L^{~1}_{\mu} &= 1 \\ \sum_{\mu = 0}^3 L^\mu_{~2} L^{~2}_{\mu} &= 1 \\ \sum_{\mu = 0}^3 L^\mu_{~3} L^{~3}_{\mu} &= 1 \\ \end{aligned}}
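These sums can be spot-checked numerically; a sketch assuming a boost along $x^1$ with the arbitrary choice $\beta = 0.6$ and the Minkowski metric:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Lcov[nu, mu] stands for L_mu^{~nu} = eta_{mu rho} L^rho_{~sigma} eta^{nu sigma}
Lcov = np.einsum('mr,rs,ns->nm', eta, L, np.linalg.inv(eta))

# Diagonal sums equal 1: sum over mu of L^mu_{~nu} L_mu^{~nu}, for each nu
for nu in range(4):
    assert np.isclose(sum(L[mu, nu] * Lcov[nu, mu] for mu in range(4)), 1.0)

# An off-diagonal sum vanishes, e.g. nu = 0 with beta = 1
assert np.isclose(sum(L[mu, 0] * Lcov[1, mu] for mu in range(4)), 0.0)
```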

.

Denoting $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{L^{~\nu}_{\mu}}$ is misleading, because that presupposes that $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ is directly related to the matrix $\displaystyle{L}$.

To avoid this pitfall, we instead denote $\displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}}$ as $\displaystyle{M ^\nu_{~\mu}}$. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \nu \neq \beta:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} = 0 \\ \nu = \beta:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} = 1 \\ \end{aligned}}

— Me@2020-09-12 09:33:00 PM

.

.

Problem 2.3b2

Prove that a metric tensor is symmetric.

.

Assume $\displaystyle{\eta_{\alpha\beta} \neq \eta_{\beta\alpha}}$. Because it’s irrelevant what letter we use for our indices,

$\displaystyle{\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}}$.

Then

$\displaystyle{\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \frac{1}{2}(\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} + \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}) = \frac{1}{2} (\eta_{\alpha\beta} + \eta_{\beta\alpha})dx^{\alpha}dx^{\beta}}$

So only the symmetric part of $\displaystyle{\eta_{\alpha\beta}}$ would survive the sum. As such we may as well take $\displaystyle{\eta_{\alpha\beta}}$ to be symmetric in its definition.

— edited Jun 15 ’15 at 22:48

— rob

— answered Jun 15 ’15 at 17:52

— FenderLesPaul

.

— Why is the metric tensor symmetric?

— Physics StackExchange

.

1.

$\displaystyle{\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}}$

means that

$\displaystyle{\sum_{\alpha, \beta} \eta_{\alpha\beta}dx^{\alpha}dx^{\beta}=\sum_{\alpha, \beta}\eta_{\beta\alpha}dx^{\beta}dx^{\alpha}}$

So in

$\displaystyle{\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}}$,

we cannot cancel out $\displaystyle{dx^{\alpha}dx^{\beta}}$ on both sides. In other words, we do NOT assume that $\displaystyle{\eta_{\alpha\beta} = \eta_{\beta\alpha}}$ in the first place.

.

2.

$\displaystyle{\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \frac{1}{2}(\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} + \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}) = \frac{1}{2} (\eta_{\alpha\beta} + \eta_{\beta\alpha})dx^{\alpha}dx^{\beta}}$

means that

$\displaystyle{\sum_{\alpha, \beta}\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} = \frac{1}{2}\sum_{\alpha, \beta}(\eta_{\alpha\beta}dx^{\alpha}dx^{\beta} + \eta_{\beta\alpha}dx^{\beta}dx^{\alpha}) = \frac{1}{2} \sum_{\alpha, \beta}(\eta_{\alpha\beta} + \eta_{\beta\alpha})dx^{\alpha}dx^{\beta}}$

.

3. “… only the symmetric part of $\displaystyle{\eta_{\alpha\beta}}$ would survive the sum” means that only the sum $\displaystyle{\left(\eta_{\alpha\beta} + \eta_{\beta\alpha}\right)}$ is physically meaningful.
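The claim that only the symmetric part survives the double sum can be demonstrated numerically; a sketch in which a random non-symmetric matrix stands in for $\displaystyle{\eta_{\alpha\beta}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4))    # a deliberately non-symmetric "metric"
dx = rng.normal(size=4)        # arbitrary displacement components dx^alpha

sym = 0.5 * (g + g.T)          # symmetric part
antisym = 0.5 * (g - g.T)      # antisymmetric part

# Only the symmetric part contributes to g_{alpha beta} dx^alpha dx^beta
assert np.isclose(dx @ g @ dx, dx @ sym @ dx)
assert np.isclose(dx @ antisym @ dx, 0.0)
```

The antisymmetric part is annihilated by the symmetric product $\displaystyle{dx^\alpha dx^\beta}$, which is exactly why $\displaystyle{\eta_{\alpha\beta}}$ may be taken symmetric by definition.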

— Me@2020-08-14 03:34:05 PM

.

.