# Problem 2.3b1

A First Course in String Theory

.

2.3 Lorentz transformations, derivatives, and quantum operators.

(b) Show that the objects $\displaystyle{\frac{\partial}{\partial x^\mu}}$ transform under Lorentz transformations in the same way as the $\displaystyle{a_\mu}$ considered in (a) do. Thus, partial derivatives with respect to conventional upper-index coordinates $\displaystyle{x^\mu}$ behave as a four-vector with lower indices – as reflected by writing it as $\displaystyle{\partial_\mu}$.

~~~ \displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ \frac{\partial}{\partial (x')^\mu} &= \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu} \\ &= \frac{\partial x^0}{\partial (x')^\mu} \frac{\partial}{\partial x^0} + \frac{\partial x^1}{\partial (x')^\mu} \frac{\partial}{\partial x^1} + \frac{\partial x^2}{\partial (x')^\mu} \frac{\partial}{\partial x^2} + \frac{\partial x^3}{\partial (x')^\mu} \frac{\partial}{\partial x^3} \\ \end{aligned}}
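Since $\displaystyle{x^\nu = (L^{-1})^\nu_{~\mu} (x')^\mu}$, the chain-rule factor is $\displaystyle{\frac{\partial x^\nu}{\partial (x')^\mu} = (L^{-1})^\nu_{~\mu}}$, so derivatives transform with the inverse matrix. A minimal numerical sketch of this (assuming NumPy; the scalar field $\displaystyle{\phi}$, the boost velocity, and the sample point are arbitrary choices for illustration):

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost along x^1: (x')^mu = L^mu_nu x^nu
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
Linv = np.linalg.inv(L)              # dx^nu / dx'^mu

def phi(x):                          # an arbitrary smooth scalar field
    return np.sin(x[0]) + x[1]**2 * x[2] + np.cos(x[3])

def grad(f, x, h=1e-6):              # central finite differences
    g = np.zeros(4)
    for mu in range(4):
        e = np.zeros(4); e[mu] = h
        g[mu] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

x  = np.array([0.3, -1.2, 0.7, 2.0])
xp = L @ x                           # primed coordinates of the same event

g_unprimed = grad(phi, x)                         # d(phi)/dx^nu
g_primed   = grad(lambda y: phi(Linv @ y), xp)    # d(phi)/dx'^mu

# Chain rule: d/dx'^mu = (dx^nu/dx'^mu) d/dx^nu -- the inverse matrix,
# which is exactly how lower-index components transform
assert np.allclose(g_primed, Linv.T @ g_unprimed, atol=1e-5)
```

The gradient picks up $\displaystyle{L^{-1}}$ rather than $\displaystyle{L}$, which is the defining behaviour of a four-vector with lower indices.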

The Lorentz transformation: \displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ \end{aligned}}

Lowering the indices to create covariant vectors: \displaystyle{ \begin{aligned} x_\mu &= \eta_{\mu \nu} x^\nu \\ \end{aligned}}

In matrix form, covariant vectors are represented by row vectors: \displaystyle{ \begin{aligned} \left[ x_\mu \right] &= \left( [\eta_{\mu \nu}] [x^\nu] \right)^T \\ \end{aligned}}

Change the subject: \displaystyle{ \begin{aligned} \left[ x_\mu \right]^T &= [\eta_{\mu \nu}] [x^\nu] \\ [\eta_{\mu \nu}] [x^\nu] &= \left[ x_\mu \right]^T \\ [x^\nu] &= [\eta_{\mu \nu}]^{-1} \left[ x_\mu \right]^T \\ \end{aligned}}

With \displaystyle{ \begin{aligned} \eta^{\mu \nu} &\stackrel{\text{\tiny def}}{=} \left[ \eta_{\mu \nu} \right]^{-1} \\ \end{aligned}}, we have: \displaystyle{ \begin{aligned} \left[ x^\nu \right] &= \left[ \eta^{\mu \nu} \right] \left[ x_\mu \right]^T \\ \end{aligned}} \displaystyle{ \begin{aligned} x^\nu &= x_\mu \eta^{\mu \nu} \\ \end{aligned}}

Now we lower the indices in order to find the Lorentz transformation for the covariant components: \displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ \eta^{\rho \mu} (x')_\rho &= L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ (x')_\rho &= \eta_{\rho \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma \\ \end{aligned}}
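As a quick numerical sanity check of this relation, lowering both sides of $\displaystyle{(x')^\mu = L^\mu_{~\nu} x^\nu}$ gives $\displaystyle{(x')_\rho = \eta_{\rho \mu} L^\mu_{~\nu} \eta^{\sigma \nu} x_\sigma}$, i.e. the matrix $\displaystyle{\eta L \eta^{-1}}$ acting on the covariant components. A sketch (assuming NumPy and the metric $\displaystyle{\eta = \text{diag}(-1,1,1,1)}$; the components of $\displaystyle{x^\nu}$ and the boost velocity are arbitrary):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # eta_{mu nu}
eta_inv = np.linalg.inv(eta)                # eta^{mu nu}

beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

x_up  = np.array([1.0, 2.0, 3.0, 4.0])      # contravariant components x^nu
x_low = eta @ x_up                          # covariant components x_sigma
xp_up = L @ x_up                            # (x')^mu

lhs = eta @ xp_up                           # (x')_rho, lowered directly
M   = eta @ L @ eta_inv                     # eta_{rho mu} L^mu_nu eta^{sigma nu}
rhs = M @ x_low

assert np.allclose(lhs, rhs)
```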

— Me@2020-07-21 10:46:32 AM

.

.

# Quantum entanglement, 4

What’s sneaky about quantum mechanics is that the whole system can be in a pure state which when restricted to each subsystem gives a mixed state, and that these mixed states are then correlated (necessarily, as it turns out). That’s what “entanglement” is all about.

The first way things get trickier in quantum mechanics is that something we are used to in classical mechanics fails. In classical mechanics, pure states are always dispersion-free — that is, for every observable, the probability measure assigned by the state to that observable is a Dirac delta measure, that is, the observable has a 100% chance of being some specific value and a 0% chance of having any other value. (Consider the example of the dice, with the observable being the number of dots on the face pointing up.) In quantum mechanics, pure states need NOT be dispersion-free. In fact, they usually aren’t.

A second, subtler way things get trickier in quantum mechanics concerns systems made of parts, or subsystems. Every observable of a subsystem is automatically an observable for the whole system (but not all observables of the whole system are of that form; some involve, say, adding observables of two different subsystems). So every state of the whole system gives rise to, or as we say, “restricts to,” a state of each of its subsystems. In classical mechanics, pure states restrict to pure states. For example, if our system consisted of 2 dice, a pure state of the whole system would be something like “the first die is in state 2 and the second one is in state 5;” this restricts to a pure state for the first die (state 2) and a pure state for the second die (state 5). In quantum mechanics, it is not true that a pure state of a system must restrict to a pure state of each subsystem.

It is this latter fact that gave rise to a whole bunch of quantum puzzles such as the Einstein-Podolsky-Rosen puzzle and Bell’s inequality. And it is this last fact that makes things a bit tricky when one of the two subsystems happens to be you. It is possible, and indeed very common, for the following thing to happen when two subsystems interact as time passes. Say the whole system starts out in a pure state which restricts to a pure state of each subsystem. After a while, this need no longer be the case! Namely, if we solve Schroedinger’s equation to calculate the state of the system a while later, it will necessarily still be a pure state (pure states of the whole system evolve to pure states), but it need no longer restrict to pure states of the two subsystems. If this happens, we say that the two subsystems have become “entangled.”

— December 16, 1993

— This Week’s Finds in Mathematical Physics (Week 27)

— John Baez
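Baez's two points can be checked directly in a few lines. A minimal sketch (assuming NumPy; the Bell state is a standard illustration, not an example from the quoted text): the whole two-qubit system is in a pure state, that pure state still has dispersion in an ordinary observable, and its restriction to one qubit is the maximally mixed state.

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2): a pure state of the whole system
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)                  # density matrix of the whole system
assert np.allclose(rho @ rho, rho)        # purity: rho^2 = rho

# A quantum pure state need not be dispersion-free:
# measure sigma_z on the first qubit
Z1 = np.kron(np.diag([1.0, -1.0]), np.eye(2))
mean = psi @ Z1 @ psi                     # <Z1> = 0
var  = psi @ Z1 @ Z1 @ psi - mean**2     # <Z1^2> - <Z1>^2 = 1, not 0

# Restrict to the first qubit: partial trace over the second
rho4  = rho.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho4)       # trace out the second qubit

# The restriction is the maximally mixed state I/2 -- mixed, not pure
assert np.allclose(rho_A, np.eye(2) / 2)
assert not np.allclose(rho_A @ rho_A, rho_A)
```

The last two assertions are exactly the "sneaky" fact: a pure state of the whole system restricting to a mixed state of each part.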

.

.

2020.07.19 Sunday ACHK

# 追求, 2

We talked half the night, and in the middle of talk became lovers.

— Bertrand Russell

.

.

2020.07.18 Saturday ACHK

# 機遇創生論 1.6.2

.

The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.

— Ralph Waldo Emerson

"Generic knowledge" means the things that everyone needs to know in daily life, for example: health, personal finance, relationships, and time management.

As we get older, generic reading becomes less and less useful. We then gain new knowledge mostly by personal life experience and directed reading.

— paraphrasing John T. Reed

— Me@2020-07-16 06:14:51 PM

.

.

# Chain rule of functional variation

Ex 1.8.2.3, Structure and Interpretation of Classical Mechanics

. \displaystyle{ \begin{aligned} &\delta_\eta F[g[q]] \\ &= \delta_\eta (F \circ g)[q] \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q + \epsilon \eta]] - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q] + \epsilon \delta_\eta g[q] + \epsilon^2 (...) + \epsilon^3 (...) + ...] - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q] + \epsilon \delta_\eta g[q] + \epsilon^2 (... + \epsilon (...) + ...)] - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q] + \epsilon \delta_\eta g[q] + \epsilon^2 (...)] - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q] + \epsilon \left(\delta_\eta g[q] + \epsilon (...)\right)] - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{F[g[q]] + \epsilon \delta_{\left(\delta_\eta g[q] + \epsilon (...)\right)} F[g[q]] + \epsilon^2 (...) - F[g[q]]}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \frac{\epsilon \delta_{\left(\delta_\eta g[q] + \epsilon (...)\right)} F[g[q]] + \epsilon^2 (...)}{\epsilon} \right) \\ &= \lim_{\epsilon \to 0} \left( \delta_{\left(\delta_\eta g[q] + \epsilon (...)\right)} F[g[q]] + \epsilon (...) \right) \\ &= \delta_{ \left( \delta_\eta g[q] \right)} F[g[q]] \\ \end{aligned}}
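The final identity $\displaystyle{\delta_\eta (F \circ g)[q] = \delta_{\left( \delta_\eta g[q] \right)} F[g[q]]}$ can be sanity-checked numerically by discretizing functions on a grid, so that functionals become ordinary functions of arrays. A sketch (assuming NumPy; the particular $\displaystyle{F}$, $\displaystyle{g}$, $\displaystyle{q}$, and $\displaystyle{\eta}$ are arbitrary choices for illustration):

```python
import numpy as np

# Discretize: functions become arrays on a grid,
# functionals become ordinary functions of those arrays
t  = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def g(q):                      # an inner functional: pointwise map q -> q^2
    return q**2

def F(h):                      # an outer functional: F[h] = integral of sin(h)
    return np.sum(np.sin(h)) * dt

def delta(func, q, eta, eps=1e-6):
    # delta_eta func[q], via a symmetric difference quotient in eps
    return (func(q + eps * eta) - func(q - eps * eta)) / (2.0 * eps)

q   = np.sin(2.0 * np.pi * t)
eta = np.cos(3.0 * np.pi * t)

lhs = delta(lambda u: F(g(u)), q, eta)     # delta_eta (F o g)[q]
dg  = delta(g, q, eta)                     # delta_eta g[q], a function of t
rhs = delta(F, g(q), dg)                   # delta_{delta_eta g[q]} F[g[q]]

assert np.isclose(lhs, rhs, atol=1e-6)
```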

— Me@2020-07-14 06:00:35 PM

.

.

# Definition of time, 11

You cannot go through time without changing (the definition of) yourself.

— Me@2012.04.28

.

.

# 反貼士搵笨大行動 1.5

(If, even at this point, you still believe that there are such things as "tips" in this world, you are probably not qualified to study at university.)

.

.

— Me@2020-07-10 04:13:13 PM

.

.

# Problem 2.3a

A First Course in String Theory

.

2.3 Lorentz transformations, derivatives, and quantum operators.

(a) Give the Lorentz transformations for the components $\displaystyle{a_{\mu}}$ of a vector under a boost along the $\displaystyle{x^1}$ axis.

~~~ \displaystyle{\begin{aligned} \begin{bmatrix} c\,t' \\ x' \\ y' \\ z' \end{bmatrix} &= \begin{bmatrix} \gamma & -\beta \gamma & 0 & 0 \\ -\beta \gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} c\,t \\ x \\ y \\ z \end{bmatrix} \\ \end{aligned}} \displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ a_\mu &= a^\nu \eta_{\mu \nu} \\ (a')_\mu &= L_\mu^{~\nu} a_\nu \\ [(a')_\mu] &= [a_\nu] [L^\mu_{~\nu}]^{-1} \\ [L^\mu_{~\nu}]^{-1} &= \begin{bmatrix} \gamma & \beta \gamma & 0 & 0 \\ \beta \gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ \end{aligned}}

. \displaystyle{ \begin{aligned} a_0 &= -a^0 \\ a_1 &= a^1 \\ a_2 &= a^2 \\ a_3 &= a^3 \\ \end{aligned}}
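A numerical check of the matrix relation $\displaystyle{[(a')_\mu] = [a_\nu] [L^\mu_{~\nu}]^{-1}}$ (a sketch assuming NumPy and the metric $\displaystyle{\eta = \text{diag}(-1,1,1,1)}$; the components of $\displaystyle{a^\nu}$ and the boost velocity are arbitrary):

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

# The inverse boost is the same boost with beta -> -beta
Linv = np.array([[gamma,      beta*gamma, 0.0, 0.0],
                 [beta*gamma, gamma,      0.0, 0.0],
                 [0.0,        0.0,        1.0, 0.0],
                 [0.0,        0.0,        0.0, 1.0]])
assert np.allclose(L @ Linv, np.eye(4))

a_up  = np.array([1.0, 2.0, 3.0, 4.0])   # a^nu
a_low = eta @ a_up                       # a_mu = eta_{mu nu} a^nu

# The row vector of covariant components transforms with the inverse matrix:
# [(a')_mu] = [a_nu] [L]^{-1}
ap_low = a_low @ Linv

# Consistency: lowering the boosted contravariant components agrees
assert np.allclose(ap_low, eta @ (L @ a_up))
```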

— Me@2020-07-05 05:40:44 PM

.

.

# Consistent histories, 6.2

observer ~ a consistent description

— Me@2017-08-03 07:58:50 AM

.

.