Problem 2.3b5

A First Course in String Theory

.

2.3 Lorentz transformations, derivatives, and quantum operators.

(b) Show that the objects \displaystyle{\frac{\partial}{\partial x^\mu}} transform under Lorentz transformations in the same way as the \displaystyle{a_\mu} considered in (a) do. Thus, partial derivatives with respect to conventional upper-index coordinates \displaystyle{x^\mu} behave as a four-vector with lower indices – as reflected by writing it as \displaystyle{\partial_\mu}.

~~~

Denoting \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{L^{~\nu}_{\mu}} is misleading, because that presupposes that \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} is directly related to the matrix \displaystyle{L}.

To avoid this pitfall, we instead denote \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{M ^\nu_{~\mu}}. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

Using the Kronecker Delta and Einstein summation notation, we have

\displaystyle{ \begin{aligned} L^\mu_{~\nu} M^{\beta}_{~\mu} &= M^{\beta}_{~\mu} L^\mu_{~\nu} \\ &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

So

\displaystyle{ \begin{aligned} \sum_{\mu=0}^{3} L^\mu_{~\nu} M^{\beta}_{~\mu} &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

\displaystyle{ \begin{aligned}   M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}    \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}
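As a sanity check, the relation \displaystyle{\eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} = [L^{-1}]^{\beta}_{~\mu}} can be verified numerically. The following sketch is my own illustration, not from the book; it assumes the metric signature \displaystyle{(-,+,+,+)} and uses a boost along \displaystyle{x} as the sample \displaystyle{L}:

```python
import numpy as np

# Minkowski metric, signature (-,+,+,+); eta is its own inverse,
# so the upper- and lower-index versions share one array.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A boost along x with beta = 0.6: L[mu, nu] = L^mu_nu (row = upper index).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# M^beta_mu = eta_{mu rho} L^rho_sigma eta^{beta sigma}.
# With row = beta and column = mu, this is the matrix eta @ L.T @ eta.
M = eta @ L.T @ eta

# The claim: M equals the matrix inverse of L.
assert np.allclose(M, np.linalg.inv(L))

# Equivalently, L^mu_nu M^beta_mu = delta^beta_nu.
assert np.allclose(np.einsum('mn,bm->bn', L, M), np.eye(4))
print("M equals L inverse")
```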

— Me@2020-11-23 04:27:13 PM

.

One defines (as a matter of notation),

{\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}

and may in this notation write

{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.}

Now for a subtlety. The implied summation on the right hand side of

{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }}

is running over a row index of the matrix representing \displaystyle{\Lambda^{-1}}. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of \displaystyle{\Lambda} acting on the column vector \displaystyle{A_\mu}. That is, in pure matrix notation,

{\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.}

— Wikipedia on Lorentz transformation

.

So

\displaystyle{ \begin{aligned}   M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}    \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

.

Denote \displaystyle{[L^{-1}]^{\beta}_{~\mu}} as

\displaystyle{ \begin{aligned}   N^{~\beta}_{\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}   N^{~\beta}_{\mu} &= M^{\beta}_{~\mu} \\   [N^T] &= [M] \\   \end{aligned}}

.

The Lorentz transformation:

\displaystyle{ \begin{aligned}   (x')^\mu &= L^\mu_{~\nu} x^\nu \\   (x')_\mu &= \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   (x')^\mu &= L^\mu_{~\nu} x^\nu \\   (x')_\mu &= N^{~\nu}_{\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\   (x')_\mu &= M^{\nu}_{~\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\   (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} &= \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu} \\   &= \frac{\partial x^0}{\partial (x')^\mu} \frac{\partial}{\partial x^0} + \frac{\partial x^1}{\partial (x')^\mu} \frac{\partial}{\partial x^1} + \frac{\partial x^2}{\partial (x')^\mu} \frac{\partial}{\partial x^2} + \frac{\partial x^3}{\partial (x')^\mu} \frac{\partial}{\partial x^3} \\   \end{aligned}}

Now we consider \displaystyle{f} as a function of the \displaystyle{x^{\mu}}'s:

\displaystyle{f(x^0, x^1, x^2, x^3)}

Since the \displaystyle{x^{\mu}}'s and the \displaystyle{(x')^{\mu}}'s are related by a Lorentz transformation, \displaystyle{f} is also a function, although indirectly, of the \displaystyle{(x')^{\mu}}'s.

\displaystyle{f(x^0((x')^0, (x')^1, (x')^2, (x')^3), x^1((x')^0, ...), x^2((x')^0, ...), x^3((x')^0, ...))}

For notational simplicity, we write \displaystyle{f} as

\displaystyle{f(x^\alpha((x')^\beta))}

Since \displaystyle{f} is a function of the \displaystyle{(x')^{\mu}}'s, we can differentiate it with respect to the \displaystyle{(x')^{\mu}}'s.

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} f(x^\alpha((x')^\beta)) &= \sum_{\nu = 0}^3 \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu}  f(x^\alpha) \\   \end{aligned}}

Since

\displaystyle{ \begin{aligned}   x^\nu &= [L^{-1}]^\nu_{~\beta} (x')^\beta \\   \end{aligned}},

\displaystyle{ \begin{aligned}   \frac{\partial f}{\partial (x')^\mu}   &= \sum_{\nu = 0}^3 \frac{\partial}{\partial (x')^\mu} \left[  \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} (x')^\beta \right] \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \frac{\partial (x')^\beta}{\partial (x')^\mu} \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \delta^\beta_\mu \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\   &= [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\   \end{aligned}}

Therefore,

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} &= [L^{-1}]^\nu_{~\mu} \frac{\partial}{\partial x^\nu} \\   \end{aligned}}

This is the same as the Lorentz transformation of covariant vectors:

\displaystyle{ \begin{aligned}   (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\   \end{aligned}}
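The transformation rule \displaystyle{\frac{\partial}{\partial (x')^\mu} = [L^{-1}]^\nu_{~\mu} \frac{\partial}{\partial x^\nu}} can also be checked numerically with finite differences. This is a sketch of my own, in \displaystyle{1+1} dimensions for brevity; the scalar field \displaystyle{f} below is an arbitrary sample function:

```python
import numpy as np

# A 1+1 dimensional boost: L[mu, nu] = L^mu_nu.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta],
              [-gamma * beta, gamma]])
Linv = np.linalg.inv(L)

def f(x):  # a sample scalar field f(x^0, x^1)
    return np.sin(x[0]) * np.exp(0.3 * x[1])

def grad(func, x, h=1e-6):  # central finite differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = h
        g[i] = (func(x + dx) - func(x - dx)) / (2 * h)
    return g

x = np.array([0.4, 1.3])      # a point in the unprimed frame
xp = L @ x                    # the same point in the primed frame

# f expressed as a function of the primed coordinates: f'(x') = f(L^{-1} x').
fp = lambda xp_: f(Linv @ xp_)

lhs = grad(fp, xp)            # d/d(x')^mu acting on f
rhs = Linv.T @ grad(f, x)     # [L^{-1}]^nu_mu  d f / d x^nu
assert np.allclose(lhs, rhs, atol=1e-4)
print("derivatives transform with the inverse of L")
```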

— Me@2020-11-23 04:27:13 PM

.

.

2020.11.24 Tuesday (c) All rights reserved by ACHK

Global symmetry, 2

In physics, a global symmetry is a symmetry that holds at all points in the spacetime under consideration, as opposed to a local symmetry which varies from point to point.

Global symmetries require conservation laws, but not forces, in physics.

— Wikipedia on Global symmetry

.

.

2020.11.22 Sunday ACHK

Light, 3

無額外論 7

.

The one in the mirror is your Light.

— Me@2011.06.24

.

Thou shalt have no other gods before Me.

— one of the Ten Commandments

.

God teach you through your mind; help you through your actions.

— Me@the Last Century

.

.

2020.11.21 Saturday (c) All rights reserved by ACHK

一萬個小時 2.4

機遇創生論 1.6.6 | 十年 3.4

.

通用的極致,就是專業。而通用發展成極致的方法,有很多種。每種極致,就自成一門專業。通用是樹幹;專業是樹支。一支樹支的強壯與否,並不能保證,另一支樹支的健康。

那是否就代表,學生時代以後,就毋須再發展,通用知識或通用技能呢?

.

學生時代以後,工作時代中,仍然需要發展,通用知識或通用技能,從而維持自己的智能和體能,以防退化。例如,剛才所講的智力遊戲,我認為可以,避免老人家的腦部退化。

我那樣認為,是因為我看過一些老人家的訪問。有一位九十歲的女仕說,她透過玩手提遊戲機,保持頭腦的清醒靈活。

即使還未老年,只是步入中年,也應該多加留意,智能或體能上的退化。玩適當類型的電腦遊戲,可以保持反應的靈敏。什麼類型呢?

電腦遊戲有很多類型,主要有「建構」、「解謎」和「戰鬥任務」三種。「戰鬥任務」就可用來鍛鍊反應。那些遊戲會放你於,危險和緊急的處境。你當時必須有,敏捷和精準的身手,才可以完成任務,然後脫險。換句話說,那些遊戲亦同時訓練你,控制自己的心理,駕馭自己的恐懼。

同理,雖然一般的運動本身,例如掌上壓,並沒有什麼所謂的「用途」,但是,你正正需要這些「無用」的運動,去維持體能。

無論是體能和智能,如果沒有強大而穩定的主幹,就不會有健康的分支和茂盛的樹葉。

— Me@2020-11-16 04:55:18 PM

.

.

2020.11.20 Friday (c) All rights reserved by ACHK

Poisson's Lagrange Equation

Structure and Interpretation of Classical Mechanics

.

Ex 1.10 Higher-derivative Lagrangians

Derive Lagrange’s equations for Lagrangians that depend on accelerations. In particular, show that the Lagrange equations for Lagrangians of the form \displaystyle{L(t, q, \dot q, \ddot q)} with \displaystyle{\ddot{q}} terms are

\displaystyle{D^2(\partial_3L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + \partial_1 L \circ \Gamma[q] = 0}

In general, these equations, first derived by Poisson, will involve the fourth derivative of \displaystyle{q}. Note that the derivation is completely analogous to the derivation of the Lagrange equations without accelerations; it is just longer. What restrictions must we place on the variations so that the critical path satisfies a differential equation?


Varying the action

\displaystyle{ \begin{aligned}   S[q] (t_1, t_2) &= \int_{t_1}^{t_2} L \circ \Gamma [q] \\   \eta(t_1) &= \eta(t_2) = 0 \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2) &= 0 \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2) &= \int_{t_1}^{t_2} \delta_\eta \left( L \circ \Gamma [q] \right) \\   \end{aligned}}

\displaystyle{ \begin{aligned}     \delta_\eta I [q] &= \eta \\  \delta_\eta g[q] &= D \eta~~~\text{with}~~~g[q] = Dq \\   \end{aligned}}

.

Let \displaystyle{h[q] = D^2 q}.

\displaystyle{ \begin{aligned}   \delta_\eta h[q]   &= \lim_{\epsilon \to 0} \frac{h[q+\epsilon \eta] - h[q]}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 (q+\epsilon \eta) - D^2 q}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 q + D^2 \epsilon \eta - D^2 q}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 \epsilon \eta}{\epsilon} \\   &= D^2 \eta \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \Gamma [q] (t) &= (t, q(t), D q(t), D^2 q(t)) \\  \delta_\eta \Gamma [q] (t) &= (0, \eta (t), D \eta (t), D^2 \eta (t)) \\  \end{aligned}}

.

Chain rule of functional variation

\displaystyle{ \begin{aligned} &\delta_\eta F[g[q]] \\   &= \delta_\eta (F \circ g)[q] \\   &= \delta_{ \left( \delta_\eta g[q] \right)} F[g[q]] \\ \end{aligned}}

Since variation commutes with integration,

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \delta_\eta \int_{t_1}^{t_2} L \circ \Gamma [q] \\   &= \int_{t_1}^{t_2} \delta_\eta \left( L \circ \Gamma [q] \right) \\   \end{aligned}}

By the chain rule of functional variation:

\displaystyle{ \begin{aligned}   \delta_\eta L \circ \Gamma [q] = \delta_{ \left( \delta_\eta \Gamma[q] \right)} L[\Gamma[q]] \\   \end{aligned}}

If \displaystyle{L} is path-independent,

\displaystyle{ \begin{aligned}   \delta_\eta \left( L \circ \Gamma [q] \right) = \left( DL \circ \Gamma[q] \right) \delta_\eta \Gamma[q] \\   \end{aligned}}

But is \displaystyle{L} path-independent?

The composite \displaystyle{L \circ \Gamma [\cdot]} is path-dependent. Its input is a path \displaystyle{q}, not just \displaystyle{q(t)}, the value of \displaystyle{q} at time \displaystyle{t}. However, \displaystyle{L(\cdot)} itself is a path-independent function, because its input is not a path \displaystyle{q}, but a quadruple of values \displaystyle{(t, q(t), Dq(t), D^2 q(t))}.

\displaystyle{ \begin{aligned}   L \circ \Gamma [q] = L(t, q(t), Dq(t), D^2 q(t)) \\   \end{aligned}}

Since \displaystyle{L} is path-independent,

\displaystyle{ \begin{aligned}   \delta_\eta \left( L \circ \Gamma [q] \right)   = \left( DL \circ \Gamma[q] \right) \delta_\eta \Gamma[q] \\   \end{aligned}}

\displaystyle{ \begin{aligned}   &\delta_\eta S[q] (t_1, t_2) \\  &= \int_{t_1}^{t_2} \delta_\eta \left( L \circ \Gamma [q] \right) \\   &= \int_{t_1}^{t_2} \left( DL \circ \Gamma[q] \right) \delta_\eta \Gamma[q]  \\   &= \int_{t_1}^{t_2} \left( DL \right) (t, q, D q, D^2 q) ~ (0, \eta (t), D \eta (t), D^2 \eta (t))  \\   &= \int_{t_1}^{t_2} \left[ \partial_0 L \circ \Gamma[q], \partial_1 L \circ \Gamma[q], \partial_2 L \circ \Gamma[q], \partial_3 L \circ \Gamma[q] \right] (0, \eta (t), D \eta (t), D^2 \eta (t))  \\   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta + (\partial_2 L \circ \Gamma[q]) D \eta + (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        &=   \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta      + \left[ \left. (\partial_2 L \circ \Gamma[q]) \eta \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta \right]     + \int_{t_1}^{t_2} (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        \end{aligned}}

Since \displaystyle{\eta(t_1) = 0} and \displaystyle{\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &=   \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta      - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta      + \int_{t_1}^{t_2} (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        \end{aligned}}

Here is a trick for integration by parts:

As long as the boundary term \displaystyle{\left. u(t)v(t) \right|_{t_1}^{t_2} = 0},

\displaystyle{\int_{t_1}^{t_2} u(t) dv(t) = - \int_{t_1}^{t_2} v(t) du(t)}

So if \displaystyle{D \eta(t_1) = 0} and \displaystyle{D \eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_3 L \circ \Gamma[q]) D \eta \\                        \end{aligned}}

Since \displaystyle{\eta(t_1) = 0} and \displaystyle{\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta        + \int_{t_1}^{t_2} D^2 (\partial_3 L \circ \Gamma[q]) \eta \\                        \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} \left[ (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) \right] \eta \\                        \end{aligned}}

By the principle of stationary action, \displaystyle{ \delta_\eta S[q] (t_1, t_2) = 0}. So

\displaystyle{ \begin{aligned}   0   &= \int_{t_1}^{t_2} \left[ (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) \right] \eta \\                        \end{aligned}}

Since this is true for any function \displaystyle{\eta(t)} that satisfies \displaystyle{\eta(t_1) = \eta(t_2) = 0} and \displaystyle{D\eta(t_1) = D\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) &= 0 \\                        D^2 (\partial_3 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + \partial_1 L \circ \Gamma[q] &= 0 \\                        \end{aligned}}
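The result can be spot-checked with a computer algebra system. As an illustration of my own, not part of the original problem, take \displaystyle{L = -\frac{1}{2} m q \ddot q - \frac{1}{2} k q^2}, which differs from the harmonic-oscillator Lagrangian \displaystyle{\frac{1}{2} m \dot q^2 - \frac{1}{2} k q^2} by a total time derivative, so Poisson's equation should reproduce the same equation of motion:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Stand-ins for the slots of L(t, q, Dq, D^2 q), so that the partial
# derivatives partial_1, partial_2, partial_3 are plain derivatives:
Q, Qd, Qdd = sp.symbols('Q Qd Qdd')
L = -sp.Rational(1, 2) * m * Q * Qdd - sp.Rational(1, 2) * k * Q**2

subs = {Q: q(t), Qd: sp.Derivative(q(t), t), Qdd: sp.Derivative(q(t), t, 2)}

# Poisson's equation: D^2(partial_3 L) - D(partial_2 L) + partial_1 L = 0
eq = (sp.diff(sp.diff(L, Qdd).subs(subs), t, 2)
      - sp.diff(sp.diff(L, Qd).subs(subs), t)
      + sp.diff(L, Q).subs(subs))

# Up to an overall sign, this is the harmonic oscillator m q'' + k q = 0:
residual = sp.simplify(eq + m * sp.Derivative(q(t), t, 2) + k * q(t))
print(residual)  # -> 0
```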

.

Note:

The notation of the path function \displaystyle{\Gamma} is \displaystyle{\Gamma[q](t)}, not \displaystyle{\Gamma[q(t)]}.

The notation \displaystyle{\Gamma[q](t)} means that \displaystyle{\Gamma} takes a path \displaystyle{q} as input. And then returns a path-independent function \displaystyle{\Gamma[q]}, which takes time \displaystyle{t} as input, returns a value \displaystyle{\Gamma[q](t)}.

The other notation \displaystyle{\Gamma[q(t)]} makes no sense, because \displaystyle{\Gamma[.]} takes a path \displaystyle{q}, not a value \displaystyle{q(t)}, as input.

— Me@2020-11-11 05:37:13 PM

.

.

2020.11.11 Wednesday (c) All rights reserved by ACHK

Memory as past microstate information encoded in present devices

Logical arrow of time, 4.2

.

Memory is of the past.

The main point of memories or records is that without them, most of the past microstate information would be lost for a macroscopic observer forever.

For example, if a mixture has already reached an equilibrium state, we cannot deduce which previous microstate it came from, unless we have a memory of it.

.

memory/record

~ some of the past microstate and macrostate information encoded in present macroscopic devices, such as paper, electronic devices, etc.

.

How come macroscopic time is cumulative?

.

Quantum state evolution is unitary.

A quantum state in the present is evolved from one and only one quantum state at any particular time point in the past.

Also, that quantum state in the present will evolve to one and only one quantum state at any particular time point in the future.

.

Let

\displaystyle{t_1} = a past time point

\displaystyle{t_2} = now

\displaystyle{t_3} = a future time point

Also, let state \displaystyle{S_1} at time \displaystyle{t_1} evolve to state \displaystyle{S_2} at time \displaystyle{t_2}. And then state \displaystyle{S_2} evolves to state \displaystyle{S_3} at time \displaystyle{t_3}.

.

State \displaystyle{S_2} has a one-to-one correspondence with its past state \displaystyle{S_1}. So the state \displaystyle{S_2} does not need memory to store any information about state \displaystyle{S_1}.

Instead, just by knowing that the microstate at time \displaystyle{t_2} is \displaystyle{S_2}, we can already deduce that it evolved from state \displaystyle{S_1} at time \displaystyle{t_1}.

In other words, a microstate does not require memory.

— Me@2020-10-28 10:26 AM

.

.

2020.11.02 Monday (c) All rights reserved by ACHK

尋覓

This is adapted from a conversation on 14 October 2010.

.

Why are you crying? Can you tell me?

(CPK: Can she tell you? Yes?

She has just broken up with her boyfriend, because he had been ignoring her for a long time.)

.

I saw you crying just now and did not know what had happened. Now I know it is a breakup. That is the smallest of all the disasters I can imagine, because it is something almost everyone will encounter.

.

If you want to avoid it, first take some notes.

First, do not pick an ordinary earthling as your other half, because, with an earthling, the plot you have just described is certain to happen.

For example, before he wins you over, he cherishes you. "Cherish" is not quite the word. How should I put it?

Before winning you over, he longs to see you; after succeeding, he hardly pays attention to you. Is that what happened to you?

(CSY: Yes.)

.

Imagine marrying him in the future. What would the consequences be?

It would only get worse. He would ignore you even more. So if a love cannot last forever, the earlier the breakup, the better. Just think how troublesome it would be to separate only after getting married.

Divorce is still manageable after marriage but before children. Once there are children, there is no good way out.

Since it is almost an inevitable stage of life, the earlier the breakup, the better.

.

Of course, that is not the earthlings' only flaw. Among the other intolerable flaws is, for example, a lack of integrity. Of the people you know, how many are punctual?

In fact, most people are not merely unpunctual; they often break appointments altogether.

.

But finding the alien of your dreams is not easy.

First, on Earth there are many earthlings and few aliens.

Second, even among aliens, there are good ones and bad ones.

Third, you are not yet twenty; you do not yet have enough life experience to recognize which people are aliens with integrity.

So it may be better to wait until you are older, say in your university years, before you start dating.

— Me@2020-10-28 10:26:23 PM

.

.

2020.10.31 Saturday (c) All rights reserved by ACHK

Kronecker delta in tensor component form

Problem 2.3b4

A First Course in String Theory

.

Continue the previous calculation:

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

The two cases can be grouped into one, by replacing the right hand sides with the Kronecker delta. However, there are 4 possible forms and I am not sure which one should be used.

\displaystyle{\delta^i_{~j}}
\displaystyle{\delta_i^{~j}}
\displaystyle{\delta^{ij}}
\displaystyle{\delta_{ij}}

So I do a little research on the Kronecker delta in this post.

— Me@2020-10-21 03:40:36 PM

.

The inverse Lorentz transformation should satisfy \displaystyle{\left( \Lambda^{-1} \right)^\beta_{~\mu} \Lambda^\mu_{~\nu} = \delta^\beta_{~\nu}}, where \displaystyle{\delta^\beta_{~\nu} \equiv \text{diag}(1,1,1,1)} is the Kronecker delta. Then, multiply by the inverse on both sides of Eq. 4 to find

\displaystyle{ \begin{aligned}   \left( \Lambda^{-1} \right)^\beta_{~\mu} \left( \Delta x' \right)^\mu &= \delta^\beta_{~\nu} \Delta x^\nu \\   &= \Delta x^\beta \\   \end{aligned}}

(6)

The inverse \displaystyle{\left( \Lambda^{-1} \right)^\beta_{~\mu}} is also written as \displaystyle{\Lambda_\mu^{~\beta}}. The notation is as follows: the left index denotes a row while the right index denotes a column, while the top index denotes the frame we’re transforming to and the bottom index denotes the frame we’re transforming from. Then, the operation \displaystyle{\Lambda_\mu^{~\beta} \Lambda^\mu_{~\nu}} means sum over the index \displaystyle{\mu} which lives in the primed frame, leaving unprimed indices \displaystyle{\beta} and \displaystyle{\nu} (so that the RHS of Eq. 6 is unprimed as it should be), where the sum is over a row of \displaystyle{\Lambda_\mu^{~\beta}} and a column of \displaystyle{\Lambda_{~\nu}^\mu} which is precisely the operation of matrix multiplication.

— Lorentz tensor redux

— Emily Nardoni

.

This one is WRONG:

\displaystyle{(\Lambda^T)^{\mu}{}_{\nu} = \Lambda_{\nu}{}^{\mu}}

This one is RIGHT:

\displaystyle{(\Lambda^T)_{\nu}{}^{\mu} ~:=~ \Lambda^{\mu}{}_{\nu}}

— Me@2020-10-23 06:30:57 PM

.

1. \displaystyle{(\Lambda^T)_{\nu}{}^{\mu} ~:=~\Lambda^{\mu}{}_{\nu}}

2. [Kronecker delta] is invariant in all coordinate systems, and hence it is an isotropic tensor.

3. Covariant, contravariant and mixed type of this tensor are the same, that is

\displaystyle{\delta^i_{~j} = \delta_i^{~j} = \delta^{ij} = \delta_{ij}}

— Introduction to Tensor Calculus

— Taha Sochi

.

Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the covariant and contravariant metric tensors being inverse to each other:

{\displaystyle g^{ij}g_{jk}=g_{kj}g^{ji}={\delta ^{i}}_{k}={\delta _{k}}^{i}}

where \displaystyle{\delta^i_{~k}} is the Kronecker delta or identity matrix. Since there are different choices of metric with different metric signatures (signs along the diagonal elements, i.e. tensor components with equal indices), the name and signature is usually indicated to prevent confusion.

— Wikipedia on Raising and lowering indices

.

So

{\displaystyle g^{ij}g_{jk}={\delta ^{i}}_{k}}

and

{\displaystyle g_{kj}g^{ji}={\delta _{k}}^{i}}
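This pair of identities is easy to confirm numerically. A minimal sketch of my own, assuming the Minkowski metric as the sample \displaystyle{g}:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric g_{ij}
g_inv = np.linalg.inv(g)             # g^{ij}; for this g, its own inverse

# g^{ij} g_{jk} = delta^i_k
assert np.allclose(np.einsum('ij,jk->ik', g_inv, g), np.eye(4))
# g_{kj} g^{ji} = delta_k^i
assert np.allclose(np.einsum('kj,ji->ki', g, g_inv), np.eye(4))
print("both contractions give the identity")
```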

— Me@2020-10-19 05:21:49 PM

.

\displaystyle{ T_{i}^{\; j} = \boldsymbol{T}(\boldsymbol{e}_i,\boldsymbol{e}^j) } and \displaystyle{T_{j}^{\; i} = \boldsymbol{T}(\boldsymbol{e}_j,\boldsymbol{e}^i) } are both 1-covariant 2-contravariant coordinates of T. The only difference between them is the notation used for sub- and superscripts;

\displaystyle{ T^{i}_{\; j} = \boldsymbol{T}(\boldsymbol{e}^i,\boldsymbol{e}_j) } and \displaystyle{ T^{j}_{\; i} = \boldsymbol{T}(\boldsymbol{e}^j,\boldsymbol{e}_i) } are both 1-contravariant 2-covariant coordinates of T. The only difference between them is the notation used for sub- and superscripts.

— edited Oct 11 ’17 at 14:14

— answered Oct 11 ’17 at 10:58

— EditPiAf

— Tensor Notation Upper and Lower Indices

— Physics StackExchange

.

Rather, the dual basis one-forms are defined by imposing the following 16 requirements at each spacetime point:

\displaystyle{\langle \tilde{e}^\mu_{\mathbf{x}}, \vec e_{\nu\,\mathbf{x}} \rangle = \delta^{\mu}_{~\nu}}

where \displaystyle{\delta^{\mu}_{~\nu}} is the Kronecker delta, \displaystyle{\delta^{\mu}_{~\nu} = 1} if \displaystyle{\mu = \nu} and \displaystyle{\delta^{\mu}_{~\nu} = 0} otherwise, with the same values for each spacetime point. (We must always distinguish subscripts from superscripts; the Kronecker delta always has one of each.)

— Introduction to Tensor Calculus for General Relativity

— Edmund Bertschinger

.

However, since \displaystyle{\delta_{~b}^a} is a tensor, we can raise or lower its indices using the metric tensor in the usual way. That is, we can get a version of \displaystyle{\delta} with both indices raised or lowered, as follows:

[\displaystyle{\delta^{ab} = \delta^a_{~c} g^{cb} = g^{ab}}]

\displaystyle{\delta_{ab} = g_{ac} \delta^c_{~b} = g_{ab}}

In this sense, \displaystyle{\delta^{ab}} and \displaystyle{\delta_{ab}} are the upper and lower versions of the metric tensor. However, they can’t really be considered versions of the Kronecker delta any more, as they don’t necessarily satisfy [0 when \displaystyle{i \ne j} and 1 when \displaystyle{i = j}]. In other words, the only version of \displaystyle{\delta} that is both a Kronecker delta and a tensor is the version with one upper and one lower index: \displaystyle{\delta^a_{~b}} [or \displaystyle{\delta_{b}^{~a}}].

— Kronecker Delta as a tensor

— physicspages

.

Continue the calculation for Problem 2.3b:

Denoting \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{L^{~\nu}_{\mu}} is misleading, because that presupposes that \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} is directly related to the matrix \displaystyle{L}.

To avoid this bug, instead, we denote \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{M ^\nu_{~\mu}}. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \beta \neq \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 0 \\ \beta = \nu:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} &= 1 \\ \end{aligned}}

Using the Kronecker Delta and Einstein summation notation, we have

\displaystyle{ \begin{aligned}   L^\mu_{~\nu} M^{\beta}_{~\mu}   &=  M^{\beta}_{~\mu} L^\mu_{~\nu} \\   &=  \delta^{\beta}_{~\nu} \\   \end{aligned}}

Note: After tensor contraction, the remaining left index should be kept on the left and the remaining right on the right.

— Me@2020-10-20 03:49:09 PM

.

.

2020.10.21 Wednesday (c) All rights reserved by ACHK

Tenet, 2

T-symmetry 6.2 | Loschmidt’s paradox 4

.

This drew the objection from Loschmidt that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism: something must be wrong (Loschmidt’s paradox).

The resolution (1895) of this paradox is that the velocities of two particles after a collision are no longer truly uncorrelated. By asserting that it was acceptable to ignore these correlations in the population at times after the initial time, Boltzmann had introduced an element of time asymmetry through the formalism of his calculation.

— Wikipedia on Molecular chaos

.

If an observer insists on monitoring all the microstate information of the observed system and the environment, i.e. without leaving out any microstate information, that observer would see a time-symmetric universe, in the sense that the second law of thermodynamics would no longer hold.

It would then be meaningless to label any of the two directions of time as “past” or “future”.

— Me@2020-10-12 08:10:27 PM

.

So in this sense, as long as an observer wants to save some mental power by ignoring some micro-information, the past and future distinction is created, in the sense that there will be the second law of thermodynamics.

— Me@2020-10-12 08:12:25 PM

.

Time’s arrow is due to approximation. Time’s arrow is due to the coarse-grained description of reality. In other words, it appears when you use an inaccurate macroscopic description of an underlying microscopic reality.

— Me@2020-10-12 10:41:48 PM

.

.

2020.10.13 Tuesday (c) All rights reserved by ACHK

信心動搖

Kyle: I have been through something similar. It greatly shook my confidence in myself.

Me: You are already better off than I was. At least you still have some confidence left to be shaken.

— Me@2003

.

.

2020.10.11 Sunday (c) All rights reserved by ACHK

一萬個小時 2.3

機遇創生論 1.6.5 | 十年 3.3

.

In detail, in any profession, part of the knowledge can be reused in other industries; call it the "transferable part", or the "general part". Another part cannot be reused in other industries; call it the "non-transferable part", or the "specialized part". For example, if the doctor mentioned earlier had already resolved to switch to law, part of his original abilities would be reusable, such as good English and a meticulous mind.

.

I cannot remember where I read an article about researchers investigating whether "intelligence games" can raise intelligence. In other words, are "brain-training games" actually good for the brain?

The article concluded that an intelligence game raises only the kind of intelligence needed for that particular game; it does not help much with other kinds of intelligence. I suspect that such a game may not even raise the intelligence needed for other intelligence games.

Whether that conclusion is true, I do not know. But I find it believable.

Think about it: if you keep practising football, your football skills will of course improve. But your basketball skills will not.

Why is that?

A "specialty" is developed out of "generality".

A "specialty" is a branch of "generality".

.

Generality taken to its extreme becomes a specialty. There are many ways to develop generality to an extreme, and each such extreme becomes a profession of its own. Generality is the trunk; specialties are the branches. The strength of one branch does not guarantee the health of another.

Does that mean that, after the student era, there is no longer any need to develop general knowledge or general skills?

.

No.

After the student era, you still need to develop general knowledge, to keep your existing general skills from degrading, because whatever does not advance falls behind. But the strategy for developing general knowledge must differ from that of the student era. Adults have to earn a living, and do not have as much time to learn as students do.

In the student era, the right attitude toward general knowledge is exhaustiveness: learn everything you can, so as to discover your current interests and find your future specialty.

In the working era, the right approach to general knowledge is selection: in each period, choose one new subject to study and practise, so as to maintain your mental and physical abilities. Choose only one or two, because spare time in the working era is limited.

.

It is like being single: you should meet as many like-minded friends as possible, to raise the probability of finding your other half. But once you have found your other half, should you stop seeing all other friends?

Of course not, because finding a partner is not the only purpose of making friends. Once you have a partner, seeing friends is no longer about finding a partner; it is purely about seeing those friends.

.

However, once you have a partner, you will naturally spend most of your spare time on him or her. So your chances to see friends naturally become much fewer.

But there must still be some.

A person with only romance and no friendship will probably not be happy, and vice versa.

.

In the student era, you learn something new every day at school. In the working era, you rarely meet a new idea at work.

In the student era, you see friends every day at school. In the working era, you probably see no friends at work.

Does that mean that living in the student era forever would be a good thing?

.

The most adorable thing about a baby is that he will grow up, with infinitely many possible futures; people who see him are filled with hope. If a baby could never grow up, you would no longer find him adorable; you would find him pitiful and tragic.

The most wonderful thing about the student era is that you can expect to leave it in the near future. The terrifying thing about the working era is that you cannot expect to escape it any time soon.

If the student era were permanent, it would be just as terrifying as the working era.

— Me@2020-10-10 07:52:39 PM

.

.

2020.10.10 Saturday (c) All rights reserved by ACHK

Ex 1.9 Lagrange's equations

Structure and Interpretation of Classical Mechanics

.

Derive the Lagrange equations for the following systems, showing all of the intermediate steps as in the harmonic oscillator and orbital motion examples.

b. An ideal planar pendulum consists of a bob of mass \displaystyle{m} connected to a pivot by a massless rod of length \displaystyle{l} subject to uniform gravitational acceleration \displaystyle{g}. A Lagrangian is \displaystyle{L(t, \theta, \dot \theta) = \frac{1}{2} m l^2 \dot \theta^2 + mgl \cos \theta}. The formal parameters of \displaystyle{L} are \displaystyle{t}, \displaystyle{\theta}, and \displaystyle{\dot \theta}; \displaystyle{\theta} measures the angle of the pendulum rod to a plumb line and \displaystyle{\dot \theta} is the angular velocity of the rod.

~~~

\displaystyle{ \begin{aligned}   L (t, \xi, \eta) &= \frac{1}{2} m l^2 \eta^2 + m g l \cos \xi \\  \end{aligned}}

\displaystyle{ \begin{aligned}   \partial_1 L (t, \xi, \eta) &= - m g l \sin \xi \\  \partial_2 L (t, \xi, \eta) &= m l^2 \eta  \\  \end{aligned}}

Put \displaystyle{q = \theta},

\displaystyle{ \begin{aligned}   \Gamma[q](t) &= (t, \theta(t), D\theta(t)) \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \partial_1 L \circ \Gamma[q] (t) &= - m g l \sin \theta \\  \partial_2 L \circ \Gamma[q] (t) &= m l^2 D \theta  \\  \end{aligned}}

The Lagrange equation:

\displaystyle{ \begin{aligned}   D ( \partial_2 L \circ \Gamma[q]) - (\partial_1 L \circ \Gamma[q]) &= 0 \\   D (  m l^2 D \theta  ) - ( - m g l \sin \theta ) &= 0 \\   D^2 \theta + \frac{g}{l} \sin \theta &= 0 \\   \end{aligned}}
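The same derivation can be reproduced with SymPy as a check. This is my own sketch; the symbols \displaystyle{X} and \displaystyle{Xd} stand for the formal slots \displaystyle{\theta} and \displaystyle{\dot \theta} of the Lagrangian:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')

# Stand-ins for the formal slots of L(t, theta, thetadot):
X, Xd = sp.symbols('X Xd')
L = sp.Rational(1, 2) * m * l**2 * Xd**2 + m * g * l * sp.cos(X)

subs = {X: theta(t), Xd: sp.Derivative(theta(t), t)}

# Lagrange equation: D(partial_2 L) - partial_1 L = 0
eq = sp.diff(sp.diff(L, Xd).subs(subs), t) - sp.diff(L, X).subs(subs)
eq = sp.expand(eq / (m * l**2))

# eq should equal D^2 theta + (g/l) sin(theta):
residual = sp.simplify(eq - sp.Derivative(theta(t), t, 2)
                       - (g / l) * sp.sin(theta(t)))
print(residual)  # -> 0
```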

— Me@2020-09-28 05:40:42 PM

.

.

2020.09.30 Wednesday (c) All rights reserved by ACHK

Consistent histories, 8

Relationship with other interpretations

The only group of interpretations of quantum mechanics with which RQM is almost completely incompatible is that of hidden variables theories. RQM shares some deep similarities with other views, but differs from them all to the extent to which the other interpretations do not accord with the “relational world” put forward by RQM.

Copenhagen interpretation

RQM is, in essence, quite similar to the Copenhagen interpretation, but with an important difference. In the Copenhagen interpretation, the macroscopic world is assumed to be intrinsically classical in nature, and wave function collapse occurs when a quantum system interacts with macroscopic apparatus. In RQM, any interaction, be it micro or macroscopic, causes the linearity of Schrödinger evolution to break down. RQM could recover a Copenhagen-like view of the world by assigning a privileged status (not dissimilar to a preferred frame in relativity) to the classical world. However, by doing this one would lose sight of the key features that RQM brings to our view of the quantum world.

Hidden variables theories

Bohm’s interpretation of QM does not sit well with RQM. One of the explicit hypotheses in the construction of RQM is that quantum mechanics is a complete theory, that is it provides a full account of the world. Moreover, the Bohmian view seems to imply an underlying, “absolute” set of states of all systems, which is also ruled out as a consequence of RQM.

We find a similar incompatibility between RQM and suggestions such as that of Penrose, which postulate that some processes (in Penrose’s case, gravitational effects) violate the linear evolution of the Schrödinger equation for the system.

Relative-state formulation

The many-worlds family of interpretations (MWI) shares an important feature with RQM, that is, the relational nature of all value assignments (that is, properties). Everett, however, maintains that the universal wavefunction gives a complete description of the entire universe, while Rovelli argues that this is problematic, both because this description is not tied to a specific observer (and hence is “meaningless” in RQM), and because RQM maintains that there is no single, absolute description of the universe as a whole, but rather a net of inter-related partial descriptions.

Consistent histories approach

In the consistent histories approach to QM, instead of assigning probabilities to single values for a given system, the emphasis is given to sequences of values, in such a way as to exclude (as physically impossible) all value assignments which result in inconsistent probabilities being attributed to observed states of the system. This is done by means of ascribing values to “frameworks”, and all values are hence framework-dependent.

RQM accords perfectly well with this view. However, the consistent histories approach does not give a full description of the physical meaning of framework-dependent value (that is it does not account for how there can be “facts” if the value of any property depends on the framework chosen). By incorporating the relational view into this approach, the problem is solved: RQM provides the means by which the observer-independent, framework-dependent probabilities of various histories are reconciled with observer-dependent descriptions of the world.

— Wikipedia on Relational quantum mechanics

.

.

2020.09.27 Sunday ACHK

Tenet

Christopher Nolan, 2 | 時空幻境 4 | Braid 4

.

1998 Following
2000 Memento
2002 Insomnia
2005 Batman Begins
2006 The Prestige
2008 The Dark Knight

2010 Inception
2012 The Dark Knight Rises
2014 Interstellar
2017 Dunkirk
2020 Tenet

.

Hong Kong release titles:

1998 《Following》

2000 《凶心人》

The title sounds the same as 「空心人」 (hollow man); 「心」 here refers to memory, so the translated title means a man without memory.

2002 《白夜追兇》

2005 《俠影之謎》

2006 《死亡魔法》

The theme of this movie is magic, so the director turned the movie itself into a magic trick.

2008 《黑夜之神》

2010 《潛行凶間》

The theme of this movie is dreams, so the director turned the movie itself into a dream.

2012 《夜神起義》

2014 《星際啓示錄》

「啓示」 (revelation) here means a message from the future.

2017 《鄧寇克大行動》

2020 《天能》

A lot of Nolan’s movies are about some kind of time travel.

For those movies, each has a unique time logic. Each is like a stage of the computer game Braid.

In Braid, there are 6 stages. Each stage has a unique time mechanic.

— Me@2020-09-20 10:36:54 AM

.

.

2020.09.25 Friday (c) All rights reserved by ACHK

相對論加量子力學

三一萬能俠, 2.2 | 太極滅世戰 2.3 | PhD, 4.2 | 財政自由 4.2

.

If I had to pick the happiest era of my life, I would choose my two years of sixth form.

For now, I cannot recreate that era, mainly because I do not yet have enough savings to free me from working for a living, which would let me "study mathematics and physics full-time".

In sixth form, I had to choose my university major. If I got into university, I would choose physics. But by then I already knew that secondary-school physics does not correspond only to university physics. In fact, more than half of it corresponds to university engineering.

I wanted all of it, so, grades permitting, I would major in physics and minor in engineering. But an alumnus who came back to my secondary school to give a talk said, "Engineering cannot be taken as a minor."

Engineering is a professional subject and can only be a major. So I immediately reversed the plan: major in engineering, minor in physics, and then study physics in graduate school after the bachelor's degree.

After sixth form, I was fortunate enough to get into university, and I did major in engineering.

From the second year on, I could start a minor. But engineering requires a few more courses than other majors, so my timetable did not actually have enough free slots for a complete minor.

I no longer remember how many physics courses a physics minor required. Suppose it was eight. That is, I would have had to take eight physics courses before my graduation transcript could label physics as my "minor".

In the end, I took one elective in the first term of my second year and another in the second term of my third year. In other words, I took only two physics courses in my undergraduate years.

The fortune within the misfortune: those two courses happened to be the two most important ones.

— Me@2020-09-16 04:01:32 PM

.

.

2020.09.21 Monday (c) All rights reserved by ACHK