Problem 2.4

A First Course in String Theory

.

2.4 Lorentz transformations as matrices

A matrix L that satisfies (2.46) is a Lorentz transformation. Show the following.

(b) If \displaystyle{L} is a Lorentz transformation so is the inverse matrix \displaystyle{L^{-1}}.

(c) If \displaystyle{L} is a Lorentz transformation so is the transpose matrix \displaystyle{L^{T}}.

~~~

(b)

Using the identity \displaystyle{(\mathbf{A}^\mathrm{T})^{-1} = (\mathbf{A}^{-1})^\mathrm{T}}:

\displaystyle{   \begin{aligned}   L^T \eta L &= \eta \\  \eta &= [L^T]^{-1} \eta L^{-1} \\  [L^{-1}]^T \eta L^{-1} &= \eta \\  \end{aligned}}

So \displaystyle{L^{-1}} satisfies the defining condition (2.46) and is itself a Lorentz transformation.

.

(c)

\displaystyle{   \begin{aligned}   L^T \eta L &= \eta \\  (L^T \eta L)^{-1} &= \eta^{-1} \\  L^{-1} \eta^{-1} (L^T)^{-1} &= \eta^{-1} \\  \end{aligned}}

Since \displaystyle{\eta^{-1} = \eta},

\displaystyle{   \begin{aligned}   L^{-1} \eta (L^T)^{-1} &= \eta \\  \eta &= L \eta L^T \\  (L^T)^T \eta L^T &= \eta \\  \end{aligned}}

So \displaystyle{L^T} also satisfies the defining condition.
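Both results can be checked numerically. A minimal sketch (the boost speed and rotation angle below are assumptions, chosen so that \displaystyle{L} is not symmetric and part (c) is nontrivial):

```python
import numpy as np

# Minkowski metric, signature (-, +, +, +).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A sample Lorentz matrix: an x-boost (beta = 0.6, gamma = 1.25)
# composed with a rotation in the x-y plane.
beta, gamma, theta = 0.6, 1.25, 0.3
boost = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
rot = np.array([
    [1.0, 0.0,            0.0,           0.0],
    [0.0, np.cos(theta), -np.sin(theta), 0.0],
    [0.0, np.sin(theta),  np.cos(theta), 0.0],
    [0.0, 0.0,            0.0,           1.0],
])
L = rot @ boost

def is_lorentz(M):
    """Check the defining condition M^T eta M = eta."""
    return np.allclose(M.T @ eta @ M, eta)

print(is_lorentz(L), is_lorentz(np.linalg.inv(L)), is_lorentz(L.T))
```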

— Me@2020-12-21 04:24:33 PM

.

.

2020.12.21 Monday (c) All rights reserved by ACHK

Pointer state, 3

Eigenstates 3.3 | The square root of the probability, 3

.

In calculations, if a quantum state is in a superposition, that superposition is a superposition of eigenstates.

However, a real superposition does not consist only of eigenstates that make macroscopic sense.

.

That is the major mistake of the many-worlds interpretation of quantum mechanics.

— Me@2017-12-30 10:24 AM

— Me@2018-07-03 07:24 PM

— Me@2020-12-18 06:12 PM

.

Mathematically, a quantum superposition is a superposition of eigenstates. An eigenstate is a quantum state that corresponds to a macroscopic state. A superposition state is a quantum state that has no classical correspondence.

The macroscopic states are the only observable states. An observable state is one that can be measured directly or indirectly. For an unobservable state, we write it as a superposition of eigenstates. We always write a superposition state as a superposition of observable states; so in this sense, before measurement, we can almost say that the system is in a superposition of different (possible) classical macroscopic universes.

However, conceptually, especially when thinking in terms of Feynman’s sum-over-histories picture, a quantum state is more than a superposition of classical states. In other words, a system can have a quantum state which is a superposition of not only normal classical states, but also bizarre classical states and eigen-but-classically-impossible states.

A bizarre classical state is a state that follows classical physical laws but is so improbable that, in everyday language, we would label it “impossible”, such as a human with five arms.

An eigen-but-classically-impossible state is a state that violates classical physical laws, such as a castle floating in the sky.

For a superposition, if we allow only normal classical states as the component eigenstates, a lot of the quantum phenomena, such as quantum tunnelling, cannot be explained.

If you want multiple universes, you have to include not only normal universes, but also the bizarre ones.

.

Actually, even for the double-slit experiment, “superposition of classical states” is not able to explain the existence of the interference patterns.

The superposition of the electron-go-left universe and the electron-go-right universe does not form this universe, where the interference patterns exist.

— Me@2020-12-16 05:18:03 PM

.

One of the reasons is that a quantum superposition is not a superposition of different possibilities/probabilities/worlds/universes, but a superposition of quantum eigenstates, whose coefficients are probability amplitudes, in a sense the square roots of probabilities.
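The point can be sketched numerically: adding amplitudes and then squaring gives interference, while adding probabilities directly cannot. (The equal-weight amplitudes and the relative phase below are assumptions, for illustration only.)

```python
import numpy as np

# Two eigenstate coefficients (probability amplitudes) for the two slits,
# with a relative phase phi at some point on the screen.
phi = np.pi                          # destructive interference at this point
c_left = 1.0 / np.sqrt(2.0)
c_right = np.exp(1j * phi) / np.sqrt(2.0)

# Quantum rule: superpose the amplitudes ("square roots of probabilities"),
# then take the squared magnitude.
p_quantum = abs(c_left + c_right) ** 2

# Naive "superposition of worlds" rule: add the probabilities directly.
p_worlds = abs(c_left) ** 2 + abs(c_right) ** 2

# The difference between the two is the interference term.
print(p_quantum, p_worlds)
```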

— Me@2020-12-18 06:07:22 PM

.

.

2020.12.18 Friday (c) All rights reserved by ACHK

機遇創生論 1.7

因果律 2.1

.

The members of this grand unified theory include (but are not limited to):

In outline:

種子論
反白論
間書原理
完備知識論

自由決定論

They can be grand-unified because, besides each forming a self-contained theory, they can also be understood and applied in combination.

.

(Q: So “種子論” is really just “自由決定論”?)

You could say that. But the emphases differ.

“種子論” focuses on how to achieve success in life. (Note that “success” here means success by your own definition, not by conventional standards.)

“自由決定論” focuses on this question: since the universe runs according to physical laws, does that mean that every human action, and even every thought and volition, was already determined at the moment the universe was created?

(Q: So “自由決定論” asks whether “free will” exists?)

Not quite.

The focus of “自由決定論” has no direct relation to humans (or other conscious beings).

Its focus is on whether “Laplace causality” is correct.

“Laplace causality” states:

As long as we have complete information about the state of the universe at one moment, we can deduce the state of the universe at any other moment.

We have already discussed the details of this topic, so we will not pursue them here.

“If causality is correct, then humans have no freedom” is only one instance of causality. Because this instance relates directly to people, people pay it special attention. But even so, it is not the main point of causality.

In other words, the problem of “free will” is only a side branch of the problem of causality.

.

(Another topic:)

As for the problem of “free will”: before discussing it, we must first clarify what “humans have free will” means, because the phrase has more than one common interpretation:

.

1. Thought:

Can a person control his own thoughts and volitions?

If so,

2. Cause:

Can a person (through free thought) control his own bodily actions?

If so,

3. Effect:

Can a person (through free action) control the overall direction of his own life (or of world history)?

— Me@2020-12-11 06:43:58 PM

.

.

2020.12.11 Friday (c) All rights reserved by ACHK

How to Find Lagrangians

Lagrange’s equations are a system of second-order differential equations. In order to use them to compute the evolution of a mechanical system, we must find a suitable Lagrangian for the system. There is no general way to construct a Lagrangian for every system, but there is an important class of systems for which we can identify Lagrangians in a straightforward way in terms of kinetic and potential energy. The key idea is to construct a Lagrangian L such that Lagrange’s equations are Newton’s equations \displaystyle{\vec F = m \vec a}.

— 1.6 How to Find Lagrangians

— Structure and Interpretation of Classical Mechanics
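The key idea in the quote, constructing \displaystyle{L = T - V} so that Lagrange's equations reproduce \displaystyle{\vec F = m \vec a}, can be sketched symbolically. (The 1-D harmonic oscillator with \displaystyle{V = \frac{1}{2} k q^2} is an assumed example, not from the book's text.)

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')
qdot = sp.diff(q(t), t)

# Kinetic minus potential energy.
L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q(t)**2

# Lagrange's equation: d/dt (dL/d qdot) - dL/dq = 0
lagrange_eq = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q(t))
print(sp.simplify(lagrange_eq))  # m q'' + k q, i.e. Newton's m a = F with F = -k q
```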

.

.

2020.12.06 Sunday ACHK

Logical arrow of time, 6.4.2

Logical arrow of time, 6.1.2

.

The source of the macroscopic time asymmetry, aka the second law of thermodynamics, is the difference between prediction and retrodiction.

In a prediction, the deduction direction is the same as the physical/observer time direction.

In a retrodiction, the deduction direction is opposite to the physical/observer time direction.

.

— guess —

If a retrodiction is done by a time-opposite observer, he will see the entropy increasing. For him, he is really making a prediction.

— guess —

.

— Me@2013-10-25 3:33 AM

.

A difference between deduction and observation is that in observation, the probability is updated in real time.

.

each update time interval ~ infinitesimal

.

In other words, when you observe a system, you get new information about that system in real time.

Since you gain new knowledge of the system in real time, the probability assigned to that system is also updated in real time.
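This real-time updating can be sketched as sequential Bayesian updating, one observation at a time. (This is a hypothetical toy example; the coin, the bias values, and the outcome sequence are all assumptions.)

```python
# Updating the probability that a coin is biased, observation by observation.
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """One Bayes step: P(H | data) from P(H) and the two likelihoods."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

p_biased = 0.5              # initial credence that the coin is biased
p_heads_if_biased = 0.9
p_heads_if_fair = 0.5

# Observations arriving "in real time"; credence is updated after each one.
for outcome in ['H', 'H', 'H', 'T', 'H']:
    if outcome == 'H':
        p_biased = bayes_update(p_biased, p_heads_if_biased, p_heads_if_fair)
    else:
        p_biased = bayes_update(p_biased, 1 - p_heads_if_biased, 1 - p_heads_if_fair)
    print(round(p_biased, 4))
```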

— Me@2020-10-13 11:27:59 AM

.

.

2020.12.04 Friday (c) All rights reserved by ACHK

尋覓 1.2

This passage is adapted from a conversation on 14 October 2010.

.

But finding that alien of your dreams is not easy.

First, on Earth, Earthlings are many and aliens are few.

Second, even among aliens, there are good ones and bad ones. And even aliens may come from different planets.

Third, you are not yet twenty, and do not yet have enough life experience to recognize which people are aliens with integrity. It might be better to wait until you are older, say until your university years, before dating. So, in the big picture, your current breakup is a good thing for your life.

.

Remember, the first key point: do not look for an Earthling; look for an alien. And not just any alien, but one from an advanced planet.

.

The second key point is that, across time, the so-called “same person” is not really the same person.

What does that mean?

For example, if you compare an essay I wrote at nineteen with one I wrote at thirty, you would think the two came from two different authors. The two essays have different styles and different views; if no one told you in advance that they came from the same author, you would not guess so.

In other words, even if you and your current boyfriend were each other’s ideal partner, the two of you would still each change over time. You cannot guarantee that, after ten years of change, the new versions of you two would still be in love.

That would no longer be “your” story, but “their” journey; and “they” might no longer be able to stay together.

.

Degree of psychological maturity, or “psychological age” for short.

The larger the difference in psychological age, the harder it is to get along.

For convenience, suppose that when you meet, your physical ages are the same, both twenty; and your psychological ages are also the same, both twenty.

But ten years later, at physical age thirty, one of you has a psychological age of thirty-five while the other’s is still stuck at twenty-five. In that case, the relationship may no longer hold.

.

Besides being compatible (getting along well) in the year you meet, you must also stay compatible in every year after. In other words, the ways the two of you change must match. So the difficulty is one level deeper.

— Me@2020-11-29 10:36:46 PM

.

.

2020.12.01 Tuesday (c) All rights reserved by ACHK

Problem 2.3b5

A First Course in String Theory

.

2.3 Lorentz transformations, derivatives, and quantum operators.

(b) Show that the objects \displaystyle{\frac{\partial}{\partial x^\mu}} transform under Lorentz transformations in the same way as the \displaystyle{a_\mu} considered in (a) do. Thus, partial derivatives with respect to conventional upper-index coordinates \displaystyle{x^\mu} behave as a four-vector with lower indices – as reflected by writing it as \displaystyle{\partial_\mu}.

~~~

Denoting \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{L^{~\nu}_{\mu}} is misleading, because it presupposes that \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} is directly related to the matrix \displaystyle{L}.

To avoid this pitfall, we instead denote \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\nu \sigma}} as \displaystyle{M ^\nu_{~\mu}}. So

\displaystyle{ \begin{aligned} (x')^\mu &= L^\mu_{~\nu} x^\nu \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \right) \\ (x')^\mu (x')_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ x^\mu x_\mu &= \left( L^\mu_{~\nu} x^\nu \right) \left( M^{\beta}_{~\mu} x_\beta \right) \\ \end{aligned}}

\displaystyle{ \begin{aligned} \nu \neq \beta:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} = 0 \\ \nu = \beta:&~~~~~~\sum_{\mu = 0}^3 L^\mu_{~\nu} M^{\beta}_{~\mu} = 1 \\ \end{aligned}}

Using the Kronecker Delta and Einstein summation notation, we have

\displaystyle{ \begin{aligned} L^\mu_{~\nu} M^{\beta}_{~\mu} &= M^{\beta}_{~\mu} L^\mu_{~\nu} \\ &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

So

\displaystyle{ \begin{aligned} \sum_{\mu=0}^{3} L^\mu_{~\nu} M^{\beta}_{~\mu} &= \delta^{\beta}_{~\nu} \\ \end{aligned}}

\displaystyle{ \begin{aligned}   M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}    \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

— Me@2020-11-23 04:27:13 PM

.

One defines (as a matter of notation),

{\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}

and may in this notation write

{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.}

Now for a subtlety. The implied summation on the right hand side of

{\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }}

is running over a row index of the matrix representing \displaystyle{\Lambda^{-1}}. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of \displaystyle{\Lambda} acting on the column vector \displaystyle{A_\mu}. That is, in pure matrix notation,

{\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.}

— Wikipedia on Lorentz transformation

.

So

\displaystyle{ \begin{aligned}   M^{\beta}_{~\mu} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}    \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} &= [L^{-1}]^{\beta}_{~\mu} \\   \end{aligned}}
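In matrix language, the identity \displaystyle{ \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} = [L^{-1}]^{\beta}_{~\mu}} reads \displaystyle{\eta L^T \eta = L^{-1}} (since \displaystyle{\eta} is symmetric and equals its own inverse). This can be checked numerically; the boost and rotation parameters below are assumptions, chosen so that \displaystyle{L} is not symmetric:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A sample Lorentz matrix: an x-boost composed with an x-y rotation.
beta, gamma, theta = 0.6, 1.25, 0.3
boost = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
rot = np.array([
    [1.0, 0.0,            0.0,           0.0],
    [0.0, np.cos(theta), -np.sin(theta), 0.0],
    [0.0, np.sin(theta),  np.cos(theta), 0.0],
    [0.0, 0.0,            0.0,           1.0],
])
L = rot @ boost

# M^beta_mu = eta_{mu rho} L^rho_sigma eta^{beta sigma}  <=>  M = eta L^T eta
M = eta @ L.T @ eta
print(np.allclose(M, np.linalg.inv(L)))
```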

.

Denote \displaystyle{[L^{-1}]^{\beta}_{~\mu}} as

\displaystyle{ \begin{aligned}   N^{~\beta}_{\mu} \\   \end{aligned}}

In other words,

\displaystyle{ \begin{aligned}   N^{~\beta}_{\mu} &= M^{\beta}_{~\mu} \\   [N^T] &= [M] \\   \end{aligned}}

.

The Lorentz transformation:

\displaystyle{ \begin{aligned}   (x')^\mu &= L^\mu_{~\nu} x^\nu \\   (x')_\mu &= \eta_{\mu \rho} L^\rho_{~\sigma} \eta^{\beta \sigma} x_\beta \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   (x')^\mu &= L^\mu_{~\nu} x^\nu \\   (x')_\mu &= N^{~\nu}_{\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\   (x')_\mu &= M^{\nu}_{~\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   x^\mu &= [L^{-1}]^\mu_{~\nu} (x')^\nu \\   (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\   \end{aligned}}

.

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} &= \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu} \\   &= \frac{\partial x^0}{\partial (x')^\mu} \frac{\partial}{\partial x^0} + \frac{\partial x^1}{\partial (x')^\mu} \frac{\partial}{\partial x^1} + \frac{\partial x^2}{\partial (x')^\mu} \frac{\partial}{\partial x^2} + \frac{\partial x^3}{\partial (x')^\mu} \frac{\partial}{\partial x^3} \\   \end{aligned}}

Now we consider \displaystyle{f} as a function of the \displaystyle{x^{\mu}}'s:

\displaystyle{f(x^0, x^1, x^2, x^3)}

Since the \displaystyle{x^{\mu}}'s and the \displaystyle{(x')^{\mu}}'s are related by a Lorentz transformation, \displaystyle{f} is also a function of the \displaystyle{(x')^{\mu}}'s, although indirectly.

\displaystyle{f(x^0((x')^0, (x')^1, (x')^2, (x')^3), x^1((x')^0, ...), x^2((x')^0, ...), x^3((x')^0, ...))}

For notational simplicity, we write \displaystyle{f} as

\displaystyle{f(x^\alpha((x')^\beta))}

Since \displaystyle{f} is a function of the \displaystyle{(x')^{\mu}}'s, we can differentiate it with respect to the \displaystyle{(x')^{\mu}}'s.

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} f(x^\alpha((x')^\beta)) &= \sum_{\nu = 0}^3 \frac{\partial x^\nu}{\partial (x')^\mu} \frac{\partial}{\partial x^\nu}  f(x^\alpha) \\   \end{aligned}}

Since

\displaystyle{ \begin{aligned}   x^\nu &= [L^{-1}]^\nu_{~\beta} (x')^\beta \\   \end{aligned}},

\displaystyle{ \begin{aligned}   \frac{\partial f}{\partial (x')^\mu}   &= \sum_{\nu = 0}^3 \frac{\partial}{\partial (x')^\mu} \left[  \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} (x')^\beta \right] \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \frac{\partial (x')^\beta}{\partial (x')^\mu} \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 \sum_{\beta = 0}^3 [L^{-1}]^\nu_{~\beta} \delta^\beta_\mu \frac{\partial f}{\partial x^\nu} \\   &= \sum_{\nu = 0}^3 [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\   &= [L^{-1}]^\nu_{~\mu} \frac{\partial f}{\partial x^\nu} \\   \end{aligned}}

Therefore,

\displaystyle{ \begin{aligned}   \frac{\partial}{\partial (x')^\mu} &= [L^{-1}]^\nu_{~\mu} \frac{\partial}{\partial x^\nu} \\   \end{aligned}}

It is the same as the Lorentz transformation for covariant vectors:

\displaystyle{ \begin{aligned}   (x')_\mu &= [L^{-1}]^{\nu}_{~\mu} x_\nu \\   \end{aligned}}
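This transformation law can be checked numerically with finite differences: the gradient computed in primed coordinates matches \displaystyle{[L^{-1}]^\nu_{~\mu} \, \partial f / \partial x^\nu}. (The scalar field \displaystyle{f} and the sample point below are assumptions, for illustration only.)

```python
import numpy as np

# An x-boost with beta = 0.6, gamma = 1.25.
beta, gamma = 0.6, 1.25
L = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
Linv = np.linalg.inv(L)

def f(x):
    """A sample scalar field (an assumption, for illustration only)."""
    return np.sin(x[0]) * np.cos(x[1]) + x[2]**2 - x[3]

def grad(g, x, h=1e-6):
    """Central-difference approximation of d g / d x^mu."""
    out = np.zeros(4)
    for mu in range(4):
        e = np.zeros(4)
        e[mu] = h
        out[mu] = (g(x + e) - g(x - e)) / (2.0 * h)
    return out

xp = np.array([0.3, -0.7, 1.1, 0.4])   # a sample point, in primed coordinates
f_primed = lambda y: f(Linv @ y)       # the same field as a function of x'

lhs = grad(f_primed, xp)               # d f / d (x')^mu
rhs = Linv.T @ grad(f, Linv @ xp)      # [L^{-1}]^nu_mu  d f / d x^nu
print(np.allclose(lhs, rhs, atol=1e-5))
```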

— Me@2020-11-23 04:27:13 PM

.

.

2020.11.24 Tuesday (c) All rights reserved by ACHK

Global symmetry, 2

In physics, a global symmetry is a symmetry that holds at all points in the spacetime under consideration, as opposed to a local symmetry which varies from point to point.

Global symmetries require conservation laws, but not forces, in physics.

— Wikipedia on Global symmetry

.

.

2020.11.22 Sunday ACHK

Light, 3

無額外論 7

.

The one in the mirror is your Light.

— Me@2011.06.24

.

Thou shalt have no other gods before Me.

— one of the Ten Commandments

.

God teaches you through your mind, and helps you through your actions.

— Me@the Last Century

.

.

2020.11.21 Saturday (c) All rights reserved by ACHK

一萬個小時 2.4

機遇創生論 1.6.6 | 十年 3.4

.

The extreme of the general is the professional. And there are many ways for general skills to develop into an extreme. Each extreme becomes a profession of its own. The general is the trunk; a specialty is a branch. The strength of one branch does not guarantee the health of another.

Does that mean, then, that after one’s student years there is no more need to develop general knowledge or general skills?

.

After the student years, in the working years, you still need to develop general knowledge and general skills, in order to maintain your mind and body and prevent their decline. For example, the puzzle games mentioned earlier can, I believe, help prevent brain degeneration in the elderly.

I believe so because I have seen interviews with some elderly people. One ninety-year-old lady said that she keeps her mind clear and agile by playing handheld video games.

Even before old age, in middle age, one should already pay attention to mental and physical decline. Playing suitable kinds of video games can keep your reactions sharp. Which kinds?

Video games come in many kinds, chiefly three: construction, puzzle, and combat-mission. Combat-mission games can be used to train your reactions. Those games put you in dangerous, urgent situations, where you must act quickly and precisely to complete the mission and escape. In other words, those games also train you to control your own psychology and master your own fear.

Similarly, although ordinary exercise in itself, push-ups for example, has no so-called “use”, it is precisely such “useless” exercise that you need in order to maintain your fitness.

For both body and mind, without a strong and stable trunk, there will be no healthy branches or flourishing leaves.

— Me@2020-11-16 04:55:18 PM

.

.

2020.11.20 Friday (c) All rights reserved by ACHK

Poisson’s Lagrange Equation

Structure and Interpretation of Classical Mechanics

.

Ex 1.10 Higher-derivative Lagrangians

Derive Lagrange’s equations for Lagrangians that depend on accelerations. In particular, show that the Lagrange equations for Lagrangians of the form \displaystyle{L(t, q, \dot q, \ddot q)} with \displaystyle{\ddot{q}} terms are

\displaystyle{D^2(\partial_3L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + \partial_1 L \circ \Gamma[q] = 0}

In general, these equations, first derived by Poisson, will involve the fourth derivative of \displaystyle{q}. Note that the derivation is completely analogous to the derivation of the Lagrange equations without accelerations; it is just longer. What restrictions must we place on the variations so that the critical path satisfies a differential equation?

~~~

Varying the action

\displaystyle{ \begin{aligned}   S[q] (t_1, t_2) &= \int_{t_1}^{t_2} L \circ \Gamma [q] \\   \eta(t_1) &= \eta(t_2) = 0 \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2) &= 0 \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2) &= \int_{t_1}^{t_2} \delta_\eta \left( L \circ \Gamma [q] \right) \\   \end{aligned}}

\displaystyle{ \begin{aligned}     \delta_\eta I [q] &= \eta~~~\text{with}~~~I[q] = q \\  \delta_\eta g[q] &= D \eta~~~\text{with}~~~g[q] = Dq \\   \end{aligned}}

.

Let \displaystyle{h[q] = D^2 q}.

\displaystyle{ \begin{aligned}   \delta_\eta h[q]   &= \lim_{\epsilon \to 0} \frac{h[q+\epsilon \eta] - h[q]}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 (q+\epsilon \eta) - D^2 q}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 q + D^2 \epsilon \eta - D^2 q}{\epsilon} \\   &= \lim_{\epsilon \to 0} \frac{D^2 \epsilon \eta}{\epsilon} \\   &= D^2 \eta \\   \end{aligned}}

\displaystyle{ \begin{aligned}   \Gamma [q] (t) &= (t, q(t), D q(t), D^2 q(t)) \\  \delta_\eta \Gamma [q] (t) &= (0, \eta (t), D \eta (t), D^2 \eta (t)) \\  \end{aligned}}

.

Chain rule of functional variation

\displaystyle{ \begin{aligned} &\delta_\eta F[g[q]] \\   &= \delta_\eta (F \circ g)[q] \\   &= \delta_{ \left( \delta_\eta g[q] \right)} F[g] \\ \end{aligned}}

Since variation commutes with integration,

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \delta_\eta \int_{t_1}^{t_2} L \circ \Gamma [q] \\   &= \int_{t_1}^{t_2} \delta_\eta \left( L \circ \Gamma [q] \right) \\   \end{aligned}}

By the chain rule of functional variation:

\displaystyle{ \begin{aligned}   \delta_\eta L \circ \Gamma [q] = \delta_{ \left( \delta_\eta \Gamma[q] \right)} L[\Gamma[q]] \\   \end{aligned}}

If \displaystyle{L} is path-independent,

\displaystyle{ \begin{aligned}   \delta_\eta \left( L \circ \Gamma [q] \right) = \left( DL \circ \Gamma[q] \right) \delta_\eta \Gamma[q] \\   \end{aligned}}

But is \displaystyle{L} path-independent?

The \displaystyle{L \circ \Gamma [.]} is path-dependent. Its input is a path \displaystyle{q}, not just \displaystyle{q(t)}, the value of \displaystyle{q} at the time \displaystyle{t}. However, \displaystyle{L(.)} itself is a path-independent function, because its input is not a path \displaystyle{q}, but a quadruple of values \displaystyle{(t, q(t), Dq(t), D^2 q(t))}.

\displaystyle{ \begin{aligned}   L \circ \Gamma [q] = L(t, q(t), Dq(t), D^2 q(t)) \\   \end{aligned}}

Since \displaystyle{L} is path-independent,

\displaystyle{ \begin{aligned}   \delta_\eta \left( L \circ \Gamma [q] \right)   = \left( DL \circ \Gamma[q] \right) \delta_\eta \Gamma[q] \\   \end{aligned}}

\displaystyle{ \begin{aligned}   &\delta_\eta S[q] (t_1, t_2) \\  &= \int_{t_1}^{t_2} \delta_\eta L \circ \Gamma [q] \\   &= \int_{t_1}^{t_2} \left( D \left( L \circ \Gamma[q] \right) \right) \delta_\eta \Gamma[q]  \\   &= \int_{t_1}^{t_2} \left( D \left( L(t, q, D q, D^2 q) \right) \right) (0, \eta (t), D \eta (t), D^2 \eta (t))  \\   &= \int_{t_1}^{t_2} \left[ \partial_0 L \circ \Gamma[q], \partial_1 L \circ \Gamma[q], \partial_2 L \circ \Gamma[q], \partial_3 L \circ \Gamma[q] \right] (0, \eta (t), D \eta (t), D^2 \eta (t))  \\   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta + (\partial_2 L \circ \Gamma[q]) D \eta + (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        &=   \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta      + \left[ \left. (\partial_2 L \circ \Gamma[q]) \eta \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta \right]     + \int_{t_1}^{t_2} (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        \end{aligned}}

Since \displaystyle{\eta(t_1) = 0} and \displaystyle{\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &=   \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta      - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta      + \int_{t_1}^{t_2} (\partial_3 L \circ \Gamma[q]) D^2 \eta \\                        \end{aligned}}

Here is a trick for integration by parts:

As long as the boundary term \displaystyle{\left. u(t)v(t) \right|_{t_1}^{t_2} = 0},

\displaystyle{\int_{t_1}^{t_2} u(t) dv(t) = - \int_{t_1}^{t_2} v(t) du(t)}

So if \displaystyle{D \eta(t_1) = 0} and \displaystyle{D \eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_3 L \circ \Gamma[q]) D \eta \\                        \end{aligned}}

Since \displaystyle{\eta(t_1) = 0} and \displaystyle{\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} (\partial_1 L \circ \Gamma[q]) \eta        - \int_{t_1}^{t_2} D(\partial_2 L \circ \Gamma[q]) \eta        + \int_{t_1}^{t_2} D^2 (\partial_3 L \circ \Gamma[q]) \eta \\                        \end{aligned}}

\displaystyle{ \begin{aligned}   \delta_\eta S[q] (t_1, t_2)   &= \int_{t_1}^{t_2} \left[ (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) \right] \eta \\                        \end{aligned}}

By the principle of stationary action, \displaystyle{ \delta_\eta S[q] (t_1, t_2) = 0}. So

\displaystyle{ \begin{aligned}   0   &= \int_{t_1}^{t_2} \left[ (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) \right] \eta \\                        \end{aligned}}

Since this is true for any function \displaystyle{\eta(t)} that satisfies \displaystyle{\eta(t_1) = \eta(t_2) = 0} and \displaystyle{D\eta(t_1) = D\eta(t_2) = 0},

\displaystyle{ \begin{aligned}   (\partial_1 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + D^2 (\partial_3 L \circ \Gamma[q]) &= 0 \\                        D^2 (\partial_3 L \circ \Gamma[q]) - D(\partial_2 L \circ \Gamma[q]) + \partial_1 L \circ \Gamma[q] &= 0 \\                        \end{aligned}}
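Poisson's equation can be checked symbolically on a concrete case. The acceleration-dependent Lagrangian below is an assumed example, \displaystyle{L = -\frac{m}{2} q \ddot q - \frac{k}{2} q^2}, an integration-by-parts rewriting of the harmonic-oscillator Lagrangian; it should reproduce the usual equation of motion.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')
qd = sp.diff(q(t), t)
qdd = sp.diff(q(t), t, 2)

# A sample higher-derivative Lagrangian (an assumed example).
L = -sp.Rational(1, 2) * m * q(t) * qdd - sp.Rational(1, 2) * k * q(t)**2

# Poisson's equation: D^2 (dL/d qddot) - D (dL/d qdot) + dL/dq = 0
poisson_eq = (sp.diff(sp.diff(L, qdd), t, 2)
              - sp.diff(sp.diff(L, qd), t)
              + sp.diff(L, q(t)))
print(sp.simplify(poisson_eq))  # -m q'' - k q: the harmonic oscillator again
```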

.

Note:

The notation of the path function \displaystyle{\Gamma} is \displaystyle{\Gamma[q](t)}, not \displaystyle{\Gamma[q(t)]}.

The notation \displaystyle{\Gamma[q](t)} means that \displaystyle{\Gamma} takes a path \displaystyle{q} as input and returns a path-independent function \displaystyle{\Gamma[q]}, which takes a time \displaystyle{t} as input and returns a value \displaystyle{\Gamma[q](t)}.

The other notation \displaystyle{\Gamma[q(t)]} makes no sense, because \displaystyle{\Gamma[.]} takes a path \displaystyle{q}, not a value \displaystyle{q(t)}, as input.

— Me@2020-11-11 05:37:13 PM

.

.

2020.11.11 Wednesday (c) All rights reserved by ACHK

Memory as past microstate information encoded in present devices

Logical arrow of time, 4.2

.

Memory is of the past.

The main point of memories or records is that without them, most of the past microstate information would be lost for a macroscopic observer forever.

For example, if a mixture has already reached an equilibrium state, we cannot deduce which previous microstate it is from, unless we have the memory of it.


.

memory/record

~ some of the past microstate and macrostate information encoded in present macroscopic devices, such as paper, electronic devices, etc.

.

How come macroscopic time is cumulative?

.

Quantum state evolution is unitary.

A quantum state in the present is evolved from one and only one quantum state at any particular time point in the past.

Also, that quantum state in the present will evolve to one and only one quantum state at any particular time point in the future.

.

Let

\displaystyle{t_1} = a past time point

\displaystyle{t_2} = now

\displaystyle{t_3} = a future time point

Also, let state \displaystyle{S_1} at time \displaystyle{t_1} evolve to state \displaystyle{S_2} at time \displaystyle{t_2}. And then state \displaystyle{S_2} evolves to state \displaystyle{S_3} at time \displaystyle{t_3}.

.

State \displaystyle{S_2} has a one-to-one correspondence with its past state \displaystyle{S_1}. So the state \displaystyle{S_2} does not need memory to store any information about state \displaystyle{S_1}.

Instead, just by knowing that the microstate at \displaystyle{t_2} is \displaystyle{S_2}, we can already deduce that it evolved from state \displaystyle{S_1} at time \displaystyle{t_1}.

In other words, a microstate does not require memory.
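As a toy sketch (the random unitary and the 4-dimensional state space are assumptions): because unitary evolution is exactly invertible, the present state determines the past state, with no separate memory needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4-dimensional unitary (a toy evolution operator), from the
# QR decomposition of a complex Gaussian matrix.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

S1 = np.array([1, 0, 0, 0], dtype=complex)  # state at the past time t1
S2 = U @ S1                                 # state now, at t2
S3 = U @ S2                                 # state at the future time t3

# One-one correspondence: S1 is recovered exactly from S2 by U^dagger,
# so knowing S2 already determines S1.
S1_recovered = U.conj().T @ S2
print(np.allclose(S1_recovered, S1))
```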

— Me@2020-10-28 10:26 AM

.

.

2020.11.02 Monday (c) All rights reserved by ACHK

尋覓

This passage is adapted from a conversation on 14 October 2010.

.

Why are you crying? Can you tell me?

(CPK: May I say it? Yes?

She has just broken up with her boyfriend, because he had long been ignoring her.)

.

I saw you crying just now and did not know what had happened. Now I know it is a breakup. Of all the disasters I can imagine, that is the smallest, because it is something almost everyone is bound to encounter.

.

If you want to avoid it, first write down some notes.

First point: do not take an ordinary Earthling as your other half, because with an Earthling, the plot you have just described is bound to happen.

For example, before winning you over, he cherishes you. “Cherishes” is not quite the right word. How should I put it?

Before winning you over, he longs to see you; but once the pursuit succeeds, he stops paying much attention to you. Is that what happened to you?

(CSY: Yes.)

.

Imagine the consequences if you married him someday.

It would only get worse: he would ignore you even more. So if the love cannot be permanent, the earlier the breakup, the better. Just think how troublesome things would be if you broke up only after getting married.

Divorce is still feasible after marriage but before children. Once there are children, though, there would be no good way out.

Since this is almost a necessary stage of life, the earlier the breakup, the better.

.

Of course, that is not the only flaw of Earthlings. Other intolerable flaws include, for example, a lack of integrity. Of the people you know, how many are punctual?

In fact, most people are not merely unpunctual; they often break appointments altogether.

.

But finding that alien of your dreams is not easy.

First, on Earth, Earthlings are many and aliens are few.

Second, even among aliens, there are good ones and bad ones.

Third, you are not yet twenty, and do not yet have enough life experience to recognize which people are aliens with integrity.

So it might be better to wait until you are older, say until your university years, before dating.

— Me@2020-10-28 10:26:23 PM

.

.

2020.10.31 Saturday (c) All rights reserved by ACHK