# Mathematics, 2

.

Approximately speaking, mathematics is whatever language has the maximum precision.

— Me@2012-04-16

.

.

# Self de-centralization, 3

.

Life is a process of self de-centralization. If you do not follow that path, you feel unhappy.

— Me@2011.08.19

.

.

# God's will, 2.4

.

.

.

.

.

— Me@2018-09-02 03:05:45 PM

.

.

# Illusions destroyed, 2

Ask HN: Why is nearing completion so demotivating?

534 points by danschumann 3 months ago

So I’ve been working on animation software for over two years. Part of me is very excited for launch so I can have money again (I’ve been freelancing a minimum amount these last two years, and went car-less, moved, cut lifestyle into a third). I should be wholeheartedly excited, but I’m feeling tired and generally sluggish regarding the project. I still make consistent progress, but it takes a lot of will power.

Part of me thinks it might be an aversion to sales. Part of me thinks this could have been built up so much in my head that anything short of overnight millions would be a disappointment (though I would be happy with 1500 bucks a month), part of me thinks I might be scared of success (or scared of surpassing my parents) (media attention), part of me fears the attacks that might come with success (having something to lose), part of it is the un-fun-ness of mature projects where the focus is on polish and bugs rather than broad new features, and part of me is scared of commitment: if I succeed I have to stick with this (freedom value), part of me wonders what will happen when more people become involved, if I will be able to maintain my creative direction, since I’m scratching my own itch. Part of me wonders if diet and exercise isn’t a factor.

A combination, likely…

.

mikekchar 3 months ago

When your project is finished, the dream is dead and the reality is born. The death of a dream is like the death of a friend. It’s probably been with you for a long time — longer even than the length of the project. A dream is the manifestation of what’s possible. When it is over, the possible diminishes very quickly and you are left with what actually is. Will people respond well to your project — in the dream stage it is possible; everything is possible. In the reality stage, it will only be what it is.

So while it’s common to think of a release as a birth of something new, realise that you also have a significant loss. You will mourn that loss. Give yourself some emotional space to deal with the mourning.

.

riantogo 3 months ago

This is exactly it. Tens of my personal projects have died in this stage. It was always much easier to move on to the next dream. There is always the next big problem that could use a solution. Why not build when it is what we do best? Rinse, repeat.

I took a break from side projects for several years but recently got back to it and couple weeks back finished building. It is the same story all over again. Same feeling. I’m dreading what comes next.

.

— Why is nearing completion so demotivating?

— Hacker News

.

.

2018.09.01 Saturday ACHK

# Problem 14.5a2

Counting states in heterotic SO(32) string theory | A First Course in String Theory

.

(a) Consider the left NS’ sector. Write the precise mass-squared formula with normal-ordered oscillators and the appropriate normal-ordering constant.

~~~

.

$\displaystyle{\alpha' M_L^2 = \frac{1}{2} \sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

.

— This answer is my guess. —

.

Equation at Problem 14.5:

$\displaystyle{\alpha' M_L^2}$

$\displaystyle{= \frac{1}{2} \sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \frac{-1}{3} + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

.

Since the index $A$ is summed over the $32$ values $1, 2, ..., 32$, the anticommutator term contributes $\delta^{AA} = 32$:

$\displaystyle{\sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$
$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ 2 \lambda_{-r}^A \lambda_r^A - 32 \right]}$

.

Equation (13.116):

$\displaystyle{\sum_{k \in \mathbf{Z}^+_{\text{odd}}} k = \frac{1}{12}}$

.

$\displaystyle{\begin{aligned} &\sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A \\ &= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ 2 \lambda_{-r}^A \lambda_r^A - 32 \right] \\ &= - 32 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A \\ &= - 16 \sum_{r = 1, 3, ...} r + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A \\ &= - \frac{4}{3} + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A \end{aligned}}$

.

$\displaystyle{ \begin{aligned} \alpha' M_L^2 &= \frac{-1}{3} + \frac{1}{2} \left( - \frac{4}{3} \right) + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A \\ &= -1 + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A \end{aligned}}$

.

If we define $N^\perp$ in a way similar to equation (14.37), we have

$\displaystyle{ \begin{aligned} \alpha' M_L^2 &= -1 + N^\perp \end{aligned}}$
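The value $\frac{1}{12}$ in equation (13.116) comes from zeta-function regularization: $\sum_{k~\text{odd}} k^{-s} = (1 - 2^{-s}) \zeta(s)$, analytically continued to $s = -1$. A quick sympy check of that continuation (my own sketch, not from the book):

```python
import sympy as sp

# Sum over odd positive k of k^(-s) equals (1 - 2^(-s)) * zeta(s).
# Zeta regularization assigns the analytic continuation at s = -1
# as the "value" of the divergent sum 1 + 3 + 5 + ...
s = -1
odd_sum = (1 - sp.Integer(2)**(-s)) * sp.zeta(s)

print(odd_sum)  # 1/12
```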

— This answer is my guess. —

.

— Me@2018-09-01 06:05:29 AM

.

.

# The square root of the probability

Probability amplitude in Layman’s Terms

What I understood is that probability amplitude is the square root of the probability … but the square root of the probability does not mean anything in the physical sense.

Can any please explain the physical significance of the probability amplitude in quantum mechanics?

edited Mar 1 at 16:31 by nbro
asked Mar 21 ’13 at 15:36 by Deepu

.

Part of your problem is

“Probability amplitude is the square root of the probability […]”

The amplitude is a complex number whose squared magnitude is the probability. That is $\psi^* \psi = P$ where the asterisk superscript means the complex conjugate.${}^{[1]}$ It may seem a little pedantic to make this distinction because so far the “complex phase” of the amplitudes has no effect on the observables at all: we could always rotate any given amplitude onto the positive real line and then “the square root” would be fine.

But we can’t guarantee to be able to rotate more than one amplitude that way at the same time.

Moreover, there are two ways to combine amplitudes to find probabilities for the observation of combined events.

.

When the final states are distinguishable you add probabilities:

$P_{dis} = P_1 + P_2 = \psi_1^* \psi_1 + \psi_2^* \psi_2$

.

When the final states are indistinguishable,${}^{[2]}$ you add amplitudes:

$\Psi_{1,2} = \psi_1 + \psi_2$

and

$P_{ind} = \Psi_{1,2}^*\Psi_{1,2} = \psi_1^*\psi_1 + \psi_1^*\psi_2 + \psi_2^*\psi_1 + \psi_2^* \psi_2$

.

The terms that mix the amplitudes labeled 1 and 2 are the “interference terms”. The interference terms are why we can’t ignore the complex nature of the amplitudes and they cause many kinds of quantum weirdness.
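A small numerical illustration of the two rules, with made-up amplitudes (the values $0.6$ and $0.8 e^{2i}$ are arbitrary, chosen only so the two phases differ):

```python
import numpy as np

# Two hypothetical amplitudes with different complex phases
psi1 = 0.6 + 0.0j
psi2 = 0.8 * np.exp(2.0j)

# Distinguishable final states: add probabilities
P_dis = abs(psi1)**2 + abs(psi2)**2

# Indistinguishable final states: add amplitudes first, then square
Psi = psi1 + psi2
P_ind = abs(Psi)**2

# The difference is exactly the interference term 2 Re(psi1* psi2)
interference = 2 * (np.conj(psi1) * psi2).real
assert np.isclose(P_ind, P_dis + interference)
```

Because of the interference term, $P_{ind}$ can be larger or smaller than $P_{dis}$, which is why the complex phase cannot be ignored.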

${}^1$ Here I’m using a notation reminiscent of a Schrödinger-like formulation, but that interpretation is not required. Just accept $\psi$ as a complex number representing the amplitude for some observation.

${}^2$ This is not precise; the states need to be “coherent”, but you don’t want to hear about that today.

edited Mar 21 ’13 at 17:04
answered Mar 21 ’13 at 16:58 by dmckee

— Physics Stack Exchange

.

.

# God's will, 2.3

.

.

— Me@2018-08-13 11:54:47 AM

.

.

# The Jacobian of the inverse of a transformation

The Jacobian of the inverse of a transformation is the inverse of the Jacobian of that transformation

.

In this post, we would like to illustrate the meaning of

the Jacobian of the inverse of a transformation = the inverse of the Jacobian of that transformation

by proving a special case.

.

Consider a transformation $\mathscr{T}: \bar{x}^i=\bar{x}^i (x^1,x^2)$, which is a one-to-one mapping from the unbarred coordinates $x^i$ to the barred coordinates $\bar{x}^i$, where $i=1, 2$.

By definition, the Jacobian matrix $J$ of $\mathscr{T}$ is

$J= \begin{pmatrix} \displaystyle{\frac{\partial \bar{x}^1}{\partial x^1}} & \displaystyle{\frac{\partial \bar{x}^1}{\partial x^2}} \\ \displaystyle{\frac{\partial \bar{x}^2}{\partial x^1}} & \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}} \end{pmatrix}$

.

Now we consider the inverse of the transformation $\mathscr{T}$:

$\mathscr{T}^{-1}: x^i=x^i(\bar{x}^1,\bar{x}^2)$

By definition, the Jacobian matrix $\bar{J}$ of this inverse transformation, $\mathscr{T}^{-1}$, is

$\bar{J}= \begin{pmatrix} \displaystyle{\frac{\partial x^1}{\partial \bar{x}^1}} & \displaystyle{\frac{\partial x^1}{\partial \bar{x}^2}} \\ \displaystyle{\frac{\partial x^2}{\partial \bar{x}^1}} & \displaystyle{\frac{\partial x^2}{\partial \bar{x}^2}} \end{pmatrix}$

.

On the other hand, the inverse of the Jacobian $J$ of the original transformation $\mathscr{T}$ is

$J^{-1}=\displaystyle{\frac{1}{ \begin{vmatrix} \displaystyle{\frac{\partial \bar{x}^1}{\partial x^1}} & \displaystyle{\frac{\partial \bar{x}^1}{\partial x^2}} \\ \displaystyle{\frac{\partial \bar{x}^2}{\partial x^1}} & \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}} \end{vmatrix} }} \begin{pmatrix} \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}} & \displaystyle{-\frac{\partial \bar{x}^1}{\partial x^2}} \\ \displaystyle{-\frac{\partial \bar{x}^2}{\partial x^1}} & \displaystyle{\frac{\partial \bar{x}^1}{\partial x^1}} \end{pmatrix}$

.

If $\bar{J} = J^{-1}$, their $(1, 1)$-elements should be equal:

$\displaystyle{\frac{\partial x^1}{\partial \bar{x}^1}}\stackrel{?}{=}\displaystyle{\frac{1}{\displaystyle{\frac{\partial \bar{x}^1}{\partial x^1}}\displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}}-\displaystyle{\frac{\partial \bar{x}^1}{\partial x^2}}\displaystyle{\frac{\partial \bar{x}^2}{\partial x^1}} }} \bigg( \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}} \bigg)$

Let’s try to prove that.

.

Consider equations

$\bar{x}^1 = \bar{x}^1(x^1,x^2)$

$\bar{x}^2 = \bar{x}^2(x^1,x^2)$

Differentiating both sides of each equation with respect to $\bar{x}^1$, we have:

$A := 1=\displaystyle{\frac{\partial \bar{x}^1}{\partial \bar{x}^1}=\frac{\partial \bar{x}^1}{\partial x^1}\frac{\partial x^1}{\partial \bar{x}^1}+\frac{\partial \bar{x}^1}{\partial x^2}\frac{\partial x^2}{\partial \bar{x}^1}}$

$B := 0 = \displaystyle{\frac{\partial \bar{x}^2}{\partial \bar{x}^1}=\frac{\partial \bar{x}^2}{\partial x^1}\frac{\partial x^1}{\partial \bar{x}^1}+\frac{\partial \bar{x}^2}{\partial x^2}\frac{\partial x^2}{\partial \bar{x}^1}}$

.

$A \times \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}}:~~~~~C := \displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}=\frac{\partial \bar{x}^1}{\partial x^1}\frac{\partial x^1}{\partial \bar{x}^1}\frac{\partial \bar{x}^2}{\partial x^2}+\frac{\partial \bar{x}^1}{\partial x^2}\frac{\partial x^2}{\partial \bar{x}^1}\frac{\partial \bar{x}^2}{\partial x^2}}$

$B \times \displaystyle{\frac{\partial \bar{x}^1}{\partial x^2}}:~~~~~D := \displaystyle{0=\frac{\partial \bar{x}^2}{\partial x^1}\frac{\partial x^1}{\partial \bar{x}^1}\frac{\partial \bar{x}^1}{\partial x^2}+\frac{\partial \bar{x}^2}{\partial x^2}\frac{\partial x^2}{\partial \bar{x}^1}\frac{\partial \bar{x}^1}{\partial x^2}}$

.

$C - D:$

$\displaystyle{ \frac{\partial \bar{x}^2}{\partial x^2}= \bigg( \frac{\partial \bar{x}^1}{\partial x^1}\frac{\partial \bar{x}^2}{\partial x^2} - \frac{\partial \bar{x}^2}{\partial x^1}\frac{\partial \bar{x}^1}{\partial x^2}\bigg) \frac{\partial x^1}{\partial \bar{x}^1}}$,

which gives

$\displaystyle{ \frac{\partial x^1}{\partial \bar{x}^1}}=\frac{\displaystyle{\frac{\partial \bar{x}^2}{\partial x^2}}}{\displaystyle{\frac{\partial \bar{x}^1}{\partial x^1}\frac{\partial \bar{x}^2}{\partial x^2} - \frac{\partial \bar{x}^1}{\partial x^2}\frac{\partial \bar{x}^2}{\partial x^1}}}$
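As a numerical check of the general claim, one can compare $\bar{J}$ with $J^{-1}$ for a concrete transformation. Here the polar-to-Cartesian map is used as an example of my own choosing:

```python
import numpy as np

# Example transformation T: (x^1, x^2) = (r, t) -> (xbar^1, xbar^2) = (x, y),
# with x = r cos(t), y = r sin(t).

def J(r, t):
    # Jacobian of T: rows are gradients of x and y w.r.t. (r, t)
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

def J_bar(x, y):
    # Jacobian of T^(-1): r = sqrt(x^2 + y^2), t = atan2(y, x)
    r = np.hypot(x, y)
    return np.array([[ x / r,     y / r   ],
                     [-y / r**2,  x / r**2]])

r, t = 2.0, 0.7
x, y = r * np.cos(t), r * np.sin(t)

# J_bar at the image point should equal the matrix inverse of J
assert np.allclose(J_bar(x, y), np.linalg.inv(J(r, t)))
```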

— Me@2018-08-09 09:49:51 PM

.

.

# Problem 14.5a1

Counting states in heterotic $SO(32)$ string theory | A First Course in String Theory

.

(a) Consider the left NS’ sector. Write the precise mass-squared formula with normal-ordered oscillators and the appropriate normal-ordering constant.

~~~

.

$\displaystyle{\alpha' M_L^2 = \frac{1}{2} \sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

.

What is normal-ordering?

Put all the creation operators on the left.

.

What for?

p.251 “It is useful to work with normal-ordered operators since they act in a simple manner on the vacuum state. We cannot use operators that do not have a well defined action on the vacuum state.”

“The vacuum expectation value of a normal ordered product of creation and annihilation operators is zero. This is because, denoting the vacuum state by $|0\rangle$, the creation and annihilation operators satisfy”

$\displaystyle{\langle 0 | \hat{a}^\dagger = 0 \qquad \textrm{and} \qquad \hat{a} |0\rangle = 0}$

— Wikipedia on Normal order

.

— This answer is my guess. —

$\displaystyle{\sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I}$

$\displaystyle{= \sum_{n \in \mathbf{Z}^-} \bar \alpha_{-n}^I \bar \alpha_n^I + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_n^I}$

$\displaystyle{= \sum_{n \in \mathbf{Z}^+} \bar \alpha_{n}^I \bar \alpha_{-n}^I + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_n^I}$

$\displaystyle{= \sum_{n \in \mathbf{Z}^+} \left[ \bar \alpha_{n}^I \bar \alpha_{-n}^I - \bar \alpha_{-n}^I \bar \alpha_{n}^I + \bar \alpha_{-n}^I \bar \alpha_{n}^I \right] + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_n^I}$

.

$\displaystyle{= \sum_{n \in \mathbf{Z}^+} \left[ \bar \alpha_{n}^I, \bar \alpha_{-n}^I \right] + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_n^I}$

$= \displaystyle{\sum_{n \in \mathbf{Z}^+} n \eta^{II} + 2 \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I}$

.

c.f. p.251:

$\displaystyle{\sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I}$

$\displaystyle{= \sum_{n \in \mathbf{Z}^+} n \eta^{II} + 2 \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I}$

$\displaystyle{= \frac{-1}{12} (D - 2) + 2 \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I}$

.

Equation at Problem 14.5:

$\displaystyle{\alpha' M_L^2}$

$\displaystyle{= \frac{1}{2} \sum_{n \ne 0} \bar \alpha_{-n}^I \bar \alpha_n^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \frac{1}{2} \left[ \frac{-1}{12} (D - 2) + 2 \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I \right] + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \frac{-1}{24} (D - 2) + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \frac{-1}{3} + \sum_{n \in \mathbf{Z}^+} \bar \alpha_{-n}^I \bar \alpha_{n}^I + \frac{1}{2} \sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

.

$D = 10$

.

$\displaystyle{\sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \sum_{r = - \frac{1}{2}, - \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A + \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} (-r) \lambda_{r}^A \lambda_{-r}^A + \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ (-1) \lambda_{r}^A \lambda_{-r}^A + \lambda_{-r}^A \lambda_r^A \right]}$

.

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ (-1) \lambda_{r}^A \lambda_{-r}^A + \lambda_{-r}^A \lambda_r^A \right]}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ \lambda_{-r}^A, \lambda_r^A \right]}$

.

Equation (14.29):

$\displaystyle{\left\{ b_r^I, b_s^J \right\} = \delta_{r+s, 0} \delta^{IJ}}$

$\displaystyle{b_r^I b_s^J = - b_s^J b_r^I + \delta_{r+s, 0} \delta^{IJ}}$

.

$\displaystyle{\sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ (-1) \lambda_{r}^A \lambda_{-r}^A + \lambda_{-r}^A \lambda_r^A \right]}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ (-1) \left( - \lambda_{-r}^A \lambda_r^A + \delta_{r-r, 0} \delta^{AA} \right) + \lambda_{-r}^A \lambda_r^A \right]}$

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ 2 \lambda_{-r}^A \lambda_r^A - \delta^{AA} \right]}$

Since the index $A$ is summed over the $32$ values $1, 2, ..., 32$, we have $\delta^{AA} = 32$:

$\displaystyle{= \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \left[ 2 \lambda_{-r}^A \lambda_r^A - 32 \right]}$

.

$\displaystyle{\sum_{r \in \mathbf{Z} + \frac{1}{2}}r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= - 32 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= - 16 \sum_{r = 1, 3, ...} r + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A}$

$\displaystyle{= - \frac{4}{3} + 2 \sum_{r = \frac{1}{2}, \frac{3}{2}, ...} r \lambda_{-r}^A \lambda_r^A}$

— This answer is my guess. —

.

— Me@2018-08-06 10:23:48 PM

.

.

# Universal wave function, 20

The physical (synthetic) universal wave function logically cannot be found by any local observers.

The definition of “universe” is “all the things”. So there is no outside.

A global observer has to be outside the universe.

.

However, a mathematical (analytic) universal wave function is possible.

It applies to a theoretical/model universe, which can be used to develop interpretations of quantum mechanics and to successively approximate the physical universe.

— Me@2012-04-16

.

.

# The fault of optimism

The illusion of peace

.

The seemingly rational world is rendered by exceptionally good parents.

The seemingly peaceful time is provided by an exceptional expense of defense.

— Me@2011.08.19

.

.

# God's will, 2.2

.

.

That “He” may in fact be an “evil spirit”, or may merely be a hallucination of your own.

.

.

— Me@2018-07-16 07:51:33 PM

.

.

# Chain Rule of Differentiation

Consider the curve $y = f(x)$.

.

$\displaystyle{\frac{d}{dx}}$ is an operator, meaning “the slope of the tangent of”. So the expression $\displaystyle{\frac{dy}{dx}}$, meaning $\displaystyle{\frac{d}{dx} (y)}$, is not a fraction.

In other words, it means the slope of the tangent of the curve $y = f(x)$ at a point, such as point $A$ in the graph.

The symbol $dx$ has no relation to the symbol $\displaystyle{\frac{dy}{dx}}$. It means $\Delta x$, as shown in the graph. In other words,

$dx = \Delta x$

.

The symbol $dy$ also has no relation to the symbol $\displaystyle{\frac{dy}{dx}}$. It means the vertical distance between the current point $A(x_0, y_0)$, where $y_0 = f(x_0)$, and the point $C$ on the tangent line $y = mx + c$, where $m$ is the slope of the tangent line. In other words,

$dy = m~dx$

or

$\displaystyle{dy = \left[ \left( \frac{d}{dx} \right) y \right] dx}$

.

The relationship between $\Delta y$ and $dy$ is that

$\displaystyle{\Delta y = \frac{dy}{dx} \Delta x + \text{higher order terms}}$

$\displaystyle{\Delta y = \frac{dy}{dx} dx + \text{higher order terms}}$

$\Delta y = dy + \text{higher order terms}$

.

Similarly, for functions of two variables:

$\displaystyle{\Delta f(x,y) = \frac{\partial f}{\partial x} \Delta x + \frac{\partial f}{\partial y} \Delta y + \text{higher order terms}}$

$\displaystyle{df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy}$

.

For functions of three variables:

$\displaystyle{df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz}$

$\displaystyle{\frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} + \frac{\partial f}{\partial z}\frac{dz}{dt}}$
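The three-variable chain rule can be verified symbolically. The path and the function below are arbitrary examples of my own, not from the text:

```python
import sympy as sp

t, X, Y, Z = sp.symbols('t X Y Z')

# An arbitrary example path (x(t), y(t), z(t)) and function f(x, y, z)
x, y, z = sp.cos(t), sp.sin(t), t**2
F = X * Y + Z**2

# Left-hand side: substitute the path into f, then differentiate in t
lhs = sp.diff(F.subs({X: x, Y: y, Z: z}), t)

# Right-hand side: sum of partial derivatives times coordinate velocities
path = {X: x, Y: y, Z: z}
rhs = (sp.diff(F, X).subs(path) * sp.diff(x, t)
       + sp.diff(F, Y).subs(path) * sp.diff(y, t)
       + sp.diff(F, Z).subs(path) * sp.diff(z, t))

assert sp.simplify(lhs - rhs) == 0
```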

— Me@2018-07-15 09:30:29 PM

.

.

# Problem 14.4b2

Closed string degeneracies | A First Course in String Theory

.

(b) State the values of $\alpha' M^2$ and give the separate degeneracies of bosons and fermions for the first five mass levels of the type IIA closed superstrings. Would the answer have been different for type IIB?

~~~

Type IIB closed superstrings

Equation (14.85)

$(NS+, NS+), (NS+, R-), (R-, NS+), (R-, R-)$

— Me@2015.09.16 06:08 AM: Should be the same. But I am not sure whether I have missed something.

.

$f_{NS+}(x) = 8 + 128 \, x + 1152 \, x^{2} + 7680 \, x^{3} + 42112 \, x^{4} + ...$

$f_{R-}(x) = 8 + 128 x + 1152 x^{2} + 7680 x^{3} + 42112 x^{4} + ...$

$f_{NS-}(x) = \frac{1}{\sqrt{x}} + 36 \sqrt{x} + 402 x^{\frac{3}{2}} + 3064 x^{\frac{5}{2}} + ...$

$f_{R+}(x) = 8 + 128 x + 1152 x^{2} + 7680 x^{3} + 42112 x^{4} + ...$

— Me@2018-07-14 09:41:10 PM

.

.

# Pointer state

Eigenstates 3

.

In quantum Darwinism and similar theories, pointer states are quantum states that are less perturbed by decoherence than other states, and are the quantum equivalents of the classical states of the system after decoherence has occurred through interaction with the environment.

— Wikipedia on Pointer state

.

In calculation, if a quantum state is in a superposition, that superposition is a superposition of eigenstates.

However, a real superposition does not include just the states that make macroscopic sense.

.

That is the major mistake of the many-worlds interpretation of quantum mechanics.

— Me@2017-12-30 10:24 AM

— Me@2018-07-03 07:24 PM

.

.

# Mirror selves, 5.2

Anatta 3.3 | 無我 3.3

.

You fight for existence, for being alive.

However, your existence is not “yours”.

The existence of you, is not your property.

The existence of you, is a property of the group you are in.

The existence of you, is a property of other people.

.

To meaningfully say the statement “I exist”, you have to specify with respect to whom you exist.

To exist, you have to specify in whose world you exist.

— Me@2018-05-22 7:43 AM

.

.

# God's will, 2.1

.

(Question: You have read many articles about “near-death experiences”?)

(Question: Then how do you tell which of those “near-death experience” articles are true and which are false?)

.

.

.

That “He” may in fact be an “evil spirit”, or may merely be a hallucination of your own.

— Me@2018-06-28 10:23:28 PM

.

.

# Block spacetime, 9

motohagiography 42 days ago

I once saw a fridge magnet that said “time is nature’s way of making sure everything doesn’t happen all at once,” and it’s stuck with me.

The concept of time not being “real” can be useful as an exercise for modelling problems where, to fully explore the problem space, you need to decouple your solutions from needing them to occur in an order or sequence.

From an engineering perspective, “removing” time means you can model problems abstractly by stepping back from a problem and asking, what are all possible states of the mechanism, then which ones are we implementing, and finally, in what order. This is different from the relatively stochastic approach most people take of “given X, what is the necessary next step to get to desired endstate.”

More simply, as a tool, time helps us apprehend the states of a system by reducing the scope of our perception of them to sets of serial, ordered phenomena.

Whether it is “real,” or an artifact of our perception is sort of immaterial when you can choose to reason about things with it, or without it. A friend once joked that math is what you get when you remove time from physics.

I look forward to the author’s new book.

— Gödel and the unreality of time

— Hacker News

.

.

2018.06.26 Tuesday ACHK

# Quick Calculation 14.8.2

A First Course in String Theory

.

What sector(s) can be combined with a left-moving NS- to form a consistent closed string sector?

~~~

There are no mass levels in NS+, R+, or R- that can match those in NS-. So NS- can be paired only with NS-:

$(NS-, NS-)$

.

$f_{NS} (x)$
$= \frac{1}{\sqrt{x}} \prod_{n=1}^\infty \left( \frac{1+x^{n-\frac{1}{2}}}{1-x^n} \right)^8$
$= \frac{1}{\sqrt{x}} g_{NS}(x)$
$= \frac{1}{\sqrt{x}} + 8 + 36 \sqrt{x} + 128 x + 402 x \sqrt{x} + 1152 x^2 + ...$

.

$g (\sqrt{x})$
$= \prod_{n=1}^\infty \left( \frac{1+x^{n-\frac{1}{2}}}{1-x^n} \right)^8$
$= 1 + 8 \, \sqrt{x} + 36 \, x + 128 \, x^{\frac{3}{2}} + 402 \, x^{2} + 1152 \, x^{\frac{5}{2}} + 3064 \, x^{3} + ...$

$g (-\sqrt{x})$
$= \prod_{n=1}^\infty \left( \frac{1-x^{n-\frac{1}{2}}}{1-x^n} \right)^8$
$= 1 -8 \, \sqrt{x} + 36 \, x -128 \, x^{\frac{3}{2}} + 402 \, x^{2} -1152 \, x^{\frac{5}{2}} + 3064 \, x^{3} + ...$

.

$g (\sqrt{x}) + g (-\sqrt{x})$
$= 2(1 + 36 x + 402 x^{2} + 3064 x^{3} + ...)$

.

$f_{NS-}(x)$
$= \frac{1}{2 \sqrt{x}} \left[ g (\sqrt{x}) + g (-\sqrt{x}) \right]$
$= \frac{1}{2 \sqrt{x}} \left[ \prod_{n=1}^\infty \left( \frac{1+x^{n-\frac{1}{2}}}{1-x^n} \right)^8 + \prod_{n=1}^\infty \left( \frac{1-x^{n-\frac{1}{2}}}{1-x^n} \right)^8 \right]$
$= \frac{1}{2 \sqrt{x}} \left[ 2(1 + 36 \, x + 402 \, x^{2} + 3064 \, x^{3} + ...) \right]$
$= \frac{1}{\sqrt{x}} + 36 \sqrt{x} + 402 x^{\frac{3}{2}} + 3064 x^{\frac{5}{2}} + ...$
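The coefficients above can be checked by brute-force expansion. The sympy sketch below (my own choice of tool and truncation order) expands $g(\sqrt{x})$ in powers of $q = \sqrt{x}$; only the factors with $n \le 3$ contribute below order $q^7$:

```python
import sympy as sp

q = sp.symbols('q')   # q plays the role of sqrt(x), so x^n -> q^(2n)
order = 7             # keep terms up to q^6, i.e. up to x^3

# g(sqrt(x)) = prod_{n>=1} ((1 + x^(n-1/2)) / (1 - x^n))^8;
# the factor with index n first contributes at order q^(2n-1)
g = sp.Integer(1)
for n in (1, 2, 3):
    g *= ((1 + q**(2*n - 1)) / (1 - q**(2*n)))**8

series = sp.series(g, q, 0, order).removeO().expand()
coeffs = [series.coeff(q, k) for k in range(order)]

assert coeffs == [1, 8, 36, 128, 402, 1152, 3064]
```

The even-power coefficients $1, 36, 402, 3064$ are exactly the ones that survive in $\frac{1}{2}\left[ g(\sqrt{x}) + g(-\sqrt{x}) \right]$, i.e. the $f_{NS-}$ expansion above.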

— Me@2018-06-26 07:36:41 PM

.

.