# AutoKey

First photo, 3 | 1990, 3

.


# Types the current date and 12-hour time, e.g. "2022.04.28 09:04 AM"
output = system.exec_command("date +%Y.%m.%d")
keyboard.send_keys(output + " ")
output = system.exec_command("date +%I:%M")
keyboard.send_keys(output + " ")
output = system.exec_command("date +%p")
keyboard.send_keys(output)




# Types the current date and weekday, e.g. "2022.04.28 Thursday"
output = system.exec_command("date +%Y.%m.%d")
keyboard.send_keys(output + " ")
output = system.exec_command("date +%A")
keyboard.send_keys(output)




# Types a filename-safe timestamp, e.g. "_2022_04_28__09_04_28_AM"
output = system.exec_command("date +_%Y_%m_%d_")
keyboard.send_keys(output)
output = system.exec_command("date +_%H_%M_%S_%p")
keyboard.send_keys(output)
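
Outside AutoKey, the same strings can be produced with Python's standard `datetime` module; the `strftime` codes below mirror the `date(1)` format codes used in the scripts above (a sketch, with an arbitrary fixed moment so the output is reproducible):

```python
from datetime import datetime

# A fixed example moment, chosen arbitrarily for illustration:
now = datetime(2022, 4, 28, 9, 4, 28)

stamp   = now.strftime("%Y.%m.%d %I:%M %p")       # date + 12-hour time
weekday = now.strftime("%Y.%m.%d %A")             # date + weekday name
fname   = now.strftime("_%Y_%m_%d__%H_%M_%S_%p")  # filename-safe timestamp
                                                  # (the double underscore matches
                                                  # the two concatenated date calls)

print(stamp)    # 2022.04.28 09:04 AM
print(weekday)  # 2022.04.28 Thursday
print(fname)    # _2022_04_28__09_04_28_AM
```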



— Me@2022.04.28 09:04 AM

.

.

# Ex 1.25 Properties of Dt

Structure and Interpretation of Classical Mechanics

.

The total time derivative $\displaystyle{D_t F}$ is not the derivative of the function $\displaystyle{F}$. Nevertheless, the total time derivative shares many properties with the derivative. Demonstrate that $\displaystyle{D_t}$ has the following properties …

$\displaystyle{D_t (F + G) = D_t F + D_t G}$

~~~

Eq. (1.108):

$\displaystyle{ D(F \circ \Gamma[q]) = (DF\circ\Gamma[q])D\Gamma[q] }$

Eq. (1.109):

$\displaystyle{ DF \circ \Gamma[q] = \left[ \partial_0 F \circ \Gamma[q], \partial_1 F \circ \Gamma[q], \partial_2 F \circ \Gamma[q], ... \right] }$

Eq. (1.110):

$\displaystyle{ \left(D \Gamma[q] \right)(t) = \left( 1, Dq(t), D^2 q(t), ... \right) = \begin{bmatrix} 1 \\ Dq(t) \\ D^2 q(t) \\ ... \\ \end{bmatrix} }$

.

$\displaystyle{ \begin{aligned} D(F \circ \Gamma[q])(t) &= (DF\circ\Gamma[q])D\Gamma[q](t) \\ &= \left[ \partial_0 F \circ \Gamma[q], \partial_1 F \circ \Gamma[q], \partial_2 F \circ \Gamma[q], ... \right] \begin{bmatrix} 1 \\ Dq(t) \\ D^2 q(t) \\ ... \\ \end{bmatrix} \\ &= \partial_0 F \circ \Gamma[q] + \partial_1 F \circ \Gamma[q] D q(t) + \partial_2 F \circ \Gamma[q] D^2 q(t) + ... \\ &= \partial_0 F \circ \Gamma[q] u(t) + \partial_1 F \circ \Gamma[q] D q(t) + \partial_2 F \circ \Gamma[q] D^2 q(t) + ... \\ \end{aligned} }$,

where $\displaystyle{u(t) \equiv 1}$.

.

$\displaystyle{ \begin{aligned} D(F \circ \Gamma[q]) &= \partial_0 F \circ \Gamma[q] u + \partial_1 F \circ \Gamma[q] D q + \partial_2 F \circ \Gamma[q] D^2 q + ... \\ &= \partial_0 F \circ \Gamma[q] J_0 \circ \Gamma[q] + \partial_1 F \circ \Gamma[q] J_1 \circ \Gamma[q] + \partial_2 F \circ \Gamma[q] J_2 \circ \Gamma[q] + ... \\ \end{aligned} }$

where

$\displaystyle{ \begin{aligned} (I_0 \circ \Gamma[q])(t) &= t \\ \\ I_{n>0} \circ \Gamma[q] &= I_{n>0} (t, q, v, a, ...) \\ &= I_{n>0} (t, q, Dq, D^2 q, ...) \\ &= D^{n-1} q \\ \\ J_{n} \circ \Gamma[q] &= D(I_n \circ \Gamma[q]) \\ \end{aligned} }$

.

The meaning of $\displaystyle{\delta_\eta (fg)[q]}$ is

$\displaystyle{\delta_\eta (f[q]g[q])}$

— Me@2019-04-27 07:02:38 PM

$\displaystyle{ \begin{aligned} D(F \circ \Gamma[q]) &= \partial_0 F \circ \Gamma[q] J_0 \circ \Gamma[q] + \partial_1 F \circ \Gamma[q] J_1 \circ \Gamma[q] + \partial_2 F \circ \Gamma[q] J_2 \circ \Gamma[q] + ... \\ &= \left[(\partial_0 F) J_0 + (\partial_1 F) J_1 + (\partial_2 F) J_2 + ... \right] \circ \Gamma[q] \\ \end{aligned} }$

.

Eq. (1.113):

$\displaystyle{ D_t F \circ \Gamma[q] = D(F \circ \Gamma[q]) }$

$\displaystyle{ \begin{aligned} D_t F \circ \Gamma[q] &= \left[(\partial_0 F) J_0 + (\partial_1 F) J_1 + (\partial_2 F) J_2 + ... \right] \circ \Gamma[q] \\ \\ D_t F &= (\partial_0 F) J_0 + (\partial_1 F) J_1 + (\partial_2 F) J_2 + ... \\ \end{aligned} }$

Eq. (1.114):

$\displaystyle{ \begin{aligned} D_t F (t, q, v, a, ...) &= \partial_0 F(t, q, v, a, ...) + \partial_1 F(t, q, v, a, ...) v + \partial_2 F(t, q, v, a, ...) a + ... \\ \end{aligned} }$

.

$\displaystyle{ \begin{aligned} D_t F \circ \Gamma[q] (t) &= \partial_0 F(t, q, v, a, ...) + \partial_1 F(t, q, v, a, ...) v(t) + \partial_2 F(t, q, v, a, ...) a(t) + ... \\ \end{aligned} }$

$\displaystyle{ \begin{aligned} &D_t (F + G) \circ \Gamma[q] (t) \\ \end{aligned} }$

$\displaystyle{ \begin{aligned} &= \partial_0 \left[F(t, q, v, a, ...) + G(t, q, v, a, ...)\right] \\ &+ \partial_1 \left[F(t, q, v, a, ...) + G(t, q, v, a, ...)\right] v(t) \\ &+ \partial_2 \left[F(t, q, v, a, ...) + G(t, q, v, a, ...)\right] a(t) + ... \\ \end{aligned} }$

$\displaystyle{ \begin{aligned} &= \left[ \partial_0 F(t, q, v, a, ...) + \partial_0 G(t, q, v, a, ...)\right] \\ &+ \left[ \partial_1 F(t, q, v, a, ...) + \partial_1 G(t, q, v, a, ...)\right] v(t) \\ &+ \left[ \partial_2 F(t, q, v, a, ...) + \partial_2 G(t, q, v, a, ...)\right] a(t) + ... \\ \end{aligned} }$

$\displaystyle{ \begin{aligned} &= \partial_0 F(t, q, v, a, ...) + \partial_1 F(t, q, v, a, ...) v(t) + \partial_2 F(t, q, v, a, ...) a(t) + ... \\ &+ \partial_0 G(t, q, v, a, ...) + \partial_1 G(t, q, v, a, ...) v(t) + \partial_2 G(t, q, v, a, ...) a(t) + ... \\ \end{aligned} }$

$\displaystyle{ \begin{aligned} &= D_t F \circ \Gamma[q] (t) + D_t G \circ \Gamma[q] (t) \\ \end{aligned} }$

.

In short,

$\displaystyle{ \begin{aligned} D_t (F + G) \circ \Gamma[q] (t) &= D_t F \circ \Gamma[q] (t) + D_t G \circ \Gamma[q] (t) \\ \end{aligned} }$

So

$\displaystyle{ \begin{aligned} D_t (F + G) \circ \Gamma[q] (t) &= (D_t F + D_t G) \circ \Gamma[q] (t) \\ \\ D_t (F + G) \circ \Gamma[q] &= (D_t F + D_t G) \circ \Gamma[q] \\ \\ D_t (F + G) &= D_t F + D_t G \\ \end{aligned} }$
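
The additivity can also be checked numerically. Below is a minimal sketch: `F` and `G` are hypothetical functions of the local tuple `(t, q, v)`, the path `q` is chosen arbitrarily, and the derivative `D` is approximated by a central difference:

```python
import math

def q(t):
    """An arbitrary sample path."""
    return math.sin(t)

def Dq(t, h=1e-6):
    """Central-difference approximation to Dq."""
    return (q(t + h) - q(t - h)) / (2 * h)

def Gamma(t):
    """Local tuple (t, q(t), Dq(t)), truncated after the velocity slot."""
    return (t, q(t), Dq(t))

# Hypothetical local-tuple functions:
def F(t, x, v):
    return t * x + v * v

def G(t, x, v):
    return x * v - t

def Dt(func, t, h=1e-4):
    """D_t func along the path, i.e. D(func o Gamma[q]), by central difference."""
    return (func(*Gamma(t + h)) - func(*Gamma(t - h))) / (2 * h)

t0 = 0.7
lhs = Dt(lambda t, x, v: F(t, x, v) + G(t, x, v), t0)
rhs = Dt(F, t0) + Dt(G, t0)
print(abs(lhs - rhs) < 1e-9)  # True: D_t(F + G) = D_t F + D_t G along the path
```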

— Me@2022-04-20 11:42:52 AM

.

.

# Y Combinator, 2

arethuza 7 hours ago [-]

It’s over 30 years since I was taught about the Y combinator and I still find it amazing – it allows you to define recursion in a language which has no concept of recursion or named functions – such as λ-calculus.

The fact that you then define this in terms of amazingly simple functions like the S and K combinators just, in my opinion, adds to how wonderful it is.

— Why Y? Deriving the Y Combinator in JavaScript

— Hacker News
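
The commenter's point can be sketched in Python (rather than the article's JavaScript). Since Python is strict, the applicative-order variant, the Z combinator, is used; nothing below refers to itself by name:

```python
# Z combinator: a fixed-point operator usable in a strict (eagerly evaluated) language.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# "Recursion" without recursion: the body receives its own fixed point as rec.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
print(fact(0))  # 1
```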

.

.

2022.04.19 Tuesday ACHK

# Jonny English, 2

A man without hope is a man without fear.

— Daredevil: Born Again

— Frank Miller

— Me@2016-04-09 09:49:28 PM

.

.

2022.04.19 Tuesday ACHK


# 相聚零刻 1.2.2

.

— Why, when you meet a girl you like, you should always confess!

— 楚兒戀愛說

.

— Me@2022-04-18 12:29:00 PM

.

.

# Quick Calculation 3.3

A First Course in String Theory

.

Verify the equations in (3.26).

~~~

Eq. (3.26):

$\displaystyle{ \begin{aligned} T_{\lambda \mu \nu} &= - T_{\mu \lambda \nu} \\ T_{\lambda \mu \nu} &= - T_{\lambda \nu \mu} \\ \end{aligned} }$

.

$\displaystyle{ \begin{aligned} T_{\lambda \mu \nu} &= \partial_\lambda F_{\mu \nu} + \partial_\mu F_{\nu \lambda} + \partial_\nu F_{\lambda \mu} \\ \end{aligned} }$

Since $\displaystyle{F_{\mu \nu} = - F_{\nu \mu}}$,

$\displaystyle{ \begin{aligned} T_{\mu \lambda \nu} &= \partial_\mu F_{\lambda \nu} + \partial_\lambda F_{\nu \mu} + \partial_\nu F_{\mu \lambda} \\ &= - \partial_\mu F_{\nu \lambda} - \partial_\lambda F_{\mu \nu} - \partial_\nu F_{\lambda \mu} \\ &= - T_{\lambda \mu \nu} \\ \end{aligned} }$

Similarly,

$\displaystyle{ \begin{aligned} T_{\lambda \nu \mu} &= \partial_\lambda F_{\nu \mu} + \partial_\nu F_{\mu \lambda} + \partial_\mu F_{\lambda \nu} \\ &= - \partial_\lambda F_{\mu \nu} - \partial_\nu F_{\lambda \mu} - \partial_\mu F_{\nu \lambda} \\ &= - T_{\lambda \mu \nu} \\ \end{aligned} }$
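
Both antisymmetries hold for any antisymmetric $\displaystyle{F_{\mu\nu}}$, which can also be checked numerically. A sketch with a hypothetical smooth antisymmetric field (the component functions are invented; only $\displaystyle{F_{\mu \nu} = - F_{\nu \mu}}$ is built in) and central-difference partial derivatives:

```python
import math
from itertools import permutations

# Independent components of a hypothetical antisymmetric field F_{mu nu}(x):
comp = {
    (0, 1): lambda t, a, b, c: math.sin(a * b),
    (0, 2): lambda t, a, b, c: t * c,
    (0, 3): lambda t, a, b, c: a + b * b,
    (1, 2): lambda t, a, b, c: math.cos(t + c),
    (1, 3): lambda t, a, b, c: t * a * b,
    (2, 3): lambda t, a, b, c: b - t * t,
}

def F(mu, nu, x):
    if mu == nu:
        return 0.0
    if (mu, nu) in comp:
        return comp[(mu, nu)](*x)
    return -comp[(nu, mu)](*x)  # antisymmetry built in

def d(mu, f, x, h=1e-5):
    """Central-difference partial derivative with respect to x^mu."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    return (f(xp) - f(xm)) / (2 * h)

def T(l, m, n, x):
    """T_{lmn} = d_l F_{mn} + d_m F_{nl} + d_n F_{lm}"""
    return (d(l, lambda y: F(m, n, y), x)
            + d(m, lambda y: F(n, l, y), x)
            + d(n, lambda y: F(l, m, y), x))

x0 = [0.3, 0.7, 1.1, 0.5]  # an arbitrary spacetime point
ok = all(abs(T(l, m, n, x0) + T(m, l, n, x0)) < 1e-8 and
         abs(T(l, m, n, x0) + T(l, n, m, x0)) < 1e-8
         for l, m, n in permutations(range(4), 3))
print(ok)  # True: T is antisymmetric in its first two and in its last two indices
```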

— Me@2022-04-15 05:10:09 PM

.

.

# Logical arrow of time, 11

The initial microstates should be averaged over, because they form an ensemble for the initial macrostate.

Note that a macrostate is actually one particular microstate, not a collection of microstates; it is just that we don’t know which particular microstate.

But how come the final possible states should be summed over, not be averaged?

— Me@2013-08-13 05:16 PM

.

a macrostate = (a microstate in) a set of macroscopically-indistinguishable microstates

— Me@2022-01-09 07:43 AM

Note that, by definition, two macroscopically-indistinguishable microstates will never separate into two distinct macrostates.

— Me@2022-04-14 05:55 PM

.

The initial macrostate has probability one, because it is already known. So the probabilities of all the possible, mutually exclusive, initial microstates corresponding to that initial macrostate sum to one, such as

$\displaystyle{P(I_1) + P(I_2) = 1}$

.

By definition, the final macrostate is not yet known, so no possible final macrostate has probability one.

The probability of getting a particular final macrostate from that initial macrostate is the summation of the probabilities of all possible mutually exclusive final microstates that are corresponding to that final macrostate.

$\displaystyle{P(F_1~\text{or}~F_2) = P(F_1) + P(F_2)}$

$\displaystyle{P(I\to F) = \frac{1}{N_{\text{initial}}} \sum_{ij} P(I_i \to F_j)}$
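
A toy numerical sketch of that rule (everything here, the four microstates, the macrostate groupings, and the transition matrix, is invented for illustration):

```python
# Microstates 0 and 1 form the known initial macrostate I;
# microstates 2 and 3 form one possible final macrostate F.
# Row i of T lists the transition probabilities P(i -> j); each row sums to 1.
T = [
    [0.10, 0.20, 0.30, 0.40],
    [0.00, 0.50, 0.40, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.70, 0.10, 0.10, 0.10],
]

initial = [0, 1]  # averaged over: we don't know which microstate we are in
final = [2, 3]    # summed over: reaching either one counts as reaching F

# P(I -> F) = (1/N_initial) * sum_ij P(I_i -> F_j)
p = sum(T[i][j] for i in initial for j in final) / len(initial)
print(round(p, 12))  # 0.6
```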

— Me@2022-04-13 01:09 PM

.

The only assumptions I made are those about the addition of probabilities of assumptions and their effects – and these logical rules are fundamentally asymmetric when it comes to the role of the assumptions and their consequences. This logical arrow of time can’t be removed from any reasoning about a world that depends on time – time only copies the logical relationship of implication. And this logical arrow of time is the source of the thermodynamic arrow of time as well.

— edited Feb 2, 2011 at 15:23

— answered Jan 14, 2011 at 11:42

— Luboš Motl

— Calculation of the cross section

— Physics StackExchange

.

.

— Me@2011.10.05

.


— Me@2022-04-13

.

.

# 伏線驅動程式 1.4

.

“Will that tutoring centre actually hire me? Should I wait for that job, or look for other work right away?”

— Me@2022-04-12 11:34:16 AM

.

.

# 1990s, 19

.

— Me@2022-04-11 11:04:37 AM

.

.

# Ex 1.24 Constraint forces, 1.3

Structure and Interpretation of Classical Mechanics

.

Find the tension in an undriven planar pendulum.

~~~

Tangential component:

$\displaystyle{\begin{aligned} \left(F_{\text{net}}\right)_t &= m a_t \\ - mg \sin \theta &= m l \ddot \theta \\ \end{aligned}}$

Radial component:

$\displaystyle{\begin{aligned} \left(F_{\text{net}}\right)_r &= m a_r \\ F(t) - mg \cos \theta &= m l \dot \theta^2 \\ \end{aligned}}$

So the tension is

$\displaystyle{F(t) = m l \dot \theta^2 + mg \cos \theta}$

— Me@2022-04-10 04:22:27 PM

.

.

# Remove time from physics

motohagiography on May 14, 2018 | next [–]

I once saw a fridge magnet that said “time is nature’s way of making sure everything doesn’t happen all at once,” and it’s stuck with me.

The concept of time not being “real,” can be useful as an exercise for modelling problems where to fully explore the problem space, you need to decouple your solutions from needing them to occur in an order or sequence.

From an engineering perspective, “removing” time means you can model problems abstractly by stepping back from a problem and asking, what are all possible states of the mechanism, then which ones are we implementing, and finally, in what order. This is different from the relatively stochastic approach most people take of “given X, what is the necessary next step to get to desired endstate.”

More simply, as a tool, time helps us apprehend the states of a system by reducing the scope of our perception of them to sets of serial, ordered phenomena.

Whether it is “real,” or an artifact of our perception is sort of immaterial when you can choose to reason about things with it, or without it. A friend once joked that math is what you get when you remove time from physics.

I look forward to the author’s new book.

— Hacker News

.

.

2022.04.10 Sunday ACHK

# Fearing the wolf

The sheep will spend its entire life fearing the wolf only to be eaten by the shepherd.

.

.

2022.04.09 Saturday ACHK

# 十萬七千里 3.2

.

… began doing just one thing over which he had control.

— Stephen Covey

.

.

— Me@2022-04-09 04:20:42 PM

.

.

# Inspiration

If you need inspiring words, don’t do it.

— Elon Musk

.

.

2022.04.08 Friday ACHK

# Quick Calculation 3.2

A First Course in String Theory

.

Verify that the gauge transformations (3.10) are correctly summarized by (3.21).

~~~

Eq. (3.21):

$\displaystyle{ \begin{aligned} A_\nu' &= A_\nu + \partial_\nu \epsilon \\ \end{aligned} }$

.

$\displaystyle{ \begin{aligned} \left( A_0', A_1', ... \right) &= \left( - \Phi + \frac{\partial \epsilon}{\partial x^0}, A^1 + \frac{\partial \epsilon}{\partial x^1}, ... \right) \\ \left( -\Phi', {A^1}', ... \right) &= \left( - \Phi + \frac{1}{c} \frac{\partial \epsilon}{\partial t}, A^1 + \frac{\partial \epsilon}{\partial x^1}, ... \right) \\ \end{aligned} }$

.

$\displaystyle{ \begin{aligned} \Phi' &= \Phi - \frac{1}{c} \frac{\partial \epsilon}{\partial t} \\ \left( {A^1}', {A^2}', {A^3}' \right) &= \left( {A^1}, {A^2}, {A^3} \right) + \left( \frac{\partial}{\partial x^1}, \frac{\partial}{\partial x^2}, \frac{\partial}{\partial x^3} \right) \epsilon \\ \end{aligned} }$

.

Eq. (3.10):

$\displaystyle{ \begin{aligned} \Phi' &= \Phi - \frac{1}{c} \frac{\partial \epsilon}{\partial t} \\ \vec A' &= \vec A + \nabla \epsilon \\ \end{aligned} }$

— Me@2022-04-07 07:05:29 PM

.

.

# C and Lisp

numeromancer on Jan 25, 2010 [-]

In every art there is a dichotomy between the practical and the theoretical, and each has their fundamentals. In Comp. Sci., those two sets of fundamentals are these: sets of machine instructions, which come in several varieties; and lambda calculus, or one of the equivalent (by Church’s Thesis) formal systems. C and Lisp are similar in that they represent the first steps in each case to reach the other: C is a level above machine code, providing some abstraction and portability to the use of machine code, the fundamental elements of practical computing; lisp is a level above lambda calculus, providing a practical system for using functions, the fundamental elements of theoretical computing.

In short, mastery of C is concomitant with the ability to measure the cost of computation (sometimes, regardless of the value of it); mastery of Lisp is concomitant with the ability to measure the value of computation (sometimes, regardless of the cost).

Since C and Lisp lie on opposite borders of the universe of computation, knowing both will allow you to better measure the scope of that universe.

— Ask HN: Why does learning lisp make you a better C-programmer?

— Hacker News

.

.

2022.04.07 Thursday ACHK

# Circus

drblast on Oct 27, 2010 | next [–]

Living with one child is like living with a demanding, but mostly reasonable, roommate who really likes spending time with you until she goes to bed early.

Having two or more children is like living in a circus where all the performers are deaf.

.

.

2022.04.07 Thursday ACHK

# 伏線驅動程式 1.3

.

Why was I teaching here, a few years ago?

“May I talk to the people who interviewed me that day?”

“No, you may not.”

“Will that tutoring centre actually hire me? Should I wait for that job, or look for other work right away?”

— Me@2022-03-08 12:06:18 PM

.

.