Einstein (Index) Notation – Full Notes

Dot and double-dot products, gradient operators, product rules, conversions between vector and index form, reading tricks, and intuition.

5. Dot & Double-Dot Products

Contractions in Einstein form. A repeated index implies summation; an index may appear at most twice in any term.

| Operation | Index Form | Result | Example / Notes |
|---|---|---|---|
| Vector · Vector | $a_i b_i$ | Scalar | $a\!\cdot\!b$ |
| Vector · Tensor | $a_i T_{ij}$ | Vector | $(a\!\cdot\!T)_j$ |
| Tensor · Tensor (single contraction) | $A_{ij}B_{jk}$ | Tensor | Matrix multiplication $(AB)_{ik}$ |
| Tensor : Tensor (double dot) | $A_{ij}B_{ij}$ | Scalar | $A\!:\!B=\sum_{i,j}A_{ij}B_{ij}=\mathrm{tr}(A^{\mathsf T}B)$ |
Kronecker delta (identity)
$$\delta_{ij}=\begin{cases}1,&i=j\\0,&i\ne j\end{cases},\qquad \delta_{ij}A_{jk}=A_{ik}.$$
Trace (full contraction)
$$A_{ii}=\mathrm{tr}(A),\qquad A\!:\!B=\mathrm{tr}(A^{\mathsf T}B).$$
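
NumPy's `einsum` takes exactly these index strings, so every row of the table can be checked numerically. A minimal sketch (the arrays are arbitrary random data; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(3), rng.random(3)
A, B, T = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))

# a_i b_i: i repeated -> summed -> scalar
assert np.isclose(np.einsum('i,i->', a, b), a @ b)

# a_i T_ij: i summed, j free -> vector
assert np.allclose(np.einsum('i,ij->j', a, T), a @ T)

# A_ij B_jk: j summed, i and k free -> matrix product
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)

# A_ij B_ij: both indices summed -> scalar, equal to tr(A^T B)
assert np.isclose(np.einsum('ij,ij->', A, B), np.trace(A.T @ B))

# delta_ij A_jk = A_ik: contracting with the identity relabels the index
assert np.allclose(np.einsum('ij,jk->ik', np.eye(3), A), A)

# A_ii = tr(A): full self-contraction
assert np.isclose(np.einsum('ii->', A), np.trace(A))
```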

6. Gradient Operators in Index Form

Throughout, $\partial_i\equiv\dfrac{\partial}{\partial x_i}$. For a vector $u$, we use the convention $(\nabla u)_{ij}=\partial_j u_i$.

| Operator | Definition (Index) | Result |
|---|---|---|
| Gradient | $(\nabla f)_i=\partial_i f$ | Vector |
| Divergence | $\nabla\!\cdot u=\partial_i u_i$ | Scalar |
| Curl | $(\nabla\times u)_i=\epsilon_{ijk}\partial_j u_k$ | Vector |
| Laplacian (scalar) | $\nabla^2 f=\partial_i\partial_i f$ | Scalar |
| Laplacian (vector) | $(\nabla^2 u)_i=\partial_j\partial_j u_i$ | Vector |
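
Each definition in the table can also be spelled out index by index with SymPy. A sketch, where the scalar field `f` and vector field `u` are arbitrary examples chosen only for illustration:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
f = x1**2 * x2 + sp.sin(x3)                    # example scalar field
u = [x2 * x3, x1**2, x1 * x2 * x3]             # example vector field

grad_f = [sp.diff(f, xi) for xi in X]          # (grad f)_i = d_i f
div_u = sum(sp.diff(u[i], X[i]) for i in range(3))   # d_i u_i

# (curl u)_i = eps_ijk d_j u_k, with eps the Levi-Civita symbol
curl_u = [sum(sp.LeviCivita(i, j, k) * sp.diff(u[k], X[j])
              for j in range(3) for k in range(3))
          for i in range(3)]

lap_f = sum(sp.diff(f, xi, 2) for xi in X)            # d_i d_i f
lap_u = [sum(sp.diff(u[i], xj, 2) for xj in X)        # d_j d_j u_i
         for i in range(3)]
```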

Index Notation Reminders

  • Repeated index ⇒ summation (contraction). An index may appear at most twice in a term.
  • Dot product → single contraction: $a_i b_i=a\cdot b$.
  • Double dot → full contraction: $A_{ij}B_{ij}=A\!:\!B=\mathrm{tr}(A^{\mathsf T}B)$.
  • Gradient and divergence: $\nabla a$ → tensor; $\nabla\!\cdot a$ → scalar.

Step 1. Start from Definitions

| Concept | Index form | Vector form |
|---|---|---|
| Gradient of scalar $f$ | $(\nabla f)_i=\partial_i f$ | $\nabla f$ |
| Divergence of vector $a$ | $\nabla\!\cdot a=\partial_i a_i$ | $\nabla\!\cdot a$ |
| Laplacian of scalar $f$ | $\nabla^2 f=\partial_i\partial_i f$ | $\nabla^2 f$ |
| Laplacian of vector $u$ | $(\nabla^2 u)_i=\partial_j\partial_j u_i$ | $\nabla^2 u$ |

Step 2. Laplacian as “divergence of a gradient”

$$\nabla^2=\nabla\!\cdot\nabla.$$

Scalar $f$: $$\nabla^2 f=\nabla\!\cdot(\nabla f)=\partial_i(\partial_i f)=\partial_i\partial_i f.$$

Vector $u=(u_1,u_2,u_3)$: $$(\nabla^2 u)_i=\partial_j\partial_j u_i,\qquad \nabla^2 u=\begin{bmatrix}\nabla^2 u_1\\ \nabla^2 u_2\\ \nabla^2 u_3\end{bmatrix}.$$
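
A quick symbolic check that the two routes agree, with an arbitrary test function (a sketch, not part of the derivation):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
f = sp.exp(x1) * sp.cos(x2) + x3**3   # arbitrary test scalar

# div(grad f) = d_i (d_i f), summing the repeated index i
div_grad_f = sum(sp.diff(sp.diff(f, xi), xi) for xi in X)
lap_f = sum(sp.diff(f, xi, 2) for xi in X)

assert sp.simplify(div_grad_f - lap_f) == 0
```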

Step 3. Product Rule in Index Notation

For the outer product $(ab)_{ij}=a_i b_j$: $$\big(\nabla\!\cdot(ab)\big)_j=\partial_i(a_i b_j) =(\partial_i a_i)\,b_j + a_i(\partial_i b_j).$$

Step 4. Go Back to Vector Form

$\partial_i a_i=\nabla\!\cdot a$
$a_i\,\partial_i b_j=\big((a\cdot\nabla)b\big)_j$ (with the convention $(\nabla b)_{ij}=\partial_j b_i$ from above, $\partial_i b_j=(\nabla b)_{ji}$)

Substitute: $$\partial_i(a_i b_j)=(\nabla\!\cdot a)\,b_j + \big((a\cdot\nabla)b\big)_j,$$ $$\boxed{\nabla\!\cdot(ab)=(\nabla\!\cdot a)\,b + (a\cdot\nabla)\,b.}$$
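
The boxed identity can be verified component by component. A SymPy sketch with arbitrary example fields `a` and `b`:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
a = [x1 * x2, x2 * x3, x3 * x1]     # arbitrary vector fields
b = [sp.sin(x1), x2**2, x1 * x3]

for j in range(3):
    # left side: d_i (a_i b_j), with i summed
    lhs = sum(sp.diff(a[i] * b[j], X[i]) for i in range(3))
    # right side: (d_i a_i) b_j + a_i d_i b_j
    div_a = sum(sp.diff(a[i], X[i]) for i in range(3))
    adv_bj = sum(a[i] * sp.diff(b[j], X[i]) for i in range(3))
    assert sp.simplify(lhs - (div_a * b[j] + adv_bj)) == 0
```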

Step 5. Switch Between Forms

| You see this (vector) | Index form |
|---|---|
| $\nabla\!\cdot a$ | $\partial_i a_i$ |
| $\nabla f$ | $\partial_i f$ |
| $\nabla\!\cdot(ab)$ | $\partial_i(a_i b_j)$ |
| $(a\cdot\nabla)b$ | $a_i\,\partial_i b_j$ |
| $\nabla^2 u$ | $\partial_j\partial_j u_i$ |
Repeated index → sum (contraction). Free index → the remaining component of the result.
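
Once derivatives are stored as arrays, each row of this dictionary is read straight off as an `einsum` string. In the sketch below, `db` is a hypothetical precomputed Jacobian at a single point, with the (assumed) layout `db[i, j]` $=\partial_i b_j$:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random(3)          # a_i at one point
gf = rng.random(3)         # stand-in values for d_i f at the same point
db = rng.random((3, 3))    # stand-in Jacobian, db[i, j] = d_i b_j

div_b = np.einsum('ii->', db)        # d_i b_i: trace of the Jacobian
adv_f = np.einsum('i,i->', a, gf)    # a_i d_i f -> scalar
adv_b = np.einsum('i,ij->j', a, db)  # a_i d_i b_j -> vector, (a.grad)b
```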

How to Go Between Vector & Einstein (Index) Notation

Key principle: Twice = sum; once = free output component.

| Vector notation | Einstein notation | Result type |
|---|---|---|
| Component of $a$ | $a_i$ | Scalar (component) |
| $\nabla f$ | $(\nabla f)_i=\partial_i f$ | Vector |
| $\nabla\!\cdot a$ | $\partial_i a_i$ | Scalar |
| $(a\cdot\nabla)f$ | $a_i\,\partial_i f$ | Scalar |
| $(a\cdot\nabla)b$ | $a_i\,\partial_i b_j$ | Vector |
| $\nabla\times a$ | $(\nabla\times a)_i=\epsilon_{ijk}\partial_j a_k$ | Vector |
| $\nabla^2 f$ | $\partial_i\partial_i f$ | Scalar |
| $\nabla^2 a$ | $(\nabla^2 a)_i=\partial_j\partial_j a_i$ | Vector |
| $\nabla(a\cdot b)$ | $\partial_i(a_j b_j)$ | Vector |
| $\nabla\!\cdot(ab)$ | $\partial_i(a_i b_j)$ | Vector |
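
The Levi-Civita symbol is just a $3\times3\times3$ array of $0,\pm1$, so cross-product-type contractions also fit the `einsum` pattern. The sketch below checks $(a\times b)_i=\epsilon_{ijk}a_j b_k$ against `np.cross`; the curl row has the same index structure with $\partial_j$ in place of $a_j$:

```python
import numpy as np

# Build eps_ijk: +1 on even permutations of (0,1,2), -1 on odd, 0 otherwise
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutation
    eps[i, k, j] = -1.0  # swapping two indices flips the sign

rng = np.random.default_rng(2)
a, b = rng.random(3), rng.random(3)

# (a x b)_i = eps_ijk a_j b_k: j, k summed, i free -> vector
cross = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross, np.cross(a, b))
```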

Practice: Recognize “Repeated vs Free”

| Expression | What happens | Vector meaning |
|---|---|---|
| $a_i b_i$ | $i$ repeated → sum → scalar | $a\cdot b$ |
| $a_i b_j$ | both free → tensor | $ab$ (outer product) |
| $a_i \partial_i b_j$ | $i$ summed; $j$ free → vector | $(a\cdot\nabla)b$ |
| $\partial_i a_i$ | $i$ summed → scalar | $\nabla\!\cdot a$ |
| $\epsilon_{ijk}\partial_j a_k$ | $j,k$ summed; $i$ free → vector (antisymmetric in $j,k$) | $(\nabla\times a)_i$ |

Reading Trick

  • Repeated index → contraction (dot) → reduces rank.
  • 0 free indices → scalar; 1 free → vector; 2 free → 2nd-order tensor; 3 free → 3rd-order tensor; etc.
  • Count free indices to know the result’s type instantly (a small helper sketch follows below).
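
This bookkeeping is mechanical enough to automate. The helper below (`result_indices` is a hypothetical name, not a library function) takes the index string of a term and returns its free indices; their count is the rank of the result:

```python
from collections import Counter

def result_indices(term: str) -> list[str]:
    """Free indices of a term like 'ij,jk' (for A_ij B_jk).
    Repeated indices are summed away; the rest are free."""
    counts = Counter(term.replace(',', ''))
    # The summation convention forbids an index appearing more than twice
    assert all(n <= 2 for n in counts.values()), 'index appears > twice'
    return [idx for idx, n in counts.items() if n == 1]

print(result_indices('i,i'))    # []         -> 0 free: scalar  (a_i b_i)
print(result_indices('i,j'))    # ['i', 'j'] -> 2 free: tensor  (a_i b_j)
print(result_indices('ij,jk'))  # ['i', 'k'] -> 2 free: tensor  (A_ij B_jk)
print(result_indices('i,ij'))   # ['j']      -> 1 free: vector  (a_i d_i b_j)
```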

Intuition & Correct Wording

Each index = a spatial direction

In 3D, $a=[a_1,a_2,a_3]=(a_x,a_y,a_z)$. The symbol $a_i$ means “the $i$-th component of $a$.”

Why “repeated = sum”

Dot product by components: $a\cdot b=a_x b_x+a_y b_y+a_z b_z \equiv a_i b_i$.
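
In code the abbreviation unfolds into exactly that three-term sum; a tiny check:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b_i is shorthand for the explicit component sum
explicit = sum(a[i] * b[i] for i in range(3))   # a_x b_x + a_y b_y + a_z b_z
assert np.isclose(explicit, np.einsum('i,i->', a, b))
```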

Free index = output component

$(\nabla f)_i=\partial_i f$ is a vector; $i$ labels which component. Likewise $(\nabla^2 u)_i=\partial_j\partial_j u_i$.

Two free indices = tensor

$(\nabla u)_{ij}=\partial_j u_i$ has two free indices → a 2nd-order tensor (matrix).

Say it right: Use “$i$-th component” or “sum over the repeated index $i$,” not “$i$-th iteration.” Einstein summation is a symbolic sum, not a programming loop.

Shortcut Map (Cheat Sheet)

| Operation | Einstein form | Result |
|---|---|---|
| Gradient of scalar | $\partial_i f$ | Vector |
| Divergence of vector | $\partial_i a_i$ | Scalar |
| Curl of vector | $\epsilon_{ijk}\partial_j a_k$ | Vector |
| Laplacian of scalar | $\partial_i\partial_i f$ | Scalar |
| Laplacian of vector | $\partial_j\partial_j a_i$ | Vector |
| Directional derivative | $a_i\,\partial_i b_j$ | Vector |
| Divergence of tensor $A$ | $\partial_i A_{ij}$ | Vector |
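
The last row, the divergence of a tensor, is the one operation not demonstrated above. A SymPy sketch with an arbitrary tensor field `A` (note $i$, the first index, is summed, leaving $j$ free):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
A = sp.Matrix([[x1 * x2, x3,      0],
               [x2**2,   x1 * x3, x2],
               [x1,      0,       x2 * x3]])   # arbitrary tensor field

# (div A)_j = d_i A_ij: i summed, j free -> a vector
div_A = [sum(sp.diff(A[i, j], X[i]) for i in range(3)) for j in range(3)]
print(div_A)   # [3*x2, 0, x2 + 1]
```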