\( \newcommand{\C}{\mathbb{C}} \newcommand{\R}{\mathbb{R}} \newcommand{\F}{\mathbb{F}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\D}{\mathbb{D}} \def\H{\mathbb{H}} \newcommand{\mult}{\mathop{\mathrm{mult}}\nolimits} \renewcommand{\L}{{\mathcal{L}}} \newcommand{\B}{{\mathcal{B}}} \newcommand{\A}{{\mathcal{A}}} \newcommand{\M}{{\mathcal{M}}} \renewcommand{\S}{{\mathcal{S}}} \newcommand{\T}{{\mathcal{T}}} \newcommand{\dist}{\mathop{\mathrm{dist}}\nolimits} \newcommand{\Null}{\mathop{\mathrm{null}}\nolimits} \newcommand{\Span}{\mathop{\mathrm{span}}\nolimits} \newcommand{\re}{\mathop{\mathrm{Re}}\nolimits} \newcommand{\Ind}{\mathop{\mathrm{Ind}}\nolimits} \newcommand{\Res}{\mathop{\mathrm{Res}}\nolimits} \newcommand{\esssup}{\mathop{\mathrm{ess\,sup}}} \newcommand{\im}{\mathop{\mathrm{Im}}\nolimits} \newcommand{\Int}{\mathop{\mathrm{int}}\nolimits} \newcommand{\graph}{\mathop{\mathrm{graph}}\nolimits} \newcommand{\arccot}{\mathop{\mathrm{arccot}}\nolimits} \newcommand{\myallowbreak}{} \newcommand\p[2][]{\frac{\partial #1}{\partial #2}} \newcommand\sgn{\mathop{\mathrm{sgn}}\nolimits} \newcommand\cl{\mathop{\mathrm{cl}}\nolimits} \newcommand\abs[1]{|#1|} \)

MATH 55203–55303

Theory of Functions of a Complex Variable I–II

Fall 2025–Spring 2026

(Fall) SCEN 404, MWF 2:00–2:50 p.m.

Next class day: Friday, September 19, 2025

Contents

1.1. Elementary Properties of the Complex Numbers
1.2. Real Analysis
1.2. Further Properties of the Complex Numbers
1.3. Real Analysis
1.3. Complex Polynomials
1.3. The complex derivatives \(\p {z}\) and \(\p {\bar z}\)
1.4. Real Analysis
1.4. Holomorphic Functions, the Cauchy-Riemann Equations, and Harmonic Functions
1.5. Real Analysis
1.5. Real and Holomorphic Antiderivatives
2.1. Real Analysis
2.1. Real and Complex Line Integrals
Addendum: Change of variables
2.2. Real Analysis
2.2. Complex Differentiability and Conformality

1.1. Elementary Properties of the Complex Numbers

[Definition: The complex numbers] The set of complex numbers is \(\R ^2\), denoted \(\C \). (In this class, you may use everything you know about \(\R \) and \(\R ^2\)—in particular, that \(\R ^2\) is an abelian group and a normed vector space.)

[Definition: Real and imaginary parts] If \((x,y)\) is a complex number, then \(\re (x,y)=x\) and \(\im (x,y)=y\).

[Definition: Addition and multiplication] If \((x,y)\) and \((\xi ,\eta )\) are two complex numbers, we define

\begin{align*} (x,y)+(\xi ,\eta )&=(x+\xi ,y+\eta ),\\ (x,y)\cdot (\xi ,\eta )&=(x\xi -y\eta ,x\eta +y\xi ). \end{align*}
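For example, these rules give
\begin{equation*}(1,2)+(3,4)=(4,6) \quad \text {and}\quad (1,2)\cdot (3,4)=(1\cdot 3-2\cdot 4,\,1\cdot 4+2\cdot 3)=(-5,10).\end{equation*}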

(Problem 10) Show that multiplication in the complex numbers is commutative.

Let \((x,y)\) and \((\xi ,\eta )\) be two complex numbers. Then

\begin{align*} (x,y)\cdot (\xi ,\eta )&=(x\xi -y\eta ,x\eta +y\xi ),\\ (\xi ,\eta )\cdot (x,y)&=(\xi x-\eta y,\xi y+\eta x). \end{align*}

Because multiplication in the real numbers is commutative, we have that

\begin{align*} (\xi ,\eta )\cdot (x,y)&=(x\xi -y\eta ,y\xi + x\eta ). \end{align*}

Because addition in the real numbers is commutative, we have that

\begin{align*} (\xi ,\eta )\cdot (x,y)&=(x\xi -y\eta , x\eta +y\xi )=(x,y)\cdot (\xi ,\eta ) \end{align*}

as desired.

(Fact 20) This notion of addition and multiplication makes the complex numbers a ring—thus, multiplication is also associative and distributes over addition.

(Problem 30) What is the multiplicative identity?

(Problem 40) Let \(r\) be a real number. Recall that \(\C =\R ^2\) is a vector space over \(\R \), so we can multiply vectors (complex numbers) by scalars (real numbers). Is there a complex number \((\xi ,\eta )\) such that \(r(x,y)=(\xi ,\eta )\cdot (x,y)\) for all \((x,y)\in \C \)?

[Definition: Notation for the complex numbers] We write \(i\) for the complex number \((0,1)\), and we identify each real number \(x\) with the complex number \((x,0)\).

(Problem 50) If \(x\), \(y\) are real numbers, what complex number is \(x+iy\)?

(Problem 60) If \(z=x+iy\) for \(x\), \(y\) real, what are \(\re z\) and \(\im z\)?

(Problem 70) If \(z\in \C \) and \(r\) is real, what are \(\re (zr)\) and \(\im (zr)\)?

(Problem 80) If \(z\), \(w\in \C \), what are \(\re (z+w)\), \(\im (z+w)\) in terms of \(\re z\), \(\re w\), \(\im z\), and \(\im w\)?

(Problem 90) If \(z\), \(w\in \C \), what are \(\re (zw)\), \(\im (zw)\) in terms of \(\re z\), \(\re w\), \(\im z\), and \(\im w\)?

[Definition: Conjugate] The conjugate to the complex number \(x+iy\), where \(x\), \(y\) are real, is \(\overline {x+iy}=x-iy\).

(Problem 100) If \(z\) and \(w\) are complex numbers, show that \(\bar z+\overline w=\overline {z+w}\).

(Problem 110) Show that \(\bar z\cdot \overline {w}=\overline {zw}\).

(Problem 120) Write \(\re z\) and \(\im z\) in terms of \(z\) and \(\bar z\).

(Problem 130) Show that \(z\bar z\) is always real and nonnegative. If \(z\bar z=0\), what can you say about \(z\)?

(Problem 140) If \(z\) is a complex number with \(z\neq 0\), show that there exists another complex number \(w\) such that \(zw=1\). Give a formula for \(w\) in terms of \(z\). We will write \(w=\frac {1}{z}\).

\(z\bar z\) is a positive real number, and we know from real analysis that positive real numbers have reciprocals. Thus \(\frac {1}{z\bar z}\in \R \). We can multiply complex numbers by real numbers, so \(\frac {1}{z\bar z}\bar z\) is a complex number and it is the \(w\) of the problem statement.

[Definition: Modulus] If \(z\) is a complex number, we define its modulus \(|z|\) as \(|z|=\sqrt {z\bar z}\).
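For example, if \(z=3+4i\), then
\begin{equation*}z\bar z=(3+4i)(3-4i)=9+16=25, \qquad |z|=5, \qquad \frac {1}{z}=\frac {1}{z\bar z}\bar z=\frac {3-4i}{25}.\end{equation*}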

(Fact 150) \(|\re z|\leq |z|\) and \(|\im z|\leq |z|\) (where the first \(|\,\cdot \,|\) denotes the absolute value in the real numbers and the second \(|\,\cdot \,|\) denotes the modulus in the complex numbers.)

(Problem 160) If \(z\) and \(w\) are complex numbers, show that \(|zw|=|z|\,|w|\).

(Problem 170) Give an example of a non-constant polynomial that has no roots (solutions) that are real numbers. Find a root (solution) to your polynomial that is a complex number.

1.2. Real Analysis

(Fact 180) If \(z=x+iy=(x,y)\), then the complex modulus \(|z|\) is equal to the vector space norm \(\|(x,y)\|\) in \(\R ^2\).

(Fact 190) \(\C \) is complete as a metric space if we use the expected metric \(d(z,w)=|z-w|\).

(Bashar, Problem 200) Recall that \((\R ^2,d)\) is a metric space, where \(d(u,v)=\|u-v\|\). In particular, this metric satisfies the triangle inequality. Write the triangle inequality as a statement about moduli of complex numbers. Simplify your statement as much as possible.

The conclusion is that \(|z+w|\leq |z|+|w|\) for all \(z\), \(w\in \C \). This is Proposition 1.2.3 in your textbook.

(Memory 210) If \(\{a_n\}_{n=1}^\infty \) is a sequence of points in \(\R ^p\), \(a\in \R ^p\), and we write \(a_n=(a_n^1,a_n^2,\dots ,a_n^p)\), \(a=(a^1,\dots ,a^p)\), then \(a_n\to a\) (in the metric space sense) if and only if \(a_n^k\to a^k\) for each \(1\leq k\leq p\).

(Dibyendu, Problem 220) What does this tell you about the complex numbers?

If \(\{z_n\}_{n=1}^\infty \) is a sequence of points in \(\C \) and \(z\in \C \), then \(z_n\to z\) if and only if both \(\re z_n\to \re z\) and \(\im z_n\to \im z\).

[Definition: Maclaurin series] If \(f:\R \to \R \) is an infinitely differentiable function, then the Maclaurin series for \(f\) is the power series

\begin{equation*}\sum _{n=0}^\infty \frac {f^{(n)}(0)}{n!}x^n\end{equation*}
with the convention that \(0^0=1\).

(Memory 221) If \(x\) is real, then the Maclaurin series for \(\exp x\), \(\sin x\), or \(\cos x\) converges to \(\exp x\), \(\sin x\), or \(\cos x\), respectively.

(Memory 230) The Maclaurin series for the \(\exp \) function is \(\sum _{k=0}^\infty \frac {x^k}{k!}\).

(Memory 240) The Maclaurin series for the \(\sin \) function is \(\sum _{k=0}^\infty (-1)^{k}\frac {x^{2k+1}}{(2k+1)!}\).

(Memory 250) The Maclaurin series for the \(\cos \) function is \(\sum _{k=0}^\infty (-1)^{k}\frac {x^{2k}}{(2k)!}\).

(Memory 270) If \(x\) and \(t\) are real numbers then

\begin{align*} \sin (x+t)&=\sin x\cos t+\sin t\cos x,\\ \cos (x+t)&=\cos x\cos t-\sin x\sin t.\end{align*}

(Memory 280) The Cauchy-Schwarz inequality for real numbers states that if \(n\in \N \) is a positive integer, and if for each \(k\) with \(1\leq k\leq n\) the numbers \(x_k\), \(\xi _k\) are real, then

\begin{equation*}\Bigl (\sum _{k=1}^n x_k\,\xi _k\Bigr )^2\leq \Bigl (\sum _{k=1}^n x_k^2 \Bigr )\Bigl (\sum _{k=1}^n \xi _k^2\Bigr ).\end{equation*}

1.2. Further Properties of the Complex Numbers

(Hope, Problem 290) State the Cauchy-Schwarz inequality for complex numbers and prove that it is valid.

This is Proposition 1.2.4 in your book. If \(n\in \N \), and if \(z_1\), \(z_2,\dots ,z_n\) and \(w_1\), \(w_2,\dots ,w_n\) are complex numbers, then

\begin{equation*}\Bigl |\sum _{k=1}^n z_k\,w_k\Bigr |^2\leq \sum _{k=1}^n |z_k|^2 \sum _{k=1}^n |w_k|^2.\end{equation*}

We can prove this as follows. By the triangle inequality, \(|z_1w_1+z_2w_2|\leq |z_1w_1|+|z_2w_2|=|z_1||w_1|+|z_2||w_2|\). A straightforward induction argument yields that

\begin{equation*}\Bigl |\sum _{k=1}^n z_k\,w_k\Bigr | \leq \sum _{k=1}^n|z_k||w_k|.\end{equation*}
Applying the real Cauchy-Schwarz inequality with \(x_k=|z_k|\) and \(\xi _k=|w_k|\) completes the proof.

(James, Problem 300) Let \(z\in \C \). Consider the series \(\sum _{k=0}^\infty \frac {z^k}{k!}\), that is, the sequence of complex numbers \(\bigl \{\sum _{k=0}^n \frac {z^k}{k!}\bigr \}_{n=0}^\infty \). Show that this sequence is a Cauchy sequence.

(Problem 310) Since \(\C \) is complete, the series converges. If \(z=x\) is a real number, to what number does the series converge?

It converges to \(e^x\).

(Micah, Problem 320) If \(z=iy\) is purely imaginary (that is, if \(y\in \R \)), show that \(\sum _{k=0}^\infty \frac {(iy)^k}{k!}\) converges to \(\cos y+i\sin y\).

An induction argument establishes that

\begin{align*}\re i^k &= \begin {cases} 0, &k\text { is odd},\\1,&k\text { is even and a multiple of~$4$},\\-1,&k\text { is even and not a multiple of~$4$},\end {cases} \end{align*}

and

\begin{align*} \im i^k &= \begin {cases} 0, &k\text { is even},\\1,&k\text { is odd and one more than a multiple of~$4$},\\-1,&k\text { is odd and one less than a multiple of~$4$}.\end {cases} \end{align*}

We then see that we may write the Maclaurin series for \(\cos \) and \(\sin \) as

\begin{equation*}\cos (y)=\sum _{k=0}^\infty \re i^k \frac {y^k}{k!}, \qquad \sin (y)=\sum _{k=0}^\infty \im i^k \frac {y^k}{k!} .\end{equation*}
We then have that
\begin{equation*}\re \Bigl (\sum _{k=0}^n \frac {(iy)^k}{k!}\Bigr ) =\sum _{k=0}^n \re \biggl (\frac {(iy)^k}{k!}\biggr ) =\sum _{k=0}^n \re i^k\frac {y^k}{k!}\end{equation*}
which converges to \(\cos y\) as \(n\to \infty \). Similarly
\begin{equation*}\im \Bigl (\sum _{k=0}^n \frac {(iy)^k}{k!}\Bigr ) =\sum _{k=0}^n \im \biggl (\frac {(iy)^k}{k!}\biggr ) =\sum _{k=0}^n \im i^k\frac {y^k}{k!}\end{equation*}
converges to \(\sin y\) as \(n\to \infty \). Thus the series \(\sum _{k=0}^\infty \frac {(iy)^k}{k!}\) converges to \(\cos y+i\sin y\), as desired.

(Bonus Problem 330) If \(z=x+iy\), show that \(\sum _{j=0}^\infty \frac {z^j}{j!}\) converges to the product \(\bigl (\sum _{j=0}^\infty \frac {x^j}{j!}\bigr )\bigl (\sum _{j=0}^\infty \frac {(iy)^j}{j!}\bigr )\).

[Definition: The complex exponential] If \(x\) is real, we define

\begin{equation*}\exp (x)=\sum _{j=0}^\infty \frac {x^j}{j!}\quad \text { and }\quad \exp (ix)=\sum _{j=0}^\infty \frac {(ix)^j}{j!}.\end{equation*}
If \(z=x+iy\) is a complex number, we define
\begin{equation*}\exp (z)=\exp (x)\cdot \exp (iy).\end{equation*}
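For example, \(\exp (i\pi )=\cos \pi +i\sin \pi =-1\), and so
\begin{equation*}\exp (1+i\pi )=\exp (1)\cdot \exp (i\pi )=-e.\end{equation*}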

(Muhammad, Problem 340) If \(y\), \(\eta \) are real, show that \(\exp (iy+i\eta )=\exp (iy)\cdot \exp (i\eta )\).

Using the sum angle identities for sine and cosine, we compute

\begin{align*} \exp (iy+i\eta ) &=\exp (i(y+\eta )) = \cos (y+\eta )+i\sin (y+\eta ) \\&= \cos y\cos \eta - \sin y\sin \eta \\&\qquad + i\sin y\cos \eta +i \cos y\sin \eta \end{align*}

and

\begin{align*} \exp (iy)\exp (i\eta )&= (\cos y+i\sin y)(\cos \eta +i\sin \eta ) \\&= \cos y\cos \eta - \sin y\sin \eta \\&\qquad + i\sin y\cos \eta +i \cos y\sin \eta \end{align*}

and observe that they are equal.

(Robert, Problem 350) If \(z\), \(w\) are any complex numbers, show that \(\exp (z+w)=\exp (z)\cdot \exp (w)\).

There are real numbers \(x\), \(y\), \(\xi \), \(\eta \) such that \(z=x+iy\) and \(w=\xi +i\eta \).

By definition

\begin{equation*}\exp (z)=\exp (x)\exp (iy),\qquad \exp (w)=\exp (\xi )\exp (i\eta ).\end{equation*}
Because multiplication in the complex numbers is associative and commutative,
\begin{align*}\exp (z)\exp (w) &=[\exp (x)\exp (iy)][\exp (\xi )\exp (i\eta )] =[\exp (x)\exp (\xi )][\exp (iy)\exp (i\eta )] .\end{align*}

By properties of exponentials in the real numbers and by the previous problem, we see that

\begin{align*} \exp (z)\exp (w) &=\exp (x+\xi )\exp (iy+i\eta ) .\end{align*}

By definition of the complex exponential,

\begin{align*}\exp (z)\exp (w)&=\exp ((x+\xi )+i(y+\eta ))=\exp (z+w)\end{align*}

as desired.

(Sam, Problem 360) Suppose that \(z\) is a complex number and that \(|z|=1\). Show that there is a number \(\theta \in \R \) with \(\exp (i\theta )=z\). How many such numbers \(\theta \) exist? (Use only undergraduate real analysis and methods established so far in this course.)

We know from real analysis that, if \((x,y)\) lies on the unit circle, then \((x,y)=(\cos \theta ,\sin \theta )\) for some real number \(\theta \). By definition of complex modulus, if \(|z|=1\) and \(z=x+iy\) then \((x,y)\) lies on the unit circle. Thus \(z=\cos \theta +i\sin \theta =\exp (i\theta )\) for some \(\theta \in \R \).

Infinitely many such numbers \(\theta \) exist.

[Chapter 1, Problem 25] If \(\theta \), \(\varpi \in \R \), then \(e^{i\theta }=e^{i\varpi }\) if and only if \((\theta -\varpi )/(2\pi )\) is an integer.

(William, Problem 370) Suppose that \(z\) is a complex number. Show that there exist numbers \(r\in [0,\infty )\) and \(\theta \in \R \) such that \(z=r\exp (i\theta )\). How many possible values of \(r\) exist? How many possible values of \(\theta \) exist? (Use only undergraduate real analysis and methods established so far in this course.)

Observe that \(|re^{i\theta }|=r|e^{i\theta }|\) because \(r\geq 0\) and because the modulus is multiplicative (Problem 160). But \(|e^{i\theta }|=|\cos \theta +i\sin \theta |=\sqrt {\cos ^2\theta +\sin ^2\theta }=1\), and so the only possible choice for \(r\) is \(r=|z|\).

If \(z=0\) then we must have that \(r=0\) and can take any real number for \(\theta \).

If \(z\neq 0\), let \(r=|z|\). Then \(w=\frac {1}{r}z\) is a complex number with \(|w|=1\), and so there exist infinitely many values \(\theta \) with \(e^{i\theta }=w\) and thus \(z=re^{i\theta }\).

(Wilson, Problem 380) Find all solutions to the equation \(z^6=i\). Use only undergraduate real analysis and methods established so far in this course.

Suppose that \(z=re^{i\theta }\) for some \(r\geq 0\), \(\theta \in \R \).

Then \(z^6=r^6 e^{6i\theta }\). If \(z^6=i\), then \(1=|i|=|z^6|=r^6\) and so \(r=1\) because \(r\geq 0\). We must then have that \(i=e^{6i\theta }\). Observe that \(i=e^{i\pi /2}\). By Homework 1.25, we must have that \(6\theta =\pi /2+2\pi n\) for some \(n\in \Z \), and so \((e^{i\theta })^6=i\) if and only if \(\theta =\pi /12+n \pi /3\). Thus the solutions are

\begin{equation*}e^{i\pi /12},\quad e^{5i\pi /12},\quad e^{9i\pi /12},\quad e^{13i\pi /12},\quad e^{17i\pi /12},\quad e^{21i\pi /12}.\end{equation*}
Any other admissible \(\theta \) differs from one of the arguments \(\pi /12,5\pi /12,\dots ,21\pi /12\) by an integer multiple of \(2\pi \), and so yields one of the six solutions already listed.
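As a quick check of the first listed solution, Problem 350 gives
\begin{equation*}\bigl (e^{i\pi /12}\bigr )^6=e^{6i\pi /12}=e^{i\pi /2}=\cos \frac {\pi }{2}+i\sin \frac {\pi }{2}=i.\end{equation*}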

1.3. Real Analysis

(Problem 390) Give an example of a function that can be written in two different ways.

[Definition: Real polynomial] Let \(p:\R \to \R \) be a function. We say that \(p\) is a (real) polynomial in one (real) variable if there is an \(n\in \N _0\) and constants \(a_0\), \(a_1,\dots ,a_n\in \R \) such that \(p(x)=\sum _{k=0}^n a_k x^k\) for all \(x\in \R \).

[Definition: Real polynomial in two variables] Let \(p:\R ^2\to \R \) be a function. We say that \(p\) is a (real) polynomial in two (real) variables if there is an \(n\in \N _0\) and constants \(a_{k,\ell }\in \R \) such that \(p(x,y)=\sum _{k=0}^n \sum _{\ell =0}^n a_{k,\ell } x^k y^\ell \) for all \(x\), \(y\in \R \).

(Adam, Problem 400) Let \(p(x)=\sum _{k=0}^n a_k\,x^k\) and let \(q(x)=\sum _{k=0}^n b_k\,x^k\) be two polynomials in one variable, with \(a_k\), \(b_k\in \R \). Show that if \(p(x)=q(x)\) for all \(x\in \R \) then \(a_k=b_k\) for all \(k\in \N _0\).

\(p\) and \(q\) are infinitely differentiable functions from \(\R \) to \(\R \), and because \(p(x)=q(x)\) for all \(x\in \R \), we must have that \(p'=q'\), \(p''=q'',\dots ,p^{(k)}=q^{(k)}\) for all \(k\in \N \).

We compute \(p^{(k)}(0)=k!a_k\) and \(q^{(k)}(0)=k!b_k\). Setting them equal we see that \(a_k=b_k\).

[Definition: Degree] If \(p(z)=\sum _{k=0}^n a_k\,z^k\), then the degree of \(p\) is the largest nonnegative integer \(m\) such that \(a_m\neq 0\). (The degree of the zero polynomial \(p(z)=0\) is either undefined, \(-1\), or \(-\infty \).)

(Amani, Problem 410) Let \(p\) be a polynomial. Suppose that \(x_0\in \R \) and that \(p(x_0)=0\). Show that there exists a polynomial \(q\) such that \(p(x)=(x-x_0)q(x)\) for all \(x\in \R \). Further show that, if \(p\) is a polynomial of degree \(m\geq 0\), then \(q\) is a polynomial of degree \(m-1\). Hint: Use induction.

If \(p\) is the zero polynomial we may take \(q\) to also be the zero polynomial. If \(p\) is a nonzero constant polynomial then no such \(x_0\) can exist. We therefore need only consider the case where \(p\) is a polynomial of degree \(m\geq 1\).

If \(m=1\), then \(p(x)=a_1x+a_0\) for some \(a_1\), \(a_0\); if \(p(x_0)=0\) then \(a_0=-a_1x_0\) and so \(p(x)=a_1(x-x_0)\). Then \(q(x)=a_1\) is a polynomial of degree \(0=m-1\).

Suppose that the statement is true for all polynomials of degree at most \(m-1\), \(m\geq 2\). Let \(p\) be a polynomial of degree \(m\). Then \(p(x)=a_m x^m +r(x)\) where \(r\) is a polynomial of degree at most \(m-1\). We add and subtract \(a_m x_0 x^{m-1}\) to see that

\begin{equation*}p(x)=a_m x^{m-1} (x-x_0) + a_mx_0 x^{m-1}+r(x).\end{equation*}
Then \(s(x)=a_mx_0 x^{m-1}+r(x)\) is a polynomial of degree at most \(m-1\). If \(s\) is a constant then \(0=p(x_0)=a_m x_0^{m-1}(x_0-x_0)+s=s\), and so \(s=0\); taking \(q(x)=a_mx^{m-1}\) we are done.

Otherwise, \(s(x)\) is a polynomial of degree at least one and at most \(m-1\). Also, \(s(x_0)=p(x_0)-a_mx_0^{m-1}(x_0-x_0)=0\), so by the induction hypothesis \(s(x)=(x-x_0)t(x)\) for a polynomial \(t\) of degree at most \(m-2\). Taking \(q(x)=a_mx^{m-1}+t(x)\) we are done.

(Bashar, Problem 420) Let \(p(x)=\sum _{k=0}^n a_k\,x^k\) and let \(q(x)=\sum _{k=0}^n b_k\,x^k\) be two polynomials of degree at most \(n\), with \(a_k\), \(b_k\in \R \) and \(n\in \N _0\). Suppose that there are \(n+1\) distinct numbers \(x_0,x_1,\dots ,x_n\in \R \) such that \(p(x_j)=q(x_j)\) for all \(0\leq j\leq n\). Show that \(a_k=b_k\) for all \(k\in \N _0\). Hint: Consider the polynomial \(r(x)=p(x)-q(x)\).

Let \(r(x)=p(x)-q(x)\). Then \(r\) is a polynomial of degree at most \(n\), and \(r(x_j)=p(x_j)-q(x_j)=0\) for all \(0\leq j\leq n\).

Suppose for the sake of contradiction that \(r\) is not identically equal to zero. Then \(r\) is a polynomial of degree \(m\), \(0\leq m\leq n\). By Problem 410,

\begin{equation*}r(x)=(x-x_1)(x-x_2)\dots (x-x_m)r_m(x)\end{equation*}
where \(r_m\) is a polynomial of degree \(m-m\), that is, a constant. But
\begin{equation*}0=p(x_0)-q(x_0)=r(x_0)=(x_0-x_1)(x_0-x_2)\dots (x_0-x_m) r_m(x_0).\end{equation*}
Since \(x_j\neq x_0\) for all \(j\geq 1\) we must have that \(r_m(x_0)=0\); thus \(r_m\) is the constant function zero and so \(r\) is the constant function zero. This contradicts our assumption that \(r\) is not identically zero (that is, that \(r\) has a degree \(m\geq 0\)). Therefore \(r\) is identically zero, and so by Problem 400 we have that \(a_k=b_k\) for all \(k\), as desired.

(Dibyendu, Problem 430) Let \(p(x,y)=\sum _{j=0}^n\sum _{k=0}^n a_{j,k}\,x^j\,y^k\) and let \(q(x,y)=\sum _{j=0}^n\sum _{k=0}^n b_{j,k}\,x^j\,y^k\) be two polynomials of two variables, with \(a_{j,k}\), \(b_{j,k}\in \R \). Show that if \(p(x,y)=q(x,y)\) for all \((x,y)\in \R ^2\) then \(a_{j,k}=b_{j,k}\) for all \(j\), \(k\in \N _0\).

Fix a \(y\in \R \). Then \(p_y(x)=\sum _{j=0}^n \Bigl (\sum _{k=0}^n a_{j,k}y^k\Bigr ) x^j\) and \(q_y(x)=\sum _{j=0}^n \Bigl (\sum _{k=0}^n b_{j,k}y^k\Bigr ) x^j\) are both polynomials in one variable that are equal for all \(x\). So by Problem 400 their coefficients must be equal, so \(\Bigl (\sum _{k=0}^n a_{j,k}y^k\Bigr )=\Bigl (\sum _{k=0}^n b_{j,k}y^k\Bigr )\). This is true for all \(y\in \R \); another application of Problem 400 shows that \(a_{j,k}=b_{j,k}\) for all \(j\) and \(k\).

(Problem 431) Give an example of a polynomial of two variables \(p\) such that \(p(x,y)=0\) for infinitely many values of \((x,y)\), but such that \(p\) is not the zero polynomial.

The polynomial \(p(x,y)=xy\) is such a polynomial, because \(p(0,y)=0\) for every \(y\in \R \) and \(p(x,0)=0\) for every \(x\in \R \), and there are infinitely many such points \((x,y)\).

(Memory 440) If \(\Omega \subseteq \R ^2\) is both open and connected, then \(\Omega \) is path connected: for every \(z\), \(w\in \Omega \) there is a continuous function \(\gamma :[0,1]\to \Omega \) such that \(\gamma (0)=z\) and \(\gamma (1)=w\).

(Memory 450) If \(\Omega \subseteq \R ^2\) is open and connected, we may require the paths in the definition of path connectedness to be \(C^1\).

(Memory 460) If \(\Omega \subseteq \R ^2\) is open and connected, we may require the paths in the definition of path connectedness to consist of finitely many horizontal or vertical line segments.

Definition 1.3.1 (part 1). Let \(\Omega \subseteq \R ^2\) be open. Suppose that \(f:\Omega \to \R \). We say that \(f\) is continuously differentiable, or \(f\in C^1(\Omega )\), if the two partial derivatives \(\frac {\partial f}{\partial x}\) and \(\frac {\partial f}{\partial y}\) exist everywhere in \(\Omega \) and \(f\), \(\frac {\partial f}{\partial x}\), and \(\frac {\partial f}{\partial y}\) are all continuous on \(\Omega \).

(Hope, Problem 470) Let \(B=B(z,r)\) be a ball in \(\R ^2\). Let \(f\in C^1(B)\) and suppose that \(\frac {\partial f}{\partial y}=\frac {\partial f}{\partial x}=0\) everywhere in \(B\). Show that \(f\) is a constant.

Let \(z=(x,y)\). Let \((\xi ,\eta )\in B((x,y),r)\).

We consider the case \(\xi \geq x\) and \(\eta \geq y\); the cases \(\xi <x\) or \(\eta <y\) are similar. Then \(\{(t,y):x\leq t\leq \xi \}\subset B((x,y),r)\), and if we let \(f_y(t)=f(t,y)\), then \(f_y\) is a continuously differentiable function on \([x,\xi ]\) with \(f_y'(t)=0\) for all \(x\leq t\leq \xi \); by the Mean Value Theorem, \(f_y(x)=f_y(\xi )\) and so \(f(x,y)=f(\xi ,y)\). Similarly, \(\{(\xi ,t):y\leq t\leq \eta \}\subset B((x,y),r)\), and so \(f(x,y)=f(\xi ,y)=f(\xi ,\eta )\).

Thus \(f\) is a constant in \(B((x,y),r)\).

(James, Problem 480) Suppose that \(\Omega \subseteq \R ^2\) is open and connected. Let \(f\in C^1(\Omega )\) and suppose that \(\frac {\partial f}{\partial y}=\frac {\partial f}{\partial x}=0\) everywhere in \(\Omega \). Show that \(f\) is a constant.

1.3. Complex Polynomials

[Definition: Complex polynomials in one variable] Let \(p:\C \to \C \) be a function. We say that \(p\) is a polynomial in one (complex) variable if there is an \(n\in \N _0\) and constants \(a_0\), \(a_1,\dots ,a_n\in \C \) such that \(p(z)=\sum _{k=0}^n a_k z^k\) for all \(z\in \C \).

[Definition: Complex polynomial in two variables] Let \(p:\C \to \C \) be a function. We say that \(p\) is a polynomial in two real variables if there is an \(n\in \N _0\) and constants \(a_{k,\ell }\in \C \) such that \(p(x+iy)=\sum _{k=0}^n \sum _{\ell =0}^n a_{k,\ell } x^k y^\ell \) for all \(x\), \(y\in \R \).

(Fact 490) Problem 430 is true for complex polynomials of two real variables; that is, if \(p(x+iy)=\sum _{k,\ell =0}^n a_{k,\ell } x^ky^\ell \), \(q(x+iy)=\sum _{k,\ell =0}^n c_{k,\ell } x^ky^\ell \), and \(p(z)=q(z)\) for all \(z\in \C \), then \(a_{k,\ell }=c_{k,\ell }\) for all \(k\) and \(\ell \).

(Fact 500) Problem 400, Problem 410, and Problem 420 are valid for complex polynomials of one complex variable. There are complex polynomials of two real variables with infinitely many zeroes, as in Problem 431, that are not the zero polynomial.

[Chapter 1, Problem 35] The functions \(p(x+iy)=x\) and \(q(x+iy)=x^2\) are both clearly polynomials of two real variables. Prove that neither is a polynomial of one complex variable.

(Micah, Problem 510) Show that \(p\) is a polynomial in two real variables if and only if there are constants \(b_{k,\ell }\in \C \) such that \(p(z)=\sum _{k=0}^n \sum _{\ell =0}^n b_{k,\ell } z^k \bar z^\ell \) for all \(z\in \C \).
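For instance, using \(x=\frac {z+\bar z}{2}\) and \(y=\frac {z-\bar z}{2i}\) (Problem 120), the polynomial \(p(x+iy)=x^2+y^2\) may be rewritten as
\begin{equation*}p(z)=\Bigl (\frac {z+\bar z}{2}\Bigr )^2+\Bigl (\frac {z-\bar z}{2i}\Bigr )^2=z\bar z.\end{equation*}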

(Problem 511) Problem 430 is true for complex polynomials of two real variables written in this form; that is, if \(p(z)=\sum _{k,\ell =0}^n b_{k,\ell } z^k\bar z^\ell \), \(q(z)=\sum _{k,\ell =0}^n d_{k,\ell } z^k\bar z^\ell \), and \(p(z)=q(z)\) for all \(z\in \C \), then \(b_{k,\ell }=d_{k,\ell }\) for all \(k\) and \(\ell \).

1.3. The complex derivatives \(\p {z}\) and \(\p {\bar z}\)

Definition 1.3.1 (part 2). Let \(\Omega \subseteq \C \) be an open set. Recall \(\C =\R ^2\). Let \(f:\Omega \to \C \) be a function. Then \(f\in C^1(\Omega )\) if \(\re f\), \(\im f\in C^1(\Omega )\).

[Definition: Derivative of a complex function] Let \(f\in C^1(\Omega )\). Let \(u(z)=\re f(z)\) and let \(v(z)=\im f(z)\). Then

\begin{equation*}\frac {\partial f}{\partial x} = \frac {\partial u}{\partial x}+i\frac {\partial v}{\partial x}, \quad \frac {\partial f}{\partial y} = \frac {\partial u}{\partial y}+i\frac {\partial v}{\partial y}.\end{equation*}
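For example, if \(f(z)=z^2=(x^2-y^2)+i(2xy)\), then
\begin{equation*}\frac {\partial f}{\partial x}=2x+i\,2y=2z \quad \text {and}\quad \frac {\partial f}{\partial y}=-2y+i\,2x=2iz.\end{equation*}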

(Muhammad, Problem 520) Let \(\Psi \), \(\Omega \subseteq \C \) be two open sets, and let \(f:\Psi \to \Omega \) and \(g:\Omega \to \C \) be two \(C^1\) functions. What is \(\p {x}(g\circ f)\)?

(Nisa, Problem 530) Establish the Leibniz rules

\begin{equation*}\frac {\partial }{\partial x} (fg)=\frac {\partial f}{\partial x} g+f\frac {\partial g}{\partial x},\qquad \frac {\partial }{\partial y} (fg)=\frac {\partial f}{\partial y} g+f\frac {\partial g}{\partial y}\end{equation*}
for \(f\), \(g\in C^1(\Omega )\).

Let \(f=u+iv\), \(g=w+i\varpi \), where \(u\), \(v\), \(w\), and \(\varpi \) are real-valued functions in \(C^1(\Omega )\).

Then \(fg=(uw-v\varpi )+i(vw+u\varpi )\), where \((uw-v\varpi )\) and \((vw+u\varpi )\) are both real-valued \(C^1\) functions.

Then

\begin{align*} \frac {\partial }{\partial x} (fg) &=\frac {\partial }{\partial x} [(uw-v\varpi )+i(vw+u\varpi )] \\&= \frac {\partial }{\partial x}(uw-v\varpi )+i\frac {\partial }{\partial x}(vw+u\varpi ). \end{align*}

Applying the Leibniz (product) rule for real-valued functions, we see that

\begin{align*} \frac {\partial }{\partial x} (fg) &=\frac {\partial u}{\partial x}w + u\frac {\partial w}{\partial x}-\frac {\partial v}{\partial x}\varpi -v\frac {\partial \varpi }{\partial x} \\&\qquad + i\frac {\partial v}{\partial x}w + iv\frac {\partial w}{\partial x}+i\frac {\partial u}{\partial x}\varpi +iu\frac {\partial \varpi }{\partial x}. \end{align*}

Furthermore,

\begin{align*} f\frac {\partial g}{\partial x}+\frac {\partial f}{\partial x} g &= (u+iv) \biggl (\frac {\partial w}{\partial x}+i\frac {\partial \varpi }{\partial x}\biggr ) \\&\qquad +\biggl (\frac {\partial u}{\partial x}+i\frac {\partial v}{\partial x}\biggr )(w+i\varpi ) \\&= u\frac {\partial w}{\partial x}+iu\frac {\partial \varpi }{\partial x}+iv\frac {\partial w}{\partial x}-v\frac {\partial \varpi }{\partial x}\\&\qquad +\frac {\partial u}{\partial x}w+i\frac {\partial u}{\partial x}\varpi +i\frac {\partial v}{\partial x}w -\frac {\partial v}{\partial x}\varpi . \end{align*}

Rearranging, we see that the two terms are the same.

[Definition: Complex derivative] Let \(f\in C^1(\Omega )\). Then

\begin{equation*}\frac {\partial f}{\partial z} = \frac {1}{2}\frac {\partial f}{\partial x}+\frac {1}{2i}\frac {\partial f}{\partial y}, \quad \frac {\partial f}{\partial \bar z} = \frac {1}{2}\frac {\partial f}{\partial x}-\frac {1}{2i}\frac {\partial f}{\partial y}.\end{equation*}
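For example, if \(f(z)=z\bar z=x^2+y^2\), then \(\frac {\partial f}{\partial x}=2x\) and \(\frac {\partial f}{\partial y}=2y\), and so
\begin{equation*}\frac {\partial f}{\partial z}=x+\frac {y}{i}=x-iy=\bar z, \qquad \frac {\partial f}{\partial \bar z}=x-\frac {y}{i}=x+iy=z.\end{equation*}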

(Robert, Problem 540) Let \(f(z)=z\) and let \(g(z)=\bar z\). Show that \(\frac {\partial f}{\partial z}=1\), \(\frac {\partial f}{\partial \bar z}=0\), \(\frac {\partial g}{\partial z}=0\), \(\frac {\partial g}{\partial \bar z}=1\).

Recall that \(z=x+iy\). Thus,

\begin{equation*}\frac {\partial }{\partial z} (z) = \frac {1}{2}\frac {\partial }{\partial x}(x+iy)+\frac {1}{2i}\frac {\partial }{\partial y}(x+iy) =\frac {1}{2}+\frac {i}{2i}=1 \end{equation*}
and
\begin{equation*}\frac {\partial }{\partial \bar z} (z) = \frac {1}{2}\frac {\partial }{\partial x}(x+iy)-\frac {1}{2i}\frac {\partial }{\partial y}(x+iy) =\frac {1}{2}-\frac {i}{2i}=0 .\end{equation*}

Recall that \(\bar z=x-iy\). Thus,

\begin{equation*}\frac {\partial }{\partial z} (\bar z) = \frac {1}{2}\frac {\partial }{\partial x}(x-iy)+\frac {1}{2i}\frac {\partial }{\partial y}(x-iy) =\frac {1}{2}-\frac {i}{2i}=0 \end{equation*}
and
\begin{equation*}\frac {\partial }{\partial \bar z} (\bar z) = \frac {1}{2}\frac {\partial }{\partial x}(x-iy)-\frac {1}{2i}\frac {\partial }{\partial y}(x-iy) =\frac {1}{2}+\frac {i}{2i}=1 .\end{equation*}

(Fact 550) \(\frac {\partial }{\partial z}\) and \(\frac {\partial }{\partial \bar z}\) are linear operators.

This follows immediately from linearity of the differential operators \(\frac {\partial }{\partial x}\) and \(\frac {\partial }{\partial y}\).

(Fact 560) \(\frac {\partial }{\partial z}\) and \(\frac {\partial }{\partial \bar z}\) commute in the sense that, if \(\Omega \subseteq \C \) is open and \(f\in C^2(\Omega )\), then \(\frac {\partial }{\partial z}\left (\frac {\partial }{\partial \bar z} f\right )=\frac {\partial }{\partial \bar z}\left (\frac {\partial }{\partial z} f\right )\).

(Fact 570) The following Leibniz rules are valid:

\begin{equation*}\frac {\partial }{\partial z} (fg)=\frac {\partial f}{\partial z} g+f\frac {\partial g}{\partial z}, \qquad \frac {\partial }{\partial \bar z} (fg)=\frac {\partial f}{\partial \bar z} g+f\frac {\partial g}{\partial \bar z} .\end{equation*}

We have that

\begin{align*} \frac {\partial }{\partial z} (fg) =\frac {1}{2}\frac {\partial }{\partial x} (fg) +\frac {1}{2i}\frac {\partial }{\partial y} (fg) .\end{align*}

Using the Leibniz rules for \(\frac {\partial }{\partial x}\) and \(\frac {\partial }{\partial y}\), we see that

\begin{align*} \frac {\partial }{\partial z} (fg) &=\frac {1}{2}f\frac {\partial g}{\partial x} +\frac {1}{2}\frac {\partial f}{\partial x} g +\frac {1}{2i}f\frac {\partial g}{\partial y}+\frac {1}{2i}\frac {\partial f}{\partial y}g \\&= f\biggl (\frac 12\frac {\partial g}{\partial x}+\frac {1}{2i}\frac {\partial g}{\partial y}\biggr ) +\biggl (\frac {1}{2}\frac {\partial f}{\partial x}+\frac {1}{2i}\frac {\partial f}{\partial y}\biggr )g \\&=f\frac {\partial g}{\partial z}+\frac {\partial f}{\partial z}g .\end{align*}

The argument for \(\frac {\partial }{\partial \bar z}\) is similar.

(Sam, Problem 580) Show that \(\frac {\partial }{\partial z} (z^\ell \bar z^m)=\ell z^{\ell -1}\bar z^m\) and \(\frac {\partial }{\partial \bar z} (z^\ell \bar z^m)=mz^\ell \bar z^{m-1}\) for all nonnegative integers \(m\) and \(\ell \) (with the minor abuse of notation that we take \(z^0\equiv \bar z^0\equiv 1\) and \(0z^{-1}\equiv 0\bar z^{-1}\equiv 0\), that is, we ignore the singularities at \(z=0\)).

If \(\ell =m=0\), then \(z^\ell \bar z^m=1\) and \(\ell z^{\ell -1}\bar z^m=0=mz^\ell \bar z^{m-1}\). The result is obvious in this case.

Suppose now that \(m=0\), \(\ell \geq 1\) and the result is true for \(\ell -1\). (The result is true for \(\ell -1\) if \(\ell =1\) by the above argument.) By Problem 570,

\begin{align*} \p { z} z^\ell &=\p { z}(z\cdot z^{\ell -1}) = \biggl (\p { z}z\biggr )z^{\ell -1}+z\biggl (\p { z}z^{\ell -1}\biggr ), \\ \p {\bar z} z^\ell &=\p {\bar z}(z\cdot z^{\ell -1}) = \biggl (\p {\bar z}z\biggr )z^{\ell -1}+z\biggl (\p {\bar z}z^{\ell -1}\biggr ), \end{align*}

which by the induction hypothesis and Problem 540 equal

\begin{align*} \p { z} z^\ell &= z^{\ell -1}+z\cdot (\ell -1)z^{\ell -2}, \\ \p {\bar z} z^\ell &= 0\cdot z^{\ell -1}+z\cdot 0, \end{align*}

which simplify to the desired result. Thus by induction the result is true whenever \(m=0\). A similar induction argument yields the result whenever \(\ell =0\). Finally, the general case follows from Problem 570:

\begin{align*} \frac {\partial }{\partial z} (z^\ell \bar z^m)&= \biggl (\p {z}z^\ell \biggr )\bar z^m+z^\ell \biggl (\p { z}\bar z^m\biggr ) = \ell z^{\ell -1}\bar z^m+0,\\ \frac {\partial }{\partial \bar z} (z^\ell \bar z^m)&= \biggl (\p {\bar z}z^\ell \biggr )\bar z^m+z^\ell \biggl (\p {\bar z}\bar z^m\biggr ) = 0+ z^{\ell }\cdot m\bar z^{m-1} \end{align*}

as desired.

(William, Problem 590) Let \(p\) be a complex polynomial in two real variables. Show that \(p\) is a complex polynomial in one complex variable if and only if \(\frac {\partial p}{\partial \bar z}=0\) everywhere in \(\C \).

If \(p\) is a polynomial in one complex variable, then by definition there are constants \(a_k\in \C \) and \(n\in \N \) such that \(p(z)=\sum _{k=0}^n a_k z^k\). By linearity of the complex derivative operator and by Problem 580, we have that \(\p {\bar z} p(z)=0\), as desired.

Now suppose that \(p\) is a complex polynomial in two real variables and \(\p {\bar z} p(z)=0\) for all \(z\in \C \). By Problem 510, there are constants \(b_{k,\ell }\in \C \) and \(m\in \N \) such that

\begin{equation*}p(z)=\sum _{k=0}^m \sum _{\ell =0}^m b_{k,\ell } z^k\bar z^\ell \end{equation*}
for all \(z\in \C \). By linearity of the complex derivative operator and by Problem 580, we have that
\begin{equation*}\p {\bar z} p(z)=\sum _{k=0}^m \sum _{\ell =0}^m \ell b_{k,\ell } z^k\bar z^{\ell -1}.\end{equation*}
This polynomial is identically equal to zero. By Problem 511, this implies that \(\ell b_{k,\ell }=0\) for all \(k\) and \(\ell \). In particular, if \(\ell \geq 1\) then \(b_{k,\ell }=0\). Thus
\begin{equation*}p(z)=\sum _{k=0}^m b_{k,0} z^k\end{equation*}
as desired.

Definition 1.4.1. Let \(\Omega \subseteq \C \) be open and let \(f\in C^1(\Omega )\). We say that \(f\) is holomorphic in \(\Omega \) if

\begin{equation*} \frac {\partial f}{\partial \bar z}=0 \end{equation*}
everywhere in \(\Omega \).

(Fact 600) A polynomial in two real variables is a polynomial in one complex variable if and only if it is holomorphic.

(Wilson, Problem 610) Suppose that \(\Omega \subseteq \C \) is open and connected, that \(f\in C^1(\Omega )\), and that \(\frac {\partial f}{\partial z}=0=\frac {\partial f}{\partial \bar z}\) in \(\Omega \). Show that \(f\) is constant in \(\Omega \).

We observe that

\begin{equation*}\p [f]{x}=\p [f]{z}+\p [f]{{\bar z}},\qquad \p [f]{y}=i\p [f]{z}-i\p [f]{{\bar z}}.\end{equation*}
Thus \(\p [f]{x}=\p [f]{y}=0\) in \(\Omega \) and the result follows from Problem 480.

[Chapter 1, Problem 34] Suppose that \(\Omega \subseteq \C \) is open and that \(f\in C^1(\Omega )\). Show that

\begin{equation*}\frac {\partial f}{\partial z}(w) = \overline {\left (\frac {\partial \overline f}{\partial \bar z}(w)\right )}\end{equation*}
for all \(w\in \Omega \).

(Adam, Problem 620) Show that \(\frac {\partial }{\partial z}\frac {1}{z}=-\frac {1}{z^2}\) and \(\frac {\partial }{\partial \bar z}\frac {1}{z}=0\) if \(z\neq 0\). Then compute \(\frac {\partial }{\partial z}\frac {1}{z^n}\) and \(\frac {\partial }{\partial \bar z}\frac {1}{z^n}\) for any positive integer \(n\).

Observe that \(\frac {1}{z}=\frac {\bar z}{z\bar z}\) and so if \(z=x+iy\), \(x\), \(y\in \R \), then \(\frac {1}{z}=\frac {x-iy}{x^2+y^2}\). We compute

\begin{align*}\p {x}\frac {1}{z}&= \p {x}\re \frac {1}{z} +i\p {x}\im \frac {1}{z}= \p {x}\frac {x}{x^2+y^2} +i\p {x}\frac {-y}{x^2+y^2} = \frac {y^2-x^2+2ixy}{(x^2+y^2)^2}, \\ \p {y}\frac {1}{z}&= \p {y}\frac {x}{x^2+y^2} +i\p {y}\frac {-y}{x^2+y^2} = \frac {-ix^2+iy^2-2xy}{(x^2+y^2)^2}. \end{align*}

Thus

\begin{align*}\p {z}\frac {1}{z} &=\frac {1}{2} \p {x}\frac {1}{z}-\frac {i}{2}\p {y}\frac {1}{z} = \frac {y^2-x^2+2ixy}{(x^2+y^2)^2} \\&= \frac {-(x-iy)^2}{(x+iy)^2(x-iy)^2} =-\frac {1}{z^2} \end{align*}

and

\begin{equation*}\p {\bar z}\frac {1}{z} =\frac {1}{2} \p {x}\frac {1}{z}+\frac {i}{2}\p {y}\frac {1}{z} = 0. \end{equation*}
Using the Leibniz rule for the inductive step, a straightforward induction argument shows that
\begin{equation*}\p {z}\frac {1}{z^n}=-\frac {n}{z^{n+1}}, \qquad \p {\bar z}\frac {1}{z^n} =0.\end{equation*}
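For example, the first step of that induction (from \(n=1\) to \(n=2\)) reads
\begin{equation*}\p {z}\frac {1}{z^2}=\p {z}\Bigl (\frac {1}{z}\cdot \frac {1}{z}\Bigr )=-\frac {1}{z^2}\cdot \frac {1}{z}+\frac {1}{z}\cdot \Bigl (-\frac {1}{z^2}\Bigr )=-\frac {2}{z^3}.\end{equation*}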

[Chapter 1, Problem 49] Let \(\Omega \), \(W\subseteq \C \) be open and let \(g:\Omega \to W\), \(f:W\to \C \) be two \(C^1\) functions. The following chain rules are valid:

\begin{gather*} \frac {\partial }{\partial z} (f\circ g)= \frac {\partial f}{\partial g}\frac {\partial g}{\partial z} +\frac {\partial f}{\partial \overline g}\frac {\partial \overline g}{\partial z}, \\ \frac {\partial }{\partial \bar z} (f\circ g)= \frac {\partial f}{\partial g}\frac {\partial g}{\partial \overline z} +\frac {\partial f}{\partial \overline g}\frac {\partial \overline g}{\partial \overline z}, \end{gather*}
where \(\frac {\partial f}{\partial g} = \left .\frac {\partial f}{\partial z}\right |_{z\to g(z)}\), \(\frac {\partial f}{\partial \overline g} = \left .\frac {\partial f}{\partial \bar z}\right |_{z\to g(z)}\).

In particular, if \(f\) and \(g\) are both holomorphic then so is \(f\circ g\).

1.4. Real Analysis

(Memory 630) Let \(f\) be a \(C^2\) function in an open set in \(\R ^2\). Show that \(\frac {\partial }{\partial x} \frac {\partial f}{\partial y}=\frac {\partial }{\partial y} \frac {\partial f}{\partial x}\) everywhere in the domain.

1.4. Holomorphic Functions, the Cauchy-Riemann Equations, and Harmonic Functions

Lemma 1.4.2. Let \(f\in C^1(\Omega )\), let \(u=\re f\), and let \(v=\im f\). Then \(f\) is holomorphic in \(\Omega \) if and only if

\begin{equation*}\frac {\partial u}{\partial x}=\frac {\partial v}{\partial y}\quad \text {and}\quad \frac {\partial u}{\partial y}=-\frac {\partial v}{\partial x}\end{equation*}
everywhere in \(\Omega \). (These equations are called the Cauchy-Riemann equations.)
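For example, the complex exponential \(f(z)=\exp (z)=e^x\cos y+ie^x\sin y\) has \(u=e^x\cos y\) and \(v=e^x\sin y\), and
\begin{equation*}\frac {\partial u}{\partial x}=e^x\cos y=\frac {\partial v}{\partial y}, \qquad \frac {\partial u}{\partial y}=-e^x\sin y=-\frac {\partial v}{\partial x},\end{equation*}
so \(\exp \) is holomorphic in \(\C \).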

(Amani, Problem 640) Prove the “only if” direction of Lemma 1.4.2: show that if \(f\) is holomorphic in \(\Omega \), \(\Omega \subseteq \C \) open, then the Cauchy-Riemann equations hold for \(u=\re f\) and \(v=\im f\).

(Bashar, Problem 650) Prove the “if” direction of Lemma 1.4.2: show that if \(u=\re f\) and \(v=\im f\) are \(C^1\) in \(\Omega \) and satisfy the Cauchy-Riemann equations, then \(f\) is holomorphic in \(\Omega \).

Recall that

\begin{align*} 2\p [f]{\bar z} = \p [f]{x} +i\p [f]{y} \end{align*}

by definition of \(\p {\bar z}\). Applying the fact that \(f=u+iv\), we see that

\begin{align*} 2\p [f]{\bar z}&=\p [u]{x}+i\p [v]{x} +i \biggl (\p [u]{y}+i\p [v]{y}\biggr ) \\&= \biggl (\p [u]{x}-\p [v]y\biggr )+ i\biggl (\p [v]x+\p [u]y\biggr ). \end{align*}

Because \(u\) and \(v\) are real-valued, so are their derivatives. Thus, the real and imaginary parts of the right hand side, respectively, are \(\p [u]x-\p [v]y\) and \(\p [u]y+\p [v]x\).

Thus, \(\p [f]{\bar z} = 0\) if and only if the Cauchy-Riemann equations hold.

Proposition 1.4.3. [Slight generalization.] Let \(f\in C^1(\Omega )\). Then \(f\) is holomorphic at \(p\in \Omega \) if and only if \(\frac {\partial f}{\partial x}(p)=\frac {1}{i}\frac {\partial f}{\partial y}(p)\), and in this case

\begin{equation*}\frac {\partial f}{\partial z}(p)=\frac {\partial f}{\partial x}(p) =\frac {1}{i}\frac {\partial f}{\partial y}(p).\end{equation*}

(Dibyendu, Problem 660) Begin the proof of Proposition 1.4.3 by showing that if \(f\) is holomorphic then \(\frac {\partial f}{\partial z}=\frac {\partial f}{\partial x}=\frac {1}{i}\frac {\partial f}{\partial y}\).

By definition of \(\p {z}\) and \(\p {\bar z}\), if \(f\in C^1(\Omega )\) then \(\p [f]{x}=\p [f]{z}+\p [f]{\bar z}\) and \(\p [f]{y}=i\p [f]{z}-i\p [f]{\bar z}\). Thus, if \(\p [f]{\bar z}(p)=0\) then \(\p [f]{x}(p)=\p [f]{z}(p)\) and \(\p [f]{y}(p)=i\p [f]{z}(p)=i\p [f]{x}(p)\), as desired.

(Hope, Problem 670) Complete the proof of Proposition 1.4.3 by showing that if \(f\in C^1(\Omega )\) and \(\frac {\partial f}{\partial x}=\frac {1}{i}\frac {\partial f}{\partial y}\), then \(f\) is holomorphic.

Recall

\begin{equation*}\p [f]{\bar z} = \frac 12\biggl (\p [f]x- \frac {1}{i}\p [f]{y}\biggr ).\end{equation*}
Thus, if \(\p [f]x=\frac {1}{i}\p [f]y\) then \(\p [f]{\bar z}=0\), as desired.

Definition 1.4.4. We let \(\triangle =\frac {\partial ^2 }{\partial x^2}+\frac {\partial ^2 }{\partial y^2}\). If \(\Omega \subseteq \C \) is open and \(u\in C^2(\Omega )\), then \(u\) is harmonic if

\begin{equation*}\triangle u=\frac {\partial ^2 u}{\partial x^2}+\frac {\partial ^2 u}{\partial y^2}=0\end{equation*}
everywhere in \(\Omega \).
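For example, \(u(x,y)=x^2-y^2\) is harmonic in \(\C \), since \(\triangle u=2-2=0\), while \(u(x,y)=x^2\) is not, since \(\triangle u=2\neq 0\).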

(Fact 680) Show that if \(f\in C^2(\Omega )\) then \(\triangle f=4\frac {\partial }{\partial z}\frac {\partial f} {\partial \bar z} =4\frac {\partial }{\partial \bar z} \frac {\partial f}{\partial z}\).

We compute that

\begin{align*}\p {z}\p [f]{\bar z} &= \frac {1}{4} \biggl (\p x+\frac {1}{i}\p y\biggr ) \biggl (\p [f] x-\frac {1}{i} \p [f] y\biggr ) \\&=\frac {1}{4}\biggl (\frac {\partial ^2f}{\partial x^2} +\frac {\partial ^2f}{\partial y^2} +\frac {1}{i}\frac {\partial ^2 f}{\partial y\partial x} -\frac {1}{i}\frac {\partial ^2 f}{\partial x\partial y}\biggr ) .\end{align*}

If \(f\in C^2\) then \(\frac {\partial ^2 f}{\partial y\partial x} =\frac {\partial ^2 f}{\partial x\partial y}\) by Problem 630 and the proof is complete. The argument for \(\p {\bar z}\p [f]{z} \) is similar.

(Micah, Problem 690) Suppose that \(f\) is holomorphic and \(C^2\) in an open set \(\Omega \) and that \(u=\re f\) and \(v=\im f\). Compute \(\triangle u\) and \(\triangle v\).

Because \(f\) is holomorphic,

\begin{equation*}\triangle f = 4\frac {\partial }{\partial z}\frac {\partial f} {\partial \bar z}=4\frac {\partial }{\partial z}0=0.\end{equation*}
But
\begin{equation*}\triangle f = (\triangle u) + i(\triangle v)\end{equation*}
and \(\triangle u\) and \(\triangle v\) are both real-valued, so because \(\triangle f=0\) we must have \(\triangle u=0=\triangle v\) as well.

(Muhammad, Problem 700) Let \(f\) be a holomorphic polynomial. Show that there is a holomorphic polynomial \(F\) such that \(\frac {\partial F}{\partial z} = f\). How many such polynomials are there?

By Problem 600, \(f\) is a polynomial in one complex variable; that is, there is a \(n\in \N \) and constants \(a_k\in \C \) such that

\begin{equation*}f(z)=\sum _{k=0}^n a_k\,z^k\end{equation*}
for all \(z\in \C \). Let \(b\in \C \) and let
\begin{equation*}F(z)=b+\sum _{k=0}^n \frac {a_k}{k+1}\,z^{k+1}.\end{equation*}
By Problem 580, we have that \(\p {z}F=f\).

There are infinitely many such polynomials, one for each choice of \(b\). By Problem 610, any two such antiderivatives \(F_1\) and \(F_2\) must differ by a constant.
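For example, if \(f(z)=3z^2+2i\), then the antiderivatives of \(f\) are exactly the polynomials
\begin{equation*}F(z)=z^3+2iz+b, \qquad b\in \C ,\end{equation*}
since \(\p {z}F=3z^2+2i=f(z)\) and any two antiderivatives differ by a constant.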

(Nisa, Problem 710) Show that if \(u\) is a harmonic polynomial (of two real variables) then \(u(z)=p(z)+q(\bar z)\) for some polynomials \(p\), \(q\) of one complex variable.

Because \(u\) is harmonic, by Problem 680 we have that

\begin{equation*}0=\triangle u=4\frac {\partial }{\partial \bar z} \frac {\partial u}{\partial z}\end{equation*}
and so \(f=\p [u]{z}\) is holomorphic. By Problems 510 and 580, \(f\) is also a polynomial. Thus by Problem 700 there is a holomorphic polynomial \(F\) with \(\p [F]{z}=f=\p [u]{z}\).

Let \(p=F\). Let \(g(z)=u(z)-p(z)\). Then \(g\) is a polynomial in two real variables and \(\p [g]{z}=0\), so by Problem 1.34 \(\p [\overline g]{\bar z}=0\). Thus \(\overline g\) is holomorphic, and so by Problem 590 it is a polynomial in one complex variable; that is,

\begin{equation*}\overline {g(z)}=\sum _{k=0}^m b_k \,z^k\end{equation*}
for some \(m\in \N \) and some constants \(b_k\in \C \). Taking the complex conjugate yields that
\begin{equation*}g(z)=\sum _{k=0}^m \overline {b_k}\,\bar z^k.\end{equation*}
This completes the proof with \(q(z)=\sum _{k=0}^m \overline {b_k}\, z^k\).

Lemma 1.4.5. Let \(u\) be harmonic and real valued in \(\C \). Suppose in addition that \(u\) is a polynomial of two real variables. Then there is a holomorphic polynomial \(f\) such that \(u(z)=\re f(z)\).

(Robert, Problem 720) Prove Lemma 1.4.5.

By Problem 710, we have that \(u(z)=p(z)+q(\bar z)\) for some polynomials \(p\) and \(q\). We may write

\begin{equation*}u(z)=\sum _{k=0}^n a_k z^k+\sum _{k=0}^n b_k \bar z^k =(a_0+b_0)+\sum _{k=1}^n a_k z^k+\sum _{k=1}^n b_k \bar z^k\end{equation*}
for some \(n\in \N \) and some \(a_k\), \(b_k\in \C \). Because \(u\) is real-valued, we have that \(u(z)=\overline {u(z)}\) and so
\begin{equation*} (a_0+b_0)+\sum _{k=1}^n a_k z^k+\sum _{k=1}^n b_k \bar z^k = \overline {(a_0+b_0)}+\sum _{k=1}^n \overline {b_k} z^k+\sum _{k=1}^n \overline {a_k} \bar z^k. \end{equation*}
By Problem 511, this implies that \(a_0+b_0\) is real and that \(a_k=\overline {b_k}\) for all \(k\geq 1\).

Thus

\begin{align*}u(z) &=(a_0+b_0)+ \sum _{k=1}^n (a_k z^k+\overline {a_k z^k}) =\re (a_0+b_0)+ \sum _{k=1}^n 2\re (a_k z^k) \\&=\re \Bigl ((a_0+b_0)+ \sum _{k=1}^n 2a_k z^k\Bigr )\end{align*}

as desired.
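For example, \(u(x,y)=x^2-y^2=\re (z^2)\) and \(u(x,y)=xy=\re \bigl (-\frac {i}{2}z^2\bigr )\) are both instances of Lemma 1.4.5.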

1.5. Real Analysis

(Memory 730) State Green’s theorem.

Let \(\gamma :[0,1]\to \R ^2\) be a piecewise \(C^1\) simple closed curve. Let \(\Omega \subset \R ^2\) be the bounded open set that satisfies \(\partial \Omega =\gamma ([0,1])\); by the Jordan curve theorem, exactly one such \(\Omega \) exists. Let \(W\) be open and satisfy \(\overline \Omega \subset W\). Let \(\vec F:W\to \R ^2\) be a \(C^1\) function.

Then

\begin{equation*}\int _0^1 \vec F(\gamma (t))\cdot \gamma '(t)\,dt = \pm \int _\Omega \frac {\partial F_2}{\partial x}-\frac {\partial F_1}{\partial y}\,dx\,dy\end{equation*}
where the sign is determined by the orientation of \(\gamma \).
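For example, let \(\gamma (t)=(\cos 2\pi t,\sin 2\pi t)\), so that \(\Omega \) is the open unit disc, and let \(\vec F(x,y)=(-y,x)\). Then
\begin{equation*}\int _0^1 \vec F(\gamma (t))\cdot \gamma '(t)\,dt=\int _0^1 2\pi \,dt=2\pi , \qquad \int _\Omega \frac {\partial F_2}{\partial x}-\frac {\partial F_1}{\partial y}\,dx\,dy=\int _\Omega 2\,dx\,dy=2\pi ,\end{equation*}
and the sign is \(+\) because this \(\gamma \) is oriented counterclockwise.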

(Memory 740) State the Mean Value Theorem.

(Memory 750) If \(a<b\), if each \(f_n\) is bounded and Riemann integrable on \([a,b]\), and if \(f_n\to f\) uniformly on \([a,b]\), then \(f\) is also Riemann integrable on \([a,b]\), \(\lim _{n\to \infty } \int _a^b f_n\) exists, and \(\int _a^b f = \lim _{n\to \infty } \int _a^b f_n\).

(Problem 760) Let \(f:[a,b]\times [c,d]\to \R \). Suppose that \(f\) is continuous on \([a,b]\times [c,d]\). Define \(F:[a,b]\to \R \) by \(F(x)=\int _{c}^d f(x,y)\,dy\). Show that \(F\) is continuous on \([a,b]\).

(Problem 770) Let \(f:(a,b)\times [c,d]\to \R \). Suppose that \(f\) is continuous on \((a,b)\times [c,d]\) and the partial derivative \(\partial _xf=\frac {\partial f}{\partial x}\) exists and is continuous everywhere on \((a,b)\times [c,d]\). Define \(F(x)=\int _c^d f(x,y)\,dy\). Show that \(F\) is differentiable on \((a,b)\) and that

\begin{equation*}F'(x)=\frac {d}{dx} \int _c^d f(x,y)\,dy=\int _c^d \p {x} f(x,y)\,dy\end{equation*}
for all \(a<x<b\).

This will be proven as homework.
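For example, if \(F(x)=\int _0^1 e^{xy}\,dy\), then Problem 770 gives
\begin{equation*}F'(x)=\int _0^1 y\,e^{xy}\,dy;\end{equation*}
for \(x\neq 0\) this can also be checked directly, since \(F(x)=\frac {e^x-1}{x}\).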

1.5. Real and Holomorphic Antiderivatives

(Sam, Problem 780) Prove the converse to Clairaut’s theorem. That is, suppose that there are two \(C^1\) functions \(g\) and \(h\) defined in an open rectangle or disc \(\Omega \) such that \(\frac {\partial }{\partial x}g=\frac {\partial }{\partial y}h\) everywhere in \(\Omega \). Show that there is a function \(f\in C^2(\Omega )\) such that \(\frac {\partial f}{\partial y}=g\) and \( \frac {\partial f}{\partial x}=h\).

(Bonus Problem 790) State the definition of a simply connected set and then generalize Problem 780 to any simply connected open set.

(Problem 800) Let \(\Omega =\R ^2\setminus \{(0,0)\}\). Let \(g(x,y)=\frac {x}{x^2+y^2}\) and \(h(x,y)=\frac {-y}{x^2+y^2}\). Show that \(\frac {\partial }{\partial x}g=\frac {\partial }{\partial y}h\).

This is routine calculation. By the quotient rule of undergraduate calculus,

\begin{equation*}\p {x}g = \frac {1(x^2+y^2)-x(2x)}{(x^2+y^2)^2} = \frac {y^2-x^2}{(x^2+y^2)^2} \end{equation*}
and
\begin{equation*}\p {y}h = \frac {-1(x^2+y^2)-(-y)(2y)}{(x^2+y^2)^2} = \frac {y^2-x^2}{(x^2+y^2)^2}\end{equation*}
which are equal.

(William, Problem 810) Show that there is no function \(f\in C^1(\Omega )\) such that \(\frac {\partial f}{\partial y}=g\) and \( \frac {\partial f}{\partial x}=h\).

(Wilson, Problem 820) Why doesn’t this contradict Problem 790?

The domain \(\Omega \) is not simply connected.

(Adam, Problem 830) Suppose that \(u\) is real-valued and harmonic (and not necessarily a polynomial) in an open rectangle or disc \(\Omega \). Show that there is a function \(f\) that is holomorphic in \(\Omega \) such that \(u=\re f\).

Let \(g=\frac {\partial u}{\partial x}\) and let \(h=-\frac {\partial u}{\partial y}\). Then

\begin{equation*}\p [g]{x}=\frac {\partial ^2 u}{\partial x^2} = -\frac {\partial ^2 u}{\partial y^2}=\p [h]{y}\end{equation*}
by definition of \(g\) and \(h\) and because \(u\) is harmonic. Thus by Problem 790 there is a \(v:\Omega \to \R \) such that \(\p [v]{y}=g=\frac {\partial u}{\partial x}\) and \(\p [v]{x}=h=-\frac {\partial u}{\partial y}\).

Then \(u\) and \(v\) satisfy the Cauchy-Riemann equations, and so by Problem 650, \(f=u+iv\) is holomorphic in \(\Omega \).

(Amani, Problem 840) Suppose that \(f\) is holomorphic in an open rectangle or disc \(\Omega \). Show that there is a function \(F\) that is holomorphic in \(\Omega \) such that \(f=\frac {\partial F}{\partial z}\).

[Chapter 1, Problem 52] The function \(f(z)=1/z\) is holomorphic on \(\Omega =\{z\in \C :1<|z|<2\}\) but has no holomorphic antiderivative on \(\Omega \).

2.1. Real Analysis

(Memory 841) Let \((a,b)\subset \R \) be an open interval, let \(\Omega \subseteq \R ^2\) be open, let \(\gamma :(a,b)\to \Omega \) be \(C^1\), and let \(f:\Omega \to \R \) be \(C^1\). Then

\begin{align*}\frac {d}{dt} f(\gamma (t))&= \nabla f(\gamma (t))\cdot \gamma '(t) = \frac {\partial f}{\partial x}\bigg \vert _{\gamma (t)}\,\gamma _1'(t)+ \frac {\partial f}{\partial y}\bigg \vert _{\gamma (t)}\,\gamma _2'(t)\end{align*}

where \(\cdot \) is the dot product in the vector space \(\R ^2\), and \(\gamma _1\), \(\gamma _2:(a,b)\to \R \) are the \(C^1\) functions such that \(\gamma (t)=(\gamma _1(t),\gamma _2(t))\).

(Memory 850) State the Intermediate Value Theorem.

(Memory 860) State the change of variables theorem for integrals over real intervals.

[Definition: Continuous] Let \((X,d)\) and \((Z,\rho )\) be two metric spaces and let \(f:X\to Z\). We say that \(f\) is continuous at \(x\in X\) if, for all \(\varepsilon >0\), there is a \(\delta >0\) such that if \(d(x,y)<\delta \) and \(y\in X\) then \(\rho (f(x),f(y))<\varepsilon \).

(Memory 870) Let \(a<b\) and let \(\varphi :[a,b]\to \R \) be continuous. Then \(\left |\int _a^b \varphi \right |\leq \int _a^b |\varphi |\leq (b-a)\sup _{[a,b]}|\varphi |\).

(Memory 880) Let \(X\) be a compact metric space and let \(f:X\to Z\) be a continuous function. Then \(f(X)\) is compact.

(Memory 890) Let \(X\) be a compact metric space and let \(f:X\to Z\) be a continuous bijection. Then \(f^{-1}\) is also continuous.

(Bashar, Problem 900) Is the previous problem true if \(X\) is not compact?

No. Let \(X=(-\pi /2,0)\cup (0,\pi /2]\subset \R \) with the usual metric on \(\R \). Then the function \(\cot :X\to \R \) is continuous on \(X\) and is a bijection, but \(\cot ^{-1}(0)=\frac {\pi }{2}\) and \(\lim _{x\to 0^-}\cot ^{-1}(x)=-\frac {\pi }{2}\), and so \(\cot ^{-1}\) (with the given range) is discontinuous at \(0\).

[Figures: the graphs of \(y=\cot x\) on \(X\) and of \(y=\cot ^{-1}x\) on \(\R \), illustrating the jump of \(\cot ^{-1}\) at \(0\).]

(Memory 910) If \(\gamma :X\to \R ^2\) and \(\gamma (t)=(\gamma _1(t),\gamma _2(t))\) for all \(t\in X\), then \(\gamma \) is continuous if and only if \(\gamma _1\) and \(\gamma _2\) are continuous.

Definition 2.1.1. (\(C^1\) on a closed set.) Let \([a,b]\subseteq \R \) be a closed bounded interval and let \(f:[a,b]\to \R \). We say that \(f\in C^1([a,b])\), or \(f\) is continuously differentiable on \([a,b]\), if

(a)
\(f\) is continuous on \([a,b]\),
(b)
\(f\) is differentiable on \((a,b)\),
(c)
The derivative \(f'\) is continuous on \((a,b)\),
(d)
\(\lim _{t\to a^+} f'(t)\) and \(\lim _{t\to b^-} f'(t)\) both exist and are finite.

(Memory 920) If the conditions (a), (b) and (c) hold, then the condition (d) holds if and only if the two limits \(\lim _{t\to a^+} \frac {f(t)-f(a)}{t-a}\) and \(\lim _{t\to b^-} \frac {f(b)-f(t)}{b-t}\) exist, and in this case \(\lim _{t\to a^+} \frac {f(t)-f(a)}{t-a}=\lim _{t\to a^+} f'(t)\) and \(\lim _{t\to b^-} \frac {f(b)-f(t)}{b-t}=\lim _{t\to b^-} f'(t)\).

[Definition: One-sided derivative] If \(f:[a,b]\to \R \), we define \(f'(a)=\lim _{t\to a^+} \frac {f(t)-f(a)}{t-a}\) and \(f'(b)=\lim _{t\to b^-} \frac {f(b)-f(t)}{b-t}\), if these limits exist.

[Definition: Curve] A curve in \(\R ^2\) is a continuous function \(\gamma :[a,b]\to \C \), where \([a,b]\subseteq \R \) is a closed and bounded interval. The trace (or image) of \(\gamma \) is \(\widetilde \gamma =\gamma ([a,b])=\{\gamma (t):t\in [a,b]\}\).

[Definition: Closed; simple] A curve \(\gamma :[a,b]\to \R ^2\) is closed if \(\gamma (a)=\gamma (b)\). A closed curve is simple if \(\gamma \) is injective on \([a,b)\) (equivalently, on \((a,b]\)).

[Definition: \(C^1\) curve in \(\R ^2\)] A curve \(\gamma :[a,b]\to \R ^2\) is \(C^1\) (or continuously differentiable) if \(\gamma (t)=(\gamma _1(t),\gamma _2(t))\) for all \(t\in [a,b]\) and both \(\gamma _1\), \(\gamma _2\) are \(C^1\) on \([a,b]\). We write

\begin{equation*}\gamma '(t)=\frac {d\gamma }{dt}=\biggl (\frac {d\gamma _1}{dt},\frac {d\gamma _2}{dt}\biggr ).\end{equation*}

[Definition: Arc length] If \(\gamma :[a,b]\to \R ^2\) is a \(C^1\) curve, then its length (or arc length) is \(\int _a^b \|\gamma '(t)\|\,dt\).
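For example, if \(\gamma (t)=(r\cos t,r\sin t)\) for \(t\in [0,2\pi ]\) and some fixed \(r>0\), then \(\|\gamma '(t)\|=\|(-r\sin t,r\cos t)\|=r\) for every \(t\), and so the length of \(\gamma \) is \(\int _0^{2\pi } r\,dt=2\pi r\), as expected for a circle of radius \(r\).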

Proposition 2.1.4. Let \(\gamma \in C^1([a,b])\), \(\gamma :[a,b]\to \Omega \) for some open set \(\Omega \subseteq \R ^2\) and let \(f:\Omega \to \R \) with \(f\in C^1(\Omega )\). Then

\begin{equation*}f(\gamma (b))-f(\gamma (a))=\int _a^b \frac {\partial f}{\partial x}\Big \vert _{(x,y)=\gamma (t)} \frac {\partial \gamma _1}{\partial t} +\frac {\partial f}{\partial y}\Big \vert _{(x,y)=\gamma (t)} \frac {\partial \gamma _2}{\partial t}\,dt.\end{equation*}

(Dibyendu, Problem 930) Prove Proposition 2.1.4. Hint: Start by computing \(\frac {d(f\circ \gamma )}{dt}\).

By the multivariable chain rule,

\begin{equation*}\frac {d(f\circ \gamma )}{dt}= \frac {\partial f}{\partial x}\Big \vert _{(x,y)=\gamma (t)}\frac {\partial \gamma _1}{\partial t} +\frac {\partial f}{\partial y}\Big \vert _{(x,y)=\gamma (t)}\frac {\partial \gamma _2}{\partial t}.\end{equation*}
The result then follows from the fundamental theorem of calculus.

[Definition: Real line integral] Let \(\gamma \in C^1([a,b])\), \(\gamma :[a,b]\to \Omega \) for some open set \(\Omega \subseteq \R ^2\) and \(F:\Omega \to \R \) be continuous on \(\Omega \). We define

\begin{equation*}\int _\gamma F\,ds=\int _a^b F(\gamma (t))\,\|\gamma '(t)\|\,dt.\end{equation*}
Let \(\vec F:\Omega \to \R ^2\) be continuous on \(\Omega \). We define
\begin{equation*}\int _\gamma \vec F\cdot \tau \,ds=\int _a^b \vec F(\gamma (t))\cdot \gamma '(t)\,dt\end{equation*}
where we use a dot product in the second integral.

2.1. Real and Complex Line Integrals

Definition 2.1.3. (Integral of a complex function.) If \(f:[a,b]\to \C \), and both \(\re f\) and \(\im f\) are integrable on \([a,b]\), we define \(\int _a^b f=\int _a^b\re f+i\int _a^b \im f\).

(Problem 931) If \(\alpha \), \(\beta \in \C \) are constants and \(f\), \(g:[a,b]\to \C \) are both continuous, show that \(\int _a^b(\alpha f+\beta g)=\alpha \int _a^b f+\beta \int _a^b g\).

Proposition 2.1.7. Suppose that \(a<b\) and that \(f:[a,b]\to \C \) is continuous. Then \(|\int _a^b f|\leq \int _a^b |f|\leq (b-a)\sup _{[a,b]}|f|\).

(Hope, Problem 940) Prove Proposition 2.1.7. Hint: Start by showing that the integral is finite.

First,

\begin{align*} \biggl |\int _a^b f\biggr | &\leq \biggl |\re \int _a^b f\biggr |+\biggl |\im \int _a^b f\biggr | = \biggl |\int _a^b \re f\biggr |+\biggl |\int _a^b \im f\biggr |. \end{align*}

By continuity of \(\re f\) and \(\im f\) and compactness of \([a,b]\), we have that \(\sup _{[a,b]}|\re f|<\infty \) and \(\sup _{[a,b]}|\im f|<\infty \) and so the integral is finite by Problem 870.

If \(\int _a^b f=0\) we are done. Otherwise, let \(\theta \in \R \) be such that \(e^{i\theta }\int _a^b f\) is a nonnegative real number. Then \(|\int _a^b f|=\int _a^b e^{i\theta } f\). But \(|\int _a^b f|\) is real and so \(\im \int _a^b e^{i\theta } f=0\). Therefore \(|\int _a^b f|=\re \int _a^b e^{i\theta } f =\int _a^b \re (e^{i\theta } f)\). The result then follows from the corresponding result for real integrals, since \(\re (e^{i\theta }f)\leq |e^{i\theta }f|=|f|\) pointwise.

Definition 2.1.4. (\(C^1\) curve in \(\C \).) A curve \(\gamma :[a,b]\to \C \) is a \(C^1\) curve (in \(\C \)) if \((\re \gamma ,\im \gamma )\) is a \(C^1\) curve (in \(\R ^2\)). We write

\begin{equation*}\gamma '(t)=\frac {d\gamma }{dt}=\frac {d(\re \gamma )}{dt}+i\frac {d(\im \gamma )}{dt}.\end{equation*}

(Micah, Problem 950) If \(t\in (a,b)\) and \(\gamma :[a,b]\to \C \) is \(C^1\), show that \(\gamma '(t)=\lim _{s\to t}\frac {\gamma (s)-\gamma (t)}{s-t}\).

We compute that

\begin{align*} \lim _{s\to t}\frac {\gamma (s)-\gamma (t)}{s-t} &=\lim _{s\to t}\biggl ( \frac {\re \gamma (s)-\re \gamma (t)}{s-t} +i\frac {\im \gamma (s)-\im \gamma (t)}{s-t}\biggr ). \end{align*}

By linearity of limits

\begin{align*} \lim _{s\to t}\frac {\gamma (s)-\gamma (t)}{s-t} &=\left ( \lim _{s\to t}\frac {\re \gamma (s)-\re \gamma (t)}{s-t}\right ) +i\left (\lim _{s\to t}\frac {\im \gamma (s)-\im \gamma (t)}{s-t}\right ) \\&= (\re \gamma )'(t)+i(\im \gamma )'(t)\end{align*}

as desired.

Definition 2.1.5. (Complex line integral.) Let \(\gamma \in C^1([a,b])\), \(\gamma :[a,b]\to \Omega \) for some open set \(\Omega \subseteq \C \) and \(F:\Omega \to \C \) be continuous on \(\Omega \). We define

\begin{equation*}\oint _\gamma F(z)\,dz=\int _a^b F(\gamma (t))\,\gamma '(t)\,dt\end{equation*}
where we use complex multiplication in the second integral.
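[Example] Let \(\gamma (t)=e^{it}\) for \(t\in [0,2\pi ]\), so that \(\widetilde \gamma \) is the unit circle and \(\gamma '(t)=ie^{it}\). Then

\begin{equation*}\oint _\gamma \frac {1}{z}\,dz=\int _0^{2\pi } e^{-it}\,ie^{it}\,dt=2\pi i,\qquad \oint _\gamma z\,dz=\int _0^{2\pi } e^{it}\,ie^{it}\,dt=\int _0^{2\pi } ie^{2it}\,dt=0.\end{equation*}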

(Muhammad, Problem 960) Let \( \gamma :[0,1]\to \Omega \subset \R ^2\) be a \(C^1\) curve and let \(\vec F:\Omega \to \R ^2\) be a vector-valued function. Recall that we identify \(\R ^2\) with \(\C \), so that we identify \( \gamma =(\gamma _1,\gamma _2)\) with \(\gamma _1+i\gamma _2\) and \(\vec F=(F_1,F_2)\) with \(F=F_1+iF_2\).

Show that

\begin{equation*}\int _\gamma \vec F\cdot \tau \,ds = \re \oint _\gamma \overline {F}(z)\,dz, \qquad \int _\gamma \vec F\cdot \nu \,ds = \im \oint _\gamma \overline {F}(z)\,dz, \end{equation*}
where \(\nu =\begin {pmatrix}0&1\\-1&0\end {pmatrix}\tau \) is the unit rightward normal vector to \(\gamma \).

Proposition 2.1.6. Let \(\gamma :[a,b]\to \Omega \subseteq \C \) be \(C^1\), where \(\Omega \) is open, and let \(f\) be holomorphic in \(\Omega \). Then

\begin{equation*}\oint _\gamma \frac {\partial f}{\partial z}\,dz=f(\gamma (b))-f(\gamma (a)).\end{equation*}
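[Example] For instance, \(f(z)=z^2/2\) is holomorphic on \(\C \) with \(\p [f]{z}=z\), so Proposition 2.1.6 gives

\begin{equation*}\oint _\gamma z\,dz=\frac {\gamma (b)^2-\gamma (a)^2}{2}\end{equation*}
for every \(C^1\) curve \(\gamma :[a,b]\to \C \); in particular, this integral vanishes whenever \(\gamma \) is a closed curve.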

(Problem 961) Give an example of a \(C^1\) curve \(\gamma \) and a \(C^1\) function \(f\) defined in an open neighborhood of \(\widetilde \gamma \) such that

\begin{equation*}\oint _\gamma \frac {\partial f}{\partial z}\,dz\neq f(\gamma (b))-f(\gamma (a)).\end{equation*}

(Nisa, Problem 970) Prove Proposition 2.1.6.

By the fundamental theorem of calculus, we have that

\begin{align*}f(\gamma (b))-f(\gamma (a)) &= \int _a^b \frac {d}{dt} \re f(\gamma (t)) \,dt + i \int _a^b \frac {d}{dt} \im f(\gamma (t)) \,dt.\end{align*}

By Problem 841 (with \(\gamma '(t)\) viewed as a vector in \(\R ^2\))

\begin{align*}f(\gamma (b))-f(\gamma (a)) &= \int _a^b \nabla (\re f)(\gamma (t)) \cdot \gamma '(t) \,dt + i \int _a^b \nabla (\im f)(\gamma (t))\cdot \gamma '(t) \,dt.\end{align*}

Let \(\gamma (t)=\gamma _1(t)+i\gamma _2(t)=(\gamma _1(t),\gamma _2(t))\) where \(\gamma _1\), \(\gamma _2\) are real valued functions. Then

\begin{align*}f(\gamma (b))-f(\gamma (a)) &= \int _a^b \p [(\re f)]{x}(\gamma (t))\,\gamma _1'(t) \,dt +\int _a^b \p [(\re f)]{y}(\gamma (t))\,\gamma _2'(t) \,dt \\&\quad + i \int _a^b \p [(\im f)]{x}(\gamma (t))\,\gamma _1'(t) \,dt +i\int _a^b \p [(\im f)]{y}(\gamma (t))\,\gamma _2'(t) \,dt .\end{align*}

Combining the \(\gamma _1'\) and \(\gamma _2'\) integrals gives that

\begin{align*}f(\gamma (b))-f(\gamma (a)) &=\int _a^b \p [f]{x}(\gamma (t))\,\gamma _1'(t) \,dt +\int _a^b \p [f]{y}(\gamma (t))\,\gamma _2'(t) \,dt .\end{align*}

By Problem 660, and because \(f\) is holomorphic, we have that

\begin{align*}f(\gamma (b))-f(\gamma (a)) &= \int _a^b \p [f]{z}(\gamma (t))\gamma _1'(t) \,dt +\int _a^b i\p [f]{z}(\gamma (t))\gamma _2'(t) \,dt \\&= \int _a^b \p [f]{z}(\gamma (t))\bigl (\gamma _1'(t)+i\gamma _2'(t)\bigr ) \,dt \\&= \int _a^b \p [f]{z}(\gamma (t)) \gamma '(t)\,dt .\end{align*}

By definition we have that

\begin{equation*}\oint _\gamma \frac {\partial f}{\partial z}\,dz = \int _a^b \p [f]{z}(\gamma (t))\,\gamma '(t)\,dt.\end{equation*}
This completes the proof.

Proposition 2.1.8. If \(\gamma :[a,b]\to \Omega \subseteq \C \) is a \(C^1\) curve and \(f:\Omega \to \C \) is continuous, then \(\displaystyle \left |\oint _\gamma f(z)\,dz\right |\leq \sup _{[a,b]} |f\circ \gamma | \cdot \ell (\gamma )=\sup _{\widetilde \gamma } |f| \cdot \ell (\gamma )\), where \(\ell (\gamma )=\int _a^b |\gamma '|\).

(Robert, Problem 980) Prove Proposition 2.1.8.

By definition

\begin{equation*}\oint _\gamma f(z)\,dz = \int _a^b f(\gamma (t))\,\gamma '(t)\,dt. \end{equation*}
By Problem 940
\begin{align*} \biggl |\oint _\gamma f(z)\,dz\biggr | &\leq \int _a^b |f(\gamma (t))|\,|\gamma '(t)|\,dt \leq \sup _{[a,b]} |f\circ \gamma | \int _a^b |\gamma '(t)|\,dt. \end{align*}

Recalling the definition of arc length completes the proof.
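[Example] To illustrate Proposition 2.1.8, let \(\gamma (t)=e^{it}\) for \(t\in [0,2\pi ]\), so that \(\ell (\gamma )=\int _0^{2\pi }|ie^{it}|\,dt=2\pi \). If \(f(z)=\frac {1}{z^2+4}\), then \(|z^2+4|\geq 4-|z|^2=3\) on \(\widetilde \gamma \), so \(\sup _{\widetilde \gamma }|f|\leq \frac {1}{3}\) and

\begin{equation*}\biggl |\oint _\gamma \frac {1}{z^2+4}\,dz\biggr |\leq \frac {1}{3}\cdot 2\pi =\frac {2\pi }{3}.\end{equation*}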

Proposition 2.1.9. Let \(\Omega \subseteq \C \) be open, let \(F:\Omega \to \C \) be continuous, let \(\gamma _1:[a,b]\to \Omega \) be a \(C^1\) curve, and let \(\varphi :[c,d]\to [a,b]\) be \(C^1\) and satisfy \(\varphi (c)=a\), \(\varphi (d)=b\). Define \(\gamma _2=\gamma _1\circ \varphi \). Then \(\oint _{\gamma _1} F(z)\,dz=\oint _{\gamma _2} F(z)\,dz\).

(Sam, Problem 990) In this problem we begin the proof of Proposition 2.1.9. Let \(\gamma _1:[a,b]\to \C \) be a \(C^1\) curve. Let \(\varphi :[c,d]\to [a,b]\) be \(C^1\) and satisfy \(\varphi (c)=a\), \(\varphi (d)=b\). Define \(\gamma _2=\gamma _1\circ \varphi \). Compute \(\gamma _2'(t)\) in terms of \(\gamma _1\), \(\gamma _1'\), \(\varphi \), and \(\varphi '\). Then show that \(\widetilde \gamma _1=\widetilde \gamma _2\). (Recall \(\widetilde \gamma \) denotes the image of \(\gamma \).)

(William, Problem 1000) Prove Proposition 2.1.9. Do not assume that \(F\) is holomorphic.

(Problem 1010) Let \(a>0\) and let \(\gamma _1:[-a,a]\to \C \) be a \(C^1\) curve. Let \(\gamma _2:[-a,a]\to \C \) be given by \(\gamma _2(t)=\gamma _1(-t)\). Show that if \(F\) is continuous in a neighborhood of \(\widetilde \gamma _1\), then \(\oint _{\gamma _1} F(z)\,dz=-\oint _{\gamma _2} F(z)\,dz\).

(Wilson, Problem 1020) Let \(\gamma _1:[a,b]\to \C \) and \(\gamma _2:[c,d]\to \C \) be two curves. Suppose further that \(\widetilde \gamma _1=\widetilde \gamma _2\), \(\gamma _1(a)=\gamma _2(c)\), \(\gamma _1(b)=\gamma _2(d)\), and that \(\gamma _1\) and \(\gamma _2\) are injective. Show that there is a continuous strictly increasing function \(\varphi :[c,d]\to [a,b]\) such that \(\gamma _2=\gamma _1\circ \varphi \).

(Adam, Problem 1030) If \(\gamma _1:[a,b]\to \C \) and \(\gamma _2:[c,d]\to \C \) are simple closed curves rather than injective functions, with \(\widetilde \gamma _1=\widetilde \gamma _2\) and \(\gamma _1(a)=\gamma _1(b)=\gamma _2(c)=\gamma _2(d)\), is it necessarily the case that \(\gamma _2=\gamma _1\circ \varphi \) for a continuous strictly increasing function \(\varphi :[c,d]\to [a,b]\)?

(Bonus Problem 1040) Let \(\gamma _1:[a,b]\to \C \) and \(\gamma _2:[c,d]\to \C \) be two \(C^1\) curves. Suppose further that \(\widetilde \gamma _1=\widetilde \gamma _2\), \(\gamma _1(a)=\gamma _2(c)\), \(\gamma _1(b)=\gamma _2(d)\), and that \(\gamma _1\) and \(\gamma _2\) are injective. Show that if \(F\) is continuous in a neighborhood of \(\widetilde \gamma _1\), then \(\oint _{\gamma _1} F(z)\,dz=\oint _{\gamma _2} F(z)\,dz\). (This does not follow immediately from Problems 1000 and 1020 because \(\varphi \) may not be continuously differentiable.)

(Amani, Problem 1050) Let \(\gamma _1:[a,b]\to \C \) and \(\gamma _2:[c,d]\to \C \) be two \(C^1\) curves. Suppose that \(\gamma _1(b)=\gamma _2(c)\). Show that there is a \(C^1\) curve \(\gamma _3:[-1,1]\to \C \) such that \(\gamma _3\big \vert _{[-1,0]}\) is a reparameterization of \(\gamma _1\) and \(\gamma _3\big \vert _{[0,1]}\) is a reparameterization of \(\gamma _2\). (We will write \(\gamma _3=\gamma _1*\gamma _2\). This means that \(\widetilde \gamma _3=\widetilde \gamma _1\cup \widetilde \gamma _2\) and \(\oint _{\gamma _3} F(z)\,dz=\oint _{\gamma _1} F(z)\,dz+\oint _{\gamma _2} F(z)\,dz\) for all \(F\) continuous in a neighborhood of \(\widetilde \gamma _3\).)

Addendum: Change of variables

(Bashar, Problem 1060) Let \(\Omega \), \(W\subseteq \C \) be open and let \(u:\Omega \to W\) be holomorphic. Let \(\gamma :[0,1]\to \Omega \) be a \(C^1\) closed curve. Let \(f:W\to \C \) be continuous. Show that

\begin{equation*}\oint _{u\circ \gamma } f(w)\,dw = \oint _\gamma f(u(z))\,\frac {\partial u}{\partial z}\,dz. \end{equation*}
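[Example] As a sanity check in one special case (which of course does not prove the general statement), take \(\Omega =W=\C \setminus \{0\}\), \(u(z)=z^2\), \(f(w)=1/w\), and \(\gamma (t)=e^{2\pi it}\) for \(t\in [0,1]\). Then \(u\circ \gamma (t)=e^{4\pi it}\) traverses the unit circle twice, \(\p [u]{z}=2z\), and both sides equal \(4\pi i\):

\begin{equation*}\oint _{u\circ \gamma }\frac {1}{w}\,dw=\int _0^1 e^{-4\pi it}\,4\pi i\,e^{4\pi it}\,dt=4\pi i, \qquad \oint _\gamma \frac {1}{u(z)}\,\frac {\partial u}{\partial z}\,dz=\oint _\gamma \frac {2}{z}\,dz=\int _0^1 2e^{-2\pi it}\,2\pi i\,e^{2\pi it}\,dt=4\pi i.\end{equation*}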

(Dibyendu, Problem 1070) I want to compute \(\int _{-1}^1 \frac {(t+i)^3}{(t+i)^4+1}dt\). A naïve student uses the \(u\)-substitution \(u=(t+i)^4\) and converts the integral to \(\int _{-4}^{-4} \frac {1}{4} \frac {1}{u+1} du=0\). But when I compute \(\int _{-1}^1 \frac {(t+i)^3}{(t+i)^4+1}dt\) using a numerical solver, I get \(-i\pi /2\). What went wrong?
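The numerical value quoted above is easy to reproduce. The following is a minimal sketch of such a check in Python, assuming SciPy is available; it is an illustration only and not part of the assigned problem.

from scipy.integrate import quad

# integrand (t+i)^3 / ((t+i)^4 + 1), a complex-valued function of the real variable t
def f(t):
    return (t + 1j)**3 / ((t + 1j)**4 + 1)

# integrate the real and imaginary parts separately, as in Definition 2.1.3
re_part, _ = quad(lambda t: f(t).real, -1, 1)
im_part, _ = quad(lambda t: f(t).imag, -1, 1)
print(re_part + 1j*im_part)  # approximately -1.5707963j, matching the value -i*pi/2 quoted above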

2.2. Real Analysis

[Definition: Limit in metric spaces] If \((X,d)\) and \((Z,\rho )\) are metric spaces, \(p\in Z\), and \(f:Z\setminus \{p\}\to X\), we say that \(\lim _{z\to p}f(z)=\ell \) if, for all \(\varepsilon >0\), there is a \(\delta >0\) such that if \(z\in Z\) and \(0<\rho (z,p)<\delta \), then \(d(f(z),\ell )<\varepsilon \).

[Definition: Continuous function on metric spaces] If \((X,d)\) and \((Z,\rho )\) are metric spaces and \(f:Z\to X\), we say that \(f\) is continuous at \(p\in Z\) if \(f(p)=\lim _{z\to p} f(z)\).

2.2. Complex Differentiability and Conformality

[Definition: Disc] The open disc (or ball) in \(\C \) of radius \(r\) and center \(p\) is \(D(p,r)=B(p,r)=\{z\in \C :|z-p|<r\}\). The closed disc (or ball) in \(\C \) of radius \(r\) and center \(p\) is \(\overline D(p,r)=\overline B(p,r)=\{z\in \C :|z-p|\leq r\}\).

(Hope, Problem 1080) Let \(\gamma :[0,1]\to \C \) be a parameterization of a nondegenerate scalene triangle of your choice. Sketch the trace of \(\gamma \) and of \(f\circ \gamma \) for the following choices of \(f\):

(a) \(f(z)=z-3+i\)
(b) \(f(z)=(\frac {1}{2}+\frac {\sqrt {3}}{2}i)z\)
(c) \(f(z)=2z\)
(d) \(f(z)=(1+i)z\)
(e) \(f(z)=(1+i)z-3+i\)
(f) \(f(z)=\bar z\)
(g) \(f(z)=z+2\bar z\)

[Definition: Complex derivative] Let \(p\in \Omega \subseteq \C \), where \(\Omega \) is open. Let \(f:\Omega \to \C \). Suppose that \(\lim _{z\to p} \frac {f(z)-f(p)}{z-p}\) exists. Then we say that \(f\) has a complex derivative at \(p\) and write \(f'(p)=\lim _{z\to p} \frac {f(z)-f(p)}{z-p}\).
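[Example] If \(f(z)=z^2\), then for \(z\neq p\) we have \(\frac {f(z)-f(p)}{z-p}=z+p\to 2p\) as \(z\to p\), so \(f'(p)=2p\) at every \(p\in \C \). By contrast, if \(f(z)=\bar z\), then \(\frac {f(z)-f(p)}{z-p}=\frac {\overline {z-p}}{z-p}\), which equals \(1\) when \(z-p\) is real and \(-1\) when \(z-p\) is purely imaginary; the limit therefore does not exist, and \(\bar z\) has a complex derivative at no point.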

(Fact 1090) If \(\Omega \subseteq \C \) is open, \(p\in \Omega \), and \(f\), \(g:\Omega \setminus \{p\}\to \C \) are such that \(\lim _{z\to p} f(z)\) and \(\lim _{z\to p} g(z)\) exist (as complex numbers), then we have the usual formulas

\begin{gather*}\lim _{z\to p} \bigl (f(z)+g(z)\bigr )=\bigl (\lim _{z\to p} f(z)\bigr ) +\bigl (\lim _{z\to p} g(z)\bigr ), \\ \lim _{z\to p} \bigl (f(z)-g(z)\bigr )=\bigl (\lim _{z\to p} f(z)\bigr )-\bigl (\lim _{z\to p} g(z)\bigr ), \\ \lim _{z\to p} \bigl (f(z)g(z)\bigr )=\bigl (\lim _{z\to p} f(z)\bigr )\bigl (\lim _{z\to p} g(z)\bigr ) \end{gather*}
and (if \(\lim _{z\to p} g(z)\neq 0\))
\begin{equation*}\lim _{z\to p} \frac {f(z)}{g(z)}=\frac {\lim _{z\to p} f(z)}{\lim _{z\to p} g(z)}.\end{equation*}

(Fact 1100) If \(\Omega \subseteq \C \) and \(W\subseteq \C \) are open, \(p\in \Omega \), \(f:\Omega \setminus \{p\}\to W\) is such that \(L=\lim _{z\to p} f(z)\) exists, \(L\in W\), and \(g:W\to \C \) is continuous at \(L\), then

\begin{equation*}\lim _{z\to p} g(f(z))=g(L).\end{equation*}
Observe that we require \(g\) to be defined and continuous at \(L\); it is not enough that \(\lim _{w\to L} g(w)\) exist.

[Chapter 2, Problem 10] If \(f\) has a complex derivative at \(p\), then \(f\) is continuous at \(p\).

[Chapter 2, Problem 8] If \(f'(p)\) exists, then \(\nabla f(p)\) exists and \(\frac {\partial f}{\partial x}\big \vert _{x+iy=p}= \frac {1}{i}\frac {\partial f}{\partial y}\big \vert _{x+iy=p}= \frac {\partial f}{\partial z}\big \vert _{z=p} = f'(p)\).

Theorem 2.2.2. Suppose that \(f\) has a complex derivative at \(p\). Then \(\frac {\partial f}{\partial z}\big \vert _{z=p}=f'(p)\).

(Problem 1110) Suppose that \(f\) has a complex derivative at \(p\). Prove Theorem 2.2.2 and also show that \(\frac {\partial f}{\partial \bar z}\big \vert _{z=p}=0\).

(Micah, Problem 1120) Let \(\Omega \subseteq \C \) be open and let \(f:\Omega \to \C \) be continuous. Let \(W\subseteq \C \) be open. Let \(g:W\to \Omega \) be continuous. Then \(f\circ g:W\to \C \) is continuous. Suppose that \(z_0\in W\) and that \(g'(z_0)\) and \(f'(g(z_0))\) exist (in the sense of limits as above). Show that \((f\circ g)'(z_0)\) exists and that \((f\circ g)'(z_0)=f'(g(z_0))\,g'(z_0)\).
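[Example] For instance, take \(g(z)=z^2\) and \(f(w)=w^2\). From the definition, \(\frac {g(z)-g(z_0)}{z-z_0}=z+z_0\to 2z_0\), so \(g'(z_0)=2z_0\), and likewise \(f'(g(z_0))=2g(z_0)=2z_0^2\). The chain rule of Problem 1120 then gives \((f\circ g)'(z_0)=2z_0^2\cdot 2z_0=4z_0^3\), which agrees with computing the complex derivative of \((f\circ g)(z)=z^4\) directly from the definition.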

Theorem 2.2.1. (Generalization.) Suppose that \(\Omega \subseteq \C \) is open and that \(f\) is \(C^1\) on \(\Omega \). Let \(p\in \Omega \) and suppose \(\p [f]{\bar z}\big \vert _{z=p}=0\). Then \(f\) has a complex derivative at \(p\) and \(f'(p)=\frac {\partial f}{\partial z}\big \vert _{z=p}\).

(Muhammad, Problem 1130) Prove this generalization of Theorem 2.2.1.
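[Example] To see how the hypothesis on \(\p [f]{\bar z}\) can fail, consider \(f(z)=|z|^2=z\bar z\), which is \(C^1\) on \(\C \) with \(\p [f]{\bar z}=z\) and \(\p [f]{z}=\bar z\). The generalized Theorem 2.2.1 applies only at \(p=0\), where it gives \(f'(0)=\p [f]{z}\big \vert _{z=0}=0\); at every other point \(\p [f]{\bar z}\neq 0\), so by Problem 1110 \(f\) cannot have a complex derivative there.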

(Nisa, Problem 1140) Let \(F:\R ^2\to \R ^2\). Suppose that \(\nabla F_1\) and \(\nabla F_2\) are constants. Show that \(F(x,y)=F(0,0)+(\partial _1 F_1,\partial _1 F_2)x+(\partial _2 F_1,\partial _2 F_2)y\) for all \((x,y)\in \R ^2\).

(Robert, Problem 1150) Suppose that \(f:\C \to \C \). Suppose that \(f'\) exists everywhere and is a constant. Show that \(f(z)=f(0)+f'(0)z\) for all \(z\in \C \). Conclude that if \(z\), \(\omega \), \(w\in \C \) with \(\omega \neq z\neq w\), then \(\frac {|f(\omega )-f(z)|}{|\omega -z|}=\frac {|f(w)-f(z)|}{|w-z|}\).

(Sam, Problem 1160) Let \(F:\R ^2\to \R ^2\). Suppose that \(\nabla F_1\) and \(\nabla F_2\) are constants. If \(C\) is a circle, what is \(F(C)\)? If \(S\) is a square, what is \(F(S)\)? Now suppose that \(f:\C \to \C \) and that \(f'\) exists everywhere and is a constant. If \(C\) is a circle, what is \(f(C)\)? If \(S\) is a square, what is \(f(S)\)?